They then used data on these individuals’ labour-market outcomes to see whether the Photo Big Five had any predictive power. The answer, they conclude, is yes: facial analysis has useful things to say about a person’s post-MBA earnings and propensity to move jobs, among other things.
Racial profiling keeps getting reinvented.
Fuck that.
Correlation vs. causation. More attractive people start out in stronger negotiating positions. People from wealthier backgrounds will probably look healthier. People from high-stress environments will show it through skin wrinkles and resting facial muscles.
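A toy simulation makes the confounding point concrete. Everything here is made up for illustration: a hypothetical "wealth" variable drives both how healthy someone looks in a photo and their later earnings, while looks have zero causal effect, yet the two end up strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: family wealth influences both appearance
# and earnings; appearance has NO causal effect on earnings here.
wealth = rng.normal(size=n)
looks = 0.7 * wealth + rng.normal(size=n)      # what the photo model sees
earnings = 0.7 * wealth + rng.normal(size=n)   # the labour-market outcome

r = np.corrcoef(looks, earnings)[0, 1]
print(f"looks-earnings correlation: {r:.2f}")  # sizeable, yet purely confounded
```

A model trained on this data would happily report that looks "predict" earnings, and it would even be right in the narrow predictive sense, while any prescriptive use of that finding would be nonsense.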
This is going to do nothing but reinforce systemic biases, in a Kafkaesque, Gattaca-style way.
And then of course you have the garden of forking paths.
These models have no constraints on their features, so we have an extremely large feature space, and we train the model to pick whatever features are predictive of the outcome. At this scale, even the process of training, evaluating, and then selecting the best model is essentially p-hacking.
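The multiple-comparisons problem above is easy to demonstrate with synthetic data (this is a sketch of the general phenomenon, not the paper's actual pipeline): generate thousands of pure-noise "features", an outcome that is independent of all of them, and then "select the best" the way a feature-selection or model-selection loop would.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 200, 5000           # 200 people, 5000 pure-noise "facial features"
X = rng.normal(size=(n, p))
y = rng.normal(size=n)     # outcome is independent of every feature

# "Train, evaluate, select the best": Pearson correlation of each
# feature with the outcome, then keep the winner.
corrs = (X - X.mean(0)).T @ (y - y.mean()) / (n * X.std(0) * y.std())
best = np.abs(corrs).max()

print(f"best |r| among {p} noise features: {best:.2f}")
```

With this many candidates, the winning correlation looks comfortably "significant" by any per-feature test, even though by construction there is nothing to find. Selecting the best of many models does the same thing one level up.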
Exactly. It’s like saying that since every president has been over 6’ tall we should only allow tall people to run for president.
The problem here is education.
And I’m not just talking about “average Joes” who don’t know the first thing about statistics. It is mind-boggling how many people with advanced degrees do not understand the difference between correlation and causation, and will argue until they’re blue in the face that it doesn’t affect results.
AI is not helping. Modern machine learning is basically a correlation engine with no concept of causation. The idea of using it to predict the future is dead on arrival. The idea of using it in any prescriptive role in social sciences is grotesque; it will never be more than a violation of human dignity.
Billions upon billions of dollars are being invested in putting lipstick on that pig. At this point it is more lipstick than pig.