
“We need to be careful about how this technology is used.”

Judge, Jury, Executioner

Machine learning researchers are teaching neural networks how to superficially judge humans — and the results are as brutal as they are familiar.

A study about the judgmental AI, published in the prestigious journal Proceedings of the National Academy of Sciences, describes how researchers trained the model to judge attributes in human faces the way we do upon first meeting each other, and how they trained it to manipulate photos to evoke different judgments, such as appearing “trustworthy” or “dominant.”

“Our dataset not only contains bias,” Princeton computer science postdoctoral researcher Joshua Peterson wrote in a tweet thread about the research, “it deliberately reflects it.”

Human Error

The PNAS paper notes that the AI so closely mirrored human judgment that it tended to associate objective physical characteristics, such as someone’s size or skin color, with attributes ranging from trustworthiness to privilege.

Indeed, in his thread Peterson explained that most of the 34 judgment values the researchers trained the AI to assign had corresponding political inferences. For instance, when using the study’s interactive site, Futurism found that the algorithm marked white faces as more “conservative,” while a search for “liberal” turned up mostly faces of people of color.

In a press release, cognitive scientist and AI researcher Jordan W. Suchow of the Stevens Institute of Technology, who worked on the study, admitted that “we need to be careful about how this technology is used,” since it could conceivably be put to nefarious purposes like boosting or tarnishing a public figure’s reputation.

Biased Much?

Though the technology is fairly esoteric, Suchow noted in the press release that this kind of machine learning can “study people’s biased first impressions of one another.”

“Given a photo of your face, we can use this algorithm to predict what people’s first impressions of you would be,” he added, “and which stereotypes they would project onto you when they see your face.”

With AI bias being an increasingly salient issue, this paradigm twist is as delightful as it is telling. You can check out the interactive research yourself at OneMillionImpressions.com.

READ MORE: Deep models of superficial face judgments [Proceedings of the National Academy of Sciences]
