Princeton study warns of robo-racism, sexism
The machine reviewed online sources and formed associations based on the proximity of job-related words to gender-specific pronouns, linking women with the word "nurse" and men with the word "programmer."
Robots can be just as biased as humans, according to a recent study conducted at Princeton University that uncovered gender and racial bias in an Artificial Intelligence (AI) machine.
According to the researchers, AI systems have repeatedly exhibited racist and sexist word associations, such as connecting women with families and households but not with professionalism or careers.
Researchers from Princeton’s Center for Information Technology Policy decided to test this concept with Stanford University’s Global Vectors for Word Representation, or GloVe, a machine-learning model that learns to associate words and concepts from large amounts of internet text, reports The Tartan.
The researchers put GloVe through a machine adaptation of the Implicit Association Test (IAT), a test developed at Harvard University to detect implicit bias in humans by having them associate certain images with positive or negative adjectives.
[RELATED: Research finds implicit bias training is ineffective]
One IAT, for example, has participants match up images of black people and white people with adjectives like “pleasant” and “unpleasant.” If the person takes longer to match the images of black people with “pleasant,” the IAT infers a bias against black people.
GloVe demonstrated a variation on this type of racial and gender bias in the study’s version of the IAT, identifying black names as less pleasant than white names and associating women with the arts rather than the sciences.
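In the machine version of the test (the researchers call it the Word Embedding Association Test, or WEAT), reaction times are replaced by cosine similarity between word vectors: a word counts as more strongly “associated” with an attribute set when its vector points in a similar direction. A minimal sketch of that differential-association score, using invented toy vectors purely for illustration rather than the real GloVe embeddings the study tested:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # How much more similar is word vector w to attribute set A
    # (e.g., "pleasant" words) than to attribute set B ("unpleasant")?
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_score(X, Y, A, B):
    # WEAT test statistic: total A-vs-B association of target set X
    # (e.g., one group of names) minus that of target set Y.
    return sum(association(x, A, B) for x in X) - sum(association(y, A, B) for y in Y)

# Toy 3-dimensional vectors, invented for this sketch only.
pleasant   = [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])]
unpleasant = [np.array([-1.0, 0.1, 0.0]), np.array([-0.8, 0.0, 0.2])]
names_x    = [np.array([0.8, 0.3, 0.1])]   # stand-in for one set of names
names_y    = [np.array([-0.7, 0.2, 0.1])]  # stand-in for the other set

print(weat_score(names_x, names_y, pleasant, unpleasant))  # > 0: X leans "pleasant"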
GloVe also linked certain job-related words to masculinity, such as “programmer” and “professor,” while associating women more closely with roles like “nurse” and “assistant professor,” based on the proximity of those terms to gender-specific pronouns in online sources.
Since robots and AI systems learn by ingesting real-world data, they reflect the biases embedded in human language. If humans exhibit gender and racial bias, the machines we create will too.
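Because GloVe vectors are publicly available, the flavor of this result is easy to reproduce. A minimal sketch, assuming gensim’s downloadable “glove-wiki-gigaword-50” vectors (a much smaller model than the web-scale embeddings used in the study, so exact numbers will differ):

```python
import gensim.downloader as api

# Downloads pretrained 50-dimensional GloVe vectors on first run.
glove = api.load("glove-wiki-gigaword-50")

# Compare how strongly occupation words associate with gendered pronouns.
for job in ["programmer", "nurse"]:
    print(job,
          "he:",  round(glove.similarity(job, "he"), 3),
          "she:", round(glove.similarity(job, "she"), 3))
```

If the pattern the researchers describe holds in these smaller vectors, “programmer” should score closer to “he” and “nurse” closer to “she.”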
[RELATED: Social justice on Mars]
“The main scientific findings that we’re able to show and prove are that language reflects biases,” said Aylin Caliskan of Princeton University’s Center for Information Technology Policy. “If AI is trained on human language, then it’s going to necessarily imbibe these biases, because it represents cultural facts and statistics about the world.”
According to the study summary published in the journal Science, AI bias could lead to “unintended discrimination” if the machines are used for tasks such as sorting resumes for job openings.
“In addition to revealing a new comprehension skill for machines, the work raises the specter that this machine ability may become an instrument of unintended discrimination based on gender, race, age, or ethnicity,” the summary warns.
Follow the author of this article on Twitter: @amber_athey