Profs develop tool for flagging 'social media prejudice'
Professors at the University at Buffalo and Arizona State University have developed a system to flag social media posts that have "the potential to spread misinformation and ill will."
“Terms like ‘prejudice’ are well defined in social psychology literature,” one of the professors insisted to Campus Reform.
Amid backlash against social media platforms accused of squelching conservative voices, two university professors are promoting a new system designed for “automatically detecting prejudice in social media posts.”
The program can flag certain posts as “having the potential to spread misinformation and ill will,” according to the University at Buffalo. The system is the result of a recent study by Haimonti Dutta, an assistant professor in the University at Buffalo’s Department of Management Science and Systems, and K. Hazel Kwon, an assistant professor at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication.
“In social media, users often express prejudice without thinking about how members of the other group would perceive their comments,” Dutta said, according to the university’s website. Dutta further asserted that “this not only alienates the targeted group members, but also encourages the development of dissent and negative behavior toward that group.”
The study analyzed “intergroup prejudice” using Twitter data collected immediately after the 2013 Boston Marathon bombing. From this data, the researchers built a method for detecting such prejudice with artificial intelligence and machine learning; messages the system identifies are then automatically flagged.
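The article does not describe the researchers’ actual model, but the general approach it outlines, training a classifier on labeled tweets and automatically flagging posts the model scores as prejudiced, can be sketched in a few lines. The training data, threshold, and scikit-learn pipeline below are illustrative assumptions, not the method used by Dutta and Kwon.

```python
# Hypothetical illustration only: a minimal text classifier that flags posts.
# This is NOT the pipeline described by Dutta and Kwon.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed training data: tweets hand-labeled 1 (prejudiced) or 0 (not).
train_texts = ["example prejudiced post ...", "example neutral post ..."]
train_labels = [1, 0]

# Turn tweets into word/bigram features, then fit a logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

def flag_post(text, threshold=0.8):
    """Flag a post if the model's estimated probability of prejudice
    exceeds an assumed threshold."""
    prob = model.predict_proba([text])[0][1]
    return prob >= threshold

# Incoming posts that score above the threshold get flagged automatically.
new_posts = ["another incoming post ..."]
flagged = [p for p in new_posts if flag_post(p)]
```

In practice, a system like this would be trained on many thousands of hand-labeled tweets, and the flagging threshold would be tuned to trade off missed posts against false alarms.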
“Terms like ‘prejudice’ are well defined in social psychology literature and we adopted the same,” Dutta told Campus Reform. “Our paper cites several references, the most prominent being Gordon Allport’s Nature of Prejudice, a 1954 publication.”
According to Dutta, detection of “prejudiced” social media content is an “increasingly important” but daunting task, one with which the university hopes the new system will assist.
Dutta explained that social media monitoring is critical because prejudiced messages can spread “far more rapidly and broadly” on social media than through person-to-person interactions.
“We would like to have a system like this integrated into browsers on the client side so that users can use them to tag social media content that causes hate, aversion and prejudice,” Dutta told Campus Reform.