UW-Tacoma developing tool to scan social media for 'hate speech'

A tool to detect tweets that are “misogynistic” or include “hate speech directed at immigrants” is being developed at the University of Washington-Tacoma.

UW-Tacoma professors Martine De Cock and Anderson Nascimento are spearheading the initiative, according to a university news release.

“There is a lot of this material that appears in tweets or on Facebook and it’s becoming a real issue,” De Cock said in the news release, referring to “misogynistic” and anti-immigrant tweets. “It’s very difficult for social media companies to do proper moderation, simply because of the sheer amount of content that is published every day.”

“Essentially, we’re interested in using artificial intelligence techniques that try to grasp meaning or categorize natural language into some specific, preassigned categories—this tweet is misogynistic or that one is offensive toward immigrants,” Nascimento noted.
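For readers unfamiliar with the approach Nascimento describes, the sketch below shows what a basic supervised tweet classifier might look like in Python with scikit-learn. The labels, example texts, and model choice are illustrative assumptions for exposition, not the researchers’ actual system.

```python
# Illustrative sketch only: a bag-of-words classifier that assigns tweets to
# preassigned categories, in the spirit of the approach Nascimento describes.
# The training examples, labels, and model choice are assumptions for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data (1 = flagged as abusive, 0 = not flagged).
train_texts = [
    "example of an abusive tweet",
    "example of an ordinary tweet about the weather",
    "another abusive example",
    "another harmless example",
]
train_labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a common baseline for text classification.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Assign a category to a new, unseen tweet.
print(model.predict(["a new tweet to categorize"]))
```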

The dataset the professors, along with UW-Tacoma computer science and systems students, used to create the tool consists of 10,000 tweets. Different computers will process “secret shares” extracted from encrypted messages.
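The “secret shares” mentioned here refer to secret sharing, a cryptographic technique in which a message is split into randomized pieces so that no single computer ever sees the original text. The sketch below illustrates plain additive secret sharing in Python; it is a simplified assumption for exposition, not the specific protocol the UW-Tacoma team uses.

```python
# Illustrative sketch of additive secret sharing (not the researchers' actual protocol).
# A value is split into random "shares"; any single share reveals nothing on its own,
# but summing all shares modulo a fixed modulus recovers the original value.
import secrets

MODULUS = 2**61 - 1  # arbitrary large modulus chosen for this example

def split_into_shares(value, num_parties):
    """Split an integer into num_parties additive shares."""
    shares = [secrets.randbelow(MODULUS) for _ in range(num_parties - 1)]
    last = (value - sum(shares)) % MODULUS
    return shares + [last]

def reconstruct(shares):
    """Recombine all shares to recover the original integer."""
    return sum(shares) % MODULUS

# Example: distribute one byte of an encoded tweet across three computers.
byte_value = ord("h")
shares = split_into_shares(byte_value, num_parties=3)
assert reconstruct(shares) == byte_value
print(shares, "->", reconstruct(shares))
```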

[RELATED: Profs develop tool for flagging ‘social media prejudice’]

“We developed general techniques for text messages classification. These techniques can be applied to any text classification problem: spam detection, suicidal messages, terrorism, etc.,” Nascimento told Campus Reform. “We have decided to showcase our general techniques by applying them to the problem of misogyny and hate-speech detection. However, we have no interest whatsoever to deploy such specific classifier in practice. Our main target is to apply our general techniques to the health-care sector.”

The professor stated that the algorithm’s performance matched that of the humans who created the dataset. Nascimento said that experts from Italy’s Università degli Studi di Milano-Bicocca prepared the dataset, but he did not provide the dataset to Campus Reform.

Nicole Neily, president of the nonprofit Speech First, which is currently suing three universities for free speech violations, spoke with Campus Reform about the project. She noted that while university bias response teams do take students’ online comments into account, those comments are flagged by other students, not algorithms.

“For these incidents, context is incredibly important - for example, are students engaging in humor, parody, or satire? Referencing a TV show or movie? Discussing an incident that they witnessed and condemning it?” Neily said. “It concerns me that this research may end up statistically aggregated without taking into account this context - which would then give the false impression that the campus is a more hostile or hateful environment than it truly is.”

“Freedom of speech is a constitutional right, and the foundation of every free and democratic society,” Tyson Langhofer, senior counsel and director of the Center for Academic Freedom at the nonprofit Alliance Defending Freedom, told Campus Reform. “American schools and business should work to promote a culture of free speech with a robust marketplace of ideas. The answer is more speech, not censorship.”

UW-Tacoma spokesman John Burkhardt told Campus Reform that no federal research money was being used for this study. UW-Tacoma receives a little over two million dollars in federal research grants. President Donald Trump signed an executive order in March threatening schools that do not uphold the First Amendment with the loss of such funds.

[RELATED: Trump officially signs free speech exec. order: If schools censor, ‘we will not give them money’]

De Cock did not return a request for comment in time for publication.

Follow the author of this article on Twitter: @ShimshockAndAwe