Prof develops AI to determine who is spreading ‘misinformation’ online
A research team at the University of California, Santa Barbara is using AI to identify whether social media posts contain “misinformation,” or whether links shared are from “established” news sites.
The system will also analyze images in the same way.
Researchers at the University of California, Santa Barbara are developing an artificial intelligence system that will identify whether what someone shares on social media is “genuine” or “misleading.”
Titled Dynamo: Dynamic Multichannel Modeling of Misinformation, the project is led by UCSB professor William Wang. Wang specializes in “natural language processing,” the subfield of artificial intelligence dealing with a computer’s ability to process human language.
Using this type of analysis, Wang and his team have set out to examine the text of social media posts and online articles to reveal information about their origins, as well as about the individuals sharing them. According to a UCSB news release, Wang’s research will be used to create tools that identify the ideologies, motivations, intended audiences, and affiliations of those sharing information online.
[RELATED: Profs develop tool for flagging ‘social media prejudice’]
This information will then be used to determine whether a post is “misleading” or “clickbait,” and whether it comes from what UCSB terms “established” news sites.
“So the question is, given a post, how would you be able to understand whether this is specifically misleading or if this is a genuine post,” Wang said. “And, given the structure of the network, can you identify the spread of misinformation and how it is going to be different compared to standard or nonstandard articles?”
Wang’s algorithm scrapes news articles posted on social media platforms. It then analyzes these articles based on content, titles, and links.
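The news release does not describe the model itself. As a rough illustration only, not the Dynamo system, the sketch below shows how a standard text classifier might combine an article’s title, body, and link domain into features; the example data, labels, and the `to_text` helper are all made up for this sketch:

```python
# Hypothetical sketch of classifying articles from content, titles, and links.
# This is NOT the Dynamo system; it only illustrates the general approach.
from urllib.parse import urlparse

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (title, body, link) tuples with made-up labels,
# where 1 = "misleading" and 0 = "genuine".
posts = [
    ("Miracle cure found", "Doctors hate this one trick ...", "http://example-blog.biz/cure"),
    ("City council passes budget", "The council voted 7-2 on Tuesday ...", "https://example-news.com/budget"),
]
labels = [1, 0]

def to_text(title, body, link):
    """Flatten a post into one string, keeping the link's domain as a token."""
    domain = urlparse(link).netloc
    return f"{title} {body} domain:{domain}"

texts = [to_text(*p) for p in posts]

# TF-IDF word features fed into a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict([to_text("You won't believe this", "Shocking claim ...", "http://example-blog.biz/x")]))
```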
“A lot of us take websites for granted and casually retweet or repost misinformation and that’s how it gets propagated, cascades and spreads virally,” Wang said, according to the news release. “Some of the most important questions we’re asking are: What are the patterns? What are the incentives?”
To establish the “incentives” behind social media posts, the team seeks to build an AI tool that determines why certain links get shared, as well as how accurate their content is. Wang says that in the process, his team could identify the culprits behind the spread of supposed misinformation.
The project will also examine images. Wang and his team have not said how the system would handle memes or other images that are not intended to depict an actual event but are still meant to convey a message.
In addition to addressing potentially “misleading” articles, the team has developed a method that supposedly identifies “clickbait,” which Wang defines as “low-quality articles which can indeed contain a lot of misinformation and false information because they have to exaggerate.”
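Wang’s definition points to exaggeration as the telltale signal. The project’s actual detector is not described, but a toy heuristic along those lines (purely illustrative, with made-up cue phrases and thresholds) might flag headlines on surface cues:

```python
# Toy clickbait heuristic (illustrative only; not the UCSB detector).
# Flags headlines that lean on exaggeration cues such as hype phrases,
# shouting caps, and runs of punctuation.
import re

HYPE_PHRASES = ("you won't believe", "shocking", "this one trick", "what happens next")

def looks_like_clickbait(headline: str) -> bool:
    text = headline.lower()
    score = 0
    score += sum(phrase in text for phrase in HYPE_PHRASES)            # hype wording
    score += len(re.findall(r"[!?]{2,}", headline))                    # "!!" / "?!" runs
    score += sum(w.isupper() and len(w) > 3 for w in headline.split())  # ALL-CAPS words
    return score >= 1

print(looks_like_clickbait("You won't believe what happens next!!"))  # True
print(looks_like_clickbait("Council approves new transit budget"))    # False
```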
UCSB hopes that this use of artificial intelligence will “give us insight on how we, intentionally or unwittingly, propagate misinformation.”
“That’s really the beauty of natural language processing and machine learning,” Wang said. “We have a huge amount of data in different formats, and the question is: How do you turn unstructured data into structured knowledge? That’s one of the goals of deep learning and of data science.”
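As a small illustration of the “unstructured data into structured knowledge” idea Wang describes, a raw shared post can be reduced to a structured record; the field names below are assumptions for the sketch, not the project’s actual schema:

```python
# Illustrative only: reducing an unstructured shared post to a structured
# record. The field names are assumptions, not the Dynamo project's schema.
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class PostRecord:
    domain: str        # site the shared link points to
    title_words: int   # headline length in words
    has_exclaim: bool  # surface cue of exaggeration

def structure(title: str, link: str) -> PostRecord:
    return PostRecord(
        domain=urlparse(link).netloc,
        title_words=len(title.split()),
        has_exclaim="!" in title,
    )

print(structure("Miracle cure found!", "http://example-blog.biz/cure"))
# PostRecord(domain='example-blog.biz', title_words=3, has_exclaim=True)
```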
Campus Reform reached out to Wang to ask about his vision for the practical application of the project but did not receive a response in time for publication.
In a similar effort, University at Buffalo and Arizona State University professors have created a system designed for “automatically detecting prejudice in social media posts,” which flags posts as “having the potential to spread misinformation and ill will.”
One of the leaders of the project told Campus Reform that the team “would like to have a system like this integrated into browsers on the client side so that users can use them to tag social media content that causes hate, aversion and prejudice.”
[RELATED: Berkeley researchers offer social media platforms easier way to censor]
Researchers at the University of California, Berkeley are also using artificial intelligence to create an “online hate index” that could help social media platforms identify and eliminate “hate speech.”
Follow the author of this article on Twitter: @celinedryan