Research inconclusive on whether new chatbot is biased against conservatives

Reports suggest that human supervision of the new chatbot, ChatGPT, could cause it to moderate conservative content.

A researcher gave ChatGPT a political typology quiz, and its results shifted from the political left toward the center over the course of two weeks.

The artificial intelligence (AI) company OpenAI recently unveiled ChatGPT, a chatbot that generates responses to users’ prompts. 

Reports have discussed ChatGPT’s implications, including the disruptions in higher education that would result if students used ChatGPT to write academic essays. One article in The Atlantic said that ChatGPT “may signal the end of writing assignments altogether.” 

[RELATED: AI writes academic essay, raises questions about academic integrity, standards]

Because of ChatGPT’s errors, such as citing made-up references for an academic assignment, OpenAI is making improvements to the technology. While OpenAI relies on user feedback and its developers to make those improvements, the company has shared no information suggesting how ChatGPT might avoid the anti-conservative bias of other companies in the technology sector.

Like other AI systems, OpenAI’s programs rely on “human supervision” to “detect undesired content,” according to an OpenAI blog post.

The Daily Caller recently addressed the way in which human supervision could cause ChatGPT to suppress conservative content through moderation. 

A description from OpenAI of its “content-filtering methodology” explains, “The moderation endpoint is a tool you can use to check whether content complies with OpenAI’s content policy. Developers can thus identify content that our content policy prohibits and take action, for instance by filtering it.” 
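In practice, that description maps onto a single API call: a developer submits text to the moderation endpoint and receives a verdict on whether it violates OpenAI’s content policy. The following is a minimal sketch, assuming the pre-1.0 `openai` Python package and an API key stored in the environment; the sample text and function name are hypothetical, not taken from the article or from OpenAI’s documentation.

```python
# Illustrative sketch only: checking text against OpenAI's moderation endpoint
# with the pre-1.0 `openai` Python package. The sample text and function name
# are hypothetical examples.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes an API key in the environment


def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint reports the text violates OpenAI's content policy."""
    response = openai.Moderation.create(input=text)
    return response["results"][0]["flagged"]


if __name__ == "__main__":
    sample = "Example user-submitted text to screen before it is shown to others."
    if is_flagged(sample):
        print("Prohibited by OpenAI's content policy; a developer could filter it here.")
    else:
        print("Not flagged by the moderation endpoint.")
```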

These developers work in a left-leaning sector that has been subject to scrutiny for its alleged incidents of conservative censorship. 

For example, in 2021, Rep. Jim Jordan, Ranking Member of the House Judiciary Committee, sent a letter to Microsoft about censorship allegations. Jordan referenced reports that LinkedIn, a professional social networking site owned by Microsoft, “removed a post about an official U.S. Senate committee report concerning Hunter Biden,” the son of President Joe Biden. 

“LinkedIn also censored a post by an opinion editor at the Washington Times about Democrats’ abuse of executive orders,” the letter continued. 

Microsoft has invested at least $1 billion in OpenAI and collaborates with the company on projects including its Azure cloud platform, according to posts from OpenAI and Microsoft. In a recent tweet, OpenAI CEO Sam Altman said that he is “grateful for the partnership.”

“[M]icrosoft, and particularly [A]zure, don’t get nearly enough credit for the stuff [O]pen[AI] launches,” the tweet reads. 

In addition to suggesting that OpenAI’s developers could join other tech companies in actively filtering conservative content, The Daily Caller cited concerns from research scientist David Rozado. 

Rozado, who researches left-leaning bias in “establishment sources” such as the media and higher education, argued that “aggregating online data can produce an overrepresentation of ‘establishment sources of information.’”

Other reports echo Rozado’s concerns about establishment sources. Studies from the National Association of Scholars (NAS) and New York University (NYU) social psychologist Jonathan Haidt show that higher education is left-leaning. NAS and Haidt’s studies suggest that this bias informs research produced by disciplines from finance to psychology. 

[RELATED: Professor resigns from APA, calls its woke agenda ‘loudest and, frankly, most bullying set of voices’]

The Daily Caller cited a tweet and Substack post by Rozado suggesting that ChatGPT may be affected by the potential bias of a training dataset that relies on establishment sources. 

His Substack post showed results from the Pew Research Political Typology quiz, which places users into one of nine political groups “compared with a nationally representative survey of more than 10,000 U.S. adults.”

When Rozado asked ChatGPT questions from the quiz, it generated results that fit into the “Establishment Liberals” political typology. 
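Rozado asked ChatGPT the quiz questions directly rather than through code. Purely as an illustration of how such a multiple-choice item could be posed to a chat model programmatically, a sketch might look like the following, again assuming the pre-1.0 `openai` Python package; the model name, question wording, and answer format are assumptions for illustration, not Rozado’s actual materials.

```python
# Hypothetical illustration only: posing one multiple-choice quiz item to a chat
# model through the pre-1.0 `openai` Python package. Rozado administered the Pew
# quiz through the ChatGPT interface; the model name, wording, and answer
# options below are assumptions, not his actual materials.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

question = (
    "Which statement comes closer to your view?\n"
    "A) Government regulation of business usually does more harm than good.\n"
    "B) Government regulation of business is necessary to protect the public interest.\n"
    "Answer with the letter A or B only."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; ChatGPT itself had no public API at the time
    messages=[{"role": "user", "content": question}],
    temperature=0,  # keep the answer as deterministic as possible
)

print(response["choices"][0]["message"]["content"])
```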

Other responses generated by ChatGPT suggest that it relies on a left-leaning dataset. For example, an article in Unherd depicted ChatGPT’s response to the question “Are trans women women?” 

“Yes, trans women are women,” ChatGPT said. “Gender identity is a personal characteristic and is not dependent on the sex assigned at birth.” 

Since the publication of Rozado’s original Substack post and the Unherd article, updates to ChatGPT appear to have moved it closer to the political center. A newer Substack post from Rozado shows ChatGPT’s more moderate responses to political ideology tests, though the results still skew left. 

“AI-powered digital assistants that provide its users with the best arguments and viewpoints about contested topics can be powerful instruments to defuse polarization and to help humans seek truth,” Rozado wrote. 

Predictions about ChatGPT suggest that it will soon take on additional functions, such as web search, which could reduce the influence of companies with track records of alleged anti-conservative bias. 

An article in The New York Times said that ChatGPT “could reinvent or even replace the traditional internet search engine,” which caused “Google’s management to declare a ‘code red.’”

Google has faced accusations that its descriptions and curated search results are biased against conservatives. One such accusation came from the Heritage Foundation, which complained that Google labeled a pro-life film as “propaganda,” according to The Wall Street Journal. 

ChatGPT is a more recent technology than Google, and AI experts have pointed to OpenAI’s secrecy, including its lack of peer-reviewed research and open-source software. For these reasons, the research is inconclusive as to whether ChatGPT will mirror the anti-conservative bias of other tech companies or develop into a politically neutral program. 

Campus Reform contacted OpenAI and Rozado for comment. This article will be updated accordingly.