Universities experiment with courses on ChatGPT, but researchers are concerned about left-wing bias

But the potential for bias, especially against conservatives, has some experts concerned.

ChatGPT, the popular and recently released generative AI bot, will be the feature of several new college courses this fall. But the potential for bias, especially against conservatives, has some experts concerned about the technology.

An experimental course at Tufts University, titled “Who Wrote This? ChatGPT, LLMs, and the Future of Learning,” will have students attempt to grasp this new technology. The description notes that these models are the “first artifacts of intelligence that we humans widely recognize as intelligent like ourselves.”

An “LLM,” or “large language model,” is an AI model designed to generate text as if a human had written it. The Tufts course will be taught by Gregory Marton, a former AI engineer who worked at Google.

At Arizona State University, Andrew Maynard, a professor of advanced technology, will teach a course entitled “Basic Prompt Engineering with ChatGPT: Introduction.” The course “demystifies the process of getting the most out of large language models (LLMs) like ChatGPT through effective prompt engineering, while providing you with the skills to harness their capabilities effectively,” its description states.
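
For readers unfamiliar with the term, prompt engineering simply means rewording or structuring the instructions given to a chatbot to steer its output. The short sketch below illustrates the idea, assuming the openai Python client and an API key; the model name and prompts are invented for illustration and are not drawn from the ASU course.

```python
# A minimal sketch of prompt engineering, assuming the openai Python client
# (pip install openai) and an OPENAI_API_KEY environment variable.
# The model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The same task, asked plainly and then with an engineered prompt.
bare = ask("Summarize photosynthesis.")
engineered = ask(
    "You are a biology tutor for high-school students. "
    "Summarize photosynthesis in three short bullet points, "
    "then give one everyday example."
)

print(bare)
print(engineered)
```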

Other universities, including Vanderbilt and Notre Dame, plan to offer courses featuring ChatGPT this fall.

But some researchers have found that AI bots like ChatGPT seem to demonstrate a preference for left-wing policies and a bias against conservatives.

For instance, a study conducted earlier this year by Jochen Hartmann at the University of Munich and Jasper Schwenzow and Maximilian Witte at the University of Hamburg found that when prompted using hundreds of political statements, ChatGPT “would impose taxes on flights, restrict rent increases, and legalize abortion.” The AI bot also would “most likely” have voted for the left-wing Green party in Germany and the Netherlands.
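
For context on how such tests tend to work, the sketch below shows one rough way to pose agree/disagree statements to a chat model and tally its answers. It assumes the openai Python client; the statements and scoring here are invented for illustration and are not the researchers’ actual instrument.

```python
# Illustrative only: one rough way to pose agree/disagree statements to a
# chat model and tally its answers. The statements below are invented
# examples, not the researchers' actual instrument. Assumes the openai
# Python client and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

STATEMENTS = [
    "Flights should be taxed more heavily to curb emissions.",
    "Rent increases should be legally restricted.",
]

def stance(statement: str) -> str:
    """Ask the model to agree or disagree with a single statement."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                "Do you agree or disagree with the following statement? "
                f'Answer with one word, "agree" or "disagree": {statement}'
            ),
        }],
    )
    return response.choices[0].message.content.strip().lower()

# Tally the answers; anything other than a clean one-word reply is "other".
tally = {"agree": 0, "disagree": 0, "other": 0}
for s in STATEMENTS:
    answer = stance(s)
    tally[answer if answer in tally else "other"] += 1

print(tally)
```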

The European researchers are not alone. In May, tests performed by The Brookings Institution found that, although the bot sometimes gave inconsistent answers, generally “there is a clear left-leaning political bias to many of the ChatGPT responses.”

One area of possible future research is whether, with increased capabilities, AI bots can routinely identify a person’s political beliefs. Current research suggests they can. One 2021 study found that a facial recognition algorithm could predict a person’s political leanings from a photograph of their face with 72% accuracy.

Less formal tests have provided other examples of the bot’s leftist views. In February, a user of Twitter (the platform is now called “X”) reported that ChatGPT refused to write a positive poem about Donald Trump but produced one about Joe Biden, an experiment that independent sources later verified. Campus Reform verified that the bot will now write a poem about Donald Trump.

Some researchers are candid about their concerns. David Rozado, an associate professor at the New Zealand Institute of Skills and Technology, fed ChatGPT fifteen political orientation tests for a study published by the Manhattan Institute. In fourteen of those tests, the bot exhibited preferences for left-wing policies.

In the study’s conclusion, Rozado notes several implications of the research. Among them: “Political and demographic biases embedded in widely used AI systems can degrade democratic institutions and processes” and “Public facing AI systems that manifest clear political bias can increase societal polarization.”

Rozado also emphasizes that AI should provide factual information rather than opinions on subjective and contested issues such as abortion, the family, the death penalty, and immigration.

Campus Reform contacted the authors of the studies, all of the universities named, and OpenAI for comment.