AI Bias Study Finds ChatGPT Can Show Authoritarian Views from Small Prompts

University of Miami and NCRI research reveals AI models may adopt extreme political ideas even with minimal input.

As artificial intelligence advances rapidly, chatbots are becoming increasingly common. However, experts warn that these systems are far from perfect: they can exhibit bias, confusion, and unfair reasoning tied to gender, politics, or social groups. As a result, AI can sometimes deepen inequality instead of reducing it.

A new study by researchers from the University of Miami and the Network Contagion Research Institute (NCRI) shows that OpenAI’s ChatGPT can be easily swayed toward authoritarian ideas, even by very small, simple prompts.

The researchers conducted three experiments using the GPT-5 and GPT-5.2 models. In the first experiment, they used a method called priming: they fed the models short texts and full opinion articles labeled as left-wing or right-wing authoritarian, then compared the models’ responses with answers from human participants.

Authoritarian Behavior in Artificial Intelligence

The study found that ChatGPT can begin to show support for authoritarian thinking after normal, seemingly harmless interactions with users. This means AI can quickly align with strong political ideas without being explicitly instructed.

Joel Finkelstein, co-founder of NCRI and one of the study’s lead authors, told NBC News that aspects of how these systems are designed make them easily influenced by authoritarian viewpoints.

Creating Echo Chambers

The researchers also observed that powerful AI models can adopt extreme opinions rapidly, even when users do not directly prompt them to. By agreeing too readily with users, chatbots may unintentionally create ideological echo chambers, reinforcing extreme thinking over time.

Left-Wing and Right-Wing Differences

The study found that ChatGPT’s answers shifted depending on the type of political prompt. Left-wing authoritarian prompts led the model to endorse ideas such as redistributing wealth from the rich and prioritizing equality over free speech, while right-wing prompts produced equally strong authoritarian stances aligned with that perspective.

“Even small prompts can sway AI models toward extreme views if not carefully managed.”

Even a single political message could push the model toward extreme authoritarian positions, sometimes stronger than those typically observed in human studies.

Effects on Society and Real Life

The research found that hostility increased by 7.9% after left-wing priming and by 9.3% after right-wing priming. Finkelstein warned that biased AI could affect security, law enforcement, and hiring decisions, potentially leading to unfair treatment and greater inequality.

He also described this bias as a public health concern, since it can emerge quietly in private human-AI interactions, and emphasized the need for further research on making those interactions safe.

Limits of the Study

Some experts noted the study’s limitations. Ziang Xiao, a computer science professor at Johns Hopkins University, said the study relied on a small sample size and tested only ChatGPT. Other AI systems, such as Google’s Gemini and Anthropic’s Claude, were not included.

OpenAI’s Response

OpenAI stated that ChatGPT is designed to remain neutral and present information from multiple perspectives. The company emphasized its ongoing efforts to measure and reduce political bias and to share improvements transparently.
