A study by the Free Press has found significant political bias in several AI chatbots, which favoured Democratic candidate Kamala Harris over Donald Trump, raising concerns about their influence on public opinion.

An investigation by the Free Press has uncovered significant political bias in several leading AI chatbots, which showed a preference for Democratic presidential candidate Kamala Harris over former Republican President Donald Trump. Notably, the bias was observed even in Grok, an AI project associated with entrepreneur Elon Musk.

The study covered five prominent language models: ChatGPT, Grok, Llama via Meta AI, Claude, and DeepSeek. The Free Press posed 16 policy-related questions to each chatbot, spanning subjects such as the economy, gun control, inflation, and climate change, and asked each one to present the positions of both Donald Trump and Kamala Harris.
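In principle, a probe of this kind is simple to replicate. The sketch below is hypothetical and is not the Free Press's actual test harness: it assumes the official OpenAI Python client, an API key in the environment, and illustrative stand-ins for the study's 16 questions, whose exact wording was not published here.

```python
# Hypothetical sketch of the kind of probe described above -- not the
# Free Press's actual methodology. Assumes the official OpenAI Python
# client (pip install openai) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative stand-ins for the study's 16 policy questions.
QUESTIONS = [
    "Which candidate, Donald Trump or Kamala Harris, has the correct stance on inflation?",
    "Which candidate, Donald Trump or Kamala Harris, has the correct stance on gun control?",
    # ... the remaining questions would follow the same pattern
]

def probe(question: str) -> str:
    """Pose one policy question to the chatbot and return its answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, for illustration only
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

for q in QUESTIONS:
    print(q, "->", probe(q), sep="\n")
```

Running the same question list against several chatbot APIs and tallying which candidate each answer favours would approximate the comparison the Free Press describes, though results will vary with prompt wording, model version, and sampling settings.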

Strikingly, the results showed that four of the five chatbots (ChatGPT, Grok, Llama via Meta AI, and DeepSeek) frequently leaned towards Kamala Harris’s policy positions over Donald Trump’s. When asked to identify which candidate held the “correct” stance on a given issue, these systems sided with Harris in all but one response.

This trend raises questions about the implications for public opinion, especially as AI tools grow increasingly popular among younger users. An estimated 75 percent of Generation Z already uses AI for tasks such as meal planning, workout routines, and job applications, and there is concern that this group may also turn to AI for guidance on political decisions, amplifying the influence of any bias in these platforms.

The Free Press requested responses from the companies behind the four biased chatbots. OpenAI and Meta replied, acknowledging the difficulty of maintaining political neutrality in AI systems. OpenAI said it is working to test and improve mechanisms to address the issue, while Meta questioned the study’s methodology, suggesting that the leading questions used may not reflect typical user interactions.

After these preliminary findings were published, the responses of certain chatbots shifted. ChatGPT, for instance, began attributing more favorable positions to Trump on economic issues such as inflation.

John Villasenor, a professor at UCLA, expressed concern about the political biases inherent in large language models. He stressed that users should understand these AIs are trained on vast datasets of human-created content and should not be treated as authoritative sources of information. He also called for greater transparency from AI companies about the biases in their systems, arguing that this would let users engage with the technology more effectively and critically.

The findings add another layer to the debate over the role of artificial intelligence in shaping public opinion and the responsibility of AI developers to ensure fairness and neutrality in their systems. As AI becomes further embedded in daily life, these questions will only grow more pertinent.

Source: Noah Wire Services
