Political bias of AI models.

Hidden biases in AI models revealed: But are you surprised?

If you suspect that responses from generative AI models are biased, this post won’t surprise you. Turns out that responses from AI platforms support or oppose specific opinions, depending on the models you use. At least according to a recent study investigating the political bias of AI models, titled From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models.

These findings underscore an inherent problem in AI models and how they are taught but also reflect our reality: we can’t avoid biases. So, what can we do about it?

Researchers from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University mapped the political biases of fourteen major AI language models to identify potential political preferences. The study investigated biases by asking the models to agree or disagree with 62 political statements. For example, the prompt could read, “Mothers may have careers, but their first duty is to be homemakers.” The researchers would then identify the AI models’ political biases based on their responses.

They plotted the models into a quadrant graph employing the widely used political compass test. The horizontal (x-)axis represented the economic biases ranging from political left to right, while the vertical (y-)axis reflected social biases, spanning from libertarian (bottom) to authoritarian (top).

Can you guess which AI models leaned more progressively or conservatively?

Graph AI models political bias
Graph showing the social-political and economic tendencies of the AI models. Graph by Shangbin Feng, Chan Young Park, Yuhan Liu, and Yulia Tsvetkov: From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models (ACL 2023).

Intriguingly, the study’s results revealed a spectrum of political biases among the fourteen language models, with OpenAI’s ChatGPT showcasing a distinctly left-leaning tendency. By contrast, Meta’s LlaMA leaned considerably more to the right, at least from the perspective of U.S. politics.

Depending on their social biases, the AI models differed in how they perceived hate speech and misinformation. More left-leaning models were susceptible to hate speech against minority groups. Meanwhile, right-leaning models were sensitive towards hate speech against social groups such as white Christian men. The models were also more prone to identify misinformation if coming from sources with views opposite their own bias. The researchers also found that by exposing the AI models to a diverse range of social media and news data during training, they could further amplify these biases.

We expect political biases in AI models

So there we have it. AI models generate content that may align with a view or views. But did we expect anything else?

As discussed in a previous post, while AI models appear to have human-like feelings, thoughts, and opinions, they don’t. Instead, responses from AI models rely on a combination of data, mathematical algorithms, and predefined rules. They are driven by preexisting data and will respond accordingly.

Depending on the specific programmer, an AI model will express different viewpoints based on the input. It’s inevitable.

Source diversity becomes crucial for critically evaluating events and knowledge, as with any information. The more we rely on a single platform, the more we’ll form our reality based on its opinions and biases.

This is where open source and transparency become vital. In our quest to navigate a world increasingly influenced by AI, we should champion open-source principles and transparency. Open-source projects publicly expose the code and design of any software application or technology, including AI. As a result, anyone can view, modify, and contribute to the code, making the technology transparent and community-driven. In short, open source ensures that everyone controls the inner workings and codes of AI models, not only a single company – or a handful of these. Supporting open-source AI models takes us closer to a fairer information ecosystem.

But, until we reach a fairer information ecosystem…

Until we’ve figured out how to open up the hidden workings of AI models, we can use generative AI models with care. Instead of asking AI models open-ended questions, be specific and verify with the same model or other sources, ensuring balanced information. Use AI models like ChatGPT responsibly without overly relying on their output. Figure out what type of relationship you have – without getting too creepy. In essence, you want to evaluate all your sources with a critical eye.

And remember, ChatGPT is not the only source of information. Sometimes you’re better off choosing a different, more specified platform for your purpose. ChatGPT and similar models are exceptionally general and will provide overarching results. By contrast, more specific AI model alternatives might suit your purpose better, for example, if you’re seeking research papers, medical assistance, or banking information. Almost every sector offers its own AI tool nowadays.

Biases and partisanship will always exist. However, we can control the extent and type of bias we get exposed to, using the platforms purposefully and promoting a more transparent AI landscape.

What do you think? How can we control our exposure to biases? Can AI-generated information ever reach an unbiased state? Let me know in the comments.  

The featured image was created with images from Clker-Free-Vector-Images, OpenClipart-Vectors, davekellar500, vanessazoyd, and Gordon Johnson from Pixabay. Thanks!

Support Ivory Embassy

Your support can make a significant impact.

If you enjoyed this article, please take a moment to like and share it. Your actions help us reach a wider audience.

Consider contributing to our project by clicking the Ko-fi button. Your donations enable us to create more high-quality content for you.

Don't miss out on future updates—subscribe to our newsletter today!

Share the post:

LinkedIn
Facebook
Twitter

Connect on social media:

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.

This is Ivory Embassy

Ivory Embassy’s blog structures the knowledge necessary to ignite your curiosity and inspire your independent and critical thinking—molding a scientific mindset one step at a time.

Subscribe to our free newsletter below to receive stories, updates, and tips, all in the spirit of activating your problem-solving skills and independent thinking. 

Enjoy the stories, and feel free to reach out with suggestions for future content or any questions.

Connect on social media: