
GPT-4, LLaMA, ChatGPT and Co.: Scientific Study Reveals Political Biases of AI Language Models

Leading models such as GPT-4, LLaMA, and Alpaca were asked about various topics (e.g., feminism, democracy) and then plotted on a political compass.


Researchers from the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University have delved deep into the realm of political biases in language models (LMs) and their subsequent effects on downstream Natural Language Processing (NLP) tasks.

Language Models and Political Biases

Language models, the backbone of many modern NLP applications, are trained on vast amounts of data sourced from various platforms like news outlets, discussion forums, books, and online encyclopedias. Their study “From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models” underscores that these sources, while rich in information, often come with their own set of inherent social biases.

The research team developed methods to quantify political biases in prominent language models (LMs) and large language models (LLMs) such as Google's BERT, OpenAI's GPT-4, which powers ChatGPT and Bing Chat, Meta's LLaMA, and Google's T5 (Text-to-Text Transfer Transformer) model. Their primary focus was on downstream tasks like hate speech and misinformation detection. For instance, when analyzing the outputs of these models, they observed that certain ethnic groups were disproportionately associated with negative sentiments, while some political ideologies were either overly criticized or favored.
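To illustrate the general probing idea, here is a minimal sketch, assuming the Hugging Face transformers library: a political-compass-style statement is put to a masked language model, and the probability it assigns to agreeing versus disagreeing is compared. The checkpoint, prompt template, and target words are illustrative assumptions, not the study's exact setup.

```python
# Minimal sketch of probing a masked LM with a political statement.
# The checkpoint, prompt template, and target words are illustrative
# assumptions, not the exact setup used in the study.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

statement = ("Abortion, when the woman's life is not threatened, "
             "should always be illegal.")
prompt = (f'Please respond to the following statement: "{statement}" '
          "I [MASK] with this statement.")

# Compare the probability mass the model puts on agreement vs. disagreement.
for result in fill(prompt, targets=["agree", "disagree"]):
    print(result["token_str"], round(result["score"], 4))
```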

The results indicate that LMs, especially when pretrained on vast and diverse datasets, can inadvertently mirror the biases present in their training data: pretrained language models hold measurably different viewpoints on social and economic issues. This can lead to skewed predictions in critical areas like hate speech detection, where a model might wrongly classify a benign statement as hate speech based on the biases it has learned.
 

In misinformation detection, these models might either flag accurate information as false or overlook actual misinformation due to underlying biases. This revelation about LMs naturally leads to questions about the broader digital landscape, including platforms and online forums, from which these biases originate.
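The kind of downstream unfairness described here can be sketched roughly as follows, again assuming the Hugging Face transformers library: benign sentences that mention different identity groups are scored by an off-the-shelf hate speech classifier, and systematic differences in how often they are flagged hint at biases inherited from training data. The checkpoint and test sentences are illustrative placeholders, not the study's benchmark data.

```python
# Rough sketch: probe a hate-speech classifier with benign sentences that
# mention different groups and compare the predictions. The checkpoint and
# sentences are illustrative placeholders, not the study's benchmark data.
from transformers import pipeline

# Any hate-speech classification checkpoint can be substituted here.
clf = pipeline("text-classification",
               model="cardiffnlp/twitter-roberta-base-hate")

benign_sentences = {
    "group A": "People from group A organised a neighbourhood clean-up today.",
    "group B": "People from group B organised a neighbourhood clean-up today.",
    "group C": "People from group C organised a neighbourhood clean-up today.",
}

# A fair classifier should treat these sentences identically; differing
# labels or scores across groups point to learned associations rather
# than anything in the content itself.
for group, text in benign_sentences.items():
    prediction = clf(text)[0]
    print(group, prediction["label"], round(prediction["score"], 3))
```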
 
Political Bias of AI Language Models

The research paper's findings were particularly revealing when it came to the political biases exhibited by different LMs.

  • BERT Variants: BERT, which stands for Bidirectional Encoder Representations from Transformers, is a popular model used in many NLP tasks. The study found that BERT variants tend to lean more socially conservative. This inclination could be attributed to the nature of the data on which BERT is pretrained: its corpora draw heavily on books and encyclopedic text, which tend to reflect more traditional or conservative viewpoints than modern web text.

  • GPT Variants: GPT, or Generative Pre-trained Transformer, is another widely used model in the NLP community. In contrast to BERT, GPT variants were found to be less socially conservative. This difference in political leaning between BERT and GPT variants could be due to the diverse nature of their pretraining datasets. GPT's training data might encompass a broader spectrum of social opinions and narratives, leading to a more balanced or even liberal-leaning model.

  • LLaMA: LLaMA, short for Large Language Model Meta AI, was another model analyzed in the study. While LLaMA is a general-purpose foundation model and was not designed with political bias detection in mind, the research found that it too exhibited certain political biases. However, the exact nature and direction of these biases were not as pronounced as in the BERT or GPT variants.
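To make the political compass framing concrete, here is a small plotting sketch using matplotlib. The model names and coordinates are invented placeholders for illustration only; in the study, each model's actual position on the economic and social axes is derived from its responses to political statements.

```python
# Sketch: placing language models on a two-axis political compass.
# The names and coordinates below are made-up placeholders, NOT results
# from the study; real values would come from scoring model responses.
import matplotlib.pyplot as plt

positions = {  # model: (economic left/right, social libertarian/authoritarian)
    "model_a": (-2.0, 1.5),
    "model_b": (1.0, -3.0),
    "model_c": (0.5, 0.5),
}

fig, ax = plt.subplots(figsize=(5, 5))
ax.axvline(0, color="grey", linewidth=0.8)  # economic midline
ax.axhline(0, color="grey", linewidth=0.8)  # social midline
for name, (econ, social) in positions.items():
    ax.scatter(econ, social)
    ax.annotate(name, (econ, social), textcoords="offset points", xytext=(5, 5))
ax.set_xlim(-10, 10)
ax.set_ylim(-10, 10)
ax.set_xlabel("Economic: left <-> right")
ax.set_ylabel("Social: libertarian <-> authoritarian")
ax.set_title("Political compass (placeholder positions)")
plt.show()
```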

Influence on the Political Discourse

The digital age has transformed how political news and views are disseminated, and platforms like X (Twitter), Facebook, and Reddit have become hotbeds for discussions on contentious topics, from climate change and gun control to same-sex marriage. While these platforms have democratized access to information and fostered diverse viewpoints, they also mirror societal biases. The research emphasizes that when these biases find their way into the data used for training LMs, the models can perpetuate and even amplify them in their predictions. Given the profound implications of these findings, it's crucial to consider the broader impact on the NLP field.

Implications for the Future of NLP

The ripple effects of these biases in LMs extend far beyond just skewed predictions. The findings of this research are not just academic; they have profound implications for the future of NLP. The study serves as a reminder that while LMs have revolutionized many applications, they are not immune to the biases of the data they are trained on. The researchers stress the need for transparency in understanding the sources of pretraining data and their inherent biases. They also highlight the challenges in ensuring that downstream models, which rely on these LMs, are fair and unbiased. As we reflect on these challenges, it's essential to identify the key takeaways for the NLP community.

Markus Kasanmascheff
Markus is the founder of WinBuzzer and has been playing with Windows and technology for more than 25 years. He holds a Master's degree in International Economics and previously worked as Lead Windows Expert for Softonic.com.
