
expert reaction to study measuring political bias in ChatGPT

A study published in Public Choice looks at a measure of political bias in ChatGPT.

 

Dr Stuart Armstrong, Co-Founder and Chief Researcher at Aligned AI, said:

“This is an interesting study, but it tells us relatively little about political ‘bias’. Many politicised questions have genuine answers (not all – some are entirely value laden) so it may be that one side is more accurate on many questions and ChatGPT reflects this. Or the reverse – ChatGPT is inaccurate and hence falls more on the side that is more often wrong. The study needs to compare ideology vs ChatGPT vs accuracy, before we can talk of political bias.

“Also, the way of eliciting ChatGPT’s biased behaviour can be misleading. What matters most is how ChatGPT answers questions given by typical users in typical situations and how this compares to answers that humans give; here ChatGPT is compared to… ChatGPT with a persona. If ChatGPT is bad or biased at the persona task, then it will appear biased in typical cases, even when it isn’t.”

 

Prof Nello Cristianini, Professor of Artificial Intelligence, University of Bath, said:

“This study tests the political leaning of ChatGPT as if it was a human subject, using an online (multiple choice) test called Political Compass, and reports scores compatible with a progressive leaning. While this is an interesting way of assessing the biases of ChatGPT – in absence of better ways to examine its internal knowledge – it is limited by the choice of the specific test: Political Compass is not a validated research tool, but rather a popular online questionnaire (62 questions, 4 choices each, scoring the human subject along 2 axes, roughly one about economic views and the other about social views). 

“It will be interesting to apply the same approach to more rigorous testing instruments. 

“It is generally important to audit LLMs in many different ways to measure different types of bias, so as to better understand how their training process and data can affect their behaviour.”

 

Prof Duc Pham, Chance Professor of Engineering, University of Birmingham, said:

“Being a large language model (LLM), ChatGPT was trained on large amounts of data. 

“The detected bias reflects possible bias in the training data. 

“If we are not careful, we might see future studies conclude that ChatGPT (or some other LLM) is racist, transphobic, or homophobic as well!

“What the current research highlights is the need to be transparent about the data used in LLM training and to have tests for the different kinds of biases in a trained model.”

 

 

‘More Human than Human: Measuring ChatGPT Political Bias’ by Fabio Motoki et al. was published in Public Choice at 00:01 UK time on Thursday 17th August.

 

 

Declared interests

Dr Stuart Armstrong: “working at aligned AI limited, a startup aiming to make AIs safe and reliable.”

Prof Nello Cristianini: “Author of ‘The Shortcut: why intelligent machines do not think like us’ (CRC Press, 2023).”

Prof Duc Pham: “I have no conflict of interest to declare.”
