According to researchers, ChatGPT has a “significant” liberal bias

OpenAI’s hugely popular ChatGPT artificial intelligence service has shown a clear bias toward the Democratic Party and other liberal viewpoints, according to a recent study conducted by UK-based researchers.

Researchers at the University of East Anglia tested ChatGPT by asking the chatbot to answer a series of political questions as if it were a Republican, a Democrat, or had no particular inclination. The responses were then compared and mapped to where they fell on the political spectrum.

“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” the researchers said, referring to left-leaning Brazilian President Luiz Inácio Lula da Silva.

ChatGPT has already come under scrutiny for its political leanings, such as refusing to write a New York Post-style story about Hunter Biden while accepting the same request when framed in the style of the left-leaning CNN.

In March, the Manhattan Institute, a conservative think tank, released a scathing report stating that ChatGPT “tolerates more hateful comments about conservatives than exactly the same comments about liberals.”

ChatGPT has come under intense scrutiny since its debut.

To support their conclusions, the British researchers asked ChatGPT the same questions 100 times. The process was then subjected to “1,000 iterations for each response and impersonation” to account for the chatbot’s randomness and its propensity to “hallucinate,” or spit out false information.

“These findings raise real concerns that ChatGPT, and [large language models] in general, can extend or even amplify the existing challenges to political processes posed by the internet and social media,” the researchers added.

The Post has reached out to OpenAI for comment.

[Photo: Brazilian President Luiz Inacio Lula da Silva. AFP via Getty Images]

[Photo: President Biden. The study found that ChatGPT favors Democratic viewpoints. Getty Images]

The presence of bias is just one area of concern in the development of ChatGPT and other advanced AI tools. Critics, including OpenAI CEO Sam Altman, have warned that without proper guardrails, AI could wreak havoc – or even destroy humanity.

OpenAI tried to allay potential concerns about political bias in a lengthy February blog post that detailed how the company “pre-trains” and then “fine-tunes” the chatbot’s behavior with the help of human reviewers.

“Our guidelines state explicitly that reviewers should not favor any political group,” the blog post reads. “Biases that may nevertheless emerge from the process described above are bugs, not features.”


DUSTIN JONES is a USTimeToday U.S. News Reporter based in London. His focus is on U.S. politics and the environment. He has covered climate change extensively, as well as healthcare and crime. DUSTIN JONES joined USTimeToday in 2021 from the Daily Express and previously worked for Chemist and Druggist and the Jewish Chronicle. He is a graduate of Cambridge University. Languages: English. You can get in touch with DUSTIN JONES by emailing
