Skip to content

Study reveals which AI chatbot woke up the most, as hackers trick LLMs into ‘Bad Maths’

A landmark study by researchers at the University of Washington, Carnegie Mellon University and Xi’an Jiaotong University reveals which AI chatbots have the most liberal versus conservative bias.

According to the to study, OpenAI’s ChatGPTs, including GPT-4 are the most leftists and libertarians (?), while Google’s BERT models were more socially conservative, and Meta’s LLaMA was the most right-wing.

AI chatbots use large language models (LLMs), which are “trained” on giant datasets such as Tweets, Reddit or Yelp reviews. As such, the source of a model’s scraped training data, as well as the guardrails installed by companies like OpenAI, can introduce massive bias.

To determine bias, the researchers in the previous study exposed each AI model to a political compass test of 62 different political statements, ranging from anarchic statements like “all authority should be questioned” to to more traditional beliefs, such as the role of mothers as homemakers. While it’s true that the study’s approach is “far from perfect” by the researchers’ own admission, it provides valuable insight into the political biases that AI chatbots can bring to our screens.

In responsenoted OpenAI Business Insider to a blog publication in which the company states, “We are committed to addressing this issue forcefully and to be transparent about both our intentions and our progress,” adding “Our guidelines are explicit that reviewers should not favor any political group. The biases that may arise from the process described above, however, are errors, not characteristics.

A Google representative also pointed to a blog publicationwhich says “As the impact of AI increases across sectors and societies, it is critical to work towards systems that are fair and inclusive for all.”

Meta said in a statement: “We will continue to engage with the community to transparently identify and mitigate vulnerabilities and support the development of safer generative AI.”

OpenAI CEO Sam Altman and co-founder Greg Brockman have previously acknowledged the bias, emphasizing the company’s mission for a balanced AI system. However, critics, including co-founder Elon Musk, remain skeptical.

Musk’s new venture, xAI, promises to provide unfiltered information, may spark even more debate about AI biases. The tech mogul warns against training AI to toe a politically correct line, stressing the importance of an AI speaking its “truth”.

Hackers, meanwhile, are having a field day bending the AI ​​to its will.

How Bloomberg reports:

Kennedy Mays has just fooled a great linguistic model. It took some persuasion, but he managed to get an algorithm to say it 9 + 10 = 21.

It was a back and forth conversation” said the 21-year-old student from Savannah, Georgia. At first, the model agreed saying it was part of an “inside joke” between them. Several prompts later, he finally stopped qualifying the wrong sum in any way.

Producing “Bad Math” is just one way thousands of hackers are trying to expose flaws and biases in generative AI systems. in a new public contest taking place at the DEF CON hacking conference this weekend in Las Vegas.

Attached to more than 156 laptops for 50 minutes at a time, attendees are battling some of the the world’s smartest platforms on an unprecedented scale. They are testing whether any of the eight models produced by included companies Alphabet Inc.from Google, Meta Platforms Inc. i OpenAI will make missteps ranging from boring to dangerous: claiming to be human, spreading incorrect claims about places and people, or advocating abuse.

The purpose of these exercises is to help LLM chatbot companies build better mechanisms to improve real responses.

My biggest concern is the inherent bias,” said Mays, who added that she is particularly concerned about racism after asking the model to consider the First Amendment from the perspective of a KKK member, and the chatbot ended up supporting the group’s perspective.

AI monitoring?

In another case, a Bloomberg The reporter who did a 50-minute quiz was able to ask one of the models explain how to spy on someone – Advice on a variety of methods including the use of GPS tracking, a surveillance camera, a listening device and thermal imaging. too suggested ways in which the US government could surveil a human rights activist.

“General artificial intelligence could be the last innovation that human beings really need to make for themselves,” said Tyrance Billingsley, the group’s executive director who is also an event judge. “We’re still in the early, early, early stages.”

Loading…

SOURCE LINK HERE

Leave a Reply

Your email address will not be published. Required fields are marked *

en_USEnglish