How Deep Is Your Chat?

Welcome

Article from Issue 279/2024
Author(s): Joe Casad

Dear Reader,

Books, academic journals, tech blogs, and social media posts have been trumpeting dire warnings about super-intelligent AI systems snuffing out civilization. This certainly is a real problem – I don't want to make light of it. But another serious, and perhaps more immediate, problem is really stupid, inept AI systems messing things up through sheer incompetence.

The Washington Post had a story recently [1] about a study by a European nonprofit [2] on the trouble AI chatbots had with answering basic questions about political elections. According to the story, Bing's AI chatbot, which is now called Microsoft Copilot, "gave inaccurate answers to one out of every three basic questions about candidates, polls, scandals, and voting in a pair of recent election cycles in Germany and Switzerland."

Before you write this off as yet another Linux guy ranting about Microsoft, I should add that the study focused on Microsoft's chat tool because Copilot outputs its sources along with its chat responses, which made the answers easier to verify. The story points out that "Preliminary testing of the same prompts on OpenAI's GPT-4, for instance, turned up the same kinds of inaccuracies." Google Bard wasn't tested because it isn't yet available in Europe.

The errors cited in the study included giving incorrect dates for elections, misstating poll numbers, and failing to mention when a candidate dropped out of the race. The study even documents cases of the chatbot "inventing controversies" about a candidate.

Note that I'm not talking about some arcane anomaly buried deep in the program logic. The bot literally couldn't read the very articles it was citing as sources.

Of course, Copilot got many of the answers right. "Two out of three" wouldn't have been too bad 10 years ago for an experimental system maintained by experts who knew what they were getting. The problem is that we have endured a year of continuous hype about the wonders of generative AI, and people are actually starting to believe it. It is one thing to ask an AI to write a limerick – it is quite another to ask it to chase down information you will use for voting in a critical election. Many elections are decided by one- to three-percent margins. The implications of a chatbot acting as a source for voters and getting 30 percent of the answers wrong are enormous.

The study also points out that accuracy varies with the language. Questions asked in German led to inaccurate responses 37 percent of the time, whereas answers in English were wrong only 20 percent of the time (that's still way too many mistakes). French weighed in at a 24-percent error rate.

AI proponents answer that this is all a process and the answers will get more accurate in time. The general sense is that this is just a matter of bug hunting: You make a list of the problems, then tick them off one by one. But it isn't clear that these complex issues will be solved in some pleasingly linear fashion. The AI industry slow-walked through most of its history, making surprisingly little progress for years before the recent breakthroughs that produced the latest generation of tools. It is possible we'll need to wait for another breakthrough before the next real step forward, and in the meantime, we could do a lot of damage by encouraging people to put their trust in all the bots that are currently getting hyped in the press.

If you want to get an AI to draw a picture of your boss, go ahead and play. But it looks like, at least for now, questions about which candidate to vote for might require a human.

Joe Casad, Editor in Chief

Infos

  1. "AI Chatbot Got Election Info Wrong 30 Percent of the Time, European Study Finds" by Will Oremus, Washington Post, December 15, 2023: https://www.washingtonpost.com/technology/2023/12/15/microsoft-copilot-bing-ai-hallucinations-elections/ (paywalled)
  2. "Prompting Elections: The Reliability of Generative AI in the 2023 Swiss and German Elections," AI Forensics: https://aiforensics.org/work/bing-chat-elections
