Who's Driving?

Welcome

Article from Issue 272/2023

Dear Reader,

I happen to be writing this column on a day when the US Senate is conducting hearings on artificial intelligence (AI) and, specifically, whether a need exists for greater regulation. One of the people testifying is Sam Altman, CEO of OpenAI, the company behind ChatGPT. CEOs of companies that are about to be the subject of regulation often come to Congress with dire warnings about how bad further regulation will be for their businesses. Is it refreshing, or is it alarming, that Altman is taking a different view and calling for more government oversight?

Altman says that his worst fear is that AI "could cause significant harm to the world," adding "If this technology goes wrong, it can go quite wrong" [1]. Who better to warn us about these potential consequences than an industry insider who is directly involved with developing and marketing the technology? And yet, Altman is not a whistle-blower who is resigning because of his misgivings. He is one of the guys who is making it happen, and he isn't saying he wants to stop. He is just saying he wants government to set up some rules.

It is commendable that a CEO would call for more regulation of his industry, yet I can't help feeling a little frustrated that all the onus is on the government and that individuals (as well as companies) working in this industry are not expected to exercise some self-restraint about building a technology that they themselves feel "could cause significant harm to the world." NYU professor Gary Marcus, who also testified, offered a more balanced perspective when he warned of AI becoming a "perfect storm of corporate irresponsibility, widespread deployment, lack of regulation, and inherent unreliability" [2].

The senators played to the cameras, looking for sound bites and attempting to appear august, but in this case, I can sympathize with the difficulty they seem to have in understanding this issue well enough to know how to regulate it. In the old days, people thought they could get computers to "think" like a person by just defining the right rules, but modern generative AI systems find their own way to the answer, with no clear path that anyone can follow afterward to show how they got there, other than that someone (who?) might know what data was used for training.

I have read several news stories and opinion columns on the importance of regulating AI, yet I have seen few details on what this regulation would look like. Rather than writing another one of those opinion columns, I'll tell you what I know. For Altman, regulation means setting requirements for testing to ensure that AI meets "safety requirements." His concept of safety encompasses several parts, including privacy, accuracy, and disinformation prevention. In his opening statement before the Senate, he states, "it is vital that AI companies – especially those working on the most powerful models – adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements" [1].

Many computer scientists have also talked about the need for transparency in disclosing the dataset that was used for training the AI, so that others can check it and search for potential bias. This step seems essential for ensuring accurate and non-discriminatory AI systems, but we'll need to develop new systems for checking these datasets that can sometimes include millions of pages of data.

The EU already has a proposed law on the table [3]. I am not a legal expert (or an EU expert), but part of the AI Act appears to regulate the behavior of the AI, as opposed to the development process, by prohibiting activities such as subliminal manipulation, social scoring, exploitation of children or the mentally disabled, and remote biometric identification by law enforcement. Beyond these prohibited activities, other uses are classified into three different risk categories with accompanying requirements for each category. The requirements address the need for training, testing, and documentation.

I applaud the EU for getting some proposed legislation out on the table. However, the act was written two years ago, and it already sounds a little anachronistic in the ChatGPT era. Things we are worrying about now weren't even imagined then, like what if an AI steals your copyright or deepfakes you into a porn movie?

Times are rapidly changing. We need to be careful, and governments need to be unified in addressing the problem. IBM Chief Privacy and Trust Officer Christina Montgomery, who also testified at the Senate hearing, put it best in summarizing the need for "clear, reasonable policy and guardrails." Montgomery warns that "The era of AI cannot be another era of move fast and break things" [4].
