Who's Driving?
Welcome
Dear Reader,
I happen to be writing this column on a day when the US Senate is conducting hearings on artificial intelligence (AI) and, specifically, whether a need exists for greater regulation. One of the people testifying is Sam Altman, CEO of OpenAI, the company behind ChatGPT. CEOs of companies that are about to be the subject of regulation often come to Congress with dire warnings about how bad further regulation will be for their businesses. Is it refreshing, or is it alarming, that Altman is taking a different view and calling for more government oversight?
Altman says that his worst fear is that AI "could cause significant harm to the world," adding "If this technology goes wrong, it can go quite wrong" [1]. Who better to warn us about these potential consequences than an industry insider who is directly involved with developing and marketing the technology? And yet, Altman is not a whistle-blower who is resigning because of his misgivings. He is one of the guys who is making it happen, and he isn't saying he wants to stop. He is just saying he wants government to set up some rules.
It is commendable that a CEO would call for more regulation of his industry, yet I can't help feeling a little frustrated that all the onus is on the government and that individuals (as well as companies) working in this industry are not expected to exercise some self-restraint about building a technology that they themselves feel "could cause significant harm to the world." NYU professor Gary Marcus, who also testified, offered a more balanced perspective when he warned of AI becoming a "perfect storm of corporate irresponsibility, widespread deployment, lack of regulation, and inherent unreliability" [2].
The senators played to the cameras, looking for sound bites and attempting to appear august, but in this case, I can sympathize with the difficulties they seem to have with understanding this issue well enough to know how to regulate it. In the old days, people thought they could get computers to "think" like a person just by defining the right rules, but modern generative AI systems find their own way to the answer, leaving no clear path that anyone can follow later to show how they got there, other than that someone (who?) might know what data was used for training.
I have read several news stories and opinion columns on the importance of regulating AI, yet I have seen few details on what this regulation would look like. Rather than writing another one of those opinion columns, I'll tell you what I know. For Altman, regulation means setting requirements for testing to ensure that AI meets "safety requirements." His concept of safety encompasses several parts, including privacy, accuracy, and disinformation prevention. In his opening statement before the Senate, he states, "it is vital that AI companies – especially those working on the most powerful models – adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements" [1].
Many computer scientists have also talked about the need for transparency in disclosing the dataset that was used for training the AI, so that others can check it and search for potential bias. This step seems essential for ensuring accurate and non-discriminatory AI systems, but we'll need to develop new methods for checking these datasets, which can sometimes include millions of pages of data.
The EU already has a proposed law on the table [3]. I am not a legal expert (or an EU expert), but part of the AI Act appears to regulate the behavior of the AI, as opposed to the development process, by prohibiting activities such as subliminal manipulation, social scoring, exploitation of children or the mentally disabled, and remote biometric identification by law enforcement. Beyond these prohibited activities, other uses are classified into three different risk categories with accompanying requirements for each category. The requirements address the need for training, testing, and documentation.
I applaud the EU for getting some proposed legislation out on the table. However, the act was written two years ago, and it already sounds a little anachronistic in the ChatGPT era. Things we are worrying about now weren't even imagined then, like what if an AI steals your copyright or deepfakes you into a porn movie?
Times are rapidly changing. We need to be careful, and governments need to be unified in addressing the problem. IBM Chief Privacy and Trust Officer Christina Montgomery, who also testified at the Senate hearing, put it best in summarizing the need for "clear, reasonable policy and guardrails." Montgomery warns that "The era of AI cannot be another era of move fast and break things" [4].
Infos
- Sam Altman's opening statement: https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Bio%20&%20Testimony%20-%20Altman.pdf
- Gary Marcus' opening statement: https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Marcus.pdf
- The AI Act: https://artificialintelligenceact.eu/
- Christina Montgomery's opening statement: https://www.ibm.com/policy/wp-content/uploads/2023/05/Christina-Montgomery-Senate-Judiciary-Testimony-5-16-23.pdf