Be careful what you wish for

Artificial Intelligence

Article from Issue 164/2014
Author(s): Jon "maddog" Hall

"maddog" ponders the rise of intelligent machines.

Recently, I read that Stephen Hawking, the world-famous physicist, had warned people that artificial intelligence (AI) could be very useful or could be the worst thing that ever happened to mankind. The article, which included Hawking's comments, went on to discuss the many things AI could do for us – from analyzing and extracting information from the Internet, to making quick judgments in deploying and firing military weapons, to taking over the world.

The comments section of the article was filled with people who either agreed with Stephen Hawking or (much more often) brushed off his comments, saying "who would create such a thing" or "only a supreme being could create something that is truly intelligent."

Those of you who have been reading my work for a while know that I am a great fan of Dr. Alan Turing, and those familiar with his work also know that his interest in computers stemmed largely from a desire to understand how the human mind worked and to create a machine that could think like a human. Dr. Turing's "test" for what constitutes artificial intelligence is still used today, 60 years after his death.

Dr. Turing also showed that if a complex problem could be solved by any digital computer, then even the simplest digital computer meeting certain criteria could solve it, given enough time and memory. This concept of universal computation was embodied in his Turing machine.
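To make that idea concrete, here is a minimal sketch of a one-tape Turing machine simulator in Python (the names and the example machine are purely illustrative, not from Turing's papers or this column). A small transition table and a read/write head are all it takes; given enough tape and enough time, such a machine can carry out any computation a far more complex computer could perform. The example machine simply adds one to a binary number:

# Minimal Turing machine simulator -- an illustrative sketch only.
# The example machine below increments a binary number by one.

def run(tape, transitions, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). The machine stops in 'halt'.
    """
    tape = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip(blank)

# Transition table: scan to the rightmost digit, then add one with carry.
INCREMENT = {
    ("start", "0"): ("start", "0", +1),  # scan right over the number
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),  # past the end; turn around
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry = 0, carry ripples left
    ("carry", "0"): ("halt",  "1", -1),  # 0 + carry = 1, done
    ("carry", "_"): ("halt",  "1", -1),  # carried past the leftmost digit
}

print(run("1011", INCREMENT))  # prints 1100 (11 + 1 = 12 in binary)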

The human brain is made up of 33-86 billion neurons and trillions of synaptic connections, with each synaptic gap only 20-40 nanometers wide. Each neuron may have hundreds to thousands of synapses, allowing both chemical and electrical communication between neurons.

Exactly how the human mind stores and fetches data, makes decisions, and controls the rest of the body still eludes us, but each day brings new discoveries that bring us closer to understanding how humans think.

To those who believe that a machine capable of AI will never be created, I would point out that people once believed that man would never fly, that the world was flat, and that we would never be able to talk to machines in anything other than the ones and zeros of machine language.

I am also a great fan of science fiction, which blossomed during my early youth, when authors like Isaac Asimov and Philip K. Dick (among many others) wrote about robots and "giant brains" that could (at first) only mimic human thought. The machines were sometimes purposefully limited by their makers, who created "laws" for them to follow. Isaac Asimov's book I, Robot detailed three laws intended to control the robots, with stories that showed what happened when the laws were relaxed.

Often, these AI devices were connected to some type of unlimited power supply that could "never" be turned off (to keep enemies from cutting power to the device), or the AI device itself built a power supply that could not be turned off. Typically, the moment that power supply was finished was when the AI device "went crazy" and tried to take over the world. This is why I often told my students that the person who created a computer that could not be unplugged was truly stupid.

Nevertheless, society marches forward with ideas like artificial intelligence and the "Internet of Things" without safeguards in place in case these things "go wrong." Policy needs to keep pace as the technology moves forward, but it often lags dramatically.

Even if AI creations do not want to take over the world, what does it mean when we turn off the power to an artificially created intelligence? Is it murder? Do the same rules of "computer ownership" apply when hundreds of thousands of computers are literally dropped in our backyards as part of the sensor network of the "Internet of Things"? Should we be able to demand to see the source code for these things, to make sure they are not transmitting any data beyond what we have been told?

I am often told by users of GNU/Linux that they are not "technical" and therefore cannot contribute to the Free and Open Source Software cause. But here is an area where philosophy and the humanities can contribute: helping technical people through the knothole of whether an organism that is not made of human flesh is "human," whether a being made of silicon flip-flops and wires can have the same rights as one made of neurons and synapses, and where those rights end if they begin to infringe on the rights of "real" humans.

Carpe Diem.

The Author

Jon "maddog" Hall is an author, educator, computer scientist, and free software pioneer who has been a passionate advocate for Linux since 1994 when he first met Linus Torvalds and facilitated the port of Linux to a 64-bit system. He serves as president of Linux International®.
