Sentience and Sensibility


Article from Issue 263/2022

Dear Reader,

I feel like we entered a new era earlier this year when Google scientist Blake Lemoine declared that he thought Google's LaMDA artificial intelligence is "sentient," and that the company should probably be asking LaMDA's permission before studying it. The news this month is that Google fired Lemoine. The stated reason was that he violated a confidentiality agreement, but few observers could separate the termination from Lemoine's announcement and the controversy that followed.

Let me explain: I don't think this story is important because the computer was sentient – in fact, I'm quite sure it wasn't. I just find it strange that we're even talking about it, and the way we're talking about it is even stranger. Several leading computer scientists, and Google as a company, have gone on record stating that the claim was preposterous. As a computer science event, the story wasn't much, but as a pop culture phenomenon, it was pure gold. Was this the classic dystopian sci-fi story of a man falling in love with a machine? Or is there a chance that this program is seriously a life form? ("Whoa, kind of makes you think, doesn't it…?")

The oddest part was that these several leading computer scientists thought it was important to explain that, despite what you're thinking, no seriously, the program really doesn't feel things the way that we do. To be fair, they were probably working peacefully in their labs when a press guy showed up and turned a TV camera on them, but still I wonder if we're approaching this the right way – and if this "is it alive?" question is a diversion from the serious questions we should be asking.

The term "sentient," in this case, relates to the state of having feelings, rather than just knowledge. Many have equated this with experiencing a state of consciousness. So this debate has migrated from the cold, analytical realm of computer science to the fuzzy sphere of metaphysics, where these concepts are quite difficult to define.

Before you say whether a computer has consciousness, you kind of have to define what consciousness is, and there is a vast range of answers for that, depending on whether you are talking to a priest, a psychologist, a neurologist, or a new age mystic. But the point is, AIs like LaMDA are not created to be human – they are created to make people think they are human. If you learn to tap into human response patterns and emotional cues, humans will treat you differently. (Sorry dog lovers: That's what your dog is doing.)

Computer scientists are working overtime right now trying to create systems that behave as though they are conscious so that humans will react to them more "naturally." In other words, these systems will manipulate us emotionally.

We will then have two choices:

  • Fall for these artificial response patterns and emotional cues (react to the machines as if they were our friends – in other words, be manipulated)
  • Ignore the artificial response patterns and emotional cues (in other words, get practice every day treating entities that behave like humans in a callous and uncaring manner that denies their humanity)

Neither option sounds particularly appealing to me. Of course, Google, Meta, and the other for-profit corporations who are working on these kinds of solutions will say they just want to build a better chatbot, but that's the whole problem with this tech space: We're not so good at putting genies back in bottles once they get out.

Editor in Chief,

Joe Casad
