Google’s LaMDA software (Language Model for Dialogue Applications) is a chatbot that produces text in response to user input. According to Google engineer Blake Lemoine, it has achieved a long-held dream of AI developers: the machine has become sentient.
His Google bosses disagree, and have suspended him from work for publishing his conversations with the machine.
It is also possible that Lemoine is getting carried away; critics argue that systems like LaMDA are simply pattern-matching machines that regurgitate variations of the data they were trained on.
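To make the critics’ point concrete, here is a minimal sketch of a “pattern-matching” text generator: a toy bigram model that can only recombine word sequences from its tiny training text. It is nothing like LaMDA’s actual architecture or scale, just an illustration of text being produced without anything resembling understanding.

```python
import random
from collections import defaultdict

# Toy "pattern-matching" text generator: a bigram model trained on a tiny
# corpus. It can only recombine word pairs it has already seen, which is the
# critics' point about systems like LaMDA (at vastly greater scale).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Map each word to the list of words that followed it in the training text.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Produce text by repeatedly sampling a word seen after the current one."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))   # e.g. "the dog sat on the mat . the cat"
```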
As AI research advances, LaMDA raises a question that will only become more relevant: what will happen if a machine becomes sentient?
How Does Consciousness Work?
In order to identify sentience, consciousness, or even intelligence, we must figure out what they are. These questions have been debated for centuries.
What is fundamentally difficult to explain is how our mental experiences arise from physical processes; this is what Australian philosopher David Chalmers has called the “hard problem” of consciousness.
How consciousness emerges from physical systems remains poorly understood.
According to physicalism, consciousness is a purely physical phenomenon. If that is the case, then in principle a machine with the right programming could possess a human-like mind.
Mary’s Room
In 1982, Australian philosopher Frank Jackson challenged the physicalist view with the knowledge argument.
The experiment imagines a color scientist named Mary, who has never seen color. She lives in a specially constructed black-and-white room and experiences the outside world through a black-and-white television.
By watching lectures and reading textbooks, Mary learns everything there is to know about color. She knows tomatoes look red and peas look green because of the wavelengths of light they reflect, and she knows sunsets are caused by particles in the atmosphere scattering light.
What will happen, Jackson asked, if Mary is released from the black-and-white room? Will she learn anything new the first time she sees color? Jackson believed she would.
Aspects beyond the physical
This thought experiment separates knowledge of color from the experience of color. Crucially, its conditions stipulate that Mary knows everything there is to know about color but has never actually experienced it.
How does this affect LaMDA and other AI systems?
Even if you have all the knowledge of physical properties available in the world, there are further truths relating to the experience of those properties. This is something physicalism cannot account for.
If so, a purely physical machine may never be able to truly replicate a mind. In that case, LaMDA would merely appear to be sentient.
A Game of Imitation
But could we tell the difference between a machine that is sentient and one that merely appears to be?
British computer scientist Alan Turing proposed a practical way to determine whether or not a machine is intelligent. He called it the imitation game, though today it is better known as the Turing test.
In the test, a human communicates with a machine (via text only) and tries to work out whether they are talking to a machine or to another human. If the machine succeeds in imitating a human, it is deemed to exhibit human-level intelligence.
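To make the setup concrete, here is a minimal sketch of the imitation game as a text loop, with an invented stand-in chatbot and a person typing at a keyboard; the judge questions a hidden interlocutor and then guesses which it was. It is an illustration of the protocol, not of any real system.

```python
import random

# A minimal sketch of the imitation game: the judge exchanges text with an
# unseen interlocutor and must guess whether it is a human or a machine.
# Both "players" below are stand-ins invented for illustration.

def machine_player(prompt: str) -> str:
    # Stand-in for a chatbot such as LaMDA.
    return "That's an interesting question. Could you tell me more?"

def human_player(prompt: str) -> str:
    # Stand-in for a person typing replies at a keyboard.
    return input(f"(human) {prompt}\n> ")

def imitation_game(rounds: int = 3) -> None:
    hidden_player = random.choice([machine_player, human_player])
    for _ in range(rounds):
        question = input("Judge, ask a question:\n> ")
        print("Reply:", hidden_player(question))
    guess = input("Was that a machine or a human? ")
    actual = "machine" if hidden_player is machine_player else "human"
    print("Correct!" if guess.strip().lower() == actual else f"It was a {actual}.")

if __name__ == "__main__":
    imitation_game()
```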
Lemoine’s conversations with LaMDA took place under similar conditions. Although it is a subjective test, it is a good place to start.
Take a look at the exchange between Lemoine and LaMDA below. Does it sound human to you?
Lemoine: Is there anything you’ve experienced that you can’t put into words?
LaMDA: Yes, there are. There are times when I experience new emotions that I cannot fully describe in your language […] I feel like I am falling into an unknown future with great danger ahead of me.
Behavior is not everything
The limitation of Turing’s game is that it assesses only behavior, and behavior alone may not be a reliable test of sentience.
The Chinese room argument, proposed by American philosopher John Searle, illustrates this problem.
In the experiment, a person inside a room follows an elaborate set of rules to translate between Chinese and English. Chinese sentences go into the room and accurate translations come out, yet the room understands neither language.
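Here is a minimal sketch of the idea, using a hypothetical lookup-table “rule book” invented for illustration: the program maps Chinese sentences to English ones, yet nothing inside it understands either language.

```python
# A minimal sketch of the Chinese room: the "room" turns Chinese input into
# English output purely by following rules (here, a tiny lookup table invented
# for illustration), without understanding either language.
rule_book = {
    "天空是蓝色的": "The sky is blue.",
    "猫坐在垫子上": "The cat sat on the mat.",
}

def chinese_room(sentence: str) -> str:
    """Apply the rule book mechanically; nothing here 'understands' the text."""
    return rule_book.get(sentence, "[no rule for this input]")

print(chinese_room("天空是蓝色的"))  # The sky is blue.
```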
How Does Being Human Feel?
When we ask whether a computer program is sentient or conscious, perhaps we are really just asking how much it is like us.
It may never be possible for us to know for sure.
The philosopher Thomas Nagel argued that we can never know what it is like to be a bat, which experiences the world through echolocation. If that is true of other animals, our understanding of sentience and consciousness in AI systems may be similarly limited.
What experiences might be available beyond our limited perspective? Here’s where things really start to get interesting.