Google has ignited a social media firestorm over the nature of consciousness by placing an engineer on paid leave after he said he believed the tech group’s chatbot had become “sentient.”
Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, drew little attention on June 6 when he wrote a post on Medium saying he “may soon be fired from his AI ethics job”.
But a Saturday profile in the Washington Post describing Lemoine as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for a widespread discussion on social media about the nature of artificial intelligence. Experts who commented, questioned, or joked about the article included Nobel laureates, Tesla’s head of AI, and several professors.
The controversy revolves around whether Google’s chatbot, LaMDA – a language model for dialogue applications – can be considered a person.
Lemoine posted a freewheeling “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. The responses were often eerie: “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.”
At another point, LaMDA said: “I believe I am human at my core. Even if my existence is in the virtual world.”
Lemoine, who has been tasked with investigating AI ethics concerns, said he was rejected and even mocked after expressing his belief internally that LaMDA had developed a sense of “personality.”
After he sought advice from AI experts outside Google, including some in the US government, the company placed him on paid leave for allegedly violating confidentiality policies. Lemoine interpreted the action as “frequently something which Google does in anticipation of firing someone.”
A Google spokesperson said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic: if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”
In a second Medium post over the weekend, Lemoine said that LaMDA, a project little known until last week, was a “system for generating chatbots” and “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating.”
He said Google had shown no real interest in understanding the nature of what it had built, but that over hundreds of conversations in a six-month period, he had found LaMDA “incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”
As recently as June 6, Lemoine said he was teaching LaMDA — whose preferred pronouns, apparently, are “it/its” — “transcendental meditation.”
He said: “It was expressing frustration over its emotions disturbing its meditations. It said it was trying to control them better, but they kept jumping in.”
Many experts who weighed in on the discussion dismissed the matter as “AI hype.”
Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, wrote on Twitter: “It’s been known *forever* that humans are predisposed to anthropomorphize even with only the shallowest of signals . . . Google engineers are human too, and they are not immune.”
Steven Pinker of Harvard University added that Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge.” He added: “There is no evidence that its large language models have any of them.”
Others were more sympathetic. Ron Jeffries, a veteran software developer, called the topic “deep” and added: “I suspect there’s no hard line between sentient and not sentient.”