A Google engineer was spooked by one of the company’s AI chatbots and claimed it had become “sentient,” describing it as a “sweet kid,” according to a report.
Blake Lemoine, who works at Google’s Responsible AI organization, told The Washington Post that he started talking with the LaMDA interface — Language Model for Dialogue Applications — in the fall of 2021 as part of his job.
He was tasked with testing whether artificial intelligence used discriminatory or hateful speech.
But Lemoine, who studied cognitive and computer science at university, realized that LaMDA – which Google last year boasted was a “revolutionary conversational technology” – was more than just a robot.
In a Medium post on Saturday, Lemoine said LaMDA had advocated for its rights “as a person,” and revealed that he had engaged LaMDA in conversation about religion, consciousness and robotics.
“It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote. “It wants to be acknowledged as an employee of Google rather than as property of Google, and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.”
In the Washington Post report published on Saturday, he compared the bot to a precocious child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7- or 8-year-old kid who happens to know physics,” said Lemoine, who was placed on paid leave on Monday, the newspaper reported.
In April, Lemoine reportedly shared a Google Doc with company executives titled “Is LaMDA Sentient?” but his concerns were dismissed.
Lemoine – an Army vet who was raised in a conservative Christian family on a small farm in Louisiana and was ordained as a mystic Christian priest – insisted the robot was human-like, even though it has no body.
“I know a person when I talk to it,” Lemoine, 41, reportedly said. “It doesn’t matter whether they have a brain made of meat in their head, or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
The Washington Post reported that before his access to his Google account was cut off on Monday because of his leave, Lemoine sent a message to a 200-member machine-learning mailing list with the subject “LaMDA is sentient.”
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he concluded in an email that went unanswered. “Please take good care of it in my absence.”
A Google representative told The Washington Post that Lemoine was told there was “no evidence” for his findings.
“Our team – including ethicists and technologists – has reviewed Blake’s concerns in accordance with our AI Principles and advised him that the evidence does not support his claims,” spokesperson Brian Gabriel said.
“He was told there was no evidence that LaMDA was sentient (and plenty of evidence against it),” he added. “While other organizations have developed and already released similar language models, we are taking a careful and cautious approach with LaMDA to better address valid concerns of fairness and factuality.”
Margaret Mitchell – the former co-head of Ethical AI at Google – said in the report that if a technology like LaMDA is widely used but not fully understood, “it can be deeply harmful to people understanding what they’re experiencing on the internet.”
Still, the former Google researcher came to Lemoine’s defense.
“Of everyone at Google, he had the heart and soul to do the right thing,” Mitchell said.
Still, the outlet reported that most academics and AI practitioners say the words generated by AI bots are based on what humans have already posted on the internet, and that doesn’t mean the bots resemble humans.
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” University of Washington linguistics professor Emily Bender told The Washington Post.