
No, Google’s AI is not sentient



CNN Business

Tech companies are constantly touting the capabilities of their ever-improving artificial intelligence. But Google was quick to shut down claims that one of its programs had advanced so far that it had become sentient.

According to a striking Washington Post account published Saturday, a Google engineer said that after hundreds of interactions with a cutting-edge, unreleased AI system called LaMDA, he believed the program had achieved a level of consciousness.

In interviews and public statements, many in the AI community have pushed back against the engineer’s claims, while some have pointed out that his story highlights how this technology can lead people to attribute human traits to it. But the belief that Google’s AI could be sentient arguably underscores both our fears and our expectations about what this technology can do.

LaMDA, which stands for “Language Model for Dialogue Applications,” is one of several large-scale AI systems that have been trained on vast amounts of text from the internet and can respond to written prompts. At bottom, such systems work by finding patterns and predicting which word or words should come next. They have become increasingly good at answering questions and producing writing that can feel convincingly human – and Google itself presented LaMDA in a May 2021 blog post as a system that can “engage in a free-flowing way about a seemingly endless number of topics.” But the results can also be goofy, bizarre, disturbing, and prone to rambling.

The engineer, Blake Lemoine, reportedly told the Washington Post that he had shared evidence with Google that LaMDA was sentient, but the company disagreed. In a statement Monday, Google said its team, which includes ethicists and technologists, “reviewed Blake’s concerns in accordance with our AI principles and advised him that the evidence does not support his claims.”

On June 6, Lemoine posted on Medium that Google had put him on paid administrative leave “as part of an investigation into the AI ethics issues I was raising within the company” and that he could be fired “soon.” (He cited the experience of Margaret Mitchell, who had co-led Google’s Ethical AI team until Google fired her in early 2021 following her outspokenness regarding the late 2020 exit of her then-co-lead, Timnit Gebru. Gebru was ousted after internal disputes, including one over a research paper that the company’s AI leadership told her to withdraw from consideration for presentation at a conference, or to remove her name from.)

A Google spokesperson confirmed that Lemoine remains on administrative leave. According to the Washington Post, he was placed on leave for violating the company’s confidentiality policy.

Lemoine was unavailable for comment on Monday.

The continued emergence of powerful computer programs trained on massive amounts of data has also raised concerns about the ethics governing the development and use of these technologies. And sometimes advances are viewed through the lens of what may eventually be possible, rather than what is possible today.

Responses from members of the AI community to Lemoine’s experience ricocheted around social media over the weekend, and they generally arrived at the same conclusion: Google’s AI is nowhere close to consciousness. Abeba Birhane, a senior fellow in trustworthy AI at Mozilla, tweeted Sunday, “we have entered a new era of ‘this neural net is conscious’ and this time it’s going to drain so much energy to refute.”

Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including “Rebooting AI: Building Artificial Intelligence We Can Trust,” called the idea of LaMDA being sentient “nonsense on stilts” in a tweet. He quickly wrote a blog post pointing out that all such AI systems do is match patterns by drawing on enormous databases of language.

Blake Lemoine poses for a portrait at Golden Gate Park in San Francisco, Calif., Thursday, June 9, 2022.

In a Monday interview with CNN Business, Marcus said the best way to think about systems such as LaMDA is as a “glorified version” of the autocomplete software you may use to predict the next word in a text message. If you type “I’m really hungry, I want to go to a,” it might suggest “restaurant” as the next word. But that is a prediction made using statistics.
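To make the autocomplete analogy concrete, here is a minimal sketch of next-word prediction from simple word-pair counts. It is purely illustrative and is not how LaMDA is built – LaMDA uses a large neural network trained on vastly more text – but the underlying task, predicting a plausible next word from statistical patterns, is the same. The tiny corpus below is invented for the example.

```python
# Toy "autocomplete": count which word follows which in a tiny corpus,
# then suggest the most frequent successor. Illustrative only.
from collections import Counter, defaultdict

corpus = (
    "i am really hungry i want to go to a restaurant "
    "we want to go to a restaurant "
    "they want to go to a movie"
).split()

# Count how often each word is followed by each other word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def suggest_next(word: str) -> str:
    """Return the statistically most common next word, or '?' if unseen."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else "?"

print(suggest_next("a"))  # -> "restaurant" (seen twice vs. "movie" once)
```

A predictor like this has no notion of hunger or restaurants; it only reflects frequencies in the text it was fed – which is Marcus’s point about mistaking fluent prediction for consciousness.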

“No one should think that autocomplete, even on steroids, is conscious,” he said.

In an interview, Gebru, who is the founder and executive director of the Distributed AI Research Institute, or DAIR, said Lemoine is a victim of numerous companies and researchers claiming that sentient AI or artificial general intelligence – an idea that refers to AI that can perform human-like tasks and interact with us in meaningful ways – is not far away.

For example, she noted, Ilya Sutskever, co-founder and chief scientist of OpenAI, tweeted in February that “it may be that today’s large neural networks are slightly conscious.” And last week, Google Research vice president and fellow Blaise Aguera y Arcas wrote in a piece for The Economist that when he started using LaMDA last year, “I increasingly felt like I was talking to something intelligent.” (That piece now includes an editor’s note pointing out that Lemoine has since “been placed on leave after claiming in an interview with The Washington Post that LaMDA, Google’s chatbot, had become ‘sentient.’”)

“What’s happening is there’s just such a race to use more data, more compute, to say you’ve created this general thing that’s all knowing, answers all your questions or whatever, and that’s the drum you’ve been playing,” Gebru said. “So how are you surprised when this person is taking it to the extreme?”

In its statement, Google noted that LaMDA has undergone 11 “distinct AI Principles reviews,” as well as “rigorous research and testing” related to quality, safety, and its ability to make fact-based statements. “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” the company said.

“Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Google said.
