
Google suspends engineer who claims its AI is sentient

Google has placed one of its engineers on paid administrative leave for allegedly violating its confidentiality policies after he grew concerned that an AI chatbot system had achieved sentience, The Washington Post reports. The engineer, Blake Lemoine, works for Google’s Responsible AI organization and was testing whether its LaMDA model generates discriminatory language or hate speech.

The engineer’s concerns are said to have arisen from the convincing answers he saw the AI system generate about its rights and the ethics of robotics. In April, he shared a document with executives titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript via his Medium account), which he says shows it arguing “that it is sentient because it has feelings, emotions and subjective experience.”

Google believes Lemoine’s actions related to his work on LaMDA violated its confidentiality policies, The Washington Post and The Guardian report. He reportedly invited a lawyer to represent the AI system and spoke to a representative of the House Judiciary Committee about allegedly unethical activities at Google. In a June 6 post on Medium, the day Lemoine was placed on administrative leave, the engineer said he had sought “a minimum of outside consultation to help guide me in my investigations” and that the list of people he had held discussions with included US government employees.

The search giant publicly announced LaMDA at Google I/O last year, saying it hoped the model would improve its conversational AI assistants and make conversations more natural. The company already uses similar language model technology for Gmail’s Smart Compose feature and for search engine queries.

In a statement given to WaPo, a Google spokesperson said there was “no evidence” that LaMDA was sentient. “Our team – including ethicists and technologists – has reviewed Blake’s concerns in accordance with our AI Principles and advised him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and plenty of evidence against it),” spokesperson Brian Gabriel said.

“Of course, some in the wider AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems mimic the types of exchanges found in millions of sentences and can riff on any fantastical topic.”

“Hundreds of researchers and engineers have conversed with LaMDA and we don’t know of anyone else making sweeping claims, or anthropomorphizing LaMDA, as Blake did,” Gabriel said.

A linguistics professor interviewed by WaPo agreed that it is incorrect to equate persuasive written responses with sentience. “We now have machines that can generate words without thinking, but we haven’t learned to stop imagining a mind behind them,” said Emily M. Bender, a professor at the University of Washington.

Timnit Gebru, a prominent AI ethicist fired by Google in 2020 (although the search giant says she quit), said the discussion of AI sentience risked “derailing” larger ethical conversations regarding the use of artificial intelligence. “Instead of discussing the wrongdoings of these corporations, sexism, racism, AI colonialism, centralization of power, white man’s burden (building the right ‘AGI’ [artificial general intelligence] to save us when what they are doing is exploitation), spent the whole weekend discussing sentience,” she tweeted. “Derailment mission accomplished.”

Despite his concerns, Lemoine said he intends to continue working on AI in the future. “My intention is to stay in AI whether Google keeps me on or not,” he wrote in a tweet.

Updated June 13, 6:30 a.m. ET: Updated with additional statement from Google.
