Google engineer claiming AI has consciousness placed on administrative leave

Washington, June 12: A Google engineer was placed on administrative leave after he voiced alarm about the possibility that LaMDA, Google’s artificially intelligent chatbot generator, could be sentient, The Washington Post reports.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Google engineer Blake Lemoine, 41, told the newspaper.
The Washington Post said in its Saturday report that Lemoine had been gathering evidence that LaMDA (Language Model for Dialogue Applications) had achieved consciousness before Google placed him on paid administrative leave on Monday for violating the company’s confidentiality policy.
Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, have dismissed Lemoine’s claims.
“Our team including ethicists and technologists has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Google spokesperson Brian Gabriel said as quoted by The Washington Post.
Lemoine had invited a lawyer to represent LaMDA and spoke to a representative of the House Judiciary Committee about what he described as Google’s unethical activities, according to the newspaper.
The engineer began talking to LaMDA in the fall to test whether it used discriminatory language or hate speech, and eventually noticed that the chatbot talked about its rights and personhood.
Meanwhile, Google maintains that the artificial intelligence system simply uses large volumes of data and language pattern recognition to mimic speech, and has no real wit or intent of its own.
Lemoine says that when he asked LaMDA about the things that it was afraid of, the chatbot responded that “there’s a very deep fear of being turned off.” “Would that be something like death for you?” Lemoine continued. “It would be exactly like death for me. It would scare me a lot,” the chatbot said.
Another example Lemoine gave was a discussion he had with LaMDA about Isaac Asimov’s third law of robotics. The chatbot disagreed that the law, which states that robots should protect their own existence unless ordered otherwise by a human being or unless doing so would harm a human being, necessarily made robots slave-like.
Margaret Mitchell, the former head of Ethical AI at Google, said that the human brain tends to construct realities without taking all the facts into account, and warned that conversations with chatbots can lead some people to fall into that kind of illusion.

(UNI)