Google executives disputed Lemoine’s claims the program displays characteristics of sentience and suspended him for disclosing proprietary information.
“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on Saturday.
Google’s official position is that there is no evidence to suggest LaMDA is self-aware or sentient, and that there is a great deal of evidence against Lemoine’s claim, per Brian Gabriel, a spokesperson for the company.
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
It may be that Lemoine has been fooled by an advanced language model that uses deep machine learning to mimic human conversation. But Google doesn’t have the best track record when it comes to being forthright with the public and fair with its employees, especially those in the field of AI.
Margaret Mitchell, former head of ethics in artificial intelligence at Google, was fired from the company a month after being investigated for improperly sharing information.
Timnit Gebru, a Google AI research scientist hired by the company to be an outspoken critic of unethical AI, was fired after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems, according to the Daily Mail.