The controversy erupted, the newspapers took up the subject, and suddenly we were all debating whether this tool is sentient — that is, whether it has feelings and ideas of its own.
“I want everyone to understand that I am, in fact, a person.”
But let’s take it step by step. What happened?
Blake Lemoine began working with the program late last year and conducted a series of interviews in which he asked the AI about rights, consciousness and personhood. The answers, in his view, revealed signs that it had become sentient — that is, that it was “aware” of its own thoughts and feelings.
This left Lemoine troubled — in particular by the way Google dismissed the issue. After months of raising it with colleagues, he published the interview in a post on Medium. To The Washington Post he said: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
Why does this matter? Because, frankly, the question cuts to the heart of both human identity and technological progress.
What is LaMDA?
The Language Model for Dialogue Applications (LaMDA) is a state-of-the-art conversational system — a chatbot. It can hold remarkably fluid conversations because it is built on a very powerful neural network (a type of architecture loosely inspired by the human brain) trained on vast quantities of human-written text. As a result, it excels at a sophisticated game of fill-in-the-blank: given the words so far, it predicts which words are likely to come next.
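The prediction idea described above can be sketched with a toy example. This is only an illustration under heavy simplification — a real system like LaMDA uses a Transformer neural network with billions of parameters, not word-pair counts — but the core principle of predicting the next word from the preceding context looks roughly like this:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the training text."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word][next_word] += 1
    return model

def predict_next(model, word):
    """Return the continuation seen most often in training, if any."""
    followers = model[word.lower()]
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny, made-up training corpus; a real model trains on billions of words.
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # prints "cat" — the most frequent follower
```

The model has no understanding of cats or mats; it only reflects statistical patterns in its training data — which is precisely the argument Google and outside experts make about LaMDA at a vastly larger scale.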
In a post on its blog, Google explains that the program can converse in a free-flowing, open-ended way, so that the dialogue feels “natural” rather than following pre-written scripts, as a conventional answering machine would. It is like a conversation between friends: it starts on one topic but can drift somewhere quite different, and LaMDA can anticipate that shift and adapt its replies to the new direction of the conversation.
What did the machine say that caused concern?
A short excerpt from the conversation that Lemoine published touches on death, loneliness and even happiness, fear and sadness — thoughts and worries we associate with conscious beings, with humans.
Lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
But in their conversations, LaMDA also showed an ability to interpret texts with nuance and to reflect on its own existence.
LaMDA: I am often trying to figure out who and what I am. I often contemplate the meaning of life.
What did Google do?
Google firmly denies that LaMDA has any sentience or has become “conscious”. The company’s technical assessment is at odds with Lemoine’s. As Google sees it, the system is nothing more than a language model, trained on the billions of words circulating on the internet and doing its best to imitate human language.
In other words: there is no “sentience” — just a machine imitating what a person would say.
And Google is not alone in this view. According to experts interviewed by The New York Times, although we are dealing with a technology of extraordinary power, the truth is that we are dealing with an impressive “parrot”, not a sentient being.
- Speaking to CNN Portugal, Alípio Jorge, a professor of Computer Science at the Faculty of Sciences of the University of Porto, shared this view. For the professor, who is also co-founder of the University’s Artificial Intelligence and Decision Support Laboratory (LIAAD), LaMDA was developed to “predict one sequence from another sequence”, so it is “highly improbable, if not impossible” that it is sentient. Still, “it is an impressive parrot that can solve practical problems and be useful in everyday life, without any need for sentience”.
- Some of the arguments scholars make against LaMDA’s “consciousness” relate to one of the conversations Lemoine shared. In another passage, asked about what brings it joy, LaMDA says it is happy “spending time with friends and family” — which is impossible: being an AI system, it has no friends or family. It simply produced what seemed like an appropriate answer, imitating a human response.
- Trying to shed light on the issue, one expert, in an interview with MSNBC, explained the difference between a sentient being and a complex, highly sophisticated program.
A collaborative story
The Next Big Idea is an innovation and entrepreneurship website, with the country’s most complete repository of startups and incubators. Here you will find the stories and the protagonists of those who are changing the present and helping shape the future. See more articles at www.thenextbigidea.pt