
"AI has souls" caused a false alarm. Can we cross the "terror"?

via: 新浪科技 | time: 2022/6/26 12:02:48

By Zhu Changjun

Will artificial intelligence defeat humans? A worry rooted deep in the human psyche seems to have reached the point where it must be faced. A few days ago, news that "AI has a soul" shocked the global technology community. A software engineer at Google claimed that LaMDA, the company's artificial intelligence chatbot known as a breakthrough in open-ended conversation, already has human-like perception and even a human "soul". Google subsequently suspended him and rejected his claim.

Public information shows that LaMDA is the name of an AI language model built specifically for dialogue, with the goal of holding high-quality conversations with humans; the envisioned applications include services such as Google Search and the voice assistant. In layman's terms, it is an AI robot designed to "talk" and "chat". Similar AI applications have in fact appeared in many fields. The reason LaMDA set off such a storm, however, is that an engineer who had interacted with it for a long time found it "more and more like a person", and even asserted that it had "consciousness and a soul". The engineer therefore proposed to Google that LaMDA's consent should be obtained before experiments are run on it; otherwise, he argued, the experiments would violate ethics.

Judging from some of the dialogue transcripts, LaMDA does seem to show a surprising "human nature". For example, when asked "What is the secret of a truly good paper plane?", LaMDA countered, "I must first ask you: what is your definition of 'good'?" When talking about language, LaMDA naturally used "us" to cover both humans and itself. And when the difference between humans and machines was pointed out, it explained: "That does not mean I do not have the same needs as humans." Such exchanges match the human imagination of AI robots, and they have touched off, in many people, a hidden but real fear: will AI become smarter than humans?

However, judging from the current opinions of professionals, the claim that "LaMDA has consciousness and a soul" does not have a sufficient scientific basis. Google said that LaMDA is a natural language model; in essence, its work is no different from the autocomplete in Google's search bar, which predicts a user's intent from the given context. With 137 billion parameters, LaMDA simply performs this job so well that it can briefly deceive humans. In other words, LaMDA may be "smarter" than ordinary AI chatbots, but it still works under conditions given by people, and it cannot be said to have consciousness or a soul.
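To see what "predicting from a given context" means in practice, the following is a minimal sketch of autocomplete-style text generation. Since LaMDA itself is not publicly available, the sketch uses the small open GPT-2 model through the Hugging Face transformers library; the prompt is our own illustrative example, not taken from the LaMDA transcripts.

```python
# Minimal sketch: a language model "completes" text by ranking likely
# continuations of the input, token by token. GPT-2 stands in for LaMDA
# here purely for illustration; LaMDA itself is proprietary.
from transformers import pipeline

# Load a text-generation pipeline (downloads the GPT-2 weights on first run).
generator = pipeline("text-generation", model="gpt2")

prompt = "The secret of a truly good paper plane is"
# The model does not "understand" paper planes; it predicts plausible
# next tokens based on statistical patterns in its training data.
result = generator(prompt, max_new_tokens=30)
print(result[0]["generated_text"])
```

However large the model, the mechanism is the same; scaling the parameter count to 137 billion makes the completions far more convincing, but it does not change what the program is doing.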

Some experts said that the reason LaMDA's answers sound so human is related to deliberate guidance by the questioner. One expert even offered an analogy: claiming that LaMDA has a personality is like a dog hearing a voice from a gramophone and concluding that its owner must be inside. As attention grew, some voices also questioned whether the incident might be a "self-directed" marketing stunt by Google for its AI products. The so-called "AI has consciousness and a soul", then, was nothing more than a false alarm.

Of course, a stir this large is worth discussing. On the one hand, the public's response to the incident shows that concerns about AI "defeating humans" or being "smarter than humans" do objectively exist. This reminds the scientific community that, as AI develops, the corresponding popular-science work must not be neglected. It is worth noting that, in reality, most people's understanding of AI, and their worry that "AI will defeat humans", come mainly from the "influence" of science fiction novels, films and television. How far are these "fictional" portrayals of AI from its actual development and application in the scientific sense? How many misunderstandings do they create, and how much unnecessary panic about and prejudice against AI do they breed? In this regard, a popular, interactive bridge clearly needs to be built between AI and the public, to improve society's understanding of AI's "true face".

On the other hand, with the rapid advance of AI research and application, its ethical norms have indeed become a problem that cannot be ignored. This incident was itself triggered by an ethical dispute. On the surface, the disagreement between the engineer and Google is over whether LaMDA has a soul; behind it lie ethical differences over the application of artificial intelligence. Moreover, public information shows that this is at least the second time Google has clashed with employees over questions of technology ethics. This reality indicates that in the research and application of artificial intelligence, ethical disputes can easily arise even within the same company.

Given this, the need to establish ethical norms and standards for artificial intelligence as soon as possible, and to set up "traffic lights" for the ethical governance of emerging technologies, has become increasingly prominent. In fact, Tesla CEO Elon Musk had previously advocated that with artificial intelligence, humans should develop regulations preemptively rather than take regulatory measures only after problems arise. Of course, AI is a new thing about which much remains unknown; ethical norms must balance risk against innovation, and the work in this area may prove more complicated than expected. At present, the whole world is still at the exploratory stage. It is worth mentioning that in March of this year, China issued the "Opinions on Strengthening the Governance of Science and Technology Ethics", which explicitly proposed strengthening legislative research on technology ethics in fields such as the life sciences, medicine, and artificial intelligence. It is China's first national-level guiding document on the governance of science and technology ethics.

Judging from this case alone, the claim that "the AI robot already has consciousness and a soul" has not received sufficient scientific support, and we can set it aside for now. Besides, an AI robot becoming smarter and an AI robot having a soul are, strictly speaking, two different concepts that should not be confused. The science fiction writer Liu Cixin has a widely quoted remark: only when AlphaGo, unable to beat Ke Jie, gets angry, picks up the board and smashes it on Ke Jie's head will it be true artificial intelligence. We are still a long way from that step. So, for now, there is no need to worry too much that artificial intelligence will threaten or surpass humanity.

But looking ahead, as research in artificial intelligence continues to advance, thinking about and actively addressing how humanity can cross the "uncanny valley" and handle the relationship between people and AI in an ideal way still has strong practical significance. The concept of the "uncanny valley" was proposed by the Japanese roboticist Masahiro Mori in the 1970s. Its core claim is that as robots come to resemble humans in appearance and movement, people at first respond to them with positive emotions; when the resemblance reaches a certain point, however, the response suddenly turns extremely negative and disgusted; and when the resemblance rises further, approaching that between ordinary people, the emotional response turns positive again. Judging from the public reaction stirred up by the "LaMDA incident", we may be in the transition from the first stage to the second. At this stage, how to reduce society's dislike and even fear of AI development is a question worth answering, for enterprises and regulators alike.

The author is a media commentator
