
AI at Google: how to "do no evil"

Source: 博客园 | 2018/6/9 12:02:04


Leifeng.com AI Technology Review: In March of this year, news leaked that Google was working with the U.S. Department of Defense on Project Maven. Since Project Maven's goal is to apply video analysis and target recognition to drone footage, the technology could be used in combat in the future.

As soon as the news broke, it set off a strong reaction both inside and outside Google, with many netizens and Google users questioning whether Google had lowered its moral standards. In April, more than 3,100 Google employees signed a joint letter to Google CEO Sundar Pichai, calling on Google to withdraw from Project Maven and demanding that the company "never develop war-related technologies"; in May, faced with top management's ambiguous attitude, more than a dozen Google employees went so far as to resign in protest.

Google Cloud CEO Diane Greene subsequently announced at an internal meeting with employees that the contract with the Department of Defense would not be renewed. Today, Sundar Pichai also published an article titled "AI at Google: Our Code of Conduct" on Google's blog. As an open letter, it clearly lays out Google's attitude toward the full range of relationships between AI and the company, including military uses.

AI at Google: Our Code of Conduct - Sundar Pichai

In essence, AI is computer programming that learns and adapts. It cannot solve every problem, but it has enormous potential to improve our lives. Google uses the power of AI to make its products more useful: email that is kept spam-free and made easier to compose, a digital assistant you can speak to in natural language, and photo apps that automatically surface memorable moments.

Beyond Google's own products, we are also using AI to help people solve more urgent problems. In projects involving Google employees, a group of high school students is designing AI-powered sensors to predict the risk of wildfires, farmers are using AI to track the health of their herds, and doctors are starting to use AI to help detect cancer and prevent blindness (Leifeng.com AI Technology Review note: long-term hyperglycemia caused by diabetes can lead to blindness, and early symptoms can be spotted in photographs of the eye). It is precisely because we can see these clear benefits that Google invests heavily in AI research and development and strives, through the tools and open-source code we develop, to put the power of AI within reach of a much broader population.

We also recognize that such a powerful technology raises equally weighty questions about how it should be used. The way we develop and use AI will have a profound impact on society for many years to come. As a leader in the AI field, we feel a deep responsibility to get this right. So today we are publicly announcing the following seven principles to guide our work going forward. These are not theoretical concepts but concrete standards: they will actively govern our research and product development and will shape our business decisions.

We understand that this field is fast-moving and still evolving, and we will approach our work with humility, a commitment to engagement both inside and outside the company, and a willingness to keep improving as we learn.

AI application design goals

We will evaluate AI applications against the following goals. We believe that AI should:

  1. Be beneficial to society

    As new technologies develop and spread, their impact on society as a whole grows with them. Advances in AI will bring transformative change to many fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider the potential directions and applications of AI, we will take a broad range of social and economic factors into account, and we will proceed where we believe the overall likely benefits substantially exceed the foreseeable risks and downsides.

    AI also enhances our ability to understand the meaning of content at scale. With the power of AI, we strive to make high-quality, accurate information readily available, while continuing to respect the cultural, social, and legal norms of each region. And we will continue to carefully evaluate when to make our technologies available to society on a non-commercial basis.

  2. Avoid creating or reinforcing unfair bias

    AI algorithms and datasets can reflect, reinforce, or reduce unfair bias. We recognize that distinguishing fair from unfair bias is not always simple, and that the distinction differs across cultures and societies. We will work to avoid unjust effects on people, particularly effects related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

  3. Be built and tested for safety

    We will continue to design strong safety and security practices and apply them to avoid unintended results that could harm people. We will design our AI systems to be appropriately cautious, and develop them in line with best practices in AI safety research. Where appropriate, we will also test AI technologies in constrained environments and monitor their behavior after deployment.

  4. Be accountable to people

    The AI systems we design should provide people with appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will also be subject to appropriate human direction and control.

  5. Incorporate privacy design principles

    We will follow our privacy principles in developing and using AI technologies. We will give users opportunities for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

  6. Uphold high standards of scientific excellence

    Technological innovation is rooted in the scientific method and in a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in fields such as biology, chemistry, medicine, and environmental science. We aspire to these high standards of scientific excellence as we develop and advance AI.

    We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge, including educational materials, best practices, and research results, so that more people can take part in developing useful AI applications.

  7. Be made available for uses that accord with these principles

    Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will carefully evaluate likely uses in light of the following factors:

  • Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to, or adaptable to, a harmful use.

  • Nature and uniqueness: whether we are making available a technology that is unique, or one that is more generally available.

  • Scale: whether the use of this technology will have significant impact.

  • Nature of Google's involvement: whether we are providing customers with general-purpose tools, integrating tools for them, or developing custom solutions.

AI applications we will not pursue

In addition to the goals above, Google will not design or deploy AI in the following application areas:

  • Technologies that cause, or are likely to cause, overall harm. Where there is a material risk of physical harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and we will incorporate appropriate safety constraints.

  • Weapons, or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

  • Technologies that gather or use information for surveillance in violation of internationally accepted norms.

  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

To be clear: although we will not develop AI for use in weapons, we will continue to work with governments and the military in many other areas. These collaborations include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. This work is important, and we will actively look for more ways to support the critical missions of these organizations and to keep both service members and civilians safe.

Building AI for the long term

While this is how we have chosen to approach AI, we understand that there is room for many different voices in this conversation. As AI technology advances, we will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we have learned about improving AI technologies and practices.

We believe these principles lay the right foundation for our company and for the future development of AI, and they are consistent with the values our founders expressed in their open letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term sacrifices. We said it then, and we believe it now. (End)

Socially beneficial AI: practice has already begun


According to reports, even before this article was published, and even as Google drew criticism over Project Maven, the company's openness on many fronts, with tools such as TensorFlow, its published research, collaborations with other organizations, free Google Cloud platform resources, internal machine learning courses, and more, had already made it the most open company in the field, and its employees believed they were doing work that benefits all of humanity rather than serving commercial interests alone. It was precisely this strong sense of justice, felt and shared in the working environment, that drove them to sign the joint letter and even to resign in protest. As for outside recognition, TensorFlow's dominant position in the field is the best evidence.

After the article was published, even though some users on social platforms still hoped Google would adopt clearer and stricter standards of conduct, Google Brain members such as Ian Goodfellow, David Ha, and François Chollet shared it with pride. After all, this is the first tech giant on the AI track to state a clear, socially responsible position.

Jeff Dean, head of Google AI, added:

"As AI is applied to more and more problems in the society, it is very important to think carefully about how we implement all these behavioral norms. This Google AI code of conduct shows our thinking on these issues. We have also announced our technical solutions for practical implementation of these specifications. We hope everyone in our community will learn and use it.Https://ai.google/education/responsible-ai-practices(Note: This includes specific design and development techniques for the aspect)

Professor Fei-Fei Li, Chief Scientist of Google Cloud and head of the Google AI China Center, also shared it with a comment:

"This is Google's moment of demonstrating the AI's first strategic radiance. This is an opportunity for us to express our values ​​and remind us of our responsibilities to develop for everyone, including ourselves, our community, and the entire community. The world has a positive influence on technology.There are not many technologies that have great potential to change the world like AI, but like all other powerful tools, AI needs some guidance to ensure that the changes it brings are positive. These codes of conduct are a reminder to us to remind us of what really matters; even if we have so much enthusiasm for the technology we have created, our greatest responsibility will always be with the people that these technologies come in contact with. But this is just The beginning of the journey, not the end, we have many major challenges and unresolved issues. To ensure that AI can truly become a human-centered technology, it requires the participation and efforts of all of us.”

Via blog.google/topics/ai/ai-principles, compiled by Leifeng.com AI Technology Review
