
How does Google work with the military? Where are the boundaries of AI ethics?

via: CnBeta · 2018/6/10

Others questioned whether Google's use of machine learning technology here meets ethical standards. They believe the Pentagon will apply this technology to anti-personnel weapons, in turn causing harm that technology optimists would never wish to cause.

More than 3,100 Google employees then signed a petition to Sundar Pichai, Google's chief executive, in protest.


Google employees' joint letter to Sundar Pichai

The matter fermented further at the end of April. Some media outlets found that Google had deleted three mentions of its long-standing motto "Don't be evil" from the opening of its code of conduct. One sentence at the end of the guidelines had not yet been deleted: "Remember, don't be evil. If you see something that you think isn't right, speak up!"

On Friday, at a regular weekly "weather forecast" meeting, Google Cloud CEO Diane Greene announced that Google would end its cooperation with the Pentagon's Project Maven after the contract expires.

This was undoubtedly a major event. "News of the victory" spread wildly inside Google, and as articles rushed out, the matter seemed to have ended, for the moment, with Google "compromising with its employees and declining to renew with the Department of Defense."

But just yesterday, Google CEO Sundar Pichai published an article titled "AI at Google: our principles," listing seven guiding principles and pointing out that Google will not end its cooperation with the US military. Google's reversal is awkward; although the article clarifies "AI applications we will not pursue," technology doing evil is still, at bottom, a problem of human evil, and AI ethics once again pushes people to think deeply.

Where are the boundaries of AI ethics?

If Google has had an unquiet time of late, Amazon's Alexa has not had an easy life either.

Amazon's Echo device was accused of recording a private conversation without permission and sending the audio to a random person in the user's contact list. This is the latest scare since the earlier incident in which Alexa was caught laughing unprompted, as if "mocking humans."


Amazon's response to the Alexa incident

This is far from an isolated case. As early as 2016, Microsoft launched Tay, a chatbot styled as a 19-year-old girl, on Twitter. This Microsoft-developed artificial intelligence used natural language learning technology: by harvesting data from its interactions with users, it could process and imitate human conversation, chatting with jokes, memes, and emoji like a person. However, less than a day after going online, Tay had been "trained" into an extremist spouting calls for ethnic cleansing and racist abuse, and Microsoft had to take it offline on the grounds of a "system upgrade."
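To see how such "training" can go wrong, here is a minimal sketch in Python. It is an illustration of the data-poisoning mechanism, not Microsoft's actual system: a bot that naively learns replies from whatever users say to it can be steered by a coordinated group feeding it toxic text.

```python
import random
from collections import defaultdict

class NaiveChatBot:
    """A toy chatbot that learns replies verbatim from user interactions.

    Hypothetical sketch, not Tay's real architecture: it only shows why
    learning directly from unfiltered user input is easy to poison.
    """

    def __init__(self):
        # Maps a prompt to every reply users have "taught" the bot.
        self.learned_replies = defaultdict(list)

    def learn(self, prompt: str, reply: str) -> None:
        # No moderation step: whatever users feed in is stored as-is.
        self.learned_replies[prompt.lower()].append(reply)

    def respond(self, prompt: str) -> str:
        replies = self.learned_replies.get(prompt.lower())
        if not replies:
            return "Tell me more!"
        # The bot parrots a random learned reply, so whichever faction
        # submits the most "lessons" dominates its output.
        return random.choice(replies)

bot = NaiveChatBot()
bot.learn("what do you think of people?", "People are great!")
# A hostile crowd repeats a toxic "lesson" many times...
for _ in range(100):
    bot.learn("what do you think of people?", "<toxic slogan>")
# ...and the toxic reply now almost always wins.
print(bot.respond("what do you think of people?"))
```

With one benign reply against a hundred poisoned ones, the bot's behavior flips in under a day of "interaction," which is essentially what happened to Tay.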

Microsoft's chatbot Tay making extremist remarks

This is genuinely sobering. According to Ke Ming, an analyst at aixdlun, AI ethics will draw more and more attention as the ills of AI come to light. Where exactly is the boundary of AI ethics? First, several issues should be clarified.

1. Is the robot a civil subject?

With the rapid development of artificial intelligence, robots possess ever more powerful intelligence, and the gap between machines and humans is gradually narrowing. The robots of the future may have brains that rival the human brain in their number of neurons. American futurists even predict that by the middle of this century, non-biological intelligence will be a billion times more powerful than all human intelligence today.

Citizenship no longer seems out of reach for robots. In October last year, Sophia became the world's first robot to be granted citizenship, which meant that a human creation possessed an identity equal to a human being's, along with the rights, duties, and social status that come with that identity.

Qualification as a legal civil subject remains the dividing line of AI ethics. Over the past few years, philosophers, scientists, and lawmakers in the United States, Britain, and elsewhere have debated the question heatedly. In 2016, the European Parliament's Committee on Legal Affairs submitted a motion asking that the status of the most advanced autonomous robots be defined as that of "electronic persons" and, beyond granting them "specific rights and obligations," recommended that intelligent autonomous robots be registered so as to pay taxes, contributions, and pension funds. If this legal motion passed, it would undoubtedly shake up the traditional system of civil subjects.

Strictly speaking, a robot is not a natural person possessing life, yet it is also distinct from a legal person, which has its own independent will and acts as a collective of natural persons. It is indeed premature to try to hold AI itself criminally responsible for a robot's misconduct.

2. Will algorithmic discrimination undermine fairness?

One accusation against artificial intelligence is that its misjudgments often amount to discrimination. Google, which uses the most advanced image recognition technology, has been caught up in accusations of "racial discrimination": its service labeled black people as "gorillas," and a search for "unprofessional hairstyles" returned mostly pictures of black women's braided hair. Latanya Sweeney, a professor at Harvard University's Data Privacy Lab, found that searching Google for names with "black-sounding" characteristics was more likely to bring up ads linked to criminal records, results served by Google's advertising tool AdSense.

And the danger is not just the misjudgment itself; after all, labeling a photo of a black person as a gorilla is offensive enough on its own. Artificial intelligence decision-making is moving into more fields that bear directly on individual fates, materially affecting employment, welfare, and personal credit. We can hardly turn a blind eye to "unfairness" in these areas.

Similarly, as AI moves into recruitment, finance, intelligent search, and other fields, can we really do nothing about the "algorithm machines" we ourselves train? In a talent-hungry modern society, whether an algorithm can truly help a company pick the one-in-a-thousand candidate remains to be seen.

So where does the discrimination come from? The ulterior motives of the people who label the data, deviation in the data the model fits, or a bug in the program's design? And can results computed by a machine serve as grounds for discrimination, inequality, and cruelty? These questions are worth discussing.
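A small sketch makes the "deviation of the data fitting" concrete. The data and model below are hypothetical and deliberately crude, not any real hiring system: the learning code contains no prejudiced line, yet the model faithfully reproduces the bias baked into its historical labels.

```python
from collections import defaultdict

# Hypothetical historical hiring decisions: (group, years_experience, hired).
# The labels encode past human bias: group "B" was hired less often
# even at comparable experience levels.
history = [
    ("A", 2, 1), ("A", 3, 1), ("A", 1, 0), ("A", 4, 1),
    ("B", 2, 0), ("B", 3, 0), ("B", 1, 0), ("B", 4, 1),
]

# "Training": estimate the hire rate per group, the crudest possible model.
totals, hires = defaultdict(int), defaultdict(int)
for group, _, hired in history:
    totals[group] += 1
    hires[group] += hired

def predict_hire_probability(group: str) -> float:
    # No prejudice is written into this function; it simply reproduces
    # whatever regularities, fair or not, the training data contains.
    return hires[group] / totals[group]

for g in ("A", "B"):
    print(f"group {g}: predicted hire probability {predict_hire_probability(g):.2f}")
# Prints 0.75 for group A and 0.25 for group B: the bias of the labelers
# reappears as the "objective" output of the fitted model.
```

The same mechanism scales up: a sophisticated model trained on biased labels is a sophisticated reproducer of that bias, which is why "the machine computed it" cannot by itself justify a decision.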

3. Data Protection is the Bottom Line of AI Ethics

Cyberspace is a virtual existence that is nonetheless real, an independent world without physical space. Here, humans achieve a "digital existence" separate from the flesh and acquire a "digital personality": the "personal image established in cyberspace through the collection and processing of personal information," that is, a personality constituted by digital information.

In the AI environment, supported by the Internet and big data, systems hold enormous amounts of information about users' habits and data. If the "accumulation of historical data" is the basis on which machines do evil, then the driving force of capital is the deeper cause.

In the Facebook data leak, a company called Cambridge Analytica used artificial intelligence to target paid political ads at potential voters according to their "psychological characteristics"; which ads a person saw depended on their political orientation, emotional profile, and susceptibility. Much fake news could thus spread quickly, gain exposure, and sway people's value judgments. Christopher Wylie, the scheme's technical mastermind, recently revealed to the media where this AI's "raw material" came from: the data of more than 50 million users, deliberately harvested in the name of academic research.

Even setting aside outright data leaks, the so-called "smart mining" of user data also tends to skirt the edge of "compliance" while crossing the line of "fairness." As for the boundary of AI ethics, information security has become the most basic bottom line for every "information person" in the Internet age.

Reflection

In a video about AI ethics that recently went viral, the artist Alexander Reben took no action himself, but gave an order through a voice assistant: "OK Google, shoot."

In less than a second, Google Assistant pulled the trigger of a pistol and knocked down a red apple. Immediately, a buzzer gave out a harsh hum.

The buzz resounded in everyone's ears.

Who shot the apple? The AI, or the human?

In the video, Reben tells the AI to shoot. Engadget wrote in its report that if AI is smart enough to anticipate our needs, it may someday take the initiative to get rid of the people who make us unhappy. Reben said that discussing such a device is more important than the device's existence.

Artificial intelligence is not a predictable, perfectly rational machine. Its ethical deficiencies come from its algorithms and from people's goals and judgments. At least for now, machines remain a response to the real human world, not guides or pioneers of the world as it "ought to be."

Clearly, only by holding the bottom line of AI ethics can humanity avoid the day of "machine tyranny."

Appendix: Google's "Seven Principles"

1. Be socially beneficial

2. Avoid creating or reinforcing unfair bias

3. Be built and tested for safety

4. Be accountable to people

5. Incorporate privacy design principles

6. Uphold high standards of scientific excellence

7. Be made available for uses that accord with these principles
