
Adding noise to an image can fool Google's top image recognition AI

via: 博客园 | 2017/4/30 15:30:27


Recently, a group of computer security researchers from the University of Washington's Network Security Lab (NSL) found that malicious attackers can deceive Google's Cloud Vision API, causing it to misclassify the images that API users submit.
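For context, here is a minimal sketch of how a client might submit an image to the Cloud Vision API for label detection. The endpoint and request shape follow Google's public REST interface, while the file name and API key placeholder are hypothetical:

```python
import base64
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder, not a real key
ENDPOINT = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

def classify_image(path):
    """Send an image to Cloud Vision and return its predicted labels."""
    with open(path, "rb") as f:
        content = base64.b64encode(f.read()).decode("ascii")
    body = {
        "requests": [{
            "image": {"content": content},
            "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
        }]
    }
    resp = requests.post(ENDPOINT, json=body)
    resp.raise_for_status()
    return resp.json()["responses"][0].get("labelAnnotations", [])

print(classify_image("photo.jpg"))  # "photo.jpg" is a placeholder file name
```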

In recent years, AI-based image classification systems have become increasingly popular, and this research targets exactly such systems. Many online services now use them to detect or block specific categories of pictures, such as violent or pornographic imagery, so an AI-based image classifier can prevent users from submitting and publishing prohibited pictures.

Although these classification systems use highly sophisticated machine learning algorithms, the researchers say they have found a very simple way to trick Google's Cloud Vision service.

Google's Cloud Vision API has a vulnerability

The attack technique they designed is actually very simple: adding a small amount of noise to a picture is enough to deceive Google's Cloud Vision API. Noise levels anywhere in the range of 10% to 30% still preserve the clarity of the picture for a human viewer, yet are enough to fool Google's image classification AI.
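As an illustration, here is a minimal sketch of adding impulse ("salt-and-pepper") noise at a chosen density, assuming NumPy and Pillow; the file names are placeholders, and the exact noise model used in the paper may differ:

```python
import numpy as np
from PIL import Image

def add_impulse_noise(path, density=0.2, out_path="noisy.png"):
    """Flip a `density` fraction of pixels to pure black or white
    (salt-and-pepper noise); density=0.2 corresponds to a 20% noise level."""
    img = np.array(Image.open(path).convert("RGB"))
    mask = np.random.rand(*img.shape[:2])     # one random value per pixel
    img[mask < density / 2] = 0               # "pepper": black pixels
    img[mask > 1 - density / 2] = 255         # "salt": white pixels
    Image.fromarray(img).save(out_path)
    return out_path

# 20% noise: still recognizable to a human, but enough to change the labels
add_impulse_noise("photo.jpg", density=0.2)
```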


Adding noise to a picture is itself very simple; the whole process requires no high-end technology, and ordinary picture editing software is all that is needed.

The researchers believe that cybercriminals could use this technique to spread pictures of violence, pornography, or terrorism. In addition, Google's own image search system also uses this API, which means that when users search Google for images, they may well get unexpected results.

The solution to this problem is simple

The researchers said that fixing this problem is just as simple as the attack itself, so Google's engineers have no reason to panic.

To prevent this attack, Google only needs to filter the noise out of a picture before running its image classification algorithm. The researchers found that, with the help of a noise filter, Google's Cloud Vision API classifies pictures correctly.
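The article does not name a specific filter, but the median filter is the standard remedy for impulse noise; here is a minimal sketch using Pillow (the helper names are assumptions, not Google's actual pipeline):

```python
from PIL import Image, ImageFilter

def denoise(path, out_path="denoised.png", size=3):
    """Apply a median filter, a standard remedy for salt-and-pepper
    noise, before handing the image to the classifier."""
    img = Image.open(path).convert("RGB")
    img.filter(ImageFilter.MedianFilter(size=size)).save(out_path)
    return out_path

# Classify the filtered image instead of the raw upload, e.g. with the
# hypothetical classify_image() helper sketched earlier:
# labels = classify_image(denoise("noisy.png"))
```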


Afterword

The researchers have described the complete technical details of this attack in their published paper, which interested readers can consult. It is worth noting that the same team has used a similar approach to deceive Google's Cloud Video Intelligence API: they inserted the same still picture into a video every two seconds, and Google's video classification AI ended up classifying the video according to that repeated picture rather than the actual content of the video itself.
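Here is a minimal sketch of that frame-injection idea, assuming OpenCV; the file names and the two-second period mirror the description above, while the rest is an illustrative assumption rather than the researchers' actual tool:

```python
import cv2

def inject_frame(video_path, image_path, out_path="injected.mp4", period_s=2.0):
    """Overwrite one frame every `period_s` seconds with a chosen still
    image, mimicking the video attack described above."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    inject = cv2.resize(cv2.imread(image_path), (w, h))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    step = max(1, round(fps * period_s))   # frames between injections
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(inject if i % step == 0 else frame)
        i += 1
    cap.release()
    writer.release()
    return out_path

inject_frame("clip.mp4", "photo.jpg")  # placeholder file names
```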

Reference source: BleepingComputer. Compiled by FreeBuf editor Alpha_h4ck; please credit FreeBuf.COM when reproducing.
