
Amazon Enters the AI Chip Battle: Alexa's Ambition to Become the Brain of the Home

via: 博客园     time: 2018/2/15 9:33:19     reads: 213


Text / Xinzhiyuan (新智元)

Xinzhiyuan summary: On February 12, The Information reported that Amazon has begun designing and manufacturing AI chips to improve the Alexa voice assistant and to provide support for Echo devices. The move follows in the footsteps of Apple and Google and is seen as Amazon moving with the trend. In a booming AI chip market where a hundred schools of thought contend, who will be the winner?

On February 12, The Information reported that Amazon has already started to design and manufacture AI chips. A dedicated AI chip would also reduce Alexa's dependence on remote servers and provide support for Echo devices.

Amazon wants Alexa to be the brain of the home, controlling door locks, security cameras, ovens, and other appliances. If an Internet connection is not required, communication with these devices becomes more secure.

According to the report, Amazon hopes to remain competitive in smart-home hardware and consumer-facing artificial intelligence products, and through hiring and acquisitions it already employs nearly 450 chip specialists.

In 2015, Amazon bought Israeli chipmaker Annapurna Labs for $350 million, and at the end of 2017 it acquired Blink, a security camera maker. Blink's chips reduce production costs and extend battery life, starting with Amazon's cloud cameras and expanding into the Echo speaker family.


The move follows Apple and Google, both of which have developed and deployed custom AI hardware of various sizes. Because AI tasks are compute-intensive, custom designs are often required both for the device itself and for the data-center servers on which AI algorithms are typically trained in the cloud.

The news poses a risk to the businesses of companies such as Nvidia and Intel, both of which have shifted much of their chip expertise toward AI and other emerging fields and make money by designing and manufacturing chips for companies such as Apple and Amazon.

Amazon acquires Israeli chipmaker Annapurna Labs and camera maker Blink, pushing hard into AI chips

Amazon plans to develop its own AI chips so that Alexa-powered products in its expanding Echo line can do more computing on the device without communicating with the cloud, which will speed up responses.

Apple and Google have begun a similar shift. Apple already develops its own iPhone chips, such as the device's graphics processor and power-management unit, cutting off long-time suppliers in the process. For artificial intelligence, Apple designed a new "neural engine" that runs machine-learning algorithms on the device as part of its A11 Bionic chip, supporting Face ID and ARKit applications.

Google, for its part, has developed its own AI hardware for years, starting with a custom ASIC (the TPU) tailor-made for its TensorFlow AI training platform and upgraded to a second generation last year to accelerate machine-learning tasks. The TPU underpins Google DeepMind's AlphaGo system and helps Google stay competitive with Facebook, which also designs its own AI training server hardware.

In the past two years or so, Google has brought this expertise to consumer products, developing custom AI chips to support camera features, including the image processor in the Pixel 2.

A booming AI chip market where a hundred schools of thought contend: who will be the winner?

General-purpose chips are not well suited to the requirements of deep-learning algorithms: they are inefficient, power-hungry, and costly. Various neural-network algorithms need dedicated chips to run efficiently, and the wave of artificial intelligence has given rise to an explosion of specialized AI chips.

Both cloud computing and mobile computing call for chips designed specifically for AI algorithms, but they place different requirements on AI ASICs. A cloud AI chip must adapt to a variety of neural-network architectures and perform high-precision floating-point operations with peak performance of at least the TFLOPS level (10^12 floating-point operations per second), with no stringent power constraints; support for array structures can further improve performance.
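To make the TFLOPS figure concrete, here is a rough back-of-the-envelope sketch in Python. The layer sizes and throughput numbers are assumptions chosen purely for illustration, not figures from the report; it simply estimates how long one dense-layer matrix multiply would take at mobile-class versus cloud-class peak throughput.

```python
# Back-of-the-envelope illustration (hypothetical layer sizes) of why cloud
# training pushes chips toward TFLOPS-level peak performance.

def matmul_flops(m: int, k: int, n: int) -> float:
    """Approximate FLOPs for an (m x k) @ (k x n) matrix multiply."""
    return 2.0 * m * k * n  # one multiply + one add per output element per k

batch, d_in, d_out = 256, 4096, 4096      # assumed dense-layer shape
flops = matmul_flops(batch, d_in, d_out)  # ~8.6e9 FLOPs for this single layer

# Assumes (unrealistically) full utilization of the peak throughput.
for name, peak in [("100 GFLOPS mobile-class part", 1e11),
                   ("1 TFLOPS cloud accelerator", 1e12)]:
    print(f"{name}: {flops / peak * 1e3:.1f} ms for this layer")
```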

The design requirements for mobile AI chips are completely different. The fundamental requirement is to control power consumption, which calls for techniques such as network compression to improve computational energy efficiency while minimizing the loss of computational performance and accuracy.
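As an illustration of the kind of network compression referred to here, the sketch below shows post-training 8-bit weight quantization with NumPy. The layer shape and the symmetric quantization scheme are assumptions for demonstration only; production mobile toolchains are far more sophisticated.

```python
# Minimal sketch of one common compression technique: post-training 8-bit
# weight quantization. Tensors and shapes are made up for illustration.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0          # symmetric range [-127, 127]
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)   # hypothetical layer weights
q, scale = quantize_int8(w)

# 4x smaller storage, and int8 arithmetic is far cheaper on mobile silicon;
# the reconstruction error shows the (small) accuracy cost being traded away.
err = np.abs(w - dequantize(q, scale)).mean()
print(f"storage: {w.nbytes} B -> {q.nbytes} B, mean abs error: {err:.4f}")
```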

Manufacturers are pursuing AI chip R&D in both directions, and of course cloud and mobile cannot be completely separated. Cambricon, for example, previously developed deep-learning processors for large-scale neural networks and a variety of machine-learning algorithms, and the Cambricon-1A processor it launched in 2016 targets terminal devices such as smartphones, security surveillance, wearables, drones, and intelligent driving.

On the cloud side, in addition to NVIDIA mentioned above, Intel introduced FPGA-based dedicated deep-learning accelerator cards after acquiring Altera, acquired Nervana for ASIC chips tailored and optimized for deep learning, and acquired Movidius, whose high-performance vision processing chips help make up for Intel's lack of mobile AI chips. There is also IBM's brain-inspired chip TrueNorth, and of course the Google TPU mentioned at the beginning of this article. Recently, Baidu officially launched the XPU, a new generation of AI processing architecture based on FPGAs that aims to combine the versatility of GPUs with the efficiency and low power consumption of FPGAs; Baidu's PaddlePaddle framework has been heavily optimized and accelerated for it.

On the mobile side, Google, Apple, and Samsung are building phones with specialized AI chips, and Microsoft is designing such a chip specifically for augmented-reality headsets. Meanwhile, from technology giants like Google to traditional carmakers like Toyota, everyone working on autonomous vehicles needs AI chips that run well at the edge.

Apple, for example, excels at improving the underlying architecture: the newly released iPhone X uses a custom chip to handle artificial-intelligence workloads. This is the dual-core neural engine in the A11 Bionic chip, capable of up to 600 billion operations per second. Its most important job is to power Face ID, quickly recognizing faces to unlock the iPhone X or authorize purchases.

Customizing chips to meet the needs of AI software has become a new trend in the industry. Google has designed two generations of chips to handle AI computing workloads in its data centers, and Microsoft has developed an AI chip for a future version of its HoloLens mixed-reality headset. Putting a dedicated chip in the iPhone means less work for the main chip, which improves battery life; otherwise, running object recognition and video recording at the same time on the phone's camera could drain the battery quickly. In the near future, more mobile devices beyond the iPhone are likely to include processors for AI.

Another example is Huawei. At IFA 2017 in Germany, Huawei unveiled the Kirin 970, which it billed as the world's first artificial-intelligence mobile computing platform and the industry's first handset chip with a dedicated neural processing unit (NPU). By integrating the NPU with its HiAI mobile computing architecture, the chip achieves an AI performance density significantly better than that of a CPU or GPU: compared with four Cortex-A73 cores handling the same AI tasks, the new heterogeneous computing architecture delivers roughly 50x the energy efficiency and 25x the performance, with image-recognition speeds of up to about 2,000 images per minute. The Kirin 970's high-performance 8-core CPU is 20% more energy efficient than the previous generation, and it is the first to commercialize the 12-core Mali-G72 GPU, offering up to 20% better graphics performance and 50% better energy efficiency than its predecessor, allowing 3D games to run smoothly for longer.

In addition, several companies in China are developing AI chips. An AI industry report released earlier by Tencent pointed out that AI chips are the core of the industry and the link with the highest technical requirements and added value, and that their industrial value and strategic position far exceed application-layer innovation. On this front, the gap between China and the United States remains huge: by number of chip companies, China has 14 and the United States 33, so China has only about 42% as many.

Among domestic companies with notable AI chip R&D, besides Cambricon described above, there is Vimicro, which launched a system-on-chip for video compression and coding with embedded deep-learning capability; Horizon Robotics, which focuses on integrated hardware-and-software solutions; and Deep Vision Technologies, which created the Deep Processing Unit (DPU). Its goal is ASIC-level power consumption with better-than-GPU performance, and its first batch of products is based on FPGA platforms.

Gill Pratt, a program manager at DARPA, the research arm of the U.S. Department of Defense, said this shift toward specialized chips and new computer architectures could lead to a "Cambrian explosion" of artificial intelligence. In his view, spreading computation across a large number of tiny, low-power chips lets machines operate more like the human brain, using energy far more efficiently.
