
Tsinghua University Thinker Team Releases Two Extremely Low Power AI Chips in VLSI 2018

via: 雷锋网     time: 2018/6/27 13:01:39     read: 365

The Tsinghua University Thinker team presented two papers on ultra-low-power AI chips (Thinker-II and Thinker-S) at the conference, in the "Machine Learning Processors" session (C4) and the "Robots and Machine Learning Applications" session (C13). In addition, the Thinker-S chip was invited to give a live demonstration in the conference's Demo Session. The authors include: Yin Shouyi, Ouyang Peng, Yang Jianxun, Zheng Shixuan, Lu Tianyi, Song Dandan, Li Xiudong, Liu Leibo, and Wei Shaojun.

  1. Shouyi Yin, Peng Ouyang, Jianxun Yang, Tianyi Lu, Xiudong Li, Leibo Liu, Shaojun Wei, An ultra-high energy-efficient reconfigurable processor for deep neural networks with binary/ternary weights in 28nm CMOS, Symposia on VLSI Technology and Circuits, Honolulu, USA, 2018.

Live presentation of the Thinker-II chip

  2. Shouyi Yin, Peng Ouyang, Shixuan Zheng, Dandan Song, Xiudong Li, Leibo Liu, Shaojun Wei, A 141 uW, 2.46 pJ/Neuron Binarized Convolutional Neural Network based Self-learning Speech Recognition processor in 28nm CMOS, Symposia on VLSI Technology and Circuits, Honolulu, USA, 2018.

Live presentation of the Thinker-S chip

I. Research Background

In recent years, breakthroughs in deep learning have driven advances in areas such as machine vision, speech recognition, and natural language processing, generating great enthusiasm and research interest in both academia and industry. However, the huge storage and computation requirements of deep neural networks make power consumption a major obstacle to "Deploy AI Everywhere", constraining the widespread use of artificial intelligence algorithms in mobile devices, wearable devices, and IoT devices.

II. Architectural Innovation

To overcome these bottlenecks, the Thinker team at Tsinghua University conducted a systematic study of low bit-width quantization methods, computing architectures, and circuit implementations for neural networks. The team proposed a reconfigurable architecture for energy-efficient computation of low bit-width networks, and designed a general-purpose neural network computing chip, Thinker-II, and a speech recognition chip, Thinker-S. Running at 200 MHz, the Thinker-II chip consumes only 10 milliwatts; the Thinker-S chip's lowest power consumption is 141 microwatts, and its peak energy efficiency reaches 90 TOPS/W. Both chips are expected to be widely used in battery-powered devices and self-powered IoT devices.
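
As a minimal illustration of the low bit-width quantization such chips target, the Python sketch below maps full-precision weights to binary ({-1,+1}) or ternary ({-1,0,+1}) values. The function name and the thresholding heuristic are illustrative assumptions, not the method used in the papers.

import numpy as np

def quantize_weights(w, mode="ternary", threshold_ratio=0.7):
    # Binary: every weight becomes +1 or -1 (its sign).
    # Ternary: weights close to zero become 0; the rest become +1 or -1.
    # The 0.7 * mean(|w|) threshold is a common heuristic from the
    # ternary-weight-network literature, assumed here for illustration.
    if mode == "binary":
        return np.where(w >= 0, 1, -1).astype(np.int8)
    t = threshold_ratio * np.abs(w).mean()
    q = np.zeros(w.shape, dtype=np.int8)
    q[w > t] = 1
    q[w < -t] = -1
    return q

# Example: quantize a random 3x3 convolution kernel to ternary weights
kernel = np.random.randn(3, 3).astype(np.float32)
print(quantize_weights(kernel, mode="ternary"))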

The Thinker-II chip incorporates two optimized computation methods and hardware architectures for binary/ternary convolution, which greatly reduce algorithmic complexity and effectively eliminate redundant computation. In addition, to address the load imbalance caused by sparsity, a hierarchical load-balancing scheduling mechanism is designed; through two-level task scheduling across software and hardware, resource utilization is effectively improved. The Thinker-II chip is fabricated in a 28nm process and supports general-purpose neural network computation through architecture- and circuit-level reconfiguration.
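
As a generic illustration of why binary/ternary convolution reduces complexity (this is not the Thinker-II datapath), the sketch below shows that a dot product of two sign vectors reduces to an XNOR plus a popcount, with no multiplications at all.

def binary_dot(x_bits, w_bits, n):
    # Dot product of two length-n sign vectors ({-1,+1}) packed as integers,
    # with -1 encoded as bit 0 and +1 as bit 1.
    # Every elementwise product is +1 (signs agree) or -1 (signs differ),
    # so the sum equals matches - mismatches = 2*popcount(XNOR(x, w)) - n.
    xnor = ~(x_bits ^ w_bits) & ((1 << n) - 1)  # 1 wherever the signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n

# Example: two 8-element sign vectors (most significant bit first)
x = 0b10110010   # +1 -1 +1 +1 -1 -1 +1 -1
w = 0b10010110   # +1 -1 -1 +1 -1 +1 +1 -1
print(binary_dot(x, w, 8))   # prints 4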

Thinker-II overall architecture

The Thinker-S chip implements a speech recognition framework based on a binarized convolutional neural network with user adaptation. At the same time, by exploiting the characteristics of speech signal processing, the design introduces time-domain data reuse, approximate computation, and weight regularization techniques that substantially optimize neural network inference. The Thinker-S chip is fabricated in a 28nm process, and each neuron consumes only 2.46 picojoules of energy per inference.
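
As a rough consistency check of these figures, the short calculation below divides the reported 141 microwatt operating power by the 2.46 pJ/neuron energy figure. Treating the two numbers as applying to the same operating point is an assumption; the paper may report them under different settings.

power_w = 141e-6                # 141 microwatts (reported lowest power)
energy_per_neuron_j = 2.46e-12  # 2.46 picojoules per neuron per inference

# Neuron evaluations sustainable per second at this power budget
neurons_per_second = power_w / energy_per_neuron_j
print(f"~{neurons_per_second / 1e6:.0f} million neuron evaluations per second")
# -> roughly 57 million neuron evaluations per second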

Thinker-S chip overall architecture

III. Results Demonstration

Thinker-II demo system

Thinker-S Live Demo at VLSI 2018 Conference

IV. Summary

The Thinker team of Tsinghua University has designed the Thinker series of artificial intelligence chips in recent years. The relevant results have been published in top academic conferences and journals such as the VLSI Symposia, ISCA, and IEEE JSSC, attracting extensive attention from academia and industry. These results, which target the requirements of artificial intelligence in mobile devices, embedded devices, and the Internet of Things, are expected to be widely used in battery-powered and self-powered smart devices, furthering the goal of "Deploy AI Everywhere".
