
Amazon Tesla V100 NVIDIA deployment card: eight parallel computing

via: 博客园     time: 2017/11/2 9:05:30     reads: 910

With the GTX 1070 Ti now on stage, NVIDIA's 16nm Pascal family has completed its historic mission, and next up is 12nm Volta. Volta gaming cards will have to wait until next spring, but in high-performance computing the new architecture's Tesla V100 has already arrived and is gradually gaining ground.

Prior to this, Google deployed the Pascal-architecture Tesla P100 compute card; now Amazon has embraced the new Tesla V100 for its own AWS cloud services.


The Tesla V100 packs 5120 CUDA cores and 640 Tensor cores on an 815 mm² die integrating 21 billion transistors. Peak throughput is 30 TFLOPS at half precision, 15 TFLOPS at single precision, and 7.5 TFLOPS at double precision, while the Tensor cores deliver 120 TFLOPS for deep learning. The chip is paired with 16GB of HBM2 high-bandwidth memory.
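The throughput figures above follow the usual 2:1 precision steps, and the Tensor cores work out to 8x the standard single-precision rate; a quick sketch of the arithmetic, using only the numbers quoted in the article:

```python
# Per-GPU Tesla V100 peak throughput figures quoted above (TFLOPS).
fp16 = 30.0    # half precision
fp32 = 15.0    # single precision
fp64 = 7.5     # double precision
tensor = 120.0 # Tensor-core deep learning rate

# Half precision is 2x single precision, which is 2x double precision.
assert fp16 == 2 * fp32 == 4 * fp64

# Tensor cores deliver 8x the standard FP32 rate.
print(tensor / fp32)  # 8.0
```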

By contrast, the Tesla P100 has 3584 stream processors, single-precision performance just short of 10 TFLOPS, and no Tensor cores specialized for neural network training and inference.

Amazon deploys the Tesla V100 in three configurations: one, four, or eight GPUs (the latter interconnected over the NVLink bus), paired with 64GB, 256GB, and 512GB of system memory respectively.
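Scaling the per-GPU figure from above across the three configurations gives the aggregate peak throughput of each tier; a minimal sketch (the GPU-count-to-memory mapping is taken directly from the article, the aggregate FLOPS are simple multiplication):

```python
# Per-GPU peak single-precision rate quoted in the article (TFLOPS).
FP32_TFLOPS_PER_GPU = 15.0

# The three AWS configurations: GPU count -> system memory (GB).
configs = {1: 64, 4: 256, 8: 512}

for gpus, mem_gb in configs.items():
    total_tflops = gpus * FP32_TFLOPS_PER_GPU
    print(f"{gpus} x V100: {total_tflops:.1f} TFLOPS FP32, {mem_gb} GB system memory")
```

The eight-GPU tier thus offers 120 TFLOPS of aggregate single-precision compute, matching the per-GPU Tensor-core figure by coincidence of the numbers.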

