Google Cloud has announced that Google Compute Engine is expanding its computing power with NVIDIA's latest Pascal-architecture GPUs. The NVIDIA Tesla P100 GPU option is now available in beta, and the NVIDIA K80 GPU option is generally available to the public.
Cloud GPUs can accelerate a wide range of workloads, including machine learning training and inference, geophysical data processing, simulation, seismic analysis, molecular modeling, genomics, and many other high-performance computing use cases.
The Tesla P100 is based on the Pascal GPU architecture, allowing users to increase throughput with fewer instances while reducing costs. Compared with the K80, the P100 can speed up workloads by as much as 10x.
Compared with traditional solutions, cloud GPUs offer greater flexibility, faster performance, and lower cost:
Flexibility: Google's custom VM shapes combined with cloud GPUs provide maximum flexibility. Users can customize the CPU, memory, disk, and GPU configuration to fit their needs.
Faster performance: Cloud GPUs are attached in passthrough mode to deliver bare-metal performance. Google Cloud allows up to four P100 or eight K80 GPUs per VM. Users who want higher disk performance can attach up to 3 TB of local SSD to any GPU-enabled VM.
Low cost: With cloud GPUs, usage is billed by the minute and sustained use discounts apply, so users pay only for what they actually use.
Cloud integration: Users can take advantage of cloud GPUs at every level of the stack. For infrastructure, Compute Engine and Container Engine let users run GPU workloads in VMs or containers. For machine learning projects, Cloud Machine Learning can be configured to use GPUs to cut the time needed to train TensorFlow models at scale.
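As a concrete illustration of the custom VM shapes, GPU attachment, and local SSD described above, the sketch below shows how such an instance might be created with the gcloud CLI. The instance name, zone, and machine shape are hypothetical, and depending on the gcloud release at the time, the command may need to run under the beta track (`gcloud beta compute ...`):

```shell
# Sketch: create a custom-shaped VM with four Tesla P100 GPUs and a
# local SSD. Name, zone, CPU/memory shape, and image are illustrative.
gcloud compute instances create gpu-workstation \
    --zone us-west1-b \
    --custom-cpu 16 \
    --custom-memory 60GB \
    --accelerator type=nvidia-tesla-p100,count=4 \
    --local-ssd interface=nvme \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud \
    --maintenance-policy TERMINATE \
    --restart-on-failure
```

Note that GPU instances must set `--maintenance-policy TERMINATE`, because VMs with attached GPUs cannot live-migrate during host maintenance.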
Currently, the P100 and K80 GPUs are available in four regions worldwide: the western United States (Oregon), the eastern United States (South Carolina), western Europe (Belgium), and eastern Asia (Taiwan). All GPUs are eligible for sustained use discounts, which lower the cost of running them.
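To make the sustained use discount concrete, here is a small sketch in Python. The tier schedule (each successive quarter of the month billed at 100%, 80%, 60%, and 40% of the base rate, for an effective 30% discount over a full month) follows Compute Engine's published sustained use model; the $1.00 hourly rate is a placeholder, not an actual GPU price, and billing granularity is simplified to hours:

```python
def sustained_use_cost(base_hourly_rate, hours_used, hours_in_month=730.0):
    """Estimate monthly cost under GCE-style sustained use discounts.

    Each successive quarter of the month is billed at a lower fraction
    of the base rate. (Placeholder model: real billing is per minute.)
    """
    tier_rates = [1.0, 0.8, 0.6, 0.4]  # fraction of base rate per quarter
    quarter = hours_in_month / 4.0
    cost, remaining = 0.0, hours_used
    for rate in tier_rates:
        billable = min(remaining, quarter)
        cost += billable * base_hourly_rate * rate
        remaining -= billable
        if remaining <= 0:
            break
    return cost

# At a placeholder rate of $1.00/hour, a full 730-hour month costs
# about $511 instead of $730 -- a 30% effective discount.
print(sustained_use_cost(1.00, 730))
```

Running a workload for only part of the month earns a smaller effective discount, since only the cheaper later tiers are skipped.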
Together, the two GPU options give teams the flexibility to choose the right hardware for compute-intensive tasks, making it easy to optimize workloads while balancing speed and price.