“We have never before seen such rapid adoption of a datacenter processor,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “Just 60 days after the T4’s launch, it’s now available in the cloud and is supported by a worldwide network of server makers. The T4 gives today’s public and private clouds the performance and efficiency needed for compute-intensive workloads at scale.”
The T4’s success can be attributed to a number of factors, including its compact, low-profile dimensions. The T4 is also fairly efficient; its 70-watt power draw is relatively low for a server GPU. Thanks to NVIDIA’s new Tensor cores, the T4 excels at deep learning and AI tasks. NVIDIA claimed that in AI inference workloads, a server equipped with just two T4 GPUs delivers performance equivalent to 54 CPU-only servers.
NVIDIA also reported that the T4 is multiple times faster than its older Pascal-based P4 server GPU. In FP16 workloads the T4 offers up to eight times the throughput of the P4, and that performance gain grows to a full 32-fold increase in INT4 tasks.
Companies transitioning their existing servers to T4 GPUs also stand to reduce their operating costs considerably, thanks to the GPU’s performance and high efficiency.
“NVIDIA’s T4 GPUs for Google Cloud offer a highly scalable, cost-effective, low-latency platform for our ML and visualization customers. Google Cloud’s network capabilities together with the T4 offering enable customers to innovate in new ways, speeding up applications while reducing costs,” said Damion Heredia, senior director of Product Management at Google Cloud.
Despite the reportedly fast adoption of the T4, its availability on Google Cloud Platform appears to be somewhat limited. Google has put up a sign-up sheet for those interested in trying out the processing power of T4 GPUs inside Google’s cloud servers. It’s unclear whether the limited availability is due to high demand for access to the servers or to a shortage of T4 GPUs.