Google Cloud N2 VM Instances Executed 2.82x the Wide & Deep Inference Work of N1 VM Instances and Delivered 2.75x the Performance per Dollar


  • Complete 2.82x the Wide & Deep inference work of N1 instances.

  • Reap 2.75x the inference performance per dollar of N1 instances.

Get a Better Return on Your Google Cloud Investment by Selecting New N2 VM Instances Featuring 2nd Gen Intel® Xeon® Scalable Processors

As the amount of data online explodes, companies increasingly rely on inference tools that help users cut through this abundance to find meaningful relationships in the data. One such tool is the Wide & Deep Learning Recommender System. It uses a deep learning topology that brings together wide linear models and deep neural networks.
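To make the topology concrete, here is a minimal, purely illustrative TensorFlow/Keras sketch of a Wide & Deep model. The feature names, input sizes, and layer widths are assumptions for illustration only; they are not taken from the benchmark configuration used in these tests.

```python
# A minimal sketch of a Wide & Deep model in TensorFlow/Keras.
# Feature names and sizes are illustrative assumptions, not benchmark settings.
import tensorflow as tf

# "Wide" path: sparse, memorization-oriented features fed to a linear model.
wide_input = tf.keras.Input(shape=(1000,), name="wide_features")
wide_output = tf.keras.layers.Dense(1)(wide_input)

# "Deep" path: dense, generalization-oriented features fed through a feed-forward network.
deep_input = tf.keras.Input(shape=(64,), name="deep_features")
x = tf.keras.layers.Dense(256, activation="relu")(deep_input)
x = tf.keras.layers.Dense(128, activation="relu")(x)
deep_output = tf.keras.layers.Dense(1)(x)

# Sum the two paths and predict a single interaction probability.
combined = tf.keras.layers.add([wide_output, deep_output])
prediction = tf.keras.layers.Activation("sigmoid")(combined)

model = tf.keras.Model(inputs=[wide_input, deep_input], outputs=prediction)
model.compile(optimizer="adam", loss="binary_crossentropy")
```

At inference time, the model scores candidate items from both paths at once, which is why recommender workloads like this lean heavily on CPU throughput.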

Like other deep learning workloads, recommender systems are, by nature, compute-intensive and take advantage of powerful processors. To carry out your Wide & Deep workloads quickly and deliver recommendations sooner, select a new Google Cloud N2 instance type enabled by 2nd Gen Intel® Xeon® Scalable processors.

In TensorFlow Wide & Deep tests comparing 32-vCPU Google Cloud VM instances, new N2 VM instances enabled by 2nd Gen Intel Xeon Scalable processors completed 2.82x the work of older N1 VM instances. With a cost less than three percent higher than that of the older instances, the new instances achieved 2.75x the performance per dollar.

For your deep learning recommender system workloads, choose a new N2 instance enabled by 2nd Gen Intel Xeon Scalable processors.

Figure 1. TensorFlow test results comparing the Wide & Deep performance of the 32-vCPU N2 standard VM instance type to the 32-vCPU N1 standard VM instance type.

You’ve decided to run your Wide & Deep Learning Recommender System workloads on Google Cloud N-series instances. Did you know that by selecting new N2 instances enabled by 2nd Gen Intel® Xeon® Scalable processors, you could cut the number of instances you need in half?

As Figure 1 shows, in TensorFlow tests comparing the Wide & Deep performance of VM instances with 32 vCPUs, Google Cloud N2 VM instances enabled by 2nd Gen Intel Xeon Scalable processors completed 2.82x the work of N1 instances using older processors. This means a single N2 VM instance could perform more work than two N1 VM instances.

Another way to understand the benefits of choosing new Google Cloud N2 VM instances enabled by Intel Xeon Scalable processors is to consider pricing. Despite their much greater performance, these new instances cost only 2.2 percent more than their older counterparts. As Figure 2 shows, when we combine pricing and performance, the new N2 VM instances deliver 2.75x the work per dollar of the older N1 VM instances.
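As a quick sanity check of that arithmetic, the short Python snippet below divides relative throughput by relative price, using only the figures cited in this report (with the N1 instance normalized to 1.0), and lands at roughly the 2.75x figure.

```python
# Back-of-the-envelope check of the performance-per-dollar claim,
# using the relative numbers cited in this report (N1 normalized to 1.0).
n1_throughput, n1_price = 1.0, 1.0
n2_throughput, n2_price = 2.82, 1.022   # 2.82x the work at 2.2% higher cost

n1_perf_per_dollar = n1_throughput / n1_price
n2_perf_per_dollar = n2_throughput / n2_price

print(n2_perf_per_dollar / n1_perf_per_dollar)  # ~2.76, consistent with the reported 2.75x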

Figure 2. Relative Wide & Deep inference performance per dollar of the 32-vCPU N2 standard VM instance type and the 32-vCPU N1 standard VM instance type.

If you chose the new N2 VM instances enabled by Intel Xeon Scalable processors, you would get greater performance per dollar and could save money, because a single new instance could replace two or three older N1 VM instances to meet a given performance threshold.

Learn More

To begin running your deep learning applications on Google Cloud Platform N2 VM instances with 2nd Gen Intel Xeon Scalable processors, visit http://intel.com/googlecloud.