PyTorch 2.0 Boosts ML Inference on Arm Processors: AWS, Arm & Meta Collaboration Unveils Major Performance Gains

Introduction

The latest generation of CPUs with built-in specialized instructions has enabled significant performance improvements in machine learning (ML) inference. This advance is the result of a collaboration between AWS, Arm, and Meta to optimize PyTorch 2.0 on Arm-based processors.

Performance Improvements

With the optimization of PyTorch 2.0 for Arm-based processors, inference performance on AWS Graviton-based instances has increased markedly: up to 3.5 times for Resnet50 and up to 1.4 times for BERT, compared with the previous PyTorch release. In addition, running PyTorch inference on Torch Hub Resnet50 and multiple Hugging Face models on AWS Graviton3-based Amazon Elastic Compute Cloud (Amazon EC2) C7g instances has yielded cost savings of up to 50% along with improved latency.

Optimization Details

  1. GEMM kernels
    To enhance PyTorch 2.0 performance, support for Arm Compute Library (ACL) GEMM kernels is introduced via the oneDNN backend. The ACL kernels improve SIMD hardware utilization and reduce end-to-end inference latencies; the sketch after this list shows how to confirm they are being used.
  2. bfloat16 support
    bfloat16 support in Graviton3 enables efficient deployment of a wide range of models. Inference is up to two times faster than the existing fp32 model inference without bfloat16 fast-math support; the sketch after this list also shows how to enable it.
  3. Primitive caching
    Primitive caching is implemented for the conv, matmul, and inner product operators. It reduces redundant GEMM kernel initialization and tensor allocation overhead; the warm-up loop in the benchmark sketch later in this post illustrates the effect.
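
The minimal sketch below is an illustration, not an official recipe: DNNL_DEFAULT_FPMATH_MODE and DNNL_VERBOSE are standard oneDNN environment variables, the small convolution is just a convenient way to trigger a oneDNN primitive, and the whole thing assumes an aarch64 PyTorch 2.0 build with ACL enabled, such as the one in the AWS Deep Learning Containers.

```python
import os

# Assumption: these oneDNN knobs should be set before torch is imported.
# DNNL_DEFAULT_FPMATH_MODE=BF16 lets oneDNN use bfloat16 fast-math kernels
# for fp32 models on Graviton3; DNNL_VERBOSE=1 logs every primitive it
# executes so you can see which backend ran it.
os.environ["DNNL_DEFAULT_FPMATH_MODE"] = "BF16"
os.environ["DNNL_VERBOSE"] = "1"

import torch
import torch.nn as nn

# A single convolution is enough to trigger a oneDNN primitive.
conv = nn.Conv2d(3, 16, kernel_size=3).eval()
x = torch.randn(1, 3, 224, 224)

with torch.inference_mode():
    y = conv(x)

# On an ACL-enabled aarch64 build, the verbose log printed above should
# name ACL as the implementation (look for "acl" in the log lines); on
# other builds a different backend will appear instead.
print(y.shape)
```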

How to Take Advantage of the Optimizations

To fully leverage these optimizations, users can deploy AWS Deep Learning Containers (DLCs) on Amazon EC2 C7g instances or Amazon SageMaker. AWS DLCs are easy to use, as they include optimized builds of PyTorch along with other pre-configured deep learning frameworks.

To begin using AWS DLCs, follow these simple steps:

  1. Choose an appropriate EC2 C7g instance type or Amazon SageMaker instance according to your compute and memory requirements.
  2. Select the latest version of the PyTorch Deep Learning Container compatible with C7g instances.
  3. Launch the container and start using PyTorch 2.0 optimized for Arm-based processors; a quick way to sanity-check the setup is the latency sketch below.
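
Once the container is running, a quick latency check confirms the setup. The sketch below is illustrative only: it assumes torchvision is installed (the PyTorch DLCs include it) and uses randomly initialized weights, since only latency is measured. Its warm-up loop also shows why primitive caching matters, because the first iterations absorb the one-time kernel initialization costs.

```python
import time

import torch
from torchvision.models import resnet50  # assumption: torchvision is available

# Random weights are fine here: we measure latency, not accuracy.
model = resnet50(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

with torch.inference_mode():
    # Warm-up: the first runs pay one-time GEMM kernel initialization and
    # tensor allocation costs; primitive caching makes later runs cheaper.
    for _ in range(10):
        model(x)

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    elapsed = time.perf_counter() - start

print(f"average latency: {elapsed / runs * 1000:.2f} ms")
```

Running this with and without DNNL_DEFAULT_FPMATH_MODE=BF16 set is a simple way to see the bfloat16 fast-math speedup on a C7g instance.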

Key Points

  1. AWS, Arm, and Meta’s collaboration has optimized PyTorch 2.0 performance for Arm-based processors.
  2. PyTorch 2.0 inference on AWS Graviton-based instances is up to 3.5 times faster for Resnet50 and up to 1.4 times faster for BERT.
  3. PyTorch inference costs are reduced by up to 50% with AWS Graviton3-based Amazon EC2 C7g instances.
  4. Inference latency is also improved on these instances.
  5. The optimization process focuses on GEMM kernels, bfloat16 support, and primitive caching.
  6. Users can access these optimizations by employing AWS Deep Learning Containers (DLCs) on Amazon EC2 C7g instances or Amazon SageMaker.

To sum up, this collaboration between AWS, Arm, and Meta has delivered substantial performance gains for PyTorch 2.0 on Arm-based processors. With cost savings of up to 50% and improved latency, these optimizations bring real benefits to developers and end users alike.

