
Microsoft Unveils New AI Compilers: The Heavy Metal Quartet

The in-depth research behind Microsoft's all-new compilers focuses on tackling the intricacies of deep neural networks.


Microsoft Research has introduced a groundbreaking suite of AI compilers known as the “Heavy Metal Quartet.” The group consists of four advanced compilers: Rammer, Roller, Welder, and Cocktailer. Each has been presented in an individual paper and in a wider collective project from Microsoft. The primary objective behind these compilers is to enhance the efficiency and speed of deep learning models.

An AI compiler is a tool that converts a machine learning model into more efficient, optimized code that can run on different hardware platforms. Along the way, a compiler can apply various transformations to the model, such as pruning, quantization, fusion, and parallelization. An AI compiler can also produce code for different frameworks.
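
To make one of those transformations concrete, here is a toy, framework-agnostic sketch of post-training quantization, which maps float32 weights to int8 values plus a scale factor. The function names are illustrative and are not part of any of the four compilers:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: float32 -> int8 plus a scale."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(np.max(np.abs(w - w_hat)))  # reconstruction error is tiny here
```

The int8 tensor is a quarter the size of the float32 original, which is the kind of trade-off an AI compiler weighs automatically.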

Microsoft Research has unveiled the “Heavy Metal Quartet” as a response to the challenges faced by AI developers and researchers. The extensive research behind these compilers is aimed at addressing the complexities of deep neural networks. A spokesperson from Microsoft Research emphasized their commitment to tackling these challenges.

A Closer Look at the Compilers:

  1. Rammer: Holistic Deep Learning Compiler Optimizations

    Rammer is engineered to optimize the execution of deep neural network (DNN) workloads on massively parallel accelerators. Unlike traditional approaches that schedule DNN operators individually, Rammer adopts a holistic strategy: by co-scheduling inter- and intra-operator tasks, it maximizes hardware utilization. Research shows that Rammer significantly outperforms other leading compilers, such as TensorFlow XLA and TVM.
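
Rammer's actual abstractions are not reproduced here, but the core idea of inter-operator co-scheduling can be sketched with a toy scheduler that groups independent operators into "waves" that could share the accelerator concurrently (the graph and operator names are hypothetical):

```python
# Toy DNN dataflow graph: operator -> list of operators it depends on.
deps = {
    "conv1": [],
    "conv2": [],          # independent of conv1, so it can run alongside it
    "add":   ["conv1", "conv2"],
    "relu":  ["add"],
}

def schedule_waves(deps):
    """Group operators into 'waves': ops in the same wave have no
    mutual dependencies and could occupy the accelerator together."""
    remaining = dict(deps)
    done, waves = set(), []
    while remaining:
        wave = sorted(op for op, d in remaining.items()
                      if all(x in done for x in d))
        waves.append(wave)
        for op in wave:
            done.add(op)
            del remaining[op]
    return waves

print(schedule_waves(deps))  # [['conv1', 'conv2'], ['add'], ['relu']]
```

Scheduling one operator at a time would leave the accelerator idle while conv1 runs; packing conv1 and conv2 into the same wave is the intuition behind Rammer's holistic approach.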

  2. Roller: Efficient Tensor Compilation

    Addressing the challenge of generating efficient kernels for DNN operators, Roller introduces a novel approach. By utilizing a tile abstraction called rTile, Roller achieves rapid kernel generation. This construction-based technique ensures that efficient kernels are produced in seconds, resulting in performance levels comparable to existing solutions on popular accelerators.
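
Roller's rTile abstraction is not shown here; as a loose illustration of the construction-based idea of picking a hardware-aligned tile shape directly rather than searching a huge space, consider this hypothetical sketch:

```python
def pick_tile(dim, candidates=(128, 64, 32, 16, 8)):
    """Construction-style choice: take the largest hardware-aligned
    tile that evenly divides the dimension, instead of searching all
    possible shapes. The candidate list is an illustrative assumption."""
    for t in candidates:
        if dim % t == 0:
            return t
    return 1

# Tile a 512x96 matmul output into hardware-aligned blocks.
M, N = 512, 96
print(pick_tile(M), pick_tile(N))  # 128 32
```

Because the choice is made by construction rather than by auto-tuning search, kernel generation completes in seconds rather than hours, which is the trade-off Roller targets.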

  3. Welder: Memory Access Optimization

    With the rise of memory-intensive deep neural networks, Welder focuses on optimizing memory access for improved execution efficiency. Through the introduction of the tile-graph abstraction, Welder facilitates meticulous data management. This innovative approach not only provides various optimization patterns but also surpasses alternative solutions.
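
Welder's tile-graph abstraction is not reproduced here, but the payoff of tile-level fusion, fewer round trips to memory, can be modeled with a toy traffic counter (the functions and the traffic model are illustrative assumptions, not Welder's API):

```python
import numpy as np

x = np.arange(1 << 10, dtype=np.float32)

def unfused(x):
    """Each op reads and writes a full tensor: 2 reads + 2 writes."""
    tmp = x + 1.0          # intermediate tensor hits memory
    out = tmp * 2.0
    traffic = 4 * x.size   # modeled element transfers
    return out, traffic

def fused_tiles(x, tile=256):
    """Tile-at-a-time fusion: each tile is read once, both ops run while
    it is 'on chip', and only the result is written: 1 read + 1 write."""
    out = np.empty_like(x)
    for i in range(0, x.size, tile):
        t = x[i:i + tile]
        out[i:i + tile] = (t + 1.0) * 2.0
    traffic = 2 * x.size
    return out, traffic

a, ta = unfused(x)
b, tb = fused_tiles(x)
print(ta, tb)  # fused version halves the modeled memory traffic
```

For memory-intensive DNNs, where execution time is dominated by data movement rather than arithmetic, this halving of traffic is exactly the kind of gain careful tile-level data management is after.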

  4. Cocktailer: Control Flow in Deep Learning

    As the complexity of DNNs grows, control flow logic becomes more intricate. Cocktailer addresses this by co-optimizing control and data flow execution on hardware accelerators. The introduction of the uTask abstraction unifies DNN model representation, enabling Cocktailer to reschedule control flow to accelerators. This innovation optimizes across control flow boundaries, reducing synchronization overhead.
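
Cocktailer's uTask abstraction is not shown here, but the benefit of moving control flow onto the accelerator can be caricatured by counting host-device synchronizations in a toy loop (the "device" below is ordinary Python, purely for illustration):

```python
def run_with_host_control(step, x, n):
    """Control flow stays on the host: imagine each call is a kernel
    launch followed by a wait, so one synchronization per iteration."""
    syncs = 0
    for _ in range(n):
        x = step(x)
        syncs += 1
    return x, syncs

def run_on_device(step, x, n):
    """Control flow rescheduled onto the accelerator: the whole loop
    runs device-side, so the host synchronizes once at the end."""
    for _ in range(n):
        x = step(x)
    return x, 1

step = lambda v: v * 2
a, s_host = run_with_host_control(step, 1, 10)
b, s_dev = run_on_device(step, 1, 10)
print(s_host, s_dev)  # 10 synchronizations vs 1
```

Both paths compute the same result; the difference is how often the host and accelerator must stop and agree, which is the overhead Cocktailer's co-optimization reduces.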

The release of Microsoft Research’s Heavy Metal Quartet signifies a significant advancement in deep learning model optimization. These compilers address critical challenges, including scheduling, memory access, tensor compilation, and control flow. With the promise of transforming AI model development and deployment, the Heavy Metal Quartet sets a new standard for efficiency and innovation in the field.

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
