You may or may not have heard of Project Catapult. Microsoft uses the experimental servers to deliver fast and accurate Bing results, and they’ve just received a major upgrade.
Central to Catapult servers is their use of reprogrammable FPGA chips. It’s what allows Bing to quickly filter, rank and score search results, among other functions. At the Scaled Machine Learning Conference at Stanford, Microsoft revealed an expansion of the chips’ role.
The new Catapult v2 server design is more flexible than its predecessor, going far beyond traditional data processing and giving FPGAs an even bigger role as accelerators. They’re now connected to the DRAM, CPU and network switches.
FPGAs can also accelerate local applications, or Microsoft can utilize them for large-scale deep learning processes. This means they have specific advantages in the natural language processing and AI fields. Microsoft claims its FPGAs also offer a 10X improvement in energy efficiency, cost and latency when compared to CPUs.
Catapult v2 has the potential to be used as a blueprint for other tech firms. Companies like Baidu have already used FPGAs for deep learning, and Intel recently acquired FPGA vendor Altera for $16.7 billion.
The company plans to implement FPGAs in servers, drones, cars, robots and other tech. As such, Microsoft’s innovation could have an impact outside of its internal processes.
The University of Texas at Austin uses a small Catapult server, for example. The system sits in the Texas Advanced Computing Center with 32 two-socket Intel Xeon servers, each containing an Altera Stratix V D5 FPGA chip. The new design specification could allow the university to upgrade its hardware, opening up new avenues of research.
What exactly this means for Bing and other Microsoft services is unclear. We’ll have to wait and see what changes, if any, this brings. However, as the company charges forward with AI, the new design is bound to help tremendously.