How ASUS Shifts to Supercomputers, Datacenters and LLMs

The Taiwanese company has been enhancing its tech portfolio and achieving success locally, with plans to scale up its innovations globally.

ASUS, predominantly known for consumer electronics such as its recently unveiled AI-powered Ryzen AI 300 laptops, is shifting its focus increasingly toward enterprise technology and cloud solutions, building on successes at home that it now plans to scale globally.

ASUS Supercomputing Ventures

At the recent Computex event in Taiwan, Jackie Hsu, Senior Vice President and co-head of ASUS's Open Platform and IoT business groups, highlighted the company's involvement in several significant projects, building on previous supercomputing investments for Taiwan's National Center for High-Performance Computing (NCHC).

The Taiwania 2 supercomputer, built by ASUS and Quanta Computer, debuted at number 20 on the TOP500 list in 2018 with roughly nine petaflops of performance. In the current ranking it has slipped to 106th place as more and more supercomputers enter the field, but with demand for compute surging due to AI, the market for such systems has become more lucrative than ever.
 
Image: ASUS Taiwania 2 supercomputer (official)

The follow-up project, Taiwania 3, reaches a more modest 2.7 petaflops but was designed with a broader and more specialized remit, tailored for applications such as biomedical research and climate change studies. It also adds improved networking and more sophisticated security measures, including one-time passwords (OTP) for system access, which makes it better suited to sensitive research areas.
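
As a simple illustration of the kind of OTP check such a login flow relies on, the sketch below uses the pyotp library to generate and verify a time-based one-time password. It is a generic example under assumed names, not a description of the NCHC's actual access system.

    import pyotp

    # Each user is enrolled with a shared secret (normally provisioned via a QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # The user's authenticator app derives a 6-digit code from the secret and the current time...
    code = totp.now()

    # ...and the login service verifies it, tolerating a small amount of clock drift.
    if totp.verify(code, valid_window=1):
        print("OTP accepted, access granted")
    else:
        print("OTP rejected")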

In May 2023, ASUS won the bid from the National Center for High-Performance Computing (NCHC) to build the Taiwania 4 supercomputer, which includes a data center with a power usage effectiveness (PUE) rating of 1.17, an impressive feat in Taiwan's climate. Taiwania 4 will use a state-of-the-art AI computing architecture that pairs next-generation CPUs and GPUs to handle even more complex data processing tasks at higher efficiency. The machine is expected to bolster Taiwan's capabilities in AI-driven research and industrial applications.
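
For context, PUE compares a facility's total energy draw with the energy consumed by its IT equipment alone:

    PUE = total facility energy / IT equipment energy

A PUE of 1.17 therefore means that for every kilowatt delivered to the servers, only about 0.17 kW is spent on cooling, power conversion, and other overhead, which is remarkably low for a climate as hot and humid as Taiwan's.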

ASUS Formosa Foundation Model links Chinese and English

ASUS has also made strides in artificial intelligence with the Formosa Foundation Model, a large language model (LLM) built on the BLOOM and Llama 2 models, which have 176 billion and 70 billion parameters respectively and target a range of applications and cross-language understanding. The model is engineered to generate traditional Chinese text, addressing a gap left by LLMs that are typically trained primarily on English.
 
Image: ASUS Formosa Foundation Model (official)
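
As a rough illustration of how a published checkpoint of such a model would typically be consumed, the sketch below generates traditional Chinese text with the Hugging Face transformers library. The model identifier is a placeholder, since ASUS has not said how the Formosa Foundation Model is distributed.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder model ID; not an official Formosa Foundation Model checkpoint.
    model_id = "example-org/formosa-ffm-176b"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" spreads the weights across available GPUs (requires the accelerate package).
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Prompt asking, in traditional Chinese, for a short introduction to Taiwan's supercomputing efforts.
    prompt = "請用繁體中文簡短介紹台灣的超級電腦發展。"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))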

Engaging the Enterprise Market

The company also offers a range of servers, encompassing standard models, supercomputer nodes, and AI servers. Though not yet dominant in the server market, ASUS has proven its ability to produce energy-efficient servers, drawing attention from large-scale cloud providers.
 

ASUS integrates a variety of technological solutions to cater to enterprise customers. A senior ASUS executive recently pointed to the company's 1.17-PUE data centers and the Formosa Foundation Model as examples of this approach, and the company has taken on several projects around large AI systems that span both software and hardware.

Even though ASUS operates on a smaller scale compared to its rivals, its comprehensive solutions have garnered clientele willing to invest in superior technology. Hsu sees enterprise technology as a key growth area, particularly with the rising demand for AI and computing power.

GPU Server Introductions at NVIDIA GTC 2024

At the recent NVIDIA GTC 2024 conference, ASUS introduced new GPU servers built on NVIDIA's modular MGX architecture, with options ranging from entry-level to high-end GPU solutions and liquid-cooled rack systems designed for diverse workloads.
 

These servers will integrate the NVIDIA Grace Hopper Superchip and the NVIDIA Grace CPU Superchip, the latter providing strong performance with 144 Arm Neoverse V2 cores and Scalable Vector Extension 2 (SVE2) support. The modular design allows for flexibility and scalability in server configurations.

Advanced Cooling and AI Platforms

ASUS is also developing advanced liquid-cooling technologies, such as direct-to-chip (D2C) and immersion cooling, which can significantly lower power usage effectiveness (PUE). Its servers support manifolds and cold plates and can be fitted with a rear-door heat exchanger that is compatible with standard rack-server designs, removing the need to replace existing racks.
 
Image: ASUS liquid cooling solutions

At NVIDIA GTC 2024, ASUS also unveiled a no-code AI platform with an integrated software stack. The platform lets businesses accelerate AI development, supporting LLM pre-training, fine-tuning, and inference, and can handle models ranging from 7B to more than 180B parameters with customized software, ensuring efficient data processing and maximizing system performance.

Last Updated on November 7, 2024 7:39 pm CET

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
