Compal launches AI servers with AMD Instinct MI355X for LLM and generative AI workloads

Compal Electronics (2324.TW) debuts its liquid-cooled AI server series with AMD Instinct MI355X GPUs, aiming to lead the LLM and generative AI infrastructure space.


Compal Electronics (Taiwan Stock Exchange: 2324.TW), a global leader in computing infrastructure, unveiled its next-generation AI server platform SG720-2A/OG720-2A at the AMD Advancing AI 2025 conference in the United States and concurrently at the International Supercomputing Conference (ISC) 2025 in Europe. The dual reveal underscores Compal’s commitment to global AI infrastructure innovation and marks a strategic evolution in server hardware aimed at meeting the surging demand for generative AI and large language model (LLM) training.

Historically known for its extensive portfolio in IT and consumer electronics, Compal Electronics has steadily grown its enterprise server division through long-standing collaborations with chipmakers such as AMD. This latest product introduction signals a new level of ambition in the AI infrastructure sector, with cutting-edge thermal engineering and GPU density designed to meet enterprise-scale AI compute requirements.

What are the capabilities of Compal’s new AI servers with AMD Instinct MI355X?

The newly launched SG720-2A and OG720-2A platforms are engineered to address the growing challenges of thermal management, compute density, and energy efficiency associated with training foundation models and running inference at scale. Powered by AMD’s latest Instinct MI355X GPU built on the CDNA 4 architecture, the servers support up to eight GPUs and are optimized for large-scale AI workloads in sectors such as cloud services, autonomous vehicles, financial modeling, and pharmaceutical R&D.

Each unit can be configured with air cooling or with one of two liquid cooling options, including a two-phase solution co-developed with thermal specialist ZutaCore. The ZutaCore HyperCool® 2-Phase Direct Liquid Cooling (DLC) system enables safe and consistent operation under extreme compute loads, reducing the need for massive airflow-based cooling infrastructure and thus improving operational efficiency.

Technical specifications include 288GB of HBM3E memory per GPU, with memory bandwidth of up to 8TB/s per GPU. This hardware configuration supports low-precision data formats such as FP4 and FP6, which are critical for the throughput demands of generative AI models. Additionally, high-speed interconnects via PCIe Gen5 and AMD Infinity Fabric facilitate low-latency GPU-to-GPU communication, a requirement for orchestrating parallelized training and inference across large clusters.
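To put the per-GPU figures in system-level perspective, a short back-of-envelope sketch follows. It assumes AMD's published MI355X specifications (288GB HBM3E and 8TB/s bandwidth per GPU) and a fully populated eight-GPU chassis; the FP4 parameter-count estimate is a rough upper bound for illustration only, ignoring activations, KV caches, and framework overhead.

```python
# Back-of-envelope aggregates for an 8-GPU system, assuming AMD's
# published per-GPU MI355X specs (288 GB HBM3E, 8 TB/s bandwidth).
GPUS_PER_SYSTEM = 8
HBM_PER_GPU_GB = 288
BW_PER_GPU_TBPS = 8.0

total_memory_gb = GPUS_PER_SYSTEM * HBM_PER_GPU_GB     # aggregate HBM capacity
aggregate_bw_tbps = GPUS_PER_SYSTEM * BW_PER_GPU_TBPS  # aggregate memory bandwidth

# Rough ceiling on model size held entirely in HBM at FP4
# (4 bits = 0.5 bytes per parameter); real deployments fit far less.
fp4_param_ceiling_trillions = total_memory_gb / 0.5 / 1000

print(total_memory_gb, aggregate_bw_tbps, round(fp4_param_ceiling_trillions, 2))
```

Numbers like these explain why dense multi-GPU nodes, rather than raw per-chip speed, dominate LLM infrastructure planning.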


Why is thermal design a key differentiator in AI server platforms?

As the AI hardware market matures, developers and data center operators increasingly emphasize thermal performance and sustainability over brute-force compute metrics alone. Industry experts note that LLM workloads often run at sustained high power draw for prolonged durations, making thermal throttling and cooling efficiency decisive factors in server adoption.

Compal’s move to integrate both single-phase and two-phase liquid cooling into a modular server design represents a shift toward data center sustainability. These innovations are not only aligned with increasing energy cost pressures in North America and Europe but are also designed for compatibility with global data center standards, supporting both EIA 19-inch and ORv3 21-inch rack formats.

Analyst commentary suggests that infrastructure vendors offering proprietary or restrictive cooling models face challenges in integration at scale. In contrast, Compal's choice to align with ZutaCore and support open ecosystem architectures gives the Taiwanese manufacturer an edge in the global rollout of AI-capable infrastructure.

How does this move deepen Compal’s partnership with AMD?

The SG720-2A/OG720-2A represents the latest chapter in Compal Electronics’ long-term strategic collaboration with Advanced Micro Devices. Over several generations of server platforms, Compal has chosen AMD Instinct GPUs for their performance-per-watt and compatibility with open-source software stacks such as ROCm, PyTorch, and TensorFlow.

Alan Chang, Vice President of the Infrastructure Solutions Business Group at Compal, emphasized this alignment in his remarks at the product launch. “The future of AI and HPC is not just about speed. It’s about intelligent integration and sustainable deployment,” Chang stated. “Each server we build addresses real-world technical and operational challenges. SG720-2A/OG720-2A is a collaboration with AMD that empowers customers with a scalable, stable, high-performance compute foundation.”


Market analysts view such partnerships as critical in an era where server OEMs must respond to shifting chip roadmaps and increasingly competitive time-to-deployment metrics for AI infrastructure providers.

How are institutional and enterprise buyers responding?

Institutional sentiment following the announcement was cautiously optimistic, with early feedback focusing on the relevance of thermal performance and open architecture compatibility. Major hyperscale cloud providers and AI-focused SaaS vendors are projected to increase infrastructure spend by double digits year-over-year through 2026, according to recent research from IDC and Gartner. This positions Compal’s latest offering as a timely solution for tier-1 and tier-2 buyers looking to deploy or refresh AI clusters optimized for generative models.

In Europe, where energy constraints and sustainability mandates are particularly stringent, the introduction of ZutaCore’s DLC-integrated models has generated interest among operators aiming to lower power usage effectiveness (PUE) ratings. Furthermore, the use of industry-standard rack configurations makes the SG720-2A/OG720-2A attractive for large-scale retrofits and greenfield deployments alike.
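Power usage effectiveness, the metric referenced above, is simply total facility power divided by IT equipment power, so cutting cooling overhead drives the ratio toward its ideal of 1.0. The sketch below uses purely illustrative numbers, not vendor or ZutaCore figures, to show how a lower cooling load moves PUE.

```python
def pue(it_power_kw: float, cooling_power_kw: float,
        other_overhead_kw: float = 0.0) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return (it_power_kw + cooling_power_kw + other_overhead_kw) / it_power_kw

# Illustrative numbers only: a hypothetical 1 MW IT load where direct
# liquid cooling cuts cooling power relative to an air-cooled baseline.
air_cooled    = pue(it_power_kw=1000, cooling_power_kw=400, other_overhead_kw=100)
liquid_cooled = pue(it_power_kw=1000, cooling_power_kw=80,  other_overhead_kw=100)

print(round(air_cooled, 2), round(liquid_cooled, 2))  # 1.5 vs 1.18
```

Under these assumed loads, the same compute is delivered with roughly 20% less total facility power, which is the kind of gap sustainability mandates are designed to reward.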

What is the growth outlook for Compal Electronics in AI infrastructure?

With the release of SG720-2A/OG720-2A, Compal Electronics is poised to deepen its role in the rapidly evolving AI compute landscape. Analysts expect the global AI server market to surpass USD 50 billion by 2027, with compound annual growth rates exceeding 25%. As enterprise demand shifts from experimentation to production-scale deployment, vendors offering energy-efficient, interoperable, and developer-friendly solutions stand to gain the most traction.
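The growth figures above follow from simple compound-growth arithmetic. As a hedged illustration (the base-year market size here is hypothetical, chosen only to show the mechanics), a market compounding at 25% annually roughly doubles in three years:

```python
def project(base_usd_bn: float, cagr: float, years: int) -> float:
    """Compound a base market size forward at a constant annual growth rate."""
    return base_usd_bn * (1 + cagr) ** years

# Illustrative only: an assumed ~$26B base growing at 25% CAGR
# crosses the $50B mark within three years.
print(round(project(26.0, 0.25, 3), 1))  # ≈ 50.8
```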


Given its comprehensive server capabilities, broad support for AI frameworks, and growing thermal innovation, Compal is well-positioned to serve both mature AI adopters and emerging research institutions. Continued alignment with AMD’s product cadence and global data center standardization efforts is expected to further accelerate Compal’s reach across North America, Europe, and Asia-Pacific.

What comes next after the global debut at AMD Advancing AI and ISC 2025?

Compal Electronics’ choice to debut the SG720-2A/OG720-2A across both AMD Advancing AI 2025 in the U.S. and ISC 2025 in Europe signifies a dual-market expansion strategy. This move enhances visibility among enterprise buyers and institutional partners alike, while reinforcing Compal’s global ambitions in intelligent computing.

In the coming quarters, industry observers anticipate that Compal will expand its server product line to address specialized use cases such as edge AI, federated learning, and multi-modal generative systems. New deployments in climate research, bioinformatics, and defense AI are also areas where Compal’s liquid-cooled, high-density architecture may find strategic footholds.

As next-generation AI applications demand infrastructure that can balance performance, flexibility, and sustainability, Compal’s new server lineup stands as a timely and technically robust addition to the global AI hardware ecosystem.
