SK hynix stock surges 7% as it unveils world’s first HBM4 mass production and NAND breakthrough
SK hynix stock jumped 7% after unveiling the world’s first HBM4 mass production system and high-performance ZUFS 4.1 NAND launch. Find out what this means for AI memory innovation.
SK hynix Inc. (KRX: 000660) saw its share price surge 7% to ₩328,500 in intraday trading on September 12, 2025, a gain of ₩21,500 per share, according to Google Finance data. The rally marks a fresh multi-year high, fueled by a trio of milestone technology announcements that position the South Korean semiconductor major as a key player in the next generation of AI infrastructure and memory innovation.
The company’s stock momentum followed the official confirmation that SK hynix had completed development and prepared mass production of HBM4 — the industry’s first — signaling a major leap in high-bandwidth memory (HBM) leadership. This announcement was bolstered by the rollout of ZUFS 4.1, a high-performance NAND solution targeting mobile AI applications, and the earlier September 2 debut of ASML’s High NA EUV lithography equipment at SK hynix’s M16 fab.
The back-to-back product rollouts come at a time when investor interest in AI chipmakers and advanced memory suppliers has intensified globally. With rivals such as Samsung Electronics and Micron Technology also racing to ramp AI-optimized memory portfolios, SK hynix’s string of “firsts” is being interpreted as a defining moment that could structurally reset its competitive standing.

What makes SK hynix’s HBM4 a critical milestone in AI-era memory evolution?
In its September 12 press release, SK hynix confirmed that it had completed both the development and industrial readiness of HBM4, the sixth-generation high bandwidth memory (HBM), and would begin mass production — the first semiconductor company globally to do so.
HBM, which vertically stacks DRAM chips to deliver higher bandwidth at lower power consumption compared to traditional memory, has become the cornerstone of AI accelerators used in large language models (LLMs), high-performance computing (HPC), and training clusters for generative AI.
SK hynix stated that HBM4 offers double the bandwidth of HBM3E — its immediate predecessor — through the use of 2,048 I/O terminals, and boosts power efficiency by over 40%. When deployed in AI systems, the company projects up to 69% improvement in service performance, which would dramatically alleviate memory bottlenecks in AI workloads and reduce power draw at data centers.
By exceeding JEDEC’s standard operating speed of 8 Gbps and surpassing 10 Gbps, SK hynix is effectively rewriting industry expectations for memory throughput. It has also enhanced thermal and reliability characteristics by integrating its proven Advanced MR-MUF packaging and 1bnm (fifth-generation 10nm) process technology.
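As a rough illustration of what those figures imply, per-stack bandwidth can be estimated from pin count and per-pin data rate. The sketch below uses the article's HBM4 numbers (2,048 I/O terminals at 10 Gbps) against an assumed HBM3E baseline of 1,024 pins at roughly 9.6 Gbps; the baseline rate is an assumption, and shipping-product bandwidth depends on the exact configuration:

```python
# Back-of-envelope HBM per-stack bandwidth: pins × per-pin rate / 8 bits per byte.
# HBM4 figures (2,048 pins, 10 Gbps) are from the article; the HBM3E baseline
# (1,024 pins, ~9.6 Gbps) is an assumed comparison point.

def hbm_bandwidth_gb_per_s(io_pins: int, pin_rate_gbps: float) -> float:
    """Aggregate stack bandwidth in GB/s for a given pin count and per-pin rate."""
    return io_pins * pin_rate_gbps / 8

hbm3e = hbm_bandwidth_gb_per_s(1024, 9.6)   # assumed HBM3E baseline
hbm4 = hbm_bandwidth_gb_per_s(2048, 10.0)   # article's HBM4 figures

print(f"HBM3E ~{hbm3e:.0f} GB/s per stack")  # ~1229 GB/s
print(f"HBM4  ~{hbm4:.0f} GB/s per stack")   # 2560 GB/s, ~2x the baseline
```

Doubling the pin count while also lifting the per-pin rate past 10 Gbps is what delivers the "double the bandwidth" claim without requiring an exotic signaling jump on each individual pin.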
Joohwan Cho, Head of HBM Development at SK hynix, described the launch as “a new milestone for the industry,” underscoring the company’s intent to deliver timely, performance-aligned solutions for hyperscale AI needs. Justin Kim, President & Head of AI Infrastructure, further called HBM4 a “symbolic turning point” that breaks past traditional limitations in AI hardware architecture.
How does the ZUFS 4.1 NAND launch expand SK hynix’s mobile AI footprint?
Just two days prior to the HBM4 announcement, SK hynix revealed it had started mass production and customer shipments of ZUFS 4.1, a new generation of Zoned Universal Flash Storage (ZUFS) optimized for mobile devices with embedded AI capabilities.
ZUFS 4.1, built on the JEDEC UFS 4.1 standard, offers a 45% reduction in app launch times and 47% faster AI app execution, according to the company. These improvements stem from its sequential write approach and enhanced sustained read performance, which address key pain points in conventional UFS systems that degrade over prolonged use.
The memory maker also improved the solution’s error-handling capabilities over its ZUFS 4.0 version, allowing for more precise error detection and correction guidance via the CPU, which in turn boosts system recovery and stability.
The NAND breakthrough comes amid growing demand for AI inference on smartphones, wearables, and IoT devices — all of which increasingly require local storage speed, efficiency, and reliability to support real-time AI processing without latency.
SK hynix began ZUFS 4.1 mass production in July 2025 after completing qualification in June through close collaboration with its mobile partners, suggesting that integration into Android flagship devices is imminent or already underway.
What is High NA EUV and how does it reinforce SK hynix’s process leadership?
On September 2, SK hynix also became the world’s first chipmaker to install ASML’s TWINSCAN EXE:5200B High NA EUV lithography system for mass production, a move that is expected to accelerate its roadmap for next-generation DRAM.
High NA EUV (Extreme Ultraviolet Lithography with high numerical aperture) represents a dramatic leap in precision and scaling. The EXE:5200B enables transistor feature sizes that are 1.7 times smaller and transistor densities that are 2.9 times higher than existing EUV systems, thanks to its NA increase from 0.33 to 0.55.
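The 1.7x and 2.9x figures follow almost directly from the numerical-aperture change. Under the Rayleigh criterion, achievable feature size scales inversely with NA at a fixed wavelength and process factor, and areal density scales with the square of the linear shrink. A quick sanity check of ASML's quoted numbers (this is a simplification that ignores changes in the k1 process factor):

```python
# Sanity check of High NA EUV scaling claims from the NA change alone.
# Rayleigh criterion: resolution ∝ k1 · λ / NA, so at fixed k1 and λ (13.5 nm),
# linear feature size scales with 1/NA and areal density with NA².

na_old, na_new = 0.33, 0.55
linear_shrink = na_new / na_old      # ~1.67x smaller features (quoted as 1.7x)
density_gain = linear_shrink ** 2    # ~2.78x density (quoted as 2.9x)

print(f"linear shrink ~{linear_shrink:.2f}x, density gain ~{density_gain:.2f}x")
```

The simple ratio lands close to the quoted figures; the small gap versus 2.9x plausibly reflects process-factor improvements beyond the raw optics, though that is an inference rather than anything stated in the announcement.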
This installation at the M16 fab in Icheon, South Korea, gives SK hynix a crucial lead in process technology at a time when memory companies are being forced to deliver more chips per wafer without compromising thermal, yield, or defect control.
Since first introducing EUV lithography in 2021 for 1anm DRAM, SK hynix has gradually expanded its usage across advanced nodes. The High NA EUV deployment now signals readiness to meet the extreme scaling and density needs of both HBM4 and future memory product families.
Cha Seon Yong, Head of R&D at SK hynix, stated the adoption of High NA EUV brings the company’s “technological vision into reality” and will support leadership in AI memory and next-gen computing.
How are institutional investors reacting to SK hynix’s AI memory roadmap?
The 7% stock surge on September 12 places SK hynix near its 52-week high of ₩329,500, reflecting renewed institutional conviction in its AI-aligned product pipeline. Foreign institutional flows turned notably bullish in early September, particularly after signs of strong adoption of SK hynix’s HBM3E by U.S.-based AI chipmakers were reported in Q2 2025 earnings calls.
Options activity has also spiked around the memory giant, especially in AI-heavy ETFs and South Korea’s broader semiconductor basket. Long positions are building in anticipation of rising HBM unit ASPs and robust NAND shipment volume in the back half of 2025.
Analysts across major brokerages have upgraded SK hynix to “Overweight” or “Buy” ratings, citing upside from HBM4 margin expansion, increased wafer yields from High NA EUV integration, and potential strategic NAND partnerships with Android OEMs.
That said, investors are also closely watching the timeline for large-scale HBM4 customer ramps — particularly from AI chip leaders like NVIDIA, AMD, and possibly Meta or Microsoft — to assess earnings visibility in early 2026.
What’s next for SK hynix as it transforms into a full-stack AI memory provider?
With HBM4, ZUFS 4.1, and High NA EUV all going live in the same two-week window, SK hynix is staking its claim as not just a supplier of memory chips but a vertically integrated enabler of AI infrastructure, spanning datacenter, mobile, and edge computing.
The firm’s strategic language — including phrases like “full-stack AI memory provider” and “symbolic turning point” — signals an ambition to shape memory form factors, performance curves, and cost models around AI hardware requirements in the same way NVIDIA has done for AI GPUs.
While challenges remain, including DRAM/NAND cyclicality and sustained CAPEX for advanced fabs, the company appears to be aligning product development, foundry partnerships, and market timing in a way that puts it in rarefied territory alongside TSMC, Intel, and Samsung.
From AI accelerators in hyperscale clusters to smartphones executing on-device inference, SK hynix is now actively powering multiple layers of the AI stack — and that transformation is being priced into its market value.