by Prasanth Aby Thomas

SK Hynix’s Q4 profit signals an AI-backed rise in advanced DRAM demand

News
Jan 25, 2024 | 4 mins
Enterprise Storage

Sales of key products DDR5 and HBM3 surged, with more than fourfold and fivefold increases, respectively, compared to the previous year.

Samsung DRAM
Credit: Samsung

SK Hynix’s strong Q4 performance highlights the growing demand for advanced DRAM chips, including high bandwidth memory (HBM), crucial for AI applications.

Sales of its key products, DDR5 and HBM3, rose more than fourfold and fivefold, respectively, compared to the previous year.

SK Hynix is gearing up for mass production of HBM3E, a pivotal AI memory product, and continues to develop HBM4. It is also supplying high-performance, high-capacity products such as DDR5 and LPDDR5T to meet growing demand for high-performance DRAM.

AI fueling demand for DRAM

AI systems process data at rates beyond what conventional computers can handle, driving heightened demand for DRAM. Micron predicts that by 2025, approximately half of all cloud infrastructure servers will be AI servers, necessitating a sixfold increase in DRAM.

“The much-hyped NVIDIA H100 GPU, quintessential for the generative AI boom, is a 7-die package with TSMC’s Chip-on-Wafer-on-Substrate packaging architecture, which has the core GPU compute unit at the center surrounded by 6 HBM blocks,” said Danish Faruqui, CEO of Fab Economics, a US-based boutique semiconductor greenfield projects advisory firm. “Each HBM block includes 8 vertically stacked DRAM dies bonded to a bottom logic die via thermal compression bonding and through-silicon vias.

“HBM is indispensable alongside the GPU compute system, mitigating what we refer to as the ‘memory wall’ constraint in both AI inference and training workloads, a critical limitation that affects data center CPUs.”

Similarly, AMD’s MI300 AI accelerator, touted as the world’s fastest AI hardware, packs 8 HBM memory stacks in each unit, each stack comprising 12 vertically stacked DRAM dies connected by through-silicon vias on a base logic die.
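As a back-of-envelope illustration of why these packages pull in so much DRAM, the sketch below simply multiplies the stack and die counts quoted above; the per-die capacity figure is a hypothetical placeholder for illustration, not a vendor specification.

```python
# Rough die-count arithmetic based on the package descriptions quoted above.
# The per-die capacity is a HYPOTHETICAL placeholder, not a vendor spec.

PACKAGES = {
    # name: (HBM stacks per package, DRAM dies per stack)
    "NVIDIA H100 (as described above)": (6, 8),
    "AMD MI300 (as described above)": (8, 12),
}

ASSUMED_GB_PER_DRAM_DIE = 2  # hypothetical figure, for illustration only

for name, (stacks, dies_per_stack) in PACKAGES.items():
    dram_dies = stacks * dies_per_stack
    capacity_gb = dram_dies * ASSUMED_GB_PER_DRAM_DIE
    print(f"{name}: {dram_dies} DRAM dies per package "
          f"(~{capacity_gb} GB if each die were {ASSUMED_GB_PER_DRAM_DIE} GB)")
```

Even under these illustrative assumptions, a single accelerator package carries dozens of DRAM dies, which is why every AI accelerator shipped translates directly into HBM and DRAM volume.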

“With increased bandwidth requirements, HBMs, which are essentially vertical stacks of interconnected DRAM chips, are in growing demand,” said Arun Mampazhi, an independent analyst. “Perhaps higher cost was a factor restricting its wide use before, but with other options running out, as we have seen in many cases, the efficiency requirements eventually break the cost barrier. Moreover, SRAM, which is generally used for cache, is no longer scaling at the rate of logic.”

Broad benefits for key industry players

The demand for DRAM or HBM stacks is set to benefit several major companies active in this segment.

“As per Fab Economics Research and Analysis, based on our 6-year demand forecast for HBM, the supply is highly skewed when compared to massive AI product-driven demand,” Faruqui said. “HBM as a percentage of DRAM sales will increase by a factor of 2.5 from 2023 to 2024, with further growth factors forecasted by our firm for each follow-on year until 2030. Due to the skew in demand and supply for AI product-driven HBM, we have forecasted average selling price (ASP) premiums for HBM for each respective year, which will boost profit margins for players like SK Hynix, Micron, and Samsung.”

The impact of HBM varies for each player based on factors such as their technology readiness, manufacturing capacity roadmap, customer loyalty, and geopolitical considerations.

According to Faruqui, there are 37 players well-positioned to benefit from the HBM wave driven by AI hardware across the ecosystem, including the Design-Fab-Packaging-Test value chain and materials/equipment supply chains. Some of these players have the potential for exceptionally high growth.

More innovations are on the way

As the role of DRAM becomes increasingly pivotal in boosting the performance of AI chips, a focus on innovations and developments becomes crucial. Brady Wang, associate director at Counterpoint Research, suggests that this includes the fabrication of denser memory chips and modules.

“This progress involves the creation of chips with finer linewidth and the enhancement of 3D structural designs,” Wang said. “Moreover, some companies are working on the creation of DRAM specifically engineered for AI, designed to efficiently manage the distinctive workloads and data patterns inherent in AI processing.”

There are also explorations of novel memory technologies, such as MRAM, RRAM, and CBRAM, which have the potential to either complement or provide alternatives to conventional DRAM.

“Enterprises must prioritize strategic concerns when managing this ever-changing marketplace,” said Manish Rawat, semiconductor analyst at TechInsights. “It’s critical to give priority to DRAM technologies with increased memory bandwidth and throughput, particularly as AI applications get more complicated. Real-time applications require low-latency solutions.”

This means that businesses should look for DRAM advancements with short access times. Large-scale parallel processing in AI applications also demands energy efficiency and scalability, which has put increasing emphasis on power-efficient solutions.
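To make the bandwidth point concrete, the sketch below runs a simple roofline-style check: it compares a workload’s arithmetic intensity against a hypothetical accelerator’s compute-to-bandwidth ratio to see whether memory bandwidth, the “memory wall” Faruqui describes, is the bottleneck. All figures are illustrative assumptions, not measurements of any specific product.

```python
# Illustrative roofline-style check of whether a workload is limited by
# memory bandwidth (the "memory wall") or by compute.
# All numbers below are HYPOTHETICAL assumptions for illustration.

peak_compute_tflops = 1000.0   # assumed peak throughput, TFLOP/s
memory_bandwidth_tbs = 3.0     # assumed HBM bandwidth, TB/s

# Ridge point: FLOPs that must be performed per byte moved from memory
# to keep the compute units fully busy.
ridge_flops_per_byte = (peak_compute_tflops * 1e12) / (memory_bandwidth_tbs * 1e12)

# A hypothetical inference workload: FLOPs performed per byte of
# weights and activations streamed from memory.
workload_flops_per_byte = 100.0

if workload_flops_per_byte < ridge_flops_per_byte:
    attainable_tflops = workload_flops_per_byte * memory_bandwidth_tbs
    print(f"Memory-bound: ~{attainable_tflops:.0f} TFLOP/s attainable; "
          "more HBM bandwidth raises the ceiling.")
else:
    print("Compute-bound: faster memory alone would not help.")
```

Under these assumed numbers the workload sits well below the ridge point, so added compute goes unused and only higher memory bandwidth lifts throughput, which is the practical reason analysts advise prioritizing bandwidth and latency when evaluating DRAM options for AI workloads.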