Not satisfied with just announcing three new AI NAND technologies, SK hynix is working on new DRAM product technologies for the AI market, saying it wants to be a full stack AI memory creator.
SK hynix supplies both DRAM and NAND products and presented its AI memory-focussed ideas at the “SK AI Summit 2025”, held in Seoul on November 3. The background is that memory performance is not keeping up with GPU developments, causing a disconnect between a GPU’s high bandwidth memory (HBM) capacity and performance, and the GPU’s own abilities. This hurdle, SK hynix says, is known as the “Memory Wall”. Although SK hynix developed, and leads, the HBM market, providing higher bandwidth than standard DRAM, this is not enough.
Noh-Jung Kwak from his LinkedIn post.
President and CEO Noh-Jung Kwak said: “We will become a creator who builds ‘Full Stack AI Memory’ as a co-architect, partner, and eco-contributor.”
There are two potential products: custom HBM and AI DRAM (AI-D). The company says “Custom HBM is a product that integrates certain functions in [the] GPU and ASIC to [the] HBM base … to maximize the performance of GPUs and ASICs, and reduce data transfer power consumption with HBM.” For example, we understand it moves the HBM controller from the GPU die, at one end of the interposer connecting the HBM stack to the GPU, into the HBM base die at the other end.
SK hynix, Samsung, and Micron are all developing HBM4 and HBM4E versions, with up to 16-Hi stacks. SK hynix mentioned future generation HBM5 and HBM5E technologies for the 2029-2031 period at its event.
There are three types of AI-D being prepared: AI-D O (Optimization), AI-D B (Breakthrough), and AI-D E (Expansion).
AI-D O is low-power, high-performance DRAM that helps reduce the total cost of ownership. It uses MRDIMM, SOCAMM2, and LPDDR5R technologies.
MRDIMM is a Multiplexed Rank Dual In-line Memory Module, which operates two ranks of memory simultaneously to increase memory data access speed. SOCAMM2 is a low power Small Outline Compression Attached Memory Module for AI servers. It was developed by the JEDEC Solid State Technology Association as an open industry standard (JESD318), not by any single company.
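As a rough sketch of the MRDIMM idea, the module's mux/demux buffer interleaves two ranks onto the host bus, so the host-facing interface can run at roughly twice the per-rank rate. The per-rank speed below is our illustrative assumption, not an SK hynix figure:

```python
# Illustrative MRDIMM arithmetic -- the per-rank rate is an example,
# not an SK hynix specification.
per_rank_rate_mts = 6400   # a conventional DDR5 rank speed, in MT/s
ranks_multiplexed = 2      # MRDIMM operates two ranks simultaneously

# The on-module mux/demux buffer interleaves both ranks onto the host
# bus, so the host interface can run at roughly the combined rate.
host_rate_mts = per_rank_rate_mts * ranks_multiplexed
print(f"{ranks_multiplexed} ranks at {per_rank_rate_mts} MT/s "
      f"-> ~{host_rate_mts} MT/s at the host interface")
```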
Kevin (Jongwon) Lee, EVP and Head of DRAM Marketing at SK hynix, said: “SOCAMM2 is the DDR5 killer for AI—same capacity, half the power, double the sockets.” He was comparing 128 GB SOCAMM2 (9.6 GT/s, ~10 W) vs 128 GB DDR5 RDIMM (5.6 GT/s, ~25 W).
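Those figures make the power arithmetic easy to check. Here is a minimal sketch using only the numbers Lee cited, plus one assumption of ours: a 64-bit data path per module, so peak bandwidth is the transfer rate times 8 bytes.

```python
# Back-of-the-envelope comparison of the figures Lee quoted.
# Assumption (ours, not SK hynix's): a 64-bit data path per module.
BUS_BYTES = 8  # bytes moved per transfer on a 64-bit bus

modules = {
    # name: (capacity in GB, transfer rate in GT/s, power in W)
    "SOCAMM2": (128, 9.6, 10),
    "DDR5 RDIMM": (128, 5.6, 25),
}

for name, (cap_gb, gts, watts) in modules.items():
    gbps = gts * BUS_BYTES  # peak bandwidth in GB/s
    print(f"{name}: {gbps:.1f} GB/s peak, "
          f"{gbps / watts:.2f} GB/s per watt, "
          f"{cap_gb / watts:.1f} GB per watt")
```

On those numbers, SOCAMM2 delivers roughly four times the bandwidth per watt of the DDR5 RDIMM, which is where the “DDR5 killer” framing comes from.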
LPDDR5R is Low Power Double Data Rate 5 RAS, which is more reliable than traditional LPDDR, with RAS standing for Reliability, Availability, Serviceability.
AI-D B is SK hynix's answer to the memory wall hurdle, and features “ultra-high-capacity memory with flexible memory allocation.” It includes CMM (CXL Memory Module) and PIM (Processing-In-Memory) technologies. CXL (Compute Express Link) is an interface that connects CPU, GPU, memory, and other components in high-performance computing systems to support massive, ultra-fast computation; a CMM attaches pooled memory over that interface.
PIM, SK hynix says, integrates computational capabilities into memory, addressing data movement bottlenecks in AI and big data processing.
As we understand it, AI-D B employs 2 TB blades of memory, using 16 x 128 GB SOCAMM2 modules. Each blade is a NUMA node on a CXL fabric, and a GPU OS sees a memory address space of up to 16 PB, with up to 1,000 GPUs contributing their memory capacity. One GPU can borrow spare memory capacity from this pool, growing its memory as workloads demand. Kwak said: “The Memory Wall is the biggest hurdle for AI scaling. AI-D B will break it.”
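The capacity arithmetic behind those claims can be sketched as follows; the blade count needed to reach 16 PB is our inference, not a configuration SK hynix stated:

```python
# Capacity arithmetic for AI-D B, using the article's figures; the blade
# count needed to fill 16 PB is our inference, not a stated configuration.
GB = 1
TB = 1024 * GB
PB = 1024 * TB

module_gb = 128 * GB         # one SOCAMM2 module
blade_gb = 16 * module_gb    # 16 modules per blade
pool_gb = 16 * PB            # address space a GPU OS can see

print(f"Blade capacity: {blade_gb / TB:.0f} TB")           # 2 TB, as stated
print(f"Blades to fill 16 PB: {pool_gb / blade_gb:,.0f}")  # 8,192 blades
print(f"Share per GPU over 1,000 GPUs: {pool_gb / 1000 / TB:.1f} TB")
```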
AI-D E is a less specific technology idea. It refers to using memory products, including HBM, outside the data center, with SK hynix wanting to extend DRAM use cases into fields including robotics, mobility, and industrial automation.
SK hynix is partnering with Nvidia on HBM and boosting its own fab productivity through fab digital twins built with Nvidia Omniverse. It says it has a long-term co-operation with OpenAI concerning high-performance memory and is working with TSMC on next-generation HBM base dies. It’s also working with NAVER Cloud to optimize next-generation AI memory and storage products for real-world data center environments.
Watch a video of the presentations, including Kwak’s, from the Seoul event here. It is, however, in Korean.