Samsung Reveals Sixth-Generation HBM4 Memory, HBM4E Chips, and Advanced AI Infrastructure Solutions at NVIDIA GTC 2026
Samsung Electronics is showcasing a significant leap in semiconductor technology at NVIDIA GTC 2026 in San Jose. The centerpiece of the exhibition is the sixth generation of high bandwidth memory, known as HBM4, which is now in mass production and appears to be tailored specifically for NVIDIA's Vera Rubin platform. Reports from the booth put the per-pin data rate at 11.7 gigabits per second, with suggestions that it could eventually reach 13 gigabits per second.
One of the more interesting reveals is the HBM4E chip. Making its first public appearance, this successor claims a bandwidth of 4.0 terabytes per second. To get there, Samsung appears to be adopting a new manufacturing method called Hybrid Copper Bonding (HCB), which is expected to enable stacks of 16 layers or more. Early data reportedly shows a 20 percent reduction in thermal resistance compared with older bonding methods, a change that could be vital for maintaining stability in massive data centers.
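The relationship between per-pin speed and total stack bandwidth is simple arithmetic. A minimal sketch follows; the 2048-bit interface width is an assumption based on the published JEDEC HBM4 specification, not a figure stated at the booth:

```python
# Rough HBM bandwidth arithmetic: aggregate stack throughput equals the
# per-pin data rate times the interface width. The 2048-bit width is an
# assumption from the JEDEC HBM4 spec, not a Samsung booth figure.

def stack_bandwidth_tbs(pin_rate_gbps: float, bus_width_bits: int = 2048) -> float:
    """Return aggregate stack bandwidth in terabytes per second."""
    total_gbits = pin_rate_gbps * bus_width_bits  # gigabits per second
    return total_gbits / 8 / 1000                 # bits -> bytes, GB -> TB

print(round(stack_bandwidth_tbs(11.7), 2))  # demoed rate -> roughly 3 TB/s
print(round(stack_bandwidth_tbs(13.0), 2))  # projected rate -> roughly 3.3 TB/s
```

Under the same assumption, HBM4E's claimed 4.0 TB/s would imply a per-pin rate in the mid-teens of gigabits per second, which hints at why a new bonding process matters.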
Beyond memory stacks, the collaboration between Samsung and NVIDIA spans a broader range of hardware designed for what the companies call AI Factories, including the following technologies:
- The SOCAMM2 server memory module, now in mass production
- The PM1763 SSD, which uses the PCIe 6.0 interface for rapid data transfers
- The PM1753 SSD, which is being integrated into storage reference architectures
On the manufacturing side, the two companies are using NVIDIA Omniverse libraries to build digital twins of semiconductor facilities, apparently with the goal of speeding up production through agentic AI. This suggests a move toward a more automated and efficient factory model that handles everything from initial design to final production.
Personal devices are also part of the 2026 roadmap. Samsung is highlighting DRAM solutions such as LPDDR5X and LPDDR6, apparently aimed at premium smartphones and wearable devices. LPDDR6 is particularly interesting, targeting a data rate of 30 to 35 gigabits per second, with adaptive voltage scaling reportedly used to manage power consumption. This suggests that future mobile devices may handle complex AI workloads without draining the battery too quickly.
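Why voltage scaling matters for battery life: dynamic power in CMOS circuits scales roughly with the square of supply voltage times clock frequency, so even a modest voltage drop at light load yields an outsized saving. A minimal sketch of that relationship; all numeric values below are illustrative, not Samsung's figures:

```python
# Illustration of the adaptive-voltage-scaling idea: dynamic CMOS power
# follows P = C * V^2 * f approximately, so lowering voltage along with
# frequency saves more power than frequency alone. Numbers are illustrative.

def dynamic_power(capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    """Approximate dynamic switching power in watts."""
    return capacitance_f * voltage_v ** 2 * freq_hz

full = dynamic_power(1e-9, 1.05, 4.8e9)   # full-speed operation (hypothetical)
scaled = dynamic_power(1e-9, 0.90, 3.2e9) # lighter load: lower voltage and clock

print(f"power saved: {1 - scaled / full:.0%}")
```

The quadratic voltage term is the key: in this illustrative case, a 14 percent voltage reduction combined with a one-third frequency cut roughly halves the dynamic power.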
The speaker sessions at the event might offer more clarity on how these digital twins will reshape the industry. For now, the hardware on display confirms that Samsung is positioning itself as a total solution provider. By combining memory, logic, and foundry services, they appear to be covering every corner of the AI landscape.