Samsung HBM4E and AI Infrastructure at NVIDIA GTC 2026 Showcase New Semiconductor Memory and SSD Solutions

Samsung showcases sixth generation HBM4 memory and HBM4E chips at NVIDIA GTC 2026 plus advanced SSDs and DRAM for AI factories and mobile devices.

Samsung Reveals Sixth Generation HBM4 Memory Plus HBM4E Chips and Advanced AI Infrastructure Solutions at NVIDIA GTC 2026 Exhibition

Samsung Electronics is showcasing a significant leap in semiconductor technology at NVIDIA GTC 2026 in San Jose. The centerpiece of the exhibition is the sixth generation of high bandwidth memory, HBM4, which is now in mass production and appears to be tailored specifically for the NVIDIA Vera Rubin platform. Reports from the booth suggest these memory chips reach per-pin speeds of 11.7 gigabits per second, with talk that they might eventually hit 13 gigabits.

One of the more interesting reveals is the HBM4E chip. This successor is making its first public appearance and is claimed to deliver a bandwidth of 4.0 terabytes per second. To get there, Samsung appears to be moving to a new manufacturing method called Hybrid Copper Bonding (HCB), a process likely to allow stacks of 16 layers or more. Notably, early data indicates a 20 percent reduction in thermal resistance compared with older bonding methods, a change that could be vital for maintaining stability in massive data centers.
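The relationship between the per-pin speeds and the terabytes-per-second figures above can be sketched with simple arithmetic. A minimal sketch, assuming the 2048-bit-per-stack interface JEDEC defines for HBM4; whether HBM4E keeps that width is an assumption, so the implied per-pin speed for 4.0 TB/s is an inference, not a confirmed Samsung specification.

```python
# Rough bandwidth arithmetic for an HBM stack.
# Peak bandwidth = per-pin speed (Gb/s) * interface width (bits) / 8 bits per byte.

def hbm_stack_bandwidth_tbps(pin_speed_gbps: float, bus_width_bits: int = 2048) -> float:
    """Peak bandwidth of one HBM stack in TB/s (1 TB/s taken as 1000 GB/s).

    The 2048-bit default matches the JEDEC HBM4 per-stack interface width.
    """
    return pin_speed_gbps * bus_width_bits / 8 / 1000

# The 11.7 Gb/s booth figure works out to roughly 3.0 TB/s per stack:
print(round(hbm_stack_bandwidth_tbps(11.7), 2))  # prints 3.0

# If HBM4E kept the same 2048-bit interface (an assumption), the claimed
# 4.0 TB/s would imply roughly 15.6 Gb/s per pin:
print(round(4.0 * 8 * 1000 / 2048, 1))  # prints 15.6
```

The square-bracketed arithmetic also explains why the 13 Gb/s speculation for HBM4 alone would not reach 4.0 TB/s at this bus width, hinting at either faster pins or a changed configuration for HBM4E.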

Beyond memory stacks, the collaboration between Samsung and NVIDIA covers a broader range of hardware designed for what they call AI Factories, including the following technologies:

  • The SOCAMM2 server memory module which is now in mass production
  • The PM1763 SSD utilizing the PCIe 6.0 interface for rapid data transfers
  • The PM1753 SSD which is being integrated into storage reference architectures

Looking at the manufacturing side, the two companies are using NVIDIA Omniverse libraries to build digital twins of semiconductor facilities. It seems the goal is to speed up production through agentic AI. This suggests a move toward a more automated and efficient factory model that handles everything from initial design to final production.

Personal devices are also part of the 2026 roadmap. Samsung is highlighting DRAM solutions such as LPDDR5X and LPDDR6, aimed at premium smartphones and wearable devices. LPDDR6 is particularly interesting, as it targets a bandwidth of 30 to 35 gigabits per second, and adaptive voltage scaling is expected to manage power consumption. This suggests that future mobile devices could handle complex AI workloads without draining the battery too quickly.
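The power-saving logic behind adaptive voltage scaling can be illustrated with the standard CMOS dynamic-power relation P ≈ C·V²·f. A minimal sketch, assuming illustrative capacitance, voltage, and frequency values that are not Samsung specifications:

```python
# Why adaptive voltage scaling saves power: dynamic switching power grows with
# the SQUARE of supply voltage, so lowering voltage alongside frequency beats
# lowering frequency alone. All numbers below are illustrative.

def dynamic_power(c_eff: float, voltage: float, freq_hz: float) -> float:
    """Approximate CMOS dynamic switching power in watts: P = C * V^2 * f."""
    return c_eff * voltage ** 2 * freq_hz

C_EFF = 1e-9  # effective switched capacitance in farads (illustrative)

# Baseline: full voltage, full clock.
p_full = dynamic_power(C_EFF, 1.05, 4.0e9)

# Halving only the clock halves power.
p_half_fixed_v = dynamic_power(C_EFF, 1.05, 2.0e9)

# Halving the clock AND dropping voltage saves considerably more,
# because of the V-squared term.
p_half_scaled_v = dynamic_power(C_EFF, 0.80, 2.0e9)

print(p_half_fixed_v / p_full)   # 0.5
print(p_half_scaled_v / p_full)  # ≈ 0.29
```

The quadratic voltage term is the reason memory and SoC vendors pair frequency throttling with voltage rails that track the workload rather than running at a fixed supply.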

The speaker sessions at the event might offer more clarity on how these digital twins will reshape the industry. For now, the hardware on display confirms that Samsung is positioning itself as a total solution provider. By combining memory, logic, and foundry services, they appear to be covering every corner of the AI landscape.

About the author

mgtid
Owner of Technetbook | 10+ Years of Expertise in Technology | Seasoned Writer, Designer, and Programmer | Specialist in In-Depth Tech Reviews and Industry Insights | Passionate about Driving Innovation and Educating the Tech Community Technetbook
