Rambus Intros HBM4E Memory Controller for Future AI
Rambus has released its HBM4E memory controller IP to handle the massive data flows demanded by AI models and high-performance computing. The new controller targets the next wave of data center hardware, notably the upcoming NVIDIA Rubin Ultra GPUs and AMD MI500 series accelerators.
The move from HBM4 to HBM4E marks a significant step up in memory speed, with the Rambus controller delivering a 60% performance boost over prior versions. Key specifications include:
- Pin speed: Up to 16 gigabits per second (Gbps) per pin, up from HBM4's 10 Gbps.
- Per-device bandwidth: A single memory device can reach 4.1 terabytes per second (TB/s).
- Total bandwidth: AI accelerators pairing eight devices with this controller can exceed 32 TB/s of aggregate memory bandwidth.
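The headline figures are internally consistent; a quick back-of-the-envelope check shows how they relate. This sketch assumes a 2048-bit (2048 data pins) interface per HBM4-class stack, a figure from the JEDEC HBM4 standard that the article itself does not state:

```python
# Back-of-the-envelope check of the quoted HBM4E bandwidth numbers.
# Assumption: 2048 data pins per stack (JEDEC HBM4-class interface width),
# not stated in the article.

PINS_PER_STACK = 2048   # assumed data pins per HBM4E device
GBPS_PER_PIN = 16       # per-pin data rate claimed by Rambus
STACKS = 8              # devices on a high-end AI accelerator

per_stack_gbs = PINS_PER_STACK * GBPS_PER_PIN / 8  # bits -> bytes, in GB/s
per_stack_tbs = per_stack_gbs / 1000               # GB/s -> TB/s
total_tbs = per_stack_tbs * STACKS

print(f"Per-stack bandwidth: {per_stack_tbs:.1f} TB/s")  # ~4.1 TB/s
print(f"Eight-stack total:   {total_tbs:.1f} TB/s")      # ~32.8 TB/s
```

The arithmetic lands on roughly 4.1 TB/s per device and just under 33 TB/s for eight stacks, matching the article's "over 32 TB/s" claim.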
For teams building AI SoCs, the HBM4E Controller IP is designed for immediate integration. It operates with low latency and provides the reliability features that large language model workloads require. The IP supports a range of configurations:
- Packaging: Compatible with 2.5D and 3D packaging, so it can sit in a base die or on the AI SoC itself.
- Subsystems: The digital controller can pair with third-party TSV PHYs or standard PHY solutions.
- Reliability: Advanced features protect data integrity in high-density memory stacks.
As AI processors demand ever more memory throughput, HBM bandwidth remains a bottleneck for peak performance. The new 16 Gbps HBM4E IP provides the headroom needed to build the next generation of superchips. Rambus says the HBM4E Controller IP is available for licensing now, with early access for select customers.
With pin speeds 60% faster than its earlier HBM4 controllers, the IP helps chip designers keep pace with the relentless demand for faster data movement in AI hardware.
