Groq AI Chips Get More 4nm Orders from Samsung Foundry

Groq, the AI chip company now tied to NVIDIA, is expanding its chip production with Samsung Electronics. After a trial run of 9,000 wafers last year, it is now ordering around 15,000. That shift takes Groq from testing the waters to volume production of AI inference hardware.
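To put an order of 15,000 wafers in perspective, a back-of-envelope calculation can estimate roughly how many chips that might yield. The wafer size below is the industry standard, but the die area and yield are illustrative assumptions, not figures from any report; the dies-per-wafer formula is a common approximation.

```python
import math

# Rough estimate: how many chips might 15,000 wafers produce?
# Die area and yield are illustrative assumptions, not reported figures.
WAFER_DIAMETER_MM = 300   # standard wafer diameter at leading-edge foundries
DIE_AREA_MM2 = 600        # assumed die size for a large AI accelerator
YIELD_RATE = 0.6          # assumed fraction of dies that are functional
WAFERS = 15_000           # order size cited in the article

def gross_dies_per_wafer(diameter_mm: float, die_area_mm2: float) -> int:
    """Common dies-per-wafer approximation, discounting wafer-edge loss."""
    wafer_area = math.pi * (diameter_mm / 2) ** 2
    edge_loss = math.pi * diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

gross = gross_dies_per_wafer(WAFER_DIAMETER_MM, DIE_AREA_MM2)
good = int(gross * YIELD_RATE)

print(f"Gross dies per wafer: {gross}")                          # ~90
print(f"Good dies per wafer at {YIELD_RATE:.0%} yield: {good}")  # ~54
print(f"Good dies from {WAFERS:,} wafers: {good * WAFERS:,}")    # ~810,000
```

Even under these cautious assumptions the order works out to hundreds of thousands of chips, which is consistent with a move from pilot runs to volume production.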

NVIDIA found a way to bring in Groq without the usual merger hurdles, using a technology deal valued at $20 to $25 billion. This let NVIDIA fold Groq’s team and ideas into its own operation without the regulatory red tape. Groq CEO Jonathan Ross and his team now work at NVIDIA, building chips designed to complement NVIDIA’s training GPUs.

The AI world started with chips that train AI models, but the focus has now shifted to inference, meaning running those trained models efficiently. Groq’s chips address two big problems:

  • Lower costs: Inference chips are cheaper to run at scale than the powerful processors used for training.
  • Better power use: The chips are built to save energy, cutting into the huge electricity bills of AI data centers.

Groq’s chips use static random-access memory (SRAM) instead of the high-bandwidth memory (HBM) that competitors typically use. This gives them (the energy angle is sketched after this list):

  • Faster data speeds
  • Better energy use
  • Cheaper production costs
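To make the energy claim concrete, here is a minimal sketch comparing the energy needed to move data from on-chip SRAM versus off-chip HBM. The picojoule-per-bit values are assumed, order-of-magnitude figures in the spirit of general memory-hierarchy rules of thumb, not specifications for Groq’s or any vendor’s parts.

```python
# Minimal sketch: energy to move data from on-chip SRAM vs. off-chip HBM.
# Both energy-per-bit values are assumed, order-of-magnitude figures.
SRAM_PJ_PER_BIT = 0.5   # assumed on-chip SRAM access energy (pJ per bit)
HBM_PJ_PER_BIT = 5.0    # assumed off-chip HBM access energy (pJ per bit)

def transfer_energy_joules(gigabytes: float, pj_per_bit: float) -> float:
    """Energy in joules to move the given amount of data once."""
    bits = gigabytes * 8e9
    return bits * pj_per_bit * 1e-12

DATA_GB = 1.0  # hypothetical data moved per inference pass

sram_j = transfer_energy_joules(DATA_GB, SRAM_PJ_PER_BIT)
hbm_j = transfer_energy_joules(DATA_GB, HBM_PJ_PER_BIT)

print(f"SRAM: {sram_j:.3f} J to move {DATA_GB} GB")
print(f"HBM:  {hbm_j:.3f} J to move {DATA_GB} GB")
print(f"HBM costs {hbm_j / sram_j:.0f}x more under these assumptions")
```

Scaled across billions of inference requests, even a single-digit picojoule gap per bit compounds into a meaningful difference in a data center’s power bill, which is the efficiency argument made above.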

NVIDIA may unveil a new inference processor based on this design at its GTC 2026 event.

The Groq deal matters for Samsung Electronics and its 4-nanometer (nm) process. Samsung is now producing these inference chips alongside hardware for other young companies such as HyperExcel, and it has tuned the 4nm process for AI workloads. Winning these orders helps Samsung stay competitive with TSMC in the popular, high-margin 4nm and 5nm chip market.

Analysts see the production increase as a sign that the industry is entering a period in which hardware efficiency determines how quickly businesses adopt AI.
