At SIGGRAPH in Los Angeles, NVIDIA unveiled a new variant of its GH200 "superchip," which is set to be the world's first GPU to ship with HBM3e memory. Designed to crunch the world's most complex generative AI workloads, the GH200 platform is built to push the envelope of accelerated computing. Pooling its strengths in the GPU space with its growing efforts in the CPU space, NVIDIA is aiming to deliver a semi-integrated design to conquer the highly competitive and complex high-performance computing (HPC) market.

Though we have covered some of the finer details of NVIDIA's Grace Hopper-related announcements, including the disclosure that GH200 has entered full production, NVIDIA's latest announcement is a new GH200 variant with HBM3e memory coming later, in Q2 of 2024 to be precise. That is in addition to the already-announced GH200 with HBM3, which is currently in production and due to land later this year. This means NVIDIA will have two versions of the same product: GH200 with HBM3 arriving first, and GH200 with HBM3e to follow.

NVIDIA Grace Hopper Specifications

                        Grace Hopper (GH200) w/HBM3           Grace Hopper (GH200) w/HBM3e
CPU Cores               72                                    72
CPU Architecture        Arm Neoverse V2                       Arm Neoverse V2
CPU Memory Capacity     <=480GB LPDDR5X (ECC)                 <=480GB LPDDR5X (ECC)
CPU Memory Bandwidth    <=512GB/sec                           <=512GB/sec
GPU SMs                 132                                   132?
GPU Tensor Cores        528                                   528?
GPU Architecture        Hopper                                Hopper
GPU Memory Capacity     96GB (Physical), <=96GB (Available)   144GB (Physical), 141GB (Available)
GPU Memory Bandwidth    <=4TB/sec                             5TB/sec
GPU-to-CPU Interface    900GB/sec NVLink 4                    900GB/sec NVLink 4
TDP                     450W – 1000W                          450W – 1000W
Manufacturing Process   TSMC 4N                               TSMC 4N
Interface               Superchip                             Superchip
Available               H2'2023                               Q2'2024

During his keynote at SIGGRAPH 2023, NVIDIA President and CEO Jensen Huang said, "To meet surging demand for generative AI, data centers require accelerated computing platforms with specialized needs." Jensen also went on to say, "The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data center."

NVIDIA's GH200 is set to be the world's first chip to ship with HBM3e memory, an updated version of high-bandwidth memory offering even greater bandwidth and, critically for NVIDIA, higher-capacity 24GB stacks. This will allow NVIDIA to expand local GPU memory from 96GB per GPU to 144GB (6 x 24GB stacks), a 50% increase that should be especially welcome in the AI market, where top models are massive in size and often memory-capacity bound. In a dual-chip configuration, it will be available with up to 282GB of HBM3e memory, which NVIDIA states "delivers up to 3.5x more memory capacity and 3x more bandwidth than the current generation offering."
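As a quick sanity check on the capacity figures above, the math works out as follows (the 16GB-per-stack figure for the HBM3 variant is inferred from the 96GB total, not stated in the announcement):

```python
# Back-of-the-envelope check of the GH200 memory capacity figures.
stacks_per_gpu = 6
hbm3_stack_gb = 16   # assumed: 6 stacks x 16GB = 96GB on the HBM3 variant
hbm3e_stack_gb = 24  # per NVIDIA: 24GB HBM3e stacks

hbm3_total = stacks_per_gpu * hbm3_stack_gb    # 96GB
hbm3e_total = stacks_per_gpu * hbm3e_stack_gb  # 144GB

increase = (hbm3e_total - hbm3_total) / hbm3_total
print(f"{hbm3_total}GB -> {hbm3e_total}GB (+{increase:.0%})")  # 96GB -> 144GB (+50%)
```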

Perhaps one of the most notable details NVIDIA shared is that the incoming GH200 with HBM3e is "fully" compatible with the already-announced NVIDIA MGX server specification, unveiled at Computex. This allows system manufacturers to offer over 100 different server variations and is designed to provide a quick and cost-effective upgrade path.

NVIDIA claims that the GH200 with HBM3e provides up to 50% faster memory performance than the current HBM3 memory, delivering up to 10TB/s of combined bandwidth, with up to 5TB/s per chip.

We have already covered the announced DGX GH200 AI supercomputer built around NVIDIA's Grace Hopper platform. The DGX GH200 is a 24-rack cluster built entirely on NVIDIA's architecture, with a single DGX GH200 combining 256 chips and offering 120TB of CPU-attached memory. These are connected using NVIDIA's NVLink, with up to 96 local L1 switches providing rapid communication between GH200 blades. NVLink allows the deployment to operate over a high-speed, coherent interconnect, giving the GPU full access to CPU memory and allowing access to up to 1.2TB of memory in a dual configuration.
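The 120TB figure lines up with the per-chip spec, assuming the full 480GB of LPDDR5X per Grace CPU from the table above and binary units (1TB = 1024GB); a quick check:

```python
# Sanity check: DGX GH200 CPU-attached memory capacity.
# Assumes 480GB LPDDR5X per Grace CPU (spec table) and 1TB = 1024GB.
chips = 256
lpddr5x_per_cpu_gb = 480

total_gb = chips * lpddr5x_per_cpu_gb
print(f"{total_gb} GB = {total_gb / 1024} TB")  # 122880 GB = 120.0 TB
```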

NVIDIA states that leading system manufacturers are expected to ship GH200-based systems with HBM3e memory sometime in Q2 of 2024. It should also be noted that GH200 with HBM3 memory is currently in full production and is set to launch by the end of this year. We expect to hear more about GH200 with HBM3e memory from NVIDIA in the coming months.
