ICC HPC Solutions

BREAKTHROUGH ACCELERATED CPU FOR AI AND HPC

NVIDIA Grace Hopper™ Solutions

The NVIDIA GH200 Grace Hopper™ Superchip is a breakthrough accelerated CPU designed from the ground up for giant-scale AI and high-performance computing (HPC) applications.

AI REFERENCE ARCHITECTURE

GPU
SOLUTIONS

GPU
WORKSTATIONS

INFINIBAND
FABRICS

GPU
CABLING

LIQUID/AIR
COOLING

ADVANCING AI

NVIDIA Grace Hopper Superchip
The NVIDIA GH200 Grace Hopper Superchip combines the NVIDIA Grace™ and Hopper™ architectures using NVIDIA® NVLink®-C2C to deliver a CPU+GPU coherent memory model for accelerated AI and HPC applications. 
  • CPU+GPU designed for giant-scale AI and HPC 
  • New 900 gigabytes per second (GB/s) coherent interface, 7X faster than PCIe Gen5 
  • Supercharges accelerated computing and generative AI with HBM3 and HBM3e GPU memory
  • Runs all NVIDIA software stacks and platforms, including NVIDIA AI Enterprise, HPC SDK, and Omniverse™
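To make the coherent CPU+GPU memory model above concrete, here is a minimal sketch (illustrative, not NVIDIA sample code), assuming a GH200-class system where NVLink-C2C coherence lets the GPU access system-allocated memory and a recent CUDA toolkit: a kernel operates directly on a plain malloc allocation made on the Grace CPU, with no explicit cudaMemcpy staging.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel scales an array in place; the data lives in CPU (LPDDR5X) memory
// but is directly addressable from the Hopper GPU on a coherent system.
__global__ void scale(double *data, double factor, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t n = 1 << 20;
    // Ordinary system allocation on the Grace CPU -- no cudaMalloc needed
    // on a hardware-coherent Grace Hopper platform (assumption for this sketch).
    double *data = static_cast<double *>(malloc(n * sizeof(double)));
    for (size_t i = 0; i < n; ++i) data[i] = 1.0;

    scale<<<(n + 255) / 256, 256>>>(data, 2.0, n);  // same pointer, no copy
    cudaDeviceSynchronize();

    printf("data[0] = %.1f\n", data[0]);  // CPU immediately sees the GPU's update
    free(data);
    return 0;
}

On systems without this hardware coherence, the same code would typically need cudaMallocManaged or explicit copies, which is exactly the simplification the superchip design targets.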

Grace Hopper Use Cases

GH200 Grace Hopper Superchip Platform for Era of Accelerated Computing and Generative AI

Financial Services

Computer Vision

Deep Learning

KEY FEATURES

The World’s Most Versatile Computing Platform

NVIDIA NVLink-C2C is a memory-coherent, high-bandwidth, and low-latency interconnect for superchips. The heart of the GH200 Grace Hopper Superchip, it delivers up to 900 gigabytes per second (GB/s) of total bandwidth, which is 7X higher than PCIe Gen5 lanes commonly used in accelerated systems.

>> 72-core NVIDIA Grace CPU

>> NVIDIA H100 Tensor Core GPU  

>> Up to 480GB of LPDDR5X memory with error-correction code (ECC)

>> Supports 96GB of HBM3 or 144GB of HBM3e

>> Up to 624GB of fast-access memory

>> NVLink-C2C: 900GB/s of coherent memory

Datasheet
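As a rough back-of-envelope check on the "7X PCIe Gen5" figure quoted above (illustrative numbers, not measurements): a PCIe Gen5 x16 link carries roughly 64GB/s per direction, or about 128GB/s combined, against NVLink-C2C's 900GB/s total.

#include <cstdio>

int main() {
    const double pcie_gen5_x16_gbs = 2 * 64.0;  // GB/s, both directions combined (approximate)
    const double nvlink_c2c_gbs    = 900.0;     // GB/s, total, as quoted for GH200

    printf("NVLink-C2C vs PCIe Gen5 x16: ~%.1fx\n", nvlink_c2c_gbs / pcie_gen5_x16_gbs);  // ~7.0x

    // Idealised time to stream the full 480GB LPDDR5X pool across each link:
    const double pool_gb = 480.0;
    printf("PCIe Gen5 x16: ~%.1fs, NVLink-C2C: ~%.1fs\n",
           pool_gb / pcie_gen5_x16_gbs, pool_gb / nvlink_c2c_gbs);
    return 0;
}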

NVIDIA GRACE HOPPER™ SOLUTIONS

Massive Bandwidth for Compute Efficiency

NVIDIA GH200 fuses an NVIDIA Grace CPU and an NVIDIA Hopper GPU into a single superchip via NVIDIA NVLink-C2C, a 900GB/s total bandwidth chip-to-chip interconnect. NVLink-C2C memory coherency enables programming of both the Grace CPU Superchip and the Grace Hopper Superchip with a unified programming model. 
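To illustrate what that unified model can look like in practice (a hedged sketch, not an official example): with the NVIDIA HPC SDK, the same ISO C++ parallel algorithm can be built for the Grace CPU cores or offloaded to the Hopper GPU by changing a compiler flag, e.g. nvc++ -stdpar=multicore versus nvc++ -stdpar=gpu; exact flags and behaviour depend on the HPC SDK release.

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <execution>
#include <vector>

int main() {
    const std::size_t n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 3.0f;

    // SAXPY written once in standard C++; whether it runs on the Grace cores
    // or the Hopper GPU is decided at build time, with NVLink-C2C coherence
    // handling data movement transparently on a Grace Hopper system.
    std::transform(std::execution::par_unseq, x.begin(), x.end(), y.begin(), y.begin(),
                   [=](float xi, float yi) { return a * xi + yi; });

    printf("y[0] = %.1f\n", y[0]);  // 5.0
    return 0;
}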

ICC offers a range of Grace Hopper™ accelerated solutions. To find out more, get in touch with the team today.

HETEROGENEOUS ACCELERATED PLATFORM

Grace Hopper - All you need to know

The Grace Hopper Superchip is the first true heterogeneous accelerated platform for HPC and AI workloads. It accelerates applications with the combined strengths of both GPUs and CPUs while providing the simplest and most productive heterogeneous programming model to date, enabling scientists and engineers to focus on solving the world’s most important problems.

GPU SOLUTION CORE COMPONENTS

Networking

InfiniBand and Ethernet technologies provide the networking fabric in GPU-accelerated systems. Proper networking is critical to ensuring that your GPU solution is free of bottlenecks and doesn't suffer performance degradation under AI workloads.

HIGH PERFORMANCE INTERCONNECT

NVIDIA COMPATIBLE CABLES
LinkX cables and transceivers are often used to link top-of-rack switches downwards to the network adapters in NVIDIA GPU and CPU servers and storage, and upwards in switch-to-switch applications throughout the network infrastructure.

CORE COMPONENTS

Software

With industry-tailored software and tools from the NVIDIA AI Enterprise software suite, enterprises can rely on this proven platform to build their AI applications faster and more cost-effectively.

ACCELERATE YOUR AI WORKLOADS

ELITE NVIDIA PARTNERS
As an Elite NVIDIA Partner, ICC is well-positioned to support you on your AI journey. Contact us to discuss your GPU requirements today.

WANT TO KNOW MORE?

CONTACT US