H100 PCIe

NVIDIA

NVIDIA · Released March 2022 · 14,592 CUDA Cores · 80GB VRAM

OVERVIEW

The NVIDIA H100 PCIe is a high-performance GPU designed for data centers, targeting AI, machine learning, and high-performance computing workloads. It is part of the Hopper architecture, offering significant improvements in performance and efficiency over its predecessors. The H100 PCIe variant is optimized for PCIe-based systems, providing flexibility in deployment across a wide range of server configurations.

SPECIFICATIONS

FP32: 51 teraFLOPS
FP64: 26 teraFLOPS
FP64 Tensor Core: 51 teraFLOPS
TF32 Tensor Core: 756 teraFLOPS
BFLOAT16 Tensor Core: 1,513 teraFLOPS
FP16 Tensor Core: 1,513 teraFLOPS
FP8 Tensor Core: 3,026 teraFLOPS
INT8 Tensor Core: 3,026 TOPS
GPU memory: 80GB
GPU memory bandwidth: 2TB/s
Decoders: 7 NVDEC, 7 JPEG
Multi-instance GPUs: Up to 7 MIGs @ 10GB each
Interconnect: NVLink 600GB/s; PCIe Gen5 128GB/s
Form factor: PCIe
Max thermal design power (TDP): 300–350W (configurable)
Server options: Partner and NVIDIA-Certified Systems with 1–8 GPUs
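The ratio between the compute and bandwidth figures above determines which workloads the card can keep fed. A back-of-envelope roofline sketch using the listed peak numbers (the "ridge point" names and thresholds are illustrative, not from the source):

```python
# Back-of-envelope roofline math from the spec table above.
# A kernel is memory-bound when its arithmetic intensity
# (FLOPs per byte moved from HBM) falls below peak_flops / peak_bandwidth.

PEAK_FP16_TC = 1513e12   # FP16 Tensor Core peak, FLOPS
PEAK_FP8_TC  = 3026e12   # FP8 Tensor Core peak, FLOPS
BANDWIDTH    = 2e12      # GPU memory bandwidth, bytes/s

ridge_fp16 = PEAK_FP16_TC / BANDWIDTH   # ~756 FLOPs per byte
ridge_fp8  = PEAK_FP8_TC / BANDWIDTH    # ~1513 FLOPs per byte

print(f"FP16 ridge point: {ridge_fp16:.1f} FLOPs/byte")
print(f"FP8  ridge point: {ridge_fp8:.1f} FLOPs/byte")
```

Kernels below these intensities (e.g., memory-heavy inference decoding) are limited by the 2TB/s of HBM bandwidth rather than by Tensor Core throughput.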

WHAT THIS GPU IS GOOD AT

The H100 PCIe excels at AI training and inference, offering substantial performance gains in deep learning workloads due to its advanced tensor cores and high memory bandwidth. It is also well-suited for scientific simulations and data analytics, providing a versatile solution for complex computational tasks.

SERVER OPTIONS

The NVIDIA H100 PCIe is available in NVIDIA-Certified and partner server platforms from vendors such as Dell (PowerEdge), HPE (ProLiant), and Supermicro; NVIDIA's own DGX H100 systems use the SXM variant rather than the PCIe card. H100 capacity is also offered by major clouds, for example AWS P5, Azure ND H100 v5, and Google Cloud A3 instances, providing scalable on-demand access for enterprises.

POWER, THERMALS & NOISE

The H100 PCIe has a configurable TDP of 300–350W and relies on chassis airflow for cooling: the card is passively cooled, so the host server's fans must move enough air to keep it within thermal limits under sustained load. Noise therefore depends on the server's fan curve and cooling setup rather than on the card itself.
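Sustained draw at the TDP translates directly into energy cost. A quick estimate (the $0.12/kWh electricity rate is an illustrative assumption, not from the source):

```python
# Estimate monthly energy use and cost for one H100 PCIe at full TDP.
TDP_WATTS = 350          # upper end of the configurable 300-350W range
HOURS_PER_MONTH = 730    # average hours in a month
RATE_PER_KWH = 0.12      # assumed electricity price, USD/kWh (illustrative)

kwh_per_month = TDP_WATTS / 1000 * HOURS_PER_MONTH   # ~255.5 kWh
cost_per_month = kwh_per_month * RATE_PER_KWH

print(f"~{kwh_per_month:.0f} kWh/month, ~${cost_per_month:.2f}/month at full load")
```

Cooling overhead (PUE) in a real data center would add to this figure; the sketch covers the card alone.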

COMPATIBILITY & SYSTEM FIT

This GPU supports PCIe Gen 5 (up to 128GB/s bidirectional), ensuring high data transfer rates and compatibility with modern server architectures, and its standard PCIe form factor makes it easy to integrate into existing systems. The PCIe variant supports NVLink only via a bridge connecting pairs of cards (600GB/s); it lacks the full NVLink/NVSwitch fabric of the SXM version, which can limit multi-GPU communication at larger scales.
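The quoted 128GB/s for PCIe Gen 5 follows from the link parameters. A short derivation, assuming a x16 link and Gen 5's 128b/130b line encoding:

```python
# Derive PCIe Gen 5 x16 bandwidth from link parameters.
GT_PER_SEC = 32          # Gen 5 signaling rate per lane, gigatransfers/s
LANES = 16
ENCODING = 128 / 130     # 128b/130b encoding: 128 data bits per 130 line bits

# Each transfer carries one bit per lane; divide by 8 for bytes.
per_direction = GT_PER_SEC * LANES * ENCODING / 8   # ~63 GB/s each way
bidirectional = 2 * per_direction                   # ~126 GB/s, quoted as "128GB/s"

print(f"~{per_direction:.1f} GB/s per direction, ~{bidirectional:.1f} GB/s bidirectional")
```

Marketing figures round the raw 2 x 64GB/s signaling rate to "128GB/s"; the usable payload rate after encoding overhead is slightly lower, as the sketch shows.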

LIMITATIONS & KNOWN TRADE-OFFS

While the H100 PCIe offers excellent performance, its NVLink support is limited to bridged pairs of cards, which can be a constraint for applications requiring high-speed all-to-all inter-GPU communication. Its power consumption may also necessitate upgrades to power delivery in some data centers, and availability can be constrained by high demand and production limitations.

PRICING

Vendor   Price   Unit   Currency   Date Added
—        $1.99   hour   USD        Feb 2, 2026
—        $2.39   hour   USD        Feb 2, 2026
—        $2.39   hour   USD        Feb 2, 2026
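At the listed hourly rates, full-time usage costs are easy to project (a sketch using the table's per-hour prices; the 100% utilization assumption is illustrative):

```python
# Project monthly rental cost from the per-hour prices listed above.
HOURS_PER_MONTH = 730    # average hours in a month

for rate in (1.99, 2.39):
    monthly = rate * HOURS_PER_MONTH
    print(f"${rate:.2f}/hr -> ${monthly:,.2f}/month at 100% utilization")
```

Real bills scale with actual utilization, so on-demand rental wins for bursty workloads while sustained use favors reserved capacity or ownership.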

NOTES

Suitable for AI workflows such as chatbots, recommendation engines, and vision AI.

"Preliminary specifications. May be subject to change."