NVIDIA

H100 PCIe

Suited to AI workloads such as chatbots, recommendation engines, and vision AI.

Architecture: NVIDIA Hopper™
Cheapest Rent: $1.99/hr

Hardware Intelligence Dossier

FP32: 51 teraFLOPS
FP64: 26 teraFLOPS
FP64 Tensor Core: 51 teraFLOPS
TF32 Tensor Core: 756 teraFLOPS
FP16 Tensor Core: 1,513 teraFLOPS
BFLOAT16 Tensor Core: 1,513 teraFLOPS
FP8 Tensor Core: 3,026 teraFLOPS
INT8 Tensor Core: 3,026 TOPS
GPU memory: 80GB
GPU memory bandwidth: 2TB/s
Decoders: 7 NVDEC, 7 JPEG
Multi-instance GPUs: Up to 7 MIGs @ 10GB each
Form factor: PCIe
Interconnect: NVLink 600GB/s; PCIe Gen5 128GB/s
Max thermal design power (TDP): 300-350W (configurable)
Server options: Partner and NVIDIA Certified Systems with 1-8 GPUs
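The throughput figures above follow the usual precision-scaling pattern (each halving of precision roughly doubles Tensor Core throughput), and the compute and bandwidth figures together set the arithmetic intensity a kernel needs to stay compute-bound. A quick sanity-check sketch, using only the numbers listed above:

```python
# Spec values copied from the table above (teraFLOPS / TOPS, GB, TB/s).
specs = {
    "tf32_tc": 756,
    "fp16_tc": 1513,
    "fp8_tc": 3026,
    "mem_gb": 80,
    "bw_tbps": 2.0,
}

# FP8 runs at 2x FP16, which runs at ~2x TF32 (rounded to the listed values).
assert specs["fp8_tc"] == 2 * specs["fp16_tc"]
assert abs(specs["fp16_tc"] / specs["tf32_tc"] - 2) < 0.01

# Machine balance: FLOPs available per byte moved at FP16 Tensor Core rates.
# Kernels below this arithmetic intensity are memory-bandwidth-bound here.
flops_per_byte = specs["fp16_tc"] * 1e12 / (specs["bw_tbps"] * 1e12)
print(f"FP16 ops per byte of bandwidth: {flops_per_byte:.1f}")

# MIG: seven 10GB instances account for 70GB of the 80GB of GPU memory.
print(7 * 10, "GB total across MIG slices")
```

The roofline-style ratio is a rough planning number, not a benchmark; real kernels see lower effective throughput.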

Workload Suitability

The H100 PCIe is engineered for mass-scale AI applications. From large language model (LLM) fine-tuning to high-throughput inference serving, the PCIe variant pairs 80GB of GPU memory with 2TB/s of memory bandwidth.
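As a rough illustration of why the 80GB capacity matters for LLM fine-tuning, here is a back-of-envelope memory budget. The function name, the 7B-parameter model size, and the mixed-precision/Adam convention (12 bytes/param of optimizer state) are illustrative assumptions, not figures from this page; activations and fragmentation are excluded:

```python
def finetune_memory_gb(params_b: float, bytes_per_param: int = 2) -> dict:
    """Rough memory budget for full fine-tuning with Adam.

    Assumes FP16 weights and gradients plus FP32 master weights and two
    FP32 moment tensors (a common mixed-precision convention).
    """
    p = params_b * 1e9
    weights = p * bytes_per_param
    grads = p * bytes_per_param
    optimizer = p * 12  # FP32 master copy + two Adam moments
    total_gb = (weights + grads + optimizer) / 1e9
    return {"weights_gb": weights / 1e9, "total_gb": total_gb}

budget = finetune_memory_gb(7)  # hypothetical 7B-parameter model
print(budget)  # ~112GB: full 7B fine-tuning already exceeds one 80GB card,
# which is why multi-GPU sharding or LoRA-style methods are common
```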

Asset Insights

"Preliminary specifications. May be subject to change."
Last Intel Refresh: February 2026

Verified Market Rates

Pricing Matrix

Live Exchange Feed
Provider   Stock Status                                Market Price
Civo       Verified Node (1d ago), High Availability   $1.99 per hour
RunPod     Verified Node (2d ago), High Availability   $2.39 per hour
RunPod     Verified Node (1d ago), High Availability   $2.39 per hour
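The spread between the listed rates compounds quickly at sustained utilization. A small sketch using the prices above (a 720-hour month at 24/7 utilization is an assumption for illustration):

```python
rates = {"Civo": 1.99, "RunPod": 2.39}  # $/hr from the listings above

hours_per_month = 720  # 30-day month, running around the clock
for provider, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{provider}: ${rate * hours_per_month:,.2f}/month")

monthly_savings = (rates["RunPod"] - rates["Civo"]) * hours_per_month
print(f"Choosing the cheapest listing saves ${monthly_savings:,.2f}/month")
```

At these rates the $0.40/hr gap works out to roughly $288/month per GPU, before any spot or commitment discounts a provider may offer.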