The GPU Index
Comprehensive specifications and live market pricing for high-performance AI accelerators.
Hardware Archive
Active Accelerators
Browse technical architecture, performance benchmarks, and variant-specific details for the world's leading GPUs.
NVIDIA
A100 80GB PCIe
FP32: 19.5 TFLOPS
FP64: 9.7 TFLOPS
NVIDIA
A100 80GB SXM
FP32: 19.5 TFLOPS
GPU Memory: 80GB HBM2e
NVIDIA
GB200 NVL4
NVIDIA
GB200 NVL72
NVIDIA
GB300 NVL72
NVIDIA
H100 NVL
GPU Memory: 94GB
NVIDIA
H100 NVL1
NVIDIA
H100 PCIe
FP32: 51 TFLOPS
FP64: 26 TFLOPS
NVIDIA
H100 SXM
GPU Memory: 80GB
NVIDIA
HGX B300
NVIDIA
HGX Rubin NVL8
NVIDIA
L40S
NVIDIA
L40S NVL
RT Cores: 142
CUDA Cores: 18176
NVIDIA
L40S PCIe Gen4 x16
RT Cores: 142
CUDA Cores: 18176
NVIDIA
RTX PRO 6000 Blackwell Server Edition PCIe Gen 5
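The entries above follow a fairly regular shape: vendor, model, FP32/FP64 throughput, memory capacity, TGP, and core counts. A minimal sketch of how one record in this index could be modeled, assuming illustrative TypeScript field names that are not the site's actual schema:

```typescript
// Illustrative record type for an accelerator entry in the index.
// Field names are assumptions based on the specs shown above.
interface AcceleratorEntry {
  vendor: "NVIDIA";            // all current entries are NVIDIA
  model: string;               // e.g. "A100 80GB PCIe", "H100 SXM"
  fp32Tflops?: number;         // FP32 throughput, TFLOPS
  fp64Tflops?: number;         // FP64 throughput, TFLOPS
  memoryGb?: number;           // on-package memory capacity
  tgpWatts?: number | null;    // total power; null when unpublished
  rtCores?: number;            // present on L40S-class parts
  cudaCores?: number;
}

// Example entry taken from the A100 80GB PCIe card above.
const a100Pcie: AcceleratorEntry = {
  vendor: "NVIDIA",
  model: "A100 80GB PCIe",
  fp32Tflops: 19.5,
  fp64Tflops: 9.7,
  memoryGb: 80,
};
```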
Verified Providers
ATLANTIC.NET
AWS
CIVO
COREWEAVE
CRUSOE CLOUD
CUDO COMPUTE
FLUENCE
FLUIDSTACK
FLY.IO
GCORE
GMO GPU CLOUD
GOOGLE CLOUD
HYPERSTACK
NEBIUS
NOVITA
OBLIVUS
ORACLE CLOUD
OVH
TENSORDOCK
TOGETHER
VERDA
VULTR
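The index also advertises live market pricing across the providers listed above. A hypothetical sketch of how a price quote could join a provider to an accelerator model; the fields, function, and units shown are assumptions for illustration, not the site's API:

```typescript
// Illustrative shape for a single market price observation.
interface PriceQuote {
  provider: string;            // e.g. "COREWEAVE", "NEBIUS"
  model: string;               // matches an accelerator model name above
  usdPerGpuHour: number;       // on-demand hourly rate per GPU
  observedAt: string;          // ISO 8601 timestamp of the price sample
}

// Return the cheapest current quote for a given model, if any exist.
function cheapestQuote(quotes: PriceQuote[], model: string): PriceQuote | undefined {
  return quotes
    .filter((q) => q.model === model)
    .sort((a, b) => a.usdPerGpuHour - b.usdPerGpuHour)[0];
}
```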