A100 80GB SXM

NVIDIA

NVIDIA · Released November 2020 · 6,912 CUDA Cores · 80GB HBM2e VRAM · 400W TDP


OVERVIEW

The NVIDIA A100 80GB SXM is a data-center GPU built for AI training, machine learning inference, and high-performance computing workloads. Built on the Ampere architecture, it delivers significant gains in memory capacity and bandwidth over its predecessors. The 80GB variant doubles the memory of the original A100, making it well suited to large models and datasets that would otherwise require sharding across GPUs.

SPECIFICATIONS

FP32: 19.5 TFLOPS
FP64 Tensor Core: 19.5 TFLOPS
Tensor Float 32 (TF32): 312 TFLOPS (624 TFLOPS with sparsity)
BFLOAT16 Tensor Core: 312 TFLOPS (624 TFLOPS with sparsity)
FP16 Tensor Core: 312 TFLOPS (624 TFLOPS with sparsity)
INT8 Tensor Core: 624 TOPS (1,248 TOPS with sparsity)
GPU Memory: 80GB HBM2e
GPU Memory Bandwidth: 2,039GB/s
Multi-Instance GPU: Up to 7 MIGs @ 10GB
Form Factor: SXM
Interconnect: NVIDIA NVLink Bridge
Max Thermal Design Power (TDP): 400W
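One way to read the compute and bandwidth figures together is as a roofline balance point: the ratio of peak FLOPS to memory bandwidth tells you how many operations a kernel must perform per byte moved before it becomes compute-bound rather than bandwidth-bound. A minimal sketch using the numbers from the table above:

```python
# Roofline balance point for the A100 80GB SXM, using the specs above.
# A kernel whose arithmetic intensity (FLOPs per byte of memory traffic)
# falls below this ratio is limited by HBM bandwidth, not by the cores.

PEAK_FP16_TFLOPS = 312    # dense FP16 Tensor Core throughput
PEAK_FP32_TFLOPS = 19.5   # standard FP32 throughput
MEM_BW_GBPS = 2039        # HBM2e memory bandwidth

def balance_point(tflops: float, bw_gbps: float = MEM_BW_GBPS) -> float:
    """FLOPs per byte needed to saturate compute before bandwidth."""
    return tflops * 1e12 / (bw_gbps * 1e9)

print(f"FP16 Tensor Core: {balance_point(PEAK_FP16_TFLOPS):.0f} FLOPs/byte")
print(f"FP32:             {balance_point(PEAK_FP32_TFLOPS):.1f} FLOPs/byte")
```

Large matrix multiplications clear the FP16 threshold easily, which is why the Tensor Core figures are achievable in practice, while elementwise and memory-bound kernels run far below peak.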

WHAT THIS GPU IS GOOD AT

This GPU excels at AI training and inference, offering exceptional performance for deep learning frameworks. Its large memory capacity and high bandwidth make it particularly effective for large-scale models and data-intensive tasks. The A100's support for multi-instance GPU (MIG) technology allows for efficient resource partitioning, enhancing its versatility.
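To make the "large memory capacity" claim concrete, here is a rough sizing sketch for how many dense-model parameters fit in 80GB. The 16 bytes/parameter figure for mixed-precision Adam training (fp16 weights and gradients plus fp32 master weights and two optimizer moments) is a common rule of thumb, not a figure from this page, and it ignores activation memory:

```python
# Rough VRAM sizing for a single A100 80GB.
# bytes_per_param is an assumed rule of thumb: ~16 B/param for
# mixed-precision Adam training, ~2 B/param for fp16 inference.
# Activation memory is ignored, so these are upper bounds.

GIB = 1024**3
VRAM_GIB = 80  # A100 80GB SXM

def max_params_billions(bytes_per_param: float = 16.0) -> float:
    """Upper bound (in billions) on dense-model parameters that fit."""
    return VRAM_GIB * GIB / bytes_per_param / 1e9

print(f"~{max_params_billions(16.0):.1f}B params for Adam training")
print(f"~{max_params_billions(2.0):.1f}B params for fp16 inference")
```

The same arithmetic explains why the 80GB variant matters: halving available memory roughly halves the largest model you can hold without sharding.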

SERVER OPTIONS

The A100 80GB SXM is available in NVIDIA's DGX systems, such as the DGX A100, and in HGX platforms for OEMs like Dell, HPE, and Supermicro. It is also offered in cloud instances like AWS p4d, Azure NDv4, and Google Cloud's A2 instances, providing flexible deployment options for enterprises.

POWER, THERMALS & NOISE

The A100 80GB SXM has a TDP of 400 watts, requiring robust cooling, typically high-airflow air cooling or liquid cooling in data-center chassis built for SXM baseboards. Its thermal design sustains full clocks under heavy workloads provided the chassis delivers adequate airflow. Noise is generally not a concern, since these GPUs are deployed almost exclusively in data centers rather than workstations.
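A quick back-of-envelope on what 400W means operationally. The electricity rate ($0.12/kWh) and the PUE facility-overhead factor below are assumed illustrative values, not figures from this page:

```python
# Energy cost of running an A100 at full 400W TDP.
# price_per_kwh and pue are assumed illustrative values, not from this page.
# PUE (power usage effectiveness) accounts for cooling/facility overhead.

TDP_KW = 0.400  # 400W TDP from the spec table

def energy_cost(hours: float, price_per_kwh: float = 0.12,
                pue: float = 1.5) -> float:
    """USD to run the GPU at full TDP for `hours`, incl. facility overhead."""
    return TDP_KW * hours * price_per_kwh * pue

print(f"${energy_cost(730):.2f} per month at full load")
```

Even under these assumptions, electricity is a small fraction of the hourly rental rates listed below, which are dominated by hardware amortization and margin.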

COMPATIBILITY & SYSTEM FIT

The A100 80GB SXM uses the SXM4 form factor, supporting NVLink for high-speed GPU-to-GPU interconnect. It mounts on dedicated SXM baseboards, such as those in DGX and HGX platforms, rather than in standard PCIe slots; buyers needing PCIe should look at the separate A100 80GB PCIe variant. This restricts the SXM module to purpose-built server environments.

LIMITATIONS & KNOWN TRADE-OFFS

While the A100 80GB SXM offers exceptional performance, its high power consumption and cooling requirements may limit its use to well-equipped data centers. The SXM form factor restricts compatibility to specific platforms, and its premium pricing can be a barrier for smaller organizations. Availability may also be constrained by high demand and production limitations.

PRICING

Vendor            Price   Unit   Currency   Date Added
                  $0.69   hour   USD        Feb 2, 2026
                  $0.69   hour   USD        Feb 2, 2026
                  $0.69   hour   USD        Feb 2, 2026
                  $0.69   hour   USD        Feb 2, 2026
                  $0.69   hour   USD        Feb 2, 2026
                  $0.69   hour   USD        Feb 2, 2026
Thunder Compute   $0.78   hour   USD        Feb 2, 2026
Lambda Labs       $1.29   hour   USD        Feb 2, 2026
                  $1.36   hour   USD        Feb 2, 2026
                  $1.39   hour   USD        Feb 2, 2026
                  $1.39   hour   USD        Feb 2, 2026
Oracle Cloud      $1.39   hour   USD        Feb 2, 2026
                  $1.39   hour   USD        Feb 2, 2026
                  $1.42   hour   USD        Feb 2, 2026
                  $1.47   hour   USD        Feb 2, 2026
                  $1.60   hour   USD        Feb 2, 2026
                  $2.40   hour   USD        Feb 2, 2026
                  $2.40   hour   USD        Feb 2, 2026
                  $3.50   hour   USD        Feb 2, 2026
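For budgeting, the hourly rates above translate into monthly figures as follows. This sketch assumes continuous 24/7 usage at the listed on-demand rate, with no committed-use or spot discounts:

```python
# Convert hourly GPU rental rates into rough monthly costs.
# Assumes continuous usage; 730 is the average number of hours in a month.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_usd: float) -> float:
    """USD per month at continuous usage of the given hourly rate."""
    return hourly_usd * HOURS_PER_MONTH

# Low, mid, and high rates from the pricing table above.
for rate in (0.69, 1.29, 3.50):
    print(f"${rate:.2f}/h -> ${monthly_cost(rate):,.2f}/month")
```

The roughly 5x spread between the cheapest and most expensive listings makes the per-month difference substantial for long-running training jobs.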

NOTES

Suitable for AI, data analytics, and high-performance computing applications that demand high memory bandwidth and sustained compute throughput.

"The A100 80GB SXM variant utilizes the NVIDIA NVLink interconnect for 2 GPUs and is available in NVIDIA HGX A100-Partner and NVIDIA Certified Systems with 4, 8, or 16 GPUs."