A100 80GB PCIe

NVIDIA · Released November 2020 · 6,912 CUDA Cores · 80GB HBM2e VRAM · 300W TDP

OVERVIEW

The NVIDIA A100 80GB PCIe is a high-performance data-center GPU targeting AI, machine learning, and high-performance computing workloads. Built on the Ampere architecture, it delivers significant gains in performance and memory capacity over its Volta-generation predecessors, and the 80GB variant provides ample headroom for large-scale models and datasets.

SPECIFICATIONS

FP64: 9.7 TFLOPS
FP32: 19.5 TFLOPS
Tensor Float 32 (TF32): 156 TFLOPS (312 TFLOPS with sparsity)
BFLOAT16 Tensor Core: 312 TFLOPS (624 TFLOPS with sparsity)
FP16 Tensor Core: 312 TFLOPS (624 TFLOPS with sparsity)
INT8 Tensor Core: 624 TOPS (1,248 TOPS with sparsity)
GPU Memory: 80GB HBM2e
GPU Memory Bandwidth: 1,935 GB/s
Multi-Instance GPU: Up to 7 MIGs @ 10GB
Max Thermal Design Power (TDP): 300W
Form Factor: PCIe, dual-slot, air-cooled
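One way to read the compute and bandwidth figures together is the roofline "ridge point": the arithmetic intensity (FLOPs per byte of memory traffic) at which a kernel shifts from memory-bound to compute-bound. A minimal sketch using the dense FP16 Tensor Core and memory bandwidth numbers above (peak/theoretical figures):

```python
# Roofline ridge point for the A100 80GB PCIe, from the spec-sheet
# figures above (peak numbers, dense FP16 Tensor Core, no sparsity).
fp16_tensor_tflops = 312    # TFLOPS, without sparsity
mem_bw_gb_s = 1935          # HBM2e bandwidth, GB/s

# FLOPs the GPU can execute per byte moved from HBM.
ridge_flops_per_byte = (fp16_tensor_tflops * 1e12) / (mem_bw_gb_s * 1e9)
print(f"ridge point: ~{ridge_flops_per_byte:.0f} FLOPs/byte")
# Kernels with lower arithmetic intensity are memory-bandwidth-bound.
```

With these figures the ridge point works out to roughly 161 FLOPs/byte, which is why bandwidth-light workloads (e.g. large matrix multiplies) get far closer to peak than memory-heavy ones.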

WHAT THIS GPU IS GOOD AT

This GPU excels at AI training and inference, delivering strong performance in deep learning frameworks such as TensorFlow and PyTorch. Its large memory capacity and high bandwidth make it particularly effective for large-scale models and data-intensive tasks, and its Multi-Instance GPU (MIG) support lets a single card be partitioned into up to seven isolated instances for efficient resource sharing.
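As a rough illustration of what 80GB of VRAM buys for inference, here is a weights-only fit estimate. The model sizes are hypothetical examples, and the estimate is optimistic: it ignores activations, KV cache, and framework overhead.

```python
# Weights-only VRAM estimate: N billion params * bytes per param = N*B GB.
# Optimistic: ignores activations, KV cache, and framework overhead.
def weights_gb(params_billion: float, bytes_per_param: int) -> float:
    return params_billion * bytes_per_param

VRAM_GB = 80
for params_b, b, label in [(30, 2, "30B @ FP16"),
                           (70, 2, "70B @ FP16"),
                           (70, 1, "70B @ INT8")]:
    need = weights_gb(params_b, b)
    verdict = "fits" if need <= VRAM_GB else "exceeds 80GB"
    print(f"{label}: ~{need:.0f} GB of weights -> {verdict}")
```

Under these assumptions a 70B-parameter model at FP16 (~140 GB of weights) does not fit on a single card, while the same model quantized to INT8 (~70 GB) does, leaving little room for cache and activations.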

SERVER OPTIONS

The A100 80GB PCIe is available in a wide range of OEM server platforms, including Dell PowerEdge, HPE ProLiant, and Supermicro systems (NVIDIA's DGX systems and HGX A100 baseboards use the SXM variant instead). Cloud providers such as AWS, Azure, and Google Cloud offer A100-powered instances, enabling flexible deployment options.

POWER, THERMALS & NOISE

The A100 80GB PCIe has a 300W TDP and requires robust cooling: the card itself is passively cooled, relying on the server chassis airflow typical of data-center deployments. Noise levels therefore depend on the host server's cooling design rather than on the card. A single-slot liquid-cooled variant is also available for denser, quieter deployments.
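For capacity planning, the 300W TDP translates directly into an upper bound on energy use. A back-of-the-envelope sketch (actual draw varies with load and is usually below TDP):

```python
# Upper-bound annual energy for one card running at its 300W TDP 24/7.
TDP_W = 300
HOURS_PER_YEAR = 24 * 365
kwh_per_year = TDP_W * HOURS_PER_YEAR / 1000
print(f"~{kwh_per_year:.0f} kWh/year per card at full TDP")
```

That comes to about 2,628 kWh per card per year before counting host-system and cooling overhead.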

COMPATIBILITY & SYSTEM FIT

This GPU uses a PCIe 4.0 x16 interface, ensuring high host data transfer rates, and its standard dual-slot PCIe form factor makes it compatible with a wide range of server configurations. The PCIe variant supports NVLink only via a bridge connecting pairs of cards (600GB/s), rather than the full NVLink fabric of the SXM version, which can limit inter-GPU communication bandwidth in larger multi-GPU topologies.
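To put the interface in numbers: PCIe 4.0 signals at 16 GT/s per lane with 128b/130b encoding, so a x16 slot delivers roughly 31.5 GB/s per direction, versus the 600 GB/s NVLink offers on the SXM variant. A quick sketch:

```python
# Per-direction PCIe 4.0 x16 bandwidth from first principles.
GT_PER_S_PER_LANE = 16      # PCIe 4.0 signaling rate, GT/s
ENCODING = 128 / 130        # 128b/130b line-encoding overhead
LANES = 16

pcie_gb_s = GT_PER_S_PER_LANE * ENCODING * LANES / 8  # bits -> bytes
print(f"PCIe 4.0 x16: ~{pcie_gb_s:.1f} GB/s per direction")
print(f"SXM NVLink (600 GB/s) is ~{600 / pcie_gb_s:.0f}x higher")
```

The roughly 19x gap is why communication-heavy multi-GPU training favors the SXM form factor.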

LIMITATIONS & KNOWN TRADE-OFFS

While the A100 80GB PCIe offers excellent performance, its NVLink connectivity is limited to bridged pairs of cards, which can constrain workloads that need high-bandwidth communication across many GPUs. Its 300W power draw requires adequate power delivery and cooling infrastructure, and availability can be constrained by high demand and production limitations.

PRICING

Vendor                        Price    Unit   Currency   Date Added
(not listed)                  $0.44    hour   USD        Feb 2, 2026
Amazon Web Services (AWS)     $1.19    hour   USD        Feb 2, 2026
(not listed)                  $1.39    hour   USD        Feb 2, 2026
Google Cloud Platform (GCP)   $1.39    hour   USD        Feb 2, 2026
Microsoft Azure               $1.39    hour   USD        Feb 2, 2026
GCP (Google Cloud)            $5.00    hour   USD        Feb 2, 2026
AWS (Amazon)                  $10.00   hour   USD        Feb 2, 2026
Azure (Microsoft)             $10.00   hour   USD        Feb 2, 2026
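Hourly rates are easier to compare as monthly figures. A quick conversion assuming round-the-clock use (~730 hours per month), using sample rates from the table above:

```python
# Approximate monthly cost at 24/7 utilization (~730 hours/month).
HOURS_PER_MONTH = 730
for hourly in (0.44, 1.19, 1.39):
    print(f"${hourly:.2f}/hour -> ${hourly * HOURS_PER_MONTH:,.2f}/month")
```

At these rates a single always-on instance runs from roughly $321 to $1,015 per month, before any committed-use or spot discounts.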

NOTES

Ideal for AI, data analytics, and high-performance computing applications.

"The A100 80GB PCIe variant can be used in PCIe dual-slot air-cooled or single-slot liquid-cooled form factors."