NVIDIA A100

The Versatile AI Workhorse

Proven performance for training and inference. Multi-Instance GPU (MIG) support for efficient multi-tenant deployments. Excellent price/performance for production AI.

80GB HBM2e Memory
2 TB/s Memory Bandwidth
7 MIG Instances
600 GB/s NVLink Bandwidth

Balanced Performance for AI

A100 delivers excellent value across training and inference workloads.

Versatile Performance

Excellent for both training and inference workloads. The proven workhorse for production AI systems.

Multi-Instance GPU (MIG)

Partition a single A100 into up to 7 isolated GPU instances for efficient multi-tenant deployments.
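For reference, here is a minimal sketch of how a MIG split can be set up with the standard nvidia-smi tooling. It assumes an A100 80GB, a MIG-capable driver, and root privileges; the 1g.10gb profile name applies to the 80GB model (the 40GB model uses 1g.5gb), and our team can configure this for you.

```python
# Minimal sketch: partition GPU 0 into seven 1g MIG instances with nvidia-smi.
# Assumes an A100 80GB, a MIG-capable driver, and root privileges.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Enable MIG mode on GPU 0 (a GPU reset may be required afterwards).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# List the GPU instance profiles the driver reports for this card.
run(["nvidia-smi", "mig", "-lgip"])

# Create seven 1g.10gb GPU instances, each with a default compute instance (-C).
run(["nvidia-smi", "mig", "-cgi", ",".join(["1g.10gb"] * 7), "-C"])

# Confirm the resulting MIG devices and their UUIDs.
run(["nvidia-smi", "-L"])
```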

40GB or 80GB HBM2e

Choose the right memory capacity for your workload. 80GB models handle larger batch sizes and models.

Excellent Price/Performance

Great value for medium-scale training and inference. Lower cost than H100 for many workloads.

Pre-configured Environment

PyTorch, TensorFlow, CUDA, and popular ML frameworks ready to use. Start training immediately.
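As a quick check after logging in, a short PyTorch snippet (one of the pre-installed frameworks) confirms the instance sees the A100; the TensorFlow check is analogous.

```python
# Quick sanity check on a fresh instance: confirm the stack sees the A100.
import torch

print("PyTorch:", torch.__version__)
print("CUDA runtime:", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("Device:", props.name)
    print("Memory:", round(props.total_memory / 1024**3, 1), "GiB")
```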

Expert Support

Our ML infrastructure team helps with environment setup, MIG configuration, and optimization.

Technical Specifications

A100 specifications for reference.

GPU Memory: 40GB / 80GB HBM2e
Memory Bandwidth: 1.6 TB/s (40GB) / 2 TB/s (80GB)
FP16 Tensor Core Performance: 312 TFLOPS
FP32 Performance: 19.5 TFLOPS
TF32 Tensor Core Performance: 156 TFLOPS
NVLink Bandwidth: 600 GB/s
MIG Support: Up to 7 instances
TDP: 400W (SXM) / 300W (PCIe)
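The TF32 figure above applies to FP32 work routed through the Tensor Cores. In PyTorch, for example, that path is controlled by the flags below (a minimal sketch; the defaults vary by PyTorch version).

```python
# TF32 runs FP32 matmuls and convolutions on the A100 Tensor Cores at reduced
# mantissa precision. In PyTorch it is controlled with these flags.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # matmuls may use TF32
torch.backends.cudnn.allow_tf32 = True         # cuDNN convolutions may use TF32

# Equivalent high-level switch in recent PyTorch versions:
torch.set_float32_matmul_precision("high")
```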

What A100 Excels At

Versatile performance for diverse AI workloads.

Model Training

Train medium-scale models efficiently. A100 offers excellent performance for models up to 30B parameters.
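As an illustration, a typical A100 training step uses mixed precision so the FP16 Tensor Cores do the heavy lifting. The sketch below uses PyTorch's torch.cuda.amp with a placeholder model and data.

```python
# Minimal mixed-precision training step with torch.cuda.amp.
# The model, optimizer, and data are stand-ins for a real workload.
import torch
from torch.cuda.amp import autocast, GradScaler

model = torch.nn.Linear(1024, 1024).cuda()            # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = GradScaler()

for step in range(10):                                 # placeholder dataloader
    x = torch.randn(32, 1024, device="cuda")
    target = torch.randn(32, 1024, device="cuda")

    optimizer.zero_grad(set_to_none=True)
    with autocast():                                   # FP16 autocast uses the Tensor Cores
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()                      # scaled loss avoids FP16 underflow
    scaler.step(optimizer)
    scaler.update()
```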

Production Inference

Deploy models for production inference with consistent performance and MIG for multi-tenant efficiency.
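Here is a minimal inference sketch in PyTorch, using half precision and torch.inference_mode() to keep per-request overhead low; the model is a placeholder.

```python
# Sketch of a low-overhead inference path on A100: half-precision weights plus
# torch.inference_mode() to skip autograd bookkeeping.
import torch

model = torch.nn.Sequential(                 # placeholder model
    torch.nn.Linear(768, 768),
    torch.nn.GELU(),
    torch.nn.Linear(768, 10),
).cuda().half().eval()

@torch.inference_mode()
def predict(batch: torch.Tensor) -> torch.Tensor:
    return model(batch.cuda().half())

logits = predict(torch.randn(64, 768))
print(logits.shape)
```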

Multi-Tenant Deployment

Use MIG to run multiple isolated workloads on a single A100, maximizing GPU utilization.
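Once instances are created, each MIG slice appears to CUDA as its own device. A common pattern is pinning a tenant's process to one slice via CUDA_VISIBLE_DEVICES and the MIG UUID reported by nvidia-smi -L; the UUID below is a placeholder.

```python
# Pin this process to a single MIG instance before CUDA initializes.
# Replace the placeholder UUID with one reported by `nvidia-smi -L`.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

import torch  # imported after setting the variable so CUDA sees only that slice

print(torch.cuda.device_count())   # 1: just the selected MIG instance
print(torch.cuda.get_device_name(0))
```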

HPC Workloads

Scientific computing, simulations, and numerical analysis benefit from A100's balanced performance.

On-Demand & Reserved Pricing

Flexible pricing with excellent value for production workloads.

1x A100 40GB

$1.49 /hr
or $895 /mo reserved
  • 1× NVIDIA A100 40GB
  • 40GB HBM2e
  • 16 vCPU
  • 120GB RAM
  • 500GB NVMe
  • MIG Support Available
Get Started

8x A100 80GB

$14.99 /hr
or $8,995 /mo reserved
  • 8× NVIDIA A100 80GB
  • 640GB HBM2e
  • 192 vCPU
  • 1.9TB RAM
  • 8TB NVMe
  • MIG Support Available
Get Started
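To scale across the eight GPUs in the configuration above, a typical starting point is PyTorch DistributedDataParallel over the pre-installed NCCL backend, which communicates over NVLink. A minimal skeleton with placeholder model and data might look like this.

```python
# ddp_train.py: minimal DistributedDataParallel skeleton for an 8x A100 node.
# Launch with: torchrun --standalone --nproc_per_node=8 ddp_train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = DDP(torch.nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(10):                        # placeholder dataloader
    x = torch.randn(32, 1024, device="cuda")
    loss = model(x).square().mean()
    optimizer.zero_grad(set_to_none=True)
    loss.backward()                           # gradients all-reduced via NCCL over NVLink
    optimizer.step()

dist.destroy_process_group()
```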

Need a custom configuration? Contact us for a quote.

Frequently Asked Questions

Should I choose A100 or H100?

A100 offers excellent price/performance for medium-scale training and inference. H100 is 2-4x faster for large language models thanks to the Transformer Engine. If you're training models under 30B parameters or running inference, A100 often provides better value.

What's the difference between 40GB and 80GB A100?

The 80GB model has twice the memory and higher memory bandwidth (2 TB/s vs 1.6 TB/s). Choose 80GB if you're working with larger models or batch sizes. Compute performance is identical.
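A rough way to size this decision: FP16/BF16 weights take about 2 bytes per parameter, before activations, KV cache, or optimizer state. A back-of-the-envelope sketch (model sizes are illustrative):

```python
# Back-of-the-envelope check for which A100 memory size a model's weights need.
# FP16/BF16 weights take 2 bytes per parameter; activations, KV cache, and
# optimizer state come on top of this.
def weight_gib(params_billion: float, bytes_per_param: int = 2) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

for params in (7, 13, 30):
    print(f"{params}B params -> ~{weight_gib(params):.0f} GiB of FP16 weights")
# 7B  -> ~13 GiB  (fits the 40GB card with headroom)
# 13B -> ~24 GiB  (fits the 40GB card, less headroom)
# 30B -> ~56 GiB  (needs the 80GB card, or multiple GPUs)
```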

What is Multi-Instance GPU (MIG)?

MIG lets you partition a single A100 into up to 7 isolated GPU instances. Each instance has dedicated memory and compute resources, perfect for inference workloads or multi-tenant environments.

What frameworks are pre-installed?

PyTorch, TensorFlow, CUDA 11/12, cuDNN, NCCL, and Hugging Face Transformers are pre-installed. We also provide Docker images with popular ML frameworks.

Is reserved pricing available?

Yes. Reserved instances (1-month, 3-month, annual) offer 20-40% discounts compared to on-demand pricing. A100 reserved instances are particularly cost-effective for steady workloads.

Ready to Get Started?

Deploy Your A100 Instance Today

Talk to our team about your AI workload. We'll help you choose between A100 and H100.