Recommended GPU Instances - AWS Deep Learning AMIs
We recommend a GPU instance for most deep learning purposes. Training new models is faster on a GPU instance than on a CPU instance. Throughput scales sub-linearly as you add GPUs, whether on a single multi-GPU instance or with distributed training across many GPU instances, because inter-GPU communication adds overhead.
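To make the sub-linear scaling concrete, here is a minimal back-of-the-envelope sketch. The 90% per-doubling efficiency figure is a hypothetical illustration, not a measured AWS benchmark; real efficiency depends on the model, interconnect, and framework.

```python
# Hypothetical illustration of sub-linear scaling: each doubling of GPU
# count yields less than 2x throughput because of communication overhead.
def effective_speedup(num_gpus, efficiency_per_doubling=0.9):
    """Estimate speedup over 1 GPU, assuming a fixed efficiency loss
    each time the GPU count doubles (num_gpus should be a power of 2)."""
    speedup = 1.0
    n = 1
    while n < num_gpus:
        speedup *= 2 * efficiency_per_doubling
        n *= 2
    return speedup

for gpus in (1, 2, 4, 8):
    print(f"{gpus} GPUs -> ~{effective_speedup(gpus):.2f}x speedup")
```

Under this assumption, 8 GPUs deliver roughly a 5.8x speedup rather than 8x, which is why profiling your own workload matters before scaling out.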
The following instance types support the DLAMI. For information about GPU instance type options and their uses, see EC2 Instance Types and select Accelerated Computing.
Note
The size of your model should be a factor in choosing an instance. If your model exceeds an instance's available RAM, choose a different instance type with enough memory for your application.
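A rough memory estimate helps with that choice. The sketch below uses a common rule of thumb (weights, gradients, and Adam optimizer state at roughly 4x the weight memory in fp32); the multiplier is an assumption, and activations add further memory on top of this.

```python
def model_memory_gib(num_params, bytes_per_param=4, state_multiplier=4):
    """Rough training-memory estimate in GiB.

    state_multiplier=4 approximates fp32 training with Adam:
    weights + gradients + two optimizer moment buffers.
    Activation memory is workload-dependent and not included.
    """
    return num_params * bytes_per_param * state_multiplier / 2**30

# Example: a 1-billion-parameter model in fp32 with Adam.
print(f"~{model_memory_gib(1_000_000_000):.1f} GiB before activations")
```

By this estimate, a 1B-parameter model already needs on the order of 15 GiB of accelerator memory before activations, so compare against the per-GPU memory of the instance type you are considering.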
- Amazon EC2 P6 Instances have up to 8 NVIDIA Blackwell B200 GPUs.
- Amazon EC2 P5e Instances have up to 8 NVIDIA H200 GPUs.
- Amazon EC2 P5 Instances have up to 8 NVIDIA H100 GPUs.
- Amazon EC2 P4 Instances have up to 8 NVIDIA A100 GPUs.
- Amazon EC2 P3 Instances have up to 8 NVIDIA Tesla V100 GPUs.
- Amazon EC2 G3 Instances have up to 4 NVIDIA Tesla M60 GPUs.
- Amazon EC2 G4 Instances have up to 4 NVIDIA T4 GPUs.
- Amazon EC2 G5 Instances have up to 8 NVIDIA A10G GPUs.
- Amazon EC2 G6 Instances have up to 8 NVIDIA L4 GPUs.
- Amazon EC2 G6e Instances have up to 8 NVIDIA L40S Tensor Core GPUs.
- Amazon EC2 G5g Instances have Arm64-based AWS Graviton2 processors and feature NVIDIA T4G Tensor Core GPUs.
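The list above can be kept at hand as a small lookup table. The table and helper below are a convenience sketch built from this page, not an AWS API; real instance type names carry family suffixes (for example p4d, g4dn), so the helper matches on the longest known family prefix.

```python
# GPU counts are per-family maximums, taken from the list above.
DLAMI_GPU_FAMILIES = {
    "p6":  (8, "NVIDIA Blackwell B200"),
    "p5e": (8, "NVIDIA H200"),
    "p5":  (8, "NVIDIA H100"),
    "p4":  (8, "NVIDIA A100"),
    "p3":  (8, "NVIDIA Tesla V100"),
    "g3":  (4, "NVIDIA Tesla M60"),
    "g4":  (4, "NVIDIA T4"),
    "g5":  (8, "NVIDIA A10G"),
    "g6e": (8, "NVIDIA L40S"),
    "g6":  (8, "NVIDIA L4"),
}

def gpu_for_instance(instance_type):
    """Return (max GPU count, GPU model) for a type like 'p4d.24xlarge',
    or None if the family is not in the table."""
    family = instance_type.split(".", 1)[0]
    # Try longer prefixes first so 'p5e' wins over 'p5', 'g6e' over 'g6'.
    for prefix in sorted(DLAMI_GPU_FAMILIES, key=len, reverse=True):
        if family.startswith(prefix):
            return DLAMI_GPU_FAMILIES[prefix]
    return None

print(gpu_for_instance("p4d.24xlarge"))
print(gpu_for_instance("g6e.xlarge"))
```

For authoritative, up-to-date per-size GPU counts and memory, query the EC2 DescribeInstanceTypes API or consult the EC2 instance type pages rather than a static table.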
DLAMI instances provide tooling to monitor and optimize your GPU processes. For more information about monitoring your GPU processes, see GPU Monitoring and Optimization.
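A common starting point for such monitoring is nvidia-smi, which is included with the NVIDIA driver on GPU DLAMIs. The sketch below parses the CSV output of `nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total --format=csv,noheader,nounits`; the sample text is illustrative, not captured from a real instance.

```python
import csv
import io

# Illustrative sample of nvidia-smi CSV output (index, GPU util %,
# memory used MiB, memory total MiB) -- not from a real instance.
sample = """0, 87, 30210, 40960
1, 12, 1024, 40960"""

def parse_gpu_stats(text):
    """Parse nvidia-smi csv,noheader,nounits output into dicts."""
    rows = []
    reader = csv.reader(io.StringIO(text), skipinitialspace=True)
    for idx, util, used, total in reader:
        rows.append({
            "gpu": int(idx),
            "util_pct": int(util),
            "mem_used_mib": int(used),
            "mem_total_mib": int(total),
        })
    return rows

for gpu in parse_gpu_stats(sample):
    print(gpu)
```

On a live instance you would feed this function the output of a `subprocess.run(["nvidia-smi", ...])` call instead of the hardcoded sample.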
For specific tutorials on working with G5g instances, see The ARM64 DLAMI.