Cloud Compute Instances – Amazon EC2 Instance Types – AWS
General Purpose
General purpose instances provide a balance of compute, memory, and networking resources and can be used for a variety of workloads. These instances are ideal for applications that use these resources in roughly equal proportions, such as web servers and code repositories.
- M8g
- Amazon EC2 M8g instances are powered by AWS Graviton4 processors. They deliver the best price performance in Amazon EC2 for general purpose workloads.
Features:
- Powered by custom-built AWS Graviton4 processors
- Larger instance sizes with up to 3x more vCPUs and memory than M7g instances
- Features the latest DDR5-5600 memory
- Optimized for Amazon EBS by default
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M8gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
- Supports Elastic Fabric Adapter (EFA) on m8g.24xlarge, m8g.48xlarge, m8g.metal-24xl, m8g.metal-48xl, m8gd.24xlarge, m8gd.48xlarge, m8gd.metal-24xl, and m8gd.metal-48xl
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use cases
Applications built on open source software such as application servers, microservices, gaming servers, midsize data stores, and caching fleets.
- M7g
- Amazon EC2 M7g instances are powered by Arm-based AWS Graviton3 processors. They are ideal for general purpose applications.
Features:
- Powered by custom-built AWS Graviton3 processors
- Features the latest DDR5 memory that offers 50% more bandwidth compared to DDR4
- 20% higher enhanced networking bandwidth compared to M6g instances
- EBS-optimized by default
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M7gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
- Supports Elastic Fabric Adapter (EFA) on m7g.16xlarge, m7g.metal, m7gd.16xlarge, and m7gd.metal
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use cases
Applications built on open-source software such as application servers, microservices, gaming servers, midsize data stores, and caching fleets.
- M7i
- Amazon EC2 M7i instances are powered by 4th Generation Intel Xeon Scalable processors and deliver 15% better price performance than M6i instances.
Features:
- Up to 3.2 GHz 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- 2 metal sizes: m7i.metal-24xl and m7i.metal-48xl
- Discrete built-in accelerators (available on M7i bare metal sizes only)—Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and QuickAssist Technology (QAT)—enable efficient offload and acceleration of data operations that help optimize performance for databases, encryption and compression, and queue management workloads
- Latest DDR5 memory, which offers more bandwidth compared to DDR4
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for up to 128 EBS volume attachments per instance
- Up to 192 vCPUs and 768 GiB memory
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- M7i-flex
- Amazon EC2 M7i-flex instances are powered by 4th Generation Intel Xeon Scalable processors and deliver 19% better price performance than M6i instances.
Features:
- The easiest way to achieve price performance and cost benefits in the cloud for the majority of general-purpose workloads
- Up to 3.2 GHz 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- Latest DDR5 memory, which offers more bandwidth compared to DDR4
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- M7a
- Amazon EC2 M7a instances, powered by 4th Generation AMD EPYC processors, deliver up to 50% higher performance compared to M6a instances.
Features:
- Up to 3.7 GHz 4th generation AMD EPYC processors (AMD EPYC 9R14)
- Up to 50 Gbps of networking bandwidth
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS)
- Instance sizes with up to 192 vCPUs and 768 GiB of memory
- SAP-certified instances
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD secure memory encryption (SME)
- Support for new processor capabilities such as AVX-512, VNNI, and bfloat16
Use cases
Applications that benefit from high performance and high throughput such as financial applications, application servers, simulation modeling, gaming, mid-size data stores, application development environments, and caching fleets.
- Mac
- Amazon EC2 Mac instances allow you to run on-demand macOS workloads in the cloud, extending the flexibility, scalability, and cost benefits of AWS to all Apple developers. By using EC2 Mac instances, you can create apps for the iPhone, iPad, Mac, Vision Pro, Apple Watch, Apple TV, and Safari. These instances give developers access to macOS so they can develop, build, test, and sign applications that require the Xcode IDE. EC2 Mac instances are dedicated, bare-metal instances which are accessible in the EC2 console and via the AWS Command Line Interface as Dedicated Hosts.
x86-based EC2 Mac instances are powered by Mac mini computers, featuring:
- Intel’s 8th generation 3.2 GHz (4.6 GHz turbo) Core i7 processors
- 6 physical and 12 logical cores
- 32 GiB of memory
- Instance storage is available through Amazon Elastic Block Store (EBS)
EC2 M1 Mac instances are powered by Apple silicon Mac mini computers, featuring:
- Apple M1 chip with 8 CPU cores
- 8 GPU cores
- 16 GiB of memory
- 16-core Apple Neural Engine
- Instance storage is available through Amazon Elastic Block Store (EBS)
EC2 M1 Ultra Mac instances are powered by Apple silicon Mac Studio computers, featuring:
- Apple M1 Ultra chip with 20 CPU cores
- 64 GPU cores
- 128 GiB of memory
- 32-core Apple Neural Engine
- Instance storage is available through Amazon Elastic Block Store (EBS)
EC2 M2 Mac instances are powered by Apple silicon Mac mini computers, featuring:
- Apple M2 chip with 8 CPU cores
- 10 GPU cores
- 24 GiB of memory
- 16-core Apple Neural Engine
- Instance storage is available through Amazon Elastic Block Store (EBS)
EC2 M2 Pro Mac instances are powered by Apple silicon Mac mini computers, featuring:
- Apple M2 Pro chip with 12 CPU cores
- 19 GPU cores
- 32 GiB of memory
- 16-core Apple Neural Engine
- Instance storage is available through Amazon Elastic Block Store (EBS)
Use Cases
Developing, building, testing, and signing iOS, iPadOS, macOS, visionOS, watchOS, and tvOS applications in the Xcode IDE
- M6g
- Amazon EC2 M6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price/performance over current generation M5 instances and offer a balance of compute, memory, and networking resources for a broad set of workloads.
Features:
- Custom-built AWS Graviton2 processor with 64-bit Arm Neoverse cores
- Support for Enhanced Networking with up to 25 Gbps of network bandwidth
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M6gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
Use Cases
Applications built on open-source software such as application servers, microservices, gaming servers, mid-size data stores, and caching fleets.
- M6i
- Amazon EC2 M6i instances are powered by 3rd Generation Intel Xeon Scalable processors (Ice Lake). This family provides a balance of compute, memory, and network resources, and is a good choice for many applications.
Features:
- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Up to 15% better compute price performance over M5 instances
- Up to 20% higher memory bandwidth per vCPU compared to M5 instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (EBS)
- A new instance size (32xlarge) with 128 vCPUs and 512 GiB of memory
- Supports Elastic Fabric Adapter on the 32xlarge and metal sizes
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX 512) instructions for faster processing of cryptographic algorithms
- With M6id instances, up to 7.6 TB of local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M6i instance
Use Cases
These instances are SAP-Certified and are ideal for workloads such as backend servers supporting enterprise applications (for example Microsoft Exchange and SharePoint, SAP Business Suite, MySQL, Microsoft SQL Server, and PostgreSQL databases), gaming servers, caching fleets, and application development environments.
- M6in
- Amazon EC2 M6in and M6idn instances are ideal for network-intensive workloads such as backend servers, enterprise applications, gaming servers, and caching fleets. Powered by 3rd Generation Intel Xeon Scalable processors (Ice Lake) with an all-core turbo frequency of 3.5 GHz, they offer up to 200 Gbps of network bandwidth and up to 100 Gbps of Amazon EBS bandwidth.
Features:
- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Up to 20% higher memory bandwidth per vCPU compared to M5n and M5dn instances
- Up to 200 Gbps of networking speed, up to 2x that of M5n and M5dn instances
- Up to 100 Gbps of EBS bandwidth, up to 5.2x that of M5n and M5dn instances
- EFA support on the 32xlarge and metal sizes
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX-512) instructions for faster processing of cryptographic algorithms
- With M6idn instances, up to 7.6 TB of local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M6idn instance
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- |
| m6in.large | 2 | 8 | EBS-Only | Up to 25 | Up to 25 |
| m6in.xlarge | 4 | 16 | EBS-Only | Up to 30 | Up to 25 |
| m6in.2xlarge | 8 | 32 | EBS-Only | Up to 40 | Up to 25 |
| m6in.4xlarge | 16 | 64 | EBS-Only | Up to 50 | Up to 25 |
| m6in.8xlarge | 32 | 128 | EBS-Only | 50 | 25 |
| m6in.12xlarge | 48 | 192 | EBS-Only | 75 | 37.5 |
| m6in.16xlarge | 64 | 256 | EBS-Only | 100 | 50 |
| m6in.24xlarge | 96 | 384 | EBS-Only | 150 | 75 |
| m6in.32xlarge | 128 | 512 | EBS-Only | 200**** | 100 |
| m6in.metal | 128 | 512 | EBS-Only | 200**** | 100 |
| m6idn.large | 2 | 8 | 1x118 NVMe SSD | Up to 25 | Up to 25 |
| m6idn.xlarge | 4 | 16 | 1x237 NVMe SSD | Up to 30 | Up to 25 |
| m6idn.2xlarge | 8 | 32 | 1x474 NVMe SSD | Up to 40 | Up to 25 |
| m6idn.4xlarge | 16 | 64 | 1x950 NVMe SSD | Up to 50 | Up to 25 |
| m6idn.8xlarge | 32 | 128 | 1x1900 NVMe SSD | 50 | 25 |
| m6idn.12xlarge | 48 | 192 | 2x1425 NVMe SSD | 75 | 37.5 |
| m6idn.16xlarge | 64 | 256 | 2x1900 NVMe SSD | 100 | 50 |
| m6idn.24xlarge | 96 | 384 | 4x1425 NVMe SSD | 150 | 75 |
| m6idn.32xlarge | 128 | 512 | 4x1900 NVMe SSD | 200**** | 100 |
| m6idn.metal | 128 | 512 | 4x1900 NVMe SSD | 200**** | 100 |

**** For 32xlarge and metal sizes, at least two elastic network interfaces, each attached to a different network card, are required on the instance to achieve 200 Gbps throughput. Each network interface attached to a network card can achieve a maximum of 170 Gbps. For more information, see Network cards.

All instances have the following specs:
- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors
- EBS-optimized
- Enhanced Networking†
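The 200 Gbps footnote above implies a specific launch configuration: two elastic network interfaces, each placed on a different network card. A minimal sketch of the `NetworkInterfaces` parameter for boto3's `run_instances`, assuming placeholder subnet and security-group IDs (the IDs are not from this page):

```python
# Sketch of the RunInstances NetworkInterfaces parameters needed to reach
# 200 Gbps on m6in.32xlarge / m6in.metal: two ENIs, each on a different
# network card. Subnet and security-group IDs are hypothetical placeholders.
# Pass this list to boto3: ec2.run_instances(..., NetworkInterfaces=network_interfaces)
network_interfaces = [
    {
        "DeviceIndex": 0,
        "NetworkCardIndex": 0,          # first network card (max ~170 Gbps)
        "SubnetId": "subnet-EXAMPLE1",  # placeholder
        "Groups": ["sg-EXAMPLE"],       # placeholder
    },
    {
        "DeviceIndex": 1,
        "NetworkCardIndex": 1,          # second network card
        "SubnetId": "subnet-EXAMPLE1",
        "Groups": ["sg-EXAMPLE"],
    },
]

# Each interface is capped at 170 Gbps on its card; together they can reach
# the 200 Gbps aggregate quoted in the table.
assert {ni["NetworkCardIndex"] for ni in network_interfaces} == {0, 1}
```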
Use Cases:
These instances are SAP-Certified and ideal for workloads that can take advantage of high networking throughput, including high-performance file systems, distributed web-scale in-memory caches, caching fleets, real-time big data analytics, Telco applications such as 5G User Plane Function (UPF), and application development environments.
- M6a
- Amazon EC2 M6a instances are powered by 3rd generation AMD EPYC processors and are an ideal fit for general purpose workloads.
Features:
- Up to 3.6 GHz 3rd generation AMD EPYC processors (AMD EPYC 7R13)
- Up to 35% better compute price performance over M5a instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- Instance size with up to 192 vCPUs and 768 GiB of memory
- SAP-Certified instances
- Supports Elastic Fabric Adapter on the 48xlarge size
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD Transparent Single Key Memory Encryption (TSME)
- Support for new AMD Advanced Vector Extensions 2 (AVX2) instructions for faster execution of cryptographic algorithms
Use Cases
These instances are SAP-Certified and are ideal for workloads such as backend servers supporting enterprise applications (for example, Microsoft Exchange and SharePoint, SAP Business Suite, MySQL, Microsoft SQL Server, and PostgreSQL databases), multiplayer gaming servers, caching fleets, and application development environments.
- M5
- Amazon EC2 M5 instances are general purpose instances powered by Intel Xeon® Platinum 8175M or 8259CL processors. They provide a balance of compute, memory, and network resources and are a good choice for many applications.
Features:
- Up to 3.1 GHz Intel Xeon Scalable processor (Skylake 8175M or Cascade Lake 8259CL) with new Intel Advanced Vector Extension (AVX-512) instruction set
- New larger instance size, m5.24xlarge, offering 96 vCPUs and 384 GiB of memory
- Up to 25 Gbps network bandwidth using Enhanced Networking
- Requires HVM AMIs that include drivers for ENA and NVMe
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5 instance
- New 8xlarge and 16xlarge sizes now available.
Use Cases
Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications
- M5n
- Amazon EC2 M5 instances are ideal for workloads that require a balance of compute, memory, and networking resources, including web and application servers, small and mid-sized databases, cluster computing, gaming servers, and caching fleets. The higher-bandwidth M5n and M5dn instance variants are ideal for applications that can take advantage of improved network throughput and packet-rate performance.
Features:
- 2nd generation Intel Xeon Scalable Processors (Cascade Lake 8259CL) with a sustained all-core turbo CPU frequency of 3.1 GHz and a maximum single-core turbo frequency of 3.5 GHz
- Support for the new Intel Vector Neural Network Instructions (AVX-512 VNNI), which help speed up typical machine learning operations like convolution and automatically improve inference performance across a wide range of deep learning workloads
- 25 Gbps of peak bandwidth on smaller instance sizes
- 100 Gbps of network bandwidth on the largest instance size
- Requires HVM AMIs that include drivers for ENA and NVMe
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M5dn instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5 instance
Use Cases
Web and application servers, small and mid-sized databases, cluster computing, gaming servers, caching fleets, and other enterprise applications
- M5zn
- Amazon EC2 M5zn instances deliver the fastest Intel Xeon Scalable processors in the cloud, with an all-core turbo frequency up to 4.5 GHz.
Features:
- 2nd Generation Intel Xeon Scalable Processors (Cascade Lake 8252C) with an all-core turbo frequency up to 4.5 GHz
- Up to 100 Gbps of network bandwidth on the largest instance size and bare metal variant
- Up to 19 Gbps to the Amazon Elastic Block Store
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- The 12xlarge and metal sizes of M5zn instances leverage the latest generation of the Elastic Network Adapter and enable consistent low latency with Elastic Fabric Adapter
Use Cases
M5zn instances are an ideal fit for applications that benefit from extremely high single-thread performance and high throughput, low latency networking, such as gaming, High Performance Computing, and simulation modeling for the automotive, aerospace, energy, and telecommunication industries.
- M5a
- Amazon EC2 M5a instances are the latest generation of General Purpose Instances powered by AMD EPYC 7000 series processors. M5a instances deliver up to 10% cost savings over comparable instance types. With M5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance.
Features:
- AMD EPYC 7000 series processors (AMD EPYC 7571) with an all-core turbo clock speed of 2.5 GHz
- Up to 20 Gbps network bandwidth using Enhanced Networking
- Requires HVM AMIs that include drivers for ENA and NVMe
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5a instance
Use Cases
Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications
- M4
- Amazon EC2 M4 instances provide a balance of compute, memory, and network resources and are a good choice for many applications.
Features:
- Up to 2.4 GHz Intel Xeon Scalable Processor (Broadwell E5-2686 v4 or Haswell E5-2676 v3)
- EBS-optimized by default at no additional cost
- Support for Enhanced Networking
- Balance of compute, memory, and network resources
| Instance | vCPU* | Mem (GiB) | Storage | Dedicated EBS Bandwidth (Mbps) | Network Performance*** |
| --- | --- | --- | --- | --- | --- |
| m4.large | 2 | 8 | EBS-only | 450 | Moderate |
| m4.xlarge | 4 | 16 | EBS-only | 750 | High |
| m4.2xlarge | 8 | 32 | EBS-only | 1,000 | High |
| m4.4xlarge | 16 | 64 | EBS-only | 2,000 | High |
| m4.10xlarge | 40 | 160 | EBS-only | 4,000 | 10 Gigabit |
| m4.16xlarge | 64 | 256 | EBS-only | 10,000 | 25 Gigabit |

All instances have the following specs:
- 2.4 GHz Intel Xeon E5-2676 v3** Processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications.
- T4g
- Amazon EC2 T4g instances are powered by Arm-based custom built AWS Graviton2 processors and deliver up to 40% better price performance over T3 instances for a broad set of burstable general purpose workloads.
T4g instances accumulate CPU credits when a workload is operating below baseline threshold. Each earned CPU credit provides the T4g instance the opportunity to burst with the performance of a full CPU core for one minute when needed. T4g instances can burst at any time for as long as required in Unlimited mode.
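The credit mechanics described above can be sketched with simple arithmetic. This is a simplified model, not an AWS API: it assumes the published t4g.micro figures (2 vCPUs, 10% per-vCPU baseline, 12 credits earned per hour) and the rule that one CPU credit equals one vCPU running at 100% for one minute.

```python
# Simplified model of burstable-instance CPU credits (illustrative helper,
# not an AWS API). One CPU credit = one vCPU at 100% for one minute.

def credit_balance_after(hours, utilization, vcpus=2,
                         credits_per_hour=12, starting_balance=0.0):
    """Net credit balance after running for `hours` at a flat `utilization`
    (fraction of total vCPU capacity). A negative result means surplus
    credits were consumed, which Unlimited mode allows (for a charge)."""
    earned = credits_per_hour * hours
    spent = utilization * vcpus * 60 * hours  # vCPU-minutes of CPU used
    return starting_balance + earned - spent

# A t4g.micro running at its 10% baseline earns credits as fast as it
# spends them, so the balance is (near) unchanged:
print(credit_balance_after(hours=1, utilization=0.10))
# Bursting both vCPUs at 50% for one hour spends 60 credits but earns 12:
print(credit_balance_after(hours=1, utilization=0.50))  # -> -48.0
```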
Features:
- Free trial for t4g.small instances for up to 750 hours per month until December 31, 2024. Refer to the FAQ for details.
- Burstable CPU, governed by CPU Credits, and consistent baseline performance
- Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Baseline Performance / vCPU | CPU Credits Earned / Hr | Network Burst Bandwidth (Gbps)*** | EBS Burst Bandwidth (Mbps) |
| --- | --- | --- | --- | --- | --- | --- |
| t4g.nano | 2 | 0.5 | 5% | 6 | Up to 5 | Up to 2,085 |
| t4g.micro | 2 | 1 | 10% | 12 | Up to 5 | Up to 2,085 |
| t4g.small | 2 | 2 | 20% | 24 | Up to 5 | Up to 2,085 |
| t4g.medium | 2 | 4 | 20% | 24 | Up to 5 | Up to 2,085 |
| t4g.large | 2 | 8 | 30% | 36 | Up to 5 | Up to 2,780 |
| t4g.xlarge | 4 | 16 | 40% | 96 | Up to 5 | Up to 2,780 |
| t4g.2xlarge | 8 | 32 | 40% | 192 | Up to 5 | Up to 2,780 |

All instances have the following specs:
- Custom built AWS Graviton2 Processor with 64-bit Arm cores
- EBS Optimized
- Enhanced Networking
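The two credit columns in the T4g table are consistent with each other: credits earned per hour equal the per-vCPU baseline fraction times the vCPU count times 60 minutes. A quick sanity check of the figures above (a worked example; the formula is inferred from the table, not stated on this page):

```python
# Check: CPU credits earned per hour = baseline-per-vCPU * vCPUs * 60 min,
# for every T4g size listed in the table above.
t4g = {
    # size: (vCPUs, baseline per vCPU, credits earned per hour)
    "t4g.nano":    (2, 0.05, 6),
    "t4g.micro":   (2, 0.10, 12),
    "t4g.small":   (2, 0.20, 24),
    "t4g.medium":  (2, 0.20, 24),
    "t4g.large":   (2, 0.30, 36),
    "t4g.xlarge":  (4, 0.40, 96),
    "t4g.2xlarge": (8, 0.40, 192),
}
for size, (vcpus, baseline, credits) in t4g.items():
    assert round(baseline * vcpus * 60) == credits, size
print("all sizes consistent")
```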
Use Cases:
Micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications.
- T3
- Amazon EC2 T3 instances are next-generation burstable general-purpose instances that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3 instances offer a balance of compute, memory, and network resources and are designed for applications with moderate CPU usage that experience temporary spikes in use.
T3 instances accumulate CPU credits when a workload is operating below baseline threshold. Each earned CPU credit provides the T3 instance the opportunity to burst with the performance of a full CPU core for one minute when needed. T3 instances can burst at any time for as long as required in Unlimited mode.
Features:
- Up to 3.1 GHz Intel Xeon Scalable processor (Skylake 8175M or Cascade Lake 8259CL)
- Burstable CPU, governed by CPU Credits, and consistent baseline performance
- Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- AWS Nitro System and high frequency Intel Xeon Scalable processors result in up to a 30% price performance improvement over T2 instances
| Instance | vCPU* | CPU Credits/hour | Mem (GiB) | Storage | Network Performance (Gbps)*** |
| --- | --- | --- | --- | --- | --- |
| t3.nano | 2 | 6 | 0.5 | EBS-Only | Up to 5 |
| t3.micro | 2 | 12 | 1 | EBS-Only | Up to 5 |
| t3.small | 2 | 24 | 2 | EBS-Only | Up to 5 |
| t3.medium | 2 | 24 | 4 | EBS-Only | Up to 5 |
| t3.large | 2 | 36 | 8 | EBS-Only | Up to 5 |
| t3.xlarge | 4 | 96 | 16 | EBS-Only | Up to 5 |
| t3.2xlarge | 8 | 192 | 32 | EBS-Only | Up to 5 |

All instances have the following specs:
- Up to 3.1 GHz Intel Xeon Scalable processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases:
Micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications
- T3a
- Amazon EC2 T3a instances are next-generation burstable general-purpose instances that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3a instances offer a balance of compute, memory, and network resources and are designed for applications with moderate CPU usage that experience temporary spikes in use. T3a instances deliver up to 10% cost savings over comparable instance types.
T3a instances accumulate CPU credits when a workload is operating below baseline threshold. Each earned CPU credit provides the T3a instance the opportunity to burst with the performance of a full CPU core for one minute when needed. T3a instances can burst at any time for as long as required in Unlimited mode.
Features:
- AMD EPYC 7000 series processors (AMD EPYC 7571) with an all-core turbo clock speed of 2.5 GHz
- Burstable CPU, governed by CPU Credits, and consistent baseline performance
- Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance | vCPU* | CPU Credits/hour | Mem (GiB) | Storage | Network Performance (Gbps)*** |
| --- | --- | --- | --- | --- | --- |
| t3a.nano | 2 | 6 | 0.5 | EBS-Only | Up to 5 |
| t3a.micro | 2 | 12 | 1 | EBS-Only | Up to 5 |
| t3a.small | 2 | 24 | 2 | EBS-Only | Up to 5 |
| t3a.medium | 2 | 24 | 4 | EBS-Only | Up to 5 |
| t3a.large | 2 | 36 | 8 | EBS-Only | Up to 5 |
| t3a.xlarge | 4 | 96 | 16 | EBS-Only | Up to 5 |
| t3a.2xlarge | 8 | 192 | 32 | EBS-Only | Up to 5 |

All instances have the following specs:
- 2.5 GHz AMD EPYC 7000 series processors
- EBS Optimized
- Enhanced Networking†
Use Cases:
Micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications
- T2
- Amazon EC2 T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline.
T2 Unlimited instances can sustain high CPU performance for as long as a workload needs it. For most general-purpose workloads, T2 Unlimited instances will provide ample performance without any additional charges. If the instance needs to run at higher CPU utilization for a prolonged period, it can also do so at a flat additional charge of 5 cents per vCPU-hour.
The baseline performance and ability to burst are governed by CPU Credits. T2 instances receive CPU Credits continuously at a set rate depending on the instance size, accumulating CPU Credits when they are idle, and consuming CPU credits when they are active. T2 instances are a good choice for a variety of general-purpose workloads including micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development, build and stage environments, code repositories, and product prototypes. For more information see Burstable Performance Instances.
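The flat-rate surcharge mentioned above (5 cents per vCPU-hour) makes back-of-the-envelope cost estimates easy. A rough, illustrative calculation, not a billing calculator: it assumes the charge applies to usage sustained above the instance baseline, and infers a 10% baseline for t2.micro from its 6 credits/hour and 1 vCPU.

```python
# Rough T2 Unlimited surcharge estimate (illustrative sketch, not AWS
# billing logic). Rate taken from the text: $0.05 per vCPU-hour above baseline.

def unlimited_surcharge(hours, avg_utilization, vcpus, baseline,
                        rate_per_vcpu_hour=0.05):
    """Approximate dollar surcharge for sustained above-baseline CPU use."""
    excess = max(0.0, avg_utilization - baseline)  # fraction above baseline
    return excess * vcpus * hours * rate_per_vcpu_hour

# A t2.micro (1 vCPU, ~10% baseline inferred from 6 credits/hour) pinned
# at 100% CPU for a 730-hour month costs roughly an extra:
print(round(unlimited_surcharge(730, 1.0, 1, 0.10), 2))  # -> 32.85
```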
Features:
- Up to 3.3 GHz Intel Xeon Scalable processor (Haswell E5-2676 v3 or Broadwell E5-2686 v4)
- High frequency Intel Xeon processors
- Burstable CPU, governed by CPU Credits, and consistent baseline performance
- Low-cost general purpose instance type, and Free Tier eligible*
- Balance of compute, memory, and network resources
* t2.micro only. If configured as T2 Unlimited, charges may apply if average CPU utilization exceeds the baseline of the instance. See documentation for more details.

| Instance | vCPU* | CPU Credits / hour | Mem (GiB) | Storage | Network Performance |
| --- | --- | --- | --- | --- | --- |
| t2.nano | 1 | 3 | 0.5 | EBS-Only | Low |
| t2.micro | 1 | 6 | 1 | EBS-Only | Low to Moderate |
| t2.small | 1 | 12 | 2 | EBS-Only | Low to Moderate |
| t2.medium | 2 | 24 | 4 | EBS-Only | Low to Moderate |
| t2.large | 2 | 36 | 8 | EBS-Only | Low to Moderate |
| t2.xlarge | 4 | 54 | 16 | EBS-Only | Moderate |
| t2.2xlarge | 8 | 81 | 32 | EBS-Only | Moderate |

All instances have the following specs:
- Intel AVX†, Intel Turbo†
- t2.nano, t2.micro, t2.small, t2.medium have up to 3.3 GHz Intel Xeon Scalable processor
- t2.large, t2.xlarge, and t2.2xlarge have up to 3.0 GHz Intel Xeon Scalable processors
Use Cases
Websites and web applications, development environments, build servers, code repositories, micro services, test and staging environments, and line of business applications.
Each vCPU on Graviton-based Amazon EC2 instances is a core of an AWS Graviton processor.
Each vCPU on non-Graviton-based Amazon EC2 instances is a thread of an x86-based processor, except for M7a instances, T2 instances, and m3.medium.
† AVX, AVX2, and Enhanced Networking are only available on instances launched with HVM AMIs.
* This is the default and maximum number of vCPUs available for this instance type. You can specify a custom number of vCPUs when launching this instance type. For more details on valid vCPU counts and how to start using this feature, visit the Optimize CPUs documentation page here.
** These M4 instances may launch on an Intel Xeon E5-2686 v4 (Broadwell) processor.
*** Instances marked with "Up to" Network Bandwidth have a baseline bandwidth and can use a network I/O credit mechanism to burst beyond their baseline bandwidth on a best effort basis. For more information, see instance network bandwidth.
Compute Optimized
Compute Optimized instances are ideal for compute-bound applications that benefit from high-performance processors. Instances in this category are well suited for batch processing workloads, media transcoding, high-performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers, ad server engines, machine learning inference, and other compute-intensive applications.
- C8g
- Amazon EC2 C8g instances are powered by AWS Graviton4 processors. They deliver the best price performance in Amazon EC2 for compute-intensive workloads.
Features:
- Powered by custom-built AWS Graviton4 processors
- Larger instance sizes with up to 3x more vCPUs and memory than C7g instances
- Features the latest DDR5-5600 memory
- Optimized for Amazon EBS by default
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With C8gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
- Supports Elastic Fabric Adapter (EFA) on c8g.24xlarge, c8g.48xlarge, c8g.metal-24xl, c8g.metal-48xl, c8gd.24xlarge, c8gd.48xlarge, c8gd.metal-24xl, and c8gd.metal-48xl
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use cases
High performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modeling, distributed analytics, and CPU-based ML inference.
- C7g
- Amazon EC2 C7g instances are powered by Arm-based AWS Graviton3 processors. They are ideal for compute-intensive workloads.
Features:
- Powered by custom-built AWS Graviton3 processors
- Featuring the latest DDR5 memory that offers 50% more bandwidth compared to DDR4
- 20% higher enhanced networking bandwidth compared to C6g instances
- EBS-optimized by default
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With C7gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
- Supports Elastic Fabric Adapter on c7g.16xlarge, c7g.metal, c7gd.16xlarge, and c7gd.metal instances
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use Cases
High performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modelling, distributed analytics, and CPU-based machine learning inference.
- C7gn
- Amazon EC2 C7gn instances are powered by Arm-based AWS Graviton3E processors. They offer up to 200 Gbps of network bandwidth and up to 3x higher packet-processing performance per vCPU compared with comparable current generation x86-based network optimized instances.
Features:
- Powered by custom-built AWS Graviton3E processors
- Featuring the latest Double Data Rate 5 (DDR5) memory that offers 50% more bandwidth compared to DDR4
- Up to 200 Gbps of networking bandwidth
- Up to 40 Gbps of bandwidth to Amazon Elastic Block Store (EBS)
- 2x higher enhanced network bandwidth compared to C6gn instances
- EBS-optimized, by default
- Supports Elastic Fabric Adapter (EFA) on c7gn.16xlarge and c7gn.metal instances
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use Cases
Network-intensive workloads, such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference
- C7i
- Amazon EC2 C7i instances are powered by 4th Generation Intel Xeon Scalable processors and deliver 15% better price performance than C6i instances.
Features:- Up to 3.2 GHz 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- New Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- 2 metal sizes: c7i.metal-24xl and c7i.metal-48xl
- Discrete built-in accelerators (available on C7i bare metal sizes only)—Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and QuickAssist Technology (QAT)—enable efficient offload and acceleration of data operations that help optimize performance for databases, encryption and compression, and queue management workloads
- Latest DDR5 memory, which offers more bandwidth compared to DDR4
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for up to 128 EBS volume attachments per instance
- Up to 192 vCPUs and 384 GiB memory
- Supports Elastic Fabric Adapter on the 48xlarge size and metal-48xl size
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
All instances have the following specs:
- Up to 3.2 GHz 4th generation Intel Xeon Scalable processors
- EBS Optimized
- Enhanced Networking†
Use Cases
C7i instances are ideal for compute-intensive workloads such as batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
- C7i-flex
- Amazon EC2 C7i-flex instances are powered by 4th Generation Intel Xeon Scalable processors and deliver 19% better price performance than C6i instances.
Features:- Easiest way for you to achieve price performance and cost benefits in the cloud for a majority of your compute-intensive workloads
- Up to 3.2 GHz 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- New Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- Latest DDR5 memory, which offers more bandwidth compared to DDR4
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use Cases
C7i-flex instances are a great first choice to seamlessly run a majority of compute-intensive workloads, including web and application servers, databases, caches, Apache Kafka, and Elasticsearch.
- C7a
- Amazon EC2 C7a instances, powered by 4th generation AMD EPYC processors, deliver up to 50% higher performance compared to C6a instances.
Features:- Up to 3.7 GHz 4th generation AMD EPYC processors (AMD EPYC 9R14)
- Up to 50 Gbps of networking bandwidth
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS)
- Instance sizes with up to 192 vCPUs and 384 GiB of memory
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD secure memory encryption (SME)
- Support for new processor capabilities such as AVX-512, VNNI, and bfloat16
Use cases
Compute-intensive workloads such as batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
- C6g
- Amazon EC2 C6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over current generation C5 instances for compute-intensive applications.
Features:- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Support for Enhanced Networking with up to 25 Gbps of network bandwidth
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- With C6gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
Use Cases
High performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modeling, distributed analytics, and CPU-based machine learning inference.
- C6gn
- Amazon EC2 C6gn instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over current generation C5n instances and provide up to 100 Gbps networking and support for Elastic Fabric Adapter (EFA) for applications that need higher networking throughput, such as high performance computing (HPC), network appliance, real-time video communication, and data analytics.
Features:- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Support for Enhanced Networking with up to 100 Gbps of network bandwidth
- EFA support on c6gn.16xlarge instances
- EBS-optimized by default, 2x EBS bandwidth compared to C5n instances
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use Cases
High performance web servers, scientific modeling, batch processing, distributed analytics, high-performance computing (HPC), network appliance, machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.
- C6i
- Amazon EC2 C6i instances are powered by 3rd generation Intel Xeon Scalable processors and are an ideal fit for compute-intensive workloads.
Features:- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Up to 15% better compute price performance over C5 instances
- Up to 9% higher memory bandwidth per vCPU compared to C5 instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- A new instance size (32xlarge) with 128 vCPUs and 256 GiB of memory
- Supports Elastic Fabric Adapter on the 32xlarge and metal sizes
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX 512) instructions for faster execution of cryptographic algorithms
- With C6id instances, up to 7.6 TB of local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the C6i instance
All instances have the following specs:
- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors
- EBS Optimized
- Enhanced Networking†
Use Cases
Compute-intensive workloads such as batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
- C6in
- Amazon EC2 C6in instances are ideal for network-intensive workloads such as network virtual appliances, data analytics, high performance computing (HPC), and CPU-based AI/ML. They are powered by 3rd Generation Intel Xeon Scalable processors (Ice Lake) with an all-core turbo frequency of 3.5 GHz. C6in instances offer up to 200 Gbps of network bandwidth and up to 100 Gbps Amazon Elastic Block Store (EBS) bandwidth. The C6in.32xlarge and C6in.metal instances support Elastic Fabric Adapter (EFA). EFA is a network interface for Amazon EC2 instances that you can use to run applications that require high levels of internode communications, such as HPC applications using Message Passing Interface (MPI) libraries, at scale on AWS.
Features:- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Support for Enhanced Networking with up to 200 Gbps of network bandwidth, up to 2x compared to C5n instances
- Up to 100 Gbps of EBS bandwidth, up to 5.2x compared to C5n instances
- EFA support on the 32xlarge and metal sizes
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX-512) instructions for faster processing of cryptographic algorithms
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use Cases
Compute-intensive workloads that require high network bandwidth or high packet-processing performance such as distributed computing applications, network virtual appliances, data analytics, high performance computing (HPC), and CPU-based AI/ML.
- C6a
- Amazon C6a instances are powered by 3rd generation AMD EPYC processors and are designed for compute-intensive workloads.
Features:- Up to 3.6 GHz 3rd generation AMD EPYC processors (AMD EPYC 7R13)
- Up to 15% better compute price performance over C5a instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- Up to 192 vCPUs and 384 GiB of memory in the largest size
- SAP-Certified instances
- Supports Elastic Fabric Adapter on the 48xlarge size
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD Transparent Single Key Memory Encryption (TSME)
- Support for AMD Advanced Vector Extensions 2 (AVX2) instructions for faster execution of cryptographic algorithms
All instances have the following specs:
- Up to 3.6 GHz 3rd generation AMD EPYC processors
- EBS Optimized
- Enhanced Networking†
Use Cases
Compute-intensive workloads such as batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
- C5
- Amazon EC2 C5 instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price per compute ratio.
Features:- C5 instances offer a choice of processors based on the size of the instance.
- C5 and C5d 12xlarge, 24xlarge, and metal instance sizes feature custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake 8275CL) with a sustained all-core Turbo frequency of 3.6 GHz and single-core turbo frequency of up to 3.9 GHz.
- Other C5 instance sizes launch on 2nd generation Intel Xeon Scalable Processors (Cascade Lake 8223CL) or 1st generation Intel Xeon Platinum 8000 series (Skylake 8124M) processors with a sustained all-core Turbo frequency of up to 3.4 GHz and single-core turbo frequency of up to 3.5 GHz.
- New larger 24xlarge instance size offering 96 vCPUs, 192 GiB of memory, and optional 3.6 TB local NVMe-based SSDs
- Requires HVM AMIs that include drivers for ENA and NVMe
- With C5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the C5 instance
- Elastic Network Adapter (ENA) provides C5 instances with up to 25 Gbps of network bandwidth and up to 19 Gbps of dedicated bandwidth to Amazon EBS.
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use Cases
High performance web servers, scientific modeling, batch processing, distributed analytics, high-performance computing (HPC), machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.
- C5n
- Amazon EC2 C5n instances are ideal for high-compute applications (including High Performance Computing (HPC) workloads, data lakes, and network appliances such as firewalls and routers) that can take advantage of improved network throughput and packet rate performance. C5n instances offer up to 100 Gbps of network bandwidth and increased memory over comparable C5 instances. C5n.18xlarge instances support Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications, such as High Performance Computing (HPC) applications using the Message Passing Interface (MPI), at scale on AWS.
Features:- 3.0 GHz Intel Xeon Platinum processors (Skylake 8124) with Intel Advanced Vector Extension 512 (AVX-512) instruction set
- Sustained all-core Turbo frequency of up to 3.4 GHz, and single-core turbo frequency of up to 3.5 GHz
- Larger instance size, c5n.18xlarge, offering 72 vCPUs and 192 GiB of memory
- Requires HVM AMIs that include drivers for ENA and NVMe
- Network bandwidth increases to up to 100 Gbps, delivering increased performance for network intensive applications.
- EFA support on c5n.18xlarge instances
- 33% higher memory footprint compared to C5 instances
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Model | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
| --- | --- | --- | --- | --- | --- |
| c5n.large | 2 | 5.25 | EBS-Only | Up to 25 | Up to 4,750 |
| c5n.xlarge | 4 | 10.5 | EBS-Only | Up to 25 | Up to 4,750 |
| c5n.2xlarge | 8 | 21 | EBS-Only | Up to 25 | Up to 4,750 |
| c5n.4xlarge | 16 | 42 | EBS-Only | Up to 25 | 4,750 |
| c5n.9xlarge | 36 | 96 | EBS-Only | 50 | 9,500 |
| c5n.18xlarge | 72 | 192 | EBS-Only | 100 | 19,000 |
| c5n.metal | 72 | 192 | EBS-Only | 100 | 19,000 |

All instances have the following specs:
- 3.0 GHz Intel Xeon Platinum Processor
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
High performance web servers, scientific modeling, batch processing, distributed analytics, high-performance computing (HPC), machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.
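The c5n spec table above lends itself to a simple size-selection helper. The sketch below (the helper name is ours; vCPU and memory figures are copied from the table) picks the smallest c5n size that satisfies a given vCPU and memory requirement:

```python
# (size, vCPU, memory GiB) per c5n size, in ascending order, from the table above.
C5N_SIZES = [
    ("c5n.large", 2, 5.25),
    ("c5n.xlarge", 4, 10.5),
    ("c5n.2xlarge", 8, 21),
    ("c5n.4xlarge", 16, 42),
    ("c5n.9xlarge", 36, 96),
    ("c5n.18xlarge", 72, 192),
]

def smallest_c5n_for(vcpus, mem_gib):
    """Return the smallest c5n size meeting both requirements, or None."""
    for name, v, m in C5N_SIZES:
        if v >= vcpus and m >= mem_gib:
            return name
    return None

print(smallest_c5n_for(8, 16))    # c5n.2xlarge
print(smallest_c5n_for(40, 100))  # c5n.18xlarge
```

The same pattern works for any family's table; because sizes are sorted ascending, the first match is the cheapest fit.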
- C5a
- Amazon EC2 C5a instances offer leading x86 price-performance for a broad set of compute-intensive workloads.
Features:- 2nd generation AMD EPYC 7002 series processors (AMD EPYC 7R32) running at frequencies up to 3.3 GHz
- Elastic Network Adapter (ENA) provides C5a instances with up to 20 Gbps of network bandwidth and up to 9.5 Gbps of dedicated bandwidth to Amazon EBS
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- With C5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the C5a instance
| Model | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
| --- | --- | --- | --- | --- | --- |
| c5a.large | 2 | 4 | EBS-Only | Up to 10 | Up to 3,170 |
| c5a.xlarge | 4 | 8 | EBS-Only | Up to 10 | Up to 3,170 |
| c5a.2xlarge | 8 | 16 | EBS-Only | Up to 10 | Up to 3,170 |
| c5a.4xlarge | 16 | 32 | EBS-Only | Up to 10 | Up to 3,170 |
| c5a.8xlarge | 32 | 64 | EBS-Only | 10 | 3,170 |
| c5a.12xlarge | 48 | 96 | EBS-Only | 12 | 4,750 |
| c5a.16xlarge | 64 | 128 | EBS-Only | 20 | 6,300 |
| c5a.24xlarge | 96 | 192 | EBS-Only | 20 | 9,500 |
| c5ad.large | 2 | 4 | 1 x 75 NVMe SSD | Up to 10 | Up to 3,170 |
| c5ad.xlarge | 4 | 8 | 1 x 150 NVMe SSD | Up to 10 | Up to 3,170 |
| c5ad.2xlarge | 8 | 16 | 1 x 300 NVMe SSD | Up to 10 | Up to 3,170 |
| c5ad.4xlarge | 16 | 32 | 2 x 300 NVMe SSD | Up to 10 | Up to 3,170 |
| c5ad.8xlarge | 32 | 64 | 2 x 600 NVMe SSD | 10 | 3,170 |
| c5ad.12xlarge | 48 | 96 | 2 x 900 NVMe SSD | 12 | 4,750 |
| c5ad.16xlarge | 64 | 128 | 2 x 1200 NVMe SSD | 20 | 6,300 |
| c5ad.24xlarge | 96 | 192 | 2 x 1900 NVMe SSD | 20 | 9,500 |

All instances have the following specs:
- Up to 3.3 GHz 2nd generation AMD EPYC Processor
- EBS Optimized
- Enhanced Networking†
Use Cases
C5a instances are ideal for workloads requiring high vCPU and memory bandwidth such as batch processing, distributed analytics, data transformations, gaming, log analysis, web applications, and other compute-intensive workloads.
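The c5ad table lists instance storage as "N x M NVMe SSD" (number of devices times per-device capacity in GB). A small, hedged parser for that convention (the function name is ours) makes total local capacity easy to compute:

```python
import re

def total_instance_storage_gb(storage):
    """Total local storage in GB from a table entry like '2 x 600 NVMe SSD'.

    'EBS-Only' entries have no local storage, so they return 0.
    """
    m = re.match(r"(\d+) x (\d+)", storage)
    return int(m.group(1)) * int(m.group(2)) if m else 0

print(total_instance_storage_gb("2 x 600 NVMe SSD"))  # 1200
print(total_instance_storage_gb("EBS-Only"))          # 0
```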
- C4
- C4 instances are optimized for compute-intensive workloads and deliver very cost-effective high performance at a low price per compute ratio.
Features:- Up to 2.9 GHz Intel Xeon Scalable Processor (Haswell E5-2666 v3)
- High frequency Intel Xeon E5-2666 v3 (Haswell) processors optimized specifically for EC2
- Default EBS-optimized for increased storage performance at no additional cost
- Higher networking performance with Enhanced Networking supporting Intel 82599 VF
- Requires Amazon VPC, Amazon EBS and 64-bit HVM AMIs
| Instance | vCPU* | Mem (GiB) | Storage | Dedicated EBS Bandwidth (Mbps) | Network Performance |
| --- | --- | --- | --- | --- | --- |
| c4.large | 2 | 3.75 | EBS-Only | 500 | Moderate |
| c4.xlarge | 4 | 7.5 | EBS-Only | 750 | High |
| c4.2xlarge | 8 | 15 | EBS-Only | 1,000 | High |
| c4.4xlarge | 16 | 30 | EBS-Only | 2,000 | High |
| c4.8xlarge | 36 | 60 | EBS-Only | 4,000 | 10 Gigabit |

All instances have the following specs:
- Up to 2.9 GHz Intel Xeon Scalable Processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
High performance front-end fleets, web-servers, batch processing, distributed analytics, high performance science and engineering applications, ad serving, MMO gaming, and video-encoding.
Each vCPU on Graviton-based Amazon EC2 instances is a core of an AWS Graviton processor.
Each vCPU on non-Graviton-based Amazon EC2 instances is a thread of an x86-based processor, except for C7a instances.
† AVX, AVX2, and Enhanced Networking are only available on instances launched with HVM AMIs.
* This is the default and maximum number of vCPUs available for this instance type. You can specify a custom number of vCPUs when launching this instance type. For details on valid vCPU counts and how to start using this feature, see the Optimize CPUs documentation.
*** Instances marked with "Up to" Network Bandwidth have a baseline bandwidth and can use a network I/O credit mechanism to burst beyond their baseline bandwidth on a best effort basis. For more information, see instance network bandwidth.
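The vCPU footnotes above translate directly into a physical-core calculation. A minimal sketch (the function name and `smt` flag are ours; the one-thread-per-core exception for C7a is taken from the note above):

```python
def physical_cores(vcpus, graviton=False, smt=True):
    """Physical cores behind a vCPU count.

    On Graviton-based instances each vCPU is a core. On most x86-based
    instances each vCPU is a hardware thread (two threads per core). C7a
    is an x86 exception with one thread per core, modeled here as smt=False.
    """
    if graviton or not smt:
        return vcpus
    return vcpus // 2

print(physical_cores(64, graviton=True))  # 64
print(physical_cores(64))                 # 32 (x86 with SMT)
print(physical_cores(64, smt=False))      # 64 (e.g., C7a)
```

This distinction matters for per-core licensed software and for HPC codes that pin one process per physical core.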
Memory Optimized
Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
- R8g
- Amazon EC2 R8g instances are powered by AWS Graviton4 processors. They deliver the best price performance in Amazon EC2 for memory-intensive workloads.
Features:- Powered by custom-built AWS Graviton4 processors
- Larger instance sizes with up to 3x more vCPUs and memory than R7g instances
- Features the latest DDR5-5600 memory
- Optimized for Amazon EBS by default
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With R8gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
- Supports Elastic Fabric Adapter (EFA) on r8g.24xlarge, r8g.48xlarge, r8g.metal-24xl, r8g.metal-48xl, r8gd.24xlarge, r8gd.48xlarge, r8gd.metal-24xl, and r8gd.metal-48xl
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use cases
Memory-intensive workloads such as open source databases, in-memory caches, and real-time big data analytics.
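EFA is only available on the specific sizes enumerated in the feature list above, so size selection should check for it explicitly. A hedged sketch (the helper name is ours; the set is copied from the R8g feature list):

```python
# Sizes listed above as supporting Elastic Fabric Adapter on R8g/R8gd.
R8G_EFA_SIZES = {
    "r8g.24xlarge", "r8g.48xlarge", "r8g.metal-24xl", "r8g.metal-48xl",
    "r8gd.24xlarge", "r8gd.48xlarge", "r8gd.metal-24xl", "r8gd.metal-48xl",
}

def supports_efa(instance_type):
    """True if the given R8g/R8gd size appears in the EFA-capable list above."""
    return instance_type in R8G_EFA_SIZES

print(supports_efa("r8g.24xlarge"))  # True
print(supports_efa("r8g.12xlarge"))  # False
```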
- R7g
- Amazon EC2 R7g instances are powered by AWS Graviton3 processors. They are ideal for memory-intensive workloads.
Features:- Powered by custom-built AWS Graviton3 processors
- Features DDR5 memory that offers 50% more bandwidth compared to DDR4
- 20% higher enhanced networking bandwidth compared to R6g instances
- EBS-optimized by default
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With R7gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
- Supports Elastic Fabric Adapter (EFA) on r7g.16xlarge, r7g.metal, r7gd.16xlarge, and r7gd.metal instances
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use cases
Memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics.
- R7i
- Amazon EC2 R7i instances are powered by 4th Generation Intel Xeon Scalable processors and deliver 15% better price performance than R6i instances.
Features:- Up to 3.2 GHz 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- New Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- 2 metal sizes: r7i.metal-24xl and r7i.metal-48xl
- Discrete built-in accelerators (available on R7i bare metal sizes only)—Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and QuickAssist Technology (QAT)—enable efficient offload and acceleration of data operations that help optimize performance for databases, encryption and compression, and queue management workloads
- Latest DDR5 memory, which offers more bandwidth compared to DDR4
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for up to 128 EBS volume attachments per instance
- Up to 192 vCPUs and 1,536 GiB memory
- Supports Elastic Fabric Adapter on the 48xlarge size and metal-48xl size
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
All instances have the following specs:
- Up to 3.2 GHz 4th generation Intel Xeon Scalable processors
- EBS Optimized
- Enhanced Networking†
Use Cases
R7i instances are SAP-certified and ideal for all memory-intensive workloads (SQL and NoSQL databases), distributed web scale in-memory caches (Memcached and Redis), in-memory databases (SAP HANA), and real-time big data analytics (Apache Hadoop and Apache Spark clusters).
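The largest sizes quoted in the C7i and R7i feature lists (192 vCPUs with 384 GiB and 1,536 GiB respectively) imply each family's memory-to-vCPU ratio, which is the practical difference between compute optimized and memory optimized. A quick check using those figures:

```python
# Max-size figures from the C7i and R7i feature lists above.
c7i_mem_per_vcpu = 384 / 192    # GiB per vCPU, compute optimized
r7i_mem_per_vcpu = 1536 / 192   # GiB per vCPU, memory optimized

print(c7i_mem_per_vcpu)  # 2.0
print(r7i_mem_per_vcpu)  # 8.0
```

So R7i offers 4x the memory per vCPU of C7i at the same vCPU count, which is why in-memory databases land on the R family.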
- R7iz
- Amazon EC2 R7iz instances are powered by 4th Generation Intel Xeon Scalable processors and are an ideal fit for high CPU and memory-intensive workloads.
Features:- 4th Generation Intel Xeon Scalable Processors (Sapphire Rapids 6455B) with an all-core turbo frequency up to 3.9 GHz
- Up to 20% higher compute performance than z1d instances
- New Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations – available in all sizes
- Discrete built-in accelerators (available on R7iz bare metal sizes only) - Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and QuickAssist Technology (QAT) - enable efficient offload and acceleration of data operations that help in optimizing performance for databases, encryption and compression, and queue management workloads
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (EBS)
- Instance size with up to 128 vCPUs and 1,024 GiB of memory
- Supports Elastic Fabric Adapter on the 32xlarge size and the metal-32xl size
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use Cases
High-compute and memory-intensive workloads such as frontend Electronic Design Automation (EDA), relational database workloads with high per-core licensing fees, and financial, actuarial, and data analytics simulation workloads.
- R7a
- Amazon EC2 R7a instances, powered by 4th generation AMD EPYC processors, deliver up to 50% higher performance compared to R6a instances.
Features:- Up to 3.7 GHz 4th generation AMD EPYC processors (AMD EPYC 9R14)
- Up to 50 Gbps of networking bandwidth
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- Instance sizes with up to 192 vCPUs and 1,536 GiB of memory
- SAP-certified instances
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD secure memory encryption (SME)
- Support for new processor capabilities such as AVX3-512, VNNI, and bfloat16.
Use cases
Memory-intensive workloads, such as SQL and NoSQL databases, distributed web scale in-memory caches, in-memory databases, real-time big data analytics, and Electronic Design Automation (EDA)
- R6g
- Amazon EC2 R6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over current generation R5 instances for memory-intensive applications.
Features:- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Support for Enhanced Networking with up to 25 Gbps of network bandwidth
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- With R6gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
Use Cases
Memory-intensive applications such as open-source databases, in-memory caches, and real time big data analytics
- R6i
- Amazon R6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) and are an ideal fit for memory-intensive workloads.
Features:- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Up to 15% better compute price performance over R5 instances
- Up to 20% higher memory bandwidth per vCPU compared to R5 instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- A new instance size (32xlarge) with 128 vCPUs and 1,024 GiB of memory
- Supports Elastic Fabric Adapter on the 32xlarge and metal sizes
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extension (AVX 512) instructions for faster execution of cryptographic algorithms
- With R6id instances, up to 7.6 TB of local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R6i instance
Use Cases
Memory-intensive workloads such as SAP, SQL and NoSQL databases, distributed web scale in-memory caches like Memcached and Redis, in-memory databases like SAP HANA, and real time big data analytics like Hadoop and Spark clusters.
- R6in
- Amazon EC2 R6in and R6idn instances are ideal for memory-intensive workloads that can take advantage of high networking bandwidth, such as SAP, SQL and NoSQL databases, and in-memory databases, such as SAP HANA. R6in and R6idn instances offer up to 200 Gbps of network bandwidth and up to 100 Gbps Amazon Elastic Block Store (EBS) bandwidth.
Features:- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Up to 20% higher memory bandwidth per vCPU compared to R5n and R5dn instances
- Up to 200 Gbps of networking speed, up to 2x that of R5n and R5dn instances
- Up to 100 Gbps of EBS bandwidth, which is up to 1.6x more than R5b instances
- Supports Elastic Fabric Adapter (EFA) on the 32xlarge and metal sizes
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX-512) instructions for faster processing of cryptographic algorithms
- With R6idn instances, up to 7.6 TB of local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the R6idn instance lifetime
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use Cases
Memory-intensive workloads that can take advantage of high networking throughput, such as SAP, SQL and NoSQL databases, and in-memory databases, such as SAP HANA.
- R6a
- Amazon EC2 R6a instances are powered by 3rd generation AMD EPYC processors and are an ideal fit for memory intensive workloads.
Features:- Up to 3.6 GHz 3rd generation AMD EPYC processors (AMD EPYC 7R13)
- Up to 35% better compute price performance over R5a instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- Instance size with up to 192 vCPUs and 1,536 GiB of memory
- SAP-Certified instances
- Supports Elastic Fabric Adapter on the 48xlarge and metal sizes
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD Transparent Single Key Memory Encryption (TSME)
- Support for AMD Advanced Vector Extensions 2 (AVX2) instructions for faster execution of cryptographic algorithms
Use Cases
Memory-intensive workloads, such as SAP, SQL, and NoSQL databases; distributed web scale in-memory caches, such as Memcached and Redis; in-memory databases and real-time big data analytics, such as Hadoop and Spark clusters; and other enterprise applications
- R5
- Amazon EC2 R5 instances deliver 5% more memory per vCPU than R4, and the largest size provides 768 GiB of memory. In addition, R5 instances deliver a 10% price-per-GiB improvement and ~20% higher CPU performance over R4.
Features:- Up to 3.1 GHz Intel Xeon® Platinum 8000 series processors (Skylake 8175M or Cascade Lake 8259CL) with new Intel Advanced Vector Extension (AVX-512) instruction set
- Up to 768 GiB of memory per instance
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- With R5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5 instance
- New 8xlarge and 16xlarge sizes now available.
Use Cases
R5 instances are well suited for memory intensive applications such as high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications.
- R5n
- Amazon EC2 R5 instances are ideal for memory-bound workloads including high performance databases, distributed web scale in-memory caches, mid-sized in-memory databases, real-time big data analytics, and other enterprise applications. The higher-bandwidth variants, R5n and R5dn, are ideal for applications that can take advantage of improved network throughput and packet rate performance.
Features:- 2nd generation Intel Xeon Scalable Processors (Cascade Lake 8259CL) with a sustained all-core Turbo CPU frequency of 3.1 GHz and maximum single core turbo frequency of 3.5 GHz
- Support for the new Intel Vector Neural Network Instructions (AVX-512 VNNI) which will help speed up typical machine learning operations like convolution, and automatically improve inference performance over a wide range of deep learning workloads
- 25 Gbps of peak bandwidth on smaller instance sizes
- 100 Gbps of network bandwidth on the largest instance size
- Requires HVM AMIs that include drivers for ENA and NVMe
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With R5dn instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5 instance
Use Cases
High performance databases, distributed web scale in-memory caches, mid-sized in-memory databases, real-time big data analytics, and other enterprise applications
- R5b
- Amazon EC2 R5b instances are EBS-optimized variants of memory-optimized R5 instances. R5b instances increase EBS performance by 3x compared to same-sized R5 instances. R5b instances deliver up to 60 Gbps bandwidth and 260K IOPS of EBS performance, the fastest block storage performance on EC2.
Features:- Custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake 8259CL) with a sustained all-core Turbo CPU frequency of 3.1 GHz and maximum single core turbo frequency of 3.5 GHz
- Up to 96 vCPUs, Up to 768 GiB of Memory
- Up to 25 Gbps network bandwidth
- Up to 60 Gbps of EBS bandwidth
Use Cases
High performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics.
- R5a
- Amazon EC2 R5a instances are the latest generation of Memory Optimized instances ideal for memory-bound workloads and are powered by AMD EPYC 7000 series processors. R5a instances deliver up to 10% lower cost per GiB of memory than comparable instances.
Features:- AMD EPYC 7000 series processors (AMD EPYC 7571) with an all core turbo clock speed of 2.5 GHz
- Up to 20 Gbps network bandwidth using Enhanced Networking
- Up to 768 GiB of memory per instance
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With R5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5a instance
Use Cases
R5a instances are well suited for memory intensive applications such as high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications.
- R4
- Amazon EC2 R4 instances are optimized for memory-intensive applications and offer better price per GiB of RAM than R3.
Features:- High Frequency Intel Xeon scalable (Broadwell E5-2686 v4) processors
- DDR4 Memory
- Support for Enhanced Networking
| Instance | vCPU | Mem (GiB) | Storage | Networking Performance (Gbps)*** |
| --- | --- | --- | --- | --- |
| r4.large | 2 | 15.25 | EBS-Only | Up to 10 |
| r4.xlarge | 4 | 30.5 | EBS-Only | Up to 10 |
| r4.2xlarge | 8 | 61 | EBS-Only | Up to 10 |
| r4.4xlarge | 16 | 122 | EBS-Only | Up to 10 |
| r4.8xlarge | 32 | 244 | EBS-Only | 10 |
| r4.16xlarge | 64 | 488 | EBS-Only | 25 |

All instances have the following specs:
- Up to 2.3 GHz Intel Xeon Scalable Processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
High performance databases, data mining & analysis, in-memory databases, distributed web scale in-memory caches, applications performing real-time processing of unstructured big data, Hadoop/Spark clusters, and other enterprise applications.
- U7i
- Amazon EC2 High Memory U7i instances are purpose built to run large in-memory databases such as SAP HANA and Oracle.
Features:- Offer up to 1920 vCPUs
- Featuring DDR5 memory
- Up to 32 TiB of instance memory
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Virtualized instances are available with On-Demand and with 1-year and 3-year Savings Plan purchase options*
* U7inh instances are available as a 3-year Instance Savings Plan Purchase.
U7i instances, powered by fourth generation Intel Xeon Scalable Processors (Sapphire Rapids), offer up to 32 TiB of the latest DDR5 memory and up to 1920 vCPUs.
All instances have the following specs: - Intel AVX, Intel AVX2, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Ideal for running large enterprise databases, including SAP HANA in-memory database in the cloud. Certified by SAP for running Business Suite on HANA, the next-generation Business Suite S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see SAP HANA Hardware Directory.
- High Memory (U-1)
- Amazon EC2 High Memory (U-1) instances are purpose built to run large in-memory databases, including production deployments of SAP HANA in the cloud.
Features:- Now available in both bare metal and virtualized instances
- From 3 to 24 TiB of instance memory
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Virtualized instances are available with On-Demand and with 1-year and 3-year Savings Plan purchase options
Use Cases
Ideal for running large enterprise databases, including production installations of SAP HANA in-memory database in the cloud. Certified by SAP for running Business Suite on HANA, the next-generation Business Suite S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments.
- X8g
- Amazon EC2 X8g instances are powered by AWS Graviton4 processors. They deliver the best price performance among Amazon EC2 X-series instances.
Features:- Powered by custom-built AWS Graviton4 processors
- Larger instance sizes with up to 3x more vCPUs and memory than X2gd instances
- Features the latest DDR5-5600 memory
- Optimized for Amazon EBS by default
- Supports Elastic Fabric Adapter (EFA) on x8g.24xlarge, x8g.48xlarge, x8g.metal-24xl, and x8g.metal-48xl
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use cases
Memory-intensive workloads such as in-memory databases (Redis, Memcached), relational databases (MySQL, PostgreSQL), electronic design automation (EDA) workloads, real-time big data analytics, real-time caching servers, and memory-intensive containerized applications.
- X2gd
- Amazon EC2 X2gd instances are powered by Arm-based AWS Graviton2 processors and provide the lowest cost per GiB of memory in Amazon EC2. They deliver up to 55% better price performance compared to current generation X1 instances.
Features:- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for Enhanced Networking with up to 25 Gbps of network bandwidth
- Local NVMe-based SSD storage provides high-speed, low-latency access to in-memory data
- EBS-optimized by default
Use Cases
Memory-intensive workloads such as open-source databases (MySQL, MariaDB, and PostgreSQL), in-memory caches (Redis, KeyDB, Memcached), electronic design automation (EDA) workloads, real-time analytics, and real-time caching servers.
- X2idn
- Amazon EC2 X2idn instances are powered by 3rd generation Intel Xeon Scalable processors with an all core turbo frequency up to 3.5 GHz and are a good choice for a wide range of memory-intensive applications.
Features:- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors (Ice Lake 8375C)
- 16:1 ratio of memory to vCPU on all sizes
- Up to 50% better price performance than X1 instances
- Up to 100 Gbps of networking speed
- Up to 80 Gbps of bandwidth to the Amazon Elastic Block Store
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX 512) instructions for faster execution of cryptographic algorithms
Use Cases
In-memory databases (e.g. SAP HANA, Redis), traditional databases (e.g. Oracle DB, Microsoft SQL Server), and in-memory analytics (e.g. SAS, Aerospike).
- X2iedn
- Amazon EC2 X2iedn instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all core turbo frequency up to 3.5 GHz and are a good choice for a wide range of large scale memory-intensive applications.
Features:- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors (Ice Lake 8375C)
- 32:1 ratio of memory to vCPU on all sizes
- Up to 50% better price performance than X1 instances
- Up to 100 Gbps of networking speed
- Up to 80 Gbps of bandwidth to the Amazon Elastic Block Store
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX 512) instructions for faster execution of cryptographic algorithms
Use Cases
Large scale in-memory databases (e.g. SAP HANA, Redis), traditional databases (e.g. Oracle DB, Microsoft SQL Server), and in-memory analytics (e.g. SAS, Aerospike).
- X2iezn
- Amazon EC2 X2iezn instances are powered by the fastest Intel Xeon Scalable processors (code named Cascade Lake) in the cloud, with an all-core turbo frequency up to 4.5 GHz and are a good choice for memory-intensive electronic design automation (EDA) workloads.
Features:- Up to 4.5 GHz 2nd generation Intel Xeon Scalable processors (Cascade Lake 8252C)
- 32:1 ratio of memory to vCPU on all sizes
- Up to 55% better price performance than X1e instances
- Up to 100 Gbps of networking speed
- Up to 19 Gbps of bandwidth to the Amazon Elastic Block Store
- Supports Elastic Fabric Adapter on 12xlarge and metal sizes
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use Cases
Electronic design automation (EDA) workloads like physical verification, static timing analysis, power signoff, and full chip gate-level simulation.
- X1
- Amazon EC2 X1 instances are optimized for enterprise-class databases and in-memory applications.
Features:- High frequency Intel Xeon E7-8880 v3 (Haswell) processors
- One of the lowest prices per GiB of RAM
- Up to 1,952 GiB of DRAM-based instance memory
- SSD instance storage for temporary block-level storage and EBS-optimized by default at no additional cost
- Ability to control processor C-state and P-state configuration
| Instance | vCPU | Mem (GiB) | SSD Storage (GB) | Dedicated EBS Bandwidth (Mbps) | Network Performance (Gbps) |
| --- | --- | --- | --- | --- | --- |
| x1.16xlarge | 64 | 976 | 1 x 1,920 | 7,000 | 10 |
| x1.32xlarge | 128 | 1,952 | 2 x 1,920 | 14,000 | 25 |

All instances have the following specs:
- 2.3 GHz Intel Xeon Scalable Processor (Haswell E7-8880 v3)
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
In-memory databases (e.g. SAP HANA), big data processing engines (e.g. Apache Spark or Presto), high performance computing (HPC). Certified by SAP to run Business Warehouse on HANA (BW), Data Mart Solutions on HANA, Business Suite on HANA (SoH), Business Suite S/4HANA.
- X1e
- Amazon EC2 X1e instances are optimized for large scale databases, in-memory databases, and other memory-intensive enterprise applications.
Features:- High frequency Intel Xeon E7-8880 v3 (Haswell) processors
- One of the lowest prices per GiB of RAM
- Up to 3,904 GiB of DRAM-based instance memory
- SSD instance storage for temporary block-level storage and EBS-optimized by default at no additional cost
- Ability to control processor C-state and P-state configurations on x1e.32xlarge, x1e.16xlarge and x1e.8xlarge instances
| Instance | vCPU | Mem (GiB) | SSD Storage (GB) | Dedicated EBS Bandwidth (Mbps) | Networking Performance (Gbps)*** |
| --- | --- | --- | --- | --- | --- |
| x1e.xlarge | 4 | 122 | 1 x 120 | 500 | Up to 10 |
| x1e.2xlarge | 8 | 244 | 1 x 240 | 1,000 | Up to 10 |
| x1e.4xlarge | 16 | 488 | 1 x 480 | 1,750 | Up to 10 |
| x1e.8xlarge | 32 | 976 | 1 x 960 | 3,500 | Up to 10 |
| x1e.16xlarge | 64 | 1,952 | 1 x 1,920 | 7,000 | 10 |
| x1e.32xlarge | 128 | 3,904 | 2 x 1,920 | 14,000 | 25 |

All instances have the following specs:
- 2.3 GHz Intel Xeon Scalable Processor (Haswell E7-8880 v3)
- Intel AVX†, Intel AVX2†
- EBS Optimized
- Enhanced Networking†
In addition, x1e.16xlarge and x1e.32xlarge have - Intel Turbo
Use Cases
High performance databases, in-memory databases (e.g. SAP HANA) and memory intensive applications. x1e.32xlarge instance certified by SAP to run next-generation Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA on the AWS cloud.
- z1d
- Amazon EC2 z1d instances offer both high compute capacity and a high memory footprint. High frequency z1d instances deliver a sustained all core frequency of up to 4.0 GHz, the fastest of any cloud instance.
Features:- Custom Intel® Xeon® Scalable processor (Skylake 8151) with a sustained all core frequency of up to 4.0 GHz with new Intel Advanced Vector Extension (AVX-512) instruction set
- Up to 1.8 TB of instance storage
- High memory with up to 384 GiB of RAM
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- With z1d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the z1d instance
| Instance | vCPU | Mem (GiB) | Networking Performance (Gbps)*** | SSD Storage (GB) |
| --- | --- | --- | --- | --- |
| z1d.large | 2 | 16 | Up to 10 | 1 x 75 NVMe SSD |
| z1d.xlarge | 4 | 32 | Up to 10 | 1 x 150 NVMe SSD |
| z1d.2xlarge | 8 | 64 | Up to 10 | 1 x 300 NVMe SSD |
| z1d.3xlarge | 12 | 96 | Up to 10 | 1 x 450 NVMe SSD |
| z1d.6xlarge | 24 | 192 | 10 | 1 x 900 NVMe SSD |
| z1d.12xlarge | 48 | 384 | 25 | 2 x 900 NVMe SSD |
| z1d.metal | 48* | 384 | 25 | 2 x 900 NVMe SSD |

*z1d.metal provides 48 logical processors on 24 physical cores

All instances have the following specs:
- Up to 4.0 GHz Intel® Xeon® Scalable Processors
- Intel AVX, Intel AVX2, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Ideal for electronic design automation (EDA) and certain relational database workloads with high per-core licensing costs.
Each vCPU on Graviton-based Amazon EC2 instances is a core of an AWS Graviton processor.
Each vCPU on non-Graviton-based Amazon EC2 instances is a thread of an x86-based processor, except for R7a instances.
† AVX, AVX2, and Enhanced Networking are available only on instances launched with HVM AMIs.
*** Instances marked with "Up to" Network Bandwidth have a baseline bandwidth and can use a network I/O credit mechanism to burst beyond their baseline bandwidth on a best effort basis. For more information, see instance network bandwidth.
Accelerated Computing
Accelerated computing instances use hardware accelerators, or co-processors, to perform functions, such as floating point number calculations, graphics processing, or data pattern matching, more efficiently than is possible in software running on CPUs.
- P5
- Amazon EC2 P5 instances are the latest generation of GPU-based instances and provide the highest performance in Amazon EC2 for deep learning and high performance computing (HPC).
Features:- Intel Sapphire Rapids CPU and PCIe Gen5 between the CPU and GPU in P5en instances; 3rd Gen AMD EPYC processors (AMD EPYC 7R13) and PCIe Gen4 between the CPU and GPU in P5 and P5e instances.
- Up to 8 NVIDIA H100 (in P5) or H200 (in P5e and P5en) Tensor Core GPUs
- Up to 3,200 Gbps network bandwidth with support for Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect RDMA (remote direct memory access)
- 900 GB/s peer-to-peer GPU communication with NVIDIA NVSwitch
- P4
- Amazon EC2 P4 instances provide high performance for machine learning training and high performance computing in the cloud.
- 3.0 GHz 2nd Generation Intel Xeon Scalable processors (Cascade Lake P-8275CL)
- Up to 8 NVIDIA A100 Tensor Core GPUs
- 400 Gbps instance networking with support for Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect RDMA (remote direct memory access)
- 600 GB/s peer-to-peer GPU communication with NVIDIA NVSwitch
- Deployed in Amazon EC2 UltraClusters consisting of more than 4,000 NVIDIA A100 Tensor Core GPUs, petabit-scale networking, and scalable low-latency storage with Amazon FSx for Lustre
Use Cases
Machine learning, high performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, and drug discovery.
- G6e
- Amazon EC2 G6e instances are designed to accelerate deep learning inference and spatial computing workloads.
Features:- 3rd generation AMD EPYC processors (AMD EPYC 7R13)
- Up to 8 NVIDIA L40S Tensor Core GPUs
- Up to 400 Gbps of network bandwidth
- Up to 7.6 TB of local NVMe storage
Use Cases
Inference workloads for large language models and diffusion models for image, audio, and video generation; single-node training of moderately complex generative AI models; 3D simulations, digital twins, and industrial digitization.
- G6
- Amazon EC2 G6 instances are designed to accelerate graphics-intensive applications and machine learning inference.
Features:- 3rd generation AMD EPYC processors (AMD EPYC 7R13)
- Up to 8 NVIDIA L4 Tensor Core GPUs
- Up to 100 Gbps of network bandwidth
- Up to 7.52 TB of local NVMe storage
Use Cases
Deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization as well as graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming.
- G5g
- Amazon EC2 G5g instances are powered by AWS Graviton2 processors and feature NVIDIA T4G Tensor Core GPUs to provide the best price performance in Amazon EC2 for graphics workloads such as Android game streaming. They are the first Arm-based instances in a major cloud to feature GPU acceleration. Customers can also use G5g instances for cost-effective ML inference.
Features:- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Up to 2 NVIDIA T4G Tensor Core GPUs
- Up to 25 Gbps of networking bandwidth
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Use Cases
Android game streaming, machine learning inference, graphics rendering, autonomous vehicle simulations
- G5
- Amazon EC2 G5 instances are designed to accelerate graphics-intensive applications and machine learning inference. They can also be used to train simple to moderately complex machine learning models.
Features:- 2nd generation AMD EPYC processors (AMD EPYC 7R32)
- Up to 8 NVIDIA A10G Tensor Core GPUs
- Up to 100 Gbps of network bandwidth
- Up to 7.6 TB of local NVMe storage
| Instance Size | GPU | GPU Memory (GiB) | vCPUs | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| g5.xlarge | 1 | 24 | 4 | 16 | 1 x 250 NVMe SSD | Up to 10 | Up to 3.5 |
| g5.2xlarge | 1 | 24 | 8 | 32 | 1 x 450 NVMe SSD | Up to 10 | Up to 3.5 |
| g5.4xlarge | 1 | 24 | 16 | 64 | 1 x 600 NVMe SSD | Up to 25 | 8 |
| g5.8xlarge | 1 | 24 | 32 | 128 | 1 x 900 NVMe SSD | 25 | 16 |
| g5.16xlarge | 1 | 24 | 64 | 256 | 1 x 1900 NVMe SSD | 25 | 16 |
| g5.12xlarge | 4 | 96 | 48 | 192 | 1 x 3800 NVMe SSD | 40 | 16 |
| g5.24xlarge | 4 | 96 | 96 | 384 | 1 x 3800 NVMe SSD | 50 | 19 |
| g5.48xlarge | 8 | 192 | 192 | 768 | 2 x 3800 NVMe SSD | 100 | 19 |

G5 instances have the following specs:
- 2nd Generation AMD EPYC processors
- EBS Optimized
- Enhanced Networking†
Use Cases
Graphics-intensive applications such as remote workstations, video rendering, and cloud gaming to produce high fidelity graphics in real time. Training and inference deep learning models for machine learning use cases such as natural language processing, computer vision, and recommender engine use cases.
- G4dn
- Amazon EC2 G4dn instances are designed to help accelerate machine learning inference and graphics-intensive workloads.
Features:- 2nd Generation Intel Xeon Scalable Processors (Cascade Lake P-8259CL)
- Up to 8 NVIDIA T4 Tensor Core GPUs
- Up to 100 Gbps of networking throughput
- Up to 1.8 TB of local NVMe storage
| Instance | GPUs | vCPU | Memory (GiB) | GPU Memory (GiB) | Instance Storage (GB) | Network Performance (Gbps)*** | EBS Bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| g4dn.xlarge | 1 | 4 | 16 | 16 | 1 x 125 NVMe SSD | Up to 25 | Up to 3.5 |
| g4dn.2xlarge | 1 | 8 | 32 | 16 | 1 x 225 NVMe SSD | Up to 25 | Up to 3.5 |
| g4dn.4xlarge | 1 | 16 | 64 | 16 | 1 x 225 NVMe SSD | Up to 25 | 4.75 |
| g4dn.8xlarge | 1 | 32 | 128 | 16 | 1 x 900 NVMe SSD | 50 | 9.5 |
| g4dn.16xlarge | 1 | 64 | 256 | 16 | 1 x 900 NVMe SSD | 50 | 9.5 |
| g4dn.12xlarge | 4 | 48 | 192 | 64 | 1 x 900 NVMe SSD | 50 | 9.5 |
| g4dn.metal | 8 | 96 | 384 | 128 | 2 x 900 NVMe SSD | 100 | 19 |
All instances have the following specs:
- 2.5 GHz Cascade Lake 24C processors
- Intel AVX, Intel AVX2, Intel AVX-512, and Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Machine learning inference for applications like adding metadata to an image, object detection, recommender systems, automated speech recognition, and language translation. G4 instances also provide a very cost-effective platform for building and running graphics-intensive applications, such as remote graphics workstations, video transcoding, photo-realistic design, and game streaming in the cloud.
- G4ad
- Amazon EC2 G4ad instances provide the best price performance for graphics intensive applications in the cloud.
Features:- 2nd Generation AMD EPYC Processors (AMD EPYC 7R32)
- AMD Radeon Pro V520 GPUs
- Up to 2.4 TB of local NVMe storage
| Instance | GPUs | vCPU | Memory (GiB) | GPU Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| g4ad.xlarge | 1 | 4 | 16 | 8 | 1 x 150 NVMe SSD | Up to 10 | Up to 3 |
| g4ad.2xlarge | 1 | 8 | 32 | 8 | 1 x 300 NVMe SSD | Up to 10 | Up to 3 |
| g4ad.4xlarge | 1 | 16 | 64 | 8 | 1 x 600 NVMe SSD | Up to 10 | Up to 3 |
| g4ad.8xlarge | 2 | 32 | 128 | 16 | 1 x 1200 NVMe SSD | 15 | 3 |
| g4ad.16xlarge | 4 | 64 | 256 | 32 | 1 x 2400 NVMe SSD | 25 | 6 |
All instances have the following specs:
- Second generation AMD EPYC processors
- EBS Optimized
- Enhanced Networking†
Use Cases
Graphics-intensive applications, such as remote graphics workstations, video transcoding, photo-realistic design, and game streaming in the cloud.
- Trn2
- Amazon EC2 Trn2 instances, powered by AWS Trainium2 chips, are purpose built for high-performance generative AI training and inference of models with hundreds of billions to trillion+ parameters.
Features:- 16 AWS Trainium2 chips
- Supported by AWS Neuron SDK
- 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- Up to 12.8 Tbps third-generation Elastic Fabric Adapter (EFA) networking bandwidth
- Up to 8 TB local NVMe storage
- High-bandwidth, intra-instance, and inter-instance connectivity with NeuronLink
- Deployed in Amazon EC2 UltraClusters and available in EC2 UltraServers (available in preview)
- Amazon EBS-optimized
- Enhanced networking
| Instance Size | Available in EC2 UltraServers | Trainium2 Chips | Accelerator Memory (TB) | vCPUs | Memory (TB) | Instance Storage (TB) | Network Bandwidth (Tbps)*** | EBS Bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| trn2.48xlarge | No | 16 | 1.5 | 192 | 2 | 4 x 1.92 NVMe SSD | 3.2 | 80 |
| trn2u.48xlarge | Yes (Preview) | 16 | 1.5 | 192 | 2 | 4 x 1.92 NVMe SSD | 3.2 | 80 |

Use Cases
Training and inference of the most demanding foundation models, including large language models (LLMs), multi-modal models, diffusion transformers, and more, to build a broad set of next-generation generative AI applications.
- Trn1
- Amazon EC2 Trn1 instances, powered by AWS Trainium chips, are purpose built for high-performance deep learning training while offering up to 50% cost-to-train savings over comparable Amazon EC2 instances.
Features:- 16 AWS Trainium chips
- Supported by AWS Neuron SDK
- 3rd Generation Intel Xeon Scalable processor (Ice Lake SP)
- Up to 1600 Gbps second-generation Elastic Fabric Adapter (EFA) networking bandwidth
- Up to 8 TB local NVMe storage
- High-bandwidth, intra-instance connectivity with NeuronLink
- Deployed in EC2 UltraClusters that enable scaling up to 30,000 AWS Trainium accelerators, connected with a petabit-scale nonblocking network, and scalable low-latency storage with Amazon FSx for Lustre
- Amazon EBS-optimized
- Enhanced networking
| Instance Size | Trainium Chips | Accelerator Memory (GB) | vCPUs | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| trn1.2xlarge | 1 | 32 | 8 | 32 | 1 x 500 NVMe SSD | Up to 12.5 | Up to 20 |
| trn1.32xlarge | 16 | 512 | 128 | 512 | 4 x 2000 NVMe SSD | 800 | 80 |
| trn1n.32xlarge | 16 | 512 | 128 | 512 | 4 x 2000 NVMe SSD | 1600 | 80 |

Use Cases
Deep learning training for natural language processing (NLP), computer vision, search, recommendation, ranking, and more.
- Inf2
- Amazon EC2 Inf2 instances are purpose built for deep learning inference. They deliver high performance at the lowest cost in Amazon EC2 for generative artificial intelligence models, including large language models and vision transformers. Inf2 instances are powered by AWS Inferentia2. These instances offer 3x higher compute performance, 4x higher accelerator memory, up to 4x higher throughput, and up to 10x lower latency compared to Inf1 instances.
Features:- Up to 12 AWS Inferentia2 chips
- Supported by AWS Neuron SDK
- Dual AMD EPYC processors (AMD EPYC 7R13)
- Up to 384 GB of shared accelerator memory (32 GB HBM per accelerator)
- Up to 100 Gbps networking
Use Cases
Natural language understanding (advanced text analytics, document analysis, conversational agents), translation, image and video generation, speech recognition, personalization, fraud detection, and more.
- Inf1
- Amazon EC2 Inf1 instances are built from the ground up to support machine learning inference applications.
Features:- Up to 16 AWS Inferentia Chips
- Supported by AWS Neuron SDK
- High frequency 2nd Generation Intel Xeon Scalable processors (Cascade Lake P-8259L)
- Up to 100 Gbps networking
| Instance Size | Inferentia Chips | vCPUs | Memory (GiB) | Instance Storage | Inter-accelerator Interconnect | Network Bandwidth (Gbps)*** | EBS Bandwidth |
| --- | --- | --- | --- | --- | --- | --- | --- |
| inf1.xlarge | 1 | 4 | 8 | EBS only | N/A | Up to 25 | Up to 4.75 |
| inf1.2xlarge | 1 | 8 | 16 | EBS only | N/A | Up to 25 | Up to 4.75 |
| inf1.6xlarge | 4 | 24 | 48 | EBS only | Yes | 25 | 4.75 |
| inf1.24xlarge | 16 | 96 | 192 | EBS only | Yes | 100 | 19 |

Use Cases
Recommendation engines, forecasting, image and video analysis, advanced text analytics, document analysis, voice, conversational agents, translation, transcription, and fraud detection.
- DL1
- Amazon EC2 DL1 instances are powered by Gaudi accelerators from Habana Labs (an Intel company). They deliver up to 40% better price performance for training deep learning models compared to current generation GPU-based EC2 instances.
Features:- 2nd Generation Intel Xeon Scalable Processor (Cascade Lake P-8275CL)
- Up to 8 Gaudi accelerators with 32 GB of high bandwidth memory (HBM) per accelerator
- 400 Gbps of networking throughput
- 4 TB of local NVMe storage
| Instance Size | vCPU | Gaudi Accelerators | Instance Memory (GiB) | Instance Storage (GB) | Accelerator Peer-to-Peer Bidirectional (Gbps) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| dl1.24xlarge | 96 | 8 | 768 | 4 x 1000 NVMe SSD | 100 | 400 | 19 |
DL1 instances have the following specs:
- 2nd Generation Intel Xeon Scalable Processor
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Deep learning training, object detection, image recognition, natural language processing, and recommendation engines.
- DL2q
- Amazon EC2 DL2q instances, powered by Qualcomm AI 100 accelerators, can be used to cost-efficiently deploy deep learning (DL) workloads in the cloud or validate performance and accuracy of DL workloads that will be deployed on Qualcomm devices.
Features:- 8 Qualcomm AI 100 accelerators
- Supported by Qualcomm Cloud AI Platform and Apps SDK
- 2nd Generation Intel Xeon Scalable Processors (Cascade Lake P-8259CL)
- Up to 128 GB of shared accelerator memory
- Up to 100 Gbps networking
Use Cases
Run popular DL and generative AI applications, such as content generation, image analysis, text summarization, and virtual assistants; validate AI workloads before deploying them across smartphones, automobiles, robotics, and extended reality headsets.
- F2
- Amazon EC2 F2 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs).
Features:- Up to 8 AMD Virtex UltraScale+ HBM VU47P FPGAs with 2.9 million logic cells and 9024 DSP slices
- 3rd generation AMD EPYC processor
- 64 GiB of DDR4 ECC-protected FPGA memory
- Dedicated FPGA PCI-Express x16 interface
- Up to 100 Gbps of networking bandwidth
- Supported by FPGA Developer AMI and FPGA Development Kit
| Instance Name | FPGAs | vCPU | FPGA Memory HBM / DDR4 | Instance Memory (GiB) | Local Storage (GiB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| f2.6xlarge | 1 | 24 | 16 GiB / 64 GiB | 256 | 1 x 940 | 12.5 | 7.5 |
| f2.12xlarge | 2 | 48 | 32 GiB / 128 GiB | 512 | 2 x 940 | 25 | 15 |
| f2.48xlarge | 8 | 192 | 128 GiB / 512 GiB | 2,048 | 8 x 940 | 100 | 60 |

Use Cases
Genomics research, financial analytics, real-time video processing, big data search and analysis, and security.
- VT1
- Amazon EC2 VT1 instances are designed to deliver low cost real-time video transcoding with support for up to 4K UHD resolution.
Features:- 2nd Generation Intel Xeon Scalable Processors (Cascade Lake P-8259CL)
- Up to 8 Xilinx U30 media accelerator cards with accelerated H.264/AVC and H.265/HEVC codecs
- Up to 25 Gbps of enhanced networking throughput
- Up to 19 Gbps of EBS bandwidth
| Instance Size | U30 Accelerators | vCPU | Memory (GiB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) | 1080p60 Streams | 4Kp60 Streams |
| --- | --- | --- | --- | --- | --- | --- | --- |
| vt1.3xlarge | 1 | 12 | 24 | 3.125 | Up to 4.75 | 8 | 2 |
| vt1.6xlarge | 2 | 24 | 48 | 6.25 | 4.75 | 16 | 4 |
| vt1.24xlarge | 8 | 96 | 192 | 25 | 19 | 64 | 16 |

All instances have the following specs:
- 2nd Generation Intel Xeon Scalable Processors
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Live event broadcast, video conferencing, and just-in-time transcoding.
Each vCPU is a thread of either an Intel Xeon core or an AMD EPYC core, except for T2 and m3.medium.
† AVX, AVX2, AVX-512, and Enhanced Networking are only available on instances launched with HVM AMIs.
* This is the default and maximum number of vCPUs available for this instance type. You can specify a custom number of vCPUs when launching this instance type. For more details on valid vCPU counts and how to start using this feature, visit the Optimize CPUs documentation page.
*** Instances marked with "Up to" Network Bandwidth have a baseline bandwidth and can use a network I/O credit mechanism to burst beyond their baseline bandwidth on a best effort basis. For more information, see instance network bandwidth.
Instance Features
Amazon EC2 instances provide a number of additional features to help you deploy, manage, and scale your applications.
Burstable Performance Instances
Amazon EC2 allows you to choose between Fixed Performance instance families (e.g. M6, C6, and R6) and Burstable Performance Instance families (e.g. T3). Burstable Performance Instances provide a baseline level of CPU performance with the ability to burst above the baseline.
T Unlimited instances can sustain high CPU performance for as long as a workload needs it. For most general-purpose workloads, T Unlimited instances will provide ample performance without any additional charges. The hourly T instance price automatically covers all interim spikes in usage when the average CPU utilization of a T instance is at or less than the baseline over a 24-hour window. If the instance needs to run at higher CPU utilization for a prolonged period, it can do so at a flat additional charge of 5 cents per vCPU-hour.
T instances’ baseline performance and ability to burst are governed by CPU Credits. Each T instance receives CPU Credits continuously, at a rate that depends on the instance size. T instances accrue CPU Credits when they are idle and spend CPU Credits when they are active. A CPU Credit provides the performance of a full CPU core for one minute.
For example, a t2.small instance receives credits continuously at a rate of 12 CPU Credits per hour. This capability provides baseline performance equivalent to 20% of a CPU core (20% x 60 mins = 12 mins). If the instance does not use the credits it receives, they are stored in its CPU Credit balance up to a maximum of 288 CPU Credits. When the t2.small instance needs to burst to more than 20% of a core, it draws from its CPU Credit balance to handle this surge automatically.
With T2 Unlimited enabled, the t2.small instance can burst above the baseline even after its CPU Credit balance is drawn down to zero. For a vast majority of general purpose workloads where the average CPU utilization is at or below the baseline performance, the basic hourly price for t2.small covers all CPU bursts. If the instance happens to run at an average 25% CPU utilization (5% above baseline) over a period of 24 hours after its CPU Credit balance is drawn to zero, it will be charged an additional 6 cents (5 cents/vCPU-hour x 1 vCPU x 5% x 24 hours).
Many applications such as web servers, developer environments and small databases don’t need consistently high levels of CPU, but benefit significantly from having full access to very fast CPUs when they need them. T instances are engineered specifically for these use cases. If you need consistently high CPU performance for applications such as video encoding, high volume websites or HPC applications, we recommend you use Fixed Performance Instances. T instances are designed to perform as if they have dedicated high speed processor cores available when your application really needs CPU performance, while protecting you from the variable performance or other common side-effects you might typically see from over-subscription in other environments.
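The credit arithmetic above can be sketched in a few lines. This is an illustrative model built only from the t2.small figures quoted in the text (12 credits per hour, a 288-credit cap, a 20% baseline, and a 5-cents-per-vCPU-hour surplus charge), not a reproduction of EC2's internal accounting.

```python
# Illustrative model of T-instance CPU Credit arithmetic, using the
# t2.small numbers from the text above. Not EC2's actual accounting.

CREDITS_PER_HOUR = 12        # t2.small earns 12 CPU Credits per hour
MAX_CREDIT_BALANCE = 288     # maximum banked credits for t2.small
BASELINE_UTILIZATION = CREDITS_PER_HOUR / 60.0  # 12 credit-minutes / 60 min = 20%
SURPLUS_PRICE_PER_VCPU_HOUR = 0.05  # flat charge once the balance is empty (Unlimited mode)

def surplus_charge(avg_utilization, hours, vcpus=1):
    """Charge for sustained utilization above baseline after the
    CPU Credit balance has been drawn to zero (Unlimited mode)."""
    overage = max(0.0, avg_utilization - BASELINE_UTILIZATION)
    return SURPLUS_PRICE_PER_VCPU_HOUR * vcpus * overage * hours

# The worked example from the text: 25% average utilization
# (5% above the 20% baseline) for 24 hours on 1 vCPU -> 6 cents.
print(BASELINE_UTILIZATION)                  # 0.2
print(round(surplus_charge(0.25, 24), 2))    # 0.06
```

Running at or below 20% average utilization yields no surplus charge, which is why the hourly price covers most general-purpose workloads.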
Multiple Storage Options
Amazon EC2 allows you to choose between multiple storage options based on your requirements. Amazon EBS is a durable, block-level storage volume that you can attach to a single, running Amazon EC2 instance. You can use Amazon EBS as a primary storage device for data that requires frequent and granular updates. For example, Amazon EBS is the recommended storage option when you run a database on Amazon EC2. Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance. Once a volume is attached to an instance, you can use it like any other physical hard drive.

Amazon EBS provides three volume types to best meet the needs of your workloads: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. General Purpose (SSD) is the SSD-backed, general purpose EBS volume type that we recommend as the default choice for customers. General Purpose (SSD) volumes are suitable for a broad range of workloads, including small to medium sized databases, development and test environments, and boot volumes. Provisioned IOPS (SSD) volumes offer storage with consistent and low-latency performance, and are designed for I/O intensive applications such as large relational or NoSQL databases. Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types, and are ideal for workloads where data is accessed infrequently and applications where the lowest storage cost is important.
Many Amazon EC2 instances can also include storage from devices that are located inside the host computer, referred to as instance storage. Instance storage provides temporary block-level storage for Amazon EC2 instances. The data on instance storage persists only during the life of the associated Amazon EC2 instance.
In addition to block-level storage via Amazon EBS or instance storage, you can also use Amazon S3 for highly durable, highly available object storage. Learn more about Amazon EC2 storage options in the Amazon EC2 documentation.
EBS-optimized Instances
For an additional low hourly fee, customers can launch selected Amazon EC2 instance types as EBS-optimized instances. EBS-optimized instances enable EC2 instances to fully use the IOPS provisioned on an EBS volume. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options ranging from 500 megabits per second (Mbps) to 80 gigabits per second (Gbps), depending on the instance type used. The dedicated throughput minimizes contention between Amazon EBS I/O and other traffic from your EC2 instance, providing the best performance for your EBS volumes. EBS-optimized instances are designed for use with all EBS volumes. When attached to EBS-optimized instances, Provisioned IOPS volumes can achieve single-digit-millisecond latencies and are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time. We recommend using Provisioned IOPS volumes with EBS-optimized instances, or with instances that support cluster networking, for applications with high storage I/O requirements.
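Dedicated throughput also bounds how many IOPS the link can carry. A back-of-the-envelope check, using the 500 Mbps endpoint quoted above and an assumed 16 KiB I/O size (the I/O size is an assumption for illustration, not an AWS figure):

```python
# How dedicated EBS throughput bounds achievable IOPS, at a given I/O size.

def max_iops(throughput_mbps, io_size_bytes=16 * 1024):
    """Upper bound on IOPS that fit within a dedicated throughput figure."""
    bytes_per_sec = throughput_mbps * 1_000_000 / 8  # Mbps -> bytes/s
    return bytes_per_sec / io_size_bytes

# At 500 Mbps with 16 KiB I/Os, roughly 3,800 IOPS fit in the pipe,
# so a volume provisioned well beyond that would be throttled by the link.
```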
Cluster Networking
Select EC2 instances support cluster networking when launched into a common cluster placement group. A cluster placement group provides low-latency networking between all instances in the cluster. The bandwidth an EC2 instance can utilize depends on the instance type and its networking performance specification. Inter-instance traffic within the same Region can utilize up to 5 Gbps for single-flow traffic and up to 100 Gbps for multi-flow traffic in each direction (full duplex). Traffic to and from S3 buckets in the same Region can also utilize all available instance aggregate bandwidth. When launched in a placement group, instances can utilize up to 10 Gbps for single-flow traffic and up to 100 Gbps for multi-flow traffic. Network traffic to the Internet is limited to 5 Gbps (full duplex). Cluster networking is ideal for high-performance analytics systems and many science and engineering applications, especially those using the MPI library standard for parallel programming.
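The per-flow limits quoted above can be condensed into a small lookup, sketched below. These figures describe the aggregate limits named in this section; a specific instance type may have a lower networking specification, so treat this as a summary rather than a sizing tool.

```python
# Single-flow bandwidth caps (Gbps) as described in the section above.

def single_flow_cap_gbps(destination, in_placement_group=False):
    """Return the single-flow bandwidth cap for a traffic destination.

    destination: "intra_region" (instance-to-instance in the same Region)
                 or "internet".
    """
    if destination == "internet":
        return 5                                  # full duplex limit
    if destination == "intra_region":
        return 10 if in_placement_group else 5    # placement group doubles it
    raise ValueError(f"unknown destination: {destination}")
```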
Intel Processor Features
Amazon EC2 instances that feature an Intel processor may provide access to the following processor features:
- Intel AES New Instructions (AES-NI): The Intel AES-NI instruction set accelerates the Advanced Encryption Standard (AES) algorithm in hardware, providing faster data protection and greater security. All current-generation EC2 instances support this processor feature.
- Intel Advanced Vector Extensions (Intel AVX, Intel AVX2, and Intel AVX-512): Intel AVX and Intel AVX2 are 256-bit instruction set extensions, and Intel AVX-512 is a 512-bit extension, designed for floating-point (FP) intensive applications. Intel AVX instructions improve performance for applications such as image and audio/video processing, scientific simulations, financial analytics, and 3D modeling and analysis. These features are only available on instances launched with HVM AMIs.
- Intel Turbo Boost Technology: Intel Turbo Boost Technology provides more performance when needed. The processor is able to automatically run cores faster than the base operating frequency to help you get more done faster.
- Intel Deep Learning Boost (Intel DL Boost): A set of built-in processor technologies designed to accelerate AI deep learning use cases. The 2nd Gen Intel Xeon Scalable processors extend Intel AVX-512 with a new Vector Neural Network Instruction (VNNI/INT8) that significantly increases deep learning inference performance over previous-generation Intel Xeon Scalable processors (with FP32) for image recognition/segmentation, object detection, speech recognition, language translation, recommendation systems, reinforcement learning, and more. VNNI may not be compatible with all Linux distributions; check the documentation before using it.
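One way to check (on Linux) which of the features above a given instance actually exposes is to read the "flags" line from /proc/cpuinfo. The sketch below runs the parsing against a sample string so it is self-contained; the sample flag list is illustrative, not the output of any particular instance type.

```python
# Parse CPU feature flags as reported in /proc/cpuinfo (Linux).
# On a real instance you would pass the contents of that file instead
# of the illustrative sample string below.

def cpu_features(cpuinfo_text):
    """Return the set of CPU feature flags from /proc/cpuinfo content."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

sample = "flags\t\t: fpu aes avx avx2 avx512f avx512_vnni"
feats = cpu_features(sample)
# "aes" indicates AES-NI; "avx512_vnni" indicates the DL Boost VNNI extension.
```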
Not all processor features are available in all instance types; check the instance type matrix for detailed information on which features are available on which instance types.
Measuring Instance Performance
Why should you measure instance performance?
Amazon EC2 allows you to provision a variety of instance types, which provide different combinations of CPU, memory, disk, and networking. Launching new instances and running tests in parallel is easy, and we recommend measuring the performance of your applications to identify appropriate instance types and validate application architecture. We also recommend rigorous load and scale testing to ensure that your applications can scale as you intend.
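A minimal sketch of the kind of measurement recommended above: run the same workload repeatedly and summarize latency percentiles, so candidate instance types can be compared on identical numbers. The workload here is a stand-in; substitute your own application code.

```python
# Repeat a workload and report p50/p95 latency for apples-to-apples
# comparisons across instance types. The sample workload is illustrative.
import time
import statistics

def measure(workload, runs=50):
    """Time `workload` over several runs; return (p50, p95) in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    samples.sort()
    p50 = statistics.median(samples)
    p95 = samples[int(0.95 * (len(samples) - 1))]
    return p50, p95

p50, p95 = measure(lambda: sum(i * i for i in range(10_000)))
```

Running the same harness on each candidate instance type, under realistic load, gives directly comparable results and surfaces tail-latency differences that averages hide.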
Considerations for Amazon EC2 performance evaluation
Amazon EC2 provides you with a large number of options across ten different instance types, each with one or more size options, organized into distinct instance families optimized for different types of applications. We recommend that you assess the requirements of your applications and select the appropriate instance family as a starting point for application performance testing. You should start evaluating the performance of your applications by (a) identifying how your application's needs compare to different instance families (e.g., is the application compute-bound, memory-bound, etc.?), and (b) sizing your workload to identify the appropriate instance size. There is no substitute for measuring the performance of your full application, since application performance can be impacted by the underlying infrastructure or by software and architectural limitations. We recommend application-level testing, including the use of application profiling and load testing tools and services. For more information, open a support case and ask for additional network performance specifications for the specific instance types that you are interested in.