Machine families resource and comparison guide


This document describes the machine families, machine series, and machine types that you can choose from to create a virtual machine (VM) instance or bare metal instance with the resources that you need. When you create a compute instance, you select a machine type from a machine family that determines the resources available to that instance.

There are several machine families you can choose from. Each machine family is further organized into machine series and predefined machine types within each series. For example, within the N2 machine series in the general-purpose machine family, you can select the n2-standard-4 machine type.
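The naming pattern (series, type category, size, optional suffix) can be illustrated with a small parser. This helper is a sketch for illustration only, not part of any Google Cloud SDK:

```python
def parse_machine_type(name: str) -> dict:
    """Split a machine type name such as 'n2-standard-4' into its parts.

    Names follow the pattern SERIES-TYPE[-VCPUS][-SUFFIX], for example
    'n2-standard-4' or 'c3-standard-4-lssd'.
    """
    parts = name.split("-")
    info = {"series": parts[0].upper(), "type": parts[1]}
    for part in parts[2:]:
        if part.isdigit():
            info["vcpus"] = int(part)  # vCPU count, when present
        else:
            info["suffix"] = part      # e.g. 'lssd' or 'metal'
    return info

print(parse_machine_type("n2-standard-4"))
# {'series': 'N2', 'type': 'standard', 'vcpus': 4}
```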

For information about machine series that support Spot VMs (and preemptible VMs), see Compute Engine instances provisioning models.

Note: This is a list of Compute Engine machine families. For a detailed explanation of each machine family, see the following pages:

Compute Engine terminology

This documentation uses the following terms:

The following sections describe the different machine types.

Predefined machine types

Predefined machine types come with a non-configurable amount of memory and vCPUs. They are offered in a variety of vCPU-to-memory ratios.

For example, a c3-standard-22 machine type has 22 vCPUs, and as a standard machine type, it also has 88 GB of memory.
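The relationship between the type category and memory can be sketched as arithmetic on the vCPU count. The "standard" ratio below matches the c3-standard-22 example (22 vCPUs × 4 GB = 88 GB); the "highcpu" and "highmem" ratios are illustrative assumptions, and actual ratios vary by machine series:

```python
# Illustrative GB-of-memory-per-vCPU ratios for predefined type categories.
# "standard" matches the c3-standard-22 example; the others are assumptions.
GB_PER_VCPU = {"highcpu": 2, "standard": 4, "highmem": 8}

def memory_gb(machine_type: str) -> int:
    """Estimate the memory of a predefined type such as 'c3-standard-22'."""
    _series, category, vcpus = machine_type.split("-")
    return int(vcpus) * GB_PER_VCPU[category]

print(memory_gb("c3-standard-22"))  # 88
```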

Local SSD machine types

Local SSD machine types are a special predefined machine type. The machine type name ends in -lssd. When you create a compute instance using one of these machine types, Titanium SSD or Local SSD disks are automatically attached to the instance.

These machine types are available with the C4A, C4D, C3, and C3D machine series. Other machine series also support Local SSD disks but don't use a -lssd machine type. For more information about what machine types you can use with Titanium SSD or Local SSD disks, see Choose a valid number of Local SSD disks.
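Because the behavior is keyed to the machine type name, detecting it reduces to a suffix check. A minimal sketch:

```python
# Local SSD machine types are identified by the '-lssd' suffix; creating an
# instance with one of these types automatically attaches Titanium SSD or
# Local SSD disks.
LSSD_SERIES = {"c4a", "c4d", "c3", "c3d"}  # series that offer -lssd types

def auto_attaches_local_ssd(machine_type: str) -> bool:
    return machine_type.endswith("-lssd")

print(auto_attaches_local_ssd("c3-standard-4-lssd"))  # True
print(auto_attaches_local_ssd("n2-standard-4"))       # False
```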

Bare metal machine types

Bare metal machine types are a special predefined machine type. The machine type name ends in -metal. When you create a compute instance using one of these machine types, there is no hypervisor installed on the instance. You can attach disks to a bare metal instance, just as you would with a VM instance. Bare metal instances can be used in VPC networks and subnetworks in the same way as VM instances.

These machine types are available with the C4D (Preview), C3, Z3 (Preview), and X4 machine series.

Custom machine types

If none of the predefined machine types match your workload needs, you can create a VM instance with a custom machine type for the N and E machine series in the general-purpose machine family.

Custom machine types cost slightly more than equivalent predefined machine types, and there are limits on the amount of memory and vCPUs that you can select. The on-demand and commitment prices for custom machine types include a 5% premium over the corresponding prices for predefined machine types.
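The premium is a flat 5% multiplier on the comparable predefined price. The hourly figure below is an assumed placeholder, not a real price:

```python
# The 5% custom machine type premium, applied to a hypothetical on-demand
# price. 0.20 USD/hour is an assumed figure, not a real Compute Engine price.
predefined_hourly = 0.20
custom_hourly = predefined_hourly * 1.05  # 5% premium over predefined

print(f"predefined: {predefined_hourly:.3f} USD/h")  # 0.200 USD/h
print(f"custom:     {custom_hourly:.3f} USD/h")      # 0.210 USD/h
```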

When creating a custom machine type, you can use the extended memory feature. Instead of using the default memory size based on the number of vCPUs you select, you can specify an amount of memory, up to the limit for the machine series.

For more information, see Create a VM with a custom machine type.

Shared-core machine types

The E2 and N1 series contain shared-core machine types. These machine types timeshare a physical core, which can be a cost-effective method for running small, non-resource-intensive apps.
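Timesharing means each shared-core type is allotted only a fraction of a physical core's time. The fractions below are the long-documented values for the N1 shared-core types (f1-micro and g1-small); treat the helper as an illustrative sketch:

```python
# Shared-core machine types receive a fraction of a physical core's time.
# Fractions shown are the documented N1 shared-core values; illustrative here.
SHARED_CORE_FRACTION = {"f1-micro": 0.2, "g1-small": 0.5}

def effective_vcpu(machine_type: str) -> float:
    """Return the vCPU fraction for shared-core types, 1.0 per vCPU otherwise."""
    return SHARED_CORE_FRACTION.get(machine_type, 1.0)

print(effective_vcpu("f1-micro"))  # 0.2
```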

Machine family and series recommendations

The following tables provide recommendations for different workloads.

General-purpose workloads
- N4, N2, N2D, N1: Balanced price/performance across a wide range of machine types. Workloads: medium-traffic web and app servers, containerized microservices, business intelligence apps, virtual desktops, CRM applications, development and test environments, batch processing, storage and archive.
- C4, C4A, C4D, C3, C3D: Consistently high performance for a variety of workloads. Workloads: high-traffic web and app servers, databases, in-memory caches, ad servers, game servers, data analytics, media streaming and transcoding, CPU-based ML training and inference.
- E2: Day-to-day computing at a lower cost. Workloads: low-traffic web servers, back-office apps, containerized microservices, virtual desktops, development and test environments.
- Tau T2D, Tau T2A: Best per-core performance/cost for scale-out workloads. Workloads: scale-out workloads, web serving, containerized microservices, media transcoding, large-scale Java applications.
Optimized workloads
- Storage-optimized (Z3): Highest block storage to compute ratios for storage-intensive workloads. Workloads: SQL, NoSQL, and vector databases; data analytics and data warehouses; search; media streaming; large distributed parallel file systems.
- Compute-optimized (H3, C2, C2D): Ultra high performance for compute-intensive workloads. Workloads: compute-bound workloads; high-performance web servers; game servers; high performance computing (HPC); media transcoding; modeling and simulation workloads; AI/ML.
- Memory-optimized (X4, M4, M3, M2, M1): Highest memory to compute ratios for memory-intensive workloads. Workloads: medium to extra-large SAP HANA in-memory databases; in-memory data stores, such as Redis; simulation; high-performance databases, such as Microsoft SQL Server and MySQL; electronic design automation.
- Accelerator-optimized (A4X, A4, A3, A2, G2): Optimized for accelerated high performance computing workloads. Workloads: generative AI models, such as large language models (LLMs), diffusion models, and generative adversarial networks (GANs); CUDA-enabled ML training and inference; high-performance computing (HPC); massively parallelized computation; BERT natural language processing; deep learning recommendation models (DLRM); video transcoding; remote visualization workstations.
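The recommendation tables above can be condensed into a rough workload-to-family selector. The keyword set is an illustrative assumption; unknown workloads fall back to general-purpose, the starting point this guide suggests when you are unsure:

```python
# A rough selector condensing the recommendation tables above.
# Keywords are illustrative assumptions, not an official taxonomy.
RECOMMENDED_FAMILY = {
    "web-serving": "general-purpose",
    "dev-test": "general-purpose",
    "data-warehouse": "storage-optimized",
    "hpc": "compute-optimized",
    "sap-hana": "memory-optimized",
    "ml-training": "accelerator-optimized",
}

def recommend_family(workload: str) -> str:
    # Default to general-purpose when the workload is not recognized.
    return RECOMMENDED_FAMILY.get(workload, "general-purpose")

print(recommend_family("hpc"))  # compute-optimized
```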

After you create a compute instance, you can use rightsizing recommendations to optimize resource utilization based on your workload. For more information, see Applying machine type recommendations for VMs.

General-purpose machine family guide

The general-purpose machine family offers several machine series with the best price-performance ratio for a variety of workloads.

Compute Engine offers general-purpose machine series that run on either x86 or Arm architecture.

x86

Arm

Storage-optimized machine family guide

The storage-optimized machine family is best suited for high-performance and flash-optimized workloads such as SQL, NoSQL, and vector databases, scale-out data analytics, data warehouses and search, and distributed file systems that need fast access to large amounts of data stored in local storage. The storage-optimized machine family is designed to provide high local storage throughput and IOPS at sub-millisecond latency.

Compute-optimized machine family guide

The compute-optimized machine family is optimized for running compute-bound applications by providing the highest performance per core.

Memory-optimized machine family guide

The memory-optimized machine family has machine series that are ideal for OLAP and OLTP SAP workloads, genomic modeling, electronic design automation, and your most memory-intensive HPC workloads. This family offers more memory per core than any other machine family, with up to 32 TB of memory.

Accelerator-optimized machine family guide

The accelerator-optimized machine family is ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning (ML) and high performance computing (HPC). This family is the optimal choice for workloads that require GPUs.

Arm

x86

Use the following table to compare each machine family and determine which one is appropriate for your workload. If, after reviewing this section, you are still unsure which family is best for your workload, start with the general-purpose machine family. For details about all supported processors, see CPU platforms.

To learn how your selection affects the performance of disk volumes attached to your compute instances, see:

The following comparison covers the characteristics of each machine series, from C4A to G2.

Each entry lists the machine family, instance type, CPU platform, CPU architecture, vCPU range (and whether a vCPU is a hardware thread or a full core), memory range, disk interface types, maximum Titanium SSD or Local SSD capacity, network interface types, network bandwidth, GPU support where applicable, and supported committed use discounts (CUDs).

- C4 (general-purpose, VM): Intel Emerald Rapids; x86; 2 to 192 vCPUs (vCPU = thread); 2 to 1,488 GB memory; NVMe; no Local SSD; gVNIC; 10 to 100 Gbps; resource-based and flexible CUDs
- C4A (general-purpose, VM): Google Axion; Arm; 1 to 72 vCPUs (vCPU = core); 2 to 576 GB; NVMe; up to 6 TiB Titanium SSD; gVNIC; 10 to 50 Gbps; resource-based and flexible CUDs
- C4D (general-purpose, VM and bare metal): AMD EPYC Turin; x86; 2 to 384 vCPUs (thread); 3 to 3,072 GB; NVMe; up to 12 TiB Local SSD; gVNIC and IDPF; 10 to 100 Gbps; resource-based and flexible CUDs
- C3 (general-purpose, VM and bare metal): Intel Sapphire Rapids; x86; 4 to 176 vCPUs (thread); 8 to 1,408 GB; NVMe; up to 12 TiB Local SSD; gVNIC and IDPF; 23 to 100 Gbps; resource-based and flexible CUDs
- C3D (general-purpose, VM): AMD EPYC Genoa; x86; 4 to 360 vCPUs (thread); 8 to 2,880 GB; NVMe; up to 12 TiB Local SSD; gVNIC; 20 to 100 Gbps; resource-based and flexible CUDs
- N4 (general-purpose, VM): Intel Emerald Rapids; x86; 2 to 80 vCPUs (thread); 2 to 640 GB; NVMe; no Local SSD; gVNIC; 10 to 50 Gbps; resource-based and flexible CUDs
- N2 (general-purpose, VM): Intel Cascade Lake and Ice Lake; x86; 2 to 128 vCPUs (thread); 2 to 864 GB; SCSI and NVMe; up to 9 TiB Local SSD; gVNIC and VirtIO-Net; 10 to 32 Gbps; resource-based and flexible CUDs
- N2D (general-purpose, VM): AMD EPYC Rome and EPYC Milan; x86; 2 to 224 vCPUs (thread); 2 to 896 GB; SCSI and NVMe; up to 9 TiB Local SSD; gVNIC and VirtIO-Net; 10 to 32 Gbps; resource-based and flexible CUDs
- N1 (general-purpose, VM): Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge; x86; 1 to 96 vCPUs (thread); 1.8 to 624 GB; SCSI and NVMe; up to 9 TiB Local SSD; gVNIC and VirtIO-Net; 2 to 32 Gbps; resource-based and flexible CUDs
- Tau T2D (general-purpose, VM): AMD EPYC Milan; x86; 1 to 60 vCPUs (core); 4 to 240 GB; SCSI and NVMe; no Local SSD; gVNIC and VirtIO-Net; 10 to 32 Gbps; resource-based CUDs
- Tau T2A (general-purpose, VM): Ampere Altra; Arm; 1 to 48 vCPUs (core); 4 to 192 GB; NVMe; no Local SSD; gVNIC; 10 to 32 Gbps; resource-based and flexible CUDs
- E2 (cost-optimized, VM): Intel Skylake, Broadwell, and Haswell, and AMD EPYC Rome and EPYC Milan; x86; 0.25 to 32 vCPUs (thread); 1 to 128 GB; SCSI; no Local SSD; gVNIC and VirtIO-Net; 1 to 16 Gbps; resource-based and flexible CUDs
- Z3 (storage-optimized, VM and bare metal): Intel Sapphire Rapids; x86; 14 to 192 vCPUs (thread); 112 to 1,536 GB; NVMe; up to 36 TiB (VM) or 72 TiB (bare metal) Local SSD; gVNIC and IDPF; 23 to 100 Gbps; resource-based CUDs
- H3 (compute-optimized, VM): Intel Sapphire Rapids; x86; 88 vCPUs (core); 352 GB; NVMe; no Local SSD; gVNIC; up to 200 Gbps; resource-based and flexible CUDs
- C2 (compute-optimized, VM): Intel Cascade Lake; x86; 4 to 60 vCPUs (thread); 16 to 240 GB; SCSI and NVMe; up to 3 TiB Local SSD; gVNIC and VirtIO-Net; 10 to 32 Gbps; resource-based and flexible CUDs
- C2D (compute-optimized, VM): AMD EPYC Milan; x86; 2 to 112 vCPUs (thread); 4 to 896 GB; SCSI and NVMe; up to 3 TiB Local SSD; gVNIC and VirtIO-Net; 10 to 32 Gbps; resource-based CUDs
- X4 (memory-optimized, bare metal): Intel Sapphire Rapids; x86; 960 to 1,920 vCPUs (thread); 16,384 to 32,768 GB; NVMe; no Local SSD; IDPF; up to 100 Gbps; resource-based CUDs
- M4 (memory-optimized, VM): Intel Emerald Rapids; x86; 28 to 224 vCPUs (thread); 372 to 5,952 GB; NVMe; no Local SSD; gVNIC; 32 to 100 Gbps; resource-based CUDs
- M3 (memory-optimized, VM): Intel Ice Lake; x86; 32 to 128 vCPUs (thread); 976 to 3,904 GB; NVMe; up to 3 TiB Local SSD; gVNIC; up to 32 Gbps; resource-based CUDs
- M2 (memory-optimized, VM): Intel Cascade Lake; x86; 208 to 416 vCPUs (thread); 5,888 to 11,776 GB; SCSI; no Local SSD; gVNIC and VirtIO-Net; up to 32 Gbps; resource-based CUDs
- M1 (memory-optimized, VM): Intel Skylake and Broadwell; x86; 40 to 160 vCPUs (thread); 961 to 3,844 GB; SCSI and NVMe; up to 3 TiB Local SSD; gVNIC and VirtIO-Net; up to 32 Gbps; resource-based CUDs
- N1 with attached GPUs (accelerator-optimized, VM): Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge; x86; 1 to 96 vCPUs (thread); 3.75 to 624 GB; SCSI and NVMe; up to 9 TiB Local SSD; gVNIC and VirtIO-Net; 2 to 32 Gbps; up to 8 GPUs; resource-based CUDs
- A4X (accelerator-optimized, VM): NVIDIA Grace; Arm; 140 vCPUs (core); 884 GB; NVMe; up to 12 TiB Local SSD; gVNIC and MRDMA; up to 2,000 Gbps; 4 GPUs; resource-based CUDs
- A4 (accelerator-optimized, VM): Intel Emerald Rapids; x86; 224 vCPUs (thread); 3,968 GB; NVMe; up to 12 TiB Local SSD; gVNIC and MRDMA; up to 3,600 Gbps; 8 GPUs; resource-based CUDs
- A3 Ultra (accelerator-optimized, VM): Intel Emerald Rapids; x86; 224 vCPUs (thread); 2,952 GB; NVMe; up to 12 TiB Local SSD; gVNIC and MRDMA; up to 3,200 Gbps; 8 GPUs; resource-based CUDs
- A3 (accelerator-optimized, VM): Intel Sapphire Rapids; x86; 208 vCPUs (thread); 1,872 GB; NVMe; up to 6 TiB Local SSD; gVNIC; up to 1,800 Gbps; 8 GPUs; resource-based CUDs
- A2 (accelerator-optimized, VM): Intel Cascade Lake; x86; 12 to 96 vCPUs (thread); 85 to 1,360 GB; SCSI and NVMe; up to 3 TiB Local SSD; gVNIC and VirtIO-Net; 24 to 100 Gbps; up to 16 GPUs; resource-based CUDs
- G2 (accelerator-optimized, VM): Intel Cascade Lake; x86; 4 to 96 vCPUs (thread); 16 to 432 GB; NVMe; up to 3 TiB Local SSD; gVNIC and VirtIO-Net; 10 to 100 Gbps; up to 8 GPUs; resource-based CUDs

Confidential Computing support (AMD SEV, AMD SEV-SNP, Intel TDX, and NVIDIA Confidential Computing), per-VM Tier_1 networking bandwidth, and zonal or regional disk availability vary by machine series.

GPUs and compute instances

GPUs are used to accelerate workloads, and are supported for A4X, A4, A3, A2, G2, and N1 instances. For instances that use A4X, A4, A3, A2, or G2 machine types, the GPUs are automatically attached when you create the instance. For instances that use N1 machine types, you can attach GPUs to the instance during or after instance creation. GPUs can't be used with any other machine series.

Instances with fewer GPUs attached are limited to a lower maximum number of vCPUs. In general, a higher number of GPUs lets you create instances with a higher number of vCPUs and more memory. For more information, see GPUs on Compute Engine.
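This coupling can be sketched with the A2 series, where each additional GPU in an a2-highgpu machine type adds 12 vCPUs (consistent with the series' 12 to 96 vCPU and 1 to 16 GPU ranges in the comparison above); the helper below is an illustration, not an official sizing API:

```python
# Sketch of the GPU-to-vCPU coupling using the A2 series: a2-highgpu types
# provide 12 vCPUs per attached GPU (illustrative; other series differ).
VCPUS_PER_GPU = 12  # a2-highgpu ratio

def max_vcpus(gpu_count: int) -> int:
    return gpu_count * VCPUS_PER_GPU

for gpus in (1, 2, 4, 8):
    print(f"a2-highgpu-{gpus}g: up to {max_vcpus(gpus)} vCPUs")
```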

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-06-13 UTC.