Network bandwidth


Google Cloud accounts for bandwidth per compute instance, not per virtual network interface (vNIC) or IP address. An instance's machine type defines its maximum possible egress rate; however, you can only achieve that maximum possible egress rate in specific situations.

This page outlines the network bandwidth limits, which are useful when planning your deployments. It categorizes bandwidth using two dimensions: the direction of traffic (egress or ingress, from the perspective of a compute instance) and whether the packet is routed within or outside of a VPC network.

Neither additional virtual network interfaces (vNICs) nor additional IP addresses per vNIC increase ingress or egress bandwidth for a compute instance. For example, a C3 VM with 22 vCPUs is limited to 23 Gbps total egress bandwidth. If you configure the C3 VM with two vNICs, the VM is still limited to 23 Gbps total egress bandwidth, not 23 Gbps bandwidth per vNIC.

To get the highest possible ingress and egress bandwidth, configure per VM Tier_1 networking performance for your compute instance.
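For example, the following gcloud command is a minimal sketch of creating a VM with Tier_1 networking enabled; the instance name, zone, and machine type are placeholders, the machine type must support Tier_1 networking, and the boot image must support gVNIC:

    gcloud compute instances create example-tier1-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-32 \
        --network-interface=nic-type=GVNIC \
        --network-performance-configs=total-egress-bandwidth-tier=TIER_1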

All of the information on this page is applicable to Compute Engine compute instances, as well as products that depend on Compute Engine instances. For example, a Google Kubernetes Engine node is a Compute Engine instance.

Bandwidth summary

The following tables summarize the maximum possible bandwidth based on whether a packet is sent from (egress) or received by (ingress) a compute instance, and on how the packet is routed.

Egress bandwidth limits

Routing within a VPC network: Primarily defined by a per-instance maximum egress bandwidth based on the sending instance's machine type and whether Tier_1 networking is enabled.
• N2, N2D, C2, C2D, and C4A VMs with Tier_1 networking support egress bandwidth limits up to 100 Gbps.
• H3 VMs support VM-to-VM egress bandwidth limits up to 200 Gbps.
• X4, A2, and G2 instances support egress bandwidth limits up to 100 Gbps.
• A4X instances support egress bandwidth limits up to 2,000 Gbps.
• A4 and A3 instances support egress bandwidth limits up to 3,600 Gbps.
• M4, C4, C4D, C3, C3D, and Z3 instances support up to 200 Gbps egress bandwidth limits with Tier_1 networking.
For other factors, definitions, and scenarios, see Egress to destinations routable within a VPC network.
Routing outside a VPC network: Primarily defined by a per-instance maximum egress bandwidth based on the sending instance's machine type and whether Tier_1 networking is enabled. Except for H3 VMs, a sending instance's maximum possible egress to a destination outside of its VPC network cannot exceed the following:
• 7 Gbps total when Tier_1 networking isn't enabled
• 25 Gbps total when Tier_1 networking is enabled
• 3 Gbps per flow
For other factors, definitions, and caveats, see Egress to destinations outside of a VPC network.

Ingress bandwidth limits

Routing within a VPC network: Generally, ingress rates are similar to the egress rates for a machine type. To get the highest possible ingress bandwidth, enable Tier_1 networking. The size of your compute instance, the capacity of the server NIC, the traffic coming into other guest VMs running on the same host hardware, your guest OS network configuration, and the number of disk reads performed by your instance can all impact the ingress rate. Google Cloud doesn't impose any additional limitations on ingress rates within a VPC network. For other factors, definitions, and scenarios, see Ingress to destinations routable within a VPC network.
Routing outside a VPC network: Google Cloud protects each compute instance by limiting ingress traffic routed outside a VPC network. The limit is the first of the following rates encountered:
• 1,800,000 pps (packets per second)
• 30 Gbps
For a machine series that supports multiple physical NICs, such as A3, the limit is the first of the following rates encountered:
• 1,800,000 pps (packets per second) per physical NIC
• 30 Gbps per physical NIC
For other factors, definitions, and scenarios, see Ingress to destinations outside of a VPC network.

Egress bandwidth

Google Cloud limits outbound (egress) bandwidth using per-instance maximum egress rates. These rates are based on the machine type of the compute instance that is sending the packet and whether the packet's destination is accessible using routes within a VPC network or routes outside of a VPC network. Outbound bandwidth includes packets emitted by all of the instance's NICs and data transferred to all Hyperdisk and Persistent Disk volumes connected to the instance.

Per-instance maximum egress bandwidth

Per-instance maximum egress bandwidth is generally 2 Gbps per vCPU, but there are some differences and exceptions, depending on the machine series. The following table shows the range of maximum egress bandwidth limits for traffic routed within a VPC network, for the standard networking tier only, not per VM Tier_1 networking performance.

Machine series Lowest per-instance maximum egress limit for standard Highest per-instance maximum egress limit for standard
C4, C4A, and C4D 10 Gbps 100 Gbps
C3 23 Gbps 100 Gbps
C3D 20 Gbps 100 Gbps
C2 and C2D 10 Gbps 32 Gbps
E2 1 Gbps 16 Gbps
H3 N/A 200 Gbps
M4 32 Gbps 100 Gbps
M3 and M1 32 Gbps 32 Gbps
M2 32 Gbps 32 Gbps on the Intel Cascade Lake CPU platform; 16 Gbps on other CPU platforms
N4 10 Gbps 50 Gbps
N2 and N2D 10 Gbps 32 Gbps
N1 (excluding VMs with 1 vCPU) 10 Gbps 32 Gbps on the Intel Skylake CPU platform; 16 Gbps on CPU platforms older than Intel Skylake
N1 machine types with 1 vCPU, f1-micro, and g1-small 2 Gbps 2 Gbps
T2D 10 Gbps 32 Gbps
X4 N/A 100 Gbps
Z3 23 Gbps 100 Gbps

You can find the per-instance maximum egress bandwidth for every machine type listed on its specific machine family page:

Per-instance maximum egress bandwidth is not a guarantee. The actual egress bandwidth can be lower because of factors such as the following non-exhaustive list:

To get the largest possible per-instance maximum egress bandwidth, configure per VM Tier_1 networking performance for your compute instance.

Egress to destinations routable within a VPC network

From the perspective of a sending instance and for destination IP addresses accessible by means of routes within a VPC network, Google Cloud limits outbound traffic using these rules:

Destinations routable within a VPC network include all of the following destinations, each of which is accessible from the perspective of the sending instance by a route whose next hop is not the default internet gateway:

The following list ranks traffic from sending instances to internal destinations, from highest possible bandwidth to lowest:

Egress to destinations outside of a VPC network

From the perspective of a sending instance and for destination IP addresses outside of a VPC network, Google Cloud limits outbound traffic to whichever of the following rates is reached first:

Destinations outside of a VPC network include all of the following destinations, each of which is accessible by a route in the sending instance's VPC network whose next hop is the default internet gateway:

For details about which Google Cloud resources use what types of external IP addresses, see External IP addresses.

Ingress bandwidth

Google Cloud handles inbound (ingress) bandwidth depending on how the incoming packet is routed to a receiving compute instance.

Ingress to destinations routable within a VPC network

A receiving instance can handle as many incoming packets as its machine type, operating system, and other network conditions permit. Google Cloud does not implement any purposeful bandwidth restriction on incoming packets delivered to an instance if the incoming packet is delivered using routes within a VPC network.

Destinations for packets that are routed within a VPC network include:

Ingress to destinations outside of a VPC network

Google Cloud implements the following bandwidth limits for incoming packets delivered to a receiving instance using routes outside a VPC network. When load balancing is involved, the bandwidth limits are applied individually to each receiving instance.

For machine series that don't support multiple physical NICs, the applicable inbound bandwidth restriction applies collectively to all virtual network interfaces (vNICs). The limit is the first of the following rates encountered:
• 1,800,000 pps (packets per second)
• 30 Gbps

For machine series that support multiple physical NICs, such as A3, the applicable inbound bandwidth restriction applies individually to each physical NIC. The limit is the first of the following rates encountered:
• 1,800,000 pps (packets per second) per physical NIC
• 30 Gbps per physical NIC

Destinations for packets that are routed using routes outside of a VPC network include:

Jumbo frames

To receive and send jumbo frames, configure the VPC network used by your compute instances; set the maximum transmission unit (MTU) to a larger value, up to 8896.

Higher MTU values increase the packet size and reduce the packet-header overhead, which increases payload data throughput.
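For example, assuming an existing VPC network named example-vpc (a placeholder), you could raise its MTU with commands like the following; existing instances typically need to be restarted, or have their DHCP lease renewed, before the guest picks up the new MTU:

    # Raise the VPC network MTU so that attached instances can use jumbo frames.
    gcloud compute networks update example-vpc --mtu=8896

    # Confirm the configured MTU for the network.
    gcloud compute networks describe example-vpc --format="value(mtu)"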

You can use jumbo frames with the gVNIC driver version 1.3 or later on VM instances, or with the IDPF driver on bare metal instances. Not all Google Cloud public images include these drivers. For more information about operating system support for jumbo frames, see the Networking features tab on the Operating system details page.

If you are using an OS image that doesn't have full support for jumbo frames, you can manually install gVNIC driver version v1.3.0 or later. Google recommends installing the gVNIC driver version marked Latest to benefit from additional features and bug fixes. You can download the gVNIC drivers from GitHub.

To manually update the gVNIC driver version in your guest OS, see Use on non-supported operating systems.
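As a quick check, you can confirm which driver and version a Linux guest has bound to an interface; the interface name eth0 is only an example:

    # Show the driver name and version for the interface.
    # With gVNIC in use, the output includes "driver: gve" and a version such as 1.3.0.
    sudo ethtool -i eth0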

Receive and transmit queues

Each NIC or vNIC for a compute instance is assigned a number of receive and transmit queues for processing packets from the network.

Default queue allocation

Unless you explicitly assign queue counts for NICs, Google Cloud assigns a fixed number of RX and TX queues per NIC. You can model the algorithm that Google Cloud uses as follows:

Bare metal instances

For bare metal instances, there is only one NIC, so the maximum queue count is 16.

VM instances that use the gVNIC network interface

For C4 instances, to improve performance, the following configurations use a fixed number of queues:

For the other machine series, the queue count depends on whether the machine series uses Titanium or not.

To finish the default queue count calculation:

  1. If the calculated number is less than 1, assign each vNIC one queue instead.
  2. Determine if the calculated number is greater than the maximum number of queues per vNIC, which is 16. If the calculated number is greater than 16, ignore the calculated number, and assign each vNIC 16 queues instead.

VM instances using the VirtIO network interface or a custom driver

Divide the number of vCPUs by the number of vNICs and discard any remainder: ⌊number of vCPUs / number of vNICs⌋.

  1. If the calculated number is less than 1, assign each vNIC one queue instead.
  2. Determine if the calculated number is greater than the maximum number of queues per vNIC, which is 32. If the calculated number is greater than 32, ignore the calculated number, and assign each vNIC 32 queues instead.

Examples

The following examples show how to calculate the default number of queues for a VM instance:
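For instance, the following bash sketch models the generic calculation for two hypothetical instances, one with gVNIC vNICs (capped at 16 queues per vNIC) and one with VirtIO vNICs (capped at 32); the vCPU and vNIC counts are illustrative, and the sketch doesn't account for the fixed C4 configurations or Titanium-specific behavior mentioned earlier:

    #!/usr/bin/env bash
    # Model the default queue allocation: floor(vCPUs / vNICs), clamped to [1, cap].
    default_queues() {
      local vcpus=$1 vnics=$2 cap=$3
      local q=$(( vcpus / vnics ))   # integer division discards the remainder
      (( q < 1 )) && q=1             # each vNIC gets at least one queue
      (( q > cap )) && q=$cap        # cap is 16 for gVNIC, 32 for VirtIO
      echo "$q"
    }

    # gVNIC example: 16 vCPUs and 3 vNICs gives floor(16 / 3) = 5 queues per vNIC.
    echo "gVNIC:  $(default_queues 16 3 16)"

    # VirtIO example: 96 vCPUs and 2 vNICs gives 48, clamped to the cap of 32.
    echo "VirtIO: $(default_queues 96 2 32)"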

On Linux systems, you can use ethtool to configure a vNIC with fewer queues than the number of queues Google Cloud assigns per vNIC.
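For example, the following commands show the current channel (queue) configuration and then lower the queue count; eth0 and the count of 4 are illustrative, and whether the driver exposes combined channels or separate rx/tx channels depends on the driver:

    # Show the current and maximum channel (queue) counts for the interface.
    sudo ethtool -l eth0

    # Reduce the queue count to fewer queues than Google Cloud assigned.
    sudo ethtool -L eth0 combined 4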

Custom queue allocation for VM instances

Instead of the default queue allocation, you can assign a custom queue count (total of both RX and TX) to each vNIC when you create a new compute instance by using the Compute Engine API.

The number of custom queues you specify must adhere to the following rules:

You can oversubscribe the custom queue count for your vNICs. In other words, the sum of the queue counts assigned to all NICs of your VM instance can be greater than the number of vCPUs for the instance. To oversubscribe the custom queue count, the VM instance must satisfy the following conditions:

With queue oversubscription, the maximum queue count for the VM instance is 16 times the number of NICs. So, if you have 6 NICs configured for an instance with 30 vCPUs, you can configure a maximum of (16 * 6), or 96 custom queues for your instance.

Examples

It's also possible to assign a custom queue count to only some NICs and let Google Cloud assign queues to the remaining NICs. The number of queues that you can assign per vNIC is still subject to the rules mentioned previously. You can model whether your configuration is feasible and, if it is, how many queues Google Cloud assigns to the remaining vNICs, with the following process (a shell sketch of the calculation follows the steps):

  1. Calculate the sum of queues for the vNICs using custom queue assignment. For an example VM with 20 vCPUs and 6 vNICs, suppose you assign nic0 5 queues, nic1 6 queues, nic2 4 queues, and let Google Cloud assign queues for nic3, nic4, and nic5. In this example, the sum of custom-assigned queues is 5+6+4 = 15.
  2. Subtract the sum of custom-assigned queues from the number of vCPUs. If the difference is less than the number of remaining vNICs for which Google Cloud must assign queues, Google Cloud returns an error because each vNIC must have at least one queue.
    Continuing the example with a VM that has 20 vCPUs and a sum of 15 custom-assigned queues, Google Cloud has 20 - 15 = 5 queues left to assign to the remaining vNICs (nic3, nic4, nic5).
  3. Divide the difference from the previous step by the number of remaining vNICs and discard any remainder: ⌊(number of vCPUs - sum of assigned queues) / (number of remaining vNICs)⌋. Because of the constraint explained in the previous step, this calculation always results in a whole number (not a fraction) that is at least one. Google Cloud assigns each remaining vNIC a queue count matching the calculated number, as long as the calculated number is not greater than the maximum number of queues per vNIC. The maximum number of queues per vNIC depends on the driver type:
    • gVNIC: 16 queues per vNIC
    • VirtIO or a custom driver: 32 queues per vNIC
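The following bash sketch models this process for the example above (a hypothetical VM with 20 vCPUs, six vNICs, and custom counts of 5, 6, and 4 on the first three vNICs); all values are illustrative:

    #!/usr/bin/env bash
    # Model how Google Cloud assigns queues to vNICs without custom counts.
    vcpus=20
    custom=(5 6 4)        # custom queue counts for nic0, nic1, and nic2
    remaining_vnics=3     # nic3, nic4, and nic5 use automatic assignment

    sum=0
    for q in "${custom[@]}"; do (( sum += q )); done   # 5 + 6 + 4 = 15

    left=$(( vcpus - sum ))                            # 20 - 15 = 5
    if (( left < remaining_vnics )); then
      # Each remaining vNIC needs at least one queue, so this configuration fails.
      echo "Error: not enough queues left for the remaining vNICs" >&2
      exit 1
    fi

    per_vnic=$(( left / remaining_vnics ))             # floor(5 / 3) = 1
    # The result is also subject to the per-vNIC maximum (16 for gVNIC, 32 for VirtIO).
    echo "Each remaining vNIC gets ${per_vnic} queue(s)"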

Configure custom queue counts

To create a compute instance that uses a custom queue count for one or more NICs or vNICs, complete the following steps.

In the following code examples, the VM is created with the network interface type set to GVNIC and per VM Tier_1 networking performance enabled. These examples show how to specify the maximum queue counts and queue oversubscription available for the supported machine types.

gcloud

  1. If you don't already have a VPC network with a subnet for each vNIC interface that you plan to configure, create them.

  2. Use the gcloud compute instances create command to create the compute instance. Repeat the --network-interface flag for each vNIC that you want to configure for the instance, and include the queue-count option.

    gcloud compute instances create INSTANCE_NAME \
        --zone=ZONE \
        --machine-type=MACHINE_TYPE \
        --network-performance-configs=total-egress-bandwidth-tier=TIER_1 \
        --network-interface=network=NETWORK_NAME_1,subnet=SUBNET_1,nic-type=GVNIC,queue-count=QUEUE_SIZE_1 \
        --network-interface=network=NETWORK_NAME_2,subnet=SUBNET_2,nic-type=GVNIC,queue-count=QUEUE_SIZE_2

Replace the following:

Terraform

  1. If you don't already have a VPC network with a subnet for each vNIC interface that you plan to configure, create them.
  2. Create a compute instance with specific queue counts for vNICs by using the google_compute_instance resource. Repeat the network_interface block for each vNIC that you want to configure for the compute instance, and include the queue_count argument.

Queue oversubscription instance

resource "google_compute_instance" "VM_NAME" {
project = "PROJECT_ID"
boot_disk {
auto_delete = true
device_name = "DEVICE_NAME"
initialize_params {
image="IMAGE_NAME"
size = DISK_SIZE
type = "DISK_TYPE"
}
}
machine_type = "MACHINE_TYPE"
name = "VM_NAME"
zone = "ZONE"
network_performance_config {
total_egress_bandwidth_tier = "TIER_1"
}
network_interface {
nic_type = "GVNIC"
queue_count = QUEUE_COUNT_1
subnetwork_project = "PROJECT_ID"
subnetwork = "SUBNET_1"
}
network_interface {
nic_type = "GVNIC"
queue_count = QUEUE_COUNT_2
subnetwork_project = "PROJECT_ID"
subnetwork = "SUBNET_2"
}
network_interface {
nic_type = "GVNIC"
queue_count = QUEUE_COUNT_3
subnetwork_project = "PROJECT_ID"
subnetwork = "SUBNET_3""
}
network_interface {
nic_type = "GVNIC"
queue_count = QUEUE_COUNT_4
subnetwork_project = "PROJECT_ID"
subnetwork = "SUBNET_4""
}
}

Replace the following:

REST

  1. If you don't already have a VPC network with a subnet for each vNIC interface that you plan to configure, create them.
  2. Create a compute instance with specific queue counts for NICs by using the instances.insert method. Add an entry to the networkInterfaces array for each network interface that you want to configure.
    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances
    {
      "name": "VM_NAME",
      "machineType": "machineTypes/MACHINE_TYPE",
      "networkPerformanceConfig": {
        "totalEgressBandwidthTier": "TIER_1"
      },
      "networkInterfaces": [
        {
          "nicType": "GVNIC",
          "subnetwork": "regions/region/subnetworks/SUBNET_1",
          "queueCount": "QUEUE_COUNT_1"
        },
        {
          "nicType": "GVNIC",
          "subnetwork": "regions/region/subnetworks/SUBNET_2",
          "queueCount": "QUEUE_COUNT_2"
        }
      ]
    }
    Replace the following:
    • PROJECT_ID: ID of the project to create the compute instance in
    • ZONE: zone to create the compute instance in
    • VM_NAME: name of the new compute instance
    • MACHINE_TYPE: machine type, predefined or custom, for the new compute instance. To oversubscribe the queue count, the machine type must support gVNIC and Tier_1 networking.
    • SUBNET_*: the name of the subnet that the network interface connects to
    • QUEUE_COUNT: Number of queues for the vNIC, subject to the rules discussed in Custom queue allocation.

Queue allocations and changing the machine type

Compute instances are created with a default queue allocation, or you can assign a custom queue count to each virtual network interface card (vNIC) when you create a new compute instance by using the Compute Engine API. The default or custom vNIC queue assignments are only set when creating a compute instance. If your instance has vNICs that use default queue counts, you can change its machine type. If the machine type that you are changing to has a different number of vCPUs, the default queue counts for your instance are recalculated based on the new machine type.

If your VM has vNICs which use custom, non-default queue counts, then you can change the machine type by using the Google Cloud CLI or Compute Engine API to update the instance properties. The conversion succeeds if the resulting VM supports the same queue count per vNIC as the original instance. For VMs that use the VirtIO-Net interface and have a custom queue count that is higher than 16 per vNIC, you can't change the machine type to a third generation or later machine type, because they use only gVNIC. Instead, you can migrate your VM to a third generation or later machine type by following the instructions in Move your workload to a new compute instance.
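For example, changing the machine type with the gcloud CLI looks like the following sketch; the instance name, zone, and target machine type are placeholders, and the instance must be stopped before the change:

    # Stop the instance, change its machine type, and start it again.
    gcloud compute instances stop example-vm --zone=us-central1-a

    gcloud compute instances set-machine-type example-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-32

    gcloud compute instances start example-vm --zone=us-central1-a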

What's next