Optimize Persistent Disk performance

Persistent Disks give you the performance described in the disk type chart if the VM drives usage that is sufficient to reach the performance limits. After you size your persistent disk volumes to meet your performance needs, your workload and operating system might need some tuning.

The following sections describe VM and workload characteristics that impact disk performance, discuss a few key elements that can be tuned for better performance, and explain how to apply some of the suggestions to specific types of workloads.

Factors that affect disk performance

The following sections describe factors that impact disk performance for a VM.

Network egress caps on write throughput

Your VM has a network egress cap that depends on the machine type of the VM.

Compute Engine stores data on Persistent Disk with multiple parallel writes to ensure built-in redundancy. Also, each write request has some overhead that uses additional write bandwidth.

The maximum write traffic that a VM instance can issue is the network egress cap divided by a bandwidth multiplier that accounts for the replication and overhead.

The network egress caps are listed in the **Maximum egress bandwidth (Gbps)** column in the machine type tables for general purpose, compute-optimized, storage-optimized, memory-optimized, and accelerator-optimized machine families.

The bandwidth multiplier is approximately 1.16x at full network utilization, meaning that 16% of bytes written are overhead. For regional Persistent Disk, the bandwidth multiplier is approximately 2.32x to account for additional replication overhead.

In a situation where Persistent Disk read and write operations compete with network egress bandwidth, 60% of the maximum network egress bandwidth, defined by the machine type, is allocated to Persistent Disk writes. The remaining 40% is available for all other network egress traffic. Refer to egress bandwidth for details about other network egress traffic.

The following example shows how to calculate the maximum write bandwidth for a Persistent Disk on an N1 VM instance. The bandwidth allocation is the portion of network egress bandwidth allocated to Persistent Disk. The maximum write bandwidth is the maximum write bandwidth of the Persistent Disk adjusted for overhead.

| VM vCPU count | Network egress cap (MB/s) | Bandwidth allocation (MB/s) | Maximum write bandwidth (MB/s) | Maximum write bandwidth at full network utilization (MB/s) |
|---|---|---|---|---|
| 1 | 250 | 150 | 216 | 129 |
| 2-7 | 1,250 | 750 | 1,078 | 647 |
| 8-15 | 2,000 | 1,200 | 1,724 | 1,034 |
| 16+ | 4,000 | 2,400 | 3,448 | 2,069 |

You can calculate the maximum Persistent Disk bandwidth using the following formulas:

N1 VM with 1 vCPU

The network egress cap is:

2 Gbps / 8 bits = 0.25 GB per second = 250 MB per second

Persistent Disk bandwidth allocation at full network utilization is:

250 MB per second * 0.6 = 150 MB per second.

Persistent Disk maximum write bandwidth with no network contention is:

250 MB per second / 1.16 ≈ 216 MB per second

Persistent Disk maximum write bandwidth at full network utilization is:

150 MB per second / 1.16 ≈ 129 MB per second
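The table values can be reproduced directly from these formulas. A minimal sketch for the 1 vCPU case (the 250 MB/s egress cap, the 60% write allocation, and the 1.16 multiplier all come from the sections above):

```shell
# Max write bandwidth = egress cap / bandwidth multiplier (1.16 for zonal PD).
# At full network utilization, only 60% of egress is allocated to PD writes.
awk 'BEGIN {
  egress = 250                                        # MB/s, N1 VM with 1 vCPU
  printf "no contention: %.0f MB/s\n", egress / 1.16
  printf "full utilization: %.0f MB/s\n", egress * 0.6 / 1.16
}'
```

The same calculation with the egress cap for 2-7, 8-15, or 16+ vCPUs reproduces the other rows of the table.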

The network egress limits provide an upper bound on performance. Other factors may limit performance below this level. See the following sections for information on other performance constraints.

Simultaneous reads and writes

For standard Persistent Disk, simultaneous reads and writes share the same resources. When your VM is using more read throughput or IOPS, it is able to perform fewer writes. Conversely, instances that use more write throughput or IOPS are able to perform fewer reads.

Persistent Disk volumes cannot simultaneously reach their maximum throughput and IOPS limits for both reads and writes.

The calculation for throughput is IOPS * I/O size. To take advantage of the maximum throughput limits for simultaneous reads and writes on SSD Persistent Disk, use an I/O size such that read and write IOPS combined don't exceed the IOPS limit.

The following table lists the IOPS limits per VM for simultaneous reads and writes.

| Standard persistent disk read | Standard persistent disk write | SSD (8 vCPUs) read | SSD (8 vCPUs) write | SSD (32+ vCPUs) read | SSD (32+ vCPUs) write | SSD (64+ vCPUs) read | SSD (64+ vCPUs) write |
|---|---|---|---|---|---|---|---|
| 7,500 | 0 | 15,000 | 0 | 60,000 | 0 | 100,000 | 0 |
| 5,625 | 3,750 | 11,250 | 3,750 | 45,000 | 15,000 | 75,000 | 25,000 |
| 3,750 | 7,500 | 7,500 | 7,500 | 30,000 | 30,000 | 50,000 | 50,000 |
| 1,875 | 11,250 | 3,750 | 11,250 | 15,000 | 45,000 | 25,000 | 75,000 |
| 0 | 15,000 | 0 | 15,000 | 0 | 60,000 | 0 | 100,000 |

The IOPS numbers in this table are based on an 8 KB I/O size. Other I/O sizes, such as 16 KB, might have different IOPS numbers but maintain the same read/write distribution.
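The throughput formula can also be used in reverse to pick an I/O size for a throughput target. A sketch using the 1,200 MB/s throughput and 60,000 read IOPS figures from the tables in this section (treating 1 MB as 1,000 KB is a simplifying assumption):

```shell
# Minimum I/O size needed to reach a throughput target within an IOPS budget:
# I/O size = throughput / IOPS
awk 'BEGIN {
  target_mbps = 1200   # throughput target, MB/s
  iops_limit  = 60000  # IOPS budget for the same direction of traffic
  printf "minimum I/O size: %.0f KB\n", target_mbps * 1000 / iops_limit
}'
```

With a smaller I/O size, the IOPS limit is reached before the throughput limit.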

The following table lists the throughput limits (MB per second) per VM for simultaneous reads and writes.

| Standard persistent disk read | Standard persistent disk write | SSD (6-14 vCPUs) read | SSD (6-14 vCPUs) write | SSD (16+ vCPUs) read | SSD (16+ vCPUs) write |
|---|---|---|---|---|---|
| 1,200 | 0 | 800* | 800* | 1,200* | 1,200* |
| 900 | 100 | 800* | 800* | 1,200* | 1,200* |
| 600 | 200 | 800* | 800* | 1,200* | 1,200* |
| 300 | 300 | 800* | 800* | 1,200* | 1,200* |
| 0 | 400 | 800* | 800* | 1,200* | 1,200* |

* For SSD Persistent Disk, the max read throughput and max write throughput are independent of each other, so these limits are constant.

Logical volume size

Persistent Disk can be up to 64 TiB in size, and you can create single logical volumes of up to 257 TiB using logical volume management inside your VM. A larger volume size can impact performance.

Multiple disks attached to a single VM instance

The performance limits of disks when you have multiple disks attached to a VM depend on whether the disks are of the same type or different types.

Multiple disks of the same type

If you have multiple disks of the same type attached to a VM instance in the same mode (for example, read/write), the performance limits are the same as the limits of a single disk that has the combined size of those disks. If you use all the disks at 100%, the aggregate performance limit is split evenly among the disks regardless of relative disk size.

For example, suppose you have a 200 GB pd-standard disk and a 1,000 GB pd-standard disk. If you don't use the 1,000 GB disk, then the 200 GB disk can reach the performance limit of a 1,200 GB standard disk. If you use both disks at 100%, then each has the performance limit of a 600 GB pd-standard disk (1,200 GB / 2 disks = 600 GB disk).
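The even split in the example above is simple arithmetic; a sketch using the example's sizes (the mapping from size to a performance limit comes from the disk type chart):

```shell
# Aggregate limit = limit of one disk with the combined size;
# under full load on all disks, it splits evenly per disk.
awk 'BEGIN {
  combined_gb = 200 + 1000   # two pd-standard disks
  disks = 2
  printf "per-disk equivalent size under full load: %d GB\n", combined_gb / disks
}'
```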

Multiple disks of different types

If you attach different types of disks to a VM, the maximum possible performance is the performance limit of the fastest disk that the VM supports. The cumulative performance of the attached disks will not exceed the performance limits of the fastest disk the VM supports.

Optimize your disks for IOPS-oriented or throughput-oriented workloads

Performance recommendations depend on whether you want to maximize IOPS or throughput.

IOPS-oriented workloads

Databases, whether SQL or NoSQL, have usage patterns of random access to data. Google recommends lower readahead values for IOPS-oriented workloads.

Lower readahead values are typically suggested in best practices documents for MongoDB, Apache Cassandra, and other database applications.

Throughput-oriented workloads

Streaming operations, such as a Hadoop job, benefit from fast sequential reads, and larger I/O sizes can increase streaming performance.

Workload changes that can improve disk performance

Certain workload behaviors can improve the performance of I/O operations on the attached disks.

Use a high I/O queue depth

Persistent Disks have higher latency than locally attached disks such as Local SSD disks because they are network-attached devices. They can provide very high IOPS and throughput, but you must make sure that sufficient I/O requests are done in parallel. The number of I/O requests done in parallel is referred to as the I/O queue depth.

The following tables show the recommended I/O queue depth needed to achieve a given performance level. These tables use a slight overestimate of typical latency in order to give conservative recommendations. The examples assume that you are using an I/O size of 16 KB.

Recommended I/O queue depth

For SSD, balanced, and extreme Persistent Disk:

| Desired IOPS | Queue depth |
|---|---|
| 500 | 1 |
| 1,000 | 2 |
| 2,000 | 4 |
| 4,000 | 8 |
| 8,000 | 16 |
| 16,000 | 32 |
| 32,000 | 64 |
| 64,000 | 128 |
| 100,000 | 200 |

| Desired throughput (MB/s) | Queue depth |
|---|---|
| 8 | 1 |
| 16 | 2 |
| 32 | 4 |
| 64 | 8 |
| 128 | 16 |
| 256 | 32 |
| 512 | 64 |
| 1,000 | 128 |
| 1,200 | 153 |

For standard Persistent Disk:

| Desired IOPS | Queue depth |
|---|---|
| 200 | 1 |
| 400 | 2 |
| 800 | 4 |
| 1,600 | 8 |
| 3,200 | 16 |
| 6,400 | 32 |
| 12,800 | 64 |
| 15,000 | 75 |

| Desired throughput (MB/s) | Queue depth |
|---|---|
| 3.2 | 1 |
| 6.4 | 2 |
| 12.8 | 4 |
| 25.6 | 8 |
| 51.2 | 16 |
| 102.4 | 32 |
| 204.8 | 64 |
| 400 | 125 |
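These recommendations are consistent with Little's law: queue depth ≈ target IOPS × per-request latency. A sketch, where the roughly 2 ms (SSD Persistent Disk) and 5 ms (standard Persistent Disk) latencies are assumptions inferred from the table values, not published figures:

```shell
# queue depth ~= target IOPS x assumed per-request latency (Little's law)
awk 'BEGIN {
  printf "SSD PD, 32,000 IOPS: depth %.0f\n", 32000 * 0.002
  printf "standard PD, 15,000 IOPS: depth %.0f\n", 15000 * 0.005
}'
```

Both results match the corresponding table rows above.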

Generate enough I/Os using large I/O size

Because throughput is IOPS multiplied by I/O size, a larger I/O size lets you reach the throughput limits with fewer parallel I/O requests.

Limit heavy I/O loads to a maximum span

A span refers to a contiguous range of logical block addresses on a single physical disk. Heavy I/O loads achieve maximum performance when limited to a certain maximum span, which depends on the machine type of the VM to which the disk is attached, as listed in the following table.

| Machine type | Recommended maximum span |
|---|---|
| m2-megamem-416 and C2D VMs | 25 TB |
| All other machine types | 50 TB |

Spans on separate Persistent Disks that add up to 50 TB or less can be considered equal to a single 50 TB span for performance purposes.

Operating system changes to improve disk performance

In some cases, you can enable or disable features at the operating system level, or configure the attached disks in specific ways to improve the disk performance.

Avoid using ext3 file systems in Linux

Using the ext3 file system in a Linux VM can result in very poor performance under heavy write loads. Use ext4 when possible. The ext4 file system driver is backwards compatible with ext3/ext2 and supports mounting ext3 file systems. The ext4 file system is the default on most Linux operating systems.

If you can't migrate to ext4, as a workaround, you can mount ext3 file systems with the data=journal mount option. This improves write IOPS at the cost of write throughput. Migrating to ext4 can result in up to a 7x improvement in some benchmarks.

Disable lazy initialization and enable DISCARD commands

Persistent Disks support discard operations, or TRIM commands, which allow operating systems to inform the disks when blocks are no longer in use. Discard support allows the operating system to mark disk blocks as no longer needed, without incurring the cost of zeroing out the blocks.

On most Linux operating systems, you enable discard operations when you mount a Persistent Disk on your VM. Windows Server 2012 R2 VMs enable discard operations by default when you mount a Persistent Disk.

Enabling discard operations can boost general runtime performance, and it can also speed up the performance of your disk when it is first mounted. Formatting an entire disk volume can be time consuming, so _lazy formatting_ is a common practice. The downside of lazy formatting is that the cost is often then paid the first time the volume is mounted. By disabling lazy initialization and enabling discard operations, you can get fast format and mount operations.

To disable lazy initialization and enable discard operations when you format a volume, pass the following extended options to mkfs.ext4:

$ sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/DEVICE_ID

The lazy_journal_init=0 parameter does not work on instances with CentOS 6 or RHEL 6 images. For VMs that use those operating systems, format the Persistent Disk without that parameter:

$ sudo mkfs.ext4 -E lazy_itable_init=0,discard /dev/DEVICE_ID

To enable discard operations when you mount a file system, pass the discard option to the mount command:

$ sudo mount -o discard /dev/DEVICE_ID MOUNT_DIR

Persistent Disk works well with discard operations enabled. However, you can optionally run fstrim periodically in addition to, or instead of, using discard operations. If you do not use discard operations, run fstrim before you create a snapshot of your boot disk. Trimming the file system lets you create smaller snapshot images, which reduces the cost of storing snapshots.

Adjust the readahead value

To improve I/O performance, operating systems employ techniques such as readahead, where more of a file than was requested is read into memory with the assumption that subsequent reads are likely to need that data. Higher readahead increases throughput at the expense of memory and IOPS. Lower readahead increases IOPS at the expense of throughput.

On Linux systems, you can get and set the readahead value with theblockdev command:

$ sudo blockdev --getra /dev/DEVICE_ID

$ sudo blockdev --setra VALUE /dev/DEVICE_ID

The readahead value is <desired_readahead_bytes> / 512 bytes.

For example, for an 8 MB readahead, 8 MB is 8388608 bytes (8 * 1024 * 1024).

8388608 bytes / 512 bytes = 16384

You set blockdev to 16384:

$ sudo blockdev --setra 16384 /dev/DEVICE_ID
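The byte-to-sector conversion can also be done inline in the shell; a minimal sketch using the 8 MiB example value from above:

```shell
# blockdev expresses readahead in 512-byte sectors.
READAHEAD_BYTES=$((8 * 1024 * 1024))   # 8 MiB target
SECTORS=$((READAHEAD_BYTES / 512))
echo "$SECTORS"                        # value to pass to blockdev --setra
```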

Modify your VM or create a new VM

There are limits associated with each VM machine type that can impact the performance you can get from the attached disks. The following sections describe how to work within these limits.

Ensure you have free CPUs

Reading and writing to persistent disk requires CPU cycles from your VM. To achieve very high, consistent IOPS levels, you must have CPUs free to process I/O.

To increase the number of vCPUs available with your VM, you can create a new VM, or you can edit the machine type of a VM instance.

Create a new VM to gain new functionality

Newer disk types aren't supported with all machine series or machine types. Hyperdisk provides higher IOPS or throughput rates for your workloads, but is currently available with only a few machine series, and requires at least 64 vCPUs.

New VM machine series typically run on newer CPUs, which can offer better performance than their predecessors. Also, newer CPUs can support additional functionality to improve the performance of your workloads, such as Advanced Matrix Extensions (AMX) or Intel Advanced Vector Extensions (AVX-512).

What's next