About Persistent Disk



This document describes the features, types, performance, and benefits of Persistent Disk volumes. If you need block storage for a virtual machine (VM) instance or container, such as for a boot disk or data disk, use Persistent Disk volumes if Google Cloud Hyperdisk isn't available for your compute instance. To learn about the other block storage options in Compute Engine, see Choose a disk type.

Persistent Disk volumes are durable network storage devices that your instances can access like physical disks in a desktop or a server. Persistent Disk volumes aren't attached to the physical machine hosting the instance. Instead, they are attached to the instance as network block devices. When you read from or write to a Persistent Disk volume, data is transmitted over the network.

The data on each Persistent Disk volume is distributed across several physical disks. Compute Engine manages the physical disks and the data distribution for you to ensure redundancy and optimal performance.

You can detach or move the volumes to keep your data even after you delete your instances. Persistent Disk performance increases with size, so you can resize your existing Persistent Disk volumes or add more Persistent Disk volumes to a VM to meet your performance and storage space requirements.

Add a non-boot disk to your instance when you need reliable and affordable storage with consistent performance characteristics.

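For example, here is a minimal sketch of creating a blank disk and attaching it to a running VM with the gcloud CLI. The disk name, VM name, size, type, and zone are placeholders:

```bash
# Create a 500 GiB balanced Persistent Disk volume.
gcloud compute disks create my-data-disk \
    --size=500GB \
    --type=pd-balanced \
    --zone=us-west1-a

# Attach the volume to an existing VM as a secondary, non-boot disk.
gcloud compute instances attach-disk my-vm \
    --disk=my-data-disk \
    --zone=us-west1-a
```

After attaching the disk, format and mount it from inside the VM before use.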

Persistent Disk types

When you create a Persistent Disk volume, you can select one of the following disk types:

Standard Persistent Disk (pd-standard), backed by standard hard disk drives (HDD).
Balanced Persistent Disk (pd-balanced), backed by solid-state drives (SSD), balancing performance and cost.
SSD Persistent Disk (pd-ssd), backed by solid-state drives (SSD).
Extreme Persistent Disk (pd-extreme), backed by solid-state drives (SSD), with provisioned IOPS.

If you create a disk in the Google Cloud console, the default disk type is pd-balanced. If you create a disk using the gcloud CLI or the Compute Engine API, the default disk type is pd-standard.

For information about machine type support, refer to the following:

Durability of Persistent Disk

Disk durability represents the probability of data loss, by design, for a typical disk in a typical year, using a set of assumptions about hardware failures, the likelihood of catastrophic events, isolation practices and engineering processes in Google data centers, and the internal encodings used by each disk type. Persistent Disk data loss events are extremely rare and have historically been the result of coordinated hardware failures, software bugs, or a combination of the two. Google also takes many steps to mitigate the industry-wide risk of silent data corruption. Human error by a Google Cloud customer, such as when a customer accidentally deletes a disk, is outside the scope of Persistent Disk durability.

There is a very small risk of data loss occurring with a regional Persistent Disk volume due to its internal data encodings and replication. Regional Persistent Disk provides high availability and can be used for disaster recovery if an entire data center is lost and can't be recovered. Regional Persistent Disk provides twice as many disk replicas as zonal Persistent Disk, with the replicas distributed between two zones in the same region. If the primary zone becomes unavailable during an outage, the replica in the second zone can be accessed immediately.

For more information about region-specific considerations, see Geography and regions.

The following table shows the durability that each disk type is designed for. 99.999% durability means that with 1,000 disks, you would likely go a hundred years without losing a single one.

Zonal standard Persistent Disk: better than 99.99%
Zonal balanced Persistent Disk: better than 99.999%
Zonal SSD Persistent Disk: better than 99.999%
Zonal extreme Persistent Disk: better than 99.9999%
Regional standard Persistent Disk: better than 99.999%
Regional balanced Persistent Disk: better than 99.9999%
Regional SSD Persistent Disk: better than 99.9999%
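As a rough back-of-the-envelope reading of these figures (an interpretation, not a published formula): 99.999% durability corresponds to an annual loss probability of about 10^-5 per disk, so a fleet of 1,000 disks expects about 1,000 × 10^-5 = 0.01 losses per year, which is roughly one loss per hundred years across the entire fleet.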

Machine series support

Maximum capacity

Persistent Disk volumes can be up to 64 TiB in size. You can add up to 127 secondary, non-boot zonal Persistent Disk volumes to a VM instance. However, the combined total capacity of all Persistent Disk volumes attached to a single VM can't exceed 257 TiB.

You can create single logical volumes of up to 257 TiB using logical volume management inside your VM. For information about how to ensure maximum performance with large volumes, see Logical volume size.
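As a minimal sketch, here is one way to build a single large logical volume from two attached disks using LVM inside a Linux VM. The device paths, group name, and mount point are examples; check lsblk for the actual device names on your VM:

```bash
# Initialize both attached Persistent Disk volumes as LVM physical volumes.
sudo pvcreate /dev/sdb /dev/sdc

# Group them into one volume group, then create a single logical volume
# that spans all of the available space.
sudo vgcreate data-vg /dev/sdb /dev/sdc
sudo lvcreate --name data-lv --extents 100%FREE data-vg

# Format and mount the combined volume.
sudo mkfs.ext4 /dev/data-vg/data-lv
sudo mkdir -p /mnt/data
sudo mount /dev/data-vg/data-lv /mnt/data
```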

A zonal Persistent Disk is a Persistent Disk that's accessible only within one specific zone, for example, europe-west2-b.

Ease of use

Compute Engine handles most disk management tasks for you so that you don't need to deal with partitioning, redundant disk arrays, or subvolume management. Generally, you don't need to create larger logical volumes. However, you can extend your secondary attached Persistent Disk capacity to 257 TiB per VM and apply these practices to your Persistent Disk volumes. You can save time and get the best performance if you format your Persistent Disk volumes with a single file system and no partition tables.
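For example, a minimal sketch of formatting a blank secondary disk with a single ext4 file system and no partition table, then mounting it. The device path and mount point are examples:

```bash
# Format the entire device with ext4; no partition table is created.
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb

# Create a mount point and mount the volume.
sudo mkdir -p /mnt/disks/data
sudo mount -o discard,defaults /dev/sdb /mnt/disks/data
```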

If you need to separate your data into multiple unique volumes, create additional disks rather than dividing your existing disks into multiple partitions.

When you require additional space on your Persistent Disk volumes, resize your disks rather than repartitioning and formatting.
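A sketch of growing a disk in place; after resizing the disk, you also extend the file system so that the operating system can use the new space. The disk name, size, zone, and device path are examples:

```bash
# Grow the disk to 1 TiB. You can resize a disk while it's attached and in use.
gcloud compute disks resize my-data-disk \
    --size=1TB \
    --zone=us-west1-a

# Then grow the file system. For an ext4 file system:
sudo resize2fs /dev/sdb
```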

Performance

Persistent Disk performance is predictable and scales linearly with provisioned capacity until the limits for a VM's provisioned vCPUs are reached. For more information about performance scaling limits and optimization, see Configure disks to meet performance requirements.

Standard Persistent Disk volumes are efficient and economical for handling sequential read/write operations, but they aren't optimized to handle high rates of random input/output operations per second (IOPS). If your apps require high rates of random IOPS, use SSD or extreme Persistent Disk. SSD Persistent Disk is designed for single-digit millisecond latencies. Observed latency is application specific.

Compute Engine optimizes performance and scaling on Persistent Disk volumes automatically. You don't need to stripe multiple disks together or pre-warm disks to get the best performance. When you need more disk space or better performance, resize your disks and possibly add more vCPUs to add more storage space, throughput, and IOPS. Persistent Disk performance is based on the total Persistent Disk capacity attached to a VM and the number of vCPUs that the VM has.

For boot devices, you can reduce costs by using a standard Persistent Disk. Small, 10 GiB Persistent Disk volumes can work for basic boot and package management use cases. However, to ensure consistent performance for more general use of the boot device, use a balanced Persistent Disk as your boot disk.
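For example, a minimal sketch of creating a VM with a balanced Persistent Disk boot disk; the instance name, zone, and size are placeholders:

```bash
gcloud compute instances create my-vm \
    --zone=us-west1-a \
    --boot-disk-type=pd-balanced \
    --boot-disk-size=20GB
```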

Because Persistent Disk write operations contribute to the cumulative network egress traffic for your VM, Persistent Disk write operations are capped by the network egress cap for your VM.

Reliability

Persistent Disk has built-in redundancy to protect your data against equipment failure and to ensure data availability through data center maintenance events. Checksums are calculated for all Persistent Disk operations, so we can ensure that what you read is what you wrote.

Additionally, you can create snapshots of Persistent Disk to protect against data loss due to user error. Snapshots are incremental and take only minutes to create, even if you snapshot disks that are attached to running VMs.
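For example, a minimal sketch of snapshotting a disk with the gcloud CLI; the disk name, zone, and snapshot name are placeholders:

```bash
# Snapshot a disk, even while it's attached to a running VM.
gcloud compute disks snapshot my-data-disk \
    --zone=us-west1-a \
    --snapshot-names=my-data-disk-snap-001
```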

Regional Persistent Disk

Regional Persistent Disk volumes have storage qualities that are similar to zonal Persistent Disk. However, regional Persistent Disk volumes provide durable storage and replication of data between two zones in the same region.

About synchronous disk replication

When you create a new Persistent Disk, you can either create the disk in one zone, or replicate it across two zones within the same region.

For example, if you create one disk in a zone, such as in us-west1-a, you have one copy of the disk. A disk created in only one zone is referred to as a zonal disk. You can increase the disk's availability by storing another copy of the disk in a different zone within the region, such as in us-west1-b.

Persistent Disk volumes replicated across two zones in the same region are called Regional Persistent Disk volumes. You can also use Hyperdisk Balanced High Availability for cross-zonal synchronous replication of Google Cloud Hyperdisk.
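For example, a minimal sketch of creating a regional Persistent Disk volume replicated across the two zones used in the example above; the disk name, size, and type are placeholders:

```bash
# Create a disk replicated synchronously between us-west1-a and us-west1-b.
gcloud compute disks create my-regional-disk \
    --size=200GB \
    --type=pd-balanced \
    --region=us-west1 \
    --replica-zones=us-west1-a,us-west1-b
```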

It's unlikely for a region to fail altogether, but zonal failures can happen. Replicating within the region to different zones, as shown in the following image, helps with availability and reduces disk latency. If both replication zones fail, it's considered a region-wide failure.

Figure: A VM with a regional disk. The disk has two replicas, one in the same zone as the VM and one in a second zone. The disk is replicated in two zones.

In the replicated scenario, the data is available in the local zone (us-west1-a), which is the zone where the VM is running. The data is then replicated to another zone in the region (us-west1-b). One of the replica zones must be the zone where the VM is running.

If a zonal outage occurs, you can usually fail over your workload running on Regional Persistent Disk to another zone. To learn more, see Regional Persistent Disk failover.
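As a hedged sketch of the attach step during failover, assuming a standby VM already exists in the surviving zone; see the linked failover page for the full procedure:

```bash
# Attach the regional disk to a standby VM in the surviving zone.
# --force-attach attaches the disk even if it's still attached to the
# VM in the unavailable zone.
gcloud compute instances attach-disk standby-vm \
    --disk=my-regional-disk \
    --disk-scope=regional \
    --zone=us-west1-b \
    --force-attach
```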

Design considerations for Regional Persistent Disk

If you're designing robust systems or high availability services on Compute Engine, use Regional Persistent Disk combined with other best practices, such as backing up your data using snapshots. Regional Persistent Disk volumes are also designed to work with regional managed instance groups.

Performance

Regional Persistent Disk volumes are designed for workloads that require a lower Recovery Point Objective (RPO) and Recovery Time Objective (RTO) compared to using Persistent Disk snapshots.

Regional Persistent Disk volumes are an option when write performance is less critical than data redundancy across multiple zones.

Like zonal Persistent Disk, Regional Persistent Disk can achieve greater IOPS and throughput performance on VMs with a greater number of vCPUs. For more information about this and other limitations, see Configure disks to meet performance requirements.

When you need more disk space or better performance, you can resize your regional disks to add more storage space, throughput, and IOPS.
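For example, a minimal sketch of resizing a regional disk; the disk name, size, and region are placeholders:

```bash
gcloud compute disks resize my-regional-disk \
    --size=500GB \
    --region=us-west1
```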

Reliability

Compute Engine replicates the data of your regional Persistent Disk volumes to the zones you selected when you created your disks. The data of each replica is spread across multiple physical machines within the zone to ensure redundancy.

Similar to zonal Persistent Disk, you can create snapshots of Persistent Disk to protect against data loss due to user error. Snapshots are incremental and take only minutes to create, even if you snapshot disks that are attached to running VMs.

Limitations for Regional Persistent Disk

Storage interface types

The storage interface is chosen automatically for you when you create your instance or add Persistent Disk volumes to a VM. Tau T2A and third generation VMs (such as M3) use the NVMe interface for Persistent Disk.

Confidential VM instances also use NVMe Persistent Disk. All other Compute Engine machine series use the SCSI disk interface for Persistent Disk.

Most public images include both NVMe and SCSI drivers, and most include a kernel with optimized drivers that let your VM achieve the best performance using NVMe. Imported Linux images achieve the best performance with NVMe if they include kernel version 4.14.68 or later.

To determine if an operating system version supports NVMe, see the operating system details page.
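To see which interface the guest OS is using, you can list the attached block devices; a sketch assuming a Linux guest:

```bash
# NVMe disks appear as /dev/nvme0n1, /dev/nvme0n2, and so on.
# SCSI disks appear as /dev/sda, /dev/sdb, and so on.
lsblk
```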

Multi-writer mode

You can attach an SSD Persistent Disk in multi-writer mode to up to two N2 VMs simultaneously so that both VMs can read and write to the disk.

Persistent Disk in multi-writer mode provides a shared block storage capability and presents an infrastructural foundation for building highly available shared file systems and databases. These specialized file systems and databases should be designed to work with shared block storage and handle cache coherence between VMs by using tools such as SCSI Persistent Reservations.

However, Persistent Disk with multi-writer mode should generally not be used directly. Many file systems, such as EXT4, XFS, and NTFS, aren't designed to be used with shared block storage. For more information about best practices when sharing Persistent Disk between VMs, see Best practices.

If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine VMs.

To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API. For more information, see Share Persistent Disk volumes between VMs.
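For example, a sketch of creating an SSD Persistent Disk volume in multi-writer mode. Depending on your gcloud CLI version, the flag might be available only in the beta track (gcloud beta compute disks create):

```bash
# Create an SSD disk that up to two N2 VMs can attach with read-write access.
gcloud compute disks create my-shared-disk \
    --size=100GB \
    --type=pd-ssd \
    --zone=us-central1-a \
    --multi-writer
```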

Persistent Disk encryption

Compute Engine automatically encrypts your data before it travels outside of your VM to the Persistent Disk storage space. Each Persistent Disk remains encrypted either with system-defined keys or with customer-supplied keys. Google distributes Persistent Disk data across multiple physical disks in a manner that users don't control.

When you delete a Persistent Disk volume, Google discards the cipher keys, rendering the data irretrievable. This process is irreversible.

If you want to control the encryption keys that are used to encrypt your data, create your disks with your own encryption keys.
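For example, a minimal sketch of creating a disk with a customer-supplied encryption key. The key file is a JSON document whose format is described on the linked page; the file name here is a placeholder:

```bash
gcloud compute disks create my-encrypted-disk \
    --size=500GB \
    --zone=us-west1-a \
    --csek-key-file=example-key-file.json
```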

Restrictions

Persistent Disk and Colossus

Persistent Disk is designed to run in tandem with Google's distributed file system, Colossus. Persistent Disk drivers automatically encrypt data on the VM before it's transmitted from the VM onto the network. Then, Colossus persists the data. When Colossus reads the data, the driver decrypts the incoming data.

Figure: Persistent Disk volumes use Colossus for the storage backend.

Having disks as a service is useful in a number of cases.

What's next