Share disks between instances


You can access the same disk from multiple virtual machine (VM) instances by attaching the disk to each instance. You can attach a disk in read-only mode or multi-writer mode to an instance.

With read-only mode, multiple instances can only read data from the disk. None of the instances can write to the disk. Sharing a disk in read-only mode between instances is less expensive than having copies of the same data on multiple disks.

With multi-writer mode, multiple instances can read and write to the same disk. This is useful for highly available (HA) shared file systems and databases like SQL Server Failover Cluster Instances (FCI).

You can share a zonal disk only between instances in the same zone. Regional disks can be shared only with instances in the same zones as the disk's replicas.

There are no additional costs associated with sharing a disk between instances. Compute Engine instances don't have to use the same machine type to share a disk, but each instance must use a machine type that supports disk sharing.

This document discusses multi-writer and read-only disk sharing in Compute Engine, including the supported disk types and performance considerations.

Before you begin

Enable disk sharing

You can attach an existing Hyperdisk or Persistent Disk volume to multiple instances. However, for Hyperdisk volumes, you must first put the disk in multi-writer or read-only mode by setting its access mode.

A Hyperdisk volume's access mode is a property that determines how instances can access the disk.

The available access modes are as follows:

  • Single-writer mode: the default access mode. The disk can be attached to at most one instance at a time, and that instance has read-write access.
  • Read-only mode: the disk can be attached to multiple instances, and every instance has read-only access.
  • Multi-writer mode: the disk can be attached to multiple instances, and every instance has read and write access.

Support for each access mode varies by Hyperdisk type, as stated in the following table. You can't set the access mode for Hyperdisk Throughput or Hyperdisk Extreme volumes.

Hyperdisk type                                              Supported access modes
Hyperdisk Balanced, Hyperdisk Balanced High Availability    Single-writer mode, Multi-writer mode
Hyperdisk ML                                                Single-writer mode, Read-only mode
Hyperdisk Throughput, Hyperdisk Extreme                     Single-writer mode

For disks that can be shared between instances, you can set the access mode at or after disk creation. For instructions, see Set the disk's access mode.
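For example, a minimal gcloud CLI sketch for putting an existing Hyperdisk volume into read-only mode might look like the following. It assumes the --access-mode flag of the gcloud compute disks update command; the disk and zone names are placeholders:

    gcloud compute disks update DISK_NAME \
        --zone=ZONE \
        --access-mode=READ_ONLY_MANY

The access mode values READ_WRITE_SINGLE, READ_ONLY_MANY, and READ_WRITE_MANY correspond to single-writer, read-only, and multi-writer mode, respectively. Detach the disk from all instances before you change its access mode.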

Read-only mode for Hyperdisk and Persistent Disk

This section discusses sharing a single disk in read-only mode between multiple instances.

Supported disk types for read-only mode

You can attach these disk types to multiple instances in read-only mode:

Performance in read-only mode

Attaching a disk in read-only mode to multiple instances doesn't affect the disk's performance. Each instance can still reach the maximum disk performance possible for the instance's machine type.

Limitations for sharing disks in read-only mode

How to share a disk in read-only mode between instances

If you're not using Hyperdisk ML, attach the disk to multiple instances by following the instructions in Attach a non-boot disk to an instance.

To attach a Hyperdisk ML volume in read-only mode to multiple instances, you must first set the disk's access mode to read-only mode. After you set the access mode, attach the Hyperdisk ML volume to your instances.
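For example, with the gcloud CLI you can request read-only access at attach time with the --mode=ro flag; repeat the command for each instance (the names are placeholders):

    gcloud compute instances attach-disk INSTANCE_NAME \
        --disk=DISK_NAME \
        --mode=ro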

Multi-writer mode for Hyperdisk

Disks in multi-writer mode are suitable for use cases like the following:

If your primary goal is shared file storage among compute instances, consider one of the following options:

Supported Hyperdisk and machine types for multi-writer mode

You can use Hyperdisk Balanced and Hyperdisk Balanced High Availability volumes in multi-writer mode. You can attach a single volume in multi-writer mode to at most 8 instances. You can't attach volumes in multi-writer mode to bare metal instances.

Hyperdisk Balanced supports multi-writer mode for the following machine types:

Hyperdisk Balanced High Availability supports multi-writer mode for the following machine types:

Multi-writer mode for Hyperdisk supports the NVMe interface. If you attach a disk in multi-writer mode to an instance, the instance's boot disk must also use the NVMe interface.

Supported file systems for multi-writer mode

To access a disk from multiple instances, use one of the following options:

Hyperdisk performance in multi-writer mode

When you attach a disk in multi-writer mode to multiple instances, the disk's provisioned performance is divided evenly across all attached instances, including instances that aren't running or aren't actively using the disk. However, the maximum performance for each instance is ultimately limited by the throughput and IOPS limits of that instance's machine type.

For example, suppose you attach a Hyperdisk Balanced volume provisioned with 100,000 IOPS to 2 instances. Each instance gets 50,000 IOPS concurrently.

The following table shows how much performance each instance in this example would get depending on how many instances you attach the disk to. Each time you attach a disk to another instance, Compute Engine asynchronously adjusts the performance allotted to each previously attached instance.

Number of instances attached           1         2        3        4        5        6        7        8
Max IOPS per instance                  100,000   50,000   ~33,333  25,000   20,000   ~16,667  ~14,286  12,500
Max throughput per instance (MiBps)    1,200     600      400      300      240      200      ~172     150

Limitations for sharing Hyperdisk volumes in multi-writer mode

Available regions

You can enable multi-writer mode in all the regions where Hyperdisk Balanced and Hyperdisk Balanced High Availability are available. For a list of supported regions, see Regional availability for Hyperdisk Balanced and Regional availability for Hyperdisk Balanced High Availability.

I/O fencing with persistent reservations

Google recommends using persistent reservations (PR) with disks in multi-writer mode to provide I/O fencing. Persistent reservations manage access to the disk between instances, which prevents data corruption caused by instances simultaneously writing to the same portion of the disk.

Hyperdisk volumes in multi-writer mode support NVMe (spec 1.2.1) reservations.

Supported reservation modes

The following reservation modes are supported:

  1. Write Exclusive: there is a single reservation holder, which is the only writer. All other registrants and non-registrants have read-only access.
  2. Write Exclusive - Registrants Only: there is a single reservation holder. All registrants have read and write access to the disk; non-registrants have read-only access. For an example, see the sketch after this list.
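As an illustration, the following sketch acquires a Write Exclusive - Registrants Only reservation using the nvme-cli tool on Linux. It assumes nvme-cli is installed in the guest; the device path and key values are placeholders:

    # Register this instance's reservation key with the shared namespace
    # (each instance registers its own key; --rrega=0 means register).
    sudo nvme resv-register /dev/nvme0n2 --nrkey=0xAB01 --rrega=0

    # Acquire a Write Exclusive - Registrants Only reservation
    # (--rtype=3 in the NVMe spec; --racqa=0 means acquire).
    sudo nvme resv-acquire /dev/nvme0n2 --crkey=0xAB01 --rtype=3 --racqa=0

    # Report the current registrants and reservation holder.
    sudo nvme resv-report /dev/nvme0n2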

The following reservation modes aren't supported:

NVMe Get Features - Host Identifier is supported. The instance number is used as the default Host ID.

The following NVMe reservation features are not supported:

Supported commands

NVMe reservations support the following commands:

NVMe reservations don't support the following commands:

Before you attach a disk in multi-writer mode to multiple instances, you must set the disk's access mode to multi-writer. You can set the access mode for a disk when you create it.

You can also set the access mode for an existing disk, but you must first detach the disk from all instances.

To create and use a new disk in multi-writer mode, follow these steps (a gcloud sketch follows the list):

  1. Create the disk, setting its access mode to multi-writer. For instructions, see Add a Hyperdisk to your instance.
  2. Attach the disk to each instance.
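A minimal gcloud sketch of this procedure for a Hyperdisk Balanced volume might look like the following. It assumes the --access-mode flag of the gcloud compute disks create command; the names and size are placeholders:

    # Create a Hyperdisk Balanced volume in multi-writer mode.
    gcloud compute disks create DISK_NAME \
        --zone=ZONE \
        --type=hyperdisk-balanced \
        --size=100GB \
        --access-mode=READ_WRITE_MANY

    # Attach the disk; repeat for each instance (up to 8).
    gcloud compute instances attach-disk INSTANCE_NAME \
        --zone=ZONE \
        --disk=DISK_NAME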

To use an existing disk in multi-writer mode, follow these steps (also sketched after the list):

  1. Detach the disk from all instances.
  2. Set the disk's access mode to multi-writer.
  3. Attach the disk to each instance.
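Under the same assumption about the --access-mode flag, a sketch for converting an existing disk might be:

    # Detach the disk from every instance it's attached to.
    gcloud compute instances detach-disk INSTANCE_NAME \
        --zone=ZONE \
        --disk=DISK_NAME

    # Switch the disk to multi-writer mode.
    gcloud compute disks update DISK_NAME \
        --zone=ZONE \
        --access-mode=READ_WRITE_MANY

    # Reattach the disk to each instance.
    gcloud compute instances attach-disk INSTANCE_NAME \
        --zone=ZONE \
        --disk=DISK_NAME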

Multi-writer mode for Persistent Disk volumes

You can attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 virtual machine (VM) instances simultaneously so that both VMs can read and write to the disk.

If you have more than 2 N2 VMs or you're using any other machine series, you can use one of the following options:

To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk volume and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API.

Persistent Disk volumes in multi-writer mode provide a shared block storage capability and present an infrastructural foundation for building distributed storage systems and similar highly available services. When using Persistent Disk volumes in multi-writer mode, use a scale-out storage software system that can coordinate access to Persistent Disk devices across multiple VMs. Examples of these storage systems include Lustre and IBM Spectrum Scale. Most single-VM file systems, such as EXT4, XFS, and NTFS, are not designed to be used with shared block storage.

For more information, see Best practices in this document. If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine instances.

Persistent Disk volumes in multi-writer mode support a subset of SCSI-3 Persistent Reservations (SCSI PR) commands. High-availability applications can use these commands for I/O fencing and failover configurations.

The following SCSI PR commands are supported:

For instructions, see Share an SSD Persistent Disk volume in multi-writer mode between VMs.
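As an illustration of how a clustered application might use SCSI PR for fencing, the following sketch uses the sg_persist utility from the sg3_utils package. It assumes sg3_utils is installed in the guest and that the reservation type you choose is in the supported subset; the device path and keys are placeholders:

    # Register a reservation key for this VM
    # (each VM registers its own key).
    sudo sg_persist --out --register --param-sark=0x1 /dev/sdb

    # Take a Write Exclusive - Registrants Only reservation
    # (SCSI PR type 5) with the registered key.
    sudo sg_persist --out --reserve --param-rk=0x1 --prout-type=5 /dev/sdb

    # Read back the registered keys and the active reservation.
    sudo sg_persist --in --read-keys /dev/sdb
    sudo sg_persist --in --read-reservation /dev/sdb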

Supported Persistent Disk types for multi-writer mode

You can simultaneously attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 VMs.

Best practices for multi-writer mode

Persistent Disk performance in multi-writer mode

Persistent Disk volumes created in multi-writer mode have specific IOPS and throughput limits.

Zonal SSD Persistent Disk in multi-writer mode

Maximum sustained IOPS
  Read IOPS per GB               30
  Write IOPS per GB              30
  Read IOPS per instance         15,000–100,000*
  Write IOPS per instance        15,000–100,000*

Maximum sustained throughput (MB/s)
  Read throughput per GB         0.48
  Write throughput per GB        0.48
  Read throughput per instance   240–1,200*
  Write throughput per instance  240–1,200*

* Persistent Disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.

Attaching a multi-writer disk to multiple virtual machine instances does not affect aggregate performance or cost. Each machine gets a share of the per-disk performance limit.

To learn how to share persistent disks between multiple VMs, see Share persistent disks between VMs.

Restrictions for sharing a disk in multi-writer mode

Share an SSD Persistent Disk volume in multi-writer mode between VMs

You can share an SSD Persistent Disk volume in multi-writer mode between N2 VMs in the same zone. See Persistent Disk multi-writer mode for details about how this mode works. You can create and attach multi-writer Persistent Disk volumes using the following process:

gcloud

Create and attach a zonal Persistent Disk volume by using the gcloud CLI:

  1. Use the gcloud beta compute disks create command to create a zonal Persistent Disk volume. Include the --multi-writer flag to indicate that the disk must be shareable between the VMs in multi-writer mode.
    gcloud beta compute disks create DISK_NAME \
    --size DISK_SIZE \
    --type pd-ssd \
    --multi-writer
    Replace the following:
    • DISK_NAME: the name of the new disk
    • DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or from 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
  2. After you create the disk, attach it to any running or stopped VM with an N2 machine type. Use the gcloud compute instances attach-disk command:
    gcloud compute instances attach-disk INSTANCE_NAME \
    --disk DISK_NAME
    Replace the following:
    • INSTANCE_NAME: the name of the N2 VM where you are adding the new zonal Persistent Disk volume
    • DISK_NAME: the name of the new disk that you are attaching to the VM
  3. Repeat the gcloud compute instances attach-disk command, but replace INSTANCE_NAME with the name of your second VM.
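After both attach commands succeed, you can optionally confirm that the disk is shared by listing the instances it's attached to; this check is a sketch that reads the disk's users field:

    gcloud compute disks describe DISK_NAME \
        --format="value(users)"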

After you create and attach a new disk to an instance, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk. You cannot mount the disk to multiple VMs using the same process you would normally use to mount the disk to a single VM.
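Before creating the file system, you can verify on each VM that the shared device is visible. On most Compute Engine Linux images, udev creates /dev/disk/by-id/google-* symlinks for attached disks, though the exact device names vary; this check is a sketch:

    # List block devices and look for the newly attached disk.
    lsblk

    # List the disk symlinks that Compute Engine Linux images typically create.
    ls -l /dev/disk/by-id/google-*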

REST

Use the Compute Engine API to create and attach an SSD Persistent Disk volume to N2 VMs in multi-writer mode.

  1. In the API, construct a POST request to create a zonal Persistent Disk volume using the disks.insert method. Include the name, sizeGb, and type properties. To create this new disk as an empty and unformatted non-boot disk, don't specify a source image or a source snapshot. Include the multiWriter property with a value of true to indicate that the disk must be shareable between the VMs in multi-writer mode.
    POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks

    {
      "name": "DISK_NAME",
      "sizeGb": "DISK_SIZE",
      "type": "zones/ZONE/diskTypes/pd-ssd",
      "multiWriter": true
    }
    Replace the following:
    • PROJECT_ID: your project ID
    • ZONE: the zone where your VM and new disk are located
    • DISK_NAME: the name of the new disk
    • DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or from 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
  2. To attach the disk to an instance, construct a POST request to the compute.instances.attachDisk method. Include the URL to the zonal Persistent Disk volume that you just created:
    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk

    {
      "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
    }
    Replace the following:
    • PROJECT_ID: your project ID
    • ZONE: the zone where your VM and new disk are located
    • INSTANCE_NAME: the name of the VM where you are adding the new Persistent Disk volume.
    • DISK_NAME: the name of the new disk
  3. To attach the disk to a second VM, repeat the instances.attachDisk request from the previous step, setting INSTANCE_NAME to the name of the second VM.


What's next