Create an N1 VM that has attached GPUs
This document explains how to create a VM that has attached GPUs and uses an N1 machine family. You can use most N1 machine types except the N1 shared-core machine types.
Before you begin
- To review additional prerequisite steps, such as selecting an OS image and checking GPU quota, review the overview document.
- If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Required roles
To get the permissions that you need to create VMs, ask your administrator to grant you the Compute Instance Admin (v1) (`roles/compute.instanceAdmin.v1`) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to create VMs. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to create VMs:

- `compute.instances.create` on the project
- To use a custom image to create the VM: `compute.images.useReadOnly` on the image
- To use a snapshot to create the VM: `compute.snapshots.useReadOnly` on the snapshot
- To use an instance template to create the VM: `compute.instanceTemplates.useReadOnly` on the instance template
- To assign a legacy network to the VM: `compute.networks.use` on the project
- To specify a static IP address for the VM: `compute.addresses.use` on the project
- To assign an external IP address to the VM when using a legacy network: `compute.networks.useExternalIp` on the project
- To specify a subnet for your VM: `compute.subnetworks.use` on the project or on the chosen subnet
- To assign an external IP address to the VM when using a VPC network: `compute.subnetworks.useExternalIp` on the project or on the chosen subnet
- To set VM instance metadata for the VM: `compute.instances.setMetadata` on the project
- To set tags for the VM: `compute.instances.setTags` on the VM
- To set labels for the VM: `compute.instances.setLabels` on the VM
- To set a service account for the VM to use: `compute.instances.setServiceAccount` on the VM
- To create a new disk for the VM: `compute.disks.create` on the project
- To attach an existing disk in read-only or read-write mode: `compute.disks.use` on the disk
- To attach an existing disk in read-only mode: `compute.disks.useReadOnly` on the disk
You might also be able to get these permissions with custom roles or other predefined roles.
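If you administer the project yourself, one way to grant this role is with the gcloud CLI. This is a sketch; `PROJECT_ID` and `USER_EMAIL` are placeholders for your own project and principal:

```shell
# Grant the Compute Instance Admin (v1) role on a project.
# PROJECT_ID and USER_EMAIL are placeholders; substitute your own values.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.instanceAdmin.v1"
```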
Overview
The following GPU models can be attached to VMs that use the N1 machine family.

NVIDIA GPUs:

- NVIDIA T4: `nvidia-tesla-t4`
- NVIDIA P4: `nvidia-tesla-p4`
- NVIDIA P100: `nvidia-tesla-p100`
- NVIDIA V100: `nvidia-tesla-v100`

NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):

- NVIDIA T4 Virtual Workstation: `nvidia-tesla-t4-vws`
- NVIDIA P4 Virtual Workstation: `nvidia-tesla-p4-vws`
- NVIDIA P100 Virtual Workstation: `nvidia-tesla-p100-vws`

For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your VM.
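Before choosing a model, you can check which accelerator types are offered in the zone you plan to use by listing them with the gcloud CLI. The zone below is only an example:

```shell
# List the GPU (accelerator) types available in a zone.
# us-central1-a is an example; substitute the zone you plan to use.
gcloud compute accelerator-types list \
    --filter="zone:us-central1-a"
```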
You can create an N1 VM that has attached GPUs by using either the Google Cloud console, Google Cloud CLI, or REST.
Console
- In the Google Cloud console, go to the Create an instance page.
- Specify a Name for your VM. See Resource naming convention.
- Select a region and zone where GPUs are available. See the list of available GPU zones.
- In the Machine configuration section, select the GPUs machine family, and then do the following:
  - In the GPU type list, select one of the GPU models supported on N1 machines.
  - In the Number of GPUs list, select the number of GPUs.
  - If your GPU model supports NVIDIA RTX Virtual Workstations (vWS) for graphics workloads, and you plan to run graphics-intensive workloads on this VM, select Enable Virtual Workstation (NVIDIA GRID).
- In the Machine type list, select one of the preset N1 machine types. Alternatively, you can specify custom machine type settings.
- In the Boot disk section, click Change. This opens the Boot disk configuration page.
- On the Boot disk configuration page, do the following:
  - On the Public images tab, choose a supported Compute Engine image or Deep Learning VM Images.
  - Specify a boot disk size of at least 40 GB.
  - To confirm your boot disk options, click Select.
- Optional: In the VM provisioning model list, select a provisioning model.
- To create and start the VM, click Create.
gcloud
To create and start a VM, use the `gcloud compute instances create` command with the following flags.

If your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs. The `--provisioning-model=SPOT` flag is optional and configures your VMs as Spot VMs. For Spot VMs, the automatic restart and host maintenance options flags are disabled.
```
gcloud compute instances create VM_NAME \
    --machine-type MACHINE_TYPE \
    --zone ZONE \
    --boot-disk-size DISK_SIZE \
    --accelerator type=ACCELERATOR_TYPE,count=ACCELERATOR_COUNT \
    [--image IMAGE | --image-family IMAGE_FAMILY] \
    --image-project IMAGE_PROJECT \
    --maintenance-policy TERMINATE \
    [--provisioning-model=SPOT]
```
Replace the following:

- `VM_NAME`: the name for the new VM.
- `MACHINE_TYPE`: the machine type that you selected for your VM.
- `ZONE`: the zone for the VM. This zone must support the GPU type.
- `DISK_SIZE`: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
- `IMAGE` or `IMAGE_FAMILY` that supports GPUs. Specify one of the following:
  - `IMAGE`: the required version of a public image. For example, `--image debian-10-buster-v20200309`.
  - `IMAGE_FAMILY`: an image family. This creates the VM from the most recent, non-deprecated OS image. For example, if you specify `--image-family debian-10`, Compute Engine creates a VM from the latest version of the OS image in the Debian 10 image family.

  You can also specify a custom image or Deep Learning VM Images.
- `IMAGE_PROJECT`: the Compute Engine image project that the image family belongs to. If using a custom image or Deep Learning VM Images, specify the project that those images belong to.
- `ACCELERATOR_COUNT`: the number of GPUs that you want to add to your VM. See GPUs on Compute Engine for a list of GPU limits based on the machine type of your VM.
- `ACCELERATOR_TYPE`: the GPU model that you want to use. If you plan to run graphics-intensive workloads on this VM, use one of the virtual workstation models. Choose one of the following values:
  - NVIDIA GPUs:
    - NVIDIA T4: `nvidia-tesla-t4`
    - NVIDIA P4: `nvidia-tesla-p4`
    - NVIDIA P100: `nvidia-tesla-p100`
    - NVIDIA V100: `nvidia-tesla-v100`
  - NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):
    - NVIDIA T4 Virtual Workstation: `nvidia-tesla-t4-vws`
    - NVIDIA P4 Virtual Workstation: `nvidia-tesla-p4-vws`
    - NVIDIA P100 Virtual Workstation: `nvidia-tesla-p100-vws`

  For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your VM.
Example

For example, you can use the following `gcloud` command to start an Ubuntu 22.04 VM with one NVIDIA T4 GPU and 2 vCPUs in the `us-east1-d` zone.

```
gcloud compute instances create gpu-instance-1 \
    --machine-type n1-standard-2 \
    --zone us-east1-d \
    --boot-disk-size 40GB \
    --accelerator type=nvidia-tesla-t4,count=1 \
    --image-family ubuntu-2204-lts \
    --image-project ubuntu-os-cloud \
    --maintenance-policy TERMINATE
```
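After the create command returns, you can confirm that the GPU was attached by describing the instance. This sketch reuses the example instance name and zone:

```shell
# Show the attached accelerators for the example VM.
gcloud compute instances describe gpu-instance-1 \
    --zone us-east1-d \
    --format="value(guestAccelerators)"
```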
REST
Identify the GPU type that you want to add to your VM. Submit a GET request to list the GPU types that are available to your project in a specific zone.
If your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs. The `"provisioningModel": "SPOT"` parameter is optional and configures your VMs as Spot VMs. For Spot VMs, the automatic restart and host maintenance options flags are disabled.
```
GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/acceleratorTypes
```
Replace the following:

- `PROJECT_ID`: your project ID.
- `ZONE`: the zone from which you want to list the available GPU types.
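As a sketch, you can issue this GET request with `curl`, using an access token from the gcloud CLI; `PROJECT_ID` and `ZONE` remain placeholders:

```shell
# List accelerator types in a zone via the Compute Engine REST API.
# PROJECT_ID and ZONE are placeholders; substitute your own values.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/acceleratorTypes"
```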
Send a POST request to the instances.insert method. Include the `acceleratorType` parameter to specify which GPU type you want to use, and include the `acceleratorCount` parameter to specify how many GPUs you want to add. Also set the `onHostMaintenance` parameter to `TERMINATE`.
```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "machineType": "projects/PROJECT_ID/zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks": [
    {
      "type": "PERSISTENT",
      "initializeParams": {
        "diskSizeGb": "DISK_SIZE",
        "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
      },
      "boot": true
    }
  ],
  "name": "VM_NAME",
  "networkInterfaces": [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK"
    }
  ],
  "guestAccelerators": [
    {
      "acceleratorCount": ACCELERATOR_COUNT,
      "acceleratorType": "projects/PROJECT_ID/zones/ZONE/acceleratorTypes/ACCELERATOR_TYPE"
    }
  ],
  "scheduling": {
    ["automaticRestart": true],
    "onHostMaintenance": "TERMINATE",
    ["provisioningModel": "SPOT"]
  }
}
```
Replace the following:

- `VM_NAME`: the name of the VM.
- `PROJECT_ID`: your project ID.
- `ZONE`: the zone for the VM. This zone must support the GPU type.
- `MACHINE_TYPE`: the machine type that you selected for the VM. See GPUs on Compute Engine to see what machine types are available based on your desired GPU count.
- `IMAGE` or `IMAGE_FAMILY`: specify one of the following:
  - `IMAGE`: the required version of a public image. For example, `"sourceImage": "projects/debian-cloud/global/images/debian-10-buster-v20200309"`.
  - `IMAGE_FAMILY`: an image family. This creates the VM from the most recent, non-deprecated OS image. For example, if you specify `"sourceImage": "projects/debian-cloud/global/images/family/debian-10"`, Compute Engine creates a VM from the latest version of the OS image in the Debian 10 image family.

  You can also specify a custom image or Deep Learning VM Images.
- `IMAGE_PROJECT`: the Compute Engine image project that the image family belongs to. If using a custom image or Deep Learning VM Images, specify the project that those images belong to.
- `DISK_SIZE`: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
- `NETWORK`: the VPC network that you want to use for the VM. You can specify `default` to use your default network.
- `ACCELERATOR_COUNT`: the number of GPUs that you want to add to your VM. See GPUs on Compute Engine for a list of GPU limits based on the machine type of your VM.
- `ACCELERATOR_TYPE`: the GPU model that you want to use. If you plan to run graphics-intensive workloads on this VM, use one of the virtual workstation models. Choose one of the following values:
  - NVIDIA GPUs:
    - NVIDIA T4: `nvidia-tesla-t4`
    - NVIDIA P4: `nvidia-tesla-p4`
    - NVIDIA P100: `nvidia-tesla-p100`
    - NVIDIA V100: `nvidia-tesla-v100`
  - NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):
    - NVIDIA T4 Virtual Workstation: `nvidia-tesla-t4-vws`
    - NVIDIA P4 Virtual Workstation: `nvidia-tesla-p4-vws`
    - NVIDIA P100 Virtual Workstation: `nvidia-tesla-p100-vws`

  For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your VM.
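One way to send this POST request is with `curl`, passing the request body as a JSON file. This is a sketch with placeholder values; `request.json` is a hypothetical file holding the request body with your substitutions made and the optional bracketed fields resolved:

```shell
# Create the VM via the instances.insert REST method.
# PROJECT_ID and ZONE are placeholders; request.json contains the request body.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @request.json \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances"
```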
Install drivers
To install the drivers, choose one of the following options:
- If you plan to run graphics-intensive workloads, such as those for gaming and visualization, install drivers for the NVIDIA RTX Virtual Workstation.
- For most workloads, install the GPU drivers.
What's next?
- Learn more about GPU platforms.
- Add Local SSDs to your instances. Local SSD devices pair well with GPUs when your apps require high-performance storage.
- Install the GPU drivers. If you enabled an NVIDIA RTX Virtual Workstation, install a driver for the virtual workstation.
- To handle GPU host maintenance, see Handling GPU host maintenance events.