Create a user cluster using GKE On-Prem API clients

This page describes how to create a user cluster by using the Google Cloud console, the Google Cloud CLI (gcloud CLI), or Terraform.

What is the GKE On-Prem API?

The GKE On-Prem API is a Google Cloud-hosted API that lets you manage the lifecycle of your on-premises clusters using Terraform and standard Google Cloud applications. The GKE On-Prem API runs in Google Cloud's infrastructure. Terraform, the console, and the gcloud CLI are clients of the API, and they use the API to create clusters in your data center.

To manage the lifecycle of your clusters, the GKE On-Prem API must store metadata about your cluster's state in Google Cloud, using the Google Cloud region that you specify when creating the cluster. This metadata lets the API manage the cluster lifecycle and doesn't include workload-specific data.

When you create a cluster using a GKE On-Prem API client, you specify a Google Cloud project. After the cluster is created, it is automatically registered to the specified project's fleet. This project is referred to as the fleet host project. The fleet host project can't be changed after the cluster is created.

If you prefer, you can create a user cluster by creating a user cluster configuration file and using bmctl, as described in Creating a user cluster.

If you want to use Terraform, the console, or the gcloud CLI to manage the lifecycle of clusters that were created using bmctl, see Configure a user cluster to be managed by the GKE On-Prem API.

Before you begin

This section describes the requirements for creating a user cluster using GKE On-Prem API clients.

Grant IAM permissions

If you aren't a project owner, you must be granted roles/gkeonprem.admin.

If you want to access the Google Kubernetes Engine pages in the console, you must also have the following roles:

After the cluster is created, if you aren't a project owner and you want to use the connect gateway to connect to the user cluster from the command line, the following roles are required:

For information on granting the roles, see Manage access to projects, folders, and organizations.

Required Google APIs

Make sure that all the required Google APIs are enabled in the fleet host project.

If you will be using the gcloud CLI to create the cluster, you must enable the GKE On-Prem API. If you are using the console to create the cluster, it enables the GKE On-Prem API automatically.

gcloud services enable --project FLEET_HOST_PROJECT_ID \
    gkeonprem.googleapis.com
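If you want to confirm that the API is enabled before proceeding, you can list the project's enabled services and filter for it. This is a sketch; the filter expression assumes the service resource's `config.name` field:

```shell
gcloud services list --enabled \
    --project FLEET_HOST_PROJECT_ID \
    --filter="config.name=gkeonprem.googleapis.com"
```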

Admin cluster prerequisites

You need a working admin cluster before you can create a user cluster. The admin cluster must:

Cluster node machine prerequisites

Review Cluster node machine prerequisites to make sure that the machines that will be running the user cluster meet the prerequisites.

Command line access

After the cluster is created, if you want to use the connect gateway to run kubectl against the user cluster on computers other than the admin workstation, install the following command-line tools on the computer that you plan to use.

You can use Terraform, the Google Cloud console, or the Google Cloud CLI (gcloud CLI) to create a cluster that is managed by the GKE On-Prem API. If this is your first time installing Google Distributed Cloud, you might find the console the easiest tool to use.

After you are more familiar with the information that you need to provide to create clusters, you might find Terraform or the gcloud CLI more convenient, particularly if you will be creating more than one cluster. Terraform is an industry-standard infrastructure as code tool. If your organization already uses Terraform, then you will likely want to use it for creating clusters and managing the cluster lifecycle.

With the gcloud CLI, you can save the command with its arguments to a text file and make changes as needed to create additional clusters. If you are using a CI/CD tool, such as Cloud Build, you can use the gcloud commands to create a cluster and node pool and specify the --impersonate-service-account flag to automate the creation.

Console

Most of the settings in the console correspond to the fields in the cluster configuration file.

  1. In the console, go to the Create a bare metal cluster page.
    Go to Create a bare metal cluster
  2. Select the Google Cloud project that you want to create the cluster in. The selected project is also used as the fleet host project. This must be the same project that the admin cluster is registered to. After the user cluster is created, it is automatically registered to the selected project's fleet.
  3. Click Next to begin configuring the cluster.

The following sections guide you through configuring the user cluster.

Cluster basics

Enter basic information about the cluster.

  1. Enter a Name for the user cluster.
  2. Under Admin cluster, select the admin cluster from the list.
  3. In the Google Cloud API Location field, select the Google Cloud region from the list. This setting specifies the region where the following APIs and services run:
    • GKE On-Prem API (gkeonprem.googleapis.com)
    • Fleet service (gkehub.googleapis.com)
    • Connect service (gkeconnect.googleapis.com)
      This setting also controls the region in which the following are stored:
    • The user cluster metadata that the GKE On-Prem API needs to manage the cluster lifecycle
    • The Cloud Logging and Cloud Monitoring data of system components
    • The Admin Audit log created by Cloud Audit Logs
      The cluster name, project, and location together uniquely identify the cluster in Google Cloud.
  4. Select the version for your user cluster. User clusters must be either the same minor version as the admin cluster or one minor version lower than the admin cluster.
  5. As the cluster creator, you are granted cluster admin privileges to the cluster. Optionally, enter the email address of another user who will administer the cluster in the Admin user field.
    When the cluster is created, the GKE On-Prem API applies the Kubernetes role-based access control (RBAC) policies to the cluster to grant you and other admin users the Kubernetes clusterrole/cluster-admin role, which provides full access to every resource in the cluster in all namespaces.
  6. In the Node configuration section, specify the following:
    • Maximum Pods per node: Enter the maximum number of Pods that can be run on a single node. Allowable values are between 32 and 250, inclusive. Kubernetes assigns a Classless Inter-Domain Routing (CIDR) block to each node so that each Pod can have a unique IP address. The size of the CIDR block corresponds to the maximum number of Pods per node. For more information about setting the maximum number of Pods per node, see Pod networking.
    • Container runtime: containerd is the only available container runtime for your cluster.
  7. Click Next to go to the Networking section.
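The relationship between the maximum Pods per node and the size of each node's CIDR block (step 6 above) can be sketched in shell. The 2x headroom used here is a common Kubernetes sizing convention, not necessarily the exact allocator used by Google Distributed Cloud:

```shell
# Smallest node CIDR prefix whose block holds at least twice the
# configured maximum Pods per node (illustrative sizing rule).
node_cidr_prefix() {
  pods=$1
  need=$(( pods * 2 ))   # 2x headroom for Pod IP churn
  size=1
  prefix=32
  while [ "$size" -lt "$need" ]; do
    size=$(( size * 2 ))
    prefix=$(( prefix - 1 ))
  done
  echo "/$prefix"
}

node_cidr_prefix 32    # lowest allowed value   -> /26
node_cidr_prefix 110   # a common default       -> /24
node_cidr_prefix 250   # highest allowed value  -> /23
```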

Networking

In this section, you specify the IP addresses for your cluster's nodes, Pods, and Services. If you are using bundled load balancing with MetalLB, you configure that as well.

  1. In the Control plane node section, enter the IPv4 address of each control plane node. Control plane nodes run the system workload. Typically, this is either a single machine if using a minimum deployment or three machines if using a high availability (HA) deployment. Specify an odd number of nodes to have a majority quorum for HA. This field can be changed whenever you update or upgrade a cluster.
    Click + Add IP Address as needed to enter more IP addresses.
  2. In the Load balancer section, select the load balancer from the Mode list to set up for your cluster. See Overview of load balancers for more information.

Bundled with MetalLB

Configure load balancing with the bundled MetalLB load balancer. With this option, Google Distributed Cloud deploys Layer 4 load balancers that run on either a dedicated pool of worker nodes or on the same nodes as the control plane.

  1. In the Load balancer node pools section, select one of the following:
    • Use control plane nodes: Choose this option to run the load balancers on the same nodes as the control plane.
    • Create load balancer node pool: Choose this advanced option if you need to run the load balancers on a dedicated pool of worker nodes. All nodes in the load balancer node pool must be in the same Layer 2 subnet as the load balancer virtual IPs (VIPs) that you configure in the Load balancer address pools section.
      1. In the Load balancer node pool IP 1 field, enter an IPv4 address for a node in your load-balancer node pool.
      2. Click + Add IP address as needed to enter additional IP addresses.
  2. In the Load balancer address pools section, add one or more address pools for the MetalLB controller to choose from and assign to Services of type LoadBalancer. The ingress VIP, which you specify in the Virtual IPs section, must be in one of these pools.
    1. Enter a name for the address pool.
    2. Enter an IP address range in either CIDR notation (for example: 192.0.2.0/26) or range notation (for example: 192.0.2.64-192.0.2.72). To specify a single IP address in a pool, use /32 in the CIDR notation (for example: 192.0.2.1/32).
    3. If the ingress VIP isn't in the address range, select + Add IP Address Range and enter another address range that includes the ingress VIP.
      The IP addresses in each pool cannot overlap, and must be in the same subnet as the cluster nodes.
    4. Under Assignment of IP addresses, select one of the following:
      * Automatic: Choose this option if you want the MetalLB controller to automatically assign IP addresses from the address pool to Services of type LoadBalancer.
      * Manual: Choose this option if you intend to use addresses from the pool to manually specify addresses for Services of type LoadBalancer.
    5. Click Avoid buggy IP addresses if you want the MetalLB controller to not use addresses from the pool that end in .0 or .255. This avoids the problem of buggy consumer devices mistakenly dropping traffic sent to those special IP addresses.
    6. When you're finished, click Done.
    7. If needed, click Add Address Pool.
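Because the ingress VIP must land inside one of the address pools, it can be worth checking that before you finish the configuration. The following is a small, hypothetical shell helper (not part of any Google tooling) for CIDR-notation pools:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# cidr_contains CIDR IP: succeed if IP falls inside CIDR.
cidr_contains() {
  net=${1%/*}
  bits=${1#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$net") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# Is an ingress VIP of 192.0.2.23 inside the pool 192.0.2.0/26?
if cidr_contains 192.0.2.0/26 192.0.2.23; then
  echo "ingress VIP is inside the pool"
fi
```

Range-notation pools (for example 192.0.2.64-192.0.2.72) would need extra handling; this sketch covers CIDR notation only.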

Manual load balancer

With manual load balancing, you configure your own load-balancing solutions for control plane and data plane traffic. You must configure your control plane VIP on the external load balancer before creating a cluster. The external control plane load balancer can also be used for data plane traffic, or you can set up a separate load balancer for the data plane. For more information, see Configure manual load balancing.

  3. In the Virtual IPs section, enter the following:
    • Control plane VIP: The virtual IP address (VIP) that you have chosen to configure on the load balancer for the Kubernetes API server of the user cluster.
    • Ingress VIP: The IP address that you have chosen to configure on the load balancer for the ingress proxy.

  4. In the Service and Pod CIDRs section, specify the Kubernetes Service and Pod IP address ranges in CIDR notation. They must not overlap with each other, nor with any address outside the cluster that you want to reach from inside the cluster. We recommend that you use the private IP address ranges defined by RFC 1918. The console provides the following default address ranges, but you can change them:
    • Service CIDR: 10.96.0.0/20. If you don't accept the default, enter a CIDR range between /24 and /12, where /12 provides the most IP addresses.
    • Pod CIDR: 192.168.0.0/16. If you don't accept the default, enter a CIDR range between /18 and /8, where /8 provides the most IP addresses.
  5. In the Advanced attributes section, optionally specify the following:
    • Proxy URL: The HTTP address of your proxy server. Include the port number even if it's the same as the scheme's default port, for example: http://my-proxy.example.local:80
    • URLs: A comma-separated list of IP addresses, IP address ranges, host names, and domain names that shouldn't go through the proxy server. When Google Distributed Cloud sends a request to one of these addresses, hosts, or domains, the request is sent directly.
  6. Click Next.
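As a quick sanity check on the CIDR sizes above, the number of addresses in an IPv4 block is 2^(32 - prefix length), which is easy to compute in shell:

```shell
# Addresses available in an IPv4 CIDR block of the given prefix length.
cidr_size() { echo $(( 1 << (32 - ${1#/}) )); }

cidr_size /20   # default Service CIDR  -> 4096 addresses
cidr_size /16   # default Pod CIDR      -> 65536 addresses
cidr_size /24   # smallest Service CIDR -> 256 addresses
```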

Storage

Google Distributed Cloud (software only) provides block and file storage interfaces. They have default options, but you can customize the configurations. For more information, see Configure local storage.

  1. Optionally, you can configure the following:
    • Local volume provisioner node mounts: Specifies the configuration for local PersistentVolumes (PVs) backed by mounted disks. You need to format and mount these disks, which you can do before or after cluster creation.
    • Local volume provisioner share: Specifies the configuration for local PersistentVolumes backed by subdirectories in a shared file system. These subdirectories are automatically created during cluster creation.
  2. Click Next.

Features

To help you monitor, troubleshoot, and operate your cluster, the following are enabled automatically and can't be disabled:

Create a node pool in the console

Your cluster must have at least one node pool for worker nodes. A node pool is a template for the groups of worker nodes created in this cluster.

In the console, you configure at least one node pool (or accept the default values) and then create the cluster. You can add additional node pools after the cluster is created. With the gcloud CLI, you create the cluster first and then add one or more node pools to the newly-created cluster.

  1. Click default pool in the left navigation bar.
  2. In the Node pool defaults section, enter the Node pool name or accept "default-pool" as the name.
  3. In the Worker nodes section, enter the IP addresses of machines for the cluster to run on.
  4. In the Node pool metadata (optional) section, if you want to add Kubernetes labels and taints, do the following:
    1. Click + Add Kubernetes Labels. Enter the Key and Value for the label. Repeat as needed.
    2. Click + Add Taint. Enter the Key, Value, and Effect for the taint. Repeat as needed.
  5. Click Verify and Complete to create the user cluster. It takes 15 minutes or more to create the user cluster. The console displays status messages as it verifies the settings and creates the cluster in your data center.
    If there is a problem with the configuration, the console displays an error message that should be clear enough for you to fix the configuration issue and try again to create the cluster.

gcloud CLI

You use the following command to create a user cluster:

gcloud container bare-metal clusters create

After creating the cluster, you need to create at least one node pool using the following command:

gcloud container bare-metal node-pools create

Most of the flags for creating the cluster and the node pool correspond to the fields in the user cluster configuration file. To help you get started, you can test the complete command in the Examples section. For information about the flags, see the sections that follow the examples, or refer to the gcloud CLI reference.

Before you begin

The version that you select when creating a user cluster must be a version that your admin cluster supports. Additionally, the latest minor or patch versions aren't available in the GKE On-Prem API until 7 to 14 days after the release. You can run a gcloud command to get a list of supported cluster versions that you can install.

  1. Make sure to update components:
    gcloud components update
  2. Get the name and the fleet membership location of your admin cluster:
    gcloud container fleet memberships list \
    --project=FLEET_HOST_PROJECT_ID
    Replace FLEET_HOST_PROJECT_ID with the ID of the project that the admin cluster is registered to.
    The output is similar to the following:
    NAME EXTERNAL_ID LOCATION
    admin-cluster-1 bb7803b4-8438-4b22-859f-4559b4b29072 global
    admin-cluster-2 ee16ee2b-6ec0-49fc-9413-3c89cbc70854 global
    admin-cluster-3 fc2b7ef5-39ff-4b63-b919-04c5adc67be4 us-west1
    The location specifies where the Fleet and Connect services run. Admin clusters created prior to 1.28 are managed by the global Fleet and Connect services. In 1.28 and later, you can specify either global or a Google Cloud region when you create the admin cluster. You specify the region in the --admin-cluster-membership-location flag in the example commands that follow.
  3. Get a list of available versions to install on the user cluster:
    gcloud container bare-metal clusters query-version-config \
    --admin-cluster-membership=ADMIN_CLUSTER_NAME \
    --admin-cluster-membership-project=FLEET_HOST_PROJECT_ID \
    --admin-cluster-membership-location=ADMIN_CLUSTER_REGION \
    --location=REGION
    Replace the following:
    • ADMIN_CLUSTER_NAME: The name of the admin cluster.
    • FLEET_HOST_PROJECT_ID: The ID of the project that the admin cluster is registered to.
    • ADMIN_CLUSTER_REGION: The admin cluster's fleet membership region. This is either global or a Google Cloud region. Use the location for the admin cluster from the output of gcloud container fleet memberships list.
    • REGION: The Google Cloud region that you will use when you create the cluster. This is the region in which the GKE On-Prem API and the Fleet and Connect services run. Specify us-west1 or another supported region.
      The output of the command is similar to the following:
versions:  
- version: 1.16.2  
- version: 1.16.1  
- version: 1.16.0  
- version: 1.15.7  
- version: 1.15.6  
- version: 1.15.5  

We suggest that you use the highest supported version to get the latest fixes and improvements.
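If you script the version lookup, the highest version can be picked out of output shaped like the above. This sketch assumes plain x.y.z version strings and GNU sort:

```shell
# Parse query-version-config output and pick the highest version.
output='versions:
- version: 1.16.2
- version: 1.16.1
- version: 1.15.7'

highest=$(printf '%s\n' "$output" | awk '/- version:/ {print $3}' | sort -V | tail -n 1)
echo "$highest"   # 1.16.2
```

In practice, you would pipe the real command output in place of the canned string, for example: `gcloud container bare-metal clusters query-version-config ... | awk '/- version:/ {print $3}' | sort -V | tail -n 1`.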

Examples

This section provides an example of a command that creates a cluster using the MetalLB load balancer and an example using a manual load balancer. The information that you specify varies depending on the type of load balancer you will be using. See Overview of load balancers for more information.

The examples create the cluster without any node pools. After the cluster is running, you must add a node pool before deploying workloads.

MetalLB

This example shows how to create a user cluster with the bundled MetalLB load balancer.

gcloud container bare-metal clusters create USER_CLUSTER_NAME \
--project=FLEET_HOST_PROJECT_ID \
--admin-cluster-membership=ADMIN_CLUSTER_NAME \
--admin-cluster-membership-project=FLEET_HOST_PROJECT_ID \
--admin-cluster-membership-location=ADMIN_CLUSTER_REGION \
--location=REGION \
--version=VERSION \
--admin-users=YOUR_EMAIL_ADDRESS \
--admin-users=ANOTHER_EMAIL_ADDRESS \
--metal-lb-address-pools='pool=NAME,avoid-buggy-ips=True|False,manual-assign=True|False,addresses=IP_ADDRESS_RANGE_1;IP_ADDRESS_RANGE_2;...' \
--control-plane-node-configs='node-ip=CP_IP_ADDRESS_1,labels=CP_KEY_1.1=CP_VALUE_1.1;CP_KEY_1.2=CP_VALUE_1.2;...' \
--control-plane-vip=CONTROL_PLANE_VIP \
--control-plane-load-balancer-port=CONTROL_PLANE_LB_PORT \
--ingress-vip=INGRESS_VIP \
--island-mode-service-address-cidr-blocks=SERVICE_CIDR_BLOCK \
--island-mode-pod-address-cidr-blocks=POD_CIDR_BLOCK \
--lvp-share-path=/mnt/localpv-share \
--lvp-share-storage-class=local-shared \
--lvp-node-mounts-config-path=/mnt/localpv-disk \
--lvp-node-mounts-config-storage-class=local-disks

Replace the following:

MetalLB address pools

'pool=NAME,avoid-buggy-ips=True|False,manual-assign=True|False,addresses=IP_ADDRESS_RANGE_1;IP_ADDRESS_RANGE_2;...' \

The value has segments that start with the keywords pool, avoid-buggy-ips, manual-assign, and addresses. Separate each segment with a comma.

Note the following syntax rules:

You can specify more than one instance of the flag, as shown in the following example:

--metal-lb-address-pools='pool=pool1,avoid-buggy-ips=False,manual-assign=True,addresses=192.0.2.0/26;192.0.2.64-192.0.2.72' --metal-lb-address-pools='pool=pool2,avoid-buggy-ips=True,manual-assign=True,addresses=10.251.133.0/24;10.251.134.80/32'
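Assembling the value in a shell variable first can reduce quoting mistakes, since the semicolon-separated addresses segment must be protected from the shell. The variable names here are illustrative:

```shell
# Build a --metal-lb-address-pools value from variables.
pool_name="pool1"
pool_addresses="192.0.2.0/26;192.0.2.64-192.0.2.72"   # semicolon-separated
pool_flag="pool=${pool_name},avoid-buggy-ips=False,manual-assign=True,addresses=${pool_addresses}"

echo "$pool_flag"
```

Pass it quoted, as in `--metal-lb-address-pools="$pool_flag"`, so that the semicolons are not interpreted by the shell.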

MetalLB nodes

Control plane nodes

Virtual IPs

Service and Pod CIDRs

Storage

  1. --lvp-share-path: This is the host machine path where subdirectories can be created. A local PersistentVolume (PV) is created for each subdirectory.
  2. --lvp-share-storage-class: This is the StorageClass to use to create persistent volumes. The StorageClass is created during cluster creation.
  3. --lvp-node-mounts-config-path: This is the host machine path where mounted disks can be discovered. A local PersistentVolume (PV) is created for each mount.
  4. --lvp-node-mounts-config-storage-class: The StorageClass that PVs are created with during cluster creation.

For more information about storage, seeConfigure local storage.

Manual

With manual load balancing, you configure your own load-balancing solutions for control plane and data plane traffic. You must configure your control plane VIP on the external load balancer before creating a cluster. The external control plane load balancer can also be used for data plane traffic, or you can set up a separate load balancer for the data plane. For more information, seeConfigure manual load balancing.

Be sure to scroll over if needed to fill in the ADMIN_CLUSTER_NAME placeholder for the --admin-cluster-membership flag.

gcloud container bare-metal clusters create USER_CLUSTER_NAME \
--project=FLEET_HOST_PROJECT_ID \
--admin-cluster-membership=ADMIN_CLUSTER_NAME \
--admin-cluster-membership-project=FLEET_HOST_PROJECT_ID \
--admin-cluster-membership-location=ADMIN_CLUSTER_REGION \
--location=REGION \
--version=VERSION \
--admin-users=YOUR_EMAIL_ADDRESS \
--admin-users=ANOTHER_EMAIL_ADDRESS \
--enable-manual-lb \
--control-plane-node-configs='node-ip=CP_IP_ADDRESS_1,labels=CP_KEY_1.1=CP_VALUE_1.1;CP_KEY_1.2=CP_VALUE_1.2;...' \
--control-plane-vip=CONTROL_PLANE_VIP \
--control-plane-load-balancer-port=CONTROL_PLANE_LB_PORT \
--ingress-vip=INGRESS_VIP \
--island-mode-service-address-cidr-blocks=SERVICE_CIDR_BLOCK \
--island-mode-pod-address-cidr-blocks=POD_CIDR_BLOCK \
--lvp-share-path=/mnt/localpv-share \
--lvp-share-storage-class=local-shared \
--lvp-node-mounts-config-path=/mnt/localpv-disk \
--lvp-node-mounts-config-storage-class=local-disks

Replace the following:

Control plane nodes

Virtual IPs

Service and Pod CIDRs

Storage

  1. --lvp-share-path: This is the host machine path where subdirectories can be created. A local PersistentVolume (PV) is created for each subdirectory.
  2. --lvp-share-storage-class: This is the StorageClass to use to create persistent volumes. The StorageClass is created during cluster creation.
  3. --lvp-node-mounts-config-path: This is the host machine path where mounted disks can be discovered. A local PersistentVolume (PV) is created for each mount.
  4. --lvp-node-mounts-config-storage-class: The StorageClass that PVs are created with during cluster creation.

For more information about storage, seeConfigure local storage.

Before running the gcloud command to create the cluster, you might want to include --validate-only to validate the configuration that you specified in the flags to the gcloud command. When you are ready to create the cluster, remove this flag and run the command.

The output from the command is similar to the following:

Waiting for operation [projects/example-project-12345/locations/us-west1/operations/operation-1679543737105-5f7893fd5bae9-942b3f97-75e59179] to complete.

In the example output, the string operation-1679543737105-5f7893fd5bae9-942b3f97-75e59179 is the OPERATION_ID of the long-running operation. You can find out the status of the operation with the following command:

gcloud container bare-metal operations describe OPERATION_ID \
--project=FLEET_HOST_PROJECT_ID \
--location=REGION

It takes 15 minutes or more to create the user cluster. You can view the cluster in the Google Cloud console on the GKE clusters page.

For a complete list of the flags and their descriptions, see the gcloud CLI reference.

Create a node pool

After the cluster is created, you need to create at least one node pool before deploying workloads. A node pool is a template for the groups of worker nodes created in this cluster. With the gcloud CLI, you create the cluster first and then add one or more node pools to the newly-created cluster.

gcloud container bare-metal node-pools create NODE_POOL_NAME \
--cluster=USER_CLUSTER_NAME \
--project=FLEET_HOST_PROJECT_ID \
--location=REGION \
--node-configs='node-ip=NP_IP_ADDRESS_1,labels=NP_KEY_1.1=NP_VALUE_1.1;NP_KEY_1.2=NP_VALUE_1.2;...'

Replace the following:

The following example creates a node pool called default-pool on user-cluster-1 and adds two nodes to the node pool. Both nodes are labeled with node-pool-key=node-pool-value and have the taint dedicated=experimental:PreferNoSchedule:

gcloud container bare-metal node-pools create default-pool \
--cluster=user-cluster-1 \
--project=example-project-12345 \
--location=us-west1 \
--node-configs='node-ip=10.200.0.10' \
--node-configs='node-ip=10.200.0.11,labels=key2.1=value2.1' \
--node-labels=node-pool-key=node-pool-value \
--node-taints=dedicated=experimental:PreferNoSchedule
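Once the node pool is ready and you have a kubeconfig for the user cluster (see Connect to the user cluster later on this page), you can spot-check the labels and taints with standard kubectl queries; these assume your current context points at the user cluster:

```shell
# Nodes that carry the node pool label:
kubectl get nodes -l node-pool-key=node-pool-value

# Taint keys per node:
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```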

For more information, see the gcloud CLI reference.

Terraform

Before you begin

The Google Distributed Cloud (software only) on bare metal version that you select when creating a user cluster must be a version that your admin cluster supports. Additionally, the latest minor or patch versions aren't available in the GKE On-Prem API until 7 to 14 days after the release. You can run a gcloud command to get a list of supported versions that you can use to install the user cluster.

  1. Make sure to update components:
    gcloud components update
  2. Get the name and the fleet membership location of your admin cluster:
    gcloud container fleet memberships list \
    --project=FLEET_HOST_PROJECT_ID
    Replace FLEET_HOST_PROJECT_ID with the ID of the project that the admin cluster is registered to.
    The output is similar to the following:
    NAME EXTERNAL_ID LOCATION
    admin-cluster-1 bb7803b4-8438-4b22-859f-4559b4b29072 global
    admin-cluster-2 ee16ee2b-6ec0-49fc-9413-3c89cbc70854 global
    admin-cluster-3 fc2b7ef5-39ff-4b63-b919-04c5adc67be4 us-west1
    The location specifies where the Fleet and Connect services run. Admin clusters created prior to 1.28 are managed by the global Fleet and Connect services. In 1.28 and later, you can specify either global or a Google Cloud region when you create the admin cluster. You specify the region in the --admin-cluster-membership-location flag in the example commands that follow.
  3. Get a list of available versions to install on the user cluster:
    gcloud container bare-metal clusters query-version-config \
    --admin-cluster-membership=ADMIN_CLUSTER_NAME \
    --admin-cluster-membership-project=FLEET_HOST_PROJECT_ID \
    --admin-cluster-membership-location=ADMIN_CLUSTER_REGION \
    --location=REGION
    Replace the following:
    • ADMIN_CLUSTER_NAME: The name of the admin cluster.
    • FLEET_HOST_PROJECT_ID: The ID of the project that the admin cluster is registered to.
    • ADMIN_CLUSTER_REGION: The admin cluster's fleet membership region. This is either global or a Google Cloud region. Use the location for the admin cluster from the output of gcloud container fleet memberships list.
    • REGION: The Google Cloud region that you will use when you create the cluster. This is the region in which the GKE On-Prem API and the Fleet and Connect services run. Specify us-west1 or another supported region.
      The output of the command is similar to the following:
versions:  
- version: 1.16.2  
- version: 1.16.1  
- version: 1.16.0  
- version: 1.15.7  
- version: 1.15.6  
- version: 1.15.5  

We suggest that you use the highest supported version to get the latest fixes and improvements.

Example

You can use the following basic configuration sample to create a user cluster with the bundled MetalLB load balancer. For more information, see the google_gkeonprem_bare_metal_cluster reference documentation.

Set variables in terraform.tfvars

The sample provides an example variables file to pass in to main.tf, which shows how to configure the bundled MetalLB load balancer.

  1. Clone the anthos-samples repository and change to the directory where the Terraform sample is located:
git clone https://github.com/GoogleCloudPlatform/anthos-samples  
cd anthos-samples/anthos-onprem-terraform/abm_user_cluster_metallb  

  2. Make a copy of the terraform.tfvars.sample file:

cp terraform.tfvars.sample terraform.tfvars  
  3. Modify the parameter values in terraform.tfvars and save the file.
    The following list describes the variables:
    • project_id: The ID of the project that you want to create the cluster in. The specified project is also used as the fleet host project. This must be the same project that the admin cluster is registered to. After the user cluster is created, it is automatically registered to the selected project's fleet. The fleet host project can't be changed after the cluster is created.
    • region: The Google Cloud region in which the GKE On-Prem API (gkeonprem.googleapis.com), Fleet service (gkehub.googleapis.com), and the Connect service (gkeconnect.googleapis.com) run. Specify us-west1 or another supported region.
    • admin_cluster_name: The name of the admin cluster that manages the user cluster. The example assumes that the admin cluster uses global as the region. If you have a regional admin cluster:
      1. Open main.tf in a text editor.
      2. Search for admin_cluster_membership, which looks like the following:
      admin_cluster_membership = "projects/${var.project_id}/locations/global/memberships/${var.admin_cluster_name}"
      3. Change global to the region that the admin cluster uses and save the file.
    • bare_metal_version: The Google Distributed Cloud version for your user cluster. Specify either the same version as the admin cluster, or a version that is no more than one minor version lower than the admin cluster.
    • admin_user_emails: A list of email addresses of the users to be granted administrative privileges on the cluster. Be sure to add your email address if you intend to administer the cluster.
      When the cluster is created, the GKE On-Prem API applies the Kubernetes role-based access control (RBAC) policies to the cluster to grant the admin users the Kubernetes clusterrole/cluster-admin role, which provides full access to every resource in the cluster in all namespaces. This also lets users log in to the console using their Google identity.
    • cluster_name: A name of your choice for your user cluster. The name can't be changed after the cluster is created. The name must:
      * contain at most 40 characters
      * contain only lowercase alphanumeric characters or a hyphen (-)
      * start with an alphabetic character
      * end with an alphanumeric character
    • control_plane_ips: A list of one or more IPv4 addresses for the control plane nodes. Control plane nodes run the system workload. Typically, you have a single machine if using a minimum deployment, or three machines if using a high availability (HA) deployment. Specify an odd number of nodes to have a majority quorum for HA. You can change these addresses whenever you update or upgrade a cluster.
    • worker_node_ips: A list of one or more IPv4 addresses for the worker node machines.
    • control_plane_vip: The virtual IP address (VIP) that you have chosen to configure on the load balancer for the Kubernetes API server of the user cluster.
    • ingress_vip: The IP address that you have chosen to configure on the load balancer for the ingress proxy.
    • lb_address_pools: A list of maps that define the address pools to be used by the MetalLB load balancer. The ingress VIP must be in one of these pools.
  4. Save the changes in terraform.tfvars.
  5. Initialize and create the Terraform plan:
terraform init  

Terraform installs any needed libraries, such as the Google Cloud provider.

  6. Review the configuration and make changes if needed:

terraform plan  
  7. Apply the Terraform plan to create the user cluster:
terraform apply  

It takes 15 minutes or more to create the user cluster. You can view the cluster in the Google Cloud console on the GKE clusters page.

Connect to the user cluster

When you create a user cluster in the console, the cluster is configured with the Kubernetes role-based access control (RBAC) policies so that you can log in to the cluster using your Google Cloud identity. When you create a user cluster with the gcloud CLI, by default you are granted these RBAC policies if you don't include the --admin-users flag. If you include --admin-users to designate another user as an administrator, you override the default, and you need to include both your email address and the email address of the other administrator. For more information about the required IAM and RBAC policies, see Set up Google identity authentication.

All clusters have a canonical endpoint. The endpoint exposes the Kubernetes API server that kubectl and other services use to communicate with your cluster control plane over TCP port 443. This endpoint is not accessible on the public internet. If you have access to your cluster's private endpoint through your VPC, you can connect directly to the private endpoint and generate a kubeconfig file. Otherwise, you can use the connect gateway.

To access the user cluster from the command line, you need a kubeconfig file. There are two ways to get a kubeconfig file:

Be sure to wait until the Google Cloud console indicates that the user cluster status is healthy.

Connect gateway

  1. Either initialize the gcloud CLI for use with the fleet host project, or run the following commands to log in with your Google Account, set your fleet host project as the default, and update components:
gcloud auth login  
gcloud config set project PROJECT_ID  
gcloud components update  
  2. Fetch the cluster credentials used to interact with the connect gateway. In the following command, replace MEMBERSHIP_NAME with your cluster's name. For Google Distributed Cloud (software only) on bare metal, the membership name is the same as the cluster name.
gcloud container fleet memberships get-credentials MEMBERSHIP_NAME  

This command returns a special **connect gateway-specific kubeconfig** that lets you connect to the cluster through the gateway.

After you have the necessary credentials, you can run commands using kubectl as you normally would for any Kubernetes cluster, and you don't need to specify the name of the kubeconfig file. For example:

kubectl get namespaces

Admin workstation

Use the bmctl get credentials command to retrieve a kubeconfig file for the newly created user cluster.

bmctl get credentials --cluster CLUSTER_NAME --admin-kubeconfig ADMIN_KUBECONFIG_PATH

Replace the following:

A kubeconfig file with the user cluster's credentials is written to bmctl-workspace/CLUSTER_NAME/CLUSTER_NAME-TIMESTAMP-kubeconfig. The TIMESTAMP in the filename indicates the date and time the file was created.

Because this file contains authentication credentials for your cluster, you should store it in a secure location with restricted access.
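One simple precaution is to make the file readable only by its owner. The path below is a stand-in created with mktemp so the sketch is self-contained; apply the same chmod to the real kubeconfig path:

```shell
# Stand-in for the generated kubeconfig file.
kubeconfig="$(mktemp)"

# Owner read/write only.
chmod 600 "$kubeconfig"

# Permission string is now -rw-------
ls -l "$kubeconfig" | cut -c1-10
```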

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-06-16 UTC.