Module reference - cloud-init 25.1.2 documentation

Deprecation schedule and versions

Keys can be documented as deprecated, new, or changed. This allows cloud-init to evolve as requirements change, and to adopt better practices without maintaining design decisions indefinitely.

Keys marked as deprecated or changed may be removed or changed 5 years from the deprecation date. For example, if a key is deprecated in version 22.1 (the first release in 2022) it is scheduled to be removed in 27.1 (first release in 2027). Use of deprecated keys may cause warnings in the logs. If a key’s expected value changes, the key will be marked changed with a date. A 5 year timeline also applies to changed keys.
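For instance, apt_update is listed later in this reference as a deprecated alias of package_update; a config that still uses the old spelling keeps working for the deprecation window but logs a warning (a minimal sketch, not module output):

#cloud-config
# Deprecated alias; prefer 'package_update: true'
apt_update: true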

Modules

Ansible

Configure Ansible for instance

Summary

This module provides Ansible integration for augmenting cloud-init’s configuration of the local node. This module installs ansible during boot and then uses ansible-pull to run the playbook repository at the remote URL.

Internal name: cc_ansible

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: ansible

Config schema

Examples

Example 1:

#cloud-config
ansible:
  package_name: ansible-core
  install_method: distro
  pull:
    url: https://github.com/holmanb/vmboot.git
    playbook_name: ubuntu.yml

Example 2:

#cloud-config
ansible:
  package_name: ansible-core
  install_method: pip
  pull:
    url: https://github.com/holmanb/vmboot.git
    playbook_name: ubuntu.yml

APK Configure

Configure APK repositories file

Summary

This module handles configuration of the Alpine Package Keeper (APK) /etc/apk/repositories file.

Note

To ensure that APK configuration is valid YAML, any strings containing special characters, especially colons, should be quoted (“:”).

Internal name: cc_apk_configure

Module frequency: once-per-instance

Supported distros: alpine

Activate only on keys: apk_repos

Config schema

Examples

Example 1: Keep the existing /etc/apk/repositories file unaltered.

#cloud-config
apk_repos:
  preserve_repositories: true

Example 2: Create repositories file for Alpine v3.12 main and community using default mirror site.

#cloud-config
apk_repos:
  alpine_repo:
    community_enabled: true
    version: 'v3.12'

Example 3: Create repositories file for Alpine Edge main, community, and testing using a specified mirror site and also a local repo.

#cloud-config
apk_repos:
  alpine_repo:
    base_url: https://some-alpine-mirror/alpine
    community_enabled: true
    testing_enabled: true
    version: edge
  local_repo_base_url: https://my-local-server/local-alpine

Apt Configure

Configure APT for the user

Summary

This module handles configuration of advanced package tool (APT) options and adding source lists. There are configuration options such as apt_get_wrapper and apt_get_command that control how cloud-init invokes apt-get. These configuration options are handled on a per-distro basis, so consult documentation for cloud-init’s distro support for instructions on using these config options.

By default, cloud-init will generate default APT sources information in deb822 format at /etc/apt/sources.list.d/<distro>.sources. When the value of sources_list does not appear to be deb822 format, or stable distribution releases disable deb822 format, /etc/apt/sources.list will be written instead.

Note

To ensure that APT configuration is valid YAML, any strings containing special characters, especially colons, should be quoted (“:”).

Note

For more information about APT configuration, see the “Additional APT configuration” example.

Internal name: cc_apt_configure

Module frequency: once-per-instance

Supported distros: ubuntu, debian

Config schema

Examples

Example 1:

#cloud-config
apt:
  preserve_sources_list: false
  disable_suites:
    - $RELEASE-updates
    - backports
    - $RELEASE
    - mysuite
  primary:
    - arches:
        - amd64
        - i386
        - default
      uri: http://us.archive.ubuntu.com/ubuntu
      search:
        - http://cool.but-sometimes-unreachable.com/ubuntu
        - http://us.archive.ubuntu.com/ubuntu
      search_dns: false
    - arches:
        - s390x
        - arm64
      uri: http://archive-to-use-for-arm64.example.com/ubuntu
  security:
    - arches:
        - default
      search_dns: true
  sources_list: |
    deb $MIRROR $RELEASE main restricted
    deb-src $MIRROR $RELEASE main restricted
    deb $PRIMARY $RELEASE universe restricted
    deb $SECURITY $RELEASE-security multiverse
  debconf_selections:
    set1: the-package the-package/some-flag boolean true
  conf: |
    APT {
      Get {
        Assume-Yes 'true';
        Fix-Broken 'true';
      }
    }
  proxy: http://[[user][:pass]@]host[:port]/
  http_proxy: http://[[user][:pass]@]host[:port]/
  ftp_proxy: ftp://[[user][:pass]@]host[:port]/
  https_proxy: https://[[user][:pass]@]host[:port]/
  sources:
    source1:
      keyid: keyid
      keyserver: keyserverurl
      source: deb [signed-by=$KEY_FILE] http:/// bionic main
    source2:
      source: ppa:
    source3:
      source: deb $MIRROR $RELEASE multiverse
      key: |
        ------BEGIN PGP PUBLIC KEY BLOCK-------
        ------END PGP PUBLIC KEY BLOCK-------
    source4:
      source: deb $MIRROR $RELEASE multiverse
      append: false
      key: |
        ------BEGIN PGP PUBLIC KEY BLOCK-------
        ------END PGP PUBLIC KEY BLOCK-------

Example 2: Cloud-init version 23.4 will generate a deb822-formatted sources file at /etc/apt/sources.list.d/<distro>.sources instead of /etc/apt/sources.list when sources_list content is in deb822 format.

#cloud-config
apt:
  sources_list: |
    Types: deb
    URIs: http://archive.ubuntu.com/ubuntu/
    Suites: $RELEASE
    Components: main

Apt Pipelining

Configure APT pipelining

Summary

This module configures APT’s Acquire::http::Pipeline-Depth option, which controls how APT handles HTTP pipelining. It may be useful for pipelining to be disabled, because some web servers (such as S3) do not pipeline properly (LP: #948461).

Valid configuration options for this module are:

Internal name: cc_apt_pipelining

Module frequency: once-per-instance

Supported distros: ubuntu, debian

Activate only on keys: apt_pipelining

Config schema

Examples

Example 1:

#cloud-config apt_pipelining: false

Example 2:

#cloud-config apt_pipelining: os

Example 3:

#cloud-config apt_pipelining: 3

Bootcmd

Run arbitrary commands early in the boot process

Summary

This module runs arbitrary commands very early in the boot process, only slightly after a boothook would run. This is very similar to a boothook, but more user friendly. Commands can be specified either as lists or strings. For invocation details, see runcmd.
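A rough sketch mixing both forms (the host entry and device name are illustrative, and cloud-init-per limits one command to a single run):

#cloud-config
bootcmd:
  - echo 192.168.1.130 us.archive.ubuntu.com >> /etc/hosts
  - [cloud-init-per, once, mymkfs, mkfs, /dev/vdb]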

Note

bootcmd should only be used for things that could not be done later in the boot process.

Note

When writing files, do not use the /tmp dir as it races with systemd-tmpfiles-clean (LP: #1707222). Use /run/somedir instead.

Internal name: cc_bootcmd

Module frequency: always

Supported distros: all

Activate only on keys: bootcmd

Config schema

Examples

Example 1:

#cloud-config bootcmd:

Byobu

Enable/disable Byobu system-wide and for the default user

Summary

This module controls whether Byobu is enabled or disabled system-wide and for the default system user. If Byobu is to be enabled, this module will ensure it is installed. Likewise, if Byobu is to be disabled, it will be removed (if installed).

Valid configuration options for this module are:

Internal name: cc_byobu

Module frequency: once-per-instance

Supported distros: ubuntu, debian

Config schema

Examples

Example 1:

#cloud-config byobu_by_default: enable-user

Example 2:

#cloud-config byobu_by_default: disable-system

CA Certificates

Add CA certificates

Summary

This module adds CA certificates to the system’s CA store and updates any related files using the appropriate OS-specific utility. The default CA certificates can be disabled/deleted from use by the system with the configuration option remove_defaults.

Note

Certificates must be specified using valid YAML. To specify a multi-line certificate, the YAML multi-line list syntax must be used.

Note

Alpine Linux requires the ca-certificates package to be installed in order to provide the update-ca-certificates command.

Internal name: cc_ca_certs

Module frequency: once-per-instance

Supported distros: almalinux, aosc, cloudlinux, alpine, debian, fedora, rhel, opensuse, opensuse-microos, opensuse-tumbleweed, opensuse-leap, sle_hpc, sle-micro, sles, ubuntu, photon

Activate only on keys: ca_certs, ca-certs

Config schema

Examples

Example 1:

#cloud-config ca_certs: remove_defaults: true trusted:

Chef

Module that installs, configures, and starts Chef

Summary

This module enables Chef to be installed (from packages, gems, or from omnibus). Before this occurs, Chef configuration is written to disk (validation.pem, client.pem, firstboot.json, client.rb), and required directories are created (/etc/chef, /var/log/chef and so on).

If configured, Chef will be installed and started in either daemon or non-daemon mode. If run in non-daemon mode, post-run actions are executed to do finishing activities such as removing validation.pem.

Internal name: cc_chef

Module frequency: always

Supported distros: all

Activate only on keys: chef

Config schema

Examples

Example 1:

#cloud-config
chef:
  directories: [/etc/chef, /var/log/chef]
  encrypted_data_bag_secret: /etc/chef/encrypted_data_bag_secret
  environment: _default
  initial_attributes:
    apache:
      keepalive: false
      prefork: {maxclients: 100}
  install_type: omnibus
  log_level: :auto
  omnibus_url_retries: 2
  run_list: ['recipe[apache2]', 'role[db]']
  server_url: https://chef.yourorg.com:4000
  ssl_verify_mode: :verify_peer
  validation_cert: system
  validation_name: yourorg-validator

Disable EC2 Instance Metadata Service

Disable AWS EC2 Instance Metadata Service

Summary

This module can disable the EC2 datasource by rejecting the route to 169.254.169.254, the usual route to the datasource. This module is disabled by default.

Internal name: cc_disable_ec2_metadata

Module frequency: always

Supported distros: all

Activate only on keys: disable_ec2_metadata

Config schema

Examples

Example 1:

#cloud-config disable_ec2_metadata: true

Disk Setup

Configure partitions and filesystems

Summary

This module configures simple partition tables and filesystems.

Note

For more detail about configuration options for disk setup, see the disk setup example.

Note

If a swap partition is being created via disk_setup, then an fs_entry entry is also needed in order for mkswap to be run, otherwise when swap activation is later attempted it will fail.

For convenience, aliases can be specified for disks using the device_aliases config key, which takes a dictionary of alias: path mappings. There are automatic aliases for swap and ephemeral<X>, where swap will always refer to the active swap partition and ephemeral<X> will refer to the block device of the ephemeral image.

Disk partitioning is done using the disk_setup directive. This config directive accepts a dictionary where each key is either a path to a block device or an alias specified in device_aliases, and each value is the configuration options for the device. File system configuration is done using the fs_setup directive. This config directive accepts a list of filesystem configs.
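A minimal sketch of both directives together (device name, alias, and label are illustrative):

#cloud-config
device_aliases:
  data_disk: /dev/sdb
disk_setup:
  data_disk:
    table_type: gpt
    layout: true
    overwrite: false
fs_setup:
  - label: data
    filesystem: ext4
    device: data_disk
    partition: auto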

Internal name: cc_disk_setup

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: disk_setup, fs_setup

Config schema

Examples

Example 1:

#cloud-config
device_aliases: {my_alias: /dev/sdb, swap_disk: /dev/sdc}
disk_setup:
  /dev/sdd: {layout: true, overwrite: true, table_type: mbr}
  my_alias:
    layout: [50, 50]
    overwrite: true
    table_type: gpt
  swap_disk:
    layout:
      - [100, 82]
    overwrite: true
    table_type: gpt
fs_setup:

Fan

Configure Ubuntu fan networking

Summary

This module installs, configures and starts the Ubuntu fan network system (Read more about Ubuntu Fan).

If cloud-init sees a fan entry in cloud-config it will:

Additionally, the ubuntu-fan package will be automatically installed if not present.

Internal name: cc_fan

Module frequency: once-per-instance

Supported distros: ubuntu

Activate only on keys: fan

Config schema

Examples

Example 1:

#cloud-config
fan:
  config: |
    # fan 240
    10.0.0.0/8 eth0/16 dhcp
    10.0.0.0/8 eth1/16 dhcp off
    # fan 241
    241.0.0.0/8 eth0/16 dhcp
  config_path: /etc/network/fan

Final Message

Output final message when cloud-init has finished

Summary

This module configures the final message that cloud-init writes. The message is specified as a Jinja template with the following variables set:

This message is written to the cloud-init log (usually /var/log/cloud-init.log) as well as stderr (which usually redirects to /var/log/cloud-init-output.log).

Upon exit, this module writes the system uptime, timestamp, and cloud-init version to /var/lib/cloud/instance/boot-finished independent of any user data specified for this module.

Internal name: cc_final_message

Module frequency: always

Supported distros: all

Config schema

Examples

Example 1:

#cloud-config
final_message: |
  cloud-init has finished
  version: $version
  timestamp: $timestamp
  datasource: $datasource
  uptime: $uptime

Growpart

Grow partitions

Summary

Growpart resizes partitions to fill the available disk space. This is useful for cloud instances with a larger amount of disk space available than the pristine image uses, as it allows the instance to automatically make use of the extra space.

Note that this only works if the partition to be resized is the last one on a disk with classic partitioning scheme (MBR, BSD, GPT). LVM, Btrfs and ZFS have no such restrictions.

The devices on which to run growpart are specified as a list under the devices key.

There is some functionality overlap between this module and the growroot functionality of cloud-initramfs-tools. However, there are some situations where one tool is able to function and the other is not. The default configuration for both should work for most cloud instances. To explicitly prevent cloud-initramfs-tools from running growroot, the file /etc/growroot-disabled can be created.

By default, both growroot and cc_growpart will check for the existence of this file and will not run if it is present. However, this file can be ignored for cc_growpart by setting ignore_growroot_disabled to true. Read more about cloud-initramfs-tools.

On FreeBSD, there is also the growfs service, which has a lot of overlap with cc_growpart and cc_resizefs, but only works on the root partition. In that configuration, we use it, otherwise, we fall back to gpart.

Note

growfs may insert a swap partition, if none is present, unless instructed not to via growfs_swap_size=0 in either kenv(1), or rc.conf(5).

Growpart is enabled by default on the root partition. The default config for growpart is:

growpart:
  mode: auto
  devices: ["/"]
  ignore_growroot_disabled: false

Internal name: cc_growpart

Module frequency: always

Supported distros: all

Config schema

Examples

Example 1:

#cloud-config
growpart:
  devices: [/]
  ignore_growroot_disabled: false
  mode: auto

Example 2:

#cloud-config
growpart:
  devices: [/, /dev/vdb1]
  ignore_growroot_disabled: true
  mode: growpart

GRUB dpkg

Configure GRUB debconf installation device

Summary

Configure which device is used as the target for GRUB installation. This module can be enabled/disabled using the enabled config key in the grub_dpkg config dict. This module automatically selects a disk using grub-probe if no installation device is specified.

The value placed into the debconf database is in the format the GRUB post-install script expects. Normally, this is a /dev/disk/by-id/ value, but we do fall back to the plain disk name if a by-id name is not present.

If this module is executed inside a container, then the debconf database is seeded with empty values, and install_devices_empty is set to true.

Internal name: cc_grub_dpkg

Module frequency: once-per-instance

Supported distros: ubuntu, debian

Activate only on keys: grub_dpkg, grub-dpkg

Config schema

Examples

Example 1:

#cloud-config
grub_dpkg:
  enabled: true
  # BIOS mode (install_devices needs disk)
  grub-pc/install_devices: /dev/sda
  grub-pc/install_devices_empty: false
  # EFI mode (install_devices needs partition)
  grub-efi/install_devices: /dev/sda

Install Hotplug

Install hotplug udev rules if supported and enabled

Summary

This module will install the udev rules to enable hotplug if supported by the datasource and enabled in the user-data. The udev rules will be installed as /etc/udev/rules.d/90-cloud-init-hook-hotplug.rules.

When hotplug is enabled, newly added network devices will be added to the system by cloud-init. After udev detects the event, cloud-init will refresh the instance metadata from the datasource, detect the device in the updated metadata, then apply the updated network configuration.

Udev rules are installed while cloud-init is running, which means that devices which are added during boot might not be configured. To work around this limitation, one can wait until cloud-init has completed before hotplugging devices.

Currently supported datasources: Openstack, EC2

Internal name: cc_install_hotplug

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1: Enable hotplug of network devices

#cloud-config
updates:
  network:
    when: [hotplug]

Example 2: Enable network hotplug alongside boot event

#cloud-config
updates:
  network:
    when: [boot, hotplug]

Keyboard

Set keyboard layout

Summary

Handle keyboard configuration.

Internal name: cc_keyboard

Module frequency: once-per-instance

Supported distros: alpine, arch, debian, ubuntu, almalinux, amazon, azurelinux, centos, cloudlinux, eurolinux, fedora, mariner, miraclelinux, openmandriva, photon, rhel, rocky, virtuozzo, opensuse, opensuse-leap, opensuse-microos, opensuse-tumbleweed, sle_hpc, sle-micro, sles, suse

Activate only on keys: keyboard

Config schema

Examples

Example 1: Set keyboard layout to “us”

#cloud-config
keyboard:
  layout: us

Example 2: Set specific keyboard layout, model, variant, options

#cloud-config
keyboard:
  layout: de
  model: pc105
  variant: nodeadkeys
  options: compose:rwin

Example 3: For Alpine Linux, set specific keyboard layout and variant, as used by setup-keymap. Model and options are ignored.

#cloud-config
keyboard:
  layout: gb
  variant: gb-extd

Keys to Console

Control which SSH host keys may be written to console

Summary

For security reasons it may be desirable not to write SSH host keys and their fingerprints to the console. To avoid either of them being written to the console, the emit_keys_to_console config key under the main ssh config key can be used.

To avoid the fingerprint of types of SSH host keys being written to console the ssh_fp_console_blacklist config key can be used. By default, all types of keys will have their fingerprints written to console.

To avoid host keys of a key type being written to console the ssh_key_console_blacklist config key can be used. By default, all supported host keys are written to console.

Internal name: cc_keys_to_console

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1: Do not print any SSH keys to system console

#cloud-config
ssh:
  emit_keys_to_console: false

Example 2: Do not print certain SSH key types to console

#cloud-config ssh_key_console_blacklist: [rsa]

Example 3: Do not print specific SSH key fingerprints to console

#cloud-config ssh_fp_console_blacklist:

Landscape

Install and configure Landscape client

Summary

This module installs and configures landscape-client. The Landscape client will only be installed if the key landscape is present in config.

Landscape client configuration is given under the client key under the main landscape config key. The config parameters are not interpreted by cloud-init, but rather are converted into a ConfigObj-formatted file and written out to the [client] section in /etc/landscape/client.conf. The following default client config is provided, but can be overridden:

landscape:
  client:
    log_level: "info"
    url: "https://landscape.canonical.com/message-system"
    ping_url: "http://landscape.canonical.com/ping"
    data_path: "/var/lib/landscape/client"

Note

If tags is defined, its contents should be a string delimited with a comma (“,”) rather than a list.

Internal name: cc_landscape

Module frequency: once-per-instance

Supported distros: ubuntu

Activate only on keys: landscape

Config schema

Examples

To discover additional supported client keys, run man landscape-config.

Example 1:

#cloud-config
landscape:
  client:
    url: https://landscape.canonical.com/message-system
    ping_url: http://landscape.canonical.com/ping
    data_path: /var/lib/landscape/client
    http_proxy: http://my.proxy.com/foobar
    https_proxy: https://my.proxy.com/foobar
    tags: server,cloud
    computer_title: footitle
    registration_key: fookey
    account_name: fooaccount

Example 2: Minimum viable config requires account_name and computer_title.

#cloud-config
landscape:
  client:
    computer_title: kiosk 1
    account_name: Joe's Biz

Example 3: To install landscape-client from a PPA, specify apt.sources.

#cloud-config
apt:
  sources:
    trunk-testing-ppa:
      source: ppa:landscape/self-hosted-beta
landscape:
  client:
    account_name: myaccount
    computer_title: himom

Locale

Set system locale

Summary

Configure the system locale and apply it system-wide. By default, use the locale specified by the datasource.

Internal name: cc_locale

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1: Set the locale to "ar_AE"

#cloud-config locale: ar_AE

Example 2: Set the locale to "fr_CA" in /etc/alternate_path/locale

#cloud-config
locale: fr_CA
locale_configfile: /etc/alternate_path/locale

Example 3: Skip performing any locale setup or generation

#cloud-config locale: false

LXD

Configure LXD with lxd init and (optionally) lxd-bridge

Summary

This module configures LXD with user-specified options using lxd init.

Internal name: cc_lxd

Module frequency: once-per-instance

Supported distros: ubuntu

Activate only on keys: lxd

Config schema

Examples

Example 1: Simplest working directory-backed LXD configuration.

#cloud-config
lxd:
  init:
    storage_backend: dir

Example 2: lxd-init showcasing cloud-init’s LXD config options.

#cloud-config
lxd:
  init:
    network_address: 0.0.0.0
    network_port: 8443
    storage_backend: zfs
    storage_pool: datapool
    storage_create_loop: 10
  bridge:
    mode: new
    mtu: 1500
    name: lxdbr0
    ipv4_address: 10.0.8.1
    ipv4_netmask: 24
    ipv4_dhcp_first: 10.0.8.2
    ipv4_dhcp_last: 10.0.8.3
    ipv4_dhcp_leases: 250
    ipv4_nat: true
    ipv6_address: fd98:9e0:3744::1
    ipv6_netmask: 64
    ipv6_nat: true
    domain: lxd

Example 3: For more complex non-interactive LXD configuration of networks, storage pools, profiles, projects, clusters and core config, lxd: preseed config will be passed as stdin to the command: lxd init --preseed.

See the LXD non-interactive configuration or run lxd init --dump to see viable preseed YAML allowed.

Preseed settings configuring the LXD daemon for HTTPS connections on 192.168.1.1 port 9999, a nested profile which allows for LXD nesting on containers and a limited project allowing for RBAC approach when defining behavior for sub-projects.

#cloud-config
lxd:
  preseed: |
    config:
      core.https_address: 192.168.1.1:9999
    networks:
      - config:
          ipv4.address: 10.42.42.1/24
          ipv4.nat: true
          ipv6.address: fd42:4242:4242:4242::1/64
          ipv6.nat: true
        description: ""
        name: lxdbr0
        type: bridge
        project: default
    storage_pools:
      - config:
          size: 5GiB
          source: /var/snap/lxd/common/lxd/disks/default.img
        description: ""
        name: default
        driver: zfs
    profiles:
      - config: {}
        description: Default LXD profile
        devices:
          eth0:
            name: eth0
            network: lxdbr0
            type: nic
          root:
            path: /
            pool: default
            type: disk
        name: default
      - config:
          security.nesting: true
        devices:
          eth0:
            name: eth0
            network: lxdbr0
            type: nic
          root:
            path: /
            pool: default
            type: disk
        name: nested
    projects:
      - config:
          features.images: true
          features.networks: true
          features.profiles: true
          features.storage.volumes: true
        description: Default LXD project
        name: default
      - config:
          features.images: false
          features.networks: true
          features.profiles: false
          features.storage.volumes: false
        description: Limited Access LXD project
        name: limited

MCollective

Install, configure and start MCollective

Summary

This module installs, configures and starts MCollective. If the mcollective key is present in config, then MCollective will be installed and started.

Configuration for mcollective can be specified in the conf key under mcollective. Each config value consists of a key-value pair and will be written to /etc/mcollective/server.cfg. The public-cert and private-cert keys, if present in conf, may be used to specify the public and private certificates for MCollective. Their values will be written to /etc/mcollective/ssl/server-public.pem and /etc/mcollective/ssl/server-private.pem.

Warning

The EC2 metadata service is a network service and thus is readable by non-root users on the system (i.e., ec2metadata --user-data). If security is a concern, use include-once and SSL URLS.

Internal name: cc_mcollective

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: mcollective

Config schema

Examples

Example 1: Provide server private and public key, and provide the loglevel: debug and plugin.stomp.host: dbhost config settings in /etc/mcollective/server.cfg:

#cloud-config
mcollective:
  conf:
    loglevel: debug
    plugin.stomp.host: dbhost
  public-cert: |
    -------BEGIN CERTIFICATE--------
    -------END CERTIFICATE--------
  private-cert: |
    -------BEGIN CERTIFICATE--------
    -------END CERTIFICATE--------

Mounts

Configure mount points and swap files

Summary

This module can add or remove mount points from /etc/fstab as well as configure swap. The mounts config key takes a list of fstab entries to add. Each entry is specified as a list of [ fs_spec, fs_file, fs_vfstype, fs_mntops, fs-freq, fs_passno ].

For more information on these options, consult the manual for /etc/fstab. When specifying the fs_spec, if the device name starts with one of xvd, sd, hd, or vd, the leading /dev may be omitted. Any mounts that do not appear to refer to either an attached block device or a network resource will be skipped with a log like “Ignoring nonexistent mount …”.

Cloud-init will attempt to add the following mount directives if available and unconfigured in /etc/fstab:

mounts:

In order to remove a previously-listed mount, an entry can be added to the mounts list containing fs_spec for the device to be removed but no mount point (i.e. [ swap ] or [ swap, null ]).
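For instance (device and mount point illustrative), the following removes any swap entry and adds one mount:

#cloud-config
mounts:
  - [swap]                                    # drop the swap entry from /etc/fstab
  - [/dev/sdc, /data, auto, "defaults,nofail"]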

The mount_default_fields config key allows default values to be specified for the fields in a mounts entry that are not specified, aside from the fs_spec and the fs_file fields. If specified, this must be a list containing 6 values. It defaults to:

mount_default_fields: [none, none, "auto", "defaults,nofail,x-systemd.after=cloud-init-network.service", "0", "2"]

Non-systemd init systems will vary in mount_default_fields.

Swap files can be configured by setting the path of the swap file to create with filename, the size of the swap file with size, and the maximum size of the swap file (when using size: auto) with maxsize. By default, no swap file is created.

Note

If multiple mounts are specified, where a subsequent mount’s mount point is inside of a previously-declared mount’s mount point (i.e. the 1st mount has a mount point of /abc and the 2nd mount has a mount point of /abc/def), then this will not work as expected – cc_mounts first creates the directories for all the mount points before it starts to perform any mounts, and so the sub-mount point directory will not be created correctly inside the parent mount point.

For systems using util-linux’s mount program, this issue can be worked around by specifying X-mount.mkdir as part of a fs_mntops value for the subsequent mount entry, as shown in the sketch below.
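A sketch of such a nested pair on a util-linux system (devices illustrative):

#cloud-config
mounts:
  - [/dev/vdb1, /abc, ext4, defaults]
  - [/dev/vdc1, /abc/def, ext4, "defaults,X-mount.mkdir"]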

Internal name: cc_mounts

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1: Mount ephemeral0 with noexec flag, /dev/sdc with mount_default_fields, and /dev/xvdh with custom fs_passno "0" to avoid fsck on the mount.

Also provide an automatically-sized swap with a max size of 10485760 bytes.

#cloud-config mounts:

Example 2: Create a 2 GB swap file at /swapfile using human-readable values.

#cloud-config
swap:
  filename: /swapfile
  size: 2G
  maxsize: 2G

NTP

Enable and configure NTP

Summary

Handle Network Time Protocol (NTP) configuration. If ntp is not installed on the system and NTP configuration is specified, ntp will be installed.

If there is a default NTP config file in the image or one is present in the distro’s ntp package, it will be copied to a file with .dist appended to the filename before any changes are made.

A list of NTP pools and NTP servers can be provided under the ntp config key.

If no NTP servers or pools are provided, 4 pools will be used in the format:

{0-3}.{distro}.pool.ntp.org
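On an Ubuntu image, for example, that pattern expands to:

0.ubuntu.pool.ntp.org
1.ubuntu.pool.ntp.org
2.ubuntu.pool.ntp.org
3.ubuntu.pool.ntp.org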

Internal name: cc_ntp

Module frequency: once-per-instance

Supported distros: almalinux, alpine, aosc, azurelinux, centos, cloudlinux, cos, debian, eurolinux, fedora, freebsd, mariner, miraclelinux, openbsd, openeuler, OpenCloudOS, openmandriva, opensuse, opensuse-microos, opensuse-tumbleweed, opensuse-leap, photon, rhel, rocky, sle_hpc, sle-micro, sles, TencentOS, ubuntu, virtuozzo

Activate only on keys: ntp

Config schema

Examples

Example 1: Override NTP with chrony configuration on Ubuntu.

#cloud-config
ntp:
  enabled: true
  ntp_client: chrony  # Uses cloud-init default chrony configuration

Example 2: Provide a custom NTP client configuration.

#cloud-config
ntp:
  enabled: true
  ntp_client: myntpclient
  config:
    confpath: /etc/myntpclient/myntpclient.conf
    check_exe: myntpclientd
    packages:
      - myntpclient
    service_name: myntpclient
    template: |
      ## template:jinja
      # My NTP Client config
      {% if pools -%}# pools{% endif %}
      {% for pool in pools -%}
      pool {{pool}} iburst
      {% endfor %}
      {%- if servers %}# servers
      {% endif %}
      {% for server in servers -%}
      server {{server}} iburst
      {% endfor %}
      {% if peers -%}# peers{% endif %}
      {% for peer in peers -%}
      peer {{peer}}
      {% endfor %}
      {% if allow -%}# allow{% endif %}
      {% for cidr in allow -%}
      allow {{cidr}}
      {% endfor %}
  pools: [0.int.pool.ntp.org, 1.int.pool.ntp.org, ntp.myorg.org]
  servers:

Package Update Upgrade Install

Update, upgrade, and install packages

Summary

This module allows packages to be updated, upgraded or installed during boot using any available package manager present on a system such as apt, pkg, snap, yum or zypper. If any packages are to be installed or an upgrade is to be performed then the package cache will be updated first. If a package installation or upgrade requires a reboot, then a reboot can be performed if package_reboot_if_required is specified.

Internal name: cc_package_update_upgrade_install

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: apt_update, package_update, apt_upgrade, package_upgrade, packages

Config schema

Examples

Example 1:

#cloud-config package_reboot_if_required: true package_update: true package_upgrade: true packages:

By default, package_upgrade: true performs upgrades with any installed package manager. To avoid calling snap refresh in images with snap installed, set snap’s refresh.hold to forever; this will prevent cloud-init’s snap interaction during any boot.

#cloud-config
package_update: true
package_upgrade: true
snap:
  commands:
    00: snap refresh --hold=forever
package_reboot_if_required: true

Phone Home

Post data to URL

Summary

This module can be used to post data to a remote host after boot is complete.

Either all data can be posted, or a list of keys to post can be specified.

Available keys are:

Data is sent as x-www-form-urlencoded arguments.

Example HTTP POST:

POST / HTTP/1.1
Content-Length: 1337
User-Agent: Cloud-Init/21.4
Accept-Encoding: gzip, deflate
Accept: */*
Content-Type: application/x-www-form-urlencoded

pub_key_rsa=rsa_contents&pub_key_ecdsa=ecdsa_contents&pub_key_ed25519=ed25519_contents&instance_id=i-87018aed&hostname=myhost&fqdn=myhost.internal

Internal name: cc_phone_home

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: phone_home

Config schema

Examples

Example 1:

## template: jinja
#cloud-config
phone_home: {post: all, url: 'http://example.com/{{ v1.instance_id }}/'}

Example 2:

## template: jinja
#cloud-config
phone_home:
  post: [pub_key_rsa, pub_key_ecdsa, pub_key_ed25519, instance_id, hostname, fqdn]
  tries: 5
  url: http://example.com/{{ v1.instance_id }}/

Power State Change

Change power state

Summary

This module handles shutdown/reboot after all config modules have been run. By default it will take no action, and the system will keep running unless a package installation/upgrade requires a system reboot (e.g. installing a new kernel) and package_reboot_if_required is true.

Using this module ensures that cloud-init is entirely finished with modules that would be executed. An example to distinguish delay from timeout:

If you delay 5 (5 minutes) and have a timeout of 120 (2 minutes), the max time until shutdown will be 7 minutes, though it could be as soon as 5 minutes. Cloud-init will invoke ‘shutdown +5’ after the process finishes, or when ‘timeout’ seconds have elapsed.
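A sketch of that delay/timeout combination (values taken from the description above, illustrative only):

#cloud-config
power_state:
  mode: reboot
  delay: 5        # 'shutdown +5' is issued once cloud-init finishes
  timeout: 120    # give up waiting for cloud-init after 120 seconds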

Note

With Alpine Linux any message value specified is ignored, as Alpine’s halt, poweroff, and reboot commands do not support broadcasting a message.

Internal name: cc_power_state_change

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: power_state

Config schema

Examples

Example 1:

#cloud-config
power_state:
  delay: now
  mode: poweroff
  message: Powering off
  timeout: 2
  condition: true

Example 2:

#cloud-config
power_state:
  delay: 30
  mode: reboot
  message: Rebooting machine
  condition: test -f /var/tmp/reboot_me

Puppet

Install, configure and start Puppet

Summary

This module handles Puppet installation and configuration. If the puppet key does not exist in global configuration, no action will be taken.

If a config entry for puppet is present, then by default the latest version of Puppet will be installed. If the puppet config key exists in the config archive, this module will attempt to start puppet even if no installation was performed.

The module also provides keys for configuring the new Puppet 4 paths and installing the puppet package from the puppetlabs repositories.

The keys are package_name, conf_file, ssl_dir and csr_attributes_path. If unset, their values will default to ones that work with Puppet 3.X, and with distributions that ship modified Puppet 4.X that use the old paths.

Internal name: cc_puppet

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: puppet

Config schema

Examples

Example 1:

#cloud-config
puppet:
  install: true
  version: "7.7.0"
  install_type: "aio"
  collection: "puppet7"
  aio_install_url: 'https://git.io/JBhoQ'
  cleanup: true
  conf_file: "/etc/puppet/puppet.conf"
  ssl_dir: "/var/lib/puppet/ssl"
  csr_attributes_path: "/etc/puppet/csr_attributes.yaml"
  exec: true
  exec_args: ['--test']
  conf:
    agent:
      server: "puppetserver.example.org"
      certname: "%i.%f"
    ca_cert: |
      -----BEGIN CERTIFICATE-----
      MIICCTCCAXKgAwIBAgIBATANBgkqhkiG9w0BAQUFADANMQswCQYDVQQDDAJjYTAe
      Fw0xMDAyMTUxNzI5MjFaFw0xNTAyMTQxNzI5MjFaMA0xCzAJBgNVBAMMAmNhMIGf
      MA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCu7Q40sm47/E1Pf+r8AYb/V/FWGPgc
      b014OmNoX7dgCxTDvps/h8Vw555PdAFsW5+QhsGr31IJNI3kSYprFQcYf7A8tNWu
      1MASW2CfaEiOEi9F1R3R4Qlz4ix+iNoHiUDTjazw/tZwEdxaQXQVLwgTGRwVa+aA
      qbutJKi93MILLwIDAQABo3kwdzA4BglghkgBhvhCAQ0EKxYpUHVwcGV0IFJ1Ynkv
      T3BlblNTTCBHZW5lcmF0ZWQgQ2VydGlmaWNhdGUwDwYDVR0TAQH/BAUwAwEB/zAd
      BgNVHQ4EFgQUu4+jHB+GYE5Vxo+ol1OAhevspjAwCwYDVR0PBAQDAgEGMA0GCSqG
      SIb3DQEBBQUAA4GBAH/rxlUIjwNb3n7TXJcDJ6MMHUlwjr03BDJXKb34Ulndkpaf
      +GAlzPXWa7bO908M9I8RnPfvtKnteLbvgTK+h+zX1XCty+S2EQWk29i2AdoqOTxb
      hppiGMp0tT5Havu4aceCXiy2crVcudj3NFciy8X66SoECemW9UYDCb9T5D0d
      -----END CERTIFICATE-----
  csr_attributes:
    custom_attributes:
      1.2.840.113549.1.9.7: 342thbjkt82094y0uthhor289jnqthpc2290
    extension_requests:
      pp_uuid: ED803750-E3C7-44F5-BB08-41A04433FE2E
      pp_image_name: my_ami_image
      pp_preshared_key: 342thbjkt82094y0uthhor289jnqthpc2290

Example 2:

#cloud-config
puppet:
  install_type: "packages"
  package_name: "puppet"
  exec: false

Resizefs

Resize filesystem

Summary

Resize a filesystem to use all available space on the partition. This module is useful along with cc_growpart and will ensure that if the root partition has been resized, the root filesystem will be resized along with it.

By default, cc_resizefs will resize the root partition and will block the boot process while the resize command is running.

Optionally, the resize operation can be performed in the background while cloud-init continues running modules. This can be enabled by setting resize_rootfs to noblock.

This module can be disabled altogether by setting resize_rootfs to false.

Internal name: cc_resizefs

Module frequency: always

Supported distros: all

Config schema

Examples

Example 1: Disable root filesystem resize operation.

#cloud-config resize_rootfs: false

Example 2: Runs resize operation in the background.

#cloud-config resize_rootfs: noblock

Resolv Conf

Configure resolv.conf

Summary

You should not use this module unless manually editing /etc/resolv.conf is the correct way to manage nameserver information on your operating system.

Many distros have moved away from manually editing resolv.conf, so please verify that this is the preferred nameserver management method for your distro before using this module. Note that using Network configuration is preferred, rather than using this module, when possible.

This module is intended to manage resolv.conf in environments where early configuration of resolv.conf is necessary for further bootstrapping and/or where configuration management such as Puppet or Chef own DNS configuration.

When using a Config drive and a RHEL-like system, resolv.conf will also be managed automatically due to the available information provided for DNS servers in the Networking config Version 2 format. For those who wish to have different settings, use this module.

For the resolv_conf section to be applied, manage_resolv_conf must be set to true.

Note

For Red Hat with sysconfig, be sure to set PEERDNS=no for all DHCP-enabled NICs.

Internal name: cc_resolv_conf

Module frequency: once-per-instance

Supported distros: alpine, azurelinux, fedora, mariner, opensuse, opensuse-leap, opensuse-microos, opensuse-tumbleweed, photon, rhel, sle_hpc, sle-micro, sles, openeuler

Activate only on keys: manage_resolv_conf

Config schema

Examples

Example 1:

#cloud-config
manage_resolv_conf: true
resolv_conf:
  domain: example.com
  nameservers: [8.8.8.8, 8.8.4.4]
  options: {rotate: true, timeout: 1}
  searchdomains: [foo.example.com, bar.example.com]
  sortlist: [10.0.0.1/255, 10.0.0.2]

Red Hat Subscription

Register Red Hat Enterprise Linux-based system

Summary

Register a Red Hat system, either by username and password, or by activation key and org.

Following a successful registration, you can:

Internal name: cc_rh_subscription

Module frequency: once-per-instance

Supported distros: fedora, rhel, openeuler

Activate only on keys: rh_subscription

Config schema

Examples

Example 1:

#cloud-config
rh_subscription:
  username: joe@foo.bar
  ## Quote your password if it has symbols to be safe
  password: '1234abcd'

Example 2:

#cloud-config rh_subscription: activation-key: foobar org: "ABC"

Example 3:

#cloud-config
rh_subscription:
  activation-key: foobar
  org: 12345
  auto-attach: true
  service-level: self-support
  add-pool:
    - 1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a
    - 2b2b2b2b2b2b2b2b2b2b2b2b2b2b2b2b
  enable-repo:
    - repo-id-to-enable
    - other-repo-id-to-enable
  disable-repo:
    - repo-id-to-disable
    - other-repo-id-to-disable
  ## Alter the baseurl in /etc/rhsm/rhsm.conf
  rhsm-baseurl: http://url
  ## Alter the server hostname in /etc/rhsm/rhsm.conf
  server-hostname: foo.bar.com

Rsyslog

Configure system logging via rsyslog

Summary

This module configures remote system logging using rsyslog.

Configuration for remote servers can be specified in configs, but for convenience it can be specified as key-value pairs in remotes.

This module can install rsyslog if not already present on the system using the install_rsyslog, packages, and check_exe options. Installation may not work on systems where this module runs before networking is up.

Note

On BSD, cloud-init will attempt to disable and stop the base system syslogd. This may fail on a first run. We recommend creating images with service syslogd disable.

Internal name: cc_rsyslog

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: rsyslog

Config schema

Examples

Example 1:

#cloud-config
rsyslog:
  remotes: {juju: 10.0.4.1, maas: 192.168.1.1}
  service_reload_command: auto

Example 2:

#cloud-config
rsyslog:
  config_dir: /opt/etc/rsyslog.d
  config_filename: 99-late-cloud-config.conf
  configs:
  remotes: {juju: 10.0.4.1, maas: 192.168.1.1}
  service_reload_command: [your, syslog, restart, command]

Example 3: Default (no) configuration with package installation on FreeBSD.

#cloud-config
rsyslog:
  check_exe: rsyslogd
  config_dir: /usr/local/etc/rsyslog.d
  install_rsyslog: true
  packages: [rsyslogd]

Runcmd

Run arbitrary commands

Summary

Run arbitrary commands at a rc.local-like time-frame with output to the console. Each item can be either a list or a string. The item type affects how it is executed:

The runcmd module only writes the script to be run later. The module that actually runs the script is scripts_user in the Final boot stage.
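A rough sketch mixing list and string items (the commands themselves are illustrative):

#cloud-config
runcmd:
  - [ls, -l, /]
  - [sh, -xc, "echo $(date) ': hello world!'"]
  - echo "single string commands are passed to a shell"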

Note

All commands must be proper YAML, so you must quote any characters YAML would eat (“:” can be problematic).

Note

When writing files, do not use the /tmp dir as it races with systemd-tmpfiles-clean (LP: #1707222). Use /run/somedir instead.

Internal name: cc_runcmd

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: runcmd

Config schema

Examples

Example 1:

#cloud-config runcmd:

Salt Minion

Set up and run Salt Minion

Summary

This module installs, configures and starts Salt Minion. If the salt_minion key is present in the config parts, then Salt Minion will be installed and started.

Configuration for Salt Minion can be specified in the conf key under salt_minion. Any config values present there will be assigned in /etc/salt/minion. The public and private keys to use for Salt Minion can be specified with public_key and private_key respectively.

If you have a custom package name, service name, or config directory, you can specify them with pkg_name, service_name, and config_dir respectively.

Salt keys can be manually generated by salt-key --gen-keys=GEN_KEYS, where GEN_KEYS is the name of the keypair, e.g. ‘’minion’’. The keypair will be copied to /etc/salt/pki on the Minion instance.

Internal name: cc_salt_minion

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: salt_minion

Config schema

Examples

Example 1:

#cloud-config
salt_minion:
  conf:
    file_client: local
    fileserver_backend: [gitfs]
    gitfs_remotes: ['https://github.com/_user_/_repo_.git']
    master: salt.example.com
  config_dir: /etc/salt
  grains:
    role: [web]
  pkg_name: salt-minion
  pki_dir: /etc/salt/pki/minion
  private_key: |
    ------BEGIN PRIVATE KEY------
    <key data>
    ------END PRIVATE KEY-------
  public_key: |
    ------BEGIN PUBLIC KEY-------
    <key data>
    ------END PUBLIC KEY-------
  service_name: salt-minion

Scripts Per Boot

Run per-boot scripts

Summary

Any scripts in the scripts/per-boot directory on the datasource will be run every time the system boots. Scripts will be run in alphabetical order. This module does not accept any config keys.

Internal name: cc_scripts_per_boot

Module frequency: always

Supported distros: all

Config schema

No schema definitions for this module

Examples

No examples for this module

Scripts Per Instance

Run per-instance scripts

Summary

Any scripts in the scripts/per-instance directory on the datasource will be run when a new instance is first booted. Scripts will be run in alphabetical order. This module does not accept any config keys.

Some cloud platforms change instance-id if a significant change was made to the system. As a result, per-instance scripts will run again.

Internal name: cc_scripts_per_instance

Module frequency: once-per-instance

Supported distros: all

Config schema

No schema definitions for this module

Examples

No examples for this module

Scripts Per Once

Run one-time scripts

Summary

Any scripts in the scripts/per-once directory on the datasource will be run only once. Changes to the instance will not force a re-run. The only way to re-run these scripts is to run the clean subcommand and reboot. Scripts will be run in alphabetical order. This module does not accept any config keys.

Internal name: cc_scripts_per_once

Module frequency: once

Supported distros: all

Config schema

No schema definitions for this module

Examples

No examples for this module

Scripts User

Run user scripts

Summary

This module runs all user scripts present in the scripts directory in the instance configuration. Any cloud-config parts with a #! will be treated as a script and run. Scripts specified as cloud-config parts will be run in the order they are specified in the configuration. This module does not accept any config keys.

Internal name: cc_scripts_user

Module frequency: once-per-instance

Supported distros: all

Config schema

No schema definitions for this module

Examples

No examples for this module

Scripts Vendor

Run vendor scripts

Summary

On select Datasources, vendors have a channel for the consumption of all supported user data types via a special channel called vendor data. Any scripts in the scripts/vendor directory in the datasource will be run when a new instance is first booted. Scripts will be run in alphabetical order. This module allows control over the execution of vendor data.

Internal name: cc_scripts_vendor

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1:

#cloud-config vendor_data: {enabled: true, prefix: /usr/bin/ltrace}

Example 2:

#cloud-config
vendor_data:
  enabled: true
  prefix: [timeout, 30]

Example 3: Vendor data will not be processed.

#cloud-config vendor_data: {enabled: false}

Seed Random

Provide random seed data

Summary

All cloud instances started from the same image will produce similar data when they are first booted as they are all starting with the same seed for the kernel’s entropy keyring. To avoid this, random seed data can be provided to the instance, either as a string or by specifying a command to run to generate the data.

Configuration for this module is under the random_seed config key. If the cloud provides its own random seed data, it will be appended to data before it is written to file.

If the command key is specified, the given command will be executed. This will happen after file has been populated. That command’s environment will contain the value of the file key as RANDOM_SEED_FILE. If a command is specified that cannot be run, no error will be reported unless command_required is set to true.

Internal name: cc_seed_random

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1:

#cloud-config
random_seed:
  command: [sh, -c, dd if=/dev/urandom of=$RANDOM_SEED_FILE]
  command_required: true
  data: my random string
  encoding: raw
  file: /dev/urandom

Example 2: Use pollinate to gather data from a remote entropy server and write it to /dev/urandom:

#cloud-config
random_seed:
  command: [pollinate, '--server=http://local.pollinate.server']
  command_required: true
  file: /dev/urandom

Set Hostname

Set hostname and FQDN

Summary

This module handles setting the system hostname and fully qualified domain name (FQDN). If preserve_hostname is set, then the hostname will not be altered.

A hostname and FQDN can be provided by specifying a full domain name under the fqdn key. Alternatively, a hostname can be specified using the hostname key, and the FQDN of the cloud will be used. If an FQDN is specified with the hostname key, it will be handled properly, although it is better to use the fqdn config key. If both fqdn and hostname are set, then prefer_fqdn_over_hostname will force use of the FQDN in all distros when true, and when false it will force the short hostname. Otherwise, the hostname to use is distro-dependent.

Note

Cloud-init performs no hostname input validation before sending the hostname to distro-specific tools, and most tools will not accept a trailing dot on the FQDN.

This module will run in the init-local stage before networking is configured if the hostname is set by metadata or user data on the local system.

This will occur on datasources like NoCloud and OVF where metadata and user data are available locally. This ensures that the desired hostname is applied before any DHCP requests are performed on these platforms where dynamic DNS is based on initial hostname.

Internal name: cc_set_hostname

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1:

#cloud-config preserve_hostname: true

Example 2:

#cloud-config
hostname: myhost
create_hostname_file: true
fqdn: myhost.example.com
prefer_fqdn_over_hostname: true

Example 3: On a machine without an /etc/hostname file, don’t create it. In most clouds, this will result in a DHCP-configured hostname provided by the cloud.

#cloud-config create_hostname_file: false

Set Passwords

Set user passwords and enable/disable SSH password auth

Summary

This module consumes three top-level config keys: ssh_pwauth, chpasswd and password.

The ssh_pwauth config key determines whether or not sshd will be configured to accept password authentication.

The chpasswd config key accepts a dictionary containing either (or both) of users and expire.

Note

Prior to cloud-init 22.3, the expire key only applies to plain text (including RANDOM) passwords. Post-22.3, the expire key applies to both plain text and hashed passwords.

The password config key is used to set the default user’s password. It is ignored if chpasswd users is used. Note that the list keyword is deprecated in favor of users.
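A minimal sketch of the chpasswd users form (the user name and password are illustrative; hashed values and RANDOM are also accepted):

#cloud-config
chpasswd:
  expire: false
  users:
    - name: ubuntu
      password: mypassword123
      type: text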

Internal name: cc_set_passwords

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1: Set a default password, to be changed at first login.

#cloud-config {password: password1, ssh_pwauth: true}

Example 2:

#cloud-config chpasswd: expire: false users:

Snap

Install, configure and manage snapd and snap packages

Summary

This module provides a simple configuration namespace in cloud-init for setting up snapd and installing snaps.

Both assertions and commands values can be either a dictionary or a list. If these configs are provided as a dictionary, the keys are only used to order the execution of the assertions or commands, and the dictionary is merged with any vendor-data snap configuration provided. If a list is provided by the user instead of a dict, any vendor-data snap configuration is ignored.

The assertions configuration option is a dictionary or list of properly-signed snap assertions, which will run before any snap commands. They will be added to snapd’s assertion database by invoking snap ack <aggregate_assertion_file>.

Snap commands is a dictionary or list of individual snap commands to run on the target system. These commands can be used to create snap users, install snaps, and provide snap configuration.

Note

If ‘side-loading’ private/unpublished snaps on an instance, it is best to create a snap seed directory and seed.yaml manifest in /var/lib/snapd/seed/, which snapd automatically installs on startup.

Internal name: cc_snap

Module frequency: once-per-instance

Supported distros: ubuntu

Activate only on keys: snap

Config schema

Examples

Example 1:

#cloud-config
snap:
  assertions:
    00: |
      signed_assertion_blob_here
    02: |
      signed_assertion_blob_here
  commands:
    00: snap create-user --sudoer --known @mydomain.com
    01: snap install canonical-livepatch
    02: canonical-livepatch enable

Example 2: For convenience, the snap command can be omitted when specifying commands as a list - snap will be automatically prepended. The following commands are all equivalent:

#cloud-config
snap:
  commands:
    0: [install, vlc]
    1: [snap, install, vlc]
    2: snap install vlc
    3: snap install vlc

Example 3: You can use a list of commands.

#cloud-config
snap:
  commands:
    - [install, vlc]
    - [snap, install, vlc]
    - snap install vlc

Example 4: You can also use a list of assertions.

#cloud-config
snap:
  assertions:
    - signed_assertion_blob_here
    - |
      signed_assertion_blob_here

Spacewalk

Install and configure spacewalk

Summary

This module installs Spacewalk and applies basic configuration. If the spacewalk config key is present, Spacewalk will be installed. The server to connect to after installation must be provided under the server key in the spacewalk configuration. A proxy to connect through and an activation key may optionally be specified.

For more details about spacewalk, see the Fedora documentation.

Internal name: cc_spacewalk

Module frequency: once-per-instance

Supported distros: rhel, fedora, openeuler

Activate only on keys: spacewalk

Config schema

Examples

Example 1:

#cloud-config spacewalk: {activation_key: , proxy: , server: }

SSH

Configure SSH and SSH keys

Summary

This module handles most configuration for SSH, and for both host and authorized SSH keys.

Authorized keys

Authorized keys are a list of public SSH keys that are allowed to connect to a user account on a system. They are stored in .ssh/authorized_keys in that account’s home directory. Authorized keys for the default user defined in users can be specified using ssh_authorized_keys. Keys should be specified as a list of public keys.

Note

See the cc_set_passwords module documentation to enable/disable SSH password authentication.

Root login can be enabled/disabled using the disable_root config key. Root login options can be manually specified with disable_root_opts.

Supported public key types for the ssh_authorized_keys are:

Note

This list has been filtered out from the supported key types of the OpenSSH source, where the sigonly keys are removed. See ssh_util for more information.

rsa, ecdsa and ed25519 are added for legacy, as they are valid public keys in some older distros. They may be removed in the future when support for the older distros is dropped.

Host keys

Host keys are for authenticating a specific instance. Many images have default host SSH keys, which can be removed using ssh_deletekeys.

Host keys can be added using the ssh_keys configuration key.

When host keys are generated, the output of the ssh-keygen command(s) can be displayed on the console using the ssh_quiet_keygen configuration key.

Note

When specifying private host keys in cloud-config, take care to ensure that communication between the data source and the instance is secure.

If no host keys are specified using ssh_keys, then keys will be generated using ssh-keygen. By default, one public/private pair of each supported host key type will be generated. The key types to generate can be specified using the ssh_genkeytypes config flag, which accepts a list of host key types to use. For each host key type for which this module has been instructed to create a keypair, if a key of the same type is already present on the system (i.e. if ssh_deletekeys was set to false), no key will be generated.

Supported host key types for the ssh_keys and the ssh_genkeytypes config flags are:

Unsupported host key types for the ssh_keys and the ssh_genkeytypes config flags are:

Internal name: cc_ssh

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1:

#cloud-config
allow_public_ssh_keys: true
disable_root: true
disable_root_opts: no-port-forwarding,no-agent-forwarding,no-X11-forwarding
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUU ...
  - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZ ...
ssh_deletekeys: true
ssh_genkeytypes: [rsa, ecdsa, ed25519]
ssh_keys:
  rsa_certificate: |
    ssh-rsa-cert-v01@openssh.com AAAAIHNzaC1lZDI1NTE5LWNlcnQt ...
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIBxwIBAAJhAKD0YSHy73nUgysO13XsJmd4fHiFyQ+00R7VVu2iV9Qco
    ...
    -----END RSA PRIVATE KEY-----
  rsa_public: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEAoPRhIfLvedSDKw7Xd ...
ssh_publish_hostkeys:
  blacklist: [rsa]
  enabled: true
ssh_quiet_keygen: true

SSH AuthKey Fingerprints

Log fingerprints of user SSH keys

Summary

Write fingerprints of authorized keys for each user to log. This is enabled by default, but can be disabled using no_ssh_fingerprints. The hash type for the keys can be specified, but defaults to sha256.

Internal name: cc_ssh_authkey_fingerprints

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1:

#cloud-config no_ssh_fingerprints: true

Example 2:

#cloud-config authkey_hash: sha512

SSH Import ID

Import SSH ID

Summary

This module imports SSH keys from either a public keyserver (usually Launchpad), or GitHub, using ssh-import-id. Keys are referenced by the username they are associated with on the keyserver. The keyserver can be specified by prepending either lp: for Launchpad or gh: for GitHub to the username.

Internal name: cc_ssh_import_id

Module frequency: once-per-instance

Supported distros: alpine, cos, debian, ubuntu

Config schema

Examples

Example 1:

#cloud-config ssh_import_id: [user, 'gh:user', 'lp:user']

Timezone

Set the system timezone

Summary

Sets the system timezone based on the value provided.

Internal name: cc_timezone

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: timezone

Config schema

Examples

Example 1:

#cloud-config timezone: US/Eastern

Ubuntu Drivers

Interact with third party drivers in Ubuntu

Summary

This module interacts with the ubuntu-drivers command to install third party driver packages.

Internal name: cc_ubuntu_drivers

Module frequency: once-per-instance

Supported distros: ubuntu

Activate only on keys: drivers

Config schema

Examples

Example 1:

#cloud-config drivers: nvidia: {license-accepted: true}

Ubuntu Autoinstall

Autoinstall configuration is ignored (but validated) by cloud-init.

Summary

Cloud-init is used by the Ubuntu installer in two stages. The autoinstall key may contain a configuration for the Ubuntu installer.

Cloud-init verifies that an autoinstall key contains a version key and that the installer package is present on the system.

Note

The Ubuntu installer might pass part of this configuration to cloud-init during a later boot as part of the install process. See the Ubuntu installer documentation for more information. Please direct Ubuntu installer questions to their IRC channel (#ubuntu-server on Libera).

Internal name: cc_ubuntu_autoinstall

Module frequency: once

Supported distros: ubuntu

Activate only on keys: autoinstall

Config schema

Examples

Example 1:

#cloud-config autoinstall: version: 1

Ubuntu Pro

Configure Ubuntu Pro support services

Summary

Attach machine to an existing Ubuntu Pro support contract and enable or disable support services such as Livepatch, ESM, FIPS and FIPS Updates.

When attaching a machine to Ubuntu Pro, one can also specify services to enable. When the enable list is present, only named services will be activated. If the enable list is not present, the contract’s default services will be enabled.

On Pro instances, when ubuntu_pro config is provided to cloud-init, Pro's auto-attach feature will be disabled and cloud-init will perform the Pro auto-attach, ignoring the token key. The enable and enable_beta values will strictly determine what services will be enabled, ignoring contract defaults.

Note that when enabling FIPS or FIPS updates you will need to schedule a reboot to ensure the machine is running the FIPS-compliant kernel. See the Power State Change module for information on how to configure cloud-init to perform this reboot.

Internal name: cc_ubuntu_pro

Module frequency: once-per-instance

Supported distros: ubuntu

Activate only on keys: ubuntu_pro, ubuntu-advantage, ubuntu_advantage

Config schema

Examples

Example 1: Attach the machine to an Ubuntu Pro support contract with a Pro contract token obtained from https://ubuntu.com/pro.

#cloud-config ubuntu_pro: {token: <token>}

Example 2: Attach the machine to an Ubuntu Pro support contract, enabling only FIPS and ESM services. Services will only be enabled if the environment supports that service. Otherwise, warnings will be logged for incompatible services.

#cloud-config ubuntu_pro: enable: [fips, esm] token: <token>

Example 3: Attach the machine to an Ubuntu Pro support contract and enable the FIPS service. Perform a reboot once cloud-init has completed.

#cloud-config power_state: {mode: reboot} ubuntu_pro: enable: [fips] token: <token>

Example 4: Set an HTTP(S) proxy before attaching the machine to an Ubuntu Pro support contract and enabling the FIPS service.

#cloud-config
ubuntu_pro:
  token: <token>
  config:
    http_proxy: 'http://some-proxy:8088'
    https_proxy: 'https://some-proxy:8088'
    global_apt_https_proxy: 'https://some-global-apt-proxy:8088/'
    global_apt_http_proxy: 'http://some-global-apt-proxy:8088/'
    ua_apt_http_proxy: 'http://10.0.10.10:3128'
    ua_apt_https_proxy: 'https://10.0.10.10:3128'
  enable:
    - fips

Example 5: On Ubuntu Pro instances, auto-attach but don’t enable any Pro services.

#cloud-config ubuntu_pro: enable: [] enable_beta: []

Example 6: Enable ESM and beta Real-time Ubuntu services in Ubuntu Pro instances.

#cloud-config ubuntu_pro: enable: [esm] enable_beta: [realtime-kernel]

Example 7: Disable auto-attach in Ubuntu Pro instances.

#cloud-config ubuntu_pro: features: {disable_auto_attach: true}

Update Etc Hosts

Update the hosts file (usually /etc/hosts)

Summary

This module will update the contents of the local hosts database (hosts file, usually /etc/hosts) based on the hostname/FQDN specified in config. Management of the hosts file is controlled using manage_etc_hosts. If this is set to false, cloud-init will not manage the hosts file at all. This is the default behavior.

If set to true, cloud-init will generate the hosts file using the template located in /etc/cloud/templates/hosts.tmpl. In the /etc/cloud/templates/hosts.tmpl template, the strings $hostname and $fqdn will be replaced with the hostname and FQDN respectively.

If manage_etc_hosts is set to localhost, then cloud-init will not rewrite the hosts file entirely, but rather will ensure that an entry for the FQDN with a distribution-dependent IP is present (i.e.,ping <hostname> will ping 127.0.0.1 or 127.0.1.1 or other IP).

Note

If manage_etc_hosts is set to true, the contents of the hosts file will be updated every boot. To make any changes to the hosts file persistent, they must be made in /etc/cloud/templates/hosts.tmpl.

Note

For instructions on specifying hostname and FQDN, see documentation for the cc_set_hostname module.

Internal name: cc_update_etc_hosts

Module frequency: always

Supported distros: all

Activate only on keys: manage_etc_hosts

Config schema

Examples

Example 1: Do not update or manage /etc/hosts at all. This is the default behavior. Whatever is present at instance boot time will be present after boot. User changes will not be overwritten.

#cloud-config manage_etc_hosts: false

Example 2: Manage /etc/hosts with cloud-init. On every boot, /etc/hosts will be re-written from /etc/cloud/templates/hosts.tmpl.

The strings $hostname and $fqdn are replaced in the template with the appropriate values, taken from the cloud-config fqdn or hostname keys if provided. When absent, the meta-data will be checked for local-hostname, which can be split into <hostname>.<fqdn>.

To make modifications persistent across a reboot, you must modify /etc/cloud/templates/hosts.tmpl.

#cloud-config manage_etc_hosts: true
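For orientation, a minimal sketch of what such a hosts template might contain. The actual template shipped in an image is distribution-specific, so the entries below are illustrative only:

127.0.0.1 localhost
127.0.1.1 $fqdn $hostname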

Example 3: Update /etc/hosts every boot, providing a “localhost” 127.0.1.1 entry with the latest hostname and FQDN as provided by either IMDS or cloud-config. All other entries will be left alone. ping hostname will ping 127.0.1.1.

#cloud-config manage_etc_hosts: localhost

Update Hostname

Update hostname and FQDN

Summary

This module will update the system hostname and FQDN. If preserve_hostname is set to true, then the hostname will not be altered.

Note

For instructions on specifying hostname and FQDN, see documentation for the cc_set_hostname module.

Internal name: cc_update_hostname

Module frequency: always

Supported distros: all

Config schema

Examples

Example 1: By default, when preserve_hostname is not specified, cloud-init updates /etc/hostname per boot based on the cloud-provided local-hostname setting. If you manually change /etc/hostname after boot, cloud-init will no longer modify it.

This default cloud-init behavior is equivalent to this cloud-config:

#cloud-config preserve_hostname: false

Example 2: Prevent cloud-init from updating the system hostname.

#cloud-config preserve_hostname: true

Example 3: Prevent cloud-init from updating /etc/hostname.

#cloud-config preserve_hostname: true

Example 4: Set hostname to external.fqdn.me instead of myhost.

#cloud-config fqdn: external.fqdn.me hostname: myhost prefer_fqdn_over_hostname: true create_hostname_file: true

Example 5: Set hostname to external instead of external.fqdn.me when meta-data provides the local-hostname: external.fqdn.me.

#cloud-config prefer_fqdn_over_hostname: false

Example 6: On a machine without an /etc/hostname file, don't create it. In most clouds, this will result in a DHCP-configured hostname provided by the cloud.

#cloud-config create_hostname_file: false

Users and Groups

Configure users and groups

Summary

This module configures users and groups. For more detailed information on user options, see the Including users and groups config example.

Groups to add to the system can be specified under the groups key as a string of comma-separated groups to create, or as a list. Each item in the list should either be a string naming a single group to create, or a dictionary with the group name as the key and, as its value, either a single user or a list of users who should be members of that group.
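For illustration, the accepted forms look roughly like this sketch (group and user names are placeholders):

#cloud-config
# groups may also be given as a single comma-separated string, e.g. "admingroup, cloud-users"
groups:
  - admingroup: [root, sys]   # dictionary: group name mapped to a list of members
  - devgroup: alice           # dictionary: group name mapped to a single member
  - cloud-users               # plain string: create an empty group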

Note

Groups are added before users, so any users in a group list must already exist on the system.

Users to add can be specified as a string or list under the users key. Each entry in the list should either be a string or a dictionary. If a string is specified, that string can be comma-separated usernames to create, or the reserved string default which represents the primary admin user used to access the system. The default user varies per distribution and is generally configured in /etc/cloud/cloud.cfg by the default_user key.

Each users dictionary item must contain either a name or snapuser key, otherwise it will be ignored. Omission of default as the first item in the users list skips creation of the default user. If no users key is provided, the default behavior is to create the default user via this config:
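#cloud-config
users:
  - default

(This is the same configuration shown as Example 1 below.)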

Note

Specifying a hash of a user’s password with passwd is a security risk if the cloud-config can be intercepted. SSH authentication is preferred.

Note

If specifying a doas rule for a user, ensure that the syntax for the rule is valid, as the only checking performed by cloud-init is to ensure that the user referenced in the rule is the correct user.

Note

If specifying a sudo rule for a user, ensure that the syntax for the rule is valid, as it is not checked by cloud-init.

Note

Most of these configuration options will not be honored if the user already exists. The following options are the exceptions, and are applied to already-existing users: plain_text_passwd, doas, hashed_passwd, lock_passwd, sudo, ssh_authorized_keys, ssh_redirect_user.

The user key can be used to override the default_user configuration defined in /etc/cloud/cloud.cfg. The user value should be a dictionary which supports the same config keys as the users dictionary items.

Internal name: cc_users_groups

Module frequency: once-per-instance

Supported distros: all

Config schema

Examples

Example 1: Add the default_user from /etc/cloud/cloud.cfg. This is also the default behavior of cloud-init when no users key is provided.

#cloud-config users: [default]

Example 2: Add the admingroup with members root and sys, and an empty group cloud-users.

#cloud-config groups:
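A sketch of a configuration matching this description:

#cloud-config
groups:
  - admingroup: [root, sys]
  - cloud-users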

Example 3: Skip creation of the default user and only create newsuper. Password-based login is rejected, but the GitHub user TheRealFalcon and the Launchpad user falcojr can SSH as newsuper. The default shell for newsuper is bash instead of the system default.

#cloud-config users:
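A sketch of a configuration matching this description (values not named in the description, such as the GECOS comment and supplementary groups, are illustrative):

#cloud-config
users:
  - name: newsuper
    gecos: Big Stuff            # illustrative GECOS/comment field
    groups: users, admin        # illustrative supplementary groups
    lock_passwd: true           # reject password-based login
    shell: /bin/bash
    ssh_import_id:
      - lp:falcojr
      - gh:TheRealFalcon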

Example 4: Skip creation of the default user and only create newsuper. Password-based login is rejected, but the GitHub user TheRealFalcon and the Launchpad user falcojr can SSH as newsuper. doas/opendoas is configured to permit this user to run commands as other users (without being prompted for a password), but not as root.

#cloud-config users:
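A sketch of such a configuration, assuming standard doas.conf rule syntax:

#cloud-config
users:
  - name: newsuper
    lock_passwd: true             # reject password-based login
    doas:
      - permit nopass newsuper    # run commands as other users without a password prompt
      - deny newsuper as root     # ...but not as root
    ssh_import_id:
      - lp:falcojr
      - gh:TheRealFalcon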

Example 5: On a system with SELinux enabled, add youruser and set the SELinux user to staff_u. When this key is omitted on a SELinux-enabled system, the configured default SELinux user is selected.

#cloud-config users:
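A sketch of such a configuration, assuming the selinux_user key described in the config schema:

#cloud-config
users:
  - default
  - name: youruser
    selinux_user: staff_u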

Example 6: To redirect a legacy username to the default user for a distribution, ssh_redirect_user will accept an SSH connection and emit a message telling the client to SSH as the default user instead. SSH clients attempting to log in as the legacy user will see this message.

#cloud-config users:
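A sketch of such a configuration (the legacy username is illustrative):

#cloud-config
users:
  - default
  - name: nosshlogins           # illustrative legacy username
    ssh_redirect_user: true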

Example 7: Override any default_user config in /etc/cloud/cloud.cfg with supplemental config options. This config will make the default user mynewdefault and change the user to not have sudo rights.

#cloud-config ssh_import_id: [chad.smith] user: {name: mynewdefault, sudo: null}

Example 8: Avoid creating any default_user.
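Providing an explicit empty users list achieves this (a minimal sketch):

#cloud-config
users: []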

Wireguard

Module to configure WireGuard tunnel

Summary

The WireGuard module provides a dynamic interface for configuring WireGuard (as a peer or server) in a straightforward way.

This module takes care of writing the WireGuard interface configuration files, enabling and starting the configured interfaces, and running any configured readiness probes.

What is a readiness probe?

The idea behind readiness probes is to ensure WireGuard connectivity before continuing the cloud-init process. This could be useful if you need access to specific services like an internal APT Repository Server (e.g., Landscape) to install/update packages.

Example

An edge device can't access the internet but uses cloud-init modules which will install packages (e.g., landscape, packages, ubuntu_advantage). Those modules will fail due to the missing internet connection. The wireguard module fixes that problem, as it waits until all readiness probes (which can be arbitrary commands, e.g., checking whether a proxy server is reachable over the WireGuard network) have finished before continuing the cloud-init config stage.

Note

In order to use DNS with WireGuard you have to install the resolvconf package or symlink it to systemd's resolvectl, otherwise wg-quick commands will throw an error message that the executable resolvconf is missing, which leads the wireguard module to fail.

Internal name: cc_wireguard

Module frequency: once-per-instance

Supported distros: ubuntu

Activate only on keys: wireguard

Config schema

Examples

Configure one or more WireGuard interfaces and provide optional readiness probes.

#cloud-config
wireguard:
  interfaces:
    - name: wg0
      config_path: /etc/wireguard/wg0.conf
      content: |
        [Interface]
        PrivateKey = <private_key>
        Address = <address>
        [Peer]
        PublicKey = <public_key>
        Endpoint = <endpoint_ip>:<endpoint_port>
        AllowedIPs = <allowedip1>, <allowedip2>, ...
    - name: wg1
      config_path: /etc/wireguard/wg1.conf
      content: |
        [Interface]
        PrivateKey = <private_key>
        Address = <address>
        [Peer]
        PublicKey = <public_key>
        Endpoint = <endpoint_ip>:<endpoint_port>
        AllowedIPs = <allowedip1>
  readinessprobe:
    - 'systemctl restart service'
    - 'curl https://webhook.endpoint/example'
    - 'nc -zv some-service-fqdn 443'

Write Files

Write arbitrary files

Summary

Write out arbitrary content to files, optionally setting permissions. Parent folders in the path are created if absent. Content can be specified in plain text or binary. Data encoded as base64 or as binary gzip data can be specified and will be decoded before being written. Data can also be loaded from an arbitrary URI. For empty file creation, content can be omitted.

Note

If multi-line data is provided, care should be taken to ensure it follows YAML formatting standards. To specify binary data, use the YAML option !!binary.

Note

Do not write files under /tmp during boot because of a race with systemd-tmpfiles-clean that can cause temporary files to be cleaned during the early boot process. Use /run/somedir instead to avoid a race (LP: #1707222).

Warning

Existing files will be overwritten.

Internal name: cc_write_files

Module frequency: once-per-instance

Supported distros: all

Activate only on keys: write_files

Config schema

Examples

Example 1: Write out base64-encoded content to /etc/sysconfig/selinux.

#cloud-config write_files:
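A sketch of such an entry (the base64 payload is a placeholder, not real content):

#cloud-config
write_files:
  - path: /etc/sysconfig/selinux
    encoding: b64
    content: <base64-encoded selinux config>   # placeholder; supply real base64 data here
    owner: root:root
    permissions: '0644'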

Example 2: Appending content to an existing file.

#cloud-config write_files:
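A sketch, using an illustrative crontab entry as the appended content:

#cloud-config
write_files:
  - path: /etc/crontab
    content: |                 # illustrative crontab line
      15 * * * * root /usr/local/bin/ship_logs
    append: true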

Example 3: Provide gzipped binary content

#cloud-config write_files:
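A sketch; the payload shown is a placeholder for base64-encoded gzip data:

#cloud-config
write_files:
  - path: /usr/bin/hello
    encoding: gzip
    content: !!binary |
      <base64 of gzipped data>
    permissions: '0755'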

Example 4: Create an empty file on the system

#cloud-config write_files:
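A sketch (the path is illustrative); with no content key, an empty file is created:

#cloud-config
write_files:
  - path: /root/CLOUD_INIT_WAS_HERE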

Example 5: Defer writing the file until after the package (Nginx) is installed and its user is created.

#cloud-config write_files:
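A sketch, assuming an illustrative Nginx vhost file; defer: true postpones writing until after packages are installed and users are created:

#cloud-config
write_files:
  - path: /etc/nginx/conf.d/example.com.conf
    content: |
      server {
          listen 80;
          server_name example.com;
          root /var/www;
      }
    owner: 'nginx:nginx'
    permissions: '0640'
    defer: true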

Example 6: Retrieve file contents from a URI source, rather than inline. Especially useful with an external config-management repo, or for large binaries.

#cloud-config write_files:
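A sketch, with an illustrative URL and authorization header:

#cloud-config
write_files:
  - path: /etc/foo.conf
    source:
      uri: https://example.com/my-configuration-file
      headers:
        Authorization: Basic <credentials>   # placeholder
    owner: root:root
    permissions: '0400'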

Yum Add Repo

Add yum repository configuration to the system

Summary

Add yum repository configuration to /etc/yum.repos.d. Configuration files are named based on the opaque dictionary key under the yum_repos they are specified with. If a config file already exists with the same name as a config entry, the config entry will be skipped.

Internal name: cc_yum_add_repo

Module frequency: once-per-instance

Supported distros: almalinux, azurelinux, centos, cloudlinux, eurolinux, fedora, mariner, openeuler, OpenCloudOS, openmandriva, photon, rhel, rocky, TencentOS, virtuozzo

Activate only on keys: yum_repos

Config schema

Examples

Example 1:

#cloud-config yum_repos: my_repo: baseurl: http://blah.org/pub/epel/testing/5/$basearch/ yum_repo_dir: /store/custom/yum.repos.d

Example 2: Enable cloud-init upstream’s daily testing repo for EPEL 8 to install the latest cloud-init from tip of main for testing.

#cloud-config yum_repos: cloud-init-daily: name: Copr repo for cloud-init-dev owned by @cloud-init baseurl: https://download.copr.fedorainfracloud.org/results/@cloud-init/cloud-init-dev/epel-8-$basearch/ type: rpm-md skip_if_unavailable: true gpgcheck: true gpgkey: https://download.copr.fedorainfracloud.org/results/@cloud-init/cloud-init-dev/pubkey.gpg enabled_metadata: 1

Example 3: Add the file /etc/yum.repos.d/epel_testing.repo which can then subsequently be used by yum for later operations.

#cloud-config
yum_repos:
  # The name of the repository
  epel-testing:
    baseurl: https://download.copr.fedorainfracloud.org/results/@cloud-init/cloud-init-dev/pubkey.gpg
    enabled: false
    failovermethod: priority
    gpgcheck: true
    gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL
    name: Extra Packages for Enterprise Linux 5 - Testing

Example 4: Any yum repo configuration can be passed directly into the repository file created. See man yum.conf for supported config keys.

Write /etc/yum.conf.d/my-package-stream.repo with gpgkey checks on the repo data of the repository enabled.

#cloud-config yum_repos: my package stream: baseurl: http://blah.org/pub/epel/testing/5/$basearch/ mirrorlist: http://some-url-to-list-of-baseurls repo_gpgcheck: 1 enable_gpgcheck: true gpgkey: https://url.to.ascii-armored-gpg-key

Zypper Add Repo

Configure Zypper behavior and add Zypper repositories

Summary

Zypper behavior can be configured using the config key, which will modify /etc/zypp/zypp.conf. The configuration writer will only append the provided configuration options to the configuration file. Any duplicate options will be resolved by the way the zypp.conf INI file is parsed.

Note

Setting configdir is not supported and will be skipped.

The repos key may be used to add repositories to the system. Beyond the required id and baseurl attributes, no validation is performed on the repos entries.

It is assumed the user is familiar with the Zypper repository file format. This configuration is also applicable for systems with transactional-updates.

Internal name: cc_zypper_add_repo

Module frequency: always

Supported distros: opensuse, opensuse-microos, opensuse-tumbleweed, opensuse-leap, sle_hpc, sle-micro, sles

Activate only on keys: zypper

Config schema

Examples

Example 1:

#cloud-config zypper: config: {download.use_deltarpm: true, reposdir: /etc/zypp/repos.dir, servicesdir: /etc/zypp/services.d} repos:
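The repos entries might look like the following sketch (repository ids, names and URLs are illustrative; only id and baseurl are required):

zypper:
  repos:
    - id: opensuse-oss
      name: os-oss
      baseurl: http://dl.opensuse.org/dist/leap/v/repo/oss/
      enabled: 1
      autorefresh: 1
    - id: opensuse-oss-update
      name: os-oss-up
      baseurl: http://dl.opensuse.org/dist/leap/v/repo/oss/update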