Dev container metadata in image labels · Issue #18 · devcontainers/spec

Unlike some developer-centric formats, a non-goal for devcontainer.json is to become yet another multi-container orchestrator format. Instead, its goal is to enable development in containers regardless of how they are orchestrated. With this in mind, allowing image labels to be used to set devcontainer.json properties could have significant advantages. Notably, images and devcontainer.json files need to be paired but are deployed and managed separately, which can lead to disconnects. This matters in scenarios like container runtime settings required to enable debugging (e.g. cap_add: SYS_PTRACE), but applies broadly to any settings the image was purpose-built to support.

In particular:

  1. When the dev container CLI is used to build the image, these labels can be added automatically.
  2. Labels can be added to images using a Dockerfile, command line, an orchestrator format, or another image build system like Buildpacks.

This second point also makes it easier to attach the metadata when some other mechanism builds the container images - whether an orchestrator or a non-Docker CLI like pack for Buildpacks.
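As a minimal sketch of the command-line route, a plain docker build can attach the proposed label without touching the Dockerfile. The label key follows this proposal; the image name and the metadata contents are placeholders:

# Attach dev container metadata while building with the plain Docker CLI.
# The image name and the JSON contents are illustrative only.
docker build \
  --label 'dev.containers.metadata={ "userEnvProbe": "loginInteractiveShell" }' \
  -t my-app:dev .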

The JSON-based nature of devcontainer.json would make this fairly straightforward. To simplify reusing JSON blobs in automation scenarios, we can also support a whole structure being added:

LABEL dev.containers.metadata='{ "userEnvProbe": "loginInteractiveShell", "customizations": { "vscode": { "extensions": ["ms-python.python", "ms-toolsai.jupyter" ] } } }'

For hand editing, common and less complex properties could also be set directly:

LABEL dev.containers.metadata.userEnvProbe="loginInteractiveShell"
LABEL dev.containers.metadata.customizations.vscode.extensions='["ms-python.python","ms-toolsai.jupyter"]'

More complex, any-typed properties could then be encoded as JSON:

LABEL dev.containers.metadata.customizations.vscode.settings='{"some.setting": "some-value"}'

In the common case, these labels would be added to the image automatically by the dev container CLI when it is pre-built (devcontainer build ...), but manual entry would also enable them to be embedded in orchestrator formats:

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      labels:
        - "dev.containers.metadata.userEnvProbe=loginInteractiveShell"
        - 'dev.containers.metadata.customizations.vscode.extensions=["ms-python.python","ms-toolsai.jupyter"]'

Any tool that supports the dev container spec would then look for these image labels - regardless of whether they are on a pre-built image or one built by an orchestrator. The reference implementation would then illustrate how to make this happen.
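As a sketch of what that lookup could be, the label is visible through standard image inspection, e.g. with the Docker CLI. The label key follows this proposal and the image name is a placeholder:

# Read the proposed metadata label from an image, whether it was pre-built
# or produced by an orchestrator. The image name is a placeholder.
docker inspect --format '{{ index .Config.Labels "dev.containers.metadata" }}' my-app:dev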

We should be able to make this work with any general, lifecycle, or tool-specific property.

Dev container features

The dev container CLI enables pre-building images that include the proposed dev container features concept. However, there are a couple of things to consider here. First, we can include an image label that indicates which features have already been built into the image. This avoids accidentally re-firing the same feature multiple times when the configuration is reused.

LABEL dev.containers.features.built='["docker-in-docker","github-cli"]'

Second, we can render the devcontainer.json properties that each feature contributes into the resulting image. For example, extensions would include those contributed by the feature. Properties like capAdd, which we've discussed as higher-level properties, could be handled the same way.

Processing is then always: build (optional), read the metadata from labels, run. You can use devcontainer.json as you do today, but the labels could also come from somewhere else.
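A rough sketch of that flow using the reference dev container CLI follows; the command and flag names are based on the current CLI and may differ as the proposal evolves:

# Optional pre-build step - would write the metadata labels into the image.
devcontainer build --workspace-folder .
# Starting the container would then read the labels (plus devcontainer.json) and run.
devcontainer up --workspace-folder .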

Beyond providing processing consistency, this avoids a possible issue where a feature referenced in devcontainer.json updates but the pre-built image does not - a mismatch that could otherwise be extremely difficult to track down.

Multi-container

Finally, this should help with the one-to-many problem, at least tactically or where there is not a specific integration with an orchestrator format. For pre-building, each image can have a separate devcontainer.json or Dockerfile with the labels - with the dev container CLI enabling features. These images can then be referenced in an orchestrator directly. Even when the dev container CLI is not used, at least a subset of properties could be included in the orchestrator format.

Net-net, the resulting image should have labels on it that explain how it should be set up. Orchestrator formats can influence processing where they support adding these properties. In all cases, the dev container CLI will inspect the image and make the needed adjustments.