Distributing WebAssembly components using OCI registries - Microsoft Open Source Blog

Containers are the de facto standard for packaging, distributing, and running applications in the cloud-native world. As the cloud-native space keeps evolving at a rapid pace, WebAssembly (Wasm) is emerging as a promising lightweight, secure, and portable alternative to containers. But where do you store your Wasm components?

Containers have a standard storage mechanism so universal we forget about it: Open Container Initiative (OCI) registries like Microsoft Azure Container Registry, GitHub Container Registry (GHCR), or Docker Hub. Wouldn’t it be cool if Wasm components could be used easily with OCI registries, too?

The CNCF (Cloud Native Computing Foundation) Wasm Working Group thought so and came together to define a format for Wasm OCI Artifacts. Now, you can use any OCI registry to store and use your WebAssembly components, just like you do with containers.


From the WebAssembly Working Group Charter: "Run workloads in a diverse set of environments in both a centralized location and at the edge."

In this article, we'll cover:

- Advantages of OCI with Wasm
- Developing a Wasm application using cargo-component
- Packaging, pushing, and pulling a Wasm app to and from GitHub Container Registry as an OCI Artifact
- Moving into the future of OCI registries

Advantages of OCI with Wasm

Wasm, introduced in 2015, was originally designed as a compilation target for web applications, focusing on portability, safety, and efficiency. These same features make Wasm attractive for the server side, and recently, the introduction of Wasm components and the standardization of the WebAssembly System Interface (WASI) have begun to make it possible to use Wasm components in server-side applications through public interfaces like wasi-http, wasi-filesystem, and more.

The brilliance of the container ecosystem lies in its comprehensive end-to-end experience for developers, particularly through packaging and distribution according to the Open Container Initiative specifications. OCI provides a standard for container formats and runtimes, with one of its main attributes being the image format. The newly released OCI 1.1 specification includes Artifact support, which allows you to package any type of content, not just containers, and distribute it to your registries in a way that's familiar to anyone using containers.

Utilizing OCI Artifacts, the new Wasm Artifact format allows you to use both Wasm components and containers across all the major cloud providers in the same fundamental way. Moreover, because Wasm does not require a specific operating system (OS) type or architecture to execute, a single component image can be run anywhere a Wasm runtime is available. There’s no need for multiarch build continuous integration and continuous delivery (CI/CD) pipelines for Wasm OCI Artifacts.

Many projects are already taking advantage of this work. Projects like Spin, the containerd project runwasi, and wasmCloud have already begun to integrate Wasm OCI Artifacts—making the standard useful across the open-source ecosystem. Let us look at how Wasm OCI Artifacts are used to develop and run applications.

Developing a Wasm application using cargo-component

One of the most interesting use cases for language toolchains is sharing functionality. C uses "header" files to accomplish this; in the case of Wasm, the analogue is the Wasm component's interface definition language, called WebAssembly Interface Types (WIT), which can also be packaged as an OCI image. As a result, language tools like Rust's cargo-component can dynamically fetch the interfaces needed to implement Wasm components, instead of you manually copying the files locally.

This section will focus on creating a Rust application that compiles to a Wasm component targeting the wasi-http world using Wasm OCI Artifacts.

Setting up your environment

First, let's set up the environment. We'll be building our application in Rust and assume Rust is already installed. We'll need a few tools:

- cargo-component, for creating and building Wasm component projects
- the wasm32-wasip1 Rust compilation target
- wasm-tools, for inspecting the built component
- wkg, for pushing and pulling Wasm OCI Artifacts
- wasmtime, for running the component locally

You can install these tools by running the following commands:

cargo install cargo-component@0.16.0
rustup target add wasm32-wasip1
cargo install wasm-tools@1.216.0
cargo install wkg@0.5.1
curl https://wasmtime.dev/install.sh -sSf | bash
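
As a quick sanity check (exact version output will vary with your installation), you can confirm that each tool is on your PATH:

cargo component --version
wasm-tools --version
wkg --version
wasmtime --version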

Creating a Rust component project

Create a new Rust project for Wasm components using the following commands, and then remove the example WIT file that is created:

cargo component new --lib hello-wasi-http --proxy
cd hello-wasi-http/
rm -r wit

Creating an HTTP handler component

Prior to cargo-component v0.16.0, projects had to copy the files from the Bytecode Alliance's WIT repositories on GitHub into a wit folder in their local project, similar to adding .h files to a C project. Now, you can use a single command in cargo-component to obtain the WIT interface files directly from Wasm OCI Artifacts. Our friends in the Bytecode Alliance have published all the WASI WIT world files to the GitHub Packages registry as OCI Artifacts.
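
If you want to see what one of those published WIT packages contains, you can pull it down like any other OCI artifact and print it as WIT text. This is only an illustrative sketch; <wit-package-ref> is a placeholder for the actual OCI reference of the WIT package you are interested in (for example, the wasi:http package published by the Bytecode Alliance):

# Pull a published WIT package (a Wasm-encoded WIT file) for local inspection.
wkg oci pull <wit-package-ref> -o wasi-http-wit.wasm

# Print the package contents as human-readable WIT.
wasm-tools component wit wasi-http-wit.wasm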

Now, we can use cargo-component's bindings generation to produce the Rust code for the wasi:http/proxy world. The bindings for the types are generated at compile time from the WIT files when cargo-component build is called. To enable this, we need to add wasi:http as a component target. Append the following to the bottom of your Cargo.toml file:

[package.metadata.component.target]
package = "wasi:http"
version = "0.2.0"
world = "proxy"

Now, you need to implement the Hypertext Transfer Protocol (HTTP) incoming handler interface. To target the wasi-http WIT world, your component needs to export an incoming-handler interface. cargo-component generates this as the bindings::exports::wasi::http::incoming_handler::Guest trait, which has a single method with the signature fn handle(request: IncomingRequest, outparam: ResponseOutparam).

To implement the interface, open src/lib.rs in your text editor and replace the entire file with the following:

#[allow(warnings)]
mod bindings;

pub use bindings::wasi::http::types::{
    Fields, IncomingRequest, OutgoingBody, OutgoingResponse, ResponseOutparam,
};

struct Component;

bindings::export!(Component with_types_in bindings);

impl bindings::exports::wasi::http::incoming_handler::Guest for Component {
    fn handle(_request: IncomingRequest, outparam: ResponseOutparam) {
        // Build an empty header list and an outgoing response that uses it.
        let hdrs = Fields::new();
        let resp = OutgoingResponse::new(hdrs);
        let body = resp.body().expect("outgoing response");

        // Hand the response back to the host before streaming the body.
        ResponseOutparam::set(outparam, Ok(resp));

        // Write the body, then finish it so the host can flush the response.
        let out = body.write().expect("outgoing stream");
        out.blocking_write_and_flush(b"Hello, this is your first wasi:http/proxy world!\n")
            .expect("writing response");
        drop(out);

        OutgoingBody::finish(body, None).unwrap();
    }
}

You may find this paradigm resembles serverless computing, and it does. The HTTP server is managed by the host that runs this Wasm component, delegating the HTTP request handling to your handler function.

Build and run this component locally

To build your component, you can run:

cargo component build --release
mv target/wasm32-wasip1/release/hello_wasi_http.wasm ./hello_wasi_http.wasm

This will produce a Wasm component called hello_wasi_http.wasm. You may see that the produced component is only about 64kB. Sure, it is a hello world program, but that is still very small.

If you run wasm-tools component wit hello_wasi_http.wasm, you can see that it exports wasi:http/incoming-handler@0.2.0, which you implemented in the last section.
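
For reference, a run might look roughly like the following (output abbreviated and shown for illustration only; the exact text depends on your wasm-tools version):

wasm-tools component wit hello_wasi_http.wasm

package root:component;

world root {
  import wasi:io/error@0.2.0;
  import wasi:io/streams@0.2.0;
  import wasi:cli/stdout@0.2.0;
  import wasi:cli/stderr@0.2.0;
  import wasi:cli/stdin@0.2.0;
  import wasi:http/types@0.2.0;

  export wasi:http/incoming-handler@0.2.0;
}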

Next, let us run this component locally using wasmtime, an open-source Wasm runtime from the Bytecode Alliance:

wasmtime serve --addr 127.0.0.1:3000 hello_wasi_http.wasm

This will serve HTTP requests on localhost at port 3000. Open a new terminal and run the following command to verify that your component is working properly:

curl localhost:3000
Hello, this is your first wasi:http/proxy world!

You have successfully built and run your first Wasm component with the help of a Wasm OCI Artifact.

The magic: Package, push, and pull a Wasm app to and from GitHub Container Registry as an OCI Artifact

Building, pushing and viewing the Wasm OCI Artifact

Now that we’ve built an application by consuming WIT OCI Artifacts, let’s publish it so it can be run by a compatible runtime.

First, log in to GitHub Container Registry (one way to do this is sketched below). Once logged in, we can use the wkg CLI to turn the Wasm component we built previously into an OCI Artifact and push it to GHCR.
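
As a minimal login sketch, this assumes you have the Docker CLI installed and a GitHub personal access token with the write:packages scope exported as GITHUB_TOKEN; wkg generally picks up Docker-style registry credentials, but verify this against your own setup:

# Authenticate to GHCR with a personal access token.
echo $GITHUB_TOKEN | docker login ghcr.io -u <your_github_username> --password-stdin

With credentials in place, push the component: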

wkg oci push ghcr.io/<your_github_username>/hello-wasi-http:latest hello_wasi_http.wasm

Let’s look directly at the format for the new artifact we have published. You can use regctl (or your favorite image inspection client) to inspect the artifact, looking for the digest value of the Config entry:

regctl manifest get ghcr.io/<your_github_username>/hello-wasi-http:latest

<snip>
Config:
  Digest:    sha256:66305959b88c33eb660c78bed6e9e06ec809a38f06f89a9ddf5b0cb8b22f0c0c
  MediaType: application/vnd.wasm.config.v0+json
  Size:      413B
Layers:
  Digest:    sha256:a31c2628694eb560dd0e8f82de12e657268c761727c3ad98638c9c55dd46c5df
  MediaType: application/wasm
  Size:      87818B

Here, we can see the two important parts: the config.mediaType of application/vnd.wasm.config.v0+json and the layers with "mediaType": "application/wasm", both of which are defined in the Wasm OCI Artifact layout specification.

Let's take a look at the Wasm config blob using the Digest value from the Config entry above and pipe the output to jq:

regctl blob get ghcr.io/<your_github_username>/hello-wasi-http:latest sha256:66305959b88c33eb660c78bed6e9e06ec809a38f06f89a9ddf5b0cb8b22f0c0c | jq

{
  "created": "2024-07-26T21:56:17.581533530Z",
  "author": null,
  "architecture": "wasm",
  "os": "wasip2",
  "layerDigests": [
    "sha256:a31c2628694eb560dd0e8f82de12e657268c761727c3ad98638c9c55dd46c5df"
  ],
  "component": {
    "exports": [
      "wasi:http/incoming-handler@0.2.0"
    ],
    "imports": [
      "wasi:io/error@0.2.0",
      "wasi:io/streams@0.2.0",
      "wasi:http/types@0.2.0",
      "wasi:cli/stdout@0.2.0",
      "wasi:cli/stderr@0.2.0",
      "wasi:cli/stdin@0.2.0"
    ],
    "target": null
  }
}

This should look familiar. The exports and imports match the WIT world we used to build our application; in particular, the component exports wasi:http/incoming-handler@0.2.0.

The Wasm config.mediaType configuration provides the ability to quickly identify the imports, exports, and worlds used by a component; the full explanation of the format can be found in the Wasm OCI Artifact documentation. Since all of this information is exposed in the configuration, we can also use existing tools to search for and find other Wasm components in container registries. Now that's something new!
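
As a small example of what this enables, here is a sketch that reuses the regctl and jq commands from above to pull just the exports and imports out of the config blob (the digest is the Config digest from the manifest):

regctl blob get ghcr.io/<your_github_username>/hello-wasi-http:latest sha256:66305959b88c33eb660c78bed6e9e06ec809a38f06f89a9ddf5b0cb8b22f0c0c | jq '{exports: .component.exports, imports: .component.imports}'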

Now that we have it pushed to the registry, we can pull it down and run it in a runtime:

wkg oci pull ghcr.io/<your_github_username>/hello-wasi-http:latest -o app.wasm

Successfully wrote ghcr.io/<your_github_username>/hello-wasi-http:latest to app.wasm

wasmtime serve --addr 127.0.0.1:3000 app.wasm

We’ve just successfully packaged, pushed, and pulled a Wasm component as an OCI Artifact.

Moving into the future of OCI registries

Since Wasm Artifacts follow the OCI 1.1 specification, you are not limited to GitHub Container Registry. You can use any of your existing registries and also build on the investments you've made in image signing and software bill of materials (SBOM) support.
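
For example, as a minimal sketch (assuming you have cosign installed and are still logged in to the registry), you can sign the Wasm artifact exactly as you would a container image; with cosign 2.x, running this without a key starts the keyless OIDC signing flow:

# Sign the Wasm OCI artifact just like a container image.
cosign sign ghcr.io/<your_github_username>/hello-wasi-http:latest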

The exciting aspects of this common OCI Artifact format for Wasm are the consistency it enables for tools across the ecosystem and the fact that Wasm OCI compatibility will be built directly into language tooling such as cargo-component, dotnet, and go.

Please try using Rust to create and run a WebAssembly component, then store and deploy it using OCI Artifacts. Provide feedback, or get involved by helping your favorite language or tool add support for the OCI package format. You can reach out to any of the projects mentioned in this article or join us in the CNCF Wasm Working Group.


Jiaxiao (Joe) Zhou

Software Engineer

Jiaxiao (Joe) Zhou is a Software Engineer at Microsoft. He is on the Azure Container Upstream team and works on bringing WebAssembly to the cloud through projects like containerd/runwasi and SpinKube. He is a Recognized Contributor to the Bytecode Alliance, has contributed to many upstream Wasm projects, and has championed several WASI proposals under the wasi-cloud-core umbrella.



James Sturtevant

Principal Software Engineer

James Sturtevant is enthusiastic about creating technology in the cloud native ecosystem. He is a maintainer of the containerd runwasi project, contributes to Kubernetes as a SIG Windows tech lead, and is a Recognized Contributor to the Bytecode Alliance.
