Run WebAssembly on Kubernetes with SpinKube: A Practical Guide

Learn how to deploy lightweight, fast-starting WebAssembly workloads on Kubernetes using SpinKube. A hands-on guide for homelab users and Kubernetes enthusiasts.

• 7 min read
kubernetes · webassembly · spinkube · serverless

If you’ve been running Kubernetes for a while, you know the drill: build a container image, push it to a registry, pull it down, wait for it to start. Containers are great, but they’re not exactly lightweight. What if you could run workloads that start in milliseconds, use a fraction of the memory, and produce artifacts measured in megabytes instead of gigabytes?

Enter SpinKube—a CNCF sandbox project that brings WebAssembly (Wasm) workloads to Kubernetes with first-class integration. Let’s explore what it is, why it matters, and how to get it running in your homelab.

What is SpinKube?

SpinKube is an open-source project that streamlines developing, deploying, and operating WebAssembly workloads on Kubernetes. Accepted as a CNCF sandbox project in January 2025, it combines several components to provide native Kubernetes support for Wasm applications:

  • Spin Operator — A Kubernetes controller that manages SpinApp custom resources
  • Runtime Class Manager — An operator that automates the containerd shim lifecycle
  • containerd-shim-spin — A containerd shim, built on runwasi, that runs Spin applications directly

The result? You can deploy WebAssembly modules to Kubernetes just like you would containers, using familiar tools like kubectl and Helm.

SpinKube Architecture

Why WebAssembly Instead of Containers?

Before diving into installation, let’s understand why you’d want Wasm workloads in the first place.

Smaller Artifacts

Container images are bulky. Even a minimal Alpine-based image weighs in at several megabytes, and real application images often exceed hundreds of megabytes or even gigabytes. WebAssembly modules, by comparison, are tiny—often measured in single-digit megabytes. This means faster downloads, less storage overhead, and simpler artifact management.

Faster Startup Times

Containers must pull an image and set up a full userland filesystem and process environment before your application runs. Wasm modules start almost instantly—the runtime loads the module and executes it directly. For serverless workloads where cold starts matter, this is a game-changer.

Lower Resource Usage

Idle containers still hold memory for their userland processes and runtime. Wasm workloads use substantially fewer resources when idle, making them ideal for edge computing and resource-constrained environments.

Kubernetes Native

SpinKube doesn’t reinvent the wheel. Your Wasm workloads get:

  • Kubernetes DNS integration
  • Health probes and readiness checks
  • Autoscaling with HPA and KEDA
  • Metrics and observability
  • Standard kubectl workflows

If you know Kubernetes, you already know most of what you need to run SpinKube.
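As a sketch of that familiarity: assuming a hypothetical app whose operator-created Deployment is named hello-wasm and has CPU requests set, a plain HPA applies (SpinKube may require telling the operator not to manage replicas itself, so treat this only as an illustration):

```shell
# Scale the Deployment the operator created, based on CPU utilization
kubectl autoscale deployment hello-wasm --cpu-percent=75 --min=1 --max=5

# Watch the HPA pick a replica count
kubectl get hpa hello-wasm
```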

Installing SpinKube on Your Cluster

Let’s walk through setting up SpinKube using k3d, a lightweight Kubernetes distribution perfect for homelab testing. The same approach works with any Kubernetes cluster—just adapt the node configuration for your environment.

Prerequisites

Make sure you have these installed:

  • kubectl — Kubernetes CLI
  • k3d — Lightweight Kubernetes with Docker
  • helm — Kubernetes package manager

Step 1: Create a Cluster with Wasm Support

SpinKube provides a custom k3d image with the Wasm shim pre-installed. Create your cluster:

k3d cluster create wasm-cluster \
  --image ghcr.io/spinframework/containerd-shim-spin/k3d:v0.23.0 \
  --port "8081:80@loadbalancer" \
  --agents 2

This creates a cluster named wasm-cluster with two agent nodes and exposes port 8081 for ingress traffic.
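Before moving on, you can confirm the cluster came up with the expected topology (the node names below assume k3d's default naming scheme):

```shell
# Expect one server and two agents, all Ready
kubectl get nodes

# k3d names nodes after the cluster, e.g.:
#   k3d-wasm-cluster-server-0
#   k3d-wasm-cluster-agent-0
#   k3d-wasm-cluster-agent-1
```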

Step 2: Install cert-manager

The Spin Operator requires cert-manager for webhook certificates:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
kubectl wait --for=condition=available --timeout=300s deployment/cert-manager-webhook -n cert-manager

Wait for cert-manager to become available before proceeding.

Step 3: Apply the Runtime Class

The Runtime Class tells Kubernetes how to handle Wasm workloads:

kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.6.1/spin-operator.runtime-class.yaml
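If you're curious what that manifest creates, you can inspect it. The RuntimeClass maps the wasmtime-spin-v2 handler onto nodes that carry the shim:

```shell
# Show the RuntimeClass the Spin Operator will schedule against
kubectl get runtimeclass wasmtime-spin-v2 -o yaml
```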

Step 4: Install the CRDs

Apply the Custom Resource Definitions for Spin applications:

kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.6.1/spin-operator.crds.yaml
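A quick sanity check confirms the new resource types are registered with the API server:

```shell
# Should list the SpinApp and SpinAppExecutor CRDs
kubectl get crds | grep spin
```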

Step 5: Deploy the Spin Operator

Install the operator using Helm:

helm upgrade --install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.6.1 \
  --wait \
  oci://ghcr.io/spinframework/charts/spin-operator
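The --wait flag should leave you with a healthy release, but it's worth confirming the operator pod is actually Running before deploying apps:

```shell
# The operator pod should reach Running status
kubectl get pods -n spin-operator
```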

Step 6: Create the Shim Executor

The shim executor manages the Wasm runtime lifecycle:

kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.6.1/spin-operator.shim-executor.yaml

Step 7: Deploy a Sample Application

Test your setup with the sample Spin application:

kubectl apply -f https://raw.githubusercontent.com/spinframework/spin-operator/main/config/samples/simple.yaml
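Behind the scenes, the operator turns the SpinApp resource into a Deployment and a Service. You can watch that happen:

```shell
# The sample creates a SpinApp named simple-spinapp
kubectl get spinapp
kubectl get pods
kubectl get svc simple-spinapp
```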

Step 8: Verify It Works

Port-forward to the service and test:

kubectl port-forward svc/simple-spinapp 8083:80
curl localhost:8083/hello

You should see: Hello world from Spin!

Your cluster is now running WebAssembly workloads alongside traditional containers.

Building Your First Wasm Application

While the sample app works, you’ll want to build your own. Here’s how to create a simple HTTP service in Rust and deploy it to your SpinKube cluster.

Install the Spin CLI

First, install the Spin CLI from Fermyon:

curl -fsSL https://developer.fermyon.com/downloads/install.sh | bash

Create a New Spin Application

Initialize a new HTTP trigger application:

spin new http-rust hello-wasm --accept-defaults
cd hello-wasm

This generates a Rust project with a simple HTTP handler. The spin.toml file defines your application’s configuration:

spin_manifest_version = 2

[application]
name = "hello-wasm"
version = "0.1.0"

[[trigger.http]]
route = "/hello"
component = "hello"

[component.hello]
source = "target/wasm32-wasi/release/hello_wasm.wasm"
[component.hello.build]
command = "cargo build --release --target wasm32-wasi"

Build the Wasm Module

Compile your application to WebAssembly:

spin build

This produces a .wasm file in your target directory—typically around 100-200 KB for a simple HTTP service.
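Before pushing anything to a registry, you can run the module locally with the same CLI:

```shell
# Serve the app locally; spin up listens on 127.0.0.1:3000 by default
spin up

# In another terminal:
curl localhost:3000/hello
```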

Deploy to Kubernetes

Package your Wasm module as an OCI artifact and deploy it. First, create a SpinApp manifest:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-wasm
spec:
  image: "ghcr.io/yourusername/hello-wasm:v0.1.0"
  executor: containerd-shim-spin
  replicas: 2
  checks:
    liveness:
      httpGet:
        path: /hello
        port: 80
    readiness:
      httpGet:
        path: /hello
        port: 80

Apply it:

kubectl apply -f spinapp.yaml

Expose it via a LoadBalancer or Ingress, and your custom Wasm application is now running on Kubernetes.
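Note that the image referenced in the manifest has to exist first. One way to get it there is spin registry push (the ghcr.io path below is a placeholder for your own registry), after which the operator's Service gives you a quick way to test:

```shell
# Log in to your registry, then push the Wasm module as an OCI artifact
spin registry login ghcr.io
spin registry push ghcr.io/yourusername/hello-wasm:v0.1.0

# After applying the SpinApp, test via the Service the operator created
kubectl port-forward svc/hello-wasm 8084:80
curl localhost:8084/hello
```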

Real-World Use Cases

Now that you have SpinKube running, where does it shine?

Serverless Functions

WebAssembly’s instant startup makes it perfect for event-driven workloads. Deploy functions that respond to HTTP requests, queue messages, or scheduled events without the cold-start penalty of traditional containers. Combined with KEDA for autoscaling, you get a responsive serverless platform.

Edge Computing

Running Kubernetes at the edge? Wasm workloads consume far less memory and CPU, letting you run more workloads on constrained hardware. Whether you’re deploying to ARM boards, industrial gateways, or micro-VMs, Wasm extends what’s possible.

Plugin Systems

Need a safe, sandboxed way to run user-provided code? Wasm’s security model isolates plugins from the host system. Users can write extensions in Rust, Go, JavaScript, or other languages that compile to Wasm, and your platform executes them safely.

Microservices

For small, stateless services—API gateways, request routers, data transformers—Wasm offers rapid deployment and minimal overhead. Build in your preferred language, compile to Wasm once, and run anywhere.

Production Considerations

Running SpinKube in production requires planning:

Supported Platforms: SpinKube works on Azure Kubernetes Service (AKS), Linode Kubernetes Engine (LKE), Rancher Desktop, MicroK8s, and generic Kubernetes clusters via Helm.

Autoscaling: Use the Kubernetes Horizontal Pod Autoscaler (HPA) for CPU/memory-based scaling, or KEDA for event-driven autoscaling based on queue depth, HTTP requests, or custom metrics.

Observability: Wasm workloads integrate with standard Kubernetes monitoring. Configure your Prometheus/Grafana stack to scrape metrics from Spin applications.

Development Workflow: Use the Spin CLI to build Wasm applications locally, then package them as OCI artifacts for deployment. The familiar spin build and spin up workflow carries over to Kubernetes.
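Put together, a typical inner loop looks roughly like this (app and registry names are placeholders):

```shell
spin build          # compile the app to .wasm
spin up             # smoke-test locally
spin registry push ghcr.io/yourusername/hello-wasm:v0.1.1

# Point the running SpinApp at the new image
kubectl patch spinapp hello-wasm --type merge \
  -p '{"spec":{"image":"ghcr.io/yourusername/hello-wasm:v0.1.1"}}'
```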

Troubleshooting Common Issues

If something goes wrong, here are the most common fixes:

Pods stuck in Pending: Check that the Runtime Class exists: kubectl get runtimeclass wasmtime-spin-v2

Image pull errors: Ensure your Wasm image is pushed to a registry accessible from the cluster

Certificate errors: Verify cert-manager is running: kubectl get pods -n cert-manager

Exec format error: The node is missing the Wasm shim—use a compatible k3d image or install runwasi manually
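When none of those fixes help, the operator's logs are the next place to look. The deployment name below matches a default Helm install in my testing; confirm yours with kubectl get deploy -n spin-operator:

```shell
# Describe the stuck pod for scheduling or runtime-class errors
kubectl describe pod <pod-name>

# Tail the Spin Operator controller logs
kubectl logs -n spin-operator deploy/spin-operator-controller-manager -f
```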

The Numbers: Why Wasm Matters

Let’s put the benefits in concrete terms. In testing, a typical Spin application:

  • Starts in <10ms compared to seconds for a container
  • Uses 1-10MB of memory vs 50-100MB+ for a minimal container
  • Produces artifacts of 100KB-5MB vs container images of 50MB-1GB+
  • Scales to zero without the warm-up penalty

These aren’t theoretical gains—they’re measurable improvements that translate to real cost savings on infrastructure, faster CI/CD pipelines, and better user experience for latency-sensitive applications.

What’s Next for SpinKube?

As a CNCF sandbox project, SpinKube is actively evolving. Keep an eye on:

  • Component dependencies — Spin applications can trigger other Spin components directly
  • Database integrations — Built-in support for MySQL, PostgreSQL, and Redis via WASI
  • Event triggers — Beyond HTTP, Spin supports Redis pub/sub, MQTT, and SQS
  • Improved tooling — Better local development experience and debugging capabilities

Conclusion

SpinKube bridges the gap between WebAssembly’s promise and Kubernetes ubiquity. By treating Wasm as a first-class workload type, it opens the door to faster startups, smaller footprints, and new architectural patterns—all without leaving the Kubernetes ecosystem you already know.

For homelab users, it’s an accessible way to experiment with WebAssembly in a real cluster. For production users, it’s a path to more efficient edge and serverless deployments. Either way, the future of Kubernetes workloads just got lighter.

Ready to try it yourself? Spin up a k3d cluster, follow the installation steps, and deploy your first Wasm workload. Getting to the Hello world from Spin! message takes only a few minutes.


Have questions about running Wasm workloads in your homelab? Check out the SpinKube documentation or join the CNCF Slack to connect with the community.

Anthony Lattanzio


Tech Enthusiast & Builder

I'm a tech enthusiast who loves building things with hardware and software. By night, I run a homelab that's grown way beyond what any reasonable person needs. Check out about me for more.
