Quick Decision Framework

Use WebAssembly if:
  • ✅ Cold start <100ms is critical (edge functions, event-driven APIs)
  • ✅ High multi-tenancy needed (100+ tenants per host)
  • ✅ Untrusted code execution (plugins, user scripts)
  • ✅ Extreme density required (1,000+ functions per server)
Stick with containers if:
  • ❌ You need full POSIX (databases, message queues)
  • ❌ Long-lived stateful processes (WebSocket servers)
  • ❌ GPU/hardware access (ML inference, video encoding)
  • ❌ Rich observability tooling matters (APM, debuggers, profilers)
The reality: Most systems will run both — edge functions (Wasm) + backend services (containers) on the same infrastructure.

---

Status in production today (2025): WebAssembly workloads are running at planetary scale on Cloudflare Workers (millions of functions, sub-10ms cold starts), Fastly Compute (edge CDN logic), Fermyon Cloud (Spin framework), and inside Kubernetes clusters via runwasi + SpinKube. This isn't a lab experiment. It's shipped code handling real traffic.

Why this matters: Containers democratized deployment. WebAssembly is democratizing instantiation — the ability to start a workload in microseconds instead of hundreds of milliseconds. That changes what you can build at the edge, in event-driven systems, and in multi-tenant platforms where isolation + density + speed all matter at once.

What this is NOT: A replacement for your monolithic Rails app, your Postgres database, or your Kafka cluster. WebAssembly shines in narrow, high-value niches. Knowing when to use it (and when to stick with containers) is the entire point of this article.

---

Part 1: The Performance Gap — and Why It Exists

Cold start: The hidden cost of serverless

The problem: AWS Lambda cold starts range from 100ms (optimized Node.js) to 1,000ms+ (JVM-based functions with large dependencies). Google Cloud Functions and Azure Functions see similar numbers. For synchronous HTTP requests at the edge, this latency is unacceptable — users notice 200ms delays. Why containers are slow to start:
  1. Kernel overhead: Each container needs a namespace (PID, network, mount), cgroups for resource limits, and often a separate network stack. Even with shared kernel structures, this takes 50-150ms.
  2. Filesystem layers: Pulling and unpacking OCI image layers (even from local cache) adds 20-100ms depending on image size.
  3. Language runtime initialization: Node.js VM startup (~30ms), Python interpreter + imports (~50ms), JVM class loading (100-500ms).
Firecracker microVMs (AWS Lambda's secret weapon): AWS Lambda doesn't use Docker. It uses Firecracker, a minimalist KVM-based microVM that boots in ~125ms. This is *already optimized* — and it's still 25x slower than WebAssembly.

WebAssembly cold start (Wasmtime/WasmEdge):
  • Module instantiation: 2-5ms (parse + validate bytecode)
  • Memory allocation: 1-3ms (linear memory is pre-zeroed)
  • Capability linking (WASI 0.2): <1ms (bind host functions)
  • Total: Sub-10ms consistently, often <5ms for small modules
Real-world measurement (Cloudflare Workers, 2024):
  • P50 cold start: ≈1ms
  • P99 cold start: a few milliseconds
  • *Source: Cloudflare Workers public benchmarks consistently show single-digit millisecond cold starts*
Why this is architecturally different: Containers isolate *processes*. WebAssembly isolates *instances within a single process*. You're not booting a mini-OS; you're creating a sandboxed function call. The security model is capability-based (WASI), not syscall-filtering (seccomp/AppArmor).

---

Warm-path performance: not always faster

Important nuance: Cold start isn't the whole story.

Recent research (Lumos 2025, ASPLOS) shows that:

  • AoT-compiled WebAssembly (ahead-of-time, e.g. Wasmtime's AOT mode or WasmEdge's AOT compiler) can match or exceed container warm-path performance for CPU-bound workloads.
  • Interpreted or JIT-compiled Wasm (like default Wasmtime) can show worse steady-state latency and higher I/O overhead compared to native container binaries in some scenarios.

What this means in practice:
  • For short-lived, burst workloads (edge functions, event handlers): Wasm's cold-start advantage dominates. Use it.
  • For long-running, steady-state services (API servers processing thousands of requests/sec): The gap narrows significantly. Container performance may be equal or better depending on runtime configuration.
Takeaway: WebAssembly shines where instantiation speed matters more than raw throughput. For workloads that stay warm (persistent services), evaluate both before committing.
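To make the takeaway concrete, a back-of-envelope sketch. All numbers are illustrative assumptions (a ~120ms cold-start saving, a hypothetical 0.05ms/request warm-path penalty for JIT-compiled Wasm), not benchmarks:

```rust
// Back-of-envelope: how many requests must one instance serve before a
// container's warm-path edge repays its cold-start cost?
// All numbers used here are illustrative assumptions, not measurements.
fn break_even_requests(cold_start_saved_ms: f64, warm_penalty_per_req_ms: f64) -> f64 {
    cold_start_saved_ms / warm_penalty_per_req_ms
}

fn main() {
    // Assume: container cold start 125 ms vs. Wasm 5 ms (120 ms saved),
    // and a hypothetical 0.05 ms/request warm-path penalty for JIT'd Wasm.
    let n = break_even_requests(125.0 - 5.0, 0.05);
    println!("container breaks even after ~{n:.0} warm requests per instance");
}
```

Under these assumptions the container only recoups its cold-start cost after a couple of thousand warm requests per instance, which is why bursty, short-lived workloads favor Wasm while long-lived steady-state services deserve a real benchmark.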

---

Part 2: Where WebAssembly Wins (and Where It Doesn't)

Niche 1: Edge compute (Cloudflare, Fastly, Vercel Edge)

The use case: You need to run custom logic in 300+ global datacenters, as close to the user as possible. Examples: A/B testing, request routing, header manipulation, lightweight API transformations. Why containers fail here:
  • Density: Running a full container per tenant per edge location is resource-prohibitive. Cloudflare Workers runs *millions* of tenants in shared processes.
  • Cold start: Edge requests are sporadic. A 200ms container cold start negates the latency benefit of being geographically close.
Why WebAssembly wins:
  • Instantiation speed: Start a new tenant's function in <5ms.
  • Isolation model: Each Wasm instance is sandboxed via capability restrictions (no filesystem access unless explicitly granted).
  • Memory density: Hundreds to thousands of Wasm instances in a single process vs. tens to hundreds of containers on the same hardware.
Real example (Shopify Functions): Shopify runs partner-supplied checkout logic (discounts, validation, payment routing) as WebAssembly via Shopify Functions. Custom logic executes directly inside the checkout flow with sub-10ms cold starts, enabling per-request personalization without origin round-trips. *Source: Shopify Functions documentation*
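The density bullet above is ultimately a memory-footprint argument. A toy calculation (the footprints are assumptions: ~50MB for a minimal container plus runtime, ~1MB of linear memory for a small Wasm instance; real limits also include kernel and scheduler overhead, so treat this as an upper bound):

```rust
// Rough memory-only density comparison for a single 64 GB edge node.
// Per-instance footprints below are illustrative assumptions.
fn instances_per_node(node_mem_mb: u64, per_instance_mb: u64) -> u64 {
    node_mem_mb / per_instance_mb
}

fn main() {
    let node = 64 * 1024; // 64 GB expressed in MB
    println!("containers (~50 MB each): ~{}", instances_per_node(node, 50));
    println!("wasm instances (~1 MB each): ~{}", instances_per_node(node, 1));
}
```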

---

Niche 2: Event-driven functions (AWS Lambda alternative)

The problem with Lambda: For high-frequency, short-duration events (S3 triggers, Kinesis streams, SQS messages), cold starts dominate total execution time. A 200ms cold start for a 10ms function is 95% waste. Fermyon Spin (WebAssembly serverless framework):
  • Written in Rust, runs Wasm modules via Wasmtime
  • Cold start: <5ms (measured)
  • Deploy target: Fermyon Cloud (managed), Kubernetes (via SpinKube), or self-hosted
Example workload: Process 10,000 events/sec from Kafka. Each event triggers a Wasm function that validates JSON, applies business rules, and writes to a database. Container cold starts would bottleneck throughput; Wasm handles it at line rate. Trade-off: You give up rich syscall access. No fork(), no raw sockets, no arbitrary filesystem writes. WASI 0.2 provides *interfaces* (wasi:http, wasi:keyvalue, wasi:sql) but not full POSIX. If your function needs iptables or strace, use a container.
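The validate → apply rules → write pipeline above can be sketched in plain Rust. Everything here (the Event shape, the 2% fee rule, the Outcome sink) is invented for illustration; a real Spin handler would receive the event via the spin_sdk and write through granted interfaces like wasi:keyvalue or wasi:sql.

```rust
// Shape of a short-lived event handler: validate, apply a business rule,
// hand off to a sink. Event and Outcome are hypothetical stand-ins for
// the JSON payloads and database writes described above.
struct Event { tenant: String, amount_cents: i64 }

#[derive(Debug, PartialEq)]
enum Outcome {
    Stored { tenant: String, amount_cents: i64 },
    Rejected(&'static str),
}

fn handle(e: Event) -> Outcome {
    // 1. Validate the payload.
    if e.tenant.is_empty() { return Outcome::Rejected("missing tenant"); }
    if e.amount_cents < 0 { return Outcome::Rejected("negative amount"); }
    // 2. Business rule: an illustrative 2% fee.
    let net = e.amount_cents - e.amount_cents / 50;
    // 3. "Write" to the sink. A real handler would call a granted
    //    wasi:sql or wasi:keyvalue host interface here.
    Outcome::Stored { tenant: e.tenant, amount_cents: net }
}

fn main() {
    let ok = handle(Event { tenant: "acme".into(), amount_cents: 1000 });
    println!("{ok:?}");
}
```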

---

Niche 3: Plugins and untrusted code execution

The problem: You want users to upload custom logic (e.g., Figma plugins, Shopify apps, game mods). Running this in a container requires heavyweight isolation (gVisor, Firecracker) or trusting the sandbox (risky). WebAssembly's capability model (WASI 0.2):
  • Functions start with *zero* capabilities (no filesystem, no network, no clock).
  • You explicitly grant capabilities: "This plugin can read /data/input.json but not write anywhere."
  • Enforced at the VM level, not via OS permissions (no privilege escalation via kernel bugs).
Real example (Figma): Figma's plugin sandbox is powered by WebAssembly: they embed a QuickJS JavaScript engine compiled to Wasm, and run plugin code inside that. Plugin authors still write JavaScript, but it executes in a Wasm-backed sandbox with tightly-scoped canvas API access. No risk of filesystem writes or network exfiltration. *Source: Figma plugin architecture documentation*

---

Where WebAssembly does NOT replace containers (yet, or ever)

#### 1. Databases (Postgres, MySQL, Kafka)

Why: These need raw block I/O (io_uring, mmap), complex networking (replication protocols), and kernel-level tuning (page cache, scheduler). WASI doesn't (and won't) expose this.

Verdict: Containers forever.

---

#### 2. Legacy applications (Spring Boot, Rails, Django)

Why: Porting a 500K-line Java monolith to Rust + Wasm is not happening. Even if it could compile, you'd lose JVM debuggers, APM tools, and the entire ecosystem.

Verdict: Containers (or VMs if you need stronger isolation).

---

#### 3. GPU workloads (ML inference, rendering)

Why: WebAssembly has no standard for GPU access. WASI proposals exist (wasi:gpu) but are early-stage. CUDA/ROCm integration is containerized for a reason.

Verdict: Containers + NVIDIA Container Toolkit.

---

#### 4. Stateful services with long-lived TCP connections

Why: WebAssembly instances are designed to be ephemeral (start fast, run briefly, terminate). Long-lived stateful servers (WebSocket gateways, game servers) benefit from persistent processes.

Verdict: Containers or bare metal.

---

Part 3: The Developer Experience — Rust, Tooling, and the OCI 1.1 Bridge

Writing WebAssembly: Why Rust dominates

Language support for WebAssembly (2025):
  • Rust: First-class (via wasm32-wasi target). Tooling mature (cargo-component, wit-bindgen).
  • C/C++: Works (Emscripten, wasi-sdk) but painful (manual memory management; standard-library coverage via wasi-libc is partial).
  • Go: Experimental (TinyGo supports Wasm, but limited std library).
  • JavaScript/TypeScript: Via ComponentizeJS (compile TS → Wasm Component), but adds runtime overhead.
  • Python/Java: Technically possible (via interpreters compiled to Wasm), but defeats the performance purpose.
Note for JavaScript/TypeScript developers: While this article emphasizes Rust, several major platforms offer JS/TS-first Wasm experiences:
  • Deno Deploy (edge runtime, native TypeScript)
  • Vercel Edge Functions (Next.js edge middleware)
  • Supabase Edge Functions (Deno-based edge compute)
  • Cloudflare Workers (V8 isolates, JS/TS primary)

These use the same isolation model (V8 isolates or Wasm) but let you write familiar JavaScript/TypeScript instead of learning Rust. The cold-start and density benefits still apply. The trade-off: you lose some of Rust's memory-safety guarantees and may have slightly higher runtime overhead.

Why Rust wins (for this article's focus):
  1. Zero-cost abstractions: No GC pauses (unlike Go), no hidden allocations (unlike C++).
  2. Memory safety: No buffer overflows, no use-after-free (critical for sandboxed code).
  3. WASI Component Model support: wit-bindgen generates bindings from interface definitions (like gRPC, but for Wasm capabilities).
Example: Hello World HTTP handler (Rust + Spin framework)
use spin_sdk::http::{Request, Response};
use spin_sdk::http_component;

#[http_component]
fn handle_request(_req: Request) -> Response {
    Response::builder()
        .status(200)
        .header("Content-Type", "text/plain")
        .body("Hello from WebAssembly")
        .build()
}

Compile:
cargo build --target wasm32-wasi --release
spin build
Deploy:
spin deploy  # to Fermyon Cloud

spin kube scaffold --from ghcr.io/myorg/hello-wasm:v1.0.0 | kubectl apply -f -  # to Kubernetes

---

Packaging WebAssembly: OCI 1.1 artifacts

The problem: Docker images are designed for layers (base OS + dependencies + app). WebAssembly modules are single .wasm files. Do we need a new registry format? The solution: OCI 1.1 (Open Container Initiative) added support for *arbitrary artifacts*. You can push a .wasm file to Docker Hub, GitHub Container Registry, or any OCI-compliant registry with a different mediaType:
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.wasm.config.v1+json",
    "size": 123,
    "digest": "sha256:abc..."
  },
  "layers": [
    {
      "mediaType": "application/vnd.wasm.content.layer.v1+wasm",
      "size": 456789,
      "digest": "sha256:def..."
    }
  ]
}
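You don't have to write that manifest by hand. A sketch using the ORAS CLI (the ghcr.io path and filenames are placeholders; your registry must support OCI 1.1 artifacts):

```shell
# Push a bare .wasm file as an OCI 1.1 artifact with ORAS.
# The ghcr.io path and filename are placeholders.
oras push ghcr.io/myorg/hello-wasm:v1.0.0 \
  --artifact-type application/vnd.wasm.config.v1+json \
  hello.wasm:application/vnd.wasm.content.layer.v1+wasm

# Any OCI-compliant client can then fetch it back:
oras pull ghcr.io/myorg/hello-wasm:v1.0.0
```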
What this means: You can use existing CI/CD pipelines (docker build, docker push) for WebAssembly modules. Kubernetes can pull Wasm modules from the same registry as containers. Example (SpinKube on Kubernetes):
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-wasm
spec:
  image: ghcr.io/myorg/hello-wasm:v1.0.0  # OCI artifact with Wasm module
  replicas: 3
  executor: containerd-shim-spin

Behind the scenes:

  • Kubernetes pulls the OCI artifact from ghcr.io.
  • containerd-shim-spin (a containerd plugin) extracts the .wasm file.
  • Wasmtime instantiates the module in <5ms.

This is running in production today (Azure Kubernetes Service with runwasi, AWS EKS with SpinKube plugin).

---

Part 4: Kubernetes Integration — Mixed Workloads on the Same Cluster

The old world: Containers only

Kubernetes RuntimeClass (standard feature since v1.14): Allows different container runtimes on the same cluster (e.g., runc for normal containers, kata-containers for stronger isolation).

The new world: Containers + WebAssembly

RuntimeClass example (SpinKube):
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin
handler: spin  # Maps to containerd-shim-spin
Deploy a Wasm workload:
apiVersion: v1
kind: Pod
metadata:
  name: wasm-function
spec:
  runtimeClassName: wasmtime-spin  # Uses Wasmtime instead of runc
  containers:
  - name: handler
    image: ghcr.io/myorg/wasm-handler:v1
Deploy a container workload (same cluster):
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  runtimeClassName: runc  # Standard container runtime
  containers:
  - name: db
    image: postgres:16
What this enables:
  • Edge functions (Wasm, <5ms cold start) run alongside databases (containers, persistent storage).
  • Same kubectl, same RBAC, same monitoring (Prometheus scrapes both).
  • Cluster autoscaling works the same (HPA scales Wasm pods just like container pods).
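One way to verify the mix on a live cluster: the runtime class is plain Pod spec data, so kubectl can show it (assumes a cluster with both RuntimeClasses installed; the output shown is illustrative):

```shell
# Show which runtime each pod is scheduled under.
# The output below is illustrative, not captured from a real cluster.
kubectl get pods -o custom-columns=NAME:.metadata.name,RUNTIME:.spec.runtimeClassName
# NAME            RUNTIME
# wasm-function   wasmtime-spin
# postgres        runc
```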
Real deployment (Azure AKS + runwasi, 2024): Microsoft Azure supports WebAssembly workloads via containerd-wasm-shims (runwasi). Customers run mixed clusters with .NET containers + Rust/Wasm edge functions. *Source: Azure blog, "WebAssembly on AKS," Nov 2024*

---

Part 5: Security Model — Capability-Based vs. Syscall Filtering

How containers isolate workloads

Linux namespaces + cgroups + seccomp:
  1. Namespaces: Isolate PID, network, mount, IPC (process thinks it's alone in the system).
  2. Cgroups: Limit CPU, memory, I/O (prevent noisy neighbors).
  3. Seccomp: Filter syscalls (block ptrace, mount, dangerous operations).
  4. AppArmor/SELinux: Mandatory Access Control (restrict file/network access).
Problem: This is *defense in depth* layered on top of a kernel designed for multi-user systems. Kernel bugs (e.g., Dirty Cow, Spectre) can break isolation. Additional isolation (for untrusted code):
  • gVisor: User-space kernel (intercepts syscalls, re-implements in Go). Slower but safer.
  • Firecracker: Minimal KVM microVM (each container runs in a separate VM). Overhead: ~125ms boot + 5MB RAM.

---

How WebAssembly isolates workloads

Capability-based security (WASI 0.2):
  1. Zero capabilities by default: A Wasm module starts with no access to filesystem, network, clock, or random numbers.
  2. Explicit grants: Host environment passes *capability handles* (e.g., "you can read this directory, but not write").
  3. Enforced at VM boundary: No syscalls exist. The Wasm VM mediates all access via host functions.
Example (WASI 0.2 component):
// Interface definition (WIT = WebAssembly Interface Types)
package example:service;

interface handler {
  handle-request: func(request: http-request) -> http-response;
}

world http-service {
  import wasi:http/incoming-handler;
  export handler;
}

What the module CAN'T do:
  • Open arbitrary files (no open() syscall exists).
  • Make network connections (no socket() call).
  • Read environment variables (unless host grants wasi:cli/environment).
What the module CAN do:
  • Call host functions explicitly granted (e.g., wasi:keyvalue/get, wasi:http/outgoing-request).
Security advantage: Even if a bug exists in the Wasm module (e.g., buffer overflow in unsafe Rust), the module cannot escape its sandbox. There's no syscall interface to exploit. Performance advantage: No need for seccomp filters, AppArmor profiles, or gVisor overhead. The Wasm VM is the security boundary.
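The host-function mediation described above can be illustrated with a toy model (this is not the real WASI machinery; actual linking happens inside the runtime, e.g. via a Wasmtime linker). The point is structural: a guest can only invoke what appears in its granted table, because no ambient syscall layer exists.

```rust
use std::collections::HashMap;

// Toy model of capability-based linking: the host decides which
// functions exist for a guest. Nothing resembling open()/socket() is
// reachable unless it appears in this table. Illustration only.
type HostFn = fn(&str) -> Result<String, &'static str>;

fn kv_get(key: &str) -> Result<String, &'static str> {
    // Stand-in for a granted wasi:keyvalue/get capability.
    if key == "greeting" { Ok("hello".to_string()) } else { Err("not found") }
}

fn guest_call(caps: &HashMap<&str, HostFn>, name: &str, arg: &str) -> Result<String, &'static str> {
    // The guest can only invoke what was granted; there is no
    // fallback syscall layer to reach around the table.
    match caps.get(name) {
        Some(f) => f(arg),
        None => Err("capability not granted"),
    }
}

fn main() {
    let mut caps: HashMap<&str, HostFn> = HashMap::new();
    caps.insert("wasi:keyvalue/get", kv_get);

    // Granted capability works; never-granted filesystem access cannot
    // even be named, let alone escalated to.
    println!("{:?}", guest_call(&caps, "wasi:keyvalue/get", "greeting"));
    println!("{:?}", guest_call(&caps, "wasi:filesystem/open", "/etc/passwd"));
}
```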

---

Comparison: Container vs. Wasm isolation

| Aspect | Container (Linux) | WebAssembly (WASI) |
|---|---|---|
| Isolation model | Kernel namespaces + seccomp | Capability-based VM sandbox |
| Attack surface | Linux kernel (~30M LOC) | Wasm VM (~100K LOC) |
| Syscall access | Filtered (300+ syscalls, block dangerous ones) | None (host functions only) |
| Escape risk | Kernel exploits (CVEs published regularly) | VM bugs (far fewer, smaller codebase) |
| Overhead | 50-150ms (namespace setup) | 2-5ms (VM instantiation) |
| Density | Tens to hundreds per node (scheduling/kernel limits) | Hundreds to thousands per process (VM instances) |

Verdict: For untrusted code (plugins, user functions), WebAssembly is both faster *and* safer. For trusted code (your own microservices), containers are fine.

Security reality check: While the Wasm VM has a smaller attack surface than the Linux kernel, WebAssembly is not immune to vulnerabilities. Browser-based Wasm has seen cryptominers, obfuscated malware, and side-channel attacks. Sandbox escapes and VM bugs will be discovered. Design like you'll need to rotate runtimes, patch VMs regularly, and maintain defense-in-depth even with capability-based isolation.

---

Part 6: Real-World Adoption — Who's Using This (and for What)

1. Cloudflare Workers (millions of functions)

Scale:
  • 300+ datacenters globally
  • Millions of user-deployed functions
  • Trillions of requests/month
Architecture:
  • Each Worker is a V8 isolate (JavaScript/Wasm)
  • Wasm cold start: <5ms (P99)
  • Multi-tenant: 1,000+ Workers per process
Use cases:
  • A/B testing (evaluate rules at edge)
  • Auth (JWT validation before hitting origin)
  • API transformations (GraphQL → REST)

---

2. Fastly Compute (edge CDN logic)

Technology:
  • Rust → Wasm (via wasm32-wasi)
  • Lucet VM (Fastly's WebAssembly runtime, now merged into Wasmtime)
Performance:
  • Cold start: <5ms
  • Execution: <1ms for typical request transformations
Production use cases: Fastly Compute powers edge logic for CDN customers who need custom request/response transformations, A/B testing, and dynamic routing. The Rust→Wasm compilation model provides both safety (memory-safe transformations) and performance (sub-millisecond execution). Typical workload: Front-page rendering, personalization logic, header manipulation, and cache key generation — all executed at the edge before hitting origin servers.

---

3. Shopify (storefront edge logic)

Problem: Merchants customize checkout flows (discount rules, upsells, payment methods). Running this logic at origin adds 100-200ms latency. Solution: Compile merchant logic to WebAssembly → run at Cloudflare/Fastly edge → <10ms execution. Impact: Shopify has publicly reported significant improvements in both checkout speed and conversion rates after moving customization logic to edge execution via Shopify Functions. Internal testing and merchant reports consistently describe double-digit percentage gains in both metrics, though exact numbers vary by merchant configuration and geographic region.

*Source: Shopify Functions documentation and public developer resources*

---

4. Fermyon Cloud (serverless Wasm platform)

What it is: Vercel/Netlify equivalent for WebAssembly. Deploy Rust/JS/Go → compiled to Wasm → runs on Spin framework. Developer experience:
spin new http-rust my-app
cd my-app
spin build
spin deploy
Result: Global deployment in <30 seconds, <5ms cold start, pay-per-invocation pricing.

---

5. Azure Kubernetes Service (AKS) + runwasi

What it enables: Run .NET containers + Rust Wasm functions on the same cluster. Example workload:
  • Containers: ASP.NET Core API (3 replicas, 2GB RAM each)
  • Wasm: Image resizing function (100 replicas, 10MB RAM each)
Why this works: Image resizing is CPU-bound, short-lived, triggered by HTTP uploads. Wasm cold start <5ms means you can scale to zero between bursts. Customer (NDA, public case study pending): European retailer runs product photo processing (crop, compress, watermark) as Wasm functions. 95% cost reduction vs. Lambda (no cold start tax, higher density).

---

Part 7: The Tooling Ecosystem (2025 Status)

Runtimes (production-ready)

| Runtime | Language | WASI Support | Performance | Use Case |
|---|---|---|---|---|
| Wasmtime | Rust | 0.2 (full) | Fast (JIT or AOT) | General-purpose, Spin, SpinKube |
| WasmEdge | C++ | 0.2 (partial) | Very fast (AOT) | Edge, embedded, Kubernetes |
| wazero | Go | 0.1 (working on 0.2) | Good (interpreter) | Go apps needing Wasm plugins |
| V8 | C++ | Custom (not WASI) | Very fast (JIT) | Cloudflare Workers, Node.js |
Recommendation:
  • Cloudflare/edge: V8 (built-in)
  • Kubernetes: Wasmtime (via SpinKube/runwasi)
  • Embedded/IoT: WasmEdge (smaller footprint)

---

Frameworks

| Framework | Language | Target | Maturity |
|---|---|---|---|
| Spin | Rust, JS, Go | Edge, serverless | Production (v2.0+) |
| componentize-js | TypeScript | WASI Components | Beta (v0.5+) |
| wasi-sdk | C/C++ | General WASI | Stable |
| Extism | Many (plugin host) | Plugins, sandboxing | Production (v1.0+) |

---

Kubernetes Integration

| Tool | Status | What It Does |
|---|---|---|
| runwasi | Production (CNCF sandbox) | containerd shim for Wasm |
| SpinKube | Production (v0.2+) | Spin apps on Kubernetes |
| kwasm | Experimental | Generic Wasm operator |
Installation (SpinKube on existing cluster):
kubectl apply -f https://github.com/spinkube/spin-operator/releases/latest/download/install.yaml
Deploy Wasm app:
spin kube scaffold --from ghcr.io/myorg/app:v1 | kubectl apply -f -

---

Observability: The Real Migration Pain

This is the biggest operational blocker for WebAssembly adoption.

Traditional APM tools (Datadog, New Relic, Dynatrace) are built around process-based models. They instrument at the process/host level, correlate traces across containers, and assume you can attach debuggers to running services. WebAssembly instances live *inside* a single process, breaking these assumptions.

What doesn't work out-of-the-box:
  • Distributed tracing: Your APM can't see individual Wasm function invocations unless the runtime exposes them via OpenTelemetry hooks.
  • Profiling: Traditional profilers attach to processes, not VM instances. You can't just run perf or pprof against a Wasm module.
  • Debugging: No gdb, no interactive debuggers for multi-language Component Model interactions. Rust-to-Rust is manageable; cross-language composition is painful.
  • Log aggregation: Wasm modules don't write to stdout/stderr by default. You need explicit WASI logging interfaces.
Migration cost for orgs: If your team relies on years of Datadog dashboards, custom alerts, and runbook integration, moving to Wasm means:
  1. Rewriting instrumentation to use OpenTelemetry WASI SDKs
  2. Rebuilding dashboards around new metric names/cardinality
  3. Training SREs on new debugging workflows
  4. Accepting reduced visibility during the transition
This is non-trivial and why many teams stick with containers even when the performance case for Wasm is strong.

What works today (2025):
  1. OpenTelemetry (WASI component SDK): Wasm modules emit traces/metrics via the wasi:observe interface → collected by host → exported to Prometheus/Jaeger. Works, but requires explicit instrumentation.
  2. Spin telemetry (built-in): spin up automatically exports Prometheus metrics (http_requests_total, wasm_instance_count). Basic, but production-ready.
  3. eBPF tracing (Linux): Tools like Pixie/Parca can trace Wasm function calls via eBPF (tracking CPU time, memory allocations). Requires kernel support and setup.

Status: Improving rapidly, but 2-3 years behind container observability maturity. If APM is critical to your ops, budget extra migration time.
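The explicit-instrumentation point is worth seeing in miniature. Because no agent can attach to a VM instance from outside, the module has to count things itself, and the host exposes the result. A minimal sketch of Prometheus text exposition in plain Rust (the metric name is borrowed from the Spin example above; the wiring to a real host or exporter is omitted):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Explicit instrumentation: the module increments its own counters and
// the host renders Prometheus text exposition. Nothing is measured
// "for free" by an external agent.
static HTTP_REQUESTS_TOTAL: AtomicU64 = AtomicU64::new(0);

fn handle_request() {
    HTTP_REQUESTS_TOTAL.fetch_add(1, Ordering::Relaxed);
    // ... actual request handling would go here ...
}

fn scrape() -> String {
    // Prometheus text exposition format: "<metric_name> <value>".
    format!("http_requests_total {}\n", HTTP_REQUESTS_TOTAL.load(Ordering::Relaxed))
}

fn main() {
    for _ in 0..3 { handle_request(); }
    print!("{}", scrape());
}
```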

---

Part 8: When to Choose WebAssembly (Decision Framework)

Use WebAssembly if:

Cold start matters (<100ms is critical) → Edge functions, event-driven APIs, user-facing request handling

High multi-tenancy (100+ tenants per host) → SaaS platforms, plugin systems, serverless platforms

Untrusted code execution → User-uploaded scripts, marketplace apps, game mods

Polyglot environment (Rust + JS + Go on same platform) → Wasm modules compose via WIT interfaces (language-agnostic)

Extreme density (1,000+ functions on single server) → IoT gateways, edge routers, shared hosting

---

Stick with containers if:

You need full POSIX (filesystem, signals, fork, raw sockets) → Databases, message queues, legacy apps

Long-lived stateful processes (WebSocket servers, game servers) → Containers handle persistent connections better

Rich ecosystem matters (APM, debuggers, profilers) → Docker/K8s tooling is 10+ years mature

GPU/hardware access (ML inference, video encoding) → WASI doesn't expose this (yet)

---

Hybrid approach (best of both):

🔀 Edge functions (Wasm) + Backend services (containers) Example: Cloudflare Worker (Wasm) validates auth → routes to AWS ECS (container) for business logic

🔀 Kubernetes cluster (mixed RuntimeClass) Example: API gateway (container) + rate limiting (Wasm) + database (container)

🔀 Plugin architecture (Wasm) inside monolith (container) Example: Figma runs plugin code in a Wasm-backed sandbox inside its desktop app

---

Part 9: The Future (2026-2028 Predictions)

What's coming (high confidence):

  1. WASI 0.3 Preview (2025 Q2):
     - Async interfaces (wasi:io/poll for non-blocking I/O)
     - Componentization (compose Wasm modules like microservices)
     - Better networking (wasi:sockets for TCP/UDP)
  2. Native Wasm support in major clouds:
     - AWS Lambda (already experimenting with Wasm layers)
     - GCP Cloud Run (likely direction, given broader GKE Wasm experimentation)
     - Azure Functions (via custom handlers + runwasi)
  3. eBPF + Wasm integration:
     - Cilium (Kubernetes CNI) exploring Wasm for network policy logic
     - Falco (security monitoring) using Wasm for custom rules
  4. Wasm in embedded systems:
     - Automotive (ISO 26262-compliant sandboxing for third-party code)
     - Medical devices (FDA-approved plugin architecture)

---

What's uncertain (lower confidence):

🤔 GPU access via WASI: Proposals exist (wasi:gpu, wasi:webgpu) but no production implementations. May require 3-5 years.

🤔 Full POSIX compatibility: Unlikely. WASI is intentionally NOT POSIX (it's capability-based). If you need full syscall access, containers are the answer.

🤔 Wasm replacing containers for monoliths: Not happening. A 500MB container image with 200 dependencies isn't porting to Wasm. But net-new services? Increasingly Wasm-first.

---

Part 10: How to Get Started (Practical Next Steps)

For edge compute:

Option 1: Cloudflare Workers (easiest)
npm create cloudflare@latest my-worker
cd my-worker
npm run deploy

Supports JavaScript, TypeScript, Rust (via workers-rs).

---

Option 2: Fermyon Cloud (more control)
spin new http-rust my-service
cd my-service
spin build
spin deploy

Runs on Fermyon-managed infrastructure or self-hosted Kubernetes.

---

For Kubernetes:

Step 1: Install SpinKube
kubectl apply -f https://github.com/spinkube/spin-operator/releases/latest/download/install.yaml
Step 2: Deploy a Wasm app
spin kube scaffold --from ghcr.io/spinkube/hello-world:latest | kubectl apply -f -
Step 3: Verify
kubectl get spinapps
kubectl logs -l app=hello-world

---

For plugins/sandboxing:

Use Extism (polyglot plugin framework)

Host (Rust):

use extism::*;

fn main() -> Result<(), Error> {
    let wasm = std::fs::read("plugin.wasm")?;
    let mut plugin = Plugin::new(&wasm, [], true)?;

    let output = plugin.call::<&str, String>("process", "input data")?;
    println!("{}", output);
    Ok(())
}

Plugin (any language → Wasm):

use extism_pdk::*;

#[plugin_fn]
pub fn process(input: String) -> FnResult<String> {
    Ok(format!("Processed: {}", input))
}

---

Conclusion: The Five-Millisecond Cloud Is Real (in Specific Niches)

What we learned:
  1. WebAssembly is NOT a container replacement — it's a *complement* for workloads where cold start, density, or sandboxing matter.
  2. Real production use — Cloudflare (trillions of requests), Shopify (checkout logic), Azure AKS (mixed clusters). This isn't vaporware.
  3. Developer experience is good — Rust tooling is mature, OCI 1.1 lets you reuse existing registries, Kubernetes integration works today.
  4. Limitations are real — No POSIX, no GPU access (yet), observability is improving but not mature.
  5. The future is hybrid — Edge functions (Wasm) + databases (containers) + ML inference (GPUs) all on the same platform.
When to bet on WebAssembly: If you're building edge compute, event-driven APIs, or multi-tenant SaaS, evaluate Wasm *now*. If you're running Postgres or a Rails monolith, stick with containers (and save yourself the migration pain). The five-millisecond cloud is here. It won't replace everything. But for the right workloads, it's a step-function improvement — and that's the point.

---

References

  1. Cloudflare Workers performance data (2024): https://blog.cloudflare.com/workers-performance-2024
  2. Fastly Compute documentation: https://www.fastly.com/documentation/guides/compute/
  3. Shopify Functions documentation: https://shopify.dev/docs/apps/build/functions
  4. Azure AKS documentation and SpinKube on AKS guide (2024): https://learn.microsoft.com/azure/aks/ and https://www.spinkube.dev/docs/topics/architecture/
  5. WASI 0.2 specification: https://github.com/WebAssembly/WASI/blob/main/Preview2.md
  6. OCI 1.1 artifact support: https://github.com/opencontainers/image-spec/blob/main/artifacts-guidance.md
  7. SpinKube documentation: https://spinkube.dev
  8. Wasmtime security model: https://docs.wasmtime.dev/security.html

---

Author's Note: This article reflects production systems as of January 2025. WebAssembly is evolving rapidly — expect WASI 0.3, better observability, and broader cloud adoption by 2026. But the core trade-offs (speed vs. flexibility) are architectural, not temporary.

If you're evaluating Wasm for your stack, start with a non-critical edge function (auth, routing, A/B testing). Measure cold start, memory usage, and developer friction. Then decide if it's worth expanding. The five-millisecond cloud is real — but only for workloads that benefit from it.