k0s uses containerd as the default Container Runtime Interface (CRI) and runc as the default low-level runtime. In most cases they don't require any configuration changes. However, if custom configuration is needed, this page provides some examples.


containerd configuration#

To make changes to the containerd configuration, first generate a containerd configuration file with the default values and save it to /etc/k0s/containerd.toml:

containerd config default > /etc/k0s/containerd.toml

k0s runs containerd with the following default values:

/var/lib/k0s/bin/containerd \
    --root=/var/lib/k0s/containerd \
    --state=/var/lib/k0s/run/containerd \
    --address=/var/lib/k0s/run/containerd.sock

Next, add the following default values to the configuration file:

version = 2
root = "/var/lib/k0s/containerd"
state = "/var/lib/k0s/run/containerd"

[grpc]
  address = "/var/lib/k0s/run/containerd.sock"

Finally, if you want to change the CRI runtime settings, look into the relevant plugins section, for example:

    [plugins.cri.containerd]
      shim = "containerd-shim"
      runtime = "runc"

Using gVisor#

gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system.

  1. Install the needed gVisor binaries into the host.

      set -e
      # ${URL} must point to a gVisor release directory, e.g.
      # https://storage.googleapis.com/gvisor/releases/release/latest/$(uname -m)
      wget ${URL}/runsc ${URL}/runsc.sha512 \
        ${URL}/gvisor-containerd-shim ${URL}/gvisor-containerd-shim.sha512 \
        ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
      sha512sum -c runsc.sha512 \
        -c gvisor-containerd-shim.sha512 \
        -c containerd-shim-runsc-v1.sha512
      rm -f *.sha512
      chmod a+rx runsc gvisor-containerd-shim containerd-shim-runsc-v1
      sudo mv runsc gvisor-containerd-shim containerd-shim-runsc-v1 /usr/local/bin

    Refer to the gVisor install docs for more information.

  2. Prepare the configuration for the k0s-managed containerd, to utilize gVisor as an additional runtime:

    cat <<EOF | sudo tee /etc/k0s/containerd.toml
    disabled_plugins = ["restart"]
    [plugins.linux]
      shim_debug = true
    [plugins.cri.containerd.runtimes.runsc]
      runtime_type = "io.containerd.runsc.v1"
    EOF
  3. Start the worker and join it to the cluster, as normal:

    k0s worker $token
  4. Register containerd to the Kubernetes side to make the gVisor runtime usable for workloads (by default, containerd uses normal runc as the runtime):

    cat <<EOF | kubectl apply -f -
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: gvisor
    handler: runsc
    EOF

    At this point, you can use the gVisor runtime for your workloads:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-gvisor
    spec:
      runtimeClassName: gvisor
      containers:
      - name: nginx
        image: nginx
  5. (Optional) Verify that the created nginx pod is running under the gVisor runtime:

    # kubectl exec nginx-gvisor -- dmesg | grep -i gvisor
    [    0.000000] Starting gVisor...
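A RuntimeClass only works if its handler name matches the key of a runtimes table in containerd's configuration. The following sketch illustrates that mapping with a scratch copy of the step-2 snippet (the file path is illustrative, not the path k0s reads):

```shell
# Scratch copy of the gVisor containerd snippet from step 2 (illustrative path).
cat <<'EOF' > /tmp/gvisor-containerd.toml
disabled_plugins = ["restart"]
[plugins.linux]
  shim_debug = true
[plugins.cri.containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
EOF
# The RuntimeClass "handler" field must equal the <name> in
# [plugins.cri.containerd.runtimes.<name>].
handler=runsc
grep -q "runtimes\.${handler}\]" /tmp/gvisor-containerd.toml \
  && echo "handler '${handler}' maps to a containerd runtime"
```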

Using nvidia-container-runtime#

By default, the CRI runtime is set to runc. To enable Nvidia GPU support, replace runc with nvidia-container-runtime in the runtimes section:

[plugins.cri.containerd.runtimes.runc]
  runtime_type = "io.containerd.runtime.v1.linux"
  runtime_engine = ""
  runtime_root = ""
  privileged_without_host_devices = false
  base_runtime_spec = ""
[plugins.cri.containerd.runtimes.runc.options]
  Runtime = "nvidia-container-runtime"

Note: Detailed instructions on how to run nvidia-container-runtime on your node are available in the nvidia-container-runtime documentation.

After editing the configuration, restart k0s so that containerd picks up the newly configured runtime.
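Once containerd uses nvidia-container-runtime and the Nvidia Kubernetes device plugin is deployed (not covered here), a workload can request a GPU through the standard extended resource. A minimal sketch, assuming the device plugin advertises nvidia.com/gpu; the pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test            # hypothetical name
spec:
  containers:
  - name: cuda
    image: nvidia/cuda      # pick a tag that matches your driver version
    resources:
      limits:
        nvidia.com/gpu: 1   # requires the Nvidia device plugin to be deployed
```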

Using custom CRI runtime#

Warning: You can use your own CRI runtime with k0s (for example, docker). However, k0s will not start or manage the runtime, and configuration is solely your responsibility.

Use the option --cri-socket to run a k0s worker with a custom CRI runtime. The option takes input in the form of <type>:<socket_path> (for type, use docker for a pure Docker setup and remote for anything else).

To run k0s with a pre-existing Docker setup, run the worker with k0s worker --cri-socket docker:unix:///var/run/docker.sock <token>.

When docker is used as a runtime, k0s configures kubelet to create the dockershim socket at /var/run/dockershim.sock.
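The <type>:<socket_path> split described above can be illustrated with plain shell parameter expansion (this mirrors the documented format only; it is not k0s's actual parsing code):

```shell
# Example --cri-socket value from the Docker setup above.
cri_socket="docker:unix:///var/run/docker.sock"
cri_type="${cri_socket%%:*}"   # everything before the first ':' -> "docker"
cri_path="${cri_socket#*:}"    # everything after the first ':'  -> "unix:///var/run/docker.sock"
echo "type=${cri_type} socket=${cri_path}"
```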