Run k0s in Docker#

You can create a k0s cluster on top of Docker.

Prerequisites#

You will require a Docker environment running on a Mac, Windows, or Linux system.

Container images#

The k0s OCI images are published to both Docker Hub and the GitHub Container Registry. For simplicity, the examples given here use Docker Hub (GitHub requires separate authentication, which is not covered here). The image names are as follows:

  • docker.io/k0sproject/k0s:v1.32.3-k0s.0
  • ghcr.io/k0sproject/k0s:v1.32.3-k0s.0

Note: Due to Docker's tag validation scheme, - is used as the k0s version separator instead of the usual +. For example, k0s version v1.32.3+k0s.0 is tagged as docker.io/k0sproject/k0s:v1.32.3-k0s.0.
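
For example, to pull the Docker Hub image used throughout this page ahead of time:

docker pull docker.io/k0sproject/k0s:v1.32.3-k0s.0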

Start k0s#

1. Run a controller#

By default, running the k0s OCI image will launch a controller with workloads enabled (i.e. a controller with the --enable-worker flag) to provide an easy local testing "cluster":

docker run -d --name k0s-controller --hostname k0s-controller \
  -v /var/lib/k0s -v /var/log/pods `# this is where k0s stores its data` \
  --tmpfs /run `# this is where k0s stores runtime data` \
  --privileged `# this is the easiest way to enable container-in-container workloads` \
  -p 6443:6443 `# publish the Kubernetes API server port` \
  docker.io/k0sproject/k0s:v1.32.3-k0s.0

Explanation of command line arguments:

  • -d runs the container in detached mode, i.e. in the background.
  • --name k0s-controller names the container "k0s-controller".
  • --hostname k0s-controller sets the hostname of the container to "k0s-controller".
  • -v /var/lib/k0s -v /var/log/pods creates two Docker volumes and mounts them to /var/lib/k0s and /var/log/pods respectively inside the container, ensuring that cluster data persists across container restarts.
  • --tmpfs /run mounts a tmpfs at /run inside the container. This is where k0s stores its runtime data, which does not need to persist across container restarts.
  • --privileged gives the container the elevated privileges that k0s needs to function properly within Docker. See the section on adding additional workers for a more detailed discussion of privileges.
  • -p 6443:6443 exposes the container's Kubernetes API server port 6443 to the host, allowing you to interact with the cluster externally.
  • docker.io/k0sproject/k0s:v1.32.3-k0s.0 is the name of the k0s image to run.

By default, the k0s image starts a k0s controller with worker components enabled within the same container, i.e. a cluster consisting of a single controller-and-worker node. This is the image's default command:

CMD ["k0s", "controller", "--enable-worker"]
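
If you want to double-check this, the default command can be read straight from the image metadata:

docker image inspect --format '{{json .Config.Cmd}}' \
  docker.io/k0sproject/k0s:v1.32.3-k0s.0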

Alternatively, a controller-only node can be run like this:

docker run -d --name k0s-controller --hostname k0s-controller \
  --read-only `# k0s won't write any data outside the below paths` \
  -v /var/lib/k0s `# this is where k0s stores its data` \
  --tmpfs /run `# this is where k0s stores runtime data` \
  --tmpfs /tmp `# allow writing temporary files` \
  -p 6443:6443 `# publish the Kubernetes API server port` \
  docker.io/k0sproject/k0s:v1.32.3-k0s.0 \
  k0s controller

Note the addition of k0s controller to override the image's default command. Also note that a controller-only node requires fewer privileges.
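
Whichever variant you run, you can verify that k0s has come up before moving on:

docker exec k0s-controller k0s status

k0s status prints, among other things, the k0s version, the process ID, and the node's role.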

2. (Optional) Add additional workers#

You can add multiple worker nodes to the cluster and then distribute your application containers to separate workers.

  1. Acquire a join token for the worker:

    token=$(docker exec k0s-controller k0s token create --role=worker)
    
  2. Run the container to create and join the new worker:

    docker run -d --name k0s-worker1 --hostname k0s-worker1 \
      -v /var/lib/k0s -v /var/log/pods `# this is where k0s stores its data` \
      --tmpfs /run `# this is where k0s stores runtime data` \
      --privileged `# this is the easiest way to enable container-in-container workloads` \
      docker.io/k0sproject/k0s:v1.32.3-k0s.0 \
      k0s worker $token
    

    Alternatively, with fine-grained privileges:

    docker run -d --name k0s-worker1 --hostname k0s-worker1 \
      -v /var/lib/k0s -v /var/log/pods `# this is where k0s stores its data` \
      --tmpfs /run `# this is where k0s stores runtime data` \
      --security-opt seccomp=unconfined \
      --device /dev/kmsg \
      --cap-add sys_admin \
      --cap-add net_admin \
      --cap-add sys_ptrace \
      --cap-add sys_resource \
      --cap-add syslog \
      docker.io/k0sproject/k0s:v1.32.3-k0s.0 \
      k0s worker "$token"
    

    Notes on the security-related flags:

    • --security-opt seccomp=unconfined is required for runc to access the session keyring.
    • --device /dev/kmsg makes /dev/kmsg visible from inside the container. The kubelet's OOM watcher uses this.

    Notes on Linux capabilities:

    • CAP_SYS_ADMIN allows for a variety of administrative tasks, including mounting file systems and managing namespaces, which are necessary for creating and configuring nested containers.
    • CAP_NET_ADMIN allows manipulation of network settings such as interfaces and routes, allowing containers to create isolated or bridged networks, and so on.
    • CAP_SYS_PTRACE allows processes to be inspected and modified, which is used to monitor other containers in a nested environment.
    • CAP_SYS_RESOURCE allows containers to override resource limits for things like memory or file descriptors, used to manage and adjust resource allocation in nested container environments.
    • CAP_SYSLOG allows containers to perform privileged syslog operations. This is required in order to read /dev/kmsg.

    Note that more privileges may be required depending on your cluster configuration and workloads.

    Repeat this step for each additional worker node, adjusting the container and host names accordingly; a scripted version is sketched below. Make sure that the workers can reach the controller on the required ports. If you are using Docker's default bridged network, this should be the case.
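
    Joining several workers can be scripted. A minimal sketch, assuming the privileged variant from above and that no workers have been created yet (names are illustrative):

    token=$(docker exec k0s-controller k0s token create --role=worker)
    for i in 1 2 3; do
      docker run -d --name "k0s-worker$i" --hostname "k0s-worker$i" \
        -v /var/lib/k0s -v /var/log/pods \
        --tmpfs /run \
        --privileged \
        docker.io/k0sproject/k0s:v1.32.3-k0s.0 \
        k0s worker "$token"
    done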

3. Access your cluster#

a) Using kubectl within the container#

To check cluster status and list nodes, use (k0s kc is a built-in shorthand for k0s kubectl):

docker exec k0s-controller k0s kubectl get nodes

b) Using kubectl locally#

To configure local access to your k0s cluster, follow these steps:

  1. Generate the kubeconfig:

    docker exec k0s-controller k0s kubeconfig admin > ~/.kube/k0s.config
    
  2. Update kubeconfig with Localhost Access:

    To replace the server address with localhost in ~/.kube/k0s.config, use the following command:

    sed -i '' -e "$(awk '/server:/ {print NR; exit}' ~/.kube/k0s.config)s|https://.*:6443|https://localhost:6443|" ~/.kube/k0s.config
    

    This command updates the kubeconfig to point to localhost, allowing access to the API server from your host machine. Note that the empty string after -i is BSD/macOS sed syntax; on Linux, use plain sed -i.

  3. Set the KUBECONFIG Environment Variable:

    export KUBECONFIG=~/.kube/k0s.config
    
  4. Verify Cluster Access:

    kubectl get nodes
    

c) Use Lens#

Access the k0s cluster using Lens by following the instructions on how to add a cluster.

Use Docker Compose (alternative)#

As an alternative you can run k0s using Docker Compose:

services:
  k0s-controller:
    image: docker.io/k0sproject/k0s:v1.32.3-k0s.0
    container_name: k0s-controller
    hostname: k0s-controller
    network_mode: bridge # other modes are unsupported
    ports:
      - 6443:6443 # publish the Kubernetes API server port
    volumes:
      - /var/lib/k0s # this is where k0s stores its data
      - /var/log/pods # this is where k0s stores pod logs
      - /dev/kmsg:/dev/kmsg:ro # required by the kubelet's OOM watcher
    tmpfs:
      - /run # this is where k0s stores runtime data
    devices:
      - /dev/kmsg # required by the kubelet's OOM watcher
    cap_add:
      - sys_admin
      - net_admin
      - sys_ptrace
      - sys_resource
      - syslog
    security_opt:
      - seccomp:unconfined # allow access to the session keyring
    configs:
      - source: k0s.yaml
        target: /etc/k0s/k0s.yaml

configs:
  k0s.yaml:
    content: |
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: ClusterConfig
      metadata:
        name: k0s
      spec:
        storage:
          type: kine
      # Any additional configuration goes here ...
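
Bringing this single-node cluster up and checking on it works the same way as with plain docker run:

docker compose up -d
docker compose exec k0s-controller k0s kubectl get nodes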

Below is a more complex example, using Traefik as a load balancer, along with three controller and three worker nodes:

name: compose-cluster

x-k0s-controller: &k0s-controller
  image: docker.io/k0sproject/k0s:v1.32.3-k0s.0
  networks:
    - k0s-net
  tmpfs:
    - /run # this is where k0s stores runtime data
    - /tmp
  configs:
    - source: k0s.yaml
      target: /etc/k0s/k0s.yaml
  labels:
    - traefik.enable=true
    - traefik.tcp.routers.kube-api.service=kube-api
    - traefik.tcp.routers.kube-api.rule=HostSNI(`*`)
    - traefik.tcp.routers.kube-api.entrypoints=kube-api
    - traefik.tcp.services.kube-api.loadbalancer.server.port=6443
    - traefik.tcp.routers.k0s-api.service=k0s-api
    - traefik.tcp.routers.k0s-api.rule=HostSNI(`*`)
    - traefik.tcp.routers.k0s-api.entrypoints=k0s-api
    - traefik.tcp.services.k0s-api.loadbalancer.server.port=9443
    - traefik.tcp.routers.konnectivity.service=konnectivity
    - traefik.tcp.routers.konnectivity.rule=HostSNI(`*`)
    - traefik.tcp.routers.konnectivity.entrypoints=konnectivity
    - traefik.tcp.services.konnectivity.loadbalancer.server.port=8132
  restart: on-failure

x-k0s-worker: &k0s-worker
  image: docker.io/k0sproject/k0s:v1.32.3-k0s.0
  networks:
    - k0s-net
  depends_on:
    - k0s-controller-1
  command: [k0s, worker, --token-file, /run/secrets/k0sproject.io/tokens/worker]
  volumes:
    - /var/lib/k0s # this is where k0s stores its data
    - /var/log/pods # this is where k0s stores pod logs
    - /dev/kmsg:/dev/kmsg:ro # required by the kubelet's OOM watcher
    - k0s-worker-token:/run/secrets/k0sproject.io/tokens:ro
  tmpfs:
    - /run # this is where k0s stores runtime data
    - /tmp
  devices:
    - /dev/kmsg # required by kubelet's OOM watcher
  cap_add:
    - sys_admin
    - net_admin
    - sys_ptrace
    - sys_resource
    - syslog
  security_opt:
    - seccomp:unconfined # allow access to the session keyring
  restart: on-failure

networks:
  k0s-net:
    driver: bridge

services:
  k0s-lb:
    image: docker.io/traefik:v3.3.5
    container_name: k0s-lb
    hostname: k0s-lb
    networks:
      - k0s-net
    command:
      - --api.insecure=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entryPoints.kube-api.address=:6443
      - --entryPoints.k0s-api.address=:9443
      - --entryPoints.konnectivity.address=:8132
    ports:
      - 6443:6443 # publish the Kubernetes API server port
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  k0s-controller-1:
    <<: *k0s-controller
    container_name: k0s-controller-1
    hostname: k0s-controller-1
    depends_on:
      - k0s-lb
    command: [k0s, controller]
    post_start:
      - command:
          - /bin/sh
          - -euc
          - |
            bootstrap() {
              # k0s token pre-shared works even if etcd isn't up yet

              # empty the token directory from any previous run, keeping the mount point itself
              find /run/secrets/k0sproject.io/controller-token ! -path /run/secrets/k0sproject.io/controller-token -prune -exec rm -rf {} +
              k0s token pre-shared --role controller \
                --cert /var/lib/k0s/pki/ca.crt \
                --url https://k0s-lb:9443 \
                --out /run/secrets/k0sproject.io/controller-token/

              find /run/secrets/k0sproject.io/worker-token ! -path /run/secrets/k0sproject.io/worker-token -prune -exec rm -rf {} +
              k0s token pre-shared --role worker \
                --cert /var/lib/k0s/pki/ca.crt \
                --url https://k0s-lb:6443 \
                --out /run/secrets/k0sproject.io/worker-token/

              # the .yaml manifests are applied by k0s from the manifests directory;
              # the raw token files get fixed names for use with --token-file
              mv /run/secrets/k0sproject.io/controller-token/*.yaml /var/lib/k0s/manifests/k0s-token-secrets/controller.yaml
              mv /run/secrets/k0sproject.io/controller-token/token_* /run/secrets/k0sproject.io/controller-token/controller
              mv /run/secrets/k0sproject.io/worker-token/*.yaml /var/lib/k0s/manifests/k0s-token-secrets/worker.yaml
              mv /run/secrets/k0sproject.io/worker-token/token_* /run/secrets/k0sproject.io/worker-token/worker
            }

            while [ ! -f /var/lib/k0s/pki/ca.crt ] || ! bootstrap; do
              sleep 1
            done
            sleep 10 # give this controller a bit of a head start

    volumes:
      - /var/lib/k0s # this is where k0s stores its data
      - k0s-token-secrets:/var/lib/k0s/manifests/k0s-token-secrets
      - k0s-controller-token:/run/secrets/k0sproject.io/controller-token
      - k0s-worker-token:/run/secrets/k0sproject.io/worker-token

  k0s-controller-2: &k0s-additional-controller
    <<: *k0s-controller
    container_name: k0s-controller-2
    hostname: k0s-controller-2
    depends_on:
      - k0s-controller-1
    command: [ k0s, controller, --token-file, /run/secrets/k0sproject.io/tokens/controller ]
    volumes:
      - /var/lib/k0s # this is where k0s stores its data
      - k0s-token-secrets:/var/lib/k0s/manifests/k0s-token-secrets:ro
      - k0s-controller-token:/run/secrets/k0sproject.io/tokens:ro

  k0s-controller-3:
    <<: *k0s-additional-controller
    container_name: k0s-controller-3
    hostname: k0s-controller-3

  k0s-worker-1:
    <<: *k0s-worker
    container_name: k0s-worker-1
    hostname: k0s-worker-1

  k0s-worker-2:
    <<: *k0s-worker
    container_name: k0s-worker-2
    hostname: k0s-worker-2

  k0s-worker-3:
    <<: *k0s-worker
    container_name: k0s-worker-3
    hostname: k0s-worker-3

volumes:
  k0s-token-secrets:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
  k0s-controller-token:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
  k0s-worker-token:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs

configs:
  k0s.yaml:
    content: |
      spec:
        api:
          externalAddress: k0s-lb

Running the above:

 ❯ docker compose up -d
[+] Running 11/11
 ✔ Network compose-cluster_k0s-net                Created                0.1s
 ✔ Volume "compose-cluster_k0s-token-secrets"     Created                0.0s
 ✔ Volume "compose-cluster_k0s-controller-token"  Created                0.0s
 ✔ Volume "compose-cluster_k0s-worker-token"      Created                0.0s
 ✔ Container k0s-lb                               Started                0.5s
 ✔ Container k0s-controller-1                     Started                11.8s
 ✔ Container k0s-controller-2                     Started                12.2s
 ✔ Container k0s-worker-1                         Started                12.4s
 ✔ Container k0s-worker-2                         Started                12.3s
 ✔ Container k0s-worker-3                         Started                12.1s
 ✔ Container k0s-controller-3                     Started                12.5s

After a short while:

$ docker exec k0s-controller-1 k0s kc get node,po -A
NAME                STATUS   ROLES    AGE     VERSION
node/k0s-worker-1   Ready    <none>   1m36s   v1.32.3+k0s
node/k0s-worker-2   Ready    <none>   1m36s   v1.32.3+k0s
node/k0s-worker-3   Ready    <none>   1m36s   v1.32.3+k0s

NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-7d4f7fbd5c-54lxp          1/1     Running   0          1m27s
kube-system   pod/coredns-7d4f7fbd5c-pwbck          1/1     Running   0          1m27s
kube-system   pod/konnectivity-agent-5g8pn          1/1     Running   0          1m22s
kube-system   pod/konnectivity-agent-6rp7r          1/1     Running   0          1m22s
kube-system   pod/konnectivity-agent-zx9fn          1/1     Running   0          1m22s
kube-system   pod/kube-proxy-9m77t                  1/1     Running   0          1m36s
kube-system   pod/kube-proxy-v5vs6                  1/1     Running   0          1m36s
kube-system   pod/kube-proxy-xfw2h                  1/1     Running   0          1m36s
kube-system   pod/kube-router-6c62v                 1/1     Running   0          1m36s
kube-system   pod/kube-router-98ss8                 1/1     Running   0          1m36s
kube-system   pod/kube-router-lr46f                 1/1     Running   0          1m36s
kube-system   pod/metrics-server-7778865875-fzhx6   1/1     Running   0          1m37s
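
Once you are done experimenting, the entire Compose cluster, including the tmpfs-backed token volumes, can be torn down in one go:

docker compose down -v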

Known limitations#

No custom Docker networks#

Currently, k0s nodes cannot be run in containers that are configured to use custom networks (for example, with --net my-net). This is because Docker sets up a custom DNS service within such networks, which creates issues with CoreDNS. No completely reliable workarounds are available; however, no issues should arise from running k0s clusters on a bridge network.
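
To make the limitation concrete, a setup along the following lines (my-net is an illustrative name) is the kind that runs into the CoreDNS issue, whereas all the commands earlier on this page use the default bridge network and are unaffected:

docker network create my-net

# A k0s node attached to my-net will hit the CoreDNS issue described above:
docker run -d --name k0s-controller --hostname k0s-controller \
  --net my-net \
  -v /var/lib/k0s -v /var/log/pods \
  --tmpfs /run \
  --privileged \
  -p 6443:6443 \
  docker.io/k0sproject/k0s:v1.32.3-k0s.0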

Next Steps#