k0s configuration#
Control plane#
The k0s control plane can be configured via a YAML config file. By default the `k0s server` command reads a file called `k0s.yaml`, but it can be told to read any YAML file via the `--config` option, e.g. `k0s server --config /path/to/k0s.yaml`.
An example config file with the most common options users should configure:
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 192.168.68.106
    sans:
      - my-k0s-control.my-domain.com
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
  extensions:
    helm:
      repositories:
        - name: prometheus-community
          url: https://prometheus-community.github.io/helm-charts
      charts:
        - name: prometheus-stack
          chartname: prometheus-community/prometheus
          version: "11.16.8"
          namespace: default
spec.api#

address
: The local address to bind the API on. Also used as one of the addresses pushed to the k0s-created serving certificate of the API. Defaults to the first non-local address found on the node.

sans
: List of additional addresses to push to the API server's serving certificate.
spec.network#

podCIDR
: Pod network CIDR to be used in the cluster.

serviceCIDR
: Network CIDR to be used for cluster VIP services.
extensions.helm#

List of Helm repositories and charts to deploy during cluster bootstrap. The example above configures Prometheus from the prometheus-community Helm chart repository.
Configuring multi-node controlplane#
When configuring an elastic/HA control plane, you must use the same values for the cluster-level options on each node. The following options need to match on each node, otherwise the control plane components will end up in unknown states (see the sketch after this list):
- network
- storage: needless to say, one cannot create a clustered control plane where each node only stores data locally in SQLite.
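As a minimal sketch, the cluster-level part of a config file that must be identical on every controller node might look like this (the values shown are just the defaults, not a recommendation):

spec:
  storage:
    type: etcd
  network:
    provider: calico
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12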
Full config reference#
Note: Many of the options configure components deep down in the stack, so please make sure you understand what is being configured and whether it works in your specific environment.
A full config file with defaults generated by the `k0s default-config` command:
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 192.168.68.106
    sans:
      - 192.168.68.106
      - 192.168.68.106
    extraArgs: {}
  controllerManager:
    extraArgs: {}
  scheduler:
    extraArgs: {}
  storage:
    type: etcd
    etcd:
      peerAddress: 192.168.68.106
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
    provider: calico
    calico:
      mode: vxlan
      vxlanPort: 4789
      vxlanVNI: 4096
      mtu: 1450
      wireguard: false
      flexVolumeDriverPath: /usr/libexec/k0s/kubelet-plugins/volume/exec/nodeagent~uds
  podSecurityPolicy:
    defaultPolicy: 00-k0s-privileged
  workerProfiles: []
  images:
    konnectivity:
      image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent
      version: v0.0.13
    metricsserver:
      image: gcr.io/k8s-staging-metrics-server/metrics-server
      version: v0.3.7
    kubeproxy:
      image: k8s.gcr.io/kube-proxy
      version: v1.20.0
    coredns:
      image: docker.io/coredns/coredns
      version: 1.7.0
    calico:
      cni:
        image: calico/cni
        version: v3.16.2
      flexvolume:
        image: calico/pod2daemon-flexvol
        version: v3.16.2
      node:
        image: calico/node
        version: v3.16.2
      kubecontrollers:
        image: calico/kube-controllers
        version: v3.16.2
    repository: ""
  telemetry:
    interval: 10m0s
    enabled: true
  extensions:
    helm:
      repositories:
        - name: stable
          url: https://charts.helm.sh/stable
        - name: prometheus-community
          url: https://prometheus-community.github.io/helm-charts
      charts:
        - name: prometheus-stack
          chartname: prometheus-community/prometheus
          version: "11.16.8"
          values: |
            server:
              podDisruptionBudget:
                enabled: false
          namespace: default
spec.api#

address
: The local address to bind the API on. Also used as one of the addresses pushed to the k0s-created serving certificate of the API. Defaults to the first non-local address found on the node.

sans
: List of additional addresses to push to the API server's serving certificate.

extraArgs
: Map of key-value pairs (strings) for any extra arguments you wish to pass down to the Kubernetes api-server process.
spec.controllerManager#

extraArgs
: Map of key-value pairs (strings) for any extra arguments you wish to pass down to the Kubernetes controller manager process.
spec.scheduler#

extraArgs
: Map of key-value pairs (strings) for any extra arguments you wish to pass down to the Kubernetes scheduler process.
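The extraArgs maps for the api server, controller manager and scheduler all work the same way. For illustration, a sketch passing one extra flag to the api-server process; the flag itself is just an example of a valid kube-apiserver argument, not something k0s requires:

spec:
  api:
    extraArgs:
      # illustrative kube-apiserver flag, not required by k0s
      feature-gates: "RemoveSelfLink=false"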
spec.storage#

type
: Type of the data store, either `etcd` or `kine`.

etcd.peerAddress
: Node address to be used for etcd cluster peering.

kine.dataSource
: kine datasource URL.

Using type `etcd` makes k0s create and manage an elastic etcd cluster within the controller nodes.
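As a sketch, a kine-backed configuration could look like the following; the MySQL datasource URL is purely illustrative (host, credentials and database name are assumptions):

spec:
  storage:
    type: kine
    kine:
      # illustrative MySQL datasource; host, credentials and database are assumptions
      dataSource: mysql://k0s:password@tcp(mysql.example.com:3306)/k0s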
spec.network#

provider
: Network provider, either `calico` or `custom`. In case of `custom`, the user can push any network provider.

podCIDR
: Pod network CIDR to be used in the cluster.

serviceCIDR
: Network CIDR to be used for cluster VIP services.
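For example, a minimal sketch opting out of the built-in Calico; with `custom` it is then up to you to deploy the network provider of your choice:

spec:
  network:
    provider: custom
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12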
spec.network.calico#

mode
: `vxlan` (default) or `ipip`

vxlanPort
: The UDP port to use for VXLAN (default `4789`)

vxlanVNI
: The virtual network ID to use for VXLAN (default `4096`)

mtu
: MTU to use for the overlay network (default `1450`)

wireguard
: Enable WireGuard-based encryption (default `false`). Your host system must be WireGuard-ready. See https://docs.projectcalico.org/security/encrypt-cluster-pod-traffic for details.

flexVolumeDriverPath
: The host path to use for Calico's flex-volume driver (default `/usr/libexec/k0s/kubelet-plugins/volume/exec/nodeagent~uds`). This should only need to be changed if the default path is not writeable. See https://github.com/projectcalico/calico/issues/2712 for details. This option should ideally be paired with a custom volumePluginDir in the profile used on your worker nodes.
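As an illustrative sketch, switching Calico to IP-in-IP mode with WireGuard encryption enabled (again, the host system must be WireGuard-ready for this to work):

spec:
  network:
    provider: calico
    calico:
      mode: ipip
      wireguard: true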
spec.podSecurityPolicy#

Configures the default PSP to be set. k0s creates two PSPs out of the box:
- 00-k0s-privileged (default): no restrictions; always also used for Kubernetes/k0s-level system pods
- 99-k0s-restricted: no host namespaces or root users allowed, no bind mounts from the host
As a user you can of course create any supplemental PSPs and bind them to users / service accounts as you need.
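For example, to make the restricted policy the cluster default:

spec:
  podSecurityPolicy:
    defaultPolicy: 99-k0s-restricted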
spec.workerProfiles#

Array of spec.workerProfiles.workerProfile. Each element has the following properties:
- name: string; used as the profile selector for the worker process
- values: mapping object

For each profile, the control plane creates a separate ConfigMap containing a kubelet-config YAML. Based on the `--profile` argument given to `k0s worker`, the corresponding ConfigMap is used to extract `kubelet-config.yaml` from.
`values` are recursively merged with the default `kubelet-config.yaml`. There are a few fields that cannot be overridden:
- clusterDNS
- clusterDomain
- apiVersion
- kind
Example:
workerProfiles:
  - name: custom-role
    values:
      key: value
      mapping:
        innerKey: innerValue
Custom volumePluginDir:
workerProfiles:
  - name: custom-role
    values:
      volumePluginDir: /var/libexec/k0s/kubelet-plugins/volume/exec
images#

Each node under the `images` key has the same structure:

images:
  konnectivity:
    image: calico/kube-controllers
    version: v3.16.2
images.konnectivity#
images.metricsserver#
images.kubeproxy#
images.coredns#
images.calico.cni#
images.calico.flexvolume#
images.calico.node#
images.calico.kubecontrollers#
images.repository#
If `images.repository` is set and not empty, every image name will be prefixed with the value of `images.repository`.
Example:
images:
  repository: "my.own.repo"
  konnectivity:
    image: calico/kube-controllers
    version: v3.16.2
At runtime the image name will be calculated as `my.own.repo/calico/kube-controllers:v3.16.2`.
This only affects the location images are pulled from; omitting an image specification here will not disable the component from being deployed.
Extensions#
As stated in the project scope, we intend to keep the scope of k0s quite small and not build gazillions of extensions into the product itself.
To run k0s easily with your preferred extensions, you have two options:
- Dump all needed extension manifests under /var/lib/k0s/manifests/my-extension. Read more on this approach here.
- Define your extensions as Helm charts:
extensions:
  helm:
    repositories:
      - name: stable
        url: https://charts.helm.sh/stable
      - name: prometheus-community
        url: https://prometheus-community.github.io/helm-charts
    charts:
      - name: prometheus-stack
        chartname: prometheus-community/prometheus
        version: "11.16.8"
        values: |
          storageSpec:
            emptyDir:
              medium: Memory
        namespace: default
This gives you a declarative way to configure the cluster, and the k0s controller manages the setup of the defined extension Helm charts as part of the cluster bootstrap process.
Some examples of what you could use as extension charts:
- Ingress controllers: Nginx ingress, Traefik ingress (tutorial)
- Volume storage providers: OpenEBS, Rook, Longhorn
- Monitoring: Prometheus, Grafana
Telemetry#
To build a better end-user experience, we collect and send telemetry data from clusters. It is enabled by default and can be disabled by setting the corresponding option to `false`.
The default interval is 10 minutes; any valid `time.Duration` string representation can be used as a value.
Example:
telemetry:
  interval: 2m0s
  enabled: true