Install and debug Kubernetes inside LXD
Feb 4, 2020
We recently deployed a Kubernetes cluster with the need to maintain cluster isolation on the bare metal nodes across our infrastructure. We knew that virtual machines would provide the required isolation, but we decided to use Linux containers for performance reasons.
Our environment consists of physical nodes running Ubuntu 19.10. LXD is running version 3.19 with ZFS as the storage backend. In this article we focus on a few errors we encountered while installing Kubernetes with
kubeadm in LXC containers. To learn more about Kubernetes or LXC/LXD, please refer to our other articles on these topics.
FATAL: Module configs not found in directory /lib/modules
The first issue appeared during the preflight checks of
kubeadm init .... It gave us this error:
[ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.0.0-38-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/5.0.0-38-generic\n", err: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
This issue occurred because, by default, LXC does not load kernel modules (located in /lib/modules) inside containers. This thread explains that we need to load kernel modules with
lxc config set mycontainer linux.kernel_modules overlay, but in our case it was not enough.
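For a Kubernetes workload, the overlay module alone is usually not sufficient. A sketch of a more complete configuration follows; the container name mycontainer is a placeholder and the exact module list is an assumption to adapt to your cluster:

```shell
# Sketch: preload the kernel modules commonly needed by kubeadm and Docker.
# "mycontainer" and the module list below are examples, not our exact setup.
lxc config set mycontainer linux.kernel_modules overlay,br_netfilter,ip_tables,ip6_tables,netlink_diag,nf_nat
# The setting takes effect on the next container start.
lxc restart mycontainer
```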
The problem was that the Docker version we installed, as recommended by Kubernetes' documentation, pulled in a dependency on
linux-modules-5.0.0-1033-oem-osp1, which did not match our host's kernel version:
uname -r
5.0.0-38-generic
We were indeed missing the 5.0.0-38-generic kernel modules:
ls -l /lib/modules
total 9
drwxr-xr-x 5 root root 16 Feb  3 14:22 5.0.0-1033-oem-osp1
To solve the problem, we manually installed the correct Kernel modules:
apt install linux-modules-5.0.0-38-generic
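To avoid hard-coding the kernel release, the same fix can be written generically. This is a sketch assuming an Ubuntu host, where linux-modules-$(uname -r) is the packaged module set for the running kernel:

```shell
# Install the modules package matching the kernel the container actually runs.
apt-get install -y "linux-modules-$(uname -r)"
```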
/dev/kmsg: no such file or directory
This issue was visible in the status of the kubelet service:
journalctl -xeu kubelet
Jan 20 21:31:00 k8s-m-1 kubelet: E0120 21:31:00.269923   22875 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data
Jan 20 21:31:00 k8s-m-1 kubelet: E0120 21:31:00.270327   22875 event.go:272] Unable to write event: 'Post https://10.0.0.64:6443/api/v1/namespaces/default/events: dial tcp 10.0.0.64:6443: connect: connection refuse
Jan 20 21:31:00 k8s-m-1 kubelet: I0120 21:31:00.270527   22875 server.go:143] Starting to listen on 0.0.0.0:10250
Jan 20 21:31:00 k8s-m-1 kubelet: F0120 21:31:00.270954   22875 kubelet.go:1413] failed to start OOM watcher open /dev/kmsg: no such file or directory
Jan 20 21:31:00 k8s-m-1 systemd: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jan 20 21:31:00 k8s-m-1 systemd: kubelet.service: Failed with result 'exit-code'.
As per Linux’s documentation, “The /dev/kmsg character device node provides userspace access to the kernel’s printk buffer”.
By default, this device is not created in an LXD container. We can create it by setting
lxc.kmsg to 1 in the container's raw LXC configuration, or by creating a symbolic link to /dev/console with a tmpfiles.d rule:
echo 'L /dev/kmsg - - - - /dev/console' > /etc/tmpfiles.d/kmsg.conf
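The tmpfiles.d entry above is only processed at boot. A sketch to apply it immediately and check the result, run inside the container as root (restarting kubelet so it picks up the device is an assumption about your setup):

```shell
# Apply the tmpfiles.d rule now instead of waiting for the next boot.
systemd-tmpfiles --create /etc/tmpfiles.d/kmsg.conf
# /dev/kmsg should now be a symlink to /dev/console.
ls -l /dev/kmsg
# Restart kubelet so the OOM watcher can open the device.
systemctl restart kubelet
```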
/var/lib/kubelet is not a shared mount
We followed Cornelius Weig's kubernetes-lxd guide, in which he recommends creating two ZFS volumes to host Kubernetes' data; in our setup, one of them backs /var/lib/kubelet.
We then had the following issue while trying to create our first pods:
Normal   Created  12m                  kubelet, k8s-wrk-2  Created container liveness-prometheus
Warning  Failed   12m (x2 over 12m)    kubelet, k8s-wrk-2  Error: failed to start container "csi-cephfsplugin": Error response from daemon: path /var/lib/kubelet/plugins is mounted on /var/lib/kubelet but it is not a shared mount
Normal   Created  12m (x4 over 12m)    kubelet, k8s-wrk-2  Created container csi-cephfsplugin
Normal   Pulled   12m (x3 over 12m)    kubelet, k8s-wrk-2  Container image "quay.io/cephcsi/cephcsi:v1.2.2" already present on machine
Warning  Failed   12m (x2 over 12m)    kubelet, k8s-wrk-2  Error: failed to start container "csi-cephfsplugin": Error response from daemon: path /var/lib/kubelet/pods is mounted on /var/lib/kubelet but it is not a shared mount
Warning  BackOff  3m3s (x44 over 12m)  kubelet, k8s-wrk-2  Back-off restarting failed container
The error tells us that the
/var/lib/kubelet mount is not shared. We can verify this:
findmnt -o TARGET,PROPAGATION /var/lib/kubelet/
TARGET            PROPAGATION
/var/lib/kubelet  private
Indeed, it is private rather than shared. By default, LXD mounts volumes as private; we can change this behavior with the following device configuration:
devices:
  kubelet-volume:
    path: /var/lib/kubelet
    propagation: shared
    source: /dev/zvol/syspool/kubernetes/kubelet
    type: disk
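Equivalently, the device can be added from the LXD CLI. This is a sketch: the container name mycontainer is a placeholder, while the device name kubelet-volume and the zvol path match our configuration above:

```shell
# Attach the ZFS volume with shared mount propagation
# ("propagation" is a disk device option available in LXD 3.x).
lxc config device add mycontainer kubelet-volume disk \
  source=/dev/zvol/syspool/kubernetes/kubelet \
  path=/var/lib/kubelet \
  propagation=shared
lxc restart mycontainer
```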
We can then verify that the mount is now shared:
findmnt -o TARGET,PROPAGATION /var/lib/kubelet/
TARGET            PROPAGATION
/var/lib/kubelet  shared