Creating a K3s Raspberry Pi Cluster with k3sup to Fire Up a Nightscout Backend Service Based on MongoDB

h3rmanns

Apr 3, 2021

I would like to have two Raspberry Pis as a small Kubernetes K3s cluster. What do I have to do?

I assume that you already have two Raspberry Pis with a recent 64-bit Raspberry Pi OS Lite installed. The 64-bit part is very important, since Nightscout 14.2.2 needs a MongoDB 4.x database, and that version is only available for arm64 operating systems.

If you are using Ubuntu 21.10, you have to uninstall snapd and switch the system back to cgroups v1 before k3s is able to run:

sudo echo -n "systemd.unified_cgroup_hierarchy=0 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory" | sudo tee -a /boot/firmware/cmdline.txt

Furthermore, have a look at the end of this article: I needed to install an additional package on Ubuntu Server 21.10.

The easiest setup is to use the 64-bit Raspberry Pi OS, so that is what I will assume from here on. Here you have to modify the cgroups, too:

sudo echo -n " cgroup_memory=1 cgroup_enable=memory" | sudo tee -a /boot/cmdline.txt

If you would like to use Longhorn as a storage class, each node needs the iSCSI initiator installed. So prepare each one like this (details in the Longhorn section below):

sudo apt-get install open-iscsi

Further on, the Pis should be reachable in your local network under hostnames, let's say k1 and k2.
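If your router does not resolve those hostnames for you, a simple way to get there is to add the Pis to the /etc/hosts file on your Mac (the IPs below are just examples, adjust them to your network):

# append to /etc/hosts on your Mac
192.168.0.2 k1
192.168.0.3 k2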

brew on your Mac

To get all the other tools, the Homebrew package manager comes in handy on your Mac. Get it by firing:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

kubectl on your Mac

And of course you need a Kubernetes client on your Mac to be able to fire the `kubectl` command. Install it using the Homebrew package manager:

brew install kubernetes-cli
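You can quickly verify the installation afterwards:

kubectl version --client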

arkade on your Mac as a package manager

I would like to set up the Raspberry Pis from my Mac. There is a small tool called ‘arkade’ to get everything I need for that:

curl -sSL https://dl.get-arkade.dev | sudo sh

k3sup to set up k3s on your first Pi

With arkade it’s easy to install k3sup on my Mac:

arkade get k3sup
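arkade downloads binaries to ~/.arkade/bin and prints how to put them on your PATH. If k3sup is not found afterwards, export the directory or move the binary, whichever you prefer:

export PATH=$PATH:$HOME/.arkade/bin/

# or move it somewhere that is already on your PATH
sudo mv $HOME/.arkade/bin/k3sup /usr/local/bin/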

Afterwards you can install k3s on your first Pi. Let’s assume your Pi is reachable via SSH with the user pi and its name is k1. Furthermore, the IP of k1 is 192.168.0.2. Then you can simply install k3s by doing:

k3sup install --ip 192.168.0.2 --user pi --k3s-channel stable
export KUBECONFIG=`pwd`/kubeconfig

The last command tells the kubectl on your Mac where your Kubernetes cluster can be found in your network.
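Note that k3sup logs into the Pi using SSH key authentication. If the install fails with an authentication error, copy your public key over first (assuming your key is in the default location) and run the install again:

ssh-copy-id pi@192.168.0.2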

k3sup to join your second Pi to your cluster

Assuming your second Pi k2 has the IP 192.168.0.3, you can simply add it to your cluster by:

k3sup join --ip 192.168.0.3 --server-ip 192.168.0.2 --user pi --k3s-channel stable

Now you should be able to see both cluster nodes up and running. Check that by firing:

❯ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
k1     Ready    master   19h   v1.19.15+k3s1
k2     Ready    <none>   11s   v1.19.15+k3s2

Installing the Kubernetes dashboard

Of course you would like to have the graphical Kubernetes user interface. So simply do:

arkade install kubernetes-dashboard

arkade prints everything you need to know to connect to the dashboard afterwards. Basically, you need to open a proxy on your Mac:

kubectl proxy

And create an admin user with this little YAML manifest:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Name it dashboard-admin-user.yaml and send this to your cluster:

kubectl apply -f dashboard-admin-user.yaml

Retrieve the secret dashboard logon token:

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user-token | awk '{print $1}')
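On newer clusters (Kubernetes 1.24 and above) token secrets are no longer created automatically for service accounts, so the command above comes up empty. There you can request a token directly instead:

kubectl -n kubernetes-dashboard create token admin-user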

And finally use this token for a token-based logon at this URI:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

Setting up the Longhorn Persistent Volume

Longhorn needs a special package installed on your Pi. So execute this on all your Raspberry nodes:

sudo apt-get install open-iscsi
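Depending on your image, the iSCSI daemon may not be running after the installation. Make sure it is enabled and started (it might already be, in which case this is a no-op):

sudo systemctl enable --now iscsid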

Afterwards you can install longhorn:

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

This lets you store the MongoDB Nightscout data persistently.
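You can watch the Longhorn pods come up and check that its storage class got registered:

kubectl -n longhorn-system get pods
kubectl get storageclass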

Enabling TLS with Let’s Encrypt to have correct HTTPS access

Of course you would like to access your webservice now by just calling https://mynightscout.duckdns.org. But you need valid certificates for HTTPS to work correctly. Luckily, k3s comes with a Traefik reverse proxy that is already able to get your TLS certificates on its own. But of course, you have to configure this proxy.

I use cert-manager to manage the certificates. Therefore we create a namespace for it:

kubectl create namespace cert-manager

And then install the cert-manager pods:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml

Now we need a clusterissuer.yml:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <your_email>@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: traefik

Note that you have to change the email address in the snippet above.

Now we can apply the clusterissuer.yml to your cluster:

kubectl apply -f clusterissuer.yml

You can have a look at it by firing:

kubectl describe clusterissuer letsencrypt-prod

You can check whether Traefik serves a correct certificate by executing:

openssl s_client -showcerts -servername mydomain.duckdns.org -connect mydomain.duckdns.org:443 </dev/null
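For reference, this is how an Ingress requests a certificate from our ClusterIssuer: via an annotation. Here is a minimal sketch with placeholder names (hostname, service name and port are examples; the Nightscout chart below creates a similar Ingress for you):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nightscout
  annotations:
    # tells cert-manager to obtain a certificate via our issuer
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - mynightscout.duckdns.org
    secretName: nightscout-tls
  rules:
  - host: mynightscout.duckdns.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nightscout
            port:
              number: 1337

cert-manager watches the annotation, solves the HTTP-01 challenge through Traefik and stores the issued certificate in the referenced secret.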

Installing the Nightscout chart

It is recommended to have cert-manager in place before installing your Nightscout backend. This way you ensure that certificates are generated immediately. We are using Helm to easily set up the Nightscout environment. So first we need to install it:

brew install helm

Then add my nightscout helm chart repository:

helm repo add ns https://dhermanns.github.io/nightscout-helm-chart/
helm repo update
helm show values ns/nightscout-helm-chart > ns.values

Edit the created ns.values file and add your own Nightscout API secret (password):

nano ns.values

Install the chart to your k3s cluster now:

helm upgrade --install --values ns.values ns ns/nightscout-helm-chart

You can now access the system using the domain names you entered in ns.values. If you have enabled the default ingress in ns.values, you can even access the system using the internal name of your Raspberry Pi.
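You can check that all pods came up and that the certificate was issued:

kubectl get pods
kubectl get ingress
kubectl get certificate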

Back up your MongoDB

Now that your MongoDB is up and running, you would like to be able to back up and restore all your data. To achieve that, install the MongoDB database tools on your Mac:

brew tap mongodb/brew
brew install mongodb-database-tools

Take care that you are connected to the right k3s cluster:

export KUBECONFIG=~/.kube/config
kubectl get nodes

If these are the expected nodes, forward the MongoDB port to your Mac:

kubectl port-forward --namespace default svc/mongo 27017:27017 &

Now you can dump all your data to a local directory named “nightscout”:

mongodump --uri mongodb://localhost:27017/nightscout -o .

And restore it later:

mongorestore --uri mongodb://localhost:27017/nightscout nightscout
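When you are done, stop the port-forward running in the background again:

# assuming the port-forward is your only background job
kill %1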

What if you don’t have a domain?

In case you don’t have a domain, you can get a free one. I created my dynamic DNS entry here:

http://www.duckdns.org/

DuckDNS also provides a simple update script for your FritzBox.

So your local Raspberry Pis will be reachable under your DuckDNS domain even if you don’t have a static IP address.

Summing it all up

We now have a Kubernetes cluster with two Pis up and running. It contains a Nightscout webservice based on a MongoDB. This service is publicly available under your domain ***.duckdns.org. The included Traefik reverse proxy takes care that everything is properly encrypted using Let’s Encrypt.

Congratulations, you have everything up and running!

Optimizations: setting an alias

To be able to fire Kubernetes commands comfortably, you can add an alias on your Kubernetes node:

ssh pi@k1
nano .bash_profile

Add this line to your profile:

alias k="sudo k3s kubectl"
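Reload the profile (or log out and back in) so the alias becomes active:

source ~/.bash_profile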

Now you can easily fire commands like:

k get pods

Finding out what’s going on in your cluster

Use this command to see the events on your cluster:

kubectl get events --sort-by=.metadata.creationTimestamp

Fixing “invalid capacity 0 on image filesystem”

If you see this warning in the Kubernetes events (see above), you can fix it on Ubuntu Server 21.10 by executing:

sudo apt install linux-modules-extra-raspi
sudo reboot

You will also notice this error when your node keeps toggling between Ready and NotReady.
