Install nubus on k3s kubernetes

I have tried to install this product many times. I succeeded one time, but I can’t see why it succeeded or failed. How is your experience?

Grtz, Paul

It is not easy to give a remote diagnosis with so little information, and I do not know “k3s” well; however, I can give some generic hints based on what I have observed with “k8s”:

  • If a deployment goes wrong, you can use “k9s,” for example, to easily see which pod is the cause.
  • If the environment is slow, a deployment can take more than five minutes; often the problem then resolves itself.
  • The critical factor in a Kubernetes deployment is disk I/O: if the disk is busy with other work, or an old spinning disk is used instead of an SSD, long wait times are normal.
  • During deployment, external services are accessed, such as Let’s Encrypt. There is a rate limit there, which blocks your IP after too many deployment attempts. This would explain why it sometimes works and sometimes doesn’t, despite many attempts.
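When a deployment hangs or fails, plain kubectl is often enough to find the culprit even without k9s. A minimal sketch, assuming only a working KUBECONFIG; namespace and pod name are placeholders you have to fill in:

```shell
# list pods in all namespaces; watch for Pending, CrashLoopBackOff, ImagePullBackOff
kubectl get pods -A

# recent cluster events, oldest first (often reveals scheduling or image-pull errors)
kubectl get events -A --sort-by=.metadata.creationTimestamp

# drill into one suspicious pod (set NS and POD yourself)
NS=default
POD=some-pod-name
kubectl describe pod -n "$NS" "$POD"
kubectl logs -n "$NS" "$POD" --previous
```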

Here are instructions for installing Nubus on K3S that I wrote in Nov ’24 in an internal blog post. They may no longer be up to date. At the very least, the version of Nubus is way too old, and the way the default user(s) are created and read has changed, so you’ll have to adapt there…
THIS COMES WITHOUT ANY GUARANTEES!
Good luck, I hope it helps with setting up Nubus with k3s.

Greetings
Daniel


Install K3S

K3S bundles Traefik as an Ingress controller, but Nubus works out of the box only with the “ingress-nginx” controller.
So, to use Nginx instead of Traefik, we follow the instructions of the article Setup NGINX Ingress Controller on the Rancher Desktop docs site. (Rancher Desktop is a product by SuSE for container management and Kubernetes on the Desktop, which uses K3S under the hood.)

Install K3S without Traefik:

export INSTALL_K3S_EXEC="server --disable=traefik"
curl -sfL https://get.k3s.io | sh -

Output:

[INFO] Finding release for channel stable
[INFO] Using v1.30.6+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.30.6+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.30.6+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s

Explore K3S

Applications that wish to access the Kubernetes API must know its configuration and credentials. For the rest of this article I’ll assume you have executed this in every shell you use:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

K3S can run the Kubernetes cluster inside a Docker container like Kind does, but usually it runs directly on the host. We have installed it that way, so you’ll see the respective processes if you run:

pstree -UT

or

ps axf

This looks similar to how Docker works. You can access the Kubernetes and container management tools (“kubectl” and “crictl”) directly on the host:

crictl ps
crictl images
kubectl get namespaces
kubectl get -A pods

And you’ll find the containers’ file systems below

/var/lib/rancher/k3s

To read K3S’ logs run:

journalctl -u k3s

The logs of pods can be found at:

/var/log/pods
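Besides reading the files under “/var/log/pods” directly, you can of course also use kubectl. A small sketch; the pod name is a placeholder, take a real one from “kubectl get pods -A”:

```shell
# print the logs of one pod, with timestamps
POD=some-pod-name
kubectl logs -n default "$POD" --timestamps

# follow the logs live, like "tail -f"
kubectl logs -n default "$POD" --follow
```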

Install Nubus

Install ingress-nginx

Install Helm if you don’t have it yet (instructions on Helm page).

From script:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash -

From Apt (Debian/Ubuntu):

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Install “ingress-nginx” Ingress controller:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

helm upgrade --install ingress-nginx ingress-nginx \
     --repo https://kubernetes.github.io/ingress-nginx \
     --namespace ingress-nginx \
     --create-namespace \
     --version "4.8.0" \
     --set controller.allowSnippetAnnotations=true \
     --set controller.config.hsts=false \
     --set controller.service.enableHttps=false \
     --set controller.hostPort.enabled=true \
     --set controller.service.ports.http=8000

Output:

Release "ingress-nginx" does not exist. Installing it now.
NAME: ingress-nginx
LAST DEPLOYED: Mon Nov 25 08:26:52 2024
NAMESPACE: ingress-nginx
STATUS: deployed
# ...

Wait for the service to become ready:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s

Output after a short while:

pod/ingress-nginx-controller-d49697d5f-9tgp4 condition met

Install certificate manager

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.0/cert-manager.yaml

Output:

namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
# ...
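cert-manager needs a moment before its components and webhooks answer; waiting for its deployments to become available avoids spurious errors in the next step. A sketch:

```shell
# wait until all cert-manager deployments report Available
kubectl wait --namespace cert-manager \
  --for=condition=Available deployment --all \
  --timeout=120s
```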

Install Nubus

Download and optionally edit the configuration file for Nubus:

export VERSION="1.0.0"

curl --output custom_values.yaml "https://raw.githubusercontent.com/univention/nubus-stack/v${VERSION}/helm/nubus/example.yaml"

# edit the file "custom_values.yaml" if you want

Now we install Nubus:

export YOUR_NAMESPACE=default

helm upgrade \
    --install nubus \
    --namespace="$YOUR_NAMESPACE" \
    oci://artifacts.software-univention.de/nubus/charts/nubus \
    --values custom_values.yaml \
    --timeout 20m \
    --version "$VERSION"

Output:

Release "nubus" does not exist. Installing it now.
Pulled: artifacts.software-univention.de/nubus/charts/nubus:1.0.0
Digest: sha256:73a5482944778af697e3a1404da95481b827de8496f71a4dccc8f9dbbe01722d
coalesce.go:289: warning: destination for minio.ingress.tls is a table. Ignoring non-table value (false)
W1125 08:43:14.462191 25423 warnings.go:70] unknown field "spec.strategy"
NAME: nubus
LAST DEPLOYED: Mon Nov 25 08:42:50 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing nubus.

Your release is named nubus.

To learn more about the release, try:

  $ helm status nubus -n default
  $ helm get all nubus -n default

The default user is called "Administrator" with admin and user roles. You can retrieve its password by running:
  $ kubectl -n default get secret nubus-nubus-credentials -o json | jq -r '.data.administrator_password' | base64 -d

To get the password of the “Administrator” user, run:

kubectl -n default get secret nubus-nubus-credentials -o json | jq -r '.data.administrator_password' | base64 -d; echo
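A Nubus release starts many pods, so it is worth checking that everything has actually come up. A small sketch; the awk filter simply counts pods whose STATUS column is neither “Running” nor “Completed”:

```shell
# overview of all pods in the release namespace
kubectl -n default get pods

# count pods that are not yet Running or Completed (0 means all good)
kubectl -n default get pods --no-headers \
  | awk '$3 != "Running" && $3 != "Completed"' \
  | wc -l
```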

Explore K3S

Install K9s

K9s is a terminal-based UI that interacts with Kubernetes clusters.
Let’s install K9s to look at the running containers in the Kubernetes cluster:

wget https://github.com/derailed/k9s/releases/download/v0.32.5/k9s_linux_amd64.deb
sudo dpkg -i k9s_linux_amd64.deb
rm k9s_linux_amd64.deb

Start it with:

k9s

You can leave k9s with the vi-style command :q (colon, then q).

Test Portal, LDAP and UDM REST API

We can request an object from the LDAP server using the UDM REST API to test whether the directory works.

But first, we make the domain we created known to the system. We’ll add its host names to the “/etc/hosts” file for that. Assuming you didn’t change the domain (“example.com”) in the “custom_values.yaml” file:

echo "127.0.0.2 id.example.com portal.example.com" >> /etc/hosts

Now, you should be able to access the Univention Portal at https://portal.example.com/univention/portal/ . The SSL certificate is self-signed. So you’ll have to ignore that error:

curl -k https://portal.example.com/univention/portal/

If you don’t have “jq” (for reading JSON) already installed, get it now:

apt-get install --yes jq

Let’s get the UDM object of the Administrator user from the UDM REST API:

export USERNAME=Administrator
export PASSWORD="$(kubectl -n default get secret nubus-nubus-credentials -o json | jq -r '.data.administrator_password' | base64 -d)"
export FQDN=portal.example.com

curl -ks -H "Accept: application/json" "https://${USERNAME}:${PASSWORD}@${FQDN}/univention/udm/users/user/?query\[username\]=Administrator" | jq
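The UDM REST API answers with HAL-style JSON. Assuming the result list sits under “_embedded” → “udm:object” (check your actual response), you can reduce the output to just the DNs:

```shell
# print only the DN of each returned object
curl -ks -H "Accept: application/json" \
  "https://${USERNAME}:${PASSWORD}@${FQDN}/univention/udm/users/user/?query\[username\]=Administrator" \
  | jq -r '._embedded["udm:object"][].dn'
```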

Explore file system

As written before, K3S stores files directly in the host’s file system below “/var/lib/rancher/k3s”. Accessing them is easy if you know where to look.
“crictl” is similar to the “docker” command line tool; it also has an “inspect” command.

“crictl ps” lists the running containers. Use “crictl ps | grep primary” to find the one that contains the primary LDAP server:

crictl ps | head -1
CONTAINER     IMAGE         CREATED        STATE   NAME     ATTEMPT POD ID        POD

crictl ps | grep primary
e11417efcfa66 4676211299811 54 minutes ago Running openldap 0       1545e0e6419b9 nubus-ldap-server-primary-

OK, so the container ID is “e11417efcfa66”. Now we can look at its metadata with “crictl inspect e11417efcfa66”. But that will show us much more information than we want. We can use “jq” to select only the parts that are interesting for us:

crictl inspect -o json e11417efcfa66 | jq '.status.mounts[] | {containerPath: .containerPath, hostPath: .hostPath}'

Output:

{
  "containerPath": "/var/lib/univention-ldap/slapd-sock",
  "hostPath": "/var/lib/kubelet/pods/b1cf6178-1014-4e80-9859-668ca43988cd/volumes/kubernetes.io~empty-dir/slapd-overlay-unix-socket-volume"
}
{
  "containerPath": "/usr/share/univention-ldap",
  "hostPath": "/var/lib/kubelet/pods/b1cf6178-1014-4e80-9859-668ca43988cd/volumes/kubernetes.io~empty-dir/usr-share-univention-ldap-volume"
}
{
  "containerPath": "/usr/share/saml",
  "hostPath": "/var/lib/kubelet/pods/b1cf6178-1014-4e80-9859-668ca43988cd/volumes/kubernetes.io~empty-dir/usr-share-saml-volume"
}
{
  "containerPath": "/etc/ldap",
  "hostPath": "/var/lib/kubelet/pods/b1cf6178-1014-4e80-9859-668ca43988cd/volumes/kubernetes.io~empty-dir/etc-ldap-volume"
}
# ...

If we now look at the fourth directory (that is “/etc/ldap” in the container), we’ll see:

ls -ln /var/lib/kubelet/pods/b1cf6178-1014-4e80-9859-668ca43988cd/volumes/kubernetes.io~empty-dir/etc-ldap-volume

total 36
drwxr-sr-x 2 101 102 4096 Nov 25 08:45 sasl2
drwxr-sr-x 2 101 102 4096 Nov 1 10:44 schema
-rw-r--r-- 1 101 102 21369 Nov 25 08:45 slapd.conf
drwxr-sr-x 3 101 102 4096 Nov 1 10:44 slapd.d

That looks very familiar to a UCS user :slight_smile:
You can read the OpenLDAP configuration now:

less /var/lib/kubelet/pods/b1cf6178-1014-4e80-9859-668ca43988cd/volumes/kubernetes.io~empty-dir/etc-ldap-volume/slapd.conf

Do not edit those files. Not only does that miss the whole point of Kubernetes (operation by configuration), but the changes also won’t persist, as containers and their volumes are ephemeral (they’ll be recreated from scratch on the next change).
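The declarative way to change such settings is to edit “custom_values.yaml” and re-run the same helm upgrade as during installation; Kubernetes then recreates the affected pods with the new configuration:

```shell
# after editing custom_values.yaml:
helm upgrade \
    --install nubus \
    --namespace default \
    oci://artifacts.software-univention.de/nubus/charts/nubus \
    --values custom_values.yaml \
    --timeout 20m \
    --version "$VERSION"
```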

Run “mount” and “df” to see all the mounts.

All those mounts are fine on a production system but would be annoying on your notebook.
So, if you plan to use K3S on your desktop machine, see the section Running K3s in Docker of K3S’ documentation.

Explore logs

K3S’ logs:

journalctl -u k3s

The pods’ logs:

ls -l /var/log/pods

Explore images, containers and pods

Use “crictl” to inspect pods and the containers they contain:

inspect     Display the status of one or more containers
inspecti    Return the status of one or more images
imagefsinfo Return image filesystem info
inspectp    Display the status of one or more pods

stats       List container(s) resource usage statistics
statsp      List pod statistics. Stats represent a structured API that will fulfill the Kubelet's /stats/summary endpoint.

Explore network

Because K3S uses the host’s “containerd”, the Kubernetes network interfaces can also be seen from the host:

ip a | grep UP

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue ...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc ...
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc ...
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc ...
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc ...
6: veth90e109af@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 ...
7: vetha0b3f29c@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 ...
# ...
36: veth152a44cf@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 ...
route -n | grep cni

10.42.0.0   0.0.0.0   255.255.255.0   U   0   0   0   cni0

Let’s look at a pod’s network.

crictl ps | grep udm-rest

6c35518e7ff8b ead0e57ae5382 About an hour ago Running udm-rest-api 1 fb5e3a0b588ca nubus-udm-rest-api-7648f9fc8-sfmq7
crictl inspectp -o json fb5e3a0b588ca | jq .info.cniResult

{
  "Interfaces": {
    "cni0": {
      "IPConfigs": null,
      "Mac": "5e:a4:28:0a:9a:0b",
...
    "eth0": {
      "IPConfigs": [
        {
          "IP": "10.42.0.113",
          "Gateway": "10.42.0.1"
        }
      ],
      "Mac": "be:4c:c8:ca:c6:f3",
...
    "vethc778d5e2": {
      "IPConfigs": null,
      "Mac": "ba:b6:5e:cf:48:a7",
...
  "Routes": [
    {
      "dst": "10.42.0.0/16"
    },
    {
      "dst": "0.0.0.0/0",
      "gw": "10.42.0.1"
    }
  ]
}

We get the pod’s “external” interface as seen from inside the pod (eth0 | be:4c:c8:ca:c6:f3 | 10.42.0.113) and from outside (vethc778d5e2 | ba:b6:5e:cf:48:a7).

From the “outside” (the host):

ip a show vethc778d5e2

27: vethc778d5e2@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether ba:b6:5e:cf:48:a7 brd ff:ff:ff:ff:ff:ff link-netns cni-f545ee32-e6d8-583a-6f94-ef821952207c
    inet6 fe80::b8b6:5eff:fecf:48a7/64 scope link
       valid_lft forever preferred_lft forever

From the “inside” (a container in the pod):

kubectl exec nubus-udm-rest-api-7648f9fc8-sfmq7 -- awk '/32 host/ { print f } {f=$2}' /proc/net/fib_trie

Defaulted container "udm-rest-api" out of: udm-rest-api, univention-compatibility (init), ucr-commit (init), load-internal-plugins (init), load-portal-extension (init)

10.42.0.113
127.0.0.1
10.42.0.113
127.0.0.1

Also useful:

crictl inspectp -o json fb5e3a0b588ca | jq .status.network

{
  "additionalIps": [],
  "ip": "10.42.0.113"
}

crictl inspectp -o json fb5e3a0b588ca | jq .info.config.dns_config

{
  "servers": [
    "10.43.0.10"
  ],
  "searches": [
    "default.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local"
  ],
  "options": [
    "ndots:5"
  ]
}

crictl inspectp -o json fb5e3a0b588ca | jq .info.config.port_mappings

[
  {
    "container_port": 9979
  }
]

BTW: All that information is also available from the Kubernetes API.
That is also what you would typically use, as it is standardized and reachable over the network:

kubectl get pods -o json nubus-udm-rest-api-7648f9fc8-sfmq7 | jq .status.podIP

"10.42.0.113"

kubectl get pods -o json nubus-udm-rest-api-7648f9fc8-sfmq7 | jq .status.podIPs

[
  {
    "ip": "10.42.0.113"
  }
]

kubectl get pods -o json nubus-udm-rest-api-7648f9fc8-sfmq7 | jq '.spec.containers[].ports'

[
  {
    "containerPort": 9979,
    "name": "http",
    "protocol": "TCP"
  }
]

Now that we know the IP and port of the service, we can access it directly:

export USERNAME=Administrator
export PASSWORD="$(kubectl -n default get secret nubus-nubus-credentials -o json | jq -r '.data.administrator_password' | base64 -d)"
export FQDN=10.42.0.113:9979

curl -s -H "Accept: application/json" "http://${USERNAME}:${PASSWORD}@${FQDN}/udm/users/user/?query\[username\]=Administrator" | jq

While this works and is fun, the “correct” way (one that also works when the Kubernetes cluster is not running locally) is to create a port forwarding:

kubectl port-forward svc/nubus-udm-rest-api 12345:9979

Forwarding from 127.0.0.1:12345 -> 9979
Forwarding from [::1]:12345 -> 9979

The port-forward blocks that shell, so open a new one:

export USERNAME=Administrator
export PASSWORD="$(kubectl -n default get secret nubus-nubus-credentials -o json | jq -r '.data.administrator_password' | base64 -d)"
export FQDN=localhost:12345

curl -s -H "Accept: application/json" "http://${USERNAME}:${PASSWORD}@${FQDN}/udm/users/user/?query\[username\]=Administrator" | jq

Like this, you can create port forwardings to access every network service inside Kubernetes on the host’s “localhost.” This also works for your containers on remote clusters.
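The same pattern works for any other service; list them first and pick name and port from the output (the variables below are placeholders):

```shell
# list all services with their ports
kubectl get svc -A

# forward a free local port to the chosen service port
SERVICE=some-service
PORT=1234
kubectl port-forward "svc/${SERVICE}" "12345:${PORT}"
```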

If you are working on a VM and want to access the service from your notebook, create the forwarding so that the port is bound not to “localhost” but to the external interface:

# if you're on UCS:
systemctl stop univention-firewall.service
kubectl port-forward --address 0.0.0.0 svc/nubus-udm-rest-api 12345:9979

Now, port “12345” is reachable on the host’s external network interface, so you can access it from your notebook.