Table of contents
- Introduction
- Prerequisites
- Matryoshka
- Install tools in the host
- Create a cluster
- Install the Ingress controller
- Install certificate manager
- Install Nubus
- Credentials
- Client access
- Logs
- Cleanup
Introduction
Although Nubus is a very complex software collection, installing it in Kubernetes is pretty straightforward.
In this article, I will show you first how to install and run Kubernetes on your notebook and then how to install Nubus into it.
For a production deployment of Nubus, please read the official Nubus for Kubernetes - Operation Manual.
Prerequisites
You’ll only need a Linux system on a 32-bit or 64-bit Intel/AMD or 64-bit ARM processor. Any Linux distribution from the last five years will work. This article uses an Ubuntu 24.04 system (download link) on a 64-bit Intel/AMD CPU.
If you use Windows or macOS, you can run Linux in your preferred virtualization solution.
The instructions also work on a UCS 5.0 system.
The system should have at least 6 GB free RAM and 12 GB free disk space.
Disclaimer: Please be aware that this installation is for testing purposes only! A single-node Kubernetes “cluster” obviously does not offer the robustness one would expect from a production-level Kubernetes cluster.
There is an accompanying article that explains what Kubernetes and containers are and how they work: Explore Kubernetes
Throughout this article, you’ll find links to the other article for deeper technical details that do not fit into a short how-to.
→ If interested, read section What is Kubernetes? of the K8s background article for a short introduction to what Kubernetes is, and What are containers? and What are pods? for short introductions to what containers and pods are.
Matryoshka
On a production Kubernetes node, multiple services would be running in the host, many storage devices would show up, and the network would have unique settings. That’s not something we want on our desktop machine. We want to test Kubernetes and Nubus, throw them away, and have a clean system afterward.
With that scenario (mostly for developers) in mind, one Kubernetes distribution offers an elegant solution: Kind. It is very nice to use on notebooks because it does not add any services to your system.
“Kind” is an abbreviation for “Kubernetes in Docker.” That describes exactly what it does: It uses “Docker in Docker” (DinD) to run a complete Kubernetes environment in a single Docker container.
→ If interested, read section Matryoshka of the K8s background article to explore more technical details about how Kind runs Kubernetes inside a Docker container.
Install tools in the host
We need a few command line (CLI) tools in the host to manage our Kubernetes cluster.
Docker
If not already installed, add Docker to your system:
sudo apt-get install --yes docker.io
docker version
Client:
Version: 26.1.3
API version: 1.45
# ...
Server:
Engine:
Version: 26.1.3
API version: 1.45 (minimum version 1.24)
# ...
Kubectl
kubectl is a command line tool for communicating with a Kubernetes cluster’s control plane, using the Kubernetes API.
This command line tool can be used to manage everything in Kubernetes (kubectl documentation).
sudo apt-get install --yes apt-transport-https ca-certificates curl
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install --yes kubectl
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl >/dev/null
source /usr/share/bash-completion/bash_completion
You should now be able to run kubectl (without arguments) and get a help screen.
Helm
Helm is a package manager for Kubernetes.
A Helm chart is a package that describes Kubernetes resources. It can be used to install and configure complex software. Helm charts are versioned.
To install Helm, execute the following:
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg >/dev/null
sudo apt-get install --yes apt-transport-https
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install --yes helm
helm completion bash | sudo tee /etc/bash_completion.d/helm >/dev/null
source /usr/share/bash-completion/bash_completion
You should now be able to run helm (without arguments) and get a help screen.
Kind
Install kind, the command line tool to manage a single-node Kubernetes cluster (quick start guide):
sudo curl -Lo /usr/local/bin/kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-amd64
sudo chmod 755 /usr/local/bin/kind
kind completion bash | sudo tee /etc/bash_completion.d/kind >/dev/null
source /usr/share/bash-completion/bash_completion
You should now be able to run kind (without arguments) and get a help screen.
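The download command above hardcodes the amd64 binary, even though the prerequisites also mention 64-bit ARM. A small sketch that derives the matching download URL from uname -m (the asset names correspond to what the Kind project publishes for v0.27.0):

```shell
# Map the kernel's architecture name to Kind's release asset suffix.
case "$(uname -m)" in
  x86_64)  KIND_ARCH=amd64 ;;
  aarch64) KIND_ARCH=arm64 ;;
  *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac
echo "https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-${KIND_ARCH}"
# Download this URL with: sudo curl -Lo /usr/local/bin/kind <URL>
```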
jq
Install jq, the command-line JSON processor:
sudo apt-get install --yes jq
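Later in this article, jq is combined with base64 -d to read a password out of a Kubernetes secret. Here is a minimal self-contained sketch of that pattern, using made-up JSON rather than real Nubus data:

```shell
# Kubernetes secrets store their values base64-encoded. jq -r extracts
# the raw string and base64 -d decodes it; "c2VjcmV0" encodes "secret".
echo '{"data":{"password":"c2VjcmV0"}}' | jq -r '.data.password' | base64 -d; echo
```

This prints secret. The trailing echo only adds a line break after the decoded value.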
Create a cluster
We have everything now to create our first Kubernetes cluster.
When creating it, we must define the network ports of containers inside the cluster that we want to access.
→ If interested, read section Accessing the cluster network — the Ingress controller of the K8s background article if you’re curious why.
Store the following cluster configuration in a file called kind-cluster-config.yaml. Take care to keep the indentation because the YAML format is strict about that.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
This configuration will try to use network ports 80 and 443 on your machine. If those are already in use by a web server, it will fail.
You can set the two hostPort settings to e.g., 8000 and 8443. Then you must install and configure a reverse proxy like Apache or Nginx that forwards HTTP and HTTPS requests for id.example.com and portal.example.com to those ports.
You’ll also have to do that if you want to run multiple instances of Nubus.
Use the values above if those ports are free on your machine and you just want to start Nubus.
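For illustration, the extraPortMappings section of kind-cluster-config.yaml would then look like this (only the hostPort side changes; the containerPort values stay at 80 and 443):

```yaml
  extraPortMappings:
  - containerPort: 80
    hostPort: 8000
    protocol: TCP
  - containerPort: 443
    hostPort: 8443
    protocol: TCP
```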
Now, create the cluster. I’ll call it nubus:
kind create cluster --name=nubus --config=kind-cluster-config.yaml
Creating cluster "nubus" ...
✓ Ensuring node image (kindest/node:v1.32.2) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-nubus"
You can now use your cluster with:
kubectl cluster-info --context kind-nubus
Thanks for using kind! 😊
That’s it! We now have a Kubernetes cluster.
It is running inside a single Docker container:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6309a2bf70cf kindest/node:v1.32.2 "/usr/local/bin/entr…" 21 minutes ago Up 21 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 127.0.0.1:37465->6443/tcp nubus-control-plane
→ There’s a whole world inside that container. If you’re curious, head over to section Explore Kind cluster of the K8s background article.
Install the Ingress controller
By default, Kubernetes only connects containers with each other inside the cluster.
Ingress manages external access to services in a cluster.
→ Section Accessing the cluster network — the Ingress controller has a longer explanation.
Nubus supports one Ingress controller implementation out of the box: ingress-nginx. Not nginx-ingress, careful with the name!
Install ingress-nginx into the cluster:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--version "4.8.0" \
--set controller.allowSnippetAnnotations=true \
--set controller.config.hsts=false \
--set controller.service.enableHttps=false \
--set controller.hostPort.enabled=true \
--set controller.service.ports.http=80
Release "ingress-nginx" does not exist. Installing it now.
NAME: ingress-nginx
LAST DEPLOYED: Tue Dec 17 16:57:59 2024
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller'
An example Ingress that makes use of the controller:
# ...
If you look at the running pods, you’ll notice a new one with the name ingress-nginx-controller-...:
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-controller-6787bf559d-ctsz4 1/1 Running 0 3m19s
kube-system coredns-668d6bf9bc-6hwlb 1/1 Running 0 3m41s
# ...
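If you want to be sure the controller is actually ready before installing anything that depends on it, you can block until its pod reports Ready. This is an optional convenience step, not part of the official instructions:

```shell
# Wait (up to 90 seconds) until the ingress-nginx controller pod is Ready.
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```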
Install certificate manager
We need a certificate manager in our cluster. It provisions SSL/TLS certificates.
We’ll use cert-manager.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.1/cert-manager.yaml
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
# ...
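You can check that the cert-manager components came up before continuing; the namespace name is taken from the output above:

```shell
# The cert-manager, cainjector, and webhook pods should all reach
# Running status within a minute or two.
kubectl get pods --namespace cert-manager
```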
Install Nubus
Our system and our cluster are now prepared to install Nubus.
First, we’ll get a configuration file:
export VERSION="1.8.0"
curl --output custom_values.yaml "https://raw.githubusercontent.com/univention/nubus-stack/v${VERSION}/helm/nubus/example.yaml"
You can edit custom_values.yaml and replace baseDn, domainName and domain, or leave them at their defaults to use example.com as your domain.
See section Configuration of the Nubus Operation Manual for available configuration options.
In this article, I’ll keep it unchanged.
We’ll install all of Nubus using its “umbrella Helm chart.”
Each Nubus component can be installed separately using its own Helm chart. The umbrella Helm chart conveniently applies our configuration consistently to all components and installs them.
export VERSION="1.8.0"
export NAMESPACE_FOR_NUBUS=default
export RELEASE_NAME=nubus
helm upgrade \
--install \
--namespace="$NAMESPACE_FOR_NUBUS" \
--values custom_values.yaml \
--version "$VERSION" \
--timeout 20m \
"$RELEASE_NAME" \
oci://artifacts.software-univention.de/nubus/charts/nubus
Depending on your internet and hard disk speed, this will take about 10 minutes.
You’ll see an output like this:
Release "nubus" does not exist. Installing it now.
Pulled: artifacts.software-univention.de/nubus/charts/nubus:1.8.0
Digest: sha256:400d2b917117e702639e1a3566c454c1b96b7a3cbf08cb648524fb0222ad43e9
coalesce.go:301: warning: destination for nubus.nubusUmcServer.postgresql.auth.existingSecret is a table. Ignoring non-table value ()
coalesce.go:301: warning: destination for minio.ingress.tls is a table. Ignoring non-table value (false)
coalesce.go:298: warning: cannot overwrite table with non table for nubusNotificationsApi.postgresql.auth.existingSecret (map[name:])
coalesce.go:298: warning: cannot overwrite table with non table for keycloak.postgresql.auth.existingSecret (map[keyMapping:map[password:<nil>] name:<nil>])
coalesce.go:298: warning: cannot overwrite table with non table for nubusUmcServer.postgresql.auth.existingSecret (map[name:])
W0422 10:57:44.059350 1828534 warnings.go:70] unknown field "spec.strategy"
It’ll stall for a while here. Just be patient. Then you’ll get output like this:
NAME: nubus
LAST DEPLOYED: Tue Apr 22 10:57:27 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing nubus.
Your release is named nubus.
To learn more about the release, try:
$ helm status nubus -n default
$ helm get all nubus -n default
The default user is called "Administrator" with admin and user roles. You can retrieve its password by running:
$ kubectl -n default get secret nubus-nubus-credentials -o json | jq -r '.data.administrator_password' | base64 -d
That’s it. All Nubus components were installed.
But they have not all finished initializing yet:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nubus-keycloak-0 1/1 Running 5 (3m19s ago) 6m55s
nubus-keycloak-bootstrap-bootstrap-1-br2s6 0/1 Completed 0 6m54s
nubus-ldap-notifier-0 0/1 CrashLoopBackOff 3 (31s ago) 6m55s
nubus-ldap-server-primary-0 0/1 Running 0 6m54s
nubus-ldap-server-proxy-775c88d546-l42n5 0/1 Init:3/5 0 6m54s
nubus-ldap-server-secondary-0 0/1 Init:4/5 0 6m55s
nubus-minio-8686f5d865-m92k2 1/1 Running 0 6m55s
nubus-minio-provisioning-hls78 0/1 Completed 0 8s
nubus-notifications-api-845dcf5995-sfsz6 1/1 Running 5 (2m11s ago) 6m54s
nubus-portal-consumer-0 0/1 Init:0/4 0 6m55s
nubus-portal-frontend-64b4f46f95-mk4kk 1/1 Running 0 6m55s
nubus-portal-server-59df9bf565-b2j6l 1/1 Running 0 6m55s
nubus-postgresql-0 1/1 Running 0 6m55s
nubus-provisioning-api-5b946577d9-llt4h 1/1 Running 0 6m55s
nubus-provisioning-dispatcher-7f7c9cf8d6-nl6sp 1/1 Running 0 6m55s
nubus-provisioning-listener-0 1/1 Running 0 6m55s
nubus-provisioning-nats-0 3/3 Running 0 6m55s
nubus-provisioning-prefill-677dd99cc5-fj5g9 0/1 Init:0/2 0 6m55s
nubus-provisioning-register-consumers-6zqnw 0/1 Init:1/2 0 6m55s
nubus-provisioning-udm-transformer-5c5cbb4c5c-kxfdg 0/1 Running 2 (24s ago) 6m55s
nubus-selfservice-listener-66d9797764-6xs4w 0/1 Init:0/1 2 (115s ago) 6m55s
nubus-stack-data-ums-1-nvs8s 0/1 Init:2/3 0 6m55s
nubus-udm-rest-api-db7458445-mjskl 1/1 Running 0 6m54s
nubus-umc-gateway-5fb767c545-5xwbv 1/1 Running 0 6m54s
nubus-umc-server-0 1/2 Running 0 6m55s
nubus-umc-server-7d8f8478d9-8q7n9 1/1 Running 0 6m55s
nubus-umc-server-memcached-8676b45b68-x4s5b 1/1 Running 0 6m55s
It is normal for some pods to be in status CrashLoopBackOff. When all their dependencies have finished initializing, they’ll recover on their own.
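If you don’t want to re-run kubectl get pods by hand, the -w (watch) flag streams status changes to the terminal as they happen:

```shell
# Streams pod status updates until interrupted; press Ctrl-C to stop.
kubectl get pods -w
```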
Credentials
Until that’s finished, we can use the time to look up the password of the Administrator user.
The end of the helm output was:
The default user is called "Administrator" with admin and user roles. You can retrieve its password by running:
$ kubectl -n default get secret nubus-nubus-credentials -o json | jq -r '.data.administrator_password' | base64 -d
We can copy and paste that command. But I suggest appending ; echo, so you get a line break:
kubectl -n default get secret nubus-nubus-credentials -o json | jq -r '.data.administrator_password' | base64 -d; echo
b30665941288528735bf8aa04655188c1328509b
If you didn’t change the value of nubusMasterPassword in the file custom_values.yaml, then you’ll see the same password now.
Client access
After a few minutes, all pods should have the status Running
or Completed
.
We can start testing Nubus now.
By default, Nubus exposes only three services: the Portal, the UMC, and the UDM REST API, all accessible through HTTPS. To reach them, your browser or REST client must use the correct hostname.
Add id.example.com and portal.example.com to the file /etc/hosts (replace example.com with your domain if you changed custom_values.yaml):
echo "127.0.0.2 id.example.com portal.example.com" | sudo tee -a /etc/hosts >/dev/null
You must do that on the system where you intend to start the browser and the curl command. That may be your VM or your desktop machine.
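To check that the entry took effect, getent resolves names the same way other programs on the system do:

```shell
# Should print the 127.0.0.2 line we just appended to /etc/hosts.
getent hosts portal.example.com
```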
Now you should be able to connect with your browser to https://portal.example.com/. The SSL certificate is self-signed, so you must accept it manually.
You can use the UDM REST API:
export USERNAME=Administrator
export PASSWORD="$(kubectl -n default get secret nubus-nubus-credentials -o json | jq -r '.data.administrator_password' | base64 -d)"
export FQDN=portal.example.com
curl -ks -X GET -H "Accept: application/json" \
"https://${USERNAME}:${PASSWORD}@${FQDN}/univention/udm/users/user/?query\[username\]=Administrator" | jq
{
"_links": {
"udm:object-modules": [
{
"title": "All modules",
"href": "https://portal.example.com/univention/udm/"
}
# ...
},
"_embedded": {
"udm:object": [
{
"dn": "uid=Administrator,cn=users,dc=example,dc=com",
"objectType": "users/user",
"id": "Administrator",
"position": "cn=users,dc=example,dc=com",
"properties": {
"username": "Administrator",
"uidNumber": 2001,
"gidNumber": 5000,
"firstname": "Admin",
# ...
},
"options": {
"pki": false
},
"policies": {
"policies/pwhistory": [],
"policies/desktop": [],
"policies/umc": []
},
"uuid": "6b747944-50e1-103f-8c3f-1f706b703556",
"uri": "https://portal.example.com/univention/udm/users/user/uid%3DAdministrator%2Ccn%3Dusers%2Cdc%3Dexample%2Cdc%3Dcom",
"_links": {
"self": [
{
"name": "uid=Administrator,cn=users,dc=example,dc=com",
"title": "Administrator",
"href": "https://portal.example.com/univention/udm/users/user/uid%3DAdministrator%2Ccn%3Dusers%2Cdc%3Dexample%2Cdc%3Dcom"
}
]
}
}
]
},
"results": 1
}
Logs
To read the logs, use kubectl logs [-f] <object>.
The -f flag works like with the tail command: It keeps streaming new log lines to the terminal.
Examples:
- UDM REST API:
kubectl logs -f deployments/nubus-udm-rest-api
- Portal:
kubectl logs -f deployments/nubus-portal-frontend
- LDAP:
kubectl logs -f statefulsets/nubus-ldap-server-primary
Use kubectl logs TAB TAB (press the tabulator key twice) to navigate the available options.
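Two more variations that are often useful (the pod name below is taken from the listing earlier and will differ on your system):

```shell
# Show only the most recent lines instead of the whole log:
kubectl logs --tail=100 deployments/nubus-udm-rest-api
# Pods whose READY column shows x/2 or x/3 run several containers;
# --all-containers prints the logs of all of them:
kubectl logs nubus-provisioning-nats-0 --all-containers
```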
Cleanup
That’s it — everything is working now.
If you want to uninstall Nubus, run: helm uninstall nubus
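To confirm the uninstall worked, the release should no longer appear in the list of installed Helm releases:

```shell
# Lists Helm releases in the default namespace; "nubus" should be gone.
helm list --namespace default
```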
If you want to delete the whole cluster (stopping the Docker container and deleting its volume, freeing all used RAM and disk space), run:
kind delete clusters nubus
Only the Docker image remains, so the next cluster deployment doesn’t require the download:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
kindest/node <none> 2d9b4b74084a 3 days ago 1.05GB
To delete the Docker image, run: docker rmi 2d9b4b74084a.