This continues the article Explore Kubernetes.
Scaling
Kubernetes is great at raising and lowering the number of running copies (instances) of a service - scaling.
This is usually done to accommodate changing demands of the workloads (more/fewer concurrent users) and to ensure a fault-tolerant behavior of the service (high availability).
The WordPress application itself is stateless, so it is installed as a “Deployment”.
kubectl get deployments
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
my-wordpress   1/1     1            1           26m
A “ReplicaSet” is used to maintain a set of pods for a deployment.
kubectl get replicasets
NAME                      DESIRED   CURRENT   READY   AGE
my-wordpress-557cbb9647   1         1         1       26m
The MariaDB database is stateful, and thus installed as a “StatefulSet”.
kubectl get statefulsets
NAME                   READY   AGE
my-wordpress-mariadb   1/1     28m
We cannot safely scale StatefulSets (database replication is a topic of its own), but the stateless WordPress is easy to scale:
kubectl scale deployment --replicas=3 my-wordpress
deployment.apps/my-wordpress scaled
Two additional pods are now being created:
kubectl get deployments
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
my-wordpress   1/3     3            1           35m
kubectl get replicasets
NAME                      DESIRED   CURRENT   READY   AGE
my-wordpress-557cbb9647   3         3         1       35m
kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
my-wordpress-557cbb9647-dntzr   1/1     Running   0          36m
my-wordpress-557cbb9647-jvwvt   1/1     Running   0          86s
my-wordpress-557cbb9647-jxb5r   1/1     Running   0          86s
my-wordpress-mariadb-0          1/1     Running   0          36m
On the host, you can also see the new containers and Apache processes:
docker exec -it wordpress-control-plane crictl ps
# ..
ps axf | grep /opt/bitnami/apache/bin/httpd
# ..
You’ll notice that only the WordPress pods (and containers) were scaled up and not the MariaDB pod.
So all WordPress instances are using the same MariaDB database.
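You can confirm this from inside the deployment itself: the chart passes the database host to WordPress via environment variables. The exact variable names are Bitnami conventions and an assumption here, so the grep is kept broad:

```shell
# Print the database-related environment of the WordPress deployment.
# All replicas share this pod spec, so they all point at the same
# database host. Variable names are Bitnami conventions - adjust the
# grep if your chart version differs.
kubectl exec deploy/my-wordpress -- printenv | grep -i database
```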
To follow the logs of all pods, open three terminals, one per pod:
# terminal 1:
kubectl logs -f --tail=5 my-wordpress-557cbb9647-dntzr
# terminal 2:
kubectl logs -f --tail=5 my-wordpress-557cbb9647-jvwvt
# terminal 3:
kubectl logs -f --tail=5 my-wordpress-557cbb9647-jxb5r
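If juggling three terminals is inconvenient, kubectl can also follow several pods at once via a label selector; --prefix tags each line with the pod it came from. The label value below is an assumption based on the Bitnami chart's standard labels (verify with kubectl get pods --show-labels):

```shell
# Follow all WordPress pods in a single terminal; each log line is
# prefixed with its pod name. The label value assumes the Bitnami
# chart's standard app.kubernetes.io labels.
kubectl logs -f --tail=5 --prefix -l app.kubernetes.io/name=wordpress
```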
Make sure the system that runs the next commands has wordpress.test in its /etc/hosts. You can test it by running curl http://wordpress.test/. It should return a web page (not 404 Not Found).
Open a new (fourth) terminal and download bombardier to run load tests:
wget -O /usr/local/bin/bombardier https://github.com/codesenberg/bombardier/releases/download/v1.2.6/bombardier-linux-amd64
chmod +x /usr/local/bin/bombardier
bombardier -c 4 -d 10s http://wordpress.test/
Bombarding http://wordpress.test:80/ for 10s using 4 connection(s)
[=========================================================================] 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec         9.10     19.53     148.92
  Latency       437.44ms  273.52ms     0.99s
  HTTP codes:
    1xx - 0, 2xx - 95, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:   442.57KB/s
If you look at the logs, you’ll notice that the requests have been distributed among the three pods.
The Ingress controller we installed does basic load balancing.
With the URL being the same in all lines, that's a bit hard to see, so run the following:
for I in $(seq 100); do curl "http://wordpress.test/bad_url_$I" &>/dev/null; done
This will produce lots of HTTP 404 errors in the logs - all for different URLs - nicely showing the distribution.
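You can also quantify the distribution instead of eyeballing it. The pipeline below counts 404 responses in an access log; it is demonstrated here on a few hypothetical sample lines, but the same grep works on the output of kubectl logs for each pod:

```shell
# Hypothetical sample of access-log lines in the format the WordPress
# pods emit:
cat > /tmp/wp-access.log <<'EOF'
10.244.0.1 - - [18/Dec/2024:12:40:01 +0000] "GET /bad_url_1 HTTP/1.1" 404 196
10.244.0.1 - - [18/Dec/2024:12:40:01 +0000] "GET /bad_url_2 HTTP/1.1" 404 196
10.244.0.1 - - [18/Dec/2024:12:40:02 +0000] "GET / HTTP/1.1" 200 4455
EOF

# Count the 404 responses; run the same grep over "kubectl logs <pod>"
# for each pod to see how the 100 requests were spread:
grep -c '" 404 ' /tmp/wp-access.log   # prints 2 for this sample
```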
This is very basic load balancing. More complex setups use dedicated Kubernetes objects and services and allow balancing according to different criteria.
But we did not just get load distribution - we also got something more fundamental: scaling for redundancy / high availability (ignoring the single-SQL-DB installation)!
You can test this by forcibly killing one of the WordPress processes, and thus its container and pod.
Use ps on the host to find out the process ID of an Apache process:
ps axf | grep 'opt/bitnami/apache/bin/httpd'
290304 pts/0 S+ 0:00 \_ grep --color=auto opt/bitnami/apache/bin/httpd
274145 ? Ss 0:00 | \_ /opt/bitnami/apache/bin/httpd -f /opt/bitnami/apache/conf/httpd.conf -D FOREGROUND
275017 ? S 0:02 | \_ /opt/bitnami/apache/bin/httpd -f /opt/bitnami/apache/conf/httpd.conf -D FOREGROUND
275018 ? S 0:01 | \_ /opt/bitnami/apache/bin/httpd -f /opt/bitnami/apache/conf/httpd.conf -D FOREGROUND
275019 ? S 0:01 | \_ /opt/bitnami/apache/bin/httpd -f /opt/bitnami/apache/conf/httpd.conf -D FOREGROUND
# ...
286564 ? S 0:00 | \_ /opt/bitnami/apache/bin/httpd -f /opt/bitnami/apache/conf/httpd.conf -D FOREGROUND
283842 ? Ss 0:00 | \_ /opt/bitnami/apache/bin/httpd -f /opt/bitnami/apache/conf/httpd.conf -D FOREGROUND
284458 ? S 0:01 | \_ /opt/bitnami/apache/bin/httpd -f /opt/bitnami/apache/conf/httpd.conf -D FOREGROUND
284459 ? S 0:01 | \_ /opt/bitnami/apache/bin/httpd -f /opt/bitnami/apache/conf/httpd.conf -D FOREGROUND
# ...
The f in the ps command prints the processes as a tree. We want to kill one of the Apache processes at the “root” of such a tree - in my case 274145 or 283842.
This will simulate a crash of the application.
Open a fifth terminal for that.
The first three terminals still show the logs.
In the fourth terminal start a load test that runs for two minutes:
bombardier -c 4 -d 120s http://wordpress.test/
When it has started ([===>-----------]), and the logs in the first terminals start showing connections (... GET / HTTP/1.1" 200 ...), kill the container (in the fifth terminal) by killing the main Apache process:
kill -9 274145
The result will be that the “log tail” in one of the first terminals stops.
Just hit the up-arrow and start it again. (It’s possible you’ll have to do it repeatedly if the pod restart is slow.)
Defaulted container "wordpress" out of: wordpress, prepare-base-dir (init)
wordpress 12:31:22.88 INFO ==>
wordpress 12:31:22.89 INFO ==> Welcome to the Bitnami wordpress container
# ...
wordpress 12:31:24.59 INFO ==> Restoring persisted WordPress installation
wordpress 12:31:24.78 INFO ==> Trying to connect to the database server
wordpress 12:31:28.55 INFO ==> ** WordPress setup finished! **
wordpress 12:31:28.64 INFO ==> ** Starting Apache **
[Wed Dec 18 12:31:29.067258 2024] [core:warn] [pid 1:tid 1] AH00098: pid file /opt/bitnami/apache/var/run/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
[Wed Dec 18 12:31:29.085625 2024] [mpm_prefork:notice] [pid 1:tid 1] AH00163: Apache/2.4.62 (Unix) OpenSSL/3.0.15 configured -- resuming normal operations
[Wed Dec 18 12:31:29.085694 2024] [core:notice] [pid 1:tid 1] AH00094: Command line: '/opt/bitnami/apache/bin/httpd -f /opt/bitnami/apache/conf/httpd.conf -D FOREGROUND'
10.244.0.1 - - [18/Dec/2024:12:31:54 +0000] "GET /wp-login.php HTTP/1.1" 200 4455
10.244.0.1 - - [18/Dec/2024:12:32:04 +0000] "GET /wp-login.php HTTP/1.1" 200 4455
# ...
So Kubernetes restarts the pod and the Ingress controller re-adds it to the network.
In the meantime, the other two pods receive all the traffic.
The load test finishes (in the fourth terminal), and it shows 0 errors:
Bombarding http://wordpress.test:80/ for 2m0s using 4 connection(s)
[==================================================================] 2m0s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec         9.66     19.50     144.81
  Latency       413.12ms  254.45ms      1.16s
  HTTP codes:
    1xx - 0, 2xx - 1165, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:   473.83KB/s
Just like that, we achieved a redundant setup.
If you look at kubectl get pods, you'll notice that the pod has been restarted (see the RESTARTS column).
This is how Kubernetes works: you define a desired state, and Kubernetes implements and supervises it.
This is also how app updates are done in Kubernetes without service interruption:
Remove a pod from the load balancer, shut it down, upgrade it, start it, and re-add it to the load balancing.
During the upgrade, the other pods take over the load of the pod that is being upgraded.
Repeat this for all pods and the upgrade finishes without service interruption.
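A Deployment automates exactly this procedure. As a sketch - the image tag below is illustrative, not a recommendation; the container name wordpress matches what we saw in the logs earlier:

```shell
# Trigger a rolling update by changing the image of the "wordpress"
# container (the tag is made up for illustration):
kubectl set image deployment/my-wordpress wordpress=bitnami/wordpress:latest

# Watch the pods being replaced one at a time, without downtime:
kubectl rollout status deployment/my-wordpress

# If something goes wrong, roll back to the previous version:
kubectl rollout undo deployment/my-wordpress
```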
So, now let's imagine the load dropped because it's after-work hours.
An autoscaler fed with metrics from the pods can detect that they are mostly idle and scale the deployment down, reducing costs and CO2:
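In Kubernetes that component is the HorizontalPodAutoscaler. Given a metrics source (e.g. the metrics-server addon, which this Kind cluster does not have installed), a single command would set it up - a sketch, not something that will work in this cluster as-is:

```shell
# Let Kubernetes scale my-wordpress between 1 and 3 replicas, targeting
# 50% average CPU utilization. Requires a metrics source such as
# metrics-server, which plain Kind clusters lack.
kubectl autoscale deployment my-wordpress --min=1 --max=3 --cpu-percent=50

# Inspect the autoscaler and its current target:
kubectl get hpa
```

Lacking a metrics source here, we scale down manually: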
kubectl scale deployment --replicas=1 my-wordpress
deployment.apps/my-wordpress scaled
kubectl get pods
NAME                            READY   STATUS        RESTARTS        AGE
my-wordpress-557cbb9647-dntzr   1/1     Terminating   2 (2m35s ago)   67m
my-wordpress-557cbb9647-jvwvt   1/1     Terminating   0               32m
my-wordpress-557cbb9647-jxb5r   1/1     Running       0               32m
my-wordpress-mariadb-0          1/1     Running       0               67m
# zzz
kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
my-wordpress-557cbb9647-jxb5r   1/1     Running   0          34m
my-wordpress-mariadb-0          1/1     Running   0          70m
Cleanup
See the Cleanup section of the main article:
If you want to uninstall WordPress, run: helm uninstall my-wordpress
If you want to delete the whole cluster (stopping the Docker container and deleting its volume, freeing all used RAM and disk space), run:
kind delete clusters wordpress
Deleted nodes: ["wordpress-control-plane"]
Deleted clusters: ["wordpress"]
If you also wish to remove the Kind Docker image (1 GB; you'll have to download it again if you start another cluster):
docker images
REPOSITORY     TAG      IMAGE ID       CREATED      SIZE
kindest/node   <none>   2d9b4b74084a   3 days ago   1.05GB
docker rmi 2d9b4b74084a