K3s Setup
Notes for setting up a single-node K3s cluster on an OVH Baremetal machine. The setup covers OS hardening, WireGuard VPN (so the Kubernetes API is never exposed to the public internet), and a Traefik ingress with TLS certificates via cert-manager and Cloudflare DNS-01 challenges.
OS
Install Rocky Linux 9, then update and reboot:
sudo dnf update -y && sudo reboot
SELinux
OVH installation templates often disable SELinux via the kernel command line. To enable it, remove that argument and reboot:
sudo grubby --update-kernel ALL --remove-args selinux
sudo reboot
Verify it is enforcing:
getenforce
Should show: Enforcing
Hardening
Apply kernel-level network hardening, lock down SSH, and disable IPv6:
sudo tee /etc/sysctl.d/99-hardening.conf > /dev/null <<EOF
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.tcp_timestamps = 0
EOF
sudo sysctl --system
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sudo sed -i 's/^#*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo sed -i 's/^#*PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#*PermitEmptyPasswords.*/PermitEmptyPasswords no/' /etc/ssh/sshd_config
sudo sed -i 's/^#*ChallengeResponseAuthentication.*/ChallengeResponseAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload sshd
sudo tee /etc/sysctl.d/99-disable-ipv6.conf > /dev/null <<EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
EOF
sudo sysctl --system
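To spot-check that the sysctl settings took effect, read the values back from /proc (each should be 1 once the hardening config is applied):

```shell
# 1 means SYN cookies / reverse-path filtering are enabled
cat /proc/sys/net/ipv4/tcp_syncookies
cat /proc/sys/net/ipv4/conf/all/rp_filter
```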
WireGuard
K3s and SSH will only be accessible through the WireGuard VPN tunnel (10.11.0.0/24). The server uses 10.11.0.1 and the operator machine uses 10.11.0.2.
sudo dnf install epel-release -y
sudo dnf install wireguard-tools -y
echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-wireguard.conf
sudo sysctl --system
sudo mkdir -p /etc/wireguard
sudo chmod 700 /etc/wireguard
wg genkey | sudo tee /etc/wireguard/server.key | wg pubkey | sudo tee /etc/wireguard/server.pub
sudo chmod 600 /etc/wireguard/server.key
sudo tee /etc/wireguard/wg0.conf > /dev/null <<EOF
[Interface]
Address = 10.11.0.1/24
ListenPort = 51820
PrivateKey = $(sudo cat /etc/wireguard/server.key)
[Peer]
PublicKey = <CLIENT_PUBLIC_KEY>
AllowedIPs = 10.11.0.2/32
EOF
sudo chmod 600 /etc/wireguard/wg0.conf
sudo chown root:root /etc/wireguard/wg0.conf
sudo systemctl enable --now wg-quick@wg0
# sudo wg-quick down wg0
# sudo wg-quick up wg0
sudo cat /etc/wireguard/server.pub
sudo wg show
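On the operator machine, a matching client config (sketch; the keys and the server's public IP are placeholders to fill in):

```
[Interface]
Address = 10.11.0.2/24
PrivateKey = <CLIENT_PRIVATE_KEY>

[Peer]
# Server public key: output of `sudo cat /etc/wireguard/server.pub`
PublicKey = <SERVER_PUBLIC_KEY>
Endpoint = <SERVER_PUBLIC_IP>:51820
AllowedIPs = 10.11.0.0/24
PersistentKeepalive = 25
```

Bring the tunnel up with wg-quick and confirm a handshake appears in wg show before locking down the firewall.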
Firewall
Only ports 80, 443, and 51820 (WireGuard) are open to the world. SSH and the Kubernetes API (6443) are restricted to the VPN subnet.
sudo dnf install firewalld -y
sudo systemctl enable --now firewalld
sudo firewall-cmd --state
sudo firewall-cmd --set-default-zone=drop
sudo firewall-cmd --permanent --zone=drop --add-port=80/tcp
sudo firewall-cmd --permanent --zone=drop --add-port=443/tcp
sudo firewall-cmd --permanent --zone=drop --add-port=51820/udp
sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16 # pods
sudo firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16 # services
sudo firewall-cmd --permanent --zone=drop \
--add-rich-rule='rule family="ipv4" source address="10.11.0.0/24" port port=22 protocol=tcp accept'
sudo firewall-cmd --permanent --zone=drop \
--add-rich-rule='rule family="ipv4" source address="10.11.0.0/24" port port=6443 protocol=tcp accept'
sudo firewall-cmd --reload
sudo firewall-cmd --get-default-zone
Fail2ban
Protect SSH from brute-force attacks. The VPN subnet is allowlisted so legitimate connections are never banned.
sudo dnf install -y fail2ban
sudo systemctl enable --now fail2ban
sudo tee /etc/fail2ban/jail.d/ssh-vpn.local > /dev/null <<EOF
[sshd]
enabled = true
port = 22
filter = sshd
# Read from the systemd journal; minimal Rocky 9 installs may not ship rsyslog's /var/log/secure
backend = systemd
maxretry = 5
bantime = 3600
ignoreip = 10.11.0.0/24
EOF
sudo systemctl restart fail2ban
sudo fail2ban-client status sshd
sudo tail -f /var/log/secure
K3s
Installation
See the Hardening subsection below for the install variant with secrets encryption at rest.
Bind K3s to the WireGuard interface so the API server is only reachable over VPN:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--bind-address 10.11.0.1 --tls-san 10.11.0.1 --node-ip 10.11.0.1" sh -
sudo ss -tulpen | grep 6443
Copy the kubeconfig to your operator machine:
# On the server
sudo cat /etc/rancher/k3s/k3s.yaml
# On the operator machine
mkdir -p ~/.kube
# Paste content into ~/.kube/config and replace 127.0.0.1 with 10.11.0.1
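The address rewrite can be scripted with sed; this sketch runs against a scratch copy (in practice, point the sed at ~/.kube/config):

```shell
# Demo on a scratch file; the real kubeconfig lives at ~/.kube/config
printf 'server: https://127.0.0.1:6443\n' > /tmp/kubeconfig-demo
sed -i 's|127.0.0.1|10.11.0.1|' /tmp/kubeconfig-demo
cat /tmp/kubeconfig-demo   # server: https://10.11.0.1:6443
```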
Fix the k3s service (in case of an error during installation)
Update the k3s service: sudo systemctl edit k3s
[Service]
ExecStart=
ExecStart=/usr/local/bin/k3s server \
--bind-address 10.11.0.1 \
--tls-san 10.11.0.1 \
--node-ip 10.11.0.1
sudo systemctl daemon-reload
sudo systemctl restart k3s
Hardening
Enable Kubernetes secrets encryption at rest in the K3s datastore. Generate the encryption config before installing K3s:
sudo tee /var/lib/rancher/k3s/encryption-config.yaml > /dev/null <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF
sudo chown root:root /var/lib/rancher/k3s/encryption-config.yaml
sudo chmod 600 /var/lib/rancher/k3s/encryption-config.yaml
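The aescbc provider only accepts keys that decode to 16, 24, or 32 raw bytes; a quick sanity check on a key generated the same way:

```shell
# Generate a key and confirm it decodes back to 32 raw bytes
key=$(head -c 32 /dev/urandom | base64)
echo "$key" | base64 -d | wc -c
```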
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--bind-address 10.11.0.1 --tls-san 10.11.0.1 --node-ip 10.11.0.1 --kube-apiserver-arg="encryption-provider-config=/var/lib/rancher/k3s/encryption-config.yaml"' sh -
Optional
Apply a default-deny NetworkPolicy to block all unintended pod-to-pod and pod-to-internet traffic:
default-deny.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
kubectl apply -f default-deny.yml
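Note that default-deny also blocks DNS lookups from pods in the namespace, which breaks most workloads. A sketch of a follow-up policy re-allowing egress to cluster DNS (assumes the standard k8s-app: kube-dns label used by K3s's CoreDNS):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```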
Demo (Step by step)
End-to-end example deploying a whoami app with a TLS certificate issued via Cloudflare DNS-01.
You will need:
- A Cloudflare API token with Zone.Zone Read and Zone.DNS Edit permissions
- A domain name managed by Cloudflare
- An email address for Let's Encrypt
Cert Manager
Install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.4/cert-manager.yaml
Create the Cloudflare API token secret in the cert-manager namespace:
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: cert-manager
type: Opaque
stringData:
  api-token: <CLOUDFLARE_API_TOKEN>
Create the ClusterIssuer using DNS-01:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cloudflare-cluster-issuer
spec:
  acme:
    email: <LETSENCRYPT_EMAIL>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cloudflare-cluster-issuer-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token
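While testing, it can help to add a second ClusterIssuer pointed at the Let's Encrypt staging environment to avoid production rate limits; only the server URL and the names differ (sketch, reusing the same Cloudflare token secret):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cloudflare-cluster-issuer-staging
spec:
  acme:
    email: <LETSENCRYPT_EMAIL>
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cloudflare-cluster-issuer-staging-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token
```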
Web App
Simple whoami test to verify TLS and round-robin load balancing across replicas.
Create namespace
apiVersion: v1
kind: Namespace
metadata:
  name: webux-dev
kubectl apply -f namespace.yml
Create deployment
Two replicas — refreshing the page will round-robin between them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: webux-dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
kubectl apply -f deployment.yml
Create service
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: webux-dev
spec:
  selector:
    app: whoami
  ports:
    - port: 80
kubectl apply -f service.yml
Create ingress route
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: webux-dev
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`web.webux.dev`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    secretName: web-webux-dev-secret
kubectl apply -f ingressroute.yml
Request certificate
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: web-webux-dev
  namespace: webux-dev
spec:
  secretName: web-webux-dev-secret
  issuerRef:
    name: cloudflare-cluster-issuer
    kind: ClusterIssuer
  dnsNames:
    - web.webux.dev
kubectl apply -f certificate.yml
Cloudflare notes
- Create an A record pointing your domain to the baremetal public IP.
- Enable the Proxy in Cloudflare.
- Set SSL/TLS encryption to Full.
Test:
curl https://web.webux.dev
The response should be served over HTTPS with a Let's Encrypt certificate.
Tests
Traefik Metrics
podName=$(kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system port-forward pod/$podName 9100:9100
curl http://127.0.0.1:9100/metrics
Traefik Dashboard
Generate a bcrypt-hashed password for basic auth:
htpasswd -nbB admin yourpassword | openssl base64
Issue a TLS certificate for the dashboard:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: traefik-dashboard
  namespace: kube-system
spec:
  secretName: traefik-dashboard-secret
  issuerRef:
    name: cloudflare-cluster-issuer
    kind: ClusterIssuer
  dnsNames:
    - traefik.webux.dev
kubectl apply -f certificate.yml
Restrict dashboard access to the VPN subnet:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: dashboard-allowlist
  namespace: kube-system
spec:
  ipAllowList:
    sourceRange:
      - 10.11.0.0/24
kubectl apply -f allowlist.yml
Store the hashed credentials as a secret:
apiVersion: v1
kind: Secret
metadata:
  name: traefik-dashboard-auth
  namespace: kube-system
type: Opaque
data:
  users: <YOUR_BASE64_CRED>
kubectl apply -f secret.yml
Create the basic auth middleware:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: dashboard-auth
  namespace: kube-system
spec:
  basicAuth:
    secret: traefik-dashboard-auth
kubectl apply -f middleware.yml
Expose the dashboard with both middlewares applied:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: kube-system
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.webux.dev`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      kind: Rule
      middlewares:
        - name: dashboard-auth
        - name: dashboard-allowlist
      services:
        - name: api@internal
          kind: TraefikService
  tls:
    secretName: traefik-dashboard-secret
kubectl apply -f ingressroute.yml
Preserve the real client IP by setting externalTrafficPolicy: Local on the Traefik service. Edit the HelmChartConfig manifest (sudo vi /var/lib/rancher/k3s/server/manifests/traefik-config.yaml):
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    service:
      spec:
        externalTrafficPolicy: Local
With externalTrafficPolicy: Local, Traefik sees the original client IP (10.11.x.x) instead of the internal cluster IP.
Troubleshooting
Traefik Debug and Access Logs
Enable debug logging and access logs by editing the manifest (sudo vi /var/lib/rancher/k3s/server/manifests/traefik-config.yaml). Note that this file holds a single HelmChartConfig named traefik, so merge these values with any existing valuesContent rather than replacing it:
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    logs:
      general:
        level: DEBUG
      access:
        enabled: true
Verify the config was applied:
kubectl get helmchartconfig -n kube-system traefik
Tail the logs:
kubectl logs -n kube-system deployment/traefik --tail=20
References
- https://letsencrypt.org/getting-started/
- https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/
- https://www.youtube.com/watch?v=vJweuU6Qrgo
- https://docs.k3s.io/quick-start
- https://docs.k3s.io/installation/configuration
- https://github.com/grafana-community/helm-charts/tree/main/charts/tempo