Deploying standalone NGINX with kustomize

I had been wanting to try kustomize, so I gave it a shot. In particular, when managing configuration files as ConfigMaps, configMapGenerator seemed like it would be the convenient way to go.

Test environment

As usual, I'm using an RKE2 cluster built from three Raspberry Pis. Three isn't really enough and I'd like to add more, but I keep hoping prices will come back down to what they used to be. Probably not going to happen.

NAME   STATUS   ROLES                       AGE    VERSION           INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s1   Ready    control-plane,etcd,master   146d   v1.28.10+rke2r1   192.168.0.51   <none>        Ubuntu 24.04 LTS     6.8.0-1013-raspi   containerd://1.7.11-k3s2
k8s2   Ready    worker                      23d    v1.28.10+rke2r1   192.168.0.52   <none>        Ubuntu 24.04.1 LTS   6.8.0-1013-raspi   containerd://1.7.11-k3s2
k8s3   Ready    worker                      146d   v1.28.10+rke2r1   192.168.0.53   <none>        Ubuntu 24.04 LTS     6.8.0-1013-raspi   containerd://1.7.11-k3s2

Loki and MetalLB, which I set up in previous posts, are already installed.

NAMESPACE                  NAME                                                     READY   STATUS      RESTARTS        AGE
calico-system              calico-kube-controllers-7b48486fdc-75h6v                 1/1     Running     2 (24d ago)     146d
calico-system              calico-node-5pn72                                        1/1     Running     4 (24d ago)     146d
calico-system              calico-node-74fdq                                        1/1     Running     0               23d
calico-system              calico-node-85lns                                        1/1     Running     2 (24d ago)     146d
calico-system              calico-typha-f8576ccc9-782rk                             1/1     Running     0               23d
calico-system              calico-typha-f8576ccc9-m9qtv                             1/1     Running     0               24d
cattle-fleet-system        fleet-agent-f7dc57db7-86jb6                              1/1     Running     2 (16d ago)     24d
cattle-monitoring-system   alertmanager-rancher-monitoring-alertmanager-0           2/2     Running     0               11d
cattle-monitoring-system   prometheus-rancher-monitoring-prometheus-0               3/3     Running     0               11d
cattle-monitoring-system   pushprox-kube-controller-manager-client-ckgg8            1/1     Running     0               11d
cattle-monitoring-system   pushprox-kube-controller-manager-proxy-b7b6cdcc4-ch9tr   1/1     Running     0               11d
cattle-monitoring-system   pushprox-kube-etcd-client-lzqrj                          1/1     Running     0               11d
cattle-monitoring-system   pushprox-kube-etcd-proxy-6b7fc9dbfd-b55km                1/1     Running     0               11d
cattle-monitoring-system   pushprox-kube-proxy-client-2ldbt                         1/1     Running     0               11d
cattle-monitoring-system   pushprox-kube-proxy-client-95qmb                         1/1     Running     0               11d
cattle-monitoring-system   pushprox-kube-proxy-client-kssp7                         1/1     Running     0               11d
cattle-monitoring-system   pushprox-kube-proxy-proxy-5bfdcb799d-j6wh6               1/1     Running     0               11d
cattle-monitoring-system   pushprox-kube-scheduler-client-xmckg                     1/1     Running     0               11d
cattle-monitoring-system   pushprox-kube-scheduler-proxy-96ff98f56-h4rwk            1/1     Running     0               11d
cattle-monitoring-system   rancher-monitoring-grafana-6bc7446b55-5lpd8              3/3     Running     0               11d
cattle-monitoring-system   rancher-monitoring-kube-state-metrics-745f856f66-fplqt   1/1     Running     0               11d
cattle-monitoring-system   rancher-monitoring-operator-579577c4f7-mswt4             1/1     Running     0               11d
cattle-monitoring-system   rancher-monitoring-prometheus-adapter-5584c94696-gx76r   1/1     Running     0               11d
cattle-monitoring-system   rancher-monitoring-prometheus-node-exporter-vksxn        1/1     Running     0               11d
cattle-monitoring-system   rancher-monitoring-prometheus-node-exporter-zlwj4        1/1     Running     0               11d
cattle-monitoring-system   rancher-monitoring-prometheus-node-exporter-znfsw        1/1     Running     0               11d
cattle-system              cattle-cluster-agent-5455d59c7d-cfxs6                    1/1     Running     7 (21d ago)     135d
cattle-system              cattle-cluster-agent-5455d59c7d-cwm6r                    1/1     Running     3 (16d ago)     32d
cattle-system              rancher-webhook-fd7599678-9xtmw                          1/1     Running     0               24d
cattle-system              system-upgrade-controller-6f86d6d4df-pmqnh               1/1     Running     2 (24d ago)     146d
fluent-bit                 fluent-bit-0-1732539504-pcb7d                            1/1     Running     0               11d
fluent-bit                 fluent-bit-0-1732539504-vszkt                            1/1     Running     0               11d
kube-system                cloud-controller-manager-k8s1                            1/1     Running     28 (3d6h ago)   146d
kube-system                etcd-k8s1                                                1/1     Running     9               146d
kube-system                helm-install-rke2-calico-crd-t4kz8                       0/1     Completed   0               146d
kube-system                helm-install-rke2-calico-r7zlv                           0/1     Completed   2               146d
kube-system                helm-install-rke2-coredns-qcwnq                          0/1     Completed   0               146d
kube-system                kube-apiserver-k8s1                                      1/1     Running     4               146d
kube-system                kube-controller-manager-k8s1                             1/1     Running     21 (3d6h ago)   135d
kube-system                kube-proxy-k8s1                                          1/1     Running     2 (24d ago)     146d
kube-system                kube-proxy-k8s2                                          1/1     Running     0               23d
kube-system                kube-proxy-k8s3                                          1/1     Running     0               22d
kube-system                kube-scheduler-k8s1                                      1/1     Running     13 (3d6h ago)   135d
kube-system                rke2-coredns-rke2-coredns-84b9cb946c-lbp4f               1/1     Running     0               24d
kube-system                rke2-coredns-rke2-coredns-84b9cb946c-mtc6p               1/1     Running     2 (24d ago)     146d
kube-system                rke2-coredns-rke2-coredns-autoscaler-b49765765-vl5vt     1/1     Running     2 (24d ago)     146d
kube-system                rke2-ingress-nginx-controller-c45jp                      1/1     Running     0               23d
kube-system                rke2-ingress-nginx-controller-xls2p                      1/1     Running     4 (24d ago)     146d
kube-system                rke2-metrics-server-655477f655-hmhft                     1/1     Running     0               24d
kube-system                rke2-snapshot-controller-59cc9cd8f4-srpxx                1/1     Running     10 (3d6h ago)   146d
kube-system                rke2-snapshot-validation-webhook-54c5989b65-v2fhn        1/1     Running     1 (24d ago)     32d
local-path-storage         local-path-provisioner-65d5864f8d-5wc8t                  1/1     Running     0               18d
loki                       loki-6-1732536120-chunks-cache-0                         2/2     Running     0               11d
loki                       loki-6-1732536120-gateway-c8575d878-h4tlp                1/1     Running     0               11d
loki                       loki-6-1732536120-results-cache-0                        2/2     Running     0               11d
loki                       loki-backend-0                                           2/2     Running     0               11d
loki                       loki-backend-1                                           2/2     Running     0               11d
loki                       loki-canary-7jhn6                                        1/1     Running     0               11d
loki                       loki-canary-rt56h                                        1/1     Running     0               11d
loki                       loki-read-f4fcb9bb-2zmt7                                 1/1     Running     0               11d
loki                       loki-read-f4fcb9bb-477xn                                 1/1     Running     0               11d
loki                       loki-write-0                                             1/1     Running     0               11d
loki                       loki-write-1                                             1/1     Running     0               11d
metallb-system             metallb-0-1730000084-controller-7cf9b5cd5c-pq7f2         1/1     Running     0               24d
metallb-system             metallb-0-1730000084-speaker-45t9d                       4/4     Running     8 (24d ago)     41d
metallb-system             metallb-0-1730000084-speaker-rs5s7                       4/4     Running     0               23d
tigera-operator            tigera-operator-795545875-lzs2r                          1/1     Running     21 (3d6h ago)   146d

The output of kubectl version is as follows.

$ kubectl version
Client Version: v1.28.10+rke2r1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.10+rke2r1

Preparing the files

Here are the files I prepared.

├── 404.html
├── customize.conf
├── default.conf
├── health.html
├── kustomization.yaml
├── nginx-deploy.yaml
├── nginx-namespace.yaml
└── nginx-service.yaml

First, kustomization.yaml:

namespace: nginx
resources:
- nginx-namespace.yaml
- nginx-deploy.yaml
- nginx-service.yaml
configMapGenerator:
- name: nginx-conf
  files:
  - default.conf
  - customize.conf
- name: nginx-html
  files:
  - health.html
  - 404.html
  options:
    disableNameSuffixHash: true
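
Before applying anything, you can preview what the generators will emit with kubectl kustomize. The hashed ConfigMap name shows up twice: once on the generated ConfigMap itself, and once in the Deployment's volume, where kustomize rewrites the reference to match. The output below is a sketch; your hash suffix will differ depending on the file contents:

$ kubectl kustomize . | grep "nginx-conf-"
  name: nginx-conf-tgt8c95t6g
          name: nginx-conf-tgt8c95t6g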

With this, two ConfigMaps get deployed, as shown below.

$ kubectl get cm -n nginx
NAME                    DATA   AGE
kube-root-ca.crt        1      8h
nginx-conf-tgt8c95t6g   2      7h18m
nginx-html              2      7h12m

When the NGINX configuration files default.conf or customize.conf change, the ConfigMap name changes on every apply and the Pods mounting it are restarted. When content such as health.html or 404.html changes, on the other hand, the name stays the same because disableNameSuffixHash is set to true, so the change is picked up without a Pod restart.
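
So the typical loop for a config change looks something like this (a sketch; the new hash suffix will of course differ each time):

$ vi default.conf                                    # edit the NGINX config
$ kubectl apply -k .                                 # generates nginx-conf with a new hash suffix
$ kubectl rollout status deployment/nginx -n nginx   # the Deployment rolls to pick up the new name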

Here is what I used for default.conf and customize.conf.

$ cat default.conf
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    access_log /var/log/nginx/access.log json;

    real_ip_header    X-Forwarded-For;
    real_ip_recursive on;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    error_page 403 404 500 502 503 504 /errors/404.html;
    location = /errors/404.html {
        root   /usr/share/nginx/html;
    }
}
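
One caveat worth noting: real_ip_header only takes effect for requests arriving from addresses trusted via set_real_ip_from, which this file doesn't declare. If you actually rely on it, something like the line below would be needed (the CIDR is an assumption; 10.42.0.0/16 is the RKE2 default pod network, so adjust to your environment):

set_real_ip_from 10.42.0.0/16;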

customize.conf defines the JSON format used for the access log.

$ cat customize.conf 
log_format json escape=json '{'
    '"time": "$time_iso8601",'
    '"client-ip": "$remote_addr",'
    '"vhost": "$host",'
    '"user": "$remote_user",'
    '"status": "$status",'
    '"protocol": "$server_protocol",'
    '"method": "$request_method",'
    '"path": "$request_uri",'
    '"req": "$request",'
    '"size": "$body_bytes_sent",'
    '"reqtime": "$request_time",'
    '"apptime": "$upstream_response_time",'
    '"ua": "$http_user_agent",'
    '"forwardedfor": "$http_x_forwarded_for",'
    '"forwardedproto": "$http_x_forwarded_proto",'
    '"referrer": "$http_referer"'
'}';
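
Since a broken config will crash-loop the Pod on the next rollout, it can be worth syntax-checking the rendered configuration. One quick way, once a Pod is running, is to run nginx -t inside it:

$ kubectl exec -n nginx deploy/nginx -- nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful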

health.html and 404.html are just plain HTML, so I'll omit them.
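
For reference, minimal placeholders along these lines would be enough (these are my assumption, not the actual files from this post):

$ cat health.html
<!DOCTYPE html><html><body>OK</body></html>
$ cat 404.html
<!DOCTYPE html><html><body>404 Not Found</body></html>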

nginx-namespace.yaml
The name here gets overwritten by the namespace: nginx entry in kustomization.yaml.

apiVersion: v1
kind: Namespace
metadata:
  name: default

nginx-deploy.yaml
The nginx-conf reference is rewritten to the hashed name when deploying with kustomize.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        livenessProbe:
          httpGet:
            port: 80
            path: /health.html
          failureThreshold: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            port: 80
            path: /health.html
          failureThreshold: 1
          periodSeconds: 1
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d/
        - name: nginx-html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
      - name: nginx-html
        configMap:
          name: nginx-html
          items:
          - key: health.html
            path: health.html
          - key: 404.html
            path: errors/404.html
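
Because of the items mapping, 404.html lands in an errors/ subdirectory of the mount while health.html sits at the top level. Once the Pod is up, the layout can be confirmed with something like this (output is a sketch):

$ kubectl exec -n nginx deploy/nginx -- ls /usr/share/nginx/html /usr/share/nginx/html/errors
/usr/share/nginx/html:
errors
health.html

/usr/share/nginx/html/errors:
404.html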

nginx-service.yaml
NGINX is reached through a MetalLB load balancer. The client source IP wasn't coming through correctly, so, following this article, I added externalTrafficPolicy: Local.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
  loadBalancerIP: 192.168.0.230
  externalTrafficPolicy: Local
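
As a side note, spec.loadBalancerIP has been deprecated since Kubernetes 1.24, and recent MetalLB versions recommend requesting an address via an annotation instead. The equivalent (with spec.loadBalancerIP removed) would look something like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.0.230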

Deploying

Now let's deploy.

$ kubectl apply -k .
namespace/nginx created
configmap/nginx-conf-dtd65thk6m created
configmap/nginx-html created
service/nginx created
deployment.apps/nginx created

Everything was deployed.

$ kubectl get pod,svc,cm -n nginx
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7f5948f749-jl244   1/1     Running   0          73m

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
service/nginx   LoadBalancer   10.43.28.107   192.168.0.230   80:32038/TCP   73m

NAME                              DATA   AGE
configmap/kube-root-ca.crt        1      73m
configmap/nginx-conf-dtd65thk6m   2      73m
configmap/nginx-html              2      73m

Testing it out

Let's access it from a browser.

 kustomize_nginx_01.png
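
It can also be checked from the command line: the health page should return 200, and a nonexistent path should be served by the custom error page with a 404 (the output below is what I would expect, not a capture from this environment):

$ curl -s -o /dev/null -w "%{http_code}\n" http://192.168.0.230/health.html
200
$ curl -s -o /dev/null -w "%{http_code}\n" http://192.168.0.230/no-such-page
404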

Next, let's check in Loki whether the access log matches the configured format.

 kustomize_nginx_02.png

Adding a json parser makes it easier to read.

 kustomize_nginx_03.png
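
For reference, the query is LogQL along these lines; the stream selector labels depend on how fluent-bit ships the logs, so treat them as an assumption:

{namespace="nginx"} | json | status="404"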

In addition to the access log fields themselves, information such as the container image gets attached too.

Bonus

If you fiddle with the config repeatedly, old, no-longer-used versions of the ConfigMap pile up, as shown below.

$ kubectl get cm
NAME                    DATA   AGE
kube-root-ca.crt        1      63m
nginx-conf-h86cdcch7b   2      45m
nginx-conf-tgt8c95t6g   2      9m38s
nginx-health            2      63m
nginx-html              2      3m15s

These don't seem to get deleted automatically. From what I could find, you remove them with prune. That command alone didn't quite work for me, though, so, referring to this article, I ran the following and got exactly the result I wanted.

$ kubectl kustomize . | kubectl apply -f - --prune --all --prune-allowlist=core/v1/ConfigMap
namespace/nginx unchanged
configmap/nginx-conf-tgt8c95t6g unchanged
configmap/nginx-html unchanged
service/nginx unchanged
deployment.apps/nginx unchanged
configmap/nginx-conf-h86cdcch7b pruned
configmap/nginx-health pruned
$ kubectl get cm
NAME                    DATA   AGE
kube-root-ca.crt        1      157m
nginx-conf-tgt8c95t6g   2      104m
nginx-html              2      97m
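
One way to make pruning less of a blunt instrument than --all is to stamp everything this kustomization produces with a common label and prune against that selector instead. A sketch (the label key and value are my own choice); first add this to kustomization.yaml:

labels:
- pairs:
    app.kubernetes.io/part-of: nginx-kustomize
  includeSelectors: false

Then prune with the matching selector:

$ kubectl kustomize . | kubectl apply -f - --prune -l app.kubernetes.io/part-of=nginx-kustomize --prune-allowlist=core/v1/ConfigMap

Note that only objects created after the label was added will match the selector, so this helps going forward rather than retroactively.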