
update tags, add run-one cronjobs to scripts

Josh Bicking, 10 months ago
commit c3eb2abc4a
9 changed files with 301 additions and 87 deletions
  1. README.md (+51, -9)
  2. cloudflared.yaml (+4, -4)
  3. lidarr_empty_folders.py (+23, -0)
  4. nextcloud/values.yaml (+149, -69)
  5. plex.yaml (+4, -1)
  6. postgres/values.yaml (+5, -0)
  7. prowlarr.yaml (+1, -1)
  8. seedbox_sync.py (+6, -3)
  9. whoami.yaml (+58, -0)

README.md (+51, -9)

@@ -6,21 +6,37 @@ _Below is mostly braindumps & rough commands for creating/tweaking these service
 
 # k3s
 
+## installing k3s
+
 ```
 curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --cluster-init" sh -
 export NODE_TOKEN=$(cat /var/lib/rancher/k3s/server/node-token)
 curl -sfL https://get.k3s.io | K3S_TOKEN=$NODE_TOKEN INSTALL_K3S_EXEC="server --server https://192.168.122.87:6443" INSTALL_K3S_VERSION=v1.23.6+k3s1 sh -
 ```
 
+## upgrading k3s
+
+TODO
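+
+Untested sketch: per the k3s docs, re-running the install script with a pinned `INSTALL_K3S_VERSION` upgrades a node in place. Use the same `INSTALL_K3S_EXEC` the node was installed with, and do server nodes one at a time (the version below is only an example):
+
+```
+curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.4+k3s1 INSTALL_K3S_EXEC="server --server https://192.168.122.87:6443" sh -
+```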
 
 # rook
 
+## installing rook
+
 ```
 KUBECONFIG=/etc/rancher/k3s/k3s.yaml helm upgrade --install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph:1.9.2 -f rook-ceph-values.yaml
 
 KUBECONFIG=/etc/rancher/k3s/k3s.yaml helm install --create-namespace --namespace rook-ceph rook-ceph-cluster --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster:1.9.2 -f rook-ceph-cluster-values.yaml
 ```
 
+## upgrading rook
+
+TODO
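+
+Rough sketch, not yet run here: the rook helm upgrade path is bumping both chart versions with the same values files, operator chart first, then the cluster chart (target version below is a placeholder):
+
+```
+KUBECONFIG=/etc/rancher/k3s/k3s.yaml helm upgrade --namespace rook-ceph rook-ceph rook-release/rook-ceph --version 1.9.3 -f rook-ceph-values.yaml
+
+KUBECONFIG=/etc/rancher/k3s/k3s.yaml helm upgrade --namespace rook-ceph rook-ceph-cluster --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster --version 1.9.3 -f rook-ceph-cluster-values.yaml
+```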
+
+## Finding the physical device for an OSD
+
+ ceph osd metadata <id>
+
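+If `jq` is available, the useful fields can be pulled out directly (field names may differ between Ceph releases):
+
+ ceph osd metadata <id> | jq '{hostname, devices}'
+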
+
 ## Sharing 1 CephFS instance between multiple PVCs
 
 https://github.com/rook/rook/blob/677d3fa47f21b07245e2e4ab6cc964eb44223c48/Documentation/Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage.md
@@ -29,21 +45,38 @@ Create CephFilesystem
 Create SC backed by Filesystem & Pool
 Ensure the CSI subvolumegroup was created. If not, `ceph fs subvolumegroup create <fsname> csi`
 Create PVC without a specified PV: PV will be auto-created
-Set created PV to ReclaimPolicy: Retain
+_Super important_: Set created PV to ReclaimPolicy: Retain (patch command below)
 Create a new, better-named PVC
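+
+For the Retain step, a patch like this works (the PV name is whatever got auto-created):
+
+```
+kubectl patch pv <pv name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+```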
 
-If important data is on CephBlockPool-backed PVCs, don't forget to set the PV's persistentVolumeReclaimPolicy to `Retain`.
-
 ## tolerations
 If your setup divides k8s nodes into ceph & non-ceph nodes (using a label, like `storage-node=true`), ensure labels & a toleration are set properly (`storage-node=false`, with a toleration checking for `storage-node`) so non-ceph nodes still run PV plugin Daemonsets.
 
+Otherwise, any pod scheduled on a non-ceph node won't be able to mount ceph-backed PVCs.
+
+See rook-ceph-cluster-values.yaml->cephClusterSpec->placement for an example.
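+
+Roughly, the toleration itself is just the following; it ends up under the CSI plugin tolerations and cephClusterSpec placement sections of the two values files:
+
+```
+tolerations:
+- key: storage-node
+  operator: Exists
+```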
+
 ## CephFS w/ EC backing pool
 
-EC-backed filesystems require a regular replicated pool as a default
+EC-backed filesystems require a regular replicated pool as a default.
 
-https://lists.ceph.io/hyperkitty/list/[email protected]/thread/Y6T7OVTC4XAAWMFTK3MYGC7TB6G47OCH/
+https://lists.ceph.io/hyperkitty/list/[email protected]/thread/QI42CLL3GJ6G7PZEMAD3CXBHA5BNWSYS/
 https://tracker.ceph.com/issues/42450
 
+Then setfattr a directory on the filesystem with an EC-backed pool. Any new data written to the folder will go to the EC-backed pool.
+
+setfattr -n ceph.dir.layout.pool -v cephfs-erasurecoded /mnt/cephfs/my-erasure-coded-dir
+
+https://docs.ceph.com/en/quincy/cephfs/file-layouts/
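+
+The EC pool has to be attached to the filesystem as a data pool before the setfattr will take (rook normally handles this via the CephFilesystem CR's dataPools; names here are placeholders):
+
+    ceph osd pool set cephfs-erasurecoded allow_ec_overwrites true
+    ceph fs add_data_pool <fsname> cephfs-erasurecoded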
+
+## Crush rules for each pool
+
+ for i in `ceph osd pool ls`; do echo $i: `ceph osd pool get $i crush_rule`; done
+
+On EC-backed pools, device class information is in the erasure code profile, not the crush rule.
+https://docs.ceph.com/en/latest/dev/erasure-coded-pool/
+
+ for i in `ceph osd erasure-code-profile ls`; do echo $i: `ceph osd erasure-code-profile get $i`; done
+
 
 ## ObjectStore
 
@@ -156,12 +189,20 @@ $ helm upgrade -i nvdp nvdp/nvidia-device-plugin ... --set-file config.map.confi
 
 # ceph client for cephfs volumes
 
+## New method
+
+https://docs.ceph.com/en/latest/man/8/mount.ceph/
+
+```
+sudo mount -t ceph user@<cluster FSID>.<filesystem name>=/ /mnt/ceph -o secret=<secret key>,x-systemd.requires=ceph.target,x-systemd.mount-timeout=5min,_netdev,mon_addr=192.168.1.1
 ```
-sudo apt install ceph-fuse
 
+## Older method (stopped working for me around Pacific)
+
+```
 sudo vi /etc/fstab
 
-192.168.1.1.,192.168.1.2:/    /ceph   ceph    name=admin,secret=<secret key>,x-systemd.mount-timeout=5min,_netdev,mds_namespace=data
+192.168.1.1,192.168.1.2:/    /ceph   ceph    name=admin,secret=<secret key>,x-systemd.mount-timeout=5min,_netdev,mds_namespace=data
 ```
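+
+The new-style device string works in fstab too (untested here; FSID, fs name, and key are placeholders):
+
+```
+user@<cluster FSID>.<filesystem name>=/    /mnt/ceph   ceph    secret=<secret key>,mon_addr=192.168.1.1,x-systemd.mount-timeout=5min,_netdev   0 0
+```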
 
 
@@ -170,9 +211,9 @@ https://unix.stackexchange.com/questions/554908/disable-spectre-and-meltdown-mit
 
 # Monitoring
 
-https://rpi4cluster.com/monitoring/k3s-grafana/
+https://rpi4cluster.com/monitoring/monitor-intro/, + what's in the `monitoring` folder.
 
-Tried https://github.com/prometheus-operator/kube-prometheus. The only way to persist dashboards is to add them to Jsonnet & apply the generated configmap.
+Tried https://github.com/prometheus-operator/kube-prometheus. The only way to persist dashboards is to add them to Jsonnet & apply the generated configmap. I'm not ready for that kind of IaC commitment in a homelab.
 
 # Exposing internal services
 
@@ -192,3 +233,4 @@ Service will then be available on port 1234 of any k8s node.
 - deluge
 - gogs ssh ingress (can't go through cloudflare without cloudflared on the client)
 - Something better than `expose` for accessing internal services
+- replicated_ssd crush rule never resolves (or didn't on `data-metadata`)

cloudflared.yaml (+4, -4)

@@ -85,8 +85,8 @@ data:
     - hostname: vaultwarden.jibby.org
       path: /notifications/hub.*
       service: http://vaultwarden-service.vaultwarden.svc.cluster.local:3012
-    - hostname: mastodon.jibby.org
-      service: http://mastodon-service.mastodon.svc.cluster.local:3000
-    - hostname: streaming-mastodon.jibby.org
-      service: http://mastodon-service.mastodon.svc.cluster.local:4000
+    # - hostname: mastodon.jibby.org
+    #   service: http://mastodon-service.mastodon.svc.cluster.local:3000
+    # - hostname: streaming-mastodon.jibby.org
+    #   service: http://mastodon-service.mastodon.svc.cluster.local:4000
     - service: http_status:404

lidarr_empty_folders.py (+23, -0)

@@ -0,0 +1,23 @@
+# */1 * * * * /usr/bin/run-one /usr/bin/python3 /path/to/lidarr_empty_folders.py <lidarr IP>:8686 <API key> /path/to/Music/ 2>&1 | /usr/bin/logger -t lidarr_empty_folders
+
+import requests
+import os
+import sys
+if len(sys.argv) != 4:
+    print("One or more args are undefined")
+    sys.exit(1)
+
+lidarr_server, lidarr_api_key, music_folder = sys.argv[1:4]
+
+resp = requests.get(
+    f"http://{lidarr_server}/api/v1/artist",
+    headers={"Authorization": f"Bearer {lidarr_api_key}"}
+    )
+artists = resp.json()
+
+for artist in artists:
+    artist_name = artist.get("artistName")
+    artist_path = os.path.join(music_folder, artist_name)
+    if ('/' not in artist_name) and (not os.path.exists(artist_path)):
+        print("Creating ", artist_path)
+        os.mkdir(artist_path)

nextcloud/values.yaml (+149, -69)

@@ -1,12 +1,28 @@
 # helm repo add nextcloud https://nextcloud.github.io/helm/
-# helm upgrade --install nextcloud nextcloud/nextcloud -n nextcloud -f values.yaml --version 2.14.4
+# helm upgrade --install nextcloud nextcloud/nextcloud -n nextcloud -f values.yaml --version 3.5.14
+
+# Upgrading:
+# su -s /bin/bash - www-data
+# cd /var/www/html
+# PHP_MEMORY_LIMIT=512M ./occ upgrade
+
+# Forwarding IPs requires:
+#
+#  'trusted_proxies' =>
+#  array (
+#    0 => '10.42.0.0/16',
+#    1 => '127.0.0.1',
+#  ),
+#  'overwritecondaddr' => '^10\.42\.[0-9]+\.[0-9]+$',
+#
+# For whatever your ingress is.
 
 ## Official nextcloud image version
 ## ref: https://hub.docker.com/r/library/nextcloud/tags/
 ##
 image:
   repository: nextcloud
-  tag: 24.0.1-apache
+  tag: 26.0.3-apache
   pullPolicy: IfNotPresent
   # pullSecrets:
   #   - myRegistrKeySecretName
@@ -15,6 +31,7 @@ nameOverride: ""
 fullnameOverride: ""
 podAnnotations: {}
 deploymentAnnotations: {}
+deploymentLabels: {}
 
 # Number of replicas to be deployed
 replicaCount: 1
@@ -32,8 +49,8 @@ ingress:
   #  nginx.ingress.kubernetes.io/server-snippet: |-
   #    server_tokens off;
   #    proxy_hide_header X-Powered-By;
-
-  #    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
+  #    rewrite ^/.well-known/webfinger /index.php/.well-known/webfinger last;
+  #    rewrite ^/.well-known/nodeinfo /index.php/.well-known/nodeinfo last;
   #    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
   #    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
   #    location = /.well-known/carddav {
@@ -69,7 +86,7 @@ lifecycle: {}
   # preStopCommand: []
 
 phpClientHttpsFix:
-  enabled: true
+  enabled: false
   protocol: https
 
 nextcloud:
@@ -80,14 +97,14 @@ nextcloud:
   existingSecret:
     enabled: false
     # secretName: nameofsecret
-    # usernameKey: username
-    # passwordKey: password
-    # tokenKey: serverinfo_token
-    # smtpUsernameKey: smtp_username
-    # smtpPasswordKey: smtp_password
+    # usernameKey: nextcloud-username
+    # passwordKey: nextcloud-password
+    # tokenKey: nextcloud-token
+    # smtpUsernameKey: smtp-username
+    # smtpPasswordKey: smtp-password
   update: 0
   # If web server is not binding default port, you can define it
-  # containerPort: 8080
+  containerPort: 80
   datadir: /var/www/html/data
   persistence:
     subPath:
@@ -170,10 +187,6 @@ nextcloud:
         secretKeyRef:
           name: redis-client-secret
           key: REDIS_HOST_PASSWORD
-    # This will only set apache's RemoteIPTrustedProxy, not
-    # RemoteIPInternalProxy. Local IPs will not be passed through.
-    - name: TRUSTED_PROXIES
-      value: "10.42.0.0/16,127.0.0.1"
 
   # Extra init containers that runs before pods start.
   extraInitContainers: []
@@ -181,6 +194,15 @@ nextcloud:
   #    image: busybox
   #    command: ['do', 'something']
 
+  # Extra sidecar containers.
+  extraSidecarContainers: []
+  #  - name: nextcloud-logger
+  #    image: busybox
+  #    command: [/bin/sh, -c, 'while ! test -f "/run/nextcloud/data/nextcloud.log"; do sleep 1; done; tail -n+1 -f /run/nextcloud/data/nextcloud.log']
+  #    volumeMounts:
+  #    - name: nextcloud-data
+  #      mountPath: /run/nextcloud/data
+
   # Extra mounts for the pods. Example shown is for connecting a legacy NFS volume
   # to NextCloud pods in Kubernetes. This can then be configured in External Storage
   extraVolumes:
@@ -193,12 +215,20 @@ nextcloud:
   #  - name: nfs
   #    mountPath: "/legacy_data"
 
-  # Extra secuurityContext parameters. For example you may need to define runAsNonRoot directive
-  # extraSecurityContext:
-  #   runAsUser: "33"
-  #   runAsGroup: "33"
+  # Set securityContext parameters for the nextcloud CONTAINER only (will not affect nginx container).
+  # For example, you may need to define runAsNonRoot directive
+  securityContext: {}
+  #   runAsUser: 33
+  #   runAsGroup: 33
   #   runAsNonRoot: true
-  #   readOnlyRootFilesystem: true
+  #   readOnlyRootFilesystem: false
+
+  # Set securityContext parameters for the entire pod. For example, you may need to define runAsNonRoot directive
+  podSecurityContext: {}
+  #   runAsUser: 33
+  #   runAsGroup: 33
+  #   runAsNonRoot: true
+  #   readOnlyRootFilesystem: false
 
 nginx:
   ## You need to set an fpm version of the image for nextcloud if you want to use nginx!
@@ -216,13 +246,18 @@ nginx:
 
   resources: {}
 
+  # Set nginx container securityContext parameters. For example, you may need to define runAsNonRoot directive
+  securityContext: {}
+  # the nginx alpine container default user is 82
+  #   runAsUser: 82
+  #   runAsGroup: 33
+  #   runAsNonRoot: true
+  #   readOnlyRootFilesystem: true
+
 internalDatabase:
   enabled: false
   name: nextcloud
 
-##
-## External database configuration
-##
 externalDatabase:
   enabled: true
 
@@ -250,15 +285,21 @@ externalDatabase:
 
 ##
 ## MariaDB chart configuration
+## ref: https://github.com/bitnami/charts/tree/main/bitnami/mariadb
 ##
 mariadb:
-  ## Whether to deploy a mariadb server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters
+  ## Whether to deploy a mariadb server from the bitnami mariadb helm chart
+  # to satisfy the application's database requirements. If you want to deploy this bitnami mariadb, set this and externalDatabase to true.
+  # To use an ALREADY DEPLOYED mariadb database, set this to false and configure the externalDatabase parameters
   enabled: false
 
   auth:
     database: nextcloud
     username: nextcloud
     password: changeme
+    # Use existing secret (auth.rootPassword, auth.password, and auth.replicationPassword will be ignored).
+    # secret must contain the keys mariadb-root-password, mariadb-replication-password and mariadb-password
+    existingSecret: ""
 
   architecture: standalone
 
@@ -268,30 +309,45 @@ mariadb:
   primary:
     persistence:
       enabled: false
+      # Use an existing Persistent Volume Claim (must be created ahead of time)
+      # existingClaim: ""
       # storageClass: ""
       accessMode: ReadWriteOnce
       size: 8Gi
 
 ##
 ## PostgreSQL chart configuration
-## for more options see https://github.com/bitnami/charts/tree/master/bitnami/postgresql
+## for more options see https://github.com/bitnami/charts/tree/main/bitnami/postgresql
 ##
 postgresql:
   enabled: false
   global:
     postgresql:
+      # global.postgresql.auth overrides postgresql.auth
       auth:
         username: nextcloud
         password: changeme
         database: nextcloud
+        # Name of existing secret to use for PostgreSQL credentials.
+        # auth.postgresPassword, auth.password, and auth.replicationPassword will be ignored and picked up from this secret.
+        # secret might also contain the key ldap-password if LDAP is enabled.
+        # ldap.bind_password will be ignored and picked from this secret in this case.
+        existingSecret: ""
+        # Names of keys in existing secret to use for PostgreSQL credentials
+        secretKeys:
+          adminPasswordKey: ""
+          userPasswordKey: ""
+          replicationPasswordKey: ""
   primary:
     persistence:
       enabled: false
+      # Use an existing Persistent Volume Claim (must be created ahead of time)
+      # existingClaim: ""
       # storageClass: ""
 
 ##
 ## Redis chart configuration
-## for more options see https://github.com/bitnami/charts/tree/master/bitnami/redis
+## for more options see https://github.com/bitnami/charts/tree/main/bitnami/redis
 ##
 
 redis:
@@ -299,49 +355,34 @@ redis:
   auth:
     enabled: true
     password: 'changeme'
+    # name of an existing secret with Redis® credentials (instead of auth.password), must be created ahead of time
+    existingSecret: ""
+    # Password key to be retrieved from existing secret
+    existingSecretPasswordKey: ""
+
 
 ## Cronjob to execute Nextcloud background tasks
-## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#webcron
+## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#cron
 ##
 cronjob:
-  enabled: true
-  # Nexcloud image is used as default but only curl is needed
-  image: {}
-    # repository: nextcloud
-    # tag: 16.0.3-apache
-    # pullPolicy: IfNotPresent
-    # pullSecrets:
-    #   - myRegistrKeySecretName
-  # Every 5 minutes
-  # Note: Setting this to any any other value than 5 minutes might
-  #  cause issues with how nextcloud background jobs are executed
-  schedule: "*/5 * * * *"
-  annotations: {}
-  # Set curl's insecure option if you use e.g. self-signed certificates
-  curlInsecure: false
-  failedJobsHistoryLimit: 5
-  successfulJobsHistoryLimit: 2
-  # If not set, nextcloud deployment one will be set
-  # resources:
-    # We usually recommend not to specify default resources and to leave this as a conscious
-    # choice for the user. This also increases chances charts run on environments with little
-    # resources, such as Minikube. If you do want to specify resources, uncomment the following
-    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
-    # limits:
-    #  cpu: 100m
-    #  memory: 128Mi
-    # requests:
-    #  cpu: 100m
-    #  memory: 128Mi
-
-  # If not set, nextcloud deployment one will be set
-  # nodeSelector: {}
-
-  # If not set, nextcloud deployment one will be set
-  # tolerations: []
-
-  # If not set, nextcloud deployment one will be set
-  # affinity: {}
+  enabled: false
+
+  ## Cronjob sidecar resource requests and limits
+  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
+  ##
+  resources: {}
+
+  # Allow configuration of lifecycle hooks
+  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
+  lifecycle: {}
+    # postStartCommand: []
+    # preStopCommand: []
+  # Set securityContext parameters. For example, you may need to define runAsNonRoot directive
+  securityContext: {}
+  #   runAsUser: 33
+  #   runAsGroup: 33
+  #   runAsNonRoot: true
+  #   readOnlyRootFilesystem: true
 
 service:
   type: ClusterIP
@@ -400,14 +441,14 @@ resources: {}
 ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
 ##
 livenessProbe:
-  enabled: true
+  enabled: false
   initialDelaySeconds: 10
   periodSeconds: 10
   timeoutSeconds: 5
   failureThreshold: 3
   successThreshold: 1
 readinessProbe:
-  enabled: true
+  enabled: false
   initialDelaySeconds: 10
   periodSeconds: 10
   timeoutSeconds: 5
@@ -451,11 +492,15 @@ metrics:
   # Currently you still need to set the token manually in your nextcloud install
   token: ""
   timeout: 5s
+  # if set to true, exporter skips certificate verification of Nextcloud server.
+  tlsSkipVerify: false
 
   image:
     repository: xperimental/nextcloud-exporter
-    tag: 0.5.1
+    tag: 0.6.0
     pullPolicy: IfNotPresent
+    # pullSecrets:
+    #   - myRegistrKeySecretName
 
   ## Metrics exporter resource requests and limits
   ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
@@ -477,8 +522,43 @@ metrics:
       prometheus.io/port: "9205"
     labels: {}
 
+  ## Prometheus Operator ServiceMonitor configuration
+  ##
+  serviceMonitor:
+    ## @param metrics.serviceMonitor.enabled Create ServiceMonitor Resource for scraping metrics using PrometheusOperator
+    ##
+    enabled: false
+
+    ## @param metrics.serviceMonitor.namespace Namespace in which Prometheus is running
+    ##
+    namespace: ""
+
+    ## @param metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in prometheus.
+    ##
+    jobLabel: ""
+
+    ## @param metrics.serviceMonitor.interval Interval at which metrics should be scraped
+    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
+    ##
+    interval: 30s
+
+    ## @param metrics.serviceMonitor.scrapeTimeout Specify the timeout after which the scrape is ended
+    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
+    ##
+    scrapeTimeout: ""
+
+    ## @param metrics.serviceMonitor.labels Extra labels for the ServiceMonitor
+    ##
+    labels: {}
+
+
 rbac:
   enabled: false
   serviceaccount:
-    create: false
+    create: true
     name: nextcloud-serviceaccount
+    annotations: {}
+
+
+## @param securityContext for nextcloud pod @deprecated Use `nextcloud.podSecurityContext` instead
+securityContext: {}

plex.yaml (+4, -1)

@@ -21,7 +21,10 @@ spec:
     spec:
       containers:
       - name: plex
-        image: linuxserver/plex:amd64-version-1.30.2.6563-3d4dc0cce
+        image: linuxserver/plex:amd64-version-1.32.2.7100-248a2daf0
+        # for debugging
+        # command: ["/bin/sh"]
+        # args: ["-c", "sleep 3600"]
         ports:
         - containerPort: 32400
           name: http-web-svc

postgres/values.yaml (+5, -0)

@@ -1,3 +1,8 @@
+# helm upgrade --install postgres oci://registry-1.docker.io/bitnamicharts/postgresql -n postgres -f values.yaml --version 11.6.7
+
+# Dump a DB from a pod to disk
+# kubectl -n postgres exec -it postgres-postgresql-0 -- bash -c 'PGPASSWORD=<password> pg_dump -U <user> <db name>' > /path/to/db.pgdump
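+
+# Restoring should just be the reverse (plain-format dump; the target DB and user must already exist)
+# kubectl -n postgres exec -i postgres-postgresql-0 -- bash -c 'PGPASSWORD=<password> psql -U <user> <db name>' < /path/to/db.pgdump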
+
 ## @section Global parameters
 ## Please, note that this will override the parameters, including dependencies, configured to use the global value
 ##

prowlarr.yaml (+1, -1)

@@ -16,7 +16,7 @@ spec:
     spec:
       containers:
       - name: prowlarr
-        image: lscr.io/linuxserver/prowlarr:develop-0.4.10.2111-ls77
+        image: lscr.io/linuxserver/prowlarr:develop-1.3.1.2796-ls94
         ports:
         - containerPort: 9696
           name: http-web-svc

seedbox_sync.py (+6, -3)

@@ -1,6 +1,3 @@
-import subprocess
-import sys
-
 # Usage: seedbox_sync.py my-seedbox /seedbox/path/to/data /local/working /local/metadata /local/data
 # Get all file names in HOST:HOST_DATA_PATH
 # Get all previously processed file names in LOCAL_METADATA_PATH
@@ -10,6 +7,12 @@ import sys
 #   Add file name to LOCAL_METADATA_PATH
 #   Move file to LOCAL_DATA_PATH
 
+# */1 * * * * /usr/bin/run-one /usr/bin/python3 /path/to/seedbox_sync.py <seedbox host> /seedbox/path/to/completed/ /local/path/to/downloading /local/path/to/processed /local/path/to/ready 2>&1 | /usr/bin/logger -t seedbox
+
+import subprocess
+import sys
+
+
 if len(sys.argv) != 6:
     print("One or more args are undefined")
     sys.exit(1)

whoami.yaml (+58, -0)

@@ -0,0 +1,58 @@
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: whoami
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: whoami
+  namespace: whoami
+spec:
+  selector:
+    matchLabels:
+      app: whoami
+  replicas: 2
+  template:
+    metadata:
+      labels:
+        app: whoami
+    spec:
+      containers:
+      - name: whoami
+        image: traefik/whoami:v1.8
+        ports:
+        - containerPort: 80
+          name: http-web-svc
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: whoami-service
+  namespace: whoami
+spec:
+  selector:
+    app: whoami
+  type: ClusterIP
+  ports:
+  - name: whoami-port
+    protocol: TCP
+    port: 80
+    targetPort: http-web-svc
+---
+apiVersion: traefik.containo.us/v1alpha1
+kind: IngressRoute
+metadata:
+  name: whoami
+  namespace: whoami
+spec:
+  entryPoints:
+  - websecure
+  routes:
+  - kind: Rule
+    match: Host(`whoami.jibby.org`)
+    services:
+    - kind: Service
+      name: whoami-service
+      port: 80