
CKA 2025 MOCK Q-11 TLS

 

You’re managing a web server running in a Kubernetes Deployment called secure-site, located in the web-zone namespace. Its NGINX configuration comes from a ConfigMap called site-tls-config.

🧩 Task:

Update the ConfigMap (site-tls-config) to ensure that only TLS version 1.3 is accepted by the server. Older versions like TLS 1.2 should be blocked completely.

Once the ConfigMap is updated:

  • Make sure the secure-site deployment picks up the changes. You might need to restart or roll it out again.
  • Test it with this command:


curl --tls-max 1.2 -k https://neokloud.in:32443

The command should fail, because the server should now reject anything below TLSv1.3.
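One possible approach, sketched against the resource names created by the setup script below (a restart is required because nginx.conf is mounted via subPath, so running Pods will not pick up the ConfigMap change on their own):

# Change the ssl_protocols directive so that only TLSv1.3 is allowed:
#   ssl_protocols TLSv1.3;
kubectl edit configmap site-tls-config -n web-zone

# Restart the Deployment so the Pods reload the updated nginx.conf
kubectl rollout restart deployment secure-site -n web-zone
kubectl rollout status deployment secure-site -n web-zone

# This handshake should now be rejected...
curl --tls-max 1.2 -k https://neokloud.in:32443
# ...while TLS 1.3 should still succeed
curl --tls-max 1.3 -k https://neokloud.in:32443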

The script below recreates the lab environment for this question:

echo "[1/8] Creating namespace: web-zone"
kubectl create ns web-zone || true

echo "[2/8] Generating TLS certificate for neokloud.in"
mkdir -p /tmp/tls && cd /tmp/tls
openssl req -x509 -nodes -days 365 \
  -newkey rsa:2048 \
  -keyout tls.key \
  -out tls.crt \
  -subj "/CN=neokloud.in"

echo "[3/8] Creating TLS secret in web-zone"
kubectl create secret tls web-cert \
  --cert=/tmp/tls/tls.crt --key=/tmp/tls/tls.key \
  -n web-zone --dry-run=client -o yaml | kubectl apply -f -


echo "[4/8] Creating ConfigMap with TLS version logic"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: site-tls-config
  namespace: web-zone
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 443 ssl;
        server_name neokloud.in;

        ssl_certificate /etc/nginx/tls/tls.crt;
        ssl_certificate_key /etc/nginx/tls/tls.key;

        # Deliberately permissive: the task is to restrict this to TLSv1.3 only
        ssl_protocols TLSv1.2 TLSv1.3;

        location / {
          if (\$ssl_protocol = "TLSv1.2") {
            return 200 "TLS 1.2 Connection Successful!\n";
          }
          if (\$ssl_protocol = "TLSv1.3") {
            return 200 "TLS 1.3 Connection Successful!\n";
          }
          return 426 "Unsupported TLS Version\n";
        }
      }
    }
EOF


echo "[5/8] Deploying NGINX with TLS config and secret"
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-site
  namespace: web-zone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secure-site
  template:
    metadata:
      labels:
        app: secure-site
    spec:
      volumes:
        - name: config
          configMap:
            name: site-tls-config
            items:
              - key: nginx.conf
                path: nginx.conf
        - name: tls-vol
          secret:
            secretName: web-cert
      containers:
        - name: nginx
          image: nginx:1.25
          volumeMounts:
            - name: config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: tls-vol
              mountPath: /etc/nginx/tls
          ports:
            - containerPort: 443
EOF


echo "[6/8] Creating NodePort service on 32443"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: secure-svc
  namespace: web-zone
spec:
  selector:
    app: secure-site
  type: NodePort
  ports:
    - port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 32443
EOF


echo "[7/8] Waiting for Pod to start..."
sleep 5
POD_STATUS=$(kubectl get po -n web-zone -l app=secure-site -o jsonpath='{.items[0].status.phase}')
if [ "$POD_STATUS" != "Running" ]; then
  echo "Pod not running yet, restarting deployment just in case..."
  kubectl rollout restart deployment secure-site -n web-zone
  sleep 5

fi
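# Note: a fixed sleep plus a one-off phase check can be flaky. A more
# deterministic alternative (sketch; the 120s timeout is arbitrary) would be:
#   kubectl wait --for=condition=Ready pod -l app=secure-site -n web-zone --timeout=120s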


echo "[8/8] Adding neokloud.in to /etc/hosts"
# Column 6 of 'kubectl get nodes -o wide' is the node's INTERNAL-IP
NODEIP=$(kubectl get nodes -o wide | awk 'NR==2{print $6}')
grep -q neokloud.in /etc/hosts || echo "$NODEIP neokloud.in" >> /etc/hosts


echo "[✅ Done] TLS-enabled NGINX running on https://neokloud.in:32443"
echo "[ℹ️  Test with:]"
echo "curl --tls-max 1.2 -k https://neokloud.in:32443"
echo "curl --tls-max 1.3 -k https://neokloud.in:32443"

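If curl's output is ambiguous, openssl s_client (assuming it is available on the machine you test from) shows the negotiated protocol directly:

# Should be rejected once only TLSv1.3 is allowed
openssl s_client -connect neokloud.in:32443 -tls1_2 </dev/null
# Should connect and report TLSv1.3 as the protocol
openssl s_client -connect neokloud.in:32443 -tls1_3 </dev/null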
