CKA MOCK Q-12 INGRESS

You need to create an Ingress resource named whisper in the sound-zone namespace. The Ingress should route traffic to a Service called soundserver-svc and handle requests coming to http://mydemo.local/whisper, forwarding traffic to port 9090 of the Service.

To test that everything is working, run the following command. It should return 200 if the setup is correct:

curl -o /dev/null -s -w "%{http_code}\n" http://mydemo.local/whisper


Step 1: Create Namespace

kubectl create ns sound-zone
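
Quick sanity check that the namespace exists before moving on:

kubectl get ns sound-zone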

Step 2: Install NGINX Ingress Controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml

kubectl get pods -n ingress-nginx -l app.kubernetes.io/component=controller
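
Optionally, block until the controller Pod reports Ready before continuing (the label selector matches the labels applied by the official manifest above):

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s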


Expose the ingress-nginx controller Service via NodePort (since we're running on Killercoda):

kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec": {"type": "NodePort"}}'

kubectl get svc ingress-nginx-controller -n ingress-nginx
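
Optionally, capture the node IP and the controller's HTTP NodePort into shell variables for the final curl test. This is just one way to do it; the jsonpath expressions assume the standard Service layout from the manifest above (port 80 for HTTP) and that the node's InternalIP is reachable, as it is on Killercoda:

NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}')
echo "$NODE_IP:$NODE_PORT"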

Step 3: Create a Dummy Deployment + Service

kubectl create deploy soundserver --image=ealen/echo-server -n sound-zone

kubectl expose deploy soundserver --port=9090 --target-port=80 \
  --name=soundserver-svc -n sound-zone
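
Step 4: Create the Ingress Resource

A minimal sketch of the requested Ingress, assuming the controller installed above registers the default ingress class name nginx (verify with kubectl get ingressclass). pathType: Prefix is used so that /whisper and any sub-paths match the rule:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whisper
  namespace: sound-zone
spec:
  ingressClassName: nginx
  rules:
  - host: mydemo.local
    http:
      paths:
      - path: /whisper
        pathType: Prefix
        backend:
          service:
            name: soundserver-svc
            port:
              number: 9090
EOF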

echo "<k get nodes -o wide > mydemo.local" | sudo tee -a /etc/hosts

curl -o /dev/null -s -w "%{http_code}\n" -H "Host: mydemo.local" http://nodeip:nodeport/whisper
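
The command should print 200. A 404 from the controller usually means the Host header or the path didn't match the Ingress rule; kubectl describe is a quick way to confirm the wiring:

kubectl describe ingress whisper -n sound-zone
# Rules should show mydemo.local /whisper -> soundserver-svc:9090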
