
Posts

Showing posts from June, 2025

NOT 2024 BUT 2025: CKA MOCK QUESTION 15 PV

A Deployment named web-app in the frontend namespace is already running, but it does not use persistent storage. Your task is to:

- Create a PersistentVolumeClaim (PVC) named web-pvc in the frontend namespace with 250Mi storage.
- Use the existing retained PersistentVolume (there is only one PV).
- Modify the existing Deployment (web-app) to mount the PVC at /usr/share/nginx/html.
- Apply the updated Deployment YAML to the cluster.

Step 1: Create Namespace frontend

apiVersion: v1
kind: Namespace
metadata:
  name: frontend

Step 2: Create PersistentVolume web-pv

apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: "/mnt/data/web"   # This directory must exist on the node

Step 3: Create PersistentVolumeClaim web-pvc

apiVersion: ...
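
The preview cuts off at Step 3; here is a minimal sketch of the remaining pieces, assuming the retained PV above (storageClassName: manual) and a single container in web-app (the volume name below is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-pvc
  namespace: frontend
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual        # match the existing PV so the claim binds to it
  resources:
    requests:
      storage: 250Mi

# Then edit the Deployment (kubectl edit deployment web-app -n frontend) and add,
# under spec.template.spec:
#   volumes:
#   - name: web-data
#     persistentVolumeClaim:
#       claimName: web-pvc
# and under the container:
#   volumeMounts:
#   - name: web-data
#     mountPath: /usr/share/nginx/html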

CKA MOCK Q-12 INGRESS

You need to create an Ingress resource named whisper in the sound-zone namespace. This Ingress should route traffic to a Service called soundserver-svc and handle requests coming to http://mydemo.local/whisper. Make sure it forwards traffic to port 9090 of the Service.

To test if everything is working, run the following command. It should return 200 if the setup is correct:

curl -o /dev/null -s -w "%{http_code}\n" http://mydemo.local/whisper

Step 1: Create Namespace

kubectl create ns sound-zone

Step 2: Install NGINX Ingress Controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
kubectl get pods -n ingress-nginx -l app.kubernetes.io/component=controller

Expose it via NodePort (as we are using Killercoda):

kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec": {"type": "NodePort"}}'
kubectl get svc ingress-nginx-controller -n i...
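
A minimal sketch of the Ingress itself, assuming the NGINX controller installed above (whether you also need a rewrite annotation depends on how soundserver-svc serves the /whisper path):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whisper
  namespace: sound-zone
spec:
  ingressClassName: nginx
  rules:
  - host: mydemo.local
    http:
      paths:
      - path: /whisper
        pathType: Prefix
        backend:
          service:
            name: soundserver-svc
            port:
              number: 9090

# Point mydemo.local at the node/controller IP before running the curl test, e.g.:
# echo "<node-ip> mydemo.local" >> /etc/hosts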

CKA 2025 MOCK Q-11 TLS

You’re managing a web server running in a Kubernetes Deployment called secure-site, located in the web-zone namespace. Its NGINX configuration comes from a ConfigMap called site-tls-config.

Task: Update the ConfigMap (site-tls-config) to ensure that only TLS version 1.3 is accepted by the server. Older versions like TLS 1.2 should be blocked completely. Once the ConfigMap is updated:

- Make sure the secure-site Deployment picks up the changes. You might need to restart or roll it out again.
- Test it with this command:

curl --tls-max 1.2 -k https://neokloud.in:32443

The command should fail, because the server should now reject anything below TLSv1.3.

echo "[1/8] Creating namespace: web-zone"
kubectl create ns web-zone || true

echo "[2/8] Generating TLS certificate for neokloud.in"
mkdir -p /tmp/tls && cd /tmp/tls
openssl req -x509 -nodes -days 365 \
  -newkey rsa:2048 \
  -keyout tls.key \
  -out tls.crt \
  -su...
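
A minimal sketch of the relevant ConfigMap change and rollout, assuming the ConfigMap carries an nginx server block (the key name nginx.conf and the certificate paths are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: site-tls-config
  namespace: web-zone
data:
  nginx.conf: |
    server {
      listen 443 ssl;
      server_name neokloud.in;
      ssl_certificate     /etc/nginx/ssl/tls.crt;
      ssl_certificate_key /etc/nginx/ssl/tls.key;
      ssl_protocols TLSv1.3;          # accept only TLS 1.3; TLS 1.2 and older are rejected
      location / { root /usr/share/nginx/html; }
    }

# nginx does not reload a mounted ConfigMap on its own, so roll the Deployment:
kubectl rollout restart deployment secure-site -n web-zone
kubectl rollout status deployment secure-site -n web-zone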

CKA MOCK-Q 09 NETWORK POLICY

Scenario: Create the Correct Least-Permissive NetworkPolicy

In the project-x namespace, you have two Deployments:

- frontend (label: app=frontend)
- backend (label: app=backend)

Goal: Allow only frontend pods in the same namespace to access the backend on TCP port 8080. No other traffic (ingress/egress) should be allowed.

Option 1: ❌ Overly Restrictive

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: too-restrictive
  namespace: project-x
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

Why it's overly restrictive:
✅ Matches pod labels
❌ Does not allow traffic unless frontend and backend are in the same namespace - no na...
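
Whichever option you choose, it is worth verifying the behaviour from test pods; a rough sketch, assuming the backend pods are reachable through a Service (the name backend below is hypothetical):

# Should succeed: a pod labelled app=frontend in project-x
kubectl run np-test --rm -it --restart=Never -n project-x \
  --labels=app=frontend --image=busybox:1.36 -- \
  wget -qO- -T 2 http://backend:8080

# Should time out: a pod without the frontend label
kubectl run np-test2 --rm -it --restart=Never -n project-x \
  --image=busybox:1.36 -- \
  wget -qO- -T 2 http://backend:8080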

CKA 2025 Mock Question Q-09

You are managing a Kubernetes cluster where a Deployment named ui-app already exists in the namespace dev-lab. The container image used is nginx. Your task is to:

- Update the Deployment ui-app in the dev-lab namespace to ensure the nginx container exposes port 80/tcp explicitly.
- Create a Service named ui-service in the same namespace to expose the pod(s) of the Deployment on port 80/tcp.
- Ensure that the Service is of type NodePort, so the application is accessible externally via any node IP and a static port.

Note: Do not recreate the Deployment; only modify it. Use the standard port range if assigning nodePort manually (30000–32767).

# Create namespace
kubectl create ns dev-lab

# Create Deployment named ui-app with nginx image (no port exposed yet)
cat <<EOF > ui-app-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui-app
  namespace: dev-lab
spec:
  replicas: 1
  selector:
    matchLabels: ...
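
A minimal sketch of the solution steps, assuming a single nginx container in the Deployment (the container name in the comment is illustrative):

# Add the containerPort to the existing Deployment; do not recreate it
kubectl edit deployment ui-app -n dev-lab
#   containers:
#   - name: nginx
#     image: nginx
#     ports:
#     - containerPort: 80
#       protocol: TCP

# Expose the Deployment through a NodePort Service named ui-service
kubectl expose deployment ui-app -n dev-lab --name=ui-service \
  --port=80 --target-port=80 --type=NodePort

# Verify the assigned nodePort
kubectl get svc ui-service -n dev-lab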

CKA-2025-MOCK-07 REQUEST AND LIMIT

Part-1
Create a Deployment named memory-demo in the default namespace with the following specifications:
- It must create 4 replicas (pods).
- Each pod must have a single container using the image nginx:alpine.
- Set a reasonable memory and CPU request, with the limit set to exactly double the request.
- The pods must run successfully without hitting memory scheduling constraints.

Part-2
A Deployment named memory-demo exists in the default namespace with the following situation:
- Four pods should be running, but some pods are not running; troubleshoot and fix the issue.
- Set a reasonable memory and CPU request, with the limit set to exactly double the request, leaving a fair amount of overhead.
- You can scale the Deployment down to 0 and back up.
- The pods must run successfully without hitting memory scheduling constraints.
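
A minimal sketch of the Part-1 manifest; the request sizes below are illustrative and only need to be small enough to schedule on the lab nodes, with each limit exactly double its request:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-demo
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: memory-demo
  template:
    metadata:
      labels:
        app: memory-demo
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        resources:
          requests:
            cpu: "50m"        # illustrative
            memory: "64Mi"    # illustrative
          limits:
            cpu: "100m"       # exactly double the request
            memory: "128Mi"   # exactly double the request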

CKA-2025 MOCK Q-06 PRIORITY

Generate a PriorityClass named urgent-priority for urgent workloads, setting the value to 10 less than the highest current user-defined priority class value. Patch the Deployment mysql-writer in the database namespace to use the urgent-priority class and verify a successful rollout.

Note: Pods from other Deployments in the database namespace should be evicted if there is a resource crunch.

kubectl create namespace database

# redis-cache Deployment
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
  namespace: database
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis-cache
  template:
    metadata:
      labels:
        app: redis-cache
    spec:
      containers:
      - name: redis
        image: redis:7
        resources: ...
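
A rough sketch of the flow; the value below is purely illustrative and should be replaced with (highest existing user-defined PriorityClass value minus 10):

# Find the highest user-defined priority class value
kubectl get priorityclass

# Create the PriorityClass
cat <<EOF | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: urgent-priority
value: 999990                 # illustrative; compute from the existing classes
globalDefault: false
description: "Urgent workloads"
EOF

# Patch the Deployment and verify the rollout
kubectl patch deployment mysql-writer -n database \
  -p '{"spec":{"template":{"spec":{"priorityClassName":"urgent-priority"}}}}'
kubectl rollout status deployment mysql-writer -n database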

CKA 2025 MOCK Q-05 HPA

5 Mock Questions on Horizontal Pod Autoscaler (HPA)

Question 1: Scale Based on Custom CPU Target and Scale-Up Cooldown

You have a Deployment named api-backend in the default namespace.

Task:
- Create an HPA targeting 70% CPU usage
- Min: 1, Max: 10 replicas
- Set scale-up cooldown (delay before scaling up again) to 30 seconds
- File name: hpa-backend.yaml

Bonus: Set the HPA to avoid scaling up rapidly even if CPU spikes.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-backend
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-backend
  template:
    metadata:
      labels:
        app: api-backend
    spec:
      containers: ...
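
One possible shape for hpa-backend.yaml (a sketch using autoscaling/v2; the scale-up "cooldown" is expressed here as a scaleUp stabilization window):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-backend
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-backend
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 30   # wait 30s before scaling up again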

MOCK HPA

Question 1: Scale Based on Custom CPU Target and Scale-Up Cooldown

You have a Deployment named api-backend in the default namespace.

Task:
- Create an HPA targeting 70% CPU usage
- Min: 1, Max: 3 replicas
- Set scale-up cooldown (delay before scaling up again) to 30 seconds
- File name: hpa-backend.yaml

Bonus: Set the HPA to avoid scaling up rapidly even if CPU spikes.

Question 2: Memory-Based Autoscaling

You have a Deployment memory-consumer running in the apps namespace.

Task: Create an HPA that:
- Scales based on memory usage
- Uses autoscaling/v2
- Min: 2, Max: 8
- Target memory usage: 500Mi average per pod
- File: hpa-memory.yaml

Hint: Use the resource metric type with a memory selector. This only works if metrics-server supports memory usage (sometimes mocked in the exam).

Question 3: Stabilization Window for Both Scale-Up and Scale-Down

Deployment load-burst-app is deployed in the dev namespace.

Task: Create an HPA that:
- Targets CPU usage at 60%
- Min: 3, Max: 12
- Scale-up window: 45 seconds
- Scale...
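
One possible shape for Question 2's hpa-memory.yaml (a sketch; the target is written as an AverageValue because it is expressed in Mi rather than as a percentage):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: memory-consumer
  namespace: apps
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: memory-consumer
  minReplicas: 2
  maxReplicas: 8
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 500Mi   # target 500Mi average memory per pod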
In the exam, you need to maintain a rhythmic speed, which means working fast without wasting steps. You'll have to use shortcuts smartly and avoid repeating the same command unnecessarily. These next 5 minutes could make a huge difference in your exam, so let's begin.

Many people recommend using imperative commands in Kubernetes, like kubectl run or kubectl expose. While these can be useful for quickly creating a Pod, I personally recommend: use imperative only for Pod creation, and avoid it for everything else. Why? Because in the real world and in exams, what really helps is:

- Knowing how to find the correct YAML
- Editing and understanding it properly
- Applying declarative files using kubectl apply -f

So instead of relying on imperative commands, get comfortable with:

- The official Kubernetes documentation
- Using Ctrl + F and searching for apiVersion to jump straight to YAML examples
- The Kubernetes cheat sheet (we'll cover that too)
- Mastering how to quickly locate o...

CKA 2025 MOCK Q-04 HELM

Part 1
Create a template for the Argo CD installation for 8.0.17 and install it.
Repo: https://argoproj.github.io/argo-helm
Install Argo CD v8.0.17; CRDs need not be installed.

Part 2
Create a template for the Argo CD installation for 8.0.17 and install it.
Repo: https://argoproj.github.io/argo-helm
Install Argo CD v8.0.17; CRDs need to be installed.

Part 3
Create a template for the Argo CD installation for 8.0.17 and install it.
Repo: https://argoproj.github.io/argo-helm
Install Argo CD v8.0.17; CRDs need not be installed, as they are already in the cluster.

The CRDs were already installed with:

kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/crds/appproject-crd.yaml
kubectl apply -f https://raw.githubusercontent.com/argoproj/a...
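
One way to approach this with Helm (a minimal sketch; 8.0.17 is treated as the chart version, and the crds.install value should be confirmed against helm show values argo/argo-cd for that version):

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

# Part 1 / Part 3: render the template and install without CRDs
helm template argocd argo/argo-cd --version 8.0.17 \
  --namespace argocd --set crds.install=false > argocd-8.0.17.yaml
helm install argocd argo/argo-cd --version 8.0.17 \
  --namespace argocd --create-namespace --set crds.install=false

# Part 2: let the chart install the CRDs (the default), or set crds.install=true explicitly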

CKA 2025 MOCK Q-02 Sidecar Container

Mock Question: Add a Sidecar Container for Log Tailing

Task: You have a Deployment named myapp in the default namespace. This Deployment has a single container named myapp which writes log messages to a file at /opt/logs.txt every second. Currently, there is no mechanism to tail or view this log in real time.

Your Task: Add a sidecar container named logshipper to the existing Deployment. This container must:

- Use the image alpine:latest
- Run the following command: tail -F /opt/logs.txt
- Both containers must share a volume at path /opt using an emptyDir volume named data.
- Do not delete or modify the original myapp container.
- Make sure the logshipper runs as a sidecar container, not as an initContainer.
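
A minimal sketch of the resulting pod spec inside the Deployment, assuming the original myapp container stays untouched apart from the shared mount (add the mount if it is not already there):

spec:
  template:
    spec:
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: myapp
        # ...existing image and command unchanged...
        volumeMounts:
        - name: data
          mountPath: /opt
      - name: logshipper           # the sidecar, a second regular container
        image: alpine:latest
        command: ["sh", "-c", "tail -F /opt/logs.txt"]
        volumeMounts:
        - name: data
          mountPath: /opt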

CKA 2025 MOCK Q-01 Storage Class

Question 1:

Task: Create a StorageClass named csi-retain-sc with the following specifications:
- Use provisioner: csi-driver.example-vendor.example
- Set this class as the default
- Set reclaimPolicy to Retain
- Allow volume expansion
- Add mount option discard
- Use WaitForFirstConsumer as the volumeBindingMode

✅ Save this configuration to a file named sc-default-retain.yaml and apply it.

Question 2:

Task: You already have a default StorageClass named old-default. You created a new class fast-csi (already applied) but forgot to mark it as default.
- Patch fast-csi to be the default StorageClass.
- Remove the default annotation from old-default.

Question 3:

Task: Create a StorageClass named perf-csi-sc using the same csi-driver.example-vendor.example provisioner but do not make it default. It must:
- Allow volume expansion
- Include the parameter guaranteedReadWriteLatency: "true"
- Use Immediate volumeBindingMode
- Use Delete ...
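
A minimal sketch for Question 1's sc-default-retain.yaml, plus the default-class patches for Question 2 (the default is controlled by the storageclass.kubernetes.io/is-default-class annotation):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-retain-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-driver.example-vendor.example
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - discard
volumeBindingMode: WaitForFirstConsumer

# Question 2: swap the default annotation between the two classes
kubectl patch storageclass fast-csi -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl patch storageclass old-default -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'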

CKA 2025 - Q16 CRD LIST

1️⃣ ArgoCD Application Field Extraction

Task:
- Verify ArgoCD is installed in the cluster.
- Create a list of all CRDs related to ArgoCD and save to: ~/argocd-resources.yaml
- Use kubectl explain to extract the .spec.syncPolicy field of the Application custom resource and save to: ~/sync-policy.yaml

2️⃣ Traefik IngressRoute Field Extraction

Task:
- Verify the Traefik Ingress Controller is deployed.
- List all Traefik CRDs and save to: ~/traefik-resources.yaml
- Extract the .spec.tls field of the IngressRoute custom resource and save to: ~/tls-doc.yaml

3️⃣ Linkerd ServiceProfile Field Extraction

Task:
- Verify Linkerd is running and the control plane is healthy.
- List all Linkerd CRDs and save to: ~/linkerd-resources.yaml
- Extract the .spec.routes field of the ServiceProfile custom resource and save to: ~/routes-doc.yaml

4️⃣ cert-manager Issuer Field Extraction

Task:
- Verify cert-manager is running in the cluster.
- List all cert-manager CRDs and save to: ~/iss...
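
A rough sketch for the ArgoCD task; the grep pattern and the argocd namespace are assumptions about how the cluster is set up, and the other tasks follow the same pattern with their own CRD groups:

# Verify ArgoCD is installed
kubectl get pods -n argocd

# List ArgoCD-related CRDs and save them
kubectl get crd -o name | grep argoproj.io > ~/argocd-resources.yaml

# Document the .spec.syncPolicy field of the Application custom resource
kubectl explain application.spec.syncPolicy --recursive > ~/sync-policy.yaml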