CKA 2025 MOCK Q-05 HPA

5 Mock Questions on Horizontal Pod Autoscaler (HPA)


🔶 Question 1: Scale Based on Custom CPU Target and Scale-Up Cooldown

You have a Deployment named api-backend in the default namespace.

Task:

  • Create an HPA targeting 70% CPU usage
  • Min: 1, Max: 10 replicas
  • Set scale-up cooldown (delay before scaling up again) to 30 seconds
  • File name: hpa-backend.yaml

Bonus: Configure the HPA so it avoids rapid repeated scale-ups even when CPU spikes.


 

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-backend
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-backend
  template:
    metadata:
      labels:
        app: api-backend
    spec:
      containers:
      - name: backend
        image: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "100m"
          limits:
            cpu: "200m"
EOF

 

Solution: Create HPA with Scale-Up Cooldown (hpa-backend.yaml)

 

vi hpa-backend.yaml

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-backend-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-backend
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 30   # wait 30s before allowing another scale-up
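Beyond the stabilization window, the `behavior.scaleUp` block can also cap *how many* Pods are added per interval. As an illustration only (the `policies` values below are an assumption, not part of the question), a stricter scale-up section might look like:

```yaml
behavior:
  scaleUp:
    stabilizationWindowSeconds: 30
    policies:
    - type: Pods          # add at most 2 Pods...
      value: 2
      periodSeconds: 60   # ...per 60-second window
    selectPolicy: Min     # if several policies apply, use the most restrictive
```

This satisfies the bonus requirement more explicitly: even under a sustained CPU spike, replicas grow by at most 2 per minute.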

📜 Commands:

# The Deployment was already created by the heredoc above,
# so only the HPA manifest needs to be applied
kubectl apply -f hpa-backend.yaml
kubectl get hpa api-backend-hpa -w
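To sanity-check why a 70% target scales the Deployment, recall the HPA's core rule: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A quick sketch of that arithmetic in shell, using a made-up observed utilization of 140%:

```shell
# HPA scaling formula: desired = ceil(current_replicas * current_util / target_util)
current_replicas=2
current_cpu=140   # hypothetical observed average CPU utilization (%)
target_cpu=70     # averageUtilization from hpa-backend.yaml
awk -v r="$current_replicas" -v c="$current_cpu" -v t="$target_cpu" \
  'BEGIN { d = r * c / t; if (d > int(d)) d = int(d) + 1; print d }'
# → 4  (2 replicas running at double the 70% target doubles to 4)
```

So under load, `kubectl get hpa -w` should show REPLICAS climbing toward this computed value, capped at maxReplicas.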

