CKA MOCK-Q 09 NETWORK POLICY

Scenario:

Create the Correct Least-Permissive NetworkPolicy

In the project-x namespace, you have two Deployments:

  • frontend (label: app=frontend)

  • backend (label: app=backend)

📋 Goal:
Allow only frontend pods in the same namespace to access the backend on TCP port 8080.
No other ingress traffic should reach the backend pods.
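
To try the three options on a live cluster, here is a minimal setup sketch. The images, the sleep/http.server commands, and the ClusterIP Service are illustrative assumptions, not part of the question, and NetworkPolicy is only enforced if the cluster's CNI supports it (Calico, Cilium, etc.):

kubectl create namespace project-x
# "kubectl create deployment" sets the label app=<name> automatically
kubectl -n project-x create deployment frontend --image=busybox -- sleep 3600
# backend must listen on TCP 8080; python's http.server is used purely for illustration
kubectl -n project-x create deployment backend --image=python:3.12 --port=8080 -- python -m http.server 8080
# Service so the tests below can use a stable DNS name (assumption, not in the question)
kubectl -n project-x expose deployment backend --port=8080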




🔵 Option 1 — ❌ Overly Restrictive


apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: too-restrictive
  namespace: project-x
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

🧠 Why it's overly restrictive:

✅ Selects the correct pods: app=backend as the target, app=frontend as the source

❌ Relies on implicit scoping: a bare podSelector under from matches pods only in the policy's own namespace, so the namespace restriction is never stated in the manifest.

⚠️ If frontend pods are ever moved to another namespace, this policy silently stops matching them.
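
A quick way to see the implicit same-namespace scoping, sketched with throwaway busybox pods (the file name option1.yaml and the backend Service from the setup above are assumptions):

kubectl -n project-x apply -f option1.yaml
# app=frontend inside project-x: allowed
kubectl -n project-x run t1 --rm -it --restart=Never --image=busybox \
  --labels=app=frontend -- wget -qO- -T 3 http://backend:8080
# same label, but in the default namespace: dropped (wget times out)
kubectl run t2 --rm -it --restart=Never --image=busybox \
  --labels=app=frontend -- wget -qO- -T 3 http://backend.project-x.svc.cluster.local:8080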


🔵 Option 2 — ✅ Perfect & Least Permissive (Correct)


apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: project-x
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: project-x
    ports:
    - protocol: TCP
      port: 8080

🧠 Why it's correct:

✅ Combines podSelector AND namespaceSelector in a single from entry, so only frontend pods running in project-x are matched

✅ Allows only traffic from frontend ➜ backend on port 8080

✅ Secure and scoped — no wildcards or assumptions
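
To verify the policy behaves as intended (the file name is hypothetical; the Service comes from the setup sketch above):

kubectl -n project-x apply -f allow-frontend-to-backend.yaml
kubectl -n project-x describe networkpolicy allow-frontend-to-backend
# a pod WITHOUT the app=frontend label in project-x should now time out
kubectl -n project-x run probe --rm -it --restart=Never --image=busybox \
  -- wget -qO- -T 3 http://backend:8080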


🔵 Option 3 — ❌ Brittle & Over-Broad (ipBlock)


apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: overly-strict-ipblock
  namespace: project-x
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16
    ports:
    - protocol: TCP
      port: 8080

🧠 Why it's wrong:

❌ Uses a broad IP range instead of selecting frontend pods directly

❌ Not label-based — hard to audit or update

⚠️ Pod IPs are ephemeral, and ipBlock is intended mainly for cluster-external CIDRs, so this can break unexpectedly if the pod network is re-addressed

✅ Technically allows the traffic, but it is anything but least-permissive: every pod whose IP falls inside 10.244.0.0/16 can reach backend on port 8080
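
To see how fragile the CIDR guess is, inspect what your cluster actually allocates (spec.podCIDR may be empty on clusters where the CNI handles IPAM itself):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'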


✅ Final Answer: Option 2
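
On the exam, a quick final check that the policy exists and targets the right pods:

kubectl -n project-x get networkpolicy
kubectl -n project-x get pods -l app=backend --show-labels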

