Cert-manager - Custom Resource Definitions

cert-manager is a Kubernetes add-on that automates the issuance, renewal, and management of TLS/SSL certificates inside a Kubernetes cluster. It helps secure applications and services by integrating with external and internal certificate authorities (CAs) such as Let’s Encrypt, HashiCorp Vault, or even self-signed issuers.

Here’s how it works:

  1. CRDs (Custom Resource Definitions): cert-manager introduces new Kubernetes resource types such as Issuer, ClusterIssuer, and Certificate. These CRDs define how, and from where, certificates will be issued.

  2. Issuer and ClusterIssuer:

    • Issuer is a namespaced resource that defines a certificate authority configuration usable within a single namespace.

    • ClusterIssuer is similar but is cluster-scoped, so it can issue certificates for workloads in any namespace (a minimal example follows this list).

  3. Certificate Resource: You define a Certificate resource specifying the domain, the Secret name (where the certificate will be stored), and which Issuer or ClusterIssuer to use. cert-manager then automatically obtains the certificate and stores it in the named Kubernetes Secret (see the Certificate example after this list).

  4. Renewals: cert-manager tracks the expiration of certificates and renews them automatically before they expire, ensuring uninterrupted secure communication.

  5. ACME Protocol: cert-manager supports the ACME protocol, which is used by Let’s Encrypt to validate domain ownership and issue certificates. This allows for fully automated certificate management for publicly accessible services.

  6. Challenges: For ACME issuers, cert-manager supports HTTP-01 and DNS-01 challenges to prove domain ownership. HTTP-01 requires the cluster to serve a challenge response over plain HTTP, while DNS-01 requires credentials for your DNS provider so cert-manager can create TXT records; the ClusterIssuer sketch after this list uses an HTTP-01 solver.

  7. Use Cases:

    • Automatically issuing TLS certs for Ingress resources (see the Ingress example after this list).

    • Securing internal services with mTLS.

    • Managing certificates across microservices in the cluster.
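
To make the Issuer/ClusterIssuer and ACME pieces above concrete, here is a minimal ClusterIssuer sketch for Let’s Encrypt with an HTTP-01 solver. The resource name, email address, account-key Secret name, and ingress class are placeholder values, and the sketch assumes an NGINX ingress controller handles the challenge traffic:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let’s Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder: your contact address
    privateKeySecretRef:
      name: letsencrypt-account-key     # Secret holding the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx                  # assumes an NGINX ingress controller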
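
A Certificate resource then ties a domain to a target Secret and an issuer. A minimal sketch, assuming the ClusterIssuer above and a placeholder domain; the optional renewBefore field shows where the automatic renewal window from item 4 can be tuned:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com-tls
  namespace: default
spec:
  secretName: example-com-tls           # Secret where the signed cert and key are stored
  dnsNames:
  - example.com                         # placeholder domain
  renewBefore: 720h                     # optional: renew 30 days before expiry
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer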
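
For the Ingress use case, you often do not need to write a Certificate at all: cert-manager can create one from an annotation on the Ingress itself. A sketch, again assuming the same ClusterIssuer plus a placeholder host and backend Service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # tells cert-manager to manage the TLS Secret
spec:
  tls:
  - hosts:
    - example.com                       # placeholder host
    secretName: example-com-tls         # created and kept renewed by cert-manager
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web                   # placeholder backend Service
            port:
              number: 80

After applying any of these, kubectl get certificate -A shows a READY column that flips to True once the challenge completes and the Secret is populated.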

In summary, cert-manager brings automation, security, and compliance to certificate handling in Kubernetes environments, removing the need for manual intervention and reducing the risk of service downtime due to expired certificates.
