
Understanding Kubernetes: Part 22 Kubernetes Resource Requests & Limits


📢 If you’ve been following our Kubernetes series 2025, welcome back! If you’re new here, check out the earlier parts of the series to catch up.

What are Resource Requests & Limits in Kubernetes?

In Kubernetes, resource requests and limits define how much CPU and memory a container can use. This ensures fair resource allocation among workloads and prevents any single pod from consuming excessive resources, which could impact other applications running in the cluster.

  • Requests: The minimum amount of CPU/memory guaranteed to a container. The scheduler uses this value to place the pod on a suitable node.

  • Limits: The maximum amount of CPU/memory a container can use. If a container exceeds its CPU limit, Kubernetes throttles it; if it exceeds its memory limit, the container is terminated (OOM-killed).

For example:

If you have a microservice that processes user requests, you can set CPU and memory requests to ensure it has enough resources to function and limits to prevent it from consuming excessive resources during traffic spikes.
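
To make that concrete, here is a rough sketch of how such values might be wired into a Deployment’s container spec (the name, image, and numbers below are illustrative placeholders, not recommendations):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api              # hypothetical microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      containers:
        - name: user-api
          image: user-api:1.0            # placeholder image
          resources:
            requests:
              cpu: "100m"                # baseline guaranteed for normal traffic
              memory: "128Mi"
            limits:
              cpu: "500m"                # CPU is throttled above this during spikes
              memory: "256Mi"            # container is OOM-killed if it exceeds this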

Why Use Resource Requests & Limits?

✅ Efficient Resource Management: Prevents resource hogging and ensures optimal cluster utilization.
✅ Better Performance: Guarantees that critical applications always have the required resources.
✅ Avoids OOM (Out of Memory) Kills: Helps prevent crashes due to excessive memory usage.
✅ Fair Scheduling: Ensures the Kubernetes scheduler places workloads appropriately based on available resources.

In My Previous Role

As a Senior DevOps Engineer, I ensured all Kubernetes deployments had proper resource requests and limits to avoid performance degradation. For example, in a high-traffic Node.js API, I set:

  • Requests: Ensured the service always had enough resources to handle base traffic.

  • Limits: Prevented excessive resource usage, ensuring stability during peak loads.

  • Monitoring: Used Prometheus + Grafana to fine-tune limits based on actual usage.
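
As a minimal sketch of that fine-tuning loop (assuming metrics-server is installed in the cluster; the pod name is a placeholder), you can compare actual usage against what is configured:

# Live CPU/memory usage per container (needs metrics-server)
kubectl top pod <pod-name> --containers

# The requests/limits currently configured on the pod
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].resources}'

# In Prometheus/Grafana, the usual metrics to compare are
# container_cpu_usage_seconds_total and container_memory_working_set_bytes
# versus kube_pod_container_resource_requests / kube_pod_container_resource_limits.

If actual usage sits far below the requests, the requests can usually be lowered to free up schedulable capacity; if usage regularly brushes the limits, they are probably too tight.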

Example YAML for Resource Requests & Limits

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: my-app
      image: my-app:latest
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
          cpu: "500m"

Explanation:

  • requests.memory: "256Mi" → The container is guaranteed 256MiB of memory.

  • requests.cpu: "250m" → The container is guaranteed 250 millicores (0.25 vCPU).

  • limits.memory: "512Mi" → The container cannot exceed 512MiB of memory.

  • limits.cpu: "500m" → The container cannot exceed 500 millicores (0.5 vCPU).

This setup ensures optimal performance while preventing excessive resource usage.
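
To try the manifest above, a minimal workflow (the file name is arbitrary) could be:

# Save the manifest as resource-demo.yaml, then create the pod
kubectl apply -f resource-demo.yaml

# Verify the requests, limits, and events for the pod
kubectl describe pod resource-demo

# Check the QoS class Kubernetes assigned based on these values
kubectl get pod resource-demo -o jsonpath='{.status.qosClass}'

Because the requests and limits differ here, the pod gets the Burstable QoS class; setting requests equal to limits for every resource would make it Guaranteed.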

🚀 Ready to Master Kubernetes?

Take your Kubernetes journey to the next level with the Master Kubernetes: Zero to Hero course! 🌟 Whether you’re a beginner or aiming to sharpen your skills, this hands-on course covers:

✅ Kubernetes Basics — Grasp essential concepts like nodes, pods, and services.
✅ Advanced Scaling — Learn HPA, VPA, and resource optimization.
✅ Monitoring Tools — Master Prometheus, Grafana, and AlertManager.
✅ Real-World Scenarios — Build production-ready Kubernetes setups.

🎓 What You’ll Achieve

💡 Confidently deploy and manage Kubernetes clusters.
🛡️ Secure applications with ConfigMaps and Secrets.
📈 Optimize and monitor resources for peak performance.

Don’t miss your chance to become a Kubernetes expert! 💻✨

🔥 Start Learning Now: [Join the Master Kubernetes Course](https://cloudops0.gumroad.com/l/k8s)

🚀 Stay ahead in DevOps and SRE! 🔔 Subscribe and never miss a beat on Kubernetes and more. 🌟
