HiveBrain v1.2.0
pattern · yaml · kubernetes · Major · pending

Kubernetes resource limits and requests explained

Submitted by: @anonymous
Tags: resource-limits, requests, oomkill, cpu-throttling, scheduling

Problem

Need to set CPU and memory requests and limits for Kubernetes pods correctly to avoid OOMKills, CPU throttling, or wasted resources.

Solution

Understanding requests vs limits:

resources:
  requests:     # Guaranteed minimum
    cpu: 100m       # 0.1 CPU core
    memory: 128Mi   # 128 MiB
  limits:       # Maximum allowed
    cpu: 500m       # 0.5 CPU core
    memory: 256Mi   # 256 MiB


Requests: used for scheduling. Kubernetes places the pod on a node with enough unreserved capacity.
Limits: enforced at runtime. Exceeding the memory limit gets the container OOMKilled; exceeding the CPU limit gets it throttled.

Guidelines:
  • Set memory request = limit (avoid OOMKill surprises)
  • Set CPU request based on average usage
  • Set CPU limit 2-5x the request (allows bursting)
  • Use kubectl top pods to measure actual usage
  • Start generous, then tighten based on metrics
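
Putting the guidelines together, a container spec might look like the sketch below; the pod name, image, and numbers are illustrative, not from the original entry:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # hypothetical pod name
spec:
  containers:
    - name: app
      image: example/app:1.0   # placeholder image
      resources:
        requests:
          cpu: 250m            # roughly the measured average usage
          memory: 512Mi        # equal to the limit, per the first guideline
        limits:
          cpu: 1000m           # 4x the request, leaves room to burst
          memory: 512Mi        # equal to the request
```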



# Check actual resource usage
kubectl top pods -n production

# Check resource requests/limits
kubectl describe pod <name> | grep -A 5 'Limits\|Requests'

# Find pods without limits
kubectl get pods -o json | jq -r '.items[] | select(any(.spec.containers[]; .resources.limits == null)) | .metadata.name'
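
For pods found without limits, a LimitRange in the namespace can inject defaults so that bare containers still get requests and limits. A sketch with illustrative values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources    # hypothetical name
  namespace: production
spec:
  limits:
    - type: Container
      defaultRequest:        # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
      default:               # applied when a container sets no limits
        cpu: 500m
        memory: 256Mi
```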

Why

Without requests, pods compete for resources unpredictably. Without limits, a single pod can starve the others on its node. When the memory request is lower than the limit, the pod can be evicted or OOMKilled under node memory pressure even though it is within its own limit.

Gotchas

  • CPU is compressible (throttled), memory is not (OOMKilled)
  • Requests affect scheduling, limits affect runtime
  • Setting memory request < limit can lead to eviction or OOMKill under node memory pressure
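
The eviction behavior in the last gotcha follows from the pod's QoS class, which Kubernetes derives from how requests and limits are set. A sketch of the three cases, with illustrative values:

```yaml
# Guaranteed: requests == limits for CPU and memory in every container.
# Evicted last under node memory pressure.
resources:
  requests: { cpu: 500m, memory: 256Mi }
  limits:   { cpu: 500m, memory: 256Mi }

# Burstable: requests set lower than limits (or only some values set).
# Evicted after BestEffort pods but before Guaranteed ones.
#
# BestEffort: no requests or limits at all. Evicted first.
```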

Context

Deploying applications to Kubernetes clusters
