Kubernetes health checks: liveness vs readiness vs startup
Tags: liveness, readiness, startup probe, health check, restart
Problem
Need to configure proper health checks so Kubernetes can detect and handle unhealthy pods correctly.
Solution
Three types of probes, each with a different purpose:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: app
    image: myapp:1.0
    ports:
    - containerPort: 8080
    # Startup probe: has the app finished starting?
    # Disables liveness/readiness checks until it succeeds
    startupProbe:
      httpGet:
        path: /health
        port: 8080
      failureThreshold: 30  # 30 * 10s = 5 min max startup
      periodSeconds: 10
    # Liveness probe: is the app still alive?
    # Failure = restart the container
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 0  # startupProbe handles the delay
      periodSeconds: 10
      failureThreshold: 3  # 3 consecutive failures = restart
    # Readiness probe: can the app handle traffic?
    # Failure = remove the pod from Service endpoints (no traffic)
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
      failureThreshold: 2
When each fires:
- Startup: Once at boot. Blocks other probes until success
- Liveness: Continuously. Detects deadlocks/hangs -> restart
- Readiness: Continuously. Detects temporary inability -> stop traffic (see the Service sketch below)
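Readiness only gates traffic that reaches the pod through a Service. A minimal sketch of one for the Pod above; the app: myapp selector label is an assumption, since the Pod spec above defines no labels:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: myapp        # assumption: the Pod would need to carry this label
  ports:
  - port: 80
    targetPort: 8080  # the containerPort from the Pod spec
When the readiness probe fails, the pod is dropped from this Service's endpoints and stops receiving connections, without being restarted.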
Common health endpoints:
from fastapi import FastAPI, Response

app = FastAPI()

@app.get('/health')
def health():
    return {'status': 'ok'}  # App is alive; no dependency checks here

@app.get('/ready')
def ready():
    # db is the application's database handle (not defined here)
    if not db.is_connected():
        return Response(status_code=503)
    return {'status': 'ready'}  # App can handle requests
Why
Without proper probes, Kubernetes can't distinguish between a slow startup, a temporary issue, and a dead process. Wrong probe type = wrong recovery action.
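A hedged sketch of the mismatch to avoid: pointing the liveness probe at a dependency-checking endpoint turns a brief database outage into a restart of every pod, when the right response was simply to stop sending traffic.
# Anti-pattern: liveness probing a dependency-checking endpoint
livenessProbe:
  httpGet:
    path: /ready  # checks the DB, so a DB blip now restarts the container
    port: 8080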
Gotchas
- Liveness probe should NOT check dependencies (use readiness for that)
- Too aggressive liveness probe = restart loop during load spikes (see the tuned sketch after this list)
- startupProbe is essential for slow-starting apps (JVM, large ML models)
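For the restart-loop gotcha, one option is to loosen the liveness probe so transient slowness under load doesn't count as death. A sketch with illustrative values, not universal recommendations:
# More forgiving liveness settings for spiky workloads
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  timeoutSeconds: 5    # tolerate slow responses under load (default is 1s)
  periodSeconds: 10
  failureThreshold: 6  # ~60s of consecutive failures before a restart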
Context
Kubernetes deployments needing proper health checking