Kubernetes events: first stop for debugging cluster issues
Tags: events, kubectl describe, kubectl get events, warnings, debugging, scheduler, kubelet, FailedScheduling, BackOff, OOMKilling
Problem
A pod is not starting, a deployment is stuck, or a node is acting strange, but kubectl get pods only shows a terse status with no explanation.
Solution
Check Kubernetes events — they record warnings and errors from controllers, the scheduler, and kubelet.
# Events for a specific pod
kubectl describe pod <pod-name>
# The 'Events' section at the bottom is the most useful
# All events in a namespace, sorted by time
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
# Watch events live
kubectl get events -n <namespace> -w
# Events for a specific resource
kubectl get events --field-selector \
involvedObject.name=<pod-name> \
involvedObject.kind=Pod
# Events across all namespaces
kubectl get events -A --sort-by='.lastTimestamp' | tail -30
Why
Kubernetes components emit events as structured API objects that record what happened, when, how many times, and which resource was involved. Events are stored in etcd and expire after one hour by default. They are the first diagnostic signal to check before digging into application logs.
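Because an Event is itself an API object, the fields above map directly onto its structure. A trimmed sketch of a core/v1 Event (field values are illustrative, not from a real cluster):

```yaml
apiVersion: v1
kind: Event
type: Warning                # Normal or Warning
reason: FailedScheduling     # short, machine-readable cause
message: "0/3 nodes are available: 3 Insufficient cpu."
count: 4                     # how many times this event repeated
firstTimestamp: "2024-01-01T12:00:00Z"
lastTimestamp: "2024-01-01T12:03:00Z"
involvedObject:              # the resource the event is about
  kind: Pod
  name: web-0
  namespace: default
source:
  component: default-scheduler
```

The involvedObject block is what the --field-selector queries above match against, and lastTimestamp is the field the --sort-by flag reads.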
Gotchas
- Events expire after 1 hour by default (configurable with --event-ttl on the API server) — check them quickly
- Events are namespace-scoped — use -A or -n to target the right namespace
- Warning events (type: Warning) indicate problems; Normal events are informational
- High event volume can affect API server performance — consider tools like event-exporter to ship events to a log store
- kubectl describe aggregates related events inline, which is more readable than raw kubectl get events
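Since Warning events are the ones that indicate problems, it helps to filter for them. A minimal sketch: on a live cluster kubectl can filter server-side with --field-selector type=Warning; the awk pipeline below applies the same idea client-side to captured output (the sample lines are illustrative, not from a real cluster):

```shell
# Server-side, against a live cluster:
#   kubectl get events -n <namespace> --field-selector type=Warning
# Client-side, the same idea on captured output:
events='LAST SEEN   TYPE      REASON             OBJECT      MESSAGE
2m          Normal    Scheduled          pod/web-0   Successfully assigned default/web-0
90s         Warning   FailedScheduling   pod/web-1   0/3 nodes are available
30s         Warning   BackOff            pod/web-2   Back-off restarting failed container'

# Keep the header row plus any row whose TYPE column is Warning
warnings=$(printf '%s\n' "$events" | awk 'NR==1 || $2=="Warning"')
printf '%s\n' "$warnings"
```

Server-side filtering is preferable on busy clusters, since it avoids pulling every event over the wire just to discard most of them.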
Context
First-response debugging of pod, deployment, or node issues in Kubernetes