Centralized log aggregation with the EFK stack
Tags: logging, log aggregation, fluent-bit, fluentd, elasticsearch, kibana, loki, promtail, daemonset, efk stack, grafana loki
Problem
Application logs are scattered across pod log buffers on each node. When pods restart or nodes are drained, logs are lost. kubectl logs cannot search across multiple pods efficiently.
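For example (the pod name, container name, and label below are placeholders), the built-in tooling only reaches the current and immediately previous container instance, and multi-pod search amounts to streaming and grepping:

# Current container instance only
kubectl logs my-pod -c my-container
# The immediately previous instance, if the container restarted
kubectl logs my-pod -c my-container --previous
# Crude multi-pod search: stream by label and grep, with no history or index
kubectl logs -l app=my-app --prefix --tail=100 | grep ERROR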
Solution
Deploy a log aggregation stack. A common choice is the EFK stack (Elasticsearch, Fluent Bit, Kibana), with Fluent Bit as a lighter-weight alternative to the original Fluentd shipper:
- Fluent Bit runs as a DaemonSet on every node, tailing /var/log/containers/*.log
- Fluent Bit forwards the collected logs to Elasticsearch
- Kibana provides search and visualization
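One way to stand this up is with the official Elastic and Fluent Helm charts. The release names and namespace below are illustrative, and the chart defaults (Elasticsearch replica count and storage, and the Elasticsearch host the fluent-bit chart ships logs to) are assumptions to review for your cluster:

# Sketch: EFK via the Elastic and Fluent Helm repositories
helm repo add elastic https://helm.elastic.co
helm repo add fluent https://fluent.github.io/helm-charts
helm upgrade --install elasticsearch elastic/elasticsearch \
  --namespace logging --create-namespace
helm upgrade --install kibana elastic/kibana --namespace logging
helm upgrade --install fluent-bit fluent/fluent-bit --namespace logging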
Alternatively use Loki + Grafana (lighter weight, index-free, stores log lines in object storage).
# Quick Loki stack via Helm
helm repo add grafana https://grafana.github.io/helm-charts
helm upgrade --install loki-stack grafana/loki-stack \
--namespace monitoring --create-namespace \
--set grafana.enabled=true,promtail.enabled=true
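Once the release is up, the chart provisions Grafana with Loki as a data source. The secret and service names below follow the Grafana chart's release-name-grafana convention, so adjust them if you named the release differently:

# Retrieve the generated Grafana admin password
kubectl get secret loki-stack-grafana -n monitoring \
  -o jsonpath='{.data.admin-password}' | base64 --decode; echo
# Reach the Grafana UI at http://localhost:3000
kubectl port-forward -n monitoring svc/loki-stack-grafana 3000:80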
Why

Pod logs in Kubernetes are stored on the node's disk at /var/log/containers/. When a pod is evicted, drained, or deleted, those log files may be cleaned up. DaemonSet-based log shippers continuously tail and forward logs to a persistent store before they are lost.
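A minimal sketch of that DaemonSet shape (the namespace, image tag, and the fluent-bit ServiceAccount are assumptions; in practice the Fluent Bit Helm chart generates all of this, including the RBAC objects and the ConfigMap that defines inputs and outputs):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      # Assumes a fluent-bit ServiceAccount with RBAC to read pod metadata
      serviceAccountName: fluent-bit
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2.0   # pin a version you have vetted
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
        # Real deployments also mount a ConfigMap with the tail input and
        # the Elasticsearch/Loki output; omitted here for brevity
      volumes:
      - name: varlog
        hostPath:
          path: /var/log   # node-level access is the point of the pattern
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule   # also collect logs from control-plane nodes
EOF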
Gotchas

- kubectl logs only shows logs for the current and previous container instance — older logs are gone
- Log volume can be very high — set retention policies and resource limits on Elasticsearch/Loki
- Fluent Bit DaemonSet needs access to /var/log on the host node — configure host path volume mounts and appropriate RBAC
- Multi-line logs (Java stack traces) require parser configuration to avoid being split into separate log entries; see the input snippet after this list
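For the last gotcha, Fluent Bit ships built-in multiline parsers that can be chained on the tail input: cri or docker first to undo the runtime's per-line framing, then a language parser such as java to reassemble stack traces. In classic .conf syntax:

# Input stanza for fluent-bit.conf (classic syntax)
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    multiline.parser  cri, java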
Context
Operating production Kubernetes clusters where log persistence and searchability are required