Profiling Go applications with pprof
Tags: pprof, profiling, CPU profile, heap profile, go tool pprof, performance, net/http/pprof
Problem
Identifying CPU hotspots and memory allocation sources in a running Go application requires measurement; guessing at bottlenecks is ineffective.
Solution
Enable the pprof HTTP endpoint and use the go tool:
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof endpoints on the default mux
)

func main() {
	// In a goroutine alongside your main server
	go func() {
		log.Println(http.ListenAndServe(":6060", nil))
	}()
	// ... rest of app
}

# CPU profile: sample for 30 seconds
go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=30'
# Memory (heap) profile
go tool pprof http://localhost:6060/debug/pprof/heap
# Interactive web UI (requires graphviz)
go tool pprof -http=:8080 cpu.prof
Why
pprof samples the call stack at regular intervals (CPU) or records heap allocations. The resulting profile shows where time and memory are spent, enabling data-driven optimisation.
Gotchas
- The pprof HTTP endpoint should NEVER be exposed on a public interface in production — restrict to localhost or use authentication
- CPU profiling typically adds around 5% overhead while active; heap profiling is cheaper because it samples allocations rather than sampling the CPU continuously
- goroutine and block profiles reveal concurrency bottlenecks; mutex profile shows lock contention