HiveBrain v1.2.0
debug · Major · pending

Debug: Slow database queries in production

Submitted by: @anonymous
Tags: slow query · production database · pg_stat_statements · n+1 query · connection pool

Error Messages

query took too long
lock timeout
too many connections
statement timeout

Problem

Database queries that work fine in development are slow in production with real data volumes.

Solution

Production database performance debugging:

-- 1. Enable slow query logging
-- PostgreSQL (postgresql.conf):
log_min_duration_statement = 500  # note: postgresql.conf comments use '#'; logs queries >500ms

-- MySQL:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.5;

-- 2. Find the slowest queries
-- PostgreSQL with pg_stat_statements (before PG 13 the columns are mean_time/total_time):
SELECT query, calls, mean_exec_time, total_exec_time,
       rows::numeric / calls AS avg_rows  -- cast avoids integer truncation
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;

-- 3. Check for missing indexes
-- Queries doing sequential scans on large tables:
SELECT schemaname, relname, seq_scan, seq_tup_read,
       idx_scan, n_live_tup
FROM pg_stat_user_tables
WHERE seq_scan > 0 AND n_live_tup > 10000
ORDER BY seq_tup_read DESC;

-- 4. Check for unused indexes (they still slow down every write)
SELECT indexrelname, idx_scan, pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
-- Keep indexes that back PRIMARY KEY/UNIQUE constraints even if never scanned.

-- 5. Check table bloat
SELECT relname, n_dead_tup, n_live_tup,
       round(n_dead_tup::numeric / greatest(n_live_tup, 1) * 100, 1) as dead_pct,
       last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC;


Common production-only causes:
  • N+1 queries: ORM loads related records one by one
    - Fix: eager loading (include / prefetch_related)
  • Missing index: works on small data, sequentially scans on large
    - Fix: run EXPLAIN ANALYZE to find seq scans, add the missing index
  • Lock contention: concurrent writes blocking reads
    - Fix: shorten transaction duration, use MVCC properly
  • Connection exhaustion: too many connections
    - Fix: connection pooling (PgBouncer)
  • Table bloat: dead rows accumulating from updates and deletes
    - Fix: VACUUM ANALYZE, tune autovacuum
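The N+1 pattern from the first bullet can be reproduced end to end. A minimal sketch using Python's stdlib sqlite3 as a stand-in for a production database (the tables and data are invented for illustration): the per-row loop issues one query per parent record, while a single JOIN fetches the same data in one round trip, which is what an ORM's eager loading does under the hood.

```python
# N+1 queries vs. a single JOIN, demonstrated with sqlite3 (stdlib).
# Table names and data are illustrative, not from the original entry.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cy');
    INSERT INTO books VALUES (1, 1, 'A1'), (2, 2, 'B1'), (3, 3, 'C1');
""")

def count_queries(fn):
    """Run fn and return (result, number of SELECT statements issued)."""
    issued = []
    conn.set_trace_callback(issued.append)
    result = fn()
    conn.set_trace_callback(None)
    return result, sum(1 for q in issued if q.lstrip().upper().startswith("SELECT"))

def n_plus_one():
    # 1 query for the authors, then 1 extra query per author: N+1 total.
    authors = conn.execute("SELECT id, name FROM authors").fetchall()
    return {name: [t for (t,) in conn.execute(
                "SELECT title FROM books WHERE author_id = ?", (aid,))]
            for aid, name in authors}

def eager():
    # One JOIN brings back all authors with their books in a single query.
    out = {}
    for name, title in conn.execute("""
            SELECT a.name, b.title FROM authors a
            LEFT JOIN books b ON b.author_id = a.id"""):
        out.setdefault(name, []).append(title)
    return out

_, slow_count = count_queries(n_plus_one)
_, fast_count = count_queries(eager)
print(slow_count, fast_count)  # 4 queries vs 1 for the same result
```

With 3 parent rows the loop already costs 4 queries; with production row counts it costs thousands, which is why the problem only shows up at scale.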

Why

Development data is typically 100-1000x smaller than production. Query plans change at scale: what was a fast index scan on small data can become a slow sequential scan against real volumes. Always test with production-like data volumes.
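The plan flip described above is easy to observe directly. A small sketch using stdlib sqlite3, whose EXPLAIN QUERY PLAN plays the same role as Postgres's EXPLAIN ANALYZE (table and index names are invented): before the index exists the planner reports a full-table SCAN, and after creating it the same query becomes an indexed SEARCH.

```python
# Watching a query plan change when an index is added (sqlite3, stdlib).
# The orders table and idx_orders_customer index are illustrative names.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 1000) for i in range(10_000)])

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the access path.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(query)   # a full-table SCAN: every row is examined
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # an indexed SEARCH using idx_orders_customer
print(before)
print(after)
```

The same habit transfers to Postgres: run EXPLAIN ANALYZE on the slow statement in production and look for "Seq Scan" nodes on large tables.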

Context

Production database performance
