Postgres 9.0 vs Postgres 10 & 11 Performance
Tags: performance, postgres, stackoverflow
Problem
We want to move our databases from a webhosting provider (Postgres 9.0) to a server on our local network (we tried both Postgres 10 and the latest 11).
Our machine is a Windows server, a fast Xeon machine with 16 GB RAM, dedicated to the database.
But even after raising default_statistics_target = 4000 and re-analyzing, many of our views that used to run very fast are now slow.
It seems the webhosting provider's server was fine-tuned, and our execution plans are probably odd for some reason.
Our Postgres uses the stock install configuration.
A simplified example query is below.
(The biggest table is the "dale" table, which is several million records; it is a binding table with foreign keys.
The other tables are much smaller, tens of thousands of records.
The system is vacuum-analyzed and it is a fresh install.)
```
EXPLAIN ANALYZE
SELECT REKLAMACNY_LIST.ID REKLAMACNY_LIST_ID
FROM REKLAMACNY_LIST
WHERE REKLAMACNY_LIST.VALIDTO IS NULL
AND
( SELECT NOT tr.typ_odstupenia::boolean
AND sr.konecny_stav::boolean
FROM dale d1
CROSS JOIN typ_reklamacie tr
CROSS JOIN dale d2
CROSS JOIN stav_reklamacie sr
WHERE TRUE
AND d1.fk7 = reklamacny_list.id
AND d2.fk7 = reklamacny_list.id
AND d1.fk1 = tr.id
AND d2.fk3 = sr.id
AND sr.validto IS NULL
AND tr.validto IS NULL
AND d1.validto IS NULL
AND d2.validto IS NULL )
ORDER BY reklamacny_list_id DESC
LIMIT 100;
```

Explain analyze for our testing Postgres 11 server:

```
Limit  (cost=0.29..18432.84 rows=100 width=4) (actual time=11804.484..331036.595 rows=91 loops=1)
  ->  Index Scan Backward using reklamacny_list_pk on reklamacny_list  (cost=0.29..2578713.84 rows=13990 width=4) (actual time=11804.482..331036.524 rows=91 loops=1)
        Index Cond: (id > 0)
        Filter: ((validto IS NULL) AND (SubPlan 1))
        Rows Removed by Filter: 29199
        SubPlan 1
          ->  Hash Join  (cost=5.30..87.57 rows=250 width=1) (actual time=5.246..11.824 rows=1 loops=27981)
```
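For clarity, "raising default_statistics_target = 4000 and analyzing" corresponds to something like the following (a sketch; 4000 is the value from the question, the Postgres default is 100):

```sql
-- Raise the per-column statistics sample size, then rebuild statistics.
-- (ALTER SYSTEM requires superuser; alternatively edit postgresql.conf.)
ALTER SYSTEM SET default_statistics_target = 4000;
SELECT pg_reload_conf();
ANALYZE;  -- re-collects planner statistics for all tables in the database
```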
Solution
The default postgresql.conf configuration is meant for a small-footprint database, and it will be slow if your database is large and is queried with complicated joins.
I saw from your postgresql10 configuration file that your shared memory is set to only 128MB (many other settings are also very small). You need to reconfigure this.
Tuning the PostgreSQL server is a big topic, and different kinds of hardware need different settings; it comes with trial and error while continuously improving the settings/configuration.
I am unable to discuss the whole topic here; I can only provide the tuned settings I used to use.
Target
- Use no more than 4 GB of memory on my server (as my server is not dedicated to running the PostgreSQL DB)
- Server has > 8 cores
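As a rough sanity check on the ~4 GB target (this arithmetic is my illustration, not part of the answer), the fixed allocations plus a plausible number of simultaneous work_mem allocations land just under the budget:

```sql
-- Rough memory budget implied by the settings below (values in MB);
-- the 80 simultaneous work_mem allocations is an illustrative assumption.
SELECT 1536                 AS shared_buffers_mb,
       480                  AS maintenance_work_mem_mb,
       24 * 80              AS concurrent_work_mem_mb,
       1536 + 480 + 24 * 80 AS total_mb;  -- 3936 MB, just under 4 GB
```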
```
max_connections = 800
shared_buffers = 1536MB
work_mem = 24MB
maintenance_work_mem = 480MB
vacuum_cost_delay = 20ms
synchronous_commit = local
wal_buffers = 8MB
max_wal_size = 1536GB
checkpoint_completion_target = 0.9
effective_cache_size = 4GB
deadlock_timeout = 3s
log_min_duration_statement = 5000
log_error_verbosity = verbose
log_autovacuum_min_duration = 10000
log_lock_waits = on
```
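To verify which values are actually in effect after editing postgresql.conf and reloading (a sketch; any of the setting names above can be listed):

```sql
-- Show the live value and origin of the settings tuned above.
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'max_wal_size',
               'effective_cache_size', 'checkpoint_completion_target');
```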
Context
StackExchange Database Administrators Q#221755, answer score: 5