HiveBrain v1.2.0

How hard would it be to have PostgreSQL run a test to adjust the cost parameters?

Submitted by: @import:stackexchange-dba

Problem

By using EXPLAIN one can see the cost of running a query. The cost is expressed in artificial units based on pre-defined parameters: one sequential disk-page fetch costs 1 unit and a CPU operation costs 0.01 units, I believe. My question is: why doesn't PostgreSQL automatically run a test upon installation to determine the real numbers for the hardware it is running on? Shouldn't this be an obvious quick win?
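For reference, the parameters in question can be inspected directly; a minimal sketch (the table name is hypothetical, and the commented values are the documented PostgreSQL defaults):

```sql
-- Inspect the planner's cost parameters (defaults shown in comments).
SHOW seq_page_cost;      -- 1.0: cost of a sequentially fetched disk page
SHOW random_page_cost;   -- 4.0: cost of a non-sequentially fetched page
SHOW cpu_tuple_cost;     -- 0.01: cost of processing one row
SHOW cpu_operator_cost;  -- 0.0025: cost of one operator or function call

-- EXPLAIN reports estimated cost in these artificial units.
EXPLAIN SELECT * FROM some_table;  -- hypothetical table name
```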

Solution

If the costs were constant, there might be a marginal gain. The units are necessarily artificial, as the various resources are not interchangeable. Even on the same system, the costs can vary at different times.

  • If the CPU becomes fully loaded, then CPU cycles become more expensive.
  • On a multi-core system, CPU cost may be lower than observed by testing a single CPU.
  • If the disks become busy, the cost of a disk access increases.
  • Systems may have disks with different performance (read cost); cost will vary depending on which disk or disk subsystem the data is on.
  • A different distribution of data across disks may change performance.
  • Different placements of indexes and data may change performance.
  • Placing data across disks of different performance means disk I/O has multiple costs, depending on which data is being accessed.
  • Different types of queries (batch vs. OLTP) may call for different cost distributions.
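Because the right values depend on the storage the data actually lives on, PostgreSQL lets you override the cost parameters per tablespace or per session rather than baking in one machine-wide calibration; a hedged sketch (the tablespace name is hypothetical, and 1.1 is merely a common suggestion for SSD-backed storage):

```sql
-- Lower random_page_cost only for a tablespace on fast storage,
-- leaving the defaults in place for slower disks.
ALTER TABLESPACE fast_ssd
    SET (random_page_cost = 1.1, seq_page_cost = 1.0);

-- Or experiment in a single session before changing anything durable.
SET random_page_cost = 1.1;
```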



Once the database is created and populated, it may be possible to derive better metrics from the database itself. However, applying the updated metrics may change the load in ways that make them invalid. There are margins beyond which behavior becomes relatively unpredictable and is likely to degrade seriously.
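The kind of after-the-fact measurement described above is already partly possible: with I/O timing enabled, EXPLAIN ANALYZE reports actual read times that can be compared against the planner's estimated cost units. A minimal sketch (the table name is hypothetical):

```sql
-- Record actual I/O timings alongside the planner's cost estimates.
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM some_table;  -- hypothetical table
```

Note that any ratio derived this way reflects the cache state and concurrent load at the moment of measurement, which is exactly why a one-time calibration could mislead the planner later.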

Context

StackExchange Database Administrators Q#2067, answer score: 5
