Postgres multiple joins slow query, how to store default child record
Problem
I've got a small database in Postgres, about 10,000 records storing company customers.
I have a "slow" query (about half a second) which is run very frequently, and my boss wants me to improve it.
First off - my code:
```
select customer_alias.id, customer_alias.name, site.address, phone.phonenumber
from customer_alias
join customer on customer_alias.customer_id = customer.id
left join site on customer.default_site_id = site.id
left join contact_phonenumbers as phone on site.default_phonenumber = phone.id
```
(Edited: changed `left join customer` to `join customer`.)
What leaps out at me is that I am performing a join to customer even though I am not selecting anything from that table. I currently have to join it to get default_site_id, a foreign key into the site table. Each customer can have multiple sites, but only one should be displayed in this list (a customer has to be opened to view all of its sites). So my question is: if I can't optimize the query, is there a different way I can store a default site for a particular customer? The same goes for the default phone number.
A customer can have many sites, but a site has only one customer (many-to-one).
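For reference, this is roughly the schema the query implies. It is a sketch reconstructed from the joins above; the column types and constraints are assumptions, not taken from the original post:

```
-- Hypothetical schema matching the joins in the question
CREATE TABLE contact_phonenumbers (
    id          serial PRIMARY KEY,
    phonenumber text NOT NULL
);

CREATE TABLE customer (
    id              serial PRIMARY KEY,
    default_site_id integer  -- references site(id); FK added after site exists
);

CREATE TABLE site (
    id                  serial PRIMARY KEY,
    customer_id         integer NOT NULL REFERENCES customer(id),  -- many sites per customer
    address             text,
    default_phonenumber integer REFERENCES contact_phonenumbers(id)
);

CREATE TABLE customer_alias (
    id          serial PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES customer(id),
    name        text NOT NULL
);
```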
EXPLAIN returns:
```
Hash Join  (cost=522.72..943.76 rows=5018 width=53)
  Hash Cond: (customer.id = customer_alias.customer_id)
  ->  Hash Right Join  (cost=371.81..698.77 rows=5018 width=32)
        Hash Cond: (site.id = customer.default_site_id)
        ->  Hash Right Join  (cost=184.91..417.77 rows=5018 width=32)
              Hash Cond: (phone.id = site.default_phonenumber)
              ->  Seq Scan on contact_phonenumbers phone  (cost=0.00..121.70 rows=6970 width=17)
              ->  Hash  (cost=122.18..122.18 rows=5018 width=23)
                    ->  Seq Scan on site  (cost=0.00..122.18 rows=5018 width=23)
        ->  Hash  (cost=124.18..124.18 rows=5018 width=8)
              ->  Seq Scan on customer  (cost=0.00..124.18 rows=5018 width=8)
  ->  Hash  (cost=88.18..88.18 rows=5018 width=29)
```
Solution
You write:
Each customer can have multiple sites, but only one should be
displayed in this list.
Yet, your query retrieves all rows. That would be a point to optimize. But you also do not define which site is to be picked.

Either way, it does not matter much here. Your EXPLAIN shows only 5026 rows for the site scan (5018 for the customer scan). So hardly any customer actually has more than one site. Did you ANALYZE your tables before running EXPLAIN?

From the numbers I see in your EXPLAIN, indexes will give you nothing for this query; sequential table scans will be the fastest possible way. Half a second is rather slow for 5000 rows, though. Maybe your database needs some general performance tuning?

Maybe the query itself is faster, but "half a second" includes network transfer? EXPLAIN ANALYZE would tell us more.
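To separate server-side execution time from transfer and display time, the query can be run under EXPLAIN ANALYZE. This is standard PostgreSQL syntax, shown here applied to the question's query rather than quoted from the original answer:

```
-- Unlike plain EXPLAIN, EXPLAIN ANALYZE actually executes the query
-- and reports measured times per plan node plus a total runtime.
EXPLAIN ANALYZE
select customer_alias.id, customer_alias.name, site.address, phone.phonenumber
from customer_alias
join customer on customer_alias.customer_id = customer.id
left join site on customer.default_site_id = site.id
left join contact_phonenumbers as phone on site.default_phonenumber = phone.id;
```

The total runtime it prints is measured on the server, so it excludes network transfer and client rendering.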
If this query is your bottleneck, I would suggest you implement a materialized view.
After you provided more information I find that my diagnosis pretty much holds.
The query itself needs 27 ms, so not much of a problem there. "Half a second" was the kind of misunderstanding I had suspected: the slow part is the network transfer (plus ssh encoding/decoding, possibly rendering). You should only retrieve 100 rows; that would solve most of it, even if it means executing the whole query every time.
If you go the route with a materialized view like I proposed, you could add a serial number without gaps to the table, plus an index on it, by adding a column row_number() OVER () AS mv_id.

Then you can query:
```
SELECT *
FROM   materialized_view
WHERE  mv_id >= 2700
AND    mv_id <  2800;
```
This will perform very fast.
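Putting the pieces together, the materialized view could be built roughly like this. This is a sketch, not from the original answer: `customer_list` is a hypothetical name, the ORDER BY in the window function is an assumption, and `CREATE MATERIALIZED VIEW` requires PostgreSQL 9.3 or later (on older versions, use a plain table you rebuild yourself):

```
CREATE MATERIALIZED VIEW customer_list AS
SELECT row_number() OVER (ORDER BY customer_alias.name) AS mv_id,
       customer_alias.id, customer_alias.name,
       site.address, phone.phonenumber
FROM   customer_alias
JOIN   customer ON customer_alias.customer_id = customer.id
LEFT   JOIN site ON customer.default_site_id = site.id
LEFT   JOIN contact_phonenumbers AS phone ON site.default_phonenumber = phone.id;

-- The index that makes the mv_id range query fast:
CREATE INDEX ON customer_list (mv_id);

-- Re-run after the underlying data changes:
REFRESH MATERIALIZED VIEW customer_list;
```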
LIMIT / OFFSET cannot compete; that needs to compute the whole table before it can sort and pick 100 rows.

pgAdmin3 timing
When you execute a query from the query tool, the message pane shows something like:

Total query runtime: 62 ms.

And the status line shows the same time. I quote the pgAdmin3 help about that:
The status line will show how long the last query took to complete. If
a dataset was returned, not only the elapsed time for server execution
is displayed, but also the time to retrieve the data from the server
to the Data Output page.
If you want to see the time on the server you need to use SQL EXPLAIN ANALYZE, or the built-in Shift + F7 keyboard shortcut, or Query -> Explain analyze. Then, at the bottom of the explain output, you get something like this:

Total runtime: 0.269 ms

Code Snippets
```
SELECT *
FROM   materialized_view
WHERE  mv_id >= 2700
AND    mv_id <  2800;
```
```
Total query runtime: 62 ms.
```
```
Total runtime: 0.269 ms
```
Context
StackExchange Database Administrators Q#15082, answer score: 12
Revisions (0)
No revisions yet.