How to speed up query with joins
Problem
I have a couple of tables that I need to join: an employees table (~400K rows), a companies table (~10 million rows), and an employee_companies table which stores where each person works.
Basically, I need to get all the employees that match some conditions (they work at a company that has a website, are located in a certain country, etc.). I wrote a query for this, but it's taking too long and I need to speed it up.
SELECT DISTINCT "employees".*
FROM "employees"
INNER JOIN "employee_companies" ON "employee_companies"."employee_id" = "employees"."id"
INNER JOIN "companies" ON "companies"."id" = "employee_companies"."company_id"
WHERE (employee_companies.employee_id IS NOT NULL)
AND (companies.website IS NOT NULL)
AND (employees.country = 'Uruguay')
ORDER BY employees.connections DESC
This is the plan for that query:
```
Unique  (cost=877170.24..880752.72 rows=62304 width=1064) (actual time=24023.736..26001.876 rows=73318 loops=1)
  ->  Sort  (cost=877170.24..877326.00 rows=62304 width=1064) (actual time=24023.733..24305.989 rows=77579 loops=1)
        Sort Key: employees.connections DESC, employees.id, employees.name, employees.link, employees.role, employees.area, employees.profile_picture, employees.summary, employees.current_companies, employees.previous_companies, employees.skills, employees.education, employees.languages, employees.volunteer, employees.groups, employees.interests, employees.search_vector, employees.secondary_search_vector, employees.email_status, employees.languages_count, employees.role_hierarchy
        Sort Method: external merge  Disk: 85816kB
        ->  Nested Loop  (cost=2642.38..843246.15 rows=62304 width=1064) (actual time=139.870..23056.234 rows=77579 loops=1)
              ->  Hash Join  (cost=2641.95..221744.50 rows=77860 width=1068) (actual time=139.841..22617.587 rows=77579 loops=1)
                    Hash Cond: (employees.id = employee_companies.employee_id)
                    ->  Seq Scan on employees (cost
```
Solution
Based on some simulations with guessed data, I think you can improve your query by:
- Avoiding the outer DISTINCT clause (the IN subquery makes the result implicitly distinct).
- Sub-selecting a part of the data so that less of it needs to be JOINed.
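Since the two query shapes look quite different, it is worth sanity-checking on toy data that they return the same rows. A minimal sketch in Python with SQLite (table and column names follow the simulation schema below; the rows themselves are made up, and SQLite's planner of course behaves nothing like Postgres's):

```python
import sqlite3

# Toy data to check that the IN-subquery rewrite returns the same rows
# as the original DISTINCT + double JOIN.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employees (employee_id INTEGER PRIMARY KEY,
                        country TEXT, connections INTEGER);
CREATE TABLE companies (company_id INTEGER PRIMARY KEY, website TEXT);
CREATE TABLE employee_companies (employee_id INTEGER, company_id INTEGER,
                                 PRIMARY KEY (employee_id, company_id));
INSERT INTO employees VALUES (1,'Germany',5),(2,'Germany',9),(3,'Angola',2);
INSERT INTO companies VALUES (10,'web.com'),(20,NULL),(30,'web.com');
-- employee 1 works at TWO companies with a website, so the dedup matters
INSERT INTO employee_companies VALUES (1,10),(1,30),(2,20),(3,10);
""")

original = """
    SELECT DISTINCT employees.*
    FROM employees
    JOIN employee_companies ON employee_companies.employee_id = employees.employee_id
    JOIN companies ON companies.company_id = employee_companies.company_id
    WHERE companies.website IS NOT NULL AND employees.country = 'Germany'
    ORDER BY employees.connections DESC"""

rewritten = """
    SELECT employees.*
    FROM employees
    WHERE employee_id IN (SELECT employee_id
                          FROM employee_companies
                          JOIN companies ON companies.company_id = employee_companies.company_id
                          WHERE companies.website IS NOT NULL)
      AND employees.country = 'Germany'
    ORDER BY employees.connections DESC"""

# Both shapes return each qualifying employee exactly once.
assert con.execute(original).fetchall() == con.execute(rewritten).fetchall()
print(con.execute(rewritten).fetchall())  # → [(1, 'Germany', 5)]
```

The IN predicate is a semi-join: an employee matches at most once no matter how many of their companies have a website, which is what makes the outer DISTINCT unnecessary.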
The query is as follows:
SELECT
employees.*
FROM
employees
WHERE
employee_id IN
(SELECT
-- Choose all employees from companies with website
employee_id
FROM
employee_companies
JOIN companies ON companies.company_id = employee_companies.company_id
WHERE
companies.website IS NOT NULL
)
-- Now filter only employees from 'Germany'
AND employees.country = 'Germany'
ORDER BY
employees.connections DESC ;
The data used to produce the simulation is the following:
Table and index definitions:
CREATE TABLE employees
(
employee_id integer PRIMARY KEY,
country text,
connections integer,
something_else text
) ;
CREATE INDEX idx_employee_country
ON employees (country) ;
CREATE TABLE companies
(
company_id integer PRIMARY KEY,
website text,
something_else text
) ;
CREATE INDEX not_empty_websites
ON companies(company_id, website) WHERE website IS NOT NULL ;
CREATE TABLE employee_companies
(
employee_id integer NOT NULL REFERENCES employees(employee_id),
company_id integer NOT NULL REFERENCES companies(company_id),
PRIMARY KEY (employee_id, company_id)
) ;
CREATE INDEX company_employee
ON employee_companies(company_id, employee_id) ;
1,000,000 companies (changing to 10M doesn't make a big difference); I assume 90% have a website:
INSERT INTO
companies
(company_id, website)
SELECT
generate_series(1, 1000000),
CASE WHEN random() > 0.1 THEN 'web.com' END AS website ;
80K employees (about 10% are German):
INSERT INTO
employees
(employee_id, country, connections)
SELECT
generate_series(1, 80000),
case floor(random()*10)::integer
when 0 then 'Germany'
when 1 then 'United Kingdom'
when 2 then 'United States'
else 'Angola'
end AS country,
(random()*10)::integer AS connections ;
200K employee-company rows (meaning people have worked at about 2.5 companies, on average):
INSERT INTO
employee_companies
(employee_id, company_id)
SELECT DISTINCT
(random()*79999)::integer + 1,
(random()*999999)::integer + 1
FROM
generate_series (1, 200000) ;
You can check a downsized version of this simulation at dbfiddle here. If this simulated data is sufficiently similar to your scenario, the rewritten query gives about a 3x improvement in server execution time. I'd suggest you give it a try.
Simulating a scenario more similar to your real one (scaled down by a factor of 25) doesn't offer such a nice increase in performance... Nevertheless, it still improves by a factor of about 1.5.
Check it at this dbfiddle
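The intended skew of the employee generator (roughly one employee in ten tagged 'Germany') is easy to sanity-check outside the database. A quick Python sketch; for non-negative values, Python's `int()` truncates just like SQL `floor`:

```python
import random

random.seed(42)  # fixed seed so the check is reproducible
N = 80_000

# Mirror the intended generator: a value of 0 from floor(random()*10)
# tags an employee as German, which should happen about 10% of the time.
german = sum(1 for _ in range(N) if int(random.random() * 10) == 0)
share = german / N
assert 0.09 < share < 0.11
```

Similarly, 200K rows spread over 80K employees works out to 2.5 companies per employee on average (slightly fewer after SELECT DISTINCT drops the occasional duplicate pair).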
Context
StackExchange Database Administrators Q#171975, answer score: 8