I've got a small database in Postgres with about 10,000 records storing company customers.
I have a slow-performing query (about half a second) which runs very frequently, and my boss wants me to improve it.
First off – my code:
select customer_alias.id, customer_alias.name, site.address, phone.phonenumber
from customer_alias
join customer on customer_alias.customer_id = customer.id
left join site on customer.default_site_id = site.id
left join contact_phonenumbers as phone on site.default_phonenumber = phone.id
(Edited: changed left join customer to join customer.)
What leaps out at me is that I am performing a join to customer even though I am not selecting anything from that table. I currently have to join it to get default_site_id, a foreign key to the site table.
Each customer can have multiple sites, but only one should be displayed in this list (a customer has to be opened to view all sites). So my question is: if I can't optimize the query, is there a different way I can store a default site for a particular customer? The same goes for the default phonenumber.
A customer can have many sites, but a site has only one customer (many-to-one).
EXPLAIN
returns:
Hash Join (cost=522.72..943.76 rows=5018 width=53)
Hash Cond: (customer.id = customer_alias.customer_id)
-> Hash Right Join (cost=371.81..698.77 rows=5018 width=32)
Hash Cond: (site.id = customer.default_site_id)
-> Hash Right Join (cost=184.91..417.77 rows=5018 width=32)
Hash Cond: (phone.id = site.default_phonenumber)
-> Seq Scan on contact_phonenumbers phone (cost=0.00..121.70 rows=6970 width=17)
-> Hash (cost=122.18..122.18 rows=5018 width=23)
-> Seq Scan on site (cost=0.00..122.18 rows=5018 width=23)
-> Hash (cost=124.18..124.18 rows=5018 width=8)
-> Seq Scan on customer (cost=0.00..124.18 rows=5018 width=8)
-> Hash (cost=88.18..88.18 rows=5018 width=29)
-> Seq Scan on customer_alias (cost=0.00..88.18 rows=5018 width=29)
EXPLAIN ANALYZE
returns:
Hash Join (cost=522.72..943.76 rows=5018 width=53) (actual time=12.457..26.655 rows=5018 loops=1)
Hash Cond: (customer.id = customer_alias.customer_id)
-> Hash Right Join (cost=371.81..698.77 rows=5018 width=32) (actual time=8.589..18.796 rows=5018 loops=1)
Hash Cond: (site.id = customer.default_site_id)
-> Hash Right Join (cost=184.91..417.77 rows=5018 width=32) (actual time=4.499..11.067 rows=5018 loops=1)
Hash Cond: (phone.id = site.default_phonenumber)
-> Seq Scan on contact_phonenumbers phone (cost=0.00..121.70 rows=6970 width=17) (actual time=0.007..1.581 rows=6970 loops=1)
-> Hash (cost=122.18..122.18 rows=5018 width=23) (actual time=4.465..4.465 rows=5018 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 277kB
-> Seq Scan on site (cost=0.00..122.18 rows=5018 width=23) (actual time=0.007..2.383 rows=5018 loops=1)
-> Hash (cost=124.18..124.18 rows=5018 width=8) (actual time=4.072..4.072 rows=5018 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 197kB
-> Seq Scan on customer (cost=0.00..124.18 rows=5018 width=8) (actual time=0.009..2.270 rows=5018 loops=1)
-> Hash (cost=88.18..88.18 rows=5018 width=29) (actual time=3.855..3.855 rows=5018 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 309kB
-> Seq Scan on customer_alias (cost=0.00..88.18 rows=5018 width=29) (actual time=0.008..1.664 rows=5018 loops=1)
Total runtime: 27.290 ms
Table structures
/* ---------------------------------------------------------------------------
 * Table: contacts.customer
 * -------------------------------------------------------------------------*/
CREATE TABLE contacts.customer
(
  id SERIAL NOT NULL,
  name integer,            -- Foreign key to contacts.customer_alias
  default_site_id integer, -- Foreign key to contacts.site
  -- 12 other fields unrelated to query
  CONSTRAINT customer_pkey PRIMARY KEY (id),
  CONSTRAINT default_site_id FOREIGN KEY (default_site_id)
      REFERENCES contacts.site (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
);
-- circular foreign key to customer_alias added later
-- Indexed: id, name, default_site_id (btree)

/* ---------------------------------------------------------------------------
 * Table: contacts.customer_alias
 * -------------------------------------------------------------------------*/
CREATE TABLE contacts.customer_alias
(
  id SERIAL NOT NULL,
  customer_id integer,
  name text,
  -- 5 other fields (not used in query)
  CONSTRAINT customer_alias_pkey PRIMARY KEY (id),
  CONSTRAINT customer_alias_customer_id_fkey FOREIGN KEY (customer_id)
      REFERENCES contacts.customer (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
      DEFERRABLE INITIALLY DEFERRED
);
-- Indexed: id, customer_id (btree)

-- customer_alias foreign key (the circular reference, added later)
ALTER TABLE contacts.customer
  ADD CONSTRAINT customer_name_fkey FOREIGN KEY (name)
      REFERENCES contacts.customer_alias (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION;

/* ---------------------------------------------------------------------------
 * Table: contacts.site
 * -------------------------------------------------------------------------*/
CREATE TABLE contacts.site
(
  id SERIAL NOT NULL,
  customer_id integer,
  address text,
  default_contact_id integer,
  default_phonenumber integer,
  -- 9 other unrelated fields
  CONSTRAINT site_pkey PRIMARY KEY (id),
  CONSTRAINT site_customer_id_fkey FOREIGN KEY (customer_id)
      REFERENCES contacts.customer (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
);
-- Indexed: id, customer_id, default_contact_id, default_phonenumber (btree)

/* ---------------------------------------------------------------------------
 * Table: contacts.contact_phonenumbers
 * -------------------------------------------------------------------------*/
CREATE TABLE contacts.contact_phonenumbers
(
  id SERIAL NOT NULL,
  site_id integer,
  phonenumber text,
  -- 4 other unrelated fields
  CONSTRAINT contact_phonenumbers_pkey PRIMARY KEY (id)
);
-- Indexed: id, site_id (btree)
If I run the query client-side, through ODBC, it takes between 450-500 milliseconds. If I run it in pgAdmin III, it reports approx. 250 milliseconds, although sometimes it takes 60-100 ms (which is what I'm aiming for).
I don't currently have SSH access to the server, so I can't run it directly.
I only view about 100 of these rows on screen at any time; is it possible to retrieve only the relevant rows? I tried limiting the result, for instance LIMIT 100 OFFSET 2345, but that performs a fresh search every time.
Thanks very much for the help so far!
Best Answer
You write:

"Each customer can have multiple sites, but only one should be displayed in this list."
Yet, your query retrieves all rows. That would be a point to optimize. But you also do not define which site is to be picked. Either way, it does not matter much here: your EXPLAIN shows only 5026 rows for the site scan (5018 for the customer scan), so hardly any customer actually has more than one site. Did you ANALYZE your tables before running EXPLAIN?
From the numbers I see in your EXPLAIN, indexes will give you nothing for this query; sequential table scans will be the fastest possible way. Half a second is rather slow for 5000 rows, though. Maybe your database needs some general performance tuning?
Or maybe the query itself is faster, and "half a second" includes network transfer? EXPLAIN ANALYZE would tell us more.
If this query is your bottleneck, I would suggest you implement a materialized view.
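As a minimal sketch of that idea, assuming Postgres 9.3 or later (on older versions you would maintain an ordinary table yourself); the name customer_list, the sort order, and the index name are placeholders, not part of the original question:

```sql
-- Sketch only: "customer_list" and the ORDER BY are assumptions.
CREATE MATERIALIZED VIEW customer_list AS
SELECT row_number() OVER (ORDER BY ca.name) AS mv_id  -- gap-free row number
     , ca.id, ca.name, s.address, p.phonenumber
FROM   customer_alias ca
JOIN   customer c ON ca.customer_id = c.id
LEFT   JOIN site s ON c.default_site_id = s.id
LEFT   JOIN contact_phonenumbers p ON s.default_phonenumber = p.id;

CREATE UNIQUE INDEX customer_list_mv_id_idx ON customer_list (mv_id);

-- Re-run after the underlying tables change:
REFRESH MATERIALIZED VIEW customer_list;
```

The trade-off is staleness: the view only reflects the base tables as of the last refresh, which is usually acceptable for a browse list like this.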
After you provided more information, I find that my diagnosis pretty much holds.
The query itself needs 27 ms, so not much of a problem there. "Half a second" was the kind of misunderstanding I had suspected: the slow part is the network transfer (plus ssh encoding/decoding, possibly rendering). You should only retrieve 100 rows; that would solve most of it, even if it means executing the whole query every time.
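For instance, the original query trimmed to one page; the ORDER BY key here is an assumption, since the question does not state the sort order:

```sql
-- Same query as in the question, but only the first 100 rows.
SELECT ca.id, ca.name, s.address, p.phonenumber
FROM   customer_alias ca
JOIN   customer c ON ca.customer_id = c.id
LEFT   JOIN site s ON c.default_site_id = s.id
LEFT   JOIN contact_phonenumbers p ON s.default_phonenumber = p.id
ORDER  BY ca.name   -- assumed sort key
LIMIT  100;
```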
If you go the route with a materialized view like I proposed, you could add a serial number without gaps to the table, plus an index on it, by adding a column row_number() OVER (<your sort criteria here>) AS mv_id. Then you can query:
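For example, assuming the materialized view is named customer_list and the page starts at row 2345 (both placeholders), a page becomes a simple index range scan:

```sql
SELECT *
FROM   customer_list
WHERE  mv_id BETWEEN 2345 AND 2444  -- 100 rows for the current page
ORDER  BY mv_id;
```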
This will perform very fast.
LIMIT / OFFSET cannot compete; that has to compute the whole result before it can sort and pick 100 rows.
pgAdmin timing
When you execute a query from the query tool, the message pane shows something like:
And the status line shows the same time. I quote pgAdmin help about that:
If you want to see the time on the server, you need to use SQL EXPLAIN ANALYZE, the built-in Shift + F7 keyboard shortcut, or Query -> Explain analyze. Then, at the bottom of the explain output, you get something like this: