You write:
Each customer can have multiple sites, but only one should be displayed in this list.
Yet, your query retrieves all rows. That would be a point to optimize. But you also do not define which site is to be picked.
Either way, it does not matter much here. Your EXPLAIN shows only 5026 rows for the site scan (5018 for the customer scan). So hardly any customer actually has more than one site. Did you ANALYZE your tables before running EXPLAIN?
From the numbers I see in your EXPLAIN, indexes will give you nothing for this query. Sequential table scans will be the fastest possible way. Half a second is rather slow for 5000 rows, though. Maybe your database needs some general performance tuning?
Maybe the query itself is faster, but "half a second" includes network transfer? EXPLAIN ANALYZE would tell us more.
If this query is your bottleneck, I would suggest you implement a materialized view.
After you provided more information, I find that my diagnosis pretty much holds.
The query itself needs 27 ms. Not much of a problem there. "Half a second" was the kind of misunderstanding I had suspected. The slow part is the network transfer (plus ssh encoding / decoding, possibly rendering). You should only retrieve 100 rows, which would solve most of it, even if it means executing the whole query every time.
If you go the route with a materialized view like I proposed, you could add a serial number without gaps plus an index on it - by adding a column row_number() OVER (<your sort criteria here>) AS mv_id.
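A minimal sketch of how that could look, assuming PostgreSQL 9.3+ (where CREATE MATERIALIZED VIEW exists; on older versions you would maintain a plain table the same way). The table customer and the sort column name are placeholders, not from your setup:

CREATE MATERIALIZED VIEW customer_mv AS
SELECT row_number() OVER (ORDER BY name) AS mv_id  -- gapless serial number
     , *
FROM   customer;

CREATE UNIQUE INDEX customer_mv_mv_id_idx ON customer_mv (mv_id);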
Then you can query:
SELECT *
FROM materialized_view
WHERE mv_id >= 2700
AND mv_id < 2800;
This will perform very fast. LIMIT / OFFSET cannot compete, as that needs to compute the whole table before it can sort and pick 100 rows.
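For contrast, the LIMIT / OFFSET way to fetch a comparable page would look like this (the sort column is an assumption):

SELECT *
FROM   materialized_view
ORDER  BY name  -- placeholder for your sort criteria
LIMIT  100
OFFSET 2700;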
pgAdmin timing
When you execute a query from the query tool, the message pane shows something like:
Total query runtime: 62 ms.
And the status line shows the same time. I quote pgAdmin help about that:
The status line will show how long the last query took to complete. If a dataset was returned, not only the elapsed time for server execution is displayed, but also the time to retrieve the data from the server to the Data Output page.
If you want to see the time on the server, you need to use SQL EXPLAIN ANALYZE, the built-in Shift + F7 keyboard shortcut, or Query -> Explain analyze. Then, at the bottom of the explain output, you get something like this:
Total runtime: 0.269 ms
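For instance (the table name is just an example):

EXPLAIN ANALYZE
SELECT * FROM customer;  -- executes the query and reports server-side timing only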
This is a bit of a FAQ. You'd find more information if you searched for ON DUPLICATE KEY UPDATE (the MySQL syntax), MERGE (the SQL-standard syntax), or UPSERT. It's surprisingly hard.
The best article I've seen on it yet is Depesz's "why is upsert so complicated". There's also the SO question Insert, on duplicate update (postgresql) which has suggestions but lacks explanation and discussion of the issues.
The short answer is that, yes:
It seems the only option I have is to abort the entire transaction and try again.
When using SERIALIZABLE transactions you just have to re-issue them when they fail. Which they will. By design - and much more frequently on Pg 9.1 and above because of greatly improved conflict detection. Upsert-like operations are very high conflict, so you may land up retrying quite a bit. If you can do your upserts in READ COMMITTED transactions instead it'll help, but you should still be prepared to retry because there are some unavoidable race conditions.
Let the transaction fail with a unique violation when you insert the conflicting row. If you get a SQLSTATE 23505 unique_violation failure from the transaction and you know you were attempting an upsert, re-try it. If you get a SQLSTATE 40001 serialization_failure you should also retry.
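A minimal sketch of the failure that signals a retry, using a made-up table (not from the question):

CREATE TABLE upsert_demo (id int PRIMARY KEY, val text);
INSERT INTO upsert_demo VALUES (1, 'first');
INSERT INTO upsert_demo VALUES (1, 'second');
-- ERROR with SQLSTATE 23505 (unique_violation): the application's cue
-- to turn the failed INSERT into an UPDATE and try again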
You fundamentally cannot do that retry within a PL/PgSQL function (without dirty hacks like dblink); it must be application side. If PostgreSQL had stored procedures with autonomous transactions then it'd be possible, but it doesn't. In READ COMMITTED mode you can check for conflicting inserts made since the transaction started, but not after the statement that calls the PL/PgSQL function started, so even in READ COMMITTED your "detect conflict with select" approach simply will not work.
Read depesz's article for a much better and more detailed explanation.
Best Answer
You can use the functions array_upper() and array_lower() to identify array dimensions dynamically. While being at it: if arrays can have non-standard dimensions, it is safer to use both actual bounds rather than assuming they start at 1.
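The original code did not survive here; a minimal sketch of the idea, assuming a table tbl with an integer array column arr (both names are placeholders):

SELECT arr[array_lower(arr, 1):array_upper(arr, 1)] AS whole_arr
FROM   tbl;
-- slices from the actual lower to the actual upper bound,
-- whatever those happen to be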
One subtle difference: while your code returns an empty array for an empty array in the base table, my alternatives return NULL, since array_lower() and array_upper() return NULL. (For NULL in the base table, all variants return NULL.) I suggest to use a more revealing test case, so you don't miss corner cases:
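The test data from the original answer is missing; a stand-in covering the corner cases could look like this (values are assumptions):

CREATE TABLE tbl (id int, arr int[]);
INSERT INTO tbl VALUES
  (1, '{1,2,3}')        -- plain array
, (2, '{}')             -- empty array
, (3, NULL)             -- NULL
, (4, '[3:5]={7,8,9}'); -- non-standard lower bound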
There is another "hack" to do exactly the same as your original, but never cut off elements: use the maximum possible array bounds (which are integer numbers), as sketched below. Inspired by Daniel's answer on my related question.
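The original snippet is gone as well; the trick, shown with the same assumed table, relies on the integer minimum and maximum:

SELECT arr[-2147483648:2147483647] AS whole_arr
FROM   tbl;
-- the slice covers every possible subscript, so nothing is ever cut off;
-- an empty array stays an empty array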