You write:
Each customer can have multiple sites, but only one should be
displayed in this list.
Yet your query retrieves all rows, which would be one point to optimize. You also do not define which site is to be picked.
Either way, it does not matter much here. Your EXPLAIN shows only 5026 rows for the site scan (5018 for the customer scan), so hardly any customer actually has more than one site. Did you ANALYZE your tables before running EXPLAIN?
From the numbers I see in your EXPLAIN, indexes will give you nothing for this query; sequential table scans will be the fastest possible way. Half a second is rather slow for 5000 rows, though. Maybe your database needs some general performance tuning?
Maybe the query itself is faster, but "half a second" includes network transfer? EXPLAIN ANALYZE would tell us more.
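For instance, prefix the statement with EXPLAIN ANALYZE to get the actual server-side execution time (the query below is a placeholder; the table and column names are assumptions, substitute your own):

```sql
-- Placeholder query; replace with your actual statement.
EXPLAIN ANALYZE
SELECT c.*, s.*
FROM   customer c
JOIN   site s ON s.customer_id = c.id;
```

The "Total runtime" at the bottom of the output excludes network transfer, so comparing it to the half second you observe would confirm or refute the theory.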
If this query is your bottleneck, I would suggest you implement a materialized view.
After you provided more information, I find that my diagnosis pretty much holds. The query itself needs 27 ms, so not much of a problem there. "Half a second" was the kind of misunderstanding I had suspected: the slow part is the network transfer (plus ssh encoding/decoding, and possibly rendering). You should only retrieve 100 rows; that would solve most of it, even if it means executing the whole query every time.
If you go the route with a materialized view like I proposed, you could add a gapless serial number to the table, plus an index on it, by adding a column row_number() OVER (<your sort criteria here>) AS mv_id.
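A minimal sketch of what that could look like (the relation names and sort criteria here are placeholders, not your actual schema; native materialized views require PostgreSQL 9.3 or later, on older versions you would maintain a plain table instead):

```sql
-- Gapless serial number via row_number(); use your own sort criteria.
CREATE MATERIALIZED VIEW customer_mv AS
SELECT row_number() OVER (ORDER BY c.name, c.id) AS mv_id
     , c.*
FROM   customer c;

-- The index that makes range queries on mv_id fast:
CREATE UNIQUE INDEX customer_mv_mv_id_idx ON customer_mv (mv_id);

-- Refresh whenever the base data has changed enough to matter:
REFRESH MATERIALIZED VIEW customer_mv;
```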
Then you can query:
SELECT *
FROM materialized_view
WHERE mv_id >= 2700
AND mv_id < 2800;
This will perform very fast. LIMIT / OFFSET cannot compete; that approach needs to compute the whole result before it can sort and skip to the 100 rows you want.
pgAdmin timing
When you execute a query from the query tool, the message pane shows something like:
Total query runtime: 62 ms.
And the status line shows the same time. I quote pgAdmin help about that:
The status line will show how long the last query took to complete. If
a dataset was returned, not only the elapsed time for server execution
is displayed, but also the time to retrieve the data from the server
to the Data Output page.
If you want to see the time on the server, use SQL EXPLAIN ANALYZE, the built-in Shift + F7 keyboard shortcut, or Query -> Explain analyze. Then, at the bottom of the explain output, you get something like this:
Total runtime: 0.269 ms
On the face of it, that does seem impossible.
The thing is, the error suggests you are not actually running a delete at all. The message you're getting indicates you're trying to insert or update a row in the child table, not delete a row from the parent table. If the foreign key you posted were causing the problem on a delete, you should see this message instead:
Cannot delete or update a parent row: a foreign key constraint fails (`test`.`bar`, CONSTRAINT `bar_ibfk_1` FOREIGN KEY (`foo_id`) REFERENCES `foo` (`id`))
It's also possible you have some BEFORE DELETE trigger magic on survey_main that's doing something unexpected.
Right after this error occurs, try this:
SHOW ENGINE INNODB STATUS;
The LATEST FOREIGN KEY ERROR section should give you something more to go on. Failing that, you could enable the general log, which will show queries executed by triggers and other stored programs, as well as the queries you're directly executing, to shed light on what might be going on behind the scenes.
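One way to enable the general log without touching the config file, sketched here with the log routed to a table so it is easy to query (verify the details against the documentation for your MySQL version):

```sql
SET GLOBAL log_output  = 'TABLE';   -- write the log to mysql.general_log
SET GLOBAL general_log = 'ON';

-- ... reproduce the failing statement, then inspect what actually ran:
SELECT event_time, command_type, argument
FROM   mysql.general_log
ORDER  BY event_time DESC
LIMIT  50;

SET GLOBAL general_log = 'OFF';     -- don't leave this enabled in production
```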
Update #1: Things are definitely not as they seem, and the full table definitions are going to be pretty critical here. The version of MySQL you're using may also be relevant, so please mention it.
With nothing more to go on at the moment, I'm speculating that you have invalid data in the survey_id column of the survey_answers table. To test that theory:
SELECT *
FROM survey_answers sa
LEFT JOIN survey_main sm ON sm.id = sa.survey_id
WHERE sm.id IS NULL;
If I understand your schema correctly, then this query will return zero rows if I am wrong. :) If you get rows back, those survey_answers rows contain a survey_id value that doesn't exist in the id column of survey_main.
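If that check does return rows, and the orphaned answers are safe to discard, MySQL's multi-table DELETE can remove them in one statement (back up first; whether deleting, rather than re-pointing, is the right fix depends entirely on your data):

```sql
-- Remove survey_answers rows whose survey_id has no matching parent.
DELETE sa
FROM   survey_answers sa
LEFT   JOIN survey_main sm ON sm.id = sa.survey_id
WHERE  sm.id IS NULL;
```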
Best Answer
It depends on the amount of data, your indexes, and the speed of your IO subsystem (the latter particularly if supporting indexes aren't in place).
As you already have 220 million rows in the child table, the database must check each of them against the parent table to make sure they are all valid values. Without a useful index on events.visitor this will mean scanning the whole table. Note that creating a foreign key does not automatically create a supporting index, because depending on your access patterns such an index may not be needed, so it would be a waste of space.
You can tell the DB not to validate the foreign key constraint for existing rows by adding NOT VALID (see the documentation at https://www.postgresql.org/docs/current/sql-altertable.html for the details and caveats). That is the fastest way, but it can leave you with invalid rows that you do not know about until they cause a problem. (NOT VALID is Postgres-specific syntax; other DBs that offer the option may name it differently, for instance the MS SQL Server equivalent is WITH NOCHECK.)
The second fastest method is to make sure there is an index on that column before defining the foreign key, though creating that index will itself take a notable amount of time on a table with that many rows.
It may be that at the point of the later attempt much of the table was already in memory, having been read during the earlier aborted create, so scanning to validate the existing rows was faster the second time around due to fewer disk reads being needed.