Primary and unique keys in (as far as I know) all RDBMSes are backed by indexes, which let the database quickly determine whether a newly inserted value is in fact unique.
The side effect of this is that queries via primary and unique keys are usually "fast".
Now, if you haven't defined primary or unique keys on your tables:
- You don't have a relational table; you have junk (OK, this is a contentious opinion, but a relational model needs keys on all tables).
- Queries on this table (in the absence of any other indexes) will become slower as more data is inserted into it.
So yes, the absence of primary keys can cause this!
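To make the mechanism concrete, here is a minimal PostgreSQL sketch (the table and column names are made up for illustration, not taken from the question):

```sql
-- Hypothetical table with no key at all.
CREATE TABLE orders (order_id integer, placed_on date);

-- Adding a primary key implicitly creates a unique btree index
-- (named orders_pkey); the index is what makes both the uniqueness
-- check on INSERT and lookups by key fast.
ALTER TABLE orders ADD PRIMARY KEY (order_id);

-- This lookup can now use the index instead of a sequential scan:
EXPLAIN SELECT * FROM orders WHERE order_id = 42;
```

Without the key, the same `SELECT` has no choice but to scan the whole table, and that cost grows with the row count.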
Next time, please include the EXPLAIN output rather than making us dig for it in your scripts. There's no guarantee my system uses the same plan as yours (although with your test data it is likely).
The rule system here is working properly. First, I want to include my own diagnostic queries (note that I did not run EXPLAIN ANALYSE, since I was only interested in what query plan was generated):
rulestest=# explain DELETE FROM user_hits WHERE day = '2013-03-16';
                                      QUERY PLAN
--------------------------------------------------------------------------------------
 Delete on application_hits  (cost=0.00..3953181.85 rows=316094576 width=24)
   ->  Nested Loop  (cost=0.00..3953181.85 rows=316094576 width=24)
         ->  Seq Scan on user_hits  (cost=0.00..1887.00 rows=49763 width=10)
               Filter: (day = '2013-03-16'::date)
         ->  Materialize  (cost=0.00..128.53 rows=6352 width=22)
               ->  Nested Loop  (cost=0.00..96.78 rows=6352 width=22)
                     ->  Seq Scan on project_hits  (cost=0.00..14.93 rows=397 width=10)
                           Filter: (day = '2013-03-16'::date)
                     ->  Materialize  (cost=0.00..2.49 rows=16 width=16)
                           ->  Nested Loop  (cost=0.00..2.41 rows=16 width=16)
                                 ->  Seq Scan on application_hits  (cost=0.00..1.10 rows=4 width=10)
                                       Filter: (day = '2013-03-16'::date)
                                 ->  Materialize  (cost=0.00..1.12 rows=4 width=10)
                                       ->  Seq Scan on client_hits  (cost=0.00..1.10 rows=4 width=10)
                                             Filter: (day = '2013-03-16'::date)
 Delete on client_hits  (cost=0.00..989722.41 rows=79023644 width=18)
   ->  Nested Loop  (cost=0.00..989722.41 rows=79023644 width=18)
         ->  Seq Scan on user_hits  (cost=0.00..1887.00 rows=49763 width=10)
               Filter: (day = '2013-03-16'::date)
         ->  Materialize  (cost=0.00..43.83 rows=1588 width=16)
               ->  Nested Loop  (cost=0.00..35.89 rows=1588 width=16)
                     ->  Seq Scan on project_hits  (cost=0.00..14.93 rows=397 width=10)
                           Filter: (day = '2013-03-16'::date)
                     ->  Materialize  (cost=0.00..1.12 rows=4 width=10)
                           ->  Seq Scan on client_hits  (cost=0.00..1.10 rows=4 width=10)
                                 Filter: (day = '2013-03-16'::date)
 Delete on project_hits  (cost=0.00..248851.80 rows=19755911 width=12)
   ->  Nested Loop  (cost=0.00..248851.80 rows=19755911 width=12)
         ->  Seq Scan on user_hits  (cost=0.00..1887.00 rows=49763 width=10)
               Filter: (day = '2013-03-16'::date)
         ->  Materialize  (cost=0.00..16.91 rows=397 width=10)
               ->  Seq Scan on project_hits  (cost=0.00..14.93 rows=397 width=10)
                     Filter: (day = '2013-03-16'::date)
 Delete on user_hits  (cost=0.00..1887.00 rows=49763 width=6)
   ->  Seq Scan on user_hits  (cost=0.00..1887.00 rows=49763 width=6)
         Filter: (day = '2013-03-16'::date)
(39 rows)
rulestest=# select distinct day from application_hits;
day
------------
2013-03-15
2013-03-16
(2 rows)
rulestest=# select count(*), day from application_hits group by day;
count | day
-------+------------
4 | 2013-03-15
4 | 2013-03-16
(2 rows)
rulestest=# select count(*), day from client_hits group by day;
count | day
-------+------------
4 | 2013-03-15
4 | 2013-03-16
(2 rows)
rulestest=# select count(*), day from project_hits group by day;
count | day
-------+------------
397 | 2013-03-15
397 | 2013-03-16
(2 rows)
If your real data is anything like your test data, neither rules nor triggers will work very well. Better is a stored procedure that you pass a value and that deletes everything you want.
First, let's note that indexes will get you nowhere here, because in every case you are pulling half of each table (I did add indexes on day on all tables to help the planner, but this made no real difference).
You need to start with what you are doing with RULEs. RULEs basically rewrite queries, and they do so in ways that are as robust as possible. Your code also doesn't match your example, though it matches your question better. You have rules on tables which cascade to rules on other tables, which in turn cascade to rules on yet other tables.
Therefore, when you DELETE FROM user_hits WHERE [criteria], the rules transform this into a set of queries:
DELETE FROM application_hits
WHERE day IN (SELECT day FROM client_hits
WHERE day IN (SELECT day FROM user_hits WHERE [condition]));
DELETE FROM client_hits
WHERE day IN (SELECT day FROM user_hits WHERE [condition]);
DELETE FROM user_hits WHERE [condition];
Now, you might think we could skip the scan on client_hits in the first query, but that isn't what happens here. The problem is that you could have days in user_hits and application_hits that are not in client_hits, so you really have to scan all the tables.
There is no magic bullet here. A trigger isn't going to work much better because, while it avoids scanning every table, it is fired for every row that gets deleted, so you basically end up with the same nested-loop sequential scans that are currently killing performance. It will work a bit better because it deletes rows along the way rather than rewriting the query along the way, but it isn't going to perform very well.
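For comparison, a per-row trigger version might look roughly like this (a sketch only; the function and trigger names are my own, and the question's table names are assumed). Note how the body runs once per deleted row, which is exactly the repeated work described above:

```sql
CREATE OR REPLACE FUNCTION cascade_delete_hits() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    -- Fired once per deleted user_hits row, so these statements
    -- run again and again for rows sharing the same day.
    DELETE FROM application_hits WHERE day = OLD.day;
    DELETE FROM project_hits WHERE day = OLD.day;
    DELETE FROM client_hits WHERE day = OLD.day;
    RETURN OLD;
END;
$$;

CREATE TRIGGER user_hits_cascade
    BEFORE DELETE ON user_hits
    FOR EACH ROW EXECUTE PROCEDURE cascade_delete_hits();
```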
A much better solution is to just define a stored procedure and have the application call that. Something like:
CREATE OR REPLACE FUNCTION delete_stats_at_date(in_day date) RETURNS BOOL
LANGUAGE SQL AS
$$
DELETE FROM application_hits WHERE day = $1;
DELETE FROM project_hits WHERE day = $1;
DELETE FROM client_hits WHERE day = $1;
DELETE FROM user_hits WHERE day = $1;
SELECT TRUE;
$$;
On the test data this runs in 280 ms on my laptop.
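The application then replaces its DELETE with a single call to the function:

```sql
SELECT delete_stats_at_date('2013-03-16');
```

This way each table is scanned exactly once, with no per-row firing and no query rewriting.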
One of the hard things regarding RULEs is remembering what they are and noting that the computer cannot, in fact, read your mind. This is why I would not consider them a beginner's tool.
Best Answer
Right, the TRUNCATE TABLE command you are performing "... acquires an ACCESS EXCLUSIVE lock on each table it operates on", so in the first SQL block you posted, any other clients attempting to access the table after that point will be blocked until your INSERT finishes and you COMMIT.
You can use the same workaround as in your MySQL-specific code; Postgres supports roughly the same syntax and will have similar locking behavior. To wit:
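I don't have your MySQL code in front of me, but assuming it uses the usual build-a-new-table-and-swap pattern, a Postgres sketch would be (the table name stats is illustrative):

```sql
BEGIN;
-- Build the replacement table without touching the live one.
CREATE TABLE stats_new (LIKE stats INCLUDING ALL);
-- ... repopulate stats_new here ...
-- Swap the tables; readers of "stats" only block during the renames.
ALTER TABLE stats RENAME TO stats_old;
ALTER TABLE stats_new RENAME TO stats;
DROP TABLE stats_old;
COMMIT;
```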
Extra bonus: Postgres actually supports transactional DDL, unlike MySQL, so if you need to ROLLBACK the above transaction, you may safely do so.