This can be improved in a thousand and one ways; then it should be a matter of milliseconds.
Better Queries
This is just your query reformatted with aliases and some noise removed to clear the fog:
SELECT count(DISTINCT t.id)
FROM tickets t
JOIN transactions tr ON tr.objectid = t.id
JOIN attachments a ON a.transactionid = tr.id
WHERE t.status <> 'deleted'
AND t.type = 'ticket'
AND t.effectiveid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate');
Most of the problem with your query lies in the first two tables, tickets and transactions, which are missing from the question. I'm filling in with educated guesses.
t.status, t.type and tr.objecttype should probably not be text, but enum or possibly some very small value referencing a look-up table.
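For example, a minimal sketch of switching status over to an enum. The label set here is an assumption; use whatever values your installation actually stores:
```sql
-- Hypothetical label set - adjust to the values actually present:
CREATE TYPE ticket_status AS ENUM ('new', 'open', 'resolved', 'deleted');

ALTER TABLE tickets
   ALTER COLUMN status TYPE ticket_status
   USING status::ticket_status;
```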
EXISTS semi-join
Assuming tickets.id is the primary key, this rewritten form should be much cheaper:
SELECT count(*)
FROM tickets t
WHERE status <> 'deleted'
AND type = 'ticket'
AND effectiveid = id
AND EXISTS (
SELECT 1
FROM transactions tr
JOIN attachments a ON a.transactionid = tr.id
WHERE tr.objectid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate')
);
Instead of multiplying rows with two 1:n joins, only to collapse multiple matches in the end with count(DISTINCT id), use an EXISTS semi-join, which can stop looking as soon as the first match is found, and at the same time makes the final DISTINCT step obsolete. Per documentation:
The subquery will generally only be executed long enough to determine
whether at least one row is returned, not all the way to completion.
Effectiveness depends on how many transactions per ticket and attachments per transaction there are.
Determine order of joins with join_collapse_limit
If you know that your search term for attachments.contentindex is very selective - more selective than the other conditions in the query (which is probably the case for 'frobnicate', but not for 'problem') - you can force the sequence of joins. The query planner can hardly judge the selectivity of particular words, except for the most common ones. Per documentation:
join_collapse_limit (integer)
[...]
Because the query planner does not always choose the optimal
join order, advanced users can elect to temporarily set this variable
to 1, and then specify the join order they desire explicitly.
Use SET LOCAL to set it for the current transaction only.
BEGIN;
SET LOCAL join_collapse_limit = 1;
SELECT count(DISTINCT t.id)
FROM attachments a -- 1st
JOIN transactions tr ON tr.id = a.transactionid -- 2nd
JOIN tickets t ON t.id = tr.objectid -- 3rd
WHERE t.status <> 'deleted'
AND t.type = 'ticket'
AND t.effectiveid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate');
ROLLBACK; -- or COMMIT;
The order of WHERE conditions is always irrelevant. Only the order of joins is relevant here.
Or use a CTE, like @jjanes explains in "Option 2", for a similar effect.
Indexes
B-tree indexes
Take all conditions on tickets that are used identically in most queries and create a partial index on tickets:
CREATE INDEX tickets_partial_idx
ON tickets(id)
WHERE status <> 'deleted'
AND type = 'ticket'
AND effectiveid = id;
If one of the conditions is variable, drop it from the WHERE clause and prepend the column as an index column instead.
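For instance, if type varies between queries while the other two conditions stay constant, the index might look like this (a sketch; the index name is made up):
```sql
CREATE INDEX tickets_partial2_idx
ON tickets (type, id)
WHERE status <> 'deleted'
AND effectiveid = id;
```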
Another one on transactions:
CREATE INDEX transactions_partial_idx
ON transactions(objecttype, objectid, id);
The third column is just to enable index-only scans.
Also, since you have this composite index with two integer columns on attachments:
"attachments3" btree (parent, transactionid)
This additional index is a complete waste; delete it:
"attachments1" btree (parent)
GIN index
Add transactionid to your GIN index to make it a lot more effective. This may be another silver bullet, because it potentially allows index-only scans, eliminating visits to the big table completely. You need additional operator classes, provided by the additional module btree_gin:
"contentindex_idx" gin (transactionid, contentindex)
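A sketch of the setup, replacing the old single-column GIN index with the two-column version shown above (run as a role with the necessary privileges):
```sql
CREATE EXTENSION IF NOT EXISTS btree_gin;  -- GIN operator classes for integer etc.

DROP INDEX contentindex_idx;               -- the old single-column GIN index
CREATE INDEX contentindex_idx ON attachments
USING gin (transactionid, contentindex);
```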
4 bytes from an integer column don't make the index much bigger. Also, fortunately for you, GIN indexes are different from B-tree indexes in a crucial aspect. Per documentation:
A multicolumn GIN index can be used with query conditions that involve
any subset of the index's columns. Unlike B-tree or GiST, index search
effectiveness is the same regardless of which index column(s) the
query conditions use.
Bold emphasis mine. So you just need the one (big and somewhat costly) GIN index.
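That means either of these hypothetical queries can use the same two-column GIN index effectively:
```sql
-- full-text condition alone:
SELECT * FROM attachments
WHERE contentindex @@ plainto_tsquery('frobnicate');

-- or combined with the integer column:
SELECT * FROM attachments
WHERE transactionid = 123
AND contentindex @@ plainto_tsquery('frobnicate');
```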
Table definition
Move the integer not null columns to the front. This has a couple of minor positive effects on storage and performance. Saves 4 - 8 bytes per row in this case.
Table "public.attachments"
Column | Type | Modifiers
-----------------+-----------------------------+------------------------------
id | integer | not null default nextval('...
transactionid | integer | not null
parent | integer | not null default 0
creator | integer | not null default 0 -- !
created | timestamp | -- !
messageid | character varying(160) |
subject | character varying(255) |
filename | character varying(255) |
contenttype | character varying(80) |
contentencoding | character varying(80) |
content | text |
headers | text |
contentindex | tsvector |
Original answer is wrong! See Edit 1 for the corrected version.
Original answer
An amazing solution would require a foreign key to a column in a view or an inherited table, but unfortunately PostgreSQL (I suppose that's your RDBMS because of the tag) does not have that (yet).
I think a simple change in the way you organize the data would suffice: create a table like ItemAvailableQuantity, connecting an Item with its availability, which will be referenced in the orders. When an item is not available anymore, DELETE it from that table.
CREATE TABLE Item (
id serial NOT NULL
, name text NOT NULL
, cost numeric
, PRIMARY KEY (id)
, CONSTRAINT positive_cost
CHECK (cost > 0)
);
CREATE TABLE ItemAvailableQuantity (
id serial NOT NULL
, item_id integer NOT NULL
, quantity integer NOT NULL
, PRIMARY KEY (id)
, FOREIGN KEY (item_id)
REFERENCES Item (id)
ON UPDATE CASCADE
ON DELETE CASCADE
, CONSTRAINT positive_quantity -- This constraint is the same as
CHECK (quantity > 0) -- checking something like `available = TRUE`.
);
CREATE TABLE ItemOrder ( -- Changed the name from `Order` because
id serial NOT NULL -- `ORDER` is a reserved word in SQL
, bill_id integer NOT NULL
, item_id integer NOT NULL
, units integer NOT NULL
, PRIMARY KEY (id)
, FOREIGN KEY (item_id)
REFERENCES ItemAvailableQuantity (id)
ON UPDATE CASCADE
ON DELETE CASCADE
-- Uncomment when `Bill` table is ready
-- , FOREIGN KEY (bill_id)
-- REFERENCES Bill (id)
-- ON UPDATE CASCADE
-- ON DELETE CASCADE
, CONSTRAINT positive_units
CHECK (units > 0)
);
Notice! The constraint positive_units may cause problems when your software reduces the units and reaches 0. Make it something like CHECK (units >= 0) if needed, or add a trigger that automatically DELETEs rows when the quantity reaches 0 (or less) on each INSERT or UPDATE. This would keep the table ItemAvailableQuantity holding only actually available items, which is what we want for being referenced from the table ItemOrder.
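A sketch of such a trigger on ItemAvailableQuantity (function and trigger names are made up). Since a BEFORE trigger that returns NULL cancels the UPDATE, the CHECK (quantity > 0) constraint is never violated:
```sql
CREATE OR REPLACE FUNCTION remove_exhausted_quantity()
  RETURNS trigger AS
$$
BEGIN
   IF NEW.quantity <= 0 THEN
      -- drop the row instead of updating it to 0
      DELETE FROM ItemAvailableQuantity WHERE id = OLD.id;
      RETURN NULL;  -- cancels the UPDATE, so CHECK (quantity > 0) never fires
   END IF;
   RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER itemavailablequantity_zero_del
BEFORE UPDATE OF quantity ON ItemAvailableQuantity
FOR EACH ROW EXECUTE PROCEDURE remove_exhausted_quantity();
```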
This should solve your problem. It's not an exact answer to your question; that would involve a trigger or a CHECK calling a function, as in the link you provided.
To easily see the quantity of the items, just create a view that joins ItemAvailableQuantity and Item. If you really want, make it INSERT-able with a trigger (see the yellow-box warning).
Edit 1
Actually, Order (a.k.a. ItemOrder) should reference Item instead of ItemAvailableQuantity to avoid any problem when the Item is not currently available, as stated in the comment.
This suggests we should remove the whole table ItemAvailableQuantity and only add a column available_quantity on Item.
CREATE TABLE Item (
id serial NOT NULL
, name text NOT NULL
, cost numeric
, available_quantity integer NOT NULL
, PRIMARY KEY (id)
, CONSTRAINT positive_cost
CHECK (cost > 0)
, CONSTRAINT non_negative_quantity
CHECK (available_quantity >= 0)
);
Then, to be certain of inserting only available items into orders we could just run
INSERT INTO ItemOrder (bill_id, item_id, units)
VALUES (
(SELECT id FROM Bill WHERE condition = something) -- customize at will
, (SELECT id FROM Item WHERE available_quantity >= wanted_quantity
AND other_condition = something)
, wanted_quantity
);
where wanted_quantity is a parameter passed by your software to the query.
Still, this solves the problem, but it is no direct answer to the question.
Best Answer
A record will only be null if all of its fields are null or if the record itself is null. If you want to check if any of the fields is null then check one by one:
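For illustration, a few hypothetical checks (the row values are made up):
```sql
SELECT (1, NULL)    IS NULL;  -- false: only one field is null
SELECT (NULL, NULL) IS NULL;  -- true: all fields are null

-- checking the fields of a record variable `rec` one by one:
-- IF rec.field1 IS NULL OR rec.field2 IS NULL THEN ...
```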