I have a products table where I insert around 150,000 records a day. Most of them are redundant, but I need to keep them because of the new expiration date. I get product feeds from about 5 out of 30 vendors a day. Each vendor has around 35,000 unique products. No product can belong to more than one vendor.
```sql
CREATE TABLE vendor_prices (
  id              serial PRIMARY KEY,
  vendor          integer NOT NULL,
  sku             character varying(25) NOT NULL,
  category_name   character varying(100) NOT NULL,
  price           numeric(8,5) NOT NULL,
  effective_date  timestamp without time zone,
  expiration_date timestamp without time zone DEFAULT (now() + '1 year'::interval)
);
```
I'm trying to delete irrelevant records where there was no price change and it's no longer the last update for the product, e.g.:

```
effective_date  price
'2015-05-01'    $1.99
'2015-05-02'    $1.99   <- delete
'2015-05-03'    $1.59
'2015-05-04'    $1.99
'2015-05-05'    $1.99   <- delete
'2015-05-06'    $1.99   <- keep for new expiration date
```
So after each load (I figured it'll be easier for one vendor at a time) I want to do some kind of delete. Here is the long non-performing solution that I came up with.
```sql
CREATE OR REPLACE FUNCTION remove_vendor_price_dupes(_vendor integer)
  RETURNS integer AS
$BODY$
BEGIN
-- delete redundant prices
delete from vendor_prices
where id in (
   select id from (
      select vp1.id, vp1.vendor, vp1.sku, vp1.price, vp1.effective_date, vp1.expiration_date
      from   vendor_prices vp1
      inner join (
         select vendor, sku, price
         from   vendor_prices
         where  vendor = _vendor
         group  by vendor, sku, price
         ) vp2 on vp1.vendor = vp2.vendor
              and vp1.sku    = vp2.sku
              and vp1.price  = vp2.price
      where  vp1.vendor = _vendor
      ) dupe
   -- fetch the irrelevant record
   WHERE (select a.effective_date
          from   vendor_prices a
          where  a.vendor = _vendor
          and    a.price = dupe.price
          and    a.sku   = dupe.sku
          and    dupe.effective_date > a.effective_date
          -- but make sure there's no price change in between
          and   (select b.effective_date
                 from   vendor_prices b
                 where  b.vendor = _vendor
                 and    b.sku = dupe.sku
                 and    b.effective_date < dupe.effective_date
                 and    b.effective_date > a.effective_date
                 limit  1) IS NULL
          limit  1
          ) IS NOT NULL
   -- and that this is not the last update on said product,
   -- otherwise we'll keep it for expiration_date
   and   (select c.effective_date
          from   vendor_prices c
          where  c.vendor = _vendor
          and    c.sku = dupe.sku
          and    c.effective_date > dupe.effective_date
          limit  1
          ) IS NOT NULL
   );
return 0;
END;
$BODY$
LANGUAGE plpgsql;
```
This function ran for a few hours, so I killed it. The table has around 5 million records. I tried all kinds of single and multicolumn indexes, but nothing seems to help. There may be other inserts and deletes going on while this function runs.
Running PostgreSQL 9.3.4 on Solaris 11.2.
I have plenty of RAM and disk space.
Best Answer
The core feature is the window function `lag()`. Also pay special attention to avoid deadlocks and race conditions with concurrent deletes and inserts (which can affect which rows to delete!):
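Based on the notes below, such a function might look like this - a sketch, not the answer's exact code; the function name, the `OUT` parameter, and comments are illustrative:

```sql
CREATE OR REPLACE FUNCTION f_vendor_delete_dupes(_vendor int, OUT del_count int) AS
$func$
BEGIN
   -- lock all rows for the given vendor in a deterministic order first;
   -- FOR UPDATE cannot be combined with window functions, hence a separate PERFORM
   PERFORM 1
   FROM   vendor_prices
   WHERE  vendor = _vendor
   ORDER  BY sku, effective_date, id
   FOR    UPDATE;

   DELETE FROM vendor_prices vp
   USING (
      SELECT id
           , price = lag(price) OVER w AS same_price  -- same price as previous row?
           , lead(id)           OVER w AS next_id     -- NULL for the last row per sku
      FROM   vendor_prices
      WHERE  vendor = _vendor
      WINDOW w AS (PARTITION BY sku ORDER BY effective_date, id)
      ) dup
   WHERE  vp.id = dup.id
   AND    dup.same_price            -- no price change since the previous row ...
   AND    dup.next_id IS NOT NULL;  -- ... and not the last row for this sku

   GET DIAGNOSTICS del_count = ROW_COUNT;  -- report the number of rows deleted
END
$func$ LANGUAGE plpgsql;
```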
Call:
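For example, per vendor, using the function name from the question (the vendor id is arbitrary):

```sql
SELECT remove_vendor_price_dupes(1);
```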
Notes
The current version of the 9.3 major release is 9.3.6. The project recommends that ...
A multicolumn index on `(vendor, sku, effective_date, id)` would be perfect for this - in this particular order. But Postgres can combine indexes rather efficiently, too. It might pay to add the otherwise irrelevant `price` as the last item of the index to get index-only scans out of this. You'll have to test.

Since you have concurrent deletes, it may be a good idea to run a separate delete per vendor to reduce the potential for race conditions and deadlocks. Since there are only a few vendors, this seems like a reasonable partitioning. (Many tiny calls would be comparatively slow.)
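A sketch of that index (the index names are my own; whether appending `price` pays off has to be tested):

```sql
-- column order matters: the equality column first, then the sort columns
CREATE INDEX vendor_prices_dedupe_idx
ON vendor_prices (vendor, sku, effective_date, id);

-- variant with price appended, aiming for index-only scans:
-- CREATE INDEX vendor_prices_dedupe_price_idx
-- ON vendor_prices (vendor, sku, effective_date, id, price);
```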
I am running a separate SELECT (`PERFORM` in plpgsql, since we do not use the result) because the row locking clause `FOR UPDATE` cannot be used together with window functions. Don't let the keyword mislead you: this is not just for updates. I am locking all rows for the given vendor, since the result depends on all rows. Concurrent reads are not impaired; only concurrent writes have to wait until we are done. That's another reason why deleting rows for one vendor at a time in a separate transaction should be best.

`sku` is unique per product, so we can `PARTITION BY` it.

`ORDER BY effective_date, id`: your first version of the question included code for duplicate rows, so I added `id` to the `ORDER BY` as an additional tiebreaker. This way it works for duplicates on `(sku, effective_date)` as well.

To preserve the last row for each set: `AND (lead(id) OVER w) IS NOT NULL`. Reusing the same window for `lead()` is cheap - independent of the added explicit `WINDOW` clause, which is just syntax shorthand for convenience.

I am locking rows in the same order: `ORDER BY sku, effective_date, id`. Make sure that concurrent DELETEs operate in the same order to avoid deadlocks. If all other transactions delete no more than a single row within the same transaction, there cannot be deadlocks and you don't need the row locking at all.

If concurrent INSERTs could lead to a different result (make different rows obsolete), you have to lock the whole table in EXCLUSIVE mode instead to avoid race conditions:
Do that only if it's necessary. It blocks all concurrent write access.
I am returning the number of rows deleted, but that's totally optional. You might as well return nothing and declare the function as `RETURNS void`.