I'm trying to run an update query, but it's still taking too long even though I have a commit in it.
Is it better to set the commit interval to a higher value (5000) or a lower one (1)?
I have to update 15 million records in a table of 21 million.
Edit: here's my query for the update, but I need to know how low/high I should set the commit interval.
DECLARE
    CURSOR c_itemloc IS
        SELECT item, loc
          FROM dc_item_loc;

    i        NUMBER := 0;
    l_commit VARCHAR2(1) := 'Y';
BEGIN
    FOR r_itemloc IN c_itemloc
    LOOP
        i := i + 1;

        UPDATE item_loc il
           SET il.status               = 'D',
               il.last_update_datetime = get_vdate,
               il.last_update_id       = 'CNVOBJ_RNG'
         WHERE il.item = r_itemloc.item
           AND il.loc  = r_itemloc.loc;

        IF l_commit = 'Y' AND MOD(i, 5000) = 0 THEN
            COMMIT;
        END IF;
    END LOOP;

    COMMIT;  -- commit the final partial batch
EXCEPTION
    WHEN OTHERS THEN
        dbms_output.put_line('Update failed: ' || SQLERRM);
        RAISE;  -- re-raise so the error isn't silently swallowed
END;
Best Answer
Not knowing a huge amount about what version you're using (I've assumed 11g) or your other environment issues, something like this might be useful.
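The code block this answer originally referred to appears to have been lost. A sketch of the intended approach — a BULK COLLECT / FORALL loop driven by a `c_bulk_limit` constant, reusing the `dc_item_loc` driving table and the `get_vdate` function from the question — might look like this (the batch size of 10000 is just an illustrative starting point):

```sql
DECLARE
    -- Illustrative batch size: raise or lower to suit your undo capacity
    c_bulk_limit CONSTANT PLS_INTEGER := 10000;

    CURSOR c_itemloc IS
        SELECT item, loc
          FROM dc_item_loc;

    TYPE t_item_tab IS TABLE OF item_loc.item%TYPE;
    TYPE t_loc_tab  IS TABLE OF item_loc.loc%TYPE;

    l_items t_item_tab;
    l_locs  t_loc_tab;
BEGIN
    OPEN c_itemloc;
    LOOP
        -- Fetch one batch of keys: far fewer context switches than row-by-row
        FETCH c_itemloc BULK COLLECT INTO l_items, l_locs LIMIT c_bulk_limit;
        EXIT WHEN l_items.COUNT = 0;

        -- Apply the whole batch in a single FORALL round trip
        FORALL i IN 1 .. l_items.COUNT
            UPDATE item_loc il
               SET il.status               = 'D',
                   il.last_update_datetime = get_vdate,
                   il.last_update_id       = 'CNVOBJ_RNG'
             WHERE il.item = l_items(i)
               AND il.loc  = l_locs(i);

        COMMIT;  -- one commit per batch, not per row
    END LOOP;
    CLOSE c_itemloc;
END;
/
```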
I'd set the commit level as high as your environment can comfortably handle, since fewer commits mean a faster run. What you can comfortably handle will be dictated by factors such as the size of your undo tablespace and how much redo your instance can absorb.
You can alter the constant c_bulk_limit to whatever size you can cope with. At the very least you'll be using more bulk operations, and therefore significantly less context switching, so it should save you time.
Of course, the best method would be a single UPDATE statement, but with that many records your DB might not be able to cope without interim commits.

EDIT: Having just seen @Florin's answer, it might be a good idea to use his cursor to get the ROWID of the target update table before then using the BULK commands to perform the update, as ROWID will be the fastest access method. E.G.
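The example that originally followed seems to be missing. @Florin's cursor isn't visible here, so the join below is an assumption about its shape; the idea is to collect ROWIDs of the target rows up front and then update by ROWID, which avoids a key lookup per row:

```sql
DECLARE
    -- Illustrative batch size, as above
    c_bulk_limit CONSTANT PLS_INTEGER := 10000;

    -- Assumed shape of the driving cursor: fetch the ROWID of each
    -- item_loc row to be updated, so each update locates its row directly
    CURSOR c_rowids IS
        SELECT il.ROWID row_id
          FROM item_loc il,
               dc_item_loc dil
         WHERE il.item = dil.item
           AND il.loc  = dil.loc;

    TYPE t_rowid_tab IS TABLE OF ROWID;
    l_rowids t_rowid_tab;
BEGIN
    OPEN c_rowids;
    LOOP
        FETCH c_rowids BULK COLLECT INTO l_rowids LIMIT c_bulk_limit;
        EXIT WHEN l_rowids.COUNT = 0;

        -- Update by ROWID: the fastest single-row access path in Oracle
        FORALL i IN 1 .. l_rowids.COUNT
            UPDATE item_loc il
               SET il.status               = 'D',
                   il.last_update_datetime = get_vdate,
                   il.last_update_id       = 'CNVOBJ_RNG'
             WHERE il.ROWID = l_rowids(i);

        COMMIT;  -- commit per batch
    END LOOP;
    CLOSE c_rowids;
END;
/
```

Note that updating by ROWID assumes the rows are not moved (e.g. by a row-movement-enabled partitioned table reorganizing) between the fetch and the update.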