It isn't possible to determine the order in which rows are committed. Either a sequence or a timestamp can allow you to determine the order in which rows are inserted. But neither will tell you the order in which those transactions were committed.
Since you have a process that is trying to subscribe to changes in a table, however, you have a few options that don't require you to identify this order. You can use Oracle Change Data Capture (CDC) and have the reading process register as a subscriber. Or you can use Streams to publish changes to the table and have the reader process act as a custom apply process.
Both CDC and Streams will send the subscriber the data in commit order. The CDC change table will also store the commit SCN and the commit timestamp if you really want them (though I suspect you don't really care so long as the reader is sent every change in order).
Normal CDC involves the reader process controlling a window that shows one change set at a time, consuming that change set, and then advancing the window. The reader consumes data in commit order and never needs to know specifically what the SCN of a change is. You can hit the change tables directly as well to get the actual SCN if you want to complicate the reader's pull process.
Your reader process can hit the change tables, get the SCN from the change table, move it to the sequence_id
column of your existing table, and then do whatever else you need to do with the data in the change table to move it into your existing architecture. It's a bit messy but doable. Your reader process could also be a Streams consumer and do the same thing (grabbing the SCN from the Streams change record rather than the CDC change table).
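As a concrete illustration of the window-based pull described above, here is a hedged sketch using Oracle's `DBMS_CDC_SUBSCRIBE` package (10g-style synchronous/asynchronous CDC). The change set, schema, table, column, and view names are all placeholders for your own:

```sql
-- One-time setup: create and activate a subscription.
BEGIN
  DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION(
    change_set_name   => 'MY_CHANGE_SET',   -- placeholder
    description       => 'Reader process subscription',
    subscription_name => 'MY_SUB');

  DBMS_CDC_SUBSCRIBE.SUBSCRIBE(
    subscription_name => 'MY_SUB',
    source_schema     => 'APP',             -- placeholder
    source_table      => 'MY_TABLE',        -- placeholder
    column_list       => 'ID, SOME_COL',    -- placeholder
    subscriber_view   => 'MY_SUB_VIEW');

  DBMS_CDC_SUBSCRIBE.ACTIVATE_SUBSCRIPTION(
    subscription_name => 'MY_SUB');
END;
/

-- Each polling cycle: advance the window, consume it, then purge it.
BEGIN
  DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW(subscription_name => 'MY_SUB');
  -- SELECT ... FROM my_sub_view;
  -- Rows arrive in commit order; CSCN$ and COMMIT_TIMESTAMP$
  -- are available in the change data if you want them.
  DBMS_CDC_SUBSCRIBE.PURGE_WINDOW(subscription_name => 'MY_SUB');
END;
/
```

The reader never needs to track SCNs itself; extending and purging the window is what advances it through the change sets in commit order.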
When you delete data, Oracle keeps a copy of the deleted rows, because the transaction might not end with a COMMIT and Oracle would then need to restore them on rollback.
This copy is stored in rollback (undo) segments, and Oracle's settings place a maximum number of extents on those segments.
Solutions
- You can extend the segments (or the undo space available to them)
- You can delete the data in 'slices':
For example, splitting the delete by day:
DELETE FROM some_table SC
 WHERE SC.evaluationMonth = 'Some month'
   AND SC.evaluationDay = '1';
COMMIT;
DELETE FROM some_table SC
 WHERE SC.evaluationMonth = 'Some month'
   AND SC.evaluationDay = '2';
COMMIT;
...
DELETE FROM some_table SC
 WHERE SC.evaluationMonth = 'Some month'
   AND SC.evaluationDay = '31';
COMMIT;
(You can do this in a loop, of course.)
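The loop version can be sketched in PL/SQL as follows. This assumes, as in the example above, a hypothetical `some_table` whose `evaluationDay` column stores the day of month as a string:

```sql
-- Delete one day's slice per iteration, committing each slice
-- so the undo for it can be released before the next one.
BEGIN
  FOR d IN 1 .. 31 LOOP
    DELETE FROM some_table SC
     WHERE SC.evaluationMonth = 'Some month'
       AND SC.evaluationDay = TO_CHAR(d);
    COMMIT;
  END LOOP;
END;
/
```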
Best Answer
UNDO is the area that stores the change vectors required to restore the data to the state that it was in at the beginning of the transaction.
The only cures for this are to reduce the amount of data that you're changing (by changing less data or reducing the transaction size), or to increase the available undo size.
Reducing the transaction size, which you'd do by committing more frequently, is problematic: you cannot roll back work that has already been committed, so a failure partway through leaves you restarting from a half-done state, and more frequent commits also promote "snapshot too old" errors when the database cannot reconstruct from undo the data required for read consistency with the start of a query.
So, as commenters say, the answer really is to have the DBAs adjust the size of the UNDO available. Strangely, they can be reluctant to do so, or at least to do so adequately. If you currently have 256MB of UNDO and you ask for 2GB, they might see this as some huge amount that might in some way be undesirable and offer you 1GB. Smile indulgently at them, because you already doubled the amount that you really wanted and because there is no such thing as oversizing your UNDO tablespace.
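For reference, here is a hedged sketch of how to inspect and grow undo. The datafile path and sizes are placeholders; your DBA will substitute real ones:

```sql
-- Identify the undo tablespace(s)
SELECT tablespace_name, status
  FROM dba_tablespaces
 WHERE contents = 'UNDO';

-- Peak undo block consumption over the recent statistics history
SELECT MAX(undoblks) FROM v$undostat;

-- Grow the undo datafile (alternatively, add a datafile, or create a
-- larger undo tablespace and switch to it with
-- ALTER SYSTEM SET undo_tablespace = ...)
ALTER DATABASE DATAFILE '/u01/oradata/DB/undotbs01.dbf' RESIZE 2G;
```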