Oracle sequence generation performance when persisting large data sets

Tags: oracle, performance, sequence

In the context of an Oracle sequence, when persisting more than 200 million records, is there a performance hit on the database if the sequence cache is left at the default value of 10,000? That is to say, does raising this value to something around 1,000,000 give me any benefit during a transaction in which more than 200 million sequence values will be used?
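
For reference, the kind of change I'm considering is along these lines (my_seq is just a placeholder for the actual sequence name):

    -- Check the current cache size (USER_SEQUENCES is the standard Oracle data dictionary view).
    SELECT sequence_name, cache_size
      FROM user_sequences
     WHERE sequence_name = 'MY_SEQ';

    -- Raise the cache before the bulk load.
    ALTER SEQUENCE my_seq CACHE 1000000;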

Best Answer

The larger the CACHE on the sequence, the fewer times you'll need to wait for Oracle to acquire the latch and update the data dictionary when a new set of values needs to be generated, so there is at least the potential for some performance benefit from increasing the cache size.

On the other hand, if the CACHE is already at 10,000, you only need to update the data dictionary 20,000 times in the course of a 200 million row data load (assuming one sequence.NEXTVAL call per row). Without seeing your system, it seems unlikely that those 20,000 updates account for a meaningful fraction of the time a 200 million row load takes. Increasing the cache might shave a couple of seconds off the load, but that probably isn't significant compared to the time you're likely spending on I/O.
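
As a quick back-of-envelope check of those numbers (again assuming one NEXTVAL call per row):

    -- Approximate number of data dictionary updates during the load
    -- for the two cache sizes being compared.
    SELECT 200000000 / 10000   AS updates_with_cache_10000,    -- 20,000
           200000000 / 1000000 AS updates_with_cache_1000000   -- 200
      FROM dual;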