Problem with procedure performance

oracle, oracle-11g-r2, performance, plsql, stored-procedures

I'm having a problem in Oracle with two procedures that are executed one after the other.

In the first procedure, a cursor loop inserts rows into a table. This insert covers about 30 million records and completes in a fast, acceptable time.

But when the second procedure runs, which performs the update, it is very slow: the last execution took about 38 hours. It contains a single cursor-driven update, smaller than the first procedure's, and when I run that cursor separately it finishes in less than one minute.

One detail: while the procedures are running, I can see that the second procedure appears "stuck". If I stop it, run ANALYZE TABLE, and then execute only the second procedure again, it runs normally, taking about 1 or 2 hours to complete.

Can someone help me with this problem?

Best Answer

It sounds like you're dumping 30 million rows into an existing table, then immediately trying to update those rows. I have a couple of thoughts, which may or may not apply to you (since I can't see your code):

1) Is there a way to combine the inserts with the updates? It seems suspect that you're creating 30 million rows and then immediately updating them. Perhaps you can use an INSERT ... SELECT statement to derive the full value dynamically and insert the rows already in their final state.
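As a minimal sketch of that idea (table and column names are hypothetical, since I can't see your schema), the value you would otherwise set in the update is computed during the insert itself:

```sql
-- Hypothetical sketch: compute the "updated" value up front so the
-- second procedure's UPDATE pass is no longer needed.
INSERT /*+ APPEND */ INTO target_table (id, amount, amount_with_tax)
SELECT s.id,
       s.amount,
       s.amount * 1.21   -- derive the final value here, not in a later UPDATE
FROM   source_table s;

COMMIT;
```

The APPEND hint requests a direct-path insert, which is typically faster for bulk loads than a row-by-row cursor loop; note that it locks the table and the rows aren't visible until you commit.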

2) By inserting 30 million rows, I suspect that you're altering your tables by a value > 10%. Oracle's automated jobs will reanalyze stats on any table with a change of 10% or greater; I make it a practice to do the same thing when I'm inserting/updating/deleting data. This is why you're seeing this behavior when you stop the 2nd procedure and analyze the table.
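A sketch of gathering stats right after the bulk insert, so the optimizer has accurate information before the update runs (DBMS_STATS is preferred over ANALYZE for optimizer statistics; the table name is hypothetical):

```sql
-- Refresh optimizer statistics after the bulk insert, before the update.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'TARGET_TABLE',  -- hypothetical table name
    cascade => TRUE             -- also gather statistics on its indexes
  );
END;
/
```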

3) For the update statement, have you properly indexed the columns?
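For example, if the update's WHERE clause filters on an unindexed column, every execution of the cursor's update may force a full table scan; an index on that column can avoid this (names here are hypothetical):

```sql
-- Hypothetical: index the column the UPDATE's WHERE clause filters on.
CREATE INDEX target_table_status_ix ON target_table (status);
```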

4) What is your expectation for the second procedure's run time? Does 1-2 hours to update 30 million rows sound too long? Only you can know that. For example, in the environment I work in, we can run our databases at 17,000 IOPS, but we force logging on everything because we run Data Guard to a DR site, which adds some overhead to operations. So for me, 1-2 hours for a 30 million row update is right about where I normally land. Of course, this also depends on factors like row size (are you updating a single character, or a BLOB?), the WHERE clause of the update (are you updating based on a primary key, a foreign key, or an unindexed field?), and the overall load on your system at run time.

HTH.