If the definition of toptab will potentially be different in different environments (which is, of course, a separate issue that needs addressing), your options would generally be to either create an object type that does have a defined order and BULK COLLECT into that, or to define a separate collection for each column that you specify as a target of your BULK COLLECT.
For the object approach, you'd define an object:

CREATE TYPE toptab_like_object AS OBJECT (
  col1 NUMBER,
  col2 NUMBER,
  ...
  colN NUMBER
);
and then use that object rather than the %ROWTYPE in your code:
DECLARE
  TYPE t_toptab_list IS TABLE OF toptab_like_object;
  l_data t_toptab_list;
BEGIN
  SELECT toptab_like_object( ... )  -- many columns, with some manipulation, etc.
  BULK COLLECT INTO l_data
  FROM   a, b, c, d
  WHERE  ...;  -- join and filter, etc.
END;
You could also define a separate collection for each column (in practice you could probably get away with a much smaller number of collection types, since many columns will share a data type):
DECLARE
  TYPE t_col1_list IS TABLE OF toptab.col1%TYPE;
  TYPE t_col2_list IS TABLE OF toptab.col2%TYPE;
  ...
  TYPE t_colN_list IS TABLE OF toptab.colN%TYPE;

  l_col1 t_col1_list;
  l_col2 t_col2_list;
  ...
  l_colN t_colN_list;
BEGIN
  SELECT ...  -- many columns, with some manipulation, etc.
  BULK COLLECT INTO l_col1, l_col2, ..., l_colN
  FROM   a, b, c, d
  WHERE  ...;  -- join and filter, etc.
END;
It sounds like you're dumping 30 million rows into an existing table, then immediately trying to update those rows. I have a few thoughts, which may or may not work for you (since I can't see your code):
1) Is there a way to combine the inserts with the updates? It seems suspect to me that you're creating 30 million rows and then immediately updating them. Perhaps you should look into INSERT ... SELECT statements that derive the full value during the insert, so the rows land in their final state (see the first sketch after this list).
2) By inserting 30 million rows, I suspect you're changing the table's contents by well over 10%. Oracle's automated maintenance jobs re-gather statistics on any table whose data has changed by 10% or more; I make it a practice to do the same manually whenever I insert, update, or delete a large share of a table (see the second sketch after this list). This is why you're seeing this behavior when you stop the 2nd procedure and analyze the table.
3) For the update statement, have you properly indexed the columns in its WHERE clause?
4) What is your expectation for the second procedure's run time? Does 1-2 hours to update 30 million rows sound too long? Only you can know that. For example, in the environment I work in, we can run our databases at 17,000 IOPS, but we force logging on everything because we run Data Guard to a DR site, which adds some overhead on operations. So for me, 1-2 hours for a 30 million row update is right about where I normally land. Of course, this also depends on factors like row size (are you updating a single character, or a BLOB?), the WHERE clause of the update (are you filtering on the primary key, a foreign key, or an unindexed field?), and the overall load on your system at run time.
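For point 1, here's a minimal sketch of the INSERT ... SELECT idea; every table and column name is an assumption, since I can't see the original code:

INSERT INTO target_tab (id, amount, status)
SELECT s.id,
       s.amount * 1.1,  -- derive the final value during the insert
       CASE WHEN s.amount > 100 THEN 'HIGH' ELSE 'LOW' END
FROM   source_tab s
WHERE  s.load_date >= TRUNC(SYSDATE);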
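For point 2, re-gathering stats after a large load is a short anonymous block; the schema and table names below are placeholders:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'MY_SCHEMA',
    tabname => 'TARGET_TAB',
    cascade => TRUE  -- refresh index statistics as well
  );
END;
/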
HTH.
Best Answer
First, I'll cover the SQL row-limiting side of things.
In Oracle 11.2.x and lower, you have to use rownum and a subquery, as those versions support neither the LIMIT nor the FETCH clause:
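A sketch of the classic pattern, with placeholder table and column names (some_table, id, val):

SELECT id, val
FROM (
  SELECT t.*, ROWNUM AS rn
  FROM ( SELECT id, val FROM some_table ORDER BY id ) t
  WHERE ROWNUM <= 1000  -- upper bound: offset + page size
)
WHERE rn > 0;           -- lower bound: the offset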
In Oracle 12 you can use the OFFSET / FETCH row-limiting clause:
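Against the same placeholder table:

SELECT id, val
FROM   some_table
ORDER  BY id
OFFSET 0 ROWS FETCH NEXT 1000 ROWS ONLY;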
... then, for the next 1k:
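Same placeholder names, just moving the offset forward:

SELECT id, val
FROM   some_table
ORDER  BY id
OFFSET 1000 ROWS FETCH NEXT 1000 ROWS ONLY;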
Sample table:
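The original table isn't shown, so this stand-in assumes a simple three-column layout:

CREATE TABLE sample_tab (
  id        NUMBER PRIMARY KEY,
  load_date DATE,
  val       VARCHAR2(100)
);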
The PL/SQL:
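A minimal sketch against the stand-in table above, using the version-agnostic rownum pattern; the procedure name and the fixed page size of 1000 are my assumptions:

CREATE OR REPLACE PROCEDURE process_batch (p_start_offset IN NUMBER)
AS
  TYPE t_row_list IS TABLE OF sample_tab%ROWTYPE;
  l_rows t_row_list;
BEGIN
  -- fetch the 1000 rows that start just past p_start_offset
  SELECT id, load_date, val
  BULK COLLECT INTO l_rows
  FROM (
    SELECT t.*, ROWNUM AS rn
    FROM ( SELECT id, load_date, val FROM sample_tab ORDER BY id ) t
    WHERE ROWNUM <= p_start_offset + 1000
  )
  WHERE rn > p_start_offset;

  FOR i IN 1 .. l_rows.COUNT LOOP
    NULL;  -- per-row processing goes here
  END LOOP;
END process_batch;
/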
The stored procedure takes an input, which is the start offset.
Create another cursor, and use your concatenated ID ID_YYYYMMDD in the WHERE clause, as in the fragment below.
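This declaration fragment is purely illustrative; it assumes the key is the numeric ID concatenated with a date rendered as YYYYMMDD, reusing the stand-in table:

CURSOR c_next_batch (p_last_key IN VARCHAR2) IS
  SELECT t.id,
         t.id || '_' || TO_CHAR(t.load_date, 'YYYYMMDD') AS row_key
  FROM   sample_tab t
  -- note: this is a string comparison, so pad t.id if IDs vary in length
  WHERE  t.id || '_' || TO_CHAR(t.load_date, 'YYYYMMDD') > p_last_key
  ORDER  BY row_key;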
Why do you need to process 1000 records at a time? It'll be slower iterating this way. Often you can achieve the same thing with one or two standalone SQL statements, which end up far more efficient.