Last week I tried to deploy a new version of an application on an app server. Prior to deploying, some database scripts needed to be executed on an Oracle 11g database.
The exact version of the database is Oracle Database 11g Release 11.2.0.1.0 – 64bit Production. There are no triggers on the tables.
The statement causing the headache is this one:
ALTER TABLE EMS_DATA ADD STATUS VARCHAR2(1) DEFAULT 'V' NOT NULL;
on a DDL that looks like this:
create table EMS_DATA
(
CHANNEL_ID NUMBER not null,
TSTAMP TIMESTAMP(3) WITH LOCAL TIME ZONE not null,
VAL FLOAT,
constraint EMS_DATA_PK primary key (CHANNEL_ID, TSTAMP)
)
The EMS_DATA table on PROD contains around 350 million records and on ACC around 280 million records. Both are Oracle 11g instances.
I ran the statement twice on the ACC table and the execution times were approx. 50 ms and 300 ms. The reason it SHOULD be this fast is the following:
Metadata-Only DEFAULT Values
Prior to Oracle 11g, adding a new column to an existing table required all rows in that table to be modified to add the new column. Oracle 11g introduced the concept of metadata-only default values. Adding a NOT NULL column with a DEFAULT clause to an existing table involved just a metadata change, rather than a change to all the rows in the table. Queries of the new column were rewritten by the optimizer to make sure the result was consistent with the default definition.
Ref: https://oracle-base.com/articles/12c/default-values-for-table-columns-enhancements-12cr1
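To illustrate the quoted behaviour (my own sketch, not from the article): after the metadata-only ALTER, existing rows carry no physical STATUS value, and the optimizer conceptually treats reads of the column as if they were wrapped in NVL:

```sql
-- The column is added without touching the existing rows:
ALTER TABLE EMS_DATA ADD STATUS VARCHAR2(1) DEFAULT 'V' NOT NULL;

-- A query like this ...
SELECT CHANNEL_ID, STATUS FROM EMS_DATA WHERE TSTAMP > SYSDATE - 1;

-- ... is conceptually answered as if it were written like this,
-- so old rows still show 'V' even though nothing was stored for them:
SELECT CHANNEL_ID, NVL(STATUS, 'V') FROM EMS_DATA WHERE TSTAMP > SYSDATE - 1;
```

This is why the ACC runs completed in milliseconds: no row data and almost no redo should be generated.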
So yes, I was fairly confident the statement would cause no issue on the PROD database. Instead, this happened:
The execution of the statement took > 120 mins and eventually came to a halt because the disk drive was full. After inspection we saw that 180 GB of .ARC files (archived redo logs) had been written over this period of time.
TL;DR
What can cause this ALTER TABLE statement to show such a HUGE difference in duration, and why does one database generate a ridiculous amount of .ARC files when the total amount of data in the table isn't even close to 180 GB of disk space?
Can this Metadata-Only optimisation be switched off?
It's important to note that the application server was turned off during the script execution, so no outside process could have caused this.
Best Answer
From Database SQL Language Reference, 11g Release 2 (11.2):
Maybe the default values were actually written to the rows. So check the version of your database and the structure of your table!
There is a hidden parameter _ADD_COL_OPTIM_ENABLED in 11g that can be used to disable this metadata optimization, and possibly the COMPATIBLE parameter also plays a role. This is described in the Metalink note
Init.ora Parameter "_ADD_COL_OPTIM_ENABLED" [Hidden] Reference Note (Doc ID 1492674.1)
But all these parameters are valid system-wide, not per table. So if everything works as expected for one table, the parameters seem to be set properly for the instance.
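For reference, here is a sketch of how the hidden parameter could be inspected and changed. This follows the commonly published x$ pattern (requires SYS access) and is not quoted from the note itself:

```sql
-- Check the current value of the hidden parameter (as SYS):
SELECT a.ksppinm AS parameter, b.ksppstvl AS value
  FROM x$ksppi a, x$ksppcv b
 WHERE a.indx = b.indx
   AND a.ksppinm = '_add_col_optim_enabled';

-- Disable the metadata-only optimization instance-wide.
-- Hidden parameters should only be changed on advice from Oracle Support:
ALTER SYSTEM SET "_add_col_optim_enabled" = FALSE;
```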
The Metalink note cited above also publishes a query that shows the columns that were added with this metadata optimization:
With this query you can check whether your columns use the metadata optimization. You can search for your table and column by
If this returns
then all rows were updated. The query
should return at least a row for your ACC table.
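Since the note's query isn't reproduced above, here is a sketch of the kind of dictionary query it describes. I'm assuming the commonly documented col$.property bit 1073741824 (0x40000000) marks columns added with the optimization; run this as SYS:

```sql
-- Columns whose "added with metadata-only default" flag is set in col$.property:
SELECT u.name AS owner,
       o.name AS table_name,
       c.name AS column_name
  FROM sys.col$ c
  JOIN sys.obj$ o ON o.obj# = c.obj#
  JOIN sys.user$ u ON u.user# = o.owner#
 WHERE BITAND(c.property, 1073741824) = 1073741824
   AND o.name = 'EMS_DATA';
```

On the ACC instance this should list the STATUS column; on PROD, if the ALTER really rewrote every row, it would return nothing for that column.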