Huge difference in execution time of update statement with default value

oracle, oracle-11g, update

Last week I tried deploying a new version of an application on an app server. Prior to deploying, some database scripts needed to be executed on an Oracle 11g database.

The exact version of the database is Oracle Database 11g Release 11.2.0.1.0 – 64bit Production. There are no triggers on the tables.

The statement causing the headache is this one:

ALTER TABLE EMS_DATA ADD STATUS VARCHAR2(1) DEFAULT 'V' NOT NULL;

against a table whose DDL looks like this:

create table EMS_DATA
(
    CHANNEL_ID NUMBER not null,
    TSTAMP TIMESTAMP(3) WITH LOCAL TIME ZONE not null,
    VAL FLOAT,
    constraint EMS_DATA_PK primary key (CHANNEL_ID, TSTAMP)
);

The EMS_DATA PROD table contains around 350 million records and our ACC table contains around 280 million records. Both are Oracle 11g instances.

I ran the statement twice on the ACC table, and the execution times were approx. 50 ms and 300 ms. The reason it SHOULD be this fast is the following:

Metadata-Only DEFAULT Values

Prior to Oracle 11g, adding a new column to an existing table required
all rows in that table to be modified to add the new column.

Oracle 11g introduced the concept of metadata-only default values.
Adding a NOT NULL column with a DEFAULT clause to an existing table
involved just a metadata change, rather than a change to all the rows
in the table. Queries of the new column were rewritten by the
optimizer to make sure the result was consistent with the default
definition.

Ref: https://oracle-base.com/articles/12c/default-values-for-table-columns-enhancements-12cr1
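The documented behaviour can be sketched on a small scratch table (the table, column names, and data below are invented purely for the illustration):

```sql
-- Scratch table invented for this illustration.
create table DEMO_META_DEFAULT
(
    ID  NUMBER not null,
    VAL FLOAT
);

insert into DEMO_META_DEFAULT (ID, VAL) values (1, 3.14);
commit;

-- On 11g this should complete almost instantly regardless of row count:
-- the default 'V' is stored as metadata, not written into every row.
alter table DEMO_META_DEFAULT add STATUS VARCHAR2(1) default 'V' not null;

-- Queries on the new column are rewritten by the optimizer,
-- so the pre-existing row still shows 'V'.
select ID, STATUS from DEMO_META_DEFAULT;
```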

So yes, I was fairly confident the statement would cause no issue on the PROD database. Instead, this happened:

The execution of the statement took > 120 mins and eventually came to a halt because the disk drive had filled up. On inspection we saw that 180 GB of .ARC (archived redo log) files had been written over this period.

TL;DR
What can cause this ALTER TABLE statement to show such a HUGE difference in duration, and why does one database generate a ridiculous amount of .ARC files when the total amount of data in the table isn't even close to 180 GB of disk space?
Can this metadata-only optimisation be switched off?

It's important to note that the application server was turned off at the moment of the script execution, so no outside process could have caused this.

Best Answer

From Database SQL Language Reference, 11g Release 2 (11.2):

When you add a column, the initial value of each row for the new column is null.

  • If you specify the DEFAULT clause for a NOT NULL column, then the default value is stored as metadata but the column itself is not populated with data. However, subsequent queries that specify the new column are rewritten so that the default value is returned in the result set.

    This optimized behavior differs from earlier releases, when as part of the ALTER TABLE operation Oracle Database updated each row in the newly created column with the default value, and then fired any update triggers defined on the table. In this release, no triggers are fired because the default is stored only as metadata. The optimized behavior is subject to the following restrictions:

    • The table cannot have any LOB columns. It cannot be index-organized, temporary, or part of a cluster. It also cannot be a queue table, an object table, or the container table of a materialized view.

    • If the table has a Virtual Private Database (VPD) policy on it, then the optimized behavior will not take place unless the user who issues the ALTER TABLE ... ADD statement has the EXEMPT ACCESS POLICY system privilege.

    • The column being added cannot be encrypted, and cannot be an object column, nested table column, or a LOB column.

  • If you specify the DEFAULT clause for a nullable column, then the default value is added to existing rows as part of this ALTER TABLE statement, and any update triggers defined on the table are fired. This behavior also results if you change a NOT NULL column with a default value to be nullable.
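A quick way to check EMS_DATA against the documented restrictions is to query the data dictionary. This is only a sketch: it assumes access to the DBA views, and the schema name MYSCHEMA is a placeholder for the actual owner.

```sql
-- Does the table have any LOB columns? (MYSCHEMA is a placeholder.)
select column_name
from dba_lobs
where owner = 'MYSCHEMA' and table_name = 'EMS_DATA';

-- Is it index-organized, temporary, or part of a cluster?
select iot_type, temporary, cluster_name
from dba_tables
where owner = 'MYSCHEMA' and table_name = 'EMS_DATA';

-- Is there a VPD policy on it?
select policy_name
from dba_policies
where object_owner = 'MYSCHEMA' and object_name = 'EMS_DATA';
```

If the first query returns rows, or the second shows an IOT/temporary/clustered table, or the third shows a policy, the metadata-only path is disqualified and the ALTER falls back to updating every row.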

Maybe the default values were actually written to the rows. So check the version of your database and the structure of your table!


There is a hidden parameter _ADD_COL_OPTIM_ENABLED in 11g that can be used to disable this metadata optimization; the COMPATIBLE parameter may also play a role. This is described in the Metalink note

Init.ora Parameter "_ADD_COL_OPTIM_ENABLED" [Hidden] Reference Note (Doc ID 1492674.1)

But all these parameters apply system-wide, not to a single table. So if the optimization works as expected for one table, the parameters seem to be set properly for the instance.
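The current value of the hidden parameter can be read from the X$ fixed tables (a sketch; the X$ tables are only visible when connected as SYS), and COMPATIBLE from V$PARAMETER:

```sql
-- Hidden parameters do not appear in V$PARAMETER; query the X$ tables as SYS.
select i.ksppinm  as parameter,
       v.ksppstvl as value
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
  and i.ksppinm = '_add_col_optim_enabled';

-- COMPATIBLE is a normal parameter and visible in V$PARAMETER.
select value from v$parameter where name = 'compatible';
```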

The Metalink note cited above also publishes the following query, which shows the columns that were added with this metadata optimization (COL$ is a SYS-owned base dictionary table, so run it as SYS or a suitably privileged user):

select owner, object_name, name 
from dba_objects, col$
where bitand(col$.PROPERTY,1073741824)=1073741824
    and object_id=obj#;

With this query you can check whether your columns were added with this metadata optimization. You can search for your table and column with:

select owner, object_name, name
from dba_objects, col$
where bitand(col$.PROPERTY,1073741824)=1073741824
    and object_id=obj#
    and object_name='EMS_DATA'
    and name='STATUS';

If this returns

no rows selected

then all rows were updated. The query

select owner, object_name, name
from dba_objects, col$
where bitand(col$.PROPERTY,1073741824)=1073741824
    and object_id=obj#
    and object_name='EMS_DATA'
    and name='STATUS';

should, when run on the ACC database, return at least one row, since the column was added there with the metadata optimization.