This is actually improved in Oracle 12c.
From Oracle Magazine's Ask Tom column:
http://www.oracle.com/technetwork/issue-archive/2013/13-sep/o53asktom-1999186.html
In Oracle Database 11g you were able to perform a fast add of a column
to a table if it had a default value and was defined as NOT NULL.
(Arup Nanda has written about this at bit.ly/16tQNCh.) However, if you
attempted to add a column with a default value and that column
permitted null values, the ADD COLUMN operation could take a
significant amount of time, generate a large amount of undo and redo,
and lock the entire table for the duration of the operation. In Oracle
Database 12c, that time, volume, and locking are no longer part of the
process.
I know this doesn't help if you're stuck on 11gR2, since you probably want to get that column added before taking an outage to upgrade the DB.
As to why it takes so long only when the data type is TIMESTAMP, I'm at a loss. Maybe it's because the TIMESTAMP data type takes a few more bytes to store than DATE, or than VARCHAR2 for empty strings, and those extra bytes are causing a cascade of row migrations. To test that theory, try adding a fixed-length CHAR column (in a non-Prod environment): `alter table my_table add my_char_col CHAR(200) default 'hello' not null;` I expect this would not be instantaneous due to row migrations.
Your best bet on 11gR2 might be to either:

- Take an outage, at a time your stakeholders are OK with, to add this column; or
- Use DBMS_REDEFINITION to add the column as part of an online redefinition. This should be non-blocking, and even though it may take hours, users will not even notice it happening (no locking). If you have a lot of indexes, this becomes a little more difficult; if you have foreign keys, more difficult still. If you have stored procs or views that reference this table, be sure to recompile them after you finish the redefinition. A side benefit is that your table will be reorganized (i.e. defragmented); or
- Add the column without the NOT NULL constraint, and when that finishes, alter the column to add the NOT NULL constraint. My intuition is that you'll still get locking somewhere in the process, so no promises. Maybe just a crazy idea.
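If you go the DBMS_REDEFINITION route, the skeleton looks roughly like this. The schema, table, and column names below are placeholders of mine, not anything from the question, and this sketch omits dependent-object handling; test it in non-Prod first:

```sql
-- 1. Check that the table can be redefined using its primary key.
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('MY_SCHEMA', 'MY_TABLE', DBMS_REDEFINITION.CONS_USE_PK);

-- 2. Create an empty interim table that already includes the new column.
CREATE TABLE my_schema.my_table_int AS SELECT * FROM my_schema.my_table WHERE 1 = 0;
ALTER TABLE my_schema.my_table_int ADD (my_new_col TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL);

-- 3. Start the redefinition, sync any in-flight changes, then swap the tables.
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('MY_SCHEMA', 'MY_TABLE', 'MY_TABLE_INT');
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('MY_SCHEMA', 'MY_TABLE', 'MY_TABLE_INT');
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('MY_SCHEMA', 'MY_TABLE', 'MY_TABLE_INT');
```

In a real run you would also call DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS between the start and finish steps to carry indexes, constraints, grants, and triggers over to the interim table.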
If you want to try the Online Redefinition, then you might find these links helpful:
OK, following @jynus pointers, I finally got the backups to restore. I ended up creating another mysql instance on the server and restoring there.
Just in case someone runs into the same problem, the steps I took are the following...
My environment is CentOS 6.4, MySQL 5.1 with stock InnoDB as far as I can tell; `innobackupex --version` gives me "InnoDB Backup Utility v1.5.1-xtrabackup", `xtrabackup --version` gives me "xtrabackup version 2.1.8 for Percona Server 5.1.73", and the server has Plesk 11.0.9 installed.
Firstly I set up another mysql instance by copying and modifying the default /etc/init.d/mysql startup script. The problem I had there was that, to load a different config file for that instance, the --defaults-file parameter that mysqld_safe needs has to be given before any other parameters (see here). I then copied the default /etc/my.cnf file and modified the values in there (port, log locations, etc.) to suit my setup.
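For illustration, the copied config might end up looking roughly like this. The file name, port, and paths below are my own assumptions, not values from the original setup:

```
# /etc/my-backup.cnf -- hypothetical copy of /etc/my.cnf for the second instance
[mysqld]
port      = 3307                          # anything other than the main instance's port
socket    = /var/lib/mysql-backup/mysql.sock
datadir   = /var/lib/mysql-backup
log-error = /var/log/mysql-backup/error.log
pid-file  = /var/lib/mysql-backup/mysqld.pid
```

The modified init script would then invoke something like `mysqld_safe --defaults-file=/etc/my-backup.cnf ...`, with --defaults-file strictly as the first option.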
To log in to the new mysql instance and set up a new root user, the first mysqld_safe run was done with the --skip-grant-tables option in the init script (see here).
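For reference, the user setup inside that --skip-grant-tables session went roughly like this. The username and password are placeholders of mine; note that account-management statements are disabled while grants are skipped, so issuing FLUSH PRIVILEGES first is the usual workaround:

```sql
-- Re-enable the grant tables for this session; without this, CREATE USER
-- fails under --skip-grant-tables.
FLUSH PRIVILEGES;

-- 'pleskadmin' is a placeholder -- any name other than 'root' works for Plesk.
CREATE USER 'pleskadmin'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON *.* TO 'pleskadmin'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```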
To be able to manage the new mysql instance with phpmyadmin I created a new database server in Plesk (Server->Database Servers->Add Database Server) and entered the new port and/or new ip the server is running on and the new admin user I set up in the previous step. Plesk won't let you use a "root" username so I had to add a different username with root privileges in the previous step.
After that, the server can be started/stopped/reloaded using something like /etc/init.d/mysql-backup start.
For the actual recovery process I extracted the backup file (taking care to use the -i flag for tar) and, once in the backup dir, applied the log using the xtrabackup executable itself (xtrabackup --prepare --target-dir=/path/to/extracted/backup/), because innobackupex requires a connection to a running mysql server to determine the version, something I didn't want to figure out how to do.
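The -i flag matters because streamed xtrabackup archives can contain end-of-archive zero blocks mid-stream; without it, tar stops reading at the first one. A self-contained illustration with throwaway files (not the actual backup contents):

```shell
# Simulate a tar stream with an end-of-archive marker in the middle,
# as can happen with streamed/concatenated backup archives.
demo=$(mktemp -d)
cd "$demo"
echo one > a.txt
echo two > b.txt
tar cf part1.tar a.txt
tar cf part2.tar b.txt
cat part1.tar part2.tar > combined.tar   # zero blocks now sit mid-file

mkdir plain ignore_zeros
tar xf combined.tar -C plain             # stops at the first zero block: only a.txt
tar xf combined.tar -i -C ignore_zeros   # -i (--ignore-zeros) reads past it: a.txt and b.txt
```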
I stopped my new mysql server, copied over the folder that contained my database to the datadir, copied over the ibdata1 file, changed permissions and started the server.
I could then browse/manage my backup database using phpmyadmin.