MySQL – Fill ibdata1 With Zeros/Nulls/etc Or Change A Table's Tablespace


I have a MySQL 5.5 server running on CentOS 6. The MySQL package is a MySQL-released version (MySQL-server-5.5.28-1.el6.x86_64), not the stock CentOS one. I have moved an existing database over from a single tablespace to innodb_file_per_table. This leaves me with a 150GB ibdata1 file.

The space it takes up isn't a big deal, as the rest of the database is quite large anyway. However, I'd like to reduce the size it adds to backups. My thinking is that if I can fill the empty space in it with zeros, nulls, or anything else that compresses well, that will cut a lot of space out of the backups.

I've searched around for ways to fill it with zeros, and also found a related question about filling it with random data.

The obvious solution, if there isn't a built-in one, is to create a new table with a large column and insert zeros into it until it's around the size of ibdata1. However, I've been unable to find a way to change which tablespace a table is created in when innodb_file_per_table=1.
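For concreteness, the kind of filler table I have in mind is something like this (names here are just placeholders, and as far as I can tell it would land in its own .ibd file rather than in ibdata1 while innodb_file_per_table=1):

-- hypothetical filler table; names are placeholders
CREATE TABLE test.filler (junk LONGBLOB) ENGINE=InnoDB;
-- repeat until the table is roughly the size of ibdata1
INSERT INTO test.filler VALUES (REPEAT('0', 65536));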

Best Answer

There is absolutely no way to reduce the size of ibdata1 in place. As for the data and index pages already sitting in ibdata1 from obsolete tables, you could let mysqld overwrite them with new transactions over time.

RECOMMENDATION #1

You must migrate all data out and reload with an empty ibdata1.

I have already addressed this on StackOverflow back on Oct 29, 2010 (Steps Outlined).

I have also addressed this in other posts on the DBA StackExchange.
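For readers without those links handy, the overall shape of that migration is roughly the following (a sketch only; paths and credentials are placeholders, and the linked posts remain the authoritative write-up):

# 1) Dump everything (data, routines, triggers)
mysqldump -uroot -p... --routines --triggers --all-databases > /root/SQLData.sql
# 2) Drop all databases except the mysql schema (from the mysql client)
# 3) Shut down mysqld
service mysql stop
# 4) Make sure innodb_file_per_table is set under [mysqld] in /etc/my.cnf
# 5) Remove the old system tablespace and redo logs
rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
# 6) Start mysqld; a fresh, small ibdata1 and new redo logs are created
service mysql start
# 7) Reload the dump; every InnoDB table now gets its own .ibd file
mysql -uroot -p... < /root/SQLData.sql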

If you cannot do this in Production at this time, you will have to present this as a case study for the need to do so. Here is what you do:

  • Setup a Staging DB Server
    • No initial data
    • innodb_file_per_table configured
  • Take a mysqldump of Production (use --single-transaction)
  • Load the mysqldump into Staging (a sketch of both commands follows this list)
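Here is a minimal sketch of that dump-and-load, assuming the Staging server is reachable as stagingdb (hostname and credentials are placeholders):

# consistent dump of Production without locking InnoDB tables
mysqldump -uroot -p... --single-transaction --routines --triggers --all-databases > /root/ProdDump.sql
# load it into Staging, where innodb_file_per_table is already enabled
mysql -h stagingdb -uroot -p... < /root/ProdDump.sql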

Please note that ibdata1 will not grow dramatically because data and index pages for each InnoDB table will reside in its own personal tablespace file.

For example, right now the table mydb.mytable resides inside ibdata1. All you have as an external manifestation of the table is /var/lib/mysql/mydb/mytable.frm.

Once you load the Staging DB Server, the table will look like this on disk:

  • /var/lib/mysql/mydb/mytable.frm
  • /var/lib/mysql/mydb/mytable.ibd

Only data dictionary elements of the table are left behind in ibdata1.

That will give you a preview of what Production will look like once you perform the InnoDB cleanup there.

RECOMMENDATION #2

Once you have performed InnoDB CleanUp in Production, you may be concerned with old data trapped within each individual .ibd file. Cleaning up an .ibd file is much easier.

To cleanup the InnoDB table mydb.mytable, run one of the following:

  • ALTER TABLE mydb.mytable ENGINE=InnoDB;
  • OPTIMIZE TABLE mydb.mytable;

That's it. The only difference is that OPTIMIZE TABLE may try to run ANALYZE TABLE under the hood. It may get bypassed for an InnoDB table.

RECOMMENDATION #3

If the goal is to truly stomp over all unused pages in ibdata1, here is something you may want to try out on a Staging Server:

STEP01) Extract all InnoDB tables from ibdata1. (Since you have already done so, I will write this up for the benefit of others):

SQLTORUN="SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=InnoDB;')"
SQLTORUN="${SQLTORUN} FROM information_schema.tables WHERE engine='InnoDB'"
mysql -uroot -p...  -ANe"${SQLTORUN}" > ExtractInnoDB.sql
mysql -uroot -p... -A < ExtractInnoDB.sql
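As an optional safeguard (not part of the original commands), you could append one more condition to SQLTORUN before generating ExtractInnoDB.sql so the system schemas are skipped; in MySQL 5.5 they are not InnoDB anyway, so this is purely belt-and-braces:

SQLTORUN="${SQLTORUN} AND table_schema NOT IN ('mysql','information_schema','performance_schema')"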

STEP02) Create a large temp table in MyISAM called test.junkman:

CREATE TABLE test.junkman (a INT) ENGINE=MyISAM;
INSERT INTO test.junkman VALUES (0);
INSERT INTO test.junkman SELECT * FROM test.junkman; ## run this line 30 times (or use the loop below)
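If you would rather not paste that line 30 times by hand, a small loop does the same doubling (credentials are a placeholder, as above):

# double the MyISAM table 30 times: 1 row -> 2^30 rows
for X in `seq 1 30`
do
    mysql -uroot -p... -e "INSERT INTO test.junkman SELECT * FROM test.junkman;"
done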

When done, you will generate a 7 GB MyISAM table with no indexes:

mysql> SELECT data_length/power(1024,3) TableSize
    -> FROM information_schema.tables
    -> WHERE table_schema='test'
    -> AND table_name='junkman';
+-----------+
| TableSize |
+-----------+
|         7 |
+-----------+
1 row in set (0.00 sec)

mysql>

This should take about 10-15 minutes to create.

STEP03) Disable innodb_file_per_table

Go to /etc/my.cnf and comment it out:

[mysqld]
#innodb_file_per_table

then service mysql restart

STEP04) Create InnoDB table test.junkman2

CREATE TABLE test.junkman2 LIKE test.junkman;
ALTER TABLE test.junkman2 ENGINE=InnoDB;

This will create test.junkman2 inside ibdata1, since innodb_file_per_table is now disabled.

STEP05) In another SSH session, run this watch command:

watch ls -l /var/lib/mysql/ibdata1

STEP06) Load test.junkman into test.junkman2:

INSERT INTO test.junkman2 SELECT * FROM test.junkman;

STEP07) Repeat STEP06 until the size of ibdata1 actually changes in the other SSH session (a loop that automates STEP06 and STEP07 is sketched below)
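Something along these lines would do it (a sketch; the datadir path and credentials are assumptions). It keeps inserting until ibdata1 grows past its starting size, which is the sign that the free space inside it has been written over:

IBDATA=/var/lib/mysql/ibdata1
START_SIZE=`stat -c%s ${IBDATA}`
CUR_SIZE=${START_SIZE}
# keep stuffing junk rows into ibdata1 until the file is forced to grow
while [ ${CUR_SIZE} -le ${START_SIZE} ]
do
    mysql -uroot -p... -e "INSERT INTO test.junkman2 SELECT * FROM test.junkman;"
    CUR_SIZE=`stat -c%s ${IBDATA}`
    echo "ibdata1 is now ${CUR_SIZE} bytes"
done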

STEP08) Uncomment innodb_file_per_table in /etc/my.cnf to re-enable it

[mysqld]
innodb_file_per_table

STEP09) service mysql restart

STEP10) TRUNCATE TABLE test.junkman2; DROP TABLE test.junkman2;

STEP11) TRUNCATE TABLE test.junkman; DROP TABLE test.junkman;

I've never tried this before (this was just a working theory, although the 7 GB MyISAM stunt was real). Go give it a try and let me know what happens.