Before answering the question, please see the InnoDB Infrastructure Map.
Based on innodb_file_per_table being disabled, let's go through your questions:
Q 1: How much fragmentation is allowed before it affects performance?
The system tablespace can grow to the limit of the disk volume.
EXAMPLE: I just answered a question about what to do when the system tablespace reaches the limit of an ext3 disk: How to solve "The table ... is full" with "innodb_file_per_table"?
There may still be some wiggle room inside the system tablespace. However, when the wiggle room dwindles to the point that all 1023 undo logs inside the system tablespace are completely filled and can no longer extend, then you must add a new system tablespace file.
Please note that when I say wiggle room, I am referring to the free space within the system tablespace that must accommodate the following:
- Data Dictionary
- Double Write Buffer (can be disabled but not recommended)
- Insert Buffer (Cached Index Changes in System Tablespace instead of the OS)
- Rollback Segments (1023 slots)
- Undo Logs (referenced from the Rollback Segments)
- Please refer back to the InnoDB Infrastructure Map
Q 2: Should InnoDB tables even be optimized (some say yes others say no)?
If you run OPTIMIZE TABLE, you basically make the data and index pages contiguous in the system tablespace. This defragments the table and makes access to all data and indexes quicker until fragmentation reappears over time in production use. OPTIMIZE TABLE itself can introduce new areas of fragmentation. Again, all that fragmented space can fill up with data and indexes, which endangers the wiggle room I mentioned before.
Q 3: How do you test for InnoDB fragmentation if the server does not use the “file per table” option?
Back on Aug 27, 2012, I answered this post: How To Optimize and Repair InnoDB tables? ALTER and OPTIMIZE table failed
I explained there how to get the fragmentation. In essence, you do this:
Go to the OS and run:
cd /var/lib/mysql
ls -l ibdata1 | awk '{print $5}'
This gets you the size of ibdata1 in bytes
SELECT (data_length+index_length) InnoDBDataIndexBytes
FROM information_schema.tables WHERE engine='InnoDB';
This gets you the sum total of data and index pages in bytes
Subtract the sum total of data and index bytes from ibdata1's total bytes. The difference represents the wiggle room. This space is the fragmentation, but it is constantly reused until ibdata1 fills up.
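To make the subtraction concrete, here is a small shell sketch. The byte counts are made-up placeholders; in practice you would plug in the output of the ls command and the information_schema query above:

```shell
# Hypothetical figures -- substitute your real numbers:
IBDATA1_BYTES=1073741824      # from: ls -l ibdata1 | awk '{print $5}'  (assumed 1 GiB)
DATA_INDEX_BYTES=805306368    # from the information_schema query       (assumed 768 MiB)

# The difference is the wiggle room (fragmentation) inside ibdata1
WIGGLE_ROOM=$((IBDATA1_BYTES - DATA_INDEX_BYTES))
echo "Wiggle room: ${WIGGLE_ROOM} bytes"
```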
CAVEAT : When innodb_file_per_table is enabled, I explain how to get the fragmentation of an individual table: Innodb table with many deletes and inserts - is there any disk space wasted?
Q 4: Is fragmentation the only reason to run “optimize table”?
Yes. It is far more beneficial for MyISAM tables and for InnoDB tables with innodb_file_per_table enabled. Do this with innodb_file_per_table off and you will just make the system tablespace grow faster. See my post: How can Innodb ibdata1 file grows by 5X even with innodb_file_per_table set?
Q 5: If I do need to run “optimize table” on an InnoDB table should I run ALTER TABLE mydb.mytable ENGINE=InnoDB; and not ANALYZE TABLE
Running ALTER TABLE mydb.mytable ENGINE=InnoDB; would indeed shrink the table when innodb_file_per_table is enabled. Again, it is not worthwhile when innodb_file_per_table is disabled.
Q 6: Can you selectively tell which innodb tables needs optimizing if the server does not use the "file per table" option?
No, you cannot. Why? The INFORMATION_SCHEMA becomes totally useless for this because all the tables live inside one file. I wrote a script to find the update_time (the last time an InnoDB table was written) of all InnoDB tables: Is there a way to find the least recently used tables in a schema? That script only works with innodb_file_per_table enabled. This shows that you cannot ascertain that fragmentation with ease. You could resort to more aggressive techniques, like dumping the tablespace map and locating segments with unused space. See this blog post: http://www.markleith.co.uk/2009/01/19/innodb-table-and-tablespace-monitors/. This is way too much firepower to deal with. You could just run OPTIMIZE TABLE to eliminate segment fragmentation, but this brings us back full circle to getting everything out of ibdata1.
SUGGESTION
If you want to remove all data and index pages from ibdata1 and shrink ibdata1 permanently, please read my Oct 29, 2010 StackOverflow post: Howto: Clean a mysql InnoDB storage engine? As you can see, this subject is not new to me.
EPILOGUE
Running OPTIMIZE TABLE is not the biggest reason that ibdata1 grows quickly. Please see this post from mysqlperformanceblog and learn about the other contributing factors.
Please remember that most who run OPTIMIZE TABLE do so sequentially. You could probably script many of them to run in parallel. Of course, you would need to convert to innodb_file_per_table as I mentioned before.
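A rough sketch of the parallel idea. The table names and the 4-way fan-out are illustrative assumptions, and a real run would feed each statement to the mysql client rather than echo:

```shell
# Hypothetical table list; in practice you would pull this from
# information_schema.tables WHERE engine='InnoDB'
TABLES="t1 t2 t3 t4"

# xargs -P4 runs up to 4 jobs at once; swap `echo` for
# `mysql -e` to actually execute the statements.
OUT=$(printf '%s\n' $TABLES | xargs -n1 -P4 -I{} echo "OPTIMIZE TABLE mydb.{};")
printf '%s\n' "$OUT"
```

Note the output order is nondeterministic because the jobs run concurrently.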
Everything you have done is correct in order to drastically defragment innodb tablespaces (there are other ways, but exporting, deleting files and importing certainly works).
Trying to optimize the tables just after an in-primary-key-order import is useless. (By the way, the correct way to do it for InnoDB is running ALTER TABLE mytable ENGINE=InnoDB; OPTIMIZE TABLE just calls this, and REPAIR TABLE does nothing for InnoDB.) InnoDB is optimized for access and write performance, not for data size. You will always have some overhead in disk space (even if reported free space says 0), as InnoDB reserves space in whole extents beyond a certain size. Also, some data types (like BLOBs) and random inserts and deletions can lead to extra fragmentation, but I presume that the size you are showing is less than 10% of the table size, so you need not worry. After a proper import, this is the least fragmented state InnoDB can achieve with the default options. Under normal operation, the file size should stay constant when inserting new data until no more free space is available.
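To illustrate the whole-extent overhead, here is a small arithmetic sketch. It assumes the default 16 KiB InnoDB page size, where an extent is 64 pages (1 MiB); the 10.3 MiB table size is a made-up example:

```python
import math

# Default InnoDB geometry: 16 KiB pages, 64 pages per extent = 1 MiB
PAGE_SIZE = 16 * 1024
EXTENT_SIZE = 64 * PAGE_SIZE

# Hypothetical table needing ~10.3 MiB of data: once a segment is large
# enough, space is reserved in whole extents, so the last extent is
# mostly empty but still counted against the file.
data_bytes = int(10.3 * 1024 * 1024)
extents = math.ceil(data_bytes / EXTENT_SIZE)   # 11 extents reserved
reserved = extents * EXTENT_SIZE
overhead = reserved - data_bytes                # unavoidable slack
print(extents, reserved, overhead)
```

This slack is why "free space says 0" can still hide reserved-but-unused bytes.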
Of course, you can try to change parameters like extent or page sizes, but if you are concerned about saving disk space over performance, I would recommend trying compression (the InnoDB Barracuda file format has it) or using a different engine. However, please note that fragmentation and having huge files with lots of free space is usually not a concern with innodb_file_per_table = 1 under normal loads.
Best Answer
ANALYZE TABLE will read index pages for a table, compute statistics, and store the results in INFORMATION_SCHEMA.STATISTICS. No writes to ibdata1 whatsoever. Notwithstanding, anything DDL-related performed against an InnoDB table or its indexes with innodb_file_per_table disabled will make ibdata1 mercilessly grow.
There are two things you could try to minimally control (or at least monitor) ibdata1's growth:
ALTERNATIVE #1 : Place a limit on ibdata1 on creation
Perhaps create ibdata1 with a large initial size and a larger maximum file size.
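For example, a my.cnf sketch along these lines (the 10G initial size and 32G ceiling are placeholder figures, not recommendations; size them for your workload and volume):

```ini
[mysqld]
# ibdata1 starts at 10G, autoextends, but is capped at 32G
innodb_data_file_path = ibdata1:10G:autoextend:max:32G
```

This must be set before ibdata1 is created; changing the initial size of an existing ibdata1 requires rebuilding the system tablespace.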
ALTERNATIVE #2 : Use a Large Raw Data Device
According to MySQL 5.0 Certification Study Guide, Page 428
CAVEAT
With either alternative, you stop worrying about growth only until there is no more room; you must eventually deal with any imposed limit on ibdata1's size.
- Apr 15, 2012: What happens when InnoDB hits its tablespace autoextend max?
- Apr 11, 2012: How do you remove fragmentation from InnoDB tables?