I think your server's performance should improve: you have already set
innodb_file_per_table=1
and migrated your bulky tables under this setting, so each of your large tables now uses its own .ibd
file for InnoDB operations.
From now on, all new tables (and the tables you already migrated) will use their own .ibd
files instead of the default system tablespace file, which has grown to 10GB; hence performance should improve ... IMHO.
innodb_file_per_table
has these advantages:
- You can back up or restore a single table quickly without interrupting the use of other InnoDB tables, using the MySQL Enterprise Backup product. See Restoring a Single .ibd File for the procedure and restrictions.
- Storing specific tables on separate physical disks, for I/O optimization or backup purposes.
- Restoring backups of single tables quickly without interrupting the use of other InnoDB tables.
- Using compressed row format to compress table data.
- Reclaiming disk space when truncating a table.
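For reference, enabling it takes a single line in my.cnf; this is a minimal sketch (the exact config file location varies by distribution), and keep in mind that only tables created or rebuilt after the restart move out of ibdata1:

```ini
[mysqld]
# Each newly created InnoDB table gets its own .ibd tablespace file
innodb_file_per_table = 1
```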
Before answering the question, please see the InnoDB Infrastructure Map.
Based on innodb_file_per_table being disabled, let's go through your questions:
Q 1: How much fragmentation is allowed before it affects performance?
The system tablespace can grow to the limit of the disk volume.
EXAMPLE: I just answered a question about what to do when the system tablespace reaches the limit of an ext3 disk: How to solve "The table ... is full" with "innodb_file_per_table"?
There may still be some wiggle room inside the system tablespace. However, when the wiggle room dwindles to the point that all 1023 undo logs inside the system tablespace are completely filled and can no longer extend, then you must add a new system tablespace file.
Please note that when I say wiggle room, I am referring to the free space within the system tablespace that must accommodate the following:
- Data Dictionary
- Double Write Buffer (can be disabled but not recommended)
- Insert Buffer (caches changes to secondary index pages inside the system tablespace)
- Rollback Segments (1023 slots)
- Undo Logs (referenced from the Rollback Segments)
- Please refer back to the InnoDB Infrastructure Map
Q 2: Should InnoDB tables even be optimized (some say yes others say no)?
If you run OPTIMIZE TABLE, you essentially make the table's data and index pages contiguous inside the system tablespace. This defragments the table and makes access to all data and indexes quicker, until fragmentation reappears over time under production use. Inside the system tablespace, the rebuild also leaves behind the old copy's space, introducing new areas of fragmentation. Again, all that fragmented space can fill up with data and indexes. This will endanger the wiggle room I mentioned before.
Q 3: How do you test for InnoDB fragmentation if the server does not use the “file per table” option?
Back on Aug 27, 2012, I answered this post: How To Optimize and Repair InnoDB tables? ALTER and OPTIMIZE table failed. I explained there how to get the fragmentation. In essence, you do this:
Go to the OS shell and run:
cd /var/lib/mysql
ls -l ibdata1 | awk '{print $5}'
This gets you the size of ibdata1 in bytes
SELECT (data_length+index_length) InnoDBDataIndexBytes
FROM information_schema.tables WHERE engine='InnoDB';
This gets you the sum total of data and index pages in bytes
Subtract that sum total from ibdata1's total bytes. The difference represents the wiggle room. That space is where fragmentation accumulates, and it is reused continually until ibdata1 fills up.
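Putting the arithmetic together, here is a sketch of the calculation with made-up numbers; on a real server you would substitute the actual ibdata1 size and the query result:

```shell
# Hypothetical numbers standing in for the real measurements:
IBDATA1_BYTES=10737418240        # from: ls -l ibdata1 | awk '{print $5}'   (10 GB here)
DATA_INDEX_BYTES=8589934592      # from: SUM(data_length+index_length)      (8 GB here)

# The difference is the "wiggle room" inside the system tablespace
WIGGLE_BYTES=$((IBDATA1_BYTES - DATA_INDEX_BYTES))
echo "Wiggle room: ${WIGGLE_BYTES} bytes"
```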
CAVEAT : When innodb_file_per_table is enabled, I explained how to get the fragmentation of an individual table here: Innodb table with many deletes and inserts - is there any disk space wasted?
Q 4: Is fragmentation the only reason to run “optimize table”?
Yes, fragmentation is the main reason. OPTIMIZE TABLE is far more beneficial for MyISAM tables and for InnoDB tables with innodb_file_per_table enabled. Do this with innodb_file_per_table off and you will just make the system tablespace grow faster. See my post How can Innodb ibdata1 file grows by 5X even with innodb_file_per_table set?
Q 5: If I do need to run “optimize table” on an InnoDB table, should I run ALTER TABLE mydb.mytable ENGINE=InnoDB; rather than ANALYZE TABLE?
Running ALTER TABLE mydb.mytable ENGINE=InnoDB;
would indeed shrink the table's tablespace when innodb_file_per_table is enabled. Again, it is not worth it when innodb_file_per_table is disabled.
Q 6: Can you selectively tell which InnoDB tables need optimizing if the server does not use the "file per table" option?
No, you cannot. Why? The INFORMATION_SCHEMA becomes totally useless here because all the tables live inside one file. I wrote a script to find the update time (the last time an InnoDB table was written) of all InnoDB tables (Is there a way to find the least recently used tables in a schema?), but that script only works with innodb_file_per_table. This shows that you cannot ascertain that fragmentation with ease. You could resort to more aggressive techniques, such as dumping the tablespace map and locating segments with unused space: see this blog post: http://www.markleith.co.uk/2009/01/19/innodb-table-and-tablespace-monitors/. That is way too much firepower for this. You could just run OPTIMIZE TABLE to eliminate segment fragmentation, but this brings us back full circle to getting everything out of ibdata1.
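As a rough illustration of the per-table approach my script relies on, this sketch ranks .ibd files by modification time as a crude proxy for "most recently written table". The directory and table names here are made up so the sketch runs anywhere; on a real server you would point it at your datadir (e.g. /var/lib/mysql/mydb):

```shell
# Build a throwaway fake datadir (assumption: real path would be your MySQL datadir)
DATADIR=$(mktemp -d)
touch "$DATADIR/orders.ibd"
sleep 1
touch "$DATADIR/customers.ibd"

# Most recently modified .ibd first -- only meaningful with innodb_file_per_table
NEWEST=$(ls -t "$DATADIR"/*.ibd | head -1)
echo "Most recently written: $NEWEST"
```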
SUGGESTION
If you want to remove all data and index pages from ibdata1 and shrink ibdata1 permanently, please read my Oct 29, 2010 StackOverflow post: Howto: Clean a mysql InnoDB storage engine? As you can see, this subject is not new to me.
EPILOGUE
Running OPTIMIZE TABLE is not the biggest reason that ibdata1 grows quickly. Please see this post from mysqlperformanceblog and learn about the other contributing factors.
Please remember that most people who run OPTIMIZE TABLE do so sequentially. You could probably script many of them to run in parallel. Of course, you would need to convert to innodb_file_per_table first, as I mentioned before.
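A minimal sketch of the parallel approach; the table names are hypothetical, and `mysql` is prefixed with `echo` here so the sketch runs without a live server (drop the `echo` and add your credentials in real use):

```shell
# Hypothetical table list; in practice you might pull this from information_schema
TABLES="articles comments users"

# Run up to 3 rebuilds at once; 'echo mysql' stands in for the real client call
RESULT=$(printf '%s\n' $TABLES |
  xargs -P 3 -I{} echo mysql -e "OPTIMIZE TABLE mydb.{};" | sort)
echo "$RESULT"
```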
Best Answer
If the table is that simple (INT and TEXT), you cannot change the design any more.
Suppose your table is called mydb.myarticles. If you run this TableSize Query, you will see how much space the data pages are taking up. If you feel DataPagesGB is significantly lower than 98GB, then run OPTIMIZE TABLE mydb.myarticles;, which will actually execute a table rebuild (ALTER TABLE mydb.myarticles ENGINE=InnoDB;) followed by ANALYZE TABLE. This will shrink the table as much as possible. Then, run the TableSize Query again. Chances are, you may not get much shrinkage.
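The TableSize Query boils down to converting the table's data_length (from information_schema.tables) into gigabytes. A sketch of that conversion with a hypothetical byte count:

```shell
# Hypothetical data_length, as information_schema.tables would report it
DATA_LENGTH=52613349376

# Convert bytes to GB (1024^3) -- this is the DataPagesGB figure
DataPagesGB=$(awk -v b="$DATA_LENGTH" 'BEGIN { printf "%.2f", b / (1024^3) }')
echo "DataPagesGB: $DataPagesGB"
```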
Keep this in mind: the default size of an InnoDB page is 16KB (innodb_page_size). Any TEXT value over 16KB will cause InnoDB to split the row across two or more pages. In other words, at least one of those pages will have lots of wasted space.
CAVEAT
I would strongly advise against changing innodb_page_size for compression purposes. As I wrote in a post back on Mar 02, 2012, InnoDB creates additional wasted space in the buffer pool when changing to the Barracuda format, requiring a much larger buffer pool. This would be in addition to the chaining of the increased number of InnoDB pages.
Your Original Question
Yes, such an increase is normal, especially if many rows have TEXT values bigger than 16KB. If you add any additional indexes (such as a FULLTEXT index or a prefix index), you will bloat the index pages.
You can estimate the bookkeeping overhead by subtracting DataAndIndexesInBytes from the size of the .ibd file and dividing that difference by 3 million (the row count). That answer gives you the bookkeeping bytes needed per row.
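The same estimate sketched with hypothetical numbers (a ~98GB .ibd and a made-up data+index total), just to show the arithmetic:

```shell
# Hypothetical measurements:
IBD_BYTES=105226698752        # size of the .ibd file (~98 GB)
DATA_INDEX_BYTES=96000000000  # data_length + index_length from information_schema
ROWS=3000000                  # the 3 million rows mentioned above

# Bookkeeping overhead per row, in bytes (integer division)
OVERHEAD_PER_ROW=$(( (IBD_BYTES - DATA_INDEX_BYTES) / ROWS ))
echo "${OVERHEAD_PER_ROW} bytes of bookkeeping per row"
```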