FACTS
You said you are using ext4. Its file size limit is 16TB, so Sample.ibd should not be full.
You said your innodb_data_file_path is ibdata1:10M:autoextend. Thus, the ibdata1 file itself has no cap on its size except from the OS.
Why is this message coming up at all? Notice the message is "The table ... is full", not "The disk ... is full". This "table is full" condition is a logical one, not a physical one. Think about InnoDB. What interactions are going on?
My guess is that InnoDB is attempting to load 93GB of data as a single transaction. Where would the "Table is Full" message emanate from? I would look at ibdata1, not in terms of its physical size (which you already ruled out), but in terms of what transaction limits are being reached.
What is inside ibdata1 when innodb_file_per_table is enabled and you load new data into MySQL?
My suspicions tell me that the Undo Logs and/or Redo Logs are to blame.
What are these logs? According to the Book, Chapter 10 ("Storage Engines"), page 203, paragraphs 3 and 4:
The InnoDB engine keeps two types of logs: an undo log and a redo log. The purpose of an undo log is to roll back transactions, as well as to display the older versions of the data for queries running in the transaction isolation level that requires it. The code that handles the undo log can be found in storage/innobase/log/log0log.c.
The purpose of the redo log is to store the information to be used in crash recovery. It permits the recovery process to re-execute the transactions that may or may not have completed before the crash. After re-executing those transactions, the database is brought to a consistent state. The code dealing with the redo log can be found in storage/innobase/log/log0recv.c.
ANALYSIS
There are 1023 Undo Logs inside ibdata1 (see Rollback Segments and Undo Space). Since the undo logs keep copies of data as they appeared before the reload, all 1023 Undo Logs have reached their limits. From another perspective, all 1023 Undo Logs may be dedicated to the one transaction that loads the Sample table.
BUT WAIT...
You are probably saying, "I am loading an empty Sample table. How are Undo Logs involved?" Before the Sample table was loaded with 93GB of data, it was empty. Representing every row that did not exist must take up some housekeeping space in the Undo Logs. Filling up 1023 Undo Logs seems trivial given the amount of data pouring into ibdata1. I am not the first person to suspect this:
From the MySQL 4.1 Documentation, a note posted by Chris Calender on September 4, 2009:
Note that in 5.0 (pre-5.0.85) and in 5.1 (pre-5.1.38), you could receive the "table is full" error for an InnoDB table if InnoDB runs out of undo slots (bug #18828).
Here is the bug report for MySQL 5.0 : http://bugs.mysql.com/bug.php?id=18828
SUGGESTIONS
When you create the mysqldump of the Sample table, please use --no-autocommit:
mysqldump --no-autocommit ... mydb Sample > Sample.sql
This will put an explicit COMMIT; after every INSERT. Then, reload the table.
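If you already have a dump that was taken without --no-autocommit, the same effect can be approximated by post-processing the file so the reload commits every few thousand INSERTs instead of accumulating one huge transaction. This is a hypothetical sketch of my own, not part of mysqldump; it assumes each INSERT statement occupies a single line of the dump, as mysqldump normally emits:

```python
def add_periodic_commits(dump_lines, every=1000):
    """Rewrite a dump so the reload commits every `every` INSERTs
    instead of building up one huge transaction.
    (Hypothetical helper, not part of mysqldump; assumes each
    INSERT statement occupies a single line of the dump.)"""
    out = ["SET autocommit=0;"]
    inserts = 0
    for line in dump_lines:
        out.append(line)
        if line.lstrip().upper().startswith("INSERT"):
            inserts += 1
            if inserts % every == 0:
                out.append("COMMIT;")
    out.append("COMMIT;")  # commit whatever remains at the end
    return out

# Tiny demonstration on a made-up five-row dump
demo = ["INSERT INTO Sample VALUES (%d);" % i for i in range(5)]
rewritten = add_periodic_commits(demo, every=2)
print(rewritten.count("COMMIT;"))  # → 3 (after rows 2 and 4, plus the final one)
```

Frequent commits keep each transaction small, which is exactly what spares the Undo Logs.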
If this does not work (you are not going to like this), do this:
mysqldump --no-autocommit --skip-extended-insert ... mydb Sample > Sample.sql
This will make each INSERT have just one row. The mysqldump will be much larger (10+ times bigger) and could take 10 to 100 times longer to reload.
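To see why the dump grows so much, compare the two output formats on some toy rows (a sketch with made-up data; the table name and values are illustrative only):

```python
rows = [(i, "user%d" % i) for i in range(1000)]  # hypothetical sample rows

def extended_insert(table, rows):
    # One multi-row INSERT, as mysqldump emits by default
    values = ",".join("(%d,'%s')" % r for r in rows)
    return "INSERT INTO %s VALUES %s;" % (table, values)

def single_row_inserts(table, rows):
    # One INSERT per row, as --skip-extended-insert emits
    return "\n".join("INSERT INTO %s VALUES (%d,'%s');" % ((table,) + r)
                     for r in rows)

ext = extended_insert("Sample", rows)
single = single_row_inserts("Sample", rows)
# The single-row form repeats "INSERT INTO ... VALUES" for every row,
# so it is several times larger than the extended form.
print(round(len(single) / len(ext), 1))
```

The size ratio depends on row width: the narrower the rows, the more the repeated INSERT boilerplate dominates, and the bigger the blowup.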
In either case, this will spare the Undo Logs from being inundated.
Give it a Try !!!
UPDATE 2013-06-03 13:05 EDT
ADDITIONAL SUGGESTION
If the InnoDB system tablespace (a.k.a. ibdata1) hits a filesize limit and the Undo Logs cannot be used, you can just add another system tablespace file (ibdata2).
I encountered this situation just two days ago. I updated my old post with what I did: see Database Design - Creating Multiple databases to avoid the headache of limit on table size
In essence, you have to change innodb_data_file_path to accommodate a new system tablespace file. Let me explain how:
SCENARIO
On disk (ext3), my client's server had the following:
[root@l*****]# ls -l ibd*
-rw-rw---- 1 s-em7-mysql s-em7-mysql 362807296 Jun 2 00:15 ibdata1
-rw-rw---- 1 s-em7-mysql s-em7-mysql 2196875759616 Jun 2 00:15 ibdata2
The setting was
innodb_data_file_path=ibdata1:346M;ibdata2:500M:autoextend:max:10240000M
Note that ibdata2 grew to 2196875759616 bytes, which is 2145386484K (about 2TB).
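As a quick check on that arithmetic (assuming 1K = 1024 bytes, the convention innodb_data_file_path uses for its K/M/G suffixes):

```python
ibdata2_bytes = 2196875759616  # size reported by ls -l

# The byte count divides evenly by 1024, so it can be expressed
# exactly in kilobytes for an innodb_data_file_path entry.
assert ibdata2_bytes % 1024 == 0
print("%dK" % (ibdata2_bytes // 1024))  # → 2145386484K
```

The size embedded for an existing file must match the file on disk exactly, or mysqld will refuse to start, which is why it pays to do this conversion from the ls -l output rather than round.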
I had to embed the filesize of ibdata2 into innodb_data_file_path and add ibdata3:
innodb_data_file_path=ibdata1:346M;ibdata2:2196875759616;ibdata3:10M:autoextend
When I restarted mysqld, it worked:
[root@l*****]# ls -l ibd*
-rw-rw---- 1 s-em7-mysql s-em7-mysql 362807296 Jun 3 17:02 ibdata1
-rw-rw---- 1 s-em7-mysql s-em7-mysql 2196875759616 Jun 3 17:02 ibdata2
-rw-rw---- 1 s-em7-mysql s-em7-mysql 32315015168 Jun 3 17:02 ibdata3
In 40 hours, ibdata3 grew to 31G. MySQL was once again working.
I think I have a plausible explanation you may find very intriguing.
When you loaded the MyISAM table, only one index got loaded into the .MYI file in a lopsided manner.
Which index would that be? THE PRIMARY KEY. Why?
Indexes usually use BTREEs. They are designed to collect keys into a single BTREE node and balance the BTREE internally until a node fills up. On an insert into a full node, the node is split and the keys inside are divided between the halves.
The time a BTREE node splits most often is when you load the keys in order. For proof, look at the worst-case BTREE: the balanced binary tree.
I have several posts in the DBA StackExchange where I mention how a balanced binary tree only has one key per node, and how leaf pages must rebalance about 45% of the time when inserting a new key.
My working theory is that random inserts into the table may actually make the .MYI file smaller by preventing tree node splits as much as possible.
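The theory can be sketched with a toy simulation (my own simplified model, not MyISAM's actual page-split algorithm): leaves hold a fixed number of keys and split in half when full. Loading keys in sorted order leaves every split-off page half empty, while a random order packs pages fuller, so the ordered load ends up with more pages:

```python
import random

NODE_CAPACITY = 4  # toy page size: at most 4 keys per leaf

def load_keys(keys):
    """Insert keys one at a time into a toy list-of-sorted-leaves
    'index', splitting a full leaf in half before inserting.
    Returns the leaves so we can inspect how full they are."""
    leaves = [[]]
    for k in keys:
        # pick the last leaf whose smallest key is <= k
        i = 0
        for j, leaf in enumerate(leaves):
            if leaf and leaf[0] <= k:
                i = j
        if len(leaves[i]) == NODE_CAPACITY:  # leaf is full: split it in half
            mid = NODE_CAPACITY // 2
            leaves[i:i + 1] = [leaves[i][:mid], leaves[i][mid:]]
            if leaves[i + 1][0] <= k:
                i += 1
        leaves[i].append(k)
        leaves[i].sort()
    return leaves

keys = list(range(200))
seq_leaves = load_keys(keys)            # keys loaded in sorted order
shuffled = keys[:]
random.Random(42).shuffle(shuffled)
rnd_leaves = load_keys(shuffled)        # same keys, random order

# The ordered load strands a half-empty page at every split,
# so it needs more pages than the shuffled load.
print(len(seq_leaves), len(rnd_leaves))
```

Real MyISAM pages are far larger and the engine has optimizations for sequential loads, so treat this only as an illustration of why page fill factor depends on insert order.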
In order to make this happen, you will have to do something quite unusual. Ready for this?
ALTER TABLE generic_dummy_table_name ORDER BY datetime;
or
ALTER TABLE generic_dummy_table_name ORDER BY user,datetime;
That's right: you can reorder the physical rows of the table. This may make the indexes a different size by inducing BTREE splits on another index.
Try reordering the table. Then, mysqldump it and reload it. You may find that the .MYI gets smaller or bigger. You cannot really predict it.
Next time, try reloading the MyISAM table with a very large bulk insert buffer.
Just run
SET GLOBAL bulk_insert_buffer_size = 1024 * 1024 * 1024;
then connect to mysql and reload the mysqldump.
GIVE IT A TRY !!!
Best Answer
Yes, it is perfectly safe to do so as long as you don't gzip the file which is currently being written (the last file).
Also, in the case of a replication setup, ensure that the files you are gzipping have already been fetched by the slave. You can verify this by checking the output of
SHOW SLAVE STATUS
on the slave; look for Master_Log_File. This will give you the logfile of the master which is currently being fetched by the slave's IO thread. Every preceding file is absolutely safe to gzip.
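That rule can be sketched as a tiny helper (hypothetical, not a MySQL tool): given the binlog file names on the master and the Master_Log_File value from SHOW SLAVE STATUS, everything that sorts strictly before that file is safe to compress. This relies on the zero-padded numeric suffix keeping lexicographic order, which holds for the default mysql-bin.NNNNNN naming below one million files:

```python
def safe_to_gzip(binlog_files, master_log_file):
    """Return the binlogs the slave's IO thread has fully fetched,
    i.e. every file strictly older than Master_Log_File.
    Relies on the zero-padded suffix (mysql-bin.000042) so that
    lexicographic order matches numeric order."""
    return [f for f in sorted(binlog_files) if f < master_log_file]

print(safe_to_gzip(
    ["mysql-bin.000003", "mysql-bin.000001", "mysql-bin.000002"],
    "mysql-bin.000003"))  # → ['mysql-bin.000001', 'mysql-bin.000002']
```

With multiple slaves, take the oldest Master_Log_File across all of them before deciding what to compress.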