Following these steps should give you a decent shot at making the MySQL instance happy again:
- recreate the database's directory that you deleted
- create an InnoDB table with the same exact column names and definitions and the same table name, but in a different schema (or on a different server)
- copy only the .frm file for that table into the database's directory, not the .ibd file
- be sure the file and directory permissions are correct
- start MySQL normally (without innodb-force-recovery)
- DROP TABLE database_name.table_name;
The server should then respond by removing the .frm file, removing the table from InnoDB's internal data dictionary, and returning a simple warning that the .ibd file was missing.
You'll have to repeat the process for all of the tables, but if you have the schema on another machine (or identical tables in another schema on this one, already) you should be able to copy all the .frm files over in one shot, start up, then drop them all.
But don't try opening them, don't "USE database_name", and don't connect with a graphical client that tries to enumerate the tables. Start the command line client with the --no-auto-rehash option so it doesn't try to enumerate them in the background, either.
This should let you drop the tables, at which point, you may be stable again.
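Put together, the steps above can be sketched as a dry run like this (mydb, mytable, and the donor path are placeholders; swap run() for direct execution, as root, to do it for real):

```shell
# Dry-run sketch of the recovery steps above; names and paths are placeholders.
run() { echo "$@"; }              # dry run: print each command instead of executing it
DB=mydb; TBL=mytable; DATADIR=/var/lib/mysql

run mkdir -p "$DATADIR/$DB"                                # recreate the deleted directory
run cp "/path/to/donor/$TBL.frm" "$DATADIR/$DB/$TBL.frm"   # only the .frm, never the .ibd
run chown -R mysql:mysql "$DATADIR/$DB"                    # fix file/directory ownership
run service mysql start                                    # normal start, no innodb-force-recovery
run mysql --no-auto-rehash -e "DROP TABLE $DB.$TBL;"       # server discards the orphan .frm
```

Once the dry run prints the plan you expect, repeat it per table (or copy all the .frm files in one pass, as described above).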
There are some options you will need to consider.
InnoDB Buffer Pool
The reason 26G was picked is that you have 32GB of RAM, and 80% of that is 25.6G. Since you mentioned that you will have 100 databases and 100 applications, making this a multi-tenant DB server, you are going to have to get the InnoDB Buffer Pool sized just right.
Please run this query:
SELECT IFNULL(B.engine,'Total') "Storage Engine",
CONCAT(LPAD(REPLACE(FORMAT(B.DSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Data Size", CONCAT(LPAD(REPLACE(
FORMAT(B.ISize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Index Size", CONCAT(LPAD(REPLACE(
FORMAT(B.TSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Table Size" FROM
(SELECT engine,SUM(data_length) DSize,SUM(index_length) ISize,
SUM(data_length+index_length) TSize FROM
information_schema.tables WHERE table_schema NOT IN
('mysql','information_schema','performance_schema') AND
engine IS NOT NULL GROUP BY engine WITH ROLLUP) B,
(SELECT 3 pw) A ORDER BY TSize;
This will tell you how much space is currently occupied by your MySQL instance. Add up the InnoDB Data Size and Index Size; that total is what you use. If that total exceeds the 80% limit, then you must use the 80% figure instead (leaving innodb_buffer_pool_size at 26G).
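As a sketch of that sizing rule (the data and index figures here are hypothetical stand-ins for what the query returns on your server):

```shell
# Sizing rule sketch: pool = min(InnoDB Data + Index size, 80% of RAM).
# RAM_GB is this server's 32 GB; DATA_GB/INDEX_GB are hypothetical query results.
RAM_GB=32
DATA_GB=18.2                                  # hypothetical "Data Size" from the query
INDEX_GB=9.9                                  # hypothetical "Index Size" from the query
CAP_GB=$(awk -v r="$RAM_GB" 'BEGIN{print r*0.8}')                  # 80% ceiling: 25.6
NEED_GB=$(awk -v d="$DATA_GB" -v i="$INDEX_GB" 'BEGIN{print d+i}') # what the data wants
POOL_GB=$(awk -v n="$NEED_GB" -v c="$CAP_GB" 'BEGIN{v=(n<c)?n:c; print v}')
echo "innodb_buffer_pool_size ~ ${POOL_GB}G"   # capped at the 80% ceiling here
```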
Since you have a quad-core server, set innodb_buffer_pool_instances to 4.
InnoDB Transaction Log Files
Since 26G was selected as innodb_buffer_pool_size, you are going to need the biggest possible transaction logs. The value 512M was probably picked for innodb_log_file_size because there is nothing to suggest the amount of transaction data (in bytes) that will actually be processed.
To resize your transaction logs:
mysql -u... -p... -e"SET GLOBAL innodb_fast_shutdown = 0;"
service mysql stop
Next, edit my.cnf, replacing
innodb_log_file_size = 512M
with
innodb_log_file_size = 2047M
Then, replace the transaction logs like this:
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
service mysql start
After a few months of peak activity, you could then run this query during a peak:
SET @TimeInterval = 300;
SELECT variable_value INTO @num1 FROM information_schema.global_status
WHERE variable_name = 'Innodb_os_log_written';
SELECT SLEEP(@TimeInterval);
SELECT variable_value INTO @num2 FROM information_schema.global_status
WHERE variable_name = 'Innodb_os_log_written';
SET @ByteWrittenToLog = @num2 - @num1;
SET @KB_WL = @ByteWrittenToLog / POWER(1024,1) * 3600 / @TimeInterval;
SET @MB_WL = @ByteWrittenToLog / POWER(1024,2) * 3600 / @TimeInterval;
SET @GB_WL = @ByteWrittenToLog / POWER(1024,3) * 3600 / @TimeInterval;
SELECT @KB_WL,@MB_WL,@GB_WL;
Based on what comes back, you should resize the transaction logs again.
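For example, suppose @MB_WL came back as 3000 MB written per hour (a hypothetical figure). A common rule of thumb is to size the redo logs to hold roughly an hour's worth of writes, split across the two log files; MySQL 5.5 caps the combined size just under 4G, hence 2047M per file:

```shell
# Hypothetical follow-up: @MB_WL reported 3000 MB of redo written per hour.
MB_PER_HOUR=3000
PER_FILE_MB=$(( MB_PER_HOUR / 2 ))       # two ib_logfiles share the hour of writes
if [ "$PER_FILE_MB" -gt 2047 ]; then     # 5.5 limit: combined logs must stay under 4G
  PER_FILE_MB=2047
fi
echo "innodb_log_file_size = ${PER_FILE_MB}M"
```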
Please see my earlier posts on doing this:
Multicore Engagement
When the InnoDB Plugin was introduced in MySQL 5.1.38, it set the world of MySQL on fire. Why? Because the built-in InnoDB was single-threaded. You had to install the plugin to get the new settings that allowed InnoDB to use multiple cores.
Rather than writing something lengthy, please read my earlier posts on tweaking MySQL 5.5 to have InnoDB utilize multiple cores:
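In broad strokes, the MySQL 5.5 settings involved look like this (a my.cnf sketch, not tuned values; the I/O thread counts here simply match your four cores):

```ini
[mysqld]
innodb_thread_concurrency = 0     # 0 = let InnoDB manage its own thread scheduling
innodb_read_io_threads    = 4     # background read I/O threads, one per core here
innodb_write_io_threads   = 4     # background write I/O threads
```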
Best Answer
In an ideal world, all of your data would be resident in the InnoDB buffer pool, all the time... so I had a really hard time thinking of a situation where evicting data from the InnoDB buffer pool would be a good thing, but I did find two cases.
I assume you're referring to the innodb_buffer_pool_pages_free value.
I can't directly address the Enterprise aspect, since I don't use it, but I can tell you that I was able to get "MySQL Community Server (GPL) 5.5.15-log" to actually free up space in the InnoDB buffer pool.
If you have a light enough traffic load or a small enough data set that your InnoDB buffer pool is technically larger than is needed (and therefore already has blocks free), the value of innodb_buffer_pool_pages_free will actually increase if you delete rows from a table that's already in the buffer pool.
In the example above, I started out with a value of 35 on a test server with very low traffic. I ran a
SELECT *
from a table that probably wasn't all in the pool, which dropped the value to 27. Then I deleted everything from the table and the value rose to 29... so at least in some cases, deletes from a table that is already in the pool will trigger an increase in the pages_free value, assuming your load isn't such that the pages will be claimed by a different table as soon as they are freed -- which is the case on my servers with much larger data sets and heavier traffic.
Similarly, from a second session, I created a small temporary table using InnoDB. This cost me 4 blocks from the pool. Then I closed that session and immediately regained the 4 blocks.
These same tests on a busy server don't yield the same results. The "free" pretty much stays at 0, which means I technically need a larger pool, though performance on that machine is quite adequate.
So the difference you are seeing may be a difference in traffic load and data size, rather than a difference in the server editions.
However, "free space" isn't something to be wished for, here, unless you gain that space by increasing the provisioned size of the pool.
I can think of no reason why actively evicting live data from the InnoDB buffer pool in the interest of "having more free space" there would be beneficial, at all, ever.
If a page is clean (i.e., its contents match the disk files where it came from because it's unmodified or modifications have been written back to the disk files), then it can be reused, without being "free" first. The InnoDB buffer pool is exactly where you want as much of your data as possible to always be resident.
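If you want to watch these counters on your own server, the buffer pool state (including the pages_free value discussed above) is exposed as status variables:

```sql
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
```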