ASPECT #1: BAD FILE
Something tells me you somehow ran out of space in /tmpfs.
Whenever MySQL creates a temp table:
- It is created as a MyISAM table
- No .frm is created
- The initial .MYD is 0 bytes
- The initial .MYI is 1024 bytes (1K)
- There are never any indexes in the .MYI
I find it strange that MySQL could not create a 1K .MYI.
Since ext3 has a default block size of 1K, while ext4 has 4K, perhaps the 4K blocksize may have allowed for more data. This, in itself, is not the real problem.
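You can check the block size of the filesystem backing MySQL's temp directory with stat; a quick sketch, assuming /tmp is the path in question (substitute your actual tmpdir):

```shell
# Report the fundamental block size of the filesystem backing a path.
# The path /tmp is an assumption; point it at your actual MySQL tmpdir.
stat -fc 'block size: %S bytes' /tmp
```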
ASPECT #2: YOUR QUERY
If you have tens of thousands of tables, the query you are running looks quite demanding. In effect, the query says "Get me all databases that have 1 or more tables in them." With a large number of tables, you could be looking at a rather large temp table in order to collect table_schema values before doing the DISTINCT.
Sometimes this is easy to forget. There are two Stack Overflow posts that mention it.
ANALYSIS
My guess is that this is a combination of anomalies:
ANOMALY #1 : You have a large number of tables.
ANOMALY #2 : The temp table being built must have run out of space, leaving behind a damaged .MYI.
ANOMALY #3 : The query does not use an index.
Look at the table definition:
mysql> show create table information_schema.tables\G
*************************** 1. row ***************************
Table: TABLES
Create Table: CREATE TEMPORARY TABLE `TABLES` (
`TABLE_CATALOG` varchar(512) NOT NULL DEFAULT '',
`TABLE_SCHEMA` varchar(64) NOT NULL DEFAULT '',
`TABLE_NAME` varchar(64) NOT NULL DEFAULT '',
`TABLE_TYPE` varchar(64) NOT NULL DEFAULT '',
`ENGINE` varchar(64) DEFAULT NULL,
`VERSION` bigint(21) unsigned DEFAULT NULL,
`ROW_FORMAT` varchar(10) DEFAULT NULL,
`TABLE_ROWS` bigint(21) unsigned DEFAULT NULL,
`AVG_ROW_LENGTH` bigint(21) unsigned DEFAULT NULL,
`DATA_LENGTH` bigint(21) unsigned DEFAULT NULL,
`MAX_DATA_LENGTH` bigint(21) unsigned DEFAULT NULL,
`INDEX_LENGTH` bigint(21) unsigned DEFAULT NULL,
`DATA_FREE` bigint(21) unsigned DEFAULT NULL,
`AUTO_INCREMENT` bigint(21) unsigned DEFAULT NULL,
`CREATE_TIME` datetime DEFAULT NULL,
`UPDATE_TIME` datetime DEFAULT NULL,
`CHECK_TIME` datetime DEFAULT NULL,
`TABLE_COLLATION` varchar(32) DEFAULT NULL,
`CHECKSUM` bigint(21) unsigned DEFAULT NULL,
`CREATE_OPTIONS` varchar(255) DEFAULT NULL,
`TABLE_COMMENT` varchar(2048) NOT NULL DEFAULT ''
) ENGINE=MEMORY DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
mysql>
A full table scan is required, resulting in the need for a temp table with an in-memory sort.
SUGGESTIONS
SUGGESTION #1 : Use ext4. You already said the query works fine in ext4.
SUGGESTION #2 : Increase tmpfs to 16G.
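One way to do that (the mount point below is an illustrative assumption; adjust it to wherever your tmpfs actually lives) is via an fstab entry, or a live remount:

```shell
# /etc/fstab entry (mount point is a hypothetical example):
# tmpfs   /run/mysql-tmp   tmpfs   size=16G   0 0

# Or resize an already-mounted tmpfs in place (requires root):
mount -o remount,size=16G /run/mysql-tmp
```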
SUGGESTION #3 : Query the information_schema so that it does not use as much memory
DROP TABLE IF EXISTS mysql.dbtbcount;
CREATE TABLE mysql.dbtbcount
(
table_schema VARCHAR(64) NOT NULL,
table_count INT NOT NULL DEFAULT 0,
PRIMARY KEY (table_schema)
) ENGINE=MEMORY;
INSERT INTO mysql.dbtbcount (table_schema)
SELECT schema_name FROM information_schema.schemata
WHERE schema_name NOT IN ('information_schema','mysql');
INSERT INTO mysql.dbtbcount (table_schema)
SELECT table_schema FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema','mysql')
ON DUPLICATE KEY UPDATE table_count = table_count + 1;
SELECT * FROM mysql.dbtbcount;
SELECT COUNT(1) DistinctDBs FROM mysql.dbtbcount WHERE table_count > 0;
This counts all tables in all databases:
- When a row has a table_count of zero, that database is empty.
- Count all rows with a nonzero table_count to get the DISTINCT count.
- When done, run DROP TABLE mysql.dbtbcount;
Give it a Try !!!
UPDATE 2013-08-19 17:40 EDT
Your problem should have been posted as a separate question. Nevertheless, let's see what we can surmise from this.
You should not be using /tmp as a backup location. Why? If /tmp is mounted as tmpfs, anything written there lives in RAM, vanishes on reboot, and large dumps there compete for the very space your temp tables need.
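A quick way to see what you are dealing with, assuming GNU df is available: check which filesystem type backs /tmp. If it reports tmpfs, dumps written there are RAM-backed and disappear on reboot.

```shell
# Print the filesystem type backing /tmp (e.g. tmpfs, ext4)
df -PT /tmp | awk 'NR==2 {print $2}'
```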
You need to reorganize the script to do parallel dumps
SUGGESTION
You should collect the database names and run 25 parallel mysqldumps at a time, starting from the biggest databases and working down to the smallest. Each successive set of 25 finishes faster.
BACKUP_FOLDER=/any/path/but/tmp
MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-h${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_PASS}"
SQLSTMT="SELECT db FROM (SELECT table_schema db,SUM(data_length) datasize"
SQLSTMT="${SQLSTMT} FROM information_schema.tables"
SQLSTMT="${SQLSTMT} WHERE table_schema NOT IN ('mysql','log')"
SQLSTMT="${SQLSTMT} AND RIGHT(table_schema,7)<>'_schema'"
SQLSTMT="${SQLSTMT} GROUP BY table_schema) A"
SQLSTMT="${SQLSTMT} ORDER BY datasize DESC"
mysql ${MYSQL_CONN} -ANe"${SQLSTMT}" > /tmp/ListOfDatabases.txt
COMMIT_COUNT=0
COMMIT_LIMIT=25
MYSQLDUMP_OPTIONS="--hex-blob --routines --triggers"
for DB in `cat /tmp/ListOfDatabases.txt`
do
    BACKUP_FILE=${BACKUP_FOLDER}/${DB}.sql.gz
    mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_FILE} &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi
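The batch-and-wait throttling can be exercised in isolation with a placeholder command standing in for mysqldump, which is handy for verifying the loop logic before pointing it at a live server (all names here are illustrative):

```shell
#!/bin/bash
# Sketch of the batch-and-wait pattern with echo standing in for mysqldump.
COMMIT_COUNT=0
COMMIT_LIMIT=3
for DB in db1 db2 db3 db4 db5 db6 db7
do
    echo "dumping ${DB}" &                 # stand-in for the gzip pipeline
    COMMIT_COUNT=$(( COMMIT_COUNT + 1 ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait                               # let the current batch finish
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait                                   # catch the final partial batch
fi
```

With COMMIT_LIMIT=3 and 7 items, this runs batches of 3, 3, and 1, blocking between batches.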
UPDATE 2013-08-20 10:03 EDT
If you don't care about database size, here is an alternate method that starts the mysqldump processes sooner:
BACKUP_FOLDER=/any/path/but/tmp
MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-h${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_PASS}"
SQLSTMT="SELECT schema_name FROM information_schema.schemata"
SQLSTMT="${SQLSTMT} WHERE schema_name NOT IN ('mysql','log')"
SQLSTMT="${SQLSTMT} AND RIGHT(schema_name,7)<>'_schema'"
mysql ${MYSQL_CONN} -ANe"${SQLSTMT}" > /tmp/ListOfDatabases.txt
COMMIT_COUNT=0
COMMIT_LIMIT=25
MYSQLDUMP_OPTIONS="--hex-blob --routines --triggers"
for DB in `cat /tmp/ListOfDatabases.txt`
do
    BACKUP_FILE=${BACKUP_FOLDER}/${DB}.sql.gz
    mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_FILE} &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi
I wrote about doing parallel database and table dumps before: How can I optimize a mysqldump of a large database? If you cannot get mydumper to throttle based on database size, this is the way to go.
GIVE IT A TRY !!!