- At T1, an insert comes for table_1 and goes to disk block_1.
- At T2, an insert comes for table_2. Can it go to the same disk block_1 (provided block_1 has space), or will a new block be created per table?
- At T3, an insert again comes for table_1. Can it go to either block_1 or block_2 based on available space?
MySQL – Disk block per table
Related Solutions
If you cannot free up or add any disk space, you could copy the table_data.* files (.MYD, .MYI, .frm, etc.) to another machine with plenty of free disk, run the repair there with myisamchk, then copy the files back to the original machine. If the server needs to stay running, issue FLUSH TABLES table_data WITH READ LOCK
before copying the data to the other server and UNLOCK TABLES
after you copy the data back. You will need to keep the FLUSH TABLES ... session running for the whole duration.
So, it would look like this
server1:
# keep this session running for the duration of the repair!
mysql> FLUSH TABLES table_data WITH READ LOCK;
/var/lib/mysql# rsync -aP table_data.* server2:/somedir/
server2:
# make a backup
/somedir# tar -czvf table_data_backup.tgz table_data.*
# run the repair
/somedir# myisamchk -r table_data
# copy the files back
/somedir# rsync -aP table_data.* server1:/var/lib/mysql/
server1:
# make sure the permissions are correct on /var/lib/mysql/table_data.*
mysql> UNLOCK TABLES;
Remember, do not close the FLUSH TABLES WITH READ LOCK session.
You may want to give myisamchk additional memory via the following parameters to speed it up: --key_buffer_size, --sort_buffer_size, --read_buffer_size, --write_buffer_size
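For instance, the repair invocation might look like this (a hedged sketch: the buffer sizes and the /somedir/table_data path are assumptions, to be tuned to the free RAM on the machine doing the repair):

```shell
# Sketch of a myisamchk repair with enlarged buffers; the sizes and the
# /somedir/table_data path are illustrative assumptions.
myisamchk --recover \
  --key_buffer_size=1G \
  --sort_buffer_size=1G \
  --read_buffer_size=64M \
  --write_buffer_size=64M \
  /somedir/table_data
```

--recover is the long form of the -r used above.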
Next, convert your table to InnoDB. There's almost no good reason to use MyISAM these days.
Last time, I answered your question by suggesting you perform a well-scripted parallel dump of all tables.
Since the tables are lopsided in size, parallel table dumps are not much help for the 20GB and 13GB tables. What can be done? You must try something out of the ordinary. The good thing here is that all the tables are MyISAM.
Have you ever considered running an rsync? How do you do that?
EXAMPLE
You have /var/lib/mysql as your datadir for a remote production database
Perform an rsync on /var/lib/mysql on prod server (IP Address 10.1.2.30) to /backups on a backup server. Just run a script something like this on the backup server:
INIT_TS=0
CURR_TS=0
DIFF_TS=61
RSYNC_THRESHOLD=60
DBSERVER_IP=10.1.2.30
FOLDER_TO_BACKUP=/var/lib/mysql
mkdir /backups/backup_temp
while [ ${DIFF_TS} -gt ${RSYNC_THRESHOLD} ]
do
INIT_TS=`date +%s`
rsync -arv user01@${DBSERVER_IP}:${FOLDER_TO_BACKUP}/ /backups/backup_temp
CURR_TS=`date +%s`
(( DIFF_TS = CURR_TS - INIT_TS ))
done
MYSQL_CONN="-h${DBSERVER_IP} -uusername -ppassword"
mysql ${MYSQL_CONN} -A -e"FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400)" &
SEARCHING_FOR_SLEEP=0
while [ ${SEARCHING_FOR_SLEEP} -eq 0 ]
do
sleep 3
SEARCHING_FOR_SLEEP=`mysql ${MYSQL_CONN} -A -e"SHOW PROCESSLIST;" | grep -c "SELECT SLEEP(86400)"`
done
sleep 3
SLEEP_ID=`mysql ${MYSQL_CONN} -A -e"SHOW PROCESSLIST;" | grep "SELECT SLEEP(86400)" | awk '{print $1}'`
rsync -arv user01@${DBSERVER_IP}:${FOLDER_TO_BACKUP}/ /backups/backup_temp
BACKUP_TIMESTAMP=`date +"%Y%m%d%H%M%S"`
mv /backups/backup_temp /backups/${BACKUP_TIMESTAMP}
mysql ${MYSQL_CONN} -A -e"KILL ${SLEEP_ID}"
Here is what this basic backup script does
- Step 01) rsync production's /var/lib/mysql (from remote server) to /backups on local server
- Step 02) If running time of rsync is greater than 60 seconds, repeat Step 01
- Step 03) Close all tables and issue global read lock
- Step 04) Perform final rsync (SELECTs allowed only)
- Step 05) Release global read lock
Notice the following:
- the only mysql interaction is
- issuing the global read lock
- checking for the process ID that is holding the global read lock
- releasing the global read lock
- FOLDER_TO_BACKUP is the datadir. If you want to backup a specific database (say mydb), set FOLDER_TO_BACKUP to /var/lib/mysql/mydb
- Actual downtime lasts from the acquisition of the SLEEP_ID to the killing of the SLEEP_ID process
Give it a Try !!!
UPDATE 2012-02-02 11:47 EDT
The above script secures mysqld in such a way that it takes a disk-level copy of the database while allowing mysqld to perform INSERTs, UPDATEs, and DELETEs during the copy phase. This is necessary since the MyISAM storage engine has no natural mechanism for allowing a copy of itself while writes are in flight. Even powerful products like MySQL Enterprise Backup and Percona XtraBackup perform full table locking on MyISAM tables just like this. My personal preference is to script it so I can see it, feel it, taste it, and touch it, and know for myself that MyISAM is fully protected.
The advantage of using my script stems from the fact that it shortens the length of downtime. In the context of this answer, downtime is really the period where the mysqld process will only allow SELECTs against the MyISAM tables, during the final rsync.
An additional benefit is that all the MyISAM tables copied will share the same point-in-time, should you wish to restore them all as of that point-in-time. That point-in-time is based not on the start of the backup process, but on when the global read lock was acquired.
If you need to copy only specific tables and you can guarantee no DB connection will touch them, only then can you toss aside the failsafes and copy any MyISAM table to your heart's content.
UPDATE 2012-02-03 11:47 EDT
If the length of time the rsync takes concerns you, here is an additional suggestion:
You should also make nightly backups of the binary logs. By keeping nightly copies of the binary logs, you essentially have incremental backups.
That being said, you could set up a weekly or monthly run of the backup script. Recovery to a specific point-in-time would be your responsibility from there, but you would have every piece of data necessary to do so.
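A nightly binary log copy could be sketched like this (hedged: the host, credentials, and the mysql-bin basename are assumptions; match the basename to your server's log_bin setting):

```shell
# Sketch of a nightly binary log backup; the IP, credentials, and the
# mysql-bin basename are illustrative assumptions.
DBSERVER_IP=10.1.2.30
BACKUP_TS=`date +"%Y%m%d"`
mkdir -p /backups/binlogs/${BACKUP_TS}
# rotate so the current binlog is closed and fully on disk before copying
mysql -h${DBSERVER_IP} -uusername -ppassword -A -e"FLUSH LOGS"
rsync -arv user01@${DBSERVER_IP}:/var/lib/mysql/mysql-bin.* /backups/binlogs/${BACKUP_TS}/
```

For point-in-time recovery, replay the copied logs with mysqlbinlog (for example with --stop-datetime) piped into the mysql client.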
Best Answer
InnoDB (the default Engine in MySQL) allocates separate "blocks" for each table.
A "block" (an InnoDB page) is 16KB, regardless of the block size used by the OS or disk controller.
Yes, a table with one small row is "wasting" a lot of disk space. But if you have a million rows, the percentage waste will be lower.
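To put a number on that (hedged: the 100-byte row is an assumed size, and real pages also spend bytes on headers), the slack in a 16KB block holding one small row is easy to compute:

```shell
# Percentage of a 16KB InnoDB block left unused by a single 100-byte row
# (illustrative arithmetic; real pages also carry header overhead).
awk 'BEGIN { page = 16384; row = 100; printf "%.1f%%\n", 100 * (page - row) / page }'
```

This prints 99.4%. With a million such rows, the same 16KB block holds on the order of 160 rows, and the waste shrinks to the leftover space in partially filled blocks.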
Similarly, each INDEX (except for the leaf nodes of the PRIMARY KEY) will use disk in blocks of 16KB. Look at the "Data_length" and "Index_length" values for the table.
I created a table with 2 secondary indexes but have not put any rows in it yet: even empty, it occupies three 16KB blocks, one for the data (the PRIMARY KEY B-tree) and one for each secondary index. Those 3 blocks belong to that one table; there is no sharing between tables.
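One way to pull up those figures from the command line (a hedged sketch: "mydb", "mytable", and the credentials are placeholder names) is:

```shell
# Show InnoDB's allocated data and index bytes for one table;
# "mydb", "mytable", and the credentials are placeholders.
mysql -uusername -ppassword -e "SHOW TABLE STATUS LIKE 'mytable'\G" mydb \
  | grep -E 'Data_length|Index_length'
```

The same numbers are available from information_schema.TABLES (the DATA_LENGTH and INDEX_LENGTH columns).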