If you convert all your tables to InnoDB, then you can perform your mysqldumps without disturbing any of your DB connections.
Here is a script to mass convert all MyISAM tables into InnoDB.
mysql -uroot -p -AN -e"SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=InnoDB;') InnoDBConversionSQL FROM information_schema.tables WHERE engine='MyISAM' AND table_schema NOT IN ('information_schema','mysql','performance_schema') ORDER BY (data_length+index_length)" > /root/ConvertMyISAMToInnoDB.sql
Just execute /root/ConvertMyISAMToInnoDB.sql (for example, `mysql -uroot -p -A < /root/ConvertMyISAMToInnoDB.sql`) during a full maintenance window. This needs to be done only once. Once all your tables are InnoDB, you could probably mysqldump your databases in parallel. I wrote an earlier post back in April 2011 on multiple ways to perform mysqldumps; you need option 2. You will also need to add the --single-transaction
option to each mysqldump so that each dump reads from a consistent point-in-time snapshot.
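The parallel-dump idea can be sketched as a small shell script. This is a hedged sketch, not a turnkey backup solution: the database names (db1, db2, db3) and the backup path /backups are placeholders for illustration, and in practice the list would come from `SHOW DATABASES` minus the system schemas. It only generates the commands so you can review them before running anything:

```shell
# Generate one mysqldump command per database, each with --single-transaction
# so it reads a consistent InnoDB snapshot, all launched in parallel via '&',
# followed by 'wait' to block until every dump finishes.
BACKUP_DIR=/backups            # placeholder path
DUMP_SCRIPT=""
for DB in db1 db2 db3          # placeholder database names
do
    DUMP_SCRIPT="${DUMP_SCRIPT}mysqldump --single-transaction --routines --triggers ${DB} > ${BACKUP_DIR}/${DB}.sql &
"
done
DUMP_SCRIPT="${DUMP_SCRIPT}wait"
printf '%s\n' "$DUMP_SCRIPT"
```

Reviewing the generated script before executing it is the same generate-then-run pattern as the MyISAM conversion above.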
However, you could still experience performance problems, because each table being dumped pushes older data out of the InnoDB Buffer Pool to make room for its own pages.
Normal operations will slow down, since running queries must share the InnoDB Buffer Pool with the mysqldumps that are thrashing it.
As for raising max_connections, you can do this without restarting mysql. First, add this to my.ini:
[mysqld]
max_connections=1000
You do not have to restart mysql. Just run this command in the mysql client:
mysql> SET GLOBAL max_connections = 1000;
You may need more RAM on the box to accommodate the additional DB connections, as well as a decent-sized InnoDB Buffer Pool.
The first thing you need to do is run this query:
SELECT user,host FROM mysql.user
WHERE super_priv='Y' AND
CONCAT(user,'@',host) <> 'root@localhost';
This will list all users that have the SUPER privilege. Most users that do application-related DB processing do not require this privilege. According to the MySQL Documentation, those with the SUPER privilege can do the following:
- Run CHANGE MASTER TO to control replication coordinates
- Use KILL or mysqladmin kill to kill threads belonging to other accounts
- Use PURGE BINARY LOGS to delete binary logs
- Make configuration changes using SET GLOBAL to modify global system variables
- Run the mysqladmin debug command
- Enable or disable logging
- Perform updates even if the *read_only* system variable is enabled
- Start and stop replication on slave servers
- Specify any account in the DEFINER attribute of stored programs and views
- HERE IS THE MOST IMPORTANT ONE FOR YOUR PROBLEM: connect (once) even if the connection limit controlled by the max_connections system variable is reached
You will need to login as root@localhost and revoke SUPER privilege as follows:
UPDATE mysql.user SET super_priv='N'
WHERE super_priv='Y' AND
CONCAT(user,'@',host) <> 'root@localhost';
FLUSH PRIVILEGES;
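As an aside: on MySQL 5.7 and later you may prefer REVOKE statements over editing mysql.user directly. Here is a hedged sketch in the same generate-then-review spirit as the conversion script above; the output filename is an assumption, and the actual mysql invocations are commented out so you can inspect the generated SQL first:

```shell
# Build a query that generates one REVOKE statement per SUPER-holding user
# (except root@localhost). Echo it for review; run the commented lines to
# actually generate and apply the statements.
SQL="SELECT CONCAT('REVOKE SUPER ON *.* FROM ''',user,'''@''',host,''';') FROM mysql.user WHERE super_priv='Y' AND CONCAT(user,'@',host) <> 'root@localhost'"
echo "$SQL"
# mysql -uroot -p -AN -e"$SQL" > /root/RevokeSuperFromUsers.sql
# mysql -uroot -p -A < /root/RevokeSuperFromUsers.sql
```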
Once you do this, whenever users flood mysql with connections, only root@localhost
can still log in. After all, if everybody and his grandmother had the SUPER privilege, that would bar root@localhost
from ever connecting ahead of everyone else. If max_connections is at 200 and you need to raise it to 300 without having to restart mysqld, you can increase max_connections dynamically with this command:
mysql> SET GLOBAL max_connections = 300;
That will allow more connections effective immediately, but don't just arbitrarily increase the number on a whim. You have to make sure mysql has enough RAM to accommodate the increase.
CAVEAT: If you change max_connections dynamically to 300, please also put it in /etc/my.cnf so the setting survives a restart:
[mysqld]
max_connections=300
You can run mysqltuner.pl on your MySQL DB Server. If you do not have it, then run the following:
cd
wget mysqltuner.pl
perl mysqltuner.pl
The third line under Performance Metrics has the key figure:
-------- Performance Metrics -------------------------------------------------
[--] Up for: 8d 20h 46m 22s (8M q [10.711 qps], 129K conn, TX: 90B, RX: 19B)
[--] Reads / Writes: 4% / 96%
[--] Total buffers: 2.1G global + 5.4M per thread (2000 max threads)
[OK] Maximum possible memory usage: 12.6G (80% of installed RAM)
See the 5.4M per thread? That gets multiplied by max_connections. In this example, that would be a maximum of about 10.8G of RAM for the per-thread buffers alone. Therefore, each time you bump up max_connections, you should run mysqltuner.pl again and check whether you are pressing the OS for too much memory.
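The arithmetic can be checked directly, using the numbers from the mysqltuner sample above (2.1G of global buffers, 5.4M per thread, 2000 max threads):

```shell
# Worst-case memory = global buffers + (per-thread buffers * max_connections).
# The 5.4M per-thread figure is converted to gigabytes before multiplying.
awk 'BEGIN {
    global_gb   = 2.1        # global buffers, in GB
    per_thread  = 5.4/1024   # per-thread buffers, in GB
    max_threads = 2000       # max_connections
    printf "%.1fG\n", global_gb + per_thread * max_threads
}'
# prints 12.6G
```

which matches the 12.6G "Maximum possible memory usage" mysqltuner reports.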
In any case, limiting who has the SUPER privilege gives you a way to mitigate users flooding mysqld with DB connections.
Best Answer
Here is a full guide on how I solved the issue.
I tried to log in as root with the usual command, but it did not work for me; mysql kept giving the same error.
After some searching I tried another approach, which stopped mysqld; it was then unable to restart, because at the same time my disk was full.
Then, to find out where the space was being used on Ubuntu, I ran a disk-usage command. That showed me that all the space was occupied by MySQL's binlog files. I couldn't remove the files by hand, since that is not a recommended procedure, but I did not have anything else to remove and I was not able to start mysql to purge the binlog files. I then ran a command that freed enough space to restart mysql.
At this point I was able to log in to mysql and purge the binlog files.
But this is not a long-term solution, so I also made a change to avoid the problem in the future.
I am going to improve this answer when I solve the maximum-connections issue for the long term.