Something tells me you are either using a version of MySQL prior to 5.0, you called an older version of mysqldump, or you messed with the settings of the dump.
What usually blows up a mysqldump past the size of its dataset is the option --skip-extended-insert.
In older versions of MySQL, there was no extended insert. That means each and every row in a table got an INSERT command to itself. If a table had 2,000 rows, the mysqldump output would have 2,000 INSERT commands. That's a whole lot of commas, parentheses, single quotes, and "INSERT INTO" tags to place in a dump file.
In newer versions of MySQL, --extended-insert was added to group together dozens (or even hundreds) of rows in a single INSERT. So, instead of...
INSERT INTO tbname VALUES (1);
INSERT INTO tbname VALUES (2);
INSERT INTO tbname VALUES (3);
INSERT INTO tbname VALUES (4);
INSERT INTO tbname VALUES (5);
You could have this:
INSERT INTO tbname VALUES (1),(2),(3),(4),(5);
For mysqldump, --opt includes --extended-insert. If you use --skip-opt when running mysqldump, it disables --extended-insert along with the rest of the --opt group. Here are the options of mysqldump that affect extended insert (a quick size comparison follows the listing):
-e, --extended-insert
Use multiple-row INSERT syntax that include several
VALUES lists.
(Defaults to on; use --skip-extended-insert to disable.)
--opt Same as --add-drop-table, --add-locks, --create-options,
--quick, --extended-insert, --lock-tables, --set-charset,
and --disable-keys. Enabled by default, disable with
--skip-opt.
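As a rough illustration of the difference the flag makes (database, table, and user names here are just placeholders), you can dump the same table both ways and compare the file sizes:

# Default behaviour: --opt implies --extended-insert, so rows are grouped
# into multi-row INSERT statements and the dump stays compact.
mysqldump -u backup -p mydb mytable > mytable_extended.sql

# One INSERT per row: the same data, but a noticeably larger file.
mysqldump -u backup -p --skip-extended-insert mydb mytable > mytable_per_row.sql

# Compare the sizes.
ls -lh mytable_extended.sql mytable_per_row.sql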
CAVEAT
Run these at the Linux command line
mysql --version
mysqldump --version
which mysqldump
If these do not match the version of MySQL you are running on the server, or if multiple versions of MySQL exist on the same machine, get that straightened out. Until then, make sure you call the correct version of mysqldump and don't use --skip-opt.
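For example (the install path below is only an assumption; substitute wherever your newer MySQL actually lives), you can bypass the PATH lookup and call the intended binary directly:

# See which binary a bare "mysqldump" resolves to.
which mysqldump

# Call the specific binary you want by full path, leaving --opt
# (and therefore --extended-insert) enabled by default.
/usr/local/mysql/bin/mysqldump -u backup -p mydb > mydb.sql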
There is a known issue on Windows where, when you push a large file to another server, all the memory ends up allocated to the system cache instead of to user processes. You can look in the Physical Memory (MB) section of Task Manager to see how much memory is allocated to the system cache.
This can be solved by backing up to a local disk, then having the remote machine pull that file.
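A sketch of that pull-based approach (hostnames and paths are made up, and it assumes the remote machine can reach the Windows host over SSH or a similar transfer method):

# On the Windows database host: write the dump to a local disk first.
mysqldump -u backup -p mydb > D:\backups\mydb.sql

# Then, from the remote machine, pull the finished file rather than pushing it.
scp winhost:/d/backups/mydb.sql /backups/mydb.sql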
Best Answer
The main bottleneck in a dump like this is drive I/O. You are reading a load of data and writing it again. You can speed this up in a number of ways:

Pipe the output through gzip or similar. This will reduce the amount of writing being done (so reducing the overall I/O load and the amount of head movement) at the expense of some CPU time (which you may have a lot of spare at these times anyway).

Use the --quick option to reduce the RAM impact of backing up large tables.
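A minimal sketch of the two combined (database name and credentials are placeholders):

# --quick streams rows to the client instead of buffering whole tables in memory;
# piping through gzip shrinks what actually gets written to disk.
mysqldump --quick -u backup -p mydb | gzip > /backups/mydb.sql.gz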
You may be fixing the wrong issue though: it might be easier to address the connection drops instead (though reducing the I/O load imposed by your backups will help reduce the effect you have on other users, so it is worth trying anyway). Could you run your manual backups through screen (or a similar tool like tmux)? That way, if your connection to the server drops, you can just reconnect and reattach to the screen session without any processes getting interrupted.
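For example (the session name is arbitrary):

# Start a named screen session on the server and run the backup inside it.
screen -S backup
mysqldump --quick mydb | gzip > /backups/mydb.sql.gz

# If the SSH connection drops, log back in and pick up where you left off.
screen -r backup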
If you are sending the data directly over the connection (i.e. you are running mysqldump on your local machine against a remote database, so the dump appears locally), you might be better off running the dump on the server first, compressing as needed, then transferring the data over the network using a tool (such as rsync) which supports partial transfers, so you can resume the transfer (instead of restarting) if a connection drop interrupts it.
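A resumable pull might look like this (hostname and paths are invented):

# --partial keeps a partially transferred file so an interrupted copy can resume,
# and --progress shows how far along the transfer is.
rsync --partial --progress dbserver:/backups/mydb.sql.gz /local/backups/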
As part of your "reducing the size of the overall database to resolve this issue", I would guess that a large chunk of your data does not change. You might be able to move a large chunk of the 1.2 GB from that main table off into another table and remove it from those that are copied by the mysqldump call. You don't need to back up this data every time if it never changes. Splitting data between tables and databases this way is usually referred to as data partitioning, and it can also allow you to spread the data and I/O load over multiple drives. High-end databases have built-in support for automatic partitioning, though in MySQL you will probably have to do it manually and alter your data access layer to account for it.
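One way to leave that static chunk out of the routine dump (database and table names are hypothetical) is mysqldump's --ignore-table option, dumping the archived table separately on a much slower schedule:

# Nightly backup of everything except the rarely-changing archive table.
mysqldump --quick --ignore-table=mydb.orders_archive -u backup -p mydb | gzip > /backups/mydb_active.sql.gz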
Straying off-topic for this site (so you should probably nip over to ServerFault or SuperUser to ask if you need more detail): if you seem to be losing connections due to inactivity, check the options in your SSH server and SSH client to make sure keep-alive packets are enabled and being sent often enough. If you see drops even while the connection is active, you could also try using OpenVPN or similar to wrap the connection - it should handle a short drop, even a complete drop if your entire connection is down for a few seconds, such that the SSH client and server don't notice.
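On the client side, the relevant OpenSSH keep-alive knobs can be set per connection (the interval values are only examples):

# Send a keep-alive probe every 60 seconds and tolerate up to 3 missed replies
# before the client considers the connection dead.
ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=3 dbserver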