I can't say about the size, but you could check the last entry (or the number of entries) in the dump and compare it against the current last entry in your database. That should help you estimate how far along the import is and how long it will take to finish.
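For example, a minimal sketch, assuming an uncompressed dump file dump.sql and a table with an auto-increment id column (all names here are hypothetical):

    # roughly what the dump loads last (peek at its tail)
    tail -c 100000 dump.sql | grep -o 'INSERT INTO `[^`]*`' | sort -u
    # compare against what has already arrived in the database
    mysql -u root -p mydb -e 'SELECT MAX(id) FROM some_table;'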
The imported data might even end up bigger than the uncompressed 7 GB, since indexes are usually not contained in the dump but are built during the insert.
As a side note, this also points to a way to speed up the import itself: drop the indexes before importing and rebuild them afterwards. That has helped me speed things up several times.
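A minimal sketch of that, assuming a dump that contains only INSERT statements and a hypothetical table big_table with an index idx_col (adjust the names to your schema):

    mysql -u root -p mydb -e 'ALTER TABLE big_table DROP INDEX idx_col;'
    mysql -u root -p mydb < dump.sql    # import without index maintenance
    mysql -u root -p mydb -e 'ALTER TABLE big_table ADD INDEX idx_col (col);'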
The file ibdata1 has "tablespace ids" in it that must correspond to the .ibd files containing the data. Since these ids are generated by the instance, they may differ from the ones on the production machine after you reload the dump. Hence, the rsync could be copying files whose ids don't match.
Moral of the story: don't mix source dumps (mysqldump, xtrabackup, etc.) with file backups (rsync, etc.).
Your backup is hosed; you will need to start over.
Some backup techniques (starting with the least invasive):
Zero downtime (after setup): Master-Slave; the Slave is continually updated and serves as a viable backup.
Near-zero downtime: LVM. After setting up a snapshot area, you (1) stop mysqld, (2) "snapshot" the filesystem, (3) restart mysqld, and (4) rsync the snapshot to wherever it needs to go (see the first sketch after this list). Step 4 is the only slow part; it does not impact the running database other than chewing up a lot of I/O.
Incremental backup via the binlog (see the second sketch after this list).
Xtrabackup, with some of its non-trivial features.
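A minimal sketch of the LVM approach, assuming the MySQL datadir lives on a logical volume /dev/vg0/mysql and the volume group has space reserved for snapshots (all names and paths here are hypothetical):

    systemctl stop mysql                              # (1) stop mysqld
    lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql   # (2) snapshot
    systemctl start mysql                             # (3) restart mysqld
    mount -o ro /dev/vg0/mysql-snap /mnt/snap         # (4) copy the snapshot off-box
    rsync -a /mnt/snap/ backuphost:/backups/mysql/    #     the only slow step
    umount /mnt/snap
    lvremove -f /dev/vg0/mysql-snap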
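And a sketch of the binlog approach, assuming binary logging is enabled (log_bin in my.cnf); the paths and file names below are hypothetical:

    mysql -u root -p -e 'FLUSH BINARY LOGS;'          # rotate to a fresh binlog
    rsync -a /var/lib/mysql/binlog.0* backuphost:/backups/binlogs/
    # to restore, replay the logs on top of the last full backup:
    mysqlbinlog /backups/binlogs/binlog.000042 | mysql -u root -p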
Assuming a bash shell (and that for some reason you cannot concatenate the files together), this is a simple for loop. A minimal sketch, assuming the pieces are named chunk_*.sql and the target database is mydb (names are hypothetical):
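    for f in chunk_*.sql; do
        echo "importing $f"
        mysql -u root -p mydb < "$f"    # prompts for the password once per file
    done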