For this answer, let's use /root/MySQLData.sql as the SQL File.
Go into mysql via the client and run it like this:
mysql> source /root/MySQLData.sql
Give it a Try !!!
UPDATE 2011-12-16 21:08 EDT
I just caught on to your problem: your program is not running because you did not supply the password. It is stuck in the background waiting for one. That's your actual problem. Kill the job and run it again with the password on the command line:
mysql -u root -ppassword DB_NAME < /root/MySQLData.sql

(note there is no space between -p and the password)
or just use my original submitted answer up above.
UPDATE 2011-12-19 11:40 EDT
I just re-read the question. You said single table. Then it hit me !!!
You need to ramp up your bulk_insert_buffer_size. The default is 8M.
To see what your current setting is, run this:
mysql> show variables like 'bulk%';
Just add this to /etc/my.cnf
[mysqld]
bulk_insert_buffer_size=512M
To apply this without restarting mysql, run this SQL command:
SET GLOBAL bulk_insert_buffer_size = 1024 * 1024 * 512;
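One caveat: SET GLOBAL only affects connections opened after the change. A quick sketch of making the session doing the bulk load pick it up as well:

```sql
-- The GLOBAL change only applies to new connections.
-- For the session doing the bulk load, set it explicitly:
SET SESSION bulk_insert_buffer_size = 1024 * 1024 * 512;
-- Confirm both values:
SHOW GLOBAL VARIABLES LIKE 'bulk%';
SHOW SESSION VARIABLES LIKE 'bulk%';
```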
Give it a Try !!!
I have dealt with questions like this in both directions
Your question is a little different because you did not tell us anything about the target MySQL instance (what storage engine will be used, which columns are TEXT/BLOB, what indexing your data needs, etc.)
IMHO the most straightforward way I can think of is the following
METHOD #1
Write some Perl/Python script to sum the length of all XML tag delimiters. Subtract the sum you get from the length of the file. That would essentially be the size of a CSV file less delimiters. This would be the bytes of the raw data.
METHOD #2
Do METHOD #1 and also take the count of all XML tag delimiters and add that count to the result of METHOD #1. That would be like adding commas back into a CSV file. This may be closer in size to an actual CSV should you print the output to such a file.
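Both methods can be sketched in a few lines of Python (the regex treatment of tag delimiters is my own assumption; real XML with attributes spanning lines, CDATA, or entities would need a proper parser):

```python
import re

def estimate_raw_sizes(xml_text):
    """Estimate raw data size per METHOD #1 and METHOD #2.

    METHOD #1: total length minus the combined length of all <...>
    tag delimiters -- roughly the raw data bytes.
    METHOD #2: METHOD #1 plus one byte per tag, as if each stripped
    tag were replaced by a comma in a CSV file.
    """
    tags = re.findall(r'<[^>]*>', xml_text)    # every <...> delimiter
    tag_bytes = sum(len(t) for t in tags)
    method1 = len(xml_text) - tag_bytes
    method2 = method1 + len(tags)
    return method1, method2

# Tiny example document: the raw data is "1" and "abc"
doc = "<row><id>1</id><name>abc</name></row>"
m1, m2 = estimate_raw_sizes(doc)
print(m1, m2)  # 4 and 10 for this tiny document
```

On a real multi-gigabyte file you would stream it in chunks rather than read it into one string, but the accounting is the same.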
Best Answer
Just from the words in your question I suspect the following: You most likely have innodb_file_per_table disabled.
NOTE : The following info is based on innodb_file_per_table being disabled
When you insert data into InnoDB tables, everything and its grandmother lands in the system tablespace file, better known as ibdata1. What does ibdata1 actually contain?
Table data and indexes will initially bloat ibdata1. The metadata is simply the data dictionary plus the list of tablespace_ids assigned on a per-table basis.
What about MVCC (Multiversioning Concurrency Control)? This represents the system objects designed to support transaction isolation, rollbacks, undo logs, insert buffers for secondary indexes, and the double-write buffer.
You need to clean up the InnoDB infrastructure. I have already written StackExchange posts on how and why to do this.
Getting back to your original question, the only way to estimate the size of the ibdata1 on reload would have been to run this query before the mysqldump:
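A query of this kind can be sketched from INFORMATION_SCHEMA (my assumption here is that you want the combined data and index size of all InnoDB tables):

```sql
SELECT SUM(data_length + index_length) / POWER(1024,3) AS InnoDB_GB
FROM information_schema.tables
WHERE engine = 'InnoDB';
```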
This will report the size of the data in GB. Upon a fresh reload of the ibdata1 (with innodb_file_per_table disabled in your case), this would have been the rule of thumb for the size estimation.
From the dump file size, it is hard to judge because the combined total size of data pages and index pages may be far less than the size of the ibdata1 the dump was created from. That difference would have been leftover space from the bloating of MVCC system objects (rollback segments, undo logs, double-write buffer, secondary index insert buffer). From another perspective, data pages could outnumber index pages, and vice versa. This could be due to too many indexes, bad design, or just a judicious amount of data.