When we did something similar recently (~250 GB), we first set up an EC2 instance as a staging slave of the production database, with SSL and binary logging enabled -- to make table dumps, etc. Convert any tables to InnoDB at this point if you need to. Load the data dumps from EC2 into RDS, then use a binary-log feeding strategy to keep the data current. At cutover time, flush the last binary log, load it into RDS, and you're done. I prepped the logs first (removing any SET commands that require SUPER) and just fed them to the mysql client on RDS. Next time I want to try Tungsten Replicator.
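The log-prepping step can be sketched roughly like this. The host name, credentials, binlog file name, and the exact SET patterns to strip are assumptions for illustration -- inspect your own binlog output to see which statements RDS rejects:

```shell
# Strip SET statements that need the SUPER privilege (e.g. GTID/session
# settings RDS won't accept) before feeding binlog output to the RDS
# mysql client. The regex below is an assumed example pattern.
strip_super_sets() {
  sed -E '/^SET @@(GLOBAL\.|SESSION\.GTID_NEXT)/d'
}

# Usage against real servers (placeholder host/credentials):
#   mysqlbinlog mysql-bin.000042 | strip_super_sets \
#     | mysql -h mydb.example.us-east-1.rds.amazonaws.com -u admin -p mydb
```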
I realize you've already worked around the issue, but here is the explanation and the fix for what you encountered. It's nice to be able to do it with the pipe, so you don't have to write that big file somewhere.
As you know, when you type a query into the mysql command-line client, the client doesn't accept any more input from you while that query is executing. Essentially the same thing happens when you string these tools together with a pipe -- the target mysql client waits for the target server to finish the query before it can read more data from the pipe fed by mysqldump.
Meanwhile, because mysqldump is blocked writing to the pipe (we're waiting for the target server to finish a query), it stops reading data from its socket connection to the origin server. The OS will only buffer so much pipe data before mysqldump stops accepting data from the origin server altogether.
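This backpressure is ordinary pipe behavior, nothing MySQL-specific -- you can see it with any slow reader:

```shell
# A reader that sleeps stops draining the pipe; the writer fills the OS
# pipe buffer (typically 64 KB on Linux) and then blocks until the reader
# catches up -- just as mysqldump blocks while the target client is busy.
head -c 1048576 /dev/zero | { sleep 1; wc -c; }
```

The writer can only finish once the reader wakes up and drains the remaining bytes.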
If we do something like ENABLE KEYS, which takes a while, we hit a timeout... on the origin MySQL server. But the timeout value can be changed.
MySQL Server has two timers, net_write_timeout and net_read_timeout, which default to 60 and 30 seconds respectively, and which will cause the server to tear down a client connection when the timer is exceeded while it is blocked on a write to or a read from the network.
Most likely, it's net_write_timeout you're hitting, so the origin server is giving up on the connection from mysqldump, which isn't accepting data fast enough (because it's blocked on its output). Or, more precisely, it isn't accepting data often enough. If any one statement takes too long to execute, the game is over, so, on the origin server:
SET GLOBAL net_write_timeout = 3600; # one hour
This sets, approximately, how long the origin server will wait for any single query to finish executing on the destination server before it times out. Normally you wouldn't want MySQL to sit and wait an hour for a blocked client, so set the timer back once you're done.
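Putting the whole thing together looks something like the sketch below. Host names and credentials are placeholders, and the mysql/mysqldump invocations are left commented since they need real servers:

```shell
# Raise the timeout on the ORIGIN server before the copy, restore it after.
RAISE="SET GLOBAL net_write_timeout = 3600"   # one hour, for the copy
RESTORE="SET GLOBAL net_write_timeout = 60"   # back to the default

# mysql -h origin-host -u root -p -e "$RAISE"
# mysqldump -h origin-host --single-transaction mydb | mysql -h target-host mydb
# mysql -h origin-host -u root -p -e "$RESTORE"
```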
If you can tolerate a little downtime, you can take a snapshot and share it with the new account. This is a simple and cheap approach, though you will be down for the time it takes to create the snapshot, share it, restore it in the new account, and point your clients at the new instance.
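The snapshot-sharing route can be sketched with the AWS CLI like this. The instance names and the 12-digit target account id are placeholders, and the calls are commented out since they need real credentials:

```shell
# Placeholders -- substitute your own identifiers and account id.
SRC_INSTANCE="mydb-old-account"
SNAPSHOT_ID="mydb-migration-snapshot"
TARGET_ACCOUNT_ID="123456789012"

# 1. Snapshot the source instance (the downtime clock starts here):
#    aws rds create-db-snapshot \
#      --db-instance-identifier "$SRC_INSTANCE" \
#      --db-snapshot-identifier "$SNAPSHOT_ID"
# 2. Share the snapshot with the new account:
#    aws rds modify-db-snapshot-attribute \
#      --db-snapshot-identifier "$SNAPSHOT_ID" \
#      --attribute-name restore \
#      --values-to-add "$TARGET_ACCOUNT_ID"
# 3. From the NEW account, restore using the shared snapshot's ARN:
#    aws rds restore-db-instance-from-db-snapshot \
#      --db-instance-identifier mydb-new-account \
#      --db-snapshot-identifier "arn:aws:rds:REGION:OLD_ACCOUNT:snapshot:$SNAPSHOT_ID"
```

Note that if the snapshot is encrypted you will also need to share the KMS key with the target account.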
For a solution with much less downtime, if you are using RDS you can make use of AWS DMS (Database Migration Service). It can be used either for a one-off data migration, or to replicate data from one VPC to another within the same AWS account, or even cross-account, so you can keep replicating data written to the old RDS instance into the new one until you are ready to switch over.
Good documentation on AWS DMS can be found here: https://docs.aws.amazon.com/dms/latest/sbs/DMS-SBS-Welcome.html
If you can, to minimise downtime, put a CNAME in front of the current database and have your clients address the RDS instance via that CNAME; cutover then just means repointing the CNAME at the new instance.
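For example, a record like the following in your DNS zone (all names here are placeholders) gives clients one stable name while you swap the underlying RDS endpoint:

```
; clients connect to db.example.com; repoint this record at cutover
db.example.com.    300    IN    CNAME    mydb-old.abc123xyz.us-east-1.rds.amazonaws.com.
```

A short TTL (300 seconds here) keeps the window during which clients might resolve the old endpoint small.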