The recommended way to back up RDS is with automated backups and DB snapshots. DB snapshots are essentially the same as EBS snapshots (they're stored in S3 behind the scenes), but they can only be restored within the same region.
If you need cross-region fault tolerance (good plan!), there is no way to restore your data in another region without doing it "the hard way" from a mysqldump. Your alternatives are to back up using mysqldump (slow and terrible for any reasonably sized dataset), or to set up your own EC2-based slave in another region and back that up using any available method (xtrabackup, EBS snapshots, etc.). At that point, though, you are back to managing your own MySQL instances, so you might as well abandon RDS entirely.
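If you do end up going the mysqldump route, a minimal sketch looks something like this (the endpoints, credentials, and database name are placeholders, and --single-transaction assumes your tables are InnoDB):

mysqldump -h source.us-east-1.rds.amazonaws.com -u admin -p"$SRC_PW" \
  --single-transaction --routines --triggers mydb | gzip > mydb.sql.gz
# copy the file to the other region however you like, then load it into
# an instance there (the target database must already exist):
gunzip < mydb.sql.gz | mysql -h target-host.us-west-2.example.com -u admin -p"$DST_PW" mydb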
For my money, RDS provides no real benefits at all, and a whole lot of disadvantages in performance, flexibility, and reliability. I would ask yourself what value RDS actually provides for you.
I realize you've already worked around the issue, but here is the explanation and the fix for what you encountered. It's nice to be able to do it with the pipe, so you don't have to write that big file somewhere.
As you know, when you type a query into the mysql command line client, the client doesn't accept any more input from you while that query is executing. Essentially the same thing happens when you string these tools together with a pipe -- the target mysql client waits for the target server to finish the query before it can read more data from the pipe coming from mysqldump.
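For concreteness, the kind of pipeline I'm talking about is something like this (hostnames, users, databases, and passwords are placeholders):

# dump straight from the origin into the target, no intermediate file
mysqldump -h origin-host -u dumpuser -p"$ORIGIN_PW" sourcedb \
  | mysql -h target-host -u loaduser -p"$TARGET_PW" targetdb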
Meanwhile, because mysqldump is blocked writing to the pipe (waiting on the target server to finish that query), it stops reading data from its socket connection to the origin server. The OS will only buffer so much data before it stops accepting any more from the origin server.
If the target then runs a statement that takes a while, such as the ALTER TABLE ... ENABLE KEYS that mysqldump emits after loading a large table, we hit a timeout... on the origin MySQL server. Fortunately, the timeout value can be changed.
MySQL Server has two timers, net_write_timeout and net_read_timeout, which default to 60 and 30 seconds respectively, and which cause the server to tear down a client connection when the timer is exceeded while blocking on a write to or a read from the network.
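You can check what your origin server is currently using with something like this (connection details are placeholders):

mysql -h origin-host -u admin -p -e "SHOW GLOBAL VARIABLES LIKE 'net_%timeout'"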
Most likely, it's net_write_timeout you're hitting, so the origin server is giving up on the connection from mysqldump, which isn't accepting data fast enough (because it's blocked on its output). Or, more precisely, it isn't accepting data often enough. If any one statement takes too long to execute, the game is over, so, on the origin server:
SET GLOBAL net_write_timeout = 3600; # one hour
This is setting approximately how much time we can wait for any single query to finish executing on the destination server without the origin server timing out. Normally, you wouldn't want MySQL to sit and wait for an hour for a blocking client, so set the timer back after you're done.
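Putting it all together, a rough sketch of the whole sequence might look like this (hosts, users, and passwords are placeholders; 60 seconds is the default value to restore afterwards):

# raise the timer on the origin server before starting the copy
mysql -h origin-host -u admin -p"$ORIGIN_PW" -e "SET GLOBAL net_write_timeout = 3600"

# ... run the mysqldump | mysql pipeline shown above; the new global value
# applies to the fresh connection mysqldump opens ...

# put the timer back to its default (60 seconds) when you're done
mysql -h origin-host -u admin -p"$ORIGIN_PW" -e "SET GLOBAL net_write_timeout = 60"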
Best Answer
Have you tried mysqldump with --single-transaction? That way it doesn't lock the tables for threads doing DML, provided no DDL statements are issued, and you get a backup of a consistent state. This is only effective for InnoDB tables. The URL below might give some insight into the option.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_single-transaction
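For example, something along these lines (host, user, and database name are placeholders):

mysqldump -h myinstance.example.rds.amazonaws.com -u admin -p --single-transaction mydb > mydb.sql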