There is no gateway specification between HTTP and FTP that would let the two protocols interact directly. Someone, somewhere, has to download the file and then upload it.
If you have shell access to the web server, the easiest way would be to upload the file directly from the web server to the FTP server with the ftp command (assuming it's a *nix server).
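If the web server has the stock ftp client installed, the transfer can be scripted non-interactively. A minimal sketch; the host name, credentials, and file name below are placeholders, not values from the question:

```shell
#!/bin/sh
# Placeholders -- substitute your real FTP host, credentials, and file.
HOST=ftp.example.com
USER=ftpuser
PASS=secret
FILE=backup.tar.gz

# Write a command script for the classic ftp client; -n suppresses
# auto-login so the credentials can be supplied explicitly.
cat > upload.ftp <<EOF
open $HOST
user $USER $PASS
binary
put $FILE
bye
EOF

# On the real server you would then run:
#   ftp -n < upload.ftp
```

The `binary` command matters for tarballs: ASCII mode (the default on some clients) will corrupt them.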
If you have shell access to the FTP server, then you could instead use the wget command to download the file directly onto the FTP server, again assuming it's a *nix server.
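In the pull direction the transfer is essentially a one-liner. A sketch with a placeholder URL and destination directory (not from the question):

```shell
#!/bin/sh
# Placeholder URL and destination -- substitute your own.
URL=http://www.example.com/backup.tar.gz
DEST=/srv/ftp/uploads

# -c resumes an interrupted download; -P picks the target directory.
CMD="wget -c -P $DEST $URL"
echo "$CMD"
# eval "$CMD"   # uncomment to run it for real on the server
```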
No, you cannot use rsync to talk to Amazon S3 directly; S3 uses its own protocol. You can, however, go through a third-party service such as www.s3rsync.com: you run rsync against the service, and your data ends up in S3 storage.
Or you can use utilities designed specifically for S3 storage, such as s3sync, s3cmd, s3cp, and tarsnap (tarsnap, like s3rsync, is a third-party service).
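For example, a one-off upload with s3cmd looks roughly like this; the bucket and file names are placeholders, and the actual commands are left commented out because they need real AWS credentials:

```shell
#!/bin/sh
# Placeholder bucket and file names.
BUCKET=s3://my-backup-bucket
FILE=backup.tar.gz

# After a one-time `s3cmd --configure` to store your AWS keys:
#   s3cmd mb "$BUCKET"            # create the bucket if needed
#   s3cmd put "$FILE" "$BUCKET/"  # upload the tarball
```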
Another rsync-like tool, duplicity, supports S3 storage as a backend, alongside many other backup backends, including Rackspace Cloud Files (another cloud storage service, priced similarly to Amazon S3).
Backup to S3:
duplicity /home/me s3+http://bucketname/prefix
or to Rackspace's Cloud Files:
duplicity /home/me cf+http://container_name
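Restoring is the backup command with the arguments reversed. A sketch reusing the same placeholder bucket URL as above; the commands are commented out since they need real credentials and an existing archive:

```shell
#!/bin/sh
# Same placeholder URL as the backup examples above.
SRC=s3+http://bucketname/prefix
DEST=/home/me/restored

# Restore the latest backup:
#   duplicity "$SRC" "$DEST"
# Inspect an archive without restoring anything:
#   duplicity list-current-files "$SRC"
```

Note that duplicity encrypts archives with GnuPG by default, so you'll be asked for a passphrase unless you set one in the environment or pass --no-encryption.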
If you have SSH access to your server and can use the Amazon tools (e.g. the ec2-api-tools package on Ubuntu), then you can upload your tarball directly from the server. If you only have FTP access to it, however, your only choice (AFAIK) is to download the tarball and upload it again from your workstation.
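In the FTP-only case, the two hops can at least be scripted from the workstation. A sketch with placeholder host, credentials, and bucket (none of these come from the question); the network commands are commented out:

```shell
#!/bin/sh
# Placeholder FTP URL and bucket -- substitute your own.
FTP_URL=ftp://ftpuser:secret@webserver.example.com/backup.tar.gz
BUCKET=s3://my-bucket

# Hop 1: pull the file down over FTP to the workstation:
#   curl -O "$FTP_URL"
# Hop 2: push it up to S3 (after a one-time `s3cmd --configure`):
#   s3cmd put backup.tar.gz "$BUCKET/"
```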