First thought based on comments...
Use differential backups every, say, 6 hours, to reduce the size/time of backup + FTP. Then reduce your full backup + FTP to weekends only. This avoids the complexity of log shipping, is simple to do, and only adds slight complexity to DR.
I feel that differential backups are overlooked... I've suggested using them before:
Edit: after jcolebrand's comment I'll attempt to explain more
A differential backup only contains pages that have changed since the last full backup. Outside of any index maintenance (which can affect a lot of the database), only a few % of pages will change during a day, so a differential backup is a lot smaller than a full backup, even before any compression.
If you have a full backup, say weekly, you can then do daily differentials and ship them off site. Note that restoring a differential also requires the full backup it is based on, so you still need both files off site.
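A minimal T-SQL sketch of that schedule (the database name and backup paths below are just placeholders):

    -- Weekly full backup (e.g. Sunday)
    BACKUP DATABASE [YourDb]
        TO DISK = N'D:\Backup\YourDb_full.bak'
        WITH INIT, COMPRESSION;

    -- Daily (or 6-hourly) differential: only pages changed since the last full
    BACKUP DATABASE [YourDb]
        TO DISK = N'D:\Backup\YourDb_diff.bak'
        WITH DIFFERENTIAL, INIT, COMPRESSION;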
This should solve the problem of getting data from A to B, C and D quickly.
You'll probably need to restore both the full and the latest differential to get to the latest data, but you may be able to streamline this with NORECOVERY and a STANDBY file (I haven't tried it with a diff restore for years, since I was last in a pure DBA job).
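Roughly like this, as a sketch using the same placeholder names and paths as above:

    -- Restore the weekly full, leaving the database able to accept further restores
    RESTORE DATABASE [YourDb]
        FROM DISK = N'D:\Backup\YourDb_full.bak'
        WITH NORECOVERY, REPLACE;

    -- Apply the latest differential; STANDBY keeps the database readable
    -- while still allowing later differential or log restores
    RESTORE DATABASE [YourDb]
        FROM DISK = N'D:\Backup\YourDb_diff.bak'
        WITH STANDBY = N'D:\Backup\YourDb_undo.dat';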
An added bonus is that diff backups are unrelated to ongoing log backups so you can separate any High Availability/DR requirement from the "get data to the code monkeys" requirement.
I can see issues if you're required to take daily full backups by policy or audit. On the restore side, the diff restore can be applied before any log restores to shorten recovery time; unlike the backups themselves, diff and log restores do interact.
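To illustrate that ordering (again a sketch with placeholder file names), the differential sits between the full and any log restores:

    RESTORE DATABASE [YourDb] FROM DISK = N'D:\Backup\YourDb_full.bak' WITH NORECOVERY;
    RESTORE DATABASE [YourDb] FROM DISK = N'D:\Backup\YourDb_diff.bak' WITH NORECOVERY;
    RESTORE LOG [YourDb] FROM DISK = N'D:\Backup\YourDb_log.trn' WITH RECOVERY;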
Hope I've covered most bases...
You first have to set expectations: a screen that performs such-and-such activities should complete each action in 1 second and all actions in 5 seconds, and so on. For example, a search screen should return results in 3 seconds, a booking action (ticket booking) should complete in 30 seconds, etc.
Those targets define the "normal" performance you want; now work towards meeting them. The database may or may not be your bottleneck. To identify issues on the database side, try a tool like pgbadger, which will tell you which queries are taking the time.
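For example, something like this (the log path is an assumption, and pgbadger only has query timings to report if you've enabled query logging, e.g. log_min_duration_statement):

    # Generate an HTML report of the slowest / most frequent queries from the PostgreSQL logs
    pgbadger /var/log/postgresql/postgresql-*.log -o /tmp/pgbadger_report.html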
By the way, 8 hours for a query is probably not acceptable under any circumstances. Try the tool pgtune and see if there is scope for optimizing configuration parameters.
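A sketch of running pgtune against your existing config (the paths are placeholders and the flags vary between versions of the tool, so check its help output):

    # Suggest postgresql.conf settings based on the machine's resources
    pgtune -i /etc/postgresql/9.3/main/postgresql.conf -o /tmp/postgresql.conf.pgtune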
Best Answer
It's a little hard to say, because we don't know your database setup.
Does your database use schemas with views in them? If so, you can use the -n option to dump only the data schemas and rebuild your views after the restore.
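For example, a sketch assuming a hypothetical schema and database name:

    # Dump only the schema holding the data, in custom format; views can be recreated afterwards
    pg_dump -n app_data -Fc -f app_data.dump yourdb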
Maybe you can do it at a quiet moment on your server; with -j you can parallelize the dump. It will take less time, but uses more memory.
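Parallel dumps require the directory format; a sketch with placeholder names:

    # -j needs the directory format (-Fd); here 4 parallel jobs
    pg_dump -Fd -j 4 -f /backup/yourdb_dir yourdb

    # The restore can be parallelized the same way
    pg_restore -j 4 -d yourdb /backup/yourdb_dir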
Otherwise, the slave option is a good practice.