First thought based on comments...
Use differential backups every, say, 6 hours to reduce the size and time of the backup + FTP, and reduce the full backup + FTP to weekends only. This avoids the complexity of log shipping, is simple to do, and adds only slight complexity to DR.
I feel that differential backups are overlooked... I've suggested using them before:
Edit: after jcolebrand's comment I'll attempt to explain more
A differential backup only takes pages that have changed. Outside of any index maintenance (which can affect a lot of the database), only a few % of pages will change during a day. So a differential backup is a lot smaller than a full backup before any compression.
If you have a full backup, say, weekly, you can then do daily differentials and ship them off site. Note that restoring a differential still requires the full backup it is based on, so both files need to be off site.
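To make that concrete, a rough sketch in plain T-SQL (the database name and paths are invented for illustration; adjust the schedule to taste):

-- Weekly (e.g. Sunday): the full backup
BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_full.bak'
    WITH INIT;

-- Daily (or every 6 hours): the differential, much smaller, and this is the file to FTP off site
BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_diff.bak'
    WITH DIFFERENTIAL, INIT;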
This should solve the problem of getting data from A to B, C and D quickly.
You need to restore both the full and the latest differential to get the latest data, but you may be able to restore the full once WITH NORECOVERY and then apply each new differential on top of it with a STANDBY file so the copy stays readable (I haven't tried it with a diff restore for years, since I was last in a pure DBA job).
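Roughly what I have in mind, as a sketch only (file names are invented, and as mentioned I haven't verified the STANDBY-on-differential combination recently):

-- Restore the weekly full, leaving the database able to accept further restores
RESTORE DATABASE YourDatabase
    FROM DISK = N'D:\Backups\YourDatabase_full.bak'
    WITH NORECOVERY, REPLACE;

-- Apply the latest differential; STANDBY keeps the database read-only but still restorable
RESTORE DATABASE YourDatabase
    FROM DISK = N'D:\Backups\YourDatabase_diff.bak'
    WITH STANDBY = N'D:\Backups\YourDatabase_undo.dat';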
An added bonus is that diff backups are unrelated to ongoing log backups so you can separate any High Availability/DR requirement from the "get data to the code monkeys" requirement.
I see some issues if daily full backups are required by policy or audit. On the restore side, the differential can be applied before any log restores to shorten recovery time: unlike the backups themselves, diff and log restores do interact, because the diff replaces all log backups taken before it.
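The restore-time interaction looks something like this (again, invented file names; the differential stands in for every log backup taken before it):

RESTORE DATABASE YourDatabase FROM DISK = N'D:\Backups\YourDatabase_full.bak' WITH NORECOVERY;
RESTORE DATABASE YourDatabase FROM DISK = N'D:\Backups\YourDatabase_diff.bak' WITH NORECOVERY;  -- covers all log backups before the diff
RESTORE LOG YourDatabase FROM DISK = N'D:\Backups\YourDatabase_log1.trn' WITH NORECOVERY;       -- only logs taken after the diff
RESTORE LOG YourDatabase FROM DISK = N'D:\Backups\YourDatabase_log2.trn' WITH RECOVERY;         -- last restore brings the database online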
Hope I've covered most bases...
I would add TRY/CATCH blocks to handle errors and log them. The DB could be in single-user mode, being restored, or whatever.
Without this, an error can abort the statement, batch, scope, or connection in a way that leaves nothing logged.
With TRY/CATCH, everything except compile errors and connection-aborting errors can be caught and logged, which I doubt happens with the script as it stands.
I'd also use sys.databases, which replaces sysdatabases, and check more of its state columns:
-- Declarations (variable names match the original sketch; the path is an example, set it for your environment)
DECLARE @name        sysname,
        @state       tinyint,
        @user_access tinyint,
        @path        nvarchar(260) = N'D:\Backups\',
        @fileName    nvarchar(300);

BEGIN TRY
    DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT name, state, user_access
        FROM sys.databases
        WHERE name NOT IN ('tempdb');

    OPEN db_cursor;
    FETCH NEXT FROM db_cursor INTO @name, @state, @user_access;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @fileName = @path + @name + '.bak';

        -- state 0 = ONLINE, user_access 0 = MULTI_USER
        IF @state = 0 AND @user_access = 0
        BEGIN
            BEGIN TRY
                BACKUP DATABASE @name TO DISK = @fileName WITH INIT;
            END TRY
            BEGIN CATCH
                -- log but do not rethrow so the loop continues (see the logging sketch below)
            END CATCH

            WAITFOR DELAY '00:00:20';
        END
        ELSE
            PRINT @name + N' skipped: not ONLINE/MULTI_USER';  -- or log the state/user_access issue

        -- fetch all three columns again, matching the cursor's SELECT list
        FETCH NEXT FROM db_cursor INTO @name, @state, @user_access;
    END

    CLOSE db_cursor;
    DEALLOCATE db_cursor;
END TRY
BEGIN CATCH
    -- some useful stuff here (see the logging sketch below)
END CATCH
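For the "log but do not rethrow" and "some useful stuff here" placeholders, a minimal logging sketch; the dbo.BackupLog table is something I'm making up for illustration, while ERROR_NUMBER() and ERROR_MESSAGE() are the standard error functions:

-- One-off: a simple table to hold failures (hypothetical name and shape)
CREATE TABLE dbo.BackupLog
(
    LogTime      datetime2      NOT NULL DEFAULT SYSDATETIME(),
    DbName       sysname        NULL,
    ErrorNumber  int            NULL,
    ErrorMessage nvarchar(4000) NULL
);

-- Inside each CATCH block of the script above:
INSERT INTO dbo.BackupLog (DbName, ErrorNumber, ErrorMessage)
VALUES (@name, ERROR_NUMBER(), ERROR_MESSAGE());
-- No THROW/RAISERROR here, so the cursor loop carries on with the next database.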
Best Answer
Yes, in my opinion you should still use compression when backing up, even if the data is compressed. (Exceptions noted below.)
The compression used during backup is able to work across all of the data, unlike page and row compression, which are constrained to considering the data one page at a time. Think about a phone book in a large metro area: imagine the difference between being able to use dictionary compression for all instances of the word Smith on one page versus all instances of the word Smith across all pages. That's similar to the improvement backup compression can add on top of already compressed data.
In my testing, using backup compression on top of page/row compression always yielded smaller backups and completed faster. There is also a lot of interesting commentary where others have confirmed the same. I don't want to copy all of that here but you can read it at the following URL:
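If you want to check this on your own databases, a quick sketch (YourDatabase and the path are placeholders; backup_size and compressed_backup_size come from msdb.dbo.backupset):

-- Back up an already page/row-compressed database with backup compression on top
BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_compressed.bak'
    WITH INIT, COMPRESSION;

-- Compare raw vs compressed sizes for the most recent backups
SELECT TOP (10)
       database_name,
       backup_finish_date,
       backup_size            / 1048576.0 AS backup_mb,
       compressed_backup_size / 1048576.0 AS compressed_mb
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;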
There are exceptions, of course: if your system is already heavily CPU-bound, compression isn't free, and it may make that problem worse. Also if your databases are encrypted with TDE, you'll find that the compression has little to no effect, because the output of that encryption is not a good candidate for further compression.