You say you are "using SQLDATABACKUPANDFTP Software". If this software is from http://sqlbackupandftp.com/, then it says it "Zips, Encrypts and FTPs the database backups", which would explain much of the difference.
I'm afraid your screenshots didn't appear so I can't comment on the other difference.
I'm guessing here based on how I'd implement MIRROR TO:
Probably, SQL Server streams the backup data to two backup devices if you use MIRROR TO. This means that every block read is written twice; the data stream forks.
What they are certainly not doing is writing the data once and then copying it over. That would a) incur an additional read pass over the data and b) open a window for data corruption while only a single copy of the backup exists. From an implementor's standpoint it is less work to fork the stream, and it has a better result. That's why I think they're doing it that way.
This means that MIRROR TO is less resource intensive than copying the data files after the backup has succeeded. It also has less room for data corruption. Data corruption is rare but a practical concern: memory, disk and network can all have undetected bit flips. (Yes, TCP does not protect from that 100%.)
Advice, given that Enterprise Edition will become available to you: use MIRROR TO. It results in less resource usage and less potential for (undetected) backup corruption. It also automates steps that you can get wrong when doing them manually; SQL Server is certainly tested better than most in-house script development. The only reason not to use it would be if your backup process cannot easily be modified to use it.
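For reference, a mirrored backup looks like the sketch below. The database name and paths are placeholders; MIRROR TO requires Enterprise Edition, and the first backup to a mirrored media set must use WITH FORMAT. CHECKSUM is worth adding given the corruption concerns above.

```sql
-- Write the same backup stream to two destinations at once.
-- MIRROR TO requires Enterprise Edition; WITH FORMAT starts a
-- new media set on both devices (mandatory for mirrored sets).
BACKUP DATABASE YourDatabase            -- placeholder name
TO DISK = N'D:\Backups\YourDatabase.bak'
MIRROR TO DISK = N'\\BackupServer\Backups\YourDatabase.bak'
WITH FORMAT,
     CHECKSUM,      -- generate and verify page/backup checksums
     COMPRESSION,
     STATS = 10;    -- progress messages every 10%
```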
One additional point: if the mirror cannot be written, the backup fails. This can be a downside (you may miss your RPO). It can also be an upside, because errors are more likely to be detected.
As mentioned by Paul White in the comments: If one of the backup destinations is slow (or all data is flowing over the same saturated network link) the backup might take a lot longer than before. In case of a saturated network link it might be faster to locally duplicate the backup file on the target server.
This theory can be tested: back up the database to two I/O devices with identical performance characteristics using MIRROR TO. If I'm correct, you should observe only a tiny slowdown, because the writes are streamed to both devices in parallel.
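The comparison can be run along these lines (database name and paths are placeholders; F: and G: stand for the two comparable drives):

```sql
-- Mirrored: one read pass, writes forked to both drives.
BACKUP DATABASE YourDatabase
TO DISK = N'F:\Bak\YourDatabase.bak'
MIRROR TO DISK = N'G:\Bak\YourDatabase.bak'
WITH FORMAT, COMPRESSION;

-- Unmirrored, to each drive separately, for comparison.
BACKUP DATABASE YourDatabase
TO DISK = N'F:\Bak\YourDatabase_1.bak' WITH INIT, COMPRESSION;

BACKUP DATABASE YourDatabase
TO DISK = N'G:\Bak\YourDatabase_2.bak' WITH INIT, COMPRESSION;
```

Compare the elapsed times reported for each statement; a separate copy phase would roughly double the mirrored time, while a forked stream should add only a small overhead.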
I performed this benchmark myself, backing up from an SSD (very fast) to two (almost) identical magnetic drives, using backup compression (the CPU was not maxed out).
- Mirror: 1:37.
- No mirror: 1:10 to drive 1.
- No mirror: 1:14 to drive 2.
This slowdown (1:37 vs. 1:14) is well within the tolerance of my theory. There seems to be some overhead, but there is certainly no separate mirroring/copy phase. Using Process Explorer I observed that I/O throughput (in MB/sec) was fairly constant, which also hints that there is no separate phase. Writes appear to go to both drives in parallel.
Best Answer
The alternative you mentioned seems to be the best choice.
What you can do is a 2-step process:

1. Back up the database to a local disk.
2. Copy the backup file to the remote location (for example with Robocopy).

This way, your backups are local and they will be fast. You will need more disk space, and obviously redundancy (what if the backup disk fails? You don't want to lose all your backups).
Alternatively, as recommended by Max Vernon, run Robocopy as a step in the backup job, so that the copy only occurs if the backup completed successfully, and as soon as possible after it completes. The backup is at the same risk as the data for as long as it stays local.
Also, regularly test your restores: if you cannot restore a backup, what purpose does it serve?
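As a minimal automated check (not a substitute for an actual test restore), something like the following can be scheduled; the path is a placeholder, and CHECKSUM validation only works if the backup was taken WITH CHECKSUM:

```sql
-- Verify that the backup file is readable and complete.
-- A periodic full RESTORE to a scratch server is still the real test.
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\YourDatabase.bak'
WITH CHECKSUM;
```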
Also, refer to my answer to SQL Backup tuning large databases