SQL Server – Alternative for taking backups of SQL databases on a backup drive

Tags: backup, sql-server

We just got new servers to take care of, with their backups handed over by company B, who used to manage them. They never did this via any maintenance plans or SQL Agent jobs; they did it through the NetBackup (NBU) client. Thus backups were never stored on the drives of the respective SQL servers.

Now, per the security/audit findings, we need to have backups on a local drive.

But the issue is that we never had a backup drive on those servers. Moreover, many of the servers have only a single data drive.

Therefore we came up with a plan to back them up over the network onto a server with ample space for now, until we get backup drives installed on each server.

But then we ran into a resource challenge: CPU and memory usage spiked while copying / taking backups over the network.

Can someone share their expertise on what alternative can be used? We are short of space as of now, and adding a drive per our process takes a month.

Kindly suggest!

Thank you!

Best Answer

Speaking as someone who has taken over from multiple third parties who have done NetBackup database backups, you can safely assume three things:

  • Many of your servers and individual databases have not been backed up. You would do best to confirm this yourself through msdb checks.
  • The "tapes" being backed up to are probably not accessible, do not work, or otherwise don't exist.
  • You can probably find a lot of NetBackup failure notifications in the SQL Server error log that have been ignored for days, months, and years.
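To make the msdb check concrete, here is a minimal sketch of a query against the standard msdb backup catalog. It lists each database's most recent FULL backup; databases that have never been backed up show a NULL and sort first.

```sql
-- List the most recent FULL backup per database from the msdb catalog.
-- Run this against each instance you've inherited.
SELECT  d.name AS database_name,
        MAX(b.backup_finish_date) AS last_full_backup
FROM    sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'                -- 'D' = full database backup
WHERE   d.name <> 'tempdb'            -- tempdb is never backed up
GROUP BY d.name
ORDER BY last_full_backup;            -- never-backed-up databases (NULL) sort first
```

Anything with a NULL or a stale date here is a gap the third party left you, regardless of what their reports claimed.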

Regarding your specific situation: backups are meant to be taken to network shares, and backing up to the same server is wrong. What is going to happen when your server VM crashes or becomes corrupt? You've lost all your backups.

Whatever security / audit findings have come up are obviously wrong, and you should be pushing back against them. Or print out your ironclad disclaimer and get your manager to sign it, because they're going to be fired down the line when they lose everything, and you don't want this to destroy your career along with their poor decisions.

Best practice? A network share like you mentioned, and standardise on Ola Hallengren's scripts, like others mentioned.

Don't be afraid to modify the setup script a little (or add your own to run afterwards) to set up the jobs the way you like them and to schedule them; you can also stagger them server by server to spread the load. I wrote a corresponding PowerShell script to help with the deployments: create a folder on each server to store the output text files; grant the service account access; create the network folder and grant the service or network account access; then deploy the jobs the way we want them, with appropriate schedules.
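As a sketch of what such a job step can look like once Ola Hallengren's scripts are installed, here is a weekly FULL backup to a network share using his `DatabaseBackup` procedure. The share path `\\backupserver\sqlbackup` is a placeholder for your own; the parameters shown are standard ones from his scripts.

```sql
-- Weekly FULL backup of all user databases to a network share,
-- using Ola Hallengren's DatabaseBackup procedure.
EXECUTE dbo.DatabaseBackup
        @Databases   = 'USER_DATABASES',
        @Directory   = N'\\backupserver\sqlbackup',  -- placeholder share
        @BackupType  = 'FULL',
        @Verify      = 'Y',        -- RESTORE VERIFYONLY after the backup
        @CheckSum    = 'Y',        -- WITH CHECKSUM on the backup itself
        @Compress    = 'Y',        -- cuts network traffic considerably
        @CleanupTime = 168;        -- delete backups older than 168 hours (7 days)
```

Backup compression is worth enabling in your case particularly, since it directly reduces the amount of data pushed over the network.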

You should also prioritise setting up your registered server list so that, in the meantime, you can run mass checks of the msdb catalogs to make sure you haven't missed anything.

Network and CPU going high doesn't mean anything unless it impacts production; the only important thing, aside from gigabit network interfaces, is that the network share sits relatively close to whatever network segments you have. The initial FULL backups will always be an issue; that's why they're scheduled for weekends, and then the daily DIFFs will be fine.
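The FULL-on-weekends, DIFF-daily split mentioned above uses the same procedure; only the backup type changes. A sketch of the nightly step, with the same placeholder share path:

```sql
-- Nightly DIFF backup step; pairs with the weekend FULL job.
-- Only changed pages since the last FULL are written, so the
-- network and CPU load is a fraction of the FULL backup's.
EXECUTE dbo.DatabaseBackup
        @Databases  = 'USER_DATABASES',
        @Directory  = N'\\backupserver\sqlbackup',   -- placeholder share
        @BackupType = 'DIFF',
        @Verify     = 'Y',
        @CheckSum   = 'Y';
```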

Otherwise, the only databases where this really becomes a problem are those going over 1 TB (especially if much higher: 5 TB, 20 TB), and those need special attention.