This procedure should do what you want:
create procedure dbo.dbrestore
    @dbname nvarchar(50)
as
declare @fileno int
declare @dumpdevice nvarchar(50)
declare @mediasetid int
declare @position int

-- find the device and the most recent full backup on it
select top (1)
    @dumpdevice = a.logical_device_name,
    @mediasetid = b.media_set_id,
    @position   = b.position
from msdb.dbo.backupmediafamily a
inner join msdb.dbo.backupset b on a.media_set_id = b.media_set_id
where b.database_name = @dbname and b.type = 'D'
order by a.media_set_id desc, b.position desc

declare restore_cursor cursor for
    -- the last full backup
    select position from msdb.dbo.backupset
    where database_name = @dbname and type = 'D' and media_set_id = @mediasetid and position = @position
    union
    -- the most recent differential after it (the HAVING skips the NULL row when there is none)
    select max(position) as position from msdb.dbo.backupset
    where database_name = @dbname and type = 'I' and media_set_id = @mediasetid and position > @position
    having max(position) is not null
    union
    -- log backups after that differential (ISNULL copes with there being no differential at all)
    select position from msdb.dbo.backupset
    where database_name = @dbname and type = 'L' and media_set_id = @mediasetid and position > @position
    and last_lsn > isnull((select max(last_lsn) from msdb.dbo.backupset
                           where database_name = @dbname and type = 'I' and media_set_id = @mediasetid and position > @position), 0)
    order by position asc;

open restore_cursor
fetch next from restore_cursor into @fileno
while @@FETCH_STATUS = 0
begin
    --comment the print statement and uncomment the exec statement to run
    --exec('restore database ['+@dbname+'] from '+@dumpdevice+' with norecovery, replace, FILE='+convert(varchar(4),@fileno)+';')
    print ('restore database ['+@dbname+'] from '+@dumpdevice+' with norecovery, replace, FILE='+convert(varchar(4),@fileno)+';')
    fetch next from restore_cursor into @fileno
end
close restore_cursor
deallocate restore_cursor
--uncomment to bring the database online after the last file:
--exec('restore database ['+@dbname+'] with recovery;')
go
Execution goes like this:
Get the name of the dump device based on the supplied database name. I'm assuming that all the backups are on the same device. The TOP (1) works around having multiple media sets per database (if WITH FORMAT was used for the initial backup) and copes with the possibility of multiple full backups in the device.
Then build up a union query consisting of:
The last full database backup (type = 'D') in the device.
The last differential backup (type = 'I', the one with the greatest LSN, Log Sequence Number) taken after that full backup.
The transaction log backups (type = 'L') taken after that differential.
It returns just the position value, which in this case maps to the FILE value of RESTORE (http://msdn.microsoft.com/en-us/library/ms186299.aspx).
Finally, loop through the result set restoring each file individually with NORECOVERY, and then run a final restore WITH RECOVERY to finish it off.
Call it as:
exec dbrestore 'db2restore'
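For a hypothetical device named backuptest_dev holding a full backup at position 1, a differential at position 3, and two later log backups at positions 4 and 5 (the device name and positions are made up for illustration), the PRINT loop would emit statements of this shape:

```sql
restore database [db2restore] from backuptest_dev with norecovery, replace, FILE=1;
restore database [db2restore] from backuptest_dev with norecovery, replace, FILE=3;
restore database [db2restore] from backuptest_dev with norecovery, replace, FILE=4;
restore database [db2restore] from backuptest_dev with norecovery, replace, FILE=5;
```

Everything stays in NORECOVERY until the final restore WITH RECOVERY brings the database online.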
I've tried it on a test database and it appears to work happily; comparing it against the files SSMS wants to restore for the same operation, the list looks the same.
Provided as-is: there are probably typos, I've probably missed a boundary condition, and there's no error checking.
As with all things in SQL Server, it depends.
First thing you need to do is make sure you understand what each type of backup does.
Books Online has all the gory details, but here's my summary.
A FULL backup contains everything within the database. A DIFFERENTIAL backup is cumulative, NOT incremental. In your example, if your database failed on the 12th, you'd only need to restore the full backup from the 1st, then the most recent differential from the 12th, followed by all the transaction log backups up to the failure. A TRANSACTION LOG backup is only needed for databases using the full or bulk-logged recovery model; if you're using the simple recovery model, transaction log backups are not needed.
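As a sketch, the three backup types map to plain BACKUP statements like these (the database name and file paths are made up for illustration):

```sql
-- FULL: everything in the database
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_full.bak';

-- DIFFERENTIAL: cumulative changes since the last FULL, not since the last differential
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_diff.bak' WITH DIFFERENTIAL;

-- LOG: only applies under the full or bulk-logged recovery model
BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_log.trn';
```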
Now that we've cleared that up... Designing a backup schedule really depends on how much data you need to recover and how fast you need to recover it in the event of a disaster. I would recommend starting with a full backup each day; you can always reduce the frequency later. Remember the differential backup is cumulative since the last full, so depending on the amount of change going on in your database, the differential could be larger than the full backup after a few days. If you do a full backup each day, you may not need differentials at all; however, you could still do one a day and schedule it at 12 noon. The transaction log backup only backs up the log. The frequency of the log backup determines how much data you're willing to lose in the event of a failure: if you run your log backup every 15 minutes, you should expect to lose up to the last 15 minutes of changed data. 15 minutes is a good frequency, but every 30 minutes works perfectly for my environment.
As I said earlier, it all depends on your environment. After you've designed and set up your backup schedule, remember to test it on an alternate server. Practice restoring your full, diff, and log backups so that you know everything works as you designed it.
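A practice restore on the alternate server follows the same order described above; a minimal sketch, with file names assumed for illustration:

```sql
-- full first, leaving the database restoring
RESTORE DATABASE MyDB FROM DISK = N'D:\Backup\MyDB_full.bak' WITH NORECOVERY, REPLACE;
-- then the most recent differential
RESTORE DATABASE MyDB FROM DISK = N'D:\Backup\MyDB_diff.bak' WITH NORECOVERY;
-- then each log backup taken after that differential, in order
RESTORE LOG MyDB FROM DISK = N'D:\Backup\MyDB_log.trn' WITH NORECOVERY;
-- finally bring the database online
RESTORE DATABASE MyDB WITH RECOVERY;
```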
Books Online has some good info if you plan to use Maintenance Plans, but if you really want flexibility then check out Ola Hallengren's backup scripts.
Transactional Replication would be hard to maintain. If you make frequent schema changes, you have to drop replication and recreate it, which is time-consuming, and on top of that you have to maintain the distribution database as well. Network latency also plays a big role if you are replicating your entire database.
For your scenario, I would recommend:
If you are not on Enterprise edition, you can use either of the below:
Backup and restore with COMPRESSION. SQL Server native compression is very efficient and will reduce the backup size.
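For example (database name and path assumed for illustration):

```sql
-- COMPRESSION typically shrinks the backup considerably for a modest CPU cost
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_full.bak' WITH COMPRESSION;
```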
Since the data can be stale for up to 24 hours, another option is log shipping. You just delay the restore of the log backups on the secondary server: when you want to refresh the data on the reporting (secondary) server, kick off the restore job manually or let it run on its schedule.
When setting up log shipping, configure the secondary database in STANDBY mode and make sure you select "Disconnect users in the database when restoring backups".
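Outside the log shipping jobs, the equivalent manual restore on the secondary looks like this (file paths assumed for illustration); STANDBY keeps the database readable between log restores:

```sql
RESTORE LOG MyDB
    FROM DISK = N'D:\LogShip\MyDB_log.trn'
    WITH STANDBY = N'D:\LogShip\MyDB_undo.tuf';  -- readable, and further log restores are still possible
```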
If you are on Enterprise edition, then use AlwaysOn Availability Groups with a readable secondary. This will offload the reporting work from your primary server to the replica.
Note: For completeness of this answer: even though Database Mirroring is deprecated, it is still supported in SQL Server 2012, and you can report off a database snapshot of the mirror. Be careful, though: the snapshot name will be different from the original database name, so if your app relies on the database name, it will need modification. Also, since snapshots use a copy-on-write mechanism, multiple snapshots can add significant overhead, and they randomize I/O during query processing.