You are backing up the log to the same file repeatedly, so subsequent backups are appended to that file, which is what you can see in your screen capture. Typically, we back up to dated files:
BACKUP LOG lagerdb
TO DISK = N'C:\SQLBACKUP\lagerdb-rechts_201112061130.trn'
WITH NAME = N'lagerdb logs';
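If you want to script that rather than editing the file name by hand, one way is to build the timestamp into the path at run time. This is a minimal sketch; the path and database name are just placeholders matching the example above:

```sql
-- Build a yyyymmddhhmi suffix from the current date/time (style 120 = 'yyyy-mm-dd hh:mi')
DECLARE @file nvarchar(260) = N'C:\SQLBACKUP\lagerdb-rechts_'
    + replace(replace(replace(convert(nvarchar(16), GETDATE(), 120), '-', ''), ' ', ''), ':', '')
    + N'.trn';

-- Each log backup now lands in its own dated file instead of being appended
BACKUP LOG lagerdb TO DISK = @file WITH NAME = N'lagerdb logs';
```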
Save yourself the trouble of hand cranking the necessary scripts and use Ola Hallengren's maintenance solution.
This procedure should do what you want:
create procedure dbo.dbrestore
    @dbname nvarchar(50)
as
declare @fileno integer
declare @dumpdevice nvarchar(50)
declare @mediasetid integer
declare @position integer

select top(1) @dumpdevice = a.logical_device_name,
       @mediasetid = b.media_set_id,
       @position = b.position
from msdb.dbo.backupmediafamily a
inner join msdb.dbo.backupset b on a.media_set_id = b.media_set_id
where b.database_name = @dbname and b.type = 'D'
order by a.media_set_id desc, b.position desc

declare restore_cursor cursor for
select position from msdb.dbo.backupset
where database_name = @dbname and type = 'D' and media_set_id = @mediasetid and position = @position
union
select max(position) as position from msdb.dbo.backupset
where database_name = @dbname and type = 'I' and media_set_id = @mediasetid and position > @position
union
select position from msdb.dbo.backupset
where database_name = @dbname and type = 'L' and media_set_id = @mediasetid and position > @position
and last_lsn > (select max(last_lsn) from msdb.dbo.backupset
                where database_name = @dbname and type = 'I' and media_set_id = @mediasetid and position > @position)
order by position asc;

open restore_cursor
fetch next from restore_cursor into @fileno
while @@FETCH_STATUS = 0
begin
    --comment the print statement and uncomment the exec statement to run
    --exec('restore database ['+@dbname+'] from '+@dumpdevice+' with norecovery, replace, FILE='+convert(varchar(4),@fileno)+';')
    print ('restore database ['+@dbname+'] from '+@dumpdevice+' with norecovery, replace, FILE='+convert(varchar(4),@fileno)+';')
    fetch next from restore_cursor into @fileno
end
close restore_cursor
deallocate restore_cursor
--exec('restore database ['+@dbname+'] with recovery;')
go
Execution goes:
Get the name of the dump device based on the supplied database name. I'm assuming the backups are all on the same device. The TOP(1) works around having multiple media sets per database (if WITH FORMAT was used for the initial backup) and copes with the possibility of multiple full backups on the device.
Then build up a union query consisting of:
The last full database backup (type=D) on the device
The last differential backup (type=I, with the greatest LSN (Log Sequence Number)) taken after the last full backup
The transaction logs (type=L) taken after that differential
returning just the position value, which maps to the FILE value in this case (http://msdn.microsoft.com/en-us/library/ms186299.aspx)
Loop through the result, restoring each in turn, then run a final restore WITH RECOVERY to finish it off.
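With the print statement active the procedure emits the restore sequence rather than running it. For an illustrative chain — say a full at position 1, a differential at position 3, and logs at positions 4 and 5, on a hypothetical device named logdev — the output would resemble:

```sql
restore database [db2restore] from logdev with norecovery, replace, FILE=1;
restore database [db2restore] from logdev with norecovery, replace, FILE=3;
restore database [db2restore] from logdev with norecovery, replace, FILE=4;
restore database [db2restore] from logdev with norecovery, replace, FILE=5;
-- and once the commented exec at the end is enabled:
restore database [db2restore] with recovery;
```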
Call it as:
exec dbrestore 'db2restore'
I've tried it on a test database and it appears to work happily; comparing it against the files SSMS wants to restore for the same operation, it looks the same.
Provided as-is: there are probably typos, I've probably missed a boundary condition, and there's no error checking.
Best Answer
Restoring a full + differential backup restores you to the point in time that the differential backup started. If you just took the differential this second and nothing has happened in the database since it started, then yes, that is the latest state. But it's quite unlikely that you will have a disaster the minute you take a differential backup and that nothing has happened in the meantime.
Which means you can't restore to the point in time right before the disaster: everything that happened between the differential and now is gone. If you take a differential every hour, disaster could strike at hh:59:59.9999999 and you'd have lost almost an hour's worth of data.
This is the whole point of the full recovery model: to minimize data loss. If you're not going to bother taking log backups, then switch to simple.
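If you do decide log backups aren't worth it for a given database, switching the recovery model is a one-liner (database name is a placeholder):

```sql
-- With the simple recovery model the log truncates on checkpoint,
-- so log backups are no longer needed (or possible).
ALTER DATABASE lagerdb SET RECOVERY SIMPLE;
```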
If you take transaction log backups in between differentials (or just don't bother with differentials, since all they do is reduce the number of transaction log backups you need to restore in the event of a disaster), then your maximum potential data loss is your log backup schedule. Because with log backups, you can apply the following during a restore:
the full backup b
the differential d taken after b
the n log backups taken after d
up to point in time t
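The restore chain above can be sketched as follows; the database name, file paths, and target time are all placeholders:

```sql
-- full backup b
RESTORE DATABASE lagerdb FROM DISK = N'C:\SQLBACKUP\full.bak'
    WITH NORECOVERY, REPLACE;
-- differential d taken after b
RESTORE DATABASE lagerdb FROM DISK = N'C:\SQLBACKUP\diff.bak'
    WITH NORECOVERY;
-- n log backups taken after d, the last one stopping at point in time t
RESTORE LOG lagerdb FROM DISK = N'C:\SQLBACKUP\log1.trn'
    WITH NORECOVERY;
RESTORE LOG lagerdb FROM DISK = N'C:\SQLBACKUP\log2.trn'
    WITH NORECOVERY, STOPAT = '2011-12-06T11:25:00';
-- bring the database online
RESTORE DATABASE lagerdb WITH RECOVERY;
```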
You need to decide what your tolerance is for data loss in order to determine how often you should be running transaction log backups. If you don't know what your tolerance is, you need to discuss this with management / stakeholders.
Could you just take a differential every minute? Or take full backups every five minutes? Sure. But buy some bigger disks.