SQL Server – Database Maintenance Job and Backup Scheduling

Tags: backup, maintenance, ola-hallengren, sql-server

I would like to request expert advice on scheduling maintenance and backup jobs. Below was the scenario prior to my change:

  1. Full backup of the database scheduled to run at 12:30AM every day.
  2. Differential backup scheduled to run every 2 hours during business hours (8AM to 6PM) and every 6 hours outside business hours.
  3. Log backup scheduled to run every 15 minutes, as log shipping is configured.
  4. Index optimize job (using the script from the great Mr. Ola Hallengren) running every Sunday morning at 1:45AM.

We used to face space issues on the storage disk in the above scenario: the full backup ran before the maintenance job, so each subsequent differential backup kept getting bigger until the next full backup ran. This prompted me to run the full backup after the maintenance job. I also checked the fragmentation level in the middle of the week and, based on the values, decided to run the maintenance job twice a week. Below is the modified plan:

  1. Full backup of the database scheduled to run at 12:30AM on all days except Sunday and Tuesday, when the maintenance job is scheduled. On Sunday and Tuesday, the full backup is taken at 2:30AM.
  2. Differential backup scheduled to run every 2 hours during business hours (8AM to 6PM) and every 6 hours outside business hours – no change.
  3. Log backup scheduled to run every 15 minutes, as log shipping is configured – no change.
  4. Index optimize job (using the script from the great Mr. Ola Hallengren) running every Sunday and Tuesday morning at 1:45AM.

The issue I am facing right now is the size of the log backup taken immediately after the maintenance job: it is much bigger than the full backup of the database itself. Needless to say, the log backups are transferred to the secondary site and restored there to keep it in sync. This takes longer than expected, and in the meantime the log-shipping alert gets triggered because the primary and secondary are out of sync. Earlier the post-maintenance log backup was also large, but it was much smaller than the full backup and took considerably less time to transfer from the primary to the secondary server.

I am not really sure whether this is a valid scenario, where the changes (insert/update/delete) in the last 3 days were so voluminous that the maintenance job produced a log backup bigger than the full backup and things will stabilize gradually, or whether I should schedule two full backups on Sunday and Tuesday (when the maintenance job is running): one at 12:30AM and another after the maintenance job.

I appreciate your kind advice.

Best Answer

Based on your scheduled maintenance tasks and backup strategy, here are my suggestions, which may help reduce the size of the transaction log backups. As you said in the comment, "log shipping is enabled for the databases." The log-shipping feature supports both the FULL and BULK_LOGGED recovery models.

So, the 1st thing you can do, if you want to continue with the FULL recovery model:

You can add two steps around the existing index maintenance step in the scheduled maintenance job:

  1. Change the recovery model of the database to BULK_LOGGED
  2. Execute the index maintenance script
  3. Change the recovery model back to FULL

Pros: it will reduce the transaction log backup size. Cons: point-in-time recovery will not be possible for that window. A sketch of the wrapper steps is shown below.
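
Here is a minimal sketch of such a job, assuming a database named YourDatabase and Ola Hallengren's dbo.IndexOptimize procedure; the database name and the IndexOptimize parameters are placeholders, so adjust them to match your existing job step:

-- Step 1: switch to minimally logged operations before index maintenance
-- (the log backup chain is not broken, but point-in-time restore is lost for this window)
ALTER DATABASE [YourDatabase] SET RECOVERY BULK_LOGGED;
GO

-- Step 2: the existing index maintenance step (parameters shown are illustrative only)
EXECUTE dbo.IndexOptimize
    @Databases = 'YourDatabase',
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @LogToTable = 'Y';
GO

-- Step 3: switch back to full logging once maintenance is done
ALTER DATABASE [YourDatabase] SET RECOVERY FULL;
GO

-- Take a log backup after switching back so point-in-time recovery
-- is available again from this point onward.

Note that switching between FULL and BULK_LOGGED does not break the log backup chain, so log shipping keeps working; however, a log backup that contains bulk-logged operations can only be restored as a whole (no point-in-time STOPAT inside it).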

2nd thing you can do, if it suits your SLA (RTO & RPO): since you are backing up the transaction log every 15 minutes, change the recovery model to BULK_LOGGED and leave it there.

3rd action you can perform: find all unused indexes and remove them; it will help reduce the T-log backup size and the maintenance duration as well. You can use this script to find the index details.

-- Collect per-column index usage details into a temp table
CREATE TABLE #tbl_index_info(
    [Table Name] [sysname] NOT NULL,
    [Index Name] [sysname] NULL,
    [Index Type] [nvarchar](60) NULL,
    [Index Columns] [sysname] NULL,
    [Row Count] [bigint] NULL,
    [Fill Factor] [tinyint] NOT NULL,
    [user_seeks] [bigint] NOT NULL,
    [user_scans] [bigint] NOT NULL,
    [user_lookups] [bigint] NOT NULL,
    [Index Read] [bigint] NULL,
    [Index Writes] [bigint] NOT NULL,
    [Last Used (Read)] [datetime] NULL,
    [Last Used (Write)] [datetime] NULL,
    [Last Index Re-Built/Re-Organize] [datetime] NULL,
    [index_id] [int] NOT NULL,
    [object_id] [int] NOT NULL
) 
GO

-- Populate the temp table from the catalog views and the index usage DMV
insert into #tbl_index_info
select      t.name [Table Name]
            ,i.name [Index Name]
            ,i.type_desc [Index Type]
            ,c.name [Index Columns]
            ,p.rows [Row Count]
            ,i.fill_factor [Fill Factor]
            ,ius.user_seeks
            ,ius.user_scans
            ,ius.user_lookups
            ,(ius.user_seeks+ius.user_scans+ius.user_lookups) [Index Read]
            ,ius.user_updates [Index Writes]
            ,COALESCE(ius.last_user_seek,ius.last_user_scan,ius.last_user_lookup) [Last Used (Read)]
            ,ius.last_user_update [Last Used (Write)]
            ,STATS_DATE(s.object_id, s.stats_id) [Last Index Re-Built/Re-Organize]
            ,i.index_id
            ,t.object_id

from        sys.tables t
inner join  sys.columns c
on          t.object_id=c.object_id
inner join  sys.indexes i
on          t.object_id=i.object_id
inner join  sys.index_columns ic
on          t.object_id=ic.object_id and i.index_id=ic.index_id and c.column_id=ic.column_id
inner join  sys.dm_db_index_usage_stats ius
on          t.object_id=ius.object_id and i.index_id=ius.index_id
inner join  sys.stats s
on          t.object_id=s.object_id and i.index_id=s.stats_id
inner join  sys.partitions p
on          p.object_id=i.object_id and p.index_id=i.index_id
where i.name is not null and i.is_unique!=1
Order by ius.user_lookups desc
GO

-- Report usage and fragmentation per index; indexes with few reads and many
-- writes are candidates for removal
Select      DISTINCT
            a.[Table Name],a.[Index Name],a.[Index Type],
            (SELECT SUBSTRING(
            (
                SELECT ',' + [Index Columns] FROM #tbl_index_info b
                WHERE a.[Table Name]=b.[Table Name] and a.[Index Name]=b.[Index Name]
                FOR XML PATH('')), 2,10000)
                )[Index Columns]
                ,a.[Row Count]
            ,ips.avg_fragmentation_in_percent,a.[Fill Factor],a.user_seeks,a.user_scans
            ,a.user_lookups,a.[Index Read],a.[Index Writes],a.[Last Used (Read)],a.[Last Used (Write)]
            ,[Last Index Re-Built/Re-Organize]
from
#tbl_index_info a
cross apply
            sys.dm_db_index_physical_stats(db_id(),a.object_id,a.index_id,0,DEFAULT) ips
ORDER BY [Index Read] ASC
GO

DROP TABLE #tbl_index_info
GO
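
Once the report identifies an index that is written to but essentially never read (zero or near-zero seeks, scans and lookups), and you have confirmed it does not back a constraint or serve some periodic process, it can be removed. The table and index names below are purely hypothetical:

-- Hypothetical example: remove an index the report showed as write-only
-- (confirm with the application owner before dropping anything)
DROP INDEX [IX_Orders_LegacyStatus] ON [dbo].[Orders];
GO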

4th, you need to verify that you are using an appropriate FILLFACTOR for your indexes. A lower FILLFACTOR means a larger number of pages, so the index is bigger and rebuilds generate more log; choose the FILLFACTOR according to how the table is used.
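
For illustration (the index and table names are placeholders), an explicit fill factor can be set when rebuilding an index; 90 leaves 10% free space on each leaf page for future inserts and updates:

-- Rebuild a hypothetical index with an explicit fill factor of 90
-- (ONLINE = ON requires Enterprise edition; omit it on Standard edition)
ALTER INDEX [IX_Orders_CustomerID] ON [dbo].[Orders]
REBUILD WITH (FILLFACTOR = 90, ONLINE = ON);
GO

If you use Ola Hallengren's IndexOptimize, the same can be applied through its @FillFactor parameter.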

Thanks!