UPDATE (April 14 2016):
From SQL 2016 – It Just Runs Faster: Indirect Checkpoint Default, on the official blog of the Microsoft Customer Service and Support (CSS) SQL Support team:
Indirect checkpoint is the recommended configuration, especially on systems with large memory footprints and default for databases created in SQL Server 2016.
Does indirect checkpoint take into consideration the number of dirty buffers instead of the number of transactions, which automatic checkpoint usually uses?
Once we have set target_recovery_time, SQL Server internally calculates the Target Dirty Buffer threshold. As transactions are logged in the transaction log, a Dirty Page List keeps track of the LSNs and the dirty buffers modified by each transaction. So for indirect checkpoints, the dirty buffers from each transaction are tracked along with their LSNs.
The Recovery Writer (a new background process in 2012) periodically polls the Dirty Page List, and if it finds that the number of dirty pages in the Buffer Pool exceeds the Target Dirty Buffer threshold, it flushes the dirty buffers and moves the minLSN forward.
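For illustration only, a minimal sketch of enabling indirect checkpoint; the database name YourDatabase and the 60-second target are placeholders, and the right value should come from your own testing:

    -- 0 = classic automatic checkpoints; any non-zero value enables indirect checkpoint
    ALTER DATABASE [YourDatabase]
    SET TARGET_RECOVERY_TIME = 60 SECONDS;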
Is the indirect checkpoint setting more suitable for a Data Warehouse workload or an OLTP-type workload?
From BOL, an online transactional workload on a database that is configured for indirect checkpoints could experience performance degradation. This is because the background writer used by indirect checkpoint sometimes increases the total write load for a server instance.
What scenarios do you consider for the SQL database in question before you start leveraging indirect checkpoint?
This depends on many factors, as each environment is different in terms of hardware, memory, transaction volume, etc. I would recommend that you TEST and double TEST this particular feature, as it can be helpful to you or it can have a detrimental effect on your workload and possibly jeopardise your databases as well. By default, target recovery time is set to ZERO, meaning classic automatic checkpoints are used; change it only if you have tested it thoroughly and its behaviour is acceptable to you.
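If it helps, here is a simple query to see which of your databases already use indirect checkpoint; the column comes from sys.databases (SQL Server 2012 and later):

    -- 0 means classic automatic checkpoints; any other value means indirect checkpoint
    SELECT name, target_recovery_time_in_seconds
    FROM sys.databases;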
Advantages :
It may improve database recovery time
It may reduce checkpoint I/O spikes, as dirty pages are written to disk continuously in the background
Disadvantages :
In an OLTP workload it can increase the overall writes on the server by continuously writing pages to disk in the background, which may reduce performance.
Ref:
Setting indirect checkpoints by database in SQL Server 2012
How do checkpoints work and what gets logged?
There are a lot of things I would like to say.
What I'm looking for is some suggestions or 'gotchas' that may arise from switching some to the FULL recovery model.
As far as I know, if you change the recovery model of the database during a maintenance window, or when the load is relatively light, there won't be any problem. The switch itself won't cause an issue.
I'm also aware that as soon as we switch to FULL recovery that I need to run a full backup on that database. (I understand that FULL mode won't be truly enabled until we do this).
You can also take a differential backup; this can be very helpful if you have a big database. Anything that re-establishes the LSN chain would do. See the link below; it has good information about switching between recovery models:
http://msdn.microsoft.com/en-gb/library/ms189272.aspx
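As a hedged sketch only (the database name and backup paths are placeholders, not part of the original answer), switching to FULL and immediately starting the log chain could look like this:

    ALTER DATABASE [YourDatabase] SET RECOVERY FULL;

    -- This full backup (or a differential) is what starts the log chain
    BACKUP DATABASE [YourDatabase]
    TO DISK = N'X:\Backups\YourDatabase_full.bak'
    WITH CHECKSUM, INIT;

    -- From now on, schedule regular log backups
    BACKUP LOG [YourDatabase]
    TO DISK = N'X:\Backups\YourDatabase_log.trn'
    WITH CHECKSUM, INIT;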
My plan is also to weekly run a full backup, wipe the log files from the previous week and, essentially 'start over'.
Stop there; this is not the correct approach. Just because you did a full backup and a log backup, please don't assume you can safely remove the old log files and old full backups. Do you have a plan to test old backups by restoring them, to actually see that in case of disaster these backup files would work? Remember that only a successful restore guarantees that your backup is consistent. Do you have an option to check backup integrity? If not, please include it in your backup plan. Keep at least four days of older backup files (this is what I do on local disk) before you delete backups. The retention strategy is up to you: keep backups for as long as you are sure they would still be useful if disaster strikes. Sometimes the business wants only a specific data change that was made a couple of days ago. I also have my database backups on tape, and that tape is stored for 6 months.
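One possible way to add a basic integrity check (the path is a placeholder; VERIFYONLY is not a substitute for a real restore test):

    -- Checks that the backup set is complete and readable; WITH CHECKSUM also
    -- validates the page checksums that were written during the backup
    RESTORE VERIFYONLY
    FROM DISK = N'X:\Backups\YourDatabase_full.bak'
    WITH CHECKSUM;

Only an actual RESTORE DATABASE onto a test server proves the backup is usable.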
(maybe switch to BULK_LOGGED for that time period as to not explode the log file?)
Changing the recovery model to BULK_LOGGED and performing minimally logged operations would make you lose point-in-time (PIT) recovery for the log backups that contain those operations, so if you are concerned about PIT recovery, don't do that. Instead, rebuild indexes through an intelligent script that only rebuilds fragmented indexes, such as Ola Hallengren's index maintenance solution. Please note that if an index is rebuilt, the statistics for that index are already updated with a full scan as part of the rebuild process.
If you do heavy DML, please break it into batches so the log file does not explode.
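A minimal sketch of batched DML, assuming a hypothetical table dbo.BigTable and an arbitrary batch size of 10,000 rows:

    WHILE 1 = 1
    BEGIN
        -- Delete a small chunk per iteration so each transaction stays short
        DELETE TOP (10000) FROM dbo.BigTable
        WHERE SomeDate < '20240101';

        IF @@ROWCOUNT = 0 BREAK;

        -- In FULL recovery, frequent log backups between batches let log space be reused
    END;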
Best Answer
Dirty pages are written directly to the data files. There is no need to write them to the log because their modifications have been logged already. A dirty page never hits disk until all log records that record its modification have been written already. After the log has been hardened, SQL Server is at leisure to write dirty data pages whenever it wants (or never, if the machine loses power, for example). After a page has been written it is simply marked not dirty.
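To see this in practice, dirty pages currently held in the buffer pool can be counted per database with a query like the following (sys.dm_os_buffer_descriptors is the documented DMV; the grouping shown is just one way to slice it):

    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*)             AS dirty_pages
    FROM sys.dm_os_buffer_descriptors
    WHERE is_modified = 1   -- dirty = modified in memory, not yet written to the data file
    GROUP BY database_id
    ORDER BY dirty_pages DESC;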
A transaction commit leads to a corresponding log record. The very fact of hardening that log record is what makes the transaction considered committed (except for delayed durability in 2014+).
The recovery model has little to do with this. It controls whether the log becomes eligible for truncation after a checkpoint completes (SIMPLE) or only after a log backup (FULL and BULK_LOGGED).
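If you want to see what is currently preventing log truncation for each database, sys.databases exposes it directly:

    -- log_reuse_wait_desc shows e.g. NOTHING, CHECKPOINT, LOG_BACKUP, ACTIVE_TRANSACTION
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;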