Defragmentation strategies help improve scan speed to/from disk.
The wide variety of opinions exists because an environment's ideal defragmentation strategy depends on many different factors. There are also multiple potential layers of fragmentation in play.
Saying that your databases are stored on a SAN isn't enough information. For example:
Are database files stored on separate physical RAID groups or the same RAID group? What other processes are active on that same device? Are your backup files ending up there, too? You may have to ask your SAN admin for this information, because it's not always transparent.
What are the access patterns for the databases? OLTP is generally random access, but sometimes an application is table-scan-happy and you can't change its behaviour (ISV app). Are the applications read-mostly, write-mostly, or somewhere in between?
Are there performance SLAs in play during a recovery/failover period?
Brent's post assumes there is one giant pool of storage and everything shares it. This means the physical disks are rarely idle, and hence most access is random. If that is your situation, then the advice applies, and I agree with it for the most part. While this type of strategy is much easier to manage, it isn't necessarily (a) what you have in your environment, or (b) what is the best solution for your environment.
If index maintenance is burdensome, consider doing it less aggressively, and/or amortize the cost over the week (e.g., run light maintenance once a day instead of heavy maintenance once a week).
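For example, here's a minimal T-SQL sketch of a threshold-based split between light and heavy maintenance. The 10% fragmentation and 1,000-page cutoffs are common starting points rather than rules, and dbo.Foo/IX_Foo are placeholder names:

```sql
-- Find indexes worth touching at all; small indexes rarely benefit.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10.0
  AND ips.page_count > 1000
  AND ips.index_id > 0;  -- skip heaps

-- Nightly, light touch: REORGANIZE is always online and can be
-- interrupted without losing the work already done.
-- ALTER INDEX IX_Foo ON dbo.Foo REORGANIZE;

-- Weekly window, heavier: reserve full rebuilds for the worst offenders.
-- ALTER INDEX IX_Foo ON dbo.Foo REBUILD;
```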
You can also turn on the SORT_IN_TEMPDB option to potentially reduce the amount of logging that takes place in the user databases.
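For example (dbo.Foo and IX_Foo are placeholder names):

```sql
-- The intermediate sort runs in tempdb instead of the user database,
-- shifting some of the I/O and log activity out of the user db.
ALTER INDEX IX_Foo ON dbo.Foo
REBUILD WITH (SORT_IN_TEMPDB = ON);
```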
If you want to improve the speed of your tlog restores, you will want to look into the bulk-logged recovery model. I would never suggest running a database under this recovery model exclusively, though switching to it during your maintenance window will help significantly in your case.
The key here is to be very explicit about the approach you take when switching recovery models. From the source: any time you wish to utilize the bulk-logged recovery model, you should follow these steps (a T-SQL sketch of the full sequence follows the list):
1. Before you switch to the bulk-logged recovery model, take a tlog backup
2. Switch the recovery model of the database to bulk-logged
3. Perform your minimally logged operations (e.g. index maintenance, etc.)
4. Switch the database back to the full recovery model
5. Immediately take another tlog backup
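Here's what that sequence looks like in T-SQL; MyDB, dbo.BigTable, and the backup paths are placeholders:

```sql
-- 1. Take a tlog backup before switching.
BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_pre_bulk.trn';

-- 2. Switch to bulk-logged.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- 3. Perform the minimally logged operations.
ALTER INDEX ALL ON dbo.BigTable REBUILD;

-- 4. Switch back to full.
ALTER DATABASE MyDB SET RECOVERY FULL;

-- 5. Immediately take another tlog backup.
BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_post_bulk.trn';
```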
Here's a handy little infographic that outlines the approach:
In an ideal situation, MS also encourages the following:
- Users are currently not allowed in the database, or their activity is kept to a minimum.
- All modifications made during bulk processing are recoverable without depending on taking a log backup; for example, by re-running the bulk processes.
These are not hard-line requirements, but good-practice recommendations. The reasoning behind them is that all activity completed while the database is in bulk-logged mode (once a minimally logged operation occurs) will either have to be redone, if you roll back to the tlog backup taken in step 1, or will be committed, if you restore the tlog backup taken in step 5. The yellow window identified in the infographic is an all-or-nothing process: you are unable to do any point-in-time recovery during that time frame. The moment you convert the database back to the full recovery model and take another tlog backup is the moment you can once again utilize point-in-time recovery.
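To make that concrete, here's a hedged restore sketch using the placeholder names from the earlier sketch; note where point-in-time recovery is and isn't possible:

```sql
RESTORE DATABASE MyDB FROM DISK = N'X:\Backups\MyDB_full.bak'
WITH NORECOVERY;

-- Point-in-time (STOPAT) is fine here: this backup precedes the
-- bulk-logged window.
RESTORE LOG MyDB FROM DISK = N'X:\Backups\MyDB_pre_bulk.trn'
WITH NORECOVERY;

-- This backup covers the bulk-logged window, so it is all-or-nothing:
-- adding STOPAT = '...' to stop inside the window would raise an error
-- because the backup contains minimally logged operations.
RESTORE LOG MyDB FROM DISK = N'X:\Backups\MyDB_post_bulk.trn'
WITH RECOVERY;
```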
Finally, there are some restrictions on using the bulk-logged recovery model, such as considerations for databases with read-only filegroups and online recovery scenarios, so do some testing to make sure you're not causing yourself more headaches than necessary.
I've used the bulk-logged recovery model for years and it's quite helpful so long as you understand how to use it properly.
Best Answer
(converting my comment to answer)
FileTable uses FILESTREAM technology. From Paul Randal's blog: defragment the NTFS volume if needed before setting up FILESTREAM, and periodically thereafter to maintain good scan performance.
Instead of rebuilding, you should defragment the NTFS volume as suggested.
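If you're not sure which volume(s) host the FILESTREAM data containers, a quick query against sys.master_files will show their physical paths, which tells you what to defragment at the OS level:

```sql
-- Lists FILESTREAM data containers across all databases so you can
-- see which NTFS volume(s) they live on.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name
FROM sys.master_files
WHERE type_desc = 'FILESTREAM';
```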
I also highly suggest reading Best Practices on FILESTREAM implementations by Pedro Lopes (Senior PM at Microsoft Database Systems).