if any table has a page count greater than 50 pages
That is still extremely small. I typically don't take fragmentation into account for any index with less than 1000 pages.
and avg_fragmentation_in_percent > 30, then the index is rebuilt
That's a good threshold for rebuilds, but I'd also consider reorganizing indexes with fragmentation in the range of 5% to 30%.
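Putting those two thresholds together, a sketch of the check might look like the following (this is my illustration, not part of the original answer; it assumes SQL Server with VIEW DATABASE STATE permission, and the cutoffs are the ones discussed above):

```sql
-- Indexes over 1,000 pages, with a suggested action based on the
-- 5% / 30% fragmentation thresholds discussed above.
SELECT
    OBJECT_NAME(ips.object_id)  AS table_name,
    i.name                      AS index_name,
    ips.page_count,
    ips.avg_fragmentation_in_percent,
    CASE
        WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
        WHEN ips.avg_fragmentation_in_percent >= 5 THEN 'REORGANIZE'
        ELSE 'NONE'
    END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.page_count >= 1000   -- ignore small indexes, per the advice above
  AND ips.index_id > 0;        -- skip heaps
```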
If the database log size > 2 MB, then the recovery model is set to SIMPLE, the log file is shrunk, and the recovery model is set back to FULL
Why are you doing this? Why is it a problem if a log file is greater than 2 MB? That's extremely small. Not to mention, when you switch to SIMPLE
, shrink, and then back to FULL
recovery you are breaking the log chain. You could get yourself into serious trouble for point-in-time recovery. Size your log files appropriately, and allow for autogrowth only in case of emergency. Shrinking a database file should only be done in extreme corner cases, and by no means as part of a routine maintenance schedule.
But we found that autogrowth events are resource-intensive, and they should not happen repeatedly
Correct. That is where sizing comes into play. Monitor and get alerted when a certain percentage of space is being used. That way you can plan your manual file growth operations for a maintenance window to minimize end-user impact.
and 10% for the ldf file, which is the default value when creating a new database
Even though a percentage file growth is default, I recommend against it. You don't want to have a variable amount of file growth. If you must fall back on autogrowth, then you want to know exactly how much the file will grow by. Not to mention, with growth of a log file you will have a variable amount of virtual log files (VLF) with the additional space. You want to smartly plan this, and this can be done through a fixed amount of file growth.
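As a concrete (hypothetical) example of switching a log file from the 10% default to a fixed increment, the database and file names here are placeholders:

```sql
-- Replace the variable 10% growth with a fixed 512 MB increment,
-- so every emergency autogrowth event is a known, predictable size.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 512MB);
```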
Should I set autogrowth so that it happens only once a day, week, or month, etc.?
Set autogrowth to a fixed amount of space (not a percentage), and only you can determine what a good amount is. Set it too small and you will have frequent autogrowth events, each with minimal end-user impact, but happening often. Set it too high and growths will be less frequent, but when one does happen the impact will last longer.
My recommendation? Monitor the space used in your database files. When it hits a certain percentage (say, 80% space used) you should be alerted. And then schedule a manual growth during a maintenance window.
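A minimal sketch of that space-used check, run in the database you want to monitor (my illustration, assuming SQL Server; the 80% figure comes from the recommendation above):

```sql
-- Percentage of allocated space used per file in the current database.
-- FILEPROPERTY(name, 'SpaceUsed') and size are both in 8 KB pages.
SELECT
    name AS file_name,
    size * 8 / 1024.0 AS size_mb,
    FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024.0 AS used_mb,
    CAST(FILEPROPERTY(name, 'SpaceUsed') * 100.0 / size AS decimal(5,1))
        AS pct_used   -- alert when this crosses your threshold, e.g. 80
FROM sys.database_files;
```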
Best Answer
If you know it's going to grow 1GB per day, then yes, manually expand the file now (well, add many files and expand them all) to support as much future growth as you can afford.
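A manual expansion during a maintenance window might look like this (hypothetical names and size; pick a target that covers the growth you can afford to pre-allocate):

```sql
-- Pre-grow the data file now, on your schedule, instead of waiting
-- for surprise autogrowth events during production hours.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_data, SIZE = 100GB);
```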
Every growth event has a cost. Even with instant file initialization enabled, transactions have to wait for it to complete. If you aren't using IFI or you have extremely slow storage, these surprise events are definitely not a good thing.
Growing the file(s) in advance now has an obvious cost to disk space, but it's not like you could have used that space for anything else. This does not have a cost to backups, since empty pages/extents are not backed up.
Whether it will impact production will depend almost wholly on whether you have instant file initialization enabled. You are using an ancient, unsupported version of SQL Server, otherwise you could check this in a simple DMV or in the errorlog. There is an elaborate way to check on older versions, explained by Paul Randal here:
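For readers on a supported version, the simple DMV check alluded to above looks something like this (the instant_file_initialization_enabled column was added in later builds, around SQL Server 2016 SP1 if I recall correctly, so it will not exist on older versions):

```sql
-- Returns 'Y' or 'N' per service; only meaningful on newer builds.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```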