If any table has a page count of more than 50 pages
That is still extremely small. I typically don't take fragmentation into account for any index with less than 1000 pages.
and avg_fragmentation_in_percent > 30, then the index is rebuilt
That's a good threshold for rebuilds, but I'd also consider reorganizing indexes with fragmentation in the range of 5% to 30%.
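Putting both pieces of advice together, the decision logic looks something like the sketch below. The function name is hypothetical and the thresholds (1000 pages, 5%, 30%) come from this thread's advice, not from any official guidance:

```python
def index_action(page_count: int, fragmentation_pct: float) -> str:
    """Decide what to do with an index based on its size and fragmentation.

    Thresholds mirror the advice in this thread: ignore indexes under
    1000 pages, reorganize between 5% and 30% fragmentation, rebuild
    above 30%.
    """
    if page_count < 1000:
        return "skip"        # too small for fragmentation to matter
    if fragmentation_pct > 30:
        return "rebuild"     # heavy fragmentation: rebuild
    if fragmentation_pct >= 5:
        return "reorganize"  # moderate fragmentation: reorganize
    return "skip"            # low fragmentation: leave it alone

print(index_action(50, 90))    # skip: the index is tiny
print(index_action(5000, 45))  # rebuild
print(index_action(5000, 12))  # reorganize
```

In practice you'd feed this from `sys.dm_db_index_physical_stats`, which reports both `page_count` and `avg_fragmentation_in_percent` per index.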
If the database log size is greater than 2 MB, the recovery model is switched to SIMPLE, the log is shrunk, and the recovery model is set back to FULL
Why are you doing this? Why is it a problem if a log file is greater than 2 MB? That's extremely small. Not to mention, when you switch to SIMPLE, shrink, and then switch back to FULL recovery, you break the log chain. You could get yourself into serious trouble with point-in-time recovery. Size your log files appropriately, and allow for autogrowth only as an emergency fallback. Shrinking a database file should only be done in extreme corner cases, and should by no means be part of a routine maintenance schedule.
But we found that autogrowth events are resource intensive and should not happen repeatedly
Correct. That is where sizing comes into play. Monitor and get alerted when a certain percentage of space is being used. That way you can plan your manual file growth operations for a maintenance window to minimize end-user impact.
and 10% for the .ldf file, which is the default value when creating a new database
Even though percentage file growth is the default, I recommend against it. You don't want a variable amount of file growth. If you must fall back on autogrowth, then you want to know exactly how much the file will grow by. Not to mention, with percentage growth of a log file you will get a variable number of virtual log files (VLFs) with each addition of space. You want to plan this deliberately, and that can be done through a fixed amount of file growth.
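To make the "variable amount" point concrete, here's a small illustration (plain arithmetic, not SQL Server internals) comparing percentage growth with fixed growth; the function name and starting size are made up for the example:

```python
def growth_events(start_mb, events, pct=None, fixed_mb=None):
    """Return the size (MB) of each successive autogrowth increment."""
    size, increments = start_mb, []
    for _ in range(events):
        # Percentage growth is computed against the current (growing) size,
        # so every increment is larger than the last; fixed growth is constant.
        grow = size * pct / 100 if pct is not None else fixed_mb
        increments.append(round(grow, 1))
        size += grow
    return increments

# 10% growth on a 1024 MB log: each event grows by more than the previous one.
print(growth_events(1024, 4, pct=10))
# Fixed 256 MB growth: every event is identical and predictable.
print(growth_events(1024, 4, fixed_mb=256))
```

With a fixed increment you know exactly how many VLFs each growth will add; with a percentage you don't.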
Should I set autogrowth so that it happens only once a day, week, or month, etc.?
Set autogrowth to a fixed amount of space (not a percentage), and only you can determine what a good amount is. Set it too small and you'll have frequent autogrowth events, each with minimal end-user impact but happening often. Set it too high and growths will be less frequent, but when one does happen the impact will last longer.
My recommendation? Monitor the space used in your database files. When it hits a certain percentage (say, 80% space used) you should be alerted. And then schedule a manual growth during a maintenance window.
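That recommendation boils down to a simple check. A minimal sketch, assuming you already collect used and allocated sizes from your monitoring (the function name and 80% default are illustrative):

```python
def needs_growth_alert(used_mb: float, allocated_mb: float,
                       threshold_pct: float = 80.0) -> bool:
    """Alert when used space crosses the threshold, so a manual file
    growth can be scheduled for a maintenance window instead of
    waiting for autogrowth to fire mid-workload."""
    return (used_mb / allocated_mb) * 100 >= threshold_pct

print(needs_growth_alert(700, 1000))  # 70% used: no alert yet
print(needs_growth_alert(850, 1000))  # 85% used: plan a manual growth
```

How you gather `used_mb` and `allocated_mb` (DMVs, a monitoring tool, Agent jobs) is up to your environment; the point is that the growth happens on your schedule, not autogrowth's.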
Best Answer
This all depends on what you want to happen when the database hits 5GB. Do you want users to get errors and the database to become unusable full stop, or do you just want to be notified?
It also depends on whether you want to consider the log size separately from the actual data. The log size will fluctuate based on a variety of factors, including recovery model, frequency of full/log/diff backups, avg/peak size of transactions, duration between begin transaction and commit/rollback, etc.
Finally, it depends on whether you want to have to manage this proactively. If for any of the above reasons your log reaches 5GB, but you've decided you don't want to stop the database from working because of log usage, you'll have to intervene if the max log size is set to 5GB even though the data itself really hasn't grown...