if any table has a page count of more than 50 pages
That is still extremely small. I typically don't take fragmentation into account for any index with less than 1000 pages.
and avg_fragmentation_in_percent > 30, then the index is rebuilt
That's a good threshold for rebuilds, but I'd also consider reorganizing indexes with fragmentation in the range of 5% to 30%.
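These thresholds can be checked with a query against `sys.dm_db_index_physical_stats`. This is a sketch, not a full maintenance script; it uses the 1,000-page floor and the 5%/30% cutoffs from the discussion above, and `LIMITED` mode to keep the scan cheap:

```sql
-- Sketch: find indexes worth maintaining in the current database,
-- using the page-count and fragmentation thresholds discussed above.
SELECT OBJECT_NAME(ips.object_id)          AS table_name,
       i.name                              AS index_name,
       ips.page_count,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30
            THEN 'REBUILD'
            ELSE 'REORGANIZE' END          AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count >= 1000                    -- ignore small indexes
  AND ips.avg_fragmentation_in_percent >= 5;   -- below 5%: leave alone
```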
If the database log size > 2 MB, the recovery model is switched to SIMPLE, the log is shrunk, and the recovery model is set back to FULL
Why are you doing this? Why is it a problem if a log file is larger than 2 MB? That's extremely small. Not to mention, when you switch to SIMPLE, shrink, and then switch back to FULL recovery, you break the log chain. That can get you into serious trouble for point-in-time recovery. Size your log files appropriately, and allow autogrowth only as an emergency fallback. Shrinking a database file should be reserved for extreme corner cases, and should by no means be part of a routine maintenance schedule.
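Rather than shrinking on a size trigger, it's worth checking how full the log actually is. A minimal sketch, assuming SQL Server 2012 or later (where `sys.dm_db_log_space_usage` is available):

```sql
-- Sketch: how much of the current database's log is actually in use.
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes  / 1048576.0 AS used_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

A log that is large but mostly empty is not a problem; a log that is repeatedly filling up is a sizing (or log-backup frequency) problem, not a shrinking problem.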
But we found that autogrowth events are resource intensive and should not happen repeatedly
Correct. That is where sizing comes into play. Monitor and get alerted when a certain percentage of space is being used. That way you can plan your manual file growth operations for a maintenance window to minimize end-user impact.
and 10% for the .ldf file, which is the default value when creating a new database
Even though percentage file growth is the default, I recommend against it. You don't want a variable amount of file growth. If you must fall back on autogrowth, then you want to know exactly how much the file will grow by. Not to mention, with percentage growth of a log file you will get a variable number of virtual log files (VLFs) with the additional space. You want to plan this deliberately, and that is only possible with a fixed amount of file growth.
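Switching a file from percentage to fixed growth is a one-line change. A sketch, where `MyDb` and `MyDb_log` are placeholder names and 512 MB is an example increment you'd tune for your storage:

```sql
-- Sketch: replace 10% log growth with a fixed 512 MB increment.
-- 'MyDb' and 'MyDb_log' are placeholders; use your database's
-- logical file names (see sys.database_files).
ALTER DATABASE MyDb
MODIFY FILE (NAME = 'MyDb_log', FILEGROWTH = 512MB);
```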
Should I set autogrowth so that it happens only once a day, week, or month, etc.?
Set autogrowth to a fixed amount of space (not a percentage), and only you can determine what a good amount is. Set it too small and you will have frequent autogrowth events, each with minimal end-user impact but happening often. Set it too high and growths will be less frequent, but when one does happen the impact will last longer.
My recommendation? Monitor the space used in your database files. When it hits a certain percentage (say, 80% space used) you should be alerted. And then schedule a manual growth during a maintenance window.
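The "percentage used" check can be done with `FILEPROPERTY`. A sketch for the data files of the current database; the 80% threshold is the example figure from above, and you'd wire this into whatever alerting you use (e.g. a scheduled Agent job):

```sql
-- Sketch: percent used per data file in the current database.
-- FILEPROPERTY(..., 'SpaceUsed') and the size column are in 8 KB pages.
SELECT name                                        AS file_name,
       size * 8 / 1024                             AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024  AS used_mb,
       CAST(FILEPROPERTY(name, 'SpaceUsed') * 100.0 / size
            AS decimal(5,1))                       AS pct_used
FROM sys.database_files
WHERE type_desc = 'ROWS';   -- data files only
```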
As stated in Paul's recommendations and in KB 2154845, use 8 files and adjust as needed.
As a general rule, if the number of logical processors is less than or equal to 8, use the same number of data files as logical processors. If the number of logical processors is greater than 8, use 8 data files and then if contention continues, increase the number of data files by multiples of 4 (up to the number of logical processors) until the contention is reduced to acceptable levels or make changes to the workload/code.
Now also make sure the tempdb files are sized so that you don't need autogrowth, and that they are all the same size with the same growth parameters (fixed amounts, not percentages).
Your tempdb is now ~140 GB, so that is about the size you need to provision for. Create 8 data files of 15-20 GB each and a single log file (transaction logs are written sequentially, so multiple log files don't help). Set all the data files with exactly the same growth parameters; something that makes sense on your storage, such as 512 MB, is fine if you have instant file initialization. As your transaction log is just about 5 GB, create a single 6-8 GB log file and set its autogrowth in MB. Log growth does not benefit from instant file initialization, so make that increment smaller than the one on the data files.
If you still see contention, then go to 16 files of 8-10 GB each, and don't change the transaction log parameters.
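The layout described above could be scripted roughly as follows. This is a sketch: the logical file names, path, and exact sizes are placeholders you'd adapt to your instance (a restart is needed for tempdb size changes to fully take effect):

```sql
-- Sketch: 8 equally sized tempdb data files with identical fixed growth,
-- plus a single log file. Names, path, and sizes are placeholders.
ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev', SIZE = 18GB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = 'tempdev2',
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 18GB, FILEGROWTH = 512MB);
-- ...repeat ADD FILE for tempdev3 through tempdev8...

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'templog', SIZE = 8GB, FILEGROWTH = 256MB);
```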
Best Answer
You could run this against each file, replacing each `n` with the size and growth amounts. Now, to ensure that they all grow at the same time (instead of only one of them growing, e.g. in response to a large spill or other operation), you can enable trace flag 1117. Keep in mind that this flag applies to all databases, so if tempdb isn't your only database with more than one data file, you'll want to test this behavior. More info:
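The per-file command would presumably take this shape (a template, not runnable as-is: each `n` is the placeholder mentioned above, and the file name must match your instance):

```sql
-- Template: set size and growth on one tempdb file; replace each n
-- with your chosen amounts, and 'tempdev' with the actual logical name.
ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev', SIZE = nGB, FILEGROWTH = nMB);

-- Trace flag 1117 (grow all files in a filegroup together) can be
-- enabled globally at runtime, or via a -T1117 startup parameter:
DBCC TRACEON (1117, -1);
```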
http://blogs.technet.com/technet_blog_images/b/sql_server_sizing_ha_and_performance_hints/archive/2012/02/09/sql-server-2008-trace-flag-t-1117.aspx
http://www.sqlskills.com/BLOGS/PAUL/post/Tempdb-configuration-survey-results.aspx