You should be aiming to auto-grow as little as possible. Seven times a day is excruciating, even with instant file initialization.
Don't do a Shrink Database. Ever. Shrinkfile, maybe, but only after an extraordinary event. Shrinking it just to grow again is an exercise in futility and should actually be called auto-fragment.
If the recovery model is simple, there is no way on earth you should need to grow your log file by 250 GB. The used space in the log will clear itself out automatically over time, unless you started a transaction a month ago and have no intention of ever committing it or rolling it back.
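As a quick sanity check, you can ask SQL Server directly what (if anything) is preventing log reuse. This is a small diagnostic sketch; the database name in the filter is a placeholder:

```sql
-- What is holding the log hostage? NOTHING means the log can be reused;
-- ACTIVE_TRANSACTION points at a long-running open transaction.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';  -- placeholder database name
```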
So my advice would be:
Grow the data file manually during a quiet period to a size that will accommodate several months of growth. What are you saving the space for in the meantime?
Set the auto-growth increment for the data file to something relatively small (so that it doesn't interrupt users when it does happen), and alert on this event (you can catch it in the default trace, for example, or through extended events). This can tell you that you are hitting the high point you estimated and it is time to grow manually again. At this point you will want to keep this manual in case you want to add a new file / filegroup on a different drive to accommodate the space, since eventually you will fill the current drive.
Grow the log file to, say, twice the largest it's ever been. It shouldn't auto-grow further unless there is some abnormal transaction holding things up. You should monitor for this event as well, so that you know about it.
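The auto-growth events mentioned above can be pulled from the default trace with a query along these lines. This is a sketch: it assumes the default trace is enabled on your server, and uses the documented event class IDs 92 (Data File Auto Grow) and 93 (Log File Auto Grow):

```sql
-- Find recent auto-grow events in the default trace (sketch; assumes
-- the default trace is enabled, which it is unless someone turned it off).
DECLARE @tracefile NVARCHAR(260);

SELECT @tracefile = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT t.DatabaseName,
       t.[FileName],
       te.[name]                 AS EventName,   -- Data File Auto Grow / Log File Auto Grow
       t.StartTime,
       t.IntegerData * 8 / 1024  AS GrowthMB     -- IntegerData is the growth in 8 KB pages
FROM sys.fn_trace_gettable(@tracefile, DEFAULT) AS t
JOIN sys.trace_events AS te
    ON t.EventClass = te.trace_event_id
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;
```

You could schedule something like this in an Agent job and raise an alert when rows show up, which tells you it's time to grow manually again.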
It sounds like you've got a huge transaction that has remained open which has done a lot of work. Run DBCC OPENTRAN on the database and see how long the oldest transaction has been open for. You'll probably need to kill that transaction (or have the user commit if it's someone in Management Studio). Then the log will clear automatically.
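For example (the database name and session id below are placeholders):

```sql
-- Show the oldest active transaction in the database, including the
-- session id (SPID) and when it started.
DBCC OPENTRAN ('YourDatabase');

-- If that transaction has to go, kill it by its session id.
-- 52 is just an illustrative value taken from the DBCC OPENTRAN output.
-- KILL 52;
```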
Best Answer
I agree with Brent that this is not a fantastic idea; I prefer to manage log file sizes manually, with intention.
Having said that, you can use SQL Server Agent alerts to accomplish what you want.
You'll need to modify the code above to reflect the name of the server in the @performance_condition parameter. If your server is a named instance, you'll need to change MV2016 to the name of your instance. If you're using an unnamed (default) instance, then @performance_condition would simply be:

This alert will fire whenever the tempdb log file crosses 60% full. When the alert fires, it automatically runs a job which increases the size of the tempdb log by 40%.
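For reference, the alert definition looks roughly like this. This is a sketch under stated assumptions: the alert and job names are placeholders, and it assumes a default instance, so the performance counter object is SQLServer:Databases rather than MSSQL$<InstanceName>:Databases:

```sql
-- Sketch: create an alert on the "Percent Log Used" counter for tempdb.
-- Alert name and job name are placeholders; assumes a default instance.
USE msdb;
GO

EXEC dbo.sp_add_alert
    @name = N'tempdb log 60 percent full',
    @performance_condition = N'SQLServer:Databases|Percent Log Used|tempdb|>|60',
    @job_name = N'Grow tempdb log';   -- a job you create that runs
                                      -- ALTER DATABASE tempdb MODIFY FILE (...)

-- Optional: also notify an operator by e-mail (operator name is a placeholder).
EXEC dbo.sp_add_notification
    @alert_name = N'tempdb log 60 percent full',
    @operator_name = N'DBA Team',
    @notification_method = 1;
```

On a named instance, swap the counter object to MSSQL$<InstanceName>:Databases in @performance_condition.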