When did you check this data? The msdb.dbo.sysjobschedules
table only refreshes every 20 minutes. So if you set the schedule, then changed it, and then ran the sp_help_jobschedule
stored procedure, the underlying data might not have been updated yet.
What do you get for next_scheduled_run_date when you execute this query?
exec msdb.dbo.sp_help_jobactivity @job_name = 'YourJobName'
(Note that @job_name takes the name of the job, not the name of the schedule.)
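If sp_help_jobactivity still shows a stale value, you can read the cached row directly out of sysjobschedules. A sketch; 'YourJobName' is a placeholder for your actual job name:

```sql
-- Inspect the cached next-run values directly.
-- These columns can lag up to ~20 minutes behind schedule changes.
SELECT  j.name,
        s.next_run_date,   -- int, yyyymmdd
        s.next_run_time    -- int, hhmmss
FROM    msdb.dbo.sysjobschedules AS s
JOIN    msdb.dbo.sysjobs         AS j ON j.job_id = s.job_id
WHERE   j.name = N'YourJobName';
```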
You should be aiming to auto-grow as little as possible. Seven times a day is excruciating, even with instant file initialization.
Don't run Shrink Database. Ever. DBCC SHRINKFILE, maybe, but only after an extraordinary event. Shrinking a file just so it can grow again is an exercise in futility, and should really be called auto-fragment.
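If you really did have a one-off event and must reclaim the space, the targeted tool is DBCC SHRINKFILE against the one affected file. A sketch, with placeholder database and logical file names and an assumed target size:

```sql
-- One-off reclaim after an extraordinary event only.
-- 'YourDb' and 'YourDb_Log' are placeholders; SHRINKFILE takes the
-- LOGICAL file name (see sys.database_files), not the physical path.
USE YourDb;
DBCC SHRINKFILE (N'YourDb_Log', 10240);  -- target size in MB
```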
If the recovery model is simple, there is no way on earth you should need to grow your log file by 250 GB. The used space in the file will clear itself out automatically over time, unless you started a transaction a month ago and have no intention of ever committing it or rolling it back.
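If the log does keep growing under simple recovery, check what is preventing reuse before touching the file. A sketch ('YourDb' is a placeholder):

```sql
-- Why can't the log space be reused?
-- ACTIVE_TRANSACTION points at a long-running open transaction;
-- NOTHING means the log is clearing normally.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = N'YourDb';

-- Find the oldest open transaction in the current database, if any.
DBCC OPENTRAN;
```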
So my advice would be:
Grow the data file manually during a quiet period to a size that will accommodate several months of growth. What are you saving the space for in the meantime?
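Pre-sizing the data file is a single ALTER DATABASE during a quiet window. A sketch with placeholder names and an assumed target size:

```sql
-- Pre-size the data file so routine growth never triggers auto-grow.
-- 'YourDb' / 'YourDb_Data' are placeholders; pick a size that covers
-- several months of expected growth.
ALTER DATABASE YourDb
MODIFY FILE (NAME = N'YourDb_Data', SIZE = 200GB);
```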
Set the auto-growth increment for the data file to something relatively small (so that it doesn't interrupt users when it does happen), and alert on this event (you can catch it in the default trace, for example, or through extended events). This tells you that you are hitting the high point you estimated and it is time to grow manually again. You will want to keep this step manual in case you need to add a new file / filegroup on a different drive to accommodate the space, since eventually you will fill the current drive.
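The default trace already captures auto-grow events, so alerting can start as a simple query against it. A sketch:

```sql
-- Read auto-grow events from the default trace.
-- EventClass 92 = Data File Auto Grow, 93 = Log File Auto Grow.
DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT  te.name          AS event_name,
        t.DatabaseName,
        t.FileName,
        t.StartTime,
        t.Duration / 1000 AS duration_ms   -- Duration is in microseconds
FROM    sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN    sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE   t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;
```

The default trace rolls over, so if you want history beyond a few files, persist these rows to a table or use an extended events session instead.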
Grow the log file manually to, say, twice the largest it's ever been. It shouldn't auto-grow further unless some abnormal transaction is holding things up. You should monitor for this event as well, so that you know about it.
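Pre-sizing the log and giving it a fixed, modest increment is one statement. A sketch with placeholder names and assumed sizes:

```sql
-- Pre-size the log and set a fixed growth increment so the rare
-- auto-grow event doesn't stall users. Placeholder names and sizes.
ALTER DATABASE YourDb
MODIFY FILE (NAME = N'YourDb_Log', SIZE = 20GB, FILEGROWTH = 1GB);
```

A fixed increment (MB/GB) is preferable to a percentage, since a percentage grows by ever-larger amounts as the file gets bigger.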
Best Answer
I believe @freq_recurrence_factor is the parameter you are looking for.
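For example, to run a job every two weeks rather than weekly, set @freq_type = 8 (weekly) and @freq_recurrence_factor = 2. A sketch; the job and schedule names are placeholders:

```sql
-- Run every 2 weeks on Monday at 03:00.
EXEC msdb.dbo.sp_add_jobschedule
     @job_name               = N'YourJobName',     -- placeholder
     @name                   = N'EveryTwoWeeks',   -- placeholder
     @freq_type              = 8,       -- weekly
     @freq_interval          = 2,       -- day bitmask: 2 = Monday
     @freq_recurrence_factor = 2,       -- every 2 weeks
     @active_start_time      = 30000;   -- 03:00:00 (hhmmss as int)
```

For weekly schedules, @freq_recurrence_factor is the number of weeks between runs; for monthly (@freq_type = 16 or 32) it is the number of months.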