You should be aiming to auto-grow as little as possible. Seven times a day is excruciating, even with instant file initialization.
Don't do a Shrink Database. Ever. Shrinkfile, maybe, but only after an extraordinary event. Shrinking it just to grow again is an exercise in futility and should actually be called auto-fragment.
If recovery model is simple, there is no way on earth you should need to grow your log file by 250 GB. The used space in the file will clean itself out automatically over time, unless you started a transaction a month ago and have no intentions of ever committing it or rolling it back.
So my advice would be:
Grow the data file manually during a quiet period to a size that will accommodate several months of growth. What are you saving the space for in the meantime?
Set the auto-growth increment for the data file to something relatively small (so that it doesn't interrupt users when it does happen), and alert on this event (you can catch it in the default trace, for example, or through extended events). This tells you that you are hitting the high-water mark you estimated and that it is time to grow manually again. Keep this growth manual, because eventually you will fill the current drive and may want to add a new file / filegroup on a different drive to accommodate the space.
Grow the log file manually to, say, twice the largest it's ever been. It shouldn't auto-grow further unless some abnormal transaction is holding things up. You should monitor for this event as well, so that you know about those transactions.
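To make the manual growth and the alerting above concrete, here is a sketch. The database, file, and size values are hypothetical, and the trace query assumes the default trace is enabled (it is by default):

```sql
-- Grow the data and log files manually during a quiet period
-- (database name, logical file names, and sizes are hypothetical).
ALTER DATABASE LogDB MODIFY FILE (NAME = N'LogDB_data', SIZE = 200GB);
ALTER DATABASE LogDB MODIFY FILE (NAME = N'LogDB_log',  SIZE = 50GB);

-- Check for recent auto-grow events captured by the default trace
-- (event class 92 = Data File Auto Grow, 93 = Log File Auto Grow).
DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT te.name, t.DatabaseName, t.FileName, t.StartTime,
       t.Duration / 1000      AS duration_ms,   -- trace duration is in microseconds
       (t.IntegerData * 8) / 1024 AS growth_mb  -- IntegerData = number of 8 KB pages grown
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
INNER JOIN sys.trace_events AS te
    ON t.EventClass = te.trace_event_id
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;
```

If this query keeps returning fresh rows, your estimate is being outgrown and it is time for another manual growth.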
If you are going to continue logging data in this database, the last thing on earth you want to do is shrink the database file (and then perform index maintenance that will require it to grow again). Never mind that these shrink and grow operations will interfere with performance, and that the end result is not going to leave you much better off than when you started.
Since the file is only going to grow again, this is an extremely futile operation - much like drying off while still in the shower. What are you going to do with all that disk space you free up temporarily? Lease it out to another application until the database needs to grow again? Of course not. If the database grew to that size once, it will grow to that size again, but it is going to be much more efficient to reuse the space within the file without all this unnecessary shrink-grow-shrink-grow roller coaster of the file itself.
Even if you move the logging table to another database, you should do what you can to pre-allocate the data file to a size that will accommodate the amount of logging you want to save (be it a week, a month, what have you). Keep this database trim by purging data every day, and stop worrying about shrinking & re-indexing. If you size it right, there should always be some free space, but not excessive free space. And if you need to re-index (you shouldn't really, if your clustered index is datetime or otherwise monotonically-based), do so after the purge (when you have the most amount of free space), not after a shrink (when you have the least).
You can do what Mark suggests without introducing a new database to the application, or changing the application or its interface to the database at all (one important change, of course, would be the removal of any foreign keys or other database-dependent features). You can simply create your table(s) in the new database, then add an INSTEAD OF INSERT trigger to the table in the current database (I assume there are no updates to the logging table, but you may also need an INSTEAD OF DELETE trigger in case you don't directly control the process that performs the purging). That will help with writes, but you'll have to point reads elsewhere, since there are no INSTEAD OF SELECT triggers. Another alternative would be to rename the existing table and create a synonym or even a view that points to the new table.
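As a sketch of the trigger approach (all object and column names here are hypothetical, and the table in the new database is assumed to have the same schema as the original):

```sql
-- Redirect writes on the current logging table into the new database.
-- Assumes ArchiveDB.dbo.logs_table has the same columns as dbo.logs_table.
CREATE TRIGGER dbo.logs_table_redirect
ON dbo.logs_table
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT ArchiveDB.dbo.logs_table ([datetime], [message])  -- list all columns explicitly
        SELECT [datetime], [message] FROM inserted;
END
GO

-- Alternative for both reads and writes: rename the original table out of
-- the way and create a synonym so existing queries hit the new table.
-- EXEC sp_rename N'dbo.logs_table', N'logs_table_old', N'OBJECT';
-- CREATE SYNONYM dbo.logs_table FOR ArchiveDB.dbo.logs_table;
```

Note that the trigger and the synonym are alternatives, not companions: the synonym requires the original table name to be free, so you would use one approach or the other.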
If you're going to need to clean up the logs table that has grown, I would avoid a single atomic transaction like:
DELETE dbo.logs_table WHERE [datetime] < '20121201';
This will cause massive log growth and will take a long time. Instead you can break the cleanup into chunks, e.g.
BEGIN TRANSACTION;

SELECT 1; -- dummy statement to seed @@ROWCOUNT for the first loop test

WHILE @@ROWCOUNT > 0
BEGIN
    COMMIT TRANSACTION;
    -- if in simple recovery: CHECKPOINT;
    -- otherwise: BACKUP LOG ...
    BEGIN TRANSACTION;
    DELETE TOP (1000) FROM dbo.logs_table WHERE [datetime] < '20121201';
END

COMMIT TRANSACTION; -- close the final transaction left open when the DELETE hits zero rows
I picked 1000 and Dec. 1st arbitrarily, I don't know what fits your scenario best. The point is that you want to keep transactions short and contained, and prevent any long-term impact while you clean up the table. Another option I've used in the past, instead of deleting 99% of the junk in a table, move the 1% you want to keep to a new table and drop the old one.
BEGIN TRANSACTION;
SELECT *
INTO dbo.new_logs_table
FROM dbo.logs_table
WHERE [datetime] >= '20121201';
COMMIT TRANSACTION;
-- create indexes/constraints/triggers on dbo.new_logs_table
BEGIN TRANSACTION;
DROP TABLE dbo.logs_table;
EXEC sp_rename N'dbo.new_logs_table', N'logs_table', N'OBJECT';
COMMIT TRANSACTION;
If you say the logs have never been purged, then you may very well be in a scenario where the database will be at a size that you don't anticipate you will ever need again (say, if you only intend to ever keep a week of logs at a time). In this case I might go along with a shrink operation, but only if truly necessary (e.g. you really do need the space for other purposes). A bunch of empty pages aren't going to affect your backups or other operations, and the pages will eventually be fully deallocated and reused.
Best Answer
Select the second radio button, Reorganize pages before releasing unused space, and then in the Shrink file to: option, enter the minimum size (in MB) that the dialog shows; the dialog will correct you if you enter a value lower than that minimum. Then click OK. I have found that the DBCC commands for this work every time, whereas the GUI did not work in some of my cases.
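The equivalent DBCC command, for reference (the database name, logical file name, and target size below are hypothetical; the target is specified in MB):

```sql
USE LogDB;  -- hypothetical database name

-- Shrink one file to a target of 1024 MB. Without the TRUNCATEONLY option,
-- DBCC SHRINKFILE moves pages to the front of the file before releasing
-- space, matching the "Reorganize pages" choice in the GUI.
DBCC SHRINKFILE (N'LogDB_data', 1024);
```

You can find the logical file name to pass in by querying sys.database_files for the current database.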
Also note that the extra space is there for a reason: you will need it for future growth, but that is a whole other performance topic.