Determining the appropriate size, both when setting it initially and when growing, is what you will need to work out. This is all part of properly sizing your database.
But you need to be careful with growing your files in such small increments. If your files grow often, you force SQL Server to perform this relatively expensive operation over and over. Take, for instance, growth of the transaction log: while it happens, all write activity is blocked. On top of that, if you grow your transaction log in small increments and it grows often, you will end up with a large number of VLFs that are small in size.
Instant File Initialization is a great help, as it bypasses the zero-initialization of the newly allocated space when a file is grown or created. This shaves off much of the overhead and performance impact, but it is only possible for data files, not log files.
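On newer versions you can check from T-SQL whether Instant File Initialization is in effect for the engine service; the `instant_file_initialization_enabled` column below exists in SQL Server 2016 SP1 and later:

```sql
-- Returns Y/N per service; only the database engine row is relevant here.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```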
As you can see, there are a handful of things to consider depending on your environment and how much your database grows. Size your database appropriately and find the right growth increment.
Note that autogrowth should be reserved for emergency situations; it is recommended to grow your files manually if at all possible. This gives you full control over all of the points above, and lets you determine when the operation occurs.
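Manually growing a file (and setting a sane autogrowth increment as a fallback) is a single `ALTER DATABASE` statement; `MyDB` and `MyDB_Data` below are placeholder names:

```sql
-- Grow the data file to a chosen size now, and keep a fixed-size
-- autogrowth increment as the emergency fallback.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, SIZE = 50GB, FILEGROWTH = 1GB);
```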
As for your numbers, "space available" is just that: the free space you have for additional data before a growth has to happen. One strategy is to get notified when you reach a certain threshold of used space, so that you can schedule and kick off growth of the appropriate file(s).
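To see used versus allocated space per file in the current database, a query along these lines works (`FILEPROPERTY` reports the used size in 8 KB pages):

```sql
-- Allocated, used, and free space per file, in MB.
SELECT name,
       size / 128.0                                     AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files;
```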
The SQL Server log contains all the data that was written to the database (in some form). So, you are right to be concerned about it.
The log is a ring buffer made up of virtual log files (VLFs). Once a VLF is not needed anymore, it can be reused. However, it will not be reused until the "write head" comes around to that VLF again.
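On SQL Server 2016 SP2 and later you can inspect the VLFs of a database's log, including which ones are still active (i.e. still "needed"):

```sql
-- One row per VLF in the current database's log.
-- vlf_active = 1 marks VLFs that cannot be reused yet.
SELECT vlf_begin_offset, vlf_size_mb, vlf_active, vlf_status
FROM sys.dm_db_log_info(DB_ID());
```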
The only difference between the simple and full recovery models (with a few small exceptions) is how quickly a VLF is "not needed anymore". Under the full recovery model, a VLF is "needed" at least until a log backup has been taken. (A full backup never clears the "needed" flag of a VLF.)
The only way to force a VLF to be overwritten is to first make sure it is not needed anymore and then pump enough transactions through the database that the VLF gets reused. That, however, is probably not acceptable to you.
Another way to force the data out is to shrink the log file. If you can take downtime, follow this process:
- set the database to READ_ONLY
- run the CHECKPOINT command
- take a log backup (if in full recovery)
- make sure other log readers (replication, CDC, ...) have caught up
- set the database to READ_WRITE
- create a new log file
- empty the original file, migrating the data to the new one using DBCC SHRINKFILE (there should not be any data left to migrate at this point, but you still need to execute this step)
- drop the old log file
- take another log backup (if in full recovery)
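Assuming placeholder names and paths (`MyDB`, `MyDB_Log`, `MyDB_Log2`, the backup and log locations), the downtime process above could be sketched roughly like this; whether the old log file can actually be removed depends on it being completely inactive, so test this on a copy first:

```sql
ALTER DATABASE MyDB SET READ_ONLY WITH ROLLBACK IMMEDIATE;

USE MyDB;
CHECKPOINT;

BACKUP LOG MyDB TO DISK = N'C:\Backup\MyDB_log.trn';  -- if in full recovery
-- verify that replication / CDC log readers have caught up before continuing

ALTER DATABASE MyDB SET READ_WRITE;

ALTER DATABASE MyDB ADD LOG FILE
    (NAME = MyDB_Log2, FILENAME = N'D:\Log\MyDB_Log2.ldf', SIZE = 4GB);

DBCC SHRINKFILE (MyDB_Log, EMPTYFILE);  -- nothing should be left to move
ALTER DATABASE MyDB REMOVE FILE MyDB_Log;

BACKUP LOG MyDB TO DISK = N'C:\Backup\MyDB_log2.trn';  -- if in full recovery
```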
You still need to test this, but this should get you to where you need to be.
If you can't take downtime, follow the same process without setting the database to READ_ONLY. You then run the risk of not getting a completely clean cutover, but if you do this regularly you should still be OK.
Note that, as some of the commenters have mentioned, a delete does not actually remove the data from the MDF files either. However, that can be dealt with in a similar fashion:
- rebuild all indexes on the offending table(s) (or better, all tables)
- create a new file
- empty the old file, migrating the data to the new file using DBCC SHRINKFILE
- drop the old file
Step 1 is necessary because the file migration does not change page contents, so individually deleted rows might survive the move. Rebuilding the indexes writes the data to new pages.
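A rough sketch of the data-file variant, with `MyDB`, `dbo.BigTable`, and the file names as placeholders (note that SQL Server will not let you remove the primary data file, so the drop step only works for secondary files):

```sql
USE MyDB;

-- step 1: rebuild indexes so deleted rows do not survive in old pages
ALTER INDEX ALL ON dbo.BigTable REBUILD;

-- step 2: create a new file in the same filegroup
ALTER DATABASE MyDB ADD FILE
    (NAME = MyDB_Data2, FILENAME = N'D:\Data\MyDB_Data2.ndf', SIZE = 50GB)
    TO FILEGROUP [PRIMARY];

-- steps 3 and 4: migrate all pages to the new file, then drop the old one
DBCC SHRINKFILE (MyDB_Data, EMPTYFILE);
ALTER DATABASE MyDB REMOVE FILE MyDB_Data;
```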
Additional notes:
The log cleanup has to happen after the data file cleanup as there is a chance that the data file cleanup moves some of the data into the log file.
Index rebuild operations and also a lot of data access operations utilize tempdb, so you have to shred that too. Tempdb is recreated on server restart, so you can stop the SQL Server service, delete the tempdb files and then start the service again.
Finally, this all causes a lot of blocks (~ database pages) to be unlinked from existing database files. This unlinking (due to file shrinkage or the deletion of files) does not force those blocks to be overwritten. So it is recommended to create the new files in the above processes on a separate disk and afterwards securely wipe the original disk.
Best Answer
If you just acquired an instance of SQL Server from someone, I would first ask: have you gone over the instance configuration to understand how the previous person had it set up?
I would suggest taking a look at Brent Ozar's sp_Blitz script. It is a stored procedure that collects a lot of useful information about your instance configuration and gives you a good overview of what it sees. Read Brent's blog on this script in detail before ever running it on your server, and fully understand what it does.
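For reference, once the script is installed, a basic invocation is just the stored procedure call (the parameter below is from the First Responder Kit documentation; verify it against the version you install):

```sql
EXEC dbo.sp_Blitz;  -- default health-check run
-- EXEC dbo.sp_Blitz @CheckUserDatabaseObjects = 0;  -- skip per-database object checks
```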
Regarding your database, are you running out of disk storage or database storage? How much free space does the database contain? I would first understand the architecture of the files before I start making changes. A good article to read on file management is from SQLSkills.com here.
Just my $0.02...