As some people have already mentioned, you could create a new database and "copy" stuff over from the old one. That would be the best option for you. However, I've noticed that you want to do this quite regularly, so your best bet is Redgate SQL Data Compare and Redgate SQL Compare. Both are part of the Redgate SQL Toolbelt package.
So what you do:
- Create an empty DB with a small initial size (see the sketch after this list).
- Use Redgate SQL Compare to copy the database structure, functions, etc. from the old database.
- Use Redgate SQL Data Compare to copy the data from the old database to the new one.
- You then work on the dev database. At any time you can run just Data Compare to refresh the dev data, or, if you have made schema changes, deploy those with SQL Compare first and then run SQL Data Compare.
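As a rough sketch of that first step, creating a small dev database in T-SQL might look like this. The database name, logical file names, paths, and sizes below are all placeholders; adjust them for your environment:

CREATE DATABASE DevDB
ON PRIMARY
(
    NAME = N'DevDB_data',                 -- logical data file name (placeholder)
    FILENAME = N'D:\SQLData\DevDB.mdf',   -- adjust the path for your server
    SIZE = 512MB,                         -- small initial size
    FILEGROWTH = 256MB                    -- fixed increment, not the 1 MB / 10% defaults
)
LOG ON
(
    NAME = N'DevDB_log',
    FILENAME = N'D:\SQLLogs\DevDB_log.ldf',
    SIZE = 128MB,
    FILEGROWTH = 128MB
);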
What is good about Data Compare is that after you copy those 30 GB of data (you can start with just some of the tables), subsequent runs only need to 'recompare' and sync the changes, not the whole 30 GB. That means far less impact on both databases than copying everything over normally.
If your database has 16 MB of free space, and the drive it's on has plenty of free space, then don't worry about it. As long as there is sufficient space and you haven't disabled the database's ability to autogrow, the database will grow the data file when it needs to. Of course, you should set your data file's autogrowth to some realistic increment, balancing the duration of each growth event against how frequently growth will have to happen. The defaults of 1 MB or 10% are not realistic at all, and should have been changed ages ago. If you have instant file initialization enabled you can lean toward larger increments, but I have no idea what a good balance for you might be.
You can check autogrow settings here:
SELECT name, type_desc, size, growth, is_percent_growth
FROM [database_name].sys.database_files;
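Since instant file initialization comes up above: on SQL Server 2016 SP1 and later you can check whether it is enabled (it applies only to data files, never to log files) with a query along these lines:

SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;  -- 'Y' for the database engine service means IFI is on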
Ideally you will create the database with sufficient size to avoid autogrowth altogether, since this operation can be expensive (especially if you don't have instant file initialization) and can block all other activity for the duration of the growth event. You may want to manually expand the file now, or during a maintenance window, to avoid this happening during peak activity. So proactively you can do this (e.g. to change the autogrowth setting to 100 MB):
ALTER DATABASE [database_name]
MODIFY FILE (NAME = N'database_name_data', FILEGROWTH = 102400KB);
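And if you want to manually grow the file ahead of time, as suggested above, a sketch might look like this; the logical file name and the target size are placeholders, and SIZE must be larger than the file's current size:

ALTER DATABASE [database_name]
MODIFY FILE (NAME = N'database_name_data', SIZE = 50GB);  -- grow in one deliberate step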
If your drive has 16 MB of free space, then you need to free up some space somehow, or move the database to a drive that has more space. It's possible that the drive has been taken up by an overgrown log file, probably because you're in full recovery model and have never taken a log backup. In this case you can either:
1. Switch to the simple recovery model, run CHECKPOINT, and then run DBCC SHRINKFILE to make the log a reasonable size. Resist the temptation to shrink the file to 1 MB; you don't want to get into a tug-of-war with your disk space.
2. Stay in full, run BACKUP LOG, and then run DBCC SHRINKFILE, with the same caveats as (1). (A sketch of both options follows this list.)
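Here is a hedged T-SQL sketch of both options; the logical log file name, the 1024 MB target, and the backup path are all placeholders for your own values:

USE [database_name];

-- Option (1): switch to simple, checkpoint, shrink
ALTER DATABASE [database_name] SET RECOVERY SIMPLE;
CHECKPOINT;
DBCC SHRINKFILE (N'database_name_log', 1024);  -- target in MB; resist shrinking to 1 MB

-- Option (2): stay in full, take a log backup first, then shrink
BACKUP LOG [database_name] TO DISK = N'X:\Backups\database_name_log.trn';  -- path is a placeholder
DBCC SHRINKFILE (N'database_name_log', 1024);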
Now, it could also be that you're in the simple recovery model already and the log ballooned to support some atypically large transaction. In that case you may still be able to do (1) without any worry, but if you're going to run those large transactions again, or you're not doing (1) or (2) regularly, then you're only "solving" the problem temporarily. Get a bigger disk, or use the full recovery model and manage your log much more effectively.
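If you're not sure what is keeping the log large, sys.databases will tell you what the log is waiting on before it can be reused:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'database_name';  -- LOG_BACKUP here means the log is waiting on a log backup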
That all said, this is just a guess; maybe your drive is actually being taken up by a large (and useless) page file, or your MP3 collection, or videos from your last vacation.
Best Answer
Some Initial Caveats:
If you've read about the concerns and risks and you still need to do this shrink because you freed a significant amount of space, hopefully the rest of this answer will help you out. But do consider the risks.
There are two main approaches to consider here:
1.) Shrink - Yes, do the actual shrink. Consider using DBCC SHRINKFILE instead of DBCC SHRINKDATABASE; you have more control over what gets shrunk and how. This will cause some performance degradation for sure - it is a large operation doing a lot of I/O. You can potentially get away with repeated shrinks to a target size that gets progressively smaller. This is the "A.)" example in the DBCC SHRINKFILE documentation linked above, where a data file is shrunk to a 7 MB target size. That format is a good way to shrink repeatedly as your downtime window allows (see the sketch at the end of this answer). I would try this on a development copy first to see how the performance looks, how large or small an increment you can manage, and to estimate the expected timing in production. This is an online operation - you can run it with users in the system accessing the database being shrunk - but there will almost certainly be performance degradation. So monitor what you are doing to the server, and ideally pick a downtime window or a period of lighter activity. Always remember: every time you shrink, you fragment your indexes and should do an index rebuild. If you shrink in chunks over a prolonged period of time, you incur that cost each time you can't get it all done in one window.
2.) New Database - You could create a new database and migrate the data to it. You would have to script out the empty database and all of its keys, indexes, objects, procs, functions, etc., and then migrate the data. You could write scripts for this, or use a tool like SQL Data Compare from Red Gate or similar tools from other vendors. This is more setup work on your side - more development and testing - and depending on your environment it may also blow out your downtime window, but it is an option to consider.
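A minimal sketch of the data-migration part, assuming the empty target database and its tables already exist; the database and table names here are made up:

-- Copy one table's data from the old database into the new one.
-- Repeat per table, in dependency order if foreign keys exist;
-- tables with identity columns also need SET IDENTITY_INSERT ON/OFF
-- and an explicit column list.
INSERT INTO NewDB.dbo.Orders WITH (TABLOCK)
SELECT *
FROM OldDB.dbo.Orders;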
When I am forced to shrink a database:
If this were my environment, I'd look to leave a fair amount of white space in the data file, because I like being a disk hog and like to be prepared for future/unexpected growth. So I would be okay giving space back if we had just deleted a majority of the data, but I'd never trust anyone saying "it will never grow again", and I'd still leave some white space. The route I'd probably go with (sigh) is the shrink approach, if I had smaller downtime windows and didn't want to incur the complexity of creating an empty DB and migrating data to it. So I would shrink it a bunch of times incrementally (based on my testing in dev and the desired final size, progressively choosing a smaller target each time) and then rebuild the indexes, along the lines of the sketch below. And then I'd never tell anyone that I shrank my database ;-)
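To make the incremental approach concrete, here is a rough sketch; the logical file name, table name, and target sizes are placeholders, and the right increments depend entirely on your own testing:

USE [database_name];

-- Shrink in steps across downtime windows, progressively lowering the target (sizes in MB).
DBCC SHRINKFILE (N'database_name_data', 60000);
DBCC SHRINKFILE (N'database_name_data', 50000);
DBCC SHRINKFILE (N'database_name_data', 40000);

-- Shrinking fragments indexes, so rebuild them afterwards (repeat per table).
ALTER INDEX ALL ON dbo.YourLargeTable REBUILD;  -- dbo.YourLargeTable is a placeholder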