Let's go through a few points.
1- Why Even Shrink? What Are The Benefits And Cons?
First: is the 700 GB of extra space actually causing any issues? If the database is already at 2.3 TB, it will probably continue to grow, and if so, consider just leaving the space alone. In fact, you WANT free space in your DB if it's still growing! You don't want the files to autogrow frequently, as that causes physical file fragmentation, and each growth event causes blocking and performance issues while the file expands, unless you have Instant File Initialization enabled, in which case the blocking/locking issues are largely negated for data files.
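If you are not sure whether instant file initialization is enabled, you can check. On SQL Server 2016 SP1 and later there is a column in sys.dm_server_services for it; on older versions such as 2008, a common trick is to turn on trace flags 3004 and 3605, create a throwaway database, and look for "zeroing" messages in the error log (this is a sketch, and IFI_Test is a hypothetical database name):

```sql
-- SQL Server 2016 SP1+: the DMV reports IFI status directly.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;

-- Older versions (e.g. 2008): log file-zeroing details to the error log,
-- create a throwaway database, then inspect the log.
DBCC TRACEON (3004, 3605, -1);
CREATE DATABASE IFI_Test;
EXEC sp_readerrorlog 0, 1, N'zeroing';  -- a "Zeroing ... .mdf" line means IFI is OFF for data files
DROP DATABASE IFI_Test;
DBCC TRACEOFF (3004, 3605, -1);
```

Note that log files are always zero-initialized regardless; only data files benefit from IFI.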
When you take backups, those free 700 GB are not actually copied into the backup; only used pages are written, so leaving the space alone will not increase backup time, restore time, or backup size. It will, however, take up the full amount of disk space on restore, since the files are recreated at their original size. The only time it will affect restore time is if you do not have instant file initialization available, for example because the SQL Server service account lacks the Perform Volume Maintenance Tasks privilege, in which case the restore must zero out the whole file.
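You can verify that free space is not written to backups from the backup history in msdb: compare the allocated file size against the size of the backups actually taken. A sketch (the database name is hypothetical):

```sql
-- The backup size reflects used pages, not the full on-disk allocation,
-- so a 2.3 TB database with 700 GB free backs up well under 2.3 TB.
SELECT TOP (5)
    bs.database_name,
    bs.backup_finish_date,
    bs.backup_size            / 1048576.0 AS backup_size_mb,
    bs.compressed_backup_size / 1048576.0 AS compressed_mb   -- 2008+ when compression is used
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'YourDatabase'   -- hypothetical name
  AND bs.type = 'D'                        -- full database backups
ORDER BY bs.backup_finish_date DESC;
```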
Reindexing will also re-create free space, since index rebuilds need room inside the database to hold the rebuilt copy and sort data; shrink it away and the next rebuild will just grow the files again. But again, if you end up growing into and using that free space, there's no problem.
Perhaps if you are restoring to a smaller dev server that doesn't have as much disk space, then shrinking would be warranted.
2- To Shrink Or Not To Shrink
Now, assuming you absolutely need to reduce your DB size, read Paul Randal's post on moving the data to a new filegroup instead of shrinking. This avoids the massive index fragmentation that a shrink leaves behind, and if a table has a clustered key, rebuilding the clustered index onto the new filegroup moves and reorders the base data as well.
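The filegroup approach boils down to: add a new filegroup with its own file, rebuild the indexes onto it, then shrink or remove the now largely empty original file. A minimal sketch, with all database, file, table, and index names hypothetical:

```sql
-- 1. Add a new filegroup with its own data file.
ALTER DATABASE YourDatabase ADD FILEGROUP FG_New;
ALTER DATABASE YourDatabase
ADD FILE (NAME = N'YourDatabase_New',
          FILENAME = N'D:\Data\YourDatabase_New.ndf',
          SIZE = 100GB)
TO FILEGROUP FG_New;

-- 2. Rebuild the clustered index onto the new filegroup; this moves the
--    table's data pages there and writes them out contiguously.
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
ON dbo.BigTable (Id)
WITH (DROP_EXISTING = ON)
ON FG_New;

-- 3. Repeat for nonclustered indexes and other tables, then reclaim the
--    trailing free space in the old file without moving any pages.
DBCC SHRINKFILE (N'YourDatabase_Data', TRUNCATEONLY);
```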
In practice, TRUNCATEONLY affects both the LOG and the DATA files in 2008. BOL for SQL Server 2012 simply indicates that if you only wish to shrink a single database file, you should use DBCC SHRINKFILE, which lets you shrink either a data or a log file.
For 2008, BOL clearly states that TRUNCATEONLY affects only DATA files, yet the test below shows log space being released as well.
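For completeness, here is what the more targeted DBCC SHRINKFILE route looks like; this is a sketch, and the database and file names are hypothetical:

```sql
USE YourDatabase;  -- hypothetical database
GO
-- Release only the unused space at the end of a data file,
-- without moving any pages (so no fragmentation is introduced):
DBCC SHRINKFILE (N'YourDatabase_Data', TRUNCATEONLY);
GO
-- Shrink the log file down to a target size of 1024 MB:
DBCC SHRINKFILE (N'YourDatabase_Log', 1024);
```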
Let's test. To visualize what happens when using TRUNCATEONLY with SHRINKDATABASE, here is a rundown of what happened to a database called performance that I have installed locally.
SELECT @@VERSION;
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
Jun 28 2012 08:36:30
Copyright (c) Microsoft Corporation
Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
USE performance;
I ran DBCC LOGINFO
just to take a peek inside the transaction log and found that all of the virtual log files after about the 200th were inactive (status = 0). In my performance database, all those trailing inactive VLFs can be truncated and the space given back to the OS.
Here is some sample output from LOGINFO.
FileId FileSize StartOffset FSeqNo Status Parity CreateLSN
------ -------- ----------- ------ ------ ------ -----------------
2 253952 8192 23 2 64 0
2 253952 262144 24 2 64 0
2 270336 516096 25 2 64 24000000013400005
I then ran DBCC SQLPERF(LOGSPACE) and got the following output:
DBCC SQLPERF(LOGSPACE)
/*
Database Name Log Size (MB) Log Space Used (%) Status
Performance 9870,867 3,395626 0
*/
I then ran
DBCC SHRINKDATABASE(PERFORMANCE, truncateonly)
and got the following result
/*
DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
8 1 66464 288 44944 44944
8 2 116584 72 116584 72
*/
Next, I reran
DBCC SQLPERF(LOGSPACE)
and got
/*
Database Name Log Size (MB) Log Space Used (%) Status
Performance 910,8047 37,08699 0
*/
SHRINKDATABASE with TRUNCATEONLY gave back over 9 GB of space from the transaction log file to the OS.
I tried the same experiment with another database called performance2
sp_spaceused
/*
database_name database_size unallocated space
Performance2  52.19 MB      22.81 MB
*/
DBCC SHRINKDATABASE(Performance2, truncateonly)
sp_spaceused
/*
database_name database_size unallocated space
Performance2 31.13 MB 21.13 MB
*/
Which gave back about 21 MB to the OS (52.19 MB down to 31.13 MB). On SQL Server 2008, SHRINKDATABASE with TRUNCATEONLY shrinks both data and transaction log files.
Best Answer
There will always be some free space in the database, stored within pages that are too full to accept other data. Shrinking a database is an incredibly intensive operation, and it will take a long time, because it moves almost every record somewhere new, and a terabyte is a lot of data.

Interestingly, you may have more luck creating space by applying compression, such as the Hyperbac technology from Red Gate. That can help remove the free space and compress the rest, while potentially doing fewer writes (and certainly far less log activity) in the process. Your mileage may vary, though; I don't want to speak authoritatively on how Hyperbac compression of your data will work without knowing much about your data. Hope this helps though...
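If you want to experiment with the compression idea using native tooling first, SQL Server 2008 Enterprise ships sp_estimate_data_compression_savings, which samples a table and estimates how much ROW or PAGE compression would save before you rebuild anything. A sketch, with the table name hypothetical:

```sql
-- Estimate space savings from PAGE compression without changing anything.
EXEC sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'BigTable',   -- hypothetical table
    @index_id         = NULL,          -- all indexes
    @partition_number = NULL,          -- all partitions
    @data_compression = N'PAGE';

-- If the estimate looks worthwhile, apply it with a rebuild:
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
```

This is native data compression rather than Hyperbac, but it gives you a concrete before/after number on your own data.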