The database has been running on SQL Server 2000 Standard for the last 6 years in Full Recovery mode.
This is fine as long as you are doing regular full and log backups to keep the log file size under control. There are plenty of articles and questions on here regarding log management.
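As a minimal sketch, that backup routine looks something like this (the database name and backup paths are hypothetical; substitute your own):

```sql
-- Full backup, typically scheduled nightly or weekly.
BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_full.bak';

-- Log backup, scheduled frequently (e.g. every 15-60 minutes).
-- In Full Recovery mode this is what allows log space to be reused
-- and keeps the log file from growing without bound.
BACKUP LOG MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_log.trn';
```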
At the beginning of this summer the database was about 5 GB. Since then, the only thing we have done beyond normal usage is some extensive deletions.
Large deletions are fine, but understand that the space that data used was not released back to the operating system unless you shrank the data file.
This week there was a problem with the PC, and I was forced to back up the database and restore it on a SQL Server 2008 R2 Express instance, setting the database to 2008 compatibility mode.
I hope you tested your code! T-SQL syntax changed a good bit from SQL 2000 to SQL 2008 R2, and I have supported many databases that broke because of those changes. Also note that Express has a limit on how large a database can grow: 10 GB with SQL 2008 R2.
The backup file was about 1 GB, but when I restored it, the MDF was 9 GB in size! I checked the old MDF and it was the same size. I checked the size of the tables and they cannot account for the 9 GB reported!
When you back up a database, the backup only grabs the data itself along with some configuration information about the database. One of those configuration items is the size of the data and log files. If you check your database properties, I would expect you might find the initial size of your data file (MDF) set to 9 GB.
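You can verify this on the restored database itself: `sys.database_files` reports each file's current size in 8 KB pages (this assumes you run the query in the context of the restored database; the database name is hypothetical):

```sql
USE MyDatabase;  -- hypothetical database name
GO
SELECT name,                        -- logical file name (you'll need this for DBCC SHRINKFILE)
       type_desc,                   -- ROWS (data file) or LOG
       size * 8 / 1024 AS size_MB   -- size is stored in 8 KB pages
FROM sys.database_files;
```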
I did a shrink but the size did not change.
A database shrink (DBCC SHRINKDATABASE) is likely what you performed. Although shrinking is generally ill-advised, if you need to regain disk space or just bring the data file (MDF) of your database under control, you will need to perform a data file shrink. There are plenty of articles on MSDN and blog posts that show how to do this.
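A minimal sketch of a data file shrink, assuming the logical file name is `MyDatabase_Data` (check `sys.database_files` for the real name) and a 6 GB target, both of which are hypothetical here:

```sql
USE MyDatabase;  -- hypothetical database name
GO
-- Shrink the data file to a target of 6144 MB (6 GB).
-- Leave some headroom above the actual data size so the file
-- does not immediately autogrow again.
DBCC SHRINKFILE (N'MyDatabase_Data', 6144);
```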
Is there a chance that Full Recovery can affect the size of the MDF file?
Nope.
I am thinking of setting the recovery model to Simple, backing it up, and restoring it. Is that going to make a difference?
With the size of your data file, nope.
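For completeness, switching the recovery model is a one-liner; it only changes how the transaction log is managed and does nothing to the data file (database name hypothetical):

```sql
-- Simple recovery: the log truncates on checkpoint, so no log backups
-- are needed, but point-in-time restore is no longer possible.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
```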
Frankly, I wouldn't use either. You can find your biggest tables immediately, with more flexibility, and not worry about whether @updateusage has been set.
CREATE PROCEDURE dbo.TopTables
    @NumberOfObjects INT = 100,
    @MinimumSizeInMB INT = 10
AS
BEGIN
    SET NOCOUNT ON;

    SELECT TOP (@NumberOfObjects)
        [object]    = QUOTENAME(s.name) + N'.' + QUOTENAME(t.name),
        -- DISTINCT so partitioned indexes are not counted once per partition
        index_count = COUNT(DISTINCT i.index_id),
        -- reserved_page_count is in 8 KB pages
        size_in_MB  = SUM(p.reserved_page_count) * 8 / 1024.0
    FROM sys.schemas AS s
    INNER JOIN sys.objects AS t
        ON s.[schema_id] = t.[schema_id]
    INNER JOIN sys.indexes AS i
        ON t.[object_id] = i.[object_id]
    INNER JOIN sys.dm_db_partition_stats AS p
        ON t.[object_id] = p.[object_id]
        AND i.index_id = p.index_id
    WHERE t.is_ms_shipped = 0        -- exclude system objects
    GROUP BY s.name, t.name
    HAVING SUM(p.reserved_page_count) * 8 / 1024.0 >= @MinimumSizeInMB
    ORDER BY size_in_MB DESC;
END
GO
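Hypothetical usage, listing the 25 largest user tables that take at least 50 MB:

```sql
EXEC dbo.TopTables @NumberOfObjects = 25, @MinimumSizeInMB = 50;
```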
Best Answer
At the risk of making an unpopular statement, I think that shrinking might be a good option for you here. Let me explain...
Shrinking a database data file will almost certainly cause index fragmentation. That matters most in a production database, where you likely cannot afford the time to defragment after the shrink.
However, it sounds like you have some time up your sleeve. Perhaps you could restore the databases onto a non-production server where you will have ample time and resources to shrink and defrag the databases. Then you could migrate these databases to your SSD.
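On that non-production copy, the defrag step could be as simple as rebuilding the indexes on each affected table (the table name here is hypothetical):

```sql
-- Rebuilding repairs the fragmentation introduced by the shrink.
-- Rebuilds need free space to work in, so run them after the shrink
-- completes, and repeat per table as needed.
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;
```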
Of course this takes time. You might need to keep taking log backups on production so you can roll forward changes, or possibly set up something like replication to keep your data in sync.
I'll be interested in what other people think, and what you ultimately choose to do.