Sql-server – How to save a severely fragmented database/table

database-design, fragmentation, performance, sql-server

I've recently made a discovery that might explain my slow SQL run times.

I found out that the default in SSMS is to allow the mdf file to grow by only 1 MB at a time. When one table alone is roughly 23 GB, this is a phenomenally HUGE issue. The result is that I have a severely fragmented table (and likely the entire db is severely fragmented). It makes updates incredibly slow, as SQL Server must first grow the file to create space for the update before writing it, and it has to piece together thousands of 1 MB fragments each time before performing calculations.

So my question: is there a way to resolve this and defragment? I've already gone into Properties – Files and reset the mdf growth increment to 100 MB, and the recovery model is set to simple. Can I defragment, or do I need to recreate the table, or even the entire db? I know it will be a lot of time and work, but my main goal for this db is time efficiency. It needs to be as fast as possible, and if that means starting from the beginning again, so be it.
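(For reference, the same autogrowth change can also be made in T-SQL; the database and logical file names below are placeholders for your own:)

```sql
-- Sketch: set the data file's autogrowth increment to 100 MB.
-- 'MyDatabase' and 'MyDatabase_Data' are placeholder names;
-- find your logical file name with: SELECT name FROM sys.database_files;
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 100MB);
```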

In addition, it was brought to my attention that rather than having one very large mdf, I can create 'filegroups'. I understand this is something like splitting the database across multiple files; one for each table, or even each index. How can I safely create these filegroups, and will they also help optimize query time and efficiency?

Best Answer

If you have a clustered index on the table, the fastest way to defragment it is to rebuild (drop and recreate) the clustered index. On versions prior to SQL Server 2005 you can defragment standard indexes using the DBCC INDEXDEFRAG command:

http://msdn.microsoft.com/en-us/library/ms177571.aspx
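On SQL Server 2000, a call might look like this (the database, table, and index names are placeholders for your own):

```sql
-- Sketch: defragment the leaf level of one index in place.
-- MyDatabase, dbo.MyLargeTable, and IX_MyLargeTable are placeholder names.
DBCC INDEXDEFRAG (MyDatabase, 'dbo.MyLargeTable', IX_MyLargeTable);
```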

The preferred method on SQL Server 2005 and above is the ALTER INDEX statement:

http://technet.microsoft.com/en-us/library/ms188388.aspx
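A minimal sketch of that workflow on 2005+: first measure fragmentation, then rebuild or reorganize depending on how bad it is. `dbo.MyLargeTable` is a placeholder table name, and the 5%/30% thresholds are the commonly cited rules of thumb, not hard limits:

```sql
-- Check fragmentation for all indexes on the table.
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
    DB_ID(), OBJECT_ID('dbo.MyLargeTable'), NULL, NULL, 'LIMITED');

-- Heavily fragmented (roughly > 30%): rebuild the indexes.
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;

-- Lightly fragmented (roughly 5-30%): reorganize instead.
ALTER INDEX ALL ON dbo.MyLargeTable REORGANIZE;
```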

If you have no indexes at all on the table, then you will need to defragment the file using Windows (note that this only moves fragments of the file closer together on disk; it does not reorder the data inside the file).

As for the filegroups question: multiple files can improve performance when they are placed on separate disks, because you can then take advantage of concurrent reads and writes. It is generally considered good practice to put large indexes in a separate file, on a different drive from the table they belong to.
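A hedged sketch of how that might be set up; all database, file, path, index, and column names below are placeholders, and the index definition is illustrative only:

```sql
-- Add a filegroup backed by a file on a second drive.
ALTER DATABASE MyDatabase ADD FILEGROUP Indexes;

ALTER DATABASE MyDatabase
ADD FILE (NAME = MyDatabase_Indexes,
          FILENAME = 'D:\SQLData\MyDatabase_Indexes.ndf',
          SIZE = 1GB, FILEGROWTH = 100MB)
TO FILEGROUP Indexes;

-- Rebuild an existing index onto the new filegroup in one step.
CREATE NONCLUSTERED INDEX IX_MyLargeTable
ON dbo.MyLargeTable (SomeColumn)
WITH (DROP_EXISTING = ON)
ON Indexes;
```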

I hope this helps you.