SQL Server – Partitioning / indexing an extremely large table

sql-server, sql-server-2008-r2

I'm working on indexing and partitioning a single data warehouse table that weighs in at around 500 GB. The table is a heap, has over a hundred TEXT columns, and has the TEXT_IN_ROW option enabled. I didn't design this table, and I don't have any capacity to change it in the immediate future.
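For reference, that option is set per table with sp_tableoption; something like this (the 256-byte in-row limit here is just an example value):

-- Illustrative only: enables text-in-row storage with a 256-byte limit
EXEC sp_tableoption 'dbo.DailyTable', 'text in row', '256';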

I've been tasked with partitioning it. We're tackling this using a copy of the database on a test server. The server can push about 2 GB per second to its SSD RAID arrays, so I/O isn't a significant bottleneck, and it has 16 cores (2 NUMA nodes) and 64 GB of RAM.

My approach is to disable all the nonclustered indexes, create a partition function and partition scheme (about 12 partitions, all on the PRIMARY filegroup – the aim is to enable rolling maintenance and provide more localized inserts for the nightly ETL, not to distribute I/O), and then build a clustered index on the table using this partition scheme.
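The setup steps look roughly like this (the boundary dates are illustrative, and LoadDate being a DATE column is an assumption; partition functions and partition schemes live in separate namespaces, so both can use the monthly_on_primary name):

-- Disable the nonclustered indexes (the table itself is a heap)
ALTER INDEX ALL ON dbo.DailyTable DISABLE;

-- 11 monthly boundaries => 12 partitions (dates illustrative)
CREATE PARTITION FUNCTION monthly_on_primary (date)
AS RANGE RIGHT FOR VALUES
    ('20130201', '20130301', '20130401', '20130501',
     '20130601', '20130701', '20130801', '20130901',
     '20131001', '20131101', '20131201');

-- All partitions map to PRIMARY: rolling maintenance, not I/O distribution
CREATE PARTITION SCHEME monthly_on_primary
AS PARTITION monthly_on_primary
ALL TO ([PRIMARY]);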

I'm creating the clustered index and partitioning the table as follows:

CREATE CLUSTERED INDEX CX_DailyTable ON DailyTable (LoadDate, SeqNumber)
  WITH (SORT_IN_TEMPDB = ON)
  ON monthly_on_primary (LoadDate);

Obviously, it's taking a long time (3 hours as of this writing), and I certainly don't expect it to be quick. What worries me slightly is that tempdb is now pushing nearly 1 TB and steadily climbing, despite the table itself being around half that size. The MS docs I've read suggest tempdb usage should be roughly the size of the final table/clustered index:

http://msdn.microsoft.com/en-us/library/ms188281.aspx

If SORT_IN_TEMPDB is set to ON, there must be sufficient free space in tempdb to store the sort runs, and sufficient free space in the destination filegroup to store the final index structure. The sort runs contain the leaf rows of the index.

Are their estimates incorrect? Is tempdb being used for substantially more than just the sort runs? Or is creating this clustered index somehow doubling the size of the table? (That seems unlikely: it's a rather wide table, and I estimate adding a clustered index costs an extra 4-8 bytes per row, plus the non-leaf pages.)
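For what it's worth, a query along these lines shows how much tempdb each session has allocated to internal objects (which is where sort runs are counted):

-- tempdb allocations by session; sort runs show up as internal objects
SELECT session_id,
       SUM(internal_objects_alloc_page_count) * 8 / 1024 AS internal_alloc_MB
FROM   sys.dm_db_task_space_usage
WHERE  database_id = 2  -- tempdb
GROUP BY session_id
ORDER BY internal_alloc_MB DESC;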

Best Answer

My approach is to disable all the nonclustered indexes [...] then build a clustered index for the table using this partition scheme.

Creating a clustered index on a heap automatically rebuilds all nonclustered indexes (even disabled ones). The nonclustered indexes are rebuilt but not partitioned. Assuming the desired end state is a partitioned clustered table with aligned indexes, rebuilding the nonclustered indexes to be non-aligned is entirely wasted effort.
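If aligned indexes are the goal, it is better to drop the nonclustered indexes before the conversion and recreate them on the partition scheme afterwards. A sketch, with hypothetical index and column names:

-- Before the clustered index build (index name hypothetical)
DROP INDEX IX_DailyTable_Example ON dbo.DailyTable;

-- After the clustered index build: recreate aligned to the scheme
CREATE NONCLUSTERED INDEX IX_DailyTable_Example
ON dbo.DailyTable (SomeColumn)
ON monthly_on_primary (LoadDate);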

What worries me slightly is that tempdb is now pushing nearly 1 TB and steadily climbing, despite the current table being around half that size. The MS docs I've read suggest the tempdb space usage should be about the size of the final table/clustered index.

The question of sort space is very complex. To understand all the details (including the effect of parallelism), you would need to read an entire series of posts by the SQL Server Query Processing Team carefully. Converting a heap to a partitioned clustered table with parallelism enabled is probably close to the worst case.

At its most basic (neglecting most of the important information in the QP Team's posts), you are asking SQL Server to run a query like:

SELECT *
FROM DailyTable
ORDER BY
    $partition.monthly_on_primary(LoadDate),
    LoadDate,
    SeqNumber;

This query will not execute quickly, regardless of where you choose to write the sort runs that do not fit in memory. Add to that the work of building a complete new copy of the entire data set in separate rowsets, and the work of pointlessly rebuilding the nonclustered indexes...

Advice

There are many considerations in getting this change to work efficiently. The important ones are to avoid sorting wherever possible, and to use parallel, minimally-logged bulk load whenever you can.

The specifics depend on details not given in the question, and a full solution is beyond the scope of an answer here. Nevertheless, the outline of an approach that has worked well for me personally in the past is:

  • Extract the existing data using bcp to one file per final partition
  • Drop the existing table and create the new one
  • Load the new table using parallel minimally-logged bulk load

The per-partition data extract needs to be ordered on (LoadDate, SeqNumber). Ideally, you would avoid a sort operation entirely. If you have an existing nonclustered index on (LoadDate, SeqNumber), you can extract the data in the right order without sorting, provided you construct the query correctly.
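With such an index in place, each extract might look something like this (server, database, file name, and date range are all illustrative; one bcp invocation per partition):

rem Native-format extract of one month, pre-ordered to avoid a sort
bcp "SELECT * FROM DW.dbo.DailyTable WHERE LoadDate >= '20130101' AND LoadDate < '20130201' ORDER BY LoadDate, SeqNumber" queryout p201301.dat -n -T -S testserver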

Once the per-partition data has been extracted to separate files (in parallel, if your hardware is up to it), the source table can be dropped, freeing space. A new partitioned heap or clustered table is then created and bulk loaded with the pre-sorted data, possibly also in parallel.
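A sketch of those last two steps (column types, file path, and recovery model are assumptions; the real table's hundred-plus TEXT columns are omitted):

DROP TABLE dbo.DailyTable;

CREATE TABLE dbo.DailyTable
(
    LoadDate  date NOT NULL,  -- assumed type
    SeqNumber int  NOT NULL   -- assumed type; remaining columns omitted
) ON monthly_on_primary (LoadDate);

-- One load per extract file; TABLOCK enables minimal logging under the
-- SIMPLE or BULK_LOGGED recovery model, and ORDER tells the engine the
-- file is pre-sorted (the hint is ignored if the target is a heap)
BULK INSERT dbo.DailyTable
FROM 'D:\extract\p201301.dat'
WITH (DATAFILETYPE = 'native', TABLOCK, ORDER (LoadDate, SeqNumber));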

Done right, the entire process requires no more than 1x the data size in working space, achieves the fastest possible transfer rates in both directions, and generates the least amount of log.