I'm currently at around 7 billion rows on one table and approaching the max disk space for the entire database.
I need new data for that one large table to start landing on a new disk.
I've looked at partitioning by datetime, which seems the obvious choice.
This is an interim solution before we get a new server in six months, but I need a solution now.
How would I achieve this? Can I implement manual partitioning so all new data goes to the new disk?
Everything I've seen requires setting up something like monthly partitions, whereas I just want to put all new data on the new disk.
One option is to split the existing data by month, and then manually move the partitions around.
I'm not even sure if this is possible, so would appreciate some feedback on the best solution to achieve the above.
SQL Server 2017 Enterprise Edition.
Best Answer
The normal short-term solution here is to simply move some indexes and tables to the new filegroup to free up space in the primary filegroup.
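For example, a large nonclustered index can be rebuilt onto a new filegroup in a single statement. This is a sketch, not a drop-in script: the database, filegroup, file path, and index names (`MyDb`, `NewFG`, `IX_BigTable_dt`) are all placeholders for your own objects.

```sql
-- Placeholder names throughout: MyDb, NewFG, the file path, dbo.BigTable, IX_BigTable_dt.
ALTER DATABASE MyDb ADD FILEGROUP NewFG;
ALTER DATABASE MyDb ADD FILE
    (NAME = NewFG_1, FILENAME = 'E:\SqlData\NewFG_1.ndf', SIZE = 10GB)
    TO FILEGROUP NewFG;

-- Rebuild an existing index onto the new filegroup. DROP_EXISTING avoids a
-- separate drop + create, and ONLINE = ON (Enterprise Edition) keeps the
-- index available to queries during the rebuild.
CREATE NONCLUSTERED INDEX IX_BigTable_dt
    ON dbo.BigTable (dt)
    WITH (DROP_EXISTING = ON, ONLINE = ON)
    ON NewFG;
```

Each index moved this way frees its pages in the primary filegroup, which may buy enough headroom to reach the new server without touching the big table at all.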
But to answer the question as asked: yes, you can. The key idea is to create a new table on a partition scheme that is compatible with the existing table, partitioned on a clustered index key column. Then switch the existing table into a partition on that scheme, and split the partition function so that new rows land on the new filegroup.
And I just discovered that you can create a partitioned table with only the "primordial" partition, and then switch a non-partitioned table into the primordial partition without needing a CHECK constraint on the non-partitioned table.
Here's an end-to-end demo:
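The following is a sketch of those steps rather than a drop-in script. It assumes a table `dbo.BigTable` whose clustered primary key leads on a `datetime2` column `dt`, living on `PRIMARY`, and a filegroup `NewFG` that already has a file on the new disk; all of those names are placeholders.

```sql
-- 1. Partition function with an empty boundary list: one "primordial" partition
--    covering the whole range, so no CHECK constraint is needed for the switch.
CREATE PARTITION FUNCTION pf_dt (datetime2)
    AS RANGE RIGHT FOR VALUES ();

-- 2. Scheme: the primordial partition maps to the table's current filegroup
--    (required for SWITCH); NEXT USED marks NewFG as the target of the split.
CREATE PARTITION SCHEME ps_dt
    AS PARTITION pf_dt TO ([PRIMARY]);
ALTER PARTITION SCHEME ps_dt NEXT USED NewFG;

-- 3. Empty table with a definition identical to dbo.BigTable (same columns,
--    same clustered index), but created on the partition scheme.
CREATE TABLE dbo.BigTable_Partitioned
(
    id  bigint    NOT NULL,
    dt  datetime2 NOT NULL,
    -- ... remaining columns identical to dbo.BigTable ...
    CONSTRAINT PK_BigTable_Partitioned PRIMARY KEY CLUSTERED (dt, id)
) ON ps_dt (dt);

-- 4. Metadata-only switch of the whole existing table into partition 1.
ALTER TABLE dbo.BigTable SWITCH TO dbo.BigTable_Partitioned PARTITION 1;

-- 5. Split just ahead of the newest existing row (placeholder boundary shown).
--    Because the new right-hand partition is empty, the split is metadata-only,
--    and all rows with dt >= the boundary go to NewFG from now on.
ALTER PARTITION FUNCTION pf_dt() SPLIT RANGE ('2023-06-01T00:00:00');

-- 6. Swap names so the application keeps using the original table name.
EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_Partitioned', 'BigTable';
```

The boundary value in step 5 matters: if it were at or below existing data, the split would have to physically move rows, so pick a point in the (near) future and run the split before any rows past it arrive.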
You will have to drop and recreate any foreign key constraints referencing this table. Recreating the FK constraints after the switch requires table scans during the DDL, so these can be lengthy offline operations. To optimize for uptime, you can recreate the FK constraints with NOCHECK and CHECK them later; until then they will be enforced for DML but not trusted by the query optimizer.
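As a sketch of that NOCHECK-then-CHECK pattern, with a hypothetical child table `dbo.Child` whose `big_id` column references the big table:

```sql
-- Recreate the FK without scanning existing rows. New DML is validated
-- immediately, but the constraint is marked untrusted.
ALTER TABLE dbo.Child WITH NOCHECK
    ADD CONSTRAINT FK_Child_BigTable
    FOREIGN KEY (big_id) REFERENCES dbo.BigTable (id);

-- Later, during a quiet period: validate all existing rows so the optimizer
-- can trust the constraint again. This is the table-scanning step.
ALTER TABLE dbo.Child WITH CHECK CHECK CONSTRAINT FK_Child_BigTable;
```

You can spot any constraints still awaiting validation via `is_not_trusted` in `sys.foreign_keys`.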