You will want to load your data into a new table in small batches, then drop the existing table and rename the new one. I put together a quick example using the Sales.Currency table in AdventureWorks; something similar should work for you.
First, create your new table, complete with the new datatype you want to use:
CREATE TABLE [Sales].[Currency_New](
    [CurrencyCode] [nchar](4) NOT NULL,
    [Name] [varchar](128) NOT NULL,
    [ModifiedDate] [datetime] NOT NULL,
    CONSTRAINT [PK_Currency_New_CurrencyCode] PRIMARY KEY CLUSTERED
    (
        [CurrencyCode] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
GO
GO
Then insert your records in batches. I am using 10 rows per batch here so the example is easy to follow, but you will likely want something larger, say 10,000 rows at a time. For 30 million rows I'd even go as high as a 100,000-row batch; that's the limit I've typically used with larger tables:
DECLARE @RowsInserted INT, @InsertVolume INT
SET @RowsInserted = 1
SET @InsertVolume = 10 -- Set to the batch size

WHILE @RowsInserted > 0
BEGIN
    INSERT INTO [Sales].[Currency_New] ([CurrencyCode]
        ,[Name]
        ,[ModifiedDate])
    SELECT TOP (@InsertVolume)
        SC.[CurrencyCode]
        ,SC.[Name]
        ,SC.[ModifiedDate]
    FROM [Sales].[Currency] AS SC
    LEFT JOIN [Sales].[Currency_New] AS SCN
        ON SC.[CurrencyCode] = SCN.[CurrencyCode]
    WHERE SCN.[CurrencyCode] IS NULL

    SET @RowsInserted = @@ROWCOUNT
END
I usually do a sanity check and verify the rowcounts are the same before cleaning up:
SELECT COUNT(*) FROM [Sales].[Currency]
SELECT COUNT(*) FROM [Sales].[Currency_New]
Once you are confident you have migrated your data, you can drop the original table:
DROP TABLE [Sales].[Currency]
Last step, rename the new table so that users don't have to change any code. Note that sp_rename's second argument must be the bare new name, not schema-qualified or bracketed, or you'll end up with a table literally named "[Sales].[Currency]":
EXEC sp_rename 'Sales.Currency_New', 'Currency';
GO
I don't know how long this will take. I'd suggest you try doing this when you have a clear maintenance window and users aren't connected.
HTH
This error occurs because the transaction log is full and is waiting on a log backup (LOG_BACKUP). Until the log can be truncated you can't perform any logged action on the database, and the SQL Server Database Engine raises error 9002.
To solve this issue, you have to do the following:
- Take a Full database backup.
- Shrink the log file to reduce the physical file size.
- Create a LOG_BACKUP.
- Create a LOG_BACKUP Maintenance Plan to take backup logs frequently.
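A rough sketch of those steps in T-SQL, assuming the database is in the FULL recovery model; the file paths and the log file's logical name are placeholders you'd replace with your own:

```sql
-- 1. Take a full database backup.
BACKUP DATABASE [SharePoint_Config] TO DISK = N'D:\Backup\SharePoint_Config.bak';

-- 2. Back up the log so it can be truncated...
BACKUP LOG [SharePoint_Config] TO DISK = N'D:\Backup\SharePoint_Config.trn';

-- 3. ...then shrink the physical log file (target size in MB).
--    Find the logical name first:
--    SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
DBCC SHRINKFILE (N'SharePoint_Config_log', 1024);
```

For the ongoing maintenance plan, schedule the BACKUP LOG statement (or the equivalent maintenance-plan task) to run frequently.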
Note: the shrink operation affects SQL Server performance while it runs. It also causes index fragmentation and can slow queries that scan a range of an index.
So it's recommended that, before going live, you set up a LOG_BACKUP maintenance plan to back up the log file frequently, so you can avoid shrinking on Production.
For more details, check "The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP".
Hope this helps you
Of course you are. The log contains a record for every change to every page with sufficient detail to either redo or undo the change.
Unrelated. You're deleting 18 million rows from each of 17 separate "indexes": 16 nonclustered indexes and one heap.
And it's safer to disable/rebuild the nonclustered indexes than to drop/create them, because the index definitions are retained on the server.
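For example (the index and table names here are illustrative):

```sql
-- Disabling keeps the index definition on the server but drops its data pages.
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable DISABLE;

-- ... run the large DELETE here ...

-- Rebuilding re-creates the index from the retained definition.
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable REBUILD;
```

Avoid ALTER INDEX ALL ... DISABLE on a table with a clustered index, since disabling the clustered index makes the table inaccessible; disable the nonclustered indexes individually instead.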