You should be able to do batch updates to this table without causing the log to grow uncontrollably.
Here is a simple testbed to show how I'd approach this:
USE tempdb;
SET NOCOUNT ON;
IF OBJECT_ID(N'dbo.Table1', N'U') IS NOT NULL DROP TABLE dbo.Table1;
CREATE TABLE dbo.Table1
(
id int NOT NULL
PRIMARY KEY CLUSTERED
IDENTITY(1,1)
, COL1 nvarchar(150) NOT NULL
, COL2 nvarchar(150) NOT NULL
) ON [PRIMARY];
INSERT INTO dbo.Table1 (COL1, COL2)
SELECT N' ' + sc1.name + N' '
, N' ' + sc2.name + N' '
FROM sys.syscolumns sc1
CROSS JOIN sys.syscolumns sc2;
GO
This loop actually removes the leading and trailing spaces, 4,000 rows per batch:
DECLARE @LogFreePercent DECIMAL(5,2);
/* this temp table will hold all the ID values for
rows we need to update
*/
IF OBJECT_ID(N'tempdb..#ids', N'U') IS NOT NULL
DROP TABLE #ids;
CREATE TABLE #ids
(
id int NOT NULL
) ON [PRIMARY];
CREATE CLUSTERED INDEX #ids_cx ON #ids(id);
/* this temp table will hold all the ID values for
rows we've already updated
*/
IF OBJECT_ID(N'tempdb..#fixed_ids', N'U') IS NOT NULL
DROP TABLE #fixed_ids;
CREATE TABLE #fixed_ids
(
id int NOT NULL
) ON [PRIMARY];
CREATE CLUSTERED INDEX #fixed_ids_cx ON #fixed_ids(id);
/* Insert the ID values for rows with leading or trailing
spaces. This is not efficient, but should do the trick,
and ensures we only update rows that actually have leading
or trailing spaces.
*/
INSERT INTO #ids (id)
SELECT id
FROM dbo.Table1 WITH (NOLOCK)
WHERE COL1 LIKE ' %'
OR COL1 LIKE '% '
OR COL2 LIKE ' %'
OR COL2 LIKE '% ';
/* Run a loop while we still have IDs that need processing
*/
WHILE EXISTS (SELECT TOP(1) 1 FROM #ids)
BEGIN
/* Use a CTE to select 4000 rows to update from the
temp table, outputting the IDs we've fixed into
the #fixed_ids table.
*/
;WITH src AS (
SELECT TOP(4000) Table1.*
FROM dbo.Table1
INNER JOIN #ids i ON i.id = Table1.id
)
UPDATE src
SET COL1 = LTRIM(RTRIM(COL1))
, COL2 = LTRIM(RTRIM(COL2))
OUTPUT inserted.id
INTO #fixed_ids;
/* Show some status details
*/
PRINT (CONVERT(nvarchar(30), GETDATE(), 120) + N' - updated '
+ CONVERT(nvarchar(20), @@ROWCOUNT) + ' rows.')
SELECT @LogFreePercent = 100 - su.used_log_space_in_percent
FROM sys.dm_db_log_space_usage su;
PRINT (N'Log free percent: ' + CONVERT(nvarchar(10), @LogFreePercent));
/* remove the processed rows from the temp tables
*/
DELETE FROM #ids
WHERE EXISTS (SELECT 1 FROM #fixed_ids fi WHERE fi.id = #ids.id);
DELETE FROM #fixed_ids;
/* checkpoint the database to allow log records to be truncated
*/
CHECKPOINT 1;
END
Run this afterwards to confirm no rows still have leading or trailing spaces; wrapping each value in pipe characters makes any stray spaces easy to spot:
SELECT [1] = '|' + t1.COL1 + '|'
, [2] = '|' + t1.COL2 + '|'
FROM dbo.Table1 t1;
You may notice I've got the dreaded WITH (NOLOCK) in the query that reads from the source table, dbo.Table1. This helps prevent our process from blocking other ongoing transactions. Since we're only using it to build a list of IDs to operate on, this is an acceptable use of READ UNCOMMITTED.
In my slow dev database, this processed 564,000 rows in just under three minutes, and the log never dipped below 97% free.
Install the most recent service pack, then install the latest cumulative update. According to SQL Server Builds, the most recent Service Pack for SQL Server 2014 is currently SP2, and CU10 is the most recent Cumulative Update.
Before applying either, take a backup of your databases onto physically separate media, and test the change in a non-production environment prior to applying it to your production SQL Server.
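For example, a full backup before patching might look like this (the database name and path are placeholders; point the path at media separate from your data files):

```sql
-- Full backup with a checksum, written to a drive separate from the data files
-- (YourDB and the path are placeholders -- substitute your own names)
BACKUP DATABASE YourDB
TO DISK = N'Z:\Backups\YourDB_pre_patch.bak'
WITH CHECKSUM, STATS = 10;
```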
Best Answer
As has been mentioned in the comments, auto shrink is generally a bad idea for a number of reasons (see the links they've provided), and this is most likely the cause of your transaction log issues.
Judging by what you've posted, you may have already run something similar, but it's a useful bit of code: it lists every database on the instance along with its recovery model and whatever is currently preventing transaction log reuse. The reasons are usually self-explanatory, such as CHECKPOINT or LOG_BACKUP.
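The original query isn't shown here, but a minimal version of one, using the sys.databases catalog view, might look like this:

```sql
-- List each database, its recovery model, and what (if anything)
-- is currently preventing transaction log reuse
SELECT name
     , recovery_model_desc
     , log_reuse_wait_desc   -- e.g. NOTHING, CHECKPOINT, LOG_BACKUP, ACTIVE_TRANSACTION
FROM sys.databases
ORDER BY name;
```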
Using this, you should be able to narrow down what is preventing your transaction log from shrinking and, by extension, what is causing it to grow.
Personally, I keep this in a stored procedure in my admin database that can be run whenever we see any issues relating to the transaction log, to find out its exact state and the main cause of those issues.
It's also worth noting, for auto shrink on the transaction log but mainly for the inevitable regrowth that follows, that activity in the database will stall while the file grows and the newly allocated area of disk is written.
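If regrowth is unavoidable, setting a sensible fixed autogrowth increment (rather than a percentage) keeps those pauses short and predictable. A sketch, assuming a database and log file named YourDB and YourDB_log (substitute your own names):

```sql
-- Check current size and growth settings for every log file on the instance
SELECT DB_NAME(database_id) AS database_name
     , name                 AS logical_name
     , size * 8 / 1024      AS size_mb        -- size is stored in 8 KB pages
     , CASE WHEN is_percent_growth = 1
            THEN CONVERT(varchar(10), growth) + '%'
            ELSE CONVERT(varchar(10), growth * 8 / 1024) + ' MB'
       END                  AS growth_setting
FROM sys.master_files
WHERE type_desc = N'LOG';

-- Switch a log file to a fixed 512 MB growth increment
-- (YourDB and YourDB_log are placeholders -- use your own names)
ALTER DATABASE YourDB
MODIFY FILE (NAME = N'YourDB_log', FILEGROWTH = 512MB);
```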