No, it is not logged anywhere. Go vote and state your business case; this is one of the long list of things that should be fixed in SQL Server.
This was requested years ago on Connect (probably first in the SQL Server 2000 or 2005 timeframe), then again on the new feedback system:
And now it has been delivered in the following versions:
In the very first public CTP of SQL Server 2019, this improvement only surfaces under trace flag 460. That sounds kind of secret, but it was published in this Microsoft whitepaper. Going forward, it will be the default behavior (no trace flag required), though you will be able to control it via a new database scoped configuration, VERBOSE_TRUNCATION_WARNINGS.
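On builds where the database scoped configuration has shipped, you should be able to toggle the behavior per database, something along these lines (a sketch; check the documentation for your build before relying on it):

```sql
-- Revert to the old, terse Msg 8152 for this database only
ALTER DATABASE SCOPED CONFIGURATION
    SET VERBOSE_TRUNCATION_WARNINGS = OFF;

-- Restore the new, detailed Msg 2628 (the eventual default)
ALTER DATABASE SCOPED CONFIGURATION
    SET VERBOSE_TRUNCATION_WARNINGS = ON;
```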
Here is an example:
USE tempdb;
GO
CREATE TABLE dbo.x(a char(1));
INSERT dbo.x(a) VALUES('foo');
GO
Result in all supported versions prior to SQL Server 2019:
Msg 8152, Level 16, State 30, Line 5
String or binary data would be truncated.
The statement has been terminated.
Now, on SQL Server 2019 CTPs, with the trace flag enabled:
DBCC TRACEON(460);
GO
INSERT dbo.x(a) VALUES('foo');
GO
DROP TABLE dbo.x;
DBCC TRACEOFF(460);
Result shows the table, the column, and the (truncated, not full) value:
Msg 2628, Level 16, State 1, Line 11
String or binary data would be truncated in table 'tempdb.dbo.x', column 'a'. Truncated value: 'f'.
The statement has been terminated.
Until you can move to a supported version/CU, or move to Azure SQL Database, you can change your "automagic" code to actually pull max_length from sys.columns (along with the column name, which you must be getting from there anyway), and then apply LEFT(column, max_length) or whatever PG's equivalent is. Or, since that just means you'll silently lose data, go figure out which columns are mismatched and fix the destination columns so they fit all of the data from the source. Given metadata access to both systems, and the fact that you're already writing a query that must automagically match source -> destination columns (otherwise this error would hardly be your biggest problem), you shouldn't have to do any brute-force guessing at all.
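As a sketch of the metadata lookup (using the dbo.x table from the example above; your real destination table will differ):

```sql
-- Find the declared length of each column in the destination table,
-- so the generated INSERT can trim values to fit.
SELECT c.name,
       c.max_length  -- in bytes; note nvarchar uses 2 bytes/char, and -1 means (MAX)
FROM sys.columns AS c
WHERE c.[object_id] = OBJECT_ID(N'dbo.x');

-- Then truncate explicitly in the generated statement, e.g.:
-- INSERT dbo.x(a) SELECT LEFT(s.a, 1) FROM dbo.source_table AS s;
-- (dbo.source_table is a placeholder for your actual source.)
```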
One thing you might try is breaking those fields out into their own table(s). Then reference them by ID. You could then set up a UNIQUE index on those ID columns, and still use your hashing trick to ensure you have unique values in the new tables (if the values need to be unique there).
Example table(s):
[Shipper]
    ShipperID INT,
    ShipperName NVARCHAR(MAX)

[Consignee]
    ConsigneeID INT,
    ConsigneeName NVARCHAR(MAX)

etc.
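A minimal DDL sketch of that layout (table names, keys, and IDENTITY are assumptions for illustration, not taken from your schema):

```sql
-- Hypothetical lookup tables; the main table stores only the IDs.
CREATE TABLE dbo.Shipper
(
    ShipperID   INT IDENTITY(1,1) PRIMARY KEY,
    ShipperName NVARCHAR(MAX) NOT NULL
);

CREATE TABLE dbo.Consignee
(
    ConsigneeID   INT IDENTITY(1,1) PRIMARY KEY,
    ConsigneeName NVARCHAR(MAX) NOT NULL
);

-- Note: NVARCHAR(MAX) columns cannot be key columns in an index,
-- which is why the hash-column trick mentioned above would be used
-- to enforce uniqueness of the values themselves.
```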
Unfortunately, with this type of design problem you have to get a little creative. Having that many NVARCHAR(MAX) columns is problematic on many levels.
Best Answer
Well, this won't happen unless
Also, what happens with this, please? It cannot fail unless there is some processing.