I have a Microsoft SQL Server 2012 instance and the following table:
CREATE TABLE dzp.contractid (
    id bigint PRIMARY KEY IDENTITY(1,1),
    VNR char(18) NOT NULL,
    CONSTRAINT UC_VNR UNIQUE (VNR)
);
For context, VNR represents a unique contract number in string form.
I want to import a CSV file with contract information, possibly containing multiple rows per contract. To avoid querying over a char(18) column, I am trying to normalize the data during loading, using the INSERT trigger of the staging table.
Flow is:
csv-file --> staging table --> insert trigger --> normalized tables
In my trigger, I am attempting the following:
BEGIN TRY
    INSERT INTO dzp.contractid (VNR)
    SELECT VNR
    FROM dzp.accounts_stage;
END TRY
BEGIN CATCH
    -- ignore unique constraint violation error, raise otherwise
    IF ERROR_NUMBER() <> 2627
        RAISERROR ('blah', 16, 1);
END CATCH
So what I hope to achieve is basically:
- Get all the contract numbers from the staging table
- Add all these numbers to the normalized table
- Ignore the error that gets thrown when the unique constraint is violated
- When working correctly, I now have a table with all the unique VNRs
Problem:
The unique constraint seems to block the entire INSERT, not just the rows that are actually duplicates.
Is there some way of inserting the data row by row, or of telling the constraint to evaluate it on a row-by-row basis?
Is this even the right approach? I'm happy to completely change it, since I can't imagine I'm the first to have this issue… For context, this is essentially a reporting database that gets batch-fed once per day, with about 1 million rows in total (from the CSV file), and has roughly 5 users running queries from time to time.
Thanks for your input!
Best Answer
A unique constraint violation fails the entire statement; you cannot have SQL Server ignore individual rows.
What you can do instead is insert only the rows that don't violate the constraint. There are several ways to do that.
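One common way is a NOT EXISTS filter in the insert. A minimal sketch, using the table and column names from your question (note the DISTINCT: it also deduplicates within the staging batch itself, which NOT EXISTS against the target alone would not catch):

```sql
INSERT INTO dzp.contractid (VNR)
SELECT DISTINCT s.VNR
FROM dzp.accounts_stage AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dzp.contractid AS c
    WHERE c.VNR = s.VNR   -- skip contract numbers already present
);
```

An equivalent variant is `SELECT VNR FROM dzp.accounts_stage EXCEPT SELECT VNR FROM dzp.contractid`, since EXCEPT removes duplicates as well.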
You could also use MERGE, which handles the "insert only the missing rows" logic in a single statement.
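A sketch of the MERGE approach, with the same assumed table names; the subquery's DISTINCT again guards against duplicate VNRs within the staging data:

```sql
MERGE dzp.contractid AS target
USING (
    SELECT DISTINCT VNR
    FROM dzp.accounts_stage
) AS source
    ON target.VNR = source.VNR
WHEN NOT MATCHED BY TARGET THEN
    INSERT (VNR) VALUES (source.VNR);  -- only contract numbers not yet in the target
```

Either way, you no longer need the TRY/CATCH around the insert, since no constraint violation is ever raised.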