Sql-server – How to load JPEG files as varbinary(MAX) into a SQL Server database
bulk, sql-server-2005

I have 1000 pictures in JPEG format. How do I load them into a SQL Server database as varbinary(MAX), while still getting bulk-insert performance?
Related Solutions
If the data is in memory, you can use `SqlBulkCopy` in .NET (or similar) to send it to SQL Server; there is no need to instantiate a file. Load a staging table in SQL Server first, then use `MERGE` from that staging table into the actual table.

If you don't want a persistent staging table, create a #temp table and use that in the subsequent `MERGE`. I'm not sure about the usefulness of table-valued parameters here.

If you want to load the final table directly, you can `MERGE` straight from a file using `OPENROWSET(BULK …)`, but that means instantiating a file, which you want to avoid.

In summary: there is no direct "in client memory" to final table solution; you need an intermediate table or file. A minimal sketch of the staging-table pattern follows.
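For illustration, here is one way the staging-table route could look. The table and column names (dbo.Pictures, dbo.Pictures_Staging, PictureId, ImageData) are placeholders, not from the original question, and note that `MERGE` requires SQL Server 2008 or later; on 2005 you would use separate `UPDATE` and `INSERT` statements instead:

```sql
-- Hypothetical staging table; load it first (e.g. via SqlBulkCopy from the client).
CREATE TABLE dbo.Pictures_Staging
(
    PictureId int            NOT NULL PRIMARY KEY,
    ImageData varbinary(max) NOT NULL
);

-- Once the bulk copy into the staging table has finished, merge into the real table.
MERGE dbo.Pictures AS tgt
USING dbo.Pictures_Staging AS src
    ON tgt.PictureId = src.PictureId
WHEN MATCHED THEN
    UPDATE SET tgt.ImageData = src.ImageData
WHEN NOT MATCHED THEN
    INSERT (PictureId, ImageData)
    VALUES (src.PictureId, src.ImageData);
```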
A few more days of reading and experimentation and I was able to (mostly) answer a lot of these:
I found this buried in the ODP.NET documentation (ironically not in the `OracleBulkCopy` docs):

The ODP.NET Bulk Copy feature uses a direct path load approach, which is similar to, but not the same as, Oracle SQL*Loader. Using direct path load is faster than conventional loading (using conventional SQL `INSERT` statements).

So it appears that it does use direct path.
This I was able to verify by doing a large bulk copy operation and getting the index properties from SQL Developer: the index did appear as `UNUSABLE` while the bulk copy was in progress. However, I've also discovered that `OracleBulkCopy.WriteToServer` will refuse to run if the index starts in an `UNUSABLE` state, so clearly there's more going on here, because if it were as simple as disabling and rebuilding the index then it shouldn't care about the initial state.

It does make a difference specifically if the index is also a constraint. I found this little gem in the documentation linked above:
Enabled Constraints

During an Oracle bulk copy, the following constraints are automatically enabled by default:

- `NOT NULL`
- `UNIQUE`
- `PRIMARY KEY` (unique constraints on not-null columns)

`NOT NULL` constraints are checked at column array build time. Any row that violates the `NOT NULL` constraint is rejected. `UNIQUE` constraints are verified when indexes are rebuilt at the end of the load. The index is left in an Index Unusable state if it violates a `UNIQUE` constraint.

The documentation is a little hazy on what happens during the load, especially with primary keys, but one thing is absolutely certain: it behaves differently with a primary key than without one. Since `OracleBulkCopy` will happily allow you to violate index constraints (and punt the index into an `UNUSABLE` state when it's done), my hunch is that it builds the PK index during the bulk copy but simply doesn't validate it until afterward.

I'm not sure whether the difference observed is within Oracle itself or just a quirk of `OracleBulkCopy`; the jury's still out on this one. Either way, `OracleBulkCopy` will throw an exception if an index is initially in the `UNUSABLE` state, so it's really a moot point.

If there are other factors, indexes (and especially PK indexes) are still the most important, as I found out by the experiment sketched below:
Creating a global temporary table with the same schema (using `CREATE TABLE ... AS SELECT`), then bulk copying into the temporary table, and finally doing a plain old `INSERT` from the temp table into the real table. Since the temp table has no index, the bulk copy happens very fast, and the final `INSERT` is also fast because the data is already in a table (I haven't tried the `APPEND` hint yet, since a 5M-row table-to-table copy already takes less than a minute).

I'm not yet sure of the potential ramifications of (ab)using the temporary table space this way, but so far it hasn't given me any trouble, and it's much safer than the alternative in that it prevents corruption of either the rows or the indexes.
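In Oracle SQL, that experiment looks roughly like this; MY_TABLE and MY_TABLE_STG are hypothetical names standing in for the real indexed table and its temporary copy:

```sql
-- Clone the schema with no rows; the temporary table carries no PK index.
-- ON COMMIT PRESERVE ROWS keeps the bulk-copied rows across commits.
CREATE GLOBAL TEMPORARY TABLE my_table_stg
ON COMMIT PRESERVE ROWS
AS SELECT * FROM my_table WHERE 1 = 0;

-- ... point OracleBulkCopy.WriteToServer at MY_TABLE_STG here ...

-- Plain INSERT into the indexed table (the APPEND hint was left untried,
-- since this step already finished in under a minute for 5M rows).
INSERT INTO my_table
SELECT * FROM my_table_stg;
COMMIT;
```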
The success of this also pretty clearly demonstrates that the PK index is the problem, as that is the only practical difference between the temp table and the permanent table - both started with zero rows during the performance tests.
Conclusion: Don't bother trying to bulk copy more than around 100k rows into an indexed Oracle table using ODP.NET. Either drop the index (if you don't actually need it) or "preload" the data into a different (non-indexed) table.
Best Answer
Yes, use `OPENROWSET` with the `BULK` option. For a multi-row load you need a format file; to read one file at a time as a single value, the `SINGLE_BLOB` option works without one.
Assuming you want to attach blobs to existing records, something like:
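A minimal sketch, assuming an existing table dbo.Pictures with a key column PictureId and a varbinary(max) column ImageData (all hypothetical names, as is the file path):

```sql
-- Attach one JPEG to an existing row; SINGLE_BLOB returns the whole file
-- as a single varbinary(max) value in a column named BulkColumn.
UPDATE dbo.Pictures
SET    ImageData = (SELECT BulkColumn
                    FROM OPENROWSET(BULK N'C:\images\1.jpg', SINGLE_BLOB) AS b)
WHERE  PictureId = 1;
```

Because `OPENROWSET(BULK ...)` only accepts a literal file path, covering all 1000 files means generating one such statement per file, for example with dynamic SQL in a loop over the file names.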