I insert only a small amount of data into this table (about 1,000 rows) and it takes more than 4 minutes
Remote data modifications through a linked server use the sp_cursor model. The effect is similar to issuing 1,000 separate single-row inserts (one round trip per row). If a round trip takes 250 ms, 1,000 such trips take 4 minutes and 10 seconds. A bulk loading method such as bcp or SSIS will generally be far more efficient (unless the number of rows to be inserted is tiny).
Another alternative for ad-hoc needs is to build a string containing a single INSERT statement with multiple rows in the VALUES clause, then run it with EXECUTE ... AT, for example:
-- Note: the statement runs at the remote server, so use a local table name, not four-part syntax
DECLARE @sql varchar(max) = 'INSERT dbo.Test VALUES ';
-- Build a list of 1000 numbers to insert
SELECT @sql += '(' + CONVERT(varchar(11), n) + '),'
FROM dbo.Numbers AS N
WHERE n BETWEEN 1 AND 1000;
-- Remove trailing comma
SET @sql = LEFT(@sql, LEN(@sql) - 1);
-- For debugging
PRINT @sql;
-- Execute the finished statement at the remote server
EXECUTE (@sql) AT AZURE;
The constructed INSERT statement executes at the Azure database, so the table is referenced by its local name there; building a dynamic insert statement with a four-part target name buys you nothing.
Note that EXECUTE ... AT requires the RPC and RPC Out options to be enabled for the linked server.
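If those options are not already set, they can be enabled with sp_serveroption; a minimal sketch, assuming the linked server is named AZURE as in the example above:

```sql
-- Enable RPC in both directions for the linked server (name assumed from the example)
EXEC sp_serveroption @server = N'AZURE', @optname = 'rpc',     @optvalue = 'true';
EXEC sp_serveroption @server = N'AZURE', @optname = 'rpc out', @optvalue = 'true';
```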
The VALUES clause of a plain INSERT statement is limited to 1,000 row value expressions. There are ways around that (a VALUES clause in a CTE does not have that restriction). You could also choose to build 1,000-row batches or single-row insert statements, but if there is that much data on a regular basis, you would probably be better off using one of the bulk loading methods instead.
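As an illustrative sketch of the CTE workaround (reusing the dbo.Test table and AZURE linked server from the example above), the VALUES clause is moved into a CTE as a row source, where the 1,000-row limit does not apply:

```sql
-- Sketch: VALUES used as a row source in a CTE avoids the 1,000-row INSERT limit
DECLARE @sql varchar(max) = '
WITH src(n) AS
(
    SELECT v
    FROM (VALUES (1), (2), (3) /* ... thousands of rows ... */) AS t(v)
)
INSERT dbo.Test
SELECT n FROM src;';
EXECUTE (@sql) AT AZURE;
```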
If you're sure the user in question has VIEW SERVER STATE permission (and from your screenshot it looks like they do), then there are a number of possible reasons, previously covered in an MSDN blog post, ranging from:
- Performance Objects and counters set-up during the SQL Server installation failed.
- A mixture of 64 and 32 bit platforms.
- Registry permissions have been skewed.
To resolve this, we can use the steps outlined for reinstalling the performance counters in a different Stack Exchange post:
Using an elevated administrator command prompt, perform the following steps.
- Change to the BINN directory of the SQL Server instance you want to correct (e.g. C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\Binn).
- Execute unlodctr <<REGISTERED SERVER NAME>>, for example unlodctr MSSQL$SQL2008 or SQLAgent$SQL2008 ...
- Execute lodctr /T:<<perf-sql* matching the counters you desire to load>>, for example perf-MSSQL$SQL2008sqlctr.ini for SQL Server or perf-SQLAgent$SQL2008sqlagtctr.ini for SQL Agent. The /T switch is important: it loads the SQL Server performance counter provider as a trusted provider.
- Cycle the Remote Registry service: net stop "Remote Registry" then net start "Remote Registry".
- Force a WMI synchronization using winmgmt /resyncperfctr "<<PID>>", where PID is the process ID of WinPriv.exe (you can get this from Task Manager).
Depending on the environment, further steps may also be required.
From my experience, there can be only one catalog per linked server.
You will need to create a separate linked server for each database on the remote server (DB2) that you want to be available for querying from your "local" server (SQL Server).
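As a rough sketch of what that looks like (the server name, data source, and catalog below are placeholders, not details from the original answer), each DB2 database gets its own linked server definition with its own @catalog:

```sql
-- One linked server per DB2 database; all names here are hypothetical
EXEC master.dbo.sp_addlinkedserver
    @server     = N'DB2_SALES',                            -- hypothetical linked server name
    @srvproduct = N'Microsoft OLE DB Provider for DB2',
    @provider   = N'DB2OLEDB',
    @datasrc    = N'db2host',                              -- hypothetical DB2 host/data source
    @catalog    = N'SALESDB';                              -- the one DB2 database this server exposes
```

A query against a second database would then go through a second linked server (e.g. DB2_HR) defined the same way with a different @catalog.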