Are there any reasons for periodic/cyclic slow-down of insert performance?
Yes: checkpoint events. With a write-intensive workload on a server with plenty of RAM, as you describe, a large number of 'dirty' pages accumulate in memory. At the predetermined checkpoint interval all these dirty pages get written to disk, causing a spike of IO requests. This in turn slows down the log commit writes, which manifests as the periodic increase in INSERT response time you observe. QED. This is, of course, just a guess, in the absence of a proper investigation. For a more definitive answer, I recommend you read How to analyse SQL Server performance and apply the techniques described there to identify the problem.
If the problem is indeed caused by checkpoint, then SQL Server 2012 comes with Indirect Checkpoints:
Indirect checkpoints, new in SQL Server 2012, provide a configurable database-level alternative to automatic checkpoints. ... Indirect checkpoints reduce checkpoint-related I/O spiking by continually writing dirty pages to disk in the background.
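Indirect checkpoints are enabled per database by setting a target recovery time. A minimal sketch, assuming a database named MyDb (substitute your own database name):

```sql
-- Enable indirect checkpoints by setting a non-zero target recovery time
-- (0 = use automatic checkpoints, the pre-2012 behavior).
ALTER DATABASE MyDb SET TARGET_RECOVERY_TIME = 60 SECONDS;

-- Verify the current setting.
SELECT name, target_recovery_time_in_seconds
FROM sys.databases
WHERE name = N'MyDb';
```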
For a more detailed discussion about checkpoint impact on performance read
SQL Q&A: Fine Tuning for Optimal Performance:
In Search of Spikes
Q. I’m troubleshooting an issue where we see periodic I/O spikes from one of our SQL Servers. I’ve narrowed it down to checkpoints using PerfMon, but I can’t tell which database is the major culprit. How can I drill in further?
Pre-SQL Server 2012 you have the option to reduce the recovery interval value. This will increase the frequency of checkpoints, but reduce the number of dirty pages each checkpoint has to write. Spreading out the data IO helps (buy more spindles). Separating the log IO onto its own path (its own spindle) does not help the checkpoint itself, but it isolates the log commits from the checkpoint's effects and thus keeps the INSERTs responsive. SSDs work miracles.
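On pre-2012 versions, the recovery interval is a server-wide setting changed via sp_configure. A sketch of lowering it to one minute:

```sql
-- 'recovery interval (min)' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Shorten the recovery interval so automatic checkpoints fire more
-- often, each one writing fewer dirty pages (smaller IO spikes).
EXEC sp_configure 'recovery interval (min)', 1;
RECONFIGURE;
```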
I would advise against any structural changes. In my opinion you already have the best clustered index for time series. Any structural change would have to be backed by root-cause performance analysis pointing to the current structure as the problem.
Is the default setting for Sql Server comparable to "innodb_flush_log_at_trx_commit = 1" or to "innodb_flush_log_at_trx_commit = 2"?
No. Assuming similar hardware, SQL Server should be just as slow as MySQL, since both would have to flush the log on every commit, i.e. for every row. Therefore there must be something else at play, very likely in the (not posted) C# code you use for your 'test'. A batch commit (a single transaction spanning all 100 rows) would explain the behavior.
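The difference between the two patterns can be sketched in T-SQL, assuming a hypothetical table dbo.T:

```sql
-- Per-row commits: each INSERT is its own transaction, so the log
-- must be flushed to disk once per row (100 flushes for 100 rows).
INSERT INTO dbo.T (id, val) VALUES (1, 'a');
INSERT INTO dbo.T (id, val) VALUES (2, 'b');
-- ... 98 more statements, each a separate commit

-- Batch commit: one transaction spanning all rows means a single
-- log flush at COMMIT, which is dramatically faster.
BEGIN TRANSACTION;
INSERT INTO dbo.T (id, val) VALUES (1, 'a');
INSERT INTO dbo.T (id, val) VALUES (2, 'b');
-- ... 98 more statements inside the same transaction
COMMIT TRANSACTION;
```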
Note that SQL Server does have an equivalent of innodb_flush_log_at_trx_commit = 2, namely Delayed durability. This is only available in SQL Server 2014 and later, and must be explicitly configured.
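A sketch of configuring delayed durability, again assuming a database MyDb and a hypothetical table dbo.T:

```sql
-- Allow delayed durable commits in the database.
ALTER DATABASE MyDb SET DELAYED_DURABILITY = ALLOWED;

-- Opt in per transaction: the commit returns before the log record
-- reaches disk (comparable to innodb_flush_log_at_trx_commit = 2,
-- and likewise trading durability for speed).
BEGIN TRANSACTION;
INSERT INTO dbo.T (id, val) VALUES (1, 'a');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

-- Or force it for every transaction in the database:
ALTER DATABASE MyDb SET DELAYED_DURABILITY = FORCED;
```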
The C# code, MySql and Sql Server are all running on my laptop.
Does your laptop do write caching? If so, you must repeat your tests with write caching off, as it is not supported.
Best Answer
Yes, lots. One particularly good reference is the Data Loading Performance Guide.
There's nothing particularly special about bulk inserts. There isn't a special queue, or anything like that. Each is processed as a normal command, which will either proceed or be blocked based on concurrent locking activity. To be clear: yes concurrent bulk imports to the same clustered table from different server connections are possible.
If two connections attempt to insert the same records into a unique index, one will be blocked behind the other. When the first one releases its exclusive lock, the second will throw a uniqueness violation error (unless esoteric options like
IGNORE_DUP_KEY
are present on the target index).

Not materially different. SQL Server 2016 and later can achieve minimally-logged bulk inserts to an index without trace flag 610.
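For reference, IGNORE_DUP_KEY is specified when the unique index is created; a sketch on a hypothetical table dbo.T:

```sql
-- With IGNORE_DUP_KEY = ON, inserting a duplicate key value raises a
-- warning and discards the duplicate row, instead of failing the
-- whole statement with a uniqueness violation error.
CREATE UNIQUE CLUSTERED INDEX CIX_T
ON dbo.T (id)
WITH (IGNORE_DUP_KEY = ON);
```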
It can be tricky to achieve truly concurrent bulk inserts to a table with one (or more) b-tree indexes in practice. You may need to disable lock escalation for the target and/or take other actions. It is a more complex undertaking when several indexes are present. See the Data Loading Performance Guide for more details.
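Disabling lock escalation is a per-table setting; a sketch for a hypothetical target table dbo.T:

```sql
-- Prevent row/page locks from escalating to a table-level lock,
-- which would serialize the concurrent bulk inserts.
ALTER TABLE dbo.T SET (LOCK_ESCALATION = DISABLE);
```

The trade-off is higher lock memory consumption, so this is worth doing only when escalation is demonstrably blocking the concurrent loads.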