There are a few different questions in here:
Q: "Looking at perfmon I can see very high Avg. Disk Write Queue."
That Perfmon counter isn't relevant for SQL Server anymore. SQL Server batches IO operations together, and it's normal to see big jumps here. Instead, check out the counters for Avg. Disk sec/Read and Avg. Disk sec/Write. These tell you how fast the storage is responding to your requests. The downside is that they're only at the drive (volume, mount point) level. To get IO stats for specific files, query sys.dm_io_virtual_file_stats. David Pless has a great query here:
http://blogs.msdn.com/b/dpless/archive/2010/12/01/leveraging-sys-dm-io-virtual-file-stats.aspx
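If you just want a quick look before digging into that post, a minimal sketch against the DMV looks something like this (simplified; the linked query does considerably more):

```sql
-- Per-file read/write latency since the last SQL Server restart.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY avg_write_latency_ms DESC;
```

Keep in mind these numbers are cumulative since startup, so a one-time spike gets averaged out.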
Q: "Looking at SQL ASYNC_IO_COMPLETION and OLEDB are featuring as the more prominent waittypes. These suggest to me that SQL isn't the defining factor in the poor performance."
ASYNC_IO_COMPLETION is typically data file writes that happen in the background, asynchronously. When you insert/update/delete stuff, SQL Server has to get the data into the log file immediately - that's WRITELOG waits. It changes the data pages in memory, and those dirty pages stay cached until they're written to disk later. ASYNC_IO_COMPLETION waits aren't holding up end users. They can indicate slow drive performance for data file writes, but that's a system bottleneck, not an end-user-facing bottleneck.
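To see how the user-facing log waits stack up against the background data-file waits on your own box, a quick sketch against the wait stats DMV:

```sql
-- Compare WRITELOG (end users wait on this) with background IO waits.
-- Numbers are cumulative since the last restart or stats clear.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('WRITELOG', 'ASYNC_IO_COMPLETION', 'IO_COMPLETION')
ORDER BY wait_time_ms DESC;
```

If WRITELOG is small while ASYNC_IO_COMPLETION is large, slow data-file writes aren't what your users are feeling.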
In my experience, OLEDB is usually caused by performance monitoring tools like Spotlight, SQL Sentry, Idera SQL DM, etc that are running traces and grabbing performance data over the wire.
Q: "What do others think? What other steps can I take to prove (or disprove) my theory that the disk is slow?"
I've got a video on how to do that here:
http://www.brentozar.com/archive/2011/08/how-prove-its-san-problem-webcast-video/
Short story - IO is probably not your biggest problem based on what you described here.
You want to use bulk write operations instead of individual updates and inserts.
```javascript
// BEFORE your iteration loop
var bulk = db.another_collection.initializeUnorderedBulkOp();

// INSIDE your iteration loop
// instead of db.another_collection.update(…)
bulk.find(query).update(yourUpdateDocumentHere);
// instead of db.another_collection.insert(…)
bulk.insert(yourDocumentToInsertHere);

// AFTER your iteration loop
bulk.execute();
```
This should speed up the write operations dramatically.
EDIT: As for the bottleneck, extending arrays is a costly operation, because it often forces MongoDB to relocate documents once the allocated padding is exceeded. As a rule of thumb: if you have an array which you have to extend often, there is something wrong with your data model. More often than not, it is worth revising it.
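One common way to revise such a model is the bucketing pattern: cap each document's array at a fixed size and roll over to a fresh document when it fills up, so no single array grows without bound. A hypothetical sketch (the collection and field names here are made up, not from your schema):

```javascript
// Push into a bucket that still has room; if none exists, upsert a new one.
// Capping at 100 entries keeps each document's array from growing forever.
db.event_buckets.update(
    { owner_id: ownerId, count: { $lt: 100 } },  // a bucket with spare capacity
    {
        $push: { events: newEvent },
        $inc:  { count: 1 }
    },
    { upsert: true }                             // no bucket with room: start a new one
);
```

Because each bucket is preallocated up to a known maximum, documents rarely outgrow their padding and relocations mostly disappear.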
Best Answer
Below are some good ways to improve BULK INSERT performance:
The max degree of parallelism should be configured on the server rather than left at the default. You can refer to my answer on how to configure it here.
Some good references with interesting stats are given in The Data Loading Performance Guide here . Have a look at BATCHSIZE and ROWS_PER_BATCH in the guide as well.
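As a quick illustration of where BATCHSIZE fits, here is a hedged sketch; the file path and table name are made up for the example:

```sql
-- Hypothetical load: commits every 10,000 rows instead of one giant
-- transaction, and TABLOCK enables minimal logging where the conditions
-- in the Data Loading Performance Guide are met.
BULK INSERT dbo.SalesStaging
FROM 'C:\loads\sales.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 10000,
    TABLOCK
);
```

Smaller batches also mean that a failure partway through only rolls back the current batch, not the whole load.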