USB 2.0 running in "hi-speed" mode can theoretically do about 60 MB/s (megaBYTES per second) maximum (i.e., 480 Mbit/s), but you'll never actually get that.
If you are using both drives on the same USB hub (as it appears in the picture), then you are sharing that available bandwidth between the devices. So at the theoretical best (with both drives being accessed), you'd probably only get about 30 MB/s to each drive. After adding protocol overhead and real-world physics to the mix, ~15 MB/s (BYTES) sustained sounds about right to me.
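To make that arithmetic concrete, here is a back-of-the-envelope sketch. The "halve it again for overhead" step is just the rough estimate from above, not a measured figure:

```python
# Back-of-the-envelope math for two drives sharing one USB 2.0 hub.
SIGNALING_MBIT = 480                  # USB 2.0 "hi-speed" rate in Mbit/s
ceiling_mb = SIGNALING_MBIT // 8      # 60 MB/s theoretical ceiling
per_drive = ceiling_mb // 2           # two drives split the hub's bandwidth
realistic = per_drive // 2            # rough guess: overhead halves it again

print(f"theoretical per drive: {per_drive} MB/s")   # 30 MB/s
print(f"realistic sustained:  ~{realistic} MB/s")   # ~15 MB/s
```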
You're not going to be able to reassign which USB ports are attached to which hub without some very good soldering (at the very least). :)
You're going to want to get that 2nd drive onto a different USB hub, or hooked up via FireWire, eSATA, or the like, to get it onto a different data bus.
Windows memory management is a complex thing. As you can see, it behaves differently with different devices.
Different operating systems also have different memory management.
Your question was very interesting. I am sharing an MSDN page which explains a part of the memory management in Windows, and more specifically "mapped files".
It's documentation for software developers, but Windows is software too.
One advantage to using MMF I/O is that the system performs all data transfers for it in 4K pages of data. Internally all pages of memory are managed by the virtual-memory manager (VMM). It decides when a page should be paged to disk, which pages are to be freed for use by other applications, and how many pages each application can have out of the entire allotment of physical memory. Since the VMM performs all disk I/O in the same manner—reading or writing memory one page at a time—it has been optimized to make it as fast as possible. Limiting the disk read and write instructions to sequences of 4K pages means that several smaller reads or writes are effectively cached into one larger operation, reducing the number of times the hard disk read/write head moves. Reading and writing pages of memory at a time is sometimes referred to as paging and is common to virtual-memory management operating systems.
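The page-at-a-time behavior described in the quote can be seen from user code via a memory-mapped file. A minimal Python sketch (the file name and sizes are arbitrary choices for the demo):

```python
import mmap
import os
import tempfile

# Create a small file spanning two pages (mmap.PAGESIZE is 4096 on most systems).
path = os.path.join(tempfile.gettempdir(), "mmf_demo.bin")
with open(path, "wb") as f:
    f.write(b"\0" * (2 * mmap.PAGESIZE))

# Map it into memory: reads and writes now go through the VMM a page at a time.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        mm[0:5] = b"hello"   # dirties the first page in memory
        mm.flush()           # asks the VMM to write dirty pages back to disk

with open(path, "rb") as f:
    first_bytes = f.read(5)  # the modified page made it to the file
os.remove(path)
```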
Unfortunately, we can't easily figure out how Microsoft implements the read/write path, since it isn't open source.
But we know it has to handle very different situations:
From         To
=========================
SSD          HDD
Busy HDD     SSD          // ??
NTFS         FAT
NTFS         ext4
Network      HDD
IDE0 slave   IDE0 master  // the IDE cable supports disk-to-disk transfer
IDE          SATA         // here you have separate device controllers
You get the point... An HDD may be busy, the file systems may be different (or may be the same)...
For example, the dd
command in Linux copies data "byte by byte". It's extremely fast (the heads of both HDDs can move in sync), but if the file systems are different (with different block sizes, for example), the copied data will not be readable, because the file system has a different structure.
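What dd does at the block level can be sketched in a few lines of Python: copy fixed-size chunks from source to destination without interpreting any file-system structure. The block size and file names here are arbitrary demo choices (dd itself is more often pointed at raw devices):

```python
import os
import tempfile

def raw_copy(src: str, dst: str, block_size: int = 4 * 1024 * 1024) -> int:
    """Copy src to dst block by block, like `dd bs=4M`; returns bytes copied."""
    copied = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            block = fin.read(block_size)
            if not block:          # end of input
                break
            fout.write(block)
            copied += len(block)
    return copied

# Demo on ordinary files:
src = os.path.join(tempfile.gettempdir(), "src.bin")
with open(src, "wb") as f:
    f.write(os.urandom(1024 * 1024))       # 1 MB of random data
dst = src + ".copy"
n = raw_copy(src, dst)
print(n)  # 1048576
```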
We know that RAM is much, much faster than an HDD. So if we have to do some data parsing (to fit the output file system), it is better to have this data in RAM.
Also, imagine you are copying the file directly from source to destination.
What happens if you overload the source with other data flows? What about the destination?
What if you have almost no free RAM at that moment?
...
Only Microsoft engineers know.
Best Answer
This is not part of the question, but I was wondering exactly how long it would take: USB 2.0 has a theoretical transfer rate of 480 Mbit/s, or 60 MB/s. 300 gigabytes is 307,200 megabytes; 307,200 MB ÷ 60 MB/s ≈ 5,120 seconds ≈ 85 minutes, and that is the absolute best case before any overhead.
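The estimate is quick to double-check (this uses the raw 60 MB/s ceiling; real sustained USB 2.0 throughput is far lower, so the real copy takes much longer):

```python
# Transfer-time estimate at the theoretical USB 2.0 maximum.
RATE_MB_S = 480 / 8          # 480 Mbit/s -> 60 MB/s
SIZE_MB = 300 * 1024         # 300 GB -> 307,200 MB
seconds = SIZE_MB / RATE_MB_S
print(f"{seconds:.0f} s (~{seconds / 60:.0f} minutes) at the theoretical maximum")
```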
This free software reviews well and seems to fit your need:
http://www.codesector.com/teracopy.php
Here is one more that also might:
http://www.copyhandler.com/en