If it is the USB drive, and the failure is size-related, then the enclosure is failing to correctly process write (and probably read) requests for sectors beyond some limit. The file size itself does not matter; the cause is that the larger file has "pieces" falling beyond the addressable boundary.
Due to disk fragmentation, it is difficult to confirm or deny this hypothesis directly, but you can try with any tool that displays the disk's fragmentation map. If the hypothesis is right, you should see a mostly empty disk with files crowded at the beginning and nothing past a certain point, and that point will not be at the end of the disk.
On a FAT32 disk you could try to fill the disk with small files, each 8 KB in size, until the "reachable" area filled up and the disk became unwriteable. But this disk is NTFS, and in any case the method isn't very precise or certain.
If at all possible, I would mount the disk on a Linux live distribution. At that point you could try to read the disk one sector at a time:
fdisk -l
will tell you how many 512-byte sectors there are on the external disk. Then
dd bs=512 if=/dev/sdc of=test skip=NNNNN count=1
will request a read of sector NNNNN (zero-based: skip=NNNNN skips the first NNNNN sectors and reads the next one).
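As a sketch, the single-sector test can be wrapped in a small helper. Note that `dd` exits 0 even on a short read past the end of the device, so counting the bytes actually produced is more reliable than checking its exit status. The device name `/dev/sdc` in the usage line is an assumption; substitute your disk's device node.

```shell
# check_sector DEVICE SECTOR -> succeeds iff the sector could be read.
# dd exits 0 even when it reads nothing (e.g. past end of device),
# so we count the bytes it actually produced instead.
check_sector() {
    bytes=$(dd bs=512 if="$1" skip="$2" count=1 2>/dev/null | wc -c)
    [ "$bytes" -eq 512 ]
}

# Hypothetical usage (device name is an assumption):
# check_sector /dev/sdc 1000000 && echo readable || echo UNREADABLE
```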
If there is a limit on NNNNN, you will observe that:
NNNNN=1: it works
NNNNN=MAX_NUM: it fails
NNNNN=MAX_NUM/2: it fails
...
so you can run a classic bisection algorithm to determine where the critical sector C lies (any sector before C is readable, any sector after it is not). If such a sector exists, you have either an incredibly weird hardware failure, or the proof you were looking for of the enclosure's guilt.
Update - finding the boundary by bisecting: an example
So let's say the disk is 4 TB, i.e. roughly 8,000,000,000 512-byte sectors. We know that sector 1 is readable and sector 8,000,000,000 is not. Let READABLE be 1 and let UNREADABLE be 8,000,000,000. Then the algorithm is:
let TESTING be (READABLE + UNREADABLE)/2
if sector TESTING is readable then READABLE becomes equal to TESTING
else, UNREADABLE becomes equal to TESTING.
Lather, rinse, repeat with the new values of (UN)READABLE.
When READABLE and UNREADABLE become two consecutive numbers, that's your boundary.
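The loop above translates almost directly into shell. This is a sketch, not a finished tool: the device node and the initial bounds in the usage line are assumptions, to be replaced with your disk's node and the sector count reported by `fdisk -l`.

```shell
# Succeeds iff sector $2 of device $1 is readable.
# (dd exits 0 on a short read, so count the bytes instead.)
sector_readable() {
    [ "$(dd bs=512 if="$1" skip="$2" count=1 2>/dev/null | wc -c)" -eq 512 ]
}

# find_boundary DEVICE READABLE UNREADABLE
# Bisects until READABLE and UNREADABLE are consecutive sectors,
# then prints the last readable sector.
find_boundary() {
    dev=$1; readable=$2; unreadable=$3
    while [ $((unreadable - readable)) -gt 1 ]; do
        testing=$(( (readable + unreadable) / 2 ))
        if sector_readable "$dev" "$testing"; then
            readable=$testing
        else
            unreadable=$testing
        fi
    done
    echo "$readable"
}

# Hypothetical usage (device node and sector count are assumptions):
# find_boundary /dev/sdc 1 8000000000
```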
Let's imagine the boundary lies at sector 3,141,592,653 because of some strange bug in the enclosure.
first pass: TESTING = (1 + 8,000,000,000)/2 = 4,000,000,000
sector 4,000,000,000 is unreadable, so UNREADABLE becomes 4,000,000,000
second pass: TESTING = (1 + 4,000,000,000)/2 = 2,000,000,000
sector 2,000,000,000 is readable, so READABLE becomes 2,000,000,000
third pass: TESTING = (2,000,000,000 + 4,000,000,000)/2 = 3,000,000,000
sector 3,000,000,000 is readable, so READABLE becomes 3,000,000,000
fourth pass: TESTING = (3,000,000,000 + 4,000,000,000)/2 = 3,500,000,000, which is UNREADABLE
fifth pass: TESTING = (3,000,000,000 + 3,500,000,000)/2 = 3,250,000,000, UNREADABLE
...
So READABLE and UNREADABLE stalk the unknown boundary more and more closely, from both directions. When they are close enough you can even go and try all the sectors in between.
To locate the boundary, only about log2(max - min) = log2(8,000,000,000) ≈ 33 sector reads are needed. Given a 30-second reset delay on the enclosure whenever a read error occurs, that should take 20 minutes at the most; probably much less.
Once you have the boundary B, to confirm it really is a boundary you can do a sequential read of large chunks before B (this will not take too long), maybe one megabyte every gigabyte or so; and then a random sampling of sectors beyond B. For example the first 4*63 sectors beyond the boundary, then one sector every 3905 (or every RAND(4000, 4100)) to avoid always hitting the same spot on the same magnetic platter.
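The sampling pass beyond B might be sketched like this; the device node, the boundary value, and the stride are all assumptions taken from the example above.

```shell
# sample_beyond DEVICE BOUNDARY COUNT
# Reads one sector every 3905 sectors beyond BOUNDARY and reports any
# that unexpectedly succeed. A short read counts as unreadable.
sample_beyond() {
    dev=$1; s=$(($2 + 1)); tries=$3
    while [ "$tries" -gt 0 ]; do
        if [ "$(dd bs=512 if="$dev" skip="$s" count=1 2>/dev/null | wc -c)" -eq 512 ]; then
            echo "sector $s is unexpectedly readable"
        fi
        s=$((s + 3905))
        tries=$((tries - 1))
    done
}

# Hypothetical usage (all values are assumptions):
# sample_beyond /dev/sdc 3141592653 1000
```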
But actually, if you do find boundary-like behaviour, and confirm that with another enclosure there is no such boundary, well, I'd declare the case (en)closed.
Best Answer
I will provide all the necessary steps to detect and resolve the problem.
You have already tried these:
After rebooting:
If the problem is still outstanding:
You also need to check the hard disk, either by using a live CD and performing disk checks, or with the built-in tools from the computer manufacturer. You can also use tools like HD Tune or SeaTools (which is free). This could be an I/O error or a faulty BIOS configuration (RAID, if used, or the SATA controller's RAID/AHCI setting). Carefully examine those settings, and make sure all firmware is up to date. Perform a malware scan along with that, and check whether any process is taking too many resources.
Make sure your BIOS is up to date, and that the chipset drivers and graphics drivers are not raising any issues. The disk has to be formatted as NTFS. Problems can easily be found using the performance counters, the Reliability Monitor, and the Event Viewer system logs. If you are using an nVidia chipset, use the drivers from nVidia, not from Microsoft Update (although there have been instances where the default Windows drivers, or older drivers, caused no issues). Then examine the logs: if you encounter nvstor64 entries, they are due to driver issues or (future) disk failures. Note that nvstor64 reports errors related to both RAID and non-RAID devices.
How to configure Resource Monitor: Microsoft Resources