I have two very similar RAID arrays; however, one of them is being written to constantly (by jbd2, it seems) and the other is not. Here are the arrays, as shown in /proc/mdstat:
md9 : active raid5 sdl4[4] sdk4[2] sdh4[1] sdb4[0]
      11626217472 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 2/29 pages [8KB], 65536KB chunk

md8 : active raid5 sdf3[2] sdc3[1] sda3[0] sdi3[3]
      11626217472 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/29 pages [0KB], 65536KB chunk
As you can see, no "check" or resync or anything special is going on. Both arrays are 4x 4TB.
So far so good.
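(For anyone who wants to double-check: whether a check or resync is running can also be read from sysfs. A sketch, assuming the array names above:)

cat /sys/block/md8/md/sync_action    # prints "idle" when no check/resync is active
cat /sys/block/md9/md/sync_action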
Both of these arrays (/dev/md8 and /dev/md9) contain data only, no root filesystem. In fact, they're rarely used by anything at all. Both have a single ext4 filesystem mounted with noatime, and both are "bcache" ready (but there is no cache volume attached yet):
df -h (other entries trimmed):
/dev/bcache0     11T  7.3T  3.6T  67%  /mnt/raid5a
/dev/bcache1     11T  7.4T  3.5T  68%  /mnt/raid5b
cat /proc/mounts (other entries trimmed):
/dev/bcache0 /mnt/raid5a ext4 rw,nosuid,nodev,noexec,noatime,data=ordered 0 0
/dev/bcache1 /mnt/raid5b ext4 rw,nosuid,nodev,noexec,noatime,data=ordered 0 0
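(For context, since the arrays are "bcache" ready: a backing device with no cache attached is set up roughly like this. A sketch from memory, not the exact commands I ran; make-bcache comes from bcache-tools, and the bcache device numbering is assumed to match the df output above.)

make-bcache -B /dev/md8    # md8 shows up as /dev/bcache0
make-bcache -B /dev/md9    # md9 shows up as /dev/bcache1
# no cache device (make-bcache -C) has been attached to either one yet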
However, iostat reports that there is constant writing going to /dev/bcache1 (and its backing volume /dev/md9), while nothing similar is happening on the identical array /dev/md8:
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
md8               0.00         0.00         0.00          0          0
bcache0           0.00         0.00         0.00          0          0
md9               1.50         0.00        18.00          0         36
bcache1           1.00         0.00        12.00          0         24

md8               0.00         0.00         0.00          0          0
bcache0           0.00         0.00         0.00          0          0
md9               2.50         0.00        18.00          0         36
bcache1           2.50         0.00        18.00          0         36
This has been going on for hours.
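(The numbers above come from plain interval sampling, something like the following; I don't recall the exact flags, but iostat accepts device names as arguments.)

# sample the four devices every 5 seconds, reporting in kB
iostat -d -k 5 md8 bcache0 md9 bcache1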
What I tried (a rough sketch of these commands follows below the list):
- Killed anything gvfs related. ps ax | grep gvfs gives zero results now. Writes keep happening.
- Checked with lsof whether anything has files open on the volume. It shows nothing.
- Used iotop. I see a process called [jbd2/bcache1-8] that is often at the top. Nothing similar for the other array.
- Tried unmounting the volume. This works without a hitch, and iostat reports no further accesses (seemingly indicating that nobody is using it). Remounting it, however, triggers these low-volume writes again immediately…
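Roughly, the checks above were (a sketch; exact flags from memory, and assuming the mountpoint is listed in /etc/fstab so a bare mount works):

ps ax | grep gvfs    # zero results now
lsof /mnt/raid5b     # nothing has files open on that filesystem
iotop -o             # [jbd2/bcache1-8] keeps showing up near the top
umount /mnt/raid5b   # writes stop according to iostat...
mount /mnt/raid5b    # ...and resume immediately after remounting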
I'm very curious what could possibly be writing to this array. As I said, it only contains data: literally one folder, plus an empty lost+found…
Best Answer
Looks like I found the culprit right after typing out the full question...
Even though the volume is already over a week old (versus the other array, which is two weeks old), a background process, ext4lazyinit, is still busy initializing inodes. That is despite the fact that I limited the filesystem to a very sane 4 million inodes, instead of the insane gazillions mkfs.ext4 would normally create for such a large volume, as a quick df -h -i confirms.
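(For reference, the inode limit was set at creation time with something like the following; -N is the mke2fs option for the total inode count, and the exact invocation here is a sketch from memory, with the device name taken from the df output above.)

# create the filesystem with ~4 million inodes instead of the default;
# adding -E lazy_itable_init=0 would instead zero all inode tables during
# mkfs and avoid the background ext4lazyinit pass entirely, at the cost
# of a much slower mkfs run
mkfs.ext4 -N 4000000 /dev/bcache1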
After remounting the volume yet again, this time with init_itable=0, iostat shows the same writes except at a much higher volume, which seems to confirm that it is indeed still busy initializing inodes.
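In other words, to make the initialization finish quickly instead of trickling on for days, the throttle can be removed at mount time. A sketch, using the mountpoint from the question: ext4's init_itable=n option makes the init thread wait n times as long as zeroing the previous block group's inode table took, so 0 means full speed, and noinit_itable postpones the work altogether.

# run the background inode table initialization at full speed
mount -o remount,init_itable=0 /mnt/raid5b

# or: postpone the background initialization entirely
mount -o remount,noinit_itable /mnt/raid5b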