Ubuntu – command-line equivalent to gnome-disks

Tags: benchmarks, command line, hard drive

Gnome Disks (gnome-disks, formerly known as palimpsest) provides SMART and some benchmarking information. From what I gather, it used to be based on the command-line tool udisks, but the two projects appear to have merged.

The new Gnome Disks utility appears to show only average results from the benchmarking tests. Judging from screenshots, previous versions of palimpsest appear to have included maximum and minimum responses in the results as well.

I'm interested in all of the benchmarking results – specifically, I'm trying to find disks that are having a negative effect on users by weeding out disks with slow worst-case I/O. I also want to map this data over time, so I need to be able to process/export it programmatically.

I looked at udisksctl (in the udisks2 package), but it appears to provide just general information about the disks and some SMART information.

Is there a command-line tool that runs the old udisks-style benchmarking report and returns minimums and maximums as well?

Best Answer

I can't speak to the old udisks benchmarking report, but perhaps fio will be of use to you. fio is currently available for all versions of Ubuntu from Precise to Zesty.

You can install it with sudo apt-get install fio after activating the Universe repository.
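For example, on Ubuntu 16.04 and later the whole setup is just two commands (on most desktop installs Universe is already enabled, so the first step may be a no-op; on older releases enable Universe through Software & Updates instead):

$ sudo add-apt-repository universe
$ sudo apt-get update && sudo apt-get install fio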

Some quick testing indicates that you can choose the partition to test simply by ensuring that your present working directory (pwd) is on the partition you wish to test.
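Alternatively, you can point fio at a mount point explicitly with --directory instead of changing into it; /mnt/data below is just a placeholder path for wherever the disk you want to test is mounted:

$ sudo fio --directory=/mnt/data --name=randwrite --ioengine=libaio --iodepth=1 \
      --rw=randwrite --bs=4k --direct=0 --size=256M --numjobs=8 \
      --runtime=60 --group_reporting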

For instance, here are the results I get running it on my root partition, which is on a Toshiba THNSNH128GBST SSD (my /dev/sda):

$ sudo fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=256M --numjobs=8 --runtime=60 --group_reporting
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...

  randwrite: (groupid=0, jobs=8): err= 0: pid=15096: Wed Feb 15 13:58:31 2017
  write: io=2048.0MB, bw=133432KB/s, iops=33358, runt= 15717msec
    slat (usec): min=1, max=223379, avg=232.82, stdev=4112.31
    clat (usec): min=0, max=16018, avg= 0.30, stdev=22.20
     lat (usec): min=1, max=223381, avg=233.25, stdev=4112.55
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    0], 50.00th=[    0], 60.00th=[    0],
     | 70.00th=[    0], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    1], 99.50th=[    1], 99.90th=[    2], 99.95th=[    3],
     | 99.99th=[   31]
    bw (KB  /s): min= 3473, max=241560, per=12.42%, avg=16577.30, stdev=28056.68
    lat (usec) : 2=99.79%, 4=0.18%, 10=0.02%, 20=0.01%, 50=0.01%
    lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%
    lat (msec) : 20=0.01%
  cpu          : usr=0.52%, sys=1.08%, ctx=3235, majf=0, minf=228
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=524288/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=2048.0MB, aggrb=133432KB/s, minb=133432KB/s, maxb=133432KB/s, mint=15717msec, maxt=15717msec

Disk stats (read/write):
  sda: ios=0/197922, merge=0/84378, ticks=0/37360, in_queue=37324, util=93.41%

Running the same command in my home directory, which is on a Western Digital WD2003FZEX-00Z4SA0 HDD, gives the following output:

randwrite: (groupid=0, jobs=8): err= 0: pid=15062: Wed Feb 15 13:53:32 2017
  write: io=1299.6MB, bw=22156KB/s, iops=5538, runt= 60062msec
    slat (usec): min=1, max=200040, avg=1441.74, stdev=11322.69
    clat (usec): min=0, max=12031, avg= 0.41, stdev=32.24
     lat (usec): min=1, max=200042, avg=1442.29, stdev=11323.05
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    0], 50.00th=[    0], 60.00th=[    0],
     | 70.00th=[    0], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    2], 99.50th=[    2], 99.90th=[    3], 99.95th=[    9],
     | 99.99th=[   14]
    bw (KB  /s): min=  426, max=282171, per=13.12%, avg=2906.99, stdev=17280.75
    lat (usec) : 2=98.88%, 4=1.03%, 10=0.05%, 20=0.04%, 50=0.01%
    lat (usec) : 100=0.01%, 250=0.01%
    lat (msec) : 10=0.01%, 20=0.01%
  cpu          : usr=0.09%, sys=0.25%, ctx=7912, majf=0, minf=227
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=332678/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=1299.6MB, aggrb=22155KB/s, minb=22155KB/s, maxb=22155KB/s, mint=60062msec, maxt=60062msec

Disk stats (read/write):
  sdb: ios=0/94158, merge=0/75298, ticks=0/116296, in_queue=116264, util=98.40%

I trimmed out the output produced while it's running to keep this answer a readable size.

An explanation of the parts of the output that I found interesting:

You can see that we get min, max, average, and standard deviation for all of these metrics.

slat indicates submission latency: the time it takes to submit the I/O to the kernel.

clat indicates completion latency. This is the time that passes between submission to the kernel and when the IO is complete, not including submission latency. In older versions of fio, this was the best metric for approximating application-level latency.

lat seems to be fairly new. This metric starts the moment the IO struct is created in fio and ends right after clat, making it the one that best represents what applications will experience. This is the one you'll probably want to graph; see the JSON export sketch below.
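If you want to feed these numbers into graphs or a database, fio can emit the whole report as JSON with --output-format=json, which is far easier to parse than the human-readable text. A minimal sketch with jq follows; note that the exact field names vary between fio versions (newer releases report latencies in nanoseconds under clat_ns/lat_ns, older ones in microseconds under clat/lat), so inspect the JSON your version produces first:

$ sudo fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=256M --numjobs=8 --runtime=60 --group_reporting --output-format=json --output=result.json
$ # min/max/mean completion latency for the write phase (field names depend on fio version)
$ jq '.jobs[0].write.clat_ns | {min, max, mean}' result.json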

bw (bandwidth) is pretty self-explanatory except for the per= part. The docs say it's meant for testing a single device with multiple workloads, so you can see how much of the IO was consumed by each process.

When fio is run against multiple devices, as I did for this output, it can still provide a useful comparison, even though its intended purpose is to test a specific workload.
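If you'd rather compare several disks within a single run, fio also lets you define one job per mount point on the command line: as I understand it, options given before the first --name are shared, and options after each --name apply to that job only. A sketch, assuming the disks are mounted at /mnt/ssd and /mnt/hdd (placeholder paths):

$ sudo fio --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=256M --runtime=60 \
      --name=ssd-test --directory=/mnt/ssd \
      --name=hdd-test --directory=/mnt/hdd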

I'm sure it comes as no surprise that the latency on the hard drive is much higher than that of the solid state drive.
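Finally, since you want to map this data over time, here is a rough sketch of a wrapper you could run from cron: it runs the same job on the filesystem under test and writes each JSON report to a timestamped file, leaving aggregation and graphing to whatever tooling you prefer (the paths and job name are placeholders):

#!/bin/sh
# Periodic fio run; one timestamped JSON report per invocation (paths are placeholders).
OUTDIR=/var/log/fio-benchmarks
mkdir -p "$OUTDIR"
cd /mnt/data || exit 1   # the filesystem under test
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k \
    --direct=0 --size=256M --numjobs=8 --runtime=60 --group_reporting \
    --output-format=json --output="$OUTDIR/$(date +%Y%m%dT%H%M%S).json"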

Sources:

https://tobert.github.io/post/2014-04-17-fio-output-explained.html

https://github.com/axboe/fio/blob/master/README