loaddisk prints load averages that describe the current saturation level of the
disks. The values reported are based on the length of the disk wait queues.

The first example shows loaddisk running with a sample interval of 1 second on a
quiet system. The kernel keeps no history of these samples, so loaddisk must run
for some time (15 minutes) before it can fill all of the columns. We finish by
running iostat -x for comparison.

$ ./loaddisk 1
Dsk Time  1sec  5sec 15sec  1min  5min 15min
21:01:19  0.00
21:01:20  0.00
21:01:21  0.00
21:01:22  0.00
21:01:23  0.00  0.00
21:01:24  0.00  0.00
21:01:25  0.00  0.00
21:01:26  0.00  0.00
21:01:27  0.00  0.00
21:01:28  0.00  0.00
21:01:29  0.00  0.00
21:01:30  0.00  0.00
21:01:31  0.00  0.00
21:01:32  0.00  0.00
21:01:33  0.03  0.01  0.00
21:01:34  0.00  0.01  0.00
21:01:35  0.00  0.01  0.00
21:01:36  0.00  0.01  0.00
21:01:37  0.00  0.01  0.00
21:01:38  0.00  0.00  0.00
^C
$ iostat -x 1
[...]
                    extended device statistics
device       r/s  w/s   kr/s   kw/s wait actv  svc_t  %w  %b
dad0         0.0  0.0    0.0    0.0  0.0  0.0    0.0   0   0
dad1         0.0  0.0    0.0    0.0  0.0  0.0    0.0   0   0
fd0          0.0  0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd16         0.0  0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd31         0.0  0.0    0.0    0.0  0.0  0.0    0.0   0   0
nfs1         0.0  0.0    0.0    0.0  0.0  0.0    0.0   0   0

loaddisk can be thought of as printing averages of iostat's wait column (a rough
sketch of this averaging idea is given after the examples below). In this example
loaddisk is run with a 5 second interval. While it runs, several commands
(tar cf /dev/null /) are run to generate disk load. We finish by running
iostat -x for comparison.

$ ./loaddisk 5
Dsk Time  1sec  5sec 15sec  1min  5min 15min
21:08:01  0.00  0.01
21:08:06  0.00  0.00
21:08:11  0.00  0.00  0.00
21:08:16  0.00  0.00  0.00
21:08:21  0.27  0.06  0.02
21:08:26  0.12  0.52  0.19
21:08:31  0.00  0.07  0.21
21:08:36  2.30  1.21  0.60
21:08:41  3.90  2.16  1.15
21:08:46  5.48  4.49  2.62
21:08:51  1.76  3.53  3.40
21:08:56  0.21  0.24  2.76  1.03
21:09:01  0.41  0.50  1.42  1.07
21:09:06  0.02  0.11  0.28  1.08
21:09:11  0.11  0.18  0.26  1.09
21:09:16  0.12  0.12  0.13  1.10
21:09:21  0.29  0.25  0.18  1.12
21:09:26  0.23  0.29  0.22  1.10
21:09:31  0.70  0.75  0.43  1.15
21:09:36  0.75  0.93  0.66  1.13
21:09:41  0.74  0.74  0.81  1.01
21:09:46  2.87  2.08  1.25  0.81
21:09:51  4.57  3.94  2.25  0.84
21:09:56  2.95  3.80  3.27  1.14
21:10:01  0.25  2.60  3.45  1.32
^C
$ iostat -x 5
[...]
                    extended device statistics
device       r/s  w/s   kr/s   kw/s wait actv  svc_t  %w  %b
dad0        93.0  0.0 6703.2    0.0  2.4  2.0   47.5  97 100
dad1         0.6  0.0    2.2    0.0  0.0  0.0   10.9   0   1
fd0          0.0  0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd16         0.0  0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd31         0.0  0.0    0.0    0.0  0.0  0.0    0.0   0   0
nfs1         0.0  0.0    0.0    0.0  0.0  0.0    0.0   0   0

In this example loaddisk was run with a 5 second interval. I was surprised to see
load on disks that I was expecting to be idle. The psio program was run to find
the culprit processes, which turned out to be two tar programs still running from
the previous example. These tar programs were then killed.
$ ./loaddisk 5
Dsk Time  1sec  5sec 15sec  1min  5min 15min
21:24:35  0.18  0.06
21:24:40  0.35  0.31
21:24:45  0.12  0.16  0.18
21:24:50  0.20  0.05  0.18
21:24:55  0.23  0.11  0.11
21:25:00  0.10  0.21  0.12
21:25:05  0.03  0.10  0.14
21:25:10  0.03  0.02  0.11
21:25:15  0.06  0.04  0.05
21:25:20  0.05  0.10  0.06
21:25:25  0.16  0.04  0.06
21:25:30  0.03  0.19  0.11  0.12
21:25:35  0.00  0.00  0.08  0.11
21:25:40  0.00  0.00  0.06  0.09
21:25:45  0.00  0.00  0.00  0.07
21:25:50  0.00  0.00  0.00  0.07
21:25:55  0.00  0.00  0.00  0.06
21:26:00  0.00  0.00  0.00  0.04
21:26:05  0.00  0.00  0.00  0.03
21:26:10  0.00  0.00  0.00  0.03
21:26:15  0.00  0.00  0.00  0.03
^C
# psio | head
     UID   PID  PPID  %I/O    STIME TTY      TIME CMD
    root 18535 18263  61.8 21:18:42 pts/21   0:01 tar cf /dev/null /
    root 18536 18263  20.4 21:18:44 pts/21   0:00 tar cf /dev/null /
    root     0     0   0.0   Feb 25 ?        0:16 sched
    root     1     0   0.0   Feb 25 ?        0:01 /etc/init -
    root     2     0   0.0   Feb 25 ?        0:00 pageout
    root     3     0   0.0   Feb 25 ?    01:04:30 fsflush
    root    77     1   0.0   Feb 25 ?        0:00 /usr/lib/sysevent/...
    root    85     1   0.0   Feb 25 ?        0:00 /usr/lib/picl/picld
    root   165     1   0.0   Feb 25 ?        0:40 /usr/sbin/skipd

psio can be found at http://www.brendangregg.com/psio.html
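The following is a minimal sketch, not the loaddisk source, of the averaging idea
mentioned above: sample the disk wait-queue length once per interval, keep a
rolling window of samples, and print each column only once enough samples exist
to cover its time span. It is written in Python for brevity, and
sample_wait_queue() is a placeholder for however the wait-queue length is
actually read (on Solaris, the same kernel statistics that feed iostat's wait
column).

#!/usr/bin/env python3
# Sketch only: print disk "load averages" from a rolling window of samples.
# sample_wait_queue() is a placeholder and always returns 0.0 here; a real
# tool would read the summed wait-queue length across disks each interval.

import time
from collections import deque

INTERVAL = 1                          # seconds between samples
SPANS = [1, 5, 15, 60, 300, 900]      # 1sec 5sec 15sec 1min 5min 15min
LABELS = ["1sec", "5sec", "15sec", "1min", "5min", "15min"]

def sample_wait_queue():
    """Placeholder: return the current total wait-queue length across disks."""
    return 0.0

def main():
    history = deque(maxlen=max(SPANS) // INTERVAL)    # enough for 15 minutes
    print("Dsk Time" + "".join("%6s" % label for label in LABELS))
    while True:
        history.append(sample_wait_queue())
        line = time.strftime("%H:%M:%S")
        for span in SPANS:
            need = max(span // INTERVAL, 1)           # samples covering the span
            if len(history) < need:
                break                                 # column not yet filled
            window = list(history)[-need:]
            line += "%6.2f" % (sum(window) / len(window))
        print(line)
        time.sleep(INTERVAL)

if __name__ == "__main__":
    main()

A simple moving average like this explains why the columns in the output above
fill in one at a time: each column needs a full window of samples before it can
be printed. The real loaddisk may compute its averages differently (for example
with exponentially decayed averages, as the kernel does for CPU load averages),
so treat this only as an illustration of the general approach.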