The vmstat utility provides a good low-overhead view of system performance. Because its overhead is so small, it is practical to keep vmstat running even on heavily loaded servers whenever you need to monitor the system's health.
Run vmstat on your machine with a 1 second delay between updates. Notice the CPU utilisation (the vmstat man page has information about the output columns).
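For example, the following prints a new report every second until interrupted with Ctrl+C:

$ vmstat 1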
In another terminal, use the stress command to start N CPU workers, where N is the number of cores on your system. Do not pass the number directly. Instead, use command substitution.
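One possible invocation, assuming the nproc command is available to report the number of cores:

$ stress --cpu $(nproc)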
Note: if you are trying to solve the lab on fep and you don't have stress installed, try cloning and compiling stress-ng.
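A minimal sketch, assuming git and a C toolchain are available on fep (the upstream repository URL is an assumption and may change):

$ git clone https://github.com/ColinIanKing/stress-ng.git
$ cd stress-ng
$ make
$ ./stress-ng --cpu $(nproc)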
Let us look at how vmstat works under the hood. We can assume that all these statistics (memory, swap, etc.) cannot normally be gathered in userspace. So how does vmstat get these values from the kernel? Or rather, how does any process interact with the kernel? The most obvious answer: system calls.
$ strace vmstat
“All well and good. But what am I looking at?”
What you should be looking at are the system calls after the two writes that display the output header (hint: it has to do with the /proc file system). So, what are these files that vmstat opens?
$ file /proc/meminfo
$ cat /proc/meminfo
$ man 5 proc
The manual should contain enough information about what these kernel interfaces can provide. However, if you are interested in how the kernel generates the statistics in /proc/meminfo (for example), a good place to start would be meminfo.c in the kernel sources (but first, see the SO2 wiki).
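If you have a kernel source tree handy, in recent kernels the file lives at fs/proc/meminfo.c:

$ less fs/proc/meminfo.c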
Write a one-liner that uses vmstat to report complete disk statistics and sort the output in descending order based on the total reads column.
Hint: use tail -n +3 to skip the header lines.
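One possible solution, assuming the total reads count is the second field of the vmstat -d output and that the first two lines are headers:

$ vmstat -d | tail -n +3 | sort -nrk 2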