Before you start, create a Google Doc. Here, you will add screenshots / code snippets / comments for each exercise. Whatever you decide to include, it must prove that you managed to solve the given task (so don't show just the output, but how you obtained it and what conclusion can be drawn from it). If you decide to complete the feedback for bonus points, include a screenshot with the form submission confirmation, but not with its contents.
When done, export the document as a pdf and upload it to the appropriate assignment on Moodle. The deadline is 23:55 on Friday.
The skeleton for this lab can be found in this repository. Clone it locally before you start.
The vmstat utility provides a good low-overhead view of system performance. Since vmstat is such a low-overhead tool, it is practical to keep it running even on heavily loaded servers whenever you need to monitor the system's health.
Run vmstat on your machine with a 1 second delay between updates. Notice the CPU utilisation (info about the output columns here).
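For reference, the update interval is passed as the first positional argument, so a 1-second refresh looks like this:

$ vmstat 1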
In another terminal, use the stress command to start N CPU workers, where N is the number of cores on your system. Do not pass the number directly. Instead, use command substitution.
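One way to avoid hardcoding the core count is to let command substitution fill it in, e.g. via nproc (assuming the stress package is installed):

$ stress -c $(nproc)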
Note: if you are trying to solve the lab on fep and you don't have stress installed, try cloning and compiling stress-ng.
Let us look at how vmstat works under the hood. We can assume that all these statistics (memory, swap, etc.) cannot normally be gathered from userspace. So how does vmstat get these values from the kernel? Or rather, how does any process interact with the kernel? The most obvious answer: system calls.
$ strace vmstat
“All well and good. But what am I looking at?”
What you should be looking at are the system calls after the two writes that display the output header (hint: it has to do with the /proc file system). So, what are these files that vmstat opens?
$ file /proc/meminfo
$ cat /proc/meminfo
$ man 5 proc
The manual should contain enough information about what these kernel interfaces can provide. However, if you are interested in how the kernel generates the statistics in /proc/meminfo (for example), a good place to start would be meminfo.c (but first, SO2 wiki).
Write a one-liner that uses vmstat to report complete disk statistics and sort the output in descending order based on the total reads column. Hint: `tail -n +3` can help you get rid of the header lines.
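If you are unsure where to start, one possible shape for such a pipeline (a sketch only, assuming the total reads end up in the second column of the per-disk output; adapt the column index to what you actually see on your system) is:

$ vmstat -d | tail -n +3 | sort -k 2 -n -r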
Try to run the script while passing 1000 as a command line argument. Why does it crash?
Luckily, python allows you to both retrieve the current recursion limit and set a new value for it. Increase the recursion limit so that the process will never crash, regardless of input (assume that it still has a reasonable upper bound).
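Both operations live in the sys module; for instance, you can query the current limit straight from the shell, and `sys.setrecursionlimit()` is the corresponding setter:

$ python3 -c 'import sys; print(sys.getrecursionlimit())'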
Run the script again, this time passing 10000. Use mpstat to monitor the load on each individual CPU at 1s intervals. The one with close to 100% load will be the one running our script. Note that the process might be passed around from one core to another.
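A per-CPU view at 1-second intervals can be obtained with:

$ mpstat -P ALL 1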
Stop the process. Use stress to create N-1 CPU workers, where N is the number of cores on your system. Use taskset to set the CPU affinity of the N-1 workers to CPUs 1-(N-1) and then run the script again. You should notice that the process is scheduled on cpu0.
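One way to do this (a sketch; the stress workers inherit the affinity of the taskset-launched parent) is:

$ taskset -c 1-$(($(nproc) - 1)) stress -c $(($(nproc) - 1))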
Note: to get the best performance when running a process, make sure that it stays on the same core for as long as possible. Don't let the scheduler decide this for you, if you can help it. Allowing it to bounce your process between cores can drastically impact the efficient use of the cache and the TLB. This holds especially true when you are working with servers rather than your personal PC. While the problem may not manifest on a system with only 4 cores, you can't guarantee that it also won't manifest on one with 40 cores. When running several experiments in parallel, aim for a fixed, non-overlapping assignment of processes to cores.
Write a bash command that binds CPU stress workers to your odd-numbered cores (i.e.: 1,3,5,…). The list of cores and the number of stress workers must NOT be hardcoded, but constructed based on nproc (or whatever else you fancy).
In your submission, include both the bash command and an mpstat capture to prove that the command is working.
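If you need a starting point, the construction might look something like this (a sketch only, assuming an even core count; seq builds the comma-separated list of odd core IDs and half the cores run workers):

$ taskset -c $(seq -s, 1 2 $(($(nproc) - 1))) stress -c $(($(nproc) / 2))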
The zip command is used for compression and file packaging under Linux/Unix operating systems. It provides 10 levels of compression, from 0 (no compression, fastest) to 9 (best compression, slowest), selected via a numeric flag:
$ zip -5 file.zip file.txt
Write a script to measure the compression rate and the time required for each level. You have a few large files in the code skeleton but feel free to add more. If you do add new files, make sure that they are not random data!
Generate a plot illustrating the compression rate, size decrease, etc. as a function of the zip compression level. Make sure that your plot is understandable (i.e., has labels, a legend, etc.) and that you average multiple measurements for each compression level.
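For the measurement part, a minimal sketch of the loop might look like the one below. Here file.txt is a placeholder for one of the skeleton files, and /usr/bin/time is assumed to be GNU time (the bash builtin does not support -f); averaging, compression-rate computation and the plotting itself are left for you to flesh out:

for lvl in $(seq 0 9); do
    /usr/bin/time -f "level $lvl: %e seconds" zip -q -$lvl "out-$lvl.zip" file.txt
    echo "level $lvl: $(stat -c %s "out-$lvl.zip") bytes"
done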
llvm-mca is a machine code analyzer that simulates the execution of a sequence of instructions. By leveraging high-level knowledge of the micro-architectural implementation of the CPU, as well as its execution pipeline, this tool is able to estimate the execution speed of said instructions in terms of clock cycles. More importantly though, it can highlight possible contentions between two or more instructions over CPU resources or, rather, over its ports.
Note that llvm-mca is not the most reliable tool when predicting the precise runtime of an instruction block (see this paper for details). After all, CPUs are not as simple as the good old AVR microcontrollers. While calculating the execution time of an AVR linear program (i.e.: no conditional loops) is as simple as adding up the clock cycles associated with each instruction (from the reference manual), things are never that clear-cut when it comes to modern CPUs. CPU manufacturers such as Intel oftentimes implement hardware optimizations that are not documented or even publicized. For example, we know that the CPU caches instructions in case a loop is detected. If this is the case, the instructions are dispatched once again from the buffer, thus avoiding extra instruction fetches. What happens, though, if the size of the loop's contents exceeds this buffer size? Obviously, without knowing certain aspects such as this buffer size, not to mention anything about microcode or unknown hardware optimizations, it is impossible to give accurate estimates.
As a simple example, we will look at task_04/csum.c. This file contains the `csum_16b1c()` function that computes the 16-bit one's complement checksum used in the IP and TCP headers.
Since llvm-mca requires assembly code as input, we first need to translate the provided C code. Because the assembly parser it utilizes is the same as clang's, use clang to compile the C program but stop after the LLVM IR generation and optimization stages, when the target-specific assembly code is emitted.
$ clang -S -masm=intel csum.c # output = csum.s
The LLVM-MCA-BEGIN and LLVM-MCA-END markers can be parsed (as assembly comments) in order to restrict the scope of the analysis.
These markers can also be placed in C code (see gcc extended asm and llvm inline asm expressions):
asm volatile("# LLVM-MCA-BEGIN" ::: "memory");
Remember, however, that this approach is not always desirable, for two reasons:
- the `volatile` qualifier can pessimize optimization passes. As a result, the generated code may not correspond to what would normally be emitted;
- when analyzing the body of a `for` loop, doing so by injecting assembly meta comments in C code will exclude the iterator increment and condition check (which are also executed on every iteration).
Use llvm-mca to inspect its expected throughput and “pressure points” (check out this example).
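The basic invocation works directly on the generated assembly file (on most builds the target CPU defaults to the host; you can also pass -mcpu=<cpu> explicitly if you want to analyze for a specific micro-architecture):

$ llvm-mca csum.s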
One important thing to remember is that llvm-mca does not simulate the behavior of each instruction, but only the time required for it to execute. In other words, if you load an immediate value in a register via `mov rax, 0x1234`, the analyzer will not care what the instruction does (or what the value of `rax` even is), but how long it takes the CPU to do it. The implication is quite significant: llvm-mca is incapable of analyzing complex sequences of code that contain conditional structures, such as `for` loops or function calls. Instead, given the sequence of instructions, it will pass through each of them one by one, ignoring their intended effect: conditional jump instructions will fall through, `call` instructions will be passed over without even considering the cost of the associated `ret`, etc. The closest we can come to analyzing a loop is by reducing the analysis scope via the aforementioned LLVM-MCA-* markers and controlling the number of simulated iterations from the command line, so that the simulation resembles an actual loop.
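For instance, the number of simulated iterations of the marked region can be set explicitly (100 is the usual default):

$ llvm-mca -iterations=200 csum.s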
A very short description of each port's main usage, as well as the significance of the SKL ports reported by llvm-mca, can be found in the Skylake machine model config. To find out if your CPU belongs to this category, RTFS and run an `inxi -Cx`.
First, take note of the number of micro-operations (`#uOps`) associated with each instruction. These are the number of primitive operations that each instruction (from the x86 ISA) is broken into. Fun and irrelevant fact: the hardware implementation of certain instructions can be modified via microcode upgrades. Anyway, keeping in mind this `#uOps` value (for each instruction), we'll notice that the sum of all resource pressures per port will equal that value. In other words, resource pressure represents the average number of micro-operations that depend on that resource.
Now that you've got the hang of things, use the `-bottleneck-analysis` flag to identify contentious instruction sequences.
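A possible invocation:

$ llvm-mca -bottleneck-analysis csum.s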
Explain the reason to the best of your abilities. For example, the following two instructions display a register dependency because the `mov` instruction needs to wait for the `push` instruction to update the RSP register.
0.  push rbp        ## REGISTER dependency: rsp
1.  mov rbp, rsp    ## REGISTER dependency: rsp
How would you go about further optimizing this code?
Please take a minute to fill in the feedback form for this lab.