04. [25p] llvm-mca

llvm-mca is a machine code analyzer that simulates the execution of a sequence of instructions. By leveraging high-level knowledge of the CPU's micro-architectural implementation, as well as its execution pipeline, this tool is able to estimate the execution speed of said instructions in terms of clock cycles. More importantly, it can highlight possible contention between two or more instructions over CPU resources or, rather, its execution ports.

Note that llvm-mca is not the most reliable tool when predicting the precise runtime of an instruction block (see this paper for details). After all, CPUs are not as simple as the good old AVR microcontrollers. While calculating the execution time of an AVR linear program (i.e.: no branches or loops) is as simple as adding up the clock cycles associated with each instruction (from the reference manual), things are never that clear-cut when it comes to CPUs. CPU manufacturers such as Intel oftentimes implement hardware optimizations that are not documented or even publicized. For example, we know that the CPU buffers decoded instructions when a loop is detected. If this is the case, the instructions are dispatched once again from the buffer, thus avoiding extra instruction fetches. What happens, though, if the size of the loop's contents exceeds this buffer size? Obviously, without knowing certain aspects such as this buffer size, not to mention anything about microcode or undocumented hardware optimizations, it is impossible to give accurate estimates.

Figure 2: Simplified view of a single Intel Skylake CPU core. Instructions are decoded into μOps and scheduled out-of-order onto the Execution Units. Your CPUs most likely have (many) more EUs.

[5p] Task A - Preparing the input

As previously mentioned, llvm-mca requires assembly code as input, so start by preparing it from the source provided in the archive.

Because the assembly parser llvm-mca utilizes is the same as clang's, use clang to compile the C program from the archive, but stop after the LLVM IR generation and optimization stages, when the target-specific assembly code has been generated (i.e., do not assemble or link it).
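
For instance, a minimal sketch of the invocation, assuming the C file from the archive is named main.c (a placeholder name) and using -O2 as an arbitrary optimization level:

# -S stops the compiler driver after code generation, emitting target-specific assembly
clang -S -O2 main.c -o main.s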

Note that the llvm-mca documentation states that the LLVM-MCA-BEGIN and LLVM-MCA-END markers can be parsed (as assembly comments) in order to restrict the scope of the analysis.

These markers can also be placed in C code (see gcc extended asm and llvm inline asm expressions):

asm volatile("# LLVM-MCA-BEGIN" ::: "memory");

Remember, however, that this approach is not always desirable, for two reasons:

  1. Even though the injected string is just a comment, the volatile qualifier (and the "memory" clobber) can pessimize optimization passes. As a result, the generated code may not correspond to what would normally be emitted.
  2. Some code structures cannot be fully included in the analysis region. For example, if you want to include the contents of a for loop, doing so by injecting assembly meta comments in C code will exclude the increment and the condition check (which are also executed on every iteration); see the sketch after this list.
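
For illustration, here is a minimal sketch (the function, array and loop are made up, not taken from the archive's source) of placing the markers around a loop body; note how the i++ and the i < N check fall outside the marked region:

#include <stddef.h>

#define N 1024

void scale_array(int *v)
{
    for (size_t i = 0; i < N; i++) {
        asm volatile("# LLVM-MCA-BEGIN" ::: "memory");
        v[i] *= 2;  /* only this statement's instructions land inside the region */
        asm volatile("# LLVM-MCA-END" ::: "memory");
    }
}

After compiling with clang -S, the markers show up as comments in the .s file and llvm-mca will only analyze the instructions between them.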

[10p] Task B - Analyzing the assembly code

After generating the assembly code, use llvm-mca to inspect its expected throughput and “pressure points” (check out this example).

One important thing to remember is that llvm-mca does not simulate the behaviour of each instruction, only the time required for it to execute. In other words, if you load an immediate value into a register via mov rax, 0x1234, the analyzer will not care what the instruction does (or what the value of rax even is), but how long it takes the CPU to do it. The implication is quite significant: llvm-mca is incapable of analyzing complex sequences of code that contain control-flow structures, such as for loops or function calls. Instead, given the sequence of instructions, it will pass through each of them one by one, ignoring their intended effect: conditional jump instructions will fall through, call instructions will be passed over without even considering the cost of the associated ret, etc. The closest we can come to analyzing a loop is by reducing the analysis scope via the aforementioned LLVM-MCA-* markers and controlling the number of simulated iterations from the command line.

To work around this limitation, you can set the number of simulated iterations from the command line, so that the analysis resembles an actual loop; a sample invocation is shown below.
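
A possible invocation, assuming the assembly file is named main.s (a placeholder) and using an arbitrary iteration count:

# simulate 1000 iterations of the (marked) instruction sequence
llvm-mca -iterations=1000 main.s

If not specified, the iteration count defaults to 100.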

Read more on the Skylake instruction scheduler and ports.

A very short description of each port's main usage:

  • Port 0,1: arithmetic instructions
  • Port 2,3: load operations, AGU (address generation unit)
  • Port 4: store operations (store data)
  • Port 5: vector operations
  • Port 6: integer and branch operations
  • Port 7: AGU (store address)

The significance of the SKL ports reported by llvm-mca can be found in the Skylake machine model config. To find out whether your CPU belongs to this category, RTFS and run inxi -Cx.

In the default view, look at the number of micro-operations (i.e.: #uOps) associated with each instruction. This is the number of primitive operations that each instruction (from the x86 ISA) is broken into. Fun and irrelevant fact: the hardware implementation of certain instructions can be modified via microcode updates.

Anyway, keeping in mind this #uOps value (for each instruction), we'll notice that the sum of that instruction's resource pressures across all ports equals this value. In other words, the resource pressure represents the average number of micro-operations of that instruction dispatched to each port per iteration.
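
As a purely hypothetical illustration (the numbers are made up, not taken from a real report): an instruction with #uOps = 2 whose micro-operations can be issued on either port 0 or port 1 would typically show a pressure of about 1.00 on each of those two ports, since the scheduler spreads the μOps evenly on average, and 1.00 + 1.00 sums back to the instruction's #uOps.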

[10p] Task C - In-depth examination

Now that you've got the hang of things, try generating asm code with different optimization levels (e.g.: -O1, -O2, -O3, -Os, etc.)
Use the -bottleneck-analysis flag to identify contentious instruction sequences. Explain the cause of the contention to the best of your abilities; a sample workflow is sketched below.
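
A possible workflow, with placeholder file names and an arbitrary choice of -O3:

# regenerate the assembly at a different optimization level
clang -S -O3 main.c -o main_O3.s
# pin the analysis to the Skylake scheduling model and enable the bottleneck report
llvm-mca -mcpu=skylake -bottleneck-analysis main_O3.s

If -mcpu is omitted, llvm-mca should fall back to the scheduling model of the host CPU.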