04. [30p] Perf & fuzzing

The purpose of this exercise is to identify where bottlenecks appear in a real-world application. For this we will use perf and American Fuzzy Lop (AFL).

perf is a Linux performance analysis tool that we will use to analyze what events occur when running a program.

AFL is a fuzzing tool. Fuzzing is the process of detecting bugs empirically: starting from a seed input file, the program under test is executed and its behavior observed. The meaning of “behavior” is not fixed, but in the simplest sense, let's say that it means “the order in which instructions are executed”. After executing the binary under test, the fuzzer mutates the input file. Following another execution, with the updated input, the fuzzer decides whether or not the mutations were useful, based on deviations from previously seen execution paths. Fuzzers usually run over a period of days, weeks, or even months, all in the hope of finding an input that crashes the program.
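
To make the loop above concrete, here is a toy “dumb” fuzzer sketched in shell. It flips one random byte per iteration and only watches the target's exit status; it has no coverage feedback, which is precisely the ingredient AFL adds. `gzip -t` is used here as a stand-in target, since it parses its input and fails cleanly on malformed data; a real campaign would point this at your own binary.

```shell
#!/bin/sh
# Toy mutation fuzzer: flip one random byte per round, re-run the target.
printf 'seed input data' | gzip > cur.gz        # create a valid seed

for i in $(seq 1 20); do
    size=$(wc -c < cur.gz)
    off=$(( $(od -An -N2 -tu2 /dev/urandom) % size ))    # random offset
    byte=$(od -An -N1 -tu1 /dev/urandom | tr -d ' ')     # random byte value
    # overwrite a single byte of the input in place
    printf "$(printf '\\%03o' "$byte")" \
        | dd of=cur.gz bs=1 seek="$off" conv=notrunc 2>/dev/null
    gzip -t cur.gz 2>/dev/null
    st=$?
    # an exit status >= 128 means the target was killed by a signal (crash)
    [ "$st" -ge 128 ] && echo "crash on iteration $i (status $st)"
done
echo "done after 20 iterations"
```

A coverage-guided fuzzer like AFL replaces the blind byte flip with mutations ranked by whether previous runs reached new branches.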

[10p] Task A - Fuzzing with AFL

First, let's compile AFL and all related tools. We initialize / update a few environment variables to make them more accessible. Remember that these are set only for the current shell.

$ git clone https://github.com/google/AFL
 
$ pushd AFL
$ make -j $(nproc)
 
$ export PATH="${PATH}:$(pwd)"
$ export AFL_PATH="$(pwd)"
$ popd

Now, check that it worked:

$ afl-fuzz --help
$ afl-gcc --version

The program under test will be fuzzgoat, a deliberately vulnerable program made for the express purpose of illustrating fuzzer behaviour. To prepare the program for fuzzing, the source code has to be compiled with afl-gcc. afl-gcc is a wrapper over gcc that statically instruments the compiled program. afl-fuzz leverages this injected analysis code to track which branches are taken during execution. In turn, this information is used to guide the input mutation procedure.

$ git clone https://github.com/fuzzstati0n/fuzzgoat.git
 
$ pushd fuzzgoat
$ CC=afl-gcc make
$ popd

If everything went well, we finally have our instrumented binary. Time to run afl. For this, we will use the sample seed file provided by fuzzgoat. Here is how we call afl-fuzz:

  • the -i flag specifies the directory containing the initial seed
  • the -o flag specifies the active workspace for the afl instance
  • -- separates the afl flags from the binary invocation command
  • everything following the -- separator is how the target binary would normally be invoked in bash; the only difference is that the input file name will be replaced by @@
$ afl-fuzz -i fuzzgoat/in -o afl_output -- ./fuzzgoat/fuzzgoat @@

afl may crash initially, complaining about some system settings. Just follow its instructions until everything is to its liking. Some of the problems may include:

  • the core dump pattern (/proc/sys/kernel/core_pattern) routing crash information somewhere other than a file named core in the current directory
  • the CPU running in powersave mode, rather than performance.
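
Both fixes are one-liners that need root privileges. The paths below are the usual procfs / sysfs locations; afl-fuzz's own error message spells out the exact file it wants changed on your system:

```shell
# make crashes produce a file named "core" in the current directory
echo core | sudo tee /proc/sys/kernel/core_pattern

# switch all CPU frequency governors from powersave to performance
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```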

If you look in the afl_output/ directory, you will see a few files and directories; here is what they are:

  • .cur_input : current input that is tested; replaces @@ in the program invocation.
  • fuzzer_stats : statistics generated by afl, updated every few seconds by overwriting the old ones.
  • fuzz_bitmap : a 64KB array of counters used by the program instrumentation to report newly found paths. For every branch instruction, a hash is computed based on its address and the destination address. This hash is used as an offset into the 64KB map.
  • plot_data : time series that can be used with programs such as gnuplot to create visual representations of the fuzzer's performance over time.
  • queue/ : backups of all the input files that increased code coverage at the time they were found. Note that some of the newer files may provide the same coverage as older ones, and then some. The older ones are not removed when this happens because rechecking / caching coverage would be a pain and would bog down the fuzzing process. Depending on the binary under test, we can expect a few thousand executions per second.
  • hangs/ : inputs that caused the process to execute past a timeout limit (20ms by default).
  • crashes/ : files that generate crashes. If you want to search for bugs and not just test for coverage increase, you should compile your binary with a sanitizer (e.g.: asan). Under normal circumstances, an out-of-bounds access can go undetected unless the accessed address is unmapped, thus creating a #PF (page fault). Different sanitizers give assurances that these bugs actually get caught, but also reduce the execution speed of the tested programs, meaning slower code coverage increase.
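
As an example of the sanitizer route: AFL's compiler wrapper honors the AFL_USE_ASAN environment variable, which adds AddressSanitizer instrumentation on top of the coverage instrumentation. A sketch of rebuilding fuzzgoat with ASan would be:

```shell
pushd fuzzgoat
make clean
AFL_USE_ASAN=1 CC=afl-gcc make   # AFL_USE_ASAN makes afl-gcc pass -fsanitize=address
popd
```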

[10p] Task B - Profile AFL

Next, we will analyze the performance of afl. Using perf, we are able to specify one or more events (see man perf-list(1)) that the kernel will record only while our program under test (in this case afl) is running. When the internal event counter reaches a certain value (see the -c and -F flags in man perf-record(1)), a sample is taken. This sample can contain different kinds of information; for example, the -g option requests that a backtrace of the program be included with every sample.

Let's record some stats using unhalted CPU cycles as an event trigger, every 1k events in userspace, and including frame pointers in samples:

$ perf record -e cycles -c 1000 -g --all-user \
    afl-fuzz -i fuzzgoat/in -o afl_output -- ./fuzzgoat/fuzzgoat @@

Perf might not be able to capture samples if access to performance monitoring operations is not allowed. To open access for processes without the CAP_PERFMON, CAP_SYS_PTRACE, or CAP_SYS_ADMIN Linux capabilities, adjust (as the root user) the value of /proc/sys/kernel/perf_event_paranoid to -1:

$ sudo su
$ echo -1 > /proc/sys/kernel/perf_event_paranoid
$ exit

More information can be found here.

Leave the process running for a minute or so; then kill it with <Ctrl + C>. perf will take a few moments longer to save all collected samples in a file named perf.data; this file is read by perf script and perf report. Don't mess with it!

Let's see some raw trace output first. Then look at the perf report. The report aggregates the raw trace information and identifies stress areas.

$ perf script -i perf.data
$ perf report -i perf.data

Use perf script to identify the PID of afl-fuzz (hint: -F). Then, filter out any samples unrelated to afl-fuzz (i.e.: its child process, fuzzgoat) from the report. Then, identify the most heavily used functions in afl-fuzz. Can you figure out what they do from the source code?
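
One possible approach (the sample field names are documented in perf-script(1), and the comm filter in perf-report(1)):

```shell
# list each sampled process once, as "<comm> <pid>"
perf script -i perf.data -F comm,pid | sort -u

# restrict the report to samples whose command name is afl-fuzz
perf report -i perf.data -c afl-fuzz
```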

Make sure to include plenty of screenshots and explanations for this task :p

[10p] Task C - Flame Graph

A Flame Graph is a graphical representation of the stack traces captured by the perf profiler during the execution of a program. It provides a visual depiction of the call stack, showing which functions were active and how much time was spent in each one of them. By analyzing the flame graph generated by perf, we can identify performance bottlenecks and pinpoint areas of the code that may need optimization or further investigation.

When analyzing flame graphs, it's crucial to focus on the width of each stack frame, as it directly indicates the number of recorded events following the same sequence of function calls. In contrast, the height of the frames does not carry significant implications for the analysis and should not be the primary focus during interpretation.

Using the samples previously obtained in perf.data, generate a corresponding Flame Graph in SVG format and analyze it.

How to do:

  1. Clone the following git repo: https://github.com/brendangregg/FlameGraph.
  2. Use the stackcollapse-perf.pl Perl script to convert the perf.data output into a suitable format (it folds the perf-script output into one line per stack, with a count of the number of times each stack was seen).
  3. Generate the Flame Graph using flamegraph.pl (based on the folded data) and redirect the output to an SVG file.
  4. Open in any browser the interactive SVG graph obtained and inspect it.
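
The four steps above boil down to the following pipeline (the names out.folded and graph.svg are our choice):

```shell
git clone https://github.com/brendangregg/FlameGraph
perf script -i perf.data | ./FlameGraph/stackcollapse-perf.pl > out.folded
./FlameGraph/flamegraph.pl out.folded > graph.svg
firefox graph.svg    # or open the interactive SVG in any other browser
```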

More details can also be found here and here.

ep/labs/03/contents/tasks/ex6.txt · Last modified: 2023/10/21 17:43 by andrei.mirciu
CC Attribution-Share Alike 3.0 Unported