

03. [30p] bpftrace

The extended Berkeley Packet Filter (eBPF) is an under-represented technology in CS curricula. Its precursor, BPF, has been around since 1992, and it has served multiple purposes over the years. As a tl;dr, what you need to know about eBPF is that it is a purely virtual instruction set, meaning that no hardware implements it. eBPF programs can be uploaded to the kernel, where they are JIT-compiled to native machine code and become callable by other kernel components.

The question is: why would we go through all this trouble instead of using a Linux Kernel Module (LKM)? Unlike LKMs, eBPF programs have a simpler structure and can be more easily verified by the kernel. Before JIT compilation, the kernel must ensure their safety by enforcing certain properties. For example, eBPF programs are guaranteed to terminate. How is this property checked and enforced? By making sure that eBPF programs have no backward jumps. As you can imagine, this makes even writing a simple for loop a challenge.
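bpftrace sidesteps this restriction with its unroll statement, which duplicates the loop body at compile time so that no backward jump is ever emitted. A minimal sketch (the iteration count must be a compile-time constant):

# prints the message three times; the "loop" is fully unrolled in the bytecode
$ sudo bpftrace -e 'BEGIN { unroll(3) { printf("no back jumps here\n"); } exit(); }'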

Initially, BPF (the "extended" variant was merged into the Linux kernel ca. 2014, widening its registers to 64 bits to match modern architectures) was used as a filtering criterion for network packet captures, limiting the amount of data copied to a userspace process for analysis. It is still used this way to this day. Try running tcpdump <expression> and adding the -d flag. Instead of actually listening for packets, this will dump the BPF program that tcpdump would otherwise compile from that expression and upload to the kernel. That program is invoked for each packet and decides whether the tcpdump process should receive a copy of it.
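For example, the filter expression ip compiles down to a handful of classic BPF instructions that check the EtherType field for IPv4 (the exact listing may vary between tcpdump versions):

$ sudo tcpdump -d ip
(000) ldh      [12]
(001) jeq      #0x800           jt 2    jf 3
(002) ret      #262144
(003) ret      #0

A nonzero ret value is the number of bytes of the matched packet copied to userspace; ret #0 rejects the packet.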

More recently (since approx. 2012), eBPF has been used in cloud native solutions such as Cilium for profiling, resource observability and network policy enforcement. Technologies such as these have long been used internally by Netflix and Meta and are now becoming increasingly relevant. You can find more information about this topic in Cilium: Up and Running, a recent book released by Isovalent, a company specializing in microservice architectures that was acquired by Cisco in 2024 to help improve their inter-cloud security technologies.

[0p] Task A - Hello World

bpftrace is a tool built around a high-level scripting language that is compiled into eBPF programs. It is similar in spirit to a tcpdump expression, but it can implement more complex logic and can be used to instrument kernel functions. After installing the package, try running it on your system (sudo may be required):

$ bpftrace -e 'BEGIN { printf("hello world\n"); }'

A bpftrace script consists of multiple probes. Each probe is attached to a specific hook point available in the kernel and has a function body where the acquired data can be processed (e.g., incrementing a counter). BEGIN is a special type of probe that has no corresponding symbol in the kernel; instead, it is executed once, when the program starts. This is useful for initializing global counters, for instance.
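As a quick sketch, a BEGIN probe can record the start time so that an END probe (its counterpart, run on shutdown) can report how long the script was loaded; @start_ns is an arbitrary map name chosen for this example:

# nsecs is a built-in timestamp in nanoseconds; interrupt with Ctrl + C
$ sudo bpftrace -e 'BEGIN { @start_ns = nsecs; }
                    END   { printf("traced for %d ms\n", (nsecs - @start_ns) / 1000000); clear(@start_ns); }'

The clear() call prevents bpftrace from also dumping the raw map contents at exit.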

Moving forward, you may find it useful to keep the bpftrace language documentation open in another tab.

[5p] Task B - Trace read() syscalls

By running bpftrace -l, we get a list of all available probes. Each probe's name is a sequence of terms separated by :. The first term defines what type of probe it is, while the final term is the actual probe name. Here is a list of probe types, to give you an idea of what can be monitored with bpftrace:

  • kprobe: Attaches to any place inside a kernel function in a manner similar to breakpoints in gdb.
  • fentry: Attaches to the entry of a kernel function. Safer and faster than kprobes.
  • tracepoint: Developer placed hooks with user-friendly, structured arguments to inspect.
  • rawtracepoint: Faster than tracepoints but provides raw arguments. Requires more knowledge of what you're monitoring.
  • hardware: Hooks into CPU performance counters (remember the PMC task in the CPU monitoring lab).
  • software: Subscribes to software-generated perf events. Yes, you can do perf sampling based on number of TCP packets sent, not just cache misses.
  • iter: Iterates over kernel data structures. Not event driven and still experimental due to locking restrictions for eBPF.
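bpftrace -l also accepts a glob pattern, which is handy for locating the exact probe you need. For example, to find the read()-related syscall tracepoints:

$ sudo bpftrace -l 'tracepoint:syscalls:sys_*_read'
tracepoint:syscalls:sys_enter_read
tracepoint:syscalls:sys_exit_read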

For this task, we are going to attach a probe to the sys_enter_read tracepoint and print the process name for each invocation:

# NOTE: you can shorten "tracepoint" to just "t"
$ bpftrace -e 'tracepoint:syscalls:sys_enter_read { printf("%s\n", comm); }'

Notice how we use the built-in variable comm, which automatically resolves to the executable name, to find out which process performed a read() syscall.

[10p] Task C - Filter read() syscalls

For this task, try to modify the previous one-liner to only print the comm and pid of the processes that have performed an invalid read() syscall (i.e., one whose return value is negative). For this, you will have to use the args built-in to access the return value. Note, however, that the return value is not available in the entry hook, but only in the exit hook.

What errno codes have been returned? What do these errors mean?

You can use bpftrace -lv to get a detailed description of the args attributes that are available. For example:

$ bpftrace -lv sys_enter_read
    tracepoint:syscalls:sys_enter_read
        int __syscall_nr
        unsigned int fd
        char * buf
        size_t count
 
$ bpftrace -lv sys_exit_read
    tracepoint:syscalls:sys_exit_read
        int __syscall_nr
        long ret

There are two methods of specifying filters:

  • An if statement inside the action block of the probe.
  • A predicate specified between the probe name and the action block.
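Both forms are sketched below on a different condition (reads that request more than 4096 bytes), so the structure is clear without giving away the task:

# 1) if statement inside the action block
$ sudo bpftrace -e 't:syscalls:sys_enter_read { if (args.count > 4096) { printf("%s\n", comm); } }'

# 2) predicate between the probe name and the action block
$ sudo bpftrace -e 't:syscalls:sys_enter_read /args.count > 4096/ { printf("%s\n", comm); }'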

[10p] Task D - Count read bytes

In this task, we are going to count how much data each application has read while our bpftrace script has been running. For this, we are going to use eBPF maps. These maps are memory shared between the JIT-compiled programs resident in kernel space and the user space applications that collect the data gathered by the probes. In this case, the user space application is the bpftrace program itself.

In the bpftrace scripting language, maps are identified by a unique name prefixed by the @ symbol. Optionally, a map name can be followed by a […], effectively turning the map into a hash map. You can use these maps without declaring them in a BEGIN block, unless you want to initialize them with non-zero values. For example, incrementing the amount of data read on a per-application basis can be as simple as:

@bytes_read[comm] += args.ret

Make sure you filter out negative return values and execute your bpftrace script. Let it run for a few seconds, then interrupt it via a SIGINT (i.e., Ctrl + C). When unloading the probes and before terminating the process, all maps will be printed to stdout.
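A related sketch that counts invocations instead of bytes: the count() map function tallies how many times each key was hit, and the resulting map is dumped automatically when you interrupt the script:

$ sudo bpftrace -e 't:syscalls:sys_enter_read { @reads[comm] = count(); }'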

Periodic statistics

Let's say you want to display these statistics every 2 seconds and reset the counters after each print. Make it feel more like vmstat.

Use the interval probe to achieve this. You can print() the map and then clear() it to reset its contents.
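One possible shape for this, assuming a @bytes_read map as in the snippet above (interval:s:2 fires every 2 seconds):

$ sudo bpftrace -e 't:syscalls:sys_exit_read /args.ret > 0/ { @bytes_read[comm] += args.ret; }
                    interval:s:2 { print(@bytes_read); clear(@bytes_read); }'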

[5p] Task E - Built-in histogram function

Use the hist() bpftrace map function to visualize the distribution of bytes read across read() calls. The data that you visualize is not the total number of bytes read, but how many read() calls returned a value that falls within each log2 bucket.
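As a reference for the syntax, here is the same idea applied to the requested size rather than the returned one (a sketch, not the task solution):

$ sudo bpftrace -e 't:syscalls:sys_enter_read { @requested = hist(args.count); }'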

ep/labs/05/contents/tasks/ex3.1774909522.txt.gz · Last modified: 2026/03/31 01:25 by radu.mantu
CC Attribution-Share Alike 3.0 Unported