The extended Berkeley Packet Filter (eBPF) is an under-represented technology in CS curricula. Its classic BPF ancestor has been around since the early 1990s, and the technology has served multiple purposes over the years. As a tl;dr, what you need to know about eBPF is that it is a purely virtual instruction set, meaning that no hardware implements it. eBPF programs can be loaded into the kernel, where they are JIT-compiled to native machine code and become callable by other kernel components.
The question is: why would we go through all this trouble instead of using a Linux Kernel Module (LKM)? Unlike LKMs, eBPF programs have a simpler structure and can be more easily verified by the kernel. Before they are JIT-compiled, the kernel must ensure their safety by enforcing certain properties. For example, eBPF programs are guaranteed to terminate. How is this property checked and enforced? By making sure that eBPF programs have no backward jumps. As you can imagine, this makes even writing a simple for loop a challenge.
Initially, BPF (the "extended" part was added ca. 2014, when the instruction set was redesigned around 64-bit registers) was used as a filtering criterion for network packet captures, limiting the amount of data copied to a userspace process for analysis. This mechanism is still in use today. Try running tcpdump <expression> and adding the -d flag. Instead of actually listening for packets, this will dump the BPF program that tcpdump would otherwise compile from that expression and upload to the kernel. That program is invoked for each packet and decides whether the tcpdump process should receive a copy of it.
More recently (since approx. 2012), eBPF has been used in cloud-native solutions such as Cilium for profiling, resource observability and network policy enforcement. Technologies such as these have long been used internally by Netflix and Meta and are now becoming increasingly relevant. You can find more information about this topic in Cilium: Up and Running, a recent book released by Isovalent, a company specializing in microservice architectures that was acquired by Cisco in 2024 to help improve their inter-cloud security technologies.
bpftrace provides a high-level scripting language whose scripts are compiled into eBPF programs. This is similar to a tcpdump expression, but it can implement more complex logic and can be used to instrument kernel functions. After installing the package, try running it on your system (sudo may be required):
$ bpftrace -e 'BEGIN { printf("hello world\n"); }'
A bpftrace script consists of multiple probes. Each probe is given a specific hook point available in the kernel and a function body where the acquired data can be processed (e.g., incrementing a counter). BEGIN is a special type of probe that has no corresponding symbol in the kernel; instead, it is executed once, when the program starts. This is useful for initializing global counters, for instance.
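As an illustration, here is a minimal sketch (the @reads map name is our own choice) in which a BEGIN probe seeds a global counter that a second probe then updates:

```
BEGIN
{
	/* runs once, at program start: initialize a global counter */
	@reads = 0;
}

tracepoint:syscalls:sys_enter_read
{
	/* runs on every read() entry */
	@reads++;
}
```

Since bpftrace maps default to zero, the BEGIN block here is strictly optional; it becomes necessary when you want a non-zero starting value.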
By running bpftrace -l, we get a list of all available probes. Each probe name is a sequence of terms separated by :. The first term defines the probe type, while the final term is the actual probe name. Here is a list of probe types, to get an idea of what can be monitored with bpftrace:
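You can also pass -l a pattern with wildcards to narrow the listing. For example (the exact set of probes depends on your kernel version and configuration):

```
$ bpftrace -l 'tracepoint:syscalls:sys_enter_exec*'
tracepoint:syscalls:sys_enter_execve
tracepoint:syscalls:sys_enter_execveat
```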
For this task, we are going to attach a probe to the sys_enter_read tracepoint and print the process name for each invocation:
# NOTE: you can shorten "tracepoint" to just "t"
$ bpftrace -e 'tracepoint:syscalls:sys_enter_read { printf("%s\n", comm); }'
Notice how we use the built-in variable comm, which automatically resolves to the executable name, to find out which process performed a read() syscall.
For this task, try to modify the previous one-liner to only print the comm and pid of the processes that have performed an invalid read() syscall (i.e., one whose return value is negative). For this, you will have to use the args built-in to access the return value. Note, however, that the return value is not available in the entry hook, but only in the exit hook.
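One possible shape for such a probe (a sketch, not necessarily the output format you want) attaches to the exit tracepoint and uses a /.../ predicate to filter on the return value:

```
tracepoint:syscalls:sys_exit_read
/args.ret < 0/
{
	/* pid is another built-in variable, like comm */
	printf("%s (pid %d) -> %d\n", comm, pid, args.ret);
}
```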
What errno codes have been returned? What do these errors mean?
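Hint: on Linux, a failing syscall returns the negated errno value, so a ret of -11 means errno 11. One way to decode such a code without leaving the shell is via Python's errno module:

```shell
$ python3 -c 'import errno, os; print(errno.errorcode[11], "-", os.strerror(11))'
EAGAIN - Resource temporarily unavailable
```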
Use bpftrace -lv to get a detailed description of the args attributes that are available. For example:
$ bpftrace -lv sys_enter_read
tracepoint:syscalls:sys_enter_read
    int __syscall_nr
    unsigned int fd
    char * buf
    size_t count

$ bpftrace -lv sys_exit_read
tracepoint:syscalls:sys_exit_read
    int __syscall_nr
    long ret
In this task, we are going to count how much data each application has read while our bpftrace script has been running. For this, we are going to use eBPF maps. These maps are regions of memory shared between the JIT-compiled programs resident in kernel space and the user-space applications that need to collect the data gathered by the probes. In this case, the user-space application is the bpftrace process itself.
In bpftrace's scripting language, maps are identified by a unique name prefixed with the @ symbol. Optionally, a map name can be followed by a [...] index, effectively turning the map into a hash map. You can use these maps without declaring them in a BEGIN block, unless you want to initialize them with non-zero values. For example, incrementing the amount of data read on a per-application basis can be as simple as:
@bytes_read[comm] += args.ret
Make sure you filter out negative return values and execute your bpftrace script. Let it run for a few seconds, then interrupt it with SIGINT (i.e., Ctrl + C). When the probes are unloaded and before the process terminates, all maps will be printed to stdout.
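Putting the pieces together, the whole script might look something like this (a sketch under the assumptions above; @bytes_read is our own map name):

```
tracepoint:syscalls:sys_exit_read
/args.ret > 0/
{
	/* only successful reads contribute to the per-process total */
	@bytes_read[comm] += args.ret;
}
```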
Let's say you want to display these statistics every 2 seconds and reset the counters after each print. Make it feel more like vmstat.
Use the interval probe to achieve this. You can print() the map and then clear() it to reset its contents.
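Assuming the map from the previous task is named @bytes_read, the periodic report could be sketched as:

```
interval:s:2
{
	/* fires every 2 seconds: dump the map, then reset it */
	print(@bytes_read);
	clear(@bytes_read);
}
```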
Use bpftrace's hist() map function to visualize the distribution of the number of bytes read per read() call. Note that what you visualize is not the total number of bytes read, but how many read() calls returned a value that falls within each log2 bucket.
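A minimal sketch (the @dist map name is our own choice):

```
tracepoint:syscalls:sys_exit_read
/args.ret >= 0/
{
	/* hist() sorts each return value into a power-of-two bucket */
	@dist = hist(args.ret);
}
```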