lmbench is a series of micro-benchmarks intended to measure basic
operating system and hardware system metrics. The benchmarks fall into
three general classes: bandwidth, latency, and ``other''.
Most of the
benchmarks use a standard timing harness described in timing(3)
and have a few standard options:
-P <parallelism>
        specifies the number of benchmark processes to run in parallel.
        This is primarily useful when measuring the performance of SMP
        or distributed computers, and can be used to evaluate the
        system's scalability.
-W <warmup>
        is the minimum number of microseconds the benchmark should
        exercise the benchmarked capability before it begins measuring
        performance. Again, this is primarily useful for SMP or
        distributed systems; it is intended to give the process
        scheduler time to "settle" and migrate processes to other
        processors. By measuring performance over various warmup
        periods, users may evaluate the scheduler's responsiveness.
-N <repetitions>
        is the number of measurements that the benchmark should take.
        This allows lmbench to provide greater or lesser statistical
        strength to the results it reports. The default number of
        repetitions is 11.
Data movement is fundamental to the performance of most computer
systems. The bandwidth measurements are intended to show how fast the
system can move data. The results of the bandwidth metrics can be
compared, but care must be taken to understand exactly what is being
compared. The bandwidth benchmarks can be reduced to two main
components: operating system overhead and memory speeds. The bandwidth
benchmarks report their results as megabytes moved per second, but
please note that the data moved is not necessarily the same as the
memory bandwidth used to move the data. Consult the individual man
pages for more details.
Each of the bandwidth benchmarks is listed below with a brief overview
of the intent of the benchmark.
bw_file_rd
        reading and summing of a file via the read(2) interface.
bw_mem
        memory reading and summing.
bw_mmap_rd
        reading and summing of a file via the memory-mapping mmap(2)
        interface.
bw_pipe
        reading of data via a pipe.
bw_tcp
        reading of data via a TCP/IP socket.
bw_unix
        reading data from a UNIX socket.
Control messages are also fundamental to the performance of most
computer systems. The latency measurements are intended to show how
fast a system can be told to do some operation. The results of the
latency metrics can, for the most part, be compared to each other. In
particular, the pipe, rpc, tcp, and udp transactions are all identical
benchmarks carried out over different system abstractions. Latency
numbers here should mostly be in microseconds per operation.
lat_connect
        the time it takes to establish a TCP/IP connection.
lat_ctx
        context switching; the number and size of processes is varied.
lat_fcntl
        fcntl file locking.
lat_fifo
        ``hot potato'' transaction through a UNIX FIFO.
lat_fs
        creating and deleting small files.
lat_pagefault
        the time it takes to fault in a page from a file.
lat_mem_rd
        memory read latency (accurate to the ~2-5 nanosecond range,
        reported in nanoseconds).
lat_mmap
        time to set up a memory mapping.
lat_ops
        basic processor operations, such as integer XOR, ADD, SUB, MUL,
        DIV, and MOD, and float ADD, MUL, DIV, and double ADD, MUL, DIV.
lat_pipe
        ``hot potato'' transaction through a Unix pipe.
lat_proc
        process creation times (various sorts).
lat_rpc
        ``hot potato'' transaction through Sun RPC over UDP or TCP.
lat_sig
        signal installation and catch latencies. Also protection fault
        signal handling costs.
lat_syscall
        nontrivial entry into the system.
lat_tcp
        ``hot potato'' transaction through TCP.
lat_udp
        ``hot potato'' transaction through UDP.
lat_unix
        ``hot potato'' transaction through UNIX sockets.
lat_unix_connect
        the time it takes to establish a UNIX socket connection.
mhz
        processor cycle time.
tlb
        TLB size and TLB miss latency.
line
        cache line size (in bytes).
cache
        cache statistics, such as line size, cache sizes, and memory
        parallelism.
stream
        John McCalpin's STREAM benchmark.
par_mem
        memory subsystem parallelism: how many requests the memory
        subsystem can service in parallel, which may depend on the
        location of the data in the memory hierarchy.