hyperfine
Command-line benchmarking tool.
Features
- Warmup Runs
- Statistical Analysis
- Export Results
- Shell Selection
Why use hyperfine?
hyperfine is a benchmarking tool that accurately measures command execution time. It is more feature-rich than the built-in time command and produces statistically reliable results.
High-Precision Measurement
Runs multiple times to calculate mean and standard deviation. Automatically detects outliers.
Warmup Support
Warmup runs account for cache effects, giving more accurate results.
Comparison Feature
Compare multiple commands and statistically determine which is faster.
Export Results
Output in JSON, CSV, and Markdown formats. Easy integration with CI and documentation.
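The outlier detection mentioned above can be sketched with the modified Z-score, a standard robust statistic based on the median; this is an illustration of the idea, not hyperfine's exact implementation.

```python
import statistics

def detect_outliers(times, threshold=3.5):
    """Flag runs whose modified Z-score exceeds a threshold.

    The modified Z-score uses the median and the median absolute
    deviation (MAD), so a few extreme runs cannot skew the baseline.
    """
    median = statistics.median(times)
    mad = statistics.median(abs(t - median) for t in times)
    if mad == 0:
        return [False] * len(times)
    return [abs(0.6745 * (t - median) / mad) > threshold for t in times]

# One run (0.950 s) is far slower than the rest and gets flagged.
print(detect_outliers([0.301, 0.300, 0.302, 0.299, 0.300, 0.950]))
```

A flagged run often means a background process interfered; rerunning the benchmark on a quiet machine is usually the right response.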
Installation
# macOS (Homebrew)
brew install hyperfine
# Cargo (Rust)
cargo install hyperfine
# Arch Linux
sudo pacman -S hyperfine
# Ubuntu/Debian
wget https://github.com/sharkdp/hyperfine/releases/download/v1.18.0/hyperfine_1.18.0_amd64.deb
sudo dpkg -i hyperfine_1.18.0_amd64.deb
# Windows (Chocolatey)
choco install hyperfine
# Windows (Scoop)
scoop install hyperfine
Basic Usage
Basic Commands
Basic
# Benchmark a single command
hyperfine 'sleep 0.3'
# Compare multiple commands
hyperfine 'find . -name "*.md"' 'fd -e md'
# Execute shell commands
hyperfine 'echo "Hello" && sleep 0.1'
Output Example
Benchmark 1: sleep 0.3
  Time (mean ± σ):     300.5 ms ±   0.3 ms    [User: 0.8 ms, System: 0.4 ms]
  Range (min … max):   300.1 ms … 301.2 ms    10 runs
Displays the mean execution time, standard deviation, minimum/maximum times, and the number of runs. When several commands are benchmarked together, a Summary section additionally reports how many times faster the fastest command is than each of the others.
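The figures in this output can be recomputed from the raw per-run times; a minimal sketch (the sample times below are made up to roughly match the example):

```python
import statistics

def summarize(times_s):
    """Return (mean, stddev, min, max) in milliseconds from run times in seconds."""
    ms = [t * 1000 for t in times_s]
    return statistics.mean(ms), statistics.stdev(ms), min(ms), max(ms)

runs = [0.3005, 0.3001, 0.3012, 0.3003, 0.3006]
mean, sd, lo, hi = summarize(runs)
print(f"Time (mean ± σ):   {mean:.1f} ms ± {sd:.1f} ms")
print(f"Range (min … max): {lo:.1f} ms … {hi:.1f} ms  {len(runs)} runs")
```

The standard deviation shrinks as the number of runs grows, which is why hyperfine keeps sampling until the estimate stabilizes.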
Common Patterns
Warmup
# Add warmup runs (warm up the cache)
hyperfine --warmup 3 'cat large_file.txt'
# Specify prepare command
hyperfine --prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' 'cat large_file.txt'
# Specify cleanup command
hyperfine --cleanup 'rm -f output.txt' 'generate_output > output.txt'
Controlling Run Count
Run Count
# Specify minimum number of runs
hyperfine --min-runs 20 'command'
# Specify maximum number of runs
hyperfine --max-runs 100 'command'
# Specify exact number of runs
hyperfine --runs 50 'command'
Parameterized Benchmarks
Parameters
# Compare with varying parameters
hyperfine -P threads 1 8 'make -j {threads}'
# Specify multiple values
hyperfine -L compiler gcc,clang '{compiler} -O2 main.c'
# Combination of multiple parameters
hyperfine -L size 100,1000,10000 -L algo quick,merge './sort --algo {algo} --size {size}'
Advanced Usage
Exporting Results
Export
# Output in JSON format
hyperfine --export-json results.json 'command1' 'command2'
# Output in CSV format
hyperfine --export-csv results.csv 'command1' 'command2'
# Output in Markdown format (for documentation)
hyperfine --export-markdown results.md 'command1' 'command2'
# Output in AsciiDoc format
hyperfine --export-asciidoc results.adoc 'command1' 'command2'
Markdown Output Example
| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `fd -e md` | 12.3 ± 1.2 | 10.5 | 15.2 | 1.00 |
| `find . -name "*.md"` | 156.7 ± 8.3 | 142.1 | 178.9 | 12.74 ± 1.45 |
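The same relative-speed column can be derived from the JSON export, whose `results` array carries per-command statistics such as `command`, `mean`, and `stddev` (in seconds). A sketch, using made-up numbers matching the table above:

```python
import json

# Stand-in for the contents of a file written by `hyperfine --export-json`.
data = json.loads("""
{"results": [
  {"command": "fd -e md", "mean": 0.0123, "stddev": 0.0012},
  {"command": "find . -name \\"*.md\\"", "mean": 0.1567, "stddev": 0.0083}
]}
""")

# Sort by mean time; the relative column is each mean over the fastest mean.
results = sorted(data["results"], key=lambda r: r["mean"])
fastest = results[0]["mean"]
for r in results:
    print(f'{r["command"]}: {r["mean"] / fastest:.2f}x')
```

The JSON export also includes the raw `times` for each run, so any custom statistic can be computed downstream.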
Shell Specification
# Specify shell to use
hyperfine --shell=bash 'echo $BASH_VERSION'
hyperfine --shell=zsh 'echo $ZSH_VERSION'
# Run without shell (direct execution)
hyperfine --shell=none './my_program'
# Default shell (usually /bin/sh)
hyperfine 'command'
Error Handling
# Ignore failures and continue
hyperfine --ignore-failure 'command_that_might_fail'
# Give the command a readable name in the output
hyperfine --command-name 'slow command' 'sleep 10'
# Show output
hyperfine --show-output 'command'
Common Options
| Option | Description | Example |
|---|---|---|
| `-w, --warmup` | Number of warmup runs | `hyperfine -w 3 'cmd'` |
| `-r, --runs` | Specify number of runs | `hyperfine -r 50 'cmd'` |
| `-p, --prepare` | Command before each run | `hyperfine -p 'sync' 'cmd'` |
| `-P, --parameter-scan` | Scan numeric parameter | `hyperfine -P n 1 10 'cmd {n}'` |
| `-L, --parameter-list` | List parameter | `hyperfine -L v a,b 'cmd {v}'` |
| `--export-json` | Export to JSON format | `hyperfine --export-json r.json` |
| `--export-markdown` | Export to Markdown format | `hyperfine --export-markdown r.md` |
| `-n, --command-name` | Name the command | `hyperfine -n 'test' 'cmd'` |
Practical Examples
Comparing CLI Tools
Tool Comparison
# Compare find and fd
hyperfine --warmup 3 \
'find . -name "*.rs"' \
'fd -e rs'
# Compare grep and ripgrep
hyperfine --warmup 3 \
'grep -r "TODO" .' \
'rg "TODO"'
# Compare cat and bat
hyperfine --warmup 5 \
'cat large_file.txt > /dev/null' \
'bat --style=plain large_file.txt > /dev/null'
Program Optimization
Optimization
# Compare compiler optimization options
hyperfine --prepare 'make clean' \
-L opt O0,O1,O2,O3 \
'gcc -{opt} -o test main.c && ./test'
# Compare parallelism
hyperfine -P jobs 1 8 \
--prepare 'make clean' \
'make -j {jobs}'
# Compare algorithms
hyperfine \
--export-markdown bench.md \
'./sort_bubble' \
'./sort_quick' \
'./sort_merge'
CI/CD Usage
CI/CD
# Benchmark example in GitHub Actions
name: Benchmark
on: [push]
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install hyperfine
        run: |
          wget https://github.com/sharkdp/hyperfine/releases/download/v1.18.0/hyperfine_1.18.0_amd64.deb
          sudo dpkg -i hyperfine_1.18.0_amd64.deb
      - name: Build
        run: cargo build --release
      - name: Benchmark
        run: |
          hyperfine --warmup 3 --export-markdown bench.md './target/release/myapp'
      - name: Upload results
        uses: actions/upload-artifact@v3
        with:
          name: benchmark-results
          path: bench.md
Disk I/O Benchmark
I/O Benchmark
# Run after clearing cache (requires root)
hyperfine \
--prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null' \
--warmup 0 \
--min-runs 5 \
'cat largefile.bin > /dev/null'
# File write benchmark
hyperfine \
--cleanup 'rm -f testfile' \
'dd if=/dev/zero of=testfile bs=1M count=100'
Tips
- To avoid disk cache effects, use --warmup or clear the cache with --prepare
- Commands that run too fast (under 10 ms) have larger errors; increase --min-runs or run the command multiple times in a loop
- If you see the "outliers were detected" warning, check for background process interference
- --export-markdown output can be pasted directly into a README or PR
- Use --shell=none to measure pure execution time without shell startup overhead
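Building on the CI/CD example above, exported JSON can also gate merges on performance regressions. A hypothetical sketch: the function name, the baseline value, and the 10% tolerance are illustrative choices, while the file layout read here (`results[0]["mean"]`, in seconds) matches hyperfine's JSON export.

```python
import json

def check_regression(report_path, baseline_mean_s, tolerance=0.10):
    """Return True if the first benchmark's mean time is within
    `tolerance` (fractional) of a stored baseline, else False.

    `report_path` points at a file written by `hyperfine --export-json`.
    """
    with open(report_path) as f:
        mean = json.load(f)["results"][0]["mean"]
    return mean <= baseline_mean_s * (1 + tolerance)
```

A CI step could call this and `sys.exit(1)` on failure, making the workflow red when the benchmarked command slows down beyond the allowed margin.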