Pure Unix pipeline approach to monitoring file handles. No apps needed.
Built with 1975-era technology: sh, awk, grep, sort, cut, and date. If it worked on a PDP-11, it works here.
Each tool does ONE thing and does it well. Want more features? Pipe them together. Need something custom? Write a 10-line awk script. This is how Unix was meant to be used.
No frameworks. No dependencies. No build steps. No apps. Just text streams and pipes.
chmod +x collect graph avg spikes top timeline

Add to crontab (every 5 minutes):
*/5 * * * * /Users/rmelton/projects/robertmeta/graph-handles/collect >> /Users/rmelton/projects/robertmeta/graph-handles/handles.csv 2>&1
Or run a single collection by hand:

./collect > handles.csv

cat handles.csv | ./avg 5

Output:
Count: 80
Average: 403.24
Min: 220
Max: 860
Sum: 32259
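None of these filters is magic. A sketch of an avg-style filter, assuming its argument is the CSV column to summarize (the shipped script may differ):

#!/bin/sh
# Hypothetical sketch, not the shipped ./avg: summarize one CSV column from stdin.
col=${1:-5}
awk -F, -v c="$col" '
  NR == 1 { min = $c; max = $c }
  $c < min { min = $c }
  $c > max { max = $c }
  { sum += $c; n++ }
  END {
    if (n) printf "Count: %d\nAverage: %.2f\nMin: %d\nMax: %d\nSum: %d\n", n, sum / n, min, max, sum
  }'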
grep "Spotify" handles.csv | ./graph 5 60Output:
2025-12-12 22:11:35 ####################################### 615
2025-12-12 22:12:19 ######################################## 617
2025-12-12 22:12:21 ####################################### 614
2025-12-12 22:12:24 ####################################### 614
Max: 617
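A graph-style filter is barely longer. A sketch, assuming the first argument is the column to plot, the second is the maximum bar width, and the label comes from column 2 (the datetime in handles.csv):

#!/bin/sh
# Hypothetical sketch, not the shipped ./graph: scale each value against the
# column maximum and draw a proportional bar of # characters.
col=${1:-5}; width=${2:-40}
awk -F, -v c="$col" -v w="$width" '
  { val[NR] = $c; label[NR] = $2; if ($c > max) max = $c }
  END {
    for (i = 1; i <= NR; i++) {
      n = (max > 0) ? int(val[i] / max * w) : 0
      bar = ""
      for (j = 0; j < n; j++) bar = bar "#"
      printf "%s %s %d\n", label[i], bar, val[i]
    }
    printf "Max: %d\n", max
  }'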
cat handles.csv | ./spikes 5 2

Output:
2025-12-12 22:11:35: 860 (2.1x avg)
2025-12-12 22:12:19: 631 (2.0x avg)
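The 2.0x flag on 631 suggests the average is computed per process rather than over the whole file (631 is only about 1.6x the overall average of 403). Under that assumption, a sketch of a spikes-style filter, taking a column and a threshold multiplier:

#!/bin/sh
# Hypothetical sketch, not the shipped ./spikes: flag rows whose value exceeds
# the threshold multiplier times that process's own average.
col=${1:-5}; thresh=${2:-2}
awk -F, -v c="$col" -v t="$thresh" '
  { val[NR] = $c; when[NR] = $2; proc[NR] = $4; sum[$4] += $c; cnt[$4]++ }
  END {
    for (i = 1; i <= NR; i++) {
      avg = sum[proc[i]] / cnt[proc[i]]
      if (avg > 0 && val[i] > t * avg)
        printf "%s: %d (%.1fx avg)\n", when[i], val[i], val[i] / avg
    }
  }'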
cat handles.csv | ./top 5 5

Output:
1765595544,2025-12-12 22:12:24,20073,com.apple,860
1765595541,2025-12-12 22:12:21,20076,com.apple,860
1765595539,2025-12-12 22:12:19,20081,com.apple,860
1765595495,2025-12-12 22:11:35,20100,com.apple,860
1765595544,2025-12-12 22:12:24,20073,Stream,631
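top is little more than sort piped into head. A plausible sketch, assuming the arguments are the column to rank by and the number of rows to keep:

#!/bin/sh
# Hypothetical sketch, not the shipped ./top: rank stdin by one CSV column,
# numerically, descending, and keep the first N rows.
col=${1:-5}; n=${2:-10}
sort -t, -k"$col","$col" -rn | head -n "$n"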
cat handles.csv | ./timeline 5 60

Output:
2025-12-12 22:00,32259,403
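timeline is a grouping pass in awk. The sketch below hardcodes hourly buckets for simplicity (the second argument to the shipped script, 60 or 5 in these examples, presumably selects the bucket size); it prints bucket,sum,average for the chosen column:

#!/bin/sh
# Hypothetical sketch, not the shipped ./timeline: group rows by the hour in
# the datetime column and emit bucket,sum,average for one CSV column.
col=${1:-5}
awk -F, -v c="$col" '
  { hour = substr($2, 1, 13) ":00"; sum[hour] += $c; cnt[hour]++ }
  END { for (h in sum) printf "%s,%d,%d\n", h, sum[h], sum[h] / cnt[h] }' | sort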
# Show graph of top process
cat handles.csv | ./top 5 1 | ./graph 5
# Find what spiked and graph it
proc=$(cat handles.csv | ./spikes 5 3 | head -1 | cut -d: -f1)
grep "$proc" handles.csv | ./graph 5
# Hourly timeline of total handles
cat handles.csv | ./timeline 3 60 | ./graph 3

Example composition output:
# Find top process and graph it
grep "com.apple" handles.csv | ./graph 5 40Output:
2025-12-12 22:11:35 ######################################## 860
2025-12-12 22:12:19 ######################################## 860
2025-12-12 22:12:21 ######################################## 860
2025-12-12 22:12:24 ######################################## 860
Max: 860
The CSV columns are:

timestamp,datetime,total_handles,process_name,process_handles
Example data:
1765595495,2025-12-12 22:11:35,20100,com.apple,860
1765595495,2025-12-12 22:11:35,20100,Stream,631
1765595495,2025-12-12 22:11:35,20100,Spotify,615
1765595495,2025-12-12 22:11:35,20100,MTLCompil,602
1765595495,2025-12-12 22:11:35,20100,Discord,471
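Any of the six tools can slice this directly. cut, for instance, pulls columns straight out of the stream:

# process_name,process_handles pairs
cut -d, -f4,5 handles.csv

# distinct process names
cut -d, -f4 handles.csv | sort -u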
These tools output CSV. That means you can extend them trivially:
# Email yourself when handles spike
cat handles.csv | ./spikes 5 3 | \
  mail -s "File handle spike detected" you@example.com

# Feed into gnuplot for publication-quality graphs
cat handles.csv | ./timeline 5 60 | \
awk -F, '{print $1,$3}' | gnuplot -e "plot '-' with lines"
# Convert to JSON for APIs
cat handles.csv | ./top 5 10 | \
awk -F, '{printf "{\"time\":\"%s\",\"proc\":\"%s\",\"handles\":%d}\n",$2,$4,$5}'
# Store in any database
cat handles.csv | while IFS=, read ts dt tot proc cnt; do
sqlite3 metrics.db "INSERT INTO handles VALUES($ts,'$proc',$cnt)"
done

# Find processes that grew over time
awk -F, '{
if (!start[$4]) start[$4] = $5
end[$4] = $5
}
END {
for (p in start) {
growth = end[p] - start[p]
if (growth > 100) printf "%s: +%d\n", p, growth
}
}' handles.csv
# Calculate rate of change
awk -F, '{
if (last[$4]) {
rate = ($5 - last[$4]) / ($1 - lasttime[$4])
if (rate > 1) printf "%s: %.2f handles/sec\n", $4, rate
}
last[$4] = $5
lasttime[$4] = $1
}' handles.csv

# Correlate with memory usage
join -t, -1 4 -2 1 <(./collect | sort -t, -k4,4) \
  <(ps aux | awk '{print $11","$4}' | sort -t, -k1,1)
# Watch for leaks in real-time
watch -n 5 "cat handles.csv | ./timeline 5 5 | tail -10 | ./graph 2 60"

The possibilities are endless because it's just text. Any tool from the last 50 years can process it.
- sh - Bourne shell (1977, but close enough)
- awk - Pattern scanning and text processing (1977)
- grep - Text search (1974)
- sort - Line sorting (1971)
- cut - Column extraction (1984, but the concept existed earlier)
- date - Date/time formatting (1970s)
No external dependencies. No package managers. No build tools. This could run on original Unix v7.
Each tool does ONE thing. Compose with pipes. This approach has worked since the 70s and will work for the next 50 years.
Modern "observability platforms" do the same thing, but with:
- 200MB Docker images
- Cloud dependencies
- Subscription fees
- Vendor lock-in
- Breaking changes every 6 months
These 6 shell scripts do what you need. When you need more, you extend them. In 10 minutes. With awk.
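For example, a hypothetical one-liner that reports the peak handle count ever recorded for each process, worst first:

awk -F, '$5 > max[$4] { max[$4] = $5 } END { for (p in max) printf "%s,%d\n", p, max[p] }' handles.csv | sort -t, -k2 -rn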
MIT License - see LICENSE file for details.