sniffer is designed for network troubleshooting. It can be started at any time to analyze which processes or connections are causing increases in network traffic, without loading any kernel modules. Its responsive TUI automatically adapts to terminals of all sizes.
sniffer uses gopacket to sniff interfaces and record packet information. gopacket wraps the Go port of the libpcap library and provides some additional features. One of the projects that inspired sniffer is bandwhich, which has a sophisticated interface and multiple ways to display data, but does not support BPF filters. Another is nethogs, which supports BPF filters but can only view data by process, with no per-connection or remote-address perspective. sniffer combines the advantages of those two projects and adds a new Plot mode.
Connections and Process Matching
On Linux, sniffer borrows the approach used by the ss tool, obtaining connections in the ESTABLISHED state over a netlink socket, since that is more efficient than reading the /proc/net/* files directly. Either way, per-process traffic still has to be aggregated by matching the socket inode information under /proc/<pid>/fd.
On macOS, the lsof command is invoked and its output is parsed to obtain per-process connection information. On Windows, sniffer uses the API provided by gopsutil directly.
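The Linux inode-matching step described above can be sketched as follows. This is an illustrative sketch, not sniffer's actual code: `parseSocketInode` and `socketInodes` are hypothetical names, and it assumes the standard Linux convention that socket fds under /proc/<pid>/fd are symlinks whose targets look like "socket:[12345]".

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parseSocketInode extracts the inode from an fd symlink target of the
// form "socket:[12345]", as found under /proc/<pid>/fd on Linux.
func parseSocketInode(target string) (uint64, bool) {
	const prefix = "socket:["
	if !strings.HasPrefix(target, prefix) || !strings.HasSuffix(target, "]") {
		return 0, false
	}
	ino, err := strconv.ParseUint(target[len(prefix):len(target)-1], 10, 64)
	if err != nil {
		return 0, false
	}
	return ino, true
}

// socketInodes returns the socket inodes owned by one process, read from
// its /proc/<pid>/fd directory (Linux only).
func socketInodes(pid int) []uint64 {
	dir := fmt.Sprintf("/proc/%d/fd", pid)
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil // not Linux, process gone, or no permission
	}
	var inodes []uint64
	for _, e := range entries {
		target, err := os.Readlink(dir + "/" + e.Name())
		if err != nil {
			continue
		}
		if ino, ok := parseSocketInode(target); ok {
			inodes = append(inodes, ino)
		}
	}
	return inodes
}

func main() {
	// Inspect our own process as a demo.
	fmt.Println(socketInodes(os.Getpid()))
}
```

A tool like sniffer builds an inode→pid map this way, then joins it against the inodes reported for each connection by netlink (or /proc/net/*) to attribute traffic to processes.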
sniffer relies on the libpcap library to capture user-level packets, so you need to have it installed first.
Linux / Windows
$ sudo apt-get install libpcap-dev             # Debian/Ubuntu
$ sudo yum install libpcap libpcap-devel       # CentOS/Fedora
Windows needs to have npcap installed to capture packets.
After that, install sniffer with the go get command (on Go 1.17 and later, use go install github.com/chenjiandongx/sniffer@latest instead).
$ go get -u github.com/chenjiandongx/sniffer
macOS

$ brew install sniffer
❯ sniffer -h
# A modern alternative network traffic sniffer.

Usage:
  sniffer [flags]

Examples:
  # processes mode for pid 1024,2048 in MB unit
  $ sniffer -p 1024 -p 2048 -m 2 -u MB

  # only capture the TCP protocol packets with lo,eth prefixed devices
  $ sniffer -b tcp -d lo -d eth

Flags:
  -a, --all-devices                 listen all devices if present
  -b, --bpf string                  specify string pcap filter with the BPF syntax (default "tcp or udp")
  -d, --devices-prefix stringArray  prefixed devices to monitor (default [en,lo,eth,em,bond])
  -h, --help                        help for sniffer
  -i, --interval int                interval for refresh rate in seconds (default 1)
  -l, --list                        list all devices name
  -m, --mode int                    view mode of sniffer (0: bytes 1: packets 2: processes)
  -n, --no-dns-resolve              disable the DNS resolution
  -p, --pids int32Slice             pids to watch, empty stands for all pids (default [])
  -u, --unit string                 unit of traffic stats, optional: B, Kb, KB, Mb, MB, Gb, GB (default "KB")
  -v, --version                     version for sniffer
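The -u flag distinguishes bit units (lowercase b: Kb, Mb, Gb) from byte units (uppercase B: KB, MB, GB). A minimal sketch of that conversion, assuming 1024-based units; `formatTraffic` is our own illustrative helper, not sniffer's actual code:

```go
package main

import "fmt"

// formatTraffic converts a raw byte count into one of the unit names
// accepted by the -u flag. Lowercase b means bits, uppercase B means
// bytes; divisors assume 1024-based (binary) units.
func formatTraffic(nbytes float64, unit string) string {
	divisors := map[string]float64{
		"B":  1,
		"Kb": 128,       // 1024 bits  = 128 bytes
		"KB": 1 << 10,   // 1024 bytes
		"Mb": 128 << 10,
		"MB": 1 << 20,
		"Gb": 128 << 20,
		"GB": 1 << 30,
	}
	div, ok := divisors[unit]
	if !ok {
		div, unit = 1<<10, "KB" // fall back to the flag's default
	}
	return fmt.Sprintf("%.1f%s", nbytes/div, unit)
}

func main() {
	fmt.Println(formatTraffic(2048, "KB")) // 2.0KB
	fmt.Println(formatTraffic(1024, "Kb")) // 8.0Kb
}
```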
| s | switch to the next view mode |
iperf is a tool for active measurements of the maximum achievable bandwidth on IP networks. Next we use it to generate massive traffic on the loopback device.
$ iperf -s -p 5001
$ iperf -c localhost --parallel 40 -i 1 -t 2000
sniffer vs bandwhich vs nethogs
As you can see, the CPU overhead is bandwhich > sniffer > nethogs, and the memory overhead is sniffer > nethogs > bandwhich.
   PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
128405 root      20   0  210168   5184   3596 S  31.0   0.3   1:21.69 bandwhich
128596 root      20   0 1449872  21912   8512 S  20.7   1.1   0:28.54 sniffer
128415 root      20   0   18936   7464   6900 S   5.7   0.4   0:11.56 nethogs
Comparing the stats they report, sniffer and bandwhich measure nearly the same throughput (~2.5 GB/s), while nethogs can only handle about 1.122 GB/s of packets.
Bytes Mode: displays traffic stats in bytes using the Table widget.
Packets Mode: displays traffic stats in packets using the Table widget.
Processes Mode: displays traffic stats grouped by process using the Plot widget.