I am trying to get node exporter to stop reporting anything related to br- or veth interfaces.
In my prometheus-node-exporter config I have
NODE_EXPORTER_ARGS="--collector.processes --collector.systemd --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|run|var/lib/docker/.+|var/lib/kubelet/.+)($|/) --collector.netdev.device-exclude=^(veth.+|br-.+|enp3s0.*|docker[0-9])$"
Yet no matter what, these still get exposed via metrics. Example:

curl -s localhost:9100/metrics | grep veth
--snipped--
node_network_receive_packets_total{device="vethe8f5975"} 4937
node_network_receive_packets_total{device="vetheeb4c80"} 2198
node_network_receive_packets_total{device="vethf1e51d0"} 578
node_network_speed_bytes{device="veth0919012"} 1.25e+09
node_network_speed_bytes{device="veth127f34a"} 1.25e+09
node_network_speed_bytes{device="veth4789342"} 1.25e+09
--snipped-- etc
Same results with br interfaces.
Am I making some stupid mistake with the regex?
systemctl status output:
systemctl status prometheus-node-exporter
● prometheus-node-exporter.service - Prometheus exporter for machine metrics
     Loaded: loaded (/usr/lib/systemd/system/prometheus-node-exporter.service; enabled; preset: disabled)
     Active: active (running) since Sun 2025-03-16 13:52:15 EDT; 30s ago
 Invocation: ca5d9c1098d54e25a6e7a8ec71bc8004
   Main PID: 33886 (prometheus-node)
      Tasks: 6 (limit: 28380)
     Memory: 13.6M (peak: 14.2M)
        CPU: 144ms
     CGroup: /system.slice/prometheus-node-exporter.service
             └─33886 /usr/bin/prometheus-node-exporter --collector.processes --collector.systemd "--collector.filesystem.mount-points-exclude=^/(dev|proc|sys|run|var/lib/docker/.+|var/lib/kubelet/.+)(\$|/)" "--collector.netdev.device-exclude=^(veth.+|br-.+|enp3s0.*|docker[0-9])\$"

Mar 16 13:52:15 beelink prometheus-node-exporter[33886]: ts=2025-03-16T17:52:15.651Z caller=node_exporter.go:118 level=info collector=time
Mar 16 13:52:15 beelink prometheus-node-exporter[33886]: ts=2025-03-16T17:52:15.651Z caller=node_exporter.go:118 level=info collector=timex
Mar 16 13:52:15 beelink prometheus-node-exporter[33886]: ts=2025-03-16T17:52:15.651Z caller=node_exporter.go:118 level=info collector=udp_queues
Mar 16 13:52:15 beelink prometheus-node-exporter[33886]: ts=2025-03-16T17:52:15.651Z caller=node_exporter.go:118 level=info collector=uname
Mar 16 13:52:15 beelink prometheus-node-exporter[33886]: ts=2025-03-16T17:52:15.651Z caller=node_exporter.go:118 level=info collector=vmstat
Mar 16 13:52:15 beelink prometheus-node-exporter[33886]: ts=2025-03-16T17:52:15.651Z caller=node_exporter.go:118 level=info collector=watchdog
Mar 16 13:52:15 beelink prometheus-node-exporter[33886]: ts=2025-03-16T17:52:15.651Z caller=node_exporter.go:118 level=info collector=xfs
Mar 16 13:52:15 beelink prometheus-node-exporter[33886]: ts=2025-03-16T17:52:15.651Z caller=node_exporter.go:118 level=info collector=zfs
Mar 16 13:52:15 beelink prometheus-node-exporter[33886]: ts=2025-03-16T17:52:15.652Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9100
Mar 16 13:52:15 beelink prometheus-node-exporter[33886]: ts=2025-03-16T17:52:15.652Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9100