A Rust macro that makes it easy to understand the error rate, response time, and production usage of any function in your code.
Jump from your IDE to live Prometheus charts for each HTTP/RPC handler, database method, or other piece of application logic.
(Demo video: autometrics.mp4)
- The `#[autometrics]` macro instruments any function or `impl` block to track the most useful metrics (see the example below)
- Writes Prometheus queries so you can understand the data generated without knowing PromQL
- Injects links to live Prometheus charts directly into each function's doc comments
- (Coming Soon!) Grafana dashboard showing the performance of all instrumented functions
- Generates Prometheus alerting rules using SLO best practices from simple annotations in your code
- Configurable metric collection library (`opentelemetry`, `prometheus`, or `metrics`)
- Minimal runtime overhead
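As a quick illustration, here is a minimal sketch of the macro applied to a function (the `create_user` function and its types are made up for this example):

```rust
use autometrics::autometrics;

// A stand-in type used only for illustration.
#[derive(Debug)]
pub struct User {
    pub name: String,
}

// The macro records this function's call rate, error rate (derived from the
// `Result`), and latency, labeled with the function's name.
#[autometrics]
pub async fn create_user(name: String) -> Result<User, String> {
    // ... your application logic ...
    Ok(User { name })
}
```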
See Why Autometrics? for more details on the ideas behind autometrics.
To see autometrics in action:
- Install Prometheus locally
- Run the complete example: `cargo run -p example-full-api serve`
- Hover over the function names to see the generated query links (like in the image above) and try clicking on them to go straight to that Prometheus chart.
See the other examples for details on how to use the various features and integrations.
Or run the example in Gitpod.
Autometrics includes optional functions to help collect and export metrics in a format that Prometheus can scrape.
In your `Cargo.toml` file, enable the optional `prometheus-exporter` feature:

```toml
autometrics = { version = "*", features = ["prometheus-exporter"] }
```
Then, call the `global_metrics_exporter` function in your `main` function:

```rust
pub fn main() {
    let _exporter = autometrics::global_metrics_exporter();
    // ...
}
```
And create a route on your API (probably mounted under `/metrics`) that returns the following:

```rust
pub fn get_metrics() -> (StatusCode, String) {
    match autometrics::encode_global_metrics() {
        Ok(metrics) => (StatusCode::OK, metrics),
        Err(err) => (StatusCode::INTERNAL_SERVER_ERROR, format!("{:?}", err)),
    }
}
```
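For instance, here is a rough sketch of wiring this together, assuming the axum 0.6 web framework and the tokio runtime (neither is required by autometrics; any framework that can serve the string works), reusing the `get_metrics` handler above:

```rust
use axum::{routing::get, Router};

#[tokio::main]
async fn main() {
    // Set up the metrics exporter before handling any requests.
    let _exporter = autometrics::global_metrics_exporter();

    // Expose the `get_metrics` handler defined above at /metrics
    // so Prometheus can scrape it.
    let app = Router::new().route("/metrics", get(|| async { get_metrics() }));

    axum::Server::bind(&"0.0.0.0:3000".parse().unwrap())
        .serve(app.into_make_service())
        .await
        .unwrap();
}
```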
Autometrics can generate alerting rules for Prometheus based on simple annotations in your code. The specific rules are based on Sloth and the Google SRE Workbook section on Service-Level Objectives (SLOs).
In your `Cargo.toml` file, enable the optional `alerts` feature:

```toml
autometrics = { version = "*", features = ["alerts"] }
```
Then, pass the `alerts` argument to the `autometrics` macro for 1-3 top-level functions:

```rust
#[autometrics(alerts(success_rate = 99.9%, latency(99% <= 200ms)))]
pub async fn handle_http_requests(req: Request) -> Result<Response, Error> {
    // ...
}
```
Use the `generate_alerts` function to produce the Prometheus alerting rules YAML:

```rust
fn print_prometheus_alerts() {
    println!("{}", autometrics::generate_alerts());
}
```
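Alternatively, a small sketch of writing the generated rules to a file (the file name here is illustrative) that Prometheus can then load through its `rule_files` configuration:

```rust
use std::{fs, io};

// Write the generated alerting rules YAML to a file instead of stdout.
fn write_prometheus_alerts() -> io::Result<()> {
    fs::write(
        "autometrics-alerts.yml",
        autometrics::generate_alerts().to_string(),
    )
}
```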
Take a look at the alerts example to see how to integrate generating the alert definitions into your Clap-based binary.
Refer to the Prometheus docs section on Alerting for more details on configuring Prometheus to use the alerting rules and on how to use Alertmanager to de-duplicate alerts.
By default, Autometrics creates Prometheus query links that point to `http://localhost:9090`.

You can configure a custom Prometheus URL using a build-time environment variable in your `build.rs` file:
```rust
// build.rs
fn main() {
    let prometheus_url = "https://your-prometheus-url.example";
    println!("cargo:rustc-env=PROMETHEUS_URL={prometheus_url}");
}
```
Note that when using Rust Analyzer, you may need to reload the workspace in order for URL changes to take effect.
- `alerts` - generate Prometheus alerting rules to notify you when a given function's error rate or latency is too high
- `prometheus-exporter` - exports a Prometheus metrics collector and exporter (compatible with any of the Metrics Libraries)
Configure the crate that autometrics will use to produce metrics by using one of the following feature flags (a `Cargo.toml` sketch for switching backends follows the list):

- `opentelemetry` (enabled by default) - use the opentelemetry crate for producing metrics
- `metrics` - use the metrics crate for producing metrics
- `prometheus` - use the prometheus crate for producing metrics
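For example, a sketch of selecting the `metrics` backend instead of the default (this assumes the backends are mutually exclusive Cargo features as listed above, so the default feature is disabled first):

```toml
[dependencies]
# Switch from the default `opentelemetry` backend to the `metrics` crate.
autometrics = { version = "*", default-features = false, features = ["metrics"] }
```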