Commit: Added gRPC telemetry instrumentation example. (#163)
* Added gRPC telemetry instrumentation example.

* Added more comprehensive signal handler that covers more signals i.e. SIGINT, SIGTERM, SIGQUIT and that works on Windows, Linux, and Mac.

Signed-off-by: Marvin Hansen <marvin.hansen@gmail.com>

* Formatted code with rustfmt and updated Readme

Signed-off-by: Marvin Hansen <marvin.hansen@gmail.com>

* Improved docs and code formatting.

Signed-off-by: Marvin Hansen <marvin.hansen@gmail.com>

* swap around `async_trait` and `autometrics` attributes

---------

Signed-off-by: Marvin Hansen <marvin.hansen@gmail.com>
Co-authored-by: Mari <me@cutegirl.tech>
marvin-hansen and mellowagain committed Dec 4, 2023
1 parent 1c6ae63 commit fb06b33
Showing 9 changed files with 473 additions and 0 deletions.
1 change: 1 addition & 0 deletions examples/README.md
@@ -14,6 +14,7 @@ cargo run --package example-{name of example}
- [custom-metrics](./custom-metrics/) - Define your own custom metrics alongside the ones generated by autometrics (using any of the metrics collection crates)
- [exemplars-tracing](./exemplars-tracing/) - Use fields from `tracing::Span`s as Prometheus exemplars
- [opentelemetry-push](./opentelemetry-push/) - Push metrics to an OpenTelemetry Collector via the OTLP HTTP or gRPC protocol using the Autometrics provided interface
- [grpc-http](./grpc-http/) - Instrument Rust gRPC services with metrics using Tonic, warp, and Autometrics
- [opentelemetry-push-custom](./opentelemetry-push-custom/) - Push metrics to an OpenTelemetry Collector via the OTLP gRPC protocol using custom options

## Full Example
16 changes: 16 additions & 0 deletions examples/grpc-http/Cargo.toml
@@ -0,0 +1,16 @@
[package]
name = "grpc-http"
version = "0.0.0"
publish = false
edition = "2021"

[dependencies]
autometrics = { path = "../../autometrics", features = ["prometheus-exporter"] }
prost = "0.12"
tokio = { version = "1", features = ["full"] }
tonic = "0.10"
tonic-health = "0.10"
warp = "0.3"

[build-dependencies]
tonic-build = "0.10"
153 changes: 153 additions & 0 deletions examples/grpc-http/README.md
@@ -0,0 +1,153 @@
# gRPC service built with Tonic, HTTP server built with warp, and instrumented with Autometrics

This code example has been adapted and modified from a blog post by Mies Hernandez van Leuffen: [Adding observability to Rust gRPC services using Tonic and Autometrics](https://autometrics.dev/blog/adding-observability-to-rust-grpc-services-using-tonic-and-autometrics).

## Overview

This example shows how to:
* Add observability to a gRPC service
* Add an HTTP service
* Start both the gRPC and the HTTP server
* Gracefully shut down both servers
* Close a DB connection during graceful shutdown

### Install the protobuf compiler

The protobuf compiler (protoc) compiles protocol buffers into Rust code.
Cargo calls protoc automatically during the build process, but the build
fails if protoc is not installed, so make sure it is available.

The recommended installation for macOS is via [Homebrew](https://brew.sh/):

```bash
brew install protobuf
```
Check if the installation worked correctly:

```bash
protoc --version
```

## Local Observability Development

The easiest way to get up and running with this application is to clone the repo and get a local Prometheus setup using the [Autometrics CLI](https://github.com/autometrics-dev/am).

Read more about Autometrics in Rust [here](https://github.com/autometrics-dev/autometrics-rs) and general docs [here](https://docs.autometrics.dev/).


### Install the Autometrics CLI

The recommended installation for macOS is via [Homebrew](https://brew.sh/):

```bash
brew install autometrics-dev/tap/am
```

Alternatively, you can download the latest version from the [releases page](https://github.com/autometrics-dev/am/releases).

Spin up a local Prometheus and start scraping your application, which listens on port 8080:

```bash
am start :8080
```

If you now inspect the Autometrics explorer at `http://localhost:6789`, you will see your metrics. On first start, however, all metrics are empty because no requests have been sent yet.

Now you can hit your endpoints to generate some traffic, then refresh the Autometrics explorer to see your metrics.

### Starting the Service

```bash
cargo run
```

Expected output:

```
Started gRPC server on port 50051
Started metrics on port 8080
Explore autometrics at http://127.0.0.1:6789
```

### Stopping the Service

You can stop the service either via Ctrl-C or by sending a SIGTERM signal to the process. Signal handling is implemented for Windows, Linux, and macOS, and should also work on Docker and Kubernetes.

On Windows, Linux, or macOS, just hit Ctrl-C.

Alternatively, you can send a SIGTERM signal from another process
using the `kill` command on Linux or macOS.

In a second terminal, run

```bash
ps | grep grpc-http
```

Sample output:

```
73014 ttys002 0:00.25 /Users/.../autometrics-rs/target/debug/grpc-http
```

In this example, the service runs as PID 73014. Let's send a SIGTERM signal to shut down the service. Your system will report a different PID, so use that one instead.

```bash
kill 73014
```

Expected output:

```
Received SIGTERM
DB connection closed
gRPC shutdown complete
http shutdown complete
```


## Testing the gRPC endpoints

The easiest way to test the endpoints is with `grpcurl` (`brew install grpcurl`).

```bash
grpcurl -plaintext -import-path ./proto -proto job.proto -d '{"name": "Tonic"}' 'localhost:50051' job.JobRunner.SendJob
```

returns:

```
{
"message": "Hello Tonic!"
}
```

Getting the list of jobs (currently hardcoded to return one job):

```bash
grpcurl -plaintext -import-path ./proto -proto job.proto -d '{}' 'localhost:50051' job.JobRunner.ListJobs
```

returns:

```
{
"job": [
{
"id": 1,
"name": "test"
}
]
}
```

## Viewing the metrics

When you inspect the Autometrics explorer on `http://localhost:6789` you will see your metrics and SLOs. The explorer shows four tabs:

1) Dashboard: Aggregated overview of all metrics
2) Functions: Detailed metrics for each instrumented API function
3) SLOs: Service Level Objectives for each instrumented API function
4) Alerts: Notifications for violated SLOs or other anomalies
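The commit log above notes that the `async_trait` and `autometrics` attributes were swapped. A hypothetical sketch of the instrumented service impl (server.rs is among the changed files but not shown here; it assumes the tonic-generated `JobRunner` trait and message types, and the exact names are illustrative):

```rust
use autometrics::autometrics;
use tonic::{Request, Response, Status};

#[tonic::async_trait]
impl job_runner_server::JobRunner for MyJobRunner {
    // #[autometrics] goes on the method, beneath the #[tonic::async_trait]
    // on the impl block, so it instruments the desugared async fn.
    #[autometrics]
    async fn send_job(
        &self,
        request: Request<JobRequest>,
    ) -> Result<Response<JobReply>, Status> {
        Ok(Response::new(JobReply {
            message: format!("Hello {}!", request.into_inner().name),
        }))
    }
}
```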

4 changes: 4 additions & 0 deletions examples/grpc-http/build.rs
@@ -0,0 +1,4 @@
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/job.proto")?;
    Ok(())
}
38 changes: 38 additions & 0 deletions examples/grpc-http/proto/job.proto
@@ -0,0 +1,38 @@
syntax = "proto3";
package job;

service JobRunner {
    rpc SendJob (JobRequest) returns (JobReply);
    rpc ListJobs (Empty) returns (JobList);
}

message Empty {}

message Job {
    int32 id = 1;
    string name = 2;

    enum Status {
        NOT_STARTED = 0;
        RUNNING = 1;
        FINISHED = 2;
    }
}

message JobRequest {
    string name = 1;
}

message JobReply {
    string message = 1;

    enum Status {
        NOT_STARTED = 0;
        RUNNING = 1;
        FINISHED = 2;
    }
}

message JobList {
    repeated Job job = 1;
}
36 changes: 36 additions & 0 deletions examples/grpc-http/src/db_manager.rs
@@ -0,0 +1,36 @@
use std::fmt::Error;

// Clone is required for the `tokio::signal::unix::SignalKind::terminate()` handler.
// If your DB client cannot derive Clone, wrap the DBManager in an Arc or Arc<Mutex<_>>.
#[derive(Debug, Default, Clone, Copy)]
pub struct DBManager {
    // Put your DB client here. For example:
    // db: rusqlite::Connection,
}

impl DBManager {
    pub fn new() -> DBManager {
        DBManager {
            // Put your database client here. For example:
            // db: rusqlite::Connection::open(":memory:").unwrap(),
        }
    }

    pub async fn connect_to_db(&self) -> Result<(), Error> {
        Ok(())
    }

    pub async fn close_db(&self) -> Result<(), Error> {
        Ok(())
    }

    pub async fn query_table(&self) -> Result<(), Error> {
        println!("Query table");
        Ok(())
    }

    pub async fn write_into_table(&self) -> Result<(), Error> {
        println!("Write into table");
        Ok(())
    }
}
82 changes: 82 additions & 0 deletions examples/grpc-http/src/main.rs
@@ -0,0 +1,82 @@
use std::net::SocketAddr;
use tonic::transport::Server as TonicServer;
use warp::Filter;

use autometrics::prometheus_exporter;
use server::MyJobRunner;

use crate::server::job::job_runner_server::JobRunnerServer;

mod db_manager;
mod server;
mod shutdown;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Set up the Prometheus metrics exporter
    prometheus_exporter::init();

    // Set up two different ports for gRPC and HTTP
    let grpc_addr: SocketAddr = "127.0.0.1:50051"
        .parse()
        .expect("Failed to parse gRPC address");
    let web_addr: SocketAddr = "127.0.0.1:8080"
        .parse()
        .expect("Failed to parse web address");

    // Build a new DBManager and connect to the database
    let dbm = db_manager::DBManager::new();
    dbm.connect_to_db()
        .await
        .expect("Failed to connect to database");

    // gRPC server with DBManager
    let grpc_svc = JobRunnerServer::new(MyJobRunner::new(dbm));

    // Signal handler that closes the DB connection upon shutdown
    let signal = shutdown::grpc_sigint(dbm.clone());

    // Construct the health service for the gRPC server
    let (mut health_reporter, health_svc) = tonic_health::server::health_reporter();
    health_reporter
        .set_serving::<JobRunnerServer<MyJobRunner>>()
        .await;

    // Build the gRPC server with the health service and signal handler
    let grpc_server = TonicServer::builder()
        .add_service(grpc_svc)
        .add_service(health_svc)
        .serve_with_shutdown(grpc_addr, signal);

    // Build the HTTP /metrics endpoint
    let routes = warp::get()
        .and(warp::path("metrics"))
        .map(|| prometheus_exporter::encode_http_response());

    // Build the HTTP web server with graceful shutdown
    let (_, web_server) =
        warp::serve(routes).bind_with_graceful_shutdown(web_addr, shutdown::http_sigint());

    // Spawn a handler for each server
    // https://github.com/hyperium/tonic/discussions/740
    let grpc_handle = tokio::spawn(grpc_server);
    let grpc_web_handle = tokio::spawn(web_server);

    // Join both servers and start the main loop
    print_start(&web_addr, &grpc_addr);
    let _ = tokio::try_join!(grpc_handle, grpc_web_handle)
        .expect("Failed to start gRPC and http server");

    Ok(())
}

fn print_start(web_addr: &SocketAddr, grpc_addr: &SocketAddr) {
    println!();
    println!("Started gRPC server on port {:?}", grpc_addr.port());
    println!("Started metrics on port {:?}", web_addr.port());
    println!("Stop service with Ctrl+C");
    println!();
    println!("Explore autometrics at http://127.0.0.1:6789");
    println!();
}