# Question - getting last byte latency metric with trace layer #119
After some digging I believe I understand the reason for this implementation. I think my initial expectation was about this line: `tower-http/src/trace/body.rs`, line 53 at 0cdef06.

Does it make sense to expose an additional callback from there (something like `on_response_end`)?
Right. I've wondered about this myself. Not sure how it's supposed to work.
The main reason for this is that there is no standard way to classify the end of a stream based on the status code, because, well, there is no status code. It could always be classified as success, but that feels kinda arbitrary. I think this can be achieved by writing a custom classifier and abusing the fact that the classifier is dropped when the body has been fully sent:

```rust
use axum::body::Bytes;
use http::{HeaderMap, Request, Response};
use hyper::Body;
use std::convert::Infallible;
use std::time::Instant;
use std::{net::SocketAddr, time::Duration};
use tower::ServiceBuilder;
use tower_http::classify::{ClassifiedResponse, ClassifyEos, ClassifyResponse, MakeClassifier};
use tower_http::trace::TraceLayer;

#[tokio::main]
async fn main() {
    let svc = ServiceBuilder::new()
        .layer(TraceLayer::new(MyMakeClassifier))
        .service_fn(|_request: Request<Body>| async move {
            let (mut tx, body) = Body::channel();
            tokio::spawn(async move {
                // simulate sending a slow response
                tokio::time::sleep(Duration::from_secs(1)).await;
                tx.send_data(Bytes::from("foo")).await.unwrap();
                tx.send_data(Bytes::from("bar")).await.unwrap();
                tx.send_data(Bytes::from("baz")).await.unwrap();
            });
            let response = Response::new(body);
            Ok::<_, Infallible>(response)
        });

    // run it
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    tracing::debug!("listening on {}", addr);
    axum::Server::bind(&addr)
        .serve(tower::make::Shared::new(svc))
        .await
        .unwrap();
}

#[derive(Clone)]
struct MyMakeClassifier;

impl MakeClassifier for MyMakeClassifier {
    type Classifier = MyClassifier;
    type FailureClass = Infallible;
    type ClassifyEos = MyClassifier;

    fn make_classifier<B>(&self, _req: &Request<B>) -> Self::Classifier {
        MyClassifier {
            request_received_at: Instant::now(),
        }
    }
}

#[derive(Clone)]
struct MyClassifier {
    request_received_at: Instant,
}

impl ClassifyResponse for MyClassifier {
    type FailureClass = Infallible;
    type ClassifyEos = Self;

    fn classify_response<B>(
        self,
        _res: &http::Response<B>,
    ) -> ClassifiedResponse<Self::FailureClass, Self::ClassifyEos> {
        // always defer classification to end-of-stream so we see the last byte
        ClassifiedResponse::RequiresEos(self)
    }

    fn classify_error<E>(self, _error: &E) -> Self::FailureClass
    where
        E: std::fmt::Display + 'static,
    {
        unimplemented!()
    }
}

impl ClassifyEos for MyClassifier {
    type FailureClass = Infallible;

    fn classify_eos(self, _trailers: Option<&HeaderMap>) -> Result<(), Self::FailureClass> {
        Ok(())
    }

    fn classify_error<E>(self, _error: &E) -> Self::FailureClass
    where
        E: std::fmt::Display + 'static,
    {
        unimplemented!()
    }
}

impl Drop for MyClassifier {
    fn drop(&mut self) {
        // or whatever else you need to do
        println!(
            "response sent after {:?}",
            self.request_received_at.elapsed()
        );
    }
}
```
Thanks for your response. I have a similar approach - I wrapped a … Regarding my previous suggestion for `on_response_end` …
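The wrapped-body idea mentioned above can be illustrated outside of hyper, with a plain iterator standing in for the body's chunk stream (all names here are mine, not from the thread): record the start time, and note the last-byte moment when the inner stream reports it is exhausted.

```rust
use std::time::Instant;

// Illustrative stand-in for a wrapped response body. In a real
// middleware the inner type would implement `http_body::Body` and
// `next_chunk` would be the body's `poll_data`; here an iterator
// plays that role so the example is self-contained.
struct TimedStream<I: Iterator> {
    inner: I,
    started: Instant,
}

impl<I: Iterator> TimedStream<I> {
    fn next_chunk(&mut self) -> Option<I::Item> {
        let item = self.inner.next();
        if item.is_none() {
            // the inner stream is exhausted: this is the last-byte moment
            println!("last byte after {:?}", self.started.elapsed());
        }
        item
    }
}

fn main() {
    let mut body = TimedStream {
        inner: vec!["foo", "bar", "baz"].into_iter(),
        started: Instant::now(),
    };
    while let Some(chunk) = body.next_chunk() {
        println!("chunk: {chunk}");
    }
}
```

This is only a sketch of the shape of the approach; a real `http_body::Body` wrapper also has to forward trailers and handle errors.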
That sounds easier than what I did actually. I'll close this for now. Feel free to re-open if there is more to discuss.
I defined a response extension with a struct holding a closure that calls the closure when it is dropped.
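A minimal sketch of that drop-guard idea (the `LatencyGuard` name and closure signature are my assumptions, not from the thread): a struct that holds the start time and a callback, firing the callback from its `Drop` impl once the response that owns it is dropped, i.e. after the last byte has been written.

```rust
use std::time::Instant;

// Hypothetical guard type, sketching the extension-based approach
// described above: the stored closure runs when the guard is dropped.
struct LatencyGuard<F: FnMut()> {
    on_done: F,
}

impl<F: FnMut()> Drop for LatencyGuard<F> {
    fn drop(&mut self) {
        (self.on_done)();
    }
}

fn main() {
    let started = Instant::now();
    let guard = LatencyGuard {
        on_done: move || println!("last byte sent after {:?}", started.elapsed()),
    };
    // In a real middleware the guard would be inserted into
    // `response.extensions_mut()` (which requires `Send + Sync + 'static`);
    // here we just drop it to show the callback firing.
    drop(guard);
}
```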
First of all, thanks to everyone involved for the amazing work on the trace layer.

I want to get a last-byte latency metric from it, to understand how much overall time clients spend getting a response from the server (I use `on_response` to get first-byte latency now, but it does not tell the full picture). Using `on_body_chunk` I don't have enough information to decide whether it's the last chunk in the stream. Initially I thought I could get it out of the box thanks to `on_eos`. Later, however, I discovered that the classifier from `new_for_http` never classifies responses in a way that invokes it, and I don't have enough context to understand why.

What would be the way to get this working? Can I just have a classifier that returns `ClassifiedResponse::RequiresEos` for successful responses?