Observed Performance Decrease in Request Filter Level with 403 Response. Suspecting There Is Blocking I/O in the Service. #190
Comments
Because `respond_error` calls `set_keepalive(None)`:

```rust
pub async fn respond_error(&mut self, error: u16) {
    let resp = match error {
        /* common error responses are pre-generated */
        502 => error_resp::HTTP_502_RESPONSE.clone(),
        400 => error_resp::HTTP_400_RESPONSE.clone(),
        _ => error_resp::gen_error_response(error),
    };
    // TODO: we shouldn't be closing downstream connections on internally generated errors
    // and possibly other upstream connect() errors (connection refused, timeout, etc)
    //
    // This change is only here because we DO NOT re-use downstream connections
    // today on these errors and we should signal to the client that pingora is dropping it
    // rather than misleading the client with 'keep-alive'
    self.set_keepalive(None);
    self.write_response_header(Box::new(resp))
        .await
        .unwrap_or_else(|e| {
            error!("failed to send error response to downstream: {e}");
        });
}
```
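That `set_keepalive(None)` is what adds `Connection: close` to every internally generated error response, so each rejected request tears down the downstream connection. A minimal sketch of a workaround, assuming the `ResponseHeader` builder and the `write_response_header` method shown above (the helper name is mine, and the exact crate paths may differ across pingora versions): build the 403 yourself and skip the keep-alive override.

```rust
use pingora_http::ResponseHeader;
use pingora_proxy::Session;

// Hypothetical helper: send a 403 without disabling downstream keep-alive.
async fn respond_403_keep_alive(session: &mut Session) -> pingora_core::Result<()> {
    let mut resp = ResponseHeader::build(403, Some(2))?;
    resp.insert_header("Content-Length", "0")?;
    // Unlike respond_error(), we never call session.set_keepalive(None),
    // so the client connection stays reusable for the next request.
    session.write_response_header(Box::new(resp)).await
}
```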
You should use `HttpResponse` instead. My performance test result:

```rust
// inside request_filter(): /login uses respond_error(),
// /logout writes the response directly via a custom HttpResponse helper
let path = session.req_header().uri.path();
if path.starts_with("/login") {
    let _ = session.respond_error(403).await;
    return Ok(true);
}
if path.starts_with("/logout") {
    let _ = HttpResponse {
        status: StatusCode::FORBIDDEN,
        ..Default::default()
    }
    .send(session)
    .await;
    return Ok(true);
}
```

```
wrk 'http://127.0.0.1:6188/login'
Running 10s test @ http://127.0.0.1:6188/login
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   129.34us   83.50us   3.14ms   96.79%
    Req/Sec    14.88k     7.84k    20.66k    72.73%
  16286 requests in 10.09s, 2.36MB read
  Non-2xx or 3xx responses: 16286
Requests/sec:   1613.95
Transfer/sec:    239.57KB
```
```
wrk 'http://127.0.0.1:6188/logout'
Running 10s test @ http://127.0.0.1:6188/logout
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    64.49us   22.15us   1.51ms   81.94%
    Req/Sec    75.88k     2.08k    79.69k    98.02%
  1524149 requests in 10.10s, 203.50MB read
  Non-2xx or 3xx responses: 1524149
Requests/sec: 150901.18
Transfer/sec:     20.15MB
```
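The gap (16,286 vs 1,524,149 requests, roughly 93x) matches the `/login` path closing the connection after every response, forcing wrk to reconnect for each request. A quick way to observe this directly, as a small sketch using only the standard library (the address is the one from the test above):

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:6188")?;
    let req = b"GET /login HTTP/1.1\r\nHost: 127.0.0.1\r\n\r\n";

    // First request: expect "Connection: close" in the 403 response headers.
    stream.write_all(req)?;
    let mut buf = [0u8; 4096];
    let n = stream.read(&mut buf)?;
    println!("{}", String::from_utf8_lossy(&buf[..n]));

    // Second request on the same connection: if the server closed it after
    // the 403, the write fails or the read returns 0 bytes (EOF).
    match stream.write_all(req).and_then(|_| stream.read(&mut buf)) {
        Ok(0) => println!("connection was closed by the server"),
        Ok(n) => println!("connection reused, read {n} more bytes"),
        Err(e) => println!("connection was closed by the server: {e}"),
    }
    Ok(())
}
```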
Nice. I was able to reproduce it as well. Thank you!
Thanks @vicanso! I will close this; let us know if there are further concerns.
Though it is common behavior to send a subsequent 'Connection: close' header when responding with an error status.
Thank you all!
Describe the bug
I created a gateway service using the pingora proxy by referencing the gateway example. While benchmarking the service, I found that the latency of requests rejected by the request filter increased much faster than that of requests that passed the filter, yet CPU utilization remained stable. I suspect there is blocking I/O in the service, triggered by:

```rust
session.respond_error(403).await;
```
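For context, a minimal sketch of the kind of filter that exhibits the slowdown, assuming pingora 0.1's `ProxyHttp` trait (the struct name and upstream address are illustrative, not my exact gateway code; crate paths may differ in newer versions):

```rust
use async_trait::async_trait;
use pingora_core::upstreams::peer::HttpPeer;
use pingora_core::Result;
use pingora_proxy::{ProxyHttp, Session};

pub struct Gateway;

#[async_trait]
impl ProxyHttp for Gateway {
    type CTX = ();
    fn new_ctx(&self) -> Self::CTX {}

    async fn request_filter(&self, session: &mut Session, _ctx: &mut Self::CTX) -> Result<bool> {
        if session.req_header().uri.path().starts_with("/login") {
            // The slow path: respond_error() also disables keep-alive,
            // closing the downstream connection after each rejection.
            session.respond_error(403).await;
            return Ok(true); // request handled; do not proxy upstream
        }
        Ok(false)
    }

    async fn upstream_peer(&self, _session: &mut Session, _ctx: &mut Self::CTX) -> Result<Box<HttpPeer>> {
        Ok(Box::new(HttpPeer::new("127.0.0.1:8080", false, String::new())))
    }
}
```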
Pingora info
Please include the following information about your environment:
Pingora version: 0.1.0
Rust version: 1.75
Operating system version: CentOS 7 kernel, x86_64
Steps to reproduce
Create a service from the gateway example and compare the max RPS of rejected requests against passed requests.
Expected results
Max RPS for rejected requests should be a lot higher than for passed requests, since rejected requests do not need to call the upstream.
The CPU utilization should increase as the concurrent requests go up.
Observed results
Max RPS for rejected requests (1,200 rps) is a lot smaller than for passed requests (3,200 rps), and CPU utilization remains stable even as latency increases.
Additional context
The tests were both completed in release mode and were run with Apache JMeter.