
Why does the performance of reqwest fluctuate so much? #1116

Closed

milescui opened this issue Dec 16, 2020 · 10 comments

@milescui

Why does the performance of reqwest fluctuate so much? There is no such problem when making the same request with Node.js.

Rust version: 1.50.0 nightly
OS: Ubuntu Server 20.04.1
Hardware: Intel, 16 cores @ 3.6 GHz, 32 GB RAM

Command:
cargo run --release

[profile.release]
lto = true
codegen-units = 1

[dependencies.reqwest]
version = "0.10.9"
default-features = false
features = ["blocking", "gzip", "json", "rustls-tls-native-roots"]

Rust code:

use std::time::Instant;

use rsex::constant::{ORDER_SIDE_BUY, ORDER_TYPE_POST_ONLY};
use rsex::huobi::spot_rest::Huobi;
use rsex::traits::SpotRest;
use uuid::Uuid;

const HOST: &'static str = "https://api.huobi.pro";
const API_KEY: &'static str = "";
const SECRET_KEY: &'static str = "";

fn main() -> Result<(), Box<dyn std::error::Error>> {
    env_logger::init();

    let mut api = Huobi::new(Some(API_KEY.into()), Some(SECRET_KEY.into()), HOST.into());
    // set account_id
    let acc_id = api.get_account_id("spot", None).unwrap();
    api.set_account("spot", &acc_id);

    for _ in 0..50 {
        let before = Instant::now();
        // create_order
        let order = api
            .create_order(
                "WAVESUSDT",
                6.2,
                1.0,
                ORDER_SIDE_BUY,
                ORDER_TYPE_POST_ONLY,
                Option::from(Uuid::new_v4().to_string().replace("-", "").as_str()),
            )
            .unwrap();
        println!("create_order Elapsed time: {:.3?}", before.elapsed());

        let canceled_order_before = Instant::now();
        let _canceled_order = api.cancel(&*order.id, Option::from("WAVESUSDT"));
        println!(
            "_canceled_order Elapsed time: {:.3?}",
            canceled_order_before.elapsed()
        );
    }

    Ok(())
}
pub fn create_order(
        &self,
        symbol: &str,
        price: f64,
        amount: f64,
        side: &str,
        ty: &str,
        cid: Option<&str>,
    ) -> APIResult<Order> {
        let uri = if self.is_margin {
            MARGIN_URI.get("create_order").unwrap()
        } else {
            SPOT_URI.get("create_order").unwrap()
        };
        let mut params: BTreeMap<String, String> = BTreeMap::new();
        params.insert("symbol".into(), symbol.into());
        params.insert("side".into(), side.into());
        if ty == ORDER_TYPE_POST_ONLY {
            params.insert("type".into(), "LIMIT_MAKER".into());
        } else {
            params.insert("type".into(), ty.into());
        }
        if ty == ORDER_TYPE_LIMIT {
            params.insert("timeInForce".into(), "GTC".into());
        }
        if let Some(cid) = cid {
            params.insert("newClientOrderId".into(), cid.into());
        }
        if ty == ORDER_TYPE_LIMIT || ty == ORDER_TYPE_POST_ONLY {
            params.insert("price".into(), price.to_string());
        }

        params.insert("quantity".into(), amount.to_string());
        let req = self.build_signed_request(params)?;

        let ret = self.post_signed(uri, &req)?;
        let resp: bn_types::OrderResult = serde_json::from_str(&ret)?;

        let order = Order {
            symbol: symbol.to_string(),
            id: resp.order_id.to_string(),
            cid: resp.client_order_id.to_string(),
            price,
            amount,
            filled: 0.0,
            side: side.to_string(),
            ty: ty.to_string(),
            status: ORDER_STATUS_SUBMITTED,
            timestamp: get_timestamp().unwrap_or(0),
        };

        Ok(order)
    }
lazy_static! {
    static ref CLIENT: Client = {
        Client::builder()
            .gzip(true)
            .tcp_nodelay_(true)
            .build()
            .unwrap()
    };
}


pub fn post_signed(&self, endpoint: &str, request: &str) -> APIResult<String> {
        let ts = get_utc_ts();
        // signing parameters (this map declaration was missing from the pasted fragment)
        let mut params: BTreeMap<String, String> = BTreeMap::new();
        params.insert("Timestamp".into(), ts);
        params.insert("AccessKeyId".into(), self.api_key.clone());
        params.insert("SignatureMethod".into(), "HmacSHA256".into());
        params.insert("SignatureVersion".into(), "2".into());

        let params_str = self.build_query_string(params);
        let split = self.host.split("//").collect::<Vec<&str>>();
        let hostname = split[1];
        let signature = self.sign(&format!(
            "{}\n{}\n{}\n{}",
            "POST", hostname, endpoint, params_str
        ));

        let req = format!(
            "{}{}?{}&Signature={}",
            self.host,
            endpoint,
            params_str,
            percent_encode(&signature)
        );

        let body = CLIENT
            .post(req.as_str())
            // send the caller-provided JSON body (the pasted fragment referenced
            // an undefined `body` variable here)
            .header("Content-Type", "application/json")
            .body(request.to_string())
            .send()?
            .text()?;

        let val: Value = serde_json::from_str(body.as_str())?;
        if val["status"].as_str() == Some("error") {
            return if let Some(err_msg) = val["err_msg"].as_str() {
                Err(Box::new(ExError::ApiError(err_msg.into())))
            } else {
                Err(Box::new(ExError::ApiError(format!("response: {:?}", val))))
            };
        }
        Ok(body)
}

Result:

create_order Elapsed time: 6.536ms
_canceled_order Elapsed time: 3.184ms
create_order Elapsed time: 6.627ms
_canceled_order Elapsed time: 2.491ms
create_order Elapsed time: 6.610ms
_canceled_order Elapsed time: 2.656ms
create_order Elapsed time: 6.748ms
_canceled_order Elapsed time: 2.647ms
create_order Elapsed time: 6.339ms
_canceled_order Elapsed time: 2.223ms
create_order Elapsed time: 9.358ms
_canceled_order Elapsed time: 10.452ms
create_order Elapsed time: 6.174ms
_canceled_order Elapsed time: 3.036ms
create_order Elapsed time: 5.582ms
_canceled_order Elapsed time: 2.550ms
create_order Elapsed time: 7.714ms
_canceled_order Elapsed time: 3.237ms
create_order Elapsed time: 6.603ms
_canceled_order Elapsed time: 3.222ms
create_order Elapsed time: 6.495ms
_canceled_order Elapsed time: 3.348ms
create_order Elapsed time: 6.997ms
_canceled_order Elapsed time: 2.976ms
create_order Elapsed time: 6.145ms
_canceled_order Elapsed time: 2.763ms
create_order Elapsed time: 6.130ms
_canceled_order Elapsed time: 2.952ms
create_order Elapsed time: 7.106ms
_canceled_order Elapsed time: 3.081ms
create_order Elapsed time: 6.092ms
_canceled_order Elapsed time: 3.207ms
create_order Elapsed time: 6.135ms
_canceled_order Elapsed time: 3.020ms
create_order Elapsed time: 8.594ms
_canceled_order Elapsed time: 2.730ms
create_order Elapsed time: 7.101ms
_canceled_order Elapsed time: 2.887ms
create_order Elapsed time: 6.309ms
_canceled_order Elapsed time: 2.685ms
create_order Elapsed time: 6.444ms
_canceled_order Elapsed time: 2.504ms
create_order Elapsed time: 7.410ms
_canceled_order Elapsed time: 3.651ms
create_order Elapsed time: 6.000ms
_canceled_order Elapsed time: 2.566ms
create_order Elapsed time: 6.683ms
_canceled_order Elapsed time: 3.518ms
create_order Elapsed time: 6.269ms
_canceled_order Elapsed time: 2.463ms
create_order Elapsed time: 5.714ms
_canceled_order Elapsed time: 2.825ms
create_order Elapsed time: 5.650ms
_canceled_order Elapsed time: 2.326ms
create_order Elapsed time: 6.033ms
_canceled_order Elapsed time: 4.037ms
create_order Elapsed time: 6.399ms
_canceled_order Elapsed time: 2.786ms
create_order Elapsed time: 5.985ms
_canceled_order Elapsed time: 4.538ms
create_order Elapsed time: 6.931ms
_canceled_order Elapsed time: 2.387ms
create_order Elapsed time: 6.646ms
_canceled_order Elapsed time: 3.964ms
create_order Elapsed time: 5.989ms
_canceled_order Elapsed time: 2.634ms
create_order Elapsed time: 7.980ms
_canceled_order Elapsed time: 2.508ms
create_order Elapsed time: 6.740ms
_canceled_order Elapsed time: 3.152ms
create_order Elapsed time: 6.482ms
_canceled_order Elapsed time: 2.675ms
create_order Elapsed time: 6.026ms
_canceled_order Elapsed time: 2.595ms
create_order Elapsed time: 6.728ms
_canceled_order Elapsed time: 2.721ms
create_order Elapsed time: 6.360ms
_canceled_order Elapsed time: 2.384ms
create_order Elapsed time: 9.184ms
_canceled_order Elapsed time: 2.389ms
create_order Elapsed time: 5.761ms
_canceled_order Elapsed time: 3.097ms
create_order Elapsed time: 7.331ms
_canceled_order Elapsed time: 2.452ms
create_order Elapsed time: *13.436ms*
_canceled_order Elapsed time: *36.157ms*
create_order Elapsed time: 7.294ms
_canceled_order Elapsed time: 2.536ms
create_order Elapsed time: 5.904ms
_canceled_order Elapsed time: 2.884ms
create_order Elapsed time: 6.436ms
_canceled_order Elapsed time: 3.035ms
create_order Elapsed time: 6.702ms
_canceled_order Elapsed time: *131.106ms*
create_order Elapsed time: 9.712ms
_canceled_order Elapsed time: 3.922ms
create_order Elapsed time: 6.979ms
_canceled_order Elapsed time: 3.060ms
create_order Elapsed time: 6.336ms
_canceled_order Elapsed time: 3.376ms
@seanmonstar (Owner)

I do not know the reason. You're certain it's not the server, or your network? Do you see the same if using the async client?
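
For comparison, here is a minimal sketch of a timing loop on the async client (illustrative only: it assumes a tokio dependency for the #[tokio::main] attribute and uses a public, unauthenticated Huobi endpoint rather than the signed order calls from the report):

use std::time::Instant;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // Mirror the gzip setting of the blocking CLIENT shown above.
    let client = reqwest::Client::builder().gzip(true).build()?;

    for _ in 0..50 {
        let before = Instant::now();
        // Unauthenticated endpoint, used only to measure round-trip latency.
        let _body = client
            .get("https://api.huobi.pro/v1/common/timestamp")
            .send()
            .await?
            .text()
            .await?;
        println!("async request elapsed time: {:.3?}", before.elapsed());
    }
    Ok(())
}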

@milescui (Author)

I connect to the target server over a LAN. With Node.js I don't have this problem. I haven't used the async client.

@milescui (Author)

In my low-concurrency scenario, is there any difference in performance between asynchronous and synchronous clients?

@seanmonstar (Owner)

Looking at what you posted, there shouldn't be much difference. The blocking client works by having a background thread run an async client, so I thought maybe the extra thread could be competing, but I see you have multiple cores so likely not an issue.

You could turn on logs for hyper and reqwest, to try to determine what it's doing in the slower requests.
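
For example, with the env_logger already initialized in main above, the output can be scoped to those two crates via RUST_LOG (a sketch; it assumes the same cargo run --release invocation from the report):

RUST_LOG=hyper=trace,reqwest=trace cargo run --release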

@milescui (Author)

OK, I'll try it, thanks.

@milescui (Author) commented Dec 17, 2020

It looks like the problem is here. Why?

[2020-12-17T07:35:59.499410Z TRACE hyper::proto::h1::conn] flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
[2020-12-17T07:35:59.523475Z TRACE hyper::proto::h1::conn] Conn::read_head
[2020-12-17T07:35:59.499290Z TRACE hyper::proto::h1::decode] Chunk size is 0
[2020-12-17T07:35:59.499293Z TRACE hyper::proto::h1::decode] end of chunked
[2020-12-17T07:35:59.499297Z DEBUG hyper::proto::h1::conn] incoming body completed
[2020-12-17T07:35:59.499301Z TRACE hyper::proto::h1::conn] maybe_notify; read_from_io blocked
[2020-12-17T07:35:59.499307Z TRACE want] signal: Want
[2020-12-17T07:35:59.499310Z TRACE want] signal found waiting giver, notifying
[2020-12-17T07:35:59.499315Z TRACE hyper::proto::h1::conn] flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[2020-12-17T07:35:59.499319Z TRACE want] signal: Want
[2020-12-17T07:35:59.499311Z TRACE reqwest::blocking::wait] wait at most 30s
[2020-12-17T07:35:59.499329Z TRACE reqwest::blocking::wait] (ThreadId(1)) park timeout 29.999999715s
[2020-12-17T07:35:59.499321Z TRACE hyper::proto::h1::conn] flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[2020-12-17T07:35:59.499346Z TRACE want] poll_want: taker wants!
[2020-12-17T07:35:59.499351Z TRACE hyper::client::pool] put; add idle connection for ("http", 796285560210.coloc.huobi.com)
[2020-12-17T07:35:59.499354Z DEBUG hyper::client::pool] pooling idle connection for ("http", 796285560210.coloc.huobi.com)
[2020-12-17T07:35:59.499358Z TRACE want] signal: Want
[2020-12-17T07:35:59.499361Z TRACE hyper::proto::h1::conn] flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[2020-12-17T07:35:59.499370Z TRACE hyper::client::pool] take? ("http", 796285560210.coloc.huobi.com): expiration = Some(90s)
[2020-12-17T07:35:59.499375Z DEBUG hyper::client::pool] reuse idle connection for ("http", 796285560210.coloc.huobi.com)
[2020-12-17T07:35:59.499380Z TRACE hyper::proto::h1::role] encode_headers
[2020-12-17T07:35:59.499383Z TRACE hyper::proto::h1::role] -> encode_headers
[2020-12-17T07:35:59.499385Z TRACE hyper::proto::h1::role] Client::encode method=POST, body=Some(Known(171))
[2020-12-17T07:35:59.499391Z TRACE hyper::proto::h1::role] <- encode_headers
[2020-12-17T07:35:59.499394Z TRACE hyper::proto::h1::role] -- encode_headers
[2020-12-17T07:35:59.499396Z TRACE hyper::proto::h1::encode] sized write, len = 171
[2020-12-17T07:35:59.499405Z DEBUG hyper::proto::h1::io] flushed 529 bytes
[2020-12-17T07:35:59.499410Z TRACE hyper::proto::h1::conn] flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
[2020-12-17T07:35:59.523475Z TRACE hyper::proto::h1::conn] Conn::read_head
[2020-12-17T07:35:59.523487Z DEBUG hyper::proto::h1::io] read 517 bytes
[2020-12-17T07:35:59.523492Z TRACE hyper::proto::h1::role] parse_headers
[2020-12-17T07:35:59.523495Z TRACE hyper::proto::h1::role] -> parse_headers
[2020-12-17T07:35:59.523500Z TRACE hyper::proto::h1::role] Response.parse([Header; 100], [u8; 517])
[2020-12-17T07:35:59.523506Z TRACE hyper::proto::h1::role] Response.parse Complete(456)
[2020-12-17T07:35:59.523516Z TRACE hyper::proto::h1::role] <- parse_headers
[2020-12-17T07:35:59.523520Z TRACE hyper::proto::h1::role] -- parse_headers
[2020-12-17T07:35:59.523522Z DEBUG hyper::proto::h1::io] parsed 12 headers
[2020-12-17T07:35:59.523524Z DEBUG hyper::proto::h1::conn] incoming body is chunked encoding
[2020-12-17T07:35:59.523529Z TRACE hyper::proto::h1::decode] decode; state=Chunked(Size, 0)
[2020-12-17T07:35:59.523534Z TRACE hyper::proto::h1::decode] Read chunk hex size
[2020-12-17T07:35:59.523540Z TRACE hyper::proto::h1::decode] Read chunk hex size
[2020-12-17T07:35:59.523544Z TRACE hyper::proto::h1::decode] Read chunk hex size
[2020-12-17T07:35:59.523549Z TRACE hyper::proto::h1::decode] Chunk size is 55
[2020-12-17T07:35:59.523554Z DEBUG hyper::proto::h1::decode] incoming chunked header: 0x37 (55 bytes)
[2020-12-17T07:35:59.523559Z TRACE hyper::proto::h1::decode] Chunked read, remaining=55
[2020-12-17T07:35:59.523565Z TRACE hyper::proto::h1::conn] flushed({role=client}): State { reading: Body(Chunked(BodyCr, 0)), writing: KeepAlive, keep_alive: Busy }
[2020-12-17T07:35:59.523573Z DEBUG reqwest::async_impl::client] response '200 OK' for http://796285560210.coloc.huobi.com/v1/order/orders/place?AccessKeyId=xxxxxxxxxxxxxxxxxxx&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2020-12-17T07%3A35%3A59&Signature=jX3RC%2Byuh8X5a%2BDhRRHh0k3spO7UeLlbB%2FXiXbaI9rA%3D
[2020-12-17T07:35:59.523587Z TRACE hyper::proto::h1::conn] flushed({role=client}): State { reading: Body(Chunked(BodyCr, 0)), writing: KeepAlive, keep_alive: Busy }
[2020-12-17T07:35:59.523591Z TRACE reqwest::blocking::wait] wait at most 30s
[2020-12-17T07:35:59.523612Z TRACE hyper::proto::h1::decode] decode; state=Chunked(BodyCr, 0)
[2020-12-17T07:35:59.523617Z TRACE hyper::proto::h1::decode] Read chunk hex size
[2020-12-17T07:35:59.523617Z TRACE reqwest::blocking::wait] (ThreadId(1)) park timeout 29.999980621s
[2020-12-17T07:35:59.523621Z DEBUG hyper::proto::h1::io] read 20 bytes
[2020-12-17T07:35:59.523635Z TRACE hyper::proto::h1::decode] Read chunk hex size
[2020-12-17T07:35:59.523640Z TRACE hyper::proto::h1::decode] Chunk size is 10
[2020-12-17T07:35:59.523645Z DEBUG hyper::proto::h1::decode] incoming chunked header: 0xA (10 bytes)
[2020-12-17T07:35:59.523650Z TRACE hyper::proto::h1::decode] Chunked read, remaining=10
[2020-12-17T07:35:59.523658Z TRACE hyper::proto::h1::conn] flushed({role=client}): State { reading: Body(Chunked(BodyCr, 0)), writing: KeepAlive, keep_alive: Busy }
[2020-12-17T07:35:59.523663Z TRACE hyper::proto::h1::decode] decode; state=Chunked(BodyCr, 0)
[2020-12-17T07:35:59.523667Z TRACE hyper::proto::h1::decode] Read chunk hex size
[2020-12-17T07:35:59.523671Z TRACE hyper::proto::h1::decode] Read chunk hex size
create_order Elapsed time: 24.381ms

@seanmonstar (Owner)

Cool, that helps zoom us in a bit, thanks! So, there's a delay between flushing the request and reading the response. The next thing to determine is: did the server take that long to respond, or did we miss a wakeup somehow? I think if you turn on the logs for mio and tokio, we can see exactly when it sees the readiness events from the OS.

@milescui (Author)

How do I turn on the logs for mio and tokio? Using this method does not work:

RUST_LOG=trace cargo run --release --bin test_huobi_spot_order
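
env_logger's RUST_LOG filter also accepts per-crate targets, so a narrower filter can be tried explicitly (a sketch reusing the binary name from the command above; whether mio and tokio emit anything at trace level depends on their versions and the features they were built with):

RUST_LOG=mio=trace,tokio=trace,hyper=trace,reqwest=trace cargo run --release --bin test_huobi_spot_order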

@vorner (Contributor) commented Jan 3, 2021

A wild guess: HTTP pipelining and Nagle's algorithm + delayed ACKs?
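
For what it's worth, the lazy_static CLIENT posted earlier already disables Nagle's algorithm on the client side; a minimal sketch of that builder option on the reqwest 0.10 blocking API is shown below (delayed ACKs, by contrast, belong to each peer's TCP stack and cannot be changed through reqwest):

use reqwest::blocking::Client;

// Minimal sketch: a blocking client with TCP_NODELAY enabled, which turns off
// Nagle's algorithm on the pooled connections (reqwest 0.10 builder API).
fn build_client() -> reqwest::Result<Client> {
    Client::builder()
        .gzip(true)
        .tcp_nodelay_(true)
        .build()
}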

@ta3pks commented Apr 10, 2021

I also have a similar issue. With Node.js I am able to send ~2K requests, but with reqwest, after about 100 requests the remaining ones start reporting timeouts. @seanmonstar I am using the async client.

@seanmonstar closed this as not planned on Feb 7, 2024