Big file download speed is 3 times slower than the Python script #1344

Closed
mu0641 opened this issue Oct 7, 2021 · 4 comments

mu0641 commented Oct 7, 2021

The chunk length is always 16384; how can I receive more data at a time?
Right now the download speed is 3 times slower than the equivalent Python script, and CPU consumption is also very high. Is there a good way to improve this?

use tokio::io::AsyncWriteExt;

#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    // Placeholders standing in for values that come from my real config
    // (url, referer, output path).
    let url = "https://example.com/big-file";
    let referer = "https://example.com/";
    let path = "big-file";

    let client = reqwest::Client::builder().build()?;
    let mut res = client
        .get(url)
        .header("referer", referer)
        .send()
        .await?;

    let total_size = res.content_length().unwrap_or(0);
    let mut downloaded: u64 = 0;
    let mut writer = tokio::fs::File::create(path).await?;

    loop {
        match res.chunk().await {
            // write_all instead of write: write may do a short write and
            // silently drop the rest of the chunk.
            Ok(Some(chunk)) => match writer.write_all(&chunk).await {
                Ok(()) => {
                    // This always shows a chunk length of 16384; how can I
                    // receive more data at one time?
                    println!("block len: {}", chunk.len());
                    downloaded = std::cmp::min(downloaded + chunk.len() as u64, total_size);
                }
                Err(err) => panic!("~~~{} -- {} -- {}", err, path, url),
            },
            Ok(None) => {
                println!("Finished!");
                break;
            }
            Err(_e) => {
                // _ss.send(row.clone()).await.unwrap();
                break;
            }
        }
    }
    Ok(())
}
@horacimacias

Did you try bytes() instead of looping over chunk()?
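
For reference, a minimal sketch of that suggestion, assuming the file is small enough to buffer fully in memory (the URL and output path are placeholders, not from this issue):

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // bytes() drains the whole response body into a single Bytes buffer,
    // so there is no per-chunk loop in user code.
    let body = reqwest::get("https://example.com/big-file")
        .await?
        .bytes()
        .await?;
    tokio::fs::write("big-file", &body).await?;
    Ok(())
}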


mu0641 commented Oct 7, 2021

@horacimacias The logic behind bytes() should also be looping over chunk() internally and merging the chunks into one large byte array.
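
Roughly this equivalence (a sketch of the idea, not reqwest's actual implementation; the URL is a placeholder):

use bytes::BytesMut;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut res = reqwest::get("https://example.com/big-file").await?;
    // Manually draining chunk() into one growing buffer produces the
    // same bytes that bytes() would return.
    let mut buf = BytesMut::new();
    while let Some(chunk) = res.chunk().await? {
        buf.extend_from_slice(&chunk);
    }
    println!("total body: {} bytes", buf.len());
    Ok(())
}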

@seanmonstar (Owner)

It's very likely that is how the server is sending chunks. The internal decoder is happy to read faster if the data is available, and would do so automatically.
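
If the high CPU usage comes from handling many small chunks (small file writes, per-chunk printing) rather than from the chunk size itself, one mitigation is to coalesce them into larger writes with a buffered writer. A hedged sketch, assuming reqwest is built with the "stream" feature and futures-util is available; URL and path are placeholders:

use futures_util::StreamExt;
use tokio::io::{AsyncWriteExt, BufWriter};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let res = reqwest::get("https://example.com/big-file").await?;
    let file = tokio::fs::File::create("big-file").await?;
    // A 1 MiB BufWriter coalesces many small 16 KiB chunks into far
    // fewer, larger write syscalls.
    let mut writer = BufWriter::with_capacity(1 << 20, file);
    let mut stream = res.bytes_stream();
    while let Some(chunk) = stream.next().await {
        writer.write_all(&chunk?).await?;
    }
    // Flush whatever is still sitting in the buffer.
    writer.flush().await?;
    Ok(())
}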

@max-block

@mu0641 Have you found a way to change the chunk size?
