
ResponseTooBig: how to push to file rather than memory? #18

Open
SamuelMarks opened this issue Feb 19, 2019 · 11 comments

Comments

@SamuelMarks

I'm getting a ResponseTooBig error. I could adjust max_response, but then my memory usage would increase.

Could I instead 'stream' to a file, in chunks relative to speed (speed of disk IO, speed of network IO)?

@SergejJurecko
Owner

The lower-level streaming API should be used in this case. But yes, writing to a file would also be a good feature.
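To illustrate the idea behind the streaming approach, here is a minimal, self-contained sketch of the bounded-memory pattern it enables: rather than buffering the whole body (and tripping `max_response` / `ResponseTooBig`), read fixed-size chunks and append them to a file. The `Read` source below is a stand-in for the network side; mio_httpc's actual streaming types and calls differ, so treat this purely as the shape of the technique.

```rust
use std::fs::File;
use std::io::{self, Read, Write};

/// Copy an arbitrarily large body from `src` to a file at `path`
/// using a fixed 8 KiB buffer, so memory use stays constant
/// regardless of how big the response is. Returns bytes written.
fn stream_to_file<R: Read>(mut src: R, path: &str) -> io::Result<u64> {
    let mut out = File::create(path)?;
    let mut buf = [0u8; 8192]; // fixed buffer bounds memory use
    let mut total = 0u64;
    loop {
        let n = src.read(&mut buf)?;
        if n == 0 {
            break; // EOF: the whole body has been written
        }
        out.write_all(&buf[..n])?;
        total += n as u64;
    }
    out.flush()?;
    Ok(total)
}

fn main() -> io::Result<()> {
    // An in-memory reader stands in for a response body here.
    let body = vec![b'x'; 100_000];
    let written = stream_to_file(&*body, "/tmp/body.bin")?;
    println!("{}", written); // → 100000
    Ok(())
}
```

Because the OS readiness events (via mio) naturally pace reads to network speed, a loop like this also self-throttles: you only write what has actually arrived.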

SergejJurecko added a commit that referenced this issue Feb 19, 2019
@SamuelMarks
Author

@SergejJurecko On a related note, my download function is very slow: several times slower than curl (which is itself several times slower than aria2c).

Is there a trick to resolve this speed issue?

@SergejJurecko
Owner

Debug builds are unfortunately crazy slow. I'm not sure what to do about that.

@SamuelMarks
Author

@SergejJurecko
Owner

No, what I meant was that running mio_httpc without compiling with --release will be pretty slow.
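For reference, this is the standard Cargo flag being discussed; debug builds (the default) disable optimizations, which can make I/O-heavy Rust code drastically slower. The binary path shown assumes a typical Cargo project layout.

```shell
# Build and run with optimizations enabled
cargo build --release
cargo run --release

# The optimized binary ends up under target/release/
./target/release/your_app   # hypothetical binary name
```

Always benchmark the release build before comparing against tools like curl or aria2c.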

@SamuelMarks
Author

Okay, well, I'm looking at moving to hyper then; I'll see if I get the expected performance improvement.

@helinwang

helinwang commented Jul 17, 2019

@SamuelMarks I thought @SergejJurecko meant compiling with --release should be fast?

@helinwang

helinwang commented Jul 17, 2019

@SergejJurecko thanks for the library! Do you have an idea of how mio_httpc performs versus libraries like hyper? I am trying to find a lightweight HTTP client with low latency at ~100 concurrent requests.

@SergejJurecko
Owner

My main goals were:

  • based on mio

  • easily integratable into other mio based apps

  • safe

  • not being tied to a single TLS implementation

  • reasonably memory efficient

  • not being too dependency heavy (native-pinning branch is also split into features so that one can further cut down on dependencies at the cost of functionality)

I have not measured performance against other libraries, as I don't really care all that much. It is more than fast enough for anything I need it for. It is not really doing anything egregious that would be problematic: allocations and data copying are both kept to a minimum, and individual calls are stored in a slab and accessed by index (as opposed to a hash table).
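To make the slab remark concrete, here is a minimal sketch of the pattern: entries live in a `Vec`, lookups are O(1) by index, and freed slots are recycled through a free list, so no hashing is involved. This is an illustration of the general technique, not mio_httpc's internal type (the `slab` crate provides a production-quality version).

```rust
/// A minimal slab: O(1) insert/get/remove by integer key,
/// with freed slots recycled via a free list.
struct Slab<T> {
    entries: Vec<Option<T>>,
    free: Vec<usize>, // indices of vacated slots, ready for reuse
}

impl<T> Slab<T> {
    fn new() -> Self {
        Slab { entries: Vec::new(), free: Vec::new() }
    }

    /// Store `value` and return its key, reusing a freed slot if one exists.
    fn insert(&mut self, value: T) -> usize {
        match self.free.pop() {
            Some(i) => { self.entries[i] = Some(value); i }
            None => { self.entries.push(Some(value)); self.entries.len() - 1 }
        }
    }

    /// O(1) lookup by index; no hashing.
    fn get(&self, key: usize) -> Option<&T> {
        self.entries.get(key).and_then(|e| e.as_ref())
    }

    /// Remove the entry at `key`, returning it and recycling the slot.
    fn remove(&mut self, key: usize) -> Option<T> {
        let v = self.entries.get_mut(key)?.take();
        if v.is_some() {
            self.free.push(key);
        }
        v
    }
}

fn main() {
    let mut slab = Slab::new();
    let a = slab.insert("call-a");
    let b = slab.insert("call-b");
    assert_eq!(slab.get(a), Some(&"call-a"));
    slab.remove(a);
    let c = slab.insert("call-c"); // recycles slot `a`
    assert_eq!(c, a);
    assert_eq!(slab.get(b), Some(&"call-b"));
    println!("ok"); // → ok
}
```

For an event-driven client, the slab key doubles as the mio token for a call, so dispatching a readiness event back to its call is a single array index.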

@SergejJurecko
Owner

100 concurrent requests is nothing

@helinwang

helinwang commented Jul 18, 2019

Great to know, thank you!
By the way, I care more about latency than throughput, but your answer is helpful nonetheless.
