I have a piece of code written using the regular requests library as follows:
import requests
from threading import Thread

def post_request():
    requests.post("http://httpbin.org/post", data={"foo": "bar", "baz": "true"})

def main():
    while True:
        for _ in range(5):
            thread = Thread(target=post_request)
            thread.start()
I am using 5 threads but in production this will be ramped up to as many as the computer can possibly handle concurrently. I want to re-create this functionality using faster_than_requests to speed it up.
I noticed that faster_than_requests.post was slower than requests.post, but I believe this is because faster_than_requests.post returns a dict of all the response data while requests.post only returns the response code.
I then tried faster_than_requests.post2str and found it was about as fast as requests.post.
I found that I cannot use my own threading approach with faster_than_requests.post/post2str: the code just crashes without raising any errors.
I am trying to send as many POST requests per second as possible to the same URL with the same data. Is there a way to do this with faster_than_requests by 1) using the fastest post method the library offers and 2) using threading, asyncio, or multiprocessing to run many of these calls concurrently?
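For reference, here is a minimal sketch of one way to bound the concurrency with a thread pool, using only the standard-library concurrent.futures module and the plain requests library from the snippet above. The run_concurrently helper and the max_workers value are illustrative assumptions, not part of faster_than_requests:

```python
from concurrent.futures import ThreadPoolExecutor

# URL and payload taken from the snippet above.
URL = "http://httpbin.org/post"
DATA = {"foo": "bar", "baz": "true"}

def post_request():
    import requests  # imported lazily so the pool helper below stands on its own
    return requests.post(URL, data=DATA).status_code

def run_concurrently(worker, total, max_workers=32):
    # A fixed-size pool reuses threads instead of creating one Thread per
    # request; an unbounded while-True spawn loop exhausts memory long
    # before it saturates the network.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(worker) for _ in range(total)]
        return [f.result() for f in futures]

# Example usage (hits the network):
# statuses = run_concurrently(post_request, total=1000, max_workers=64)
```

Tuning max_workers up and down is usually a better lever than spawning "as many threads as the computer can handle", since past a point extra threads only add scheduling overhead.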
What's the full command you used to compile it? Performance depends heavily on the compile options; try: nim c -d:danger -d:ssl --threads:on --app:lib --gc:arc --out:faster_than_requests.pyd faster_than_requests.nim
It is possible that some operations are now similar in speed to other libraries; when the project started it was faster at everything.
Or your bottleneck may be somewhere else, such as the GIL, the interpreter itself, a mechanical disk, routing/DNS, antivirus software, etc.
It is not a drop-in replacement for any pre-existing software.
For more speed, do the work from the Nim side:
the more you move from the interpreted side to the compiled side, the faster it goes, so it is really up to you.
Nim has Python-like syntax; you just need to import it: https://github.com/Yardanico/nimpylib