This repository has been archived by the owner on Feb 28, 2018. It is now read-only.

Make receipts command faster #77

Closed

cuducos opened this issue Dec 20, 2016 · 5 comments

Comments

@cuducos
Collaborator

cuducos commented Dec 20, 2016

Multithreading and handling multiple requests in parallel are the key (it's the Reimbursement.get_receipt_url() method from #76 that handles the HTTP requests)
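For illustration, a minimal sketch of that thread-pool idea (not the project's actual code): it fans the receipt URLs out to worker threads with the standard library's concurrent.futures and the requests package. fetch_receipt and the urls argument are hypothetical names.

```python
# Hypothetical sketch: parallel receipt downloads with a thread pool.
from concurrent.futures import ThreadPoolExecutor

import requests


def fetch_receipt(url):
    # Threads suit this I/O-bound job: each one blocks on the HTTP
    # response while the others keep working.
    response = requests.get(url, timeout=30)
    return url, response.status_code


def fetch_all(urls, workers=8):
    # map() keeps the input order and collects one result per URL
    with ThreadPoolExecutor(max_workers=workers) as executor:
        return list(executor.map(fetch_receipt, urls))
```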

@gwmoura

gwmoura commented Dec 26, 2016

@cuducos sorry, I don't know much about this area, but searching on this subject I found this package: multiprocessing - https://docs.python.org/3/library/multiprocessing.html

effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads.

Probably you already know about this…

I also found this tutorial, and I think it can help us - https://www.toptal.com/python/beginners-guide-to-concurrency-and-parallelism-in-python
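For the record, a minimal sketch of what the multiprocessing route from those links could look like; fetch_receipt and the URL list are placeholders, not the repository's code.

```python
# Hypothetical sketch: the same downloads spread across processes.
from multiprocessing import Pool

import requests


def fetch_receipt(url):
    # Each worker runs in its own interpreter, so the GIL is never
    # contended; the trade-off is process start-up and IPC overhead.
    return url, requests.get(url, timeout=30).status_code


if __name__ == "__main__":  # required where processes are spawned
    urls = ["https://example.com/receipt/1"]  # placeholder list
    with Pool(processes=4) as pool:
        results = pool.map(fetch_receipt, urls)
```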

@cuducos
Collaborator Author

cuducos commented Dec 26, 2016

Yep, I used this on #66; it works fine, but I couldn't write tests for it (I mean, the tests I wrote were not working), so it's still on hold…

@cuducos
Collaborator Author

cuducos commented Dec 26, 2016

Anyway… further thoughts: in the case of receipts, asyncio seems more promising than multiprocessing alone because the time is spent waiting for the response of a request (not waiting for processing). The trick is to start multiple HTTP requests at once, not necessarily to distribute them among processes/threads ; )
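A minimal sketch of that idea, assuming the aiohttp package as the async HTTP client (any asyncio-aware client would do); fetch_all and urls are hypothetical names.

```python
# Hypothetical sketch: all requests share one event loop; awaiting a
# response yields control, so other requests progress in the meantime.
import asyncio

import aiohttp


async def fetch_receipt(session, url):
    async with session.get(url) as response:
        return url, response.status


async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_receipt(session, url) for url in urls]
        return await asyncio.gather(*tasks)


# results = asyncio.get_event_loop().run_until_complete(fetch_all(urls))
```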

@gwmoura

gwmoura commented Dec 26, 2016

So, can we start multiple HTTP requests using multiprocessing or threads and use asyncio to create an asynchronous HTTP consumer, or does asyncio solve all the problems by itself?

@cuducos
Collaborator Author

cuducos commented Dec 26, 2016

Actually I think that asyncio would be better for making parallel requests: a coroutine that starts the next request while it waits for the current one's response, and so on… this way we can have a lot of requests started during the latency between sending a request and getting its response. Later we can start different processes like that, but we have to test and see the results first IMHO…
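To make that concrete, a small sketch of the coroutine pattern with a cap on how many requests are in flight at once, again assuming aiohttp; the limit of 50 is an arbitrary example value.

```python
# Hypothetical sketch: a semaphore bounds concurrency while the event
# loop interleaves the waiting periods of all pending requests.
import asyncio

import aiohttp


async def fetch_all(urls, limit=50):
    semaphore = asyncio.Semaphore(limit)

    async def fetch(session, url):
        async with semaphore:  # at most `limit` requests in flight
            async with session.get(url) as response:
                return url, response.status

    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, u) for u in urls))
```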

cuducos added three commits that referenced this issue on Dec 29, 2016
Irio closed this as completed in #96 on Jan 3, 2017