Throttling, concurrency, structure #59
For throttling we should probably have a mechanism for waiting and retrying, with the interval increasing each time. Queue everything up, basically. Message posting already works this way, but we might want some way to generalize. The concurrency bit makes sense, though I'd have to think a bit on the implementation details. I agree with the structure part. Oh, and `asyncwrapper` was obsolete, and should have been removed a long time ago :)
The message scraping methods haven't been moved yet; they'll need to be disentangled from Message logic. PEP-8 formatting changes in browser. ref Manishearth#59
Made pinning/starring methods more intelligent with known attributes. ref #59
I've been thinking a bit about what to work on next, after Message (and possibly User and Room) are implemented. Here are some rough thoughts I've had. (I'll probably create more specific tickets for associated work, and implementation details, but these topics are pretty related so it would be useful to have initial discussion in one place.)
Throttling
When we post a new message, or edit one, the requests are throttled, and retried when appropriate. This is good, but doesn't apply to any of the other requests we make. We should generalize the existing code, and make it easy to apply for different types of requests. (There would need to be some code specific to recognizing success/temporary error/fatal error for different types of requests.) Ignoring the implementation for a sec, what behaviour do we want?
It might be reasonable to have two request queues, one for read requests and one for write requests. That way we can keep seeing updates even while our chat messages are throttled and being retried. Maybe by default they could limit us to one request per five seconds, or maybe a smaller limit that increases if we keep sending a lot of requests. Or maybe the read queue could allow a couple of requests to be in flight at once, while writing is limited to a single request.
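As a rough sketch of the retry behaviour described above (the name `send_with_backoff` and its parameters are illustrative, not part of the codebase):

```python
import time


def send_with_backoff(request, is_throttled, initial_interval=1.0,
                      max_interval=60.0, max_attempts=5, sleep=time.sleep):
    """Retry request() while the response looks throttled, doubling the
    wait between attempts (capped at max_interval).

    Sketch only: real code would also distinguish fatal errors from
    temporary ones, per request type.
    """
    interval = initial_interval
    for _ in range(max_attempts):
        result = request()
        if not is_throttled(result):
            return result
        sleep(interval)
        interval = min(interval * 2, max_interval)
    raise RuntimeError("still throttled after %d attempts" % max_attempts)
```

Passing `sleep` in as a parameter keeps the waiting policy easy to test without real delays.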
Concurrency
The concurrency model of this code might have been sane before I touched it, but given the work I've done I'm sure it isn't any more, and there are probably many possible race conditions that could result in bugs.
For example, users from two different threads could both make requests and read from and write to a Message at the same time, possibly resulting in errors.
I propose that Wrapper ensures that nothing modifies its data from outside of a main worker thread. Anything that could modify data will need to be passed in through a queue, which will be processed by that single thread. Wrapper will also manage our connections for throttling. Any public (non-prefixed) methods on Wrapper should be safe to use from any thread.
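A minimal sketch of that single-worker model (class and method names here are illustrative, not the real ChatExchange API):

```python
import queue
import threading


class Wrapper:
    """Sketch: all writes are funneled through a queue drained by one
    worker thread, so shared state is only ever mutated from a single
    thread."""

    def __init__(self):
        self._commands = queue.Queue()
        self._data = {}
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def set_field(self, key, value):
        # Public method: safe to call from any thread. We enqueue the
        # change instead of mutating self._data directly.
        done = threading.Event()
        self._commands.put((key, value, done))
        done.wait()  # block until the worker has applied the change

    def get_field(self, key):
        return self._data.get(key)

    def _drain(self):
        # Only this thread ever mutates self._data.
        while True:
            key, value, done = self._commands.get()
            self._data[key] = value
            done.set()
```

Blocking on the `Event` keeps the public method synchronous from the caller's point of view; an asynchronous variant could return a future instead.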
Given that you have a `message`, and you access the missing `message.user_name`:

1. `message.scrape_transcript()` is called
2. which calls `wrapper.scrape_transcript_for_message_id(...)`
3. the `message.user_name` value is now populated

The Wrapper should also de-duplicate requests made at the same time, when possible. For example, if two different threads both call `request_transcript` around the same time because a field is missing, wrapper should notice that they want the same information and only make a single request.

Structure
I'd like to clearly define a division of responsibilities between wrapper and client. One possibility is as follows:
chatexchange.Browser (possible alternative name: Connection)
chatexchange.Client (suggested rename from Wrapper¹)
At this point, it might make sense to delete `asyncwrapper`.

¹ "Wrapper" sounds like more of an implementation description than an explanation of what it provides, so I'd prefer a different name if one makes sense.
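One possible shape for that division of responsibilities, as a sketch (the method names below are invented for illustration; the real split is still under discussion):

```python
class Browser:
    """Low-level layer: owns the connection and issues raw requests
    ("Connection" was suggested as an alternative name)."""

    def get_transcript_data(self, message_id):
        # The real class would perform the HTTP request here;
        # this sketch leaves it abstract.
        raise NotImplementedError


class Client:
    """High-level layer: exposes chat objects and delegates all I/O
    to the Browser it owns."""

    def __init__(self, browser):
        self._browser = browser

    def get_message_owner(self, message_id):
        data = self._browser.get_transcript_data(message_id)
        return data["user_name"]
```

One benefit of this split is testability: the high-level layer can be exercised against a fake `Browser` without touching the network.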