Rhizome parallelization #68
I recently found that MDP packet handling and Rhizome operations were executed on the same thread. Consequently, audio communications were broken during Rhizome synchronization.
I propose some code that executes all Rhizome work in a separate POSIX thread (only in overlay mode; it is not necessary elsewhere).
With this code, I can successfully exchange Rhizome bundles without impacting audio communications.
Here are the principles of my implementation (for more information, read the messages of the first commits).
The main idea is to use two fdqueue instances: one "main" and one "rhizome". In overlay mode, a new thread is started, which calls
An alarm (
Of course, the alarm data (its context) cannot be stack-stored (there is one stack per thread), so these alarms must be allocated dynamically, which implies quite a lot of changes. Commits whose messages begin with "Alarm" create a wrapper to be scheduled on an fdqueue. Commits whose messages begin with "Schedule" allocate parameters (if needed) and schedule an alarm to be executed on the other thread.
All tests pass, except
We're going to need a better multi-threaded fix for this.
overlay_mdp_dispatch always duplicates the source packet and creates an overlay_frame. It would be simpler to run overlay_mdp_dispatch in either thread, then call overlay_payload_enqueue_alarm if required, passing the new frame to be queued: "struct overlay_frame *frame = alarm->context;".
Some code in
Keeping the routing and MDP stuff serialized (i.e. handled by only one thread) is, in my opinion, very important for simplicity, reliability, and probably performance (it avoids useless synchronization).
I'm not happy in general with the way we handle "struct overlay_mdp_frame" vs "struct overlay_frame". IMHO "struct overlay_mdp_frame" should only be used for interacting with MDP clients, and perhaps not even then. Internal services don't need a contiguous memory buffer with all incoming packet information. We throw away information when putting packets into this structure, only to end up rebuilding it again. Personally I consider "struct overlay_mdp_frame" deprecated and would like to move away from using it. But I haven't had the time, or a pressing enough need, to revisit this part of the application yet.
Perhaps now is a good time to separate payload encryption / decryption from dependence on "struct overlay_mdp_frame". Then any future service which requires more processing time can easily be shifted to a background thread or other processing queue, simply by passing whole "struct overlay_frame" instances around. Parsing, processing and replying can then be done in the background thread without needing to create a new alarm function and context data structure for each use case.
Packet encryption / signing could be done in a new function before calling overlay_payload_enqueue in the main thread; decryption / signature verification could be done in process_incoming_frame. I don't see any need for an internal service to send packets to another internal service. While we still need to reply to packets from MDP clients, these services could remain in the main thread for now.
We have internally discussed the idea of splitting Rhizome into a completely separate process. Having both the daemon and the command-line API access the same SQLite database causes locking contention that we would also like to avoid. We would also like to support accessing more than one Rhizome store at the same time, including copying content between stores when they are available.
We've considered turning Rhizome into an MDP client, extending the MDP client API to allow discovery and communication with other peers without requiring all packets to be sent via a single daemon. But we haven't had time to invest in this yet.
I only added the glue to make the current code multithreaded, without changing the logic.
Paul also said:
I read somewhere that you wanted to avoid threads and use only single-threaded processes. The reasons are unclear to me. Is avoiding residual inter-thread locks the main reason? Could you explain?
On Tue, Aug 13, 2013 at 7:27 PM, ®om email@example.com wrote:
I've split overlay_mdp_dispatch such that internal services that only sent
Instead of allocating a new alarm per frame, we can probably build some
On that point, should we rename the "rhizome" thread to the "background"
I agree, the missing part of the work is to rewrite some algorithms, when
Ah? Where are these variables?
Which services do send packets to "local" mdp clients? Which are these "local" mdp clients?
But that way, it could only apply to passing one frame from one thread to another.
My idea was to pass "runnables" (a generic function+argument pair for posting whatever action you want). In practice, the alarms I scheduled do not always post frames (see parallel.h
I don't know if the overhead of these
The way I've implemented it uses the same mechanism for both the main thread and the rhizome thread. As a consequence, if the rhizome thread blocks waiting for the main thread to be idle, then the main thread will also block waiting for the rhizome thread to be idle. The situation where the main thread needs to post a runnable on the rhizome thread occurs (1 2 3), but maybe it can be avoided…
I've considered this background thread to be rhizome-specific: another service would have its own thread too… though even a single service could have several threads.
I think it is a good idea.
Ideally, I think Rhizome could simply work as any other service on top of MDP: it would open an MDP socket on a predefined port and exchange data with other peers, without any lower-level knowledge and, above all, without being referenced by any lower-level code.
This would remove the need for the "internal services" hack: each service would use its own port dynamically (as with TCP or UDP).
In that case, Rhizome would create its own thread to handle its work separately, without impacting the overlay* code.
What do you think?