Shared memory support for imaging #54
The complication seems like an extreme case of the general problem where the producer sends faster than one of the consumers can handle messages, which may happen often in a chain of image analysis. This is a real problem in VRPN, where a sender can flood the network and the client can never get out of the message-handling loop; there is an ugly hack there to process only a certain number of messages per mainloop(), and a slightly less ugly source-quench to slow the sender down. But source-quench doesn't work well when there are multiple clients.

Rather than trying to track each message and who cares about it, consider switching to a fixed-size circular queue for each published message queue. Include a sequence ID on the messages, so that receivers can tell when they have missed one or more messages because they were too slow. The client can then choose either to always request the most recent image, or to gobble up all of the images that arrived since it last received one if it doesn't want to miss any. This is agnostic to the number of listeners and avoids per-client tracking, but it requires the receiver to copy a message out if it doesn't want it to disappear out from under it (or to block the sender).

A hybrid approach that avoids the copy-out would be to have each client increment a semaphore on an image when it opens it and decrement it when it is done with it. If the server finds the next image it wants to write locked, it can allocate a new buffer for its circular queue, pull the locked image out of the queue, and leave the orphan to be cleaned up when the last client finishes and decrements the semaphore. This reduces the memory leak to only those buffers an application had locked before it died, rather than all unhandled images. Cleanup for that app could decrement the semaphore on all of its locked buffers, and a watchdog timer could also sweep through all semaphored images and discard any that hung around longer than a specified timeout.
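A minimal sketch of the fixed-size queue with sequence IDs described above (all names here are illustrative, and a real implementation would place the structure in a shared-memory segment rather than process-local memory):

```cpp
#include <array>
#include <atomic>
#include <cstdint>
#include <cstring>
#include <optional>
#include <vector>

constexpr std::size_t kEntries = 8;               // ring depth (illustrative)
constexpr std::size_t kMaxImageBytes = 640 * 480 * 3;

struct Entry {
    std::atomic<std::uint64_t> seq{0};            // 0 = never written
    std::size_t size = 0;
    unsigned char data[kMaxImageBytes];
};

struct RingBuffer {
    std::atomic<std::uint64_t> nextSeq{1};
    std::array<Entry, kEntries> entries;

    // Producer: always overwrites the oldest slot, never blocks.
    void publish(const unsigned char *img, std::size_t n) {
        std::uint64_t s = nextSeq.fetch_add(1);
        Entry &e = entries[s % kEntries];
        e.size = n;
        std::memcpy(e.data, img, n);
        e.seq.store(s);                           // commit last
    }

    // Consumer: copy the image out; an empty result means the slot was
    // recycled, i.e. the reader was too slow and missed frame s.
    std::optional<std::vector<unsigned char>> fetch(std::uint64_t s) const {
        const Entry &e = entries[s % kEntries];
        if (e.seq.load() != s) return std::nullopt;
        std::vector<unsigned char> out(e.data, e.data + e.size);
        if (e.seq.load() != s) return std::nullopt; // overwritten mid-copy
        return out;
    }
};
```

A production version would need a true seqlock (odd/even counter) or the per-entry locks that come up later in this thread to fully rule out torn reads; the point here is just the sequence-ID contract that lets a slow reader detect loss.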
I agree with the circular queue approach, which is what we previously discussed.
OK, that seems reasonable. I thought we had decided on reference counting to reduce copying, subsequent to discussing a circular queue. In any case, the circular queue is substantially easier, especially in the absence of pipes.
The hybrid approach avoids copying.
Sounds good to me - this is why team dev is good :D
OK, I have a functional prototype of this, with some todos and some commit squashing required, in the shared-memory branch. It successfully passes 640x480 video from my webcam via shared memory, with no modifications to the OSVR PluginKit or ClientKit API required. (I changed the plugin not to scale down the image, but that was it.) CPU usage is very low, so that's promising.

I ended up using an "interprocess sharable mutex" for each of the ring buffer entries, and returning smart pointers that hold a sharable lock on the buffer entry they correspond to; see the sketch below. (I couldn't quite work out how to use a semaphore here, but this is effectively a reader-writer lock, so it seemed a logical choice.) A VRPN message is now sent with the image metadata, information on connecting to the shared memory, and a sequence number, after the server has copied the image into shared memory and released all of its locks. (If the image is small enough, we still also send the old kind of message.)

Currently remaining issues: a crash on shutdown, and whether clients should also be able to grab the most recent frame rather than only the one named in the last message.
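A minimal sketch of that per-entry locking, assuming Boost.Interprocess (the names BufferEntry, ImageHandle, and writeFrame are illustrative, not the actual OSVR types):

```cpp
#include <boost/interprocess/sync/interprocess_sharable_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <boost/interprocess/sync/sharable_lock.hpp>
#include <cstring>

namespace bip = boost::interprocess;

struct BufferEntry {
    bip::interprocess_sharable_mutex mutex; // readers share, writer excludes
    unsigned char data[640 * 480 * 3];
};

// Handed to clients: holds a sharable (reader) lock for as long as the
// handle lives, so the server cannot recycle the entry underneath it.
class ImageHandle {
  public:
    explicit ImageHandle(BufferEntry &e) : lock_(e.mutex), entry_(&e) {}
    const unsigned char *data() const { return entry_->data; }

  private:
    bip::sharable_lock<bip::interprocess_sharable_mutex> lock_;
    BufferEntry *entry_;
};

// Server side: take the exclusive side of the lock while writing.
inline void writeFrame(BufferEntry &e, const unsigned char *img,
                       std::size_t n) {
    bip::scoped_lock<bip::interprocess_sharable_mutex> guard(e.mutex);
    std::memcpy(e.data, img, n);
}
```

This gives the reader-writer behavior described above: any number of clients can hold the sharable side concurrently, and the server's exclusive lock waits until the last one releases.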
(Oh, and multiple clients work fine; no apparent change in performance.)
You might try pulling the VRPN pull request dealing with shared memory on the vrpn_ConnectionManager. The unpatched manager sometimes causes multi-threaded code from another project to crash on exit when a bunch of threads stop at the same time. This may fix the crash on shutdown.
Well, the crash is not in VRPN code, so I'm pretty sure it's not that, but I will take a look.
OK, #131 integrates this: I've solved the crash on shutdown, but have left the other open items listed above.
If the sequence number is included in both the VRPN message and the shared-memory message, then the application should be able to determine the appropriate timestamp and other metadata for a given message. The OSVR interface may be able to keep a table of these and handle the matching if needed. It might sometimes be necessary to delay delivery of a shared-memory message until the corresponding VRPN description arrives.
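A hypothetical sketch of such a matching table (these names are invented for illustration; per the next comment, the actual implementation signals frames differently):

```cpp
#include <cstdint>
#include <map>

struct FrameMetadata { /* timestamp, format, etc. from the VRPN message */ };
struct Frame { /* pixels from shared memory */ };

struct FrameMatcher {
    std::map<std::uint64_t, FrameMetadata> metaBySeq;
    std::map<std::uint64_t, Frame> frameBySeq;

    // Each call returns true when both halves for seq are present and
    // the frame can be delivered; otherwise the half just added waits.
    bool onMetadata(std::uint64_t seq, FrameMetadata m) {
        metaBySeq.emplace(seq, m);
        return frameBySeq.count(seq) != 0;
    }
    bool onFrame(std::uint64_t seq, Frame f) {
        frameBySeq.emplace(seq, f);
        return metaBySeq.count(seq) != 0;
    }
};
```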
Ah, but that's already what we're doing: the IPC ring buffer has a "get by sequence number" method, which gets called when we get the VRPN message. (That's how the client is signalled that there is a frame: it doesn't do any regular polling of shared memory, and it avoids an IPC mutex that way.) The open item was about the fact that we can also (internally, atomically, if we so desire) grab the most recent frame, not just the last frame we received a message about ("get latest" in addition to "get by sequence number"). Not sure if that's a meaningful difference.
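For contrast, a "get latest" on the ring-buffer sketch from earlier in this thread would just derive the newest sequence number first (again hypothetical; the real OSVR API may differ):

```cpp
// "Get by sequence number" is ring.fetch(s) for the s named in the
// VRPN message; "get latest" ignores the message and takes the newest.
std::optional<std::vector<unsigned char>>
fetchLatest(const RingBuffer &ring) {
    std::uint64_t newest = ring.nextSeq.load() - 1;
    if (newest == 0) return std::nullopt;  // nothing published yet
    return ring.fetch(newest);             // may race; retry on nullopt
}
```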
Closing this since it's in and usable.
The goal is to avoid having to break images up into small chunks.
Current approach
Fixed-size circular queue of buffers, with sequence numbers. If a recipient wants to keep an image, it needs to copy it out; a sketch of such a consumer follows. (Alternatively, per-buffer semaphores could be used to keep images around.)
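For instance, a consumer that doesn't want to miss anything could drain every sequence number since the last one it handled, copying each out (a hypothetical loop over the RingBuffer sketched earlier in the thread):

```cpp
void drainFrames(const RingBuffer &ring, std::uint64_t &lastSeen) {
    std::uint64_t newest = ring.nextSeq.load() - 1;
    for (std::uint64_t s = lastSeen + 1; s <= newest; ++s) {
        if (auto frame = ring.fetch(s)) {
            // ... hand *frame to the application; it owns the copy ...
        } else {
            // Slot already overwritten: frame s was missed.
        }
    }
    lastSeen = newest;
}
```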
Previous approach
Looking at a reference-counted publish-subscribe model, with a per-subscriber message queue that can hold reference-counting smart pointers to image buffers.
Subscriber registry:
Right now, we have an implementation of the subscriber registry using shared memory, but it doesn't correctly handle apps closing "badly"; something like a pipe would be better.
General flow - Publisher
General flow - Subscriber
Detail/Complication
To maintain a proper reference count on the shared-memory image blocks, we need each pending message to hold a reference (a smart pointer) to the image block it corresponds to. But then we must make sure that every message is serviced somehow, either by an active application or by some cleanup when that application exits; otherwise we'll exhaust our shared-memory segment with images that nobody is really looking at.
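A toy sketch of the hazard, with plain shared_ptr standing in for the shared-memory reference count (names invented for illustration):

```cpp
#include <deque>
#include <memory>

struct ImageBlock { /* pixel data living in the shared segment */ };

struct PendingMessage {
    std::shared_ptr<ImageBlock> image; // holds a reference until serviced
};

struct SubscriberQueue {
    std::deque<PendingMessage> pending;
};

// If a subscriber dies without draining its queue, every block referenced
// here stays pinned in the segment; cleanup must drop the whole queue so
// the counts fall and the memory can be reused.
void reapDeadSubscriber(SubscriberQueue &q) { q.pending.clear(); }
```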