
Add write queueing to transports for better handling of high-latency instruments #179

Closed
azonenberg opened this issue Jul 19, 2020 · 3 comments
Labels
api API improvements

Comments

@azonenberg
Collaborator

We shouldn't need to grab a mutex in the UI thread just to send a command.

@azonenberg azonenberg added the api API improvements label Jul 19, 2020
@pepijndevos
Contributor

I'm looking forward to learning more about how you want to address high-latency instruments; this issue affects me, so I'm happy to think along. (I understand you want to work on this yourself, since it touches a lot of things.)

It seems that the API is currently designed under the assumption of imperceptible delay in I/O functions, which is IMHO a dangerous assumption if it is to abstract over various transports and types of hardware. Even if that holds 99% of the time thanks to caching and fast networks, it can still cause hiccups in the UI the other 1% of the time.

I believe it is common practice in UI development to avoid any blocking calls on the UI thread, even for things that are very fast 99% of the time. This is, I think, the main driver behind a lot of APIs for background processing and asynchronous programming.

For write-only functions a write queue may indeed be a convenient solution, but I think a better solution also has to be found for read/write calls. It's very frustrating if opening a menu takes over half a second. One possibility is for all reads and writes to go through the same queue, with a callback delivering the response.
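A minimal sketch of that idea, assuming invented names (`Command`, `CommandQueue`) rather than anything in the actual codebase: every request, read or write, goes through one queue, and an optional callback delivers the reply so no caller ever blocks on the wire.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct Command
{
    std::string text;                                // SCPI command to send
    std::function<void(const std::string&)> onReply; // empty if no reply expected
};

class CommandQueue
{
public:
    // Called from any thread (e.g. the UI thread); never touches the network
    void Push(Command cmd)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(cmd));
        }
        m_cv.notify_one();
    }

    // Worker thread body: pop commands, perform the I/O, invoke callbacks.
    // The network round trip is faked here by echoing the command text.
    void Run()
    {
        while(true)
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait(lock, [this]{ return !m_queue.empty(); });
            Command cmd = std::move(m_queue.front());
            m_queue.pop();
            lock.unlock();

            std::string reply = "reply to " + cmd.text;  // placeholder for real I/O
            if(cmd.onReply)
                cmd.onReply(reply);
            if(cmd.text == "quit")   // sketch-only shutdown sentinel
                return;
        }
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<Command> m_queue;
};
```

The UI thread only ever pays the cost of a queue push; the reply arrives later via the callback, which in a real GUI would have to marshal back to the UI thread before touching widgets.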

Another thing that may need to be considered, depending on how the write queue plays out, is local feedback. A very common technique in networked applications is to pretend the operation has happened before the network responds, such as adding a message to a chat window before actually sending it over the network. This comes into play with things like changing scales and offsets, which currently seem to do a full round trip to the scope before updating the UI.

@azonenberg
Collaborator Author

Scale and offset changes currently update the local cache immediately, but then block waiting to acquire the mutex to push the change to the hardware. If a waveform download is in progress on a slow instrument, this may hang for hundreds or thousands of milliseconds, because the mutex is held for the entire download.

Proposed architecture:

  • Transport has a FIFO of pending commands and replies for each thread
  • Transport has a worker thread that manages all networking
  • SendCommand() pushes write request to the current thread's FIFO, and takes a flag indicating the type of reply data expected (none, single string, or length-prefixed binary blob)
  • Worker thread pops from pending queues in an arbitrary order (optional QoS priority for UI thread), sends commands, and reads replies
  • ReadReply() blocks until reply FIFO has data, then returns it
  • Add new IsReplyReady() and ReadReplyNonblocking() APIs

azonenberg added a commit that referenced this issue Dec 17, 2020
…, take two. Also added better timeout handling and a bunch of other fixes.
@azonenberg
Collaborator Author

Closing this as we've had the queued command API for a while. Not all drivers use it but we can make separate tickets for that.
