
Async #48

Closed
wants to merge 4 commits into from

Conversation


@johnwstanford johnwstanford commented Jan 24, 2021

This is an attempt at making the async API work. It's not complete yet and only includes bulk receive right now, but if nobody sees anything wrong with the basic idea, it should be straightforward to fill in the other types of transfers. The whole thing revolves around a struct called AsyncTransfer. It contains a raw pointer to a libusb_transfer from libusb, a buffer for receiving data, and a queue for completed transfer results.

TODO: I made all the struct members public for debugging, but I just realized these should probably all be private. We could create a pop_front function on this struct that takes &mut self and just calls pop_front on recv. That's the only function the owning scope should need after the struct is created.

For this to work, all three of these need to have the same lifetime, and making them all owned by the same struct accomplishes this. We also use Pin for the VecDeque that collects the results: a pointer to it is passed to the libusb transfer struct as the user data, so it needs to stay at the same location in memory for as long as it lives.

pub struct AsyncTransfer {
	pub ptr: *mut libusb_transfer,
	pub buff: Vec<u8>,
	pub recv: Pin<Box<VecDeque<TransferStatus>>>,
}

buff is the vector where libusb writes incoming data, ptr is the transfer struct provided by libusb, and recv is where we accumulate the results. TransferStatus looks like this:

pub enum TransferStatus {
	Completed(Vec<u8>),
	Error,
	TimedOut,
	Stall,
	NoDevice,
	Overflow,
	Unknown(i32)
}

The callback function is a private function inside impl AsyncTransfer and looks like this:

	extern "system" fn callback(xfer_ptr: *mut libusb_transfer) {

		unsafe {
			let xfer = match (*xfer_ptr).status {
				0 => {
					// Clone the memory into a vector
					let slice: &[u8] = std::slice::from_raw_parts((*xfer_ptr).buffer, (*xfer_ptr).actual_length as usize);
					TransferStatus::Completed(slice.to_vec())
				},
				1 => TransferStatus::Error,
				2 => TransferStatus::TimedOut,
				3 => TransferStatus::Stall,
				4 => TransferStatus::NoDevice,
				5 => TransferStatus::Overflow,
				n => TransferStatus::Unknown(n),
			};

			// Push the result onto the VecDeque stored in the user_data field
			let xfer_deque: *mut VecDeque<TransferStatus> = (*xfer_ptr).user_data as *mut VecDeque<TransferStatus>;
			(*xfer_deque).push_back(xfer);

			// Resubmit the transfer
			assert!(libusb1_sys::libusb_submit_transfer(xfer_ptr) == 0);
		}

	}

Basically, it puts the result into a VecDeque, and the scope that owns the struct can process these whenever it wants. This is not threadsafe, but it doesn't pretend to be; if you try to make this struct Send or Sync, the compiler gives an error. The callback function runs on the same thread that created the struct, at the point where you call libusb1_sys::libusb_handle_events.
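To make the TODO above concrete, here is a std-only sketch of what the struct with a private recv and a pop_front accessor could look like. The libusb pointer and the receive buffer are omitted, TransferStatus is cut down to two variants, and push_back is a hypothetical stand-in for what the callback does through the raw user_data pointer:

```rust
use std::collections::VecDeque;
use std::pin::Pin;

// Cut-down stand-in for the PR's TransferStatus (two variants only).
#[derive(Debug, PartialEq)]
pub enum TransferStatus {
    Completed(Vec<u8>),
    Error,
}

pub struct AsyncTransfer {
    // `ptr: *mut libusb_transfer` and `buff: Vec<u8>` are omitted here.
    recv: Pin<Box<VecDeque<TransferStatus>>>,
}

impl AsyncTransfer {
    pub fn new() -> Self {
        AsyncTransfer { recv: Box::pin(VecDeque::new()) }
    }

    // In the real code the libusb callback pushes through a raw pointer;
    // a plain method stands in for that here.
    pub fn push_back(&mut self, status: TransferStatus) {
        self.recv.push_back(status);
    }

    // VecDeque is Unpin, so Pin<Box<_>> derefs mutably without unsafe.
    // This is the only function the owning scope needs after creation.
    pub fn pop_front(&mut self) -> Option<TransferStatus> {
        self.recv.pop_front()
    }
}
```

Note that because VecDeque is Unpin, the Pin here pins the Box's heap allocation in place but doesn't restrict access to the deque itself.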

@kevinmehall
Contributor

Could you show an example of how you might use this API? It doesn't provide any way of waiting for transfers to complete or waking up an async task when the result is ready. When does the user know to call pop_front? It seems like this needs a safe wrapper around one of the libusb_handle_events functions to go with it.

What happens when you drop the AsyncTransfer? Per the libusb docs, "It is not legal to free an active transfer (one which has been submitted and has not yet completed)." The callback unconditionally resubmits the transfer, so there's no safe way to shut this down.

You can't assume that the callback will run on the same thread. For example, all the libusb sync APIs call into the libusb event loop while waiting, so your callback could be called by any thread that is waiting for a synchronous transfer. What happens when you're reading from the VecDeque while another thread modifies it in the callback?

@johnwstanford
Author

johnwstanford commented Jan 24, 2021

Right now I'm testing it on a project I'm working on that includes a lot of code that's specific to the device. I'll try to separate out the USB-specific stuff into an example.

Good point in the second paragraph. I think I just need to call libusb_cancel_transfer in impl std::ops::Drop for AsyncTransfer.

I think the libusb docs say that it will run on the same thread. Isn't that what this says (from http://libusb.sourceforge.net/api-1.0/group__libusb__asyncio.html)?

"""
All event handling is performed by whichever thread calls the libusb_handle_events() function. libusb does not invoke any callbacks outside of this context. Consequently, any callbacks will be run on the thread that calls the libusb_handle_events() function.
"""

If this doesn't mean what I think it means, then we could always protect that VecDeque with an Arc<Mutex<>>, but that would come with a performance cost.
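For illustration, the Arc<Mutex<>> option could be sketched like this (a hypothetical std-only shape, not code from this PR; push_result stands in for what the callback would do and pop_result for what the owning scope would do):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// The callback thread and the consuming thread each hold a clone of the
// Arc and lock only long enough to push or pop one result.
type SharedQueue = Arc<Mutex<VecDeque<Vec<u8>>>>;

// What the libusb callback would do with its clone (hypothetical).
fn push_result(queue: &SharedQueue, data: Vec<u8>) {
    queue.lock().unwrap().push_back(data);
}

// What the owning scope would do with its clone.
fn pop_result(queue: &SharedQueue) -> Option<Vec<u8>> {
    queue.lock().unwrap().pop_front()
}
```

The lock is held only for the duration of a single push or pop, so the cost per transfer is one uncontended mutex round trip in the common case.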

Edit:
On the first paragraph, that's also a good point, and maybe we could just call libusb_handle_events inside the pop_front function. That way we delay the libusb_handle_events call until the calling scope actually asks for output. Also, just like with VecDeque, the user doesn't necessarily "know" when to call pop_front; calling it is how you ask. The function returns an Option<T>, which tells you whether anything was available to pop off the front of the deque.

@kevinmehall
Contributor

Good point in the second paragraph. I think I just need to call libusb_cancel_transfer in impl std::ops::Drop for AsyncTransfer.

The callback is still called asynchronously with LIBUSB_ERROR_CANCELLED sometime after libusb_cancel_transfer, and you can't free the transfer until the callback has been called.

All event handling is performed by whichever thread calls the libusb_handle_events() function.

Go read the implementation of the libusb synchronous API. It calls libusb_handle_events_completed, and any thread running one of those event handling functions can call any callback across the libusb_context.

On the first paragraph, that's also a good point and maybe we could just call libusb_handle_events inside the pop_front function.

Be careful with race conditions here. If the transfer completes and the callback runs on another thread between checking the VecDeque and sleeping, you can lose the event and never wake up. That's why libusb_handle_events_completed exists; read http://libusb.sourceforge.net/api-1.0/libusb_mtasync.html carefully.
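The lost-wakeup problem described above can be illustrated with plain std primitives (an analogy, not libusb code): the fix is to check the completion flag and go to sleep under the same lock, which is essentially what libusb_handle_events_completed does internally with its event lock.

```rust
use std::sync::{Condvar, Mutex};

// The completion flag is checked and waited on under one lock, so a
// completion that lands between "check" and "sleep" can't be lost.
fn wait_for_completion(pair: &(Mutex<bool>, Condvar)) {
    let (lock, cvar) = pair;
    let mut completed = lock.lock().unwrap();
    while !*completed {
        // wait() atomically releases the lock and sleeps, then
        // reacquires it on wakeup, closing the race window.
        completed = cvar.wait(completed).unwrap();
    }
}

// What the callback side would do on transfer completion.
fn signal_completion(pair: &(Mutex<bool>, Condvar)) {
    let (lock, cvar) = pair;
    *lock.lock().unwrap() = true;
    cvar.notify_all();
}
```

A naive version that checks the flag, drops the lock, and then sleeps has exactly the window this thread is warning about.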

Fix those things, and add an abstraction for waiting for multiple different transfers, and you're basically reinventing the AsyncGroup API that I implemented a while ago. That was reverted because of the "leakpocalypse" issue -- if you mem::forget an AsyncGroup, its transfers keep running while the borrow checker no longer prevents aliasing writes to the buffers.

I think AsyncGroup could be made safe by making the transfers own rather than borrow the buffers. That API would be useful for many applications and should possibly be revived in some form, but it's a little lacking in that it doesn't integrate with the rest of the modern Rust async Future ecosystem at all.
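The "own rather than borrow" idea can be sketched as follows (an assumption about how a revived AsyncGroup might look, not its actual API): the buffer moves into the transfer on submit, so mem::forget can at worst leak it; the borrow checker is never asked to track a borrow that could alias a buffer libusb is still writing into.

```rust
// Hypothetical owned-buffer transfer handle (sketch).
pub struct OwnedTransfer {
    buf: Vec<u8>,
}

impl OwnedTransfer {
    // Submitting consumes the buffer by value; no outstanding borrow
    // exists while the (hypothetical) hardware writes into it.
    pub fn submit(buf: Vec<u8>) -> Self {
        OwnedTransfer { buf }
    }

    // Completion hands the (now filled) buffer back by value.
    pub fn into_buffer(self) -> Vec<u8> {
        self.buf
    }
}
```

Leaking an OwnedTransfer leaks its Vec, which is safe; the unsoundness in the borrowed-buffer design came from the buffer outliving the borrow, not from the leak itself.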

@johnwstanford
Author

That makes sense. It sounds like the risk comes mostly from using the sync and async APIs at the same time. I wonder if there's some way to use the type system to force you to use one or the other at a time for a given device handle.

I'll look at your AsyncGroup API and work on this a little more.

@kevinmehall
Contributor

In theory, you could make a new version of UsbContext (event loop is tied to the context, rather than the device handle) that isn't Send or Sync, and use it on a single thread without worrying about thread safety. But I don't think it would be worth duplicating the entire API of the library. I'd expect the runtime cost of the necessary synchronization to be very low compared to everything in libusb and the kernel.
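The "not Send or Sync" part can be expressed with a PhantomData of a raw pointer, since raw pointers are neither Send nor Sync. This is a hypothetical shape, not a rusb type; the id field is a placeholder for what would really be a *mut libusb_context:

```rust
use std::marker::PhantomData;

// Hypothetical single-thread context wrapper. The PhantomData of a raw
// pointer makes the struct neither Send nor Sync, so the compiler
// rejects handing it to another thread.
pub struct LocalContext {
    // placeholder; the real thing would hold `ctx: *mut libusb_context`
    id: u32,
    _not_send_sync: PhantomData<*const ()>,
}

impl LocalContext {
    pub fn new(id: u32) -> Self {
        LocalContext { id, _not_send_sync: PhantomData }
    }

    pub fn id(&self) -> u32 {
        self.id
    }
}
```

With this, `std::thread::spawn(move || local_ctx.id())` fails to compile, which is the compile-time single-thread guarantee being discussed.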

@johnwstanford
Author

I was also thinking maybe you could have some kind of struct that takes ownership of the UsbContext and lets you create instances of AsyncTransfer, and then you could get the UsbContext back when you destroy the struct. While the UsbContext is owned by this struct, it would be private, so it couldn't be used for synchronous calls at the same time. It would be kind of like giving ownership of something to a Box and then taking it back.
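That ownership handoff could look something like this (a hypothetical sketch, not an existing rusb type; AsyncSession and into_context are made-up names):

```rust
// The session takes ownership of the context, keeps it private while
// async transfers exist, and returns it on teardown, much like moving
// a value into a Box and taking it back out.
pub struct AsyncSession<C> {
    ctx: C, // private: no synchronous calls possible while owned here
}

impl<C> AsyncSession<C> {
    pub fn new(ctx: C) -> Self {
        AsyncSession { ctx }
    }

    // Tearing the session down hands the context back.
    pub fn into_context(self) -> C {
        self.ctx
    }
}
```

Because new takes the context by value, the borrow checker statically prevents any synchronous use of it for as long as the session is alive.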

pub struct AsyncTransfer {
	pub ptr: *mut libusb_transfer,
	pub buff: Vec<u8>,
	pub recv: Pin<Box<VecDeque<TransferStatus>>>,
}

@TheButlah TheButlah May 13, 2021


I'm not sure AsyncTransfer, or specifically recv, is prevented from moving here. Neither VecDeque nor TransferStatus is !Unpin, so neither is AsyncTransfer. If that's true, it means that when an AsyncTransfer is moved, we get a dangling pointer inside ptr.

BUT I'm very unfamiliar with unsafe Rust and self-referential structs. Am I correct in this? Just going off of the std::pin documentation.


fn drop(&mut self) {
	unsafe {
		libusb1_sys::libusb_cancel_transfer(self.ptr);
	}
}

@TheButlah TheButlah May 14, 2021


This function is asynchronous and doesn't block until the transfer is cancelled. From here:

Asynchronously cancel a previously submitted transfer. This function returns immediately, but this does not indicate cancellation is complete. Your callback function will be invoked at some later time with a transfer status of LIBUSB_TRANSFER_CANCELLED.

Hence, calling free on the transfer here will still be invalid in most cases.

@TheButlah

I've written an alternative implementation in #64. I think that PR resolves the soundness issues in this one. Let me know what you think!

@a1ien
Owner

a1ien commented Sep 4, 2022

Closed in favor of #143

@a1ien a1ien closed this Sep 4, 2022