Integrate tokio runtime into propolis dispatcher #47

Closed
wants to merge 6 commits

Conversation

@pfmooney (Collaborator) commented Sep 9, 2021

Building off the back of the lifecycle work (#30), this integrates a tokio runtime into the propolis dispatcher, and makes async tasks with access to machine resources (through a DispCtx) a first-class notion.

The viona driver is the first test case for its use, but the Block interface(s) are presumably next in line to use the functionality.
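For readers skimming the diff, a rough, hypothetical sketch of the pattern being introduced follows: a dispatcher owns a tokio Runtime and hands spawned async tasks a shared context handle. The names here (Dispatcher, Ctx, spawn_async) are illustrative only and are not the actual propolis API.

use std::sync::Arc;
use tokio::runtime::Runtime;

// Hypothetical stand-in for the dispatcher-provided context (DispCtx in the PR).
struct Ctx {
    // handles to machine resources would live here
}

struct Dispatcher {
    rt: Runtime,
    ctx: Arc<Ctx>,
}

impl Dispatcher {
    fn new() -> std::io::Result<Self> {
        Ok(Self { rt: Runtime::new()?, ctx: Arc::new(Ctx {}) })
    }

    // Spawn an async worker which receives a clone of the shared context.
    fn spawn_async<F, Fut>(&self, func: F)
    where
        F: FnOnce(Arc<Ctx>) -> Fut,
        Fut: std::future::Future<Output = ()> + Send + 'static,
    {
        self.rt.spawn(func(Arc::clone(&self.ctx)));
    }
}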

@pfmooney (Collaborator, Author) commented Sep 9, 2021

If this were gerrit, it would have only the latest commit, but I'm not versed enough in GH these days to know how to do that (or if it's even possible)

Resolved review threads: README.md, propolis/src/dispatch/sync_tasks.rs

pub(super) struct AsyncDispatch {
    inner: Mutex<AsyncInner>,
    rt: Mutex<Option<Runtime>>,
Contributor

  1. Why let this be optional? It seems like it should always exist, if I understand the constructor usage correctly
  2. Would that avoid the need for the Mutex? Most Runtime methods act on &self, not &mut self.

Collaborator Author

I ran into issues where the Runtime was dropped from an async context in propolis-server (when the Instance -> Dispatcher -> Runtime drop chain occurred). That's why it's in an Option, so I can drop it separately from a known sync context.
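A minimal sketch of the pattern described above (the shutdown method name is hypothetical; the field layout mirrors the AsyncDispatch snippet quoted earlier): the Runtime lives in an Option so it can be taken and dropped from a known synchronous context, since dropping a tokio Runtime from inside an async context panics.

use std::sync::Mutex;
use tokio::runtime::Runtime;

struct AsyncDispatch {
    rt: Mutex<Option<Runtime>>,
}

impl AsyncDispatch {
    // Must be called from a synchronous (non-async) context; dropping a
    // tokio Runtime inside an async context will panic.
    fn shutdown(&self) {
        if let Some(rt) = self.rt.lock().unwrap().take() {
            drop(rt);
        }
    }
}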

Comment on lines +373 to +374
// In the long term, viona should probably move to something like eventfd to
// make polling on those ring interrupt events more accessible.
Contributor

What about patching mio?

https://github.com/tokio-rs/mio/blob/master/src/sys/unix/selector/epoll.rs seems to consider "readable" to be "EPOLLIN" or "EPOLLRDHUP" - why wouldn't EPOLLRDBAND be part of this too?

Collaborator Author

I'm a bit grumpy that there's no way to actually differentiate them as a consumer of mio. My eventual plan is to change how viona does these notifications. POLLRDBAND is rather awkward.
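For context, a minimal sketch of how a raw descriptor is typically polled from tokio today: only read/write readiness is expressible, which is part of why a band-specific event like POLLRDBAND is awkward to consume. The viona_fd parameter is a hypothetical raw descriptor for illustration, assumed to already be non-blocking.

use std::os::unix::io::RawFd;
use tokio::io::unix::AsyncFd;

async fn wait_for_ring_event(viona_fd: RawFd) -> std::io::Result<()> {
    // AsyncFd registers the descriptor with the runtime's reactor.
    let afd = AsyncFd::new(viona_fd)?;
    // Only "readable"/"writable" interest can be awaited here; there is no
    // way to wait specifically on POLLRDBAND through this interface.
    let mut guard = afd.readable().await?;
    guard.clear_ready();
    Ok(())
}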

Comment on lines +412 to +415
// Explicitly drop the instance-held reference to the Machine. If all
// the worker tasks (sync or async) are complete, this should be the
// last holder and result in its destruction.
let _ = inner.machine.take();
Contributor

Why do we need to do this explicitly? Forcing all access to the machine to go through an extra unwrap is kinda gross - wouldn't this happen implicitly as the last reference to the instance goes out of scope?

Collaborator Author

This is a bit complicated (and perhaps would benefit from a longer comment): I need the Machine to get dropped before the Instance, so the mmap() areas it owns are munmap()ed prior to the Instance destroying the vmm handle.
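A minimal sketch of the ordering constraint described here, with illustrative types rather than the actual propolis ones: the instance-held reference is kept in an Option precisely so it can be dropped ahead of the Instance's own teardown of the vmm handle.

use std::sync::Arc;

struct Machine {
    // owns the mmap()ed guest memory; it is munmap()ed when the last
    // Arc<Machine> is dropped
}

struct InstanceInner {
    machine: Option<Arc<Machine>>,
    // the vmm handle itself is destroyed later, when the Instance drops
}

impl InstanceInner {
    fn release_machine(&mut self) {
        // Drop our reference explicitly; if the worker tasks have already
        // released theirs, the Machine (and its mappings) is destroyed here,
        // before the vmm handle is.
        let _ = self.machine.take();
    }
}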

@luqmana (Contributor) commented Sep 9, 2021

> If this were gerrit, it would have only the latest commit, but I'm not versed enough in GH these days to know how to do that (or if it's even possible)

You could change the base branch from master to lifecycle and that should give just the changes on top.

/// an instance.
pub(crate) fn assoc_instance(&self, inst: Weak<instance::Instance>) {
    let res = self.inner.inst.lock().unwrap().replace(inst);
    assert!(res.is_none());
}

/// Spawns a new dedicated worker thread named `name` which invokes
/// `func` on `data`.
Contributor

Nit: update the comment, since we don't take a `data` argument anymore.

let sink = sink.unwrap();
loop {
    let mut buf = [0u8];
    loop {
Contributor

Since we're already in an async context, can't we replace the nested loop with an AsyncReadExt::read call:

use tokio::io::AsyncReadExt;
...
if client.read(&mut buf).await? == 0 {
    return Ok(());
}

@luqmana (Contributor) commented Sep 15, 2021

Ah, that needs &mut to client, right.

Edit: Having SinkDriver::drive take a tokio::net::unix::split::ReadHalf and SourceDriver::drive a tokio::net::unix::split::WriteHalf would make it work.
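A minimal sketch of the split approach suggested here, assuming the endpoint is a tokio UnixStream (the drive function and its echo body are purely illustrative, not the actual SinkDriver/SourceDriver code):

use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::UnixStream;

async fn drive(mut stream: UnixStream) -> std::io::Result<()> {
    // Split into read/write halves so the sink and source sides can each
    // borrow only the half they need.
    let (mut rh, mut wh) = stream.split();

    let mut buf = [0u8; 1];
    loop {
        // A zero-length read means the peer hung up.
        if rh.read(&mut buf).await? == 0 {
            return Ok(());
        }
        // Echo the byte back purely for illustration.
        wh.write_all(&buf).await?;
    }
}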

Collaborator Author

I'm iterating on changes to the chardev drivers to dogfood this async stuff. Hopefully it'll address your thoughts there.

XXX: Still need to do propolis-server bits
@pfmooney (Collaborator, Author) commented:

The latest commit will probably go in separately, but it's a continuation of my dog-fooding the async interfaces to see how well/poorly they work for device emulation. Feedback welcome!

@pfmooney (Collaborator, Author) commented:

Merged up through 4022dec

@pfmooney pfmooney closed this Sep 28, 2021
@pfmooney pfmooney deleted the tokio branch October 18, 2021 21:04