Integrate tokio runtime into propolis dispatcher #47
If this were gerrit, it would have only the latest commit, but I'm not versed enough in GH these days to know how to do that (or if it's even possible)
propolis/src/dispatch/async_tasks.rs

    pub(super) struct AsyncDispatch {
        inner: Mutex<AsyncInner>,
        rt: Mutex<Option<Runtime>>,
- Why let this be optional? It seems like it should always exist, if I understand the constructor usage correctly.
- Would that avoid the need for the `Mutex`? Most `Runtime` methods act on `&self`, not `&mut self`.
I ran into issues where the `Runtime` was dropped from an async context in propolis-server (when the `Instance` -> `Dispatcher` -> `Runtime` drop chain occurred). That's why it's in an `Option`, so I can drop it separately from a known sync context.
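A minimal sketch of that pattern (stand-in types; the real `AsyncDispatch` lives in propolis and `Runtime` here is a placeholder for `tokio::runtime::Runtime`, which panics if dropped from within async context):

```rust
use std::sync::Mutex;

struct Runtime; // stand-in for tokio::runtime::Runtime

struct AsyncDispatch {
    rt: Mutex<Option<Runtime>>,
}

impl AsyncDispatch {
    fn new() -> Self {
        AsyncDispatch { rt: Mutex::new(Some(Runtime)) }
    }

    /// Explicit shutdown from a known sync context: take() moves the runtime
    /// out of the Option so it is dropped here, rather than implicitly at the
    /// tail of an async drop chain.
    fn shutdown(&self) {
        let rt = self.rt.lock().unwrap().take();
        drop(rt); // the real tokio Runtime winds down its tasks here
    }
}
```

The `Option` buys a controllable drop point at the cost of the extra `Mutex`/`take()` plumbing.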
    // In the long term, viona should probably move to something like eventfd to
    // make polling on those ring interrupt events more accessible.
What about patching mio?
https://github.com/tokio-rs/mio/blob/master/src/sys/unix/selector/epoll.rs seems to consider "readable" to be `EPOLLIN` or `EPOLLRDHUP` - why wouldn't `EPOLLRDBAND` be part of this too?
I'm a bit grumpy that there's no way to actually differentiate them as a consumer of `mio`. My eventual plan is to change how viona does these notifications. `POLLRDBAND` is rather awkward.
    // Explicitly drop the instance-held reference to the Machine. If all
    // the worker tasks (sync or async) are complete, this should be the
    // last holder and result in its destruction.
    let _ = inner.machine.take();
Why do we need to do this explicitly? Forcing all access to the machine to go through an extra `unwrap` is kinda gross - wouldn't this happen implicitly as the last reference to the instance goes out of scope?
This is a bit complicated (and perhaps would benefit from a longer comment): I need the `Machine` to get dropped before the `Instance`, so the `mmap()` areas it owns are `munmap()`ed prior to the `Instance` doing a destroy of the vmm handle.
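That ordering constraint can be sketched with stand-in types (hypothetical names; the real `Machine`/`Instance` live in propolis). Without the explicit `take()`, the `Mutex` field would drop *after* `Instance`'s `Drop` body runs, i.e. after the vmm handle is destroyed; the `take()` forces the `munmap()` first:

```rust
use std::cell::RefCell;
use std::sync::Mutex;

// Thread-local log so the drop ordering is observable in this sketch.
thread_local! {
    static LOG: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Machine; // owns the guest mmap() regions in the real code
impl Drop for Machine {
    fn drop(&mut self) {
        LOG.with(|l| l.borrow_mut().push("munmap machine regions"));
    }
}

struct Instance {
    machine: Mutex<Option<Machine>>,
}
impl Instance {
    fn new() -> Self {
        Instance { machine: Mutex::new(Some(Machine)) }
    }
    /// Explicitly drop the instance-held Machine so its regions are
    /// unmapped before Instance::drop destroys the vmm handle.
    fn shutdown(&self) {
        let _ = self.machine.lock().unwrap().take();
    }
}
impl Drop for Instance {
    fn drop(&mut self) {
        LOG.with(|l| l.borrow_mut().push("destroy vmm handle"));
    }
}
```

Calling `shutdown()` before the `Instance` drops yields the log order `["munmap machine regions", "destroy vmm handle"]`; relying on implicit field drop would invert it.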
You could change the base branch from master to lifecycle and that should give just the changes on top.
    /// an instance.
    pub(crate) fn assoc_instance(&self, inst: Weak<instance::Instance>) {
        let res = self.inner.inst.lock().unwrap().replace(inst);
        assert!(res.is_none());
    }

    /// Spawns a new dedicated worker thread named `name` which invokes
    /// `func` on `data`.
Nit: update comment as we don't take a `data` anymore.
propolis/src/chardev/sock.rs
    let sink = sink.unwrap();
    loop {
        let mut buf = [0u8];
        loop {
Since we're already in an async context, can't we replace the nested loop with an `AsyncReadExt::read` call:

    use tokio::io::AsyncReadExt;
    ...
    if client.read(&mut buf).await? == 0 {
        return Ok(());
    }
Ah, that needs `&mut` to client, right.
Edit: `SinkDriver::drive` taking a `tokio::net::unix::split::ReadHalf` and `SourceDriver::drive` a `tokio::net::unix::split::WriteHalf` would make it work.
I'm iterating on changes to the chardev drivers to dogfood this async stuff. Hopefully it'll address your thoughts there.
XXX: Still need to do propolis-server bits
The latest commit will probably go in separately, but it's a continuation of my dog-fooding the async interfaces to see how well/poorly they work for device emulation. Feedback welcome!
Emulated devices missing from the inventory will not be properly reset when the instance is rebooted.
Merged up through 4022dec
Building off the back of the lifecycle work (#30), this integrates a tokio runtime into the propolis dispatcher, and makes async tasks with access to machine resources (through a `DispCtx`) a first-class notion. The viona driver is the first test case for its use, but the Block interface(s) are presumably next in line to use the functionality.