Port enclave-runner to tokio 0.2 #226

Merged
4 commits merged into master on Apr 7, 2020

Conversation

@mzohreva (Contributor) commented Apr 2, 2020

This is mostly the same as PR #210.

cc @Goirad

@@ -68,13 +71,13 @@ impl Library {
     /// The caller must ensure that the parameters passed-in match what the
     /// enclave is expecting.
     pub unsafe fn call(
-        &self,
+        &mut self,
mzohreva (Contributor, Author):

This is a public API change, so we need a minor version bump

Reviewer (Member):

Are there docs somewhere (maybe the EDP website?) that also need to be updated?

Reviewer (Member):

This API change is not appropriate. Multiple threads should be able to call into the enclave.

mzohreva (Contributor, Author):

Good point, I'll find a way to avoid this change.
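
For illustration, a minimal stand-alone sketch of one way to keep call taking &self, assuming the &mut self came from some per-call mutable bookkeeping. CallState and this Library are hypothetical stand-ins, not the crate's actual types, and the fix that eventually landed may differ:

use std::sync::Mutex;

// `CallState` is a hypothetical stand-in for whatever per-call bookkeeping
// forced the `&mut self` signature; it is not a type from this crate.
struct CallState {
    pending_calls: usize,
}

struct Library {
    state: Mutex<CallState>,
}

impl Library {
    // Keeping `&self` lets multiple threads call into the enclave concurrently;
    // the mutable bookkeeping is serialized by the mutex instead of by `&mut self`.
    pub unsafe fn call(&self) {
        let mut state = self.state.lock().unwrap();
        state.pending_calls += 1;
        // drive the enclave call here
    }
}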

})
}
}

impl<S: AsyncRead + AsyncWrite + Sync + Send + 'static> AsyncStream for S {}

/// AsyncListener lets an implementation implement a slightly modified form of `std::net::TcpListener::accept`.
pub trait AsyncListener: 'static + Send {
mzohreva (Contributor, Author):

Why remove 'static?

@@ -1226,18 +1216,7 @@ impl<'tcs> IOHandlerInput<'tcs> {
        return Ok(self.alloc_fd(AsyncFileDesc::stream(stream_ext)).await);
    }

    // try to connect to all socket addresses one by one
mzohreva (Contributor, Author):

This is now unnecessary since tokio::net::TcpStream::connect() takes care of it in tokio 0.2 (similar to std::net::TcpStream).
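
For reference, a small hedged example (not code from this PR) of the behaviour described above; the host name is just a placeholder:

use tokio::net::TcpStream;

// In tokio 0.2, connect() accepts any ToSocketAddrs value, resolves it, and
// tries each resulting socket address in turn, returning the first successful
// connection (or the last error), much like std::net::TcpStream::connect.
async fn connect_example() -> std::io::Result<TcpStream> {
    TcpStream::connect("example.com:443").await
}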

Reviewer (Member):

Wow, is there anything tokio 0.2 doesn't do now?

parthsane (Contributor) left a review:

I think it's great, beyond the few nits

-            Async::Ready(v)
-        }
-    }
+        self.poll_read(cx, v.as_mut_slice()).map(move |size| {
parthsane (Contributor) commented Apr 7, 2020:

Maybe rename |size| to |ready| or |result|; the nested map seems awkward if both closure arguments are called size.
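
A hedged, self-contained illustration of the suggested naming (not the PR's actual code): the outer closure maps over the Poll and the inner one over the io::Result, so distinct argument names keep the nesting readable:

use std::io;
use std::task::Poll;

// The outer closure takes the whole io::Result of the poll, so calling it
// `result` leaves `size` for the inner closure's byte count.
fn label_read(poll: Poll<io::Result<usize>>) -> Poll<io::Result<String>> {
    poll.map(|result| result.map(|size| format!("read {} bytes", size)))
}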

@@ -148,7 +124,7 @@ impl Read for Stdin {
 }

 lazy_static::lazy_static! {
-    static ref STDIN: Lock<AsyncStdin> = {
+    static ref STDIN: futures::lock::Mutex<AsyncStdin> = {
Reviewer (Contributor):

Is there a reason to use futures::lock::Mutex instead of the tokio Mutex, which is already imported via a use declaration?

I think we can choose one and stick with it.

mzohreva (Contributor, Author):

We need to use futures::lock::Mutex in some cases because it allows calling lock() for T: ?Sized, while tokio's Mutex does not. I replaced the tokio Mutex use with the futures one.
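
A hedged illustration of this point (not code from this PR): futures::lock::Mutex implements lock() for T: ?Sized, so a mutex guarding a trait object can still be locked:

use futures::lock::Mutex;
use tokio::io::AsyncRead;

// The trait object makes the guarded type unsized; futures::lock::Mutex
// still provides lock() because it is implemented for T: ?Sized.
async fn with_locked_reader(reader: &Mutex<dyn AsyncRead + Unpin + Send>) {
    let _guard = reader.lock().await;
    // read through `_guard` here
}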

@@ -644,8 +634,15 @@ impl EnclaveState {
    ) -> StdResult<(u64, u64), EnclaveAbort<EnclavePanic>> {
        let (tx_return_channel, mut rx_return_channel) = tokio::sync::mpsc::unbounded_channel();
        let enclave_clone = enclave.clone();
        let mut rt = tokio::runtime::Builder::new()
            .basic_scheduler()
Reviewer (Contributor):

As I understand it, basic_scheduler means everything runs on a single thread, which is what we are aiming for. But do we want to see if the multi-threaded threaded_scheduler improves performance?

mzohreva (Contributor, Author):

We can experiment with that, but I don't expect it would help. This might actually explain why we saw lower performance for the tokio 0.2 implementation compared to 0.1 before: we were using the multi-threaded scheduler during that test for 0.2.
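
For context, a hedged sketch (not the PR's code) of the two tokio 0.2 scheduler choices being discussed; the futures passed to block_on are placeholders:

fn build_runtimes() -> std::io::Result<()> {
    // basic_scheduler(): every task runs on the thread that calls block_on().
    let mut single_threaded = tokio::runtime::Builder::new()
        .basic_scheduler()
        .enable_all()
        .build()?;
    single_threaded.block_on(async { /* enclave-runner work */ });

    // threaded_scheduler(): tasks run on a pool of worker threads.
    let mut multi_threaded = tokio::runtime::Builder::new()
        .threaded_scheduler()
        .enable_all()
        .build()?;
    multi_threaded.block_on(async { /* the same work, for comparison */ });

    Ok(())
}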

@mzohreva (Contributor, Author) commented Apr 7, 2020:

bors r+

@bors bot (Contributor) commented Apr 7, 2020:

Build succeeded

  • continuous-integration/travis-ci/push

@bors bot merged commit 6dfd886 into master on Apr 7, 2020.