Async client (unchanged API) #43
Conversation
Let's make sure we are set on our high-level goals. I think what we want to achieve is a runtime-agnostic client that can easily plug into smol, tokio, or async-std. Maybe default to smol to have a much easier out-of-the-box experience, but you can swap out runtimes as needed. Does that make sense? You both are the experts of course ;)
examples/new-client.rs (Outdated)

```rust
fn main() -> io::Result<()> {
    // Useful commands for testing:
    // nats-sub -s nats://demo.nats.io:4222 hello
```
You don't need the `nats://` or the `4222`, which is the default port:

```
nats-sub -s demo.nats.io hello
```
It's nice that so far this is pretty tight. I think the next steps to sketch out for an architectural soundness assessment are:
This is just a sketch and a draft, don't read too much into the details right now (:
Totally. So the goal is to make using this client a seamless experience, and this client works with tokio, async-std, smol, what have you. How? If you look at the code, this client spawns a thread that calls [...]. Really, all we need in the NATS client is a thread that polls the [...]
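The runtime-agnostic idea above can be sketched in miniature: the client thread owns all connection state, and callers only hold a plain channel handle, so no user-facing type carries any runtime assumption. All names here (`Op`, `spawn_client`) are hypothetical and stand in for whatever the real client uses; a real implementation would poll a TCP stream instead of just draining a channel.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical operations the user-facing handle sends to the client thread.
enum Op {
    Publish { subject: String, payload: Vec<u8> },
    Shutdown,
}

// The client thread owns all connection state; callers only hold a Sender,
// so nothing user-facing depends on tokio, async-std, or smol.
fn spawn_client() -> (mpsc::Sender<Op>, thread::JoinHandle<usize>) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        let mut published = 0;
        // In the real client this loop would poll the TCP stream; here we
        // just drain the channel to illustrate the ownership model.
        for op in rx {
            match op {
                Op::Publish { .. } => published += 1,
                Op::Shutdown => break,
            }
        }
        published
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = spawn_client();
    tx.send(Op::Publish { subject: "hello".into(), payload: b"hi".to_vec() }).unwrap();
    tx.send(Op::Shutdown).unwrap();
    let published = handle.join().unwrap();
    println!("client thread processed {} publishes", published);
}
```

Because the channel endpoints are plain `std` types, any executor (or no executor at all) can drive the handle.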
I like that, very clean. I was worried that in the effort to support other runtimes we would have to supply a lower-level future-based API.
@derekcollison we will be compatible with every executor, because none of the user-facing types that implement Future will have runtime assumptions. We can make all functions async without requiring an async runtime, except for the flush commands, which we may add a [...]. But at this point, it's a high-level sketch, and after testing the high-level state-transition architecture, we will drill down on the granularity of efforts as we move things over.
Thanks. Excited to see how this plays out and to have Rust+NATS+Smol powered services. |
src/new_client/connection.rs (Outdated)

```rust
    .await?;

// Current subscriptions in the form `(subject, sid, messages)`.
let mut subscriptions: Vec<(String, usize, mpsc::UnboundedSender<Message>)> = Vec::new();
```
No locks on subscriptions; the whole client state is contained inside this client() function :)
I pushed some more changes. The client can now do basic handling of SUB, UNSUB, MSG, and PUB. The architecture is pretty clean:
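A rough sketch of what routing a server-side MSG to the right subscription might look like, using the `(subject, sid, messages)` shape from the draft above. This is an illustration, not the PR's code: `Message`, `ServerOp`, and `route` are hypothetical names, and std channels stand in for the futures `mpsc::UnboundedSender` used in the draft.

```rust
use std::sync::mpsc;

// A minimal message type; the real client carries more fields.
#[derive(Debug, Clone)]
struct Message {
    subject: String,
    payload: Vec<u8>,
}

// Server-to-client operations, reduced to the one that matters for routing.
enum ServerOp {
    Msg { subject: String, sid: usize, payload: Vec<u8> },
}

// Current subscriptions in the form `(subject, sid, messages)`, as in the
// draft. Since all state lives in the control loop, no locks are needed.
fn route(
    subscriptions: &mut Vec<(String, usize, mpsc::Sender<Message>)>,
    op: ServerOp,
) {
    match op {
        ServerOp::Msg { subject, sid, payload } => {
            // Deliver to the subscription with a matching sid; drop a
            // subscription whose receiver has gone away.
            subscriptions.retain(|(_, s, tx)| {
                if *s != sid {
                    return true;
                }
                tx.send(Message {
                    subject: subject.clone(),
                    payload: payload.clone(),
                })
                .is_ok()
            });
        }
    }
}

fn main() {
    let mut subscriptions = Vec::new();
    let (tx, rx) = mpsc::channel();
    subscriptions.push(("hello".to_string(), 1, tx)); // i.e. `SUB hello 1`
    route(
        &mut subscriptions,
        ServerOp::Msg { subject: "hello".into(), sid: 1, payload: b"hi".to_vec() },
    );
    let msg = rx.recv().unwrap();
    println!("delivered on subject: {}", msg.subject);
}
```

SUB becomes a push onto the vector, UNSUB a `retain` by sid, and PUB a write to the outgoing buffer; everything mutates plain local state inside the loop.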
We want to check how this impacts performance. NATS is known for performance, and so is Rust, so this needs to be a top goal. My original pass was about on par with Go, more or less, but needed some improvement.
I'm curious - what tests did you use? I'd like to run the same thing.
I did a simple pub and a pub/sub test for a baseline. We have nats-bench under the Go client, which is more involved.
On my simple test we have lost about 3M msgs/sec in performance. |
src/new_client/client.rs (Outdated)

```rust
// Periodically flush writes to the server.
_ = Timer::at(next_flush).fuse() => {
    writer.flush().await?;
```
How does it behave if it can't flush the whole buffer? If it blocks instead of returning success on a partial flush, this operation can deadlock. If a recently flushed command makes the server send a huge amount of data back, the server might end up blocking on its own writes, waiting for the client to read data from the TCP stream.

A timeout on flush can help resolve the deadlock.
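One way to bound the flush, sketched with blocking std sockets rather than the async writer in the PR: put a write timeout on the socket so a flush that cannot make progress fails instead of blocking forever. The name `flush_with_deadline` is hypothetical; the loopback listener exists only to make the example self-contained.

```rust
use std::io::Write;
use std::net::{TcpListener, TcpStream};
use std::time::Duration;

// A bounded flush: with a write timeout set on the socket, a write that
// cannot make progress returns an error instead of blocking forever,
// breaking the potential write/write deadlock described above.
fn flush_with_deadline(
    stream: &mut TcpStream,
    buf: &[u8],
    deadline: Duration,
) -> std::io::Result<()> {
    stream.set_write_timeout(Some(deadline))?;
    stream.write_all(buf)?;
    stream.flush()
}

fn main() -> std::io::Result<()> {
    // Loopback pair standing in for a real NATS server connection.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let mut client = TcpStream::connect(addr)?;
    let _server = listener.accept()?;

    flush_with_deadline(&mut client, b"PUB hello 2\r\nhi\r\n", Duration::from_secs(1))?;
    println!("flushed without blocking");
    Ok(())
}
```

An async equivalent would wrap the flush future in a timeout combinator instead, but the effect is the same: a stalled peer surfaces as an error rather than a hang.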
This is still a draft. Deadlock doesn't happen in this system because of the nats-server's flush logic, which ejects slow consumers. Duplex progress is incoming.
@derekcollison perf will come quite easily after the control loop is minimized. Are you running against the new client? On my beefy laptop, the benches for 32-byte messages top out at 2.3 million msgs/sec. Did you run your baseline before the test without disabling frequency throttling and turbo boost? That can cause massive perf loss when running the identical workload twice.
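For comparing runs like the ones discussed above, the measurement itself can be as simple as timing fixed-size payloads through a channel. This is not nats-bench or the PR's benchmark, just a sketch of the msgs/sec arithmetic; the numbers it prints say nothing about NATS itself.

```rust
use std::time::Instant;

// A crude throughput measurement: push `n` 32-byte payloads through a
// channel and divide by elapsed time to get msgs/sec.
fn bench(n: usize) -> f64 {
    let (tx, rx) = std::sync::mpsc::channel::<[u8; 32]>();
    let start = Instant::now();
    for _ in 0..n {
        tx.send([0u8; 32]).unwrap();
    }
    drop(tx); // close the channel so the receiving iterator terminates
    let received = rx.iter().count();
    assert_eq!(received, n);
    n as f64 / start.elapsed().as_secs_f64()
}

fn main() {
    let rate = bench(100_000);
    println!("throughput: {:.0} msgs/sec (32-byte payloads)", rate);
}
```

Pinning CPU frequency (disabling throttling and turbo boost) matters precisely because a one-shot rate like this varies run to run with clock speed.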
I had to bump the minimum required Rust version (MSRV) to 1.40.0 because 1.39.0 is simply buggy. In particular, it can't compile this:

```rust
impl<T> Unblock<T> {
    pub async fn get_mut(&mut self) -> &mut T {
        // ...
    }
}
```
…ct to crate::Options::connect_async
I've renamed ConnectionOptions to Options internally, and then added a deprecated public type alias from ConnectionOptions to Options. This allows existing code to keep working without any changes, but users will see a deprecation warning encouraging them to move over to the new name, so this could be released as a point release 0.5.1 without changing any existing APIs. The addition is that Options now has a connect_async method, which returns an async-capable Connection. We've put this into the asynk module because the name async is a reserved keyword, and we figured we would follow the common pattern in other languages of respelling reserved words with a k, like klass, etc.
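The renaming pattern described above, in miniature. The `Options` struct here is a stand-in with a made-up field, not the crate's real type; the point is only the deprecated-alias mechanism, which keeps old code compiling while steering users to the new name.

```rust
// The new name is the real type.
pub struct Options {
    pub name: Option<String>,
}

// The old name survives as a deprecated alias, so existing code keeps
// working but emits a warning nudging users toward `Options`.
#[deprecated(since = "0.5.1", note = "renamed to `Options`")]
pub type ConnectionOptions = Options;

fn main() {
    // Old code still compiles; rustc would emit a deprecation warning here
    // if we didn't silence it.
    #[allow(deprecated)]
    let opts: ConnectionOptions = Options { name: Some("demo".into()) };
    println!("name = {:?}", opts.name);
}
```

Because a type alias introduces no new type, the alias and `Options` are interchangeable everywhere, which is what makes this a non-breaking point release.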
Okay, here's a quick sketch of what a new, single-threaded client based on smol might look like. Right now, this simple client can connect to demo.nats.io and publish messages.

@spacejam What do you think, and how would you like to move forward from here?