Support for other executors? #110
Comments
Interesting - do you perhaps have more information about this issue? Is there a way to reproduce it? I'm a bit skeptical that the problem was due to an interaction of smol and tokio.
There are typically two parts to a runtime ("runtime" is a confusing word, ascribed lots of kinds of meanings): a reactor (polls I/O using epoll/kqueue/etc.) and an executor (polls futures).

Still, I do think it would be beneficial to have some kind of integration.
Thanks for the response and clarifications. So let's say I have an example like this one. If my understanding is correct, I will have one executor created by Tokio, and then another executor created when the NATS `Client::connect` is called; as far as I can see, these are completely separate. It looks like the NATS client creates a thread for itself to run its logic. Another idea could be to use `Options::with_executor().connect("demo")`?
Yes, that's correct.
Something like that.
For what it's worth, I've also run into similar issues.
Interesting. I wonder if this only happens when mixing smol and tokio, or also when using just smol. Either way, it would be great if we could get a reproducible example. Also, do you perhaps know more about how the locking issue manifests?
What I experienced was with writing, not reading. However, it was a pretty complex application with lots of moving parts and streams, so I can't rule out other sources for the issue. I did try replacing the NATS publishing with something else (writing to stdout) and didn't run into the problem; that's why I figured it's something in this crate, but I'm not sure. Perhaps just something to keep in mind in case more people run into this.
Okay, that's good to know. Do you perhaps remember if there were multiple tasks concurrently publishing messages, or just one?
Sorry, I don't quite remember whether the multiple producers were concurrent or synchronized.
I was trying to reproduce the problem, but so far haven't been able to with v0.8.
Closing this as we have significantly reworked the async code in question. If the issue crops up again with the newly split out async-nats crate, please open another issue. |
I think I might be facing a similar issue with async not behaving as expected. Here is a simple repro with the latest version. I'm not a specialist in async runtimes, so I won't risk theorizing about what's happening. The only thing I know is that this contrived example works as expected:

```rust
// Subscribe
let nc = async_nats::connect(&nats.address().to_string()).await?;
let sub = nc.subscribe("test").await?;

// Publish
let nc = async_nats::connect(&nats.address().to_string()).await?;
nc.publish("test", "foo").await?;

// Assert
let msg = sub.next().await.unwrap();
assert_eq!(String::from_utf8_lossy(&msg.data), "foo");
```

while this variant times out:

```rust
// Subscribe
async fn subscribe(addr: &str, subject: &str) -> Result<Subscription> {
    let nc = async_nats::connect(addr).await?;
    let sub = nc.subscribe(subject).await?;
    Ok(sub)
}

let sub = subscribe(&nats.address().to_string(), "test").await?;

// Publish
let nc = async_nats::connect(&nats.address().to_string()).await?;
nc.publish("test", "foo").await?;

// Assert
let msg = sub.next().await.unwrap();
assert_eq!(String::from_utf8_lossy(&msg.data), "foo");
```

I would expect the second variant to work the same way. But for some reason, the second connection seems to be delayed until after the event has been published, resulting in the subscription never receiving it.
First and foremost: we are planning a rework of the whole async behaviour of the NATS Rust client to support many runtimes.

About your current issue: try creating just one connection and use it to subscribe, then publish, then fetch the messages on the subscription with one of the available methods. Let us know if that helped.
@Jarema Thanks for the quick response.
Awesome, that's great news! About my issue: it's not a blocker for me either; I can work around this while waiting for official support.

Thank you for all the great work you're all doing here! (We too often forget to be thankful to OSS maintainers 👍)
If you need to create a subscription in another function, don't create the connection in that function's scope; just pass it in. Small changes in your code make it work:

```rust
use anyhow::Result;
use nats::asynk::Subscription;
use nats_test_server::NatsTestServer;

mod test {
    use super::*;

    pub(super) async fn success() -> Result<()> {
        let nats = NatsTestServer::build().spawn();

        // Subscribe
        let nc = nats::asynk::connect(&nats.address().to_string()).await?;
        let sub = nc.subscribe("test").await?;

        // Publish
        let nc2 = nats::asynk::connect(&nats.address().to_string()).await?;
        nc2.publish("test", "foo").await?;

        // Assert
        let msg = sub.next().await.unwrap();
        assert_eq!(String::from_utf8_lossy(&msg.data), "foo");
        println!("success: {}", String::from_utf8(msg.data).unwrap());
        Ok(())
    }

    pub(super) async fn timeout() -> Result<()> {
        let nats = NatsTestServer::build().spawn();
        let nc = nats::asynk::connect(&nats.address().to_string()).await?;

        // Subscribe: pass the existing connection in instead of
        // creating a new one inside the helper function.
        let sub = timeout::subscribe(nc.clone(), "test").await?;

        // Publish
        nc.publish("test", "foo").await?;

        // Assert
        let msg = sub.next().await.unwrap();
        assert_eq!(String::from_utf8_lossy(&msg.data), "foo");
        println!("success: {}", String::from_utf8(msg.data).unwrap());
        Ok(())
    }

    mod timeout {
        use super::*;
        use nats::asynk::Connection;

        pub(super) async fn subscribe(nc: Connection, subject: &str) -> Result<Subscription> {
            let sub = nc.subscribe(subject).await?;
            Ok(sub)
        }
    }
}

#[async_std::main]
async fn main() -> Result<()> {
    test::success().await?;
    test::timeout().await?;
    Ok(())
}
```

About the async rework: currently we're working hard on catching up on feature parity (some JetStream features and KV). When that's done, async will probably come next.
Would you accept a PR for async-std support? @Jarema
Hey!
@stjepang Forgive me in advance :) but I was wondering if there is any easy way to remove the smol dependency so that I can use only one executor in my app?

My application relies heavily on Tokio and the Tokio ecosystem, so naturally I would prefer to keep it that way.

The rationale for my request is that when I was trying to convert my app to use async NATS, I ran into some locking problems while passing messages from tasks running in smol to tasks running in Tokio via channels.

I must admit I quickly reverted back to the blocking code, but it would be nice if everything was executed asynchronously on one executor.