
Unable to get transport in multi threaded application #230

Closed
apezel opened this issue Nov 26, 2018 · 3 comments

Comments

@apezel

apezel commented Nov 26, 2018

Hi, thank you for your great work on CDRS, it's amazing.
I'm encountering a problem with the latest beta in a multi-threaded application:

thread 'tokio-runtime-worker-1' panicked at 'called `Result::unwrap()` on an `Err` value: General("Unable to get transport")', libcore/result.rs:1009:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.

I'm using lazy_static to share the connection between threads:

lazy_static! {
  pub static ref CASSANDRA: Arc<ClusterSession> = {
    new_connection()
  };
}

pub type ClusterSession = Session<RoundRobinSync<TcpConnectionPool<NoneAuthenticator>>>;

pub fn new_connection() -> Arc<ClusterSession> {
  let node = NodeTcpConfigBuilder::new(super::CONF.cassandra_host.as_str(), NoneAuthenticator {}).build();
  let cluster_config = ClusterTcpConfig(vec![node]);
  Arc::new(new_lz4(&cluster_config, RoundRobinSync::new()).expect("session should be created"))
}

Inside the thread:

do_something_with_cassandra(&CASSANDRA.clone())

The problem occurs especially when I'm running large batch inserts in parallel.
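
For reference, a minimal sketch of that parallel usage (the thread count is arbitrary, and do_something_with_cassandra stands in for the real batch-insert code from the snippet above):

// Hypothetical reproduction sketch: several worker threads clone the shared
// Arc from the lazy_static and hit the same session concurrently.
use std::thread;

fn run_parallel_batches() {
    let handles: Vec<_> = (0..8)
        .map(|_| {
            thread::spawn(|| {
                // Stand-in for the real batch-insert code.
                do_something_with_cassandra(&CASSANDRA.clone())
            })
        })
        .collect();

    for handle in handles {
        handle.join().expect("worker thread panicked");
    }
}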

Am I doing something wrong, or is it a bug in CDRS?

apezel added a commit to apezel/cdrs that referenced this issue Nov 27, 2018
Switch to a blocking mutex lock to prevent the "Unable to get transport" error in multi-threaded applications.
referenced issue AlexPikalov#230
@AlexPikalov
Owner

AlexPikalov commented Nov 27, 2018

@apezel thank you for creating the issue and the PR.

The way you use it seems okay to me. The existing solution based on try_lock probably lacks retry logic. Your proposal #231 seems to be a completely valid solution, but I just want to make sure it doesn't lead to unexpected panics.
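
For illustration only (not the actual CDRS internals), the difference between the two locking strategies, sketched over a plain std::sync::Mutex:

// A try_lock with no retry fails fast under contention, which is what
// surfaces as "Unable to get transport"; a blocking lock waits instead.
use std::sync::Mutex;

struct Pool; // stands in for the connection pool guarded by the mutex

fn get_transport_try(pool: &Mutex<Pool>) -> Result<(), String> {
    match pool.try_lock() {
        Ok(_guard) => Ok(()),                                  // pool was free
        Err(_) => Err("Unable to get transport".to_string()),  // contended: give up immediately
    }
}

fn get_transport_blocking(pool: &Mutex<Pool>) -> Result<(), String> {
    // Waits until the current holder releases the pool instead of erroring out.
    let _guard = pool.lock().map_err(|_| "poisoned lock".to_string())?;
    Ok(())
}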

@apezel
Author

apezel commented Nov 29, 2018

Thanks!

@AlexPikalov
Owner

Thank you for creating the issue and providing a solution.
