
All TcpTransports sharing one SessionManager leads to unintended Session cancellation #285

Open
thomasklunker opened this issue Aug 30, 2023 · 3 comments


@thomasklunker

Hi everyone,
I have run into a problem recently.

In commit 9741741 from May 25, 2022, the Server class's behavior was changed so that all TcpTransport instances it creates share one SessionManager instance owned by the Server. The relevant code is around line 632:

```rust
/// Create a new transport.
pub fn new_transport(&self) -> TcpTransport {
    TcpTransport::new(
        self.certificate_store.clone(),
        self.server_state.clone(),
        self.address_space.clone(),
        self.session_manager.clone(),
    )
}
```

Now, when a TcpTransport finishes, i.e. when an OPC-UA client disconnects, it calls TcpTransport::finish:

```rust
transport.finish(final_status);
```

This method in turn calls SessionManager::clear on its SessionManager field, which, as mentioned above, is shared between all TcpTransports (client connections):

```rust
let mut session_manager = trace_write_lock!(self.session_manager);
session_manager.clear(self.address_space.clone());
```

SessionManager::clear terminates all Sessions, including the Sessions belonging to every other TcpTransport:

```rust
/// Puts all sessions into a terminated state, deregisters them, and clears the map
pub fn clear(&mut self, address_space: Arc<RwLock<AddressSpace>>) {
    for (_nodeid, session) in self.sessions.drain() {
        let mut session = trace_write_lock!(session);
        session.set_terminated();
        let mut space = trace_write_lock!(address_space);
        let diagnostics = trace_write_lock!(session.session_diagnostics);
        diagnostics.deregister_session(&session, &mut space);
    }
}
```

This was not a problem before commit 9741741, when each TcpTransport had its own separate SessionManager and would therefore only close its own Sessions.
The way it is now, however, leads to the bug that I came upon and which is the reason I investigated this in the first place. To reproduce:

  1. Start any (demo) OPC-UA server.
  2. Connect 2 or more clients to that server.
  3. Disconnect 1 client.
  4. Try to continue using any of the other clients. This fails with BadSessionIdInvalid, since all sessions were closed.

I strongly suspect that this behavior is not intended.
I managed to fix this issue by simply reversing the line change made in commit 9741741 that clones the SessionManager Arc into the TcpTransport, instead creating a new SessionManager for each TcpTransport. However, the commit message of 9741741 indicates that the shared SessionManager is intentional, so this might not be the ideal solution. Maybe it would be preferable to categorize the Sessions by TcpTransport in the SessionManager, so that they can be closed separately.

Please tell me if I missed anything.

@kevincolyar

I wonder if this is the same issue I'm having with #281

@thomasklunker
Author

thomasklunker commented Sep 12, 2023

> I wonder if this is the same issue I'm having with #281

I can definitely see them being related.

@evanjs

evanjs commented Mar 8, 2024

Pretty sure I'm encountering this lately.
Feels like it could be a priority issue.

Per the current documented limitations:

> Currently the following are not supported
>
>   • Multiple created sessions in a single transport.

If I understand the issue OP and I are experiencing, it almost seems like this "currently unsupported" scenario is (inadvertently?) the way the library currently functions, leading to the previously mentioned unintended session cancellation.
