Server disconnect client with same ID when timeout expires #486
Comments
Hi, I pushed some changes which will restart the KeepAliveMonitor for the affected session. The problem may be that the initial connect packets are, for logical reasons, not tracked by the monitor. Please let me know if this fixes your issue. You can also disconnect every client and clean up sessions via DeleteSessionAsync, which is available from the GetClientStatus method of the server. Best regards
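The mechanism being discussed (a per-session keep-alive monitor that is restarted by incoming packets) can be sketched as follows. This is a minimal Python illustration of the idea, not MQTTnet's actual implementation; all names here are made up:

```python
import time

class KeepAliveMonitor:
    """Sketch of a per-session keep-alive monitor: every tracked packet
    resets the timer; if nothing arrives within the timeout, the session
    is considered dead. Illustrative only, not MQTTnet code."""

    def __init__(self, timeout_seconds):
        self.timeout_seconds = timeout_seconds
        self.last_packet = time.monotonic()

    def restart(self):
        # Called whenever a tracked packet arrives from the client.
        self.last_packet = time.monotonic()

    def has_expired(self):
        return time.monotonic() - self.last_packet > self.timeout_seconds

monitor = KeepAliveMonitor(timeout_seconds=30)
monitor.restart()
print(monitor.has_expired())  # False right after a packet
```

If the initial connect packets are not routed through `restart()`, a slow connection handshake can trip the timeout even though the client is alive, which matches the behaviour described above.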
Thank you for the quick response. I'll test your changes with our (old) code and let you know what happens.
Hi, I am having a similar issue with #454, with non-clean client disconnects. My workaround has been to have the client wait out the timeout before reconnecting again.
Please let me know if it works. I can also build another RC if required. |
@chkr1011 To be completely sure I need some more tests, but the old code seems to work without problems. Thank you very much. In the new code we use
Are you interested in a patch adding the option for async interceptors? IMHO it makes sense to have the option of an async interceptor (for example, if you need to validate the client certificate or some other data against an external data source, like a database).
I did some more tests and I noticed some errors in the MQTTnet logs. I don't know if this is related to the changes, but here is the log:
@JanEggers I tried to fix the above exception. The problem is that the disconnect/dispose is called twice: once from the workaround callback and then again in the finally block. I fixed this in the develop branch, leaving some comments. Please have a quick review (MqttClientSession.cs) and let me know if you agree with this approach.
@chkr1011 Looks OK. I tend to use a TaskCompletionSource in such cases to make sure things happen just once, but you added a test, so as long as it's fixed, it's OK.
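The "run cleanup exactly once" idea discussed here (which a TaskCompletionSource provides in C#) can be sketched in Python with a guarded flag. This is an illustration of the pattern, not the actual MqttClientSession code:

```python
import threading

class SessionCleanup:
    """Sketch of once-only disposal: the first caller wins, later callers
    (e.g. the workaround callback and then the finally block) become
    no-ops instead of disposing twice. Names are illustrative."""

    def __init__(self):
        self._lock = threading.Lock()
        self._disposed = False
        self.dispose_count = 0

    def dispose(self):
        with self._lock:
            if self._disposed:
                return
            self._disposed = True
            self.dispose_count += 1

cleanup = SessionCleanup()
cleanup.dispose()   # from the workaround callback
cleanup.dispose()   # again from the finally block - ignored
print(cleanup.dispose_count)  # 1
```

The lock makes the check-and-set atomic, so the guard also holds when the two callers race on different threads.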
@fogzot question: are your disconnected clients still able to publish? |
@lawzla Can your workaround be managed entirely by the client? Just set the auto-reconnect delay to be greater than the keep-alive period?
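As background for this workaround: MQTT 3.1.1 allows the server to wait up to one and a half times the keep-alive interval before dropping a silent client, so a reconnect delay needs to clear that whole window, not just the keep-alive period itself. A small sketch (the helper name and margin are made up for illustration):

```python
def safe_reconnect_delay(keep_alive_seconds, margin_seconds=1.0):
    """MQTT 3.1.1 lets the server wait up to 1.5x the keep-alive
    interval before closing a silent connection, so reconnecting only
    after that window avoids colliding with the stale server session."""
    return 1.5 * keep_alive_seconds + margin_seconds

print(safe_reconnect_delay(15))  # 23.5
```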
@x37v Nope, when the old session times out, the new client is disconnected and the connection is closed. We solved this by closing the connection and removing the old session when a new client with the same client ID connects, as explained above.
@fogzot Does this still apply to the latest version? |
I don't know, we still have the logic to disconnect the old client in place. What changed in the last version and what is supposed to happen? I can easily test if I know what to test for. |
Well, I assume you were using version 2.8.5 back when the issue was created, and @chkr1011 made a lot of changes for the 3.x version. So there is a chance that this no longer occurs.
I'll have to write a small test server to check this. I'll do that and let you know. |
I am closing this due to inactivity.
How to reproduce the problem:
This happens because the client session is kept in a dictionary and retrieved by ID. The server has no way to know that the old client disconnected (the keep-alive is there exactly for this reason, after all) and when the old keep-alive expires the server retrieves the new session and closes it.
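The failure mode described above can be sketched as follows. All names here are invented for illustration; MQTTnet's actual session store is different, but the shape of the bug is the same: the expired timer belongs to the old connection, while the dictionary lookup by client ID returns the new session.

```python
# Sketch of the failure mode: an ID-keyed session dictionary plus a
# stale keep-alive timer kills the replacement session. Names made up.
sessions = {}

class Session:
    def __init__(self, client_id, generation):
        self.client_id = client_id
        self.generation = generation
        self.closed = False

def connect(client_id, generation):
    sessions[client_id] = Session(client_id, generation)

def on_keep_alive_expired(client_id):
    # BUG: the expired timer was armed for the OLD connection, but the
    # lookup by ID returns whatever session is current - here, the NEW
    # client that reused the same ID without a clean DISCONNECT.
    session = sessions.get(client_id)
    if session is not None:
        session.closed = True

connect("sensor-1", generation=1)   # old client
connect("sensor-1", generation=2)   # new client, same ID, no DISCONNECT
on_keep_alive_expired("sensor-1")   # old client's keep-alive fires
print(sessions["sensor-1"].generation, sessions["sensor-1"].closed)  # 2 True
```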
We solved the problem by refusing connections from new clients while the old session is still active, because that was the easiest to implement with the current API, but we would like to simply discard the old session; unfortunately we didn't find a way. Is there a way to close an existing session from the connection validator?
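The "discard the old session" behaviour we were after is essentially session takeover: on a duplicate client ID, close the old connection and cancel its keep-alive monitor before installing the new session, so the stale timer can never touch the replacement. A hedged sketch with made-up names, not MQTTnet's implementation:

```python
# Sketch of session takeover on duplicate client ID. Names made up.
sessions = {}

class Session:
    def __init__(self, client_id):
        self.client_id = client_id
        self.closed = False
        self.monitor_cancelled = False

    def take_over(self):
        # Close the old connection AND cancel its keep-alive monitor so
        # its eventual expiry cannot close the replacement session.
        self.closed = True
        self.monitor_cancelled = True

def connect(client_id):
    old = sessions.get(client_id)
    if old is not None:
        old.take_over()
    session = Session(client_id)
    sessions[client_id] = session
    return session

first = connect("sensor-1")
second = connect("sensor-1")
print(first.closed, second.closed)  # True False
```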