When per_listener_settings is true, queued messages may get lost between bridged brokers #1891
This occurred when reloading a persistence file and `per_listener_settings true` is set and the client did not set a username. Closes #1891. Thanks to Mikkel Nepper-Christensen.
Thank you for the detailed explanation. What is happening here is that … I've pushed fixes for this to the …
I've been scratching my head for a week, trying to figure out whether this was a bug, a feature, or a problem in my code. I posted this issue three hours ago, left work, jumped on a train and rode my bike the last 5 km home from the station. When I walked through my front door 90 minutes later, I looked at my phone and you, @ralight, had already pushed a fix. Thank you so much for looking into this.
You'll notice that there is a bit of variability in response time for different issues; you got lucky :)
@ralight One more thing I forgot to mention, sorry about that. I'm not sure if this is related to the same issue, but I guess it is: if I add an ACL file (acl.txt) to the mix, the same problem is triggered, even with per_listener_settings set to false. So with the broker A configuration below, the queued messages are not delivered either in the scenario described in my initial post.
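The configuration and the attached acl.txt are not included in this copy of the issue, so the sketch below is only an assumed illustration of the kind of setup described; the ACL contents, bridge name, and broker B address are placeholders.

```
# Broker A (assumed sketch): per_listener_settings false plus an ACL file
per_listener_settings false
acl_file acl.txt

# Bridge to broker B; connection name and address are placeholders.
connection bridge-to-b
address broker-b.example.local:1883
topic hello in 1
cleansession false
```

```
# acl.txt (assumed example): allow anonymous clients to read and write the test topic
topic readwrite hello
```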
@ralight I encountered this issue as well recently; however, it started when I switched to version 2.0.14. When I found this post I changed back to 1.6.15 and now it works again. Is it possible the issue still exists in the 2.0.x branch?
Problem
When per_listener_settings is set to true on a broker, messages published to and queued on a bridged broker are not delivered to the first broker's subscribing clients. Mosquitto version is 1.6.12.
How to reproduce
1. Start broker B.
2. Start broker A, which bridges to broker B with `cleansession false` and topic `hello`.
3. Connect the subscriber to broker A and subscribe to `hello` with QoS 1 and `clean: false`.
4. Stop broker A.
5. Publish a message to topic `hello` with payload `world` on broker B with QoS 1.
6. Start broker A again and reconnect the subscriber: the queued message is never delivered.
Detailed description
This is how my MQTT clients and brokers are connected:
SUBSCRIBER -> BROKER A -> BROKER B <- PUBLISHER
The subscriber (a Node.js client) connects with `clean: false` to broker A, while broker A bridges to broker B with `cleansession false`. So if broker A is temporarily stopped, messages published to broker B are queued on broker B in the meantime. When broker A is started again, the queued messages on broker B should be delivered to the subscriber via broker A once the subscriber reconnects.
This seems to work only if per_listener_settings is set to false on broker A. When per_listener_settings is true, the messages queued on broker B are not delivered to the subscriber.
It's also worth mentioning that if the messages are published directly to broker A, queuing works fine. This can be verified with the following procedure:
1. Connect the subscriber to broker A and subscribe to `hello` with QoS 1 and `clean: false`.
2. Disconnect the subscriber.
3. Publish a message to topic `hello` with payload `world` on broker A with QoS 1.
4. Reconnect the subscriber: the queued message is delivered.
So as far as I can see, there seems to be an issue when messages need to go through two brokers that are bridged.
Messages are published using mosquitto_pub with QoS 1:
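The exact command is not shown here, but a representative invocation (the host name is a placeholder) would be:

```
mosquitto_pub -h broker-b.example.local -t hello -m world -q 1
```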
I have stripped my configurations down to make things as simple as possible - everything except the settings listed below is default.
Broker A configuration
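A minimal sketch of what this stripped-down configuration might have looked like, based on the settings described above (bridge to broker B with `cleansession false` and topic `hello`); the connection name, broker B address, and persistence setting are assumptions:

```
# Broker A (assumed sketch of the stripped-down configuration)
per_listener_settings true
# persistence is assumed; the fix commit mentions reloading a persistence file
persistence true

listener 1883
allow_anonymous true

# Bridge to broker B; connection name and address are placeholders.
# "topic hello in 1" pulls messages on topic hello in from broker B at QoS 1.
connection bridge-to-b
address broker-b.example.local:1883
topic hello in 1
cleansession false
```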
(I know that having per_listener_settings set to true looks stupid when there are no other listeners. I removed them to simplify the example, but the problem is the same with and without the extra listeners.)
Broker B configuration
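Likewise, broker B's configuration is not preserved here; a minimal assumed sketch that matches the described setup:

```
# Broker B (assumed sketch): a plain listener with anonymous access
listener 1883
allow_anonymous true
```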
Node.js subscriber
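The subscriber code is not included in this copy; a minimal mqtt.js sketch matching the behaviour described above (`clean: false`, QoS 1 subscription to `hello`) could look like this, where the host name and client id are placeholders:

```js
// Assumed sketch of the Node.js subscriber (mqtt@4.x).
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://broker-a.example.local', {
  clientId: 'queued-message-subscriber', // fixed client id so the broker can resume the session
  clean: false                           // persistent session: QoS 1 messages are queued while offline
});

client.on('connect', () => {
  client.subscribe('hello', { qos: 1 });
});

client.on('message', (topic, payload) => {
  console.log(`${topic}: ${payload.toString()}`);
});
```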
Software versions used
Mosquitto 1.6.12
Node.js v12.19.0
mqtt@4.2.4 (npm)
The brokers are running on Windows Server 2012 R2
The Node.js client is running on Windows 10