
Wrong topicIDs when sending a direct message #25

Closed
haadcode opened this issue Dec 2, 2017 · 5 comments · Fixed by #26 or #27

Comments

@haadcode
Contributor

haadcode commented Dec 2, 2017

As I started debugging what's causing orbitdb/orbitdb#258, I found out that when sending direct messages, we sometimes have the wrong topicID in our topicIDs array.

If I'm not mistaken, the peer connection is shared between rooms (topics), so when we send a direct message to a peer, it might end up getting handled in a different room instance. I believe this is because _handleDirectConnection() pulls directly from the (shared-between-rooms) connection, so the handler in use can be the one created in room A even though the message was sent via room B. So essentially, when we set topicCIDs: [ this._topic ], the topic is not actually tied to the room the message was sent through.
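To make that concrete, here's a rough sketch of the failure mode (the connections map and dialPeer are illustrative stand-ins, not our actual code):

```js
// Minimal sketch of the problem: the connection to a peer is shared
// between rooms, and the handler attached to it belongs to whichever
// room dialed that peer first.
const EventEmitter = require('events')

const connections = new Map() // peerId -> conn, shared by ALL rooms

class Room extends EventEmitter {
  constructor (topic) {
    super()
    this._topic = topic
  }

  _getConnection (peerId) {
    let conn = connections.get(peerId)
    if (!conn) {
      conn = dialPeer(peerId) // hypothetical dial helper
      connections.set(peerId, conn)
      this._handleDirectConnection(conn) // handler is bound to THIS room
    }
    return conn
  }

  _handleDirectConnection (conn) {
    conn.on('data', (message) => {
      // A direct message sent via room B still lands in room A's
      // handler here, and gets stamped with room A's topic:
      this.emit('message', Object.assign({}, message, {
        topicCIDs: [this._topic]
      }))
    })
  }
}
```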

I'm not sure how we should fix this. It feels to me that, as it stands today, it breaks the room abstraction (a direct message sent from one room can "appear" in another), but then again, sending a direct message to a peer doesn't necessarily need to be tied to a particular room at the abstraction level. One solution would be to change the sendTo() method so that it creates a proper PubSub message, like we do on the receiving end; that way we can tie the room (topic) ID to the direct message, along the lines of the sketch below.
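Roughly (a sketch only; the envelope fields and the way the connection is written to are assumptions, not the actual code):

```js
// Continuing the sketch above: wrap the payload in a PubSub-shaped
// envelope that carries this room's topic, so the receiving end can
// route the message to the right room.
const EventEmitter = require('events')
const crypto = require('crypto')

class Room extends EventEmitter {
  sendTo (peer, message) {
    const envelope = JSON.stringify({
      from: this._id, // assumed: our own peer id
      data: Buffer.from(message).toString('hex'),
      seqno: crypto.randomBytes(10).toString('hex'),
      topicIDs: [this._topic], // tie the message to this room
      topicCIDs: [this._topic]
    })
    this._getConnection(peer).write(envelope) // assumes a writable conn
  }
}
```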

The question is, do we want that? Do we want direct messages to be per-room, or per-user/connection?

Any thoughts @pgte?

@haadcode
Contributor Author

haadcode commented Dec 2, 2017

Fixed this in OrbitDB with https://github.com/haadcode/ipfs-pubsub-room/blob/fix/direct-messges-in-room/src/index.js#L69 and https://github.com/haadcode/ipfs-pubsub-room/blob/fix/direct-messges-in-room/src/index.js#L154. I'm not sure if this is what we want, but I'd be happy to fix/PR it properly.

@pgte
Contributor

pgte commented Dec 2, 2017

It looks like you're absolutely right.
As connections are shared, direct messages going to one topic end up distributed to all the topics because, right now, there is no way in the protocol to distinguish between them.

From your code, I'd only add the following:
In src/index.js, in the connection handler, emit the message if and only if its topicIDs contain the current room's topic.
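Something like this (a sketch; it assumes the envelope carries topicIDs as in the sendTo() change sketched above):

```js
// In the connection handler: parse the envelope and only emit into
// this room if the message was addressed to this room's topic.
conn.on('data', (data) => {
  const message = JSON.parse(data)
  if (message.topicIDs && message.topicIDs.indexOf(this._topic) > -1) {
    this.emit('message', message)
  }
})
```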

Thoughts?

@haadcode
Contributor Author

haadcode commented Dec 3, 2017

@pgte You're absolutely right, that would be a much more elegant solution! 😄 👍

I'll PR this next week with the fix as per your proposal. I just need to make sure it actually fixes the issue (I'm not sure how the connection handler works), but it should be fine.

@pgte
Contributor

pgte commented Dec 4, 2017

@haadcode thanks for finding this and for the fix.
Heavily inspired by your fix, I took a stab at it (I had a problem downstream that I suspected was being caused by this).
I also added a bunch of tests to verify that direct messages don't get mixed up.

Landed in version 1.0.0. Care to give it a try?

@haadcode
Contributor Author

haadcode commented Dec 4, 2017

Thanks for fixing this! I'll test it asap.

pgte added a commit that referenced this issue Dec 5, 2017
… delivered to the correct rooms by using a global handler. Should fix #25