
Clients have no way to reliably send messages in order (other than serialising each and every send) #1316

Closed
ara4n opened this issue Nov 2, 2022 · 5 comments
Labels

- `A-Client-Server`: Issues affecting the CS API
- `improvement`: An idea/future MSC for the spec
- `wart`: A point where the protocol is inconsistent or inelegant

Comments

@ara4n (Member) commented Nov 2, 2022

In the current CS API, there is no way to assert an ordering of /send calls (EDIT: other than sending each and every one in series, waiting for the 200 before making the next request). Therefore if one request happens to take longer than another, or gets delayed due to network conditions, there is no efficient way to require that ordering is preserved. This happens particularly often when recovering after a network outage.

I think our options are:

  1. To change /send to (optionally) include a pointer to the preceding message in order to express a partial ordering.
  2. To send messages via websockets or some other ordered transport
  3. To order based on timestamps (ugh)

It feels like option 1 is the least worst option to me.

This would solve chronic bug 44: "Users expect messages to always be sent (and received) in the order that they were written."
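A sketch of what option 1 might look like on the wire, assuming a hypothetical `prev_txn_id` field added to the event content of the standard `PUT /_matrix/client/v3/rooms/{roomId}/send/{eventType}/{txnId}` endpoint. The field name and its semantics are invented here for illustration; they are not part of the current CS API or any MSC.

```python
def build_send_request(room_id, txn_id, body, prev_txn_id=None):
    """Build the path and JSON body for a /send carrying an optional
    pointer to the transaction it must be ordered after.

    `prev_txn_id` is a HYPOTHETICAL field, not part of the current spec."""
    path = f"/_matrix/client/v3/rooms/{room_id}/send/m.room.message/{txn_id}"
    content = {"msgtype": "m.text", "body": body}
    if prev_txn_id is not None:
        # The server would be expected to hold this event until the event
        # sent with transaction ID `prev_txn_id` has been persisted.
        content["prev_txn_id"] = prev_txn_id
    return path, content

# The second message declares a partial ordering after the first,
# so the two requests could safely be made in parallel.
path, content = build_send_request("!abc:example.org", "txn2", "second",
                                   prev_txn_id="txn1")
```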

@turt2live added the `A-Client-Server`, `wart`, and `improvement` labels on Nov 2, 2022
@richvdh (Member) commented Nov 2, 2022

> In the current CS API, there is no way to assert an ordering of /send calls.

Well, clients can send messages in series (i.e., not make a second /send request until the first one completes successfully), which is what I thought our clients did, tbh...
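The in-series approach described above can be sketched as a loop that awaits each /send's response before issuing the next. A minimal sketch: `send_event` stands in for whatever HTTP call the client actually makes, and is not a real SDK function.

```python
import asyncio

async def send_serialised(send_event, messages):
    """Send messages strictly in order: await each /send's 200 response
    before making the next request. No pipelining means ordering is
    preserved regardless of per-request network jitter."""
    event_ids = []
    for msg in messages:
        # The next request is not started until this one completes.
        event_ids.append(await send_event(msg))
    return event_ids
```

Because each request is fully completed before the next begins, a slow or retried request simply delays its successors rather than being overtaken by them.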

@MTRNord (Contributor) commented Nov 2, 2022

Mostly wondering: why is waiting on the previous request to finish not an option for clients? Sure, it may be a little slower, but wouldn't the wait time just move to a different part of the chain, since there is a reason the request takes longer in the first place?

@ara4n (Member, Author) commented Nov 2, 2022

True. I guess the only problem is that it means your throughput of messages is limited to one per RTT. But given we throttle messages by default pretty slowly, perhaps that's not a disaster.
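Back-of-envelope numbers for that throughput ceiling. The 0.2 msg/s figure below is assumed from Synapse's default `rc_message` rate limit (0.2 per second sustained, burst of 10); other homeservers may throttle differently.

```python
# Serialised sending caps client throughput at one message per round trip.
rtt_seconds = 0.1                          # assumed 100 ms client<->server RTT
serialised_throughput = 1 / rtt_seconds    # messages per second

# Assumed default sustained message rate limit (Synapse rc_message.per_second).
default_rate_limit = 0.2

# Even a fairly slow 500 ms RTT (2 msg/s) still out-paces the default
# throttle by an order of magnitude, so the RTT cap rarely binds in practice.
assert 1 / 0.5 > default_rate_limit
print(serialised_throughput)  # 10.0
```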

@MadLittleMods (Contributor) commented

Related to #260 (could be a duplicate with some adjusting)

@richvdh (Member) commented Apr 12, 2023

> true. i guess the only problem is that it means your throughput of messages is limited to one per RTT. but given we throttle messages by default pretty slowly, perhaps that's not a disaster.

Given this, I'm going to close it for now. If the RTT becomes a real bottleneck (rather than server-side throughput), then we can reopen with something more focussed.

@richvdh closed this as completed on Apr 12, 2023