QUIC FUTURE: Add concurrency architecture design document #24256
Conversation
As such, the concept of a **Concurrency Management Layer (CML)** is introduced.
The CML lives between the APL and the QUIC core code. It is responsible for
dispatching in-thread mutations of QUIC core objects when operating under CCM,
and for dispatching meshsages to a worker thread under WCM.
typo: messages
Is it worth committing the PlantUML file this was generated from instead, in case it needs future edits?
decrease as a result of an `ossl_cml_read` call). Assuming that only one thread
makes calls to CML functions at a given time *for a given pipe*, this therefore
poses no issue for callers.
Is "no issue for callers" really accurate here? It makes sense if the threading constraints are met, but it seems the entire point of this concurrency model is to allow multiple threads to do I/O on a given QUIC connection/stream in parallel.
UDP is not required to preserve the send order of packets on the receiving side. It is up to the caller to determine the packet order and rearrange as needed. A typical example of this is where the routing tables are being updated during transmission, assuming there is a sequencing mechanism in the packets implemented by the application, and that a recvfrom() is always pending so there is zero packet loss due to timing:
- sendto(seq 1) issued in peer1
- recvfrom(seq 1) completes in peer2
- sendto(seq 2) issued in peer1
- Routing table update shortening the route between peer1 and peer2
- sendto(seq 3) issued in peer1
- recvfrom(seq 3) completes in peer2
- recvfrom(seq 2) completes in peer2
This may be irrelevant to OpenSSL, and if so, I apologize.
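The reordering scenario in the list above can be sketched with a small application-level reassembly buffer. This is purely illustrative, not an OpenSSL or QUIC API; the class and method names are invented for the example, and it assumes the application embeds a sequence number in each datagram as described above:

```python
# Illustrative sketch (not an OpenSSL API): an application-level
# reassembly buffer that restores send order from sequence numbers
# carried in each datagram's payload.

class SequencedReceiver:
    def __init__(self):
        self.next_seq = 1   # next sequence number to deliver
        self.pending = {}   # out-of-order datagrams, keyed by seq

    def on_datagram(self, seq, payload):
        """Accept a datagram in arbitrary arrival order; return the
        payloads that can now be delivered in send order."""
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

# The arrival order from the routing-table example above: 1, 3, 2.
rx = SequencedReceiver()
assert rx.on_datagram(1, "a") == ["a"]
assert rx.on_datagram(3, "c") == []          # gap: seq 2 still missing
assert rx.on_datagram(2, "b") == ["b", "c"]  # gap filled, both delivered
```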
UDP has no ordering requirements, but QUIC does. RFC 9000 section 2.2 indicates that stream frames within a connection contain an offset field indicating the ordering of data within the stream. The core of the QUIC code receives this data from the underlying socket and queues it for availability to the application up to the point where there are no gaps in the stream (see ossl_quic_rstream_queue_data for an example of where this is done). Applications reading quic streams (via quic_read or one of the proposed concurrency models) will only read byte streams in order.
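The gap-handling behaviour described above can be sketched as follows. This mirrors the idea, not the actual API, of `ossl_quic_rstream_queue_data`: STREAM frames carry an offset, arriving frames are queued by offset, and data is only made readable up to the first gap (the class below is hypothetical and assumes non-overlapping frames):

```python
# Illustrative sketch of in-order delivery of QUIC stream data:
# frames are queued by their stream offset, and the application can
# only read the contiguous prefix with no gaps.

class RStream:
    def __init__(self):
        self.read_off = 0   # next byte offset the application may read
        self.frames = {}    # queued frames keyed by stream offset

    def queue_data(self, offset, data):
        # Frames may arrive from the network in any order.
        self.frames[offset] = data

    def read_available(self):
        # Deliver bytes only while there are no gaps in the stream.
        out = b""
        while self.read_off in self.frames:
            data = self.frames.pop(self.read_off)
            out += data
            self.read_off += len(data)
        return out

s = RStream()
s.queue_data(5, b"world")            # arrives first; gap at offset 0
assert s.read_available() == b""     # nothing readable yet
s.queue_data(0, b"hello")            # gap filled
assert s.read_available() == b"helloworld"
```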
This PR is in a state where it requires action by @openssl/otc but the last update was 30 days ago
This is not for the QUIC Server MVP. It is a design document produced as part of some necessary long term planning for API evolution to support high performance multithreaded I/O in advanced server applications.
As such, it is not a review priority. However, comments and feedback are of course welcome.
However, the first part of this document introduces relevant concepts which will be used in the QUIC Server MVP in PRs to follow as this is necessary to fix #24166. You can stop reading at the heading "Architecture" if you don't care about the future stuff and only want to know about 3.4-relevant things.
The potential architecture for a Concurrency Management Layer (which enables both WCM and non-WCM concurrency models to be implemented without having to maintain two radically different code paths) discussed here is a first draft under contemplation and might change substantially further down the line. It is intended as a preview of some internal architectural designs which are being considered.