bluetooth: l2cap: support sending buffers that are larger than MTU #90835
Conversation
Force-pushed from 34fe9f7 to 46b928b
Thalley left a comment:
It's an interesting change request. Can you elaborate on the use case?
Splitting one SDU into multiple SDUs isn't something I've seen in the core spec. Is it defined?
Imagine sending an L2CAP SDU with a header and some data. If the receiver is always expecting the first e.g. 8 octets to be a header, then splitting the SDU into multiple SDUs would tell the transmitting Zephyr application that everything went well, while the 2 SDUs may be discarded by the receiver: the first SDU perhaps failing a length check, and the second one not having the header.
I'm not suggesting to split one SDU into multiple SDUs. I'm trying to support a use case where a single net_buf might be so large that it's bigger than the MTU, so it can't fit in a single SDU. The current API would just reject this net_buf with an error if it was passed to bt_l2cap_chan_send(), forcing the caller to figure out the best way to divide their large message into multiple SDUs themselves and then pass those to bt_l2cap_chan_send(). I thought it was cleaner and easier to have this feature in l2cap.c instead, so it will go ahead and send the large net_buf in as many SDUs as needed. Unless I'm missing something where bt_l2cap_chan_send() has to be limited to a net_buf that is less than the MTU size.
Force-pushed from 46b928b to f7bcc86
That is a fair point. The current documentation does not explicitly state this, but does imply it (given its references to the MTU).

This is definitely a stable API change, and arguably a breaking stable API change, since any application that handles the return value of bt_l2cap_chan_send() may be affected. My gut feeling is that this needs to go through the steps of https://docs.zephyrproject.org/latest/develop/api/api_lifecycle.html#introducing-breaking-api-changes, but I'll defer to the host maintainer, @alwa-nordic, for that.

Also keep in mind that this is just my input for now, and I'm not rejecting this change, but I think you need to provide a better reason than "just in case".
I also agree it's better to try to introduce this as a new API, or even just a code sample demonstrating the technique. Besides this being a breaking change, L2CAP is meant to be a datagram protocol. The whole point of SDUs is that they provide a boundary between messages. Upper layers (the various profiles) expect the SDU boundary to be preserved or they will malfunction. It's likely unsafe to split a message into multiple SDUs, so we should not do this implicitly. It would also be a good idea to motivate the existence of sample code like this by listing reasons why a user can't increase the L2CAP TX MTU instead.
Okay, let me describe the scenarios we ran into that motivated this CL. I definitely tried to implement this outside of the l2cap.c layer, but found it problematic to do so and ultimately came up with this change (I'll describe why it was problematic with the current API).

First, to answer the question of why we can't just increase the TX MTU: it's because we aren't in control of that value. Our use case was an embedded device, running on something like an nRF5340, communicating with both Android Pixel phones and iPhones. From our experience, Android Pixels (from 7 to 9) give an MTU of 65535 (the maximum) and an MPS of 251 (if data length extension is enabled). The few iPhones I've tried give a much smaller MTU (I noted one case of 1172 and another of 1251, with the MPS being the same). We have a large data packet, on the order of 20000 bytes, that we want to send from our device to the phone. For the Pixel case, we were able to send it as one SDU with the existing API, but when connected to the iPhone, the same sending code would fail since the message was now larger than the MTU the iPhone gave us.

So, I tried to add logic to chunk the message into multiple SDUs in our app layer above l2cap. I tried two methods. One was to use the completion callback. The problem with this method is that the completion callback is only passed the channel as the callback argument; it does not get a pointer to the net_buf itself. Since the channel could have multiple net_bufs sent to the l2cap layer simultaneously, it was problematic to figure out whether the completion callback was for a net_buf that needed further chunking, or for one of the other (typically smaller) messages being sent on the same channel. The other method I tried was to use the net_buf destroy callback. That had its own problems, such as the fact that the destroy callback might be called when the channel was disconnected rather than when the buffer was sent, and trying to detect that special case wasn't reliable due to race conditions.

Since the l2cap layer has all the information needed to chunk the large net_buf into multiple SDUs, analogous to how it chunks the net_buf into multiple PDUs as needed, it seemed a logical extension to me. If there's an alternative method to reliably/safely do the chunking outside of the l2cap layer, I'm open to suggestions. Would it be less invasive to change the callback to provide the net_buf that was completed? I thought that might be a generally useful change as well, but it's probably an even larger API change in terms of the amount of application code that would be impacted. If a new API is preferred, I could explore that route.
I think we can find a solution that gives the application the low-level access that would let you implement the chunking as you wish. I have been wanting to propose this before: a low-level send operation, mirroring the existing seg_recv path, modeled on the segmentation logic in the Host.

The application initiates transmission by marking a channel as wanting to send. The Host will then notify the channel when it's its turn to send, and expect a segment in return. The Host keeps count of the TX credits available to the channel and informs the application of new credits. If the channel has no TX credits, then wanting to send is an error. This API puts credits front and center and removes the "sent" callback, which is poorly defined.

This would be a very powerful and flexible API. It allows your application to have full control over how a buffer is chunked into SDUs, since the Host would not care about that at all. It would allow minimalist applications to not include any segmentation logic at all, saving on code size, if the application knows it won't need it. It would allow on-demand rendering into buffers, possibly useful in some cases to decrease RAM usage. And it would allow just-in-time reading of a register to get fresher data.

The Host should police that the MTU, MPS and credit values are respected, to enforce that the application does not break the L2CAP protocol and is qualifiable.
Force-pushed from f7bcc86 to fb82405
If the net_buf passed to bt_l2cap_chan_send() is larger than the TX MTU, the l2cap layer will now send as many SDUs as needed to send the net_buf instead of returning an error. The processing is similar to how an SDU that is larger than the MPS is fragmented into multiple PDUs. The net_buf is left on the tx_queue until it is done being sent.

Adding of the SDU header is moved from bt_l2cap_dyn_chan_send() to l2cap_data_pull(), just like how the PDU header is added.

Signed-off-by: Mike J. Chen <mjchen@google.com>
Force-pushed from fb82405 to 42414c4
This pull request has been marked as stale because it has been open for more than 60 days with no activity. Remove the stale label or add a comment saying that you would like to have the label removed; otherwise this pull request will automatically be closed in 14 days. Note that you can always re-open a closed pull request at any time.
@alwa-nordic, I was busy for a few weeks, but I have some time now to try to implement your suggestion for a seg_send callback, mirroring the seg_recv callback. A few questions:
My preference is that the callback fills out the SDU header. The Host should look at the header and validate it to benefit DX, but this could optionally be disabled. The validation logs an error if the SDU size exceeds the MTU; it also keeps track of the SDU length remaining and logs an error if a PDU contains trailing data.

I prefer a new API.
PR created with a draft of this new feature: #95027

I don't think the regular segmentation code in the stack will be compiled out as currently coded. I don't see that happening for the seg_recv code either; the compiler doesn't appear able to tell that the code won't be used when the CONFIGs are enabled for SEG_SEND and SEG_RECV. Right now, I have PDU validation but no real SDU validation on the sending side.


