Consider the following socket reader, analogous to an asynchronous codec:
```rust
let data = socket.recv(|buffer| {
    let len = buffer.len();
    if len < 13 {
        return (0, None);
    }
    (13, Some(buffer[0..13].to_vec()))
});
```
With the current implementation of RingBuffer, the buffer exhausts without wrapping back around to the 0th index. Inside RingBuffer::deque_many_with we can see that the implementation only wraps around when the read size is aligned with the total capacity of the buffer.
With a capacity of 256, after the 19th iteration we reach read_at = 247, but max_size is only 9. The read function therefore only gets 9 bytes to read, returns 0 for its size, fails to advance read_at, and the buffer exhausts itself even though a full frame of usable data remains in the buffer.
Unfortunately we can't immediately solve this without losing the contiguous property of the buffer passed to f, which puts the onus of tracking full data frames on the user and quickly becomes unwieldy and inefficient when multiple sockets are used. I can think of two reasonable solutions:
1. Implement a bip buffer.
2. Pass data to the user in at least one but up to two halves, each of which is contiguous. If the current data is wholly contiguous within the circular buffer, give the user `(&buffer[start..end], None)`. If the data is split across the end of the buffer, give the user `(&buffer[start..capacity], Some(&buffer[0..end]))`. This doesn't fully solve the problem, but it avoids the user having to reserve a buffer per-socket for reading data; it can instead be done per-call if required.
This is by design. The only recv guarantee is "if there are bytes in the buffer, it returns some", not necessarily all. This is how IO works; std::io::Read works the same way, FWIW.
You're supposed to pop the 9 bytes, keep them around, request more bytes, append them to the 9 you got, and then process the frame once you have a full one.
Even if we changed smoltcp to actually guarantee "it can return all data in a single recv", you still need to handle half-frames in your code because a half frame might actually arrive on the wire due to TCP packet segmentation.
If you use a higher-level wrapper such as embassy-net, its TcpSockets implement the embedded-io Read trait, which has read_exact; that will do the "assemble received bytes until you've got a full frame" for you.