Token change between each block2 part #16

Closed
jnohlgard opened this issue Feb 15, 2015 · 3 comments

Comments

@jnohlgard

I noticed that the token is increased for each request sent for a new block in a block2 transfer, and that the observation list contains the latest token. This causes problems with observe requests that get block2 responses from Contiki, which uses the token from the initial observe subscription request in each response instead of the token from the last block transferred. From what I can tell from the RFC, it seems that the token is expected to be the same for each block in a GET block2 sequence.
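For reference, this is roughly the client-side usage that runs into the behaviour; a minimal sketch against aiocoap's present-day client API (the URI and resource name are made up, and the API has changed somewhat since this report):

```python
import asyncio
import aiocoap

async def main():
    context = await aiocoap.Context.create_client_context()
    # Observe a resource whose representation is larger than one block,
    # so the initial response arrives as a block2 sequence.
    request = aiocoap.Message(code=aiocoap.GET,
                              uri="coap://[2001:db8::1]/large-observable",
                              observe=0)
    pr = context.request(request)
    first = await pr.response          # reassembled from the block2 parts
    print("initial state:", first.payload)
    async for notification in pr.observation:
        print("notification:", notification.payload)

asyncio.run(main())
```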

@chrysn
Owner

chrysn commented Feb 15, 2015

On Sun, Feb 15, 2015 at 03:55:53AM -0800, Joakim Gebart wrote:

From what I can tell from the rfc it seems that the token is expected
to be the same for each block in a GET block2 sequence.

the token is indeed changed with every block that is transferred. first
of all, this makes sense from the point of view of packet matching:
unless responses are piggy-backed, there would be no way to tell which
response is related to which request. (and even if only one packet is
in flight at any time, we must not rely on the transport not to deliver
packets out of sequence).

draft-ietf-core-block-16 says about this:

As a general comment on tokens, there is no other mention of tokens in
this document, as blockwise transfers handle tokens like any other
CoAP exchange. As usual the client is free to choose tokens for each
exchange as it likes.

this was also clarified in the discussion preceding this
on the core mailing list:

Each of the blocks (pieces) is its own CoAP transfer, with its own
request, response, and a token linking the two. For all the blocks in
a block-wise transfer, the token may be the same if the client chooses
so (and the conditions for re-using a token are fulfilled), they may
be all different, or a mixture of these.

with respect to that, it is my impression that aiocoap does the right
thing.

as for the combination of observe and blockwise: yes, that is so far
untested both server- and client-side. i'd assume that the "request a
further part" mechanisms would try to establish an observation on the
subsequent blocks too, which is not how blockwise transfer should be done
in an observation. that might easily confuse the server, but so far
aiocoap could not make sense of observation results for a blockwise
resource anyway.

@jnohlgard
Author

Thank you for the clarifications regarding tokens.

Regarding the observe blockwise:
The problem here is that aiocoap sends the initial subscription request using one token, let's say 1, and receives a piggybacked response from the server with the block2 More bit set. The aiocoap client then sends a follow-up request for the next block using another token, 2, to get the rest of the response. The server replies with the block2 More bit cleared. Finally, aiocoap stores 2 as the token for the subscription. After some seconds the CoAP server sends a new response using the token 1, with some new content on the resource. aiocoap sees this "invalid" token and sends an RST back, which would be correct if aiocoap had not requested this observation.

I do believe Contiki has the correct behaviour in this situation: it saves the first token it sees for a new subscription and uses it for each server-initiated response (notification).

https://tools.ietf.org/html/draft-ietf-core-observe-04#section-3.2
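A self-contained sketch of the bookkeeping described above (illustrative only; the token values and the registry do not reflect aiocoap's actual data structures):

```python
# The client's observation registry, keyed by token.
observations = {}

def register(token, resource):
    observations[token] = resource

def handle_incoming(token, payload):
    if token in observations:
        print(f"notification for {observations[token]}: {payload!r}")
    else:
        print(f"no observation under token {token}: reply with RST")

# Buggy behaviour: after the block2 follow-up (token 2), the subscription
# ends up stored under the last token used instead of the initial one.
register(2, "/large-observable")
# Contiki notifies with the token of the initial observe request (token 1),
# so the client rejects a notification it actually asked for.
handle_incoming(1, b"new content")      # -> RST

# Expected behaviour: keep the subscription keyed by the initial token.
observations.clear()
register(1, "/large-observable")
handle_incoming(1, b"new content")      # -> delivered
```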

chrysn added a commit that referenced this issue Nov 17, 2016
This does not make observation of multi-block resources work correctly
(new observed values have m=1 set and thus ignore the client's request
to see full blocks); fixing that is for later.

Thanks to gebart for spotting this.

Contributes-To: #16
@chrysn
Owner

chrysn commented Jun 26, 2020

This has long been fixed and, with other server-observe bugs fixed, can now be closed.

@chrysn chrysn closed this as completed Jun 26, 2020
chrysn added a commit that referenced this issue Jun 26, 2020
Outgoing notifications used to be sent as jumbograms; now they properly
update the block fragment cache and send only the requested size.

Spotted when checking <#16>.