
LoRaWAN: FRMPayload size validity #7459

Merged
2 commits merged into ARMmbed:master from hasnainvirk:issue_7232_7432 on Jul 13, 2018

6 participants
@hasnainvirk
Contributor

hasnainvirk commented Jul 10, 2018

Description

It was pointed out in #7432 and #7232 that the stack was comparing the frame payload size
with the allowed payload size incorrectly in schedule_tx().
We must strip the overhead from the frame before the comparison.

We already had a similar check in the prepare_ongoing_tx() API which would correctly analyse
the situation, but a check was needed in schedule_tx() as well. The reason is that the
schedule_tx() API can be called automatically by the stack if a user-initiated request
could not be served immediately because of duty-cycle restrictions. The datarate can also
change at this point (for CONFIRMED messages, if the ack was not received and the retries
maxed out). That's why a validity test was needed here.

We strip the frame overhead off the payload length to get the correct FRMPayload size and
can then perform a meaningful comparison.

To keep the frame overhead consistent, we have opted to always include the Port field in the
frame.
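The check described above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual mbed-os code: the helper name and the overhead value are assumptions, with LORA_MAC_FRMPAYLOAD_OVERHEAD taken to cover MHDR (1 B) + FHDR (7 B) + FPort (1 B) + MIC (4 B) = 13 bytes for a frame with an empty FOpts field.

```cpp
#include <cstdint>

// Illustrative overhead value: MHDR (1) + FHDR (7) + FPort (1) + MIC (4).
// The real constant lives in the mbed-os LoRaWAN stack; 13 is an assumption
// for a frame with no FOpts bytes.
constexpr uint16_t LORA_MAC_FRMPAYLOAD_OVERHEAD = 13;

// Hypothetical helper: returns true when the FRMPayload (the TX buffer
// length minus the frame overhead) fits the maximum payload size allowed
// at the current datarate, which is what schedule_tx() must now verify.
bool frm_payload_fits(uint16_t tx_buffer_len, uint16_t max_possible_payload_size)
{
    uint16_t frm_payload_size = tx_buffer_len - LORA_MAC_FRMPAYLOAD_OVERHEAD;
    return frm_payload_size <= max_possible_payload_size;
}
```

Comparing tx_buffer_len directly against the allowed payload size, as the old code effectively did, would wrongly reject frames that actually fit once those overhead bytes are accounted for.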

Pull request type

[X] Fix
[ ] Refactor
[ ] New target
[ ] Feature
[ ] Breaking change
@hasnainvirk


Contributor

hasnainvirk commented Jul 10, 2018

@0xc0170 0xc0170 requested a review from ARMmbed/mbed-os-wan Jul 10, 2018

@hasnainvirk


Contributor

hasnainvirk commented Jul 10, 2018

@0xc0170 The build failed for some reason. Could you restart it?

@cmonr cmonr requested review from 0xc0170 and kjbracey-arm Jul 10, 2018

if (_ongoing_tx_msg.type == MCPS_PROPRIETARY) {
    frm_payload_size = _params.tx_buffer_len;
} else {
    frm_payload_size = _params.tx_buffer_len - LORA_MAC_FRMPAYLOAD_OVERHEAD;

@kjbracey-arm

kjbracey-arm Jul 11, 2018

Contributor

The overhead is a fudge factor for get_max_payload() having it pre-subtracted, so it always needs to be applied, even for proprietary messages.
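The reviewer's point can be illustrated with a small sketch (the names and the overhead value are assumptions, not the actual stack identifiers): because the limits returned by get_max_payload() already have the frame overhead pre-subtracted, the buffer length must have the same overhead stripped for every message type, with no special case for MCPS_PROPRIETARY.

```cpp
#include <cstdint>

constexpr uint16_t LORA_MAC_FRMPAYLOAD_OVERHEAD = 13; // assumed value

// Sketch of the suggestion: strip the overhead unconditionally, since the
// maximum-payload values are already quoted net of it, so both sides of
// the size comparison are in the same units.
uint16_t frm_payload_size(uint16_t tx_buffer_len)
{
    // No branch on MCPS_PROPRIETARY here.
    return tx_buffer_len - LORA_MAC_FRMPAYLOAD_OVERHEAD;
}
```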

@hasnainvirk hasnainvirk force-pushed the hasnainvirk:issue_7232_7432 branch from 9af7bc1 to df6a38a Jul 12, 2018

@hasnainvirk


Contributor

hasnainvirk commented Jul 12, 2018

@kjbracey-arm Please review again.

@hasnainvirk hasnainvirk changed the title from FRMPayload size validity to LoRaWAN: FRMPayload size validity Jul 12, 2018

@hasnainvirk hasnainvirk force-pushed the hasnainvirk:issue_7232_7432 branch 2 times, most recently from e87e73c to b2bdf76 Jul 12, 2018

hasnainvirk added some commits Jul 10, 2018

FRMPayload size validity
It was pointed out in #7432 and #7232 that the stack was comparing the frame payload size
with the allowed payload size incorrectly in schedule_tx().
We must strip the overhead from the frame before the comparison.

We already had a similar check in the prepare_ongoing_tx() API which would correctly analyse
the situation, but a check was needed in schedule_tx() as well. The reason is that the
schedule_tx() API can be called automatically by the stack if a user-initiated request
could not be served immediately because of duty-cycle restrictions. The datarate can also
change at this point (for CONFIRMED messages, if the ack was not received and the retries
maxed out). That's why a validity test was needed here.

We now perform the comparison using the _ongoing_tx_msg structure, which contains the actual
FRMPayload size.

For proprietary messages only the MHDR and Port fields are used, so we shouldn't add MAC
commands and other overhead to them.

To keep the frame overhead consistent, we have opted to always include the Port field in the
frame.
Correcting unit for timeout
The timeout unit should be milliseconds, not microseconds.
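The per-type overhead described in the commit message could look roughly like this. The field sizes are the standard LoRaWAN 1.0.x ones; the function and constant names are illustrative, not the actual stack identifiers.

```cpp
#include <cstdint>

// Illustrative LoRaWAN 1.0.x field sizes.
constexpr uint16_t MHDR_LEN  = 1; // MAC header
constexpr uint16_t FHDR_LEN  = 7; // DevAddr (4) + FCtrl (1) + FCnt (2)
constexpr uint16_t FPORT_LEN = 1; // Port field, now always included
constexpr uint16_t MIC_LEN   = 4; // message integrity code

// Per the commit text: proprietary frames use only the MHDR and Port
// fields, while standard frames also carry the full FHDR and the MIC.
uint16_t frame_overhead(bool proprietary)
{
    if (proprietary) {
        return MHDR_LEN + FPORT_LEN;
    }
    return MHDR_LEN + FHDR_LEN + FPORT_LEN + MIC_LEN;
}
```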

@hasnainvirk hasnainvirk force-pushed the hasnainvirk:issue_7232_7432 branch from b2bdf76 to ed9048f Jul 12, 2018

@hasnainvirk


Contributor

hasnainvirk commented Jul 12, 2018

@0xc0170 Need CI here.

@cmonr


Contributor

cmonr commented Jul 12, 2018

@hasnainvirk Will start shortly. We're looking into a test issue.

@cmonr cmonr added needs: CI and removed needs: review labels Jul 12, 2018

@cmonr


Contributor

cmonr commented Jul 12, 2018

/morph build

@mbed-ci


mbed-ci commented Jul 12, 2018

Build : SUCCESS

Build number : 2590
Build artifacts/logs : http://mbed-os.s3-website-eu-west-1.amazonaws.com/?prefix=builds/7459/

Triggering tests

/morph test
/morph uvisor-test
/morph export-build
/morph mbed2-build

@cmonr cmonr merged commit 4c1a89c into ARMmbed:master Jul 13, 2018

14 checks passed

AWS-CI uVisor Build & Test: Success
ci-morph-build: build completed
ci-morph-exporter: build completed
ci-morph-mbed2-build: build completed
ci-morph-test: test completed
continuous-integration/jenkins/pr-head: This commit looks good
continuous-integration/travis-ci/pr: The Travis CI build passed
travis-ci/astyle: Passed, 791 files
travis-ci/docs: Local docs testing has passed
travis-ci/events: Passed, runtime is 10158 cycles (+1298 cycles)
travis-ci/gitattributestest: Local gitattributestest testing has passed
travis-ci/licence_check: Local licence_check testing has passed
travis-ci/littlefs: Passed, code size is 9960B (+0.00%)
travis-ci/tools-py2.7: Local tools-py2.7 testing has passed
@cmonr


Contributor

cmonr commented Jul 16, 2018

Pushing this out for 5.10.

This PR's changeset is based off of this: #7430

pan- pushed a commit to pan-/mbed that referenced this pull request Aug 22, 2018
