
Increase background stack size to fix overflows #8551

Merged (1 commit) on Nov 8, 2018
Conversation

c1728p9 (Contributor) commented Oct 25, 2018

Description

On platforms using both tickless and the low power ticker wrapper, so much of the background stack is used that it overflows. To ensure the background thread's stack doesn't overflow in any configuration, increase its size by 256 bytes. Worst-case usage on the NUCLEO_F476RG was recorded at 656 bytes with tickless turned on, so the increased size should safely prevent overflows.
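For context on where a figure like the 656-byte worst case could come from, the sketch below shows one way to record a thread's high-water-mark stack usage with the mbed OS 5 rtos::Thread API. It is illustrative only: it measures an ordinary user thread rather than the RTOS-internal background thread, and the PR does not state how its numbers were actually gathered.

```cpp
// Illustrative sketch only: recording a thread's high-water-mark stack usage
// with the mbed OS 5 rtos::Thread API. The worker thread and the values here
// are made up for the example; the PR's 656-byte figure refers to the
// RTOS-internal background thread on a NUCLEO_F476RG.
#include "mbed.h"

static void worker()
{
    // Touch some stack so the watermark moves, then park the thread so the
    // measurement can still be taken (max_stack() needs a live thread).
    volatile char scratch[200];
    for (size_t i = 0; i < sizeof(scratch); i++) {
        scratch[i] = (char)i;
    }
    while (true) {
        wait_ms(1000);
    }
}

int main()
{
    rtos::Thread t(osPriorityNormal, 1024 /* stack size in bytes */);
    t.start(callback(worker));

    wait_ms(500); // give the worker time to reach its deepest call path

    // max_stack() reports the maximum number of stack bytes used so far.
    printf("worst-case stack usage: %lu of %lu bytes\r\n",
           (unsigned long)t.max_stack(),
           (unsigned long)t.stack_size());
}
```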

Pull request type

[x] Fix
[ ] Refactor
[ ] Target update
[ ] Functionality change
[ ] Docs update
[ ] Test update
[ ] Breaking change

c1728p9 (Contributor, Author) commented Oct 25, 2018

This is one possible fix for #8184

jeromecoutant (Collaborator) commented

Is there no condition on TICKLESS?

c1728p9 (Contributor, Author) commented Oct 26, 2018

Right now this patch bumps the background stack size regardless of whether tickless is on. If this is too much overhead, I could make two configs: one with tickless and one without.

deepikabhavnani commented

> Right now this patch bumps the background stack size regardless of whether tickless is on. If this is too much overhead, I could make two configs: one with tickless and one without.

We have a few devices with limited RAM, so I am 90% sure we will see CI failures for those specific targets, although a few tests are skipped when enough memory is not available. It would be good to have this configurable.

c1728p9 (Contributor, Author) commented Oct 26, 2018

I updated this PR to only increase the background stack size when tickless is enabled.

0xc0170 (Contributor) commented Nov 6, 2018

/morph build

mbed-ci commented Nov 6, 2018

Build : FAILURE

Build number : 3554
Build artifacts/logs : http://mbed-os.s3-website-eu-west-1.amazonaws.com/?prefix=builds/8551/

0xc0170 (Contributor) commented Nov 6, 2018

Please review the build failures. A few small targets fail for IAR (placement errors).

Updated commit message:

On platforms using both tickless and the low power ticker wrapper, so much of the background stack is used that it overflows. To ensure the background thread's stack doesn't overflow, increase its size by 256 bytes when tickless is enabled. Worst-case usage on the NUCLEO_F476RG was recorded at 656 bytes when tickless is turned on, so the increased size should safely prevent overflows.
c1728p9 (Contributor, Author) commented Nov 6, 2018

I updated this PR so the background thread stack is only increased if both MBED_TICKLESS and LPTICKER_DELAY_TICKS are used. This should allow the small devices without enough RAM to build, as these are not using LPTICKER_DELAY_TICKS.
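A minimal sketch of what that guard could look like is below. The macro name BACKGROUND_STACK_SIZE and the 512-byte baseline are assumptions for illustration only; MBED_TICKLESS and LPTICKER_DELAY_TICKS are the configuration symbols named in this PR, and the actual mbed-os change may use different names, values, and files.

```cpp
// Illustrative sketch only, not the actual mbed-os patch. BACKGROUND_STACK_SIZE
// and the 512-byte baseline are hypothetical; MBED_TICKLESS and
// LPTICKER_DELAY_TICKS are the configuration symbols discussed in the PR.
#if defined(MBED_TICKLESS) && defined(LPTICKER_DELAY_TICKS) && (LPTICKER_DELAY_TICKS > 0)
// Tickless plus the low power ticker wrapper: ~656 bytes of worst-case usage
// was observed, so add 256 bytes of headroom over the baseline.
#define BACKGROUND_STACK_SIZE (512 + 256)
#else
// Baseline size when the extra usage cannot occur, keeping small-RAM targets
// (which do not use LPTICKER_DELAY_TICKS) unaffected.
#define BACKGROUND_STACK_SIZE 512
#endif
```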

cmonr (Contributor) commented Nov 7, 2018

/morph build

mbed-ci commented Nov 7, 2018

Build : SUCCESS

Build number : 3565
Build artifacts/logs : http://mbed-os.s3-website-eu-west-1.amazonaws.com/?prefix=builds/8551/

Triggering tests

/morph test
/morph export-build
/morph mbed2-build

mbed-ci commented Nov 7, 2018
0xc0170 (Contributor) commented Nov 7, 2018

/morph export-build

mbed-ci commented Nov 7, 2018
cmonr (Contributor) commented Nov 8, 2018

Note: This PR is now part of a rollup PR (#8675).

Jenkins CI export nodes experienced many drops throughout the day causing false failures. In an attempt to get those PRs through CI, while keeping CI load low, several PRs have been bundled into a single rollup PR.

If any more commits are made in this PR, this PR will remain open and have to go through CI on its own.
