
samples: net: echo-client: Increase buf count for frdm-k64f #7775

Merged
merged 1 commit into zephyrproject-rtos:master from the bug-7678-frdm-ping branch on Jun 12, 2018

Conversation

@jukkar jukkar (Member) commented May 23, 2018

Increasing the network buffer count for frdm-k64f so that the
device will not run out of memory so easily.

Fixes #7678

Signed-off-by: Jukka Rissanen jukka.rissanen@linux.intel.com
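
A rough illustration of the kind of change this PR makes (the actual file and values are in the PR diff, which is not reproduced here): the option names are Zephyr's networking Kconfig symbols, while the file name and counts below are only assumed placeholders.

# Hypothetical board-specific config fragment for the echo-client
# sample on frdm-k64f (file name assumed, e.g. prj_frdm_k64f.conf).
# Larger net_pkt and net_buf pools so the board does not run out of
# network memory as easily; the counts are illustrative, not the
# values from this PR.
CONFIG_NET_PKT_RX_COUNT=30
CONFIG_NET_PKT_TX_COUNT=30
CONFIG_NET_BUF_RX_COUNT=60
CONFIG_NET_BUF_TX_COUNT=60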

@codecov-io

Codecov Report

Merging #7775 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master    #7775   +/-   ##
=======================================
  Coverage   55.01%   55.01%           
=======================================
  Files         483      483           
  Lines       53946    53946           
  Branches    10493    10493           
=======================================
  Hits        29678    29678           
  Misses      19982    19982           
  Partials     4286     4286

Continue to review full report at Codecov.

Powered by Codecov. Last update fcc56e3...a7e43e4.

@pfalcon pfalcon (Contributor) commented May 23, 2018

I'd like to challenge this with #7831. It's clearly not a good situation when some developers increase buffers while others decrease them. @jukkar, I encourage you not to work around the issues by throwing more memory at the fire. That masks the issues instead of resolving them, and even that is starting to fail, as my PR shows.

@pfalcon pfalcon (Contributor) left a comment

Actually, it seems that with #7678, #7571, and #7818 people are finally starting to put Zephyr's IP stack to some use and spot all the issues with it. As someone who has been experiencing these issues for the second year, let me make a case here and give a -1, encouraging proper analysis and debugging of the underlying issue(s) instead.

(Yeah, I'm just afraid that @nashif will merge this without looking, and we'll miss a chance to draw the core team to these issues for a few more months.)

@pfl pfl (Collaborator) commented May 24, 2018

Actually, it seems that with #7678, #7571, and #7818 people are finally starting to put Zephyr's IP stack to some use and spot all the issues with it. As someone who has been experiencing these issues for the second year, let me make a case here and give a -1, encouraging proper analysis and debugging of the underlying issue(s) instead.

(Yeah, I'm just afraid that @nashif will merge this without looking, and we'll miss a chance to draw the core team to these issues for a few more months.)

Do you have an analysis of which issues enabling more buffers is hiding? Can you suggest a fix?

@jukkar jukkar (Member, Author) commented May 24, 2018

I could not reproduce the issue described in #7678, but as the device seemed to run out of memory, the next logical step is to increase the amount of memory. I was expecting the reporter to test whether this helps his scenario.

Our buffer counts are typically quite small in various configurations, even if the device itself has plenty of memory available. I think you are mixing up out-of-memory cases and buffer bloat scenarios in the various bugs you have started commenting on.

@pfalcon pfalcon (Contributor) commented May 24, 2018

@pfl :

Do you have an analysis of which issues enabling more buffers is hiding?

I'm analyzing various aspects of how the IP stack works and sharing notes on the problem spots I see, e.g. a recent such comment: #7578 (comment). Overall, we have many different issues which interact in unpredictable ways, and that complicates both "static" analysis and trying to reproduce (or even isolate from the others) a particular issue in practice.

Can you suggest a fix?

Unfortunately, not so far. That's why I think it's important that other developers join forces in trying to reproduce and debug such issues, instead of working around them by adding more buffers.

@pfalcon pfalcon (Contributor) commented May 24, 2018

@jukkar:

I could not reproduce the issue described in #7678, but as the device seemed to run out of memory, the next logical step is to increase the amount of memory. I was expecting the reporter to test whether this helps his scenario.

I added steps to reproduce the #3132 issue (which appears to be just the generic variant of #7678): #3132 (comment). It's just a single extra step after reproducing #7831, so I hope you can give it a try.

Our buffer counts are typically quite small in various configurations, even if the device itself has plenty of memory available.

I don't see a problem here. We should target the IP stack working reliably with minimal memory requirements. Devices which have more memory can simply use it to benefit the particular applications targeted at them (for example, they could handle more concurrent connections or have higher throughput). But the baseline should be the stack working within the minimum memory requirements, and for many sample-style applications that's 1 receive and 1 transmit packet, coupled with enough fragment space to hold the data for those packets. So, we should not increase pkt/netbuf counts but decrease them, then figure out why it doesn't work as expected, then try to fix it (either by documenting the true minimums, or better by making it work with the bare minimums, or by explicitly erroring out if that's not possible).
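
A minimal sketch of that baseline, assuming Zephyr's networking Kconfig symbols; the counts below merely restate the "1 receive + 1 transmit packet plus a couple of fragments" idea and are not taken from any particular sample.

# Hypothetical bare-minimum prj.conf fragment (illustrative only).
# One RX and one TX net_pkt, each with a couple of data fragments.
CONFIG_NET_PKT_RX_COUNT=1
CONFIG_NET_PKT_TX_COUNT=1
CONFIG_NET_BUF_RX_COUNT=2
CONFIG_NET_BUF_TX_COUNT=2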

I think you are mixing up out-of-memory cases and buffer bloat scenarios in the various bugs you have started commenting on.

I'm mixing them up because there's a whole mixup of issues in the IP stack, and now more people are starting to see some of them too. Can you please try to reproduce them on your side? That's the first step to untangling the matter.

@pfalcon pfalcon dismissed their stale review May 28, 2018 15:13

Given that there were no bright outcomes with #7831, I'm withdrawing my -1 here. If it helped with jukkar's testing, let it be.

@nashif nashif merged commit 416614e into zephyrproject-rtos:master Jun 12, 2018
@jukkar jukkar deleted the bug-7678-frdm-ping branch June 21, 2018 06:44