Remove std::memcpy while processing data in compressor task #5302
Merged: sawenzel merged 1 commit into AliceO2Group:dev from preghenella:rdev-tofcompressor-nomemcpy on Jan 27, 2021
@preghenella in which situation would you explicitly provide the buffer size, rather than relying on the automatic one?
Isn't this prone to providing too small a size?
@shahor02
Normal operation in principle works better with an explicit buffer size (via command-line argument) in case the automatic size is found to be too small.
It can indeed come with the risk of providing too small a size, but we will check carefully (and errors will be issued if the buffer is too small).
Btw, let me add that this PR is part of an incremental update.
Eventually we want to try to stop creating this many messages (one for each input part) and possibly lower the number of shm allocations.
This will mean that the automatic buffer allocation will no longer be usable (we do not know in advance how large the multi-part messages are), and we must rely on externally setting a buffer large enough to accommodate the output data.
@preghenella thanks for the explanations. Isn't the output size guaranteed to be smaller than the input, in which case the input payload size would be a safe bet? As for merging multiple input parts into one output: since all input headers are available, one can estimate the output size upper limit in the same way as for a single part.
Yes, the automatic buffer size is indeed a safe bet (it is currently the default setting, in fact).
Re. the multi-part case, you are probably right, and an automatic buffer can be used there as well.
Maybe in the future we can simply remove the possibility to define the buffer size from the command line.
But for the time being, given that we have no freedom to update code quickly during tests on the FLP, we keep it such that we have an adaptable solution.
Sure, I just wanted to understand whether it is really necessary.