
Remove std::memcpy while processing data in compressor task #5302

Merged · 1 commit merged into AliceO2Group:dev on Jan 27, 2021

Conversation

preghenella
Collaborator

This PR removes an unnecessary std::memcpy while running the TOF compressor device.
The buffer for the message is created beforehand, and its used size is adjusted to match the output forwarded downstream.

@preghenella
Collaborator Author

Ciao @noferini,
if happy, please approve, thanks.

@sawenzel sawenzel merged commit e6402d1 into AliceO2Group:dev Jan 27, 2021
auto headerIn = DataRefUtils::getHeader<o2::header::DataHeader*>(ref);
auto dataProcessingHeaderIn = DataRefUtils::getHeader<o2::framework::DataProcessingHeader*>(ref);
auto payloadIn = ref.payload;
auto payloadInSize = headerIn->payloadSize;

/** prepare **/
auto bufferSize = mOutputBufferSize > 0 ? mOutputBufferSize : payloadInSize;
Collaborator

@preghenella in which situation would you provide the buffer size explicitly, rather than relying on the automatic one?
Isn't this prone to providing too small a size?

Collaborator Author

@shahor02
Normal operation in principle works better with an explicit buffer size (via a command-line argument) in case the automatic size is found to be too small.
It does indeed come with the risk of providing too small a size, but we will check carefully (and ERRORS will be issued if it is too small).

Collaborator Author

Btw, let me add that this PR is part of an incremental update.
Eventually we want to try to avoid creating this many messages (one for each input part) and possibly lower the number of shm allocations.
This will mean that the automatic buffer allocation will no longer be usable (we do not know beforehand how large the multi-part messages are) and we must rely on externally setting a buffer which is large enough to accommodate the output data.

Collaborator

@preghenella thanks for the explanations. Isn't the output size guaranteed to be smaller than the input, in which case the input payload size would be a safe bet? As for merging multiple input parts into one output: since all input headers are available, one can estimate the output size upper limit in the same way as for a single part.

Collaborator Author

Yes, the automatic buffer size is indeed a safe bet (currently it is the default setting, in fact).
Re. the multi-part case, you are probably right, and an automatic buffer can be computed in that case too.

Maybe in the future we can simply remove the possibility of defining the buffer size from the command line.
But for the time being, given that we have no freedom to update code on short notice during tests on the FLP, we keep it so that we have an adaptable solution.

Collaborator

Sure, I just wanted to understand whether it is really necessary.
