
Reworking of the multi_output_file_test integtest #408

Merged

wesketchum merged 2 commits into prep-release/fddaq-v5.3.1 from kbiery/candidate_MOFT_integtest_changes on Apr 25, 2025
Conversation

@bieryAtFnal
Collaborator

When Andrew and I talked about the possibility of creating a regression test to verify that HDF5 compression is working as intended, I thought of the dfmodules/integtest/multi_output_file_test.py test as a reasonable place to start.

However, this test had not been fully upgraded to work in v5, so I looked into that.

While doing that, I became convinced that this test was trying to do too much. It was testing scenarios with and without TPG, and it had scenarios in which only one file per DataWriter was written. I'm not sure what I was thinking when I created the one-file-from-each-of-two-DataWriters scenario.

In response to that, I've reworked this test so that it focuses on just one thing: the closing of one file and the opening of another. It does this for both raw and TPStream files, and it is set up to create 3 files per run so that we see two file-close/file-open operations.

In the new model, two runs are taken. The first is intended to produce 3 raw data files and 3 TPStream files. The max_file_size parameter and the run duration are tuned so that the size of the third file is just below the threshold for creating another file. In this way, if something goes haywire in the system, more than three files might be created, and we would notice that something was different/wrong.

The second run is also tuned to produce 3 raw and 3 TPStream files. In this scenario, the third file is expected to be relatively small in size. In that way, if some change in the system caused the data size to be smaller than before, or the file-rollover logic started waiting too long to close files, we would see fewer than 3 files, and notice that something had changed.
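The file-count expectation described above could be checked with logic along these lines. This is a hypothetical sketch, not the actual integtest code from the PR; the data directory layout, the file-name pattern, and the `raw`/`tpstream` substrings used to distinguish file kinds are all assumptions for illustration:

```python
import glob
import os

def count_run_files(data_dir, run_number, file_kind):
    """Count the HDF5 output files that a given run produced.

    file_kind is an assumed substring ("raw" or "tpstream") that
    distinguishes the two kinds of output file in the file names.
    """
    pattern = os.path.join(data_dir, f"*{file_kind}*run{run_number:06d}*.hdf5")
    return len(glob.glob(pattern))

def check_expected_file_counts(data_dir, run_number, expected=3):
    """Assert that a run produced exactly the expected number of files.

    More files than expected would suggest the data volume grew or the
    rollover threshold was crossed early; fewer would suggest smaller
    data or a rollover that waited too long to close files.
    """
    assert count_run_files(data_dir, run_number, "raw") == expected
    assert count_run_files(data_dir, run_number, "tpstream") == expected
```

A check like this would be run once per run number, so both the "third file just below the threshold" run and the "small third file" run are covered by the same assertion.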

I'd like to first get feedback on this rather substantial change, and if this seems OK, then we can talk about adding a scenario to this test in which we enable HDF5 compression.

I'm targeting this change to the prep-release/fddaq-v5.3.1 branch in the hopes that it might be considered for that release. If we decide not to do that, however, we can easily change the target branch back to develop.

bieryAtFnal and others added 2 commits April 15, 2025 02:59
…testing that one functionality, instead of trying to do too much.
@bieryAtFnal bieryAtFnal requested a review from eflumerf April 15, 2025 20:42
@bieryAtFnal
Collaborator Author

I should have mentioned that the other existing integtests in the dfmodules repo have not yet been updated to run in v5, so the safest way to run this test is to

cd $DBT_AREA_ROOT/sourcecode/dfmodules/integtest
pytest -s multi_output_file_test.py

Member

@eflumerf eflumerf left a comment


Looks good, and it runs. Did we want to open an Issue to create additional tests covering some of the other test cases (e.g. multiple DataWriter modules in one app)?

@wesketchum wesketchum merged commit e5642b2 into prep-release/fddaq-v5.3.1 Apr 25, 2025
1 check passed
@wesketchum wesketchum deleted the kbiery/candidate_MOFT_integtest_changes branch April 25, 2025 09:47