Choose square nav_shape if detected num_frames is square (web-only) #1338

Merged
merged 4 commits into LiberTEM:master on Oct 19, 2022

Conversation

@matbryan52 (Member) commented Oct 18, 2022

Fixes #1309

This required updating the tests for the MIB and MRC detect_params. It is quite difficult to test this for the other affected datasets, as we don't necessarily have data recorded on a square nav grid (short of mocking the data).
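For illustration, here is a minimal sketch of the kind of square-detection logic this change introduces. The helper name `suggest_nav_shape` is hypothetical and is not the actual function added to src/libertem/common/math.py:

```python
import math


def suggest_nav_shape(num_frames: int) -> tuple:
    """
    Hypothetical helper: if the detected number of frames is a perfect
    square, suggest a square 2D nav_shape, otherwise fall back to a
    flat 1D shape.
    """
    root = math.isqrt(num_frames)
    if root * root == num_frames:
        return (root, root)
    return (num_frames,)


# e.g. a 16384-frame acquisition maps to a 128x128 scan grid
assert suggest_nav_shape(16384) == (128, 128)
assert suggest_nav_shape(1000) == (1000,)
```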

Contributor Checklist:

Reviewer Checklist:

  • /azp run libertem.libertem-data passed
  • No import of GPL code from MIT code

@sk1p (Member) commented Oct 18, 2022

/azp run libertem.libertem-data

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@sk1p (Member) commented Oct 18, 2022

The test failure could be related to our quite low cleanup_timeout of 0.5 s - I think if enough tasks of FailEventuallyUDF choose to sleep, the timeout could be hit. As we have 32 partitions in the test case and each task can sleep for at most 0.1 seconds, the total run time is bounded by something like 3.2 s.

Increasing this value should only influence the run time of tests that handle exceptions, or where the workers for some reason take a long time to die on close(). Could you try a value of 5 s and rerun the tests?
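For context, a sketch of what a UDF along the lines of the test's FailEventuallyUDF might look like; the actual implementation in the test suite may differ, and the sleep duration and failure probability here are assumptions:

```python
import random
import time

from libertem.udf import UDF


class FailEventuallyUDF(UDF):
    """
    Sketch: each task sleeps for a random time (up to 0.1 s) and some
    tasks raise, exercising the executor's exception/cleanup path.
    """
    def get_result_buffers(self):
        # a single dummy result buffer, just so the UDF is well-formed
        return {'stub': self.buffer(kind='single', dtype='float32')}

    def process_partition(self, partition):
        time.sleep(random.uniform(0, 0.1))
        if random.random() < 0.1:
            raise RuntimeError("failing eventually, as the name suggests")
```

With 32 partitions and up to 0.1 s of sleep per task, the worst case is on the order of 32 × 0.1 s ≈ 3.2 s of sleeping in total, which is well above a 0.5 s cleanup_timeout.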

@matbryan52 (Member, Author)

/azp run libertem.libertem-data

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@codecov (bot) commented Oct 19, 2022

Codecov Report

Base: 67.70% // Head: 67.72% // Increases project coverage by +0.01% 🎉

Coverage data is based on head (9a3e1f9) compared to base (332d731).
Patch coverage: 94.73% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1338      +/-   ##
==========================================
+ Coverage   67.70%   67.72%   +0.01%     
==========================================
  Files         299      299              
  Lines       16458    16468      +10     
  Branches     2822     2825       +3     
==========================================
+ Hits        11143    11153      +10     
  Misses       4875     4875              
  Partials      440      440              
Impacted Files                        Coverage Δ
src/libertem/io/dataset/k2is.py        77.05% <50.00%> (ø)
src/libertem/common/math.py           100.00% <100.00%> (ø)
src/libertem/io/dataset/mib.py         82.20% <100.00%> (ø)
src/libertem/io/dataset/mrc.py         88.23% <100.00%> (ø)
src/libertem/io/dataset/seq.py         83.44% <100.00%> (ø)
src/libertem/io/dataset/tvips.py       78.38% <100.00%> (ø)


@matbryan52 (Member, Author)

The test failure could be related to our quite low cleanup_timeout of 0.5 s - I think if enough tasks of FailEventuallyUDF choose to sleep, the timeout could be hit. As we have 32 partitions in the test case and each task can sleep for at most 0.1 seconds, the total run time is bounded by something like 3.2 s.

Increasing this value should only influence the run time of tests that handle exceptions, or where the workers for some reason take a long time to die on close(). Could you try a value of 5 s and rerun the tests?

Seems to have done the trick, though I guess this is an intermittent failure rather than a consistent one. Might be worth an issue? (I don't 100% understand the effect of cleanup_timeout, so I'd prefer if you opened it!)
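For reference, a minimal sketch of what bumping the timeout could look like, assuming cleanup_timeout is accepted by the pipelined executor's constructor (the actual test may set it differently, e.g. through a fixture):

```python
from libertem.executor.pipelined import PipelinedExecutor

# Assumption: cleanup_timeout is a constructor parameter; 5 s leaves
# plenty of headroom over the ~3.2 s worst case discussed above.
executor = PipelinedExecutor(cleanup_timeout=5.0)
```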

@sk1p (Member) commented Oct 19, 2022

Seems to have done the trick, though I guess this is an intermittent failure rather than a consistent one.

Great!

Might be worth an issue? (I don't 100% understand the effect of cleanup_timeout, so I'd prefer if you opened it!)

I think this is mostly a matter of making the timeout work with the worst-case scenario of our test cases, which this change should take care of. In general, the pipelined executor does have some weird behaviour regarding cancellation, but this is most likely a problem only in the offline processing case, where many tasks can be queued up for each worker, meaning cancellation only happens once the in-flight tasks are cleaned out of the queues. In the live case, the queues should be short enough not to cause issues on cancellation. I think we have to gain some more practical experience - if we do get timeout issues in practical usage too, we may want to adjust the default timeout.
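As a generic illustration of why long per-worker queues slow down cancellation (not LiberTEM code, just a sketch of the draining behaviour described above):

```python
import queue
import time


def drain_until(task_queue: queue.Queue, deadline: float) -> bool:
    """
    Discard queued tasks until the queue is empty or the deadline
    passes; returns True if fully drained in time. With many queued
    tasks (the offline case) draining dominates the cancellation time;
    with short queues (the live case) it returns almost immediately.
    """
    while time.monotonic() < deadline:
        try:
            task_queue.get_nowait()
        except queue.Empty:
            return True
    return task_queue.empty()
```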

@sk1p sk1p merged commit d6bee0b into LiberTEM:master Oct 19, 2022
@sk1p sk1p added this to the 0.11 milestone Oct 24, 2022
Linked issue #1309 (may be closed by this PR): Guess a square nav_shape if a square number of frames detected in the web interface