
Chunked model application #2133

Merged — 6 commits into master, Nov 25, 2022

Conversation

maxnoe (Member) commented Nov 23, 2022

No description provided.

codecov bot commented Nov 24, 2022

Codecov Report

Base: 92.74% // Head: 92.74% // Decreases project coverage by 0.01% ⚠️

Coverage data is based on head (b427dbd) compared to base (8004264).
Patch coverage: 95.00% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2133      +/-   ##
==========================================
- Coverage   92.74%   92.74%   -0.01%     
==========================================
  Files         214      214              
  Lines       17914    17941      +27     
==========================================
+ Hits        16615    16639      +24     
- Misses       1299     1302       +3     
Impacted Files Coverage Δ
ctapipe/io/metadata.py 96.58% <ø> (ø)
ctapipe/tools/tests/test_process_ml.py 100.00% <ø> (ø)
ctapipe/tools/apply_models.py 93.10% <87.50%> (-3.15%) ⬇️
ctapipe/io/tableloader.py 97.58% <96.42%> (+0.15%) ⬆️
ctapipe/conftest.py 94.58% <100.00%> (+0.31%) ⬆️
ctapipe/io/tests/test_table_loader.py 99.47% <100.00%> (+0.02%) ⬆️
ctapipe/tools/tests/test_apply_models.py 100.00% <100.00%> (ø)
ctapipe/tools/tests/test_train.py 100.00% <100.00%> (ø)
ctapipe/visualization/mpl_array.py 99.24% <0.00%> (-0.01%) ⬇️

☔ View full report at Codecov.

Two review threads on ctapipe/io/tableloader.py (resolved)
@maxnoe maxnoe marked this pull request as draft November 24, 2022 15:31
maxnoe (Member, Author) commented Nov 24, 2022

This is successful in that it reduced memory consumption a lot, but it's also a lot slower and I think it doesn't make sense to merge until we remove the bottlenecks from the table loader for that. I will try to incorporate this.

@maxnoe maxnoe marked this pull request as ready for review November 25, 2022 11:40
maxnoe (Member, Author) commented Nov 25, 2022

Ok, the main problem was that naively adding the telescope trigger information was very slow, since it required loading the whole telescope trigger table for each chunk.

Now, application is much faster (even a lot faster than non-chunked, since joining N chunks is faster than joining the whole table).
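The per-chunk join idea above can be illustrated with a small, self-contained sketch. This is plain NumPy, not the actual ctapipe TableLoader API; `apply_model`, the table layout, and the column names are all hypothetical stand-ins for the real chunked application:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trigger table": one row per event, sorted by event_id.
n_events = 1000
trigger_event_id = np.arange(n_events)
trigger_time = rng.uniform(0, 1, n_events)

def apply_model(features):
    """Stand-in for applying a trained model to a chunk of features."""
    return features * 2.0

def process_chunked(event_ids, features, chunk_size=100):
    """Join trigger info and apply the model one chunk at a time,
    so peak memory scales with the chunk size, not the full table."""
    results = []
    for start in range(0, len(event_ids), chunk_size):
        ids = event_ids[start:start + chunk_size]
        # Look up only the matching trigger rows for this chunk
        # (binary search works because the table is sorted by event_id).
        idx = np.searchsorted(trigger_event_id, ids)
        chunk_trigger_time = trigger_time[idx]
        predictions = apply_model(features[start:start + chunk_size])
        results.append((ids, chunk_trigger_time, predictions))
    return results

event_ids = np.arange(n_events)
features = rng.normal(size=n_events)
chunks = process_chunked(event_ids, features, chunk_size=250)
print(len(chunks))  # → 4 chunks of 250 events each
```

Joining each small chunk against an indexed lookup is also why N small joins can beat one join of the full table: each join touches only the rows it needs.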

kosack (Contributor) commented Nov 25, 2022

Looks good! The effect on memory usage is clear. I tried some extreme cases here, basically using 100k events (blue) vs 1k events (black):

[Figure: memory usage over time for chunk sizes of 100k events (blue) vs 1k events (black)]

Of course the optimum for speed is clearly at the higher end, but it's nice that we can control the memory usage so well.

@maxnoe maxnoe merged commit fe88479 into master Nov 25, 2022
@maxnoe maxnoe deleted the chunked_model_application branch November 25, 2022 15:49
3 participants