VMA Fixes #17

Merged: 9 commits into fox-it:main, May 1, 2023
Conversation

JSCU-CNI (Contributor)

This PR fixes two VMA-related bugs:

  • vma-extract performed the seek to the start of an extent's data in the wrong place: inside the loop over the blocks instead of before it (see the sketch after this list).
  • A logic bug: a single extent can contain blocks belonging to multiple devices. These blocks need to be handled in the loop inside _iter_clusters as well, so that the correct block_offset is returned.
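
For illustration, a minimal sketch of the first fix. The names here (`fh`, `extent`, `data_offset`, `block_count`, `BLOCK_SIZE`) are stand-ins, not the actual identifiers in dissect.hypervisor:

```python
BLOCK_SIZE = 4096  # assumed block size, for illustration only

def read_extent_blocks(fh, extent):
    """Yield the raw blocks stored in a single extent."""
    # Fix: seek to the start of the extent's data once, before the loop.
    # The bug placed this seek inside the loop, so every iteration reset
    # the file position back to the start of the extent's data.
    fh.seek(extent.data_offset)
    for _ in range(extent.block_count):
        yield fh.read(BLOCK_SIZE)
```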

@Schamper (Member) left a comment


Also a small suggestion (I can't put a comment there): can you change the @lru_cache(65536) at line 94 to self._extent = lru_cache(65536)(self._extent) at the bottom of __init__?
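
For reference, a minimal sketch of the suggested pattern. The class below is a stand-in, not the actual VMA class in dissect/hypervisor/backup/vma.py:

```python
from functools import lru_cache

class VMA:
    def __init__(self, fh):
        self.fh = fh
        # Instead of decorating _extent with @lru_cache at class level,
        # wrap the bound method per instance. The cache is then owned by
        # this object and is freed when the object is garbage collected.
        self._extent = lru_cache(65536)(self._extent)

    def _extent(self, offset):
        # Placeholder body; the real method resolves the extent at `offset`.
        return offset
```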

Review thread on dissect/hypervisor/backup/vma.py (outdated, resolved)
@JSCU-CNI (Contributor, Author)

> Also a small suggestion (I can't put a comment there): can you change the @lru_cache(65536) at line 94 to self._extent = lru_cache(65536)(self._extent) at the bottom of __init__?

Done! Could you please explain your reasons for asking this? We don't see how this benefits the code, but we suspect you've got good reasons for doing so 😄

Review thread on dissect/hypervisor/backup/vma.py (outdated, resolved)
@Schamper (Member) commented Apr 26, 2023

> Also a small suggestion (I can't put a comment there): can you change the @lru_cache(65536) at line 94 to self._extent = lru_cache(65536)(self._extent) at the bottom of __init__?

> Done! Could you please explain your reasons for asking this? We don't see how this benefits the code, but we suspect you've got good reasons for doing so 😄

https://rednafi.github.io/python/lru_cache_on_methods/

TL;DR: @lru_cache on instance methods is bad: it can cause an unintended memory leak, because self is part of the cache key and is therefore kept alive by the cache. New code should use the suggested pattern, and old code should be fixed as we go.
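
A small demonstration of the leak (a generic sketch, not dissect code): the class-level cache on the decorated function holds `self` in its keys, so the instance outlives every reference you hold:

```python
import weakref
from functools import lru_cache

class Leaky:
    @lru_cache(maxsize=None)
    def compute(self, x):
        return x * 2

obj = Leaky()
obj.compute(1)            # the class-level cache now references `obj`
ref = weakref.ref(obj)
del obj                   # our only reference is gone...
print(ref() is not None)  # True: the cache still keeps the instance alive
```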

codecov bot commented May 1, 2023

Codecov Report

Merging #17 (93178b6) into main (072e025) will increase coverage by 0.02%.
The diff coverage is 72.72%.

@@            Coverage Diff             @@
##             main      #17      +/-   ##
==========================================
+ Coverage   60.96%   60.99%   +0.02%     
==========================================
  Files          29       29              
  Lines        2375     2374       -1     
==========================================
  Hits         1448     1448              
+ Misses        927      926       -1     
Flag        Coverage Δ
unittests   60.99% <72.72%> (+0.02%) ⬆️

Flags with carried-forward coverage won't be shown.

Impacted Files                      Coverage Δ
dissect/hypervisor/tools/vma.py     0.00% <0.00%> (ø)
dissect/hypervisor/backup/vma.py    81.72% <88.88%> (ø)


Review thread on dissect/hypervisor/backup/vma.py (outdated, resolved)
Schamper merged commit 223d3ae into fox-it:main on May 1, 2023