Fix spikeglx events memmap #1790
Merged: zm711 merged 3 commits into NeuralEnsemble:master from h-mayorquin:fix_spikeglx_events_memmap on Oct 10, 2025
Should we cache this?
I think I agree with this. We were previously caching this, so if we are doing the unpacking, we should cache, right?
We don't cache any other call to data. Why would this be different?
We were not caching before by design; we were just loading the data at init, because someone outside of the team did that PR, I think.
I prefer my resources to be freed outside of the call to get_data. How are you all thinking about it?
I think that's a fair point. How expensive is unpackbits? I don't know it that well. It wouldn't be too hard to say:
if self.event_data is not None:
    event_data = self.event_data
else:
    self.event_data = event_data = unpack(...)
I guess the ultimate question is: if this is a cheap operation, then it doesn't hurt to do it in the init; it's just contrary to our strategy. If it is expensive (in time or memory), we should move it. If expensive in memory, then caching is expensive. If expensive in time, and possibly something a user would want multiple times, it would be nice to have a cache.
I'm curious to hear Sam on this one. I could be convinced not to cache it.
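The lazy-cache pattern sketched in the comment above could look like the following. This is an illustrative sketch only: `EventStream`, `_raw_memmap`, and `get_event_data` are hypothetical names, not the actual neo rawio API, and the `np.unpackbits` call stands in for whatever unpacking the reader does.

```python
import numpy as np


class EventStream:
    """Minimal sketch of unpack-on-first-request with caching.

    The constructor only keeps the (cheap) memmap reference; the
    expensive unpacking happens on the first data request and the
    result is cached for later calls.
    """

    def __init__(self, raw_memmap):
        self._raw_memmap = raw_memmap  # packed digital lines, uint8
        self.event_data = None         # filled lazily on first request

    def get_event_data(self):
        # Unpack (and cache) only on the first call, not at init.
        if self.event_data is not None:
            event_data = self.event_data
        else:
            self.event_data = event_data = np.unpackbits(
                self._raw_memmap, axis=-1, bitorder="little"
            )
        return event_data
```

The trade-off discussed in this thread is visible here: the cached array is 8x the size of the packed memmap (one uint8 per bit), and it stays allocated for the lifetime of the object.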
Why the giant allocation? Is the memory actually being used to store the events? I just don't know what events look like for this reader.
The question was more:
I think moving this out of parse_header (sorry, I said init earlier) is smart, because we just need the memmap. My further question, though, about caching concerns speed and memory allocation afterwards. I don't use events from the neo API at all, so for me it's hard to imagine use cases. But maybe someone would try to get the events multiple times for something, in which case it might be nice to have a cache if the process is relatively slow. If it's fast, then absolutely no caching.
The general question is whether we cache data requests in general. There is a trade-off between memory allocation (the allocation remains after the data request, which is bad) and making the next request faster. So this is the lens through which to discuss it, I think.
To my surprise, unpackbits turned out to be more expensive than I expected. However, it's not unusually costly compared to other operations we perform, such as demultiplexing signals (see Intan) or scaling them (see SpikeInterface's return_scaled). In those cases we don't use caching, so I think we should be consistent: either apply caching across all of them or not use it at all. Here are some performance comparisons:
I think we should merge this as it is, because it is consistent with the way the library works: there are similarly expensive uncached data requests in other places. It is OK to change this, but we should adopt a general policy and change it consistently in all those places.
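The performance comparisons mentioned above did not survive in this page capture, but a rough way to measure the cost of `np.unpackbits` yourself might look like this. The array size is an arbitrary assumption for illustration, not the PR's actual benchmark.

```python
import timeit

import numpy as np

# Hypothetical size; real SpikeGLX digital streams vary.
n_samples = 10_000_000
packed = np.random.randint(0, 256, size=n_samples, dtype=np.uint8)

# Time the unpacking that the event data request performs.
t = timeit.timeit(
    lambda: np.unpackbits(packed, bitorder="little"), number=5
) / 5
print(f"unpackbits on {n_samples} packed samples: {t * 1e3:.1f} ms per call")

# Unpacking expands each uint8 into 8 uint8 values, so the result is
# 8x the input size -- the memory side of the caching trade-off.
```

This measures only the time side of the trade-off; the 8x output allocation is what caching would keep alive between requests.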
Actually, I think that rawio should remain simple and without this type of complication. People can still cache the data on the spot if they want.
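Caching "on the spot" just means the caller holds on to the returned array instead of re-requesting it. A minimal sketch, where `Reader.get_event_data` is a hypothetical stand-in for the actual rawio call:

```python
import numpy as np


class Reader:
    """Stand-in for a rawio reader that unpacks on every request."""

    def get_event_data(self):
        # Toy payload: unpack a single packed byte (value 5).
        return np.unpackbits(np.array([5], dtype=np.uint8), bitorder="little")


reader = Reader()
event_data = reader.get_event_data()  # cache on the spot
# ... reuse `event_data` from here on instead of calling
# reader.get_event_data() again.
```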
I'm fine with this argument in general. Let's wait until tomorrow in case Sam wants to fight strongly for the other direction; if not, we merge tomorrow afternoon?
Ok.