
Re-introduced AEDAT4 and restructured file reading #59

Merged: 15 commits from feature-aedat4 into main, Mar 27, 2023

Conversation

@Jegp (Contributor) commented Mar 20, 2023

I reworked file reading, so we now have a common FileBase base class that can .read_events(n) and .stream(n), where n is a specific number of events or -1 for everything in a file. This both dramatically simplifies the C++ code and lets us plug in new formats much more easily.
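
The idea can be pictured roughly like this (a hypothetical sketch; the actual names and signatures in the repository may differ, and a toy in-memory "format" stands in for a real file reader):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the common base class described above;
// the real aestream interface may differ in names and types.
struct FileBase {
  // Read n events, or everything remaining in the file if n == -1.
  virtual std::vector<uint64_t> read_events(int64_t n) = 0;
  virtual ~FileBase() = default;
};

// Toy in-memory "format" showing how a new reader plugs in.
struct VectorFile : FileBase {
  std::vector<uint64_t> events;
  size_t pos = 0;
  explicit VectorFile(std::vector<uint64_t> ev) : events(std::move(ev)) {}

  std::vector<uint64_t> read_events(int64_t n) override {
    const size_t remaining = events.size() - pos;
    const size_t count =
        n < 0 ? remaining : std::min(static_cast<size_t>(n), remaining);
    std::vector<uint64_t> out(events.begin() + pos,
                              events.begin() + pos + count);
    pos += count;
    return out;
  }
};
```

With this shape, each concrete format (AEDAT4, DAT, ...) only needs to implement the reading primitives, and the rest of the pipeline stays format-agnostic.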

Additionally, I

  • Made the aestream_file library header-only
  • Added C++ and Python test cases
  • Reverted to the PyPI release of nanobind instead of a pinned commit

@Jegp Jegp changed the title Re-introduced AEDAT4 and restructured file readign Re-introduced AEDAT4 and restructured file reading Mar 20, 2023
@Jegp Jegp self-assigned this Mar 20, 2023
src/file/aedat4.hpp (resolved)
src/file/aedat4.hpp (outdated; resolved)
// Record offsets of event packets
// TODO: add IMU + frames
auto packets = table->table();
for (size_t i = 0; i < packets->size(); i++) {
Collaborator

Nitpick: use pre-increment? (This is a general comment, I won't insert it at every loop, haha)

for (size_t i = 0; i < packets->size(); ++i)

Contributor Author

Is there a reason for that? Doesn't the increment happen between loops, so the ordering doesn't matter?

Collaborator

It's just that the post-increment operator creates a temporary (it saves the current value, increments, and returns the old value), while the pre-increment doesn't (it just increments the value). So post-increment can have a minor performance impact (if you have millions of steps) without any benefit, because we are not using the returned value. As I said, very much a nitpick; maybe the compiler can even optimise this out?

Contributor Author

Wow, I had no idea. I'll fix that and update - thank you for that information!

src/file/dat.cpp (outdated; resolved)
n_events > 0 ? n_events : total_number_of_events;

uint64_t *buffer = (uint64_t *)malloc(buffer_size * sizeof(uint64_t));
Collaborator

Er...can we use std::array here? Raw array with malloc is so...C 😄

Contributor Author

Hehe that's very true. I have zero experience with the std::array construct. Won't you need compile-time size values?

I guess a similar thing could be achieved with a std::vector?

One concern would be that using containers is slower, though. Do you share that or am I being paranoid?

Collaborator

Yes, std::vector might be slightly slower if we are trying to squeeze every bit of performance. But I think there might be a way to handle the compile-time size of std::array. I'll run some tests to see if there are any (dis)advantages. b(^^)

Contributor Author

Would be fun to see what you come up with! I'll merge it for now and then we can re-evaluate. Should we perhaps open an issue?
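
As a rough sketch of what the std::vector variant could look like (buffer_size, n_events, and total_number_of_events are taken from the snippet above; the function wrapper is an assumption for illustration):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical wrapper around the allocation in the snippet above.
std::vector<uint64_t> make_buffer(int64_t n_events,
                                  std::size_t total_number_of_events) {
  const std::size_t buffer_size =
      n_events > 0 ? static_cast<std::size_t>(n_events)
                   : total_number_of_events;

  // std::vector sizes at runtime (std::array needs a compile-time size)
  // and frees its memory automatically, replacing the malloc/free pair.
  std::vector<uint64_t> buffer(buffer_size);

  // buffer.data() yields a uint64_t* for any C-style API that needs one.
  return buffer;
}
```

The main cost over malloc is that this constructor zero-initialises the elements; if that ever shows up in profiles, the buffer can be reserved and filled instead.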

src/file/dat.hpp (outdated; resolved)
Collaborator

TODO: @cantordust to revise at a later point using std::filesystem.

Collaborator

Ditto as above.

Contributor Author

@Jegp Jegp left a comment

Thanks for your input, @cantordust! I fixed the comments that made immediate sense to me and asked questions about arrays and pre-increment.

@Jegp Jegp merged commit 5c2fe1b into main Mar 27, 2023
@Jegp Jegp deleted the feature-aedat4 branch March 27, 2023 22:28
Jegp added a commit that referenced this pull request Mar 27, 2023
* Restructured AEDAT4 reading to support streaming
* Added aedat4 decoding to nanobind
* Cleaned up file reading hierarchy
* Added python tests
* Updated wheel workflow

Thanks to @cantordust for valuable input!