iter_packet_bytes incrementally read from stream, blocking for wait #92

scandey opened this issue Aug 22, 2023 · 2 comments

scandey commented Aug 22, 2023

It would be nice to have iter_packet_bytes (or a similar util function) that can just sit and listen at a socket for packets by grabbing small chunks of data at a time. It would be a reasonable assumption (in my opinion) to trust that the packets are all well-formed, but that there might be extra bytes at the end of a given chunk of data that belong to the next packet.
Originated from #83 (comment)
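
For reference, a minimal sketch of the kind of generator being asked for might look like the following. It is not part of CCSDSPy today; the function name, chunk size, and stream handling are hypothetical, and it assumes well-formed packets arriving in order on a blocking, file-like byte stream (e.g. `socket.makefile("rb", buffering=0)` for a TCP socket):

```python
from typing import BinaryIO, Iterator

CCSDS_PRIMARY_HEADER_LENGTH = 6  # bytes

def iter_packet_bytes_from_stream(stream: BinaryIO) -> Iterator[bytes]:
    """Yield one complete CCSDS packet at a time from a blocking byte stream."""
    buffer = b""

    while True:
        # Buffer at least the 6-byte primary header so the packet length is known.
        while len(buffer) < CCSDS_PRIMARY_HEADER_LENGTH:
            chunk = stream.read(4096)  # blocks until some data is available
            if not chunk:
                return  # stream closed
            buffer += chunk

        # Bytes 4-5 of the primary header hold (packet data field length - 1).
        total_length = CCSDS_PRIMARY_HEADER_LENGTH + int.from_bytes(buffer[4:6], "big") + 1

        # Keep reading until the whole packet is buffered.
        while len(buffer) < total_length:
            chunk = stream.read(4096)
            if not chunk:
                return
            buffer += chunk

        yield buffer[:total_length]
        buffer = buffer[total_length:]  # trailing bytes belong to the next packet


# Usage sketch: wrap a TCP socket so read() behaves like an unbuffered file object.
# with socket.create_connection(("localhost", 5000)) as sock:
#     for packet in iter_packet_bytes_from_stream(sock.makefile("rb", buffering=0)):
#         ...
```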

ddasilva (Collaborator) commented

I agree, this is a good idea. I will accept a pull request from the community if someone wants to implement this before I get around to it.

ehsteve (Member) commented Apr 3, 2024

Having written this kind of functionality myself (without using CCSDSPy), I do worry that it may not be simple to write something like this that is generic enough to be widely useful. The main difficulty is that, in my experience, the strategy for finding packets inside an existing file and the strategy for finding packets in real time as they arrive are very different.

In real-time data, if you lose track of where your packets start and end, you end up reading bytes trying to get back in sync. This can be easier or harder depending on how many APIDs you have in your stream. I also found that this is highly dependent on your communication protocol: some deliver data byte by byte (like serial), others deliver whole packets (like UDP), which also has to be baked into the real-time packet-finding strategy. Maybe a lot of this can be abstracted away if we can generate a byte stream.
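
As one illustration of the kind of heuristic that resynchronization involves (a hypothetical helper, not CCSDSPy API), restricting candidate header positions to a valid version field and a set of known APIDs already prunes most false matches; a further check could be that the length field points at another plausible header:

```python
def find_sync_offset(buffer, known_apids):
    """Scan a byte buffer for an offset that plausibly starts a CCSDS primary header.

    The first two header bytes carry a 3-bit version number (0 for CCSDS)
    and an 11-bit APID. Requiring the version to be 0 and the APID to be
    one expected in this stream filters out most random byte pairs.
    Returns the candidate offset, or None if nothing plausible is found.
    """
    for offset in range(max(0, len(buffer) - 5)):  # need a full 6-byte header
        first_word = int.from_bytes(buffer[offset:offset + 2], "big")
        version = first_word >> 13
        apid = first_word & 0x7FF
        if version == 0 and apid in known_apids:
            return offset
    return None
```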

Don't get me wrong, I would love a generic solution to this problem, and if one can be provided I would definitely use it. If there is only one APID in the data stream, then that may be a solvable problem.
