SigMF currently lacks a way to represent stereo data #133
Comments
@bhilburn Generally I think the spec should expand to house an arbitrary number of channels of any type. Per the OPUS spec §5.1.1.2 this should include at least up to 8 audio channels. I was chatting with @ke8ctn about this a bit, and we think the spec should allow multiple channels of any type with an interleaved structure. This would allow MIMO recordings or multichannel RF recordings, as well as multichannel audio. I know there was a discussion long ago where someone was advocating that separate channels should go in separate files, but I think there is good reason to put them in the same file. I kinda doubt you want to overload the datatype field. With the mod as I suggest, the metadata for the logo would just contain:

```
datatype = "i16_le"
num_channels = 2
```
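A minimal sketch of how such an interleaved recording could be read, assuming the proposed `num_channels` field (which is not part of the core spec; it is the addition under discussion) and using NumPy:

```python
# Hypothetical sketch: reading an interleaved multichannel recording
# described by datatype = "i16_le" and a proposed num_channels field.
# num_channels is NOT in the core SigMF spec; it is the idea discussed above.
import numpy as np

def load_interleaved(raw: bytes, dtype="<i2", num_channels=2):
    """Read interleaved samples and split them into per-channel columns."""
    flat = np.frombuffer(raw, dtype=dtype)
    # Interleaved layout: ch0[0] ch1[0] ch0[1] ch1[1] ...
    # Reshape to (n_frames, n_channels) so each column is one channel.
    return flat.reshape(-1, num_channels)

# Example: four frames of fake stereo data
raw = np.array([0, 100, 1, 101, 2, 102, 3, 103], dtype="<i2").tobytes()
channels = load_interleaved(raw)
# channels[:, 0] is the first channel, channels[:, 1] the second
```

The same reshape works for any channel count, which is part of what makes the interleaved layout attractive for MIMO and multichannel audio alike.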
Does the proposed multirecording extension not cover this?
Agreed, @jacobagilbert, I think it does. @Teque5, see #99. So, I think the question comes down to: do we see stereo recordings as a specific instance of multirecordings, or as their own thing? It sounds like @Teque5 thinks of it as the former. What are your thoughts, @jacobagilbert?
@jacobagilbert After reading the issue, it essentially proposes a nearly identical idea: adding an …
The number of channels would need to be inferred (from the length of the …). One limitation I can see is that …
@jacobagilbert This is how I see it for multiple channels:

- Multiple interleaved streams in one file
- Multiple streams in multiple files (a la multirecordings)

Really I think the spec should support both options. I can easily envision scenarios where you may want annotations to span multiple channels, or maybe you want them specific to a single channel.
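To make the annotation point concrete, a channel-specific annotation might look like the sketch below. The `channel` key is invented here purely for illustration; `core:sample_start`, `core:sample_count`, and `core:description` are existing SigMF annotation fields:

```json
{
  "annotations": [
    {
      "core:sample_start": 1000,
      "core:sample_count": 4096,
      "core:description": "burst present on one channel only",
      "channel": 1
    }
  ]
}
```

Omitting the hypothetical `channel` key could then naturally mean the annotation spans all channels.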
I agree, both should be supported; the option used will likely be dictated by the source (or consumer) of the data. For that reason I'm going to tag this as v2.x, though I think we can cover the second use case in 1.0. Multiple interleaved streams is essentially providing the ability to store 2D data; perhaps it should be generalized accordingly?
@jacobagilbert Yea, loading won't be hard; it's pretty simple in every language. I just meant it won't be obvious.
Since I was tagged by @Teque5 above, I'll say that I've long been of the opinion that multi-channel support via interleaving is a missing feature of SigMF; something like an optional core global field called … would fit. The multirecording extension seems very useful for cases where the separate channels are more loosely related, such as when there is no shared sample clock or datatype. It feels appropriate as an extension that gives application developers a consistent way to handle those cases. Both would be great.
So, the multi-recordings extension is happening no matter what; it needs some changes, but it'll be part of v1.0.0. There are a ton of use cases that need it, and I think it's fundamental to SigMF at this point. Per #99, a channel index (or similar) field will also be added in the next update. The discussion here, in my mind, is whether that (the multi-recordings extension) 👆👆 is sufficient to cover use cases like stereo recordings, or whether we need a better mechanism. I think @ke8ctn, @Teque5, and @jacobagilbert have all raised good points here, and it seems like the answer to that question is "no": we need another mechanism. The code that I shared in the OP proposed adding a way to do stereo recordings, but as noted by everyone above, it should really be generalized to arbitrary interleaved streams. My gut is that this ought to be a fairly easy change to make, and is backwards-compatible (all additions should default to Optional such that default parsing assumes current behavior). I'd like to include it in v1.0.0 if we can, both because of its utility and because our own logo requires it 😄. Will mark it back to v1.0.0 for now, and let's see if we can get it done in a way that will work. @Teque5, are you still up for giving it a go?
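The backwards-compatibility argument can be sketched in a few lines: if the new field is Optional with a default of 1, existing metadata parses exactly as before. The field name `core:num_channels` below is an assumption based on the discussion, not (yet) part of the spec:

```python
# Sketch only: "core:num_channels" is the hypothetical field discussed in
# this thread. A missing field falls back to 1, i.e. current single-channel
# behavior, which is what makes the addition backwards-compatible.
import json

def channel_count(global_metadata: dict) -> int:
    return global_metadata.get("core:num_channels", 1)

legacy = json.loads('{"core:datatype": "i16_le"}')
stereo = json.loads('{"core:datatype": "i16_le", "core:num_channels": 2}')
```

A parser written this way never changes its behavior on any recording produced before the field existed.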
Yea I can probably get to it this weekend. I'll do the final logo then too since a few people have checked my annotations now. |
The PR is done. After (or if?) you merge, I will create another PR to make it easy to convert audio files into SigMF.
@Teque5 - Left some minor comments in the PR, and then I think we're good to merge. And that follow-on PR sounds excellent 🙂 |
Per @Teque5's note in #117, SigMF currently lacks a way to represent stereo data (e.g., data traces from an oscilloscope). This data looks like sample pairs, just like complex sample pairs, but it is not actually complex data. In the o-scope file @Teque5 generated in that issue, the pairs are defined as a complex pair to keep them aligned, but that's not semantically accurate.
I think adding the ability to support recordings of stereo real data would not be too hard of a lift from our current spec, and would be really useful (beyond just our own logo 😄).
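The semantic problem is easy to demonstrate: interleaved real stereo pairs and complex samples share an identical byte layout, so a stereo trace can be *stored* as complex, but interpreting it that way mislabels the second channel. A small NumPy sketch (the sample values are made up):

```python
# Two real channels with the same byte layout as complex sample pairs.
import numpy as np

left = np.array([10, 20, 30], dtype="<i2")
right = np.array([-1, -2, -3], dtype="<i2")

# Interleave the two real channels: L0 R0 L1 R1 ...
stereo = np.empty(left.size * 2, dtype="<i2")
stereo[0::2] = left
stereo[1::2] = right

# The same values reinterpreted as complex pairs (the workaround from #117):
as_complex = stereo.astype(np.float32).view(np.complex64)
# Byte layout lines up, but the right channel now masquerades as "Q" data.
```

Nothing in the data itself distinguishes the two interpretations, which is exactly why the metadata needs to say which one is meant.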
What about just adding `stereo` as a top-level type, modifying the ABNF from the spec accordingly? This, for example, would mean that `si16_le` indicates stereo int16 LE data. Thoughts?
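The ABNF block from this comment did not survive extraction. As a hedged reconstruction only, consistent with `i16_le` (real) and the proposed `si16_le` (stereo) but with rule names and the type list guessed rather than quoted from the spec, the proposal amounts to something like:

```
; sketch only: rule names and type list are assumptions, not the spec's text
dataformat  = [modifier] type endianness
modifier    = complex / stereo
complex     = "c"
stereo      = "s"          ; proposed addition; no modifier means real
type        = "f32" / "f64" / "i32" / "i16" / "u32" / "u16" / "i8" / "u8"
endianness  = "_le" / "_be"
```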