breaking changes for v0.4.0 #47

Open
12 tasks done
scottlamb opened this issue Dec 29, 2021 · 0 comments
Labels
enhancement New feature or request

Comments


scottlamb commented Dec 29, 2021

Work underway on the next branch.

Breaking changes I'm considering:

  • hiding fields in favor of accessors. (Also: taking the opportunity to reduce memory usage by keeping a Box<str> around instead of a String.) A sketch follows this list.
  • similarly, maybe we want to add a way to pass through codec-level non-fatal errors. CodecItem::Error or the like. We could just make CodecItem #[non_exhaustive] rather than figure out the details now. (edit: just did the #[non_exhaustive] for now.)
  • removing the deprecated Session::teardown method.
  • remove the pub from a few items that got it without much thought. (they're already #[doc(hidden)])
  • replacing VideoFrame::new_parameters: Option<Box<VideoParameters>> with a boolean. You can obtain the latest parameters via Stream::parameters, so there's no good reason for the preemptive clone or for the extra bytes in every VideoFrame. Except:
    • after retina::client::Demuxed::poll_next returns Pending, currently Stream::parameters may reflect changes in the upcoming frame. This is probably confusing / worth avoiding.
    • Demuxed doesn't offer easy access to the streams, but there's no reason it shouldn't.
  • replace retina::codec::Parameters with retina::codec::ParametersRef, to avoid having to clone the parameters when returning them from retina::client::Stream::parameters. (Sketched after this list.)
  • have Stream::setup take an options struct to allow customizing depacketizer options. Useful for, e.g., Annex B support (#44). (Sketched after this list.)
  • alter the expectations for PacketContext to be paired with a new type StreamContext, rather than just RtspConnectionContext. I'd expose StreamContext as an accessor on the stream rather than include it in MessageFrame. Notably, MessageFrame has a pair of PacketContexts, so this should shrink MessageFrame's size by four socket addresses, at the cost of the caller having to pull more stuff together to get enough information to find the packet in a pcap file.
  • switch VideoFrame::data() and friends from returning a &Bytes to returning a &[u8], maybe adding a VideoFrame::take_data() that returns a Vec<u8>. If the caller wants to actually modify the bytes in place for some reason, this would be better. I don't think there's any way to go from a Bytes back to a BytesMut, even if there are no other references. (tokio-rs/bytes#350, "0.5 release prevents going back from Bytes to BytesMut", seems to say the same.) A sketch follows this list.
  • PacketItem should expose RTCP compound packets, not just sender reports.
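
As a concrete illustration of the accessor item above: a minimal sketch, assuming a single string-valued field; the field name is borrowed from VideoParameters, but the layout here is an assumption, not the final API.

```rust
// Hypothetical sketch: a formerly-public String field becomes a private Box<str>,
// read through an accessor. Box<str> has no capacity field, so it's one usize
// smaller than String, and hiding it keeps the representation free to change.
pub struct VideoParameters {
    rfc6381_codec: Box<str>, // was, roughly: pub rfc6381_codec: String
}

impl VideoParameters {
    pub fn rfc6381_codec(&self) -> &str {
        &self.rfc6381_codec
    }
}
```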
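
For the ParametersRef item: a self-contained sketch of the borrowed-enum shape; the variant set and the stub types are assumptions, not the final API.

```rust
// Sketch: returning references into state the Stream already owns avoids the
// clone that an owned retina::codec::Parameters return value would force.
pub struct VideoParameters { /* elided */ }
pub struct AudioParameters { /* elided */ }

pub enum ParametersRef<'a> {
    Video(&'a VideoParameters),
    Audio(&'a AudioParameters),
}

pub struct Stream {
    video_parameters: Option<VideoParameters>,
}

impl Stream {
    /// Borrows the latest parameters rather than cloning them.
    pub fn parameters(&self) -> Option<ParametersRef<'_>> {
        self.video_parameters.as_ref().map(ParametersRef::Video)
    }
}
```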
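
For the setup-options item: a hedged sketch of what an options struct could look like; SetupOptions and the annex_b knob are illustrative names tied to #44, not a committed API.

```rust
/// Hypothetical options struct for Stream::setup; new depacketizer knobs can be
/// added later without breaking callers, since construction goes through Default.
#[derive(Default)]
pub struct SetupOptions {
    annex_b: bool, // illustrative knob: emit H.264 access units in Annex B form
}

impl SetupOptions {
    pub fn annex_b(mut self, enable: bool) -> Self {
        self.annex_b = enable;
        self
    }
}

// A caller might then write something like:
// session.setup(stream_index, SetupOptions::default().annex_b(true)).await?;
```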
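
And for the &Bytes → &[u8] item: a sketch of the borrowed/owned pair. VideoFrame is reduced to just its data field here, and take_data is the tentative name from the list (the commits below ended up calling it into_data).

```rust
pub struct VideoFrame {
    data: Vec<u8>,
}

impl VideoFrame {
    /// Borrow the frame data as a plain slice; callers no longer see Bytes.
    pub fn data(&self) -> &[u8] {
        &self.data
    }

    /// Consume the frame, handing back an owned buffer the caller can mutate
    /// in place (e.g. rewriting NAL framing) without copying.
    pub fn take_data(self) -> Vec<u8> {
        self.data
    }
}
```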

I'm open to others.

Deferred

I'm going to punt on returning anything borrowed from the Session/Demuxed. This can't be done with the futures::stream::Stream interface (as Rust doesn't have GATs yet / it's unclear if Stream can be extended to support them anyway). I think switching to just an inherent method async fn next(&mut self) -> ...Item might be worthwhile, which would let us borrow from a ring buffer (see #6). But I don't want to make the improvements already implemented wait for that, and folks say "version numbers are cheap".
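
A minimal sketch of that inherent-method shape, with invented stand-in types, just to show how it lets an item borrow from internal buffers in a way that futures::stream::Stream's associated Item type can't express today:

```rust
pub struct Demuxed {
    ring: Vec<u8>, // stand-in for an internal ring buffer
}

pub struct CodecItem<'a> {
    pub data: &'a [u8], // borrows from the demuxer rather than owning a copy
}

impl Demuxed {
    /// Inherent async fn: the returned item may borrow from `self`.
    pub async fn next(&mut self) -> Option<CodecItem<'_>> {
        // ...fill `self.ring` from the transport here...
        Some(CodecItem { data: self.ring.as_slice() })
    }
}
```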

  • possibly PacketItem and CodecItem should include a &Stream to save the caller having to look it up. They should have it handy anyway. CodecItem::VideoFrame could also return the params as Option<&VideoParameters> (likewise audio/message media) rather than requiring the caller to match on a Parameters enum.
  • the decision from "reverse API design: should depacketizers provide data as impl Buf or Bytes/Vec<u8>?" (#4): have codec items expose a Buf that borrows from the internal buffers, rather than copying into a contiguous buffer. If we borrow, we don't have to make a new Vec of chunks for each frame. We'll make it easy for the caller to copy into a new contiguous Vec as before, or to copy into some other structure (like a single Vec for multiple frames, a ring buffer, etc.). A sketch of the caller side follows.
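
A small sketch of what that looks like from the caller's side, assuming the bytes crate's Buf trait; collect_contiguous is an illustrative helper, not part of the proposed API.

```rust
use bytes::Buf;

/// Copies a possibly non-contiguous frame into one contiguous Vec.
/// Callers that can consume chunks directly would skip this entirely.
fn collect_contiguous(mut frame: impl Buf) -> Vec<u8> {
    let mut out = Vec::with_capacity(frame.remaining());
    while frame.has_remaining() {
        let chunk = frame.chunk();
        out.extend_from_slice(chunk);
        let n = chunk.len();
        frame.advance(n);
    }
    out
}
```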

Other deferred stuff, because again version numbers are cheap, and these aren't as fully baked as the stuff above the fold:

  • switching from log to tracing. This would allow the caller to supply context for interpreting the log messages. Motivating example: Moonfire NVR maintains all its streams from the same tokio thread pool. When Retina logs a message about a stream, it's unclear which stream it applies to. Retina at least knows which URL it's talking to, but Moonfire NVR can wrap that in better context, like the name of the camera and main/sub stream, and tracing seems like a popular way to augment log messages in this way. Retina might also put in its own spans for each session. (A span sketch follows this list.)
    • why defer: using tracing properly at the Moonfire NVR level would involve moving stuff into fields rather than just the message and rethinking the log format. Not quite ready to take that on.
  • revamping how packet loss is reported at the CodecItem level. I don't like that right now an item's loss field doubles as the number of packets lost (not necessarily consecutively) and as information about whether the current frame is "complete". I wonder if it's better to represent these as orthogonal concepts. Sometimes we know the loss was in a previous frame; sometimes we're uncertain; sometimes we know we're missing part of the current frame (a prefix, one or more interior regions, or a suffix). I'm not totally sure what the best representation is, but it might involve losing the loss field. Having a separate CodecItem::Loss would also give a way to represent richer information about the loss (like the packet number range, or the times of the packets before/after the loss). (One possible shape is sketched after this list.)
    • why defer: because I haven't thought through the best representation yet.
  • removing PlayOptions::enforce_timestamps_with_max_jump_secs. It's already optional, but I'm not sure it belongs at this level at all. The caller can do this validation if they care to, and it may not be used often enough to be worth advertising in this library. They may instead want to do something more sophisticated than trusting the RTP timestamps at all, given that many cameras don't use the monotonic clock to produce them. If I come up with one really battle-tested reliable strategy, it might belong in this library, but I don't think enforce_timestamps_with_max_jump_secs is that.
    • why defer: I wonder if I should go even further and make the u32->crate::Timestamp step optional/customizable. Also requires more design effort.
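
To make the tracing item concrete, a hedged sketch of the caller-supplied-context idea; the span name and fields are illustrative, not anything Retina or Moonfire NVR actually defines.

```rust
use tracing::{info, info_span};

fn poll_camera() {
    // The application opens a span carrying its own context...
    let span = info_span!("rtsp_session", camera = "driveway", stream_kind = "main");
    let _enter = span.enter();

    // ...and anything logged inside it (by the app or by a library emitting
    // `tracing` events) carries those fields, answering "which stream is this?".
    info!("starting RTSP session");
}
```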
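
For the packet-loss item, one purely illustrative shape that separates the lost-packet count from where the loss falls relative to the current frame; the issue explicitly leaves this design open.

```rust
/// Hypothetical: where a detected loss sits relative to the current frame.
pub enum LossPosition {
    /// Known to fall entirely before the current frame.
    BeforeFrame,
    /// May or may not affect the current frame.
    Uncertain,
    /// Part of the current frame is missing (a prefix, interior region, or suffix).
    WithinFrame,
}

/// Hypothetical standalone item, e.g. the payload of a CodecItem::Loss variant.
pub struct Loss {
    pub packets: u16,
    pub position: LossPosition,
}
```
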
@scottlamb scottlamb changed the title breaking changes for v0.2.0 breaking changes for v0.4.0 Dec 30, 2021
@scottlamb scottlamb added the enhancement New feature or request label Jan 26, 2022
scottlamb added a commit that referenced this issue Apr 28, 2022
As listed in #47.

This improves throughput in the client/h264 benchmark by about 20%,
I think because we copy fewer bytes around. CodecItem went from 256
bytes to 176.
scottlamb added a commit that referenced this issue Apr 28, 2022
As described in #47 and #58: revamp this type to keep the full raw
packet, and provide accessors rather than raw fields. It's also
smaller now, which is nice.
scottlamb added a commit that referenced this issue Apr 30, 2022
I left alone `VideoFrame::new_parameters`, as I'll likely eliminate it
as described in #47.
scottlamb added a commit that referenced this issue Apr 30, 2022
scottlamb added a commit that referenced this issue May 11, 2022
Part of #47

The main benefit is that `VideoFrame::into_data` can cheaply return a
`Vec<u8>` that the caller can mutate. In particular, they could convert
H.264 data from Annex B to AVC form or skip non-VCL NALs. Neither of
these transformations adds bytes, so both are possible without
allocation.

Audio and message frames still use `Bytes` internally, as that allows
them to avoid copying when the frame is wholly contained in a packet.
I think this is much more common than for video. I didn't add an
`AudioFrame::into_data` or `MessageFrame::into_data`.
scottlamb added a commit that referenced this issue May 11, 2022
This reduces copying. Part of #47.
scottlamb added a commit that referenced this issue May 11, 2022
For #47.

Along the way, I simplified the `aac::Depacketizer` impl a bit, eliminating
a clone and some redundancy.