
Object/group TTL #249

Open
kixelated opened this issue Sep 12, 2023 · 18 comments
Labels
Object Model Relating to the properties of Tracks, Groups and Object

Comments

@kixelated
Collaborator

I'm (properly) implementing caching in my relay and need to know how long to cache an object.

For example, a catalog object should be cached until the session terminates. Meanwhile a media segment should only be cached for X seconds. The relay could evict the cache earlier than that, and possibly refetch the object if it's still within the advertised expires window.

It doesn't seem valid to have objects with different cache expirations in a group. For example, there's never a reason why you would expire the I-frame, but not the P-frames that depend on it. Expiration should ideally be a group property but there's no message to express that currently.

@kixelated
Collaborator Author

kixelated commented Sep 12, 2023

My pitch: QUIC streams are the unit of caching. Each stream contains a header with expiration semantics, based on HTTP Cache-Control.

Applications that rely on ordering use a QUIC stream, ex. video GoPs and catalog updates. This means you don't deliver/expire objects before their dependencies. It also means that you could address the cache by byte offset.

Unlike HTTP, we should totally add an (inline?) message to update the TTL. We might say that a GoP has a 10s TTL initially, but if the encoder decides to produce an increasingly long GoP, we can keep bumping up the TTL as it grows.

quic stream:

GROUP  track=4 sequence=69 expires=10s
OBJECT size=x
OBJECT size=x
UPDATE expires=15s
OBJECT size=x
OBJECT size=x
UPDATE expires=20s
...
EOF
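A minimal sketch of how this framing could be encoded and parsed. The wire format here (a 1-byte type tag followed by fixed-size fields) is entirely invented for illustration; only the GROUP/OBJECT/UPDATE message names come from the pitch above. Note how the latest UPDATE wins, bumping the group's TTL mid-stream:

```python
import struct

# Hypothetical message type tags; not from any draft.
GROUP, OBJECT, UPDATE = 0x00, 0x01, 0x02

def encode_group_stream(track, sequence, expires_s, parts):
    """parts is a list of ('object', payload) or ('update', new_expires_s)."""
    buf = bytearray(struct.pack("!BQQI", GROUP, track, sequence, expires_s))
    for kind, value in parts:
        if kind == "object":
            buf += struct.pack("!BI", OBJECT, len(value)) + value
        else:  # 'update': bump the TTL mid-stream
            buf += struct.pack("!BI", UPDATE, value)
    return bytes(buf)

def decode_group_stream(data):
    """Consume messages in stream order; the most recent UPDATE sets expires."""
    view = memoryview(data)
    tag, track, sequence, expires = struct.unpack_from("!BQQI", view, 0)
    assert tag == GROUP
    offset = struct.calcsize("!BQQI")
    objects = []
    while offset < len(view):
        tag = view[offset]
        if tag == OBJECT:
            (size,) = struct.unpack_from("!I", view, offset + 1)
            offset += 5
            objects.append(bytes(view[offset:offset + size]))
            offset += size
        elif tag == UPDATE:
            (expires,) = struct.unpack_from("!I", view, offset + 1)
            offset += 5
    return {"track": track, "sequence": sequence,
            "expires": expires, "objects": objects}
```

Because the stream is strictly ordered, a relay can apply each UPDATE as it arrives and never needs to reorder anything.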

@wilaw
Contributor

wilaw commented Sep 12, 2023

a catalog object should be cached until the session terminates.

With LIVE streams, true, but for VOD (not your use-case but still valid moq-transport) we might want to cache a catalog for months.

For the record, I think streams as the atomic cache unit is interesting. One challenge is a long-running stream with the equivalent of a DVR window: now you have to support partial object caching at arbitrary boundaries, which is messy. I'd rather have a scheme where we cache by Group. I don't mind serving byte-ranges out of a group, but I would at least cache all of a group, or none of a group. If we put header info in groups then they can easily be served out in the same stream pattern as they arrived (stream per Object, stream per Group, etc.). The header info would preserve the stream relationship while in cache.

@simonkorl

I'd rather have a scheme where we cache by Group.

If I've got it right, Luke was transferring a single group over a single QUIC stream. The header info of groups arrives at the beginning of the stream and the whole stream may expire by canceling the current stream. This method looks good except for the possibly messy stream bytes.

I have a question: If the object is cached in the QUIC stream, then will the UPDATE message create discontinuous byte intervals in the stream?

If we put header info in groups then they can easily be served out in the same stream pattern as they arrived. (stream per Object, or stream per group etc).

I agree with this idea. It is a good idea to adapt to different stream patterns to transport a Group, such as stream per Object or stream per Group. Even if we send Objects of a Group in separate streams, we can also cache them in the stream and drop arbitrary Objects or Groups as we wish.

@kixelated
Collaborator Author

If I've got it right, Luke was transferring a single group over a single QUIC stream. The header info of groups arrives at the beginning of the stream and the whole stream may expire by canceling the current stream. This method looks good except for the possibly messy stream bytes.

Yeah exactly.

I have a question: If the object is cached in the QUIC stream, then will the UPDATE message create discontinuous byte intervals in the stream?

Good observation. The same thing happens today with HTTP, since the body is broken into an arbitrary number of DATA frames or chunks. An HTTP range request ignores this framing when determining which bytes to serve.

There are generally two options:

  1. You only cache the content inside the DATA frames, which makes range requests easy to serve. You reframe the data on the way out at arbitrary boundaries based on what's currently available in the cache, sliiightly reducing the overhead for slow receivers.
  2. You cache the raw stream contents. You can then just copy the stream to all downstreams, preserving the same original framing. However, range requests are now more difficult to serve, as you either need to reparse these headers on demand, or keep a list of offset/size pairs for each DATA chunk.

There's nothing like UPDATE in HTTP as far as I know. Based on the two approaches above, you would:

  1. Separately cache a list of UPDATE frames by offset, reinserting them into the stream.
  2. No action needed.

I think we did option 2 at Twitch because we were using HTTP/1.1 and could copy the bytes directly to/from TCP sockets. With HTTP/2 and HTTP/3 it probably doesn't make a difference, with a sliiight edge to option 1 because UPDATEs would be rare.

Either way, it's significantly faster than iterating through hundreds of separate OBJECT caches.
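A toy sketch of option 1 above, assuming hypothetical function names: the cache stores only the payload bytes (framing is stripped on ingest), so a range can be served by slicing and re-framing at whatever chunk size suits the downstream, regardless of how the upstream framed it:

```python
def ingest(frames):
    """Option 1 ingest: strip framing by concatenating DATA frame payloads."""
    return b"".join(frames)

def serve_range(cache, start, end, chunk_size):
    """Re-frame cached payload bytes at arbitrary boundaries on the way out.

    The outgoing chunk boundaries need not match the incoming ones; that is
    the whole point of caching payload-only.
    """
    body = cache[start:end]
    return [body[i:i + chunk_size] for i in range(0, len(body), chunk_size)]
```

Option 2 would instead store the raw stream (headers included) and answer a range request either by reparsing the headers or by consulting a saved list of offset/size pairs.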

@kixelated
Collaborator Author

kixelated commented Sep 15, 2023

If we put header info in groups then they can easily be served out in the same stream pattern as they arrived. (stream per Object, or stream per group etc).

I agree with this idea. It is a good idea to adapt to different stream patterns to transport a Group, such as stream per Object or stream per Group. Even if we send Objects of a Group in separate streams, we can also cache them in the stream and drop arbitrary Objects or Groups as we wish.

<rant incoming; not directed at you don't worry>

The application decides how to fragment data into objects and groups depending on the properties they provide. As the draft is currently written, putting objects into groups doesn't accomplish much. They don't get delivered in order, nor are they reliable. Groups only exist as a subtle hint to a relay that it should start subscriptions at group boundaries... but that isn't even enough information for higher latency targets.

The authors argued a lot about the properties of groups, and the main issue was B-frames. OBJECT sequence=6 does not necessarily depend on OBJECT sequence=5, even within the same group, which is why you can't make any assumptions about reliability (ex. a B-frame was dropped). The result is that OBJECTs are more like jumbo-datagrams, able to be dropped at arbitrary points for arbitrary reasons (aka a future of gross business logic in relays), and group membership means nearly nothing.

I use QUIC streams instead of groups because they actually have useful properties. The idea is similar: objects mostly depend on earlier objects in the same group, but there's actually a strong ordering and delivery guarantee. The application doesn't need to support arbitrary gaps or reorder objects, and it can use byte offsets. A relay receives a QUIC stream and MUST deliver/cache it in the same order, or not at all.

@simonkorl

Either way, it's significantly faster than iterating through hundreds of separate OBJECT caches.
I use QUIC streams instead of groups because they actually have useful properties. The idea is similar; objects mostly depend on earlier objects in the same group, however there's actually a strong ordering and delivery guarantee.

I understand: the reason you don't send Objects over separate streams is that it introduces extra time costs to match Objects to their corresponding Groups. Even though we could use data structures like a linked list to handle the Object caches, it is in fact not necessary, because the stream already keeps the Objects in order by itself. It is definitely a good implementation, but it would be better to support different stream patterns for extensibility.

There's nothing like UPDATE in HTTP as far as I know. Based on the two approaches above, you would:
Separately cache a list of UPDATE frames by offset, reinserting them into the stream.

I didn't quite get why we need to cache the UPDATE frames. Shouldn't an UPDATE frame be consumed once it arrives, and only change the expires parameter rather than have other effects on the system? Or does the UPDATE frame update the expiration time of different Objects, so we need to cache it?

@kixelated
Collaborator Author

I didn't quite get why we need to cache the UPDATE frames. Shouldn't an UPDATE frame be consumed once it arrives, and only change the expires parameter rather than have other effects on the system? Or does the UPDATE frame update the expiration time of different Objects, so we need to cache it?

Yeah, the edge could drop the UPDATE frames if they only contained cache information, as the viewer likely won't use it. However, with a chain of relays you would need to re-emit them or risk expiring a downstream cache early. It's also more complicated because you don't know whether a downstream is an end user or actually another relay.

But I imagine we would want to use UPDATE for more than just caching expirations. Forwarding it always seems useful.

@suhasHere
Collaborator

suhasHere commented Sep 16, 2023

I am not too sure about the updates, but setting the transport delivery mode to one group per stream and just sending one object in that stream (which is the entire GOP) should get the configuration needed.

Also agree with @simonkorl on allowing different ways of grouping objects.

I like the simple properties we have defined for objects today:

  1. "An object is an addressable unit whose payload is a sequence of bytes" .
  2. A relay MUST NOT combine, split, or otherwise modify object payloads.

Objects are the unit of caching and retrieval, which avoids the complications of splitting them and ending up delivering them in pieces across streams/connections.

@VMatrix1900

+1 to @suhasHere. The caching should happen in the MoQT layer, not in the underlying "real" transport (WebTransport or raw QUIC). A QUIC stream is only one way of transporting the MoQT-layer unit (Object/Group).

@fluffy
Contributor

fluffy commented Sep 18, 2023

Walk me through how this works for an audio call where you only want to cache the most recent 15 seconds.

@fluffy
Contributor

fluffy commented Sep 18, 2023

To answer the top level questions at top of this issue. I think each object should have a Time To Live or Expiry Date. The main reason is to allow CDNs to bill for the time data is stored and for the applications to have a way to indicate to the CDN what the desired behavior is. This is a significant limitation of existing HTTP CDNs.

@wilaw
Contributor

wilaw commented Sep 18, 2023

I think each object should have a Time To Live or Expiry Date.

+1

This is a significant limitation of existing HTTP CDNs.

HTTP CDNs do have pretty extensive cache control - see here for some details. We could, for moq-transport, think of simplifying this and reducing the application-defined options (e.g. s-maxage, max-age, no-store, no-cache) down to a single value: the desired TTL in milliseconds.

@kixelated
Collaborator Author

kixelated commented Sep 19, 2023

Walk me through how this works for an audio call where you only want to cache the most recent 15 seconds.

Sure. The simple answer is that you deliver independent chunks, over independent streams, which are cached independently. I'm going to call these "groups" because that's the spirit behind the abstraction in the draft. ex. there's no point caching/delivering a B-frame if you already invalidated/dropped the I-frame.

The audio group size is up to the application (and codec). It could be as small as 1 frame or as large as the entire video GoP. It's a trade-off between overhead (number of streams) and granularity (not latency). For example, if you make a group every 100ms, then you can only start/skip at 100ms boundaries. MAX_STREAMS is really the only reason why you wouldn't make 1-frame audio groups, though.
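To make the trade-off concrete, a tiny sketch (the function name and example numbers are mine, not from the draft): a group every 100ms costs 10 new streams per second per track and gives 100ms join/skip granularity, while per-frame audio groups (e.g. 20ms Opus frames) cost 50 streams per second.

```python
def group_tradeoff(group_duration_ms):
    """Overhead (streams opened per second, per track) vs. join granularity.

    Smaller groups mean more streams but finer start/skip boundaries;
    latency is unaffected either way.
    """
    return {
        "streams_per_second": 1000 / group_duration_ms,
        "join_granularity_ms": group_duration_ms,
    }
```

This is why QUIC's MAX_STREAMS limit, rather than latency, ends up bounding how small you want to make audio groups.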

@kixelated
Collaborator Author

kixelated commented Sep 19, 2023

The long answer bleeds into QUIC stream mapping.

The key property is independence. You want media to be split into chunks which can then be cached/served independently.

My claim is that you want network dependency == cache dependency == application dependency. You want each group to be its own independent pipeline: encode -> transmit -> cache -> receive -> decode. That way, when there's congestion or queuing, you can cancel or deprioritize less important groups without impacting more important groups.

Now, let's suppose you send two OBJECTs from two different groups over the same QUIC stream. You could cache these independently, but the premise is flawed. The intent is to cache them independently so you can deliver them independently downstream, yet they were delivered dependently from the upstream. Any congestion from upstream propagates downstream; you can't put the genie back in the bottle.

So that's not to say that caching objects independently is inherently wrong. The core issue is delivering independent objects on the same QUIC stream, which introduces dependencies, and then expecting to cache them independently.

Also, just to clarify: a CDN does not need to use QUIC streams internally. It only matters when there's congestion, much like a CDN would use HTTP/3 externally but HTTP/1/2 internally. The streams/groups would still be logically separate, much like HTTP/2 requests are logically separate and cached separately even when they are serialized over a single TCP connection.

@afrind afrind added the Object Model Relating to the properties of Tracks, Groups and Object label Oct 3, 2023
@ianswett
Collaborator

I can imagine three possible uses of a TTL:

  1. How long can one cache this before revalidating the content - MoQT objects are immutable, so this is not an issue.
  2. How long can one cache this before it should be purged from the cache for policy reasons.
  3. How long would we expect the content to be cached in order to satisfy the typical user/use case.

It seems like this issue is mostly focused on the 3rd concept, which is essentially a performance optimization/hint?

@afrind
Collaborator

afrind commented Feb 20, 2024

Individual Comment:

TTLs are relative to something, for example Cache-Control: max-age is relative to the Date header in an HTTP response. There are no absolute dates in moqt right now, so are we talking about TTLs relative to when an entity receives an object? Or are we actually talking about an absolute 'expiration' field.

I agree with Ian that revalidation is not a use case.

I had thought that part of the use case for TTL was something mentioned in #396 - a point at which Objects should no longer be transmitted. Hinting to the cache when things can be purged may be orthogonal to that, so maybe moq requires multiple properties conveying different types of object lifetime information?

@wilaw
Contributor

wilaw commented Feb 20, 2024

I think there are two different object time properties we need to consider - content expiration and cache expiration. The two are not the same. Here is how I see them defined:

Expiration - an absolute timestamp after which the content is invalid. It MUST NOT be delivered after this time and may be purged from cache or dropped if it is in a send queue.
Cache TTL - a relative time that the content should be cached after receipt at any relay. This is a hint to the relays as to the period of time that storing this object would be useful.

Examples (as set by publisher)

  • Real-time video conference stream: Expiration: now + 2000ms, cache TTL: 2000ms
  • Sports broadcast with 1hr DVR window: Expiration: now + 3600000ms, cache TTL: 3600000ms
  • VOD movie valid all year but cached for 30min: Expiration: now + 31536000000ms, cache TTL: 1800000ms

If cacheTTL is not set, then the relay MAY cache until the expiration date.
If cacheTTL is set without any expiration date, and the cacheTTL has been exceeded, the relay should renew the subscription before serving the content downstream.

Caching at all times is a scaling and efficiency mechanism for avoiding trips upstream. It is decoupled from the core pub/sub behavior of moqt. Therefore any cacheTTL properties are suggestions to the relays and should not be relied upon.
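The rules above can be sketched as a relay-side decision function. This is only an illustration of the two-property scheme as described in this comment; the field names (`expiration` as absolute epoch-ms, `cache_ttl_ms` as relative hint, `received_ms` as receipt time) are assumptions of mine:

```python
def relay_action(obj, now_ms):
    """Decide what a relay should do with a cached object right now.

    obj is a dict with 'received_ms', plus optional 'expiration' (absolute
    epoch-ms) and 'cache_ttl_ms' (relative hint). Field names are invented
    for this sketch.
    """
    # Expiration is hard: content MUST NOT be served after this time.
    if obj.get("expiration") is not None and now_ms >= obj["expiration"]:
        return "purge"
    ttl = obj.get("cache_ttl_ms")
    if ttl is None:
        return "serve"  # no TTL hint: MAY cache/serve until expiration
    if now_ms - obj["received_ms"] > ttl:
        return "revalidate"  # TTL exceeded: renew the subscription upstream
    return "serve"
```

Consistent with the point that caching is only an efficiency mechanism, nothing here changes pub/sub correctness: a relay that ignores the TTL hint and always revalidates would still behave correctly, just less efficiently.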

@fluffy
Contributor

fluffy commented Mar 1, 2024

Just want to flag that I think a bunch of this is really going the wrong direction but I don't want to discuss in an issue. I think we need a better way to make progress on this.
