
Don't preallocate a full vector, allocate on demand #154

Open · wants to merge 1 commit into master
Conversation

@5225225 (Contributor) commented Aug 29, 2022

This avoids OOM issues, at the cost of resizing a bit more.

I originally had a design where streams could say how long they are, but I figured just letting vec resize here is the best solution.

I looked through for any other places where vec![expr; length] is called with an untrusted length, and only saw these. If there are any more I'll fix them.

Sorry, no test files for this, I kinda did the work a while ago and forgot I did it, so didn't keep any test files.
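As a rough illustration of the approach (a hedged sketch, not the actual patch; `read_exact_untrusted` and the 1 kB chunk size are stand-ins, not Symphonia's API):

```rust
use std::io::{self, Read};

/// Instead of `vec![0; len]` with an untrusted `len`, grow the buffer in
/// bounded steps so a lying header can't force a huge upfront allocation.
fn read_exact_untrusted(reader: &mut impl Read, len: usize) -> io::Result<Vec<u8>> {
    const CHUNK: usize = 1024; // never allocate more than 1 kB ahead of the data
    let mut buf = Vec::new();
    while buf.len() < len {
        let old_len = buf.len();
        let target = (old_len + CHUNK).min(len);
        buf.resize(target, 0);
        // A truncated or lying file fails here, after at most one extra
        // 1 kB allocation, instead of OOM-ing on the initial `vec![0; len]`.
        reader.read_exact(&mut buf[old_len..])?;
    }
    Ok(buf)
}
```

The trade-off is exactly the one described above: a few more reallocations in exchange for never trusting the declared length.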

@roderickvd

Will this in any way also deprecate the buffer_len in MediaSourceStreamOptions?

MediaSourceStreamOptions { buffer_len: 64 * 1024 }

I don't think it's used anywhere yet. Not sure if it was intended for allocation or some sort of seeking strategy?

@pdeljanov (Owner)

Hi @5225225,

Apologies for how long it took to come around to this, but I got very busy and had to put Symphonia on a bit of a hiatus. Catching up now!

If I understand this PR, the motivation is to prevent malicious files from causing more bytes to be allocated than could actually be read from the file?

It's a good first go at a solution, but I think there are some additional considerations that need to be made:

  1. Reading 1 kB blocks at a time is going to be slow, particularly with media that has large artwork.

    My suggestion would be to allow buffers up to a certain size (e.g., 64 kB) to be allocated upfront, then grow the buffer exponentially up to some other point (e.g., 8 MB), after which grow it linearly.

    To complete the example, buffer growth could look like this: 64 kB, 128 kB, 256 kB, 512 kB, 1 MB, 2 MB, 4 MB, 8 MB, 12 MB, 16 MB, etc.

  2. A weakness of the approach in general is that it won't help in the case of very large files (GBs), or live streaming (infinite length).

    An approach I was thinking of taking was stubbed out by symphonia_core::meta::Limit. Basically there would be a configurable limit that could be consulted before reading large chunks of data from the file. If the limit would be exceeded, a LimitError would be returned.
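The growth schedule in point 1 could be sketched like this (thresholds are the example values from this comment, not a finalized API):

```rust
/// Illustrative growth schedule: allocate up to 64 kB immediately,
/// double until 8 MB, then grow linearly in 4 MB steps.
fn next_capacity(current: usize) -> usize {
    const UPFRONT: usize = 64 * 1024; // allocated upfront
    const EXPONENTIAL_LIMIT: usize = 8 * 1024 * 1024; // double up to here
    const LINEAR_STEP: usize = 4 * 1024 * 1024; // then add this per step
    if current < UPFRONT {
        UPFRONT
    } else if current < EXPONENTIAL_LIMIT {
        current * 2
    } else {
        current + LINEAR_STEP
    }
}
```

Iterating from zero reproduces the example sequence: 64 kB, 128 kB, 256 kB, 512 kB, 1 MB, 2 MB, 4 MB, 8 MB, 12 MB, 16 MB.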

What do you think?

@pdeljanov (Owner)

Hey @roderickvd, that option controls Symphonia's read-ahead/seekback buffer.

The documentation for MediaSourceStream should give a decent explanation about how the buffering works since it's a bit complicated.

Generally, you shouldn't need to modify it.

@roderickvd

Am I correct that it is not used anywhere today? If there's anything I can tweak to improve librespot streaming or seeking performance then I'm all game of course 😆

@pdeljanov (Owner)

> Am I correct that it is not used anywhere today? If there's anything I can tweak to improve librespot streaming or seeking performance then I'm all game of course 😆

No, it is used. However, it will not affect seek performance, since MediaSourceStream does not try to fill the entire buffer on seeks or initialization. It buffers data in progressively bigger chunks, from 1 kB up to a maximum of 32 kB.

Are you having issues with seek performance?

The problem with OGG is that it's primarily a streaming format and the proper way to seek is either a linear scan or a bisection search of the file for the desired timestamp. I've implemented the latter, but this method does perform a lot of seeks.

I've read that Spotify adds a non-standard seek index packet to their OGG files to reduce the number of seeks. If you have any docs on that I could look into implementing support for it.

Other than that, there are a few constants in MediaSourceStream that could be tuned (such as that starting chunk size), but I don't think it would improve things too much, if at all.
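The progressive chunking described here could be sketched as follows (the 1 kB and 32 kB constants are from this thread, not necessarily the exact values in MediaSourceStream):

```rust
/// Read-ahead chunk sizes double from a 1 kB floor up to a 32 kB ceiling.
fn next_chunk_size(prev: usize) -> usize {
    const MIN: usize = 1024;      // 1 kB starting chunk
    const MAX: usize = 32 * 1024; // 32 kB maximum chunk
    (prev * 2).clamp(MIN, MAX)
}
```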

@5225225 (Contributor, Author) commented Oct 12, 2022

> If I understand this PR, the motivation is to protect against malicious files from allocating more bytes than could be read from the file?

Yes.

The solution here mainly protects against "lol you were sent a 50 kB file that says it's 100 GB, you're going to OOM now", and doesn't make any other behavioral changes.

I was originally planning to do a more clever scheme where we allow the decoder to ask the source how much is left, but that doesn't play nice with streaming sources.

I could just bump the limit here up to 1 MB, which should be more reasonable.

And yeah, this doesn't help with long files/streaming sources being too long. A limit might be smart, in that case. Though what would, say, a CLI player do? How do other players handle Very Large files? I forget if we require the whole file to fit in memory at once if we're playing it, but I assume we don't?

@roderickvd commented Oct 12, 2022

> Am I correct that it is not used anywhere today? If there's anything I can tweak to improve librespot streaming or seeking performance then I'm all game of course 😆
>
> No, it is used. However, it will not affect seek performance since MediaSourceStream does not try to fill the entire buffer on seeks or initialization. It buffers data in progressively bigger chunks from 1 kB up to a maximum of 32 kB.

My bad, I could not find any occurrences doing a quick search.

> Are you having issues with seek performance?

Not issues per se, but I have been working on optimising it. In librespot <= 0.4 we were downloading 16 kB chunks over Mercury (which is cheap to set up, being headerless), so seek performance was never an issue. Now that we have switched to HTTPS things are different, and it's a balancing act between larger chunk sizes (better throughput, less overhead) and latency (when seeking).

Right now I have tuned it so that it initially grabs 64 kB chunks (a figure I took from buffer_len) until it reaches the desired start position. Then it continues to grow the download chunk size as network performance permits, keeping each chunk within ~1 second to maintain acceptable seeking performance.

> I've read that Spotify adds a non-standard seek index packet to their OGG files reduce the number of seeks. If you have any docs on that I could look into implementing support for it.

Spotify does prepend the file with its own Ogg header which is 166 bytes in length. Right now we get the normalisation values from it (they are not there with normal ReplayGain tags, but as specific bytes), then skip ahead and act as if byte 168 were actually byte 0 and we feed that to Symphonia.

This is the Spotify Ogg header deciphered as far as I know: https://sourceforge.net/p/despotify/code/346/tree/java/trunk/src/main/java/se/despotify/client/player/SpotifyOggHeader.java

@pdeljanov (Owner)

> I was originally planning to do a more clever scheme where we allow the decoder to ask the source how much is left, but that doesn't play nice with streaming sources.

Yeah, unfortunately streaming sources throw a wrench into many ideas.

> And yeah, this doesn't help with long files/streaming sources being too long. A limit might be smart, in that case. Though what would, say, a CLI player do?

If a limit is hit then the file would just be unreadable by Symphonia. The app developer would need a way to let the user bump the limit or disable limits altogether. For the most part, there are reasonable limits we could apply to metadata that would probably never be reached, yet would not strain modern systems.

> How do other players handle Very Large files?

Generally they will just blindly allocate/load the file. The idea of a limit in this case would be an added safety feature of Symphonia beyond the typical Rust memory safety.

> I forget if we require the whole file to fit in memory at once if we're playing it, but I assume we don't?

Symphonia doesn't require the entire file to be loaded into memory.

Generally, it only keeps a 32 kB buffer (by default; configurable via buffer_len) of the file in memory at any given time. Some demuxers can request this be increased at runtime if required (e.g., OGG needs 64 kB when streaming).


Looping back to the original problem though.

I think the limit system as described is probably the way to go for metadata since it gives users flexibility. But metadata is also the only user of read_bytes_exact/read_boxed_slice_exact that needs to be protected in this way, so the solutions seem mutually exclusive.

Another thing we should consider is having an option to skip a metadata item that's too large instead of throwing an error. Since metadata needs to be loaded into memory, memory-constrained devices may want to skip large metadata items rather than make the file unplayable.
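A hypothetical sketch of such a limit check with a skip option (the names `check_limit`, `Action`, and the enum shape are illustrative, only loosely modeled on the stubbed-out symphonia_core::meta::Limit, not its actual API):

```rust
enum Limit {
    None,           // no limit configured
    Maximum(usize), // largest allowed allocation in bytes
}

#[derive(Debug, PartialEq)]
enum Action {
    Read,       // within the limit, read normally
    SkipItem,   // too large, but skip just this metadata item
    RejectFile, // too large, and skipping is not allowed
}

/// Consult the limit before reading a large chunk of metadata.
fn check_limit(limit: &Limit, requested: usize, skip_oversized: bool) -> Action {
    match limit {
        Limit::None => Action::Read,
        Limit::Maximum(max) if requested <= *max => Action::Read,
        Limit::Maximum(_) if skip_oversized => Action::SkipItem,
        Limit::Maximum(_) => Action::RejectFile,
    }
}
```

With `skip_oversized` enabled, an oversized artwork block would be skipped and the rest of the file would stay playable.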

@pdeljanov (Owner)

Hey @roderickvd,

We should probably move this discussion to either a new Issue or a Discussion thread. Would you like to start one?

> Not issues perse but I have been at working optimising it. In librespot <= 0.4 we have been downloading 16 kB chunks over Mercury (which is cheap to setup headerless), so seek performance never was an issue. Now that we switched to HTTPS things are different, and it's a balancing act between larger chunk sizes (better throughput, less overhead) and latency (when seeking).
>
> Right now I have tuned it so that it initially grabs 64 kB chunks (which I took from buffer_len) until it is at the desired start position. Then it continues to grow download chunk sizes as network performance permits within ~1 second to keep acceptable seeking performance.

I see. I assume you're using range requests now?

You could try just using the read length Symphonia issues your Reader. It'll grow from 1 kB to a maximum of 32 kB.

Probably the most ideal solution would be to have a background thread download the file into a cache from which Symphonia can read. On a seek, you could immediately move to wherever Symphonia requests.

You'd probably need to track the ranges you have cached, but this would probably be an improvement.
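A minimal sketch of such range tracking (illustrative, not librespot's actual code; `RangeSet` and its methods are hypothetical names):

```rust
use std::collections::BTreeMap;

/// Tracks which byte ranges of the remote file are already cached.
struct RangeSet {
    // start -> end (exclusive); kept non-overlapping by merging on insert
    ranges: BTreeMap<u64, u64>,
}

impl RangeSet {
    fn new() -> Self {
        Self { ranges: BTreeMap::new() }
    }

    /// Record that bytes [start, end) have been downloaded.
    fn insert(&mut self, start: u64, end: u64) {
        let (mut start, mut end) = (start, end);
        // Find ranges that overlap or touch [start, end) and merge them in.
        let keys: Vec<u64> = self
            .ranges
            .range(..=end)
            .filter(|(_, &e)| e >= start)
            .map(|(&s, _)| s)
            .collect();
        for k in keys {
            let e = self.ranges.remove(&k).unwrap();
            start = start.min(k);
            end = end.max(e);
        }
        self.ranges.insert(start, end);
    }

    /// Is the whole of [start, end) already cached?
    fn contains(&self, start: u64, end: u64) -> bool {
        self.ranges
            .range(..=start)
            .next_back()
            .map_or(false, |(_, &e)| e >= end)
    }
}
```

Before serving a read to Symphonia, the reader would check `contains` and either serve from the cache or wait for (or prioritize) the missing range.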

Ultimately though, the fastest solution would be to decode the Spotify seek table, since then instead of many seeks we would only need one.

> I've read that Spotify adds a non-standard seek index packet to their OGG files reduce the number of seeks. If you have any docs on that I could look into implementing support for it.

> Spotify does prepend the file with its own Ogg header which is 167 bytes in length. Right now we get the normalisation values from it (they are not there with normal ReplayGain tags, but as specific bytes), then skip ahead and act as if byte 168 were actually byte 0 and we feed that to Symphonia.
>
> This is the Spotify Ogg header deciphered as far as I know: https://sourceforge.net/p/despotify/code/346/tree/java/trunk/src/main/java/se/despotify/client/player/SpotifyOggHeader.java

This is just a guess from looking at the code, but...

It looks like the /* Table Lookup */ section reads 100 bytes from the packet and uses each of those bytes to index into a lookup table that specifies a length value. On each iteration, the length value is added to an accumulator, and the accumulator's value is stored at that iteration's index in the output table.

So, based on this, I assume the seek table splits the file into 100 segments and stores the byte (or kB) offset to the start of each segment in the table. This is my hypothesis, but perhaps there's more to it.
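The hypothesis above could be sketched like this (a guess at the despotify logic, not a verified format; the length table contents are a placeholder):

```rust
/// Decode the guessed Spotify seek table: 100 one-byte indices into a
/// per-byte length table, accumulated into cumulative segment offsets.
fn decode_seek_table(packet: &[u8; 100], length_table: &[u32; 256]) -> [u32; 100] {
    let mut offsets = [0u32; 100];
    let mut acc = 0u32;
    for (i, &byte) in packet.iter().enumerate() {
        acc += length_table[byte as usize];
        offsets[i] = acc; // cumulative offset for segment i
    }
    offsets
}
```

If this guess holds, seeking to a timestamp would just mean picking the segment covering it and issuing a single seek to its offset.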

@roderickvd

> We should probably move this discussion to either a new Issue or a Discussion thread. Would you like to start one?

#158
