
Conversation

@xantoz
Member

@xantoz xantoz commented Jun 29, 2019

I'm putting this here as a reference so we can see and discuss how/what we want to include, what can be backported, etc., or just for curious people to play with the backwards playback :)

This of course includes some of the more controversial commits, such as the one removing support for DVB, DVD, Blu-ray etc. (as discussed in #6127), so that might be a good starting point. Personally I feel that the path of least resistance at this point would be to investigate the amount of work needed to bring the removed features we want back on top of this branch, given how much the demuxing code has changed since.

wm4 added 30 commits September 21, 2018 15:51
I misunderstood how this extension works. If I understand it correctly
now, it's worse than I thought. The key thing is that the (ust, msc,
sbc) triple is not for a single swap event. Instead, (ust, msc) run
independently from sbc. Assuming a CFR display/compositor, this means
you can at best know the vsync phase and frequency, but not the exact
time an sbc changed value.

There is GLX_INTEL_swap_event, which might work as expected, but it has
no EGL equivalent (while GLX_OML_sync_control does, in theory).
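
For illustration only (not code from this commit; names are made up), estimating the vsync period and phase from two (ust, msc) samples could look roughly like this:

```c
// Hypothetical sketch: derive the vsync period and phase from two (ust, msc)
// samples as returned by glXGetSyncValuesOML()-style queries. ust is a time in
// microseconds, msc a vsync counter; sbc is deliberately ignored, since it does
// not correspond to an exact swap time (see above).
#include <stdint.h>

struct vsync_est {
    int64_t period_us;  // estimated duration of one vsync
    int64_t phase_us;   // ust of the most recent known vsync boundary
};

static void vsync_estimate(struct vsync_est *est,
                           int64_t ust0, int64_t msc0,
                           int64_t ust1, int64_t msc1)
{
    if (msc1 > msc0) {
        est->period_us = (ust1 - ust0) / (msc1 - msc0);
        est->phase_us = ust1;   // ust of vsync number msc1
    }
}
```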

Redo the context_glx sync code. Now it's either more correct or less
correct. I wanted to add proper skip detection (if a vsync gets skipped
due to rendering taking too long and other problems), but it turned out
to be too complex, so only some unused fields in vo.h are left of it.
The "generic" skip detection has to do.

The vsync_duration field is also unused by vo.c.

Actually this seems to be an improvement. In cases where the flip call
timing is off, but the real driver-level timing apparently still works,
this will not report vsync skips or higher vsync jitter anymore. I could
observe this with screenshots and fullscreen switching. On the other
hand, maybe it just introduces an A/V offset or so.

Why the fuck can't there be a proper API for retrieving these
statistics? I'm not even asking for much.
Useful in particular with ytdl, where you never know what you get.
struct stream used to include the stream buffer, including peek buffer,
inline in the struct. It could not be resized, which means the maximum
peek size was set in stone. This meant demux_lavf.c could peek only so
much data.

Change it to use a dynamic buffer. Because it's possible, keep the
inline buffer for default buffer sizes (which are basically always used
outside of file opening). It's unknown whether it really helps with
anything. Probably not.

This is also the fallback plan in case we need something like the old
stream cache in order to deal with mp4 + unseekable http: the code can
now be easily changed to use any buffer size.
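
As a generic illustration of the idea (not the actual struct stream code; sizes and names are invented), a growable buffer with a small inline fallback might look like:

```c
// Sketch of a growable read/peek buffer: a small inline buffer covers the
// default case, larger peeks move to a heap allocation.
#include <stdlib.h>
#include <string.h>

#define INLINE_BUF_SIZE (64 * 1024)     // invented default size

struct buf {
    unsigned char *data;                // points at inline_data or heap memory
    size_t alloc, len;
    unsigned char inline_data[INLINE_BUF_SIZE];
};

static int buf_reserve(struct buf *b, size_t need)
{
    if (!b->data) {
        b->data = b->inline_data;
        b->alloc = sizeof(b->inline_data);
    }
    if (need <= b->alloc)
        return 0;
    unsigned char *n = malloc(need);
    if (!n)
        return -1;
    memcpy(n, b->data, b->len);
    if (b->data != b->inline_data)
        free(b->data);
    b->data = n;
    b->alloc = need;
    return 0;
}
```
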
For those shitty mp3s with extremely large ID3v2/APIC tags, and for
which libavformat insists on reading all data until after the ID3v2.
This warns about player/audio.c:253 with gcc 8.2.0. Although this
warning could be useful to check the worst case estimation, the compiler
doesn't explain how it gets its dumb, bogus result, so this is useless.
You'd just end up trying to make the compiler happy for no reason.
I'd actually very much encourage demuxer implementations outside crappy
libavformat.
Seems to happen often with ytdl pseudo-DASH streams, so whatever. I
couldn't reproduce it and check what triggers it, I just remember seeing
the error message and found it annoying.
I found this sort of annoying.

You could argue that the "frontend" should maybe contain this logic, but
who cares.
If the number of chapters is 0, the chapter list can be NULL. clang
complains that we pass NULL to qsort(). This is yet another pointless UB
that exists for no reason other than wasting your time.
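
The guard itself is trivial; schematically (a generic helper, not mpv's actual code):

```c
#include <stdlib.h>

// qsort() with a NULL base pointer is UB even if the element count is 0,
// so simply skip the call in that case.
static void sort_checked(void *base, size_t nmemb, size_t size,
                         int (*cmp)(const void *, const void *))
{
    if (nmemb > 0)
        qsort(base, nmemb, size, cmp);
}
```
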
Bug caused by a recent refactor. The function is not supposed to unlock
the mutex anymore, but the unlock was forgotten on an obscure code path.
This could sometimes lead to crashes, depending on libc behavior (when
an unlocked mutex was unlocked again, or it tries to unlock a mutex
locked by a different thread).

None of those fancy debug tools could tell me that. I found it by
switching to an error checking mutex and wrapping pthread calls into a
macro that checks their return values.
The pointer could be NULL if the number of bytes to copy was 0. In a
sane world, this would be fine, but not the current world.
libcve (libarchive is the official name I think?) has this annoying
behavior that if after a "fatal" error, you do any operation on the
archive context other than querying the error and closing the context,
you get a free CVE. So we close the archive context in these situations.
This can set p->mpa to NULL, so code accessing this field needs to be
careful.

This was not considered in a certain code path, and a simple truncated
.rar file made it crash. Part of the problem was that the file inside
the rar was a mkv file, which triggered seeking when the demux_mkv
resync code encountered bogus data.

This is probably a regression from a relatively recent change to this
code (in any case mpv 0.29.1 doesn't crash).

Fix this by adding the check.

There's also a mechanism to reopen an archive context used to emulate
seeking, since most libarchive format handlers don't support this
natively. Add a reopen call to the codepath, because obviously it should
always be possible to seek back into a "working" area of the file.

There is a second bug with this: if reopening fails, we don't adjust the
current position back to 0, which in some cases means we accidentally
return bogus data to the reader when we shouldn't. Fix this by always
resetting the position on reopening.
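
For illustration, the "close on fatal error, then reopen" rule looks roughly like this with plain libarchive (generic usage, not mpv's stream_libarchive.c):

```c
// After ARCHIVE_FATAL, the only safe operations on the context are error
// queries and archive_read_free(); everything else must go through a freshly
// opened context.
#include <archive.h>
#include <archive_entry.h>
#include <stdio.h>

static struct archive *open_archive(const char *path)
{
    struct archive *a = archive_read_new();
    archive_read_support_format_all(a);
    archive_read_support_filter_all(a);
    if (archive_read_open_filename(a, path, 65536) != ARCHIVE_OK) {
        archive_read_free(a);
        return NULL;
    }
    return a;
}

// Read some entry data (assumes archive_read_next_header() was already
// called). On a fatal error, destroy the context and reopen from scratch;
// the caller must also reset its notion of the current position to 0.
static la_ssize_t safe_read(struct archive **pa, const char *path,
                            void *buf, size_t n)
{
    la_ssize_t r = archive_read_data(*pa, buf, n);
    if (r == ARCHIVE_FATAL) {
        fprintf(stderr, "fatal: %s\n", archive_error_string(*pa));
        archive_read_free(*pa);
        *pa = open_archive(path);
        return -1;
    }
    return r;
}
```
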
EDLs can be provided either as external file, or "inline" as a big
edl:// URL. There is no difference between them, except if it's loaded
from an external file, there is some weird filename sanitization going on
(see fix_filenames() in demux_edl.c). It seems this is intended to be a
security mechanism, but probably makes no sense at all.

Note that playlists are allowed to access anything locally. One
difference to playlists is that the EDL code lacks the "security"
mechanism when accessing playlist entries (see handling of the
playlist_entry.stream_flags field - EDL would need something similar),
so don't remove that, as I'm unaware of the exact consequences.
This commit adds an extension to mpv EDL, which basically allows you to
do the same as --audio-file, --external-file, etc. in a single EDL file.
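
Purely as an illustration (the header name used here is an assumption, not necessarily the final syntax), such an EDL file could look something like:

```
# mpv EDL v0
video.mp4
!new_stream
audio.m4a
```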

This is a relatively quick & dirty implementation. The dirty part lies
in the fact that several shortcuts are taken. For example, struct
timeline now forms a singly linked list, which is really weird, but also
means the other timeline using demuxers (cue, mkv) don't need to be
touched. Also, memory management becomes even worse (weird object
ownership rules that are just fragile WTFs). There are some other
dubious small changes, mostly related to the weird representation of
separate streams.

demux_timeline.c contains the actual implementation of the separate
stream handling. For the most part, most things that used to be on the
top level are now in struct virtual_source, of which one for each
separate stream exists. This is basically like running multiple
demux_edl.c in parallel. Some changes could strictly speaking be split
into a separate commit, such as the stream_map type change.

Mostly untested. Seems to work for the intended purpose. Potential for
regressions for other timeline uses (like ordered chapters) is probably
low. One thing which could definitely break and which I didn't test is
the pseudo-DASH fragmented EDL code, of which ytdl can trigger various
forms in obscure situations. (Uh why don't we have a test suite.)

Background:

The intention is to use this for the ytdl wrapper. A certain streaming
site from a particularly brain damaged and plain evil Silicon Valley
company usually provides streams as separate audio and video streams.
The ytdl wrapper simply uses audio-add (i.e. adding it as an external
track, like with --audio-file), which works mostly fine. Unfortunately,
mpv manages caching completely separately for external files. This has
the following potential problems:

1. Seek ranges are rendered incorrectly. They always use the "main"
stream, in this case the video stream. E.g. clicking into a cached range
on the OSC could trigger a low level seek if the audio stream is
actually not cached at the target position.

2. The stream cache bloats unnecessarily. Each stream may allocate the
full configured maximum cache size, which is not what the user intends
to do. Cached ranges are not pruned the same way, which creates disjoint
cache ranges, which only use memory and won't help with fast seeking or
playback.

3. mpv will try to aggressively read from both streams. This is done
from different threads, with no regard which stream is more important.
So it might happen that one stream starves the other one, especially if
they have different bitrates.

4. Every stream will use a separate thread, which is an unnecessary
waste of system resources.

In theory, the following solutions are available (this commit works
towards D):

A. Centrally manage reading and caching of all streams. A single thread
would do all I/O, and decide from which stream it should read next. As
long as the total TCP/socket buffering is not too high, this should be
effective to avoid starvation issues. This can also manage the cached
ranges better. It would also get rid of the quite useless additional
demuxer threads. This solution is conceptually simple, but requires
refactoring the entire demuxer middle layer.

B. Attempt to coordinate the demuxer threads. This would maintain a
shared cache and readahead state to solve the mentioned problems
explicitly. While this sounds simple and like an incremental change,
it's probably hard to implement and creates more messy special cases;
solution A. seems like just a better and simpler variant of this. (On the
other hand, A. requires refactoring more code.)

C. Render an intersection of the seek ranges across all streams. This
fixes only problem 1.

D. Merge all streams in a dedicated wrapper demuxer. The general demuxer
layer remains unchanged, and reading from separate streams is handled as
special case. This effectively achieves the same as A. In particular,
caching is simply handled by the usual demuxer cache layer, which sees
the wrapper demuxer as a single stream of interleaved packets. One
implementation variant of this is to reuse the EDL infrastructure, which
this commit does.

All in all, solution A would be preferable, because it's cleaner and
works for all external streams in general.

Some previous commit tried to prepare for implementing solution A. This
could still happen. But it could take years until this is finally
seriously started and finished. In any case, this commit doesn't block
or complicate such attempts, which is also why it's the way to go.

It's worth mentioning that original mplayer handles external files by
creating a wrapper demuxer. This is like a less ideal mixture of A. and
D. (The similarity with A. is that extending the mplayer approach to be
fully dynamic and without certain disadvantages caused by the wrapper
would end up with A. anyway. The similarity with D. is that due to the
wrapper, no higher level code needs to be changed.)
The ytdl wrapper can resolve web links to playlists. This playlist is
passed as big memory:// blob, and will contain further quite normal web
links. When playback of one of these playlist entries starts, ytdl is
called again and will resolve the web link to a media URL again.

This didn't work if playlist entries resolved to EDL URLs. Playback was
rejected with a "potentially unsafe URL from playlist" error. This was
completely weird and unexpected: using the playlist entry directly on
the command line worked fine, and there isn't a reason why it should be
different for a playlist entry (both are resolved by the ytdl wrapper
anyway). Also, if the only EDL URL was added via audio-add or sub-add,
the URL was accessed successfully.

The reason this happened is that the playlist entries were marked as
STREAM_SAFE_ONLY, and edl:// is not marked as "safe". Playlist entries
passed via command line directly are not marked, so resolving them to
EDL worked.

Fix this by making the ytdl hook set load-unsafe-playlists while the
playlist is parsed. (After the playlist is parsed, and before the first
playlist entry is played, file-local options are reset again.) Further,
extend the load-unsafe-playlists option so that the playlist entries are
not marked while the playlist is loaded.

Since playlist entries are already verified, this should change nothing
about the actual security situation.

There are now 2 locations which check load_unsafe_playlists. The old one
is a bit redundant now. In theory, the playlist loading code might not
be the only code which sets these flags, so keeping the old code is
somewhat justified (and in any case it doesn't hurt to keep it).

In general, the security concept sucks (and always did). I can for
example not answer the question whether you can "break" this mechanism
with various combinations of archives, EDL files, playlists files,
compromised sites, and so on. You probably can, and I'm fully aware that
it's probably possible, so don't blame me.
This merges separate audio and video tracks into one virtual stream,
which helps the mpv caching layer. See previous EDL commit for details.

It's apparently active for most of evil Silicon Valley giant's streaming
videos.

Initial tests seem to work fine, except it happens pretty often that
playback goes into buffering immediately even when seeking within a
cached range, because there is not enough forward cache data yet to
fully restart playback. (Or something like this.)

The audio stream title used to be derived from track.format_note; this
commit stops doing so. It seemed pointless anyway. If really necessary,
it could be restored by adding new EDL headers.

Note that we explicitly don't do this with subtitle tracks. Subtitle
tracks still have a chance with on-demand loading or loading in the
background while video is already playing; merging them with EDL would
prevent this. Currently, subtitles are still added in a "blocking"
manner, but in theory this could be loosened. For example, the Lua API
already provides a way to run processes asynchronously, which could be
used to add subtitles during playback. EDL will probably never be
flexible enough to provide this. Also, subtitles are downloaded at
once, rather than streamed like audio and video.

Still missing: disabling EDL's pointless chapter generation, and
propagating download speed statistics through the EDL wrapper.
EDL "headers" were always an afterthought, and kind of hacked on top of
the existing code. Improve it slightly, and make it follow the
conventions of the normal parsing. Basically use the same code structure
for them, just that they use different field names.
I think this is better. On the other hand, this is a behavior change.
The EDL "spec" says that unknown fields are igored. But strictly
speaking, unknown headers are not "fields", but unknown entities.
(Yes, a bit odd how this header is needed only for the first stream.)
It was an ugly hack, and the next commit will make it even uglier.
Slightly reduce the ugliness to prevent death of too many brain cells,
though it's still an ugly hack.

The cleanup is really minor, but I guess the following commit would be
much worse otherwise. In particular, this commit checks accesses
(instead of having a public field with evil access rules), which should
avoid misunderstandings and incorrect use. Strictly speaking, the added
field is redundant, but the next commit complicates it a bit.
demux_timeline doesn't do any transport accesses itself. The slave
demuxers do this (these will actually access the stream layer and
perform e.g. network accesses). As a consequence, demux_timeline always
reported 0 bytes read, and network speed display didn't work.

Fix this by awkwardly reporting the amount of read bytes upwards. This
is not very nice, and requires explicit calls whenever the slave "might"
have read data.

Due to the way the reporting is done, it only works if the slaves do not
run demuxer threads, which makes things even less nice. (Fortunately
they don't anyway, because it would be a waste of resources.) Some
identifiers contain the word "hack" as a warning.

Some of the stupidity comes from the fact that demux.c itself resets the
stats randomly in order to calculate the bytes_per_second value, which
is useless for a slave, but of course is still done, because demux.c
itself is not aware of whether it's on the slave or top-level layer.

Unfortunately, this must do.

In theory, the demuxer thread/cache layer should be separated from
demuxer implementations. This would get rid of all the awkwardness and
nonsense. For example, the only threading involved would be the caching
layer, completely separate from demuxers themselves. It'd be the only
thing that calculates speed rates for the player frontend, too (instead of
doing it for each demuxer, even if unused).
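
Schematically, the reporting amounts to something like this (names invented, not the actual identifiers):

```c
#include <stdint.h>

// Forward the slave demuxer's read statistics to the top-level demuxer.
// This only works because the slaves run without their own demuxer threads.
struct slave {
    uint64_t bytes_read;        // total bytes the slave demuxer has read
    uint64_t bytes_reported;    // portion already forwarded upwards
};

struct stats {
    uint64_t bytes_read;        // what the top-level demuxer reports
};

// Call whenever the slave "might" have performed I/O (e.g. right after
// reading a packet from it), and forward only the delta.
static void update_slave_stats(struct stats *top, struct slave *s)
{
    if (s->bytes_read > s->bytes_reported) {
        top->bytes_read += s->bytes_read - s->bytes_reported;
        s->bytes_reported = s->bytes_read;
    }
}
```
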
This fixes a weird delay ("buffering") when seeking into
the last part of a seekable range. The exact case that triggers it is when
SEEK_FORWARD is used, and the seek pts is after the second-last
keyframe, but before the end of the range. In that case,
find_seek_target() returned NULL, and the cache layer waited for the
_next_ keyframe from the underlying demuxer before resuming playback.

find_seek_target() returned NULL, because the last keyframe had
kf_seek_pts unset. This field contains the lowest PTS in the packet
range from the keyframe until the next keyframe (or EOF). For normal
seeks, this is needed because keyframes don't necessarily have the
minimum PTS in the packet range, so it needs to be computed by waiting
for all packets until the next keyframe (or EOF).

Strictly speaking, this behavior was correct, but it meant that the
caller would set ds->skip_to_keyframe, which waits for the next newly
demuxed keyframe. No packets were returned to the decoder until this
happened, usually resulting in the frontend entering "buffering" mode.

What it really needs to do is return the last keyframe in the cache.
In this situation, the seek target points in the middle of the last
completely cached packet range (as delimited by keyframes), and
SEEK_FORWARD is supposed to skip to the next keyframe. This is in line
with the basic assumptions the packet cache makes (e.g. the keyframe
flag means it's possible to start decoding, and the frames decoded from
it and following packets will strictly have PTS values above the
previous keyframe range). This means in this situation the kf_seek_pts
value doesn't matter either.

So fix this situation by explicitly detecting it and then returning the
last cached keyframe.
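
Schematically (simplified, invented identifiers, not the real demux.c code), the added fallback looks like this:

```c
// find_keyframe_by_pts() stands in for the existing search over keyframes
// with a known kf_seek_pts; SEEK_FORWARD is mpv's forward-seek flag (its
// value here is made up).
#include <stddef.h>

#define SEEK_FORWARD 1

struct demux_packet;
struct demux_queue {
    struct demux_packet *keyframe_latest;   // newest cached keyframe
    double seek_start, seek_end;            // cached/seekable range
};

struct demux_packet *find_keyframe_by_pts(struct demux_queue *q,
                                          double pts, int flags);

static struct demux_packet *find_seek_target(struct demux_queue *q,
                                             double pts, int flags)
{
    struct demux_packet *target = find_keyframe_by_pts(q, pts, flags);

    // The fix: a forward seek into the last cached keyframe range (whose
    // kf_seek_pts is still unknown) used to return NULL here, making the
    // caller wait for newly demuxed data; return the latest cached keyframe
    // instead.
    if (!target && (flags & SEEK_FORWARD) &&
        pts >= q->seek_start && pts <= q->seek_end)
        target = q->keyframe_latest;

    return target;
}
```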

Should the search loop look at all packets, instead of only keyframe
ones? This would mean it can know that it's within the last keyframe
range (without looking at queue->seek_end). Maybe this would be a bit
more natural for the SEEK_FORWARD case, but due to PTS reordering it
doesn't sound like a useful thing to do.

Should skip_to_keyframe be checked by the code that sets kf_seek_pts to
a known value? This wouldn't help too much; the frontend would still go
into "buffering" mode for no reason until the packet range is completed,
although it would resume from the correct range.

Should a NULL return always unconditionally use keyframe_latest? This
makes sense because the seek PTS is usually already in the cached range,
so this is the only case that should happen. But there are scary special
cases, like sparse subtitle streams, or other uses of find_seek_target()
which could be out of range now or in future. Basically, don't "risk"
it.

One other potential problem with this is that the "adjust seek target"
code will be disabled in this case. It checks kf_seek_pts, and if it's
unset, the adjustment is not done. Maybe this could be changed to use
the queue's seek_end time, but I'm not sure if this is fully kosher. On
the other hand, I think the main use for this adjustment is with
backwards seeks, so this shouldn't matter.

A previous commit dealing with audio/video stream merging mentioned how
seeking forward entered "buffering" mode for unknown reasons; this
commit fixes this issue.
There isn't necessarily a segment opened, but this does not mean no
stream is selected.
Quite obviously the code was stupid garbage.
Normal EDL needs to clip packets coming from the underlying demuxer to
the segment range (including complicated stuff due to frame reordering).
This is unwanted in pseudo-DASH mode. A broken or subtly incorrect
manifest would lead to "bad stuff" happening. The intention of the
pseudo-DASH mode is to literally concatenate fragments.
wm4 and others added 10 commits July 10, 2019 21:34
This is the muxer used by all 3 stream recording features (why are there
so many?). It tried hard to avoid writing broken files. In particular,
it buffered packets until it knew there was a keyframe packet (which, in
mpv's/FFmpeg's definition, means a seek point from which decoding can
resume), or final EOF. The danger that was probably considered here was
that due to video frame reordering, not muxing some trailing, missing
packets of a keyframe range could lead to broken decoding or skipped
frames, so better discard packets belonging to an incomplete range.
Sounds like a good idea so far.

Unfortunately, this will drop an entire keyframe range even if the
current packet run is complete and mp_recorder_mark_discontinuity() is
called, simply because recorder.c can not know that the next packet
would have been a keyframe.

It seems better to mux all packets to avoid losing valid data, even if
it means that sometimes packets/frames will be missing from the file. It
benefits especially the dump-cache command, which will call the function
to signal a discontinuity after every range. Before this commit, it
discarded the last packets, even if they were perfectly fine.

(An alternative solution for dump-cache would have been a second
discontinuity marker function, that communicates that the current packet
range is complete. But this commit's solution is simpler and overall
more robust, at the danger of producing more semi-broken files.)

This may make some of the complex buffering/waiting logic in recorder.c
pointless.

Untested (in this final form).
Helper for the ab-loop-dump-cache command, see manpage additions.

This is kind of shit. Not only is this a very "special" feature, but it
also vomits more messy code into the big and already bloated demux.c,
and the implementation is sort of duplicated with the dump-cache code.
(Except it's different.) In addition, the results sort of depend on what a
video player would do with the dump-cache output, or what the user wants
(for example, a user might be more interested in the range of output
audio, instead of the video).

But hey, I don't actually need to justify it. I'm only justifying it for
fun.
Dear diary,

today I fixed a shitty bug that was all my fault because I made a
horrible mess. (Except it was a horrible mess before I even touched
this shit, but let's not blame others.)

Sometimes, updates to VO options that control video sizing (like panscan)
didn't update the screen correctly. They were delayed until the next
option change or so.

It turns out that if the option update happens at the "same" time as a
VOCTRL, update_opts() doesn't actually notify the vo_driver of the
change. This in turn happened because run_control() called
m_config_cache_update(). The latter function returns true if the options
changed since the last call, and update_opts() also calls it (on the
same config cache) for the same purpose. The update_opts() call, which
is triggered by a third mechanism, comes later, but the cache update
call will return false (as it should). Basically, given the config API,
you can't act differently on multiple update calls and expect it to
work. The skipped handling in update_opts() meant that the notification
required to apply the changed option wasn't run.

Fix this by simply calling update_opts() directly instead. Now there's
only 1 m_config_cache_update() call on this specific instance. Fix the
call in run_reconfig() too, so the previous sentence isn't a lie (but it
probably doesn't make a difference in practice due to certain details).
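
The pitfall, schematically (the cache API declarations are abbreviated here from memory; this is not the actual vo.c code):

```c
#include <stdbool.h>

struct m_config_cache;
bool m_config_cache_update(struct m_config_cache *cache);  // real prototype lives in mpv's m_config headers

// The "changed" state is consumed by whichever caller polls the cache first,
// so two independent pollers of the same cache can't both react to a change.
static void handle_voctrl(struct m_config_cache *cache)
{
    if (m_config_cache_update(cache)) {
        // sync whatever the VOCTRL handler needs -- the flag is now consumed
    }
}

static void update_opts(struct m_config_cache *cache)
{
    if (m_config_cache_update(cache)) {
        // notify the vo_driver -- never runs if handle_voctrl() polled first
    }
}
```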

I'm not sure how I even ran into this sort-of race condition. The VOCTRL
that messed up the option update was VOCTRL_UPDATE_PLAYBACK_STATE, which
happens semi-regularly.

Why this config cache shit and all the other shit? Rediscovering this
crap wasn't pleasant. It's a bunch of hacks that became necessary when
the ancient MPlayer architecture made it hard to move the VO to a
separate thread.

All the VO code typically accesses vo->opts (whose fields all used to be
global variables in MPlayer). The frontend changes these on user input.
Putting locking around all the options would be a nightmare, and keeping
a copy of the options in the thread was much simpler. You need a way to
propagate option changes, notify the thread, and update the local copy
too. And the result of these thoughts was the config cache mechanism.

In this specific case, the relevant cache update call in update_opts()
triggers a VOCTRL_SET_PANSCAN to the VO driver, which isn't related to
its former function anymore. Instead, it causes the VO driver to update
the video sizing/placing options, which the generic VO code can't do.
(Mostly because the VO driver includes the windowing stuff and is
responsible for resizing etc. itself.)

VOCTRLs sent by the frontend are even worse. MPlayer had no real runtime
option change mechanism. Some options were vaguely duplicated by
properties, so you could effectively change those options at runtime.
Each of these options had its own VOCTRL, which still exist today, e.g.
VOCTRL_FULLSCREEN, or VOCTRL_ONTOP. I tried to make all options runtime
changeable, and to unify properties with options. But I couldn't be
bothered with updating all VO drivers to listen to option changes
directly, because that would be pretty tedious. So the property code is
still all there and sends the old VOCTRLs. But of course you need to
sync up the options, which is why the run_control() code did that.

(Unrelated: VO_EVENT_FULLSCREEN_STATE is the worst shithack of them all.
Currently, only the frontend can actually write to options (for awful
reasons), so if the fullscreen state changes due to outside interaction,
the VO driver can't update the corresponding option fields. So the VO
notifies the frontend with said VO_EVENT_, and the frontend then sends
VOCTRL_GET_FULLSCREEN, and updates the global copy of the option with
the value returned by that. I still like to think the situation is not
that bad considering the monstrous effort of converting single-threaded
code that had hundreds of options in global variables to multi-threaded
code with no global variables at all.)
This was one of those "shouldn't exist" type of functions that could
access internals that were supposed to be isolated away, but some code
needed to access it anyway.

It looks like the last use of it went away in 2016, shortly after it was
introduced.
Move the comments documenting exported functions to the header. It looks
like the header is the preferred place for that (although I don't really
appreciate headers where you lose the overview because of all the
documentation comments). Add comments to some undocumented prototypes.
Commit 3b74d28 obviously broke this. Although it's true that
there's no stream, there's still the hacked sub-demuxer reporting, which
matters for the final speed computation. This fixes in particular
reporting of speeds with the ytdl DASH hack.

Fixes: 3b74d28
Unused now. The old stream cache used it, but it was removed.

On a side note, the demuxer cache uses mp_mkostemps(). It looks like our
Windows open() emulation handles this correctly by using CREATE_NEW, so
no functionality gets lost by the "new" approach. On the other hand, the
demuxer cache does not set FILE_FLAG_DELETE_ON_CLOSE, but instead tries
to delete the file after opening (POSIX style), which probably won't
work on Windows. But I'm not sure how to make it use the DELETE_ON_CLOSE
flag, so whatever.
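
For reference, using the flag on the Win32 side would look roughly like this (generic Win32 usage, not a patch for mpv's open() emulation):

```c
#ifdef _WIN32
#include <windows.h>

// Open a temporary file that the OS deletes automatically when the last
// handle is closed, instead of relying on unlink-after-open (POSIX style).
static HANDLE open_temp_delete_on_close(const wchar_t *path)
{
    return CreateFileW(path,
                       GENERIC_READ | GENERIC_WRITE,
                       FILE_SHARE_READ | FILE_SHARE_DELETE,
                       NULL,
                       CREATE_NEW,      // fail if the file already exists
                       FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                       NULL);
}
#endif
```
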
If the source file had a timestamp offset, the ab-loop-align-cache would
return nonsense. Obviously, the offset needs to be removed when
returning the aligned timestamp.

Fixes: af752a1
@xantoz
Member Author

xantoz commented Jul 12, 2019

I'll try and keep this updated, anyway. It can at least serve as a place to go looking for things that might be useful to backport (as long as it doesn't touch one of the two AGPL-licensed files 😐 )

wm4 and others added 12 commits July 13, 2019 00:38
Yawn. If you want explanations, read /dev/random.
m_config has a m_config_option array, that is used for all option
access. The code maintaining shadow copies also tried to make use of it,
and did so by "cleverly" assigning each m_sub_options run a slice of
that array. But actually it's much simpler to, you know, directly access
the damn options.

This helps separate m_config and the general option code slightly.

Still seems to work after a superficial test, good enough.
Now m_config_shadow is fully independent from m_config (except for the
fact that m_config is still involved in its creation).
A previous commit changed m_config so that it always creates the shadow
thing, and the function's only remaining purpose was to initialize
mpv_global. It makes much more sense to do that at the caller, and it's
only 1 line of code too.
Just in case a confused user or developer finds this file and wonders
why it's "incomplete".
A dumb thing that the cursed property-option bridge accidentally did.

Normal deprecated options on the other hand are fine in the property
list, because they're wanted for compatibility.
It seems using multiple prefixes for an option isn't supported out of
laziness (and shouldn't, because what the fuck). So assert() on this.

(Unfortunately this prefix nonsense is still needed. Especially AO and
VO options use this through the options_prefix field.)
With commit 0d0d134 (and possibly some after that), m_config_data
(i.e. config->data) depends on m_config_shadow, instead of m_config. In
particular, free_option_data() accesses the m_config_shadow.groups
array. Obviously it must be freed before m_config_shadow.

Also explicitly free m_config_shadow.data before destroying the rest,
although this is not really required for this fix.

Fixes: 0d0d134 (probably, partially)
This matters when talloc allocations set destructors. Before this
commit, destructors were called in the same order as they were added to
the parent allocations. Now it happens in reverse order.

I think this makes more sense. It's reasonable to assume that an
allocation that was added later may depend on any of the previous
allocations, so later additions should be destroyed first. (Of course
other orders are entirely possible too.)

Hopefully this doesn't fix or break anything, but I can't be sure (about
either of those). It's risky. (Then why do it?)

The destructor of a parent allocation is called before its children. It
makes sense and must stay this way, because in most cases, the
destructor wants to access the children.

This is a reason why I don't really like talloc (it wasn't my idea to
use talloc, is my excuse). Quite possible that destructors should be
removed from talloc entirely. Actually, this project should probably be
rewritten in Rust (or a better language), but that would be even more of
a pain; also, I think this is just the right level of suffering and
punishment.
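
A toy illustration of the ordering change (the talloc_set_destructor signature here is from memory and may differ slightly from mpv's ta wrapper):

```c
#include <stdio.h>
#include "talloc.h"

static void destroy_first(void *p)  { (void)p; printf("added first\n"); }
static void destroy_second(void *p) { (void)p; printf("added second\n"); }

int main(void)
{
    void *parent = talloc_new(NULL);
    talloc_set_destructor(talloc_zero(parent, int), destroy_first);
    talloc_set_destructor(talloc_zero(parent, int), destroy_second);
    // Before this commit: "added first", then "added second".
    // After this commit: children are destroyed in reverse order of
    // attachment, so "added second" prints first.
    talloc_free(parent);
    return 0;
}
```
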
@ghost

ghost commented Sep 13, 2019

I pushed some of my removal commits.

@olifre ask if you need help with how to readd the dvb runtime control

@olifre
Member

olifre commented Sep 13, 2019

@wm4 Many thanks both for the work and keeping DVB in a functional state! I fully agree that dropping the stream_ctrl stuff is a good idea.
I'm currently slightly overloaded for at least two more weeks, but will get back to adding channel switching.

Do you have an example commit "at hand" which adds an option affecting a stream which can be changed at runtime? That would speed up my reading of the mechanism quite a bit.

EDIT: Of course, just an example option is also fine, I'd just love a good lead on any option affecting a stream more or less directly.

@ghost

ghost commented Sep 13, 2019

I couldn't find anything simple. Commit 6aad532 changed quite a lot of OSD-related code from accessing a global option struct to local copies (that are automatically updated in a safe way).

Basically you need to:

  1. replace mp_get_config_group with m_config_cache_alloc (actually the former is just a less powerful convenience wrapper for the latter)
  2. regularly poll m_config_cache_update (could also use a fancy notification mechanism, but likely not worth doing for this purpose)
  3. check which options have been possibly updated

This is uni-directional. Unlike the stream ctrls, it can't give feedback (other than log messages), and can't e.g. reset the global options on failure or anything like that. But this just follows the general way the command interface has been changed in recent years. Instead of making setting properties/options fail if they can't be applied at runtime, there should at most be something that reports the current state, independent of setters.
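
Very roughly, the steps above end up looking like this (declarations are abbreviated and partly hypothetical; check the real headers and option group names):

```c
#include <stdbool.h>

// Assumed/abbreviated declarations -- the real ones live in options/m_config.h;
// the DVB option group and its fields are hypothetical here.
struct mpv_global;
struct m_sub_options;
struct m_config_cache { void *opts; /* ... */ };
struct m_config_cache *m_config_cache_alloc(void *ta_parent,
                                            struct mpv_global *global,
                                            const struct m_sub_options *group);
bool m_config_cache_update(struct m_config_cache *cache);

struct dvb_opts { int channel; };                   // hypothetical option group
extern const struct m_sub_options stream_dvb_conf;  // hypothetical name

struct priv { struct m_config_cache *cache; int current_channel; };

// 1. allocate the cache once, at open time
static void dvb_init_opts(struct priv *p, struct mpv_global *global)
{
    p->cache = m_config_cache_alloc(NULL, global, &stream_dvb_conf);
}

// 2./3. poll regularly and react to options that may have changed
// (uni-directional: no way to report failure back to the option system)
static void dvb_poll_opts(struct priv *p)
{
    if (m_config_cache_update(p->cache)) {
        struct dvb_opts *opts = p->cache->opts;
        if (opts->channel != p->current_channel)
            p->current_channel = opts->channel;     // retune here
    }
}
```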

@olifre
Member

olifre commented Sep 13, 2019

@wm4 Many thanks! While the commit of course is pretty huge, I get the mechanics now.
I'll manage to give it a go either at the end of next week or two weeks from now (too many conferences at the moment which need preparation).

Uni-directional is perfectly fine in my opinion. On a side-note, I do regard the channel switching as a "bonus feature" of DVB in any case. Switching channels takes noticeably longer (it's actually the DVB hardware's tuning) than restarting mpv, so I doubt many people are "zapping" through channels.
My personal major use case is to select a channel and watch it (with filters etc.) or record / re-encode it live, so I rarely use the actual channel switching. Of course I'll spend time bringing channel switching back; there appears to be a user base for it apart from myself, and it does not seem too hard now that I've got the mechanics.

Thanks again! 😄.

@ghost

ghost commented Sep 21, 2019

Obsoleted.

@ghost ghost closed this Sep 21, 2019
This pull request was closed.