
Player APIs


NOT FINISHED. SEE SDK HEADERS.

Functions with callback(s) are async.

void* javaVM(void* vm = nullptr)

Deprecated since 0.11.0, use SetGlobalOption("jvm", vm) instead

Android only. Set a JavaVM*, or get the current value if vm is null. Required on Android to use AMediaCodec, MediaCodec, AudioTrack and Android I/O when System.loadLibrary("mdk") is not called.
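For example, a minimal sketch assuming initialization in the standard JNI_OnLoad entry point:

    #include "mdk/global.h"
    #include <jni.h>

    extern "C" JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM* vm, void*) {
        mdk::SetGlobalOption("jvm", vm); // replaces the deprecated javaVM(vm)
        return JNI_VERSION_1_6;
    }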

static void foreignGLContextDestroyed()

Release GL resources bound to the context.

  • MUST be called when a foreign OpenGL context previously used is being destroyed and the player object is already destroyed. The context MUST be current.
  • If the player object is still alive, setVideoSurfaceSize(-1, -1, ...) is preferred.
  • If you forget to call both foreignGLContextDestroyed() and setVideoSurfaceSize(-1, -1, ...) in the context, resources will be released at the next draw in the same context. But if the context is destroyed before that, the resources will never be released.
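A teardown sketch; makeCurrent()/doneCurrent() stand in for your GL toolkit's context calls:

    void onGLContextAboutToBeDestroyed() {
        makeCurrent(); // the dying context MUST be current
        if (player) // player still alive: preferred way to release GL resources
            player->setVideoSurfaceSize(-1, -1);
        else        // player already destroyed
            mdk::Player::foreignGLContextDestroyed();
        doneCurrent();
    }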

void setMedia(const char* url)

Set a new media url. If the url changes, the current playback is stopped, and active tracks and external tracks set by setMedia(url, type) are reset. Supported protocols/schemes are:

  • FFmpeg protocols. For avdevice inputs, the url is "avdevice://format:filename", for example "avdevice://dshow:video=USB2.0 HD UVC WebCam"
  • Android: content, android.resource, assets
  • iOS: assets-library
  • UWP/WinRT: winrt. It's a custom protocol; the url format is winrt:IStorageItem@ADDRESS or winrt:IStorageFile@ADDRESS, where ADDRESS is the object address, and the object must stay alive until the media is loaded.

A url query mdkopt=avformat&... is treated as ffmpeg avformat options; for example, the default options are not suitable for opening rtsp quickly, and some_url?mdkopt=avformat&fflags=+nobuffer&probesize=100&fpsprobesize=0 sets the fflags option. You can also set the options globally without changing the url: SetGlobalOption("avformat", "fflags=+nobuffer:analyzeduration=10000:probesize=1000:fpsprobesize=0:avioflags=direct"), or via per-player properties (recommended):

player.setProperty("avformat.fflags", "+nobuffer"); 
player.setProperty("avformat.analyzeduration", "10000"); 
player.setProperty("avformat.probesize", "1000"); 
player.setProperty("avformat.fpsprobesize", "0"); 
player.setProperty("avformat.avioflags", "direct"); 

void setMedia(const char* url, MediaType type)

Set an individual source as a track of the given type, e.g. an audio track file or an external subtitle file. MUST be called after the main media setMedia(url).

If url is empty, tracks of the given type from the main media (the MediaType::Video url) are used.

The url can contain other track types, e.g. you can load an external audio/subtitle track from a video file, and use setActiveTracks() to select a track.

Note: because of filesystem restrictions on some platforms (iOS, macOS, UWP), where files in a sandbox cannot be accessed directly, you have to load subtitle files manually via this function.

examples:

  • set subtitle file: setMedia("name.ass", MediaType::Subtitle)

const char* url() const

void currentMediaChanged(std::function<void()> cb)

Set a callback which is invoked when the current media is stopped and a new media is about to play, or when setMedia() is called.

Call before setMedia() to take effect.

void setNextMedia(const char* url, int64_t startPosition = 0, SeekFlag flags = SeekFlag::FromStart)

Gaplessly play the next media when the current media playback ends. setState(State::Stopped) only stops the current media; call setNextMedia(nullptr, -1) first to disable the next media.

  • startPosition: start milliseconds of next media
  • flags: seek flag if startPosition > 0

Usually you can call currentMediaChanged() to set a callback which invokes setNextMedia(), then call setMedia().
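A minimal gapless playlist sketch following this pattern; playlist is a hypothetical user variable:

    std::vector<std::string> playlist{"a.mp4", "b.mp4", "c.mp4"};
    player.currentMediaChanged([&]{
        // queue the entry after the one that just became current
        auto it = std::find(playlist.begin(), playlist.end(), player.url());
        if (it != playlist.end() && ++it != playlist.end())
            player.setNextMedia(it->c_str());
    });
    player.setMedia(playlist.front().c_str());
    player.setState(mdk::State::Playing);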

Player& setRenderAPI(RenderAPI* api, void* vo_opaque = nullptr)

see Render API

RenderAPI* renderAPI(void* vo_opaque = nullptr) const

Get the render API. For offscreen rendering, only the API type may be valid in setRenderAPI(); other members are filled internally and can be used by the user after renderVideo().

void setBufferRange(int64_t minMs, int64_t maxMs, bool drop = false)

Same as property "buffer". Set the duration range of buffered data.

  • minMs: default 1000. Wait until buffered duration >= minMs before popping a packet.
    • minMs < 0: minMs, maxMs and drop will be reset to the default value.
    • minMs > 0: when packets queue becomes empty, MediaStatus::Buffering will be set until queue duration >= minMs, "reader.buffering" MediaEvent will be triggered.
    • minMs == 0: decode ASAP.
  • maxMs: default 4000. max buffered duration. Large value is recommended. Latency is not affected.
    • maxMs < 0: maxMs and drop will be reset to the default value
    • maxMs == 0: same as INT64_MAX
  • drop
    • true: drop old non-key frame packets to reduce buffered duration until < maxMs.
      • maxMs == 0 or INT64_MAX: always drop old packets and keep at most 1 key-frame packet.
      • maxMs(!=0 or INT64_MAX) < key-frame interval: no drop effect.
      • maxMs(!=0 or INT64_MAX) > key-frame interval: start to drop packets when buffered duration > maxMs.
    • false: wait for buffered duration < maxMs before pushing packets

For realtime streams (rtp, rtsp, rtmp, sdp etc.), the default range is [0, INT64_MAX, true].

Usually you don't need to call this api. It can be used for low-latency live videos; for example, setBufferRange(0, INT64_MAX, true) decodes as soon as media data is received, with no accumulated delay.
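A sketch of a low-latency live-stream setup, combining the buffer range with the per-player avformat properties shown earlier:

    player.setProperty("avformat.fflags", "+nobuffer");
    player.setProperty("avformat.probesize", "100");
    player.setBufferRange(0, INT64_MAX, true); // decode ASAP, drop old packets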

int64_t buffered(int64_t* bytes = nullptr) const

Get buffered undecoded data duration and size.

  • bytes: buffered bytes
  • return: buffered data(packets) duration in milliseconds

void setVolume(float value, int channel = -1)

Set audio volume level

Examples

  • play left channel only:
    player.setVolume(0);
    player.setVolume(1.0f, 0);

void setMute(bool value = true)

Mute or unmute audio.

void setFrameRate(float value)

Set the frame rate, in frames per second. Useful for videos without audio or timestamps.

  • value: frame rate
    • 0 (default): use frame timestamps, or the default frame rate 25.0fps if the stream has no timestamps
    • < 0: render ASAP
    • > 0: the desired frame rate

void setBackgroundColor(float r, float g, float b, float a, void* vo_opaque = nullptr)

Set the background color. r, g, b, a are in the range [0, 1] and default to 0. If a value is out of range, the background will not be filled.

Player& set(VideoEffect effect, const float& values, void* vo_opaque = nullptr)

See https://github.com/wang-bin/mdk-sdk/wiki/Types#enum-videoeffect
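For example, a sketch assuming VideoEffect::Brightness as listed on the Types page:

    float brightness = 0.2f; // 0 leaves the frame unchanged
    player.set(mdk::VideoEffect::Brightness, brightness);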

Player& set(ColorSpace value, void* vo_opaque = nullptr)

Set the output color space.

  • value: target ColorSpace.
    • If invalid (ColorSpaceUnknown), the renderer will try to use the value of the decoded frame, and will send hdr10 metadata when possible (example). Currently only supported by Metal, and MetalRenderAPI.layer must be a CAMetalLayer (example).
    • If the target color space is hdr (for example ColorSpaceBT2100_PQ), no hdr metadata will be sent to the display, and sdr will map to hdr. Can be used by gui toolkits which support an hdr swapchain but have no api to change swapchain colorspace and format on the fly; see the Qt example.
    • The default target color space is sdr ColorSpaceBT709.

To render multiple HDR and SDR videos(on the same device) at the same time, choose ColorSpaceBT2100_PQ and make sure your gui toolkit is running in hdr10 colorspace.
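For example, to target an HDR10 swapchain:

    // requires the gui toolkit to run in an hdr10 colorspace, as noted above
    player.set(mdk::ColorSpaceBT2100_PQ);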

void setVideoSurfaceSize(int width, int height, void* vo_opaque = nullptr)

Window size, surface size or drawable size. The render callback (if any) will be invoked if width and height > 0.

Usually used for foreign contexts, i.e. when updateNativeSurface() is not used.

If width or height < 0, the corresponding video renderer (for vo_opaque) is removed; a subsequent call with the same vo_opaque will create the renderer again. So it can be used before destroying the renderer.

OpenGL: resources must be released by setVideoSurfaceSize(-1, -1, ...) in a correct context. If player is destroyed before context, MUST call Player::foreignGLContextDestroyed() when destroying the context.

void setVideoViewport(float x, float y, float w, float h, void* vo_opaque = nullptr)

The rectangular viewport where the scene will be drawn relative to surface viewport.

x, y, w, h are normalized to [0, 1]

void setAspectRatio(float value, void* vo_opaque = nullptr)

Set video frame display aspect ratio.

  • value: aspect ratio. Can be any value, or one of the predefined values below. If value > 0, the frame expands inside the viewport; if value < 0, the frame expands outside the viewport and is cropped (mdk > 0.7.0 or nightly).
    • IgnoreAspectRatio: 0, ignore aspect ratio and scale to fit the renderer viewport.
    • KeepAspectRatio: default, keep the frame aspect ratio and scale as large as possible inside the renderer viewport.
    • KeepAspectRatioCrop: keep the frame aspect ratio and scale as small as possible outside the renderer viewport.
    • other value > 0: like KeepAspectRatio, but keep the given aspect ratio and scale as large as possible inside the renderer viewport.
    • other value < 0 (mdk > 0.7.0 or nightly): like KeepAspectRatioCrop, but keep the given aspect ratio and scale as small as possible outside the renderer viewport.
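Examples using the predefined values above:

    player.setAspectRatio(mdk::KeepAspectRatioCrop); // fill the viewport, cropping the frame
    player.setAspectRatio(16.0f/9.0f); // force 16:9, fit inside the viewport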

void rotate(int degree, void* vo_opaque = nullptr)

Rotate around the video frame center.

  • degree: 0, 90, 180 or 270, counterclockwise

void scale(float x, float y, void* vo_opaque = nullptr)

Scale the frame size. x and y can be < 0, which means scale and flip.

void mapPoint(MapDirection dir, float* x, float* y, float* z = nullptr, void* vo_opaque = nullptr)

Map a point from one coordinate system to another. A frame must have been rendered. Coordinates are normalized to [0, 1].

  • dir: value of
    enum MapDirection {
        FrameToViewport, // left-hand
        ViewportToFrame, // left-hand
    };
  • x, y, z: point to the x/y/z coordinate of the viewport or the currently rendered video frame. z is not used.
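For example, to find where the video frame center lands in the viewport:

    float x = 0.5f, y = 0.5f; // frame center, normalized
    player.mapPoint(mdk::MapDirection::FrameToViewport, &x, &y);
    // x and y now hold normalized viewport coordinates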

void setPointMap(const float* videoRoi, const float* viewRoi = nullptr, int count = 2, void* vo_opaque = nullptr)

Set points of region of interest. Can be called on any thread.

  • videoRoi: array of 2d points {x1, y1, x2, y2} in the video frame; (x1, y1) is the top-left and (x2, y2) the bottom-right point of the rectangle of interest in the video. Coordinates: top-left = (0, 0), bottom-right = (1, 1). Set null to disable mapping.
  • viewRoi: array of 2d points {x1, y1, x2, y2} in the video renderer, same coordinates. null means the whole renderer.
  • count: point count. Only 2 is supported. Set 0 to disable mapping.

examples

  • video scale 2x: const float videoRoi[] = {0.25f, 0.25f, 0.75f, 0.75f}

void setDecoders(MediaType type, const std::vector<std::string>& names)

Try decoders by name (case sensitive) in the given order and select the first one that works for the current media. This function can be called at any time. When the state is State::Playing, new decoders are applied immediately.

names can contain decoder options/properties. Properties are separated by : and in key=value pattern. For example, a MFT decoder with d3d11 acceleration is MFT:d3d=11, without d3d acceleration and with pool enabled is MFT:d3d=0:pool=1.

Decoder properties can also be set via Player.setProperty("video.decoder", "key1=val1:key2=val2") or Player.setProperty("audio.decoder", "key1=val1:key2=val2"), then properties apply for all video or audio decoders, and can set multiple times.

Decoder name and properties are listed here: https://github.com/wang-bin/mdk-sdk/wiki/Decoders#video-decoders

Examples

  • Recommended decoders for win32: player->setDecoders(MediaType::Video, {"MFT:d3d=11", "hap", "D3D11", "DXVA", "CUDA", "FFmpeg", "dav1d"});
  • Recommended decoders for linux desktop: player->setDecoders(MediaType::Video, {"hap", "VAAPI", "CUDA", "VDPAU", "FFmpeg", "dav1d"});
  • Recommended decoders for macOS and iOS: player->setDecoders(MediaType::Video, {"VT", "hap", "FFmpeg", "dav1d"});
  • Recommended decoders for android: player->setDecoders(MediaType::Video, {"AMediaCodec", "FFmpeg", "dav1d"});
  • Recommended decoders for raspberry pi: player->setDecoders(MediaType::Video, {"MMAL", "FFmpeg", "dav1d"});

void setVideoDecoders(const std::vector<std::string>& names)

Deprecated since 0.11.0, use setDecoders(MediaType::Video, names) instead

void setAudioDecoders(const std::vector<std::string>& names)

Deprecated since 0.11.0, use setDecoders(MediaType::Audio, names) instead

void setActiveTracks(MediaType type, const std::set<int>& tracks)

Enable given tracks of a type to decode and render. The first track of each type is active by default.

  • type:
    • MediaType::Unknown: select a program (usually for mpeg ts streams). tracks must contain only 1 value, N, indicating that the Nth program's audio and video tracks are used
    • MediaType::Audio: select audio tracks
    • MediaType::Video: select video tracks
  • tracks: set of active track numbers, from 0 to N. Invalid track numbers are ignored. An empty set disables all tracks of the given type
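Examples:

    player.setActiveTracks(mdk::MediaType::Audio, {1}); // switch to the 2nd audio track
    player.setActiveTracks(mdk::MediaType::Video, {});  // disable video decoding/rendering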

void setTimeout(int64_t ms, std::function<bool(int64_t ms)> cb = nullptr)

  • ms (default 10s): timeout value in milliseconds. A negative value means infinite.
  • cb: callback to be invoked when time is out
    • return true to abort current operation on timeout
    • A null callback can abort current operation

void prepare(int64_t startPosition = 0, function<bool(int64_t position, bool* boost)> cb = nullptr, SeekFlag flags = SeekFlag::FromStart)

Preload a media; the state then becomes State::Paused.

To play a media from a given position, call prepare(ms) then setState(State::Playing).

  • startPosition: start from this position, relative to the media start position (i.e. MediaInfo.start_time)
  • flags: seek flag if startPosition != 0.
  • cb: if startPosition > 0, same as the callback of seek(), called after the first frame is decoded or on load/seek/decode error. If startPosition == 0, called when the media is loaded and mediaInfo is ready, or on load error.
    • position: seek result; < 0 means error
    • boost: can be set by the user in the callback (*boost = true/false) to boost the first frame rendering; default is true. Example: always returning false can be used as a media information reader, as in the sketch below.
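A sketch of the media-information-reader use mentioned above; the callback disables the first-frame boost and returns false so playback never starts:

    player.setMedia("video.mp4");
    player.prepare(0, [&](int64_t position, bool* boost) {
        if (position < 0)
            return false; // load error
        *boost = false; // no need to render the first frame quickly
        const auto& info = player.mediaInfo(); // loaded, safe to read here
        printf("duration: %lld ms\n", (long long)info.duration);
        return false; // only reading info
    });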

bool seek(int64_t pos, SeekFlag flags, std::function<void(int64_t ret)> cb = nullptr)

Seek to a given position.

  • pos target position. If flags has SeekFlag::Frame, pos is frame count, otherwise it's milliseconds.

    • If pos > media time range, e.g. INT64_MAX, will seek to the last frame of media for SeekFlag::AnyFrame, and the last key frame of media for SeekFlag::Fast.
    • If pos > media time range with SeekFlag::AnyFrame, playback will stop unless setProperty("keep_open", "1") was called
  • flags If has flag SeekFlag::Frame, only SeekFlag::FromNow|SeekFlag::Frame is supported, and video frame rate MUST be known.

  • cb: if the seek succeeded, the callback is called when the stream seek finishes and after the 1st frame is decoded, or on decode error (e.g. video tracks disabled); ret (>= 0) is the timestamp of the 1st frame (video if it exists) after the seek. ret < 0 on failure: io or demux (not decode) error (usually -1), skipped because of an unfinished previous seek (-2), media unloaded (-3), or out of range (-4).

    NOTE: the result position in the seek callback is usually <= the requested pos, while the timestamp of the first frame decoded after the seek is the position nearest to the requested pos.

examples:

  • step forward 1 frame: seek(1LL, SeekFlag::FromNow|SeekFlag::Frame)
  • step backward 1 frame: seek(-1LL, SeekFlag::FromNow|SeekFlag::Frame)
  • seek to the end of media(last frame): seek(INT64_MAX, SeekFlag::FromStart)
  • seek to the last key frame: seek(INT64_MAX, SeekFlag::FromStart|SeekFlag::KeyFrame)

int64_t position() const

Current playback time in milliseconds, relative to the media's first timestamp, i.e. mediaInfo().start_time, which is usually 0.

const MediaInfo& mediaInfo() const

Current MediaInfo. You can call it in the prepare() callback, which is called when the media is loaded or failed to load.

Some fields can change during playback, e.g. video frame size change(via MediaEvent), live stream duration change, realtime bitrate change.

You may get an invalid value if mediaInfo() is called immediately after set(State::Playing) or prepare(), because the media is still opening and not yet loaded, i.e. mediaStatus() has no MediaStatus::Loaded flag.

A live stream's duration is 0 in the prepare() callback or when MediaStatus::Loaded is added; afterwards, duration grows to the currently read duration.

void setState(PlaybackState value)

Request a new state. It's async and may take effect later. see https://github.com/wang-bin/mdk-sdk/wiki/Types#enum-state

setState(State::Stopped) only stops current media. Call setNextMedia(nullptr, -1) before stop to disable next media.

setState(State::Stopped) will release all resources and clear the video renderer viewport, while a normal playback end keeps renderer resources and the last video frame; manually call setState(State::Stopped) to clear them.

Call SetGlobalOption("videoout.clear_on_stop", 0) to keep renderer resources and the last frame.

NOTE: requested states are not queued, so setting one state immediately after another may have no effect. E.g. State::Playing right after State::Stopped may have no effect if playback has not stopped yet and is still in the Playing state, so the final state is State::Stopped. The current solution is waitFor(State::Stopped) before setState(State::Playing), as in the sketch below. Usually there is no waitFor(State::Playing) because we want an async load.
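Following that note, a restart sketch:

    player.setState(mdk::State::Stopped);
    player.waitFor(mdk::State::Stopped);  // ensure Stopped is reached
    player.setState(mdk::State::Playing); // now the request takes effect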

PlaybackState state() const

Player& onStateChanged(std::function<void(State)> cb)

bool waitFor(State value, long timeout = -1)

MediaStatus mediaStatus() const

Player& onMediaStatus(std::function<bool(MediaStatus oldValue, MediaStatus newValue)> cb, CallbackToken* token = nullptr)

Add/Remove a callback or clear all callbacks for MediaStatus change.

Player& onMediaStatusChanged(std::function<bool(MediaStatus)> cb, CallbackToken* token = nullptr)

Deprecated. Use onMediaStatus instead.

void setRenderCallback(std::function<void(void* vo_opaque)> cb)

Set a callback which is invoked when the vo corresponding to vo_opaque needs to update/draw content, e.g. when a new frame is received in the renderer. Also invoked by setVideoSurfaceSize(), setVideoViewport(), setAspectRatio() and rotate().

With vo_opaque, the user can know which vo/renderer is rendering; useful with multiple renderers.
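Typical plumbing (a sketch; scheduleRepaint is a hypothetical function that asks your gui toolkit or render loop to redraw):

    player.setRenderCallback([](void* vo_opaque) {
        scheduleRepaint(vo_opaque); // e.g. trigger a queued repaint in Qt
    });
    // later, in the render thread (with the GL context current for OpenGL):
    // player.renderVideo(vo_opaque);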

double renderVideo(void* vo_opaque = nullptr)

Render the next or current(redraw) frame. Foreign render context only (i.e. not created by createSurface()/updateNativeSurface()).

OpenGL: Can be called in multiple foreign contexts for the same vo_opaque.

  • return: timestamp of rendered frame, or < 0 if no frame is rendered. precision is microsecond.

void enqueue(const VideoFrame& frame, void* vo_opaque = nullptr)

Send a user-provided frame to the video renderer. You must call renderVideo() later in the render thread. The frame data can be in host memory, or can be d3d11/9 resources, for example:

    mdkVideoBufferPool* pool{}; // can be reused for textures from the same producer

    player.enqueue(VideoFrame::from(&pool, DX11Resources{
            .resource = tex,
            .subResource = index,
        })/*.setTimestamp(...)*/);

    // when the pool is no longer needed:
    mdkVideoBufferPoolFree(&pool);

void setPlaybackRate(float value)

Set the playback speed. The FFmpeg atempo filter is required.

  • value: >= 0.5; 1.0 is the original speed

float playbackRate() const

void setLoop(int count)

Set A-B loop repeat count.

  • count: repeat count. 0 to disable looping and stop when out of range (at B)

void setRange(int64_t a, int64_t b = INT64_MAX)

Set A-B loop range, or playback range.

  • a: loop position begin, in ms.
  • b: loop position end, in ms. -1, INT64_MAX or numeric_limit<int64_t>::max() indicates b is the end of media

Player& onLoop(std::function<void(int)> cb, CallbackToken* token = nullptr)

Add/Remove a callback which will be invoked right before a new A-B loop.

  • cb: callback invoked with the number of loops elapsed
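For example, to loop the 10s to 20s segment three times:

    player.setRange(10000, 20000); // A = 10s, B = 20s
    player.setLoop(3);
    player.onLoop([](int count) {
        printf("loops elapsed: %d\n", count);
    });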

Player& onSync(std::function<double()> cb, int minInterval = 10)

Set a custom sync callback as the clock.

  • cb: called when about to render a frame. Returns the expected current playback position in seconds. The sync callback clock should handle pause, resume, seek and seek-finished events.
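A sketch slaving the player to an external master clock; externalClockSeconds() is a hypothetical function returning the master position in seconds:

    player.onSync([]{
        return externalClockSeconds(); // expected current playback position
    });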

void snapshot(SnapshotRequest* request, SnapshotCallback cb, void* vo_opaque = nullptr)

using SnapshotCallback = std::function<std::string(SnapshotRequest*, double frameTime)>;

Take a snapshot of the current renderer. The result is in bgra format, or null on failure. A MediaEvent may be fired.

When snapshot() is called, a redraw is scheduled for vo_opaque's renderer, and the renderer takes the snapshot in the rendering thread. So for a foreign context, if the renderer's surface/window/widget is invisible or minimized, snapshot may do nothing because of system or gui-toolkit painting optimizations.

If no on-screen renderer, an offscreen OpenGL(or other RenderAPI) context is required, and setRenderCallback() must schedule a task to call renderVideo() in the offscreen context.

BUG: to capture the first frame, snapshot() must be called twice, because no frame has been rendered yet when the 1st snapshot is requested.
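A sketch, assuming (per the SDK headers) that the string returned by the callback is a file path to save the image to, and an empty string skips saving:

    mdk::SnapshotRequest req{};
    player.snapshot(&req, [](mdk::SnapshotRequest* r, double frameTime) {
        if (!r || !r->data)
            return std::string(); // snapshot failed
        return std::string("snapshot.jpg"); // encode and save here
    });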

void record(const char* url = nullptr, const char* format = nullptr)

Start or stop recording the current media by remuxing the packets read. If the media is not loaded, the recorder starts when playback starts.

  • url: destination. Pass null, or the same value as the current recording, to stop recording. Can be a local file or a network stream.
  • format: forced format. If null, guessed from the url. If null and the format guessed from the url does not support all codecs of the current media, another suitable format is used.

examples:

    // start
    player.record("record.mov");
    player.record("rtmp://127.0.0.1/live/0", "flv");
    player.record("rtsp://127.0.0.1/live/0", "rtsp");
    
    // stop
    player.record(nullptr);

template<class Frame> Player& onFrame(std::function<int(Frame&, int/*track*/)> cb)

Set a callback to be invoked before delivering a decoded and avfilter-processed (if any) frame to renderers. Frame can be VideoFrame or AudioFrame (NOT IMPLEMENTED).

The callback can be used as a filter.

  • cb: callback to be invoked. Returns the number of pending frames. The callback parameter is both the input and the output frame. If the input frame is an invalid frame, output a pending frame.

WARNING: calling set(State::Stopped) in the callback is undefined behavior and may result in a deadlock.

For most filters, 1 input frame generates 1 output frame, then return 0.

Example:

  player.onFrame<VideoFrame>([&](auto& frame, int){
    // read frame info. or edit the frame and set as output like a filter
    return 0; // usually it's 0, unless you need to output multiple frames
  });

Player& onEvent(std::function<bool(const MediaEvent&)> cb, CallbackToken* token = nullptr)

Add/Remove a MediaEvent listener, or remove listeners.

void setProperty(const std::string& key, const std::string& value)

Since 0.7.0. Set additional properties. Can be used to store user data, or change player behavior if the property is defined internally.

Predefined properties are:

  • "video.avfilter": ffmpeg avfilter filter graph string for video track. take effect immediately when playing(not paused). ONLY WORKS WITH SOFTWARE DECODERS
  • "audio.avfilter": ffmpeg avfilter filter graph string for audio track. take effect immediately when playing(not paused).
  • "continue_at_end" or "keep_open": do not stop playback when decode and render to end of stream. Useful for timeline preview. only setState(State::Stopped) can stop playback
  • "cc": "0" or "1"(default). enable closed caption decoding and rendering.
  • "subtitle": "0" or "1"(default). enable subtitle(including cc) rendering. setActiveTracks(MediaType::Subtitle, {...}) enables decoding only.
  • "avformat.$opt_name": avformat option via AVOption, e.g. {"avformat.fpsprobesize": "0"}. if global option "demuxer.io=0", it also can be AVIOContext/URLProtocol option. video_codec_id, audio_codec_id and subtitle_codec_id are also supported even are not AVOption, value is codec name. video_codec_id is useful for capture devices with multiple codecs supported.
  • "avio.$opt_name": AVIOContext/URLProtocol option, e.g. avio.user_agent for UA, avio.headers for http headers.
  • "avcodec.$opt_name": AVCodecContext option, will apply for all FFmpeg based video/audio/subtitle decoders. To set for a single decoder, use setDecoders() with properties.
  • "audio.decoders": decoder list for setDecoders(), with or without decoder properties. "name1,name2:key21=val21"
  • "video.decoders": decoder list for setDecoders(), with or without decoder properties. "name1,name2:key21=val21"
  • "audio.decoder": audio decoder property, value is "key=value" or "key1=value1:key2=value2". override "decoder" property
  • "video.decoder": video decoder property, value is "key=value" or "key1=value1:key2=value2". override "decoder" property
  • "decoder": video and audio decoder property, value is "key=value" or "key1=value1:key2=value2"
  • "record.$opt_name": option for recorder's muxer or io, opt_name can also be an ffmpeg option, e.g. "record.avformat.$opt_name" and "record.avio.$opt_name".
  • "recorder.copyts", "record.copyts": "1" or "0"(default), use input packet timestamp, or correct packet timestamp to be continuous.
  • "reader.starts_with_key": "0" or "1"(default). if "1", recorder and video decoder starts with key-frame, and drop non-key packets before the first decode.
  • "buffer" or "buffer.range": parameters setBufferRange(). value is "minMs", "minMs+maxMs", "minMs+maxMs-", "minMs-". the last '-' indicates drop mode
  • "demux.buffer.ranges": default "0". set a positive integer to enable demuxer's packet cache(if protocol is listed in property "demux.buffer.protocols"), the value is cache ranges count. Cache is useful for network streams, download data only once(if a cache range is not dropped), speedup seeking. Cache ranges are increased by seeking to a uncached position, decreased by merging ranges which are overlapped and LRU algorithm.
  • "demux.buffer.protocols": default is "http,https". only these protocols will enable demuxer cache.
  • "demux.max_errors": continue to demux the stream if error count is less than this value. same as global option "demuxer.max_errors"


std::string property(const std::string& key, const std::string& defaultValue = std::string()) const

The vo_opaque parameter

Used as the id of a renderer. Currently used for externally provided rendering (OpenGL) contexts. It can be nullptr. Calling an api that takes this parameter guarantees that a corresponding renderer is created.

To support multiple video outputs, mdk uses vo_opaque to identify a video output (maybe rendererId would be a better name). vo_opaque is unique per video output, but it can be any value; for example, it's the widget pointer in https://github.com/wang-bin/mdk-examples/blob/master/Qt/QMDKRenderer.cpp#L64. vo_opaque can be null, which is used when there is only 1 video output. For most programs, 1 output is enough, so null is the default value.
