Rework connect_pad_added closure #19

Closed
fengalin opened this issue Aug 15, 2017 · 4 comments
Comments

@fengalin
Owner

There are several possible enhancements:

  • Audio processing: a tee can be added to split the audio processing into two queues (which also implies two threads). One will handle audio playback just as it does now; the other will be in charge of pre-processing the samples so they are ready for waveform rendering (see the sketch after this list).
  • Video sink construction: it seems that the way the sink is constructed right now is not really thread safe and may not work on other platforms. See this discussion.
  • Caps might change during playback, so this should be monitored. See this discussion.
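
For the audio processing item, here is a minimal sketch of the intended topology, written against the gstreamer-rs API of the time (gst::parse_launch); the URI, element names and the appsink consumer are illustrative, not the project's actual connect_pad_added wiring:

```rust
// Sketch only: split the decoded audio with a tee into two queues,
// one for playback, one feeding an appsink for waveform pre-processing.
// Each queue runs its downstream elements in its own thread.
extern crate gstreamer as gst;
use gst::prelude::*;

fn main() {
    gst::init().unwrap();

    let pipeline = gst::parse_launch(
        "uridecodebin uri=file:///tmp/sample.ogg name=dec \
         dec. ! audioconvert ! tee name=split \
         split. ! queue ! autoaudiosink \
         split. ! queue ! appsink name=waveform_sink",
    )
    .expect("failed to build sketch pipeline");

    let _ = pipeline.set_state(gst::State::Playing);
}
```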
@fengalin fengalin self-assigned this Aug 15, 2017
@fengalin
Owner Author

Audio processing: see this tutorial and this conversation.

@fengalin
Owner Author

Configure buffering to make sure we always have more than 1 s worth of samples ahead.
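
A sketch of how that could be configured on the visualization queue, using the standard queue element properties (times are in nanoseconds; the 2 s ceiling is an arbitrary illustration) and the set_property form of the gstreamer-rs version in use at the time:

```rust
extern crate gstreamer as gst;
use gst::prelude::*;

/// Keep at least 1 s of samples buffered ahead in `queue`, and let it
/// grow up to 2 s. These are standard GStreamer `queue` properties;
/// the exact set_property signature depends on the gstreamer-rs version.
fn configure_waveform_queue(queue: &gst::Element) {
    let _ = queue.set_property("min-threshold-time", &1_000_000_000u64); // 1 s
    let _ = queue.set_property("max-size-time", &2_000_000_000u64); // 2 s
    let _ = queue.set_property("max-size-buffers", &0u32); // no buffer-count limit
    let _ = queue.set_property("max-size-bytes", &0u32); // no byte limit
}
```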

@fengalin
Owner Author

For the video widget potential issue, maybe send-cell would be applicable to guarantee thread safety.
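
Not the actual send-cell API, just a minimal sketch of the idea behind it: the wrapper is Send even when the inner value is not, because access is checked at run time against the thread that created it, which is the guarantee the GTK video widget would need here.

```rust
use std::thread::{self, ThreadId};

/// Illustration of the send-cell idea (not the crate's real API): the
/// wrapper can be moved across threads, but reading the value panics
/// unless it happens on the thread that created it.
struct ThreadGuarded<T> {
    value: T,
    owner: ThreadId,
}

// The whole point: the wrapper is Send even if T is not, because the
// thread check happens at run time instead of compile time.
unsafe impl<T> Send for ThreadGuarded<T> {}

impl<T> ThreadGuarded<T> {
    fn new(value: T) -> Self {
        ThreadGuarded { value, owner: thread::current().id() }
    }

    fn get(&self) -> &T {
        assert_eq!(
            thread::current().id(),
            self.owner,
            "value accessed from the wrong thread"
        );
        &self.value
    }
}
```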

@fengalin
Owner Author

Possible enhancements in audio buffer management. See this comment. The only problem is draining the buffer, which requires knowing the samples/pixel ratio.

fengalin added a commit that referenced this issue Aug 19, 2017
This allows taking advantage of the visualization queue thread to perform all the required computations as soon as the samples are received. There is no need to send the gst::Buffers through the channel: the AudioBuffer is now shared between the AudioController and the appsink (see the sketch after this commit message).

Using a similar approach, the video widget box is now passed at Context creation. The video sink is constructed before the pipeline creation, which ensures it is created in the GTK thread and allows adding the widget to the video widget box without using channels.

Since channels are no longer required for real-time communication, the listener can be invoked less frequently.

This also fixes:
- Handling of media with multiple audio or video streams.
- Waveform flickering with some files (#18)
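
A sketch of the sharing described in this commit message, with AudioBuffer reduced to a plain sample store and the appsink callback registration left out (the type names are the project's, the bodies are only illustrative):

```rust
use std::sync::{Arc, Mutex};

/// Stand-in for the project's AudioBuffer: just accumulates samples.
#[derive(Default)]
struct AudioBuffer {
    samples: Vec<f64>,
}

struct AudioController {
    /// Shared with the appsink's new-sample callback: the callback pushes
    /// samples as they arrive, the controller reads them when drawing.
    audio_buffer: Arc<Mutex<AudioBuffer>>,
}

impl AudioController {
    fn new() -> Self {
        AudioController {
            audio_buffer: Arc::new(Mutex::new(AudioBuffer::default())),
        }
    }

    /// Clone handed to the appsink callback, which runs on the
    /// visualization queue thread, so no channel is needed.
    fn buffer_for_appsink(&self) -> Arc<Mutex<AudioBuffer>> {
        Arc::clone(&self.audio_buffer)
    }

    fn available_samples(&self) -> usize {
        self.audio_buffer.lock().unwrap().samples.len()
    }
}

fn main() {
    let controller = AudioController::new();
    let shared = controller.buffer_for_appsink();

    // In the real code this happens inside the appsink new-sample callback.
    shared.lock().unwrap().samples.extend_from_slice(&[0.0, 0.5, -0.5]);

    assert_eq!(controller.available_samples(), 3);
}
```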
fengalin added a commit that referenced this issue Aug 19, 2017
No longer needed as there is enough data buffered to guarantee rendering on time.
fengalin added a commit that referenced this issue Aug 19, 2017
fengalin added a commit that referenced this issue Aug 20, 2017
The waveform rendering used to rely on a position that was read earlier in the main controller. In order to limit the drift between the rendered waveform and the actual position, the position is now read during rendering. This requires the Context to be visible in AudioController::draw, so the Context is now held as a weak reference by the AudioController.
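
A sketch of that ownership change (names from the commit message, bodies illustrative): the AudioController keeps only a Weak reference to the Context and upgrades it inside draw, so the position is read at render time without creating a reference cycle.

```rust
use std::sync::{Arc, Mutex, Weak};

struct Context {
    position: Mutex<u64>, // current playback position, in ns
}

impl Context {
    fn get_position(&self) -> u64 {
        *self.position.lock().unwrap()
    }
}

struct AudioController {
    // Weak: the controller must not keep the Context alive on its own.
    context: Weak<Context>,
}

impl AudioController {
    fn draw(&self) {
        // Read the position at render time to limit drift between the
        // rendered waveform and the actual playback position.
        if let Some(context) = self.context.upgrade() {
            let _position = context.get_position();
            // ... render the waveform around `_position` ...
        }
    }
}

fn main() {
    let context = Arc::new(Context { position: Mutex::new(0) });
    let controller = AudioController {
        context: Arc::downgrade(&context),
    };
    *context.position.lock().unwrap() = 1_000_000_000;
    controller.draw();
}
```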
fengalin added a commit that referenced this issue Aug 20, 2017
fengalin added a commit that referenced this issue Aug 22, 2017
These modifications introduce a double-buffering mechanism in order to smooth the waveform rendering. However, another cause of locking was found: context.get_position().
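
A sketch of such a double-buffering scheme (buffer type and swap point are illustrative): the displayed image is always a completed one, the next rendering pass writes into the other buffer, and the two are exchanged once the new image is ready.

```rust
use std::mem;

/// Stand-in for a rendered waveform image.
#[derive(Default)]
struct WaveformImage {
    pixels: Vec<u8>,
}

/// Minimal double buffer: `front` is what gets displayed, `back` is what
/// the next rendering pass writes into; swapping is a cheap exchange, so
/// the drawing code never waits for a full redraw.
#[derive(Default)]
struct DoubleBuffer {
    front: WaveformImage,
    back: WaveformImage,
}

impl DoubleBuffer {
    fn render_into_back(&mut self, samples: &[u8]) {
        self.back.pixels.clear();
        self.back.pixels.extend_from_slice(samples);
    }

    fn swap(&mut self) {
        mem::swap(&mut self.front, &mut self.back);
    }

    fn displayed(&self) -> &WaveformImage {
        &self.front
    }
}

fn main() {
    let mut buffers = DoubleBuffer::default();
    buffers.render_into_back(&[1, 2, 3]);
    buffers.swap();
    assert_eq!(buffers.displayed().pixels, vec![1, 2, 3]);
}
```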
fengalin added a commit that referenced this issue Aug 28, 2017
It doesn't solve the stuttering in waveform rendering, but it seems to reduce the latency of getting the position.
@fengalin fengalin closed this as completed Sep 1, 2017
@fengalin fengalin reopened this Sep 1, 2017
fengalin added a commit that referenced this issue Sep 2, 2017
@fengalin fengalin closed this as completed Sep 2, 2017