
Transformation of camera signal into displayed frames on screen in a video capture pipeline. #981

Closed
abdikaiym01 opened this issue Nov 30, 2023 · 1 comment
Comments

@abdikaiym01 commented Nov 30, 2023

Does the browser introduce additional compression, resulting in loss in the final frame displayed on screen from the webcam, or does the pipeline primarily involve format conversions (e.g., YUV to RGB) without compression or loss?
In this context, the pipeline is getUserMedia(...) -> HTMLMediaElement.
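For reference, a minimal sketch of the pipeline being asked about, assuming a `<video id="preview">` element in the page (the element id is illustrative):

```ts
// camera -> getUserMedia() -> MediaStream -> HTMLVideoElement -> screen
async function showCamera(): Promise<void> {
  // Capture raw frames from the default camera as a MediaStream.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.getElementById("preview") as HTMLVideoElement;
  video.srcObject = stream; // hand the stream's frames directly to the element
  await video.play();       // the browser converts/scales each frame for display
}
```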

@aboba added the question label on Jan 4, 2024
@jan-ivar (Member) commented Jan 23, 2024

Closing as this sounds like an implementation question, whereas this repo is about questions around the specification of the API.

In short, you'd have to ask each browser vendor what they do, though I suspect the answer from most is: no, why would they add additional compression apart from format conversions?

However, that doesn't mean you'll get full resolution by default, as this spec note points out.

You can interrogate and select resolutions and frame rates using constraints, e.g. https://jsfiddle.net/jib1/3kvb7j9o/show.
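A minimal sketch of that approach using the standard constraints API (this is not the linked fiddle; the specific resolution and frame-rate values are illustrative):

```ts
// Ask for a preferred resolution and frame rate, then inspect what the
// browser actually delivered. The "ideal" values below are illustrative.
async function openCamera(): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: { ideal: 1280 }, height: { ideal: 720 }, frameRate: { ideal: 30 } },
  });
  const [track] = stream.getVideoTracks();
  console.log("capabilities:", track.getCapabilities()); // ranges the device supports
  console.log("settings:", track.getSettings());         // what was actually chosen
  // Constraints can also be tightened after capture has started:
  await track.applyConstraints({ width: 640, height: 480 });
  return stream;
}
```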
