Capturing the H264 packets directly from the camera module and writing them into MP4 can reduce file size by about 80% compared to MJPEG.
Use webrtc::VideoEncoderFactory to produce a custom "encoder".
The encoder subclass passes encoded frames (e.g. H264) to the webrtc::EncodedImageCallback::OnEncodedImage callback.
The encoder doesn't actually have to encode anything; it just delivers the pre-encoded frames.
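The shape of such a pass-through "encoder" can be sketched as below. The types here are simplified stand-ins for the real webrtc::EncodedImage and webrtc::EncodedImageCallback interfaces (which also carry timestamps, frame types, and codec-specific info); the point is only to show that Encode() forwards the camera's H264 bitstream to the registered callback instead of encoding pixels.

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-in for webrtc::EncodedImage (hypothetical; the real
// class carries timestamps, frame type, qp, etc.).
struct EncodedImage {
  std::vector<uint8_t> data;  // one H264 access unit (NAL units for one frame)
};

// Simplified stand-in for webrtc::EncodedImageCallback.
class EncodedImageCallback {
 public:
  virtual ~EncodedImageCallback() = default;
  virtual void OnEncodedImage(const EncodedImage& image) = 0;
};

// Pass-through encoder: never touches pixels. Encode() hands the
// pre-encoded H264 buffer straight to the callback, mirroring how a
// custom webrtc::VideoEncoder subclass would forward camera bitstream.
class PassThroughH264Encoder {
 public:
  void RegisterEncodeCompleteCallback(EncodedImageCallback* callback) {
    callback_ = callback;
  }

  // In real WebRTC the signature is Encode(const webrtc::VideoFrame&, ...);
  // here the pre-encoded payload is passed in directly for illustration.
  int32_t Encode(const std::vector<uint8_t>& pre_encoded_h264) {
    if (callback_ == nullptr) return -1;  // WEBRTC_VIDEO_CODEC_UNINITIALIZED
    EncodedImage image;
    image.data = pre_encoded_h264;
    callback_->OnEncodedImage(image);
    return 0;  // WEBRTC_VIDEO_CODEC_OK
  }

 private:
  EncodedImageCallback* callback_ = nullptr;
};
```

A custom webrtc::VideoEncoderFactory would then return this encoder from CreateVideoEncoder() when H264 is requested, so the rest of the WebRTC pipeline treats the camera bitstream as if it had just been encoded.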
Obviously, capturing H264 directly from the camera via V4L2 does not fit well into the WebRTC framework.
If H264 is captured in an AdaptedVideoTrackSource and the frames are then sent through a VideoEncoder, WebRTC sometimes drops frames, which makes the NAL unit sequence discontinuous. The decoding order is then wrong, and the real-time video breaks on the client side.
If H264 is captured inside the VideoEncoder instead, the sequence is not interrupted, but the encoder is not triggered when WebRTC drops frames. We cannot discard the delayed buffers without breaking the sequence, so the delay grows over time.
Using v4l2m2m to encode raw frames could be a better approach.