Some excellent research by Petros Douvantzis determined that there is a race in the texture handling in "show + capture camera". The source of the problem is the use of a shared EGL context -- the camera frames are converted to an "external" texture in one context, but rendered from another.
For correct behavior, the application must ensure mutually exclusive access during the texture update, and must issue GL commands that effectively act as memory barriers. On the producer side, updateTexImage() must be followed by glFinish(), with both calls wrapped in a synchronized block or the write side of a read/write lock. On the consumer side, the texture must be re-bound before drawing to effect the memory barrier, and the rendering must happen within a synchronized block or the read side of the same read/write lock.
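A minimal sketch of that locking discipline, using a ReentrantReadWriteLock. The actual GL calls (updateTexImage(), glFinish(), the re-bind) are shown as comments, since they only run on a device; the class names and counter here are hypothetical and only illustrate where the lock boundaries sit.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedTextureGuard {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int latchedFrame = 0;   // stand-in for the texture contents

    // Producer side: exclusive access while the texture is being updated.
    public void onFrameAvailable() {
        rwLock.writeLock().lock();
        try {
            // surfaceTexture.updateTexImage(); // latch the new camera frame
            // GLES20.glFinish();               // make the update visible to other contexts
            latchedFrame++;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    // Consumer side: shared access while rendering from the texture.
    public int drawFrame() {
        rwLock.readLock().lock();
        try {
            // GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texId); // re-bind acts as the barrier
            // ...issue draw calls...
            return latchedFrame;
        } finally {
            rwLock.readLock().unlock();
        }
    }
}
```

Note that glFinish() blocks the producer until the GPU drains, which is exactly the throughput cost described below.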
These operations will potentially stall the involved threads, reducing throughput. It's better, and simpler, to use a single EGLContext, and call updateTexImage() from the thread that does the rendering. It's possible to use a single context with GLSurfaceView by attaching the SurfaceTexture to the GLSurfaceView's context with the appropriate API calls. The "show + capture camera" demo should be updated to use this approach, and avoid shared contexts altogether.
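For reference, the single-context approach looks roughly like the sketch below. This is an illustrative Android fragment only (it won't run off-device); mSurfaceTexture is an assumed field, and createExternalTexture() is a hypothetical helper that generates a texture name and binds it to GL_TEXTURE_EXTERNAL_OES. The key calls, SurfaceTexture#detachFromGLContext() and SurfaceTexture#attachToGLContext(), move ownership of the texture to the GLSurfaceView's render thread, so updateTexImage() and the draw happen in the same EGLContext.

```java
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    int texId = createExternalTexture();       // hypothetical helper
    mSurfaceTexture.detachFromGLContext();     // release the previous context's attachment, if any
    mSurfaceTexture.attachToGLContext(texId);  // attach to this (render-thread) context
}

@Override
public void onDrawFrame(GL10 gl) {
    mSurfaceTexture.updateTexImage();          // same thread that renders -- no shared context needed
    // ...draw using texId...
}
```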
The other Activities in Grafika use SurfaceView and a single context, and are not affected by this issue.
Sorry to dig this out of the crypt but I was hoping for some clarification about restricting CameraCaptureActivity to use a single EGLContext.
From what I can tell, it appears that the SurfaceTexture passed to the camera is already bound to the same rendering context as the GLSurfaceView, and that updateTexImage() is called on the GLSurfaceView's rendering thread.
In the original problem statement, you identified the issue as one where the camera is converting frames to an external texture in one context but they are then rendered from another -- can you elaborate on this a bit? From what I can see in the sample, the camera is sending its image stream to the SurfaceTexture presumably from another thread, but the actual create/bind dance with the external texture all happens within the GLSurfaceView's rendering thread.
The texture is shared between two EGL contexts -- written by the camera, read by the video encoder. The GLSurfaceView side is fine by itself. Once you start recording, TextureMovieEncoder creates a shared EGL context, and it's the rendering from that thread that causes the trouble.