Capture frames from the canvas #754

Open
kushalkolar opened this issue May 8, 2024 · 7 comments

Comments

@kushalkolar
Contributor

I'm wondering what the best way is to create a video of the canvas. The offscreen canvas renders to a texture, and we could save that texture as video frames. But what if we're not using the offscreen canvas?

@Vipitis

Vipitis commented May 8, 2024

It seems to change with the canvas used. For example, the wgpu offscreen canvas returns a memoryview of the current frame from self._canvas.draw(), which is how I originally hacked a snapshot method for wgpu-shadertoy.
The JupyterCanvas has self._canvas.snapshot().data, and I didn't dig deep enough into the other canvases. A common method for all canvases would be beneficial.
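
Roughly, those two access patterns look something like this (a sketch only; the method names are taken from the snippets above, they are internal and may differ between versions):

import numpy as np

# wgpu offscreen canvas: draw() returns a memoryview of the current frame
frame = np.asarray(offscreen_canvas.draw())  # hypothetical variable name

# JupyterCanvas: the snapshot object exposes its pixels via .data
frame = np.asarray(jupyter_canvas.snapshot().data)  # hypothetical variable name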

For video, my idea was to essentially call the snapshot method at precomputed timesteps, which would easily let users pick a start, duration, and framerate. It would also not be limited to real time. Encoding would then be handled externally, likely with ffmpeg (see the sketch below).

I only looked at pygfx for reference, so there might be something more useful that I am not aware of.
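
A minimal sketch of that idea, assuming a hypothetical render_at(t) helper that advances the scene to time t and returns an RGBA uint8 frame, with encoding handed off to an external ffmpeg process:

import subprocess
import numpy as np

fps, duration, width, height = 30, 5.0, 800, 600
timesteps = np.arange(0, duration, 1 / fps)

# pipe raw RGBA frames into ffmpeg for encoding
ffmpeg = subprocess.Popen(
    [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "rgba",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",
        "-pix_fmt", "yuv420p", "out.mp4",
    ],
    stdin=subprocess.PIPE,
)

for t in timesteps:
    frame = render_at(t)  # hypothetical helper, not part of pygfx
    ffmpeg.stdin.write(np.asarray(frame, dtype=np.uint8).tobytes())

ffmpeg.stdin.close()
ffmpeg.wait()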

@kushalkolar
Contributor Author

Yup, the Jupyter canvas method you mentioned is what we have implemented in fastplotlib. I'll do some digging to figure out how to do this with Qt and glfw.

@panxinmiao
Contributor

It is easy to obtain a real-time screenshot of the scene by reading the "ColorTexture" of the "RenderTarget", and off-screen rendering is not necessarily required.

@panxinmiao
Contributor

I noticed that the WgpuRenderer class already has a snapshot() method. Would this method solve your issue?
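
For reference, usage is roughly this (assuming a WgpuRenderer instance named renderer; the exact shape and dtype of the returned array may vary):

frame = renderer.snapshot()  # pixel array sampled from the renderer's internal texture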

@kushalkolar
Contributor Author

kushalkolar commented May 9, 2024

I noticed that the WgpuRenderer class already has a snapshot() method. Would this method solve your issue?

Partially! What's the best way to capture frames to make a video? Right now we run it in the main animation loop, i.e. the function we set as canvas.draw_frame() via canvas.request_draw(draw_function=animate); it looks something like this. Is there a better way, perhaps using async to poll the renderer?

import time
from multiprocessing import Queue

q = Queue()  # multiprocessing queue
last_capture = time.perf_counter()

def animation():
    global last_capture
    # only capture at ~30 fps intervals so the snapshot doesn't run on
    # every animation call, because then it blocks
    if time.perf_counter() - last_capture > (1 / 30):
        last_capture = time.perf_counter()
        frame = renderer.snapshot()
        q.put(frame)

canvas.request_draw(animation)
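
A rough sketch of that async idea (assuming the canvas backend is driven by an asyncio event loop, which may not hold for every backend):

import asyncio

async def capture(fps=30):
    # poll the renderer at a fixed rate without blocking the draw callback
    while True:
        q.put(renderer.snapshot())
        await asyncio.sleep(1 / fps)

asyncio.get_event_loop().create_task(capture())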

@panxinmiao
Contributor

panxinmiao commented May 9, 2024

What's the best way to capture frames to make a video?

I don't have much practical experience with this. 😅

However, I think getting video frames in the rendering loop may not guarantee an absolutely fixed interval, as it depends on the rendering time of each frame.

If you need to obtain video frames at an absolutely fixed frame rate, you may need to use multi-threading (another thread to fetch from the WgpuRenderer object at a fixed frame rate), but maybe the internal objects of pygfx are not thread-safe. Therefore, an alternative approach is to cache the latest rendered frame in the main rendering loop, and have another thread read this cached frame at a fixed frame rate to generate a sequence of video frames.

Maybe something like this:

import threading
import time
from multiprocessing import Queue

q = Queue()  # multiprocessing queue

latest_frame = None

def animation():
    global latest_frame
    renderer.render(...)
    # cache the most recently rendered frame for the capture thread
    latest_frame = renderer.snapshot()
    canvas.request_draw()

def capture():
    # read the cached frame at a fixed rate
    while True:
        q.put(latest_frame)
        time.sleep(1 / 30)

t = threading.Thread(target=capture, daemon=True)

canvas.request_draw(animation)
t.start()
run()

@almarklein
Collaborator

The renderer.snapshot() works, but note that it samples from the internal texture, so the result may be different than what's shown on screen. If anything, the resolution will be higher.

I think it makes sense to have more sophisticated snapshot functionality. I added a note in #492, because it relates to viewports too. We can leave this issue open to explicitly track this feature.
