Support for 3D animation #448
Comments
Adding some sort of storyboarding to the animation process would be very beneficial, especially since multiple frames and vantage points are often wanted for time-series data.
I think it would be nice to be compatible with https://www.nature.com/articles/s41592-019-0359-1 - there is a FIJI plugin https://github.com/bene51/3Dscript which we could maybe port to Python. If anyone has experience with this tool or knows the original developers, please let us know. I have not used it myself, but people seem to really like it, for example https://twitter.com/DougPShepherd/status/1190365654311792641
Cross-posting a link to the amazing naparimovie repo built on top of napari by @guiwitz, originally posted in this image.sc post. I was able to successfully use it to make movies right out of the box, and I'm curious to hear from @guiwitz whether there's more we need to expose to make this easier for you, whether you have feedback on our APIs or on napari in general, and whether you're interested in contributing any of this into the main napari repo. We see support for high-quality animations as something pretty fundamental that we want to provide to all users.
Hi @sofroniewn, I would definitely be enthusiastic about trying to integrate naparimovie directly into napari! I guess my main question is what form this integration should take. If I remember correctly, there's a plan to have a sort of plugin system in napari. Should that be the way, or should it be a standard feature of napari? In both cases, I'd need some pointers on "where to put things", e.g. where I should add entire classes, such as the one replicating 3Dscript, or where's the best place to add a list of interpolated camera states, etc. In the contribution guide I couldn't find guidelines on the general organisation philosophy of the code. Is that info available somewhere, or can you summarise the general principles? On a more detailed level, I guess the most difficult part was getting and setting the camera state, in particular its rotation. A few comments:
but I'm really not sure that's the best solution, and there might be something much simpler that I missed. If not, it would probably be nice to have access to this at a higher level, e.g. for the rotation as:
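Something like the following hypothetical accessor; this is only a sketch, and `angles` is an illustrative property name, not an existing napari attribute:

```python
# Hypothetical sketch of higher-level camera access; `angles` is an
# illustrative property name, not an existing napari attribute.
class Camera:
    def __init__(self):
        self._angles = (0.0, 0.0, 0.0)  # rotation as Euler angles in degrees

    @property
    def angles(self):
        return self._angles

    @angles.setter
    def angles(self, value):
        self._angles = tuple(float(v) for v in value)
        # a real implementation would push the new rotation to vispy here


camera = Camera()
camera.angles = (30, 0, 45)
print(camera.angles)  # (30.0, 0.0, 45.0)
```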
If we do try to integrate my code into napari, keep in mind that I'm not a software engineer, so I most certainly do strange things in my code that you should not hesitate to point out : )
@guiwitz as I said in some of the other threads, it's so exciting to see naparimovie - one of our goals is definitely to enable people to build on top of this tool, and starting a new repo for such an advanced feature was a fine idea. You're correct that we are planning a plugin system around napari, but we haven't done too much to support that yet.

For now I'll say that, at a high level, I think we imagine any domain-specific, highly custom, or complex-dependency code going into plugins, and general-purpose, domain-agnostic, widely used code going into the core. I think some form of movie-making support belongs in the core, but possibly not all of it, and possibly not all of the dependencies that might be required should be installed automatically. We're planning to better support optional dependencies for cases like this.

As to the details, my initial inclination is to start from naparimovie.py and state_interpolations.py. As you suspect, I'd prefer to avoid the additional pyquaternion dependency. I'm not sure vispy supports that interpolation out of the box, but we might be able to add it quite easily. Using the vispy quaternion is also ok, though depending on what we want to get done we might end up with our own simple Quaternion class. We can consult with the vispy team if we ultimately want to make additions to their quaternion class, but starting with our own will give us more flexibility at the beginning.

As to what the integration looks like in more detail, I imagine we'd have one of your movie objects accessible on the viewer, so you could call `viewer.movie.record(inter_steps=15, with_viewer=False)` to "activate it" and enable recording, and then maybe `viewer.movie.finish()` to "deactivate it".
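The record()/finish() lifecycle described above could look roughly like this stand-in sketch (not the real implementation; `capture_keyframe` and the stored state are illustrative assumptions):

```python
# Stand-in Movie class illustrating the proposed record()/finish() lifecycle;
# the method names mirror the comment above, but this is not napari code.
class Movie:
    def __init__(self, viewer):
        self.viewer = viewer
        self.recording = False
        self.keyframes = []

    def record(self, inter_steps=15, with_viewer=False):
        # "activate": start accepting keyframes
        self.recording = True
        self.inter_steps = inter_steps
        self.with_viewer = with_viewer

    def capture_keyframe(self, state):
        if self.recording:
            self.keyframes.append(state)

    def finish(self):
        # "deactivate": stop recording and hand back what was captured
        self.recording = False
        return self.keyframes


movie = Movie(viewer=None)  # a real napari Viewer would go here
movie.record(inter_steps=15, with_viewer=False)
movie.capture_keyframe({"zoom": 1.0})
keyframes = movie.finish()
print(keyframes)  # [{'zoom': 1.0}]
```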
I think at the beginning we probably also only want to support writing options that are already supported out of the box. I also think at the beginning we might not want to add support for the script commands inside scriptcommands.py, as these might be more custom functionality for a plugin - it sort of depends on how widespread that scripting language already is, whether we have cross-compatibility with scripts people have already written, and how easy it is to write similar "scripts" in pure Python. I'm open to that being added to core at some point, but would have to get the take of @royerloic and @jni too.

Finally, one overall design note: we've tried to separate out our model / view files, where our model files don't depend on any Qt code.

Maybe as a first step, do you want to create a new folder for this code and get started there?

How does this all sound to you? We're super excited to work together with you to get this functionality into napari!!
@sofroniewn This all makes perfect sense, thanks for the detailed answer! I also think that keeping the scripting part out is better for now, and I'll make a repo with just that part to avoid confusing people. I can then turn it into a plugin whenever this becomes possible in napari. I found that one can probably do all the necessary quaternion calculations with scipy, so we should be able to drop the pyquaternion dependency. And we can definitely skip ffmpeg for the moment and create a folder of PNGs, although if you manage to create a conda package for napari at some point, adding ffmpeg should not be too much of an issue. I'll get started on all this in the coming days.
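For reference, the pyquaternion-free rotation interpolation mentioned above can be done with scipy's Rotation and Slerp; a minimal sketch (the keyframe angles and frame count are made-up illustration values):

```python
# Sketch: quaternion interpolation for camera keyframes using scipy
# (replacing pyquaternion). Keyframe angles are made-up illustration values.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

key_times = [0, 1]  # two camera keyframes
key_rots = Rotation.from_euler("xyz", [[0, 0, 0], [90, 0, 45]], degrees=True)

slerp = Slerp(key_times, key_rots)
frames = slerp(np.linspace(0, 1, 15))  # 15 smoothly interpolated orientations

print(frames[0].as_quat())  # first frame is the identity rotation
```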
Very excited to see this happen, @guiwitz!
Hi @sofroniewn, I started integrating naparimovie into napari following your detailed description. The good news is that I managed to get rid of the pyquaternion dependency by using scipy. Now I have a question regarding the integration and in particular this part of your comment:
I'd like to go the first route and have my classes accessible from within the viewer, e.g.:

```python
class Viewer:
    def __init__(self, state=15):
        self.state = state


class Movie:
    def __init__(self, viewer=None):
        self.viewer = viewer

    def set_viewer_state(self, var):
        self.viewer.state = var


viewer = Viewer()
viewer.movie = Movie(viewer=viewer)

# set the viewer from within the movie instance
viewer.movie.set_viewer_state(100)

# it really does change the viewer instance
print(viewer.state)
# >> 100
```

In that way the viewer state can be read and set directly from within the Movie class, but it looks incredibly ugly, and I'm pretty sure it's a trivial problem with an elegant solution that I just can't see right now. Any ideas?
@guiwitz that's great that you could drop the extra dependency and that you're beginning work on the integration. You're right that I think we want to avoid that style of setting the viewer state from within your Movie class. You will likely just want to add a couple of lines to napari/viewer.py: one at the very top, `from .movie import Movie`, and then another at the end of the init, `self.movie = Movie(self, param1=value1, param2=value2)`. You should then have everything you need within Movie to make the movies. Does this make sense? If you have an example branch on your fork of napari, I'm happy to take a look before you make the PR.
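A runnable mock of that wiring, with simplified stand-in classes (not actual napari code):

```python
# Mock of wiring a Movie onto the Viewer at construction time, as suggested
# above; both classes are simplified stand-ins, not actual napari code.
class Movie:
    def __init__(self, viewer):
        self.viewer = viewer  # movie keeps a reference back to its viewer


class Viewer:
    def __init__(self):
        # equivalent of `self.movie = Movie(self, ...)` at the end of __init__
        self.movie = Movie(self)


viewer = Viewer()
print(viewer.movie.viewer is viewer)  # True
```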
Wait, doesn't movie have to modify the viewer, e.g. to set viewpoints/rotation parameters? In my opinion, the viewer should not be touched by this functionality at all, except maybe in having additional methods to control viewpoints/animations. The movie object or function should exist entirely outside the viewer, take in a viewer as a parameter, and modify it as needed. I don't see a reason for the viewer to be aware of any of this, but please correct me if I'm wrong! |
I'm a bit confused too now, so let me try to summarise. We indeed have to modify the viewer, so somehow we need access to it where the movie methods are also accessible. I think there are three solutions for that: (1) we add all the movie methods directly to the viewer, (2) we do the ugly self-referencing mentioned above, or (3) we keep things separate and pass the viewer as a parameter to an independent movie object. I think we definitely don't want (1), to avoid messing with what is a core piece of napari, and also not (2), because it seems very ugly. (3) is then essentially what I did until now with my independent package:

```python
import napari
from napariviewer import Movie

view = napari.View()
# add images here
movie = Movie(view)
# do what's necessary for making the movie here
```

So I can just add the movie class as a napari module, and then people would do:

```python
import napari

view = napari.View()
# add images here
movie = napari.Movie(view)
# do what's necessary for making the movie here
movie.finish()
```

If that's alright, I'll push this and you can have a look.
I'm sorry, yes - I wasn't thinking it all through when I wrote the above. Thanks for the catch @jni. Starting with something like (3) seems reasonable, but maybe @jni can weigh in more. Also, if @guiwitz you are close to having a PR ready, maybe it's best to submit it and we can continue the discussion there.
Looking great overall! Very exciting and great work @guiwitz! I agree with @jni and @sofroniewn, solution (3) is the right way to go - it's more modular. A few thoughts:

i) Curious about what @jni and @sofroniewn think about this long term...

ii) Doing the above, instead of creating a separate 'state' object that needs to be 'synced' and maintained, leads to less duplication of the notion of 'viewer state'. Having said that, there might be (bad) reasons why we can't do the above in the short term.

iii) The ability to do the above would be a great stress-test to make sure that our viewer and layers model are fully modular and reentrant.

iv) IMPORTANT: Having said all that, I think we should merge this PR as soon as we are happy with the basic architecture: modular, viewer does not know about movies, but movies know about viewers. It's already extremely useful! We can revisit these more advanced concepts later.

v) One nice thing about having the movie object separate from the viewer is that you could imagine applying the same movie to two different viewers (maybe with a movie.viewer = ... setter) and rendering two animations, perhaps with some data changing, etc...
@royerloic I love these comments, thanks for weighing in. I'm very excited about (i) - this should become easier after we get #686 in - which even includes round-trip tests for consistency. I also really like (v) - one can then imagine us or someone else making some standard "pre-recorded" "Movie" instances that you could then apply regardless of your data if you wanted to automate and standardize movie making - like a "zoom in" / "zoom out", or "loop through sliders once", or "full 3D rotation". For now though, like you suggested, I think we keep these principles in mind as we press on with a more minimal PR from @guiwitz that will add initial movie support |
:-)

vi) Support for headless rendering of movies. In my experience, things can sometimes go wrong when rendering in OpenGL, capturing the frames, and making a video. If window visibility changes or other OpenGL complexities are involved (e.g. disconnecting an external screen), you get into trouble. Ideally and eventually, we should have a means to do the rendering in a completely headless fashion (it would still require OpenGL to be available, though). This would permit high-quality and high-resolution rendering without spawning a napari window or doing any on-screen rendering. This leads to a more general discussion of the utility of having our models capable of living without views (which we are doing quite ok on right now), and having ways to generate images from the model without necessarily having Qt windows open or vispy/OpenGL canvases active. Again, not for now, but important to keep in mind...
Oooh, this to me sounds like a context manager!

```python
with napari.Movie(viewer) as mov:
    mov.record_keyframes()  # this blocks and lets the user pick keyframes
    mov.animate(option1=..., option2=...)
    mov.save(filename)
```

or

```python
with napari.Movie(viewer) as mov:
    mov.play_script(path='path/to/3dscript.txt')
```

This seems pretty nice to me, no?
I finally made this pull request, so we can continue the discussion there. Just a few points regarding previous comments.

@royerloic I agree that it would be nice to essentially just copy entire view states to create key-frames. But there are two issues at the moment: (1) I don't know how to copy an entire view state. Using deepcopy generates an error coming from Qt. I also tried with lower-level objects such as the camera, but those can't be deepcopied either. When trying to deepcopy the layer object, there's yet another problem: copying the layers just adds them to the viewer. Maybe someone knows how to generically copy these objects? (2) Even if we save entire states, as you mentioned we still have to go inside them to do the interpolation part. So in the end I'm not sure it's worth copying these entire views.

I still tried to go in a more modular direction by changing the way I'm saving states. E.g. before I was saving specific camera features (rotation, displacement, etc.); now I save the entire camera state. Similarly, instead of implementing a specific "time" feature, I'm just interpolating all the available sliders now. I also tried to simplify the interpolation part by creating "generic" interpolation functions that one can re-use for different types of features. However, one very quickly hits a limit in how generic these functions can be, as different features need different treatment. For example, while one wants a smooth interpolation for, say, camera zooming, one doesn't want to fudge the timing of a layer being set ON/OFF. So I think each additional feature will have to be added "manually" after some thinking. One just has to (1) add it to the set of features tracked, (2) define an interpolation function (ideally re-using an existing one), and (3) update the viewer.

@jni I'm not sure a context manager is the right thing here. When creating these movies, I often go through multiple iterations where I create the movie, readjust it, create the movie again, etc. With a context manager I would be limited to doing this just once. But maybe this could be solved by making it possible to watch the movie directly in napari; I could add a new key-binding that just goes through all frames automatically. Let me also note here that the module now allows mixing 2D and 3D views, while it was limited to 3D before. Let me know your thoughts in the PR.
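The point about different features needing different treatment can be sketched with two hypothetical interpolation helpers (illustrative names, not part of naparimovie): continuous values blend smoothly between keyframes, while on/off values must switch at a definite point instead of being "fudged".

```python
# Two hypothetical interpolation helpers showing why one generic function
# doesn't fit all viewer features: continuous values blend smoothly,
# on/off values must switch at a definite point.
def interpolate_linear(start, stop, fraction):
    """Smooth blend for continuous state such as camera zoom."""
    return start + (stop - start) * fraction


def interpolate_bool(start, stop, fraction):
    """Hard switch for binary state such as layer visibility."""
    return start if fraction < 0.5 else stop


print(interpolate_linear(1.0, 3.0, 0.25))   # 1.5
print(interpolate_bool(False, True, 0.25))  # False
```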
Link to the PR: #780
Just as an additional note, and to partially answer my own question: I guess PR #686 will solve the problem of saving the full viewer state. Whenever this is ready we could just save views as key-frames. It won't, however, change the fact that we'll have to interpolate whatever needs to be interpolated. In that context, saving the full viewer only makes sense if by default we interpolate ALL properties of the viewer, which will require some more work.
Great work @guiwitz! Let me answer your comment below:
(1) What I described only applies to the 'model' of the viewer, not the Qt stuff! Of course that would never work and would be terrible in many ways :-) The problem is that currently a lot of the viewer state actually lives on the vispy side of things, like the camera position and such, when it should in fact at least be reflected in the viewer model. I am just realizing now (pinging @sofroniewn here) that this is a bit of a deviation from our strict model-view architecture, and we should keep it in mind medium term: we should at least sync that with some field in the viewer model so that the model explicitly carries the information about the viewpoint (and related viewer state). (2) Assuming that the viewer model has all the relevant state, then, because we have full control of the model, we can make sure that we know how to do interpolations... I am not suggesting we do this anytime soon; as discussed above, our model-view separation is not yet fully implemented for the viewer. In the meantime we should get something reasonable merged, and perhaps later we can revisit all this...
🚀 Feature
3D animation
Motivation
It would be great to have an open-source viewer that displays a 3D volume and lets you fly through it and record as you do so, or just produce a 360-degree rotation of the volume.
Pitch
Details
We could either use ffmpeg or GIF output to record frames and generate the video.
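A low-dependency way to sketch the "record frames, then encode" half of that pipeline, using only the standard library (the file names, frame sizes, and the ffmpeg invocation in the comment are illustrative):

```python
# Sketch: write captured frames as PPM images (standard library only); a tool
# like ffmpeg can then encode the folder, e.g.
#   ffmpeg -i frame_%03d.ppm rotation.mp4
# The frames here are flat gray images standing in for viewer screenshots.
import os
import tempfile


def write_ppm(path, width, height, shade):
    """Write a solid-gray binary PPM frame."""
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        f.write(bytes([shade, shade, shade]) * (width * height))


outdir = tempfile.mkdtemp()
for i in range(10):
    write_ppm(os.path.join(outdir, "frame_%03d.ppm" % i), 64, 64, i * 16)

print(sorted(os.listdir(outdir))[0])  # frame_000.ppm
```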