Upgrade endoscopy to support arbitrary camera poses and key frame interpolation #6502
Comments
To clarify: we want the new functionality to be a transformation that is applied after the current set of transformations, rather than instead of them, right? It would be only this relative-to-Endoscopy's-current-behavior transformation that we'd interpolate with quaternion slerp or similar. Likewise, if we support it, adjustments to focal length or zoom would be relative to Endoscopy's current behavior.
Starting to update this module makes sense. As we move forward, we may have to improve/refactor some of the underlying libraries.
Glad you are working on this @Leengit. This was the first Python ScriptedModule, so it definitely is due for an update :D But even before that there was an even more sophisticated endoscopy module in slicer2 (in Tcl of course) from Delphine Nain that is described in this paper, and you can also find her thesis on the topic if you search around. She implemented a lot of good features for this task.
Markups curves provide a robust and smooth curve coordinate system (tangent and normal directions for each curve point), so it makes sense to use this and apply any specified camera pose in this coordinate system. By default, the camera can be forward-looking, but the user could move it around (rotate and slightly translate, maybe also adjust the field of view and speed of motion).

Keyframe data could be stored in the curve node itself, for each control point as a static measurement. Or it could be stored in a sequence, using distance along the centerline as the index. Linear interpolation probably works for position, zoom, and speed; and slerp for orientation.

To adjust the pose of the camera for each keyframe, we should re-map the camera manipulation keyboard and mouse gestures so that when you move the camera forward/backward it moves the camera focal point forward/backward. There is an example in the script repository for this. We should also make the camera focal point setting more discoverable (it is a right-click menu action in recent Slicer versions), maybe with a movable markup point.
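The interpolation scheme described above (linear for position, zoom, and speed; slerp for orientation, indexed by distance along the centerline) could be sketched roughly as below. All keyframe values here are hypothetical, and the use of SciPy's `Rotation`/`Slerp` is only one possible choice; the real module would work against the curve node's coordinate system.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Hypothetical keyframes, indexed by distance (mm) along the centerline.
key_distances = np.array([0.0, 50.0, 100.0])
key_positions = np.array([[0, 0, 0], [5, 2, 0], [0, 0, 0]], dtype=float)  # offsets from the curve
key_zooms = np.array([1.0, 1.5, 1.0])
key_rotations = Rotation.from_euler("xyz", [[0, 0, 0], [0, 15, 0], [0, 0, 0]], degrees=True)

slerp = Slerp(key_distances, key_rotations)

def interpolate_pose(d):
    """Linear interpolation for position and zoom, slerp for orientation."""
    position = np.array([np.interp(d, key_distances, key_positions[:, i]) for i in range(3)])
    zoom = np.interp(d, key_distances, key_zooms)
    orientation = slerp(d)
    return position, orientation, zoom

pos, rot, zoom = interpolate_pose(25.0)  # halfway between the first two keyframes
```

Since both `np.interp` and `Slerp` take the same index, adding further linearly interpolated channels (e.g. speed) is just another `np.interp` call.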
IIRC it does use markups to let the user orient the camera, but while working on it, I wasn't sure whether a second mode that just describes the camera viewpoint (either a single point, or a curve as well) wouldn't be easier for the user to control.
Default behavior: I think it makes sense to have the current forward-looking approach be the default, with keyframes only intermittently influencing the camera pose. However, it would also be useful to provide a way to control how far the influence of a keyframe extends, perhaps using two weights (approach and departure).

Keyframe editing: I am hoping that we could include both third-person and first-person keyframe manipulation, as both approaches could be useful. Either way, we might have to show the result of third-person manipulation in a small window/frame, which could perhaps be made interactable for first-person manipulation.
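The two-weight idea above could be realized with a simple per-keyframe influence function: full influence at the keyframe, ramping linearly to zero over a configurable number of frames on each side. This is only a sketch of one possible falloff shape (the function name and linear ramp are hypothetical, not anything from the existing module):

```python
def keyframe_influence(frame, key_frame, approach_weight, depart_weight):
    """Influence in [0, 1]: ramps up over `approach_weight` frames before the
    keyframe and back down over `depart_weight` frames after it.
    Hypothetical linear falloff; other easing curves would work the same way."""
    if frame <= key_frame:
        d = key_frame - frame
        if approach_weight <= 0:
            return 1.0 if d == 0 else 0.0
        return max(0.0, 1.0 - d / approach_weight)
    d = frame - key_frame
    if depart_weight <= 0:
        return 0.0
    return max(0.0, 1.0 - d / depart_weight)
```

The camera pose at a given frame would then blend the default forward-looking pose toward the keyframe pose by this weight, so frames far from any keyframe keep the module's current behavior.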
Thanks for pointing this out 🙏 For reference:
We could leverage a dual 3D view or a dedicated layout.
Ditto. Working on hardening the infrastructure and how the information is stored/organized/serialized should be done first.
The implementation in #7165 uses quaternion slerp to interpolate between keyframes. If an effect is to start at some frame, reach a maximum value, and then end at some frame, this is achieved by setting the first and third of these keyframes to have identity (unchanged) orientation, and setting the second keyframe to have the desired excursion. The remaining (non-key) frames between the first and the third will have their orientations computed via interpolation.
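The identity/excursion/identity scheme described above can be illustrated with a plain quaternion slerp. This is a generic sketch of the technique, not code from #7165; the frame numbers and the 30-degree yaw excursion are made up for illustration.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:        # take the shorter path on the 4-sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:     # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Keyframe scheme from the comment: identity orientation at frames 0 and 100,
# a 30-degree yaw excursion peaking at frame 50 (hypothetical numbers).
identity = np.array([1.0, 0.0, 0.0, 0.0])             # (w, x, y, z)
half = np.radians(30.0) / 2.0
excursion = np.array([np.cos(half), 0.0, np.sin(half), 0.0])

def orientation_at(frame):
    """Slerp toward the excursion up to frame 50, then back to identity."""
    if frame <= 50:
        return slerp(identity, excursion, frame / 50.0)
    return slerp(excursion, identity, (frame - 50) / 50.0)
```

The non-key frames thus sweep smoothly out to the excursion and back, with no discontinuity at the keyframes themselves.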
The implementation in #7165 supports first-person keyframe manipulation.
We are supporting new camera orientations (pitch, roll, yaw) in #7165, but not other degrees of freedom that a camera could have. The latter are deferred for now.
Should we close this as completed by Pull Request #7165? If not, let's re-iterate, in fresh comments here, those aspects of this issue that are outstanding. |
Is your feature request related to a problem? Please describe.
We need a flythrough animation feature in VPAW, to aid in communicating aspects of a patient's airway/anatomy to other clinicians. The current Endoscopy module comes very close to satisfying our requirements; however, it is missing some features, such as:
Describe the solution you'd like
Description of suggested solution copied / referenced from the posted comment here.
Based on our internal discussions and from the discourse discussion thread, we have decided to modify the Endoscopy module to provide full support for a camera flythrough and keyframe interpolation. We can borrow specific implementations from CameraPath1 as/if needed.
Keyframe Interpolation
(From Discourse)
vtkAddon if it's C++, or in Endoscopy logic if it could be done in Python (I'd think it could).
User Interface
(From Discourse)
We could use the 3D view camera state itself as a way to insert keyframes. Initially we were considering having a flight mode for the 3D view to assist in flying around and placing keyframes, but since we can define the path curve first, it is probably not needed. The user can just fly along the path and stop and insert keyframes wherever necessary.
Regarding display of keyframes we can simply use a camera mesh model to indicate the position or use Plane markups
Along with camera position and FOV, would it make sense to also interpolate the volume property, etc., for better local control of the visualization?
The Animator2 module has some code for this and it would be nice to either reuse or generalize this feature. Currently it requires that the transfer functions have the same number of control points at the keyframes.
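The constraint mentioned above (matching control-point counts at the keyframes) makes transfer-function interpolation a point-wise linear blend. A minimal sketch, with hypothetical scalar-opacity control points stored as `(scalar, opacity)` pairs; this is not the Animator module's actual code:

```python
import numpy as np

# Hypothetical keyframe transfer functions as (scalar value, opacity) control points.
# Both keyframes must have the same number of control points.
tf_a = np.array([(0.0, 0.0), (100.0, 0.2), (255.0, 1.0)])
tf_b = np.array([(0.0, 0.0), (150.0, 0.6), (255.0, 1.0)])

def interpolate_transfer_function(tf0, tf1, t):
    """Point-wise linear blend of two control-point lists, t in [0, 1].
    Only valid when the control-point counts match."""
    if len(tf0) != len(tf1):
        raise ValueError("keyframes must have the same number of control points")
    return (1.0 - t) * tf0 + t * tf1

mid = interpolate_transfer_function(tf_a, tf_b, 0.5)
```

Generalizing this (as suggested) would mean resampling or matching control points between keyframes with differing counts before blending, rather than requiring equal counts up front.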
We should provide headlights that move with the camera (endoscopy simulation).
Use of Markups
(Discussions with @jcfr)
https://projectweek.na-mic.org/PW37_2022_Virtual/Projects/MarkupConstraints/
Nice To Have
(From Discourse)
Describe alternatives you've considered
Considered alternatives and some pros / cons identified in each are documented here: KitwareMedical/vpaw#9 (comment)
Additional context
While this issue has already been described in detail on the VPAW project here, I am creating a Slicer issue since we hope to contribute upstream / upgrade the official Endoscopy module. We believe that the Endoscopy module, in its current form, is not as useful for virtual endoscopy animation due to the restricted camera movement.
Footnotes
https://github.com/KitwareMedical/Slicer-CameraPath ↩
https://github.com/SlicerMorph/SlicerMorph/blob/master/Animator/Animator.py#L277-L434 ↩