
Easy entry point for DLC users #15

Closed
MMathisLab opened this issue Mar 5, 2020 · 10 comments

@MMathisLab

MMathisLab commented Mar 5, 2020

Is your feature request related to a problem? Please describe.
Hi @lambdaloop! Thanks for this great package. We are hoping to have better integration between DLC and Anipose for DLC users (see here: https://github.com/AlexEMG/DeepLabCut/blob/master/docs/Overviewof3D.md#more-than-2-camera-support)

One issue our users are having is finding an "easy entry" point. Right now, your docs cover setting up 2D and then 3D tracking, but many users already have the 2D tracking done and calibration images ready. I really like the merger of DLC + anipose/calligator, so I want to help/ask to make this more seamless.

My understanding is that your workflow requires users to adopt your file structure, which is a bit rigid. Would it be possible to write docs such that a user could jump in at this point: https://github.com/lambdaloop/anipose/blob/master/docs/github/start_3d.md#calibration-marker-configuration // https://anipose.readthedocs.io/en/latest/tutorial.html#calibrating-the-cameras

i.e. minimally: right now, after the 3D setup there is no link to your nice readthedocs.io demo (https://anipose.readthedocs.io/en/latest/tutorial.html#calibrating-the-cameras)

i.e. a more optimal solution would be a simple Jupyter Notebook demo where the user navigates to their DLC project folder, sets up a simple config file for your system, and can then run:

```
anipose calibrate
anipose triangulate
anipose label-3d
```

Also, for each of these functions there are no easy-to-read docstrings (at least, I cannot find any). For example, it is not clear what each function requires. (What file type, format, etc. is points?)

triangulate(points, undistort=True, progress=False)
Given a CxNx2 array, this returns an Nx3 array of points, where N is the number of points and C is the number of cameras

Describe the solution you'd like
Ideally, one could come in and use your functions (e.g. in https://github.com/lambdaloop/calligator; some listed below) with a clear entry point, i.e.

import anipose (or calligator directly?)

then use your calibration functions: https://calligator.readthedocs.io/en/latest/api.html#module-calligator.boards
anipose calibrate

then your triangulation functions:

anipose triangulate

triangulate(points, undistort=True, progress=False)
Given a CxNx2 array, this returns an Nx3 array of points, where N is the number of points and C is the number of cameras

triangulate_possible(points, undistort=True, min_cams=2, progress=False, threshold=0.5)
Given a CxNxPx2 array, this returns an Nx3 array of points by triangulating all possible points and picking the ones with the best reprojection error, where C is the number of cameras, N is the number of points, and P is the number of possible options per point

triangulate_ransac(points, undistort=True, min_cams=2, progress=False)
Given a CxNx2 array, this returns an Nx3 array of points, where N is the number of points and C is the number of cameras

reprojection_error
Given an Nx3 array of 3D points and a CxNx2 array of 2D points, where N is the number of points and C is the number of cameras, this returns a CxNx2 array of errors. Optionally, with mean=True, this averages the errors and returns an array of length N

bundle_adjust_iter(p2ds, extra=None, n_iters=10, start_mu=15, end_mu=1, max_nfev=200, ftol=0.0001, n_samp_iter=100, n_samp_full=1000, error_threshold=0.3, verbose=False)
Given a CxNx2 array of 2D points, where N is the number of points and C is the number of cameras, this performs iterative bundle adjustment to fine-tune the camera parameters. That is, it performs bundle adjustment multiple times, adjusting the weights given to points to reduce the influence of outliers. This is inspired by the Fast Global Registration algorithm by Zhou, Park, and Koltun

bundle_adjust(p2ds, extra=None, loss='linear', threshold=50, ftol=0.0001, max_nfev=1000, weights=None, start_params=None, verbose=True)
Given a CxNx2 array of 2D points, where N is the number of points and C is the number of cameras, this performs bundle adjustment to fine-tune the camera parameters
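(For illustration of the array shapes involved: here is a minimal, plain-NumPy sketch of what a `triangulate(points)` call does conceptually — linear DLT triangulation of a CxNx2 array into an Nx3 array. This is not the calligator implementation, and the projection matrices in the usage below are hypothetical.)

```python
# Conceptual sketch of triangulation: CxNx2 pixel detections from C
# cameras -> Nx3 world points, via linear (DLT) triangulation.
# NOT the calligator implementation; for shape/semantics illustration only.
import numpy as np

def triangulate_dlt(points_2d, proj_mats):
    """points_2d: CxNx2 array of pixel coordinates (NaN where undetected).
    proj_mats: list of C 3x4 camera projection matrices.
    Returns an Nx3 array of triangulated 3D points (NaN if < 2 cameras saw it)."""
    C, N, _ = points_2d.shape
    out = np.full((N, 3), np.nan)
    for i in range(N):
        rows = []
        for c in range(C):
            x, y = points_2d[c, i]
            if np.isnan(x):
                continue  # this camera did not detect point i
            P = proj_mats[c]
            # Each observation contributes two linear constraints on X.
            rows.append(x * P[2] - P[0])
            rows.append(y * P[2] - P[1])
        if len(rows) < 4:  # need at least 2 cameras
            continue
        # Homogeneous least squares: null vector of the stacked constraints.
        _, _, vt = np.linalg.svd(np.array(rows))
        X = vt[-1]
        out[i] = X[:3] / X[3]
    return out
```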

Describe alternatives you've considered
We have considered adding n-camera support to DLC ourselves, but we would rather not :)

Additional context
Right now, the documentation makes it hard for us to guide users.

Cheers,
DLC hackathon subgroup "3d, yeah!"

@lambdaloop
Owner

Hello @MMathisLab ,

Thank you for this detailed issue!

As you noticed, we have started revamping the documentation in preparation for an Anipose pre-print release and there is still much work to be done! In particular, you found our WIP Anipose tutorial and Calligator API.

Just so we're on the same page, there will be two options for users integrating their DLC models with anipose/calligator:

  1. Place their files into a folder structure that Anipose expects and process with the Anipose pipeline
  2. Write their own custom data processing by importing calligator and using the python functions directly

If they go with the first route, they would need help with:

  • placing their files into an Anipose structure
  • creating a minimal Anipose config.toml file
  • running commands from the terminal or within a Jupyter notebook
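For the first route, a minimal config.toml might look something like the sketch below. The key names follow the Anipose tutorial, but the values (board geometry, camera regex, paths) are purely illustrative placeholders, and the current docs should be checked for the authoritative set of fields:

```toml
# Illustrative, minimal Anipose config.toml -- all values are placeholders
project = "my-dlc-project"
model_folder = "/path/to/dlc/model"   # the trained DLC network
nesting = 1
video_extension = "avi"

[calibration]
board_type = "checkerboard"
board_size = [11, 8]                  # squares along width, height
board_square_side_length = 25         # mm

[triangulation]
triangulate = true
cam_regex = "cam([0-9]+)"             # extract camera name from filenames
```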

If they go with the second route, they would need:

  • better documentation on each of the calligator functions
  • some tutorial or example script of how to use the calligator functions by themselves

Is that right?

Please let me know if that makes sense, and if there are additional key improvements to the documentation that I missed.

@MMathisLab
Author

MMathisLab commented Mar 8, 2020

That sounds exactly right to me, thanks! Please let us know if we can be helpful - we had lots of discussions about 3D options at the hack-a-thon, and I'm sure some users would be happy to chip in and test workflows; path 2 in particular seemed the preferable option :). Thanks again!

@lambdaloop
Owner

Sounds good! It seems the most pressing missing piece is a Calligator tutorial, then.
We'll put that together in the coming weeks!

@SjoerdBruijn
Contributor

OK, I don't know if this is the right place to ask, but my question seems highly related. We have data collected with Simi, which doesn't do a good job of labelling (or even 3D-reconstructing) the data (actually, it's horrible). We are now thinking of going the anipose/DLC route. Since we already have the calibration for the cameras (done using the Simi software; this part is still kind of OK, and we have all the necessary transforms etc.), could we use these calibrations with data that was labelled in DLC? (We could use anipose for the calibration as well, but we don't have a checkerboard calibration for each measurement, only a wand.)

It would seem that setting these camera parameters (via calligator set_camera_matrix(matrix)), and then using triangulate would work?
Any answer is highly appreciated. I understand from the above that you are working on the documentation, but I would just like to get an idea of whether the way I'm thinking is going to work, or if it will require excessive amounts of rewriting.
Thanks,
Sjoerd

@MMathisLab
Author

In case it's useful: I don't know Simi, but here is helper code for wand-based calibration: https://github.com/DeepLabCut/DLCutils i.e. see https://github.com/DeepLabCut/DLCutils#3d-reconstruction-with-easywandargus-dlt-system-with-deeplabcut-data

@lambdaloop
Owner

@SjoerdBruijn
I don't know Simi, but most tools (including Anipose) use the same camera model, so it's doable to translate across them. The calibration file for Anipose is a toml file, which you could write to based on your Simi parameters. Here's an example file:

```toml
[cam_0]
name = "54138969"
size = [ 1000, 1000,]
rotation = [ 0.4673375748186822, 2.173592643964762, -1.778427743746773,]
translation = [ -219.3059666108619, 544.4787497640639, 5518.740477016156,]
distortions = [ -0.207098910824901, 0.247775183068982, -0.00142447157470321, -0.000975698859470499, -0.00307515035078854,]
matrix = [ [ 1145.04940458804, 0.0, 512.541504956548,], [ 0.0, 1143.78109572365, 515.4514869776,], [ 0.0, 0.0, 1.0,],]

[cam_1]
name = "55011271"
size = [ 1000, 1000,]
rotation = [ 1.75656787919027, 0.3622445290737643, -0.2740327492410326,]
translation = [ 103.9028206775199, 395.6716946895197, 5767.97265758172,]
distortions = [ -0.194213629607385, 0.240408539138292, -0.0027408943961907, -0.001619026613787, 0.00681997559022603,]
matrix = [ [ 1149.67569986785, 0.0, 508.848621645943,], [ 0.0, 1147.59161666764, 508.064917088557,], [ 0.0, 0.0, 1.0,],]
```

Similarly, if you use calligator (now renamed to anipose-lib), you can create a CameraGroup object and update the parameters by calling the appropriate functions.

When converting calibration parameters across systems in the past, the main issue that I found has been the rotation and translation parameters, which may be in a different format.
Anipose uses the OpenCV format for calibration parameters, with rotation specified as an axis rotation vector (can be converted to/from matrix using the Rodrigues function).
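For reference, the conversion that cv2.Rodrigues performs can be sketched in plain NumPy (a hedged illustration of Rodrigues' formula, not OpenCV's implementation):

```python
# Axis-angle ("Rodrigues") rotation vector <-> 3x3 rotation matrix,
# as used by OpenCV-style calibration parameters. Plain-NumPy sketch.
import numpy as np

def rodrigues_to_matrix(rvec):
    """Rotation vector (axis * angle, in radians) -> 3x3 rotation matrix."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta  # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])  # cross-product (skew) matrix
    # Rodrigues' rotation formula
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def matrix_to_rodrigues(R):
    """3x3 rotation matrix -> rotation vector (assumes angle < pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return axis * theta
```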

@lambdaloop
Owner

@SjoerdBruijn
If you figure out how to convert calibration formats from Simi to Anipose, let me know and we can make a tutorial on that together in the Anipose docs.

@SjoerdBruijn
Contributor

Thanks for all the suggestions. I think this will work; I have the data in the attached screenshots (still working out where this is stored in the Simi software). Looks like all we need, right?
We will start working on it, and of course are happy to help in making a tutorial so that others can follow our workflow.

[Screenshots attached: "calibration validation - 3" and "calibration validation - 2"]

@lambdaloop
Owner

lambdaloop commented Mar 19, 2020

@SjoerdBruijn that looks great! It seems that there is enough information there, and the parameters look similar to the OpenCV ones so it's probably the same camera model.
I think the next step would be to figure out how to easily export the numbers for all cameras for easy conversion.

To make the Simi conversion more visible to other users (and also keep this issue conversation focused), I've made a separate issue here for this case: #18

@lambdaloop
Owner

Closing this as the documentation has changed quite a bit since this was written.
