the cube is not where it should be #17

Open
pocce90 opened this Issue Jun 8, 2017 · 10 comments

@pocce90

pocce90 commented Jun 8, 2017

Hello, I've tried the single scene, and when I start it on my HoloLens the cube is positioned 5-6 cm above and 5-6 cm closer to me than the marker. I've checked the marker size and the settings in the app, and they are correct: 80 mm.

@tsteffelbauer

tsteffelbauer commented Jun 14, 2017

You could try calibrating your HoloLens camera with this repository: https://github.com/qian256/HoloLensCamCalib

@qian256

Owner

qian256 commented Jun 23, 2017

Hi @pocce90
It is very common for the virtual object not to align well with the marker, because we never calibrate between the coordinate system of the HoloLens and the coordinate system of the tracking camera.
The magic functions in the ARUWPMarker.cs script are the place where you can tune the tracking transformation to match the display transformation (for alignment). Currently, you have to supply the numbers in the magic function manually.

@araujokth

araujokth commented Jun 27, 2017

Hi @qian256, I finally found some nice parameters for the magic functions for my HoloLens, which work very well for distances between the HoloLens and the object of up to 60-70 cm. No rotation adjustment was needed, only a translation of [0.002, 0.04, 0.08]. However, beyond 1 meter the error between the virtual and real object increases quite a lot. I guess this may be due to tracking errors from ARToolKit, IPD error, among others? Did you also experience that in the experiments for your paper?
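For anyone trying the same, a translation-only correction like this amounts to left-multiplying a 4x4 homogeneous matrix onto the tracked pose. Below is a minimal numpy sketch; the actual code lives in the magic functions of ARUWPMarker.cs and uses Unity's Matrix4x4 types, so this is only an illustration of the math, and the tracked pose used here is made up.

```python
import numpy as np

def magic_translation(offset):
    """4x4 homogeneous matrix that translates by `offset` (meters)."""
    m = np.eye(4)
    m[:3, 3] = offset
    return m

# Hypothetical pose reported by the tracker: identity rotation,
# marker 0.5 m in front of the camera.
tracked_pose = np.eye(4)
tracked_pose[:3, 3] = [0.0, 0.0, 0.5]

# Hand-tuned correction from this thread: [0.002, 0.04, 0.08].
magic = magic_translation([0.002, 0.04, 0.08])
corrected = magic @ tracked_pose  # offset is added to the tracked translation
```

Since both matrices here have identity rotation, the corrected translation is simply the sum of the two offsets, i.e. [0.002, 0.04, 0.58].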

@qian256

Owner

qian256 commented Jun 27, 2017

@araujokth, good to hear that you found working parameters.
The calibration in the paper is also restricted to a volume: the result favors that volume and gets progressively worse outside of it. The mapping between the 3D tracking space and the 3D display space may not be purely affine or perspective; there is distortion as well, including distortion from the camera tracking parameters.

@araujokth

araujokth commented Jun 27, 2017

@qian256 makes sense! I guess that will be good material for your next paper? :)

@pocketmagic

pocketmagic commented Jun 30, 2017

Hi @qian256, we do not understand what the magicMatrix in the magic functions means, or how to adjust the magicMatrix.

@RaymondKen

RaymondKen commented Jul 3, 2017

Hi @qian256, same as @pocketmagic, I don't understand what magicMatrixx1 means, or how we can adjust it.

@tsteffelbauer

tsteffelbauer commented Jul 5, 2017

The magicMatrix is a rotation matrix (https://en.wikipedia.org/wiki/Rotation_matrix) that is applied to the transformation to reduce the error manually. Look up which entry affects which transformation parameter and edit the matrix according to the error you see.
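To see which entries do what, here is a small numpy sketch of a rotation about a single axis; the 2-degree angle is hypothetical, and the values you would put in your own magicMatrix depend entirely on your device's error.

```python
import numpy as np

def rot_x(deg):
    """4x4 homogeneous rotation about the x axis by `deg` degrees."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    m = np.eye(4)
    m[1, 1], m[1, 2] = c, -s
    m[2, 1], m[2, 2] = s, c
    return m

# A small pitch correction: under a +2 degree rotation about x, a point
# 1 m in front of the camera moves slightly in -y (the c/s entries in
# rows 1 and 2 are what couple the y and z coordinates).
p = np.array([0.0, 0.0, 1.0, 1.0])
corrected = rot_x(2.0) @ p
```

Rotations about y and z follow the same pattern, with the cosine/sine entries placed in the other two rows and columns.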

@araujokth

araujokth commented Jul 5, 2017

Hi @tsteffelbauer, both magic matrices are 3D transformation matrices (https://en.wikipedia.org/wiki/Transformation_matrix), since they perform both a translation (magicMatrix1) and a rotation (magicMatrix2); of course, one could do the operation in the MagicFunction in a single step using a single transformation magicMatrix instead.

From my experience, I would not recommend doing this manually by looking at the error you see, since that takes quite some tedious time, especially if a rotation is needed; it is quicker to implement a calibration method and do it properly. I would suggest implementing the method described in @qian256's paper https://arxiv.org/abs/1703.05834 since it's quite quick to implement.
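As a sanity check on the "single matrix" point: homogeneous transforms compose by multiplication, so the two corrections can be pre-multiplied into one matrix once and reused every frame. A numpy sketch (the matrix names follow this thread, not the actual ARUWPMarker.cs code, and all values are placeholders):

```python
import numpy as np

def translation(offset):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = offset
    return m

def rotation_z(deg):
    """4x4 homogeneous rotation about the z axis."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    m = np.eye(4)
    m[0, 0], m[0, 1] = c, -s
    m[1, 0], m[1, 1] = s, c
    return m

magicMatrix1 = translation([0.002, 0.04, 0.08])  # translation correction
magicMatrix2 = rotation_z(1.0)                   # rotation correction
single_magic = magicMatrix1 @ magicMatrix2       # combined, computed once

# Applying the combined matrix to any pose gives the same result as
# applying the two corrections in sequence.
pose = translation([0.0, 0.0, 0.5])
assert np.allclose(single_magic @ pose, magicMatrix1 @ (magicMatrix2 @ pose))
```

Note that the order matters: translating then rotating is not the same as rotating then translating, so the combined matrix must preserve whichever order the magic function uses.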

@danilogr

danilogr commented Jul 17, 2018

Hello everyone,
I am dealing with the same issue of misaligned holograms. The issue stems from ARUWPVideo.cs using Windows.Media.Capture (C#) to receive video frames. As far as I can tell from the Microsoft documentation, this API does not provide a CameraViewTransform per frame. As a result, and in accordance with what @qian256 said before, the ARToolKit coordinates are in the locatable camera's space and not in the app's space.

Solving this problem might be a little involved, as one would have to rewrite ARUWPVideo.cs from the WinRT/C++ side of things, exporting the CameraViewTransform of each frame and applying it to the marker coordinates.

Also, I don't believe that per-user calibration is needed unless you are fine-tuning for a specific application and viewpoint (e.g., surgical procedures). I've been fairly successful with Vuforia for various marker sizes. All in all, if anyone is willing to improve that part of things, this repository has some sample code capturing frames and applying transforms to the locatable camera.
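If I follow correctly, the proposed fix boils down to one extra matrix multiply per frame: take the marker pose that ARToolKit reports in camera space and compose it with that frame's camera-to-world transform. A hypothetical numpy sketch of just the math (the real fix would use the per-frame transforms from the WinRT media APIs, and a view transform maps world to camera, so it may need inverting first):

```python
import numpy as np

# Marker pose reported by ARToolKit, expressed in the camera's space:
# identity rotation, 0.5 m in front of the camera (made-up values).
marker_in_camera = np.eye(4)
marker_in_camera[:3, 3] = [0.0, 0.0, 0.5]

# Hypothetical per-frame camera-to-world transform (the inverse of a
# view transform): camera 1.6 m above the app's world origin.
camera_to_world = np.eye(4)
camera_to_world[:3, 3] = [0.0, 1.6, 0.0]

# Composing the two expresses the marker pose in app/world space,
# so the hologram stays put even as the camera moves.
marker_in_world = camera_to_world @ marker_in_camera
```

The key point is that this transform changes every frame as the head moves, which is why a constant hand-tuned magic matrix can only ever be correct for a limited volume.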
