Calibration between Unity space and DSLR or HoloLens Camera and DSLR #322

Closed
felixvh opened this issue Feb 15, 2019 · 4 comments

@felixvh

felixvh commented Feb 15, 2019

I have some misalignment between my two coordinate systems. Is the calibration done between Unity space and the DSLR, or between the HoloLens webcam and the DSLR? In the second case, I would need to apply another transformation, right?

@chrisfromwork
Contributor

Hello,

The calibration process does a few things:

  1. We generate intrinsic camera parameters for both the HoloLens PV camera and the DSLR camera. Intrinsic parameters include things like focal lengths and principal points (https://en.wikipedia.org/wiki/Camera_resectioning).
  2. After generating intrinsic camera parameters, we calculate the extrinsic/physical transform between the two cameras.
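
For reference, the intrinsic parameters describe how a 3D point in the camera's own coordinate frame maps to pixel coordinates. A minimal sketch of that mapping (plain Unity C# for illustration, not code from this repo; a pinhole model with no distortion is assumed):

    using UnityEngine;

    // fx/fy are focal lengths in pixels, (cx, cy) is the principal point in pixels.
    public static class PinholeIntrinsics
    {
        public static Vector2 Project(Vector3 pointInCameraSpace, float fx, float fy, float cx, float cy)
        {
            // Standard pinhole projection: divide by depth, scale by focal length, offset by the principal point.
            float u = fx * pointInCameraSpace.x / pointInCameraSpace.z + cx;
            float v = fy * pointInCameraSpace.y / pointInCameraSpace.z + cy;
            return new Vector2(u, v);
        }
    }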

When using this calibration information, the HoloLens drives the Unity camera location. This is based on the HoloLens's perceived location in the world, which is not equal to the actual physical location of the PV camera used during calibration, so some error is introduced by this assumption. In an ideal world, the physical transform for the DSLR camera would combine the transform from the HoloLens's perceived self to the physical PV camera with the transform from the PV camera to the DSLR camera. This error is likely small enough not to detract from the end filming experience, but may be worth fixing long term if information is available on the physical location of the PV camera relative to the HoloLens's position tracking.
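
To make that composition concrete, here is a hedged sketch (illustrative names, not the repo's API) of how the ideal DSLR pose would be built up in Unity, assuming all three transforms were available:

    using UnityEngine;

    public static class DslrPoseSketch
    {
        // holoLensToWorld: pose Unity assigns from the HoloLens's perceived location.
        // pvToHoloLens:    physical offset of the PV camera from that tracked pose (the term currently ignored).
        // dslrToPv:        extrinsic transform between the PV camera and the DSLR produced by calibration.
        public static Matrix4x4 ComposeDslrToWorld(Matrix4x4 holoLensToWorld, Matrix4x4 pvToHoloLens, Matrix4x4 dslrToPv)
        {
            // Unity matrices compose right to left: DSLR space -> PV camera space -> HoloLens frame -> Unity world.
            // Treating pvToHoloLens as identity reproduces the small error described above.
            return holoLensToWorld * pvToHoloLens * dslrToPv;
        }
    }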

Is there more information on what sort of misalignment you are seeing? Calibration has been a major pain point for many of the contributors/users of Spectator View Pro. Could you share the CalibrationData.txt you generated so that we can take a look at the transforms on our end?

@felixvh
Author

felixvh commented Mar 8, 2019

Thank you for your reply!

I was wondering whether the transform between the physical location of the PV camera and the HoloLens's position in Unity is actually available. At least, I tried obtaining it with the code below:

    // Pose of the PV camera in Unity world space when the photo was captured.
    Matrix4x4 cameraToWorldMatrix;
    photoCaptureFrame.TryGetCameraToWorldMatrix(out cameraToWorldMatrix);
    // Pose of the HoloLens object in Unity world space.
    Matrix4x4 HoloLensToWorldMatrix = Matrix4x4.TRS(HoloLens.transform.position, HoloLens.transform.rotation, HoloLens.transform.lossyScale);
    // PV camera pose relative to the HoloLens's tracked position.
    Matrix4x4 cameraToHoloLensMatrix = HoloLensToWorldMatrix.inverse * cameraToWorldMatrix;

Nevertheless, the results did not really make sense to me...

Regarding your question: I am trying to calibrate a HoloLens and a RealSense D415. I will attach the calibration file. What I realized is that the intrinsic parameters of the D415 calculated by the calibration app do not match the ones I get from the manufacturer. See below:

Manufacturer:
[screenshot of the manufacturer-provided D415 intrinsics]
Calibration app:
DSLR camera Matrix: fx, fy, cx, cy:
DSLR_camera_Matrix: 1151.68, 1158, 674.547, 339.866

CalibrationData.txt

@chrisfromwork
Contributor

It is possible to obtain camera extrinsic information/the physical location of the PV camera. However, it doesn't appear to be possible to obtain this information when grabbing a PV camera frame via the Mixed Reality Capture REST API. If you build your own logic for handing frames from the HoloLens to the calibration app, you could access the information in a similar manner to here:
https://github.com/Microsoft/MixedRealityToolkit-Unity/blob/41e3feaddb26c1d20c3a6571b7d3500604a314a6/Assets/MixedRealityToolkit.Extensions/SpectatorView/Scripts/Utilities/HoloLensCamera.cs#L1655
In this HoloLensCamera wrapper we obtain both PV camera extrinsic information (physical location) and intrinsic information (focal length, principal point, etc.). Both of these values could be used in calibration and would make for a more accurate Spectator View Pro experience. On our end, we are going to look into improvements we can make to both calibration and Spectator View Pro, but it's unclear whether we will be able to invest in this soon given other HoloLens 2 related work. Any changes also likely won't align 1:1 with the code in this repo, as we are trying to refactor and consolidate both the Spectator View Pro (DSLR based) and Spectator View Preview (iOS based) codebases.
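
If you do end up owning the frame pipeline yourself, here is a hedged sketch of where that information lives in the UWP media capture APIs (this is an assumption about your setup, not the HoloLensCamera wrapper itself; the world coordinate system is assumed to come from your app's spatial locator or anchor):

    using System.Numerics;
    using Windows.Media.Capture.Frames;
    using Windows.Media.Devices.Core;
    using Windows.Perception.Spatial;

    public static class PvFrameInfo
    {
        public static void ReadFrame(MediaFrameReference frame, SpatialCoordinateSystem worldCoordinateSystem)
        {
            // Intrinsics: focal length and principal point in pixels, plus distortion terms.
            CameraIntrinsics intrinsics = frame.VideoMediaFrame.CameraIntrinsics;
            Vector2 focalLength = intrinsics.FocalLength;
            Vector2 principalPoint = intrinsics.PrincipalPoint;

            // Extrinsics: where the PV camera sat relative to the world coordinate system for this frame.
            Matrix4x4? cameraToWorld = frame.CoordinateSystem?.TryGetTransformTo(worldCoordinateSystem);
        }
    }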

Note: the above class is written to work in both a UWP C# app and in Unity. It is more heavily used/tested with Unity, so you may hit some issues if you choose to build a UWP app for obtaining and relaying camera frames from the HoloLens to the calibration app.

The manufacturer-provided focal length and principal point will be more accurate than the values you obtain with OpenCV. The calibration app roughly supports providing your own intrinsic information, which I would suggest doing to try to get a more accurate calibration. I would also suggest some changes based on the richness of the information you have:

  1. Set the following flag to true:
    https://github.com/Microsoft/MixedRealityCompanionKit/blob/efa41f37519aafa833ad0687d01a7375b4e94711/SpectatorView/Calibration/Calibration/stdafx.h#L27

  2. Then go in and update the camera intrinsic information to use your own focal length and principal point
    https://github.com/Microsoft/MixedRealityCompanionKit/blob/efa41f37519aafa833ad0687d01a7375b4e94711/SpectatorView/Calibration/Calibration/CalibrationApp.cpp#L503
    The calibration app is not in the best shape for this; it only allows a single focal length and defaults to a principal point in the center of the image, so you should write your own logic here to generate an intrinsic matrix. See https://en.wikipedia.org/wiki/Camera_resectioning for more information, specifically the intrinsic parameter matrix definition 'K'. You will need to bring the focal length and principal point values into the [0, 1] range by dividing by the corresponding frame dimension component (scaled focal length x = focal length x / 1920, etc.). Then multiply these scaled focal length and principal point values by the corresponding HoloLens frame dimension (final focal length x = scaled focal length x * HoloLens frame width, etc.), since that is the dimension of the images used during calibration (DSLR camera images are resized to match the HoloLens image for easier processing). See the sketch after this list for the arithmetic.

  3. OpenCV calculates its own values for distortion coefficients during this processing, but you could provide your own. I'm not sure whether the OpenCV distortion coefficients will be more accurate than the ones you have from the manufacturer (all zeros would suggest distortion wasn't actually calculated for the device).

  4. From here normal processing should work with the updated intrinsic information and hopefully you'll get better calibration results.
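
As a concrete illustration of the rescaling in step 2, here is a small sketch of the arithmetic (written in C# purely for illustration; the calibration app itself is C++, and the frame dimensions are placeholders for whatever resolutions your DSLR intrinsics and HoloLens frames actually use):

    public static class IntrinsicsRescaler
    {
        // fx/fy/cx/cy: focal lengths and principal point, in pixels, at the resolution the
        // manufacturer intrinsics were specified for (dslrWidth x dslrHeight).
        public static (float fx, float fy, float cx, float cy) RescaleToHoloLensFrame(
            float fx, float fy, float cx, float cy,
            float dslrWidth, float dslrHeight, float holoWidth, float holoHeight)
        {
            // Normalize into [0, 1] by dividing by the corresponding DSLR frame dimension...
            float sfx = fx / dslrWidth, sfy = fy / dslrHeight;
            float scx = cx / dslrWidth, scy = cy / dslrHeight;

            // ...then scale back up by the HoloLens frame dimensions, since the DSLR images
            // are resized to the HoloLens image size during calibration.
            return (sfx * holoWidth, sfy * holoHeight, scx * holoWidth, scy * holoHeight);
        }
    }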

@felixvh
Author

felixvh commented Mar 11, 2019

Thanks a lot for the detailed answer! I will give it a try.

@felixvh felixvh closed this as completed Mar 11, 2019