
Calibration issue w KinectV2 #156

Open
legshampoo opened this issue Oct 18, 2017 · 9 comments

@legshampoo

Hello, I've been trying to calibrate two KinectV2s and running into issues getting things to line up correctly. I followed the instructions to time sync using ntp, and my offset values seem acceptable, but I am still getting a 'split frame', resulting in a double detection: basically, the kinects are offset from each other. From what I can tell, the calibration process goes smoothly, but when I view the point clouds in rviz there is always an offset. It seems to be consistent, as in I can calibrate 100 times and the offset is always the same. I have tried different kinect positions/orientations with similar results, and I've also tried switching out the kinects for new ones; still the same. Depending on the kinect locations, the offset can be anywhere from 1 m (when the kinects face each other from across the room) to 1/3 m (when the kinects are on the same side of the room at roughly 60 degrees to each other).

It seems like the same problem that others have reported - splitting due to poor time sync - but no matter what I try, nothing seems to change, regardless of my offset values
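For reference, here is roughly how I'm sanity-checking the offsets on each node (just a sketch: the 1 ms threshold is my own guess at 'acceptable', and the parsing assumes the standard `ntpq -p` column layout):

```python
MAX_OFFSET_MS = 1.0  # my guess at an acceptable offset; tune as needed

def ntp_offsets(ntpq_output):
    """Parse the 'offset' column (milliseconds) from `ntpq -p` output."""
    offsets = []
    for line in ntpq_output.splitlines()[2:]:  # skip the two header lines
        cols = line.split()
        if len(cols) >= 10:
            try:
                offsets.append(float(cols[8]))  # offset is the 9th column
            except ValueError:
                pass
    return offsets

# On each node, feed it e.g. subprocess.check_output(["ntpq", "-p"], text=True)
sample = """\
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*a.ntp.example   .GPS.            1 u   12   64  377    1.234    0.456   0.100
+b.ntp.example   10.0.0.1         2 u   30   64  377    2.000   -5.500   0.200
"""
for off in ntp_offsets(sample):
    print("offset %+7.3f ms  %s" % (off, "OK" if abs(off) <= MAX_OFFSET_MS else "TOO LARGE"))
```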

I am currently using:
1 mac mini - kinect node 1
1 macbook air - kinect node 2
1 macbook pro - master
all machines running Ubuntu 14.04
(we have not purchased production machines yet, so this is just a makeshift setup until those arrive)

Perhaps my issue is due to using Macs without Nvidia GPUs?

At the moment, my solution is to manually adjust the xyz position of one of the kinects in the 'calibration_results.yaml' until I get it to line up, which is pretty tedious. Will this improve the blob tracking? Is there a preferred method for manually tweaking calibration settings if the calibration is off? I will eventually be using 4 kinects so it would be less than ideal to do this for each one.
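In case it helps, this is the kind of helper I've been using to keep the hand-tweaks repeatable (a sketch only: the pose layout below is made up, not the real structure of 'calibration_results.yaml', so the keys would need adapting to the actual file):

```python
# Hypothetical pose entry; the real YAML layout will differ.
pose = {
    "translation": {"x": 1.20, "y": 0.00, "z": 2.45},
    "rotation": {"x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0},
}

def nudge(pose, dx=0.0, dy=0.0, dz=0.0):
    """Return a copy of the pose with its translation shifted by (dx, dy, dz)."""
    t = dict(pose["translation"])
    t["x"] += dx
    t["y"] += dy
    t["z"] += dz
    return {"translation": t, "rotation": dict(pose["rotation"])}

# e.g. pull the second kinect back ~1 m along x to close the observed gap:
adjusted = nudge(pose, dx=-1.0)
print(adjusted["translation"])
```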

Here are some screenshots

Any ideas? Thanks in advance!

screenshot from 2017-10-18 14 28 27
screenshot from 2017-10-18 14 39 08
screenshot from 2017-10-18 15 05 05
screenshot from 2017-10-18 16 21 15

@nanodust
Collaborator

Theoretically Mac USB 3.0 should work, since the Kinect SDK works on my dual-boot Windows machine, but I can't speak to native Ubuntu on a MacBook. That said, I'm surprised it's fast enough to process; I thought a GPU was required by the Kinect 2 Linux driver.

Questions:

  • re: kinect node 1 - did you do intrinsic calibration for kinect 1 before extrinsic for the system ?
    can assume intrinsic is not necessary for kinect 2 - but for kinect 1, it can help.

  • did you run calibration refinement ?

The point clouds from the different Kinect models might still appear split, but refinement will vastly improve, if not eliminate, splitting in the tracking itself, regardless of Kinect model.

@jburkeucla
Contributor

jburkeucla commented Oct 18, 2017 via email

@legshampoo
Author

legshampoo commented Oct 19, 2017

re: kinect node 1 - did you do intrinsic calibration for kinect 1 before extrinsic for the system ?
can assume intrinsic is not necessary for kinect 2 - but for kinect 1, it can help.

I have not done intrinsic calibration on either KinectV2. I will try that and see if it helps.

did you run calibration refinement ?

I thought that I did. Is that the step where you walk around the coverage zone and it generates the tracking image, as seen in the screenshot with the green and pink lines? Is it typical for the lines to not match up in the image that's generated?

My biggest question at the moment: is the 'manual calibration adjustment' workaround sufficient to get accurate tracking? I just want to be sure that it's not simply a cosmetic adjustment, i.e. that what I see in rviz is in fact the same point cloud that is being tracked, if that makes sense.
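To make the question concrete: my understanding is that the calibrated pose is just a rigid transform applied to each sensor's points before anything downstream sees them, so a residual extrinsic error shows up as a constant world-space offset (toy sketch with hypothetical numbers):

```python
import math

def to_world(p, yaw, t):
    """Rigid transform: rotate about z by `yaw` radians, then translate by t."""
    x, y, z = p
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y + t[0], s * x + c * y + t[1], z + t[2])

point = (2.0, 0.5, 1.0)  # one physical point, seen by both sensors
a = to_world(point, 0.0, (0.0, 0.0, 0.0))  # kinect A extrinsics (correct)
b = to_world(point, 0.0, (1.0, 0.0, 0.0))  # kinect B extrinsics, 1 m off in x
print(a, b)  # the constant 1 m gap is the 'split' visible in rviz
```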

The reason I ask is that 'camera_poses.yaml' is copied to each node, but I am only changing the values in 'opt_calibration_results.launch' on the master machine. Do I have to adjust the values in both files?

@nanodust
Collaborator

I have not done intrinsic calibration on either KinectV2. I will try that and see if it helps.

To clarify: intrinsic calibration is only recommended for the Kinect v1 (regular USB).

The newer Kinect v2 (USB 3.0) does not typically require intrinsic calibration.

Since you have two Kinect v2s, you likely do not need to run intrinsic calibration on them.

Indeed! I see - you did run calibration refinement. Unfortunately, you ended up with a far worse result after refinement :(
A good refinement will 'fuse' the different colors into a solid grid (see the reference image); yours has done the reverse.

Observing your initial refinement tracks, a few tips:

  • make sure only one person is detected in the space during the refinement process (including spurious detections - detection must be well calibrated prior to refinement)
  • never step in the same place twice from the same direction
  • I suggest not crossing over like you're doing (the 'x' pattern); better to walk a grid as in the reference image. Walk at a steady, constant pace, as densely as possible for the space you're in.

'manual calibration adjustment' workaround sufficient to get accurate tracking

I've never had to edit the calibration files manually to get a system working.

@nanodust
Collaborator

nanodust commented Oct 19, 2017

Looking more closely - I will say that you have a challenging space: there are a lot of dynamic objects on the perimeter (bicycles, boxes, TV, chairs) and a lot of people in the background. Not necessarily a problem, since one can avoid spurious detection with normal tuning per the docs, though it would be challenging to use background subtraction in your space.
Without background subtraction, it may help to limit the detection distance of the sensors in the configs so that you're not getting spurious tracks from objects in the background. If the 'box' in your refinement image is the perimeter of the open area, then the detections beyond that box during refinement are problematic.
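Conceptually, the range cap is just dropping points beyond a maximum distance from the sensor before detection runs; a sketch of the idea (the actual OPT config key will differ, check the docs):

```python
def clip_range(points, max_range_m=4.5):
    """Keep only points within max_range_m metres of the sensor origin."""
    return [(x, y, z) for (x, y, z) in points
            if (x * x + y * y + z * z) ** 0.5 <= max_range_m]

cloud = [(1.0, 0.2, 2.0),   # person in the open area
         (0.5, 0.1, 6.0),   # background clutter ~6 m out
         (3.0, 3.0, 2.0)]   # also beyond the cap
print(clip_range(cloud))    # only the near point survives
```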

Also, have you tried tracking the space with just one Kinect 2? I would bet it does a fine job with just the one facing the booths. Right now your fields of view cover the same space, so you're not getting much out of the 2nd camera.

  • also, note the point clouds will not fuse any better than before - refinement only affects the actual tracks.

@legshampoo
Author

Thanks for the feedback - wanted to follow up on this issue.

The setup is now in a dedicated, empty space without all the complications from earlier (bikes, people, etc). However, the calibration issues remain. When I calibrate, the kinects are consistently about 1 meter off from each other. The calibration refinement process still yields the same results (nothing matches, and it actually makes things much worse).

I have been working around this by manually adjusting the Kinects' x, y, z coordinates in opt_calibration_results.launch until the point clouds visually match up.

The data is being sent to a node.js app and being visualized in a simple html canvas rendering.

The issue I'm having now is that the tracked positions (x, y, z) have a 'jitter'. You can see some of it in rviz, and it becomes very noticeable once amplified to the scale of our project in the node app. It seems as if the kinects are fighting each other to determine the 'true' centroid, so the centroid is constantly shifting slightly: when someone is standing perfectly still, their centroid jitters erratically.

I have tried adjusting the settings in moving_average_filter.yaml. Increasing the window size helps to reduce the jitter, but it creates a delay that is unacceptable for our requirements.
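For context on that trade-off: a moving average of window N lags the true position by roughly (N-1)/2 frames, while an exponential filter can smooth comparably with less delay. A quick sketch of the exponential version (just an illustration outside OPT, not what moving_average_filter.yaml actually implements):

```python
class ExpFilter:
    """Exponential smoothing: higher alpha tracks faster (less lag, less smoothing)."""
    def __init__(self, alpha=0.4):
        self.alpha = alpha
        self.state = None

    def update(self, x):
        if self.state is None:
            self.state = x  # initialise on the first sample
        else:
            self.state += self.alpha * (x - self.state)
        return self.state

f = ExpFilter(alpha=0.4)
jittery_x = [0.0, 0.1, -0.1, 0.1, 0.0, -0.1]  # centroid x while standing still
print([round(f.update(x), 3) for x in jittery_x])
```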

So I'm still wondering why my calibration is off (but consistently 'off' the same amount), and why the calibration refinement has the opposite effect that it should.

Also, is this 'jitter' to be expected? If the calibration was working correctly would it go away?

Thanks again for any insight

@jburkeucla
Contributor

Yes, it seems calibration is not working correctly. Standard issues are non-rigid checkerboards, lots of ambient / changing light, and odd floor reflections causing Kinect noise, but these don't seem apparent in your photos. Did you update the calibration file with the physical measurements of your checkerboard dimensions?
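The measurement matters because the printed square size is what sets the metric scale of the whole calibration: the solver fits against board corner positions generated from that single number, roughly like this (placeholder dimensions, not your board's):

```python
def board_corners(cols, rows, square_size_m):
    """Inner-corner coordinates of a checkerboard in its own frame (z = 0)."""
    return [(c * square_size_m, r * square_size_m, 0.0)
            for r in range(rows) for c in range(cols)]

pts = board_corners(cols=6, rows=5, square_size_m=0.12)
print(len(pts), pts[-1])  # a wrong square size rescales every corner, hence every pose
```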

@legshampoo
Author

Yeah, I printed the checkerboard at scale on rigid foam board and measured it. As far as I can tell the checkerboard dims are correct (using the defaults), but I will check that again. The new space is controlled: empty room, blacked-out windows, wood floor. Nothing I can think of that would cause noise.

If it helps I can provide updated details/photos/screenshots of the setup (the images above were from an earlier iteration).

Are you saying that when the calibration is working the jitter is not present?

@jburkeucla
Contributor

You shouldn't see much jitter in normal circumstances. The fact that calibration refinement is not helping is a signal that something is odd. Updated screenshots would help; I can share them with the developers and see what they think. Also send the output of ntpq -p on all nodes, if possible.
