
How to measure (real-life measurement) the dimensions of an object? #569

Open · salemgithub-box opened this issue Apr 29, 2022 · 10 comments

@salemgithub-box

Greetings,
Community Discussion Link

Is there a way to measure the length and width of objects we commonly have in a room, like a chair or a book?

@Luxonis-Brandon
Contributor

Yes, actually @tersekmatija has some examples/attempts at this. The accuracy isn't great yet, though.

[image attachment]

That said, we're actively working on it.

@conorsim

conorsim commented May 3, 2022

Hey! Yes, we are currently exploring methods for this. The most promising method we have right now uses a combination of ML and stereo. At a high level, the algorithm is as follows (a rough sketch is included after the list):

  1. Take any black-box keypoint detector to detect the corners and center of an object (e.g. Objectron).
  2. Use PnP to find the 6DoF pose of the 3D bounding box, using an object coordinate frame similar to that of Objectron. The details of this method can also be found in the Objectron paper and some code from Mediapipe here.
  3. Use stereo depth to scale the recovered box to metric dimensions and recover the base, width, and height of the box (9 DoF).
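
For illustration, here is a minimal, hypothetical sketch of steps 2 and 3. It assumes you already have the nine 2D keypoints, the camera matrix, and some way to look up stereo depth at a pixel; the `depth_at` helper and the scale-from-center-depth trick are assumptions for the sketch, not our actual implementation:

```python
# Hypothetical sketch: solvePnP on detected 2D keypoints against a unit 3D box
# (Objectron-style object frame), then use stereo depth to fix the metric scale.
import numpy as np
import cv2

# Canonical unit box: center + 8 corners in the object frame (Objectron-like ordering).
UNIT_BOX = np.array([
    [0, 0, 0],
    [-0.5, -0.5, -0.5], [-0.5, -0.5, 0.5], [-0.5, 0.5, -0.5], [-0.5, 0.5, 0.5],
    [0.5, -0.5, -0.5], [0.5, -0.5, 0.5], [0.5, 0.5, -0.5], [0.5, 0.5, 0.5],
], dtype=np.float32)

def box_pose_and_size(keypoints_2d, K, depth_at):
    """keypoints_2d: (9, 2) pixel coords from any keypoint detector (center first).
    K: 3x3 camera matrix. depth_at(u, v): stereo depth in meters at a pixel (assumed helper)."""
    ok, rvec, tvec = cv2.solvePnP(
        UNIT_BOX, keypoints_2d.astype(np.float32), K, None,
        flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    # PnP against a unit box recovers the pose only up to scale; use the measured
    # stereo depth at the box center to fix the metric scale.
    u, v = keypoints_2d[0]
    scale = depth_at(int(u), int(v)) / tvec[2].item()
    tvec_metric = tvec * scale
    # A single global scale keeps the box a cube (side = scale); Objectron-style
    # methods additionally recover per-axis sizes, which is where the 9 DoF come from.
    size = np.full(3, scale)
    return rvec, tvec_metric, size
```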

Currently, we are working on open-sourcing our work related to this. I can update you when we have some code available for this task!

@Erol444
Member

Erol444 commented May 3, 2022

Cross-posting the same question from discuss.

@robotaiguy

If you're staying within the OAK-D box for this, could you calibrate with ChArUco targets using a combination of stereo vision and edge detection? I'm not sure if we have a Canny blob available, but I'm hoping we could use the intrinsics from the device, and maybe some high-quality ChArUco targets. Like maybe 80 mm squares with a 4x4 or 5x5 bit dictionary of ArUcos. And maybe use a 100-marker dictionary but only use about 30 of them, skipping IDs in between to get a more disparate collection. Print them out onto sheets of glass or something rigid and FLAT like that. Oh, and I've also used some UV spray to reduce outside glare... a colleague of mine made a set that he somehow chemically etched into metal. It was still binary enough for calibration, but no glare outdoors at all. One thing I've learned is to get them printed with a very high DPI so the internal corners are CRISP. But then you'd probably always need some kind of fiducials in view to frequently recalibrate.
I currently measure and locate joint seams to ±3 mm on large pipes (30-ish feet long and 5-ish inches in diameter) from 12 feet away, outdoors, with dual 5 MP GigE Vision cameras with pretty crappy 2 MP-rated lenses.
Of course, the sensors on our OAKs are probably not quite as rectilinear, and I would expect to lose a great deal of FOV from dewarping.
What kind of accuracy are you looking for?
This is why I LOVE this stuff. Because it's so challenging, nothing ever really works great, but it's super cool, and EVERY day I get to go to work and have NO IDEA how to do what they want me to do. Just constant absorption of information and testing.
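
For what it's worth, a minimal ChArUco detection/pose sketch with OpenCV's aruco module (assuming the legacy opencv-contrib-python API; the board geometry and dictionary below are illustrative, not the exact targets described above):

```python
# Rough sketch: detect ArUco markers, refine to ChArUco corners, estimate board pose.
import cv2
import numpy as np

# Illustrative board: 5x7 squares, 80 mm squares, 60 mm markers, DICT_4X4_100.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_100)
board = cv2.aruco.CharucoBoard_create(5, 7, 0.080, 0.060, dictionary)

def charuco_pose(gray, K, dist):
    """gray: grayscale frame; K, dist: device intrinsics and distortion coefficients."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None or len(ids) < 4:
        return None
    # Refine marker detections into chessboard (ChArUco) corners for sub-pixel accuracy.
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
    if n is None or n < 4:
        return None
    rvec, tvec = np.zeros((3, 1)), np.zeros((3, 1))
    ok, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(
        ch_corners, ch_ids, board, K, dist, rvec, tvec)
    return (rvec, tvec) if ok else None
```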

@salemgithub-box
Author

salemgithub-box commented Jul 1, 2022

> If you're staying within the oak-d box for this, could you calibrate with charuco targets using a combination of stereo vision and edge detection? [...] What kind of accuracy are you looking for?

@robotwhispering
I am looking for a 2 cm to 3 cm error. I tried to measure objects using the OAK-D's depth map; the results are acceptable if the object is literally parallel to the camera. In my case, objects won't necessarily be parallel to the camera.

The image below shows an attempt to measure a mobile phone; the actual measurement is 16 cm. The user defines two areas at the far ends of the object, and from the averaged spatial coordinates we calculate the distance. I am using OpenCV and DepthAI.
[image: DepthMeasurement]
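
A simplified sketch of the approach (not the exact script; the intrinsics fx, fy, cx, cy are assumed known): average the stereo depth inside each ROI, back-project the ROI centers to 3D camera coordinates, and take the Euclidean distance between the two points.

```python
# Rough sketch of the two-ROI measurement described above.
import numpy as np

def roi_to_xyz(depth_m, roi, fx, fy, cx, cy):
    """depth_m: depth frame in meters; roi: (x0, y0, x1, y1) in pixels."""
    x0, y0, x1, y1 = roi
    patch = depth_m[y0:y1, x0:x1]
    z = np.median(patch[patch > 0])           # ignore invalid (zero) depth pixels
    u, v = (x0 + x1) / 2.0, (y0 + y1) / 2.0   # ROI center in pixels
    x = (u - cx) * z / fx                     # back-project with the pinhole model
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def object_length(depth_m, roi_a, roi_b, fx, fy, cx, cy):
    p_a = roi_to_xyz(depth_m, roi_a, fx, fy, cx, cy)
    p_b = roi_to_xyz(depth_m, roi_b, fx, fy, cx, cy)
    return np.linalg.norm(p_a - p_b)          # metric distance between the two points
```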

@SzabolcsGergely
Collaborator

SzabolcsGergely commented Jul 4, 2022

@salemgithub-box
Hello, the depth map contains the Z distance, so that's why it works when the object is parallel to the camera.
What you are looking for is the Euclidean distance, a.k.a. the "real-world distance", between the 2 points, which is sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2).
See here
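
A tiny numeric illustration of the difference (the point coordinates are made up):

```python
# For a tilted object, the Z-only difference understates the true length;
# the 3D Euclidean distance does not.
import math

p1 = (-0.05, 0.00, 0.80)   # left end of the object, meters (x, y, z)
p2 = ( 0.08, 0.02, 0.89)   # right end, farther away because the object is tilted

dz = abs(p2[2] - p1[2])    # Z-only difference: 0.09 m
d3 = math.dist(p1, p2)     # Euclidean distance: ~0.159 m
print(f"Z-only: {dz:.3f} m, Euclidean: {d3:.3f} m")
```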

@salemgithub-box
Author

@szabi-luxonis
Euclidean distance was used to calculate the distance shown in the image above. When I try to do measurements on objects that aren't parallel to the camera, I get inaccurate results.

@FrescoFresco

> Hey! Yes, we are currently exploring methods for this. The most promising method we have right now uses a combination of ML and stereo. [...] Currently, we are working on open-sourcing our work related to this. I can update you when we have some code available for this task!

Hi! I would love to know if you guys have already made some progress :O

@conorsim

conorsim commented Jan 1, 2023

@FrescoFresco for measuring the dimensions of rectangular prism objects (e.g. boxes), we have an example here that doesn't depend on AI: https://github.com/luxonis/depthai-experiments/tree/master/gen2-box_measurement

For other objects (e.g. Objectron dataset), we haven't found great accuracy with the PnP method I mentioned earlier, but we are still exploring some other AI-assisted methods.
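
As a rough, hypothetical illustration of the general non-AI idea (not the code in the linked experiment): fit the ground plane in the point cloud with RANSAC, keep the points above it, and read the dimensions off an oriented bounding box. The sketch below assumes Open3D and a single-object scene.

```python
# Rough sketch of box measurement from a depth-camera point cloud.
import numpy as np
import open3d as o3d

def measure_box(points_xyz):
    """points_xyz: (N, 3) array of 3D points in meters."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_xyz))
    pcd = pcd.voxel_down_sample(voxel_size=0.005)

    # RANSAC plane fit for the ground/table plane.
    plane, inliers = pcd.segment_plane(distance_threshold=0.01,
                                       ransac_n=3, num_iterations=500)

    # Everything that is not the plane is assumed to be the box (single-object scene).
    box_pcd = pcd.select_by_index(inliers, invert=True)
    box_pcd, _ = box_pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    obb = box_pcd.get_oriented_bounding_box()
    return obb.extent   # box dimensions in meters, up to axis ordering
```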

@FrescoFresco

> @FrescoFresco for measuring the dimensions of rectangular prism objects (e.g. boxes), we have an example here that doesn't depend on AI: https://github.com/luxonis/depthai-experiments/tree/master/gen2-box_measurement [...]

Thank you so much. I think you guys have already thought of it, but I'll say it anyway: with 2D measurements you need a 1:1 reference to then derive the formula. Can it be done with 1:1:1? (the magic cube, hah) Have a nice day.
