
annotation error #6

Closed
YoungSharp opened this issue Aug 5, 2019 · 14 comments

Comments

@YoungSharp

Hi,
After I put a cube into the example scene, I get a wrong annotation. Why, and how can I get a correct annotation?

```json
{
  "projected_cuboid_centroid": [256.00009155273438, 287.43289184570312],
  "bounding_box": {
    "top_left": [-116833.7734375, -140902.03125],
    "bottom_right": [117898.078125, 141403.25]
  },
  "cuboid": [
    [21.611099243164062, -33.871700286865234, -97.358100891113281],
    [21.611099243164062, -33.871700286865234, -2.6419000625610352],
    [21.611099243164062, 21.59320068359375, -2.6419000625610352],
    [21.611099243164062, 21.59320068359375, -97.358100891113281],
    [-21.611099243164062, -33.871700286865234, -97.358100891113281],
    [-21.611200332641602, -33.871700286865234, -2.6419000625610352],
    [-21.611200332641602, 21.59320068359375, -2.6419000625610352],
    [-21.611099243164062, 21.59320068359375, -97.358100891113281]
  ],
  "projected_cuboid": [
    [199.17430114746094, 345.06460571289062],
    [-1838.110595703125, 3538.15771484375],
    [-1838.110595703125, -1836.37744140625],
    [199.17430114746094, 199.22129821777344],
    [312.8258056640625, 345.06460571289062],
    [2350.114501953125, 3538.15771484375],
    [2350.114501953125, -1836.37744140625],
    [312.8258056640625, 199.22129821777344]
  ]
}
```

As you can see, the `bounding_box` and `projected_cuboid` values are invalid, and `projected_cuboid_centroid` is not at the center of the cube in the RGB image.

@marckernest

Can you post some screenshots of the actor and its tags? I can't tell what the issue is without seeing your setup.

@YoungSharp
Author

YoungSharp commented Aug 8, 2019

Thanks @marckernest, I solved it after adding tags.
I have another problem: the box annotation includes the positions of invisible points in the 2D image. How can I detect an invisible keypoint and set it to 0?

@thangt

thangt commented Aug 8, 2019

invisible point position => which point position is this? How did you set up keypoints for the object?

@YoungSharp
Author

A box has 8 points. Looking at the box from any viewpoint, you can see at most 7 of them; at least one point is blocked by the box itself. By invisible points I mean points that are blocked in the 2D image.

@thangt

thangt commented Aug 9, 2019

I understand your problem now. NDDS doesn't check whether a point is blocked/occluded or not.
One of our main users is DOPE: https://github.com/NVlabs/Deep_Object_Pose. That network is actually quite good at predicting occluded keypoints, whether they are occluded by the object itself or by other objects in the scene.
Is the hidden keypoint causing a problem for your network?

@YoungSharp
Author

Yes. We are using keypoints for measuring, and the network can't predict hidden keypoints very well; the confidence for a predicted hidden keypoint (or a point that is out of view) is not high.

@YoungSharp
Author

Do you have a solution for filtering out the hidden points?

@thangt

thangt commented Aug 9, 2019

Sorry, I don't; the current NDDS doesn't have any solution for that. It's a feature we need to implement and add in a later version.

@YoungSharp
Author

When are you going to release the next version?

@thangt

thangt commented Aug 10, 2019

Right now we don't have an exact plan, but I will let you know when we do.
For now, you can limit the random rotation of your object (e.g. yaw in the range (-30, 30) instead of the full 360) so that the same corner is always hidden, and in your training code you can always ignore that one. It's a bad hack, but it should work for now.

@thangt

thangt commented Aug 10, 2019

Now that I've thought about it more, I have a solution for you.
A 3D point is considered occluded if its depth value is larger than the depth value captured in the depth map. This is the basic concept of an occlusion check.
With this idea, you can export the depth of the scene, use the keypoint's projected 2D point as an index [x, y] into the depth image to get the depth value at the keypoint's location, and compare it to the keypoint's Z value (taken from its location; since we are in the OpenCV coordinate system, the Z value is the depth value). If the keypoint's Z value is larger than the depth value, the keypoint is occluded.
For the depth sensor, you can use:

  • RawDepth - it captures the raw depth value and encodes it into RGBA as a 32-bit value. This sensor gives you the best accuracy, but the exported images are very large.
  • Depth 16 - it captures the depth value and quantizes it as depth_value / MAX_DEPTH, storing it as 16-bit grayscale. To recover the real depth value, compute depth_image[x, y] * MAX_DEPTH / 65535. This sensor gives you better accuracy than the default 8-bit sensor, but it's not as good as RawDepth.
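To make the Depth 16 recovery formula above concrete, here is a minimal sketch of decoding such an image back to metric depth; the `MAX_DEPTH` value of 10.0 is an assumption and must match whatever range the sensor was configured with:

```python
import numpy as np

# Assumed sensor range, in scene units; must match the NDDS sensor setting.
MAX_DEPTH = 10.0

def decode_depth16(depth_image_u16):
    """Invert the depth_value / MAX_DEPTH quantization stored as uint16."""
    return depth_image_u16.astype(np.float32) * MAX_DEPTH / 65535.0
```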

You need to navigate to the Feature Extractors section of the camera and add feature extractors there to see those sensors.
You should try this solution with a simple setup where you can see exactly which corner is occluded. Please let me know whether it works. In a future version we may add this as a feature of the tool so you don't need to handle it yourself.

@YoungSharp
Author

Got it. I think it could work, thanks!

@Sserpenthraxus-nv
Contributor

Did this solution work for you?

@Sserpenthraxus-nv
Contributor

Please reopen as needed
