#5 Missing bounding boxes problem #9

Open
rak7045 opened this issue Oct 28, 2021 · 9 comments
@rak7045

rak7045 commented Oct 28, 2021

Hello again,
I am collecting data to train a CNN. The missing bounding boxes in the field of view mentioned in #5 were reported as solved, but I see the problem still persists.

I attached a sensor to a car. The altitude (z-axis) and the pitch angle of the camera change according to our requirements.
While collecting the data, missing bounding boxes occur. I am attaching pictures for reference.

[attached image: frame 070210]

The car that the sensor is attached to is stationary at a traffic signal. Then another car appears in the frame, as shown above.

The image above is the frame where that car enters, and for some frames afterwards the object has a bounding box.

[attached image: frame 070270]

But when the object is at the end, i.e. while it is leaving the frame, it has no bounding box. This problem has appeared in every configuration we collected.

[attached image: frame 070510]

Do you have an idea of how to solve this problem?

TIA

Regards
Ravi

@MukhlasAdib
Owner

Hi!

Are you using the semantic lidar-based method? If so, can you make a visualization like the one you did here?

Maybe it's because the vehicle's position is outside the LiDAR's upper FoV range.
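
If the semantic lidar method is in use, here is a minimal sketch of widening the sensor's vertical FoV (these are standard CARLA blueprint attributes; the values and the connection details are illustrative assumptions, not this repo's defaults):

import carla

# Connect to the simulator and grab the semantic lidar blueprint
client = carla.Client('localhost', 2000)
world = client.get_world()
lidar_bp = world.get_blueprint_library().find('sensor.lidar.ray_cast_semantic')

# Widen the vertical field of view so vehicles above/below the sensor are still hit
lidar_bp.set_attribute('upper_fov', '30')
lidar_bp.set_attribute('lower_fov', '-30')
lidar_bp.set_attribute('range', '100')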

@rak7045
Author

rak7045 commented Oct 29, 2021

Hello,
Thanks for your reply. I am using the depth camera information; I would prefer to use the depth information rather than the semantic LiDAR.

@MukhlasAdib
Owner

Sorry for my late reply. Right, the current version of the depth-based annotation still suffers from the false negative problem. There are several possible causes:

  • The occlusion filter failed to detect the car. If this is the problem, then you need to tune the occlusion filter parameters again, i.e. depth_margin, patch_ratio, and resize_ratio. Yes, it requires more work.
  • The center point of the car is out of the camera FoV. The algorithm uses the center point of the car to determine whether the car is inside the camera FoV or not, so it can miss a car when only a small portion of the body appears in the camera. I think this could be solved with a more robust FoV filter (see the sketch after this list), but currently I don't have time to upgrade the algorithm. Moreover, I don't think this is the problem in your case, since the center of the car is clearly visible in the image.
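
A minimal sketch of what a corner-based FoV check could look like, assuming the eight corners of a vehicle's 3D bounding box have already been projected to pixel coordinates (the name corners_px and the image size arguments are illustrative, not names from this repo):

import numpy as np

def vehicle_in_fov(corners_px, image_w, image_h):
    # corners_px: (8, 2) array of projected 3D bounding box corners, column 0 = x, column 1 = y
    # Keep the vehicle if any projected corner lands inside the image,
    # instead of testing only the center point
    inside_x = (corners_px[:, 0] >= 0) & (corners_px[:, 0] < image_w)
    inside_y = (corners_px[:, 1] >= 0) & (corners_px[:, 1] < image_h)
    return bool(np.any(inside_x & inside_y))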

In case you haven't read the details of the algorithm, you can find them here. Thank you.

@rak7045
Author

rak7045 commented Nov 20, 2021

Hello again,

When I tried to collect data at 60 FPS with the following camera transform:
carla.Transform(carla.Location(x=23, y=0, z=25), carla.Rotation(roll=0.0, pitch=-90, yaw=0.0))
and the FoV set to 69°, I noticed a problem with the bounding box txt information.
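
For reference, a minimal sketch of this camera setup (standard CARLA blueprint attributes; the connection and the choice of ego vehicle are assumptions for illustration):

import carla

client = carla.Client('localhost', 2000)
world = client.get_world()
ego_vehicle = world.get_actors().filter('vehicle.*')[0]  # any spawned vehicle, for illustration

# RGB camera with 69 deg FoV capturing at ~60 FPS
cam_bp = world.get_blueprint_library().find('sensor.camera.rgb')
cam_bp.set_attribute('fov', '69')
cam_bp.set_attribute('sensor_tick', str(1.0 / 60))

# Mounted 25 m up, looking straight down (pitch = -90)
cam_transform = carla.Transform(carla.Location(x=23.0, y=0.0, z=25.0),
                                carla.Rotation(roll=0.0, pitch=-90.0, yaw=0.0))
camera = world.spawn_actor(cam_bp, cam_transform, attach_to=ego_vehicle)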

The problem is:
In an image there is only one object, but the bbox txt information contains two objects, one of which lies outside the image resolution. Why does this problem arise and how can we eliminate it?
The problem arises with a 90° pitch angle irrespective of altitude, but as the altitude increases, the amount of spurious information decreases. Do you have any idea about this?

I think the extra object is picked up because of the FoV, and I couldn't work out how to eliminate these entries. Could you please help me with this?

Below I am attaching the picture along with the bbox txt file for reference.

[attached image: frame 002470]

[attached file: 002470.txt]

TIA

@rak7045
Author

rak7045 commented Nov 22, 2021

Hello @MukhlasAdib
Sorry to disturb you, but could you please help me with the question above?

@MukhlasAdib
Owner

Ah right, sorry. It looks like a bug in the algorithm, so I need to look deeper into the problem. I will let you know if I find something, but I cannot promise it will be fast.

@MukhlasAdib
Owner

@rak7045 Are you using semantic LIDAR or depth camera?

@rak7045
Author

rak7045 commented Nov 22, 2021

Hello,
I am using depth camera information.

@MukhlasAdib
Owner

Yeah, it looks like it has something to do with the angle filter and your specific rotation setting. I need some time to check it. But as a temporary solution, you can add a simple pixel-coordinate filter yourself before saving the results. Something like:

def keep_bbox(bbox):
    # bbox: Nx2 array of pixel corner coordinates, column 0 = x, column 1 = y
    if any(bbox[:, 1] < 0) or any(bbox[:, 1] > IMAGE_HEIGHT):
        return False  # remove the bbox: a corner lies above or below the image
    elif any(bbox[:, 0] < 0) or any(bbox[:, 0] > IMAGE_WIDTH):
        return False  # remove the bbox: a corner lies to the left or right of the image
    else:
        return True  # keep the bbox
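
For instance, assuming bboxes is a list of Nx2 NumPy corner arrays produced for the current frame (the name bboxes and the image size constants are illustrative, not names from this repo), the filter could be applied right before the annotations are written out:

import numpy as np

IMAGE_WIDTH, IMAGE_HEIGHT = 1280, 720  # match the camera's image_size_x / image_size_y

bboxes = [np.array([[100, 150], [220, 310]]),    # inside the image: kept
          np.array([[-40, 900], [2000, 1500]])]  # outside the image: dropped
filtered = [b for b in bboxes if keep_bbox(b)]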
