Does this work? #28

Closed
SteveMacenski opened this issue Aug 17, 2020 · 10 comments

@SteveMacenski

We're doing some dynamic obstacle avoidance work right now in Nav2. Is this working well enough for you to consider adding it to that effort?

@fmrico
Contributor

fmrico commented Aug 18, 2020

Hi @SteveMacenski

Yes, this is working. ROS 2 development is currently in the ros2_eloquent branch. I may reorganize the repo so that master becomes the ROS 2 development branch, and I'll check it today to make sure there are no problems at the moment.

I did the initial version for a robot competition, and since then @fgonzalezr1998 has been maintaining it as part of his degree's final project. As I recall, the only detail is that the X axis of the working frame should point toward the scene where the detections are made.

If you have any problems using it, please tell me, and I will try to fix it quickly.

Best

@fgonzalezr1998
Contributor

fgonzalezr1998 commented Aug 18, 2020

Hi @SteveMacenski

The system currently works on Melodic (branch: melodic) and Eloquent (branches: ros2_eloquent and master). We have been reorganizing the code over the last few days, so it was reviewed just a few days ago.

If you have any problems using this package, do not hesitate to let us know. Your feedback is really important.

@fmrico
Contributor

fmrico commented Aug 18, 2020

darknet_ros is broken on Focal because of OpenCV 4.2.

I have just been working on a PR to make it work in ROS 2 Foxy: leggedrobotics/darknet_ros#257

@SteveMacenski
Author

SteveMacenski commented Aug 18, 2020

Do you have a gif or something of the 3D bounding box quality for your robotics application? We've been looking for quality 3D detectors and mostly come up short in a robotics context. If this works, we should really take a look at this. How well does it do / do you have a video?

Is this a pure-visual approach, or does it also use depth information in the NN (or in some derivative pipeline)?

@fmrico
Contributor

fmrico commented Aug 19, 2020

If you wait until next week, @fgonzalezr1998 can record a video with the current status. He could set things up to simulate what you want to detect.

We have this video where you can see the output of this software. This version still had a bug that made bounding boxes not very accurate: https://youtu.be/HZIZSTDtmA0

@fgonzalezr1998
Contributor

Hi @SteveMacenski, here I have uploaded a short usage demo using ROS 2 Eloquent.

@SteveMacenski
Author

Awesome, I'll add this to my list for when we have some of the dynamic work further along. Does this only work on RGBD sensors?

@fgonzalezr1998
Contributor

@SteveMacenski This tool combines the neural network's output bounding boxes with point cloud information to compose the 3D bounding boxes. I have always used RGBD sensors for its development and trials (ASUS Xtion, Orbbec Astra and RealSense D435), but if you use another tool that builds a point cloud from LaserScan information, for example, you only have to modify the point cloud topic in the darknet3d.yaml file and Darknet ROS 3D will take that point cloud. If you do this, the working frame in the yaml may need to be changed as well.
As @fmrico said, this tool has one constraint: you have to use a frame whose axes are oriented as follows: X pointing toward the scene, Y pointing left and Z pointing up.
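
In rough terms, the fusion works like the sketch below (a minimal Python/numpy illustration of the idea, not the package's actual C++ code; the function name `bbox_2d_to_3d` and the `max_depth_jump` filter are made up for this example): for every pixel inside the 2D detection, look up the corresponding point in the organized point cloud, drop invalid points and points far from the depth at the box centre, and take the min/max of what remains.

```python
import numpy as np

def bbox_2d_to_3d(cloud_xyz, bbox, max_depth_jump=0.3):
    """Fuse a 2D detection with an organized point cloud.

    cloud_xyz      : (H, W, 3) array of points in the working frame
                     (X forward toward the scene, Y left, Z up).
    bbox           : (xmin, ymin, xmax, ymax) pixel coordinates from the 2D detector.
    max_depth_jump : drop points whose X (depth) differs from the point at the
                     box centre by more than this, to remove background.
    Returns (min_xyz, max_xyz) of the 3D bounding box, or None.
    """
    xmin, ymin, xmax, ymax = bbox
    patch = cloud_xyz[ymin:ymax, xmin:xmax].reshape(-1, 3)
    patch = patch[np.isfinite(patch).all(axis=1)]  # drop NaN / invalid points
    if patch.size == 0:
        return None

    # Reference depth: the point behind the centre pixel of the 2D box.
    center = cloud_xyz[(ymin + ymax) // 2, (xmin + xmax) // 2]
    if not np.isfinite(center).all():
        center = np.median(patch, axis=0)

    patch = patch[np.abs(patch[:, 0] - center[0]) < max_depth_jump]
    if patch.size == 0:
        return None
    return patch.min(axis=0), patch.max(axis=0)
```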

@SteveMacenski
Author

I glanced through the code and have a better understanding of how this works. There's a strong analog between this and the work that we're doing in Nav2 on dynamic detection / tracking (I actually sort of wish @fmrico had mentioned this project sooner so that we could reduce redundant work). One of the summer program projects currently being worked on is essentially this. We're using Detectron2 from Facebook Research for 2D instance segmentation and are working on the size estimation from depth info at the moment.
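
The size-estimation step from depth can be sketched like this (a minimal Python illustration under standard pinhole-camera assumptions, not the actual Nav2 or Detectron2 code; `size_from_depth` and its arguments are illustrative): back-project every depth pixel covered by the 2D instance mask through the camera intrinsics and take the axis-aligned extents of the resulting 3D points.

```python
import numpy as np

def size_from_depth(depth_m, mask, fx, fy, cx, cy):
    """Estimate metric object size from a 2D instance mask and a depth image.

    depth_m : (H, W) depth image in metres (optical frame: Z forward).
    mask    : (H, W) boolean instance mask from the 2D segmenter.
    fx, fy, cx, cy : pinhole camera intrinsics.
    Returns (width, height, depth_extent) in metres, or None.
    """
    v, u = np.nonzero(mask)
    z = depth_m[v, u]
    valid = np.isfinite(z) & (z > 0)
    if not valid.any():
        return None
    u, v, z = u[valid], v[valid], z[valid]

    # Back-project the masked pixels to 3D camera coordinates.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    # Axis-aligned extents of the back-projected points approximate the object size.
    return x.max() - x.min(), y.max() - y.min(), z.max() - z.min()
```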

@fgonzalezr1998
Contributor

@SteveMacenski I have taken a look at the project and it is very interesting. We are now developing yolact_ros_3d, which is very similar to darknet_ros_3d but uses YOLACT as the neural network instead of Darknet. It presents some advantages, and the next step is to be able to create a 3D costmap using its output.

In addition, I have seen in your project tasks that you want your tool to be able to run on a Jetson or similar. On this topic, I have tried darknet_ros_3d on my Nvidia Jetson Nano mounted on a Turtlebot and it works fine.
