Does this work? #28
Comments
Yes, this is working. I did the initial version for a robot competition, and since then @fgonzalezr1998 has been maintaining it as part of his degree's final project. As I recall, the only detail is that the X axis of the working frame should point towards the scene where the detections are made. If you have any problems using it, please tell me and I will try to fix them quickly. Best
Now the system is working for Melodic (branch: melodic) and Eloquent (branches: ros2_eloquent and master). In recent days we have been reorganizing the code, so it was reviewed just a few days ago. If you have any problems using this package, do not hesitate to let us know; your feedback is really important.
I have just worked on a PR to make it work in ROS2 Foxy: leggedrobotics/darknet_ros#257
Do you have a gif or something showing the 3D bounding box quality for your robotics application? We've been looking for quality 3D detectors and have mostly come up short in a robotics context. If this works, we should really take a look at it. How well does it do, and do you have a video? Is this a pure-visual approach, or does it also use depth information in the NN (or in some derivative pipeline)?
If you wait until next week, @fgonzalezr1998 can record a video with the current status. He could set up a simulation of what you want to detect. We have this video where you can see the output of this software; that version still had a bug that made the bounding boxes not very accurate: https://youtu.be/HZIZSTDtmA0
Hi @SteveMacenski, here I have uploaded a little usage demo using ROS2 Eloquent.
Awesome, I'll add this to my list for when we have some of the dynamic work further along. Does this only work on RGBD sensors? |
@SteveMacenski This tool combines the neural network's output bounding boxes with point cloud information to compose the 3D bounding boxes. I have always used RGBD sensors for its development and trials (ASUS Xtion, Orbbec Astra, and RealSense D435), but if you use another tool that builds a point cloud from LaserScan data, for example, you only have to modify the point cloud topic in the darknet3d.yaml file and Darknet ROS 3D will take that point cloud. If you do this, the working frame may also need to be changed in the YAML.
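The composition step described above can be sketched roughly as follows, assuming an organized point cloud expressed in the working frame; the function name and array layout are illustrative only, not darknet_ros_3d's actual API:

```python
import numpy as np

def bbox_2d_to_3d(cloud_xyz, xmin, ymin, xmax, ymax):
    """Compose an axis-aligned 3D bounding box from a 2D detection box.

    cloud_xyz: organized point cloud, an (H, W, 3) array of XYZ coordinates
    in the working frame, aligned pixel-for-pixel with the RGB image.
    (Hypothetical helper illustrating the approach, not the package's API.)
    """
    # Take the 3D points that fall inside the 2D detection rectangle.
    patch = cloud_xyz[ymin:ymax, xmin:xmax].reshape(-1, 3)
    # Drop NaN entries (pixels with no valid depth return).
    patch = patch[np.isfinite(patch).all(axis=1)]
    if patch.shape[0] == 0:
        return None  # no depth data inside the detection box
    # The min/max corners over the remaining points define the 3D box.
    return patch.min(axis=0), patch.max(axis=0)
```

A real implementation would also filter outlier depth points (background pixels inside the 2D box) before taking the min/max, e.g. by a probability or distance threshold.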
I glanced through the code and have a better understanding of how this works. There's a strong analog between this and the work we're doing in Nav2 on dynamic detection/tracking (I actually rather wish @fmrico had mentioned this project sooner, so we could have reduced redundant work). One of the summer program projects, currently in progress, is essentially this: we're using Detectron2 from Facebook Research for 2D instance segmentation and are working on size estimation from depth info at the moment.
@SteveMacenski I have taken a look at the project and it is very interesting. We are now developing yolact_ros_3d, which is very similar to darknet_ros_3d but uses YOLACT as the neural network instead of Darknet. It presents some advantages, and the next step is to be able to create a 3D costmap from its output. In addition, I have seen in your project tasks that you want your tool to run on a Jetson or similar. On that topic, I tried darknet_ros_3d on my Nvidia Jetson Nano mounted on a TurtleBot and it works fine.
We're doing some dynamic obstacle avoidance work right now in Nav2. Is this working well enough for you to consider adding it to that effort?