Document Robot Vehicle Design #4
Comments
Per today's meeting, @adamdbrw will drive the next iteration of the design, focusing on sensors.
After giving it some thought, I have a design in mind that is loosely inspired by the Abundant Robotics demo video. I am sharing my initial thoughts so we can kickstart a discussion. I will be working on a description and some drawings (I am not good at drawing) and will also interact with https://github.com/aws-lumberyard/ROSConDemo/wiki/Demo-Walkthrough. For the manipulation (picking):
Alternative: 4-8 arms (2-4 on each side) on vertical sliders, with 2 joints each (elbow, hand) and a 3-finger gripper. Bringing the apple to storage could be a bit awkward, but we can have an extensible feeder half-pipe for each slider. Sensors (models TBD):
Mobile base: Let me know if this high-level view is something you would like me to progress with.
@adamdbrw - Thanks for this! The AWS team reviewed it and loves the suggestion. As a next step, @SLeibrick will create a sketch for review during tomorrow's meeting based on your suggestion above (and on the suction design). We are hopeful @j-rivero's team will be able to create the model based on this sketch.
Red areas have cameras or lidar sensors. It is a pretty simple design based on Adam's comments: camera sensors on the front, the back and the apple tube, as well as a lidar sensor on the front. Blue arrows indicate motion for the picking array, and the black box is where the apples go; they are then teleported to the back of the black box and come out into the container on the back of the vehicle.
@SLeibrick I like it! :) I think that adding the apple vacuum pipe would be a nice improvement, since my initial impression was "where do the apples go?". It does not need to be functional physics-wise, since we will teleport the apples, but in a reference use-case simulation it would be important to simulate the actual apple movement, especially to check possibilities such as apple bruising or clogging. We can at least hint at that visually by adding the pipe. Note that the pipe should be elastic (but with limited bend angles so that the apples can always go through) and extensible (or long enough for the most extreme suction device position). It should feed into the middle box (we assume the magic of soft landing and sorting out the bad apples happens there).

Further details include cables for the sensors (power, data, perhaps sync) - these are completely optional. Consider whether we would like to make it more realistic in further iterations.

The other point I mentioned in the design is that another frame could be added on the other side. It is very dependent on the apple tree row spacing vs. robot width plus the telescopic range of our manipulators (for it to make sense, both sides need to be fully reachable). I think it just looks cooler if we have another manipulator frame on the other side. This is only a second-order improvement, though, and we might postpone it for later.

We could place some graphics / decoration on the side of the machine: our logos, a ROS 2 Inside logo (after checks), or a fancy name such as "Apple Kraken".
Notes from today's meeting:
The distance between the apple tree rows should be 3 m, so the robot dimensions should be 1.5 m wide and 3 m long, with a max height of 2 m for the robot's pneumatic tube arm.
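For reference, a minimal sketch capturing these dimensions as constants and the lateral clearance that results when the robot drives centered in a row (names are illustrative only):

```python
# Dimensions from the discussion above (metres); constant names are illustrative.
ROW_SPACING = 3.0      # distance between apple tree rows
ROBOT_WIDTH = 1.5
ROBOT_LENGTH = 3.0
ARM_MAX_HEIGHT = 2.0   # pneumatic tube arm

# Lateral clearance to each row when the robot is centered between two rows.
side_clearance = (ROW_SPACING - ROBOT_WIDTH) / 2.0
print(f"Clearance per side: {side_clearance:.2f} m")  # 0.75 m
```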
@SLeibrick - Now that you have provided the robot size estimates, do you have any more work to do on this? If not, can we assign it to @j-rivero for feedback (per action item above)?
No more feedback for now unless there are more questions about dimensions. |
Hi All, @j-rivero has asked me to provide feedback from Open Robotics' side. Without a lot of technical context, this is a combination of probing questions and suggestions that hopefully helps the demo to be the best it can be. Feel free to take or ignore whichever items are applicable.
Is this referring to the suction grasp, or the placing? I was curious if the physics for suction is real, or if it’s implemented as a translation / teleportation.
An opportunity to showcase O3DE, with so many joints and moving parts, might be the performance, in terms of time and accuracy. I guess accuracy comes more from PhysX than O3DE. Rendering-wise, this might not be anything special.
Sensors may be more relevant for showcasing performance, since O3DE is more about graphics. With so many sensors in the world, especially with both images and point clouds, it can be challenging for simulators to perform in real time or faster than real time. The real-time factor might be something to stress-test and showcase. Obviously, with powerful enough computers, anything can be real time; for this to be relevant to most users, it should probably be measured on typical hardware the target audience is expected to have.
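One way to quantify the real-time factor in a ROS 2 setup, assuming the simulator publishes simulated time on /clock (the topic name and window size below are assumptions, not anything the demo defines), is a small monitor node. A sketch, not a finished tool:

```python
# Sketch of a real-time-factor monitor: compares simulated time from /clock
# against wall-clock time over a short window.
import time
import rclpy
from rclpy.node import Node
from rosgraph_msgs.msg import Clock

WINDOW = 100  # number of /clock messages per measurement window (assumed)

class RtfMonitor(Node):
    def __init__(self):
        super().__init__('rtf_monitor')
        self._start_sim = None
        self._start_wall = None
        self._count = 0
        self.create_subscription(Clock, '/clock', self._on_clock, 10)

    def _on_clock(self, msg):
        sim = msg.clock.sec + msg.clock.nanosec * 1e-9
        wall = time.monotonic()
        if self._start_sim is None:
            self._start_sim, self._start_wall = sim, wall
        self._count += 1
        if self._count >= WINDOW:
            d_sim = sim - self._start_sim
            d_wall = wall - self._start_wall
            if d_wall > 0.0:
                self.get_logger().info(f'real-time factor ~ {d_sim / d_wall:.2f}')
            self._start_sim, self._start_wall, self._count = sim, wall, 0

def main():
    rclpy.init()
    rclpy.spin(RtfMonitor())

if __name__ == '__main__':
    main()
```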
Are there advantages in camera image rendering that come with O3DE? How high can the camera frame rate go? Does camera frame rate matter much for agricultural applications? Perhaps not as much as for dynamic manipulation, since it is using a suction cup. Maybe one relevant question is how fast the robot is picking apples, whether it stops completely before picking or there might be motion from the chassis, and whether the camera frame rate helps with more efficient picking. General comments:
Please let me know if this type of feedback from us is adequate, as I'm essentially parachuting into this thread, and whether you have any questions or need clarification on anything I said. Thanks!
@mabelzhang thank you for your feedback and for putting in the effort to think about it! Let me try to answer some of the questions. I might not have all the answers, but perhaps collectively we can arrive at a good understanding.
O3DE is a game/simulation development engine which includes, among other parts, a multi-threaded renderer and a physics engine (PhysX). We would like to showcase how O3DE can be used for a robotic simulation with ROS 2. I believe the message is that O3DE with its ROS 2 Gem is very promising and already quite capable. Our goal is to invite the community to try it out and to contribute to its development.
O3DE is developing at a solid pace. While we certainly cannot make up, in such a short time, for the years of development that some existing engines already have behind their robotic simulation support, I believe that O3DE has/will have substantial advantages. Some of them are:
For the imminent demo at ROSCon 2022, we would like to underline these items and show that O3DE could be a good choice for developing a robotic use-case. Our showcase example should be visually appealing, relevant to an actual use-case in robotic agriculture, and demonstrate the engine and the ROS 2 Gem successfully applied to a problem. It is also easy to show scaling up, considering the area and multiple rows of apple trees.
Note that these items would be more relevant if we were simulating a real robot and providing a tool to validate it. Our approach is to show the operation as intended and look at it in a modular way: we are doing X based on ground truth, but one could replace this with an actual algorithm to validate.
Note that we also use this demo to drive the development of features around URDF import, vehicle dynamics and manipulation. I believe that a perfect next milestone for O3DE + the ROS 2 Gem would be a simulation of a real use-case in cooperation with an end user.
Yes, we will create the URDF for this robot and use our Importer.
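Purely for illustration, here is a minimal URDF skeleton of the kind an importer would consume, embedded in a small Python snippet so it can be checked for well-formedness. Link and joint names, the wheel placement, and the 1.0 m body height are hypothetical placeholders, not the final model (the 3.0 m x 1.5 m footprint comes from the dimensions discussed above):

```python
# Illustrative only: a tiny URDF skeleton; names and numbers are placeholders.
from xml.dom import minidom

MINIMAL_URDF = """\
<robot name="apple_kraken">
  <link name="base_link">
    <visual>
      <geometry><box size="3.0 1.5 1.0"/></geometry>
    </visual>
  </link>
  <link name="front_left_wheel"/>
  <joint name="front_left_wheel_joint" type="continuous">
    <parent link="base_link"/>
    <child link="front_left_wheel"/>
    <axis xyz="0 1 0"/>
    <origin xyz="1.2 0.8 -0.4"/>
  </joint>
</robot>
"""

# Quick well-formedness check before handing the file to the importer.
doc = minidom.parseString(MINIMAL_URDF)
print(len(doc.getElementsByTagName("joint")), "joint(s) parsed")
```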
These are just my answers. If anyone has something to add or dispute, please join the discussion.
Quick notes for the navigation:
Given that the terrain is flat, I think we can assume that the front lidar is a horizontal lidar. We need to ensure that its placement does not make it detect the vehicle's own structure. Question: assuming that the movement is going to be managed by the navstack, and given the environment design in #12, is the goal of the demo to be able to move the robot to any place in the scene, or only to process the straight apple tree lines at the bottom of the scene? Reading the scripting design, it seems to me that we are going to control the navstack goals but process all the apple tree lines (a minimal goal-sending sketch follows after the side note below). If that is the case, I am not sure we can go with a single front lidar and still perform some kinds of turns with a 3 m long vehicle (especially U-turns between contiguous lines of trees) without crashing. For this, two ideas to make our life easier:
A side note: we will need to construct the map of the scene for the navstack; SLAM makes little sense to me in the context of the demo.
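Related to scripting the navstack goals mentioned above, a minimal sketch of sending a single Nav2 NavigateToPose goal from rclpy. The action name 'navigate_to_pose' is the Nav2 default; the frame id and example coordinates are assumptions, and the real demo scripting may look different:

```python
# Sketch: sending one NavigateToPose goal to Nav2 from a script.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose
from geometry_msgs.msg import PoseStamped

class RowGoalSender(Node):
    def __init__(self):
        super().__init__('row_goal_sender')
        self._client = ActionClient(self, NavigateToPose, 'navigate_to_pose')

    def send_goal(self, x: float, y: float):
        goal = NavigateToPose.Goal()
        pose = PoseStamped()
        pose.header.frame_id = 'map'
        pose.header.stamp = self.get_clock().now().to_msg()
        pose.pose.position.x = x
        pose.pose.position.y = y
        pose.pose.orientation.w = 1.0  # facing along +x; yaw handling omitted for brevity
        goal.pose = pose
        self._client.wait_for_server()
        return self._client.send_goal_async(goal)

def main():
    rclpy.init()
    node = RowGoalSender()
    node.send_goal(10.0, 0.0)  # e.g. the end of the current tree row (placeholder)
    rclpy.spin(node)

if __name__ == '__main__':
    main()
```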
Not related to the vehicle design but as preparation for possible answers given in the ROSCon:
While I understand perfectly the state of the current development and the scope of the demo, questions at ROSCon can be picky, so for example:
Imaginary attendee asking questions: "Ah, great! Do you think that the simulator is fully capable of simulating the aerodynamics of this case? Do you have an example of that kind of simulation?" 😉
The same goes for non-rigid bodies.
@adamdbrw thank you for those detailed answers! That gives me a better sense. @j-rivero raises good points above. I think the last points about the capability of the simulation are very valid, and they apply to this bullet too:
While reading it, I was thinking that this is actually really difficult to do. Even at the state of the art, contact forces are very difficult for any simulator to model accurately.
This can be a double-edged sword, as some users view ROS as a large dependency. How much of ROS 2 needs to be installed for this integration to work - does it work with just a minimal installation of ROS 2?
It could be enough for many cases to simply simulate whether an apple was bruised (not the size, placement or other characteristics of a bruise).
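Purely as an illustration of that idea (the threshold value and the notion of logged "impact speed" are hypothetical, not something from this discussion), a boolean bruise flag could be as simple as:

```python
# Hypothetical heuristic: mark an apple as bruised if any recorded impact
# speed exceeds a threshold. The threshold value is a placeholder.
BRUISE_IMPACT_SPEED = 2.0  # m/s, assumed

def is_bruised(impact_speeds_m_s):
    return any(v > BRUISE_IMPACT_SPEED for v in impact_speeds_m_s)

# Example: speeds logged for one apple on its way through the pipe.
print(is_bruised([0.4, 1.1, 2.5]))  # True
```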
The current version of the Gem needs the following (and their deps): If we want to support a case with reduced ROS dependencies, standalone releases are possible as well, where all necessary libraries are bundled and no extra dependencies need to be installed.
Returning to this black box: if at some point we need to show the teleportation of the apples, does the apple container need to be open, or is there another potential option that is not too complex? To simplify things, we could go with a simple fixed design that partially shows the capacity of the open basket, something like:
I think we can progress in the following way:
Notes from today's conversation:
Document the robot vehicle design, including the specification (size, number, etc.) of all parts (wheels, motor, container, robotic arm, sensors, etc.), as well as scale, mesh, joint, and rig requirements.
Acceptance Criteria
Linked Issues