
Document Robot Vehicle Design #4

Closed
forhalle opened this issue Aug 9, 2022 · 20 comments

@forhalle (Collaborator) commented Aug 9, 2022

Document the robot vehicle design, including the specification (size, number, etc.) of all parts (wheels, motor, container, robotic arm, sensors, etc.), as well as scale, mesh, joint, and rig requirements.

Acceptance Criteria

  • Design is reviewed and agreed upon with all parties (Robotec.ai, Open Robotics, AWS)

Linked Issues

@forhalle (Collaborator, Author) commented Aug 9, 2022

.

forhalle changed the title from "Document Robot Design" to "Document Robot Vehicle Design" on Aug 9, 2022
@forhalle (Collaborator, Author) commented:

Per today's meeting, @adamdbrw will drive the next iteration of the design, focusing on sensors.

@adamdbrw (Collaborator) commented Aug 12, 2022

After giving it some thought, I have a design in mind that is loosely inspired by the Abundant Robotics demo video. I am sharing my initial thoughts so we can kickstart a discussion. I will be working on a description and some drawings (I am not good at drawing) and will also interact with https://github.com/aws-lumberyard/ROSConDemo/wiki/Demo-Walkthrough.

For the manipulation (picking):
An XY sliding frame with a telescopic suction gripper and pipe. Width, height, and extension length are our parameters to play with.
We need one on each side, but two on each side could be valid as well. A camera is mounted on top of the suction gripper.
What we simulate:

  • XYZ movement of the sliding and extension joints. However, we might not have direct support for this in O3DE (prismatic joints).
  • Suction: we can simply teleport the apple to the storage bin.

Alternative: 4-8 arms (2-4 each side) on vertical sliders with 2 joints each (elbow, hand), 3-finger gripper. Bringing the apple to the storage could be a bit awkward, but we can have an extensible feeder half-pipe for each slider.
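
To make the XY sliding frame option above a bit more concrete, here is a minimal sketch (plain Python, not O3DE or ROS 2 API) of how a ground-truth apple position could be mapped to setpoints for the three prismatic joints. The class, the joint limits, and the coordinate convention are illustrative assumptions, not anything that exists in the demo:

```python
from dataclasses import dataclass

@dataclass
class FrameLimits:
    """Hypothetical joint ranges for the XY sliding frame (metres)."""
    x_max: float = 1.5    # slide along the vehicle
    z_max: float = 2.0    # slide up the frame
    ext_max: float = 1.0  # telescopic extension towards the tree

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def slider_setpoints(apple_xyz, limits=FrameLimits()):
    """Map a ground-truth apple position (already expressed in the frame's
    own coordinate system) to setpoints for the three prismatic joints.
    A Cartesian frame needs no real inverse kinematics: each coordinate is
    simply clamped to the corresponding joint range."""
    x, y, z = apple_xyz
    return (clamp(x, 0.0, limits.x_max),    # slide along the frame
            clamp(z, 0.0, limits.z_max),    # slide up/down
            clamp(y, 0.0, limits.ext_max))  # extend the suction gripper
```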

Sensors (models TBD):

  • Camera sensors on grippers, for visualization.
    • Determining apple position - based on ground truth (no real image-based detection)
  • (optional) Front and back cameras (visualization)
  • Front lidar (for obstacle detection and the navigation stack). Additional lidars on the frame (optionally): front left, front right, back.
  • GPS/IMU - ground truth
  • Readings of gripper position (frames of reference TBD) - as ground truth.

Mobile base:
A long rough terrain chassis with four wheels. A picture for inspiration:
(image attached)

Let me know if this high-level view is something you would like me to progress with.

@forhalle (Collaborator, Author) commented:

@adamdbrw - Thanks for this! The AWS team reviewed it, and loves the suggestion. As a next step, @SLeibrick will create a sketch for review during tomorrow's meeting based on your above suggestion (and based on the suction design). We are hopeful @j-rivero 's team will be able to create the model based on this sketch.

@SLeibrick (Collaborator) commented:

(Sketches attached: top, front, and side-rear views)

@SLeibrick (Collaborator) commented:

The red areas are where the cameras or lidar sensors go. It is a pretty simple design based on Adam's comments: camera sensors on the front, back, and apple tube, as well as a lidar sensor on the front. The blue arrows indicate motion for the picking array, and the black box is where the apples go; they are then teleported to the back of the black box and come out into the container on the back of the vehicle.

@adamdbrw (Collaborator) commented Aug 18, 2022

@SLeibrick I like it! :)

I think that adding the apple vacuum pipe would be a nice improvement, since my initial impression was "where do the apples go?". It does not need to be functional physics-wise, since we will teleport the apples, but in a reference use-case simulation it would be important to simulate the actual apple movement, especially to check possibilities such as apple bruising or clogging. We can at least hint at that visually by adding the pipe.

Note that the pipe should be elastic (but with limited bend angles so that the apples can always go through) and extensible (or long enough for the most extreme case of suction device position). It should feed into the middle box (we assume the magic of soft landing and sorting out the bad apples happens there).

Further details include cables for sensors (power, data, perhaps sync) - these are completely optional. Consider whether we would like to make it more realistic in further iterations.

The other point I mentioned in the design discussion is that another frame could be added on the other side. This depends heavily on the apple tree row spacing versus the robot width plus the telescopic range of our manipulators (for it to make sense, both sides need to be fully reachable). I think it just looks cooler if we have another manipulator frame on the other side. This is a second-order improvement though, and we might postpone it for later.

We could place some graphics / decoration on the side of the machine: our logos, ROS2 Inside logo (after checks), or a fancy name such as "Apple Kraken".

@forhalle (Collaborator, Author) commented:

Notes from today's meeting:

  • no pipe is necessary
  • logos would be ideal
  • @j-rivero's team can provide additional design feedback in the next two days, and can likely start asset creation next week
  • assets should be created from scratch (not re-used) to avoid legal complexities.
  • we may improve visuals over time (stretch goal; not required), such as adding cables, etc.
  • may want two frames - one on each side of the vehicle
  • we will assume a fairly flat terrain (terrain is pretty flat in most apple orchards)
  • @SLeibrick will provide robot size estimates (in metric measurements)

forhalle assigned SLeibrick and unassigned adamdbrw on Aug 18, 2022
@SLeibrick (Collaborator) commented:

The distance between apple tree rows should be 3 m, so the robot should be about 1.5 m wide and 3 m long, with a maximum pneumatic tube arm height of 2 m.
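
As a rough sanity check on those numbers (assuming the robot drives roughly centred between rows): 3 m row spacing minus the 1.5 m chassis width leaves about (3 - 1.5) / 2 = 0.75 m of clearance on each side, so the telescopic gripper needs at least roughly 0.75 m of extension, plus however far the apples sit inside the canopy, to reach the nearest fruit.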

@forhalle (Collaborator, Author) commented:

@SLeibrick - Now that you have provided the robot size estimates, do you have any more work to do on this? If not, can we assign it to @j-rivero for feedback (per action item above)?

@SLeibrick (Collaborator) commented:

No more feedback for now unless there are more questions about dimensions.

@mabelzhang (Collaborator) commented:

Hi All,

@j-rivero has asked me to provide feedback from Open Robotics’ side.

Without a lot of technical context, this is a combination of probing questions and suggestions that hopefully help the demo be the best it can be. Feel free to take or ignore whichever items are applicable.

  • A preliminary clarifying question is, what is the primary goal the demo is trying to showcase? In my understanding, this is to showcase the combination of O3DE and ROS 2? O3DE is a graphics engine, and the physics is provided by PhysX?

  • What are the characteristics and advantages of O3DE that make it stand out from other engines that people may already be using? Does this demo, and specifically, this robot design, highlight these advantages? Why should people choose O3DE for the agricultural application showcased over competitor software? How do they know this isn’t just another reproduction of something that’s already working in existing engines, say, Unreal Engine or Gazebo, that makes O3DE more suitable for their applications?

Suction: we can simply teleport the apple to the storage bin.

Is this referring to the suction grasp, or the placing? I was curious if the physics for suction is real, or if it’s implemented as a translation / teleportation.

4-8 arms (2-4 each side) on vertical sliders with 2 joints each (elbow, hand), 3-finger gripper.

An opportunity to showcase O3DE, with so many joints and moving parts, might be the performance, in terms of time and accuracy. I guess accuracy comes more from PhysX than O3DE. Rendering-wise, this might not be anything special.

Sensors (models TBD):

Sensors may be more relevant for showcasing performance, since O3DE is more about graphics. With so many sensors in the world, especially with both images and point clouds, it can be challenging for simulators to perform in real time or faster than real time. Real time factor might be something to stress test and showcase. Obviously, with powerful enough computers, anything can be real time; for this to be relevant to most users, it should probably be measured on some typical hardware the target audience is expected to have.
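
As an illustration of how the real-time factor could be measured on the ROS 2 side, here is a minimal sketch that assumes the simulator publishes simulation time on the standard /clock topic (rosgraph_msgs/Clock). The node name and the one-second logging window are arbitrary choices for illustration, not part of the demo:

```python
import time

import rclpy
from rclpy.node import Node
from rosgraph_msgs.msg import Clock  # standard message type used on /clock


class RtfMonitor(Node):
    """Logs the real-time factor: elapsed simulation time / elapsed wall time."""

    def __init__(self):
        super().__init__('rtf_monitor')
        self.create_subscription(Clock, '/clock', self.on_clock, 10)
        self.start_wall = None
        self.start_sim = None

    def on_clock(self, msg):
        sim_t = msg.clock.sec + msg.clock.nanosec * 1e-9
        wall_t = time.monotonic()
        if self.start_wall is None:
            self.start_wall, self.start_sim = wall_t, sim_t
            return
        wall_elapsed = wall_t - self.start_wall
        if wall_elapsed > 1.0:  # log roughly once per second of wall time
            rtf = (sim_t - self.start_sim) / wall_elapsed
            self.get_logger().info(f'real-time factor: {rtf:.2f}')
            self.start_wall, self.start_sim = wall_t, sim_t


def main():
    rclpy.init()
    rclpy.spin(RtfMonitor())


if __name__ == '__main__':
    main()
```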

Camera sensors on grippers, for visualization.

Are there advantages in camera image rendering that come with O3DE? How high can the camera frame rate go? Does camera frame rate matter much for agricultural applications? Perhaps not as much as for dynamic manipulation, since it is using a suction cup. Maybe one relevant question is how fast the robot picks apples, whether it stops completely before picking or there might be motion from the chassis, and whether the camera frame rate helps with more efficient picking.

General comments:

  • In general, if the demo can tell people a speed of picking that is more efficient than some baseline, that would be helpful. That speed would have to be compared to real-world speed to be meaningful.

    Perhaps some profiler that tells the amount of time taken for each module in each time step - rendering, physics update, joint actuation, controller, etc. An even more rigorous comparison would be to profile several different simulators, and show that O3DE is better in some way. That might be out of the scope of this demo.

  • What format is the robot description in? With ROS 2 in the picture, is it something like URDF?

  • Accuracy will always be a question for any simulator. Since physics is coming from PhysX (I think?), this might be less of a direct concern of this demo, but people will probably still ask. Given that the manipulation is done by suction cup, physical accuracy won’t matter much. How about the visual accuracy - when an apple is sucked, for example, does it shake around a bit like in the Abundant Robotics demo video? If not, is there mockup work that will be done to make it appear more realistic? Presumably, to showcase a graphics engine, visual believability is important.

  • The above are my simulation-oriented comments. As for the robot design itself, I don’t have much feedback as you can make a mobile robot whatever you want. It looks reasonable enough to me. If it can navigate (which looks like it can with the LIDARs and nav stack), and it can manipulate (on-hand camera seems reasonable), then it should be fine.

    The only thing that probably requires some testing and tuning is the position of the on-hand / in-hand camera. Those can be tricky to get right, because when the gripper is too close to the object, then the object completely blocks the camera view. But again, with suction cup as the end-effector, it is as minimal as it gets. If one camera is not enough, you might need two cameras, which is common in the real world.

  • I understand that you do not wish to reuse existing robot models because of legal constraints. I would just like to point out that there’s a database on Gazebo Fuel, specifically the SubT Tech Repo collection, which is authored by Open Robotics. There may be implementation references for sensors, joints, or whatever robot parts you might run into. Looks like many mobile robot models are licensed under Creative Commons Attribution 4.0 International. https://app.gazebosim.org/OpenRobotics/fuel/collections/SubT%20Tech%20Repo

Please let me know if this type of feedback from us is adequate, as I'm essentially parachuting into this thread, and if you have any questions or clarifications to anything I said. Thanks!

@adamdbrw (Collaborator) commented:

@mabelzhang thank you for your feedback and for putting in the effort to think about it! Let me try to answer some of the questions. I might not have all the answers, but perhaps collectively we can arrive at a good understanding.

A preliminary clarifying question is, what is the primary goal the demo is trying to showcase? In my understanding, this is to showcase the combination of O3DE and ROS 2? O3DE is a graphics engine, and the physics is provided by PhysX?

O3DE is a game/simulation development engine, which includes, among other parts, a multi-threaded renderer and a physics engine (PhysX). We would like to showcase how O3DE can be used for robotic simulation with ROS 2. I believe the message is that O3DE with its ROS 2 Gem is very promising and already quite capable. Our goal is to invite the community to try it out and to contribute to its development.

What are the characteristics and advantages of O3DE that make it stand out from other engines that people may already be using? Does this demo, and specifically, this robot design, highlight these advantages? Why should people choose O3DE for the agricultural application showcased over competitor software? How do they know this isn’t just another reproduction of something that’s already working in existing engines, say, Unreal Engine or Gazebo, that makes O3DE more suitable for their applications?

O3DE is developing at a solid pace. While we certainly cannot make up in such a short time for the years of development that some existing engines have already had with robotic simulation, I believe that O3DE has, or will have, substantial advantages. Some of them are:

  1. It is open source with no fees and has an active and supportive community.
  2. The ROS 2 integration in O3DE is poised to be better than in some other engines in terms of developer power, overall experience, and performance (no bridging).
  3. We aim for it to be well-documented. A good amount of work toward this goal has already been completed.
  4. We aim for it to be scalable, which means both performant and well-integrated with scale-up solutions such as AWS RoboMaker. We have already deployed such a solution with the last demo.

For the imminent demo at ROSCon 2022, we would like to underline these items and show that O3DE could be a good choice for developing a robotic use case. Our showcase example should be visually appealing, relevant to an actual use case in robotic agriculture, and demonstrate the engine and the ROS 2 Gem successfully applied to a problem. It is also easy to show scaling up, considering the area and multiple rows of apple trees.

Is this referring to the suction grasp, or the placing? I was curious if the physics for suction is real, or if it’s implemented as a translation / teleportation.
How about the visual accuracy - when an apple is sucked, for example, does it shake around a bit like in the Abundant Robotics demo video?
The only thing that probably requires some testing and tuning is the position of the on-hand / in-hand camera

Note that these items would be more relevant if we were simulating a real robot and providing a tool to validate it. Our approach is to show the operation as intended and look at it in a modular way: we are doing X based on ground truth, but one could replace this with an actual algorithm to validate.

  1. Suction gripper just teleports the apple - but it could just as well include simulation of forces.
  2. The transmission belt inside is not simulated - but it could be, if this is the part someone would want to validate in a similar robot.
  3. Apple bruising is not simulated - but since it is important for the use-case one could add such simulation based on forces applied to its rigid body.
  4. There is no apple detection in the demo (we are using ground truth) - but one could as well run a ROS 2 detector package with sim camera data.
  5. .... (other items include: replacing ground truth position with some EKF, simulating distortion and noise of sensors, apple storage, battery life / charging of the robot, and many many more).

Note that we also use this demo to drive development of features around URDF import, vehicle dynamics, and manipulation. I believe that a perfect next milestone for O3DE + ROS 2 Gem would be a simulation of a real use case in cooperation with an end user.

What format is the robot description in? With ROS 2 in the picture, is it something like URDF?

Yes, we will create the URDF for this robot and use our Importer.

(Points about sensors and performance)
These points are good, and we certainly would like to have great performance. Initial work towards it has been done, but much more remains. I am not sure how much we can still do for the demo, but I have some ideas. Performance benchmarking and comparison is something I enjoy doing, so I would love it if we could find time for it, even if it is after ROSCon.

These are just my answers. If anyone has something to add or dispute - please join in the discussion.

@j-rivero (Collaborator) commented:

Quick notes for the navigation:

Front lidar (for obstacle detection, and navigation stack). Additional lidars on the frame, (optionally): front left, front right, back.

Given that the terrain is flat, I think we can assume that the front lidar is a horizontal lidar. We need to make sure it is positioned so that it does not detect the vehicle's own structure.

Question: assuming that the movement is going to be managed by the navstack, and given the environment design in #12, is the goal of the demo to be able to move the robot to any place in the scene, or only to process the straight apple tree rows at the bottom of the scene?

Reading the scripting design, it seems to me like we are going to control the navstack goals but process all the apple tree rows. If that is the case, I am not sure we can get away with a single front lidar when performing turns with a 3 m long vehicle (especially U-turns between contiguous rows of trees) without crashing. Two ideas to make our life easier:

  • Make sure that we have a lot of space between apple tree rows.
  • Avoid short U-turns between contiguous rows and instead increase the spacing with an ordering like row1 -> row3 -> row5 -> row2 -> row4 -> row6 (see the sketch below).
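
A small sketch of that ordering idea, assuming rows are simply numbered 1..N (the function name and stride parameter are illustrative, not demo code):

```python
def row_visit_order(num_rows, stride=2):
    """Visit rows so that consecutive goals are `stride` rows apart,
    avoiding tight U-turns between adjacent rows.
    Example: row_visit_order(6) -> [1, 3, 5, 2, 4, 6]."""
    rows = list(range(1, num_rows + 1))
    order = []
    for offset in range(stride):
        order.extend(rows[offset::stride])
    return order
```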

A side note is that we will need to construct the map of the scene for the navstack; SLAM makes little sense to me in the context of the demo.

@j-rivero (Collaborator) commented:

Not related to the vehicle design, but as preparation for answers we may need to give at ROSCon:

Note that these items would be more relevant if we were simulating a real robot and providing a tool to validate it. Our approach is to show the operation as intended and look at it in a modular way: we are doing X based on ground truth, but one could replace this with an actual algorithm to validate.

While I understand perfectly the state of the current development and the scope of the demo, questions at ROSCon can be picky, so for example:

1. Suction gripper just teleports the apple - but it could just as well include simulation of forces.

Imaginary attendee asking questions: "Ah great! Do you think that the simulator is fully capable of simulating the aerodynamics of this case? Do you have an example of that kind of simulation?" 😉

2. Transmission belt within is not simulated - but it could be, if this is the part someone would want to validate in a similar robot.

The same applies to non-rigid bodies.

@mabelzhang (Collaborator) commented Aug 24, 2022

@adamdbrw thank you for those detailed answers! That gives me a better sense.

@j-rivero raises good points above.

I think the last points about the capability of the simulation are very valid, and they apply to this bullet too:

3. Apple bruising is not simulated - but since it is important for the use-case one could add such simulation based on forces applied to its rigid body.

While reading it, I was thinking that this is actually really difficult to do. Given the state of the art, contact forces are very difficult for any simulator to model accurately.
Questions from advanced users will probably be along these very technical lines, as pointed out above.

The ROS 2 integration in O3DE is posed to be better in terms of developers' power and overall experience as well as performance (no bridging) than in some other engines.

This can be a double-edged sword, as some users view ROS as a large dependency. How much of ROS 2 needs to be installed for this integration to work - does it work with just the minimum installation of ROS 2?

@adamdbrw (Collaborator) commented Aug 24, 2022

While reading it, I was thinking that this is actually really difficult to do. At the state of the art, contact forces are very difficult for any simulator to do accurately.

It could be enough for many cases to simply simulate whether the apple was bruised (not the size, placement, or other characteristics of a bruise).
I think that the question of "what degree of realism is possible using a certain engine" is often not easy to answer without a lot of work (trying different models of a given physical phenomenon), and often is not as important as the question "what do I need to simulate to get most of the value".
Having said this, proofs of capability are important, and it will be good to keep that in mind for stretch goals and further milestones.

How much of ROS 2 needs to be installed for this integration to work - does it work with just the minimum installation of ROS 2?

The current version of the Gem needs the following (and their dependencies): rclcpp, builtin_interfaces, std_msgs, sensor_msgs, urdfdom, tf2_ros.
On top of that, the project would use additional packages such as the navigation stack (really up to the project's developers).

If we want to support a case with reduced ROS dependencies, standalone releases are possible as well - where all necessary libraries are actually included and no dependencies need to be installed.

@j-rivero (Collaborator) commented:

black box is where the apples go and are then teleported to the back of the blackbox to come out into the container on the back of the vehicle.

Returning to this black box: if at some point we need to show the teleportation of the apples, does the apple container need to be open, or is there another option that is not too complex? To simplify things, we could go with a few simple fixed designs that partially show how full the open basket is, something like:

  • 1 to 4: a few apples are shown in the container
  • 4 to 10: a bunch of apples are shown in the container
  • 10 to infinity: the container is full of apples

@adamdbrw (Collaborator) commented:

I think we can progress in the following way:

  1. Make an opaque storage that holds an unlimited number of apples.
  2. Show visuals for a few distinct states as you proposed (see the sketch after this list).
    2a. (Stretch goal) actually place the apples there (we could make them kinematic objects if needed and compute placement relative to the storage frame).
  3. Integrate with the unload scripting (when full, spawn a couple of crates of apples).
  4. (Even bigger stretch goal) actually lower the crates through some kind of mechanism (e.g. with a floor opening).
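
A minimal sketch of the state logic behind points 1-3, using the thresholds proposed above; the function names and the capacity used to trigger unloading are assumptions for illustration only:

```python
def container_visual_state(apple_count):
    """Pick one of a few fixed container visuals based on how many apples
    have been teleported in so far (thresholds from the proposal above)."""
    if apple_count == 0:
        return 'empty'
    if apple_count <= 4:
        return 'few_apples'       # 1 to 4 apples
    if apple_count <= 10:
        return 'bunch_of_apples'  # 4 to 10 apples
    return 'full'                 # 10+ apples


CONTAINER_CAPACITY = 50  # hypothetical count at which the unload scripting kicks in

def should_unload(apple_count):
    """When full, the unload scripting would spawn a couple of crates of apples."""
    return apple_count >= CONTAINER_CAPACITY
```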

@forhalle (Collaborator, Author) commented:

Notes from today's conversation:

  • We agreed to close
  • Asset creation can start; some iteration will be needed

michalpelka pushed a commit that referenced this issue Dec 2, 2022