
AWS Live Demo - Document Script (demo steps + talking points) #3

Closed
forhalle opened this issue Aug 9, 2022 · 16 comments

@forhalle
Collaborator

forhalle commented Aug 9, 2022

To support manually executing a live demo inside the ROSCon booths, document each step of the user story to be told through the demo, including the estimated amount of time dedicated to each step.

For example:
0:00 - 2:00: Import robot into software
2:01 - 2:30: Insert image recognition software
2:31 - 5:00: Manually drive robot, identify fruit, move the robot arm to pick the fruit, and place fruit in the vehicle's container
etc.

Acceptance Criteria:

  • Script is reviewed and agreed upon with all parties (Robotec.ai, Open Robotics, AWS)
@forhalle forhalle changed the title Document Demo Walkthrough Document Demo Story Aug 9, 2022
@forhalle forhalle changed the title Document Demo Story Document Demo Script Aug 9, 2022
@forhalle
Collaborator Author

forhalle commented Aug 11, 2022

Notes from today's meeting:

  • We will start with the first priorities listed below, and will only move on to the second priority if time allows.
  • Driving the vehicle:
    • First Priority (Required): Manually driven robot
    • Second Priority (Stretch): Automatically driven robot (which will allow us to scale up to multiple robots)
  • Driving the robot arm:
    • First Priority (Required): Scripted manipulation (working via endpoints)
    • Second Priority (Stretch): Apple detection
    • Third Priority (Future): Sensing the apple

@forhalle forhalle changed the title Document Demo Script Live Demo - Document Script Aug 11, 2022
@forhalle
Collaborator Author

Notes from today's meeting:

  • Will have first version by next call

@adamdbrw
Collaborator

adamdbrw commented Sep 1, 2022

Longer engagement (probably needs to be shortened):

  1. Initial state. The demo operator has to reset to this after each user engagement. Make it easy to reset to initial state.
    1. The scene is ready with the default camera view (facing one of the rows as seen from the dirt road), no robot yet.
    2. The simulation is not running - we are in the Editor mode.
    3. Robot prefab is not loaded / URDF is not imported.
  2. Import the robot using URDF Importer. Improve UX for the Importer. (1 minute)
    1. Select the Importer from the menu, open the tool.
    2. Select the URDF. By default, suggest the correct apple-robot file.
    3. The robot by default should appear in our view oriented towards the apple row entrance. Default spawning point(s).
  3. Add required simulation components (perhaps as prefabs to save time) (2-3 minutes). Optimize the flow here.
    1. Talk about what needs to be added (since it is not a part of URDF)
    2. If it takes too much time at any point, instead go ahead and select a ready prefab (apple-kraken_ready.prefab) and summarize how it was created from the imported one.
    3. Follow up with adding other items. Measure how much we can do in short time, focus on adding the most interesting parts and fast forward to the ready one. Determine steps here (add vehicle control, wheel controllers, sensors, manipulator control components).
  4. Run the simulation and the ROS 2 stack (launch file, including RViz2) (0.5 minute). Create the launch file (see the sketch after this list).
  5. Explain what is happening with the ROS 2 stack.
  6. Set the navigation goal to the nearest apple tree using RViz2. Prepare rviz2 config and map.
    1. Watch the robot go, explain some things as it travels. This should only take up to 30 seconds.
  7. Give the user manual control over the manipulator, explain controls, let them pick apples. Determine controls, figure out camera, bring gamepad(s)? (1.5 minutes)
    1. We can gamify it - make the picked apple count appear in the simulation, limited time. Implement counting, scoring.
      1. (Optionally) display the best human score
  8. Have the user start the script of automated picking (1 minute)
    1. The robot moves to the next tree.
    2. The scripted picking starts (it should be substantially faster than the user)
    3. Explain what is happening and that we are using ground truth here, but this could also be plugged into a ROS 2 detection package.
  9. Add more robots - the user can use the spawner to scale up. Set fixed spawning points, determine limit for performance, figure out how to run ROS 2 stacks for each. (1 minute)
  10. Ask for feedback, what would be useful to simulate, what the user would like to see if they were to use such a tool. (1 minute)
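
A minimal sketch of what the launch file in step 4 could look like, assuming nav2 is used for navigation; the package, file, and RViz2 config names are placeholders rather than the project's actual layout:

```python
# apple_demo.launch.py: sketch only, names and paths are assumptions.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node


def generate_launch_description():
    # Bring up nav2 with simulation time enabled.
    navigation = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(get_package_share_directory('nav2_bringup'),
                         'launch', 'navigation_launch.py')),
        launch_arguments={'use_sim_time': 'true'}.items(),
    )

    # RViz2 with the prepared orchard config mentioned in step 6 (path is a placeholder).
    rviz = Node(
        package='rviz2',
        executable='rviz2',
        arguments=['-d', '/path/to/apple_orchard.rviz'],
        parameters=[{'use_sim_time': True}],
    )

    return LaunchDescription([navigation, rviz])
```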

Shorter engagement (custom time)

Run the scripted picking in the background. Allow the user to take over at any point (and restore automation when they are finished). The user would manually control the robot, both the manipulator and the mobile base. We need a robust apple-picking script for this.
We can have counts of apples (manually picked by ROSCon visitors, automatically picked through the conference).

These are ideas that could use plenty of brainstorming, please comment and contribute!

Notes:

  • Two screens would be best here, one for O3DE, the other one with the console (for ROS 2 stack) and RViz2.
  • We could have real yummy apples at our booth to complement the demo.
  • We could run a trained apple detector out-of-the-loop on screen 2 -> visualise ground truth boxes vs detected or similar. Add a clarifying caption so that people understand it is not in the loop.

@adamdbrw
Collaborator

adamdbrw commented Sep 2, 2022

Since we likely would not be able to get real yummy apples (@forhalle already checked), I suppose we could go for some apple-themed gadgets.
My quick search (Google, I really want a fruit apple, not Steve's Apple) yielded this:
https://www.bitsandpieces.com/product/shiny-3d-apple-puzzle
[image: shiny 3D apple puzzle]

Would it be possible to get 20-30 like this, but with the O3DE logo?
Does not have to be puzzles, but some themed gadget as a prize for engaged users would be great.
Another idea would be branded t-shirts with "Certified Apple-Kraken operator", apples, and the O3DE logo.
Help me, I am not good at this :)

@forhalle
Collaborator Author

forhalle commented Sep 7, 2022

You have great ideas, @adamdbrw. We're excited to talk with you about this at our upcoming meeting. In the meantime, here is a cut/paste of the conversation I had with the venue about the real apples for your reference:

apple conversation.pdf

@adamdbrw
Collaborator

adamdbrw commented Sep 8, 2022

I believe we discussed that we will likely have two versions (one for each booth), variants of the longer and shorter engagements as mentioned in this comment:
#3 (comment)
Details are still TBD.

@forhalle
Collaborator Author

forhalle commented Sep 8, 2022

Notes from today's meeting:

  • At the Robotec.ai booth, we will demo the "longer engagement" (i.e. A technical demo showing import from URDF, adding components, etc.) mentioned above.
  • At the AWS booth, we will demo the "shorter engagement" (i.e. gamified multiple robot picking demo running on Robomaker using AWS services) mentioned above.

@forhalle forhalle changed the title Live Demo - Document Script AWS Live Demo - Document Script Sep 12, 2022
@forhalle
Collaborator Author

forhalle commented Sep 26, 2022

@adamdbrw - After much discussion, we have decided to omit the gamification requirement you suggested above from the apple-picking simulation, as it is not directly relevant to demonstrating O3DE for simulation, and we do not have the resourcing available. We will instead focus on our stretch goal of simulating multiple robots through RoboMaker integration with AWS services. We can talk about this more at our meeting this week.

@spham-amzn has agreed to add some detail to this ticket regarding the final script. In the meantime, however, I'd expect the script to borrow the below items from your original script above:

"0" Initial state. Robot is already imported and simulation components are already set up....

"3" Run the simulation and the ROS 2 stack (launch file, including RViz2) (0.5 minute). Create the launch file.
"4" Explain what is happening with ROS2 stack.
"5" Set the navigation goal to the nearest apple tree using RViz2. Prepare rviz2 config and map.
"i" Watch the robot go, explain some things as it travels. This should only take up to 30 seconds.

"8" Add more robots - the user can use the spawner to scale up. Set fixed spawning points, determine limit for performance, figure out how to run ROS 2 stacks for each. (1 minute)
"9" Ask for feedback, what would be useful to simulate, what the user would like to see if they were to use such a tool. (1 minute)

@forhalle
Collaborator Author

@adamdbrw
Collaborator

@forhalle @spham-amzn I understand the reasoning and we will refocus on the new goal. I guess the most important part for our work plan is whether the manipulation is in or out of the demo scope (I cannot determine that from the comment). It is a big item that we can handle in several different ways:

  1. No manipulation. The robot will not be able to pick apples. Two options here:
    1. Remove the manipulator frame and the manipulator altogether
    2. Add visual elements and polish the looks. We could say this is a work in progress.
  2. Implement scripts for manipulator handling, but do not implement any components. This is aiming for a better result since the scene is an apple orchard and there would not be an obvious thing missing. But there are picking challenges to solve.
    1. Do not implement orchestration, instead use simple input control. Kraken can only pick apples manually.
    2. Implement picking orchestration using these scripts - Kraken can automatically pick apples. Optionally: also implement manual control (not a big item).
  3. Implement basic components and use them for scripting. We can claim support for the manipulation feature in the Gem (first version). Users interested in manipulation might be more attracted. On top of the point 2 items, we add components and use them in the model.

I guess the main question is what would we want robots to do except be there and move around the orchard.
I believe we can certainly do better than 1.i. I would suggest going with these points as stretch goals, in this order:
1.ii -> 2.i -> 2.ii -> 3
We need to ensure other goals are reached (e.g. we have a stable simulation and working live demo including all the points you mentioned such as navigation and scaling up).

I suppose we could look at manipulation in the following way:
With 1.ii being the minimal goal, 2.ii being our target, and 3 as a stretch goal.

Let me know what you think.

@forhalle
Collaborator Author

Hi @adamdbrw - We agree with the prioritization you mention above. Really hoping we can get to 2.ii.

@spham-amzn
Collaborator

First draft of the script for the AWS demonstration:

  1. Initialize the ROS 2 Humble environment

  2. Launch O3DE / Apple Orchard Level on Desktop

  3. Spawn a robot onto the Orchard Level

  4. Launch Navigation stack for new robot

  5. Set a navigation goal for the robot to an apple tree (see the sketch after this list)

  6. Watch the robot navigate to the tree

  7. When the robot arrives at the tree, initiate the manipulator script to start apple picking

  8. Explain during this process the sensors involved

  9. While collecting apples, spawn another robot on the apple orchard

  10. Repeat steps 5-7

  11. Ask for questions/feedback
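
For step 5, the goal can be set interactively in RViz2; as a rough sketch, the same goal could also be sent programmatically through nav2's NavigateToPose action (node name and tree coordinates below are made up for illustration):

```python
# nav_to_tree.py: send a navigation goal to an apple tree via nav2 (sketch only).
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose


class NavToTree(Node):
    def __init__(self):
        super().__init__('nav_to_tree')
        self._client = ActionClient(self, NavigateToPose, 'navigate_to_pose')

    def send_goal(self, x, y):
        goal = NavigateToPose.Goal()
        goal.pose.header.frame_id = 'map'
        goal.pose.pose.position.x = x
        goal.pose.pose.position.y = y
        goal.pose.pose.orientation.w = 1.0
        self._client.wait_for_server()
        return self._client.send_goal_async(goal)


def main():
    rclpy.init()
    node = NavToTree()
    future = node.send_goal(12.0, -3.5)  # placeholder apple-tree position
    rclpy.spin_until_future_complete(node, future)  # wait until the goal is accepted
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```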

@forhalle forhalle assigned SLeibrick and unassigned spham-amzn Sep 28, 2022
@SLeibrick
Collaborator

I think the above script aligns with what we discussed, with the addition of having buttons to 'start' the demo.

@SLeibrick
Collaborator

  1. Spawn robot 1 from the console using a service call (see the sketch below); we look from the camera viewpoint of that robot
  2. We direct the robot to go to a particular apple tree using the console screen by running the navigation stack and then using RViz2 to set the goal
  3. When we are in position and immobile, click a button to begin and trigger the apple picking. Starting pose and camera perspective begin from a specific default location.
  4. Invite the user to move the robot to a new tree and repeat apple picking
  5. While the robot is moving, talk about scaling up for multi-robot simulation and using Robomaker to spawn different instances
  6. Spawn a second robot, automatically switch to the camera view showing the new robot in the scene. Spawning points are defined in the scene for multiple robots. Some scripted behaviors with specific predetermined goals.

Additional: text overlay displays status of robot
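
A sketch of what the service call in step 1 could look like, assuming the simulation exposes a gazebo_msgs/srv/SpawnEntity-style spawn service; the service name, robot name, and spawn point are assumptions:

```python
# spawn_kraken.py: spawn a robot through a SpawnEntity-style service (sketch only).
import rclpy
from rclpy.node import Node
from gazebo_msgs.srv import SpawnEntity


class SpawnKraken(Node):
    def __init__(self):
        super().__init__('spawn_kraken')
        self._client = self.create_client(SpawnEntity, '/spawn_entity')

    def spawn(self, name, x, y):
        request = SpawnEntity.Request()
        request.name = name              # e.g. 'apple_kraken_1' (assumed spawnable name)
        request.robot_namespace = name   # keeps per-robot topics separate
        request.initial_pose.position.x = x
        request.initial_pose.position.y = y
        # Depending on the spawn service, an xml/prefab field may also be required.
        self._client.wait_for_service()
        return self._client.call_async(request)


def main():
    rclpy.init()
    node = SpawnKraken()
    future = node.spawn('apple_kraken_1', 5.0, 2.0)  # placeholder spawn point
    rclpy.spin_until_future_complete(node, future)   # wait for the service response
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```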

@spham-amzn
Collaborator

Comments:

  • We could start with the first robot initially (without spawning). (But we can spawn as well)
  • The camera POV should be a third-person view of the robot, not the robot's own POV. I think this is to avoid misleading experienced ROS people into thinking this is what the camera sensor sees, because it is not.
  • We still use rviz + the navigation stack for the first robot to direct the robot to a specific tree
  • Automatically switching to the picking-pose camera perspective once apple picking is triggered is a good idea
  • We can either wait for the picking to complete, or allow a cancel command (sent through ROS) to cancel the apple-picking orchestration and resume navigation
  • At any time during the demo, we can demonstrate spawning new robots. When a new robot is spawned, maybe we can have a trigger in O3DE to switch to a camera view that keeps all the robots in the simulation within view (track view).
  • According to Adam, it will not be possible (at least not before the demo) to configure rviz on the same host to control more than one robot. So when we spawn a robot, we will need to create commands through the command line to instruct the spawned robots to move 'somewhere' (see the sketch at the end of this comment). It could be possible to define fixed points near apple trees to have them start picking apples automatically (stretch)

In addition, after conversations with the RoboMaker team: RoboMaker is designed to scale up simulations, but with one robot app per simulation app at a time. It is not designed to spin up multiple robot applications to interact with a single simulation, so using RoboMaker to highlight that type of scalability isn't appropriate. We can still spawn additional robots and navigation stacks in the same robot application in RoboMaker, but not in a scalable way.
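
As a sketch of the command-line control mentioned in the last bullet, each spawned robot's navigation stack could run under its own namespace and receive goals through namespaced NavigateToPose actions; the namespaces, fixed points, and action names here are assumptions:

```python
# multi_robot_goals.py: send fixed navigation goals to spawned robots (sketch only).
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose

# Fixed points near apple trees where spawned robots could be sent (placeholders).
TREE_GOALS = {
    'apple_kraken_2': (18.0, -3.5),
    'apple_kraken_3': (24.0, -3.5),
}


def send_goal(node, robot_ns, x, y):
    # Each spawned robot's action server is assumed to live under its namespace,
    # e.g. /apple_kraken_2/navigate_to_pose.
    client = ActionClient(node, NavigateToPose, f'/{robot_ns}/navigate_to_pose')
    goal = NavigateToPose.Goal()
    goal.pose.header.frame_id = 'map'
    goal.pose.pose.position.x = x
    goal.pose.pose.position.y = y
    goal.pose.pose.orientation.w = 1.0
    client.wait_for_server()
    return client.send_goal_async(goal)


def main():
    rclpy.init()
    node = Node('multi_robot_goals')
    for robot, (x, y) in TREE_GOALS.items():
        # Wait until each goal is accepted before sending the next one.
        rclpy.spin_until_future_complete(node, send_goal(node, robot, x, y))
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```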

@SLeibrick
Collaborator

@forhalle forhalle changed the title AWS Live Demo - Document Script AWS Live Demo - Document Script (demo steps + talking points) Oct 7, 2022
michalpelka pushed a commit that referenced this issue Dec 2, 2022
Multi robot support

Signed-off-by: Piotr Jaroszek <piotr.jaroszek@robotec.ai>
michalpelka pushed a commit that referenced this issue Dec 8, 2022
Multi robot support

Signed-off-by: Piotr Jaroszek <piotr.jaroszek@robotec.ai>