AWS Live Demo - Document Script (demo steps + talking points) #3
Notes from today's meeting:
Longer engagement (probably needs to be shortened):
Shorter engagement (custom time): run the scripted picking in the background, and allow the user to take over at any point (restoring automation when they are finished). The user would manually control the robot, both the manipulator and the mobile base. These are ideas that could use plenty of brainstorming; please comment and contribute!
Notes:
Since we likely would not be able to get real yummy apples (@forhalle already checked), I suppose we could go for some apple-themed gadgets. Would it be possible to get 20-30 like this, but with the O3DE logo?
You have great ideas, @adamdbrw. We're excited to talk with you about this at our upcoming meeting. In the meantime, here is a cut/paste of the conversation I had with the venue about the real apples for your reference:
I believe we discussed that we will likely have two versions (one for each booth), a variant of longer and shorter engagement as mentioned in this comment:
Notes from today's meeting:
@adamdbrw - After much discussion, we have decided to omit the gamification requirement you previously suggested (above) from the apple-picking simulation, as it is not directly relevant to demonstrating O3DE for simulation, and we do not have resourcing available. We will instead focus on our stretch goal of simulating multiple robots through RoboMaker integration with AWS services. We can talk about this more at our meeting this week. @spham-amzn has agreed to add some detail to this ticket regarding the final script. In the meantime, however, I'd expect the script to borrow the following items from your original script above:
"0" Initial state. The robot is already imported and the simulation components are already set up.
"3" Run the simulation and the ROS 2 stack (launch file, including RViz2) (0.5 minute).
"8" Add more robots - the user can use the spawner to scale up.
@spham-amzn - Here's a starting point: https://github.com/aws-lumberyard/ROSConDemo/wiki/Demo-Walkthrough
@forhalle @spham-amzn I understand the reasoning, and we will refocus on the new goal. I guess the most important part for our work plan is whether manipulation is in or out of the demo scope (I cannot determine that from the comment). It is a big item that we can handle in several different ways:
I guess the main question is what we would want the robots to do other than be there and move around the orchard. I suppose we could look at manipulation in the following way: Let me know what you think.
Hi @adamdbrw - We agree with the prioritization you mention above. Really hoping we can get to 2.ii.
First draft of the script for the AWS demonstration:
I think the above script aligns with what we discussed, with the addition of having buttons to 'start' the demo.
Additional: a text overlay displays the status of the robot
Comments:
In addition, after conversations with the RoboMaker team: RoboMaker is designed to scale up simulations, but it runs one robot app per simulation app at a time. It is not designed to spin up multiple robot applications to interact with a single simulation, so using RoboMaker to highlight that type of scalability isn't appropriate. We can still spawn additional robots and navigation stacks in the same robot application in RoboMaker, but not in a scalable way.
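To make the "additional robots and navigation stacks in the same robot application" idea concrete: a common way to run several robots side by side in one ROS application is to give each robot its own topic namespace. The sketch below is a minimal, ROS-free illustration of that naming scheme; the `robot_N` prefix and the `cmd_vel`/`odom` topic names are illustrative assumptions, not the project's actual identifiers.

```python
def namespaced_topics(robot_count, topics=("cmd_vel", "odom")):
    """Build per-robot namespaced topic names (e.g. /robot_1/cmd_vel)
    so several robots can each run their own stack in one application."""
    return {
        f"robot_{i}": [f"/robot_{i}/{topic}" for topic in topics]
        for i in range(1, robot_count + 1)
    }

# Example: topic layout for three robots sharing one simulation.
layout = namespaced_topics(3)
for robot, topic_names in layout.items():
    print(robot, topic_names)
```

In an actual launch setup, the same per-robot namespace would be passed to each navigation stack instance so their topics do not collide.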
Multi robot support Signed-off-by: Piotr Jaroszek <piotr.jaroszek@robotec.ai>
To support manually executing a live demo inside the ROSCon booths, document each step of the user story to be told through the demo, including the estimated amount of time dedicated to each step.
For example:
0:00 - 2:00: Import robot into software
2:01 - 2:30: Insert image recognition software
2:31 - 5:00: Manually drive robot, identify fruit, move the robot arm to pick the fruit, and place fruit in the vehicle's container
etc.
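While drafting the script, the timeboxed steps above can be encoded and summed programmatically, which makes it easy to check that the demo fits the booth slot. A minimal sketch; the step names and durations are taken from the example above and are illustrative only:

```python
from datetime import timedelta

# Illustrative steps from the example: (description, duration in seconds)
STEPS = [
    ("Import robot into software", 120),
    ("Insert image recognition software", 30),
    ("Manually drive robot, identify fruit, pick and place it", 150),
]

def schedule(steps):
    """Return (start, end, description) tuples with cumulative timestamps."""
    out, elapsed = [], 0
    for description, duration in steps:
        out.append((timedelta(seconds=elapsed),
                    timedelta(seconds=elapsed + duration),
                    description))
        elapsed += duration
    return out

for start, end, description in schedule(STEPS):
    print(f"{start} - {end}: {description}")
```

Adding or re-timing a step automatically shifts every timestamp after it, so the printed schedule always stays consistent with the step list.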
Acceptance Criteria: