Identify Orchestration Needs #9

Closed
forhalle opened this issue Aug 9, 2022 · 6 comments

forhalle commented Aug 9, 2022

Identify the scripts necessary for the demo. For example, we may need a script to harvest the apples from the tree (if the robot uses suction to pick the apple, then make the apples disappear when they are harvested, and make them reappear when dropped into the bin).

Acceptance Criteria:

  • A list of scripts necessary for the demo is documented
  • A GitHub issue is created for each needed script
forhalle commented Aug 9, 2022

.

forhalle changed the title from "Create Harvesting Script" to "Identify Scripting Needs" on Aug 9, 2022
adamdbrw commented Aug 18, 2022

To initiate this task, here is a rough description of the scripting. A "mission" script: gather apples from specified rows until interrupted or done (no more apples):

  1. Queue all the trees in the given rows (can be implemented with a lazy / buffered approach).
  2. For each tree in the queue:
    1. Go to the first gathering point of the tree (set a ros2 nav goal, but use the ground truth position next to the tree). Each tree should have one or more gathering points (for total gathering coverage).
    2. Gather apples from the current tree until all are gathered:
      1. Calculate our manipulator's reach (x, y, z range coverage; it will be a cube - this could be done on startup).
      2. For each gathering position for the current tree (we can start with a single position):
        1. Query the environment for the pickable apples (within manipulator reach).
        2. Queue the picking order (some simplified algorithm for the traveling salesman problem, or just a "book reading" top-to-bottom, left-to-right sweep).
        3. For each apple in the queue:
          a. Calculate the desired position of the manipulator.
          b. Position the manipulator in front of the apple (approach simultaneously with x and y, extend at the end).
          c. Apple picking: the apple vanishes.
          d. Add the apple to our storage.
          e. If storage is full, do the unloading (e.g. spawn a couple of full crates of apples behind the robot, then empty the storage).
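
To make the flow concrete, here is a rough, self-contained Python sketch of this control flow. Every name in it (MockRobot, pick_order, and so on) is a hypothetical placeholder rather than an existing project API; the mock methods only mark the integration points (nav goals, environment queries, manipulator control, spawning):

```python
# Hypothetical sketch of the "mission" script above; all names are
# placeholders, not existing project APIs.
from dataclasses import dataclass


@dataclass
class Apple:
    x: float
    y: float
    z: float


def pick_order(apples):
    # "Book reading" sweep (step 2.2.2): top-to-bottom, then left-to-right.
    return sorted(apples, key=lambda a: (-a.z, a.x))


class MockRobot:
    STORAGE_CAPACITY = 4

    def __init__(self):
        self.storage = []

    def queue_trees(self, rows):
        # Step 1: lazy / buffered queue of all trees in the given rows.
        for row in rows:
            yield from row

    def gathering_points(self, tree):
        # One or more gathering points per tree; start with a single one.
        return [tree]

    def navigate_to(self, point):
        # Would set a ros2 nav goal at the ground truth position.
        print(f"navigating to {point}")

    def pickable_apples(self, tree, reach):
        # Would query the environment for apples within manipulator reach.
        return [Apple(0.1, 0.0, 1.8), Apple(-0.2, 0.0, 1.2)]

    def pick(self, apple):
        # Steps 3a-3d: position the manipulator, make the apple vanish,
        # add it to storage; step 3e: unload when storage is full.
        print(f"picked {apple}")
        self.storage.append(apple)
        if len(self.storage) >= self.STORAGE_CAPACITY:
            print("unloading: spawn full crates behind the robot")
            self.storage.clear()


def run_mission(rows, robot):
    reach = None  # manipulator x, y, z reach; computed once on startup
    for tree in robot.queue_trees(rows):
        for point in robot.gathering_points(tree):
            robot.navigate_to(point)
            for apple in pick_order(robot.pickable_apples(tree, reach)):
                robot.pick(apple)


run_mission(rows=[["tree_a", "tree_b"]], robot=MockRobot())
```

The pick_order function implements the simple "book reading" sweep; swapping in a traveling-salesman heuristic would only change that one function.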

adamdbrw self-assigned this on Aug 18, 2022
adamdbrw commented

If points 3a and 3b could be done through integration with MoveIt 2, that would be great (we would like to demonstrate it).

adamdbrw commented Sep 1, 2022

Giving it a bit of thought - do you think it would be good to move some of this to ROS 2 packages? This would reflect a real use case a bit more. One or more of the following could replace selected parts of the scripting (based on ground truth):

  • A node which plans the work
  • A navigation node (we want this one anyway)
  • A picking planner
  • A node responsible for "finding" apples
  • A node to manage unloading

This would take some work, but we would have an example where replacing implementations with real ones (e.g. a real apple detector) is much closer.
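
As an illustration only, a hypothetical ROS 2 launch sketch of this decomposition could look like the following (all package and executable names are invented placeholders, not existing packages):

```python
# Hypothetical launch description; package and executable names are
# invented for illustration and do not exist in the project.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(package='demo_orchestration', executable='work_planner'),
        Node(package='demo_orchestration', executable='picking_planner'),
        Node(package='demo_perception', executable='apple_detector'),
        Node(package='demo_orchestration', executable='unloading_manager'),
        # Navigation would come from the standard nav2 bringup rather
        # than a custom node; omitted here for brevity.
    ])
```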

If we make only one such node, the apple detector would be best.
It would take the camera image as well as manipulator pose(s) on topics and publish bounding boxes for the apples in the image. It could even be a stub which only republishes from a ground truth topic (sim "cheating") to the detector topic, but it would still be an example of ROS 2 interaction.
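
A minimal rclpy sketch of such a stub follows; the topic names and the use of vision_msgs/Detection2DArray are assumptions for the example, not agreed interfaces:

```python
# Stub detector: republishes ground truth apple boxes (sim "cheating")
# as if a real detector had produced them. Topic names and message type
# are assumptions, not project interfaces.
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection2DArray


class AppleDetectorStub(Node):
    def __init__(self):
        super().__init__('apple_detector_stub')
        # Ground truth boxes published by the simulation (assumed topic).
        self.sub = self.create_subscription(
            Detection2DArray, '/ground_truth/apple_boxes', self.on_boxes, 10)
        # Detector output; a real detector would publish the same type.
        self.pub = self.create_publisher(
            Detection2DArray, '/apple_detections', 10)

    def on_boxes(self, msg):
        # Pass ground truth through unchanged; a real implementation
        # would run inference on the camera image instead.
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(AppleDetectorStub())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Because the stub publishes the same message type a real detector would, swapping in an actual inference node later would not change any downstream consumers.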

Let me know if it makes sense to you.

adamdbrw commented Sep 19, 2022

Orchestration Needs have been identified with the following subtasks: #42 #43 #44 #45 #46 #47.
@forhalle this task can now be closed. Tasks #42-#46 might spawn smaller sub-tasks in the future, but they are supposed to cover all the required automation / scripting for the live demo.

Note: we should add a task for user interaction within the AWS live demo - we should ensure that manual control works, that there is a certain gamification to it (e.g. a timer), and that there are good camera views for the task.

forhalle changed the title from "Identify Scripting Needs" to "Identify Orchestration Needs" on Sep 22, 2022
forhalle commented

Hi @adamdbrw - I'll close the issue this time, but feel free to close issues in the future (I don't want to be a bottleneck). Also, per these notes, thanks for creating the user interaction tasks you mention above.
