Merged
22 changes: 17 additions & 5 deletions README.md
@@ -10,16 +10,28 @@ This is a central repository for tools, tutorials, resources, and documentation

Simulation plays an important role in robotics development, and we’re here to ensure that roboticists can use Unity for these simulations. We're starting off with a set of tools to make it easier to use Unity with existing ROS-based workflows. Try out some of our samples below to get started quickly.

## Getting Started with Unity Robotics
## Getting Started
### [Quick Installation Instructions](tutorials/quick_setup.md)

Brief steps on installing the Unity Robotics packages.

### [Pick-and-Place Demo](tutorials/pick_and_place/README.md)

A complete end-to-end demonstration, including how to set up the Unity environment, how to import a robot from URDF, and how to set up two-way communication with ROS for control.

### **New**: [Pose Estimation Demo](tutorials/pose_estimation/README.md)

A complete end-to-end demonstration, in which we collect training data in Unity, and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.

### [Articulations Robot Demo](https://github.com/Unity-Technologies/articulations-robot-demo)

A robot simulation demonstrating Unity's new physics solver (no ROS dependency).
## Documentation

| Tutorial | Description |
|---|---|
| [Quick Installation Instructions](tutorials/quick_setup.md) | Brief steps on installing the Unity Robotics packages |
| [Pick-and-Place Demo](tutorials/pick_and_place/README.md) | A complete end-to-end demonstration, including how to set up the Unity environment, how to import a robot from URDF, and how to set up two-way communication with ROS for control |
| [Pose Estimation Demo](tutorials/pose_estimation/README.md) | A complete end-to-end demonstration, in which we collect training data in Unity, and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task. |
| [ROS–Unity Integration](tutorials/ros_unity_integration/README.md) | A set of component-level tutorials showing how to set up communication between ROS and Unity |
| [URDF Importer](tutorials/urdf_importer/urdf_tutorial.md) | Steps on using the Unity package for loading [URDF](http://wiki.ros.org/urdf) files |
| [Articulations Robot Demo](https://github.com/Unity-Technologies/articulations-robot-demo) | A robot simulation demonstrating Unity's new physics solver (no ROS dependency) |


## Component Repos
4 changes: 2 additions & 2 deletions tutorials/pose_estimation/Documentation/0_ros_setup.md
@@ -1,4 +1,4 @@
# Pose-Estimation-Demo Tutorial: Phase 0
# Pose-Estimation-Demo Tutorial: Part 0

This page provides steps on how to manually set up a catkin workspace for the Pose Estimation tutorial.

@@ -64,4 +64,4 @@ This YAML file is a rosparam set from the launch files provided for this tutoria

>Note: The launch files for this project are available in the package's launch directory, i.e. `src/ur3_moveit/launch/`.

The ROS workspace is now ready to accept commands! Return to [Phase 3: Set up the Unity side](3_pick_and_place.md#step-3-set-up-the-unity-side) to continue the tutorial.
The ROS workspace is now ready to accept commands! Return to [Part 4: Set up the Unity side](4_pick_and_place.md#step-3) to continue the tutorial.
16 changes: 9 additions & 7 deletions tutorials/pose_estimation/Documentation/1_set_up_the_scene.md
@@ -1,6 +1,6 @@
# Pose Estimation Demo: Phase 1
# Pose Estimation Demo: Part 1

In this first phase of the tutorial, we will start by downloading and installing the Unity Editor. We will install our project's dependencies: the Perception, URDF, and TCP Connector packages. We will then use a set of provided prefabs to easily prepare a simulated environment containing a table, a cube, and a working robot arm.
In this first part of the tutorial, we will start by downloading and installing the Unity Editor. We will install our project's dependencies: the Perception, URDF, and TCP Connector packages. We will then use a set of provided prefabs to easily prepare a simulated environment containing a table, a cube, and a working robot arm.


**Table of Contents**
@@ -76,7 +76,7 @@ Install the following packages with the provided git URLs:

>Note: If you encounter a Package Manager issue, check the [Troubleshooting Guide](troubleshooting.md) for potential solutions.

### <a name="step-3">Setup Ground Truth Render Feature</a>
### <a name="step-3">Set up Ground Truth Render Feature</a>

The Hierarchy, Scene View, Game View, Play/Pause/Step toolbar, Inspector, Project, and Console windows of the Unity Editor have been highlighted below for reference, based on the default layout. Custom Unity Editor layouts may vary slightly. A top menu bar option is available to re-open any of these windows: Window > General.

@@ -100,10 +100,10 @@ The Perception package relies on a "ground truth render feature" to save out la
</p>


### <a name="step-4">Setup the Scene</a>
### <a name="step-4">Set up the Scene</a>

#### The Scene
Simply put in Unity, Scenes contain any object that exists in the world. This world can be a game, or in this case, a data-collection oriented simulation. Every new project contains a Scene named SampleScene, which is automatically opened when the project is created. This Scene comes with several objects and settings that we do not need, so let's create a new one.
Simply put, in Unity a `Scene` contains every object that exists in the world. This world can be a game or, in this case, a data-collection-oriented simulation. Every new project contains a Scene named SampleScene, which is automatically opened when the project is created. This Scene comes with several objects and settings that we do not need, so let's create a new one.

1. In the _**Project**_ tab, right-click on the `Assets > Scenes` folder and click _**Create -> Scene**_. Name this new Scene `TutorialPoseEstimation` and double-click on it to open it.

@@ -163,7 +163,7 @@ Finally we will add the robot and the URDF files in order to import the UR3 Robo

8. In the _**Project**_ tab, go to `Assets > TutorialAssets > URDFs > ur3_with_gripper`, right-click on the `ur3_with_gripper.urdf` file, and select `Import Robot From Selected URDF file`. A window will pop up; keep the default **Y Axis** type in the Import menu and leave the **Mesh Decomposer** set to `VHACD`. Then click Import URDF. This set of actions is shown in the following video.

>Note: Unity uses a "left handed" coordinate system in which the y-axis points up. However, many robotics packages use a right-handed coordinate system in which the z-axis or x-axis points up. For this reason, it is important to pay attention to the coordinate system when importing URDF files or interfacing with other robotics software.
>Note: Unity uses a left-handed coordinate system in which the y-axis points up. However, many robotics packages use a right-handed coordinate system in which the z-axis or x-axis points up. For this reason, it is important to pay attention to the coordinate system when importing URDF files or interfacing with other robotics software.

>Note: The VHACD algorithm produces higher-quality convex hulls for collision detection than the default algorithm.
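The coordinate-system note above can be made concrete. Below is a minimal sketch (not part of the tutorial's own code) of one common ROS-to-Unity position conversion, assuming a right-handed, z-up ROS frame with x forward and y left; verify the mapping against your own URDF and TF conventions before relying on it:

```csharp
using UnityEngine;

public static class RosUnityConversions
{
    // One common mapping between a right-handed, z-up ROS frame
    // (x forward, y left) and Unity's left-handed, y-up frame
    // (x right, y up, z forward). Hypothetical helper for illustration.
    public static Vector3 RosToUnityPosition(float rosX, float rosY, float rosZ)
    {
        return new Vector3(-rosY, rosZ, rosX);
    }
}
```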

@@ -184,4 +184,6 @@
</p>


### Proceed to [Phase 2](2_set_up_the_data_collection_scene.md).
### Proceed to [Part 2](2_set_up_the_data_collection_scene.md).


@@ -1,35 +1,36 @@
# Pose Estimation Demo: Phase 2
# Pose Estimation Demo: Part 2

In [Phase 1](1_set_up_the_scene.md) of the tutorial, we learned:
In [Part 1](1_set_up_the_scene.md) of the tutorial, we learned:
* How to create a Unity Scene
* How to use Unity's Package Manager to download packages
* How to move and rotate objects in the scene
* How to instantiate Game Objects with Prefabs
* How to import a robot from a URDF file

You should now have a table, cube, camera, and working robot arm in your scene. In this phase we will prepare the scene for data collection with the Perception package.
You should now have a table, a cube, a camera, and a robot arm in your scene. In this part we will prepare the scene for data collection with the Perception package.

<p align="center">
<img src="Images/2_Pose_Estimation_Data_Collection.png" width="680" height="520"/>
</p>

**Table of Contents**
- [Equipping the Camera for Data Collection](#step-1)
- [Equipping the Cube for Data Collection](#step-2-equipping-the-cube-for-data-collection)
- [Add and set up randomizers](#step-3-add-and-setup-randomizers)
- [Equipping the Cube for Data Collection](#step-2)
- [Add and set up randomizers](#step-3)

---

### <a name="step-1">Equipping the Camera for Data Collection</a>

We need to have a fixed aspect ratio so that you are sure to have the same size of images we have when you collect the data.
You need a fixed aspect ratio to ensure that the images you collect are the same size as the images we used. This matters because we trained our deep learning model at that resolution, and during the pick-and-place task we will take a screenshot of the Game view to feed to the model. That screenshot must have the same resolution as the images collected during training.

1. Select the `Game` view and select `Free Aspect`. Then click the **+** (hovering over it shows the tooltip `Add new item`). Set the Width to `650` and the Height to `400`. The gif below shows how to do it.

<p align="center">
<img src="Gifs/2_aspect_ratio.gif"/>
</p>

We need to add a few components to our camera in order to equip it for the perception workflow.
We need to add a few components to our camera in order to equip it for synthetic data generation.

2. Select the `Main Camera` GameObject in the _**Hierarchy**_ tab and in the _**Inspector**_ tab, click on _**Add Component**_.

Expand All @@ -39,7 +40,7 @@ We need to add a few components to our camera in order to equip it for the perce

5. Go to `Edit > Project Settings > Editor` and uncheck `Asynchronous Shader Compilation`.

As you can see in the Inspector view for the Perception Camera component, the list of Camera Labelers is currently empty as you can see with `List is Empty`. For each type of ground-truth you wish to generate alongside your captured frames, you will need to add a corresponding Camera Labeler to this list. In our project we want to extract the position and orientation of an object, so we will use the `BoudingBox3DLabeler`.
In the Inspector view for the Perception Camera component, the list of Camera Labelers is currently empty (`List is Empty`). For each type of ground truth you wish to generate alongside your captured frames, you will need to add a corresponding Camera Labeler to this list. In our project we want to extract the position and orientation of an object, so we will use the `BoundingBox3DLabeler`.

There are several other types of labelers available, and you can even write your own. If you want more information on labelers, you can consult the [Perception package documentation](https://github.com/Unity-Technologies/com.unity.perception).

@@ -91,13 +92,13 @@ The _**Inspector**_ view of the `Cube` should look like the following:

#### Domain Randomization
We will be collecting training data from a simulation, but most real perception use-cases occur in the real world.
To train a model to be robust enough to generalize to the real domain, we rely on a technique called [Domain Randomization](https://arxiv.org/pdf/1703.06907.pdf). Instead of training a model in a single fixed environment, we _randomize_ aspects of the environment during training in order to introduce sufficient variation into the generated data. This forces the machine learning model to handle many small visual variations, making it more robust.
To train a model to be robust enough to generalize to the real domain, we rely on a technique called [Domain Randomization](https://arxiv.org/pdf/1703.06907.pdf). Instead of training a model in a single, fixed environment, we _randomize_ aspects of the environment during training in order to introduce sufficient variation into the generated data. This forces the machine learning model to handle many small visual variations, making it more robust.

In this tutorial we will randomize the position and the orientation of the cube on the table but also the color, intensity and position of the light. However, the Randomizers in the Perception package can be extended to many other aspects of the environment.
In this tutorial, we will randomize the position and the orientation of the cube on the table, and also the color, intensity, and position of the light. Note that the Randomizers in the Perception package can be extended to many other aspects of the environment as well.


#### The Scenario
To start randomizing your simulation, you will first need to add a **Scenario** to your scene. Scenarios control the execution flow of your simulation by coordinating all Randomizer components added to them. If you want to know more about it, you can go see [this tutorial](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation~/Tutorial/Phase1.md#step-5-set-up-background-randomizers). There are several pre-built randomizers provided by the Perception package, but they don't fit our specific problem. Fortunately, the Perception package also allows one to write [custom randomizers](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation~/Tutorial/Phase2.md), which we will do here.
To start randomizing your simulation, you will first need to add a **Scenario** to your scene. Scenarios control the execution flow of your simulation by coordinating all Randomizer components added to them. If you want to know more about it, you can go see [this tutorial](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation~/Tutorial/Phase1.md#step-5-set-up-background-randomizers). There are several pre-built Randomizers provided by the Perception package, but they don't fit our specific problem. Fortunately, the Perception package also allows one to write [custom randomizers](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation~/Tutorial/Phase2.md), which we will do here.


1. In the _**Hierarchy**_, select the **+** and `Create Empty`. Rename this GameObject `Simulation Scenario`.
@@ -108,7 +109,7 @@ Each Scenario executes a number of Iterations, and each Iteration carries on for


#### Writing our Custom Object Rotation Randomizer
Each new randomizer requires two C# scripts: a **Randomizer** and **RandomizerTag**. The **Randomizer** will go on the scenario to orchestrate the randomization. The corresponding **RandomizerTag** is added to any GameObject(s) we want to _apply_ the randomization to.
Each new Randomizer requires two C# scripts: a **Randomizer** and **RandomizerTag**. The **Randomizer** will go on the Scenario to orchestrate the randomization. The corresponding **RandomizerTag** is added to any GameObject(s) we want to _apply_ the randomization to.

First, we will write a randomizer to randomly rotate the cube around its y-axis each iteration.
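The diff elides the actual scripts, but a Randomizer/RandomizerTag pair of this kind might look like the following sketch, assuming the Perception package's `Randomizer` API (exact namespaces and attribute names can vary between package versions):

```csharp
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
using UnityEngine.Perception.Randomization.Randomizers;

[AddRandomizerMenu("Perception/Y Rotation Randomizer")]
public class YRotationRandomizer : Randomizer
{
    // Seeded random sampler; its range and distribution are set in the Inspector.
    public FloatParameter random;

    // Called by the Scenario at the start of every iteration.
    protected override void OnIterationStart()
    {
        OnCustomIteration();
    }

    // Kept as a separate public method so other scripts can trigger
    // the same y-rotation randomization on demand.
    public void OnCustomIteration()
    {
        // Find every GameObject carrying our tag and rotate it.
        foreach (var tag in tagManager.Query<YRotationRandomizerTag>())
        {
            float yRotation = random.Sample() * 360f; // [0, 1] -> degrees
            tag.SetYRotation(yRotation);
        }
    }
}

// Attached to the cube (or any object) that should receive the rotation.
public class YRotationRandomizerTag : RandomizerTag
{
    public void SetYRotation(float yDegrees)
    {
        var angles = transform.eulerAngles;
        transform.eulerAngles = new Vector3(angles.x, yDegrees, angles.z);
    }
}
```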

@@ -167,7 +168,7 @@ Let's go through the code above and understand each part:
* The `YRotationRandomizer` class extends `Randomizer`, which is the base class for all Randomizers that can be added to a Scenario. This base class provides a plethora of useful functions and properties that can help catalyze the process of creating new Randomizers.
* The `FloatParameter` field contains a seeded random number generator. We can set the range, and the distribution of this value in the editor UI.
* The `OnIterationStart()` function is a life-cycle method on all `Randomizer`s. It is called by the scenario every iteration (e.g., once per frame).
* The `OnCustomiteration()` function is responsible of the actions to randomize the Y-axis of the cube. Although you could incorporate the content of the `OnCustomIteration()` function inside the `OnIterationStart()` function, We chose this architecture so that we can call the method reponsible for the Y rotation axis in other scripts.
* The `OnCustomIteration()` function is responsible for randomizing the cube's rotation about the y-axis. Although you could fold the content of `OnCustomIteration()` into `OnIterationStart()`, we chose this architecture so that the method responsible for the y-axis rotation can be called from other scripts.
* The `tagManager` is an object available to every `Randomizer` that can help us find game objects tagged with a given `RandomizerTag`. In our case, we query the `tagManager` to gather references to all the objects with a `YRotationRandomizerTag` on them.
* We then loop through these `tags` to rotate each object having one:
* `random.Sample()` gives us a random float between 0 and 1, which we multiply by 360 to convert to degrees.
@@ -217,7 +218,7 @@ If you return to your list of Randomizers in the Inspector view of SimulationSce
</p>


10. Run the simulation again and inspect how the cube now switches between different orientations. You can pause the simulation and then use the step button (to the right of the pause button) to move the simulation one frame forward and clearly see the variation of the cube's y-rotation. You should see something similar to the following.
10. Run the simulation and inspect how the cube now switches between different orientations. You can pause the simulation and then use the step button (to the right of the pause button) to move the simulation one frame forward and clearly see the variation of the cube's y-rotation. You should see something similar to the following.

<p align="center">
<img src="Gifs/2_y_rotation_randomizer.gif" height=411 width=800/>
@@ -282,6 +283,6 @@ If you press play, you should see the color, direction, and intensity of the lig
<img src="Gifs/2_light_randomizer.gif" height=600/>
</p>
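The light randomizer elided from this diff follows the same Randomizer/RandomizerTag pattern as the rotation randomizer. A sketch of the idea, with illustrative parameter names that are not necessarily those used in the project:

```csharp
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
using UnityEngine.Perception.Randomization.Randomizers;

[AddRandomizerMenu("Perception/Light Randomizer")]
public class LightRandomizer : Randomizer
{
    // Ranges and distributions for these are configured in the Inspector.
    public FloatParameter lightIntensity;
    public ColorRgbParameter lightColor;

    protected override void OnIterationStart()
    {
        // Re-sample intensity and color for every tagged light each iteration.
        foreach (var tag in tagManager.Query<LightRandomizerTag>())
        {
            var light = tag.GetComponent<Light>();
            light.intensity = lightIntensity.Sample();
            light.color = lightColor.Sample();
        }
    }
}

// Attached to the Directional Light whose properties should vary.
[RequireComponent(typeof(Light))]
public class LightRandomizerTag : RandomizerTag {}
```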

### Proceed to [Phase 3](3_data_collection_model_training.md).
### Proceed to [Part 3](3_data_collection_model_training.md).

### Go back to [Phase 1](1_set_up_the_scene.md)
### Go back to [Part 1](1_set_up_the_scene.md)