From fc821c1fcfa5615f7902b922ee02a031f1ffae75 Mon Sep 17 00:00:00 2001 From: vincentpierre Date: Fri, 12 Oct 2018 10:48:46 -0700 Subject: [PATCH 1/6] Fix Typo #1323 --- docs/Training-Curriculum-Learning.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/Training-Curriculum-Learning.md b/docs/Training-Curriculum-Learning.md index 57a174140b..1fece28992 100644 --- a/docs/Training-Curriculum-Learning.md +++ b/docs/Training-Curriculum-Learning.md @@ -104,7 +104,7 @@ We will save this file into our metacurriculum folder with the name of its corresponding Brain. For example, in the Wall Jump environment, there are two Brains---BigWallBrain and SmallWallBrain. If we want to define a curriculum for the BigWallBrain, we will save `BigWallBrain.json` into -`curricula/wall-jump/`. +`config/curricula/wall-jump/`. ### Training with a Curriculum @@ -114,7 +114,7 @@ folder and PPO will train using Curriculum Learning. For example, to train agents in the Wall Jump environment with curriculum learning, we can run ```sh -mlagents-learn config/trainer_config.yaml --curriculum=curricula/wall-jump/ --run-id=wall-jump-curriculum --train +mlagents-learn config/trainer_config.yaml --curriculum=config/curricula/wall-jump/ --run-id=wall-jump-curriculum --train ``` We can then keep track of the current lessons and progresses via TensorBoard. From 0c3721f037277dd2de1bcc199edff291b43c78bc Mon Sep 17 00:00:00 2001 From: vincentpierre Date: Fri, 12 Oct 2018 12:04:27 -0700 Subject: [PATCH 2/6] First update to the docs --- docs/Background-TensorFlow.md | 2 +- docs/Basic-Guide.md | 51 ++++++++++--------- docs/Getting-Started-with-Balance-Ball.md | 2 +- ...ning-Environment-Design-Learning-Brains.md | 12 ++--- docs/Learning-Environment-Executable.md | 38 +++++++------- 5 files changed, 53 insertions(+), 52 deletions(-) diff --git a/docs/Background-TensorFlow.md b/docs/Background-TensorFlow.md index ebaaabbd54..920f45b082 100644 --- a/docs/Background-TensorFlow.md +++ b/docs/Background-TensorFlow.md @@ -16,7 +16,7 @@ to TensorFlow-related tools that we leverage within the ML-Agents toolkit. performing computations using data flow graphs, the underlying representation of deep learning models. It facilitates training and inference on CPUs and GPUs in a desktop, server, or mobile device. Within the ML-Agents toolkit, when you -train the behavior of an agent, the output is a TensorFlow model (.bytes) file +train the behavior of an agent, the output is a TensorFlow model (.tf) file that you can then embed within a Learning Brain. Unless you implement a new algorithm, the use of TensorFlow is mostly abstracted away and behind the scenes. diff --git a/docs/Basic-Guide.md b/docs/Basic-Guide.md index e043141569..2a898db9ac 100644 --- a/docs/Basic-Guide.md +++ b/docs/Basic-Guide.md @@ -49,15 +49,15 @@ TensorFlow files in the Project window under **Assets** > **ML-Agents** > and open the `3DBall` scene file. 2. In the **Project** window, go to `Assets/ML-Agents/Examples/3DBall/Prefabs` folder and select the `Game/Platform` prefab. -3. In the `Ball 3D Agent` Component: Drag the **Ball3DBrain** located into +3. In the `Ball 3D Agent` Component: Drag the **3DBallLearning** located into `Assets/ML-Agents/Examples/3DBall/Brains` into the `Brain` property of the `Ball 3D Agent`. -4. Make sure that all of the Agents in the Scene now have **Ball3DBrain** as `Brain`. +4. Make sure that all of the Agents in the Scene now have **3DBallLearning** as `Brain`. 
__Note__ : You can modify multiple game objects in a scene by selecting them all at once using the search bar in the Scene Hierarchy. 5. In the **Project** window, locate the `Assets/ML-Agents/Examples/3DBall/TFModels` folder. 6. Drag the `3DBall` model file from the `Assets/ML-Agents/Examples/3DBall/TFModels` - folder to the **Model** field of the **Ball3DBrain**. + folder to the **Model** field of the **3DBallLearning**. 7. Click the **Play** button and you will see the platforms balance the balls using the pretrained model. @@ -83,10 +83,11 @@ Since we are going to build this environment to conduct training, we need to add the Brain to the training session. This allows the Agents linked to that Brain to communicate with the external training process when making their decisions. -1. Assign the **Ball3DBrain** to the agents you would like to train. +1. Assign the **3DBallLearning** to the agents you would like to train and the **3DBallPlayer** Brain to the agents you want to control manually. __Note:__ You can only perform training with an `Learning Brain`. -2. Select the **Ball3DAcademy** GameObject and add the **Ball3DBrain** - to the Broadcast Hub and toggle the `Control` checkbox. +2. Select the **Ball3DAcademy** GameObject and make sure the **3DBallLearning** Brain + is in the Broadcast Hub. In order to train, you need to toggle the + `Control` checkbox. ![Set Brain to External](images/mlagents-SetBrainToTrain.png) @@ -168,17 +169,17 @@ INFO:mlagents.envs: 'Ball3DAcademy' started successfully! Unity Academy name: Ball3DAcademy Number of Brains: 1 - Number of External Brains : 1 + Number of Training Brains : 1 Reset Parameters : -Unity brain name: Ball3DBrain +Unity brain name: 3DBallLearning Number of Visual Observations (per agent): 0 Vector Observation space size (per agent): 8 Number of stacked Vector Observation: 1 Vector Action space type: continuous Vector Action space size (per agent): [2] Vector Action descriptions: , -INFO:mlagents.envs:Hyperparameters for the PPO Trainer of brain Ball3DBrain: +INFO:mlagents.envs:Hyperparameters for the PPO Trainer of brain 3DBallLearning: batch_size: 64 beta: 0.001 buffer_size: 12000 @@ -200,24 +201,24 @@ INFO:mlagents.envs:Hyperparameters for the PPO Trainer of brain Ball3DBrain: use_curiosity: False curiosity_strength: 0.01 curiosity_enc_size: 128 - model_path: ./models/first-run-0/Ball3DBrain -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 1000. Mean Reward: 1.242. Std of Reward: 0.746. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 2000. Mean Reward: 1.319. Std of Reward: 0.693. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 3000. Mean Reward: 1.804. Std of Reward: 1.056. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 4000. Mean Reward: 2.151. Std of Reward: 1.432. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 5000. Mean Reward: 3.175. Std of Reward: 2.250. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 6000. Mean Reward: 4.898. Std of Reward: 4.019. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 7000. Mean Reward: 6.716. Std of Reward: 5.125. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 8000. Mean Reward: 12.124. Std of Reward: 11.929. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 9000. Mean Reward: 18.151. Std of Reward: 16.871. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 10000. Mean Reward: 27.284. Std of Reward: 28.667. Training. 
+ model_path: ./models/first-run-0/3DBallLearning +INFO:mlagents.trainers: first-run-0: 3DBallLearning: Step: 1000. Mean Reward: 1.242. Std of Reward: 0.746. Training. +INFO:mlagents.trainers: first-run-0: 3DBallLearning: Step: 2000. Mean Reward: 1.319. Std of Reward: 0.693. Training. +INFO:mlagents.trainers: first-run-0: 3DBallLearning: Step: 3000. Mean Reward: 1.804. Std of Reward: 1.056. Training. +INFO:mlagents.trainers: first-run-0: 3DBallLearning: Step: 4000. Mean Reward: 2.151. Std of Reward: 1.432. Training. +INFO:mlagents.trainers: first-run-0: 3DBallLearning: Step: 5000. Mean Reward: 3.175. Std of Reward: 2.250. Training. +INFO:mlagents.trainers: first-run-0: 3DBallLearning: Step: 6000. Mean Reward: 4.898. Std of Reward: 4.019. Training. +INFO:mlagents.trainers: first-run-0: 3DBallLearning: Step: 7000. Mean Reward: 6.716. Std of Reward: 5.125. Training. +INFO:mlagents.trainers: first-run-0: 3DBallLearning: Step: 8000. Mean Reward: 12.124. Std of Reward: 11.929. Training. +INFO:mlagents.trainers: first-run-0: 3DBallLearning: Step: 9000. Mean Reward: 18.151. Std of Reward: 16.871. Training. +INFO:mlagents.trainers: first-run-0: 3DBallLearning: Step: 10000. Mean Reward: 27.284. Std of Reward: 28.667. Training. ``` ### After training You can press Ctrl+C to stop the training, and your trained model will be at -`models//editor__.bytes` where -`` is the name of the Academy GameObject in the current scene. +`models//.tf` where +`` is the name of the Brain corresponding to the model. (**Note:** There is a known bug on Windows that causes the saving of the model to fail when you early terminate the training, it's recommended to wait until Step has reached the max_steps parameter you set in trainer_config.yaml.) This file @@ -229,9 +230,9 @@ the steps described 1. Move your model file into `UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/`. 2. Open the Unity Editor, and select the **3DBall** scene as described above. -3. Select the **Ball3DBrain** Learning Brain from the Scene hierarchy. -5. Drag the `_.bytes` file from the Project window of - the Editor to the **Graph Model** placeholder in the **Ball3DBrain** +3. Select the **3DBallLearning** Learning Brain from the Scene hierarchy. +5. Drag the `.tf` file from the Project window of + the Editor to the **Model** placeholder in the **3DBallLearning** inspector window. 6. Press the :arrow_forward: button at the top of the Editor. diff --git a/docs/Getting-Started-with-Balance-Ball.md b/docs/Getting-Started-with-Balance-Ball.md index d5ba590a43..2fb1075601 100644 --- a/docs/Getting-Started-with-Balance-Ball.md +++ b/docs/Getting-Started-with-Balance-Ball.md @@ -89,7 +89,7 @@ environment around the Agents. ### Brain Brains are assets that exist in your project folder. The Ball3DAgents are connected -to a brain, for example : the **Ball3DBrain**. +to a brain, for example : the **3DBallLearning**. A Brain doesn't store any information about an Agent, it just routes the Agent's collected observations to the decision making process and returns the chosen action to the Agent. Thus, all Agents can share the same diff --git a/docs/Learning-Environment-Design-Learning-Brains.md b/docs/Learning-Environment-Design-Learning-Brains.md index 521015c6c2..836efd1077 100644 --- a/docs/Learning-Environment-Design-Learning-Brains.md +++ b/docs/Learning-Environment-Design-Learning-Brains.md @@ -45,8 +45,8 @@ To use a graph model: 1. Select the **Learning Brain** asset in the **Project** window of the Unity Editor. 
**Note:** In order to see the **Learning** Brain Type option, you must [enable TensorFlowSharp](Using-TensorFlow-Sharp-in-Unity.md). -3. Import the `environment_run-id.bytes` file produced by the PPO training - program. (Where `environment_run-id` is the name of the model file, which is +3. Import the `model_name` file produced by the PPO training + program. (Where `model_name` is the name of the model file, which is constructed from the name of your Unity environment executable and the run-id value you assigned when running the training process.) @@ -54,7 +54,7 @@ To use a graph model: [import assets into Unity](https://docs.unity3d.com/Manual/ImportingAssets.html) in various ways. The easiest way is to simply drag the file into the **Project** window and drop it into an appropriate folder. -4. Once the `environment.bytes` file is imported, drag it from the **Project** +4. Once the `model_name .tf` file is imported, drag it from the **Project** window to the **Model** field of the Brain component. If you are using a model produced by the ML-Agents `mlagents-learn` command, use @@ -64,10 +64,10 @@ the default values for the other Learning Brain parameters. The default values of the TensorFlow graph parameters work with the model produced by the PPO and BC training code in the ML-Agents SDK. To use a default -ML-Agents model, the only parameter that you need to set is the `Graph Model`, -which must be set to the .bytes file containing the trained model itself. +ML-Agents model, the only parameter that you need to set is the `Model`, +which must be set to the `.tf` file containing the trained model itself. -* `Model` : This must be the `bytes` file corresponding to the pre-trained +* `Model` : This must be the `.tf` file corresponding to the pre-trained TensorFlow graph. (You must first drag this file into your Project window and then from the Resources folder into the inspector) diff --git a/docs/Learning-Environment-Executable.md b/docs/Learning-Environment-Executable.md index ce22dfbccc..573b92c9ff 100644 --- a/docs/Learning-Environment-Executable.md +++ b/docs/Learning-Environment-Executable.md @@ -152,17 +152,17 @@ INFO:mlagents.envs: 'Ball3DAcademy' started successfully! Unity Academy name: Ball3DAcademy Number of Brains: 1 - Number of External Brains : 1 + Number of Training Brains : 1 Reset Parameters : -Unity brain name: Ball3DBrain +Unity brain name: Ball3DLearning Number of Visual Observations (per agent): 0 Vector Observation space size (per agent): 8 Number of stacked Vector Observation: 1 Vector Action space type: continuous Vector Action space size (per agent): [2] Vector Action descriptions: , -INFO:mlagents.envs:Hyperparameters for the PPO Trainer of brain Ball3DBrain: +INFO:mlagents.envs:Hyperparameters for the PPO Trainer of brain Ball3DLearning: batch_size: 64 beta: 0.001 buffer_size: 12000 @@ -184,21 +184,21 @@ INFO:mlagents.envs:Hyperparameters for the PPO Trainer of brain Ball3DBrain: use_curiosity: False curiosity_strength: 0.01 curiosity_enc_size: 128 - model_path: ./models/first-run-0/Ball3DBrain -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 1000. Mean Reward: 1.242. Std of Reward: 0.746. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 2000. Mean Reward: 1.319. Std of Reward: 0.693. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 3000. Mean Reward: 1.804. Std of Reward: 1.056. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 4000. Mean Reward: 2.151. Std of Reward: 1.432. Training. 
-INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 5000. Mean Reward: 3.175. Std of Reward: 2.250. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 6000. Mean Reward: 4.898. Std of Reward: 4.019. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 7000. Mean Reward: 6.716. Std of Reward: 5.125. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 8000. Mean Reward: 12.124. Std of Reward: 11.929. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 9000. Mean Reward: 18.151. Std of Reward: 16.871. Training. -INFO:mlagents.trainers: first-run-0: Ball3DBrain: Step: 10000. Mean Reward: 27.284. Std of Reward: 28.667. Training. + model_path: ./models/first-run-0/Ball3DLearning +INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 1000. Mean Reward: 1.242. Std of Reward: 0.746. Training. +INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 2000. Mean Reward: 1.319. Std of Reward: 0.693. Training. +INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 3000. Mean Reward: 1.804. Std of Reward: 1.056. Training. +INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 4000. Mean Reward: 2.151. Std of Reward: 1.432. Training. +INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 5000. Mean Reward: 3.175. Std of Reward: 2.250. Training. +INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 6000. Mean Reward: 4.898. Std of Reward: 4.019. Training. +INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 7000. Mean Reward: 6.716. Std of Reward: 5.125. Training. +INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 8000. Mean Reward: 12.124. Std of Reward: 11.929. Training. +INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 9000. Mean Reward: 18.151. Std of Reward: 16.871. Training. +INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 10000. Mean Reward: 27.284. Std of Reward: 28.667. Training. ``` You can press Ctrl+C to stop the training, and your trained model will be at -`models//_.bytes`, which corresponds +`models//.tf`, which corresponds to your model's latest checkpoint. (**Note:** There is a known bug on Windows that causes the saving of the model to fail when you early terminate the training, it's recommended to wait until Step has reached the max_steps @@ -208,9 +208,9 @@ into your Learning Brain by following the steps below: 1. Move your model file into `UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/`. 2. Open the Unity Editor, and select the **3DBall** scene as described above. -3. Select the **Ball3DBrain** object from the Project window. -5. Drag the `_.bytes` file from the Project window of - the Editor to the **Model** placeholder in the **Ball3DBrain** +3. Select the **Ball3DLearning** object from the Project window. +5. Drag the `.bytes` file from the Project window of + the Editor to the **Model** placeholder in the **Ball3DLearning** inspector window. -6. Remove the **Ball3DBrain** from the Academy's `Broadcast Hub` +6. Remove the **Ball3DLearning** from the Academy's `Broadcast Hub` 7. Press the Play button at the top of the editor. 
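
As a quick sanity check before importing, the frozen graph can also be inspected
from Python. The following is a minimal sketch, assuming TensorFlow 1.x is
installed; the model path is hypothetical and follows the
`models/<run-identifier>/<brain_name>.tf` pattern described above.

```python
# Minimal sketch: list the nodes of a frozen graph exported by mlagents-learn.
# Assumes TensorFlow 1.x; the path below is hypothetical.
import tensorflow as tf

MODEL_PATH = "./models/first-run-0/Ball3DLearning.tf"  # hypothetical run

# Parse the frozen GraphDef from disk.
graph_def = tf.GraphDef()
with tf.gfile.GFile(MODEL_PATH, "rb") as f:
    graph_def.ParseFromString(f.read())

# Print each node's name and op so you can confirm the expected output node
# (e.g. `action`) is present before dragging the file into the Brain.
for node in graph_def.node:
    print(node.name, node.op)
```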
From 9a66789637a96961b0244c6c47879aff1f6b8bc9 Mon Sep 17 00:00:00 2001 From: vincentpierre Date: Fri, 12 Oct 2018 14:37:53 -0700 Subject: [PATCH 3/6] Addressed comments --- docs/Getting-Started-with-Balance-Ball.md | 2 +- docs/Learning-Environment-Design-Learning-Brains.md | 2 +- docs/Learning-Environment-Executable.md | 2 +- docs/Training-Imitation-Learning.md | 4 ++-- docs/Training-ML-Agents.md | 2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/Getting-Started-with-Balance-Ball.md b/docs/Getting-Started-with-Balance-Ball.md index 2fb1075601..f7ae11065f 100644 --- a/docs/Getting-Started-with-Balance-Ball.md +++ b/docs/Getting-Started-with-Balance-Ball.md @@ -273,7 +273,7 @@ Once the training process completes, and the training process saves the model use it with Agents having a **Learning Brain**. __Note:__ Do not just close the Unity Window once the `Saved Model` message appears. Either wait for the training process to close the window or press Ctrl+C at the -command-line prompt. If you simply close the window manually, the .bytes file +command-line prompt. If you simply close the window manually, the `.tf` file containing the trained model is not exported into the ml-agents folder. ### Setting up TensorFlowSharp Support diff --git a/docs/Learning-Environment-Design-Learning-Brains.md b/docs/Learning-Environment-Design-Learning-Brains.md index 836efd1077..f73347f855 100644 --- a/docs/Learning-Environment-Design-Learning-Brains.md +++ b/docs/Learning-Environment-Design-Learning-Brains.md @@ -54,7 +54,7 @@ To use a graph model: [import assets into Unity](https://docs.unity3d.com/Manual/ImportingAssets.html) in various ways. The easiest way is to simply drag the file into the **Project** window and drop it into an appropriate folder. -4. Once the `model_name .tf` file is imported, drag it from the **Project** +4. Once the `model_name.tf` file is imported, drag it from the **Project** window to the **Model** field of the Brain component. If you are using a model produced by the ML-Agents `mlagents-learn` command, use diff --git a/docs/Learning-Environment-Executable.md b/docs/Learning-Environment-Executable.md index 573b92c9ff..22210f2ea5 100644 --- a/docs/Learning-Environment-Executable.md +++ b/docs/Learning-Environment-Executable.md @@ -209,7 +209,7 @@ into your Learning Brain by following the steps below: `UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/`. 2. Open the Unity Editor, and select the **3DBall** scene as described above. 3. Select the **Ball3DLearning** object from the Project window. -5. Drag the `.bytes` file from the Project window of +5. Drag the `.tf` file from the Project window of the Editor to the **Model** placeholder in the **Ball3DLearning** inspector window. 6. Remove the **Ball3DLearning** from the Academy's `Broadcast Hub` diff --git a/docs/Training-Imitation-Learning.md b/docs/Training-Imitation-Learning.md index 70f1e775ba..2fb7125ec6 100644 --- a/docs/Training-Imitation-Learning.md +++ b/docs/Training-Imitation-Learning.md @@ -85,7 +85,7 @@ It is also possible to provide demonstrations in realtime during training, witho similarly to the demonstrations. 9. Once the Student Agents are exhibiting the desired behavior, end the training process with `CTL+C` from the command line. -10. Move the resulting `*.bytes` file into the `TFModels` subdirectory of the +10. Move the resulting `*.tf` file into the `TFModels` subdirectory of the Assets folder (or a subdirectory within Assets of your choosing) , and use with `Learning` Brain. 
@@ -110,4 +110,4 @@ This utility enables you to use keyboard shortcuts to do the following:
 2. Reset the training buffer. This enables you to instruct the agents to forget
    their buffer of recent experiences. This is useful if you'd like to get them
    to quickly learn a new behavior. The default command to reset the buffer is
-   to press `C` on the keyboard.
\ No newline at end of file
+   to press `C` on the keyboard.
diff --git a/docs/Training-ML-Agents.md b/docs/Training-ML-Agents.md
index 6b11dda986..43891d18df 100644
--- a/docs/Training-ML-Agents.md
+++ b/docs/Training-ML-Agents.md
@@ -85,7 +85,7 @@ And then opening the URL: [localhost:6006](http://localhost:6006).

 When training is finished, you can find the saved model in the `models` folder
 under the assigned run-id — in the cats example, the path to the model would be
-`models/cob_1/CatsOnBicycles_cob_1.bytes`.
+`models/cob_1/CatsOnBicycles_cob_1.tf`.

 While this example used the default training hyperparameters, you can edit the
 [training_config.yaml file](#training-config-file) with a text editor to set

From d6ad25d132092124fc210182cb6f18450b5d6780 Mon Sep 17 00:00:00 2001
From: vincentpierre
Date: Thu, 11 Oct 2018 11:50:33 -0700
Subject: [PATCH 4/6] remove references to TF#

---
 docs/Background-TensorFlow.md                 | 17 +-
 docs/Basic-Guide.md                           | 22 +--
 docs/FAQ.md                                   |  2 +-
 docs/Getting-Started-with-Balance-Ball.md     | 12 +-
 ...ning-Environment-Design-Learning-Brains.md |  8 +-
 docs/Limitations.md                           |  3 +-
 docs/ML-Agents-Overview.md                    |  5 +-
 docs/Readme.md                                |  5 +-
 docs/Using-TensorFlow-Sharp-in-Unity.md       | 181 ------------------
 docs/images/imported-tensorflowsharp.png      | Bin 27572 -> 0 bytes
 10 files changed, 22 insertions(+), 233 deletions(-)
 delete mode 100644 docs/Using-TensorFlow-Sharp-in-Unity.md
 delete mode 100644 docs/images/imported-tensorflowsharp.png

diff --git a/docs/Background-TensorFlow.md b/docs/Background-TensorFlow.md
index 920f45b082..7bab1b8179 100644
--- a/docs/Background-TensorFlow.md
+++ b/docs/Background-TensorFlow.md
@@ -36,18 +36,9 @@ documentation, but, in the meantime, if you are unfamiliar with TensorBoard we
 recommend this
 [tutorial](https://github.com/dandelionmane/tf-dev-summit-tensorboard-tutorial).

-## TensorFlowSharp
+## TensorFlow Model Inference

 One of the drawbacks of TensorFlow is that it does not provide a native C# API.
-This means that the Learning Brain is not natively supported since Unity scripts
-are written in C#. Consequently, to enable the Learning Brain, we leverage a
-third-party library
-[TensorFlowSharp](https://github.com/migueldeicaza/TensorFlowSharp) which
-provides .NET bindings to TensorFlow. Thus, when a Unity environment that
-contains a Learning Brain is built, inference is performed via TensorFlowSharp.
-We provide an additional in-depth overview of how to leverage
-[TensorFlowSharp within Unity](Using-TensorFlow-Sharp-in-Unity.md)
-which will become more
-relevant once you install and start training behaviors within the ML-Agents
-toolkit. Given the reliance on TensorFlowSharp, the Learning Brain is currently
-marked as experimental.
+We are using the [Unity Machine Learning Inference SDK](TensorflowSharp) to
+run the models inside of Unity. In order to use it, you will need to have an
+appropriate backend downloaded. You can find more information [here](TensorflowSharp).
diff --git a/docs/Basic-Guide.md b/docs/Basic-Guide.md
index e043141569..d9a7a5b4cb 100644
--- a/docs/Basic-Guide.md
+++ b/docs/Basic-Guide.md
@@ -11,10 +11,9 @@ the basic concepts of Unity.
 ## Setting up the ML-Agents Toolkit within Unity

 In order to use the ML-Agents toolkit within Unity, you need to change some
-Unity settings first. Also [TensorFlowSharp
-plugin](https://s3.amazonaws.com/unity-ml-agents/0.5/TFSharpPlugin.unitypackage)
-is needed for you to use pre-trained model within Unity, which is based on the
-[TensorFlowSharp repo](https://github.com/migueldeicaza/TensorFlowSharp).
+Unity settings first. You will also need to have appropriate inference backends
+installed in order to run your models inside of Unity. See [here](TensorflowSharp)
+for more information.

 1. Launch Unity
 2. On the Projects dialog, choose the **Open** option at the top of the window.
@@ -26,23 +25,8 @@ is needed for you to use pre-trained model within Unity, which is based on the
    1. Option the **Other Settings** section.
    2. Select **Scripting Runtime Version** to **Experimental (.NET 4.6
       Equivalent or .NET 4.x Equivalent)**
-   3. In **Scripting Defined Symbols**, add the flag `ENABLE_TENSORFLOW`. After
-      typing in the flag name, press Enter.
 6. Go to **File** > **Save Project**

-![Project Settings](images/project-settings.png)
-
-[Download](https://s3.amazonaws.com/unity-ml-agents/0.5/TFSharpPlugin.unitypackage)
-the TensorFlowSharp plugin. Then import it into Unity by double clicking the
-downloaded file. You can check if it was successfully imported by checking the
-TensorFlow files in the Project window under **Assets** > **ML-Agents** >
-**Plugins** > **Computer**.
-
-**Note**: If you don't see anything under **Assets**, drag the
-`UnitySDK/Assets/ML-Agents` folder under **Assets** within Project window.
-
-![Imported TensorFlowsharp](images/imported-tensorflowsharp.png)
-
 ## Running a Pre-trained Model

 1. In the **Project** window, go to `Assets/ML-Agents/Examples/3DBall/Scenes` folder
diff --git a/docs/FAQ.md b/docs/FAQ.md
index dc2b9cab1f..db51ef5291 100644
--- a/docs/FAQ.md
+++ b/docs/FAQ.md
@@ -15,7 +15,7 @@ Unity](Installation.md#setting-up-ml-agent-within-unity) for solution.

 ## TensorFlowSharp flag not turned on

-If you have already imported the TensorFlowSharp plugin, but haven't set
+Before version 0.6, we used specific scripting define symbols when using TensorFlowSharp. If you have already imported the TensorFlowSharp plugin, but haven't set
 ENABLE_TENSORFLOW flag for your scripting define symbols, you will see the
 following error message:
diff --git a/docs/Getting-Started-with-Balance-Ball.md b/docs/Getting-Started-with-Balance-Ball.md
index f7ae11065f..ff95907e54 100644
--- a/docs/Getting-Started-with-Balance-Ball.md
+++ b/docs/Getting-Started-with-Balance-Ball.md
@@ -276,15 +276,11 @@ Either wait for the training process to close the window or press Ctrl+C at the
 command-line prompt. If you simply close the window manually, the `.tf` file
 containing the trained model is not exported into the ml-agents folder.

-### Setting up TensorFlowSharp Support
+### Setting up Inference Support

-Because TensorFlowSharp support is still experimental, it is disabled by
-default. In order to enable it, you must follow these steps. Please note that
-the `Learning` Brain mode will only be available once completing these steps.
-
-To set up the TensorFlowSharp Support, follow [Setting up ML-Agents Toolkit
-within Unity](Basic-Guide.md#setting-up-ml-agents-within-unity) section. of the
-Basic Guide page.
+In order to run neural network models inside of Unity, you will need to set up the
+Inference Engine with an appropriate backend. See [here](TensorflowSharp) for more
+information.
 ### Embedding the trained model into Unity

diff --git a/docs/Learning-Environment-Design-Learning-Brains.md b/docs/Learning-Environment-Design-Learning-Brains.md
index f73347f855..8d2b5fe45b 100644
--- a/docs/Learning-Environment-Design-Learning-Brains.md
+++ b/docs/Learning-Environment-Design-Learning-Brains.md
@@ -43,9 +43,9 @@ model. To use a graph model:

 1. Select the **Learning Brain** asset in the **Project** window of the Unity
    Editor.
-   **Note:** In order to see the **Learning** Brain Type option, you must
-   [enable TensorFlowSharp](Using-TensorFlow-Sharp-in-Unity.md).
-3. Import the `model_name` file produced by the PPO training
+   **Note:** In order to use the **Learning** Brain, you must have an appropriate
+   backend for the Inference Engine. See [here](TensorFlowSharp).
+2. Import the `model_name` file produced by the PPO training
    program. (Where `model_name` is the name of the model file, which is
    constructed from the name of your Unity environment executable and the run-id
    value you assigned when running the training process.)
@@ -54,7 +54,7 @@ To use a graph model:
    [import assets into Unity](https://docs.unity3d.com/Manual/ImportingAssets.html)
    in various ways. The easiest way is to simply drag the file into the
    **Project** window and drop it into an appropriate folder.
-4. Once the `model_name.tf` file is imported, drag it from the **Project**
+3. Once the `model_name.tf` file is imported, drag it from the **Project**
   window to the **Model** field of the Brain component.

 If you are using a model produced by the ML-Agents `mlagents-learn` command, use
diff --git a/docs/Limitations.md b/docs/Limitations.md
index e16a0335e5..9a756486b0 100644
--- a/docs/Limitations.md
+++ b/docs/Limitations.md
@@ -24,5 +24,4 @@ As of version 0.3, we no longer support Python 2.

 ### TensorFlow support

-Currently the Ml-Agents toolkit uses TensorFlow 1.7.1 due to the version of the
-TensorFlowSharp plugin we are using.
+Currently the ML-Agents toolkit uses TensorFlow 1.7.1 only.
diff --git a/docs/ML-Agents-Overview.md b/docs/ML-Agents-Overview.md
index 1daf297c1f..64faea827e 100644
--- a/docs/ML-Agents-Overview.md
+++ b/docs/ML-Agents-Overview.md
@@ -244,10 +244,7 @@ time.

 To summarize: our built-in implementations are based on TensorFlow, thus,
 during training the Python API uses the observations it receives to learn a
 TensorFlow model. This model is then embedded within the Learning Brain during
 inference to
-generate the optimal actions for all Agents linked to that Brain. **Note that
-our Learning Brain is currently experimental as it is limited to TensorFlow
-models and leverages the third-party
-[TensorFlowSharp](https://github.com/migueldeicaza/TensorFlowSharp) library.**
+generate the optimal actions for all Agents linked to that Brain.
The [Getting Started with the 3D Balance Ball Example](Getting-Started-with-Balance-Ball.md) diff --git a/docs/Readme.md b/docs/Readme.md index b27288f25e..ad44fa4bfb 100644 --- a/docs/Readme.md +++ b/docs/Readme.md @@ -29,7 +29,6 @@ * [Learning Environment Best Practices](Learning-Environment-Best-Practices.md) * [Using the Monitor](Feature-Monitor.md) * [Using an Executable Environment](Learning-Environment-Executable.md) -* [TensorFlowSharp in Unity (Experimental)](Using-TensorFlow-Sharp-in-Unity.md) ## Training @@ -42,6 +41,10 @@ * [Training on the Cloud with Microsoft Azure](Training-on-Microsoft-Azure.md) * [Using TensorBoard to Observe Training](Using-Tensorboard.md) +## Inference +* Link to [*Unity Machine Learning Inference SDK*](TensorflowSharp) +* [Installing Backends](TensorflowSharp) + ## Help * [Migrating from earlier versions of ML-Agents](Migrating.md) diff --git a/docs/Using-TensorFlow-Sharp-in-Unity.md b/docs/Using-TensorFlow-Sharp-in-Unity.md deleted file mode 100644 index 065066b32e..0000000000 --- a/docs/Using-TensorFlow-Sharp-in-Unity.md +++ /dev/null @@ -1,181 +0,0 @@ -# Using TensorFlowSharp in Unity (Experimental) - -The ML-Agents toolkit allows you to use pre-trained -[TensorFlow graphs](https://www.tensorflow.org/programmers_guide/graphs) -inside your Unity -games. This support is possible thanks to the -[TensorFlowSharp project](https://github.com/migueldeicaza/TensorFlowSharp). -The primary purpose for this support is to use the TensorFlow models produced by -the ML-Agents toolkit's own training programs, but a side benefit is that you -can use any TensorFlow model. - -_Notice: This feature is still experimental. While it is possible to embed -trained models into Unity games, Unity Technologies does not officially support -this use-case for production games at this time. As such, no guarantees are -provided regarding the quality of experience. If you encounter issues regarding -battery life, or general performance (especially on mobile), please let us -know._ - -## Supported devices - -* Linux 64 bits -* Mac OS X 64 bits -* Windows 64 bits -* iOS (Requires additional steps) -* Android - -## Requirements - -* Unity 2017.4 or above -* Unity TensorFlow Plugin ([Download here](https://s3.amazonaws.com/unity-ml-agents/0.5/TFSharpPlugin.unitypackage)) - -## Using TensorFlowSharp with ML-Agents - -Go to `Edit` -> `Player Settings` and add `ENABLE_TENSORFLOW` to the `Scripting -Define Symbols` for each type of device you want to use (**`PC, Mac and Linux -Standalone`**, **`iOS`** or **`Android`**). - -Set the Brain you used for training to `Learning`. Drag `your_name_graph.bytes` -into Unity and then drag it into The `Model` field in the Brain. - -## Using your own trained graphs - -The TensorFlow data graphs produced by the ML-Agents training programs work -without any additional settings. - -In order to use a TensorFlow data graph in Unity, make sure the nodes of your -graph have appropriate names. 
You can assign names to nodes in TensorFlow : - -```python -variable= tf.identity(variable, name="variable_name") -``` - -We recommend using the following naming conventions: - -* Name the batch size input placeholder `batch_size` -* Name the input vector observation placeholder `state` -* Name the output node `action` -* Name the recurrent vector (memory) input placeholder `recurrent_in` (if any) -* Name the recurrent vector (memory) output node `recurrent_out` (if any) -* Name the observations placeholders input placeholders `visual_observation_i` - where `i` is the index of the observation (starting at 0) - -You can have additional placeholders for float or integers but they must be -placed in placeholders of dimension 1 and size 1. (Be sure to name them.) - -It is important that the inputs and outputs of the graph are exactly the ones -you receive and return when training your model with an External Brain. This -means you cannot have any operations such as reshaping outside of the graph. The -object you get by calling `step` or `reset` has fields `vector_observations`, -`visual_observations` and `memories` which must correspond to the placeholders -of your graph. Similarly, the arguments `action` and `memory` you pass to `step` -must correspond to the output nodes of your graph. - -While training your Agent using the Python API, you can save your graph at any -point of the training. Note that the argument `output_node_names` must be the -name of the tensor your graph outputs (separated by a coma if using multiple -outputs). In this case, it will be either `action` or `action,recurrent_out` if -you have recurrent outputs. - -```python -from tensorflow.python.tools import freeze_graph - -freeze_graph.freeze_graph(input_graph = model_path +'/raw_graph_def.pb', - input_binary = True, - input_checkpoint = last_checkpoint, - output_node_names = "action", - output_graph = model_path +'/your_name_graph.bytes' , - clear_devices = True, initializer_nodes = "",input_saver = "", - restore_op_name = "save/restore_all", filename_tensor_name = "save/Const:0") -``` - -Your model will be saved with the name `your_name_graph.bytes` and will contain -both the graph and associated weights. Note that you must save your graph as a -.bytes file so Unity can load it. - -See -[Learning Brain](Learning-Environment-Design-Learning-Brains.md#learning-brain) -for more information about using Learning Brains. - -If you followed these instructions well, the Agents in your environment that use -this Brain will use your fully trained network to make decisions. - -## iOS additional instructions for building - -* Before build your game against iOS platform, make sure you've set the - flag `ENABLE_TENSORFLOW` for it. -* Once you build the project for iOS in the editor, open the .xcodeproj file - within the project folder using Xcode. -* Set up your ios account following the - [iOS Account setup page](https://docs.unity3d.com/Manual/iphone-accountsetup.html). -* In **Build Settings** > **Linking** > **Other Linker Flags**: - * Double click on the flag list to expand the list - * Add `-force_load` - * Drag the library `libtensorflow-core.a` from the **Project Navigator** on - the left under `Libraries/ML-Agents/Plugins/iOS` into the flag list, after - `-force_load`. - -## Using TensorFlowSharp without ML-Agents - -Beyond controlling an in-game agent, you can also use TensorFlowSharp for more -general computation. 
The following instructions describe how to generally embed -TensorFlow models without using the ML-Agents framework. - -You must have a TensorFlow graph, such as `your_name_graph.bytes`, made using -TensorFlow's `freeze_graph.py`. The process to create such graph is explained in -the [Using your own trained graphs](#using-your-own-trained-graphs) section. - -### Inside of Unity - -To load and use a TensorFlow data graph in Unity: - -1. Put the file, `your_name_graph.bytes`, into Resources. - -2. At the top off your C# script, add the line: - - ```csharp - using TensorFlow; - ``` - -3. If you will be building for android, you must add this block at the start of - your code : - - ```csharp - #if UNITY_ANDROID && !UNITY_EDITOR - TensorFlowSharp.Android.NativeBinding.Init(); - #endif - ``` - -4. Load your graph as a text asset into a variable, such as `graphModel`: - - ```csharp - TextAsset graphModel = Resources.Load (your_name_graph) as TextAsset; - ``` - -5. You then must instantiate the graph in Unity by adding the code : - - ```csharp graph = new TFGraph (); - graph.Import (graphModel.bytes); - session = new TFSession (graph); - ``` - -6. Assign the input tensors for the graph. For example, the following code - assigns a one dimensional input tensor of size 2: - - ```csharp - var runner = session.GetRunner (); - runner.AddInput (graph ["input_placeholder_name"] [0], new float[]{ placeholder_value1, placeholder_value2 }); - ``` - - You must provide all required inputs to the graph. Supply one input per - TensorFlow placeholder. - -7. To calculate and access the output of your graph, run the following code. - - ```csharp - runner.Fetch (graph["output_placeholder_name"][0]); - float[,] recurrent_tensor = runner.Run () [0].GetValue () as float[,]; - ``` - -Note that this example assumes the output array is a two-dimensional tensor of -floats. Cast to a long array if your outputs are integers. 
diff --git a/docs/images/imported-tensorflowsharp.png b/docs/images/imported-tensorflowsharp.png
deleted file mode 100644
index 3e2d424da75b2751e707c87f030585919cedf776..0000000000000000000000000000000000000000
Binary files a/docs/images/imported-tensorflowsharp.png and /dev/null differ
z?iLDj!@}?am!r8z*pQ%nQWyDzb-nAE0@U(<+d;i}Xji0ShNB2uLC2IAjOW^O#I&)f0n*M>t%=IB%2|pHB=zxv zdL4S^{QKJf_}v%Dm|O!W`Z08D5N%Iu=6^@S6U~XoFNn{DpuwTzxz9c_nt_%M*K2?p zE>ca>%v84IZ(9S{?%vw0-S^RXsX__A(^4+`tXObJhy{G~zPA7U<44bHtuoPLvZh4K z>MIsd`$kP7mTgw}w<%IM1Q$wj)_WC2+vi71rjvh)vIInK5x4iE%H&x25{O+3Rga}0 ziXzd2E6U+_Y4>9sEuz48Jx)mfjbm|t2q?g@xN*l8(n4_vB+3xbZ4`QAhq{0F0Jwez zh8Kl<=qqy&K-|mZ1-NclU|GSNUi>#*fs3d@D+JKLS_mW8ua2iz{x7z;G zfN+ir*1^zdDueHi@_Ke8={0BgYuFvgwGJW}a~z2Ry%;Z>YJ4i-Tyt3%UH%s;FG2up zr~qAijP- zyr!R9bNUduGb4y-7|HS->J%`@&V%L4R}WW#N4%I{+V{`1xLEs% zGi9z_ElZrt{t<@mI)*6C*YQcK?*{zBCDK&jv1#t!v1fE$$@aqzxaq5suyTV^I6~)N zp!&(WG)unWq8?_feX-h#_VFhWCLNK%KxrL;9Rg%u*6A`spOl!dHuM}h<|W!sGy|GV z>+R`^*6qts1(ARds#n#%qoGXN4u(-}|9e?TUJm)d`-sCI^wiWa&0CqJi8}G4BPxkgS{9LD^X+mC0e|&YO_WSrl56J8zjGx;y@R0-*nCKLKpi$qNxOvI_Nb9 zBA|zm0(Rs0Sta1VEkOSXzf8?1^oMoYsqSBJVo6LR7JhqOQQ+7Hw1kGz?r5V*6M?VU zegf%F$4$ZGhe*t+Gr)p&B$;^A;JW|TJ8M7BC7rgFp!K-#W5+a&u&1*^3laBt~23HpjjpvN!Y0m zOXhbwZX*5H;j>?Y2E>C)Vw$8O94>)C7*GUsFdRCL|B5&y8+6|(%a0TLncp5&hWS-#MS1s_bnOR)bkBPF;d@6_k1 h?6Vm3|G9}Hc}&K$0Ok05w$MwS)Ng95R4LsJ`#)NX*Khy; From 2b4f34c54ce4a5d480798925e845ca92f6626949 Mon Sep 17 00:00:00 2001 From: vincentpierre Date: Fri, 12 Oct 2018 16:04:26 -0700 Subject: [PATCH 5/6] Replaced the references to TF# with new document. --- docs/Background-TensorFlow.md | 4 ++-- docs/Basic-Guide.md | 2 +- docs/Getting-Started-with-Balance-Ball.md | 2 +- docs/Learning-Environment-Design-Learning-Brains.md | 2 +- docs/Readme.md | 3 +-- 5 files changed, 6 insertions(+), 7 deletions(-) diff --git a/docs/Background-TensorFlow.md b/docs/Background-TensorFlow.md index 7bab1b8179..f38ecfe88f 100644 --- a/docs/Background-TensorFlow.md +++ b/docs/Background-TensorFlow.md @@ -39,6 +39,6 @@ recommend this ## Tensorflow Model Inference One of the drawbacks of TensorFlow is that it does not provide a native C# API. -We have are using the [Unity Machine Learning Inference SDK](TensorflowSharp) to +We have are using the [Unity Machine Learning Inference SDK](Inference-Engine.md) to run the models inside of Unity. In order to use it, you will need to have an -appropriate backend downloaded. You can find more information [here](TensorflowSharp) +appropriate backend downloaded. diff --git a/docs/Basic-Guide.md b/docs/Basic-Guide.md index d9a7a5b4cb..0bbe01c473 100644 --- a/docs/Basic-Guide.md +++ b/docs/Basic-Guide.md @@ -12,7 +12,7 @@ the basic concepts of Unity. In order to use the ML-Agents toolkit within Unity, you need to change some Unity settings first. Youy will also need to have appropriate inference backends -installed in order to run your models inside of Unity. See [here](TensorflowSharp) +installed in order to run your models inside of Unity. See [here](Inference-Engine.md) for more information. 1. Launch Unity diff --git a/docs/Getting-Started-with-Balance-Ball.md b/docs/Getting-Started-with-Balance-Ball.md index ff95907e54..bf885de0f3 100644 --- a/docs/Getting-Started-with-Balance-Ball.md +++ b/docs/Getting-Started-with-Balance-Ball.md @@ -279,7 +279,7 @@ containing the trained model is not exported into the ml-agents folder. ### Setting up Inference Support In order to run neural network models inside of Unity, you will need to setup the -Inference Engine with an appropriate backend. See [here](TensorflowSharp) for more +Inference Engine with an appropriate backend. See [here](Inference-Engine.md) for more information. 
 ### Embedding the trained model into Unity
diff --git a/docs/Learning-Environment-Design-Learning-Brains.md b/docs/Learning-Environment-Design-Learning-Brains.md
index 8d2b5fe45b..7c68977217 100644
--- a/docs/Learning-Environment-Design-Learning-Brains.md
+++ b/docs/Learning-Environment-Design-Learning-Brains.md
@@ -44,7 +44,7 @@ To use a graph model:
 
 1. Select the **Learning Brain** asset in the **Project** window of the Unity
    Editor. **Note:** In order to use the **Learning** Brain, you need the appropriate backend for the
-   Inference Engine. See [here](TensorFlowSharp).
+   Inference Engine. See [here](Inference-Engine.md).
 2. Import the `model_name` file produced by the PPO training program.
    (Where `model_name` is the name of the model file, which is
    constructed from the name of your Unity environment executable and the run-id
diff --git a/docs/Readme.md b/docs/Readme.md
index ad44fa4bfb..7a47ad7948 100644
--- a/docs/Readme.md
+++ b/docs/Readme.md
@@ -42,8 +42,7 @@
 * [Using TensorBoard to Observe Training](Using-Tensorboard.md)
 
 ## Inference
-* Link to [*Unity Machine Learning Inference SDK*](TensorflowSharp)
-* [Installing Backends](TensorflowSharp)
+* [Unity Machine Learning Inference SDK](Inference-Engine.md)
 
 ## Help

From 51714b08abf7f3b44e6734c8cde37c6a4a828a49 Mon Sep 17 00:00:00 2001
From: vincentpierre
Date: Fri, 12 Oct 2018 17:11:34 -0700
Subject: [PATCH 6/6] Edited the FAQ

---
 docs/FAQ.md | 32 ++++----------------------------
 1 file changed, 4 insertions(+), 28 deletions(-)

diff --git a/docs/FAQ.md b/docs/FAQ.md
index db51ef5291..1de4d9d375 100644
--- a/docs/FAQ.md
+++ b/docs/FAQ.md
@@ -13,36 +13,12 @@ This is because .NET 3.5 doesn't support method Clear() for StringBuilder,
 refer to [Setting Up The ML-Agents Toolkit Within
 Unity](Installation.md#setting-up-ml-agent-within-unity) for solution.
 
-## TensorFlowSharp flag not turned on
+## Cannot drag Model into Learning Brain
 
-Before version 0.6, we use specific scripting define symbols when using TensorflowSharp. If you have already imported the TensorFlowSharp plugin, but haven't set
-ENABLE_TENSORFLOW flag for your scripting define symbols, you will see the
-following error message:
+You might not have the appropriate backend required to import the model. Refer to
+[Inference Engine](Inference-Engine.md) for more information on how to install backends
+and reimport the asset.
 
-```console
-You need to install and enable the TensorFlowSharp plugin in order to use the Learning Brain.
-```
-
-This error message occurs because the TensorFlowSharp plugin won't be usage
-without the ENABLE_TENSORFLOW flag, refer to [Setting Up The ML-Agents Toolkit
-Within Unity](Installation.md#setting-up-ml-agent-within-unity) for solution.
-
-## Instance of CoreBrainInternal couldn't be created
-
-If you try to use ML-Agents in Unity versions 2017.1 - 2017.3, you might
-encounter an error that looks like this:
-
-```console
-Instance of CoreBrainInternal couldn't be created. The the script
-class needs to derive from ScriptableObject.
-UnityEngine.ScriptableObject:CreateInstance(String)
-```
-
-You can fix the error by removing `CoreBrain` from CoreBrainInternal.cs:16,
-clicking on your Brain Gameobject to let the scene recompile all the changed
-C# scripts, then adding the `CoreBrain` back. Make sure your brain is in
-Internal mode, your TensorFlowSharp plugin is imported and the
-ENABLE_TENSORFLOW flag is set. This fix is only valid locally and unstable.
 
 ## Environment Permission Error
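For reviewers who want to sanity-check the workflow these doc changes describe, the train-then-embed loop can be exercised end to end from the command line. The sketch below is illustrative only: the run-id, the Brain and model names, the `.tf` extension, and the copy destination are assumptions based on the 3DBall example, not something this patch series prescribes.

```sh
# Train in the Editor with the default PPO config; mlagents-learn will
# prompt you to press Play in Unity once it is listening.
mlagents-learn config/trainer_config.yaml --run-id=my-run --train

# When training ends, the exported model should land under models/<run-id>/.
ls models/my-run/

# Copy it into the Unity project so it can be dragged onto the Model field
# of a Learning Brain (the path and the .tf extension are assumptions).
cp models/my-run/3DBallLearning.tf Assets/ML-Agents/Examples/3DBall/TFModels/
```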
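Relatedly, for the new "Cannot drag Model into Learning Brain" FAQ entry: if the drag still fails after a backend has been installed, forcing Unity to refresh its asset database is one way to trigger the reimport the entry mentions. A minimal sketch, assuming `$UNITY_PATH` points at your editor binary and the project lives in `UnitySDK/`; both paths are placeholders for your local setup:

```sh
# Open the project headless; loading it makes Unity reimport changed assets,
# and -quit closes the editor afterwards. Paths are placeholders.
"$UNITY_PATH" -batchmode -projectPath "$(pwd)/UnitySDK" -quit
```

A single asset can also be reimported inside the Editor by right-clicking it in the Project window and choosing **Reimport**.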