From 501a9d653e43a99f20293ae21c3e9f2d21a87221 Mon Sep 17 00:00:00 2001
From: Liam Paull
Date: Sun, 15 Nov 2020 14:33:07 -0500
Subject: [PATCH] cosmetics

---
 .../36_rl_baseline.md | 20 +++++++++----------
 .../42_rpl_baseline.md | 10 +++++-----
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/book/AIDO/31_task_embodied_strategies/36_rl_baseline.md b/book/AIDO/31_task_embodied_strategies/36_rl_baseline.md
index 3d690f7..39cbda9 100644
--- a/book/AIDO/31_task_embodied_strategies/36_rl_baseline.md
+++ b/book/AIDO/31_task_embodied_strategies/36_rl_baseline.md
@@ -53,29 +53,29 @@ The previous uses the model that is included in the baseline repository. You are
 
 To do so:
 
- 1. Change into the directory:
+Change into the directory:
 
-        $ cd challenge-aido_LF-baseline-RL-sim-pytorch
+    $ cd challenge-aido_LF-baseline-RL-sim-pytorch
 
- 2. Install this package:
+Install this package:
 
-        $ pip3 install -e .
+    $ pip3 install -e .
 
 and the `gym-duckietown` package:
 
-        $ pip3 install -e git://github.com/duckietown/gym-duckietown.git@daffy#egg=gym-duckietown
+    $ pip3 install -e git://github.com/duckietown/gym-duckietown.git@daffy#egg=gym-duckietown
 
 Note: Depending on your configuration, you might need to use pip instead of pip3
 
- 3. Change into the `duckietown_rl` directory and run the training script
+Change into the `duckietown_rl` directory and run the training script
 
-        $ cd duckietown_rl
-        $ python3 -m scripts.train_cnn.py --seed 123
+    $ cd duckietown_rl
+    $ python3 -m scripts.train_cnn.py --seed 123
 
- 4. When it finishes, try it out (make sure you pass in the same seed as the one passed to the training script)
+When it finishes, try it out (make sure you pass in the same seed as the one passed to the training script)
 
-        $ python3 -m scripts.test_cnn.py --seed 123
+    $ python3 -m scripts.test_cnn.py --seed 123
diff --git a/book/AIDO/31_task_embodied_strategies/42_rpl_baseline.md b/book/AIDO/31_task_embodied_strategies/42_rpl_baseline.md
index cca44cd..c35e72e 100644
--- a/book/AIDO/31_task_embodied_strategies/42_rpl_baseline.md
+++ b/book/AIDO/31_task_embodied_strategies/42_rpl_baseline.md
@@ -32,11 +32,11 @@ Here's a few pointers:
 
 Clone [this repo](https://github.com/duckietown/challenge-aido_LF-baseline-RPL-ros):
 
-    $ git clone https://github.com/duckietown/challenge-aido_LF-baseline-RPL-ros.git
+    $ git clone https://github.com/duckietown/challenge-aido_LF-baseline-RPL-ros.git
 
 Change into the directory:
 
-    $ cd challenge-aido_LF-baseline-RPL-ros
+    $ cd challenge-aido_LF-baseline-RPL-ros
 
 Test the submission, either locally with:
 
@@ -75,18 +75,18 @@ The final docker container then runs the simulator and the agent in parallel, al
 
 From the ` challenge-aido_LF-baseline-RPL-ros` directory, change into the `local_dev` directory:
 
-    $ cd local_dev
+    $ cd local_dev
 
 and open the `args.py` file. This is how you will control the training and testing in this repo. For now, just change the `--test` argument to `default=False`.
 
 Then, we can train with:
 
-    $ make run
+    $ make run
 
 As mentioned [](#rlp-baseline-overview), this will first build two subsequent docker images. This might take a while. Then, it will train an RL policy over the ROS stack inside Docker.
 
 When it finishes, see how it works. Simply change the `--test` flag back to `default=True` in `args.py` and test with:
 
-    $ make run
+    $ make run
 
 This will launch a simulator window on your host machine for you to view how your agent performs. You should see something like this:
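
A mailbox-format patch like the one above is normally applied with `git am`. The sketch below round-trips that workflow in a throwaway repository: it commits a file, makes a cosmetic edit mirroring the list-marker removal in the patch, exports the edit with `git format-patch`, rewinds, and replays it with `git am`. All file names, contents, and the placeholder identity are illustrative, not taken from the real AIDO book repository.

```shell
#!/bin/sh
# Sketch: round-tripping a mailbox-format patch through
# `git format-patch` and `git am`. Paths and contents are illustrative.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email liam@example.com   # placeholder identity
git config user.name "Liam Paull"

printf ' 1. Change into the directory:\n' > 36_rl_baseline.md
git add 36_rl_baseline.md
git commit -qm 'initial'

# Author side: make the cosmetic edit and export it as an email-style patch.
printf 'Change into the directory:\n' > 36_rl_baseline.md
git commit -qam 'cosmetics'
git format-patch -1 -o "$tmp" HEAD >/dev/null  # writes $tmp/0001-cosmetics.patch

# Reviewer side: rewind, then replay the exported patch with `git am`,
# which recreates the commit including author and subject.
git reset -q --hard HEAD~1
git am -q "$tmp"/0001-cosmetics.patch
cat 36_rl_baseline.md
```

Before applying a patch for real, `git apply --stat 0001-cosmetics.patch` previews the per-file diffstat (the same `20 +++...---` / `10 +++...---` summary shown in the header above) without touching the working tree.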