NeurIPS 2021 - AWS Deepracer Challenge - Starter Kit
This is the starter kit for the AWS Deepracer Challenge, a part of AI Driving Olympics at NeurIPS 2021, hosted on AIcrowd.
In this competition, you will train a reinforcement learning agent (an autonomous car) to drive in the Deepracer simulator. This model will then be tested on a real-world track with a miniature AWS Deepracer car. Your goal is to train a model that can complete a lap as fast as possible without going off track, while avoiding crashing into the objects placed on the track.
Clone the repository to compete now!
This repository contains:
- Deepracer Gym Environment which makes it easy to use the deepracer simulator.
- Documentation on how to submit your models to the leaderboard.
- Information on best practices to have hassle free submissions.
- Starter code for you to get started!
IMPORTANT - Accept the rules before you submit
- 📚 Competition procedure
- 💪 Getting started
- 🏎 Deepracer Gym Environment
- 🛠 Preparing your submission
- 📨 Submission
- 📝 Submission checklist
- 📎 Important links
- ✨ Contributors
The AWS Deepracer Challenge is an opportunity for participants to test their agents for simulation-to-real-world transfer by running them on a real-world track with a miniature AWS Deepracer car. Your goal is to train a model that can complete a lap as fast as possible without going off track, while avoiding crashing into the objects placed on the track.
In this challenge, you will train your models locally and then upload them to AIcrowd (via git) to be evaluated.
The following is a high level description of how this process works.
- Sign up to join the competition on the AIcrowd website.
- Clone this repo and start developing your solution.
- Design and build agents that can compete in the Deepracer environment, and implement an agent class as described in the writing-your-agents section.
- Submit your agents to AIcrowd GitLab for evaluation. Refer to the submission section for detailed instructions.
We recommend using Python 3.6 or higher. If you are using Miniconda/Anaconda, you can install it using `conda install python=3.6`.
Clone the starter kit repository and install the dependencies.
git clone http://gitlab.aicrowd.com/deepracer/neurips-2021-aws-deepracer-starter-kit.git
cd neurips-2021-aws-deepracer-starter-kit
# Optional: Install Deepracer Gym Environment
pip install -e ./deepracer-gym
Originally, AWS Deepracer is a service hosted on the AWS RoboMaker platform. To make it easy for participants, we are releasing a gym environment for Deepracer. The environment starts a Docker container that runs the simulator and communicates with it over a ZeroMQ connection to provide a Gym interface.
Run these to quickly get started.
# Install docker if needed
sudo snap install docker
# Install the Deepracer Gym Environment
pip install -e ./deepracer-gym
# Start the Deepracer docker container
source deepracer-gym/start_deepracer_docker.sh
# This might take a while to download and start
# Wait until the terminal says "===Waiting for gym client==="
# Open a new terminal
# Run a random actions agent with Deepracer Gym
python deepracer-gym/random_actions_example.py
# Stop the docker container once done
source deepracer-gym/stop_deepracer_docker.sh
For more instructions, see `deepracer-gym/README.md`.
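The random-actions script boils down to the standard Gym interaction loop. Below is a sketch of what `random_actions_example.py` likely does; the environment id shown in the comments is an assumption, so check the script itself for the exact setup. The loop function works with any Gym-style environment:

```python
# Sketch of a random-actions episode loop (Gym 0.x step API assumed).
# Creating the Deepracer env (e.g. gym.make("deepracer_gym:deepracer-v0"))
# is an assumption and requires the simulator container started by
# start_deepracer_docker.sh to already be running.

def run_random_episode(env, max_steps=500):
    """Step the env with random actions until the episode ends."""
    obs = env.reset()
    total_reward, steps = 0.0, 0
    for _ in range(max_steps):
        action = env.action_space.sample()        # pick a random action
        obs, reward, done, info = env.step(action)
        total_reward += reward
        steps += 1
        if done:
            break
    return total_reward, steps
```

With the simulator container running, you would create the Deepracer environment and pass it to `run_random_episode` to watch the reward signal the simulator produces.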
Your agents need to implement a subclass of the `DeepracerAgent` class from `agents/deepracer_base_agent.py`. You can check the code in the `agents` directory for examples.

Note: If your agent doesn't inherit from the `DeepracerAgent` class, the evaluation will fail.

Once your agent class is ready, you can specify the class to use as the player agent in your `submission_config.py`. The starter kit comes with a random agent submission, and the `submission_config.py` in the starter kit points to that class. You should update it to use your class.
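As a sketch, a minimal agent subclass might look like the following. The method names `register_reset` and `compute_action` are assumptions about the base-class interface — check `agents/deepracer_base_agent.py` for the actual signatures. A stand-in base class is included here so the snippet is self-contained:

```python
# Hypothetical agent sketch. In a real submission you would instead import:
#   from agents.deepracer_base_agent import DeepracerAgent
# The stand-in base class and its method names below are assumptions.

class DeepracerAgent:                 # stand-in for the starter-kit base class
    def register_reset(self, observations):
        raise NotImplementedError

    def compute_action(self, observations, info):
        raise NotImplementedError


class ConstantActionAgent(DeepracerAgent):
    """Toy agent that always returns the same action."""

    def __init__(self, action=1):
        self.action = action

    def register_reset(self, observations):
        # Called at the start of each episode; returns the first action.
        return self.action

    def compute_action(self, observations, info):
        # Called on every step with the latest observations.
        return self.action
```

Your real agent would load its trained model in `__init__` and map observations to actions in `compute_action`, then be referenced from `submission_config.py`.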
| File/Directory | Description |
| --- | --- |
| `agents` | Directory containing different scripted bots, a baseline agent, and bots performing random actions. We recommend that you add your agents to this directory. |
| `submission_config.py` | File containing the configuration options for local evaluation. We will use the same player agent you specify here during the evaluation. |
| `utils/submit.sh` | Helper script to submit your repository to AIcrowd GitLab. |
| `Dockerfile` | (Optional) Docker config for your submission. Refer to the runtime configuration for more information. |
| `requirements.txt` | File containing the list of Python packages you want to install for the submission to run. Refer to the runtime configuration for more information. |
| `apt.txt` | File containing the list of OS packages you want to install for the submission to run. Refer to the runtime configuration for more information. |
You can specify the list of Python packages needed for your code to run in your `requirements.txt` file. We will install these packages using the `pip install` command.

You can also specify the OS packages needed using the `apt.txt` file. We install these packages using the `apt-get install` command.

For more information on how you can configure the evaluation runtime, please refer to `RUNTIME.md`.
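For example, a `requirements.txt` might pin the exact versions you trained with (the packages and versions below are purely illustrative, not requirements of the challenge):

```text
# requirements.txt — illustrative only; list the versions you actually use
torch==1.9.0
numpy==1.19.5
```

Pinning versions keeps the evaluation runtime consistent with your local environment.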
You can add your SSH Keys to your GitLab account by going to your profile settings here. If you do not have SSH Keys, you will first need to generate one.
Your repository should have an `aicrowd.json` file with the following fields:
{
"challenge_id" : "neurips-2021-aws-deepracer-ai-driving-olympics-challenge",
"authors" : ["Your Name"],
"description" : "Brief description for your submission"
}
This file is used to identify your submission as a part of the AWS Deepracer Challenge. You must use the `challenge_id` as specified above.
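A quick sanity check before submitting can catch a missing field or a mistyped `challenge_id`. The snippet below is an optional helper written for this README, not part of the starter kit:

```python
# Optional helper (not part of the starter kit): validate aicrowd.json
# before submitting, to catch a missing field or mistyped challenge_id.
import json

REQUIRED_FIELDS = ("challenge_id", "authors", "description")
EXPECTED_CHALLENGE_ID = "neurips-2021-aws-deepracer-ai-driving-olympics-challenge"

def validate_aicrowd_json(path="aicrowd.json"):
    """Raise ValueError if the submission metadata looks wrong."""
    with open(path) as f:
        config = json.load(f)
    missing = [field for field in REQUIRED_FIELDS if field not in config]
    if missing:
        raise ValueError(f"aicrowd.json is missing fields: {missing}")
    if config["challenge_id"] != EXPECTED_CHALLENGE_ID:
        raise ValueError(f"Unexpected challenge_id: {config['challenge_id']}")
    return config
```

Run it from the repository root before `./utils/submit.sh` to fail fast instead of waiting for the evaluation to reject the submission.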
git remote add aicrowd git@gitlab.aicrowd.com:<username>/neurips-2021-aws-deepracer-starter-kit.git
Note: The above step needs to be done only once. This configuration will be saved in your repository for future use.
./utils/submit.sh "some description"
If you want to submit without the helper script, please refer to `SUBMISSION.md`.
- Accept the challenge rules. You can do this by going to the challenge overview page and clicking the "Participate" button. You only need to do this once.
- Add your agent code that implements the `DeepracerAgent` class from `evaluator/base_agent`.
- Add your model checkpoints (if any) to the repo. The `utils/submit.sh` script will automatically detect large files and add them to Git LFS. If you are using the script, please refer to this post explaining how to add your models.
- Update the runtime configuration using `requirements.txt`, `apt.txt` and/or `Dockerfile` as necessary. Please make sure that you specify the same package versions that you use locally on your machine.
- 💪 Challenge information
- 🗣 Community
- 🎮 Deepracer resources
- Dipam Chakraborty
- Siddhartha Laghuvarapu
- Jyotish Poonganam
- Sahika Genc
Best of Luck 🎉