nocturne_lab: A lightweight, multi-agent driving simulator 🧪 + 🚗

nocturne_lab is a maintained fork of Nocturne, a 2D, partially observed driving simulator built in C++. You can get started with the intro examples 🏎️💨 here.


🚨 See our project page and a 📝 wandb report with videos and full training logs


Dataset

You can download part of the dataset (~2000 scenes) here. Once downloaded, add the data to the ./data folder and make sure that data_path in configs/env_config.yaml points to that folder.
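
As a quick sanity check after downloading, a sketch like the one below can confirm the path is wired up correctly. It assumes configs/env_config.yaml is plain YAML with a top-level data_path key and that scenes are stored as JSON files; adjust to your setup.

# Minimal sanity check (assumes a top-level data_path key in
# configs/env_config.yaml and JSON scene files).
from pathlib import Path

import yaml

with open("configs/env_config.yaml") as f:
    env_config = yaml.safe_load(f)

data_path = Path(env_config["data_path"])
scenes = sorted(data_path.glob("*.json"))
print(f"Found {len(scenes)} scenes in {data_path}")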

Algorithms

Algorithm                    Reference                   Implementation   How to run
MAPPO                        (Vinitsky et al., 2021)     ma_ppo.py        python experiments/hr_rl/run_hr_ppo_cli.py --reg-weight 0.0
Human-Regularized (MA) PPO   (Cornelisse et al., 2024)   reg_ppo.py       python experiments/hr_rl/run_hr_ppo_cli.py --reg-weight 0.06

Trained policies 🏋️‍♂️

We release the best PPO-trained models with human regularization in models_trained/hr_rl. Additionally, we release the human reference policies, which can be found at models_trained/il. For the results presented in the paper, we used the IL policy trained on AVs (human_policy_D651_S500_02_18_20_05_AV_ONLY.pt).
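
For illustration, here is a hedged sketch for inspecting one of the released checkpoints; it assumes the .pt file can be opened with torch.load, and the exact object it contains (plain state dict vs. pickled policy) depends on the training code:

# Inspect a released checkpoint on CPU; the loaded object's type depends on
# how it was saved (plain state dict vs. pickled policy object).
import torch

ckpt_path = "models_trained/il/human_policy_D651_S500_02_18_20_05_AV_ONLY.pt"
checkpoint = torch.load(ckpt_path, map_location="cpu")
print(type(checkpoint))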

Run HR-PPO in 3 steps 🚀

After installing nocturne_lab, here is how you can run your own Human-Regularized PPO in 3 steps:

  • Step 1: Make sure you have downloaded the dataset and set data_path in configs/env_config.yaml to your data folder.
  • Step 2: You have access to our trained imitation learning policy in models_trained/il. Make sure that the human_policy_path in the configs/exp_config.yaml file is set to the IL policy you want to use.
  • Step 3: That's it! Now run:
python experiments/hr_rl/run_hr_ppo_cli.py --reg-weight <your-regularization-weight>

where setting --reg-weight 0.0 just runs standard MAPPO. For the paper, we used regularization weights between 0.02 and 0.08.
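
If you want to reproduce a small sweep over that range, a sketch along these lines works; it simply shells out to the CLI above once per weight, and the specific values are illustrative:

# Illustrative sweep over regularization weights in the 0.02 - 0.08 range,
# calling the documented CLI once per value.
import subprocess

for reg_weight in (0.02, 0.04, 0.06, 0.08):
    subprocess.run(
        ["python", "experiments/hr_rl/run_hr_ppo_cli.py", "--reg-weight", str(reg_weight)],
        check=True,
    )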

Nocturne PPO and HR-PPO benchmark

For transparency and reproducibility, we provide a detailed report with PPO and HR-PPO runs on 15 scenarios.

Basic RL interface

from nocturne.envs.base_env import BaseEnv

# Initialize an environment (env_config is the configuration loaded from configs/env_config.yaml)
env = BaseEnv(config=env_config)

# Reset
obs_dict = env.reset()

# Track agent IDs and which agents are done
agent_ids = list(obs_dict.keys())
dead_agent_ids = []

for step in range(1000):

    # Sample actions
    action_dict = {
        agent_id: env.action_space.sample()
        for agent_id in agent_ids
        if agent_id not in dead_agent_ids
    }

    # Step in env
    obs_dict, rew_dict, done_dict, info_dict = env.step(action_dict)

    # Update dead agents
    for agent_id, is_done in done_dict.items():
        if is_done and agent_id not in dead_agent_ids:
            dead_agent_ids.append(agent_id)

    # Reset if all agents are done and refresh the agent IDs
    if done_dict["__all__"]:
        obs_dict = env.reset()
        agent_ids = list(obs_dict.keys())
        dead_agent_ids = []

# Close environment
env.close()

Installation

The instructions for installing Nocturne locally are provided below. To use the package on an HPC (e.g. HPC Greene), follow the instructions in ./hpc/hpc_setup.md.

Requirements

  • Python (>=3.10)

Virtual environment

Different options for setting up a virtual environment are described below. Any of them works, although pyenv is recommended.

Note: The virtual environment needs to be activated each time before you start working.

Option 1: pyenv

Create a virtual environment by running:

pyenv virtualenv 3.10.12 nocturne_lab

The virtual environment should be activated every time you start a new shell session before running subsequent commands:

pyenv shell nocturne_lab

Fortunately, pyenv provides a way to assign a virtual environment to a directory. To set it for this project, run:

pyenv local nocturne_lab

Option 2: conda

Create a conda environment by running:

conda env create -f ./environment.yml

This creates a conda environment using Python 3.10 called nocturne_lab.

To activate the virtual environment, run:

conda activate nocturne_lab

Option 3: venv

Create a virtual environment by running:

python -m venv .venv

The virtual environment should be activated every time you start a new shell session before running the subsequent command:

source .venv/bin/activate

Dependencies

poetry is used to manage the project and its dependencies. Start by installing poetry in your virtual environment:

pip install poetry

Before installing the package, you first need to synchronise and update the git submodules by running:

# Synchronise and update git submodules
git submodule sync
git submodule update --init --recursive

Now install the package by running:

poetry install

Note: If it fails to build nocturne, try running poetry build to get a more descriptive error message. A common cause is a missing SFML library, which you can install with brew install sfml on macOS or sudo apt-get install libsfml-dev on Linux.


Under the hood, the nocturne package uses the nocturne_cpp package, which wraps the Nocturne C++ code base and exposes pybind11 bindings so Python can interact with the C++ code.
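
To see which C++-backed classes your build actually exposes, a quick inspection sketch (assuming nothing beyond the package being importable) is:

# List the pybind11-backed names exposed by the nocturne package; the exact
# classes depend on the nocturne_cpp build, so treat this as an inspection aid.
import nocturne

print(nocturne.__file__)
print([name for name in dir(nocturne) if not name.startswith("_")])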


Common errors

  • KeyringLocked: Failed to unlock the collection! Solution: first run export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring in your terminal, then rerun poetry install (see Stack Overflow for more info).

Development setup

To configure the development setup, run:

# Install poetry dev dependencies
poetry install --only=dev

# Install pre-commit (for flake8, isort, black, etc.)
pre-commit install

# Optional: Install poetry docs dependencies
poetry install --only=docs
