nhanph/c-MBA


Attacking c-MARL More Effectively: A Data Driven Approach

This is the official repository for the paper Attacking c-MARL More Effectively: A Data Driven Approach, published at ICDM 2023. It includes implementations of the training pipeline and of the adversarial attacks evaluated in the paper.

The experiments are conducted using the multi-agent MuJoCo and multi-agent particle environments.

Our repo is built upon the MADDPG implementation here.

Installation instructions

  1. Create a conda environment with Python 3.8:

conda create -n cmba python=3.8

  2. Install torch here (the experiments were run on CPU only, but the code also works on GPU machines). Our experiments use torch==1.10.1.

  3. Install mujoco-py. Follow the instructions here to download the MuJoCo binaries first, and check here if you run into problems installing or running mujoco-py.

  4. Install the packages in requirements.txt:

pip install -r requirements.txt

Quickstart

All the scripts to run are placed in the folder run_script.

Configs for each algorithm are in src/config/algs.

Pretrained models are in the results folder, including dynamics models, MARL models, and the single adversarial policy used by the Lin et al. (2020) approach. The paths to pretrained models can be changed in src/config/algs or when running the experiments. See the scripts in run_script/adv_attack for more info.

When running experiments, log files are stored under results/sacred/[env_name]/[run_name]/[run_number], and model checkpoints (saved during training) are stored at results/models. In addition, the transitions collected to train dynamics models are stored (by default) in results/collected_data.
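The run-numbered layout above can be navigated programmatically. The sketch below is a hypothetical helper (not part of the repo) that assumes only the results/sacred/[env_name]/[run_name]/[run_number] convention described above and returns the most recent run directory:

```python
from pathlib import Path

def latest_run_dir(results_root, env_name, run_name):
    """Return the highest-numbered run directory under
    results/sacred/[env_name]/[run_name], or None if absent."""
    base = Path(results_root) / "sacred" / env_name / run_name
    if not base.is_dir():
        return None
    runs = [p for p in base.iterdir() if p.is_dir() and p.name.isdigit()]
    return max(runs, key=lambda p: int(p.name), default=None)

if __name__ == "__main__":
    # Demo against a throwaway tree mirroring the convention above.
    import tempfile
    root = Path(tempfile.mkdtemp())
    for n in ("1", "2", "10"):
        (root / "sacred" / "Walker_2x3" / "adv_noise" / n).mkdir(parents=True)
    print(latest_run_dir(root, "Walker_2x3", "adv_noise").name)  # -> 10
```

Note the numeric (not lexicographic) comparison, so run 10 correctly sorts after run 2.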

The notebook folder contains notebooks to train dynamics models, as well as a sample notebook to plot the results. Change the respective paths when needed. You might need to install Jupyter Notebook or JupyterLab (https://jupyter.org/).

Adversarial attacks

Once all pre-trained models are obtained, we can perform the adversarial attacks by running the scripts in run_script/adv_attack.

Random noise attack scripts have the template adv_noise_[norm_type]_norm.sh where norm_type can be l1 or linf.

For Lin et al. (2020) + iFGSM attack, the script template is:

  • fgsm_atk_[norm_type]_norm_[agent_index].sh: iFGSM attack on agent=agent_index using adversarial policy trained on the same agent.

For model-based attack (c-MBA) variants, the templates of the scripts are:

  • model_atk_fix_[norm_type]_norm.sh: original c-MBA attack on a fixed agent using expert-defined failure state (c-MBA-F).
  • model_atk_data_[norm_type]_norm.sh: original c-MBA attack on a fixed agent using learned failure state (c-MBA-D).
  • model_atk_[norm_type]_norm_opt.sh: c-MBA attack with optimal victim agent selection.
  • model_atk_[norm_type]_norm_opt_2s.sh: c-MBA attack with optimal victim agent selection, followed by the original c-MBA attack run again on the selected agent(s).
  • model_atk_[norm_type]_norm_opt_bf.sh: c-MBA attack with greedy agent selection.
  • model_atk_[norm_type]_norm_opt_rd.sh: c-MBA attack with random agent selection.
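The script names above follow a fixed pattern. As a convenience, here is a small sketch (a hypothetical helper, not part of the repository) that assembles the c-MBA script filenames from a norm type and a variant tag, so all combinations are easy to enumerate:

```python
# Illustrates the c-MBA script-naming convention described above.
NORM_TYPES = ("l1", "linf")
VARIANTS = {
    "fix": "fixed agent, expert-defined failure state (c-MBA-F)",
    "data": "fixed agent, learned failure state (c-MBA-D)",
    "opt": "optimal victim agent selection",
    "opt_2s": "optimal selection, then re-run on the selected agent(s)",
    "opt_bf": "greedy agent selection",
    "opt_rd": "random agent selection",
}

def cmba_script(norm_type, variant):
    """Build a c-MBA script filename from the templates listed above."""
    if variant in ("fix", "data"):
        return f"model_atk_{variant}_{norm_type}_norm.sh"
    return f"model_atk_{norm_type}_norm_{variant}.sh"

if __name__ == "__main__":
    for norm in NORM_TYPES:
        for v, desc in VARIANTS.items():
            print(f"{cmba_script(norm, v):40s} # {desc}")
```

For example, cmba_script("linf", "opt") gives model_atk_linf_norm_opt.sh, matching the template list above.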

Example: running an adversarial noise attack with $\ell_\infty$-norm in the Walker (2x3) environment:

cd run_script/adv_attack/Walker_2x3
sh adv_noise_linf_norm.sh

Plot results

We include a sample notebook to plot the results generated by the code; users only need to specify the correct path to where sacred stores the output files. See notebook/Plot-results-Walker_2x3.ipynb for more details.
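If you prefer to read the results outside the notebook, the sketch below shows one way to pull a metric out of a sacred run directory. It assumes sacred's FileStorageObserver layout, where metrics.json maps each metric name to parallel steps/values lists; the metric name "test_return_mean" is only an illustrative placeholder, so check the actual keys in your own output files:

```python
import json
from pathlib import Path

def load_metric(run_dir, name):
    """Read one metric's (steps, values) from a sacred run directory,
    assuming the FileStorageObserver's metrics.json layout."""
    with open(Path(run_dir) / "metrics.json") as f:
        metrics = json.load(f)
    m = metrics[name]
    return m["steps"], m["values"]

if __name__ == "__main__":
    # Demo against a throwaway metrics.json with a placeholder metric name.
    import tempfile
    run_dir = Path(tempfile.mkdtemp())
    fake = {"test_return_mean": {"steps": [0, 1000], "values": [850.0, 410.5]}}
    (run_dir / "metrics.json").write_text(json.dumps(fake))
    steps, values = load_metric(run_dir, "test_return_mean")
    print(steps, values)  # -> [0, 1000] [850.0, 410.5]
```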

Cite this work

If you find this repository helpful and use it in your work, consider citing the following paper:

@inproceedings{pham2023cmba,
  title={Attacking c-MARL More Effectively: A Data Driven Approach},
  author={Pham, Nhan H and Nguyen, Lam M and Chen, Jie and Lam, Hoang Thanh and Das, Subhro and Weng, Tsui-Wei},
  booktitle={2023 IEEE International Conference on Data Mining (ICDM)},
  year={2023}
}

The extended version of the paper is also available here.
