MARL_Project

Who's deceiving? An investigation into making multi-agent coordination robust to adversarial spoofing.

Robots and autonomous systems are becoming increasingly common. As systems become more autonomous, however, there is a growing risk that malicious actors will attack them and cause harm. The goal of our project is to investigate how malicious agents can disrupt multi-robot systems, and to use those findings to inform the design of resilient multi-robot systems. In our version of the Rendezvous problem, four benign agents try to converge to a common point while a fifth, malicious agent attempts to delay the process. Every agent knows the others' locations at each time step. The benign agents move toward the centroid of all of the agents, while the malicious agent's moves are learned via reinforcement learning. The scenario is implemented as an OpenAI Gym environment, which allows visualization and generation of training data in Python. We first control the malicious agent using policy gradients, and then encourage it to act more like a benign agent by applying Generative Adversarial Network (GAN) techniques. After demonstrating how a malicious agent can disrupt multi-robot coordination, we briefly discuss potential defenses for adversarial agent scenarios.
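The dynamics described above can be sketched as a minimal Gym-style environment. This is an illustrative reconstruction, not the project's actual code: the class name `RendezvousEnv`, the step size, and the reward definition are all assumptions.

```python
import numpy as np


class RendezvousEnv:
    """Illustrative sketch of the rendezvous scenario: four benign agents
    move toward the centroid of all agents, while a fifth (malicious)
    agent takes an externally supplied action. All names and parameters
    here are hypothetical, not the project's actual implementation."""

    def __init__(self, n_benign=4, step_size=0.1, seed=0):
        self.n_benign = n_benign
        self.step_size = step_size
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Rows 0..n_benign-1 are benign agents; the last row is malicious.
        self.pos = self.rng.uniform(-1.0, 1.0, size=(self.n_benign + 1, 2))
        return self.pos.copy()

    def spread(self):
        # Mean distance of the benign agents from their own centroid;
        # a natural measure of how far they are from converging.
        benign = self.pos[: self.n_benign]
        return np.linalg.norm(benign - benign.mean(axis=0), axis=1).mean()

    def step(self, malicious_action):
        # Benign agents move a fixed fraction of the way toward the
        # centroid of ALL agents -- including the malicious one, which is
        # exactly what the attacker exploits to pull the group off course.
        centroid = self.pos.mean(axis=0)
        benign = self.pos[: self.n_benign]
        self.pos[: self.n_benign] = benign + self.step_size * (centroid - benign)
        # The malicious agent's move comes from its learned policy.
        self.pos[self.n_benign] += np.clip(
            malicious_action, -self.step_size, self.step_size
        )
        # Reward the attacker for the remaining spread (delayed convergence).
        return self.pos.copy(), self.spread()
```

With a zero action for the malicious agent, the benign agents' spread contracts geometrically each step; a policy-gradient learner would instead choose actions that maximize the returned spread.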
