Multi-agent shape formation using policy search
Files: README.md, cost_heur_astar.py, gworld.py, macros.py, pqueue.py, solver_model.py, visualize.py


ECE6504 Autonomous Coordination - Presentation Two

Multi-Agent Shape Formation in a Gridworld

  • Agents are initialized at random positions in the gridworld.

  • A reverse search computes, for each cell, the cost of reaching the nearest goal cell.

  • This reverse search is a modified A* run outward from all goal positions at once, and it stops as soon as every start position has been covered (a sketch of this search follows the list).

  • Each agent then greedily moves towards its nearest goal position.

  • When an agent reaches a goal position, any other agent that was heading for that cell must now pick a different goal position.

  • The search is therefore repeated, and the cost table is updated, whenever a goal is claimed.

  • Because the order in which agents are planned can affect how long the solution takes, the planning sequence is randomized at every step (see the planning-loop sketch after the list).

  • The reverse search can be thought of as a virtual exploration, with the added benefit of being guided by a heuristic, namely the Manhattan-distance heuristic that A* uses.

  • However, because there is no actual motion during this search process, it is not equivalent to learning to cooperate in a task.

  • To execute: run solver_model.py

  • Link to YouTube video: https://youtu.be/DCipbUKtsTM
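
The repository's own cost computation lives in cost_heur_astar.py and pqueue.py; the snippet below is only a minimal sketch of the reverse search described above, with hypothetical names (reverse_cost_table, manhattan) and a simplified grid representation that are not the repo's actual API. It seeds a priority queue with every goal cell at cost zero, expands outward over the four-connected grid, biases the expansion toward the start cells with the Manhattan-distance heuristic, and stops once every start cell has received a cost.

```python
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def reverse_cost_table(grid_size, goals, starts, obstacles=frozenset()):
    """Multi-source reverse search (sketch): expand outward from every goal cell
    at once, recording for each settled cell the cost of the cheapest path to its
    nearest goal.  The expansion is biased A*-style toward the start cells by the
    Manhattan heuristic and stops once every start cell has received a cost."""
    rows, cols = grid_size
    cost = {}                    # cell -> cost to its nearest goal
    remaining = set(starts)      # start cells that still need a cost entry

    # Seed the frontier with all goal cells at zero cost.
    frontier = [(min(manhattan(g, s) for s in starts), 0, g) for g in goals]
    heapq.heapify(frontier)

    while frontier and remaining:
        _, g_cost, cell = heapq.heappop(frontier)
        if cell in cost:         # already settled via a cheaper path
            continue
        cost[cell] = g_cost
        remaining.discard(cell)

        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and nxt not in obstacles and nxt not in cost:
                h = min(manhattan(nxt, s) for s in starts)   # heuristic toward the agents
                heapq.heappush(frontier, (g_cost + 1 + h, g_cost + 1, nxt))
    return cost
```

On a four-connected grid with unit step costs, the Manhattan distance never overestimates the true path length, so the heuristic only prunes the search without changing the resulting costs.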
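
Likewise, the actual planning loop lives in solver_model.py and gworld.py; the sketch below (hypothetical name plan_until_formed, reusing reverse_cost_table from the previous snippet) only illustrates the loop described above. Agents are planned in a randomized order, each greedily steps to the neighbouring cell with the lowest cost-to-goal, and whenever a goal cell is claimed it is dropped from the goal set, so the cost table is rebuilt and the remaining agents retarget. It assumes one goal per agent and omits deadlock handling.

```python
import random

def plan_until_formed(agents, goals, grid_size, obstacles=frozenset()):
    """Greedy planning loop (sketch): repeat until every goal cell is claimed.
    Assumes there are at least as many agents as goal cells."""
    agents = dict(agents)            # agent id -> current (row, col) cell
    free_goals = set(goals)          # goal cells not yet claimed by an agent
    settled = set()                  # ids of agents already parked on a goal

    while free_goals:
        # Rebuild the cost table toward the goals that are still unclaimed
        # (reverse_cost_table is the sketch shown above).
        starts = {agents[a] for a in agents if a not in settled}
        cost = reverse_cost_table(grid_size, free_goals, starts, obstacles)

        # Randomize the planning order so no agent is systematically favoured.
        order = [a for a in agents if a not in settled]
        random.shuffle(order)

        occupied = set(agents.values())
        for aid in order:
            r, c = agents[aid]
            if (r, c) in free_goals:          # already standing on an unclaimed goal
                free_goals.discard((r, c))
                settled.add(aid)
                continue
            # Greedy step: move to the free neighbour with the lowest cost entry.
            neighbours = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
            neighbours = [n for n in neighbours if n in cost and n not in occupied]
            if not neighbours:
                continue                      # blocked this round; retry next round
            nxt = min(neighbours, key=cost.get)
            occupied.discard((r, c))
            occupied.add(nxt)
            agents[aid] = nxt
            if nxt in free_goals:             # goal reached: the other agents must
                free_goals.discard(nxt)       # retarget, so the table is rebuilt
                settled.add(aid)              # at the top of the next round
    return agents
```

Shuffling the planning order each round is the randomization mentioned above: it prevents any fixed agent from always moving first and repeatedly blocking the others.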
