# Solving the Team Orienteering Problem with Transformers


Solve a variant of the Orienteering Problem (OP), called the Team Orienteering Problem (TOP), with a cooperative multi-agent system based on Transformer networks. For more details, please see our paper. If this repository is useful for your work, please cite our paper:
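To make the objective concrete: in the TOP, a team of agents starts and ends at a depot, each agent's route must respect a length budget, and the team maximizes the total prize of distinctly visited nodes. The sketch below is illustrative only (the function names `route_length` and `team_score` are ours, not part of this repository):

```python
import math

def route_length(depot, stops):
    """Euclidean length of the tour depot -> stops -> depot."""
    path = [depot, *stops, depot]
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def team_score(depot, routes, prizes, coords, max_length):
    """Total prize over all agents: each node's prize counts once,
    and any route exceeding the length budget makes the plan infeasible."""
    visited = set()
    for route in routes:
        if route_length(depot, [coords[i] for i in route]) > max_length:
            return float("-inf")  # infeasible plan
        visited.update(route)
    return sum(prizes[i] for i in visited)
```

For example, with a depot at the origin and two unit-prize nodes, two agents visiting one node each collect a team score of 2 prizes, while a single agent trying to chain both nodes may exceed the budget.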

```
@misc{fuertes2023,
    title={Solving the Team Orienteering Problem with Transformers},
    author={Daniel Fuertes and Carlos R. del-Blanco and Fernando Jaureguizar and Narciso García},
    year={2023},
    eprint={2311.18662},
    archivePrefix={arXiv},
    primaryClass={cs.AI}
}
```

## Dependencies

## Usage

First, create the test and validation sets:

```bash
python generate_data.py --name test --seed 1234 --graph_sizes 20 20 20 35 35 35 50 50 50 75 75 75 100 100 100 --max_length 1.5 2 2.5 1.5 2 2.5 1.5 2 2.5 1.5 2 2.5 1.5 2 2.5
python generate_data.py --name val --seed 4321 --graph_sizes 20 20 20 35 35 35 50 50 50 75 75 75 100 100 100 --max_length 1.5 2 2.5 1.5 2 2.5 1.5 2 2.5 1.5 2 2.5 1.5 2 2.5
```
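Conceptually, each generated instance is a depot plus a set of prize-carrying nodes in the unit square. The following sketch shows the rough shape of one instance under the `const` prize distribution; the actual `generate_data.py` may use NumPy and a different on-disk format, so treat this as an assumption for intuition only:

```python
import random

def make_top_instance(graph_size, seed=1234):
    """One random TOP instance: depot, node coordinates, and prizes,
    all sampled in the unit square. With --data_distribution const,
    every node carries the same prize."""
    rng = random.Random(seed)
    depot = (rng.random(), rng.random())
    nodes = [(rng.random(), rng.random()) for _ in range(graph_size)]
    prizes = [1.0] * graph_size  # 'const' distribution: uniform prizes
    return depot, nodes, prizes

depot, nodes, prizes = make_top_instance(20)
```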

To train a Transformer model (attention) use:

```bash
python run.py --problem top --model attention --val_dataset data/1depots/const/20/val_seed4321_L2.0.pkl --graph_size 20 --data_distribution const --num_agents 2 --max_length 2.0 --baseline rollout
```

Change the environment conditions (number of agents, graph size, maximum route length, reward distribution, etc.) as needed.

Pretrained weights are available here. You can unzip the file with unzip (sudo apt-get install unzip):

```bash
unzip pretrained.zip
```

A Pointer Network (pointer), a Graph Pointer Network (gpn), and GAMMA (gamma) can also be trained via the --model option. To resume training, load your last saved model with the --resume option. Additionally, pretrained models are provided inside the pretrained folder.

Evaluate your trained models with:

```bash
python eval.py data/1depots/const/20/test_seed1234_L2.0.pkl --model outputs/top_const20/attention_... --num_agents 2
```

If no epoch is specified, the last one in the folder is used by default.

Baseline algorithms like Ant Colony Optimization (aco), Particle Swarm Optimization (pso), or a Genetic Algorithm (opga) can be executed as follows:

```bash
python -m problems.op.eval_baselines --method aco --multiprocessing True --datasets data/1depots/const/20/test_seed1234_L2.pkl
```
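For intuition about what such baselines compete against, here is a much simpler constructive heuristic (not one of the repository's ACO/PSO/GA implementations): each agent greedily visits the unvisited node with the best prize-to-detour ratio that still allows returning to the depot within the length budget.

```python
import math

def greedy_top(depot, coords, prizes, num_agents, max_length):
    """Greedy TOP baseline: agents are routed one at a time; each step
    picks the feasible unvisited node maximizing prize / extra distance."""
    unvisited = set(range(len(coords)))
    routes = []
    for _ in range(num_agents):
        route, pos, used = [], depot, 0.0
        while True:
            best, best_ratio = None, 0.0
            for i in unvisited:
                step = math.dist(pos, coords[i])
                back = math.dist(coords[i], depot)
                # Only consider nodes we can visit and still return to the depot
                if used + step + back <= max_length:
                    ratio = prizes[i] / (step + 1e-9)
                    if ratio > best_ratio:
                        best, best_ratio = i, ratio
            if best is None:
                break
            used += math.dist(pos, coords[best])
            pos = coords[best]
            route.append(best)
            unvisited.remove(best)
        routes.append(route)
    return routes
```

Metaheuristics like ACO improve on this kind of myopic construction by exploring many candidate routes and sharing information (pheromone trails, swarm positions, or populations) across iterations.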

Finally, you can visualize an example using:

```bash
python visualize.py --graph_size 20 --num_agents 2 --max_length 2 --data_distribution const --model outputs/top_const20/attention_...
python visualize.py --graph_size 20 --num_agents 2 --max_length 2 --data_distribution const --model aco
```

## Other options and help

```bash
python run.py -h
python eval.py -h
python -m problems.op.eval_baselines -h
python visualize.py -h
```

## Acknowledgements

This repository is an adaptation of wouterkool/attention-learn-to-route for the TOP. The baseline algorithms (ACO, PSO, and GA) were implemented following these repositories: robin-shaun/Multi-UAV-Task-Assignment-Benchmark and dietmarwo/Multi-UAV-Task-Assignment-Benchmark.
