
Meta-Learning-based Deep Reinforcement Learning for Multiobjective Optimization Problems

Dependencies
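The code builds on wouterkool/attention-learn-to-route (see Acknowledgements below), so a comparable environment is a reasonable assumption: Python 3 with PyTorch, NumPy, and tqdm.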

Meta-Learning

For training the meta-model on MOTSP-20 instances:

python run.py --graph_size 20 --CUDA_VISIBLE_ID "0" --is_train --meta_iterations 10000

For training the meta-model on MOTSP-50 instances:

python run.py --graph_size 50 --CUDA_VISIBLE_ID "0" --is_train --meta_iterations 5000
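To picture what meta-training over a multiobjective problem involves, here is a minimal sketch of a Reptile-style outer loop over weighted-sum scalarizations of the two MOTSP objectives. It is one plausible instantiation of the idea, not this repository's actual API: `meta_train`, `reinforce_step`, and the learning rates are all hypothetical.

```python
import copy
import random

import torch


def meta_train(meta_model, reinforce_step, meta_iterations=10000,
               inner_steps=3, meta_lr=0.1):
    """Reptile-style outer loop; reinforce_step(model, optimizer, weights)
    is a user-supplied policy-gradient update, not part of this repo."""
    for _ in range(meta_iterations):
        # Sample a weighted-sum scalarization of the two MOTSP objectives.
        w = random.random()
        weights = (w, 1.0 - w)

        # Inner loop: adapt a copy of the meta-model to this subproblem.
        task_model = copy.deepcopy(meta_model)
        optimizer = torch.optim.Adam(task_model.parameters(), lr=1e-4)
        for _ in range(inner_steps):
            reinforce_step(task_model, optimizer, weights)

        # Outer update: move meta-parameters toward the adapted parameters.
        with torch.no_grad():
            for p_meta, p_task in zip(meta_model.parameters(),
                                      task_model.parameters()):
                p_meta.add_(meta_lr * (p_task - p_meta))
```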

You can initialize or resume a run from a pretrained meta-model via the --load_path option, e.g.:

python run.py --graph_size 50 --is_load --load_path "meta-model-MOTSP50.pt" --CUDA_VISIBLE_ID "0" --is_train --meta_iterations 10000 --start_meta_iteration 5000
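For reference, resuming from a checkpoint in PyTorch looks roughly like the sketch below. Only the file name and iteration numbers come from the command above; the checkpoint layout and the stand-in model class are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in for the actual attention model class (hypothetical).
model = nn.Linear(2, 2)

# Assumes the .pt file stores a plain state_dict; the real checkpoint
# layout in this repo may differ.
state = torch.load("meta-model-MOTSP50.pt", map_location="cpu")
model.load_state_dict(state)

# Continue meta-training where the previous run stopped, matching
# --start_meta_iteration 5000 and --meta_iterations 10000 above.
for iteration in range(5000, 10000):
    pass  # one meta-update per iteration
```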

Fine-tuning

For fine-tuning the trained meta-model on MOTSP-50 instances with 10 update steps per subproblem:

python run.py --graph_size 50 --is_load --load_path "meta-model-MOTSP50.pt" --CUDA_VISIBLE_ID "0" --is_test --update_step_test 10

For fine-tuning the trained meta-model on MOTSP-30 instances with 100 update steps per subproblem:

python run.py --graph_size 30 --is_load --load_path "meta-model-MOTSP50.pt" --CUDA_VISIBLE_ID "0" --is_test --update_step_test 100

For fine-tuning a randomly initialized model (no meta-training) on MOTSP-50 instances with 10 update steps per subproblem:

python run.py --graph_size 50 --CUDA_VISIBLE_ID "0" --is_test --update_step_test 10
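Conceptually, fine-tuning copies the loaded model once per scalarized subproblem and applies a few policy-gradient updates to each copy; the fine-tuned submodels then jointly approximate the Pareto front. A minimal sketch, assuming a hypothetical reinforce_step update and evenly spread weight vectors (neither is this repo's actual API):

```python
import copy

import torch


def fine_tune_all(meta_model, reinforce_step, n_subproblems=100,
                  update_steps=10):
    """Fine-tune one submodel per scalarized subproblem, starting each
    from the loaded parameters; reinforce_step is hypothetical."""
    submodels = []
    for i in range(n_subproblems):
        # Evenly spread weight vectors over the two objectives.
        w = i / (n_subproblems - 1)
        weights = (w, 1.0 - w)

        model = copy.deepcopy(meta_model)  # warm start from the meta-model
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        for _ in range(update_steps):      # corresponds to --update_step_test
            reinforce_step(model, optimizer, weights)
        submodels.append((weights, model))
    return submodels
```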

Transfer-Learning

For training all submodels via transfer learning, initializing from the well-trained first submodel, on MOTSP-50 instances with 10 update steps per subproblem:

python run.py --graph_size 50 --is_load --load_path "model-0.pt" --CUDA_VISIBLE_ID "0" --is_transfer --is_test --update_step_test 10
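The transfer-learning baseline differs from meta-learning in where each submodel starts: instead of adapting from shared meta-parameters, the first submodel (model-0.pt above) is trained fully and each later subproblem is warm-started from its neighbor. A minimal sketch under the same hypothetical reinforce_step assumption:

```python
import copy

import torch


def transfer_train(first_model, reinforce_step, n_subproblems=100,
                   update_steps=10):
    """Sequential transfer: each subproblem is initialized from its
    neighbor's fine-tuned parameters rather than from a meta-model."""
    model = first_model                    # e.g. loaded from model-0.pt
    submodels = [model]
    for i in range(1, n_subproblems):
        w = i / (n_subproblems - 1)
        weights = (w, 1.0 - w)

        model = copy.deepcopy(model)       # warm start from the previous submodel
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        for _ in range(update_steps):
            reinforce_step(model, optimizer, weights)
        submodels.append(model)
    return submodels
```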

Acknowledgements

Thanks to wouterkool/attention-learn-to-route for getting me started with the code for the Attention Model.
