Experiments on Model-Agnostic Meta-Learning (PDF)

(Demo: a robot arm opening a door in Meta-World)


About

This repo explores the effects of the model-agnostic meta-learning algorithms MAML & ANIL on vision and meta-RL tasks. It includes representation-similarity experiments for further insight into the network dynamics during meta-testing (adaptation), as well as continual-learning experiments that test the algorithms' ability to adapt without forgetting. Feel free to check out the full M.Sc. thesis here.

Check out some of our runs & results in our W&B Project

Built With

Installation

NOTE: This repo was built with Meta-World experiments in mind and thus depends on the proprietary environment MuJoCo. You therefore need to obtain a MuJoCo license and install it before you can run the experiments in this repo. (Yes, it is required even for the vision and RL-but-not-Meta-World experiments; separating out this dependency is a work in progress.)

  1. Clone the repo
git clone https://github.com/Kostis-S-Z/exploring_meta.git

  2. (Optional, but highly recommended) Make a virtual environment

python3 -m venv meta_env (or virtualenv meta_env)

  3. Install Cython

pip install cython

  4. Install core dependencies

pip install -r requirements.txt

  5. (Optional) Install W&B to easily track models (see the sketch after these steps)

pip install wandb
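
Below is a minimal sketch of tracking a run with wandb. The project name, config keys, and logged metric are illustrative placeholders, not this repo's actual configuration:

import wandb

# Hypothetical project/config names; the repo's scripts define their own.
wandb.init(project="exploring_meta", config={"outer_lr": 0.1, "adapt_steps": 3})

for iteration in range(10):
    meta_loss = 1.0 / (iteration + 1)  # stand-in for a real meta-training loss
    wandb.log({"meta_loss": meta_loss, "iteration": iteration})

wandb.finish()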

Roadmap Overview

Currently implemented:

Vision: Omniglot, Mini-ImageNet

  • MAML with CNN
  • ANIL with CNN

RL: Particles2D, MuJoCo Ant, Meta-World

  • MAML-PPO
  • MAML-TRPO
  • ANIL-PPO
  • ANIL-TRPO
  • Baselines: PPO, TRPO & VPG
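
The MAML/ANIL implementations build on the learn2learn library (inferred from the Acknowledgements below). As a rough sketch of the MAML inner/outer loop such implementations follow (the model, data, and hyperparameters here are illustrative placeholders, not this repo's actual training code):

import torch
import learn2learn as l2l

model = torch.nn.Linear(10, 5)                        # placeholder model
maml = l2l.algorithms.MAML(model, lr=0.1)             # inner-loop learning rate
opt = torch.optim.Adam(maml.parameters(), lr=0.001)   # outer-loop optimizer
loss_fn = torch.nn.CrossEntropyLoss()

for iteration in range(100):
    opt.zero_grad()
    meta_loss = 0.0
    for task in range(4):                             # tasks per meta-batch
        learner = maml.clone()                        # adapt a copy, keep meta-params intact
        x_s, y_s = torch.randn(8, 10), torch.randint(0, 5, (8,))  # fake support set
        x_q, y_q = torch.randn(8, 10), torch.randint(0, 5, (8,))  # fake query set
        for step in range(3):                         # inner-loop adaptation steps
            learner.adapt(loss_fn(learner(x_s), y_s))
        meta_loss = meta_loss + loss_fn(learner(x_q), y_q)
    meta_loss.backward()                              # backprop through adaptation
    opt.step()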

Possible future extensions

Check the corresponding branches:

  • Procgen (it works, but incredibly difficult to train)
  • MAML/ANIL - VPG (very unstable)

Project Structure

For a walk-through of the code for the vision datasets, check here.

Modules (core dependencies)

core_functions: Functions necessary to train & evaluate RL and Vision

utils: Functions for data processing, environment creation, etc. (not algorithm-specific)

Running scripts

baselines: Scripts to train & evaluate RL and vision

rl: Scripts to train & evaluate meta-RL

vision: Scripts to train & evaluate meta-vision

misc_scripts: Scripts to run Continual Learning & Representation Change experiments (see the sketch below) or to render trained policies in Meta-World
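
A common metric for such representation-change analyses is linear CKA (centered kernel alignment), which compares a layer's activations before and after adaptation. Whether this exact metric matches what the repo's scripts use is an assumption; the sketch below only illustrates the idea:

import numpy as np

def linear_cka(x, y):
    # x, y: (n_samples, n_features) activation matrices from two network states.
    x = x - x.mean(axis=0)                            # center each feature
    y = y - y.mean(axis=0)
    cross = np.linalg.norm(x.T @ y, ord='fro') ** 2
    norm_x = np.linalg.norm(x.T @ x, ord='fro')
    norm_y = np.linalg.norm(y.T @ y, ord='fro')
    return cross / (norm_x * norm_y)                  # 1.0 = identical up to rotation/scaling

# E.g.: compare a layer's activations before vs. after adaptation.
before, after = np.random.randn(100, 64), np.random.randn(100, 64)
print(linear_cka(before, after))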

Training example

python3 rl/maml_trpo.py --outer_lr 0.1 --adapt_steps 3

If you get import errors, make sure the project root is added to your PYTHONPATH, either as the content root in your IDE's run configuration or via your .bashrc:

export PYTHONPATH="${PYTHONPATH}:~/Path/to/project"
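
Alternatively, a per-script workaround in plain Python (standard library only; not necessarily what this repo's scripts do themselves) is to prepend the project root to sys.path before the other imports:

import sys
from pathlib import Path

# Assumes the script lives one directory below the project root (e.g. rl/).
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))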

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Konstantinos Saitas - Zarkias (Kostis S-Z)

Feel free to open an issue for anything related to this repo.

Project Link: https://github.com/Kostis-S-Z/exploring_meta

Acknowledgements

Many thanks to fellow researchers & colleagues at RISE and KTH, and to Séb Arnold from the learn2learn team, for insightful discussions about this project.
