
Meta-Learning Curiosity Algorithms

This is the code for "Meta-Learning Curiosity Algorithms" by Ferran Alet*, Martin Schneider*, Tomas Lozano-Perez, and Leslie Kaelbling, published at ICLR 2020 (and previously in the Meta-Learning and Reinforcement Learning workshops at NeurIPS 2019).

See the paper here.

Overview of Running an Experiment

  1. Specify your operations in
  2. Specify a list of operations to use in
  3. Run to synthesize programs with your list of operations.
  4. Specify an experiment in
  5. Run to search over your program space.
  6. Use scripts/ to analyze your results.
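As a rough illustration of the pipeline above (not the repo's actual API; every name here is hypothetical), the flow can be sketched as: enumerate candidate programs from an operation list, prune duplicates by their output signature on fixed fake inputs, then rank the survivors with a selection metric.

```python
# Toy sketch of the experiment pipeline: synthesize -> prune -> search.
# OPERATIONS, synthesize, signature, etc. are illustrative stand-ins.
import itertools

# A toy "operation list": each operation maps a float to a float.
OPERATIONS = {
    "negate": lambda x: -x,
    "square": lambda x: x * x,
    "add_one": lambda x: x + 1.0,
}

def synthesize(depth):
    """All programs = sequences of operations up to `depth` long."""
    for length in range(1, depth + 1):
        yield from itertools.product(OPERATIONS, repeat=length)

def run(program, x):
    for name in program:
        x = OPERATIONS[name](x)
    return x

def signature(program, probes=(-2.0, 0.5, 3.0)):
    """Fingerprint a program by its outputs on fixed fake inputs."""
    return tuple(round(run(program, x), 6) for x in probes)

def prune_duplicates(programs):
    seen, kept = set(), []
    for p in programs:
        sig = signature(p)
        if sig not in seen:
            seen.add(sig)
            kept.append(p)
    return kept

candidates = prune_duplicates(synthesize(depth=2))
# Stand-in "search": rank candidates by a toy metric (output at x = 1).
best = max(candidates, key=lambda p: run(p, 1.0))
print(best)  # → ('add_one', 'square')
```

Note that `("negate", "square")` is pruned here because squaring a negated input produces the same signature as `("square",)` alone; this mirrors the duplicate-pruning-by-signature idea the repo describes.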

Code Overview

  * The data structures manipulated by program operations.
  * Executes a Program object.
  * Takes a list of programs and finds and prunes duplicates by testing each program on a fake environment and comparing output signatures.
  * Our gridworld environments.
  * The module that runs intrinsic curiosity programs and reward combiner programs.
  * A configuration file that specifies the operations that can appear in different program classes.
  * The operations that are composed to create a program.
  * The regressor that predicts program performance.
  * A configuration file for experimenting with performance regressors.
  * The core abstraction of a program, represented by a DAG of operations.
  * The search module that synthesizes programs.
  * The types that operations in our language can output.
  * The module that runs an agent in an environment.
  * The module that searches over a program space, given a list of programs, an environment, and a program selection metric.
  * A configuration file for simulating program searches.
  * A module that simulates searching through programs.
  * The module that takes a set of synthesized programs and initiates a search over them.
  * The configuration file for testing and searching over programs.
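The core abstraction described above, a program represented as a DAG of operations, can be sketched roughly as follows. The `Node` and `Program` classes here are illustrative stand-ins, not the repo's actual classes: each node names an operation and its input nodes, and execution evaluates nodes recursively with memoization so shared subgraphs are computed once.

```python
# Minimal sketch of a program as a DAG of operations (hypothetical classes).
class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # callable applied to the input values
        self.inputs = inputs  # upstream Node objects

class Program:
    def __init__(self, output_node):
        self.output = output_node

    def execute(self, feed):
        """feed maps input Nodes to values; returns the output value."""
        cache = dict(feed)
        def evaluate(node):
            if node not in cache:
                cache[node] = node.op(*(evaluate(i) for i in node.inputs))
            return cache[node]
        return evaluate(self.output)

# Example DAG computing (x + y) * x, where x feeds two different nodes.
x = Node(None)
y = Node(None)
total = Node(lambda a, b: a + b, (x, y))
prod = Node(lambda a, b: a * b, (total, x))
program = Program(prod)
print(program.execute({x: 3, y: 4}))  # → 21
```

The memoizing `cache` is what makes this a DAG evaluator rather than a tree evaluator: a node reused by several downstream operations (like `x` above) is only evaluated once per execution.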

