muellerjohannes/geometry-natural-policy-gradients

Geometry and convergence of natural policy gradient methods

This repository contains Julia code for the experimental evaluation of the convergence results obtained in the manuscript *Geometry and convergence of natural policy gradient methods* (joint work with Guido Montúfar). In particular, `utilities.jl` contains the basic functions for computing the reward and the state-action frequencies of a policy. The different Gram matrices used in the NPG methods are implemented for tabular softmax parametrizations in `utilitiesNPGSoftmax.jl` and can easily be generalized to arbitrary parametrizations. The code for the experiments, which implements vanilla PG as well as Kakade's, Morimura's, and $\sigma$-NPG methods, is contained in `illustrationsUnregularizedReward.jl`.
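
For orientation, all of these methods share the generic natural-gradient update $\theta_{k+1} = \theta_k + \eta \, G(\theta_k)^{+} \nabla R(\theta_k)$, where $G$ is the method-specific Gram matrix (e.g. Kakade's Fisher information matrix) and $G^{+}$ its pseudoinverse. The following is a minimal Julia sketch of such a step for a tabular softmax policy; the function names (`npg_step`, `softmax`) are illustrative and not the repository's actual API:

```julia
using LinearAlgebra

# Softmax of a vector of logits (numerically stabilized).
softmax(z) = (e = exp.(z .- maximum(z)); e ./ sum(e))

# One generic NPG step: θ ← θ + η G(θ)⁺ ∇R(θ).
# `G` is the Gram matrix of the chosen method and `grad_R` the
# (vanilla) gradient of the reward with respect to θ.
function npg_step(θ::Vector, grad_R::Vector, G::Matrix; η = 0.1)
    return θ .+ η .* (pinv(G) * grad_R)
end
```

Choosing $G$ as the identity recovers vanilla PG, while Kakade's, Morimura's, and the $\sigma$-NPG methods correspond to different Gram matrices as defined in `utilitiesNPGSoftmax.jl`.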
