This repository contains Julia code for the experimental evaluation of the convergence results obtained in the manuscript Geometry and convergence of natural policy gradient methods (joint work with Guido Montúfar). In particular, utilities.jl contains the basic functions that compute the reward and the state-action frequency of a policy. The different Gram matrices used in the NPG methods are implemented for tabular softmax parametrizations in utilitiesNPGSoftmax.jl and can easily be generalized to arbitrary parametrizations. The code for the experiments, which implements vanilla PG, Kakade's, Morimura's and
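The quantities the paragraph above refers to can be illustrated on a tiny tabular MDP. The repository's code is in Julia; the following is only a Python sketch with hypothetical names (`state_action_frequency`, `kakade_gram`, the toy kernel `P`, reward `r`, discount `gamma`, and initial distribution `mu` are all invented for illustration and are not the repository's API). It computes the discounted state-action frequency and reward of a tabular softmax policy, builds the Fisher information Gram matrix underlying Kakade's NPG, and takes one natural gradient step with a finite-difference reward gradient.

```python
import numpy as np

# Hypothetical toy MDP (illustrative only): nS states, nA actions,
# transition kernel P[t, s, a] = P(t | s, a), instantaneous reward r[s, a],
# discount factor gamma, initial state distribution mu.
nS, nA, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.random((nS, nS, nA))
P /= P.sum(axis=0, keepdims=True)        # normalize over next states t
r = rng.random((nS, nA))
mu = np.full(nS, 1.0 / nS)

def policy(theta):
    """Tabular softmax policy pi[a, s] from logits theta[s, a]."""
    e = np.exp(theta - theta.max(axis=1, keepdims=True))
    return (e / e.sum(axis=1, keepdims=True)).T

def state_action_frequency(pi):
    """Discounted state-action frequency eta[s, a]; sums to one."""
    P_pi = np.einsum('tsa,as->ts', P, pi)            # P(t | s) under pi
    # Discounted state distribution rho = (1-gamma) (I - gamma P_pi)^{-1} mu.
    rho = (1 - gamma) * np.linalg.solve(np.eye(nS) - gamma * P_pi, mu)
    return rho[:, None] * pi.T                       # eta(s,a) = rho(s) pi(a|s)

def reward(theta):
    """Expected discounted return R(theta) = <eta, r> / (1 - gamma)."""
    return np.sum(state_action_frequency(policy(theta)) * r) / (1 - gamma)

def kakade_gram(theta):
    """Gram (Fisher information) matrix of Kakade's NPG for tabular softmax:
    G = sum_s rho(s) E_{a ~ pi(.|s)}[ dlog pi  dlog pi^T ], block-diagonal
    in the states because d_{theta[s',.]} log pi(a|s) vanishes for s' != s."""
    pi = policy(theta)
    rho = state_action_frequency(pi).sum(axis=1)
    G = np.zeros((nS * nA, nS * nA))
    for s in range(nS):
        p = pi[:, s]
        block = np.diag(p) - np.outer(p, p)          # softmax covariance
        idx = slice(s * nA, (s + 1) * nA)
        G[idx, idx] = rho[s] * block
    return G

# One natural policy gradient step; the gradient of the reward is taken by
# finite differences purely for brevity (the experiments use exact gradients).
theta = np.zeros((nS, nA))
eps, step = 1e-6, 0.1
g = np.zeros(nS * nA)
for i in range(nS * nA):
    d = np.zeros(nS * nA)
    d[i] = eps
    g[i] = (reward(theta + d.reshape(nS, nA)) - reward(theta)) / eps
theta_new = theta + step * (np.linalg.pinv(kakade_gram(theta)) @ g).reshape(nS, nA)
```

Since the softmax parametrization is overcomplete, the per-state Fisher blocks are singular (the all-ones logit direction lies in their kernel), which is why the update uses a pseudoinverse rather than a plain inverse.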
muellerjohannes/geometry-natural-policy-gradients