Founder of FAR AI @AlignmentResearch
- FAR AI
- Berkeley, California
- http://gleave.me
Pinned
- hill-a/stable-baselines (forked from openai/baselines): a fork of OpenAI Baselines containing implementations of reinforcement learning algorithms.
- HumanCompatibleAI/adversarial-policies: find the best response to a fixed policy in multi-agent RL.
- HumanCompatibleAI/imitation: clean PyTorch implementations of imitation and reward learning algorithms.
- HumanCompatibleAI/seals: benchmark environments for reward modelling and imitation learning algorithms.
- HumanCompatibleAI/population-irl (experimental): inverse reinforcement learning from trajectories generated by multiple agents with different (but correlated) rewards.