
Leave-No-Trace

😉 🚮 Learning to Reset for Safe and Autonomous Reinforcement Learning

We based our work on this paper. The authors aimed to provide a framework for safe learning, allowing the agent to learn good behaviour quickly and to run for longer episodes while avoiding numerous resets, which can be costly.
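Concretely, the Leave-No-Trace idea is to train a reset policy alongside the forward policy and to use the reset critic's value estimate to detect potentially irreversible actions before they are taken. Below is a minimal illustrative sketch of that early-abort rule; the names (`forward_policy`, `reset_policy`, `reset_q`, `q_min`) are our own and do not correspond to this repository's code.

```python
def choose_action(obs, forward_policy, reset_policy, reset_q, q_min):
    """Pick an action with Leave-No-Trace's early-abort check (sketch).

    `reset_q(obs, action)` is the reset critic's estimate of how well the
    reset policy could return to the initial state after taking `action`.
    If that value falls below the threshold `q_min`, we abort the forward
    episode and let the reset policy act instead.
    """
    action = forward_policy(obs)
    if reset_q(obs, action) < q_min:
        # Early abort: the proposed action looks hard to undo.
        return reset_policy(obs), True   # (action, aborted)
    return action, False                 # safe to continue forward
```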

How to run the code

You can launch the project by running the notebook on Google Colab. The notebook has two main sections: the experiments section and the results section. The notebook starts by cloning this repository. If you only want to visualize the results, you can go directly to the results section (after installing and importing the dependencies). If you want to launch your own experiments, you first need to rename or remove the old results folder.
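For reference, the first cells of the notebook look roughly like the sketch below; the exact commands may differ, and we assume here that the old results folder is literally named `results` (with `results_old` as an example new name).

```python
# Clone the repository and move into it (Colab cells).
!git clone https://github.com/steph1793/Leave-No-Trace.git
%cd Leave-No-Trace

# Rename the shipped results folder before launching new experiments,
# so that fresh runs do not clash with the old results.
import os
import shutil

if os.path.exists('results'):
    shutil.move('results', 'results_old')
```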

Our experiments

We train our agent to walk using the Soft Actor-Critic (SAC) method. We experimented with this method alone, and we also tested it within the Leave-No-Trace framework.
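As a reminder of what SAC optimizes, here is a hedged sketch of the soft Bellman target used to train the critics (PyTorch-style; the names `actor`, `q1_target`, `q2_target`, `alpha` and the batch layout are illustrative assumptions, not this repository's API).

```python
import torch

def sac_critic_target(batch, actor, q1_target, q2_target, alpha, gamma=0.99):
    """Soft Bellman backup: y = r + gamma * (min Q'(s', a') - alpha * log pi(a'|s'))."""
    with torch.no_grad():
        next_action, next_logp = actor.sample(batch['next_obs'])
        q_next = torch.min(q1_target(batch['next_obs'], next_action),
                           q2_target(batch['next_obs'], next_action))
        # The entropy term (-alpha * log pi) rewards more stochastic policies.
        y = batch['reward'] + gamma * (1.0 - batch['done']) * (q_next - alpha * next_logp)
    return y
```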

Some highlighted results

Visualization of the SAC-only method (after 1100 steps)


Visualization of SAC with the Leave-No-Trace framework (after 800 steps)

