[Question] Why is the SARSA algorithm not available in Stable Baselines 3 #786
Comments
stable-baselines3 is mainly for "deep" reinforcement learning algorithms, where algorithms like A2C, DQN, and PPO are the prominent "baselines". While SARSA is applicable, perhaps as a modification of DQN, it has not been used much in the deep learning literature and has not received enough attention for somebody to add it to SB3. However, if you feel like experimenting, we could review a PR to add it to our contrib package :).
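To make the "modification of DQN" remark concrete: the only difference between SARSA and Q-learning (the update behind DQN) is the bootstrap target. A minimal tabular sketch, not SB3 code, with illustrative function names:

```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy SARSA step: the target bootstraps on the action
    actually taken in the next state, Q[s', a']."""
    target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Off-policy Q-learning step (the rule DQN approximates):
    the target bootstraps on the greedy action, max_a' Q[s', a']."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

A deep SARSA would follow the same pattern: replace the `max` in DQN's target network computation with the value of the next action the behavior policy actually selects.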
Hello,
Thanks for your answers.
@araffin can you highlight the differences between SB3 and Mushroom RL? Edit: Would you agree with their chart here?
I recommend reading our paper/blog post (the link is in the README); we also have a related issue here: #20. The table you are showing is about Stable Baselines (SB2), not SB3.
Actually, I am new to the field of Reinforcement Learning, and I have often encountered the SARSA algorithm in books, tutorials, videos, etc. It seems to be a very popular on-policy learning algorithm for reinforcement learning. However, I noticed that it is not available in Stable Baselines 3. Is there a specific reason for that, and is it planned for inclusion in future updates of Stable Baselines 3?