
Categorical Policy for Discrete Action Spaces? #86

Closed
RylanSchaeffer opened this issue Mar 23, 2022 · 9 comments

Comments

RylanSchaeffer commented Mar 23, 2022

I want to explore policy gradient and actor-critic agents on GridWorld environments. To that end, I want to parameterize the policy as a Categorical distribution over actions at each state. How do I do this?

Looking through the available policies, policy.td_policy.Boltzmann appears to perform softmax(logits), which is what I have in mind, but its logits appear to be dictated by Q values:

q_beta = self._approximator.predict(state, **self._predict_params) * self._beta(state)
q_beta -= q_beta.max()
qs = np.exp(q_beta)

I don't want the policy gradient agents to learn a Q function, and the fact that Boltzmann is under td_policy is making me hesitate because policy gradient methods are not a form of TD learning.

RylanSchaeffer changed the title from "Categorical Policy for Discrete Action Spaces" to "Categorical Policy for Discrete Action Spaces?" on Mar 23, 2022
boris-il-forte (Collaborator) commented

Actually, you cannot use the Boltzmann policy for policy gradient methods, as its interface lacks the gradient of the log-probability. I will add it to the to-do list.
For (deep) actor-critic, there is a Boltzmann policy that is not based on Q-functions: the "BoltzmannTorchPolicy".
In principle, you could use that. However, you would need to handle the discrete state space yourself, which may not be easy (I have never tried a deep actor-critic on a grid world, for obvious reasons... but I understand the curiosity to try things).

RylanSchaeffer (Author) commented Mar 24, 2022

Actually, you cannot use the Boltzmann policy for policy gradient methods, as its interface lacks the gradient of the log-probability.

OK, thanks for clarifying! When I tried last night, I found that the Boltzmann policy had no _approximator and wasn't working.

I need this categorical policy for discrete actions and discrete state spaces for my research and I'm happy to implement it myself. How would you recommend doing so?

To be clear, I don't think anything should need to be deep in gridworld. Tabular PG and Tabular AC methods should (at least in principle) be applicable to gridworld, right?

RylanSchaeffer (Author) commented Mar 24, 2022

As a workaround, would the following approach yield a PG agent for a discrete state space and discrete action space?

Use a BoltzmannTorchPolicy with a torch approximator that is an S x A matrix. Then, in each state s, the policy would slice the correct row from the matrix, softmax it, and sample from a Categorical distribution. A rough sketch of what I mean is below.
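Concretely, something like this is what I have in mind for the approximator (the constructor signature and how it would be wrapped by BoltzmannTorchPolicy / TorchApproximator are guesses on my part, not mushroom-rl API):

    # Sketch of a torch "network" that is just a learnable S x A table of logits,
    # indexed by the integer state. Softmax and sampling would be left to the policy.
    import torch
    import torch.nn as nn

    class TabularLogits(nn.Module):
        def __init__(self, input_shape, output_shape, n_states, **kwargs):
            super().__init__()
            n_actions = output_shape[0]
            # One row of logits per state.
            self.logits = nn.Parameter(torch.zeros(n_states, n_actions))

        def forward(self, state, **kwargs):
            # state: batch of integer state indices (possibly passed as floats).
            return self.logits[state.long().flatten()]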

RylanSchaeffer (Author) commented

To explain why, for my research, I want to test policy gradient and actor-critic methods against value-based approaches in tabular domains with discrete action spaces. Is there a way to do this using mushroom-rl?

I'm happy to implement whatever I need to myself, if you give me an outline of what needs to change where (and what pitfalls to watch out for)!

RylanSchaeffer (Author) commented Mar 24, 2022

I just tried this myself, and hit the following error inside REINFORCE:

    self.sum_d_log_pi = np.zeros(self.policy.weights_size)
AttributeError: 'BoltzmannTorchPolicy' object has no attribute 'weights_size'

Specifically stemming from the method:

    def _init_update(self):
        self.sum_d_log_pi = np.zeros(self.policy.weights_size)

boris-il-forte (Collaborator) commented

The simplest approach is to implement the ParametricPolicy interface with an appropriate policy. This will allow standard policy gradient to work, at least as far as I know. If that's not the case, you may need to change the policy gradient approaches to support your setting, or implement another approximator that supports integer inputs.
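For example, a minimal sketch of a tabular softmax policy under that interface might look like the following. It is illustrative only, not part of mushroom-rl, and it assumes the interface expects __call__, draw_action, diff_log, set_weights, get_weights and a weights_size property (the last of which the REINFORCE error above references); the expected array shapes are also assumptions.

    # Illustrative tabular softmax policy: one logit per (state, action) pair.
    import numpy as np
    from mushroom_rl.policy import ParametricPolicy

    class TabularSoftmaxPolicy(ParametricPolicy):
        def __init__(self, n_states, n_actions):
            super().__init__()
            self._n_states = n_states
            self._n_actions = n_actions
            self._theta = np.zeros((n_states, n_actions))  # logits table

        def _probs(self, state):
            logits = self._theta[int(np.asarray(state).item())]
            logits = logits - logits.max()  # numerical stability
            p = np.exp(logits)
            return p / p.sum()

        def __call__(self, state, action):
            # Probability of taking `action` in `state`.
            return self._probs(state)[int(np.asarray(action).item())]

        def draw_action(self, state):
            p = self._probs(state)
            return np.array([np.random.choice(self._n_actions, p=p)])

        def diff_log(self, state, action):
            # Gradient of log pi(a|s) w.r.t. theta:
            # one-hot(s, a) minus pi(.|s) on row s, zero elsewhere.
            s = int(np.asarray(state).item())
            a = int(np.asarray(action).item())
            grad = np.zeros_like(self._theta)
            grad[s, a] = 1.0
            grad[s, :] -= self._probs(state)
            return grad.flatten()

        def set_weights(self, weights):
            self._theta = np.asarray(weights).reshape(self._n_states, self._n_actions)

        def get_weights(self):
            return self._theta.flatten()

        @property
        def weights_size(self):
            return self._n_states * self._n_actions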

I want to remark that you can define the policy however you want; there's no need to use any of the mushroom tools (though they can be helpful for more complex scenarios).

For deep actor-critic, you can use the torch Boltzmann policy and define an appropriate network that makes sense for an integer input. In general, this doesn't seem like a very good idea, but I won't comment on that further, as it's outside the scope of mushroom and a very particular setting. You probably cannot expect a deep actor-critic approach to have amazing results on grid worlds...
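For instance, a sketch of a network that accepts an integer state could look like the following (all names and the constructor signature here are assumptions for illustration, not mushroom-rl API):

    # Sketch: one-hot encode the integer state, then map it to action logits.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class OneHotLogitsNetwork(nn.Module):
        def __init__(self, input_shape, output_shape, n_states, n_features=32, **kwargs):
            super().__init__()
            self._n_states = n_states
            n_actions = output_shape[0]
            self._hidden = nn.Linear(n_states, n_features)
            self._out = nn.Linear(n_features, n_actions)

        def forward(self, state, **kwargs):
            # state: batch of integer state indices (possibly stored as floats).
            one_hot = F.one_hot(state.long().flatten(), num_classes=self._n_states).float()
            return self._out(torch.relu(self._hidden(one_hot)))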

RylanSchaeffer (Author) commented Mar 24, 2022

You probably cannot expect a deep actor-critic approach to have amazing results on grid worlds...

I think you're misunderstanding what I want to do.

The goal is simple: REINFORCE in Gridworld using a Categorical policy. No deep learning required. This is maybe the simplest application of REINFORCE and I'm finding it surprisingly difficult to implement.

boris-il-forte (Collaborator) commented

The solution for this is described in the post above: implement a Boltzmann policy using the ParametricPolicy interface.
In general, we don't support policy search approaches for finite state spaces. There are many reasons for this choice. You can try to adapt the existing code following the solution above, but I cannot guarantee it will work.

My comment on deep actor-critic was that these approaches, even without deep networks, are unlikely to work here. They will also be pretty complex to implement in this setting, requiring many complicated assumptions.

Classical actor-critic, on the other hand, can be ported in a similar way once you get standard policy search working.

RylanSchaeffer (Author) commented

Ok thank you.
