Confused about the observation dropout #32

Open
wzn0828 opened this issue Oct 8, 2023 · 3 comments


wzn0828 commented Oct 8, 2023

From the paper, I understand how observation dropout works. But during training, when decoding h and s into the outputs, why is observation dropout not used?

[screenshot: training code where the posterior sample is decoded into the action and BEV outputs]

@anthonyhu (Collaborator)

Hello,

The observation dropout is only used to drop the input observation fed to the posterior. There is no need to additionally apply dropout when feeding the state to the policy.


wzn0828 commented Oct 20, 2023

> Hello,
>
> The observation dropout is only used to drop the input observation fed to the posterior. There is no need to additionally apply dropout when feeding the state to the policy.

However, in the screenshot, you always use the posterior sample to decode the action and BEV. That is to say, you do not apply the dropout when decoding the action and BEV; you only apply it to the input of the GRU.

@anthonyhu (Collaborator)

That's correct: dropout is only used for the input of the GRU. During training, the action and BEV outputs are predicted from the posterior.
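
To summarize the mechanism settled in this thread, here is a minimal sketch of one recurrent step. All module and variable names (`prior_net`, `posterior_net`, `decoders`, `observation_dropout_p`, ...) are hypothetical placeholders rather than MILE's actual API, and falling back to the prior sample when the observation is dropped is one plausible realisation of "dropping the input observation fed to the posterior", not necessarily the exact implementation in the repository:

```python
import torch

def rollout_step(gru, prior_net, posterior_net, decoders, h, obs_embedding, action,
                 observation_dropout_p=0.25, training=True):
    """One recurrent state-space step with observation dropout (illustrative only).

    h:             deterministic recurrent state from the previous step
    obs_embedding: encoded observation for the current step
    action:        previous action
    All module names are hypothetical placeholders, not MILE's actual code.
    """
    # Prior: latent sample predicted without looking at the current observation.
    prior_sample = prior_net(h, action)

    # Posterior: latent sample inferred from the current observation.
    posterior_sample = posterior_net(h, obs_embedding, action)

    # Decoders (action head, BEV head, ...) always consume the posterior during training,
    # as confirmed above.
    outputs = {name: dec(h, posterior_sample) for name, dec in decoders.items()}

    # Observation dropout only affects what is fed back into the GRU:
    # with probability p, behave as if the observation were unavailable and
    # propagate the prior sample instead of the posterior sample.
    if training and torch.rand(1).item() < observation_dropout_p:
        sample_for_gru = prior_sample
    else:
        sample_for_gru = posterior_sample

    h_next = gru(torch.cat([sample_for_gru, action], dim=-1), h)
    return h_next, outputs
```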
