potential bug in the encoder #2

Closed
xmax1 opened this issue Sep 9, 2021 · 1 comment

xmax1 commented Sep 9, 2021

for level in range(1, self._levels):
  for i_dl in range(self._dense_layers - 1):
    hidden = self.get('h{}_dense'.format(5 + (level - 1) * self._dense_layers + i_dl),
                      tfkl.Dense, self._embed_size, activation=tf.nn.relu)(hidden)
  if self._dense_layers > 0:
    hidden = self.get('h{}_dense'.format(4 + level * self._dense_layers),
                      tfkl.Dense, feat_size, activation=None)(hidden)
  layer = hidden

From line 39 onwards in the cnn.py Encoder(), the depth of these layers increases with the level because the hidden variable is overwritten rather than reset at each level. At large n_levels and n_enc_dense_layers this results in a very deep network mapping from the observation embedding to the latent space. I'm not sure it's intentional and it doesn't seem to serve a purpose, i.e. is there a reason the higher latent spaces need a deeper function to map from the embedding?

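To make the concern concrete, here is a minimal standalone sketch (hypothetical helper, not the repository's code) that just counts the Dense layers sitting between the observation embedding and each level's features under the loop structure quoted above; because hidden carries over between levels, the count grows linearly with the level:

def depth_per_level(levels, dense_layers):
    # Count the Dense layers applied to `hidden` before each level's features
    # are produced, mirroring the loop above: `hidden` is not reset, so every
    # level inherits the depth accumulated by all previous levels.
    depths = []
    depth = 0
    for level in range(1, levels):
        depth += max(dense_layers - 1, 0)   # relu Dense layers from the inner loop
        if dense_layers > 0:
            depth += 1                      # final linear Dense producing this level's features
        depths.append((level, depth))
    return depths

print(depth_per_level(levels=4, dense_layers=3))
# [(1, 3), (2, 6), (3, 9)] -- level 3 sees a stack three times as deep as level 1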

xmax1 commented Sep 9, 2021

Ah, it's intentional: it gives a ladder-VAE-type structure. Closing.

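For readers unfamiliar with the term, a rough conceptual sketch of the ladder idea (illustrative only, with made-up shapes and random weights, not the repository's code): a single deterministic path is refined level by level, and each level reads its features off the current state of that path, so reusing hidden across levels is what produces the progressively deeper mappings.

import numpy as np

# Illustrative only: a shared deterministic path refined per level,
# with a level-specific readout taken after each refinement step.
rng = np.random.default_rng(0)
embed_size, feat_size, levels = 8, 4, 4

hidden = rng.normal(size=embed_size)           # stands in for the observation embedding
level_feats = []
for level in range(1, levels):
    W = rng.normal(size=(embed_size, embed_size))
    hidden = np.maximum(W @ hidden, 0.0)       # refine the shared path (deeper at each level)
    W_out = rng.normal(size=(feat_size, embed_size))
    level_feats.append(W_out @ hidden)         # this level's features see `level` refinements

print([f.shape for f in level_feats])          # [(4,), (4,), (4,)]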
xmax1 closed this as completed Sep 9, 2021