NEAT Autoencoder / embedding? #10
Hello,

I'd like to evolve a network with a topological constraint for its hidden layers. In a dense network this would be some middle layer that forces a reduction of the dimensionality of the information. In NEAT I'm not sure what this would mean, as the network topology has so much more freedom.

Is this possible in general? If so, can it be accomplished with radiate? I'm not very familiar with the library; I'm only just getting started. Thank you!

Comments

Hey, sorry I'm just seeing this. I'm not totally sure I understand the topological constraints you're talking about. Does this just mean you would like to use a traditional dense layer without evolving the topology? If so, that is possible. With radiate you can still stack layers. Say you want a network with three dense layers: you can stack a dense_pool layer, which will evolve its topology, then a normal dense layer, which will not evolve its topology and will act like a traditional feed-forward layer, and then another dense_pool layer, which again will evolve its topology. This way your second layer maintains its dimensionality through evolution while the first and last layers remain free to evolve. The readme in the models folder has an example of stacking layers like this: https://github.com/pkalivas/radiate/tree/master/radiate/src/models

Yes, this works for my use case! Thank you, I'll take a look at the example!

Great! I'm going to close this issue. Go ahead and open another one if anything comes up.