NEAT Autoencoder / embedding? #10

Closed
wbrickner opened this issue May 30, 2021 · 3 comments

@wbrickner

Hello,

I'd like to evolve a network with a topological constraint on its hidden layers.

In a dense network this would be a middle layer that forces a reduction in the dimensionality of the information.

In NEAT I'm not sure what this would mean, as the network topology has so much more freedom.

Is this possible in general? If so, can it be accomplished using radiate? I'm not very familiar with the library; I'm just getting started.

Thank you!

@pkalivas
Owner

pkalivas commented Jun 30, 2021

Hey, sorry I'm just seeing this.

I'm not totally sure I understand the topological constraints you are talking about. Does this just mean you would like to use a traditional dense layer without evolving the topology? If so, that is possible.

With radiate you can still stack layers. Say you want a network with three layers: you can stack a dense_pool layer, which will evolve its topology, then a normal dense layer, which will not evolve its topology and acts like a traditional feed-forward layer, then another dense_pool layer, which again will evolve its topology. This way the second layer maintains its dimensionality through evolution while the first and last layers are still free to evolve.

The readme in the models folder has an example of stacking layers like this: https://github.com/pkalivas/radiate/tree/master/radiate/src/models
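
For concreteness, a minimal sketch of that kind of stack might look like the snippet below, written in the builder style used in the models readme. Only the `dense` and `dense_pool` layer names come from this thread; the import path, `input_size`, the layer sizes, and the `Activation` variants are assumptions based on that readme example, so check it for the exact current API.

```rust
// Bottleneck-style NEAT stack (sketch): an evolving layer feeding a fixed,
// low-dimensional dense layer, followed by another evolving layer.
use radiate::prelude::*; // import path assumed from the crate's examples

fn main() {
    let net = Neat::new()
        .input_size(10)                       // assumed builder method for the input dimension
        .dense_pool(10, Activation::Relu)     // topology of this layer evolves
        .dense(2, Activation::Sigmoid)        // fixed 2-unit layer; topology does not evolve
        .dense_pool(10, Activation::Sigmoid); // topology of this layer evolves

    // The evolution setup (population, fitness function, etc.) is omitted here;
    // the models readme linked above shows the full loop.
    let _ = net;
}
```

The fixed `dense(2, ...)` layer in the middle is what plays the role of the low-dimensional embedding discussed above.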

@wbrickner
Author

Yes, this works for my use case!
The key is that some hidden layer maintains its dimensionality throughout evolution, and that this dimensionality is much smaller than the input and output dimensions.

Thank you, I'll take a look at the example!

@pkalivas
Owner

Great! I'm going to close this issue. Go ahead and open another one if anything comes up.
