NAS for (Variational) Autoencoders #573
Comments
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@tik0 Have you found out anything on this one? Have you been able to use AutoKeras to create an autoencoder, or did you use something different?
I was trying to create an LSTM autoencoder, but couldn't figure out how to do it. Do you have a clue now?
@daviembrito I ended up using a generic hyperparameter optimizer with a simple, handcrafted, chain-structured search space. It worked reasonably well in my use case, but it also introduces a large human bias with quite a limited search space of possible network architectures. You can have a look at it here: https://github.com/maechler/a2e |
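The "simple, handcrafted, chain-structured search space" mentioned above can be sketched without any framework: enumerate symmetric encoder/decoder width chains that shrink monotonically down to a bottleneck. This is a hypothetical illustration of the idea, assuming dense layers and a mirrored decoder; it is not code from the linked repository:

```python
from itertools import product

def chain_search_space(input_dim, widths, bottlenecks, max_depth=3):
    """Enumerate symmetric autoencoder architectures as width chains.

    Each candidate is a strictly decreasing chain of hidden-layer widths
    ending in a bottleneck; the decoder mirrors the encoder back up.
    """
    candidates = []
    for depth in range(1, max_depth + 1):
        for chain in product(widths, repeat=depth):
            for b in bottlenecks:
                full = (input_dim,) + chain + (b,)
                # enforce strictly decreasing widths down to the bottleneck
                if all(a > c for a, c in zip(full, full[1:])):
                    encoder = list(full)
                    decoder = encoder[-2::-1]  # mirror, ending at input_dim
                    candidates.append(encoder + decoder)
    return candidates

space = chain_search_space(input_dim=64, widths=[32, 16], bottlenecks=[8, 4])
```

A generic hyperparameter optimizer then only has to pick one chain from `space` (plus activation, learning rate, etc.), which keeps the search cheap but, as noted, bakes in a strong human bias.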
Feature Description
Training of predefined models with constraints (e.g. bottleneck layers) and additional losses.
Reason
It is unclear how to use AutoKeras on models with particular constraints, such as autoencoders or bottleneck networks.
Furthermore, a variational autoencoder requires an additional regularization loss and a sampling layer.
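The two extra pieces a VAE needs are the reparameterized sampling step and the KL regularizer. A minimal NumPy sketch of both, independent of any AutoKeras API (function names are illustrative):

```python
import numpy as np

def sample_latent(mu, log_var, rng=None):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian, summed over latent dims."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)

mu = np.zeros((2, 3))
log_var = np.zeros((2, 3))
z = sample_latent(mu, log_var)          # shape (2, 3)
kl = kl_divergence(mu, log_var)         # zero when q equals the prior
```

In a Keras model these would live in a custom sampling layer and an `add_loss` term; any NAS over VAEs would have to keep both fixed while searching the encoder/decoder stacks.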
Solution
Some API like:

```python
# define model constraints first
ak_model = ak.GenericModel(my_keras_model)
ak_model.fit(x_train, y_train, time_limit=12 * 60 * 60)  # y_train may also be None
ak_model.final_fit(x_train, y_train, x_test, y_test, retrain=True)
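One way the `y_train=None` convention above could work internally is to fall back to reconstruction targets, i.e. train the model to map `x` back to `x`. A hypothetical sketch of that behavior (`GenericModel` is the proposed API from this issue, not an existing AutoKeras class):

```python
import numpy as np

class GenericModel:
    """Hypothetical wrapper sketching the proposed API, not AutoKeras."""

    def __init__(self, keras_model):
        self.model = keras_model

    def fit(self, x_train, y_train=None, time_limit=None):
        # Autoencoder convention: with no labels, reconstruct the input.
        if y_train is None:
            y_train = x_train
        # a real implementation would run the architecture search here
        return x_train, y_train

x = np.ones((4, 8))
wrapper = GenericModel(keras_model=None)
xt, yt = wrapper.fit(x)  # targets default to the inputs
```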
Alternative Solutions
Additional Context