
How to apply deeplift on autoencoder made by keras? #23

Open
xiaofangyuan opened this issue Apr 20, 2017 · 1 comment

Comments

@xiaofangyuan

Hello! How can I apply DeepLIFT to an autoencoder built with Keras?
The standard Keras autoencoder currently looks like this:

```python
from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 32  # size of the encoded representation (as in the Keras tutorial)

input_img = Input(shape=(784,))

# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)

# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)

# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
```

So how do I convert the autoencoder layers to DeepLIFT format?
Thank you so much!

@AvantiShri
Collaborator

Hi Fangyuan,

(Moving from the private email thread to here.) We are currently focused on studying how to define the reference in an automated way for DeepLIFT, so the most I can do in the near term is provide a very alpha implementation of autoencoder support in DeepLIFT. If you are comfortable using an alpha implementation, I can provide it, but I can't make any promises about how well it will work. If you want results in the next couple of weeks, my short-term recommendation would be to use the gradients of the autoencoder loss with respect to the input as a quick way to get a sense of which inputs are most relevant. I can give you pointers on how to compute those gradients if you are interested in that route.
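As a rough illustration of the gradient-based stopgap suggested above, here is a minimal NumPy sketch (not DeepLIFT, and not code from this project): it builds a tiny one-hidden-layer autoencoder with random weights, computes the gradient of the reconstruction loss with respect to the input via the chain rule, and verifies it against finite differences. All names and dimensions here are invented for illustration; in practice you would take the gradient through your trained Keras model instead.

```python
import numpy as np

# Hypothetical tiny autoencoder: x -> relu(W1 x + b1) -> sigmoid(W2 h + b2)
rng = np.random.default_rng(0)
d_in, d_hid = 8, 3
W1 = rng.normal(scale=0.5, size=(d_hid, d_in)); b1 = np.zeros(d_hid)
W2 = rng.normal(scale=0.5, size=(d_in, d_hid)); b2 = np.zeros(d_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    z1 = W1 @ x + b1
    h = np.maximum(z1, 0.0)        # relu
    z2 = W2 @ h + b2
    xhat = sigmoid(z2)             # reconstruction
    return z1, h, z2, xhat

def loss(x):
    *_, xhat = forward(x)
    return 0.5 * np.sum((xhat - x) ** 2)   # MSE reconstruction loss

def input_gradient(x):
    """dL/dx: backprop through the network plus the direct -(xhat - x) term."""
    z1, h, z2, xhat = forward(x)
    dxhat = xhat - x                       # dL/dxhat
    dz2 = dxhat * xhat * (1.0 - xhat)      # through the sigmoid
    dh = W2.T @ dz2
    dz1 = dh * (z1 > 0)                    # through the relu
    return W1.T @ dz1 - dxhat              # network path + direct path

x = rng.uniform(size=d_in)
g = input_gradient(x)

# Sanity-check the analytic gradient against central finite differences.
eps = 1e-6
g_fd = np.array([
    (loss(x + eps * np.eye(d_in)[i]) - loss(x - eps * np.eye(d_in)[i])) / (2 * eps)
    for i in range(d_in)
])
assert np.allclose(g, g_fd, atol=1e-5)

# Gradient magnitude as a crude per-feature relevance score.
saliency = np.abs(g)
```

The absolute gradient is only a local sensitivity measure, which is exactly the limitation DeepLIFT's reference-based scores are meant to address, but it is quick to compute and often enough for a first look at which inputs matter.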
