
Memory increases and iterations slow down while using visualize_saliency #223

Open

Delarti opened this issue May 7, 2020 · 0 comments
Delarti commented May 7, 2020

Hi,
I am using a slightly modified version of visualize_saliency to retrieve, for one image, the gradients of every neuron of the last dense layer of my model. My dense layer has 4096 neurons, so I call visualize_saliency 4096 times per image. For my purposes, at each iteration I set the weights of the neuron whose gradients I want to retrieve to 1 and all the other neurons' weights to 0.
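A minimal sketch of that per-iteration re-weighting step (the function name and kernel shape are my assumptions, not from the code above):

```python
import numpy as np

def isolate_neuron(kernel_shape, neuron_idx):
    """Build a (inputs, units) dense kernel that is 1.0 in column
    `neuron_idx` and 0.0 everywhere else, so the layer's output (and any
    gradient taken through it) reflects only that one neuron."""
    kernel = np.zeros(kernel_shape, dtype=np.float32)
    kernel[:, neuron_idx] = 1.0
    return kernel
```

At iteration `i` the layer would then be re-weighted before each saliency call, e.g. `layer.set_weights([isolate_neuron((n_inputs, 4096), i), bias])`.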

However, while running, my memory usage increases and the iterations slow down:
After 4 minutes running: 4.71% completed - 80.93 min remaining
After 8 minutes running: 7.52% completed - 98.10 min remaining
After 12 minutes running: 9.58% completed - 113.62 min remaining
After 16 minutes running: 11.25% completed - 126.22 min remaining
...
and the remaining time seems to increase quadratically over time.
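That quadratic pattern is consistent with a graph that grows by a constant amount per call: if call i must evaluate a graph whose size is proportional to i, the cumulative runtime is 1 + 2 + … + n ≈ n²/2. A toy cost model of that behavior:

```python
def cumulative_cost(n_calls, cost_per_op=1.0):
    """Total runtime of n_calls when each call adds a constant number of
    ops to the graph and call i's runtime is proportional to the graph
    size at that point (i.e. costs i * cost_per_op)."""
    return sum(i * cost_per_op for i in range(1, n_calls + 1))
```

Doubling the number of calls roughly quadruples the total time, matching the observed ever-growing remaining-time estimate.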

I saw in /issues/71 that Hommoner found the leak occurs because every time the line `opt = Optimizer(input_tensor, losses, wrt_tensor=penultimate_output, norm_grads=False)` is called, the TensorFlow graph gains new tensors, and that a workaround is to build `opt` only once and keep it in memory.

However, I cannot do that because I change the weights of my last dense layer every iteration, so I need to call Optimizer() every iteration...

Any suggestions on how to resolve this?
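One possible workaround (a sketch under assumptions, not something confirmed in this thread) is to cap graph growth by working in chunks: compute gradients for a few hundred neurons, save the results, call `tf.keras.backend.clear_session()`, reload the model, and continue. The chunking driver itself is framework-agnostic:

```python
def run_in_chunks(process_chunk, n_items=4096, chunk_size=256):
    """Drive `process_chunk(start, stop)` over [0, n_items) in chunks.

    `process_chunk` is expected to rebuild the model (and hence the
    keras-vis Optimizer) from scratch, compute gradients for neurons
    start..stop-1, and clear the backend session before returning, so
    each chunk starts from an empty graph and total graph growth stays
    bounded by chunk_size rather than n_items.
    """
    results = []
    for start in range(0, n_items, chunk_size):
        stop = min(start + chunk_size, n_items)
        results.extend(process_chunk(start, stop))
    return results
```

`chunk_size` trades model-reload overhead against peak graph size; 256 here is an arbitrary assumption to tune.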

@Delarti Delarti closed this as completed May 12, 2020
@Delarti Delarti reopened this May 12, 2020