Regardless of class output, fixations are the same #2

Open
joeyearsley opened this issue Oct 2, 2017 · 2 comments

joeyearsley (Contributor) commented Oct 2, 2017

I hard-coded the output points to be:
[[900], [900], [900], [900], [900]]

and received exactly the same points as if I had hard-coded:
[[100], [100], [100], [100], [100]]

Is this a fluke or did you see similar results?

The only explanation I can think of is that the original activations match the learned filters and hence fire more strongly, providing the majority of the points regardless of the output, while the output only helps guide the remaining points to areas of interest. A toy sketch of the experiment follows.
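
Since the repository's exact entry points aren't quoted in this thread, here is a minimal, self-contained numpy toy of the experiment above; the shapes, the heavy-tailed stand-in activations, and the top-k selection rule are all assumptions for illustration, not the project's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
fc7 = rng.lognormal(sigma=2.0, size=4096)   # heavy-tailed stand-in for post-ReLU fc7 activations
W = rng.normal(size=(1000, 4096))           # stand-in fc8 weights (one row per class)

def backtrack_fc(start_unit, k=50):
    """Pick the k fc7 neurons contributing most to the hard-coded output unit."""
    contrib = W[start_unit] * fc7           # per-neuron evidence = weight * activation
    return set(np.argsort(contrib)[-k:].tolist())

pts_900 = backtrack_fc(900)                 # output point pinned to class 900
pts_100 = backtrack_fc(100)                 # output point pinned to class 100
print(f"{len(pts_900 & pts_100)} of 50 selected neurons coincide")
```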

mopurikreddy (Collaborator) commented

We have observed similar behaviour too; it is not a fluke. As you suspected, we can explain this behaviour by the distributed nature of the learned representations (e.g., the 4096-D feature in fc7 of AlexNet, and similarly fc6, etc.). These activations encode the input information across multiple neurons in the same layer (among the 4096 available neurons in the case of fc7), and it is not possible to separate the neurons into semantic groups. That is, it is not possible to identify the neurons that fire only for a specific visual stimulus (say, dog images). Because of this, the strongly fired neurons are not different for different categories during backtracking. Therefore we end up tracing the same path onto the object regions regardless of the label.
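
One way to check this explanation on the toy sketch above (again an assumption-laden stand-in, not the project's code): the strongly fired neurons are fixed by the forward pass and know nothing about the label, so we can measure how much of each backtracked set is simply the strongest activations.

```python
# The top activations are label-independent; if they dominate the
# weight * activation product, the backtracked sets must largely coincide.
top_by_activation = set(np.argsort(fc7)[-50:].tolist())   # fixed by the forward pass
for label in (100, 900):
    shared = top_by_activation & backtrack_fc(label)
    print(f"label {label}: {len(shared)}/50 backtracked neurons are among the strongest activations")
```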

insikk commented May 24, 2018

@mopurikreddy I want to get the pixel-level evidence for a feed-forward pass without specifying a particular label.
If changing the label does not change the visualization much, is there a correct way to avoid supplying initial values for the points at the prediction layer?
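
For what it's worth, one label-free option consistent with this thread would be to let the network pick its own starting point, i.e. backtrack from the argmax of the prediction layer. On the toy sketch above (not the repository's API) that looks like:

```python
logits = W @ fc7                     # toy forward pass through the prediction layer
predicted = int(np.argmax(logits))   # let the model choose the starting unit
points = backtrack_fc(predicted)     # evidence for the feed-forward prediction itself
```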
