Anonymous review 2 #4

Closed
goodfeli opened this Issue Oct 31, 2017 · 2 comments

@goodfeli
Collaborator

goodfeli commented Oct 31, 2017

This is an anonymous review that I am sharing from a peer reviewer. They sent it to me as an e-mail with formatted text. Since I don't know of a way to copy-paste a formatted e-mail into Markdown, I'm just sharing it as an RTF document:

https://drive.google.com/a/google.com/file/d/0Bz8CQw2wxLVwUEF5TTY3SjRoUkU/view?usp=sharing

@colah
Member

colah commented Oct 31, 2017

Transcription of review from the RTF document, to preserve record of review in GitHub issues.


Thoughts on first read-through:

  • The first set of images are really crisp!
    • Clarify what dataset this network was trained on? Are all of these
      examples from the same network? I’m assuming this is a classification
      network, could be nice to make this concrete
  • Intro paragraph
    • “state-of-the-art visualizations”, by what metric?
  • Optimization Objectives
    • Graphics are great
    • “What do we want examples of?” Good overview that highlights all
      the possibilities of feature visualization. Would be good to state which
      objective(s) the paper is focusing on.
  • Why visualize by optimization
    • The images of dataset vs. optimization work well
  • Diversity
    • Interesting that one neuron has these clusters of images at min/max
      activation (ie flowers and clocks). Could make it clear what “facets” means
      -- the kinds of images that result in different strength activations of the
      specific neuron? Or is it the kinds of images that result in maximum
      activation?
  • Achieving Diversity with Optimization
    • Graphics: make it more clear that rightmost images = w/o diversity,
      and center images are with diversity
    • What diversity term is used in the examples?
  • Interaction between Neurons
    • The combination of two neurons is very cool
    • Last paragraph is a good description of the current challenges
  • The Enemy of Feature Visualization
    • A bit unclear on why optimization worked in above paragraphs, but
      didn’t over here?
    • Nice, clear overview on regularization options
  • Preconditioning and Parameterization
    • Paragraph on transforming gradient is great, makes sense
  • Conclusion
    • Possibly include a section on future work in feature visualization?
    • What high-level problems need to be solved?

High-level thoughts

  • Paper contributes:
    • A method to get diverse feature visualizations via optimization
    • If the feature visualization is a sort of generative network, simple
      arithmetic or interpolations can be done in feature space
    • An overview of prior methods
  • The graphics and visualizations are great
@ludwigschubert
Member

ludwigschubert commented Nov 3, 2017

Thank you for your high-quality feedback! We went through every bullet point and made numerous changes to the article based on your review; they can be found collectively in pull request #7.

We are especially grateful both for the insightful critique and for the hints about sections that may be hard to understand: we do not just want to be factually correct, but also approachable.

Clarify what dataset this network was trained on? Are all of these examples from the same network? I’m assuming this is a classification network, could be nice to make this concrete

We use GoogLeNet trained on ImageNet.

We added additional captioning on the hero diagram mentioning both the model and the dataset it was trained on in aa394e7.

“state-of-the-art visualizations”, by what metric?

Since we know of no quantitative metric that could settle this yet, and this is a subjective judgement, we changed the wording to "high-quality".

Would be good to state which objective(s) the paper is focusing on.

We have added that we are mostly using the channel objective.
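For concreteness, the channel objective scores an image by the mean activation of one channel of a convolutional layer, and the visualization is the image that maximizes that score. A minimal sketch (the function and argument names here are ours for illustration, not from the article's code):

```python
import numpy as np

def channel_objective(layer_acts, channel):
    """Score to maximize: mean activation of one channel.

    layer_acts: (H, W, C) activations of a conv layer for one image.
    Optimizing the input image to increase this score yields the
    channel visualizations discussed in the article.
    """
    return layer_acts[:, :, channel].mean()
```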

Interesting that one neuron has these clusters of images at min/max activation (ie flowers and clocks). Could make it clear what “facets” means-- the kinds of images that result in different strength activations of the specific neuron? Or is it the kinds of images that result in maximum activation?

We mean the latter when we say facets and have added additional clarification for the term in the Diversity section.

Graphics: make it more clear that rightmost images = w/o diversity, and center images are with diversity

We have explicitly labelled those images as "Simple Optimization" and "Optimization with diversity" to make the difference clearer.

What diversity term is used in the examples?

We have added a footnote with the mathematical definition of our diversity term: cosine dissimilarity between the flattened Gram matrices.
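As a sketch of that definition (names and shapes are ours for illustration, not the article's code): compute each visualization's Gram matrix over channels, flatten and normalize it, and penalize the mean pairwise cosine similarity between the flattened Grams, which pushes the batch of visualizations apart:

```python
import numpy as np

def gram_vector(acts):
    """Flattened Gram matrix of a (H, W, C) activation tensor."""
    a = acts.reshape(-1, acts.shape[-1])   # (H*W, C)
    gram = a.T @ a                         # (C, C) channel co-occurrences
    return gram.ravel()

def diversity_penalty(batch_acts):
    """Mean pairwise cosine similarity between flattened Grams.

    Subtracting this term from the optimization objective encourages
    the visualizations in the batch to differ from one another.
    """
    grams = np.stack([gram_vector(a) for a in batch_acts])
    grams /= np.linalg.norm(grams, axis=1, keepdims=True)
    sim = grams @ grams.T                  # pairwise cosine similarities
    n = len(batch_acts)
    off_diag = sim.sum() - np.trace(sim)   # exclude self-similarity
    return off_diag / (n * (n - 1))
```

Two identical activation tensors give a penalty of 1 (maximally similar), so minimizing the penalty drives the Gram matrices apart.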

A bit unclear on why optimization worked in above paragraphs, but didn’t over here?

We have added a note to the introduction of the optimization section stating that naive optimization doesn't work, linking to the section about challenges in feature visualization by optimization.

Possibly include a section on future work in feature visualization? What high-level problems need to be solved?

We have added a section explicitly enumerating areas of future work we believe to be important.

Thank you again for your time and helpful comments! We think the article was significantly improved by incorporating your feedback. :-)


@ludwigschubert ludwigschubert removed their assignment Nov 3, 2017
