Anonymous Review 1 #1

Closed
goodfeli opened this Issue Oct 30, 2017 · 3 comments

Collaborator

goodfeli commented Oct 30, 2017

The following peer review was solicited as part of the Distill review process. Some points in this review were clarified by an editor after consulting the reviewer.

The reviewer chose to keep anonymity. Distill offers reviewers a choice between anonymous review and offering reviews under their name. Non-anonymous review allows reviewers to get credit for the service they offer to the community.

Distill is grateful to the reviewer for taking the time to review this article.

Conflicts of Interest: Reviewer disclosed no conflicts of interest.

The images shown in this paper are truly fascinating, and provide an interesting and useful way to visualize the behavior of neurons in a neural network. Overall, this article is well written and contains many useful results, but in many areas it omits background material, which makes it difficult to follow exactly what is occurring. When clarified, this article would be highly useful to those not already familiar with the area.

There seems to be a short background paragraph that's missing from the introduction. This article immediately jumps into talking about feature visualization through optimization, but skips some important questions. What network is being used? On what dataset? Similarly, I presume that "conv2d0", "mixed3a", etc are layers of a network. If I look it up, I can see that it's Inception, but it would be good to state this explicitly. Similarly, what does "mixed4b,c" mean? Similarly, some figures are not clearly explained. In the figure talking about different objectives, what do x, y, z, and n represent? Is the layer_n the same n as the softmax[n]? (I assume not.)
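To make the setup being discussed concrete: "feature visualization through optimization" means taking gradient-ascent steps on the *input* so as to maximize some activation. The sketch below is a toy stand-in (a single ReLU unit with a random weight vector, not a real GoogLeNet layer) just to show the shape of the procedure:

```python
import numpy as np

# Hypothetical stand-in for a single network unit: ReLU of a fixed linear
# map. In the article, the objective would be a real GoogLeNet activation.
rng = np.random.default_rng(0)
w = rng.normal(size=64)

def activation(x):
    return max(float(w @ x), 0.0)          # ReLU(w . x)

def grad(x):
    # gradient of ReLU(w . x) with respect to the input x
    return w if w @ x > 0 else np.zeros_like(w)

# Feature visualization by optimization: start from a small input nudged
# into the unit's active regime, then take gradient-ascent steps on it.
x = 0.01 * w                               # small start with w . x > 0
start = activation(x)
for _ in range(100):
    x += 0.1 * grad(x)                     # plain ascent, no regularization
```

The result is an input that strongly activates the chosen unit; the article's later sections are about why this plain version fails on real networks and what regularization fixes it.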

I was caught off guard that after spending the majority of the paper describing how optimization can be used to produce these figures, they then say that directly optimizing for the objectives doesn't work. It might have been nice to mention this fact earlier, and just forward reference it -- if someone were to stop reading halfway through they would just think that by performing direct optimization they'd be golden. The next section does survey regularization techniques (but even then, none of the regularized figures look nearly as nice as the ones prior). This seems to be the most important part of the paper, but I feel like I get the fewest details about how this is done. It also leaves me wondering which regularization methods were used to make the earlier figures.

When discussing preconditioning, I get the feeling that this is an important aspect of generating high-quality images, but I don't actually know what is happening. How is something spatially decorrelated? What is done to minimize over this space? Similarly, what does "Let's compare the direction of steepest descent in a decorrelated parameterization to two other directions of steepest descent" mean -- I would expect there is only one steepest direction. How do you pick two other steepest ones that aren't the same? What does "compare" mean, and how do you compare to the two others? This sentence seems important, but I don't understand what it is trying to say. (CSS issue: footnotes 7 and 8 do not display in Chrome.) On the whole, this section could be better explained.

Minor comments to author:

  • At various points, the authors make statements saying "it would be impossible to list all the things people have tried." or "The truth is that we have almost no clue how to select meaningful directions" or "and we don’t have much understanding of their benefits yet". These statements are definitely true -- but they seem out of place and unnecessarily negative.
  • I'm not sure what "As is often the case in visualizing neural networks, this problem was initially recognized and addressed by Nguyen, Yosinski, and collaborators." is supposed to mean. I take it to mean that Nguyen, Yosinski, and collaborators often do the first work in visualization areas, is this right?
  • I didn't quite understand the purpose of the italicized text under the headers.
  • The phrase is "adversarial examples" not "adversarial counterexamples".
  • "And if we want to create examples of output classes from a classifier, we have two options:" but nothing follows the colon, there's an image (with 5 figures, not 2). Was the sentence cut off?
Member

ludwigschubert commented Nov 1, 2017

Thank you for your high-quality in-depth feedback! We went through every sentence of it and have made numerous changes to the article based upon the review you provided. These can collectively be found in the pull request #5 and are mentioned on a per-commit basis in this response.

We are especially grateful both for the insightful critique as well as the hints about sections that may be hard to understand—we do not just want to be factually correct, but also approachable.

Major comments

"What network is being used? On what dataset? Similarly, I presume that "conv2d0", "mixed3a", etc are layers of a network."

We use GoogLeNet trained on ImageNet.

We added additional captioning on the hero diagram mentioning both the model and the dataset it was trained on. We also expanded the confusingly named mixed4b,c to just list the names of both layers in aa394e7.

"In the figure talking about different objectives, what do x, y, z, and n represent? Is the layer_n the same n as the softmax[n]? (I assume not.)"

We have changed the class index from n to k to clarify it is different from the layer index n. We also added a legend in 8ed5819 so as not to rely on conventions only.
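For readers following along, the index convention can be sketched concretely (the arrays below are random stand-ins, not real GoogLeNet activations): n indexes a layer, x, y, z index a spatial position and channel within it, and k indexes a class.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random stand-ins for real activations: acts[n] is the (H, W, C) tensor
# of layer n; logits holds the pre-softmax class scores.
acts = [rng.normal(size=(7, 7, 16)) for _ in range(3)]
logits = rng.normal(size=10)

def neuron_objective(n, x, y, z):
    """Single neuron: layer n, spatial position (x, y), channel z."""
    return acts[n][x, y, z]

def channel_objective(n, z):
    """Whole channel z of layer n, summed over spatial positions."""
    return acts[n][:, :, z].sum()

def class_logit_objective(k):
    """Pre-softmax score of class k -- note k indexes classes, not layers."""
    return logits[k]
```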

"I was caught off guard that after spending the majority of the paper describing how optimization can be used to produce these figures, they then say that directly optimizing for the objectives doesn't work. It might have been nice to mention this fact earlier, and just forward reference it -- if someone were to stop reading half way through they would just think that by performing direct optimization they'd be golden. "

We added a paragraph linking to the "The Enemy of Feature Visualization" section directly after the first diagram. We also explicitly list the challenges with feature visualization at the end of the introduction from 8ed5819 on.

"The next section does survey regularization techniques (but even then, none of the regularized figures look nearly as nice as the ones prior). This seems to be the most important part of the paper, but I feel like i get the fewest details about how this is done. It also leaves me wondering which regularization methods were used to make the earlier figures."

We try to make this clearer in multiple areas:

  • We have clarified that the diagrams in the regularization techniques section are enlarged to show artifacts more clearly, which makes them look worse.
  • We added a statement that the images in the article were created using the preconditioner and transformation robustness. A footnote describes the exact transforms used to create the images, as well as which optimizer and learning rates were used.
  • We added step numbers to all of these diagrams.
  • Finally, we reworded the transformation section to give more detail on how the regularization techniques tie into the optimization process.
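The way transformation robustness ties into the optimization loop can be sketched as follows. This is a minimal illustration, not the article's actual implementation: a fixed random linear map stands in for a network activation, and only circular jitter (one of the several stochastic transformations) is applied before each step.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for a network activation: a fixed linear objective over a
# 16x16 "image" (an assumption for illustration only).
w = rng.normal(size=(16, 16))

def objective(img):
    return float((w * img).sum())

def jitter(img, shift):
    """Circularly shift the image -- one of the stochastic transformations
    (jitter, rotate, scale) applied before each optimization step."""
    return np.roll(img, shift, axis=(0, 1))

img = rng.normal(scale=0.01, size=(16, 16))
for _ in range(100):
    shift = tuple(rng.integers(-2, 3, size=2))
    _value = objective(jitter(img, shift))   # forward pass on a jittered copy
    # Because the toy objective is linear, the gradient of
    # objective(jitter(img, shift)) w.r.t. img is w shifted back.
    grad = np.roll(w, (-shift[0], -shift[1]), axis=(0, 1))
    img += 0.1 * grad
```

The effect is that the optimized image must score well under many small transformations at once, which suppresses the high-frequency artifacts that plain optimization produces.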

"[…] but I don't actually know what is happening. How is something spatially decorelated? What is done to minimize over this space?"

On reviewing the section on preconditioning we agree that we were trying to be very general, potentially at the expense of concreteness and approachability. We rewrote the section in 29e8b19 to be more explicit about how this technique works when applied to images. We also added additional footnotes going into more detail on the derivation of these techniques.

"Similarly, what does "Let's compare the direction of steepest descent in a decorrelated parameterization to two other directions of steepest descent" -- I would expect there is only one steepest direction."

We rewrote this section and simplified the diagram to explain how the Fourier Transform induces a different metric under which the direction of steepest descent is different from the regular (L2) gradient in 29e8b19.
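As a rough sketch of what that rewrite describes (assuming a simple 1/frequency scaling; the exact scaling used for the article may differ): the image is stored as Fourier-domain parameters, each frequency is rescaled so all frequencies carry comparable energy, and an inverse FFT produces the pixels the network sees. Gradient steps on the Fourier parameters then correspond to preconditioned steps on the pixels, which is why the steepest-descent direction under this parameterization differs from the plain L2 gradient.

```python
import numpy as np

def fourier_image(spectrum_params, size):
    """Decode frequency-domain parameters into a real image, scaling each
    frequency by roughly 1/f so low and high frequencies are balanced."""
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.rfftfreq(size)[None, :]
    freqs = np.sqrt(fx ** 2 + fy ** 2)
    scale = 1.0 / np.maximum(freqs, 1.0 / size)  # avoid division by zero at DC
    return np.fft.irfft2(spectrum_params * scale, s=(size, size))

rng = np.random.default_rng(0)
size = 32
shape = (size, size // 2 + 1)                    # rfft2 spectrum layout
params = rng.normal(size=shape) + 1j * rng.normal(size=shape)
img = fourier_image(params, size)                # pixels fed to the network
```

In actual use one would backpropagate the visualization objective through `fourier_image` and update `params`, not the pixels, directly.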

Minor comments

"At various points, the authors make statements saying "it would be impossible to list all the things people have tried." or "The truth is that we have almost no clue how to select meaningful directions" or "and we don't have much understanding of their benefits yet". These statements are definitely true -- but they seem out of place and unnecessarily negative."

We have reworded those sections while keeping their intent to show that these areas offer ample opportunity for further work in 29e8b19.

"I'm not sure what "As is often the case in visualizing neural networks, this problem was initially recognized and addressed by Nguyen, Yosinski, and collaborators." is supposed to mean. I take it to mean that Nguyen, Yosinski, and collaborators often do the first work in visualization areas, is this right?"

That is right. We reordered this sentence to be more clear in 29e8b19.

We realize this kind of praise is unusual in an academic context. We are making a deliberate choice to do it because we think it fosters a healthy and collegial atmosphere. We think Nguyen, Yosinski, and their collaborators have made truly outstanding contributions and some other parts of our article could be read as critiquing their work, so it seems especially important to make it clear that we value their work.

"I didn't quite understand the purpose of the italicized text under the headers."

After your prompt we removed those sections in 1c583d2.

"The phrase is "adversarial examples" not "adversarial counterexamples"."

You are right—fixed in 94ddda1.

"And if we want to create examples of output classes from a classifier, we have two options:" but nothing follows the colon, there's an image (with 5 figures, not 2). Was the sentence cut off?"

In this case, the colon referred to the following diagram rather than a continued sentence. However, we found we could reword it to be more clear in 91edce5.

Thank you again for your time and helpful comments! We think the article was significantly improved by incorporating your feedback. :-)

s-l-lee commented Nov 8, 2017

"The phrase is "adversarial examples" not "adversarial counterexamples"."

You are right—fixed in 94ddda1.

I think you missed one instance of this due to an alternate spelling (or typo? I lack the domain knowledge to tell), "Adverserial counterexamples". I found only one other instance of "adverserial" in the article—in the JavaScript-generated d-figure with id="steepest-descent".

Member

ludwigschubert commented Nov 8, 2017

@s-l-lee thanks for the heads-up—this is fixed in 496aa8f and will be live soon.
