Review #1 #50
The following peer review was solicited as part of the Distill review process. The review was formatted by the editor to help with readability.
The reviewer chose to waive anonymity. Distill offers reviewers a choice between anonymous review and offering reviews under their name. Non-anonymous review allows reviewers to get credit for the service they offer to the community.
Distill is grateful to the reviewer, Pang Wei Koh, for taking the time to write such a thorough review.
I found the article interesting and thought-provoking, and the visualizations were eye-catching and very helpful. Thanks to the authors for putting in the effort to write this article and make all of the associated notebooks and visualizations!
There are two main ways that I think the article could be improved:
Here are more details on these.
I think the biggest missing thing is a big-picture view about why different parameterizations might lead to different results, and why we might prefer one type of parameterization over another. For example, after reading the intro, I was still not sure about the motivation for the work. The argument went something like, we should use different parameterizations because we can. But what are examples of different parameterizations and why would we expect them to work better/differently?
The most persuasive argument (to me) was the one advanced in the CPPN section: that parameterizations impose constraints on the optimized image that better fit the kinds of pictures we'd like to see. This could be: pictures that are more realistic (CPPN); pictures that obey some sort of 3D smoothness (style transfer 3D section); etc. A variant of this argument can also be applied to the shared parameterization section. So perhaps this intuition could be given at the start of the article, together with more signposting of the kinds of parameterizations that the rest of the article would consider.
I found it hard to follow some parts of the article. The argument roughly makes sense, but it was difficult for me to precisely understand what the authors were trying to convey. For example, take the first paragraph of the second section (Aligned Neuron Interpolation):
Similarly, the article talks about a "decorrelated parameterization" that somehow works better, but doesn't explain why (except by a brief reference to checkerboard gradients, which I'm guessing a decorrelated parameterization doesn't suffer from, but I'm not sure why that would be the case).
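To make the reviewer's question concrete: one common reading of "decorrelated parameterization" is optimizing an image in a frequency-scaled Fourier basis rather than in raw pixel space. The sketch below is a hypothetical NumPy illustration of that idea, not the authors' actual implementation; the function name and shapes are assumptions for exposition only.

```python
import numpy as np

def decorrelated_image(spectrum_params, size):
    """Map unconstrained Fourier-space parameters to a real image.

    Scaling each frequency component by roughly 1/f whitens the
    spectrum, so gradient steps taken on `spectrum_params` no longer
    over-emphasize the high frequencies that tend to produce
    checkerboard-like artifacts in pixel-space optimization.
    """
    # Frequency grid for a real 2D FFT of a (size, size) image.
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.rfftfreq(size)[None, :]
    freqs = np.sqrt(fx ** 2 + fy ** 2)

    # 1/f scaling; clamp the DC term to avoid division by zero.
    scale = 1.0 / np.maximum(freqs, 1.0 / size)

    # Parameters live in the scaled spectral basis; the image is the
    # inverse real FFT of the scaled spectrum.
    spectrum = spectrum_params * scale
    return np.fft.irfft2(spectrum, s=(size, size))

# Illustrative usage: complex parameters of shape (size, size // 2 + 1)
# yield a real (size, size) image.
params = np.random.randn(64, 33) + 1j * np.random.randn(64, 33)
img = decorrelated_image(params, 64)
```

Under this reading, the checkerboard issue is a property of the pixel basis (gradients concentrate energy in neighboring-pixel oscillations), which the frequency rescaling damps.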
I'd suggest going through the article carefully and making sure that every sentence clearly follows from the previous one, especially for someone with only the minimum level of background knowledge.
Comments on figures
First figure: I was initially a bit confused by why the RGB representation was in the middle of the figure, instead of on the left. (I realized later that you're using neural networks that still operate on the RGB representation; so perhaps it's worth clarifying that you're only considering different parameterizations for the visualization, instead of the training.)
Second/third figures: These were broken for me (see attached screenshot). I only saw grey blocks.
Fourth figure: For some choices of style/content, including the first/default one, the decorrelated space picture looked exactly the same as the image space picture (and both looked bad; see attached screenshot). Is this a bug?
CPPN figure: I can't see the last figure of this section (there's just a big blank space). I'm also not sure what objective you're optimizing for in this section -- how are the pictures being generated?
"to fuel a small artistic movement based neural art." -> "to fuel a small artistic movement based on neural art."
Thank you for the thoughtful review! We agree with your feedback, and it helped us focus on improving the weaknesses of our article. In particular:
We now provide a description of four different reasons why different parameterizations can be used. We then provide examples for all of them and, for each section, we mention which of these four categories the developed parameterization falls into. (PR #53; Commits: fb9fcb6, 6bb3446, 60d2642)
This was excellent feedback. This section in particular seems to have assumed a lot of prior knowledge about visualizing neuron interactions. We rewrote the section to address this in PR #64. We now also explicitly link to the much longer discussion of these ideas in the Feature Visualization article.
Great point! We revised the section to clarify this in #67. :)
We added a footnote to better explain the reference to the work of Olah et al., and we improved the description of the benefits with respect to a pixel-space optimization.
We improved the caption to highlight that the parameterization goes beyond the RGB space the network was trained on. We also highlighted that, by using parameterizations that can be "plugged into" the RGB parameterization, we have more flexibility in optimizing for existing network architectures.
These problems should be fixed now.