
Gradient Ascent for Caffe and Torch + Layer Viz #1009

Closed
wants to merge 1 commit

Conversation

@Lucaszw (Contributor) commented Aug 27, 2016

Do Not Merge!
Gradient Ascent For Torch and Caffe + Layer Visualizations.
Things left to do:

  • Add receptive field calculation for Torch. Perhaps this step could be moved out of tools and into the views before rendering (giving the user options such as # of deviations, size of the bounding box, etc.).
  • Customizable parameters: currently these are hardcoded into the Python and Lua scripts. It would be nice to have a few sets of options available from the UI, like "Best AlexNet" or "Best GoogLeNet", etc.
  • Additional regularization options (I didn't have time to implement all the params from the Deep Vis Toolbox).
  • Use the mean file during Gradient Ascent. (The mean file is sent as a param for both jobs but not used, as it seems to make certain black-and-white jobs like LeNet (MNIST) return odd visualizations.) Perhaps this could be toggled with a checkbox in the UI.
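For reference, the core of the gradient-ascent step this PR implements is a small loop that nudges an input image in the direction of a unit's activation gradient, with L2 decay as one of the Deep Vis Toolbox-style regularizers mentioned above. A minimal, framework-free sketch (NumPy only; `gradient_ascent` and `grad_fn` are hypothetical names introduced here, and in DIGITS the gradient would actually come from Caffe's `net.backward()` or Torch's `:backward()`):

```python
import numpy as np

def gradient_ascent(grad_fn, shape, steps=100, lr=1.0, l2_decay=0.01, seed=0):
    """Iteratively adjust an input to maximize a unit's activation.

    grad_fn(x) must return the gradient of the target activation w.r.t. x.
    l2_decay shrinks the image a little each step, a common regularizer
    from the Deep Visualization Toolbox family of techniques.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, shape)   # start from small random noise
    for _ in range(steps):
        g = grad_fn(x)
        x = x + lr * g                # ascend the activation gradient
        x = x * (1.0 - l2_decay)      # L2 decay regularization
    return x

# Toy example: the "activation" is w.x, so its gradient w.r.t. x is just w,
# and the optimized input should end up pointing along w.
w = np.array([1.0, -2.0, 3.0])
img = gradient_ascent(lambda x: w, w.shape, steps=200, lr=0.1)
```

With L2 decay the input converges toward a bounded, scaled copy of the gradient direction instead of growing without limit, which is part of what keeps such visualizations interpretable.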

@Lucaszw (Author) commented Aug 27, 2016

[Screenshot: Layer Visualizations - Caffe]

@Lucaszw (Author) commented Aug 27, 2016

[Screenshot: Layer Visualizations - Torch (note the lack of receptive field calculations)]
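The receptive-field calculation missing from the Torch path follows standard per-layer recurrences: tracking the receptive-field size, the effective stride ("jump"), and the center offset through each conv/pool layer. A minimal sketch, independent of DIGITS (the `receptive_field` helper and the layer tuples are illustrative, not code from this PR):

```python
def receptive_field(layers):
    """Track receptive-field size r, effective stride (jump) j, and the
    center offset of the first output unit through a stack of conv/pool
    layers, each given as a (kernel, stride, pad) tuple.

    Standard recurrences, applied with the *incoming* jump j:
        r_out     = r_in + (k - 1) * j
        start_out = start_in + ((k - 1) / 2 - p) * j
        j_out     = j * s
    """
    r, j, start = 1, 1, 0.5
    for k, s, p in layers:
        r = r + (k - 1) * j
        start = start + ((k - 1) / 2.0 - p) * j
        j = j * s
    return r, j, start

# AlexNet-style first two layers as an illustration:
# conv 11x11 stride 4 pad 0, then max-pool 3x3 stride 2 pad 0.
rf, jump, center = receptive_field([(11, 4, 0), (3, 2, 0)])
# → rf = 19, jump = 8
```

Given the receptive-field size and jump for a layer, drawing the bounding box over the input image for a given unit is then just arithmetic on the unit's spatial index.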

@Lucaszw (Author) commented Aug 27, 2016

[Screenshot: Gradient Ascent - Torch]

@Lucaszw (Author) commented Aug 27, 2016

[Screenshot]

@TimZaman (Contributor) commented:

How does this PR relate to #968 and #937?

"""
Get path to file containing model def for deploy/visualization
"""
return os.path.join(self.job_dir,"original.lua")
@TimZaman (Contributor) commented:

Maybe just call this network.lua for consistency? Or will there be more/'unoriginal' models?

@Lucaszw (Author) replied:

@TimZaman This PR is just the top of my branch from before finishing my internship. It's the same as those two combined, but rebased onto master and with Gradient Ascent for Torch added.

@Lucaszw (Author) replied:

@TimZaman Yeah, network.lua would definitely be a better name. I'll try updating this next time I get a chance :)


fileReader.onreadystatechange = function () {
    if (fileReader.readyState === 4) {
        if (fileReader.status === 200 || fileReader.readyState == 4) {
@TimZaman (Contributor) commented:

On line 50 you say
if (fileReader.readyState === 4) {
and immediately after
if (fileReader.status === 200 || fileReader.readyState == 4) {
so the latter is always true if the former is true. I bet you meant to say
(fileReader.status === 200 || fileReader.status == 0) here.

@lukeyeager (Member) commented:

Closing for now (there are merge conflicts anyway). I'd like to revisit this.

I've backed up this work at https://github.com/lukeyeager/DIGITS/tree/backup/lucas/gradientAscentWithLayerVis.

@lukeyeager lukeyeager closed this Sep 23, 2016
@Lucaszw (Author) commented Sep 23, 2016

@lukeyeager @TimZaman Sorry, school has been super busy, but hopefully soon I'll get some time to revisit this and create some smaller pull requests (such as layer visualizations for Torch models, etc.) once I have a machine up and running.
