
add support to CNNMRF #2

Open
rayset opened this issue Mar 5, 2016 · 2 comments

Comments

@rayset

rayset commented Mar 5, 2016

I was messing around with the code a bit to try to include https://github.com/chuanli11/CNNMRF

Considering how things are done here, is it a waste of time? (i.e. should I expect bad results?)

No luck so far; it doesn't seem hard, but it's a field in which I have no experience at all.

@Cortexelus
Owner

It's a good idea. To turn pizzafire into a CNNMRF factory, you just need an AMI with CNNMRF installed and ready to render via command-line.

Steps

  1. Start a new large GPU instance with my AMI (ami-e095ca8a)
  2. Clone CNNMRF (to the home directory), install everything you need to make it work. Take note of any path stuff you needed to change (anything like export PATH=$PATH:/usr/local/cuda/bin).
  3. Render some files with CNNMRF. Take note of the command you need to call and the parameters. Take note of any interesting output files that are created, and in what directory they are placed, and if this can be set with a parameter.
  4. Save this machine as a new AMI.
  5. Copy pizzafire.ipynb to a new version for CNNMRF jobs.
  6. Append the path stuff to the pathStuff variable.
  7. Look inside doJob. There are three parts:

Send files over with scpSend

scpSend(job['style'], user, host, "~/neural-style")

This function scpSend(file, user, host, dir) will send local files to the remote machine.
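
The definition of scpSend lives in pizzafire.ipynb; conceptually it is just a wrapper around scp. A minimal sketch of such a helper, assuming the (file, user, host, dir) signature above (not the notebook's actual code):

import subprocess

# Illustrative sketch only; the real scpSend is defined in pizzafire.ipynb.
def scpSend(file, user, host, dir):
    # Copy a local file into `dir` on the remote machine over scp.
    subprocess.check_call(["scp", file, "%s@%s:%s" % (user, host, dir)])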

Execute code with sshx

If you give sshx a string, it will execute that command on the remote machine. These are bash commands you would otherwise type into a terminal, except you can do it directly from the iPython notebook! Wow! You'll even see the output directly under your code block. Separate commands with \n to send multiple commands.
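
sshx is likewise defined in the notebook; it presumably shells out to ssh and echoes the remote output back into the notebook. A rough sketch under that assumption:

import subprocess

# Illustrative sketch only; the real sshx is defined in pizzafire.ipynb.
def sshx(commands, user, host):
    # Run a (possibly multi-line) bash command string on the remote machine
    # and print whatever it writes to stdout.
    print(subprocess.check_output(["ssh", "%s@%s" % (user, host), commands]))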

Observe what is happening for the neural style factory:

sshx(pathStuff+"\n\
cd ~/neural-style \n\
rm output* \n\
th neural_style.lua -num_iterations %s -style_image \"%s\" -content_image \"%s\" -image_size %s -backend cudnn -output_image output.png \n\
tar -cvzf output.tar.gz output_* \n\
"%(num_iterations, style_filename,content_filename,image_size),user,host)
  • First we update all the pathStuff
  • We change directory (cd) to the directory where neural_style was cloned.
  • We remove (rm) any old output files still lingering
  • We execute neural_style.lua with our parameters. This is the main part of the process where the image gets rendered!
  • We tar all the intermediate output files into output.tar.gz so we can look at those too. Tar works just like zip: it compresses a bunch of files into one archive file.
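
pathStuff itself is just a string of shell exports that gets prepended to every remote command (step 6). What it needs to contain depends on what you had to set up in step 2; something along these lines, with the CUDA library path as an assumed example:

# Example only: use the exports you actually noted down in step 2.
pathStuff = "export PATH=$PATH:/usr/local/cuda/bin\n" \
            "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64"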

Retrieve files with wget

After the code execution, you collect the output files with wget.

wget = !wget --tries=3 http://{host}/nn/neural-style/output.png -O "./nn/{style_filename}_{content_filename}.png"
...
wget = !wget --tries=3 http://{host}/nn/neural-style/output.tar.gz -O "./nn/archive/{style_filename}_{content_filename}.tar.gz"

Your machine will have port 80 open and pointing at the home directory, so we can download files directly with wget. You can also navigate to your machine's hostname in a web browser and download output files that way.

Updating for CNNMRF

  • You want to change all references to 'neural-style' paths to 'cnnmrf', or whichever directory you cloned cnnmrf into on the remote machine.
  • You want to change th neural_style.lua ... to the command you need to execute cnnmrf (see the sketch after this list).
  • Make sure you are scpSend-ing input files to the right place, and wget-ing output files from the right place.
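
Putting those changes together, the sshx block could end up looking something like this. The directory, script name, and flags below are placeholders, not CNNMRF's real interface; substitute the exact command you recorded in step 3:

# Placeholder sketch: swap in the real CNNMRF script name and parameters from
# step 3, keeping the %s substitutions for the values that change per job.
sshx(pathStuff+"\n\
cd ~/CNNMRF \n\
rm -f output* \n\
th cnnmrf.lua -style_image \"%s\" -content_image \"%s\" -image_size %s \n\
tar -cvzf output.tar.gz output* \n\
"%(style_filename, content_filename, image_size), user, host)

The wget lines change the same way: point them at the CNNMRF directory (and the output filenames it actually produces) instead of ~/neural-style.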

Configure

  • Test with an m3.medium instance first, because they are cheaper. Everything should work up to the point where CNNMRF fails because it cannot find a GPU; once it does, switch back to a GPU instance.
  • Change neuralstyle_ami = "ami-e095ca8a" to your new AMI
  • Set number_of_vms = 1 while testing
  • Set maximum_hours = 1 while testing
  • image_size and num_iterations are global variables; note they are used in doJob when executing neural_style.lua. You can set new global parameters this way. Remember to declare `global new_variable` in any function where it is used (see the sketch after this list).
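
As a minimal sketch of that global pattern, with mrf_layers as a made-up example of a new CNNMRF parameter you might expose:

# Made-up example of a new global parameter; image_size and num_iterations
# are handled the same way in the notebook.
mrf_layers = "12,21"

def doJob(job):
    # Per the note above: declare the global inside any function that uses it,
    # then splice it into the command string passed to sshx.
    global mrf_layers
    command = "... -mrf_layers %s ..." % mrf_layers
    return command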

Add_to_jobs

add_to_jobs("inputs/pizza.png", "inputs/FireLoop1_seq", True)

Look in the comments to understand how add_to_jobs works. This is where you queue up your image processing jobs.


That's all you need to do to turn Pizzafire into a CNNMRF factory. You could use this to automate any scripts on remote AWS machines.

@rayset
Author

rayset commented Mar 6, 2016

I'll try. The path part seems the hardest to me (just because I have little clue what we are talking about); I already tried a few of the other points.

This framework has potential, I really like it.
