Improving Performance To Allow Larger Images #22
I'm definitely on board for this - in fact I'd noticed that there seemed to be a bunch of new stuff in Johnson's code and so had intended to spend some time today looking at merging his latest version. Presumably that's what you started in the gist above?
@martinbenson Yes, the gist has the laplacian part set up with the newer code, but it's made so that the laplacian code is only used if you supply a laplacian.
@ProGamerGov @martinbenson Thank you so much for your effort to extend the original code! I wish I could help, but unfortunately I'm just taking beginner courses in ML and don't have much experience with Torch or the Lua language. And thanks for the info – I didn't know that the Neural-Style code had been improved to support larger resolutions, I'll have to look at the project again! So with the updated code I should achieve 1000 – 1280px images with a 6GB card, and when I get my 1080 Ti, images up to 1536px – that would be nice. BTW: I know you are probably Linux users, but I just read yesterday that with the launch of the new Titan Xp, Nvidia will also release Mac drivers for the whole 10XX series. So hopefully more Mac users in the ML community can benefit from the faster cards with more VRAM!
@subzerofun With the updated code, in addition to increased memory efficiency, you could also use both your current 6GB card and your 11GB 1080 Ti at the same time. To use them at the same time, you would use the multi-GPU options in the updated code.
@ProGamerGov Oh thanks, I didn't even think about that. But I would need to change my power supply for a second card (only 500W atm). Fortunately I have a 1000W power supply lying around :-). I still have my old GTX 770 with 2GB VRAM – do you think it would help to combine it with my GTX 780 (6GB)? So if I want to test the multi-gpu function, I could try that file. The only things missing from rewriting the original code now are the segmentation functions – or is there something else?
@subzerofun
I haven't seen anyone experiment with that combination, and I don't have multiple GPUs, so I can't say whether or not that would help. So you'll probably have to experiment with that yourself.
Yes, as far as I know, only the segmentation-related code is missing. For a long time I have been trying to port the segmentation code from NeuralImageSynthesis into the current version of Neural-Style, but I still haven't been able to get it working (specifically the style and content loss function code). Then deep-photo-styletransfer came along, with masks that support multiple colors, but its code was built on an older, less efficient version of Neural-Style.
As a first step to this, I've moved to the newer version of Justin's neural style code. I've not tried to sort out masking yet though - so as of now that functionality is removed/broken.
Now done!
@martinbenson I found another way to further optimize the code: #32. It seems that the way the mask regions are set up, they waste a lot of precious GPU resources.
As you probably already know, deep-photo-styletransfer is based on the Neural-Style code, and thus shares the same strengths and weaknesses. The current deep-photo-styletransfer code in both `neuralstyle_seg.lua` and `deepmatting_seg.lua` is based on an older and less memory-efficient version of Neural-Style.

I am posting this proposal here because luanfujun is unlikely to change his code, since changing it would make it different from what was used to create the images in the research paper. The changes I am proposing are not very drastic, though, and will allow larger images to be created.
In December 2016, this commit changed the structure of the style and content loss functions into a more efficient form: jcjohnson/neural-style@ea75cbc
Specifically as outlined in the commit:
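For readers not familiar with that commit, here is a rough sketch of the pattern it introduced. The actual code is Lua/Torch; this Python class and its names are mine, for illustration only. The idea is that each loss module carries a mode flag: the same network first runs in "capture" mode to record the target (e.g. a style Gram matrix) in place, then switches to "loss" mode during optimization, instead of keeping extra cloned networks and activations around.

```python
# Hypothetical sketch of the capture/loss pattern from the newer loss modules.
# Not the actual Torch code; names and structure are illustrative.

class StyleLoss:
    def __init__(self, strength):
        self.strength = strength
        self.mode = "none"    # "capture" stores the target, "loss" measures error
        self.target = None
        self.loss = 0.0

    @staticmethod
    def gram(features):
        # features: a list of feature "rows"; Gram entry = dot product of rows.
        return [[sum(a * b for a, b in zip(r1, r2)) for r2 in features]
                for r1 in features]

    def forward(self, features):
        g = self.gram(features)
        if self.mode == "capture":
            # Record the style target once, in place, on the same network.
            self.target = g
        elif self.mode == "loss":
            # Mean squared error against the captured Gram matrix.
            diff = [x - t
                    for row, trow in zip(g, self.target)
                    for x, t in zip(row, trow)]
            self.loss = self.strength * sum(d * d for d in diff) / len(diff)
        return features  # pass activations through unchanged
```

Because the targets are captured and then discarded from the forward path, the network is built only once and no per-target clones need to stay resident in GPU memory, which is where the savings below come from.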
Quantifying these changes: the newer, more efficient code uses up to 2GB less GPU memory than the old code while performing the same task, and the savings will likely grow as you increase the image size beyond the maximum of 1536 that I used in my tests.

I graphed the results of my experiments below, with blue as the old code and red as the new code. The GPU usage on the graph is measured in MiB. An extra 2-4 MiB are used by the system for other things, depending on the chosen `-image_size` value, but I felt this amount was too small to factor into the graph.

You can find the two versions (old and new) of Neural-Style here: https://gist.github.com/ProGamerGov/34cb206a1f0fa8d7e7a1d7aed0048554
The experiments were performed with a Tesla K80 GPU.
The `-image_size` values used were 256, 512, 1000, 1280, and 1536. The old code with `-image_size 1536` failed due to lack of memory, so its GPU usage is likely well above 12GB.

The content image dimensions were: 1897x2441
The style image dimensions were: 3000x1688
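To put those `-image_size` values in pixel terms: per the Neural-Style README, `-image_size` sets the maximum side length of the generated image, so the 1897x2441 content image at `-image_size 1536` comes out at roughly 1194x1536. A small sketch of that scaling (the helper name is mine, not from the code):

```python
# Sketch of how -image_size maps to output dimensions, assuming it caps the
# longer edge while preserving aspect ratio (as the Neural-Style README says).

def resized_dims(width, height, image_size):
    scale = image_size / max(width, height)
    return round(width * scale), round(height * scale)

print(resized_dims(1897, 2441, 1536))  # content image at the largest test size
```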
The commands I used for the experiment:
If we get the deep-photo-styletransfer code updated to the new format, we should be able to create larger images on the same hardware.
Currently, I have gotten everything except the segmentation/masks working with the newer code, here. I need help getting the multi-color segmentation working.