
OpenCL support #44

Closed
napsternxg opened this issue Sep 17, 2015 · 158 comments

@napsternxg
Contributor

I tried implementing OpenCL support and the code is at: https://github.com/napsternxg/neural-style/tree/opencl

However I get the following error when running the code:

$ th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000
Using Advanced Micro Devices, Inc. , OpenCL platform: AMD Accelerated Parallel Processing
Using OpenCL device: Turks
/home/torch/install/bin/luajit: C++ exception

I believe the issue is with SpatialConvolutionMM, which is implemented in the ccn2 module.

@jcjohnson
Owner

Nice work! I took a look at your code and didn't see anything obviously wrong. Can you figure out exactly where it's crashing? My guess is either here where you cast the network to OpenCL, or here or here where you first try to run the network forward.

@napsternxg
Contributor Author

I will try to debug this and get back to this post in a few days. Unfortunately, I am new to Lua and Torch, and this is the first code I have written in the language, so I am still learning.

Also, it occurred to me today: could my GPU memory be an issue? I only have a 1 GB ATI FirePro 3900 GPU.

@rkrzr

rkrzr commented Sep 17, 2015

@napsternxg GPU memory could indeed be an issue: the memory requirements grow quadratically with the size of the image you are rendering (pixel count scales with the square of -image_size). I am running in CPU mode on a 32GB machine and I start running out of memory at -image_size > 1024.
So if you want to be sure that memory is not the problem, just run with -image_size 50 or so.
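The quadratic scaling is easy to see with a back-of-envelope sketch (Python, purely illustrative, not from this repo): activation maps are H x W per channel, so doubling the image side quadruples the footprint. For float32 activations of VGG's first 64-channel conv layer alone:

```python
def conv1_activation_mb(image_size, channels=64, bytes_per_float=4):
    """Rough float32 activation memory (MB) for one 64-channel feature map
    at full resolution; real usage adds gradients and all later layers,
    so this is only a lower bound."""
    return image_size * image_size * channels * bytes_per_float / (1024 ** 2)

for size in (256, 512, 1024):
    print(size, conv1_activation_mb(size))  # 16.0, 64.0, 256.0 MB
```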

@jcjohnson
Owner

Good call on GPU memory; 1GB is not enough for the default settings.

@napsternxg
Contributor Author

I tried running with -image_size 10 and still get the same error.

@napsternxg
Contributor Author

Ok, using multiple print statements, I believe I have figured out the issue:
@jcjohnson was right; the issue occurs while casting the cnn object to cl().

I checked the clnn documentation and I see all the layers are implemented. Is there something I am missing?

These are the first few lines of my generated models/VGG_ILSVRC_19_layers_deploy.prototxt.opencl.lua

require 'nn'
require 'clnn'
local model = {}
table.insert(model, {'conv1_1', nn.SpatialConvolutionMM(3, 64, 3, 3, 1, 1, 1, 1)})
table.insert(model, {'relu1_1', nn.ReLU(true)})
table.insert(model, {'conv1_2', nn.SpatialConvolutionMM(64, 64, 3, 3, 1, 1, 1, 1)})
table.insert(model, {'relu1_2', nn.ReLU(true)})
table.insert(model, {'pool1', nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0):ceil()})
table.insert(model, {'conv2_1', nn.SpatialConvolutionMM(64, 128, 3, 3, 1, 1, 1, 1)})
table.insert(model, {'relu2_1', nn.ReLU(true)})
table.insert(model, {'conv2_2', nn.SpatialConvolutionMM(128, 128, 3, 3, 1, 1, 1, 1)})
table.insert(model, {'relu2_2', nn.ReLU(true)})
table.insert(model, {'pool2', nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0):ceil()})

@jcjohnson
Owner

I'm not really sure what's wrong; here are two random ideas:

(1) In the .opencl.lua file maybe you also need to require 'cltorch'?
(2) Maybe the call to ceil() for nn.SpatialMaxPooling() is not supported for clnn? You can chop out these method calls with some dirty string manipulation like this: cba886c#diff-00b26e06a3b5ecc7938a4da2d6fe0332R49
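A minimal sketch of that kind of string surgery (shown in Python for illustration; the linked commit does the equivalent in Lua, and the helper name here is hypothetical):

```python
def strip_ceil(line):
    # Drop the ':ceil()' method call that clnn does not implement;
    # the pooling layer then falls back to its default floor mode.
    return line.replace(':ceil()', '')

line = "table.insert(model, {'pool1', nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0):ceil()})"
print(strip_ceil(line))
```

Note that dropping :ceil() changes pooling output sizes for odd-sized inputs, so this is a workaround rather than a fix.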

@vkorablin

@jcjohnson

Maybe the call to ceil() for nn.SpatialMaxPooling() is not supported for clnn

seems to be the case if I understand correctly: https://github.com/hughperkins/clnn/search?q=ceil

@hughperkins could you confirm?

@hughperkins
Contributor

Yes, ceil() is not currently implemented. Per @szagoruyko, this should be fairly easy to add: hughperkins/clnn#5. Not sure I have time in the immediate future, but it seems to be a popular request, so I might find time if it's still open in a week or two.

(Edit: an alternative way to hack around this for now, if you don't need the functionality behind :ceil() and just need the method call not to throw an exception, would be to add something like this to your code:

function nn.SpatialMaxPooling:ceil()
   return self
end

This monkey-patches SpatialMaxPooling to have the method, although the method won't actually do anything for now.
)

(PS Wow, the pictures of output from the neural-style project on the front-page README.md look awesome :-O )

(Edit 3: by the way, when th crashes, running the script directly with luajit instead often produces fractionally more error information. Typically you'd also want to run it from gdb and get the call stack. I have a script called rungdb.sh, which looks like:

#!/bin/bash
gdb $1 -ex "catch throw" -ex "run $2 $3 $4 $5 $6 $7 $8 $9" 

then I run it like:

rungdb.sh luajit myluascript.lua
# and once it's crashed, type:
bt
# ... to get the backtrace

You need to build in debug mode to get line numbers and so on. I usually do this by editing the rockspec for the relevant Torch projects to add -DCMAKE_BUILD_TYPE=Debug, and then running luarocks make rocks/name-of-luarocks-file.rockspec to reinstall it.
)

@napsternxg
Contributor Author

Thanks @hughperkins. I ran GDB on the file and here is the result.

$ gdb luajit -ex "catch throw"
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from luajit...(no debugging symbols found)...done.
Catchpoint 1 (throw)
(gdb) run neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 10
Starting program: /home/username/torch/install/bin/luajit neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 10
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Traceback (most recent call last):
  File "/usr/share/gdb/auto-load/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19-gdb.py", line 63, in <module>
    from libstdcxx.v6.printers import register_libstdcxx_printers
ImportError: No module named 'libstdcxx'
In Function main
Starting load model
In loadcaffe_load
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
Finished proto to lua
In iteration %d 1
conv1_1: 64 3 3 3
In iteration %d 2
In iteration %d 3
conv1_2: 64 64 3 3
In iteration %d 4
In iteration %d 5
In iteration %d 6
conv2_1: 128 64 3 3
In iteration %d 7
In iteration %d 8
conv2_2: 128 128 3 3
In iteration %d 9
In iteration %d 10
In iteration %d 11
conv3_1: 256 128 3 3
In iteration %d 12
In iteration %d 13
conv3_2: 256 256 3 3
In iteration %d 14
In iteration %d 15
conv3_3: 256 256 3 3
In iteration %d 16
In iteration %d 17
conv3_4: 256 256 3 3
In iteration %d 18
In iteration %d 19
In iteration %d 20
conv4_1: 512 256 3 3
In iteration %d 21
In iteration %d 22
conv4_2: 512 512 3 3
In iteration %d 23
In iteration %d 24
conv4_3: 512 512 3 3
In iteration %d 25
In iteration %d 26
conv4_4: 512 512 3 3
In iteration %d 27
In iteration %d 28
In iteration %d 29
conv5_1: 512 512 3 3
In iteration %d 30
In iteration %d 31
conv5_2: 512 512 3 3
In iteration %d 32
In iteration %d 33
conv5_3: 512 512 3 3
In iteration %d 34
In iteration %d 35
conv5_4: 512 512 3 3
In iteration %d 36
In iteration %d 37
In iteration %d 38
In iteration %d 39
fc6: 1 1 25088 4096
In iteration %d 40
In iteration %d 41
In iteration %d 42
fc7: 1 1 4096 4096
In iteration %d 43
In iteration %d 44
In iteration %d 45
fc8: 1 1 4096 1000
In iteration %d 46
Finished iterations     clnn
Finished network setup
Using Advanced Micro Devices, Inc. , OpenCL platform: AMD Accelerated Parallel Processing
Using OpenCL device: Turks
[New Thread 0x7fffb2cc1700 (LWP 5091)]
Catchpoint 1 (exception thrown), 0x00007fffc389b8b0 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6

And here is the back trace:

(gdb) bt
#0  0x00007fffc389b8b0 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x00007fffc2f3719b in EasyCL::checkError (error=<optimized out>) at /home/username/Downloads/cltorch/src/EasyCL/EasyCL.cpp:514
#2  0x00007fffc2f42159 in CLWrapper::createOnDevice (this=0x910b20) at /home/username/Downloads/cltorch/src/EasyCL/CLWrapper.cpp:62
#3  0x00007fffc3170c9c in THClStorage_resize (state=<optimized out>, self=<optimized out>, size=102760448) at /home/username/Downloads/cltorch/src/lib/THClStorage.cpp:196
#4  0x00007fffc33f69f1 in torch_ClStorage_resize (L=0x40000378) at /home/username/Downloads/cltorch/src/torch/generic/Storage.cpp:114
#5  0x000000000047d01a in lj_BC_FUNCC ()
#6  0x000000000046c5fd in lua_pcall ()
#7  0x0000000000406f4f in pmain ()
#8  0x000000000047d01a in lj_BC_FUNCC ()
#9  0x000000000046c677 in lua_cpcall ()
#10 0x0000000000404f04 in main ()

@hughperkins
Contributor

Ok, good. Then, if you do the following you should get the error message, I think:

f 1
print message

I suspect, given where it is and what it's doing, that it might say 'out of memory', i.e. CL_MEM_OBJECT_ALLOCATION_FAILURE.

@hughperkins
Contributor

Using gdb is kind of annoying :-P, so I've pushed a couple of updates to cltorch that will catch the exception and convert it into a torch error, e.g.:

$ th /tmp/runst.lua
Using NVIDIA Corporation , OpenCL platform: NVIDIA CUDA
Using OpenCL device: GeForce 940M
a   
1e-38 *
 6.8234
[torch.ClStorage of size 1]

/home/user/torch/install/bin/luajit: /tmp/runst.lua:4: Something went wrong: std::bad_alloc at /home/user/git/cltorch/src/torch/generic/Storage.cpp:127
stack traceback:
    [C]: in function 'resize'
    /tmp/runst.lua:4: in main chunk
    [C]: in function 'dofile'
    ...user/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
    [C]: at 0x00406670
Segmentation fault

It still contains a lot of 'magic messages', but it's mildly more informative than before, perhaps? You can update to this version by simply rerunning luarocks install cltorch

@jcjohnson
Owner

@hughperkins You rock! Thanks for helping out on the OpenCL port - I know almost nothing about OpenCL.

@napsternxg
Contributor Author

Thanks a lot @hughperkins for looking into this. I have not yet updated cltorch, but I ran f 1 in gdb and got the following output:

(gdb) f 1
#1  0x00007fffc2f3719b in EasyCL::checkError (error=<optimized out>) at /home/username/Downloads/cltorch/src/EasyCL/EasyCL.cpp:514
514             throw std::runtime_error( std::string("OpenCL error, code: ") + message );
(gdb) print message
$1 = {static npos = <optimized out>, _M_dataplus = {<std::allocator<char>> = {<__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, 
    _M_p = 0x910b78 "CL_INVALID_BUFFER_SIZE"}}

I will run my code again with the new cltorch and report any other findings.

@hughperkins
Contributor

"Invalid buffer size". Hmmm. It probably means we're trying to allocate a buffer that is far too big, or one that won't fit in available memory. It could plausibly mean that the requested buffer size has been corrupted somehow. However, looking at the stack trace you provided earlier, we can see the size in frame 3: size=102760448, which is a number of floats I believe, so about 400MB. That looks like a non-corrupted number, and it is an amount large enough either to exhaust available GPU memory, or to exceed the maximum GPU buffer alloc size.
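The arithmetic checks out (a quick Python sanity check; note, incidentally, that 102760448 = 25088 x 4096, exactly the weight count of the fc6 layer listed in the model dump):

```python
floats = 102_760_448             # size= from THClStorage_resize in the backtrace
mb = floats * 4 / (1024 ** 2)    # float32 is 4 bytes per element
print(mb)                        # 392.0 MB
assert floats == 25088 * 4096    # exactly an fc6-sized (25088 x 4096) weight matrix
```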

For the second point, the maximum GPU buffer alloc size: you might have an executable called gpuinfo in ~/torch/install/bin. If you run it, it will give output like:

num platforms: 1

platform index: 0:
platform id: 0x1ea39b0
platform vendor: NVIDIA Corporation
platform name: NVIDIA CUDA
platform num devices: 1

   device index: 0
   device id: 0x1ea3a70
   device type: 4
   global memory size: 1023MB
   local memory size: 47KB
   global cache size: 48KB
   global cacheline size: 128
   max memory alloc size: 255MB
   max compute units: 3
   max workgroup size: 1024
   max workitem dimensions: 3
   max workitem sizes: 1024 1024 64
   device name: GeForce 940M
   opencl c version: OpenCL C 1.1 
   opencl device version: OpenCL 1.1 CUDA
   frequency MHz: 980

In that output, 'max memory alloc size' is the largest buffer you can allocate at once. For my laptop it is 255MB, less than 400MB.

@napsternxg
Contributor Author

Here is the output after updating my cltorch package.

th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 10
In Function main
Starting load model
In loadcaffe_load
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
Finished proto to lua   
In iteration %d 1
conv1_1: 64 3 3 3
In iteration %d 2
In iteration %d 3
conv1_2: 64 64 3 3
In iteration %d 4
In iteration %d 5
In iteration %d 6
conv2_1: 128 64 3 3
In iteration %d 7
In iteration %d 8
conv2_2: 128 128 3 3
In iteration %d 9
In iteration %d 10
In iteration %d 11
conv3_1: 256 128 3 3
In iteration %d 12
In iteration %d 13
conv3_2: 256 256 3 3
In iteration %d 14
In iteration %d 15
conv3_3: 256 256 3 3
In iteration %d 16
In iteration %d 17
conv3_4: 256 256 3 3
In iteration %d 18
In iteration %d 19
In iteration %d 20
conv4_1: 512 256 3 3
In iteration %d 21
In iteration %d 22
conv4_2: 512 512 3 3
In iteration %d 23
In iteration %d 24
conv4_3: 512 512 3 3
In iteration %d 25
In iteration %d 26
conv4_4: 512 512 3 3
In iteration %d 27
In iteration %d 28
In iteration %d 29
conv5_1: 512 512 3 3
In iteration %d 30
In iteration %d 31
conv5_2: 512 512 3 3
In iteration %d 32
In iteration %d 33
conv5_3: 512 512 3 3
In iteration %d 34
In iteration %d 35
conv5_4: 512 512 3 3
In iteration %d 36
In iteration %d 37
In iteration %d 38
In iteration %d 39
fc6: 1 1 25088 4096
In iteration %d 40
In iteration %d 41
In iteration %d 42
fc7: 1 1 4096 4096
In iteration %d 43
In iteration %d 44
In iteration %d 45
fc8: 1 1 4096 1000
In iteration %d 46
Finished iterations     clnn
Finished network setup  
Using Advanced Micro Devices, Inc. , OpenCL platform: AMD Accelerated Parallel Processing
Using OpenCL device: Turks
/home/username/Downloads/torch/install/bin/luajit: ...ntity/Downloads/torch/install/share/lua/5.1/nn/utils.lua:11: Something went wrong: OpenCL error, code: CL_INVALID_BUFFER_SIZE at /tmp/luarocks_cltorch-scm-1-1524/cltorch/cltorch/src/torch/generic/Storage.cpp:127
stack traceback:
        [C]: in function 'resize'
        ...ntity/Downloads/torch/install/share/lua/5.1/nn/utils.lua:11: in function 'torch_Storage_type'
        ...ntity/Downloads/torch/install/share/lua/5.1/nn/utils.lua:57: in function 'recursiveType'
        ...tity/Downloads/torch/install/share/lua/5.1/nn/Module.lua:123: in function 'type'
        ...ntity/Downloads/torch/install/share/lua/5.1/nn/utils.lua:45: in function 'recursiveType'
        ...ntity/Downloads/torch/install/share/lua/5.1/nn/utils.lua:41: in function 'recursiveType'
        ...tity/Downloads/torch/install/share/lua/5.1/nn/Module.lua:123: in function 'cl'
        neural_style_opencl.lua:66: in function 'main'
        neural_style_opencl.lua:424: in main chunk
        [C]: in function 'dofile'
        ...oads/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
        [C]: at 0x00406670

@hughperkins
Contributor

Ok, the error message looks more informative than the original C++ exception, which is good. As for this specific error, please see the comment I sent at about the same time as yours just now. I reckon the requested buffer size is larger than the card supports, so we will need to somehow reduce it, e.g. by using a smaller input image perhaps.

@napsternxg
Contributor Author

If you notice, in the command I am running I am setting image_size to 10. Should I try an even smaller number? The default is 50, I believe.

Also, I couldn't find gpuinfo in my torch folder.

@hughperkins
Contributor

Hmmmm... 10? You mean it's a 10 by 10 image?

Edit: for gpuinfo, you might have a system OpenCL command, clinfo. That command doesn't work for me, but it should give a bunch of information, I believe.

@napsternxg
Contributor Author

These are the lines where the image transformation is taking place:

  local content_image = image.load(params.content_image, 3)
  content_image = image.scale(content_image, params.image_size, 'bilinear')
  local content_image_caffe = preprocess(content_image):float()

  local style_image = image.load(params.style_image, 3)
  local style_size = math.ceil(params.style_scale * params.image_size)
  style_image = image.scale(style_image, style_size, 'bilinear')
  local style_image_caffe = preprocess(style_image):float()

I believe it is resizing the content image to 10x10, but I am not fully sure about this.

@hughperkins
Contributor

Hmmm, it looks like it is the fully-connected layer that is causing the 400MB alloc:

25088*4096*4/1024/1024
= 392MB

@hughperkins
Contributor

If you modify the conv5_4 layer to have e.g. 256 output planes instead of 512, then you can probably reduce the fc6 layer from 25088 => 4096 to 12544 => 4096, which might fit within the card's maximum alloc size?
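Putting numbers on that suggestion (Python, illustrative; assumes float32 weights and ignores biases):

```python
def fc_weight_mb(n_in, n_out, bytes_per_float=4):
    # Size of a fully-connected layer's weight buffer, in MB.
    return n_in * n_out * bytes_per_float / (1024 ** 2)

print(fc_weight_mb(25088, 4096))  # 392.0 -- original fc6, one big buffer
print(fc_weight_mb(12544, 4096))  # 196.0 -- fc6 after a 256-plane conv5_4
```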

@napsternxg
Contributor Author

I couldn't find the gpuinfo file in my torch folder. Is there any other way to figure out my maximum GPU buffer alloc size?

@hughperkins
Contributor

Try clinfo

@napsternxg
Contributor Author

This is the output from clinfo:

$ clinfo 
Number of platforms:                             1
  Platform Profile:                              FULL_PROFILE
  Platform Version:                              OpenCL 2.0 AMD-APP (1642.5)
  Platform Name:                                 AMD Accelerated Parallel Processing
  Platform Vendor:                               Advanced Micro Devices, Inc.
  Platform Extensions:                           cl_khr_icd cl_amd_event_callback cl_amd_offline_devices 


  Platform Name:                                 AMD Accelerated Parallel Processing
Number of devices:                               2
  Device Type:                                   CL_DEVICE_TYPE_GPU
  Vendor ID:                                     1002h
  Board name:                                    
  Device Topology:                               PCI[ B#5, D#0, F#0 ]
  Max compute units:                             6
  Max work items dimensions:                     3
    Max work items[0]:                           256
    Max work items[1]:                           256
    Max work items[2]:                           256
  Max work group size:                           256
  Preferred vector width char:                   16
  Preferred vector width short:                  8
  Preferred vector width int:                    4
  Preferred vector width long:                   2
  Preferred vector width float:                  4
  Preferred vector width double:                 0
  Native vector width char:                      16
  Native vector width short:                     8
  Native vector width int:                       4
  Native vector width long:                      2
  Native vector width float:                     4
  Native vector width double:                    0
  Max clock frequency:                           650Mhz
  Address bits:                                  32
  Max memory allocation:                         134217728
  Image support:                                 Yes
  Max number of images read arguments:           128
  Max number of images write arguments:          8
  Max image 2D width:                            16384
  Max image 2D height:                           16384
  Max image 3D width:                            2048
  Max image 3D height:                           2048
  Max image 3D depth:                            2048
  Max samplers within kernel:                    16
  Max size of kernel argument:                   1024
  Alignment (bits) of base address:              2048
  Minimum alignment (bytes) for any datatype:    128
  Single precision floating point capability
    Denorms:                                     No
    Quiet NaNs:                                  Yes
    Round to nearest even:                       Yes
    Round to zero:                               Yes
    Round to +ve and infinity:                   Yes
    IEEE754-2008 fused multiply-add:             Yes
  Cache type:                                    None
  Cache line size:                               0
  Cache size:                                    0
  Global memory size:                            536870912
  Constant buffer size:                          65536
  Max number of constant args:                   8
  Local memory type:                             Scratchpad
  Local memory size:                             32768
  Max pipe arguments:                            0
  Max pipe active reservations:                  0
  Max pipe packet size:                          0
  Max global variable size:                      0
  Max global variable preferred total size:      0
  Max read/write image args:                     0
  Max on device events:                          0
  Queue on device max size:                      0
  Max on device queues:                          0
  Queue on device preferred size:                0
  SVM capabilities:                              
    Coarse grain buffer:                         No
    Fine grain buffer:                           No
    Fine grain system:                           No
    Atomics:                                     No
  Preferred platform atomic alignment:           0
  Preferred global atomic alignment:             0
  Preferred local atomic alignment:              0
  Kernel Preferred work group size multiple:     64
  Error correction support:                      0
  Unified memory for Host and Device:            0
  Profiling timer resolution:                    1
  Device endianess:                              Little
  Available:                                     Yes
  Compiler available:                            Yes
  Execution capabilities:                                
    Execute OpenCL kernels:                      Yes
    Execute native function:                     No
  Queue on Host properties:                              
    Out-of-Order:                                No
    Profiling :                                  Yes
  Queue on Device properties:                            
    Out-of-Order:                                No
    Profiling :                                  No
  Platform ID:                                   0x7fd922436fd0
  Name:                                          Turks
  Vendor:                                        Advanced Micro Devices, Inc.
  Device OpenCL C version:                       OpenCL C 1.2 
  Driver version:                                1642.5
  Profile:                                       FULL_PROFILE
  Version:                                       OpenCL 1.2 AMD-APP (1642.5)
  Extensions:                                    cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_atomic_counters_32 cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_amd_image2d_from_buffer_read_only cl_khr_spir cl_khr_gl_event 


  Device Type:                                   CL_DEVICE_TYPE_CPU
  Vendor ID:                                     1002h
  Board name:                                    
  Max compute units:                             24
  Max work items dimensions:                     3
    Max work items[0]:                           1024
    Max work items[1]:                           1024
    Max work items[2]:                           1024
  Max work group size:                           1024
  Preferred vector width char:                   16
  Preferred vector width short:                  8
  Preferred vector width int:                    4
  Preferred vector width long:                   2
  Preferred vector width float:                  8
  Preferred vector width double:                 4
  Native vector width char:                      16
  Native vector width short:                     8
  Native vector width int:                       4
  Native vector width long:                      2
  Native vector width float:                     8
  Native vector width double:                    4
  Max clock frequency:                           1200Mhz
  Address bits:                                  64
  Max memory allocation:                         8415937536
  Image support:                                 Yes
  Max number of images read arguments:           128
  Max number of images write arguments:          64
  Max image 2D width:                            8192
  Max image 2D height:                           8192
  Max image 3D width:                            2048
  Max image 3D height:                           2048
  Max image 3D depth:                            2048
  Max samplers within kernel:                    16
  Max size of kernel argument:                   4096
  Alignment (bits) of base address:              1024
  Minimum alignment (bytes) for any datatype:    128
  Single precision floating point capability
    Denorms:                                     Yes
    Quiet NaNs:                                  Yes
    Round to nearest even:                       Yes
    Round to zero:                               Yes
    Round to +ve and infinity:                   Yes
    IEEE754-2008 fused multiply-add:             Yes
  Cache type:                                    Read/Write
  Cache line size:                               64
  Cache size:                                    32768
  Global memory size:                            33663750144
  Constant buffer size:                          65536
  Max number of constant args:                   8
  Local memory type:                             Global
  Local memory size:                             32768
  Max pipe arguments:                            16
  Max pipe active reservations:                  16
  Max pipe packet size:                          4120970240
  Max global variable size:                      1879048192
  Max global variable preferred total size:      1879048192
  Max read/write image args:                     64
  Max on device events:                          0
  Queue on device max size:                      0
  Max on device queues:                          0
  Queue on device preferred size:                0
  SVM capabilities:                              
    Coarse grain buffer:                         Yes
    Fine grain buffer:                           Yes
    Fine grain system:                           Yes
    Atomics:                                     Yes
  Preferred platform atomic alignment:           0
  Preferred global atomic alignment:             0
  Preferred local atomic alignment:              0
  Kernel Preferred work group size multiple:     1
  Error correction support:                      0
  Unified memory for Host and Device:            1
  Profiling timer resolution:                    1
  Device endianess:                              Little
  Available:                                     Yes
  Compiler available:                            Yes
  Execution capabilities:                                
    Execute OpenCL kernels:                      Yes
    Execute native function:                     Yes
  Queue on Host properties:                              
    Out-of-Order:                                No
    Profiling :                                  Yes
  Queue on Device properties:                            
    Out-of-Order:                                No
    Profiling :                                  No
  Platform ID:                                   0x7fd922436fd0
  Name:                                          Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz
  Vendor:                                        GenuineIntel
  Device OpenCL C version:                       OpenCL C 1.2 
  Driver version:                                1642.5 (sse2,avx)
  Profile:                                       FULL_PROFILE
  Version:                                       OpenCL 1.2 AMD-APP (1642.5)
  Extensions:                                    cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_device_fission cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_spir cl_khr_gl_event 

@hughperkins
Contributor

Global memory size: 536870912 => your card has 512MB memory, right?
Max memory allocation: 134217728 => max alloc size 128MB

So, what you can try doing is changing conv5_4 layer to have 128 output planes, and change fc6 from 25088 => 4096 to 6272 => 4096
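For reference, the suggested shrink would look roughly like this in the Lua that loadcaffe generates (a sketch, not the stock VGG-19 definition; conv5_4's input count and the 6272 = 128 × 7 × 7 arithmetic are my reading of the suggestion):

```lua
-- Hypothetical edit to the generated models/*.prototxt.lua: conv5_4 still
-- takes conv5_3's 512 input planes but emits only 128, and fc6 shrinks to
-- match the smaller feature map: 128 planes * 7 * 7 spatial = 6272.
table.insert(model, {'conv5_4', nn.SpatialConvolutionMM(512, 128, 3, 3, 1, 1, 1, 1)})
table.insert(model, {'fc6', nn.Linear(6272, 4096)})
```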

@napsternxg
Contributor Author

I made the changes and am now getting a new error:

$ th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 10
In Function main
Starting load model
In loadcaffe_load
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
Updated Line to: %s     table.insert(model, {'conv5_4', nn.SpatialConvolutionMM(128, 128, 3, 3, 1, 1, 1, 1)})   
Updated Line to: %s     table.insert(model, {'fc6', nn.Linear(6272, 4096)})
Finished proto to lua   
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
/home/username/Downloads/torch/install/bin/luajit: ...y/Downloads/torch/install/share/lua/5.1/cltorch/init.lua:30: inconsistent tensor size at /home/username/Downloads/torch/pkg/torch/lib/TH/generic/THTensorCopy.c:21
stack traceback:
        [C]: in function 'cloldcopy'
        ...y/Downloads/torch/install/share/lua/5.1/cltorch/init.lua:30: in function 'copy'
        ./loadcaffe_wrapper.lua:97: in function 'load'
        neural_style_opencl.lua:62: in function 'main'
        neural_style_opencl.lua:424: in main chunk
        [C]: in function 'dofile'
        ...oads/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
        [C]: at 0x00406670

I believe the reason is that the original Caffe model has fixed layer sizes, which can't be changed. This neural-style code only does inference; it doesn't train the model. Hence, I should use the original model. Probably @jcjohnson can confirm this.
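The failure mode is easy to reproduce in isolation: copy() requires matching element counts, and the pretrained 512-plane conv5_4 weights can't fit into a 128-plane module's buffer (a minimal sketch, not taken from the actual loader code):

```lua
require 'torch'
-- Pretrained conv5_4 weight: 512 outputs x (512 inputs * 3 * 3 kernel).
local pretrained = torch.Tensor(512, 512 * 3 * 3)
-- The shrunken module's weight buffer holds far fewer elements, so copy()
-- must fail with "inconsistent tensor size".
local shrunken = torch.Tensor(128, 128 * 3 * 3)
local ok = pcall(function() shrunken:copy(pretrained) end)
print(ok)  -- false
```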

@hughperkins
Contributor

Hmmm, right, your explanation appears to match the error message. I guess you will need to use a smaller model. How about this one? https://gist.github.com/mavenlin/d802a5849de39225bcc6

@napsternxg
Contributor Author

I have pushed my changes to https://github.com/napsternxg/neural-style/tree/opencl

I will try to run with the smaller model. In the meantime, would it be possible for you to run my code and see if it works? Maybe the issue is only my GPU memory. It would be great to know whether the port works on other OpenCL systems without using any CUDA libraries.

@vkorablin

Tried it on my 1GB card.

Command line: th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 25

Got further than @napsternxg managed (so it does seem to be a lack of GPU memory in his case), but then hit a 'not implemented' error:

/home/vkorablin/torch/install/bin/luajit: ...lin/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:41: Not implemented at /tmp/luarocks_clnn-scm-1-7207/clnn/SpatialMaxPooling.cpp:166
stack traceback:
    [C]: in function 'SpatialMaxPooling_updateGradInput'
    ...lin/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:41: in function 'updateGradInput'
    /home/vkorablin/torch/install/share/lua/5.1/nn/Module.lua:30: in function 'backward'
    .../vkorablin/torch/install/share/lua/5.1/nn/Sequential.lua:84: in function 'backward'
    neural_style_opencl.lua:244: in function 'opfunc'
    /home/vkorablin/torch/install/share/lua/5.1/optim/lbfgs.lua:66: in function 'lbfgs'
    neural_style_opencl.lua:263: in function 'main'
    neural_style_opencl.lua:424: in main chunk
    [C]: in function 'dofile'
    ...blin/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
    [C]: at 0x00406670

The line that throws that error: https://github.com/hughperkins/clnn/blob/6f79cd72d4a2434dd55d5e8a365013c632146155/SpatialMaxPooling.cpp#L166
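One possible workaround, while clnn's max-pooling backward pass was unimplemented, is to substitute average pooling of the same geometry (a sketch; assumes `net` is the loaded nn.Sequential and that the average-pooling backward pass is available in your clnn build):

```lua
-- Swap every nn.SpatialMaxPooling for an nn.SpatialAveragePooling with the
-- same kernel size and stride, sidestepping the unimplemented backward pass.
for i, m in ipairs(net.modules) do
  if torch.type(m) == 'nn.SpatialMaxPooling' then
    net.modules[i] = nn.SpatialAveragePooling(m.kW, m.kH, m.dW, m.dH)
  end
end
```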

@napsternxg
Contributor Author

Ok got it.

@hughperkins
Contributor

I think you can remove accGradParameters. That cuts down on memory slightly. Have a look at hughperkins@d9e4dd4
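Since style transfer only optimizes the pixels of the input image, the weight gradients that accGradParameters accumulates are dead memory, so stubbing it out is safe. Roughly (a sketch of the idea, not the exact commit):

```lua
-- We never call updateParameters, so per-module weight/bias gradients are
-- never read; replace the accumulator with a no-op and drop the buffers.
for _, m in ipairs(net:listModules()) do
  m.accGradParameters = function() end
  m.gradWeight = nil
  m.gradBias = nil
end
```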

@napsternxg
Contributor Author

Got it. Changed that and now I am able to run it using multiple content and style layers. And I am getting pretty promising results.

If you run it with the following command:

$ th neural_style.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -output_image profile.png -model_file models/nin_imagenet_conv.caffemodel -proto_file models/train_val.prototxt -gpu 0 -backend clnn -num_iterations 1000 -seed 123 -content_layers relu0,relu3,relu7,relu12 -style_layers relu0,relu3,relu7,relu12 -content_weight 10 -style_weight 1000 -image_size 320 -optimizer adam

You will get an image like the following:

(output image: profile.png)

@jcjohnson what do you think of this output? It was generated using the nin_imagenet_conv.caffemodel file.

@hughperkins
Contributor

Got it. Changed that and now I am able to run it using multiple content and style layers.

Cool :-)

@jcjohnson
Owner

@napsternxg Looks pretty good to me - better than the results I got with CaffeNet, but not quite as nice as the VGG-19 results.

At any rate it looks like the OpenCL port is pretty much working as intended at this point; I'm happy to merge into master if you send me a PR.

@napsternxg
Contributor Author

@jcjohnson I can send the pull request, but it will not work out of the box. @hughperkins has made some changes to the torch code as well as the clnn code for average pooling, which may cause an issue. I will clean up a few things on my side and update the code in my repo for now. Once the clnn issue is fixed, we can merge it into your repo.

@jcjohnson
Owner

Sounds good to me.

@hughperkins hughperkins mentioned this issue Oct 1, 2015
@hughperkins
Contributor

Hi. I've created a new version of clnn which uses less memory. Comparing with other versions, on my 1GB NVIDIA card:

  • cunn, image_size 224 works, 256 fails
  • clnn, master branch, image_size 200 works, 256 fails
  • clnn, multi-conv branch, image_size 300 works, 320 fails

You need to install the multi-conv branch of https://github.com/hughperkins/clnn. I tested it as follows:

th neural_style.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -output_image profile.png -image_size 300 -model_file models/vgg_normalised.caffemodel -backend clnn -num_iterations 1000 -save_iter 50 -normalize_gradients -content_weight 50000 -style_weight 90000

@codeaudit

th neural_style.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 1 -output_image profile.png -image_size 300 -model_file models/vgg_normalised.caffemodel -backend clnn -num_iterations 1000 -save_iter 50 -normalize_gradients -content_weight 50000 -style_weight 90000
/home/ceperez/torch/install/bin/luajit: /home/ceperez/torch/install/share/lua/5.1/trepl/init.lua:363: module 'cutorch' not found:No LuaRocks module found for cutorch
no field package.preload['cutorch']
no file '/home/ceperez/.luarocks/share/lua/5.1/cutorch.lua'
no file '/home/ceperez/.luarocks/share/lua/5.1/cutorch/init.lua'
no file '/home/ceperez/torch/install/share/lua/5.1/cutorch.lua'
no file '/home/ceperez/torch/install/share/lua/5.1/cutorch/init.lua'
no file './cutorch.lua'
no file '/home/ceperez/torch/install/share/luajit-2.1.0-alpha/cutorch.lua'
no file '/usr/local/share/lua/5.1/cutorch.lua'
no file '/usr/local/share/lua/5.1/cutorch/init.lua'
no file '/home/ceperez/.luarocks/lib/lua/5.1/cutorch.so'
no file '/home/ceperez/torch/install/lib/lua/5.1/cutorch.so'
no file './cutorch.so'
no file '/usr/local/lib/lua/5.1/cutorch.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'error'
/home/ceperez/torch/install/share/lua/5.1/trepl/init.lua:363: in function 'require'
neural_style.lua:48: in function 'main'
neural_style.lua:437: in main chunk
[C]: in function 'dofile'
...erez/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:133: in main chunk
[C]: at 0x004064d0

@hughperkins
Contributor

Hi codeaudit, this thread is getting a bit crazy long :-P Do you mind opening a new issue in https://github.com/hughperkins/clnn/issues please? Also, please provide the exact commit, branch, and repository that you are running from. It looks like you are using a branch/commit that is importing cutorch for some reason, but I need to know the exact branch etc to check more closely.

@bbert81

bbert81 commented Nov 11, 2015

It gives me this:
Successfully loaded models/nin_imagenet_conv.caffemodel
MODULE data UNDEFINED
warning: module 'data [type 5]' not found
/home/rob/torch/install/bin/luajit: models/train_val.prototxt.opencl.lua:4: bad argument #1 to 'insert' (table expected, got nil)
stack traceback:
[C]: in function 'insert'
models/train_val.prototxt.opencl.lua:4: in main chunk
[C]: in function 'dofile'
./loadcaffe_wrapper.lua:77: in function 'load'
neural_style.lua:66: in function 'main'
neural_style.lua:484: in main chunk
[C]: in function 'dofile'
.../rob/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670
Any clues?

@hughperkins
Contributor

Strong preference for logging issues at https://github.com/hughperkins/clnn/issues ;-) But... anything to do with prototxt is often a problem with the downloaded model. Can you run md5sum models/* please, and provide the output? On my system I get:

67fbe66d382c55f742a6f8c6171011eb  models/download_models.sh
b568958c0dcf1d97cbcff4c22b02a2be  models/nin_imagenet.caffemodel
8fbacb8dd696607876386e34ff68a84a  models/nin_imagenet_conv.caffemodel
a748f26c3838cb69c04cafe7db87b2f1  models/solver.prototxt.cpu.lua
b2bb41db64297b31704139a3d3210cd8  models/solver.prototxt.lua
b2bb41db64297b31704139a3d3210cd8  models/train_val.prototxt.lua
590f49298e850d571f5477845549d7a5  models/train_val.prototxt.opencl.lua
ccbbdda59210208be39f8974f5b5765e  models/VGG_ILSVRC_19_layers_deploy.prototxt
6fcb910320e8c77a3611e56469fdb833  models/VGG_ILSVRC_19_layers_deploy.prototxt.cpu.lua
18ab1f0a732d4dff05ebe57fc1cca306  models/VGG_ILSVRC_19_layers_deploy.prototxt.lua
96835ff7338b42e24db50b0c64901644  models/VGG_ILSVRC_19_layers_deploy.prototxt.opencl.lua
6adcfbc93e8f6762e6421515940526f4  models/vgg_normalised.caffemodel

(Edit: hmmm, I suppose logging this at clnn would look a bit strange. I guess logging things into this thread is ok for now)

@bbert81

bbert81 commented Nov 11, 2015

7fbe66d382c55f742a6f8c6171011eb models/download_models.sh
b568958c0dcf1d97cbcff4c22b02a2be models/nin_imagenet.caffemodel
8fbacb8dd696607876386e34ff68a84a models/nin_imagenet_conv.caffemodel
21d09edfc3f4cc2546db215d88a49ab2 models/solver.prototxt.lua
086e8c3e1ec6863f8a033db81e94ab67 models/train_val.prototxt.lua
f3599c103dbce6d1873755f39d0148f4 models/train_val.prototxt.opencl.lua
b5c644beabd7cf06bdd9065cfd674c97 models/VGG_ILSVRC_19_layers.caffemodel
ccbbdda59210208be39f8974f5b5765e models/VGG_ILSVRC_19_layers_deploy.prototxt
28c553449fd7080f2475197ffd071aec models/VGG_ILSVRC_19_layers_deploy.prototxt.cpu.lua
f379287f935de278b5f65bf07c456fce models/VGG_ILSVRC_19_layers_deploy.prototxt.lua
bdb3dd013a802d25367d46339cbee6a6 models/VGG_ILSVRC_19_layers_deploy.prototxt.opencl.lua
6adcfbc93e8f6762e6421515940526f4 models/vgg_normalised.caffemodel

@hughperkins
Contributor

Hmmm, looks convincing. will ponder a bit. To avoid spamming everyone subscribed to this thread, I've enabled issue logging at https://github.com/hughperkins/neural-style/issues , and logged your issue at hughperkins#1

@hughperkins
Contributor

Update: spatialaveragepooling ceil mode has now been merged into nn, ie torch/nn#365 , and to clnn master, commit hughperkins/clnn@6e4976c , so we should be good to create a pull request for the opencl changes into neural-style now. Who's going to do that?
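With ceil-mode average pooling merged, the Caffe pooling geometry can be expressed directly in nn (a quick sketch):

```lua
require 'nn'
-- Caffe rounds pooling output sizes up; :ceil() reproduces that, so a 5x5
-- input pooled 2x2 with stride 2 yields 3x3 instead of floor mode's 2x2.
local pool = nn.SpatialAveragePooling(2, 2, 2, 2):ceil()
print(pool:forward(torch.rand(1, 5, 5)):size())  -- 1x3x3
```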

@napsternxg
Contributor Author

I will send that by tomorrow.

@brunoro

brunoro commented Dec 15, 2015

you're great guys, now I can finally create fake Rembrandts with my Radeon 💃

@hughperkins
Contributor

I will send that by tomorrow.

Cool :-)

you're great guys, now I can finally create fake Rembrandts with my Radeon

:-D

napsternxg added a commit to napsternxg/neural-style that referenced this issue Dec 16, 2015
Closes jcjohnson#44

Thanks to @hughperkins and @jcjohnson for all the help.
README updated with the new example.
@napsternxg
Contributor Author

@jcjohnson I have sent the pull request. Ready to merge =)
