
Any simple example? #550

Closed
rmanor opened this issue Jun 27, 2014 · 76 comments

@rmanor
Contributor

rmanor commented Jun 27, 2014

Hi,

I started with Caffe and the mnist example ran well.
However, I cannot understand how I am supposed to use this for my own data for a classification task.
What should the data format be? Where should I specify the files?
How do I see the results for a test set?
None of this is mentioned in the documentation.
Any pointers would be appreciated, thanks.

@dennis-chen

Hi rmanor,

I'm not very knowledgeable as I just got started using Caffe as well, so folks should feel free to jump in and correct me. The documentation for the general procedure of training with your data is here: http://caffe.berkeleyvision.org/imagenet_training.html , and you will be able to do all your training by copying and modifying the files in CAFFE_ROOT_DIR/examples/imagenet, which we will call the imagenet directory. Using the imagenet architecture should yield decent out of the box results for categorizing images.

To summarize, the steps I followed to train Caffe were:

  1. Group your data into a training folder and a testing folder. Caffe will train on one set of images and test its accuracy on the other set of images. Your data should be formatted as 256x256 color jpeg files. For each set, create a text file specifying the categories that the pictures belong to. This text file is formatted like so:

/home/my_test_dir/picture-foo.jpg 0
/home/my_test_dir/picture-foo1.jpg 1

where picture-foo belongs to category 0 and picture-foo1 belongs to category 1.

  2. Now copy and modify create_imagenet.sh from the imagenet directory, changing the arguments to point to your folders and text files. Run create_imagenet.sh and it will generate training and testing leveldb directories. Caffe will work with these leveldb directories from now on.
  3. Copy and modify make_imagenet_mean.sh from the imagenet directory, changing the arguments to point at your spanking new leveldb folders. This will generate mean binaryproto files that caffe uses to normalize images, improving your results. I would recommend specifying absolute paths for everything to minimize headaches.
  4. Copy and modify imagenet_{deploy,solver,train,val}.prototxt. You'll want to change the source and mean_file parameters in imagenet_{train,val} to point to your leveldbs and your mean binaryproto files (again, absolute paths). You may also want to change the batch_size parameter based on the hardware that you'll be running caffe on. Lastly, change the solver.prototxt file to point to your newly modified train and val prototxt files! I believe you can leave deploy.prototxt alone.
  5. Take a step back and make sure you haven't missed anything. You will have deploy, solver, train, and val prototxt files; two image mean binaryproto files; one train_leveldb folder; and one val_leveldb folder. That's two folders and six files in total.
  6. You guessed it: copy and modify train_imagenet.sh! Point it to your solver prototxt file.
  7. Run the modified train_imagenet script. This will periodically spit out solverstate files and data files with names like caffe_train_iter_#.
  8. After training terminates, you can find a script in CAFFE_ROOT_DIR/build/tools called test_net.bin. test_net.bin takes your val.prototxt, a caffe_train_iter_# data file, and the number of testing iterations as arguments. It will tell you how your trained network is doing.
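For step 1, the label files can be generated with a short script rather than by hand. Here's a minimal sketch; the folder layout and helper name are hypothetical (it assumes one sub-folder per category under the training or testing root):

```python
import os

# Assign label 0, 1, 2, ... to the sorted category sub-folder names and
# write one "<path> <label>" line per jpeg, in the format caffe expects.
def write_label_file(image_root, list_path):
    categories = sorted(
        d for d in os.listdir(image_root)
        if os.path.isdir(os.path.join(image_root, d))
    )
    with open(list_path, "w") as out:
        for label, category in enumerate(categories):
            category_dir = os.path.join(image_root, category)
            for fname in sorted(os.listdir(category_dir)):
                if fname.lower().endswith(".jpg"):
                    out.write("%s %d\n" % (os.path.join(category_dir, fname), label))
    return categories
```

The same function can then be run once for the training folder and once for the testing folder.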

Best of luck!

@rmanor
Contributor Author

rmanor commented Jun 28, 2014

Thanks, I read the imagenet example; it's a bit clearer now.
I'm trying to write some code that converts my data into leveldb for caffe.
How should I compile it?
When I use g++ it doesn't find the caffe include files for some reason.
Thanks.

@dennis-chen

Hi rmanor, I would recommend against writing your own code to convert your data into a leveldb. convert_imageset.bin in the CAFFE_ROOT_DIR/build/tools directory will do this for you automatically. To see how to use convert_imageset to convert your data, take a look at the contents of create_imagenet.sh in the CAFFE_ROOT_DIR/examples/imagenet directory. If you follow the first two steps in my post above and copy and modify the create_imagenet shell script, you shouldn't have to compile anything, and you will save yourself a lot of time. That said, if you truly intend to write your own code to replicate or extend the functionality of convert_imageset, I'm afraid I can't help you, because I've only used built-in caffe tools so far.

@rmanor
Contributor Author

rmanor commented Jun 28, 2014

I need to write my own code because my data isn't images... but if convert_imageset doesn't do any image processing then maybe I can use it anyway.
I'll try, thanks.

@rmanor rmanor closed this as completed Jun 28, 2014
@rmanor rmanor reopened this Jun 28, 2014
@dennis-chen

What kind of data do you have? Please disregard all the instructions above if you aren't using images/this isn't for computer vision purposes. I am curious because I was under the impression that convolutional neural nets were designed to recognize visual patterns. Are you re-purposing caffe for something else?

@rmanor
Contributor Author

rmanor commented Jun 28, 2014

I think convnets were designed for images, but they had success in recent
years with speech, electroencephalography and more types of data.
I'm looking at the source of convert_imageset and indeed it has no
image-specific processing.
I'll try to use it as is. Thank you. :)


@sguada
Contributor

sguada commented Jun 28, 2014

You could also use the HDF5 layer to save and read data.

Sergio


@rmanor
Contributor Author

rmanor commented Jun 28, 2014

@sguada I see that I can create HDF5 files from MATLAB, cool!
Any examples on how to use this type of layer? Thanks!

@sguada
Contributor

sguada commented Jun 30, 2014

@sergeyk could you post a simple example using an HDF5 data layer?

@rmanor
Contributor Author

rmanor commented Jul 1, 2014

Thanks. Specifically, I would like to know how the hdf5 file should be built.
I looked at the code and it seemed like there should be a dataset "data" and a dataset "label" in the same file, and MATLAB wouldn't let me create that.
Am I correct?
Thanks.

@sergeyk
Contributor

sergeyk commented Jul 1, 2014

Okay, will PR an example to master soon.

@rmanor
Contributor Author

rmanor commented Jul 1, 2014

Thanks.
Btw: my data doesn't have to be images, right?
Just checking before I invest too much time :) Thanks.


@sergeyk
Contributor

sergeyk commented Jul 1, 2014

Correct, it can be anything.


@rmanor
Contributor Author

rmanor commented Jul 1, 2014

Thank you, I appreciate the help from all of you.


@sergeyk
Contributor

sergeyk commented Jul 3, 2014

@rmanor sorry I haven't gotten around to packaging up a notebook example, but please consider https://github.com/BVLC/caffe/blob/master/src/caffe/test/test_data/generate_sample_data.py

Running python generate_sample_data.py will generate the test hdf5 files that you're interested in.

This example creates a simple dataset and is used in https://github.com/BVLC/caffe/blob/master/src/caffe/test/test_hdf5data_layer.cpp

I am not sure how to create this example in Matlab, but it should be equally easy.
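Until that example lands, the gist of the file layout can be sketched in a few lines of Python with h5py. The shapes and file names below are invented for illustration (10 samples of a 1x1x8 "image" each); this mirrors the idea behind generate_sample_data.py rather than copying it:

```python
import h5py
import numpy as np

# A minimal sketch of the layout the HDF5 data layer reads: one file
# holding a float "data" dataset and a "label" dataset side by side.
num_samples = 10
data = np.random.randn(num_samples, 1, 1, 8).astype(np.float32)
label = (np.arange(num_samples) % 2).astype(np.float32)

with h5py.File("sample_data.h5", "w") as f:
    f.create_dataset("data", data=data)
    f.create_dataset("label", data=label)

# The layer's source parameter points at a text file listing .h5 paths,
# one per line:
with open("sample_data_list.txt", "w") as f:
    f.write("sample_data.h5\n")
```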

@rmanor
Contributor Author

rmanor commented Jul 4, 2014

@sergeyk Thanks!

@rmanor
Contributor Author

rmanor commented Jul 9, 2014

Hey,
Look at the first post by dennis-chen; I used his method of using test_net.bin.
That worked for me...
I'm now also trying to classify using a python script, in the hope that it will be easier.


On Wed, Jul 9, 2014 at 5:02 AM, wusx11 notifications@github.com wrote:

@rmanor https://github.com/rmanor Hi rmanor, which step are you
reaching now? I get into using caffe recently and have similar confusion...
I have already use caffe to train my own data, and I have already got a
.solverstate file. But I'm not sure how to use this file to classify a new
input? I mean how to use the system I have already trained?
Thanks!



@sergeyk
Contributor

sergeyk commented Jul 14, 2014

Will be resolved by #691

@sergeyk sergeyk closed this as completed Jul 14, 2014
@roseperrone

The documentation link in @dennis-chen's first post is broken. I think it should be http://caffe.berkeleyvision.org/gathered/examples/imagenet.html

@pulkit1991

Thanks a lot @dennis-chen for your post. It was really helpful! Do you have any similar post for testing the data? I want to test an image with the learned model using the python wrapper. I am editing the classifier.py file in CAFFE_ROOT/python to classify the test image, but there are some strange errors. Any help in this regard would be really useful.

@dennis-chen

@pulkit1991, I'm very glad you found it helpful! Below are instructions I wrote on testing the learned model with the python wrapper when I was documenting this earlier this summer; hope it helps!


How do I use the python wrapper?

Compiling the python wrapper on futuregrid is an uphill battle that you will have to fight alone, brave warrior. I got the wrapper working on my personal computer but couldn't do it on futuregrid. That's not a terrible thing, because classification tasks don't take much computing power once the model itself has been generated, so I'd recommend getting caffe and the python wrapper installed elsewhere if futuregrid is too tough a nut to crack.

That said, if you import numpy and add "CAFFE_HOME_DIR/python" to your system path, you should be able to import and use caffe without a problem in your python programs. Initiating a caffe classifier looks like this:

self.net = caffe.Classifier(DEPLOY_PROTOTXT,TRAINED_NET)
self.net.set_phase_test()
self.net.set_mode_cpu()
self.net.set_mean('data', IMAGE_MEAN)
self.net.set_channel_swap('data', (2,1,0))
self.net.set_input_scale('data', 255)

As stated previously, your DEPLOY_PROTOTXT, TRAINED_NET, and IMAGE_MEAN should've been generated by training. Just plug in their file paths and caffe does the rest of the magic. To do classification on a specific image, call:

scores = self.net.predict([caffe.io.load_image(img_path)])

where scores is a vector of length 1000. The value at index i is caffe's confidence that the image belongs to class i. For example, if scores looks like [.999,.1,...], then caffe has high confidence that the image is of class 0. You defined the classes and labels earlier on in a text file when generating the leveldbs for training.
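Picking the winning class out of that vector is just an argmax; a minimal plain-Python sketch (the score values here are invented):

```python
# scores[i] is the network's confidence that the image belongs to class i;
# the predicted class is the index with the highest score (argmax).
def predicted_class(scores):
    return max(range(len(scores)), key=lambda i: scores[i])

scores = [0.999, 0.1, 0.003]  # invented confidences for 3 classes
print(predicted_class(scores))  # class 0 wins
```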

But I trained on 2 classes, not 1000. What's going on?
Don't worry: if you copied the imagenet model, it has 1000 outputs at its final layer. Caffe is smart enough to map only to the outputs that you specified, so the vector will still map to the numbers that you used to label your classes. All the "unused" classes consistently evaluate to some obscenely small number, as expected. To eliminate this annoying but harmless result, dig into your val and train prototxt files: fc8, the inner product layer, says that it has 1000 outputs. Change this to the number of classes that you actually have and you're all done. Remember that numbering starts at 0!
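In the old imagenet prototxt, that change might look like this (a sketch only; the layer naming follows the 2014-era prototxt format, and 2 classes is just an example):

```
layers {
  name: "fc8"
  type: INNER_PRODUCT
  bottom: "fc7"
  top: "fc8"
  inner_product_param {
    num_output: 2  # was 1000; set this to your own number of classes
  }
}
```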

@pulkit1991

Hey @dennis-chen! Thanks a lot for this help! I have a good idea now of how to proceed. There is one small thing I would like to ask: for the mean file you need a .npy file, but I don't have one yet for my data. I am using my own data for training and testing. What should I do about the mean file?
Thanks.

@pulkit1991

Just found out!
#290
Thanks! :)

@vinc30

vinc30 commented Dec 15, 2014

Thanks for the detailed tutorial! You're awesome!

@vinc30

vinc30 commented Dec 15, 2014

btw, can you tell me more about how you got DEPLOY_PROTOTXT? I copied one from caffe/models/bvlc_reference_caffenet/deploy.prototxt, tried to adjust it and use it with the python wrapper you mentioned above, but ended up with some strange errors.
Any comment?
Thanks!

@roseperrone

To write a deploy.prototxt, copy this to a new file called deploy.prototxt:

name: "<the name in the train_val.prototxt>"
input: "data"
input_dim: 10
input_dim: 3
input_dim: 224 # or whatever the crop_size is in train_val.prototxt
input_dim: 224

Append to that all layers in train_val.prototxt.
Delete the first few layers that don't have a "bottom" field.
Delete all parameters that have to do exclusively with learning,
e.g.:

  • blobs_lr
  • weight_decay
  • weight_filler
  • bias_filler

Change the value of the layer that contains a "num_output" field
to the number of categories.
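Put together, the top of such a deploy.prototxt might look like this (a sketch only; the net name and dimensions are examples, and the copied layers are elided):

```
name: "MyNet"
input: "data"
input_dim: 10
input_dim: 3
input_dim: 224
input_dim: 224
# ...then the layers copied from train_val.prototxt, minus the data
# layers and the learning-only parameters listed above.
```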

@zacharyhorvitz

@andersonRocha091
I think the issue is your path to the lmdb folders. First, I would check the prototxt file that solver.prototxt links to. Within that file there should be definitions of the paths to the train and val datasets; make sure that these point to the datasets you are using. If all looks correct, then try recreating the lmdbs using create_imagenet.sh, but only after deleting the original lmdbs (otherwise the folders will not be created).

@andersonRocha091

@zacharyhorvitz thank you very much for the heads-up. I was looking for the dataset lmdb files and found that when I ran create_imagenet.sh I executed it as root, so the lmdb files for the validation images were saved in a totally different location. After copying those lmdbs to the right path, everything worked just fine. Thank you for the help.

@rakesh1990ramesh

Hello,

I am new to caffe and my goal is to download a pre-trained network (the MIT Places database trained network). I just want to supply a few test images to this network to see the results before I dive into training it on my own, and other stuff.

Is there a document or some source I can look into so that I can do this quickly? I am running Ubuntu.

@adibi

adibi commented Jul 12, 2015

hi,
I have the same problem as sbanerj1 posted:
In @dennis-chen's original post I could do steps 1 & 2, but after that I couldn't find any of the .prototxt files mentioned, i.e. imagenet_{deploy,solver,train,val}.prototxt. Are they supposed to be generated by some script? Or should they be inside the examples/imagenet folder by default? I don't have them. Please help.

Also the make_imagenet_mean.sh script only generated data/ilsvrc12/imagenet_mean.binaryproto file for me. Is that all it should generate? Or am I missing something?

thanks

@zacharyhorvitz

@adibi

If I remember correctly, they should be in models/bvlc_reference_caffenet (or something of the like)

@adibi

adibi commented Jul 13, 2015

thanks, found it.
Anyway, @dennis-chen mentioned in the "take a step back" step that after the earlier steps "You will have deploy, solver, train, and val prototxt files; two image mean binaryproto files; one train_leveldb folder, and one val_leveldb folder." But I have deploy, solver, and train_val prototxt files (3 prototxt files), one image mean binaryproto file (not two), and the two other folders. Am I missing something, or is that all I should have?
thanks,

@adibi

adibi commented Jul 14, 2015

hi, I would like it if someone could respond to my previous question.
Meanwhile, I ran the command 'sudo examples/Beela/train_caffenet.sh' (where Beela is my directory in which all the referenced files and directories (create, train_val, solver, deploy, ilsvrc12_train_lmdb, ilsvrc12_val_lmdb) are stored),
but caffe got stuck on the line 'Iteration 0, Testing net (#0)' for about 3 hours, and the rest looks like this:
I0713 15:49:17.405992 21562 solver.cpp:294] Iteration 0, Testing net (#0)
I0713 18:16:16.448539 21562 solver.cpp:343] Test net output #0: accuracy = 0
I0713 18:16:16.493039 21562 solver.cpp:343] Test net output #1: loss = 5.93315 (* 1 = 5.93315 loss)
F0713 18:16:19.504870 21562 syncedmem.hpp:27] Check failed: ptr host allocation of size 297369600 failed
*** Check failure stack trace: ***
@ 0x7eff1262bdaa (unknown)
@ 0x7eff1262bce4 (unknown)
@ 0x7eff1262b6e6 (unknown)
@ 0x7eff1262e687 (unknown)
@ 0x7eff12a3b415 caffe::SyncedMemory::mutable_cpu_data()
@ 0x7eff12a3bbb2 caffe::Blob<>::mutable_cpu_data()
@ 0x7eff12a2d5d4 caffe::ConvolutionLayer<>::Forward_cpu()
@ 0x7eff129732ba caffe::Net<>::ForwardFromTo()
@ 0x7eff12973557 caffe::Net<>::ForwardPrefilled()
@ 0x7eff12a50977 caffe::Solver<>::Step()
@ 0x7eff12a512af caffe::Solver<>::Solve()
@ 0x4065a6 train()
@ 0x404a81 main
@ 0x7eff11b3dec5 (unknown)
@ 0x40502d (unknown)
@ (nil) (unknown)
Aborted (core dumped)

what is wrong with what I have done?

@sharath88

I was able to train cifarnet with my own dataset (with the example steps provided by dennis-chen). Now I have a trained network with .caffemodel and .solverstate files.

How do I call test_net.bin to know how well the network is trained? Can anyone tell me what arguments I have to pass to this script?

Thanks

@netsourc

dennis-chen,
I'm a new user of caffe.
I read your first answer and it helped me a lot,
but there is something that is still not clear to me: what do you mean in the last answer?
After training I got the two files; now what do I have to do to see the results of the test?
Is there a readable file that categorizes every image, and where could I find such a file?
thanks a lot

@sharath88

you need to call caffe.Net to categorize the images; I use the python script below to categorize them.

import numpy as np
import matplotlib.pyplot as plt
import re
import os
import sys

# Make sure that caffe is on the python path:
caffe_root = '/home/sharath/caffe/'  # this file is expected to be in {caffe_root}/examples
rootdir = '/home/sharath/caffe/examples/cifar10/test/images/'
sys.path.insert(0, caffe_root + 'python')

import caffe

plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

if not os.path.isfile(caffe_root + 'examples/cifar10/cifar2_full_iter_70000.caffemodel'):
    print("Trained model not found")

caffe.set_mode_cpu()
net = caffe.Net(caffe_root + 'examples/cifar10/cifar_deploy.prototxt',
                caffe_root + 'examples/cifar10/cifar2_full_iter_70000.caffemodel',
                caffe.TEST)

# input preprocessing: 'data' is the name of the input blob == net.inputs[0]
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2,0,1))
transformer.set_mean('data', np.load(caffe_root + 'examples/cifar10/out.npy').mean(1).mean(1))  # mean pixel
transformer.set_raw_scale('data', 255)  # the reference model operates on images in [0,255] instead of [0,1]
transformer.set_channel_swap('data', (2,1,0))  # the reference model has channels in BGR order instead of RGB

# set net to batch size of 50
net.blobs['data'].reshape(50,3,64,32)

# sort file names the way that humans expect (picture2 before picture10)
convert = lambda text: int(text) if text.isdigit() else text
alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]

text_file = open("predict.txt", "w")
for fileList in os.walk(rootdir):
    filenames = fileList[2]
    filenames.sort(key=alphanum_key)
    for fname in filenames:
        net.blobs['data'].data[...] = transformer.preprocess('data', caffe.io.load_image(rootdir + fname))
        out = net.forward()
        print("Predicted class is #{} for ".format(out['prob'].argmax()) + fname)
        text_file.write("{}\n".format(out['prob'].argmax()))
text_file.close()

@ghost

ghost commented Sep 8, 2015

Hey guys,

Could you please tell me if the way I am generating HDF5 from images is correct or not? I have written the following script based on the demo.m from the hdf5creation directory. What I am not 100% sure about are the lines between asterisks. After reading an image I scale the pixel values down by 255 so that they fall between 0 and 1. Then I change the order of channels from RGB to BGR, and finally switch the order of rows and columns before passing everything to the "store2hdf5" function. I am not subtracting the mean like we do in the LMDB format because the data has to fall between 0 and 1. Am I correct?

%------------------------------------------------------
filename='trial.h5';
num_total_samples=3000;

chunksz=500;
created_flag=false;
totalct=0;

for batchno=1:ceil(num_total_samples/chunksz),
fprintf('batch no. %d\n', batchno);
last_read=(batchno-1)*chunksz;
left_num_samples = num_total_samples - last_read;
Adap_chunksz = min(left_num_samples,chunksz);

batchdata = zeros(IMAGE_DIM,IMAGE_DIM,3,Adap_chunksz);
batchlabs = zeros(1,Adap_chunksz);
for t=1:Adap_chunksz,
n = (batchno-1)*chunksz + t;
full_image_address = [patches_root_address Filename_Stack_Cell{n}];
im = imread(full_image_address);
im = imresize(im, [IMAGE_DIM IMAGE_DIM], 'bilinear');
% *************************************
im = single(im)/255;
im = im(:,:,[3 2 1]); % RGB -> BGR
im = permute(im, [2 1 3]); % switch row and column to match with caffe
% *************************************
batchdata(:,:,:,t) = im;
batchlabs(t) = Label_Vec(n);
end

% store to hdf5
startloc=struct('dat',[1,1,1,totalct+1], 'lab', [1,totalct+1]);
curr_dat_sz=store2hdf5(filename, batchdata, batchlabs, ~created_flag, startloc, chunksz);
created_flag=true;% flag set so that file is created only once
totalct=curr_dat_sz(end);% updated dataset size (#samples)
disp(['Number of Images Left to Store (' SetType ' Mode): ' num2str(left_num_samples)]);
end

@shaibagon
Member

You might find this stackoverflow thread relevant.

@preksha12

when I am running caffe on imagenet, the imagenet_mean.binaryproto file is not getting generated; what could be the possible reason behind that?

@vnakon

vnakon commented Nov 9, 2015

Hi y'all,

I have followed http://caffe.berkeleyvision.org/imagenet_training.html and prepared an image set with 20 images, 18 for training and 2 for validation (2 classes: apples and tomatoes, images converted to 256x256).

Now when I start to train, it takes around 2-3 hours just for "iteration 0". What might be my issue?
(on vagrant - OS: ubuntu 64, ram=5Gb, CPU mode)

solver:

net: "models/bvlc_reference_caffenet/train_val.prototxt"
test_iter: 1000
test_interval: 1000
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 200000
display: 20
max_iter: 450000
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "models/bvlc_reference_caffenet/caffenet_train"
solver_mode: CPU

train_val.prototxt:

data_param {
source: "examples/imagenet/test_train_lmdb"
batch_size: 64
backend: LMDB
}
data_param {
source: "examples/imagenet/test_val_lmdb"
batch_size: 16
backend: LMDB
}


Thanks for any ideas in advance!

@ctrevino

ctrevino commented Nov 9, 2015

I don't know how big the image files are, but it seems the problem is the batch size. I used a batch size of four. Also look for the iter_size variable in the solver.prototxt file; this number is usually set to 20.
Hope this helps.

Carlos
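Applied to the train_val.prototxt fragment quoted above, the suggested fix is just a smaller batch_size (a sketch):

```
data_param {
  source: "examples/imagenet/test_train_lmdb"
  batch_size: 4  # was 64; smaller batches need far less RAM in CPU mode
  backend: LMDB
}
```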


@vnakon

vnakon commented Nov 12, 2015

Thank you, sir. Carlos, you solved my issue with batch size = 4.

But now have accuracy and training time questions:

  1. Training "bvlc_reference_caffenet" on 9 classes (train: 61 images / val: 26 images, size 50x50) takes around 3-6 minutes per 100 iterations, and it starts with accuracy 0.195 and after a few steps gets loss: -nan (fixed by changing base_lr: 0.0001). If every 100 iterations takes 2-6 minutes, it is going to take me quite long to get accuracy close to 80%; that doesn't sound okay with only 9 classes, right?

  2. Training the same image set with 9 classes on lenet for mnist is much faster, around 1 minute per 100 iterations, but there the accuracy changes as follows: 0.01, ... 0.19, ... 0.0334, 0.0338... What might be the reason that the accuracy degrades?

machine: ubuntu64, ram=5Gb, CPU mode, i7

Thank you in advance

@saifrahmed

FYI http://caffe.berkeleyvision.org/imagenet_training.html referenced above is a broken link.

@camille-jing

How do I make my own data set into leveldb format, then input it to the Siamese network in caffe? (If anyone here speaks Chinese, please get in touch; I'd be very grateful!)

@ashenafimenza

Guys, I am new to deep learning and the caffe framework. My prof asked me to do a project: download at least 20 images for each of 10 different cities, train the system, and try to recognize them. I have tried to read the documentation and some tutorials on deep learning, but I don't know how to start. I managed to install caffe linked with python on mac, and I already have the images (training and validation). I need someone to guide me through :)

@sorajsadeghi

hi guys... I'm very new to deep learning... I have a small subset of a large video dataset with 15 categories and 500 images per category, for performing video event detection. I'm using caffe and the caffe reference model for training. First I extracted 5 key frames from each short video and trained the network on these training images along with their labels. My question is this: how do I refine this model structure to fit my data? Is it enough to set the last fully connected layer to 15, or must the convolution, pooling, and other layers be refined too? For the next step I want to extract the last convolution layer's output for clustering... how do I do this? thanks a lot, friends.

@ToruHironaka

I wrote a python script to convert image data into hdf5, below. Caffe accepts my hdf5 dataset, but I always get low accuracy and high loss, so I suspect my conversion script did not convert the data properly. Can anyone find problems in my script?

# initialize the total number of files 
# and input file list
numberOfFiles=0
inputFileList=[]
hdfFileList=[]
channel=3
visualize=False
# open train or test file list with label
with open(inputFile, 'r') as inputData:
    for fileName in inputData: 
        # this input file list includes label information as well
        inputFileList.append(fileName)  
        numberOfFiles = numberOfFiles + 1 

print "A number of files: ", numberOfFiles

# this loop will open file from inputFileList one by one and put it into
# hdf output

# initialize index 
index=0
fileIndex=0
periodNum=100

if numberOfFiles < periodNum:
    periodNum=numberOfFiles


# loop through the list of data files
for dataFileName in inputFileList:

    if (fileIndex % periodNum) == 0:

        # open and create hdf5 file output directory
        outputHDFFile = fileType + "-" + str(fileIndex) + ".h5"
        print "file name: " + outputHDFFile
        outputHDFPath = join(outputDir, outputHDFFile)
        print "hdf5 file: ", outputHDFPath
        fileOut = h5py.File(outputHDFPath, 'w')
        hdfFileList.append(outputHDFPath)

        data = fileOut.create_dataset("data", (periodNum,channel,width,height), dtype=np.float32)
        label = fileOut.create_dataset("label", (periodNum,), dtype=np.float32)

        # image data matrix
        imageStack = np.empty((periodNum,channel,width,height)) # Create empty HxWxN array/matrix
        labelStack = np.empty((periodNum))
        # initialize index at every periodNum 
        index=0

    dataPathandLabel=dataFileName.split(' ', 1)
    dataFilePath=dataPathandLabel[0]
    # print(dataFilePath)
    dataLabel=dataPathandLabel[1]
    # print(dataLabel)
    lastSubDirName=dataFilePath.split('/')
    subDirName=lastSubDirName[-1]
    # print(subDirName)

    labelNumber=int(dataLabel)
    # print labelNumber

    # load image:
    if channel == 1: 
        img=cv2.imread(dataFilePath, cv2.CV_LOAD_IMAGE_GRAYSCALE) # load grayscale
        print 'grayscale: ', img.shape

        imageStack[index,:,:,:]=img
        labelStack[index]=labelNumber

    elif channel == 3:
        img = cv2.imread(dataFilePath, cv2.CV_LOAD_IMAGE_COLOR) # color
        # height, width = img.shape[:2]
        img = cv2.resize(img,(width, height), interpolation = cv2.INTER_CUBIC)
        if index < 5 and visualize:
            plt.imshow(img)
            plt.show()

        img = img.transpose(2,1,0)

        # print 'RGB', img.shape
        # imageStack[index,:,:,:]=img
        # labelStack[index]=labelNumber
        data[index,:,:,:]=img
        label[index]=labelNumber

    index=index+1
    fileIndex=fileIndex+1

    if (fileIndex % periodNum) == 0:
        # close the last file
        imageStack.__init__()
        labelStack.__init__()
        fileOut.close()
        print 'file close'


# list hdf5 train dataset file list
outputHDFListFile = fileType + '.txt'
outputHDFListPath = join(outputDir, outputHDFListFile)

if exists(outputHDFListPath): 
    outputHDFListFile = fileType + '-list.txt'
    outputHDFListPath = join(outputDir, outputHDFListFile)

print 'list: ', outputHDFListFile
print 'Output dir: ', outputHDFListPath

with open(outputHDFListPath, 'w') as trainOut:
    for hdfFile in hdfFileList:
        print hdfFile
        writeOut=hdfFile + "\n"
        trainOut.write(writeOut)

@benduv

benduv commented Jan 13, 2016

Hi guys.
I'm new to Caffe too and I'm a bit stuck on the training step of the tutorial. When I run train_caffenet.sh, I get this error: ./CAFFE_ROOT/build/tools/caffe: not found
and this: --solver=/CAFFE_ROOT/examples/imagenet/solver.prototxt: not found
even though I've modified the paths to those files correctly. I don't understand...
btw, I also have only deploy, solver, and train_val prototxt files (3 prototxt files), but I don't think that's linked to my issue.
Can anyone help?

@seanbell

To new caffe users with questions:

Please do not post usage, installation, or modeling questions, or other requests for help to Issues.
Use the caffe-users list instead. This helps developers maintain a clear, uncluttered, and efficient view of the state of Caffe.

(from https://raw.githubusercontent.com/BVLC/caffe/master/CONTRIBUTING.md)

@komalsukre

Hello Everyone,
I am new to caffe. I want to classify SAR images... how should I select the train data and test data?

@mtrth

mtrth commented Feb 11, 2016

I am working on a data set which has 10 classes, and during training each image has 2 or more classes present in it; for example:
/image1.png 1,2,8
/image2.png 2,3,6

Is it possible to use imagenet to train for such multi-label, multi-instance classification?

@ayeshasGithub

In imagenet, where does it unify/use the 3 channels of the input image? Is every channel classified separately, or is the output of every channel summed up? How, and where?

@monjoybme

How do I prepare train.txt and val.txt from images? Please explain.

@seanbell

To all users with questions about how to use caffe, please visit the tutorial page or ask questions on the caffe-users mailing list.

I am locking this conversation because it is generating noise in the tracker/notifications, and because the users posting here aren't actually being helped.

@BVLC BVLC locked and limited conversation to collaborators Mar 16, 2016