
faster RCNN #50

Open
northeastsquare opened this issue Jul 2, 2015 · 25 comments

Comments

@northeastsquare

Hello , when will you open source faster RCNN, or any schedule? Thanks in advance.

@adeelwaris88

Hi,
You can find fast-rcnn at the link below.
https://github.com/rbgirshick/fast-rcnn

@wuqiangch

Not fast-rcnn; the question is about faster-rcnn.

@drvoneinzbern

I'd also like to try faster-rcnn and the region proposal network. I've noticed that you deleted the branch "faster-R-CNN" a couple of days ago. Could you push the branch to the remote again? Thanks in advance.

@ShaoqingRen
Owner

Hi, the branch "faster-R-CNN" in my caffe fork exists because I forked the latest caffe version and am modifying it for faster R-CNN. I'm busy with that modification, and with updating the faster R-CNN code for the new MATLAB interface in caffe.
Thanks very much for your patience :)

@drvoneinzbern

Hi, I've checked out the faster-R-CNN branch and it compiled! Thanks a lot. The question now is how to use it. Could you provide an example deploy file for the network and a pre-trained weight file? A short tutorial on how to train a region proposal network would also be nice. Thanks in advance again.

@ShaoqingRen
Owner

Hi Davidsrao,

This branch is just the caffe part of faster R-CNN.

Main matlab code is ready and just waiting for legal procedures. We will release our code very soon.

Best,

Shaoqing


@legolas123

A great algorithm for bypassing the time-consuming selective search.
I implemented faster-rcnn using caffe, but among the top 300 regions generated with CaffeNet or the Zeiler net, I hardly ever get a patch fitting the ground-truth box closely, which is the case with selective-search proposals. With VGG, in contrast, I always get a proposal corresponding to the ground-truth box. My observation is that with a smaller net the algorithm classifies object vs. background correctly but lags in refining the coordinates through regression. I also tried giving more weight to the regression loss, but with no success. So I suspect that with these proposals and a smaller net I won't get the same detection mAP as with selective-search region proposals.
Have you also experienced this with smaller nets?
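For reference, "giving more weight to the regression loss" as tried above amounts to scaling the loss_weight of the bbox branch. A minimal sketch of the combined objective, assuming a simple sum of the two terms (the reg_weight parameter is a hypothetical stand-in for the prototxt loss_weight):

```python
import numpy as np

def smooth_l1(x):
    # Smooth L1: quadratic near zero, linear beyond |x| = 1.
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)

def rpn_loss(cls_loss, bbox_pred, bbox_targets, reg_weight=1.0):
    # reg_weight > 1 emphasizes coordinate regression over classification,
    # which is what raising loss_weight on the bbox branch does.
    reg_loss = smooth_l1(bbox_pred - bbox_targets).sum()
    return cls_loss + reg_weight * reg_loss
```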

@yao5461

yao5461 commented Sep 6, 2015

@legolas123 @ShaoqingRen Could you share the prototxt files of the Region Proposal Network? I'm not sure about the architecture. The paper says: "This architecture is naturally implemented with an n × n conv layer followed by two sibling 1 × 1 conv layers (for reg and cls, respectively)."
Thanks in advance.
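The quoted head structure can be sketched shape-wise in Python: a 1 × 1 conv is just a per-position matrix multiply over channels, so the two sibling heads only differ in their output channel count. All sizes here (256 channels, a 14 × 14 feature map, k = 9 anchors) are illustrative assumptions:

```python
import numpy as np

def rpn_head(feat, k=9):
    """Sketch of the RPN head: given the n x n conv's output feature map,
    apply two sibling 1 x 1 convs (emulated as channel-wise matmuls)."""
    C, H, W = feat.shape
    w_cls = np.random.randn(2 * k, C) * 0.01   # objectness: 2 scores per anchor
    w_reg = np.random.randn(4 * k, C) * 0.01   # box deltas: 4 coords per anchor
    cols = feat.reshape(C, H * W)              # unfold spatial positions
    cls = (w_cls @ cols).reshape(2 * k, H, W)
    reg = (w_reg @ cols).reshape(4 * k, H, W)
    return cls, reg

cls, reg = rpn_head(np.zeros((256, 14, 14)))
print(cls.shape, reg.shape)  # (18, 14, 14) (36, 14, 14)
```

With k = 9 anchors this reproduces the 18- and 36-channel outputs seen in the prototxt shared below.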

@legolas123

There you go.

name: "CaffeNet"

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: 96
    kernel_size: 11
    pad: 5
    stride: 4
  }
}

layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}

layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    pad: 1
    stride: 2
  }
}

layer {
  name: "norm1"
  type: "LRN"
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}

layer {
  name: "conv2"
  type: "Convolution"
  bottom: "norm1"
  top: "conv2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    kernel_size: 5
    pad: 2
    group: 2
  }
}

layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}

layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    pad: 1
    stride: 2
  }
}

layer {
  name: "norm2"
  type: "LRN"
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}

layer {
  name: "conv3"
  type: "Convolution"
  bottom: "norm2"
  top: "conv3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 384
    kernel_size: 3
    pad: 1
  }
}

layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}

layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 384
    kernel_size: 3
    pad: 1
    group: 2
  }
}

layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}

layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    kernel_size: 3
    pad: 1
    group: 2
  }
}

layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}

layer {
  name: "conv6"
  type: "Convolution"
  bottom: "conv5"
  top: "conv6"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    kernel_size: 3
    pad: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "relu6"
  type: "ReLU"
  bottom: "conv6"
  top: "conv6"
}

layer {
  name: "conv6_1"
  type: "Convolution"
  bottom: "conv6"
  top: "conv6_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 18  # objectness scores: 2 classes x 9 anchors
    pad: 0
    kernel_size: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "conv6_2"
  type: "Convolution"
  bottom: "conv6"
  top: "conv6_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 36  # box regression: 4 coordinates x 9 anchors
    pad: 0
    kernel_size: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "loss_cls"
  type: "SoftmaxWithLoss"
  bottom: "conv6_1"
  bottom: "labels"
  bottom: "loss_wts"
  top: "loss_cls"
  loss_weight: 1
}

layer {
  name: "loss_bbox"
  type: "SmoothL1Loss"
  bottom: "conv6_2"
  bottom: "bbox_targets"
  bottom: "bbox_loss_weights"
  top: "loss_bbox"
  loss_weight: 1
}

@xmubingo

xmubingo commented Sep 9, 2015

@legolas123 Thanks!

@yao5461

yao5461 commented Sep 12, 2015

@legolas123 Thanks for sharing! I'm trying to implement faster-rcnn using caffe, but I have some doubts about the input and output of the Region Proposal Network. The dimensions of the output layers are cls_score → 1 × 2k × H × W and bbox_pred → 1 × 4k × H × W, so I guessed the data-layer dimensions would be labels → 1 × k × H × W and bbox_targets → 1 × 4k × H × W. But that's wrong, and I don't know how to design it.
The caffe error is: "Number of labels must match number of predictions; e.g., if softmax axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}." Does that mean the dimension of labels must be 1 × H × W?
I'm a newbie at CNNs and caffe. Thanks in advance.
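One way out of this shape mismatch (a sketch, not necessarily how every implementation does it) is to reshape the (1, 2k, H, W) score blob to (1, 2, k·H, W) before the softmax, so that the softmax axis holds exactly 2 classes and the k anchors fold into the spatial dimensions; the label blob then carries one entry per anchor position:

```python
import numpy as np

k, H, W = 9, 14, 14                      # anchors per location, feature-map size
cls_score = np.zeros((1, 2 * k, H, W))   # RPN objectness output

# Fold the k anchors into the "height" axis so axis 1 has exactly 2 classes.
cls_reshaped = cls_score.reshape(1, 2, k * H, W)

# Labels: one {0, 1} entry per anchor position, N*H'*W' in total,
# matching what the caffe error message demands.
labels = np.zeros((1, 1, k * H, W), dtype=np.int64)

n, _, h2, w2 = cls_reshaped.shape
assert n * h2 * w2 == labels.size
```

The sizes (9 anchors, 14 × 14 map) are illustrative; only the reshape pattern matters.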

@ShaoqingRen
Owner

Hi all,

We have released the Faster R-CNN code at https://github.com/ShaoqingRen/faster_rcnn ; you can check it for more details.

@Elpidam

Elpidam commented Sep 18, 2015

Hello!
I'm here to report a problem while running "make runtest": I get the error "Unknown V1LayerParameter layer type:40". You can see the snapshot below.
(screenshot: selection_019)

@MonPower

@Elpidam @ShaoqingRen
I also hit the same error.
(screenshot: 2015-09-19)

@SunShineMin

I ran into the same issue as above @ShaoqingRen @Elpidam @MonPower, and I found that it comes from UpgradeV1LayerType() in upgrade_proto.cpp: you have to add the Reshape, ROIPooling and SmoothL1Loss types to it.

@Elpidam

Elpidam commented Oct 5, 2015

@SunShineMin Hello! I can't remember how I solved it, but you should make sure you use MATLAB R2014a and the caffe fork the author proposes; that is the only combination that works. I use CPU mode for testing, but you cannot train in this mode, as the author informed me earlier. Please let me know if your problem is solved.

(I saw in a previous thread that you have fine-tuned rcnn on another dataset. Could you provide me some details?)

@soseazi

soseazi commented Oct 6, 2015

@SunShineMin
(screenshot: proposed change to upgrade_proto.cpp)

is this the way you solve it?

@SunShineMin

You first need to transform the images into the required imdb and roidb .mat files. Then you can train the net as the author described @Elpidam

@SunShineMin

Yes @soseazi

@duygusar

duygusar commented Nov 5, 2015

@ShaoqingRen @SunShineMin @soseazi @Elpidam @MonPower

Hi, I am having the same problem, running on R2014a with the faster-rcnn caffe fork. I changed those parts as in @SunShineMin's post, which didn't work, so I also added them to the else-if part. Since I don't know C++, I might be missing something here; can you tell me what else I should modify? Thanks!

@duygusar

duygusar commented Nov 6, 2015

Oh, is it because test_upgrade_proto.cpp uses networks with the unmodified original caffe layers when running make runtest? Still, I would like to make sure: I made all the necessary changes in upgrade_proto.cpp, so answers are appreciated, because it does not build with the change.

@tjbwyk

tjbwyk commented Nov 26, 2015

@duygusar I solved it according to this post: #50 (comment)

How's yours going?

@duygusar

@tjbwyk Oh, I figured out that it is just the test script giving the error, because it uses an unmodified prototxt; the build itself actually works. I skipped runtest.

@hongjiw

hongjiw commented Feb 4, 2016

@soseazi Note that it should be V1LayerParameter_LayerType_SMOOTH_L1_LOSS instead of V1LayerParameter_LayerType_SMOOTH_L1_Loss

@Anhaoxu

Anhaoxu commented Mar 5, 2017
