This repository has been archived by the owner on Aug 5, 2022. It is now read-only.

Forward pass with different batch size cause segmentation fault in C++ #270

Open
sysuxiaoming opened this issue Apr 11, 2019 · 4 comments
@sysuxiaoming
Hi, all!

My question is quite similar to https://github.com/intel/caffe/issues/150, but the code there is Python, and that Python code works fine for me. However, I get a segmentation fault in C++. Here is my C++ code:

#define CPU_ONLY
#include <caffe/caffe.hpp>
#include <iostream>

using namespace caffe;  // NOLINT(build/namespaces)
using namespace std;

int channel = 3;
int height = 227;
int width = 227;

int main() {
    char model_file[] = "/caffe/models/bvlc_alexnet/deploy.prototxt";
    char weights_file[] = "/caffe/models/bvlc_alexnet/bvlc_alexnet.caffemodel";
    Caffe::set_mode(Caffe::CPU);
    static Net<float>* net_ = new Net<float>(model_file, TEST);
    net_->CopyTrainedLayersFrom(weights_file);


    for (int batch_size = 1; batch_size < 5; batch_size++) {
        Blob<float>* input_layer = net_->input_blobs()[0];
        input_layer->Reshape(batch_size, channel, height, width);
        net_->Reshape();
        cout << "forward begin with batch_size " << batch_size << endl;
        net_->Forward();
        cout << "forward end with batch_size " << batch_size << endl;
    }
    return 0;
}
@ftian1
Contributor

ftian1 commented Apr 11, 2019

AlexNet contains fully connected layers; they don't allow a variable batch size.

@sysuxiaoming
Author

Hi, ftian1.

First, thanks for the reply.

I still have some questions:

  1. Why does the Python code in issue 150 work well while my C++ code gets a segmentation fault?
  2. Where can I get more information about which layers allow a variable batch size and which do not?
  3. Is there any way to use AlexNet with a variable batch size with the MKLDNN engine?
  4. An extra question: I found that using a bigger batch size does not improve FPS (frames per second) in intel-caffe, while a bigger batch size usually improves speed in GPU mode. Is this normal with MKLDNN?

@yflv-yanxia

yflv-yanxia commented May 15, 2019

I have the same problem. Although my net doesn't contain a fully connected layer, a forward pass with a different batch size still causes a segmentation fault in my C++ program. @ftian1

@ftian1
Contributor

ftian1 commented May 16, 2019

A fully connected layer's weight count is usually oc x ic x ih x iw if its axis is 1. If the axis is 0, the weight count would be oc x in x ic x ih x iw. For the AlexNet case it is the former, so changing the batch size "in" is allowed in your case, but it would not be allowed in the latter.

As for the C++ code issue, it's caused by two things:

  1. You have to call mn::init() before creating the net.
  2. Remove the net_->Reshape() call; it's redundant and will trigger an assertion.
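Applying those two fixes to the program from the original post gives a sketch like the following. This is untested and assumes intel-caffe: mn::init() comes from caffe's multinode support, and the exact header and signature may differ in your tree, so check caffe/multinode/ in your checkout. The model paths are the ones from the original post.

```cpp
#define CPU_ONLY
#include <caffe/caffe.hpp>
// Assumed location of mn::init(); verify against your intel-caffe sources.
#include <caffe/multinode/mlsl.hpp>
#include <iostream>

using namespace caffe;  // NOLINT(build/namespaces)

int main(int argc, char** argv) {
    // Fix 1: initialize the multinode machinery before any Net is created.
    mn::init(&argc, &argv);

    Caffe::set_mode(Caffe::CPU);
    Net<float> net("/caffe/models/bvlc_alexnet/deploy.prototxt", TEST);
    net.CopyTrainedLayersFrom("/caffe/models/bvlc_alexnet/bvlc_alexnet.caffemodel");

    const int channel = 3, height = 227, width = 227;
    for (int batch_size = 1; batch_size < 5; ++batch_size) {
        Blob<float>* input_layer = net.input_blobs()[0];
        input_layer->Reshape(batch_size, channel, height, width);
        // Fix 2: no explicit net.Reshape() here -- the forward pass reshapes
        // as needed, and the extra call triggers an assertion in intel-caffe.
        std::cout << "forward begin with batch_size " << batch_size << std::endl;
        net.Forward();
        std::cout << "forward end with batch_size " << batch_size << std::endl;
    }
    return 0;
}
```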

@yflv-yanxia @sysuxiaoming

3 participants