This repository has been archived by the owner on Aug 5, 2022. It is now read-only.
My question is quite similar to https://github.com/intel/caffe/issues/150, but that issue's code is Python, and the Python code works fine for me. However, I get a segmentation fault in C++. Here is my C++ code:
#define CPU_ONLY
#include <caffe/caffe.hpp>
#include <iostream>

using namespace caffe;  // NOLINT(build/namespaces)
using namespace std;

int channel = 3;
int height = 227;
int width = 227;

int main() {
  char model_file[] = "/caffe/models/bvlc_alexnet/deploy.prototxt";
  char weights_file[] = "/caffe/models/bvlc_alexnet/bvlc_alexnet.caffemodel";
  Caffe::set_mode(Caffe::CPU);
  static Net<float>* net_ = new Net<float>(model_file, TEST);
  net_->CopyTrainedLayersFrom(weights_file);
  for (int batch_size = 1; batch_size < 5; batch_size++) {
    Blob<float>* input_layer = net_->input_blobs()[0];
    input_layer->Reshape(batch_size, channel, height, width);
    net_->Reshape();
    cout << "forward begin with batch_size " << batch_size << endl;
    net_->Forward();
    cout << "forward end with batch_size " << batch_size << endl;
  }
  return 0;
}
Why does the Python code in issue 150 work, while my C++ code gets a segmentation fault?
Where can I find more information about which layers allow a variable batch size and which do not?
Is there any way to use AlexNet with a variable batch size with the MKLDNN engine?
Extra question: I found that using a bigger batch size does not improve FPS (frames per second) in intel-caffe, although a bigger batch size usually improves throughput in GPU mode. Is this normal with MKLDNN?
I have the same problem. Although my net doesn't contain a fully connected layer, forward passing with different batch sizes still causes a segmentation fault in my C++ program. @ftian1
A fully connected layer's weight count is usually oc x ic x ih x iw if the axis is 1. If the axis is 0, the weight count would be oc x in x ic x ih x iw, where "in" is the batch size. The AlexNet case is the former, so changing "in" is allowed in your case but not in the latter.
As for the C++ code issue, it's caused by two things:
you have to call mn::init() before creating the net;
remove the net_->Reshape() call, as it's redundant and will trigger an assertion.