
The values of running variance from the pretrained file are negative #25

Open
WilliamKRobert opened this issue Jan 16, 2019 · 2 comments

@WilliamKRobert

No description provided.

@angshine

But it seems that kitti_demo.py can work properly with that negative running_var in batchnorm, why is that?
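The surprise here is that, assuming the repo uses standard PyTorch nn.BatchNorm2d (an assumption, not something stated in this thread), a negative running_var should normally produce NaNs in eval mode, because the layer normalizes by sqrt(running_var + eps). A minimal sketch of that expectation, not code from this repo:

    import torch
    import torch.nn as nn

    # Hedged demo: in eval mode BatchNorm computes
    # (x - running_mean) / sqrt(running_var + eps),
    # so a negative running variance makes the square root NaN.
    bn = nn.BatchNorm2d(3).eval()
    with torch.no_grad():
        bn.running_var.fill_(-1.0)  # hypothetical negative running variance
    out = bn(torch.randn(1, 3, 8, 8))
    print(torch.isnan(out).any())  # tensor(True)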

@angshine

> But it seems that kitti_demo.py can work properly with that negative running_var in batchnorm, why is that?

Well, it turns out that I referenced the wrong Caffe layer when parsing the Caffe weights. The order of the blobs can be found in bn_layer.cpp:

    // slope
    this->blobs_[0].reset(new Blob<Dtype>(shape));
    shared_ptr<Filler<Dtype> > slope_filler(GetFiller<Dtype>(
        this->layer_param_.bn_param().slope_filler()));
    slope_filler->Fill(this->blobs_[0].get());
    // bias
    this->blobs_[1].reset(new Blob<Dtype>(shape));
    shared_ptr<Filler<Dtype> > bias_filler(GetFiller<Dtype>(
        this->layer_param_.bn_param().bias_filler()));
    bias_filler->Fill(this->blobs_[1].get());
    // moving average mean
    this->blobs_[2].reset(new Blob<Dtype>(shape));
    caffe_set(this->blobs_[2]->count(), Dtype(0),
        this->blobs_[2]->mutable_cpu_data());
    // moving average variance
    this->blobs_[3].reset(new Blob<Dtype>(shape));
    caffe_set(this->blobs_[3]->count(), frozen_ ? Dtype(1) : Dtype(0),
        this->blobs_[3]->mutable_cpu_data());
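So the four blobs are slope (scale), bias, moving-average mean, and moving-average variance, in that order. A hedged sketch of loading them into a PyTorch module in that order (the function and argument names here are hypothetical, not from this repo's conversion script):

    import numpy as np
    import torch

    def load_caffe_bn(bn: torch.nn.BatchNorm2d, blobs) -> None:
        # Blob order from bn_layer.cpp above:
        #   blobs[0] = slope (gamma), blobs[1] = bias (beta),
        #   blobs[2] = moving-average mean, blobs[3] = moving-average variance
        with torch.no_grad():
            bn.weight.copy_(torch.from_numpy(np.asarray(blobs[0]).ravel()))
            bn.bias.copy_(torch.from_numpy(np.asarray(blobs[1]).ravel()))
            bn.running_mean.copy_(torch.from_numpy(np.asarray(blobs[2]).ravel()))
            bn.running_var.copy_(torch.from_numpy(np.asarray(blobs[3]).ravel()))

Misreading the blob order in this way would also explain the "negative running variance": the slope (gamma) can legitimately be negative, while a true variance cannot.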
