"Error: 152 of 197 weights are not set" when loading converted model from Keras with Batch Norm Layers #3004

@armand-go

Description

TensorFlow.js version

@tensorflow/tfjs-node: ^1.7.1

Browser version

None; running in the Visual Studio Code terminal.

Describe the problem or feature request

I'm porting a model converted from Keras, along with its pretrained weights, to TF.js.
I followed the instructions in the documentation and imported the weights from a local path. I ran into several issues because I was importing custom layers, but I don't think that has anything to do with the current problem.

When I initialize my model by loading the .json file, I get the following error:

(node:39703) UnhandledPromiseRejectionWarning: Error: 152 of 197 weights are not set: conv1_1/3x3_s1/bn/gamma,conv1_1/3x3_s1/bn/beta,conv1_1/3x3_s1/bn/moving_mean,conv1_1/3x3_s1/bn/moving_variance,conv2_a_1x1_reduce/bn/gamma,conv2_a_1x1_reduce/bn/beta,conv2_a_1x1_reduce/bn/moving_mean,conv2_a_1x1_reduce/bn/moving_variance,conv2_a_3x3/bn/gamma,conv2_a_3x3/bn/beta,conv2_a_3x3/bn/moving_mean,conv2_a_3x3/bn/moving_variance,conv2_a_1x1_increase/bn/gamma,conv2_a_1x1_increase/bn/beta,conv2_a_1x1_increase/bn/moving_mean,conv2_a_1x1_increase/bn/moving_variance,conv2_a_1x1_proj/bn/gamma,conv2_a_1x1_proj/bn/beta,conv2_a_1x1_proj/bn/moving_mean,conv2_a_1x1_proj/bn/moving_variance,conv2_b_1x1_reduce/bn/gamma,conv2_b_1x1_reduce/bn/beta,conv2_b_1x1_reduce/bn/moving_mean,conv2_b_1x1_reduce/bn/moving_variance,conv2_b_3x3/bn/gamma,conv2_b_3x3/bn/beta,conv2_b_3x3/bn/moving_mean,conv2_b_3x3/bn/moving_variance,conv2_b_1x1_increase/bn/gamma,conv2_b_1x1_increase/bn/beta,conv2_b_1x1_increase/bn/moving_mean,conv2_b_1x1_increase/bn/moving_variance,conv3_a_1x1_reduce/bn/gamma,conv3_a_1x1_reduce/bn/beta,conv3_a_1x1_reduce/bn/moving_mean,conv3_a_1x1_reduce/bn/moving_variance,conv3_a_3x3/bn/gamma,conv3_a_3x3/bn/beta,conv3_a_3x3/bn/moving_mean,conv3_a_3x3/bn/moving_variance,conv3_a_1x1_increase/bn/gamma,conv3_a_1x1_increase/bn/beta,conv3_a_1x1_increase/bn/moving_mean,conv3_a_1x1_increase/bn/moving_variance,conv3_a_1x1_proj/bn/gamma,conv3_a_1x1_proj/bn/beta,conv3_a_1x1_proj/bn/moving_mean,conv3_a_1x1_proj/bn/moving_variance,conv3_b_1x1_reduce/bn/gamma,conv3_b_1x1_reduce/bn/beta,conv3_b_1x1_reduce/bn/moving_mean,conv3_b_1x1_reduce/bn/moving_variance,conv3_b_3x3/bn/gamma,conv3_b_3x3/bn/beta,conv3_b_3x3/bn/moving_mean,conv3_b_3x3/bn/moving_variance,conv3_b_1x1_increase/bn/gamma,conv3_b_1x1_increase/bn/beta,conv3_b_1x1_increase/bn/moving_mean,conv3_b_1x1_increase/bn/moving_variance,conv3_c_1x1_reduce/bn/gamma,conv3_c_1x1_reduce/bn/beta,conv3_c_1x1_reduce/bn/moving_mean,conv3_c_1x1_reduce/bn/moving_variance,conv3_c_3x3/bn/gamma,conv3_c_3x3/bn/beta,conv3_c_3x3/bn/moving_mean,conv3_c_3x3/bn/moving_variance,conv3_c_1x1_increase/bn/gamma,conv3_c_1x1_increase/bn/beta,conv3_c_1x1_increase/bn/moving_mean,conv3_c_1x1_increase/bn/moving_variance,conv4_a_1x1_reduce/bn/gamma,conv4_a_1x1_reduce/bn/beta,conv4_a_1x1_reduce/bn/moving_mean,conv4_a_1x1_reduce/bn/moving_variance,conv4_a_3x3/bn/gamma,conv4_a_3x3/bn/beta,conv4_a_3x3/bn/moving_mean,conv4_a_3x3/bn/moving_variance,conv4_a_1x1_increase/bn/gamma,conv4_a_1x1_increase/bn/beta,conv4_a_1x1_increase/bn/moving_mean,conv4_a_1x1_increase/bn/moving_variance,conv4_a_1x1_proj/bn/gamma,conv4_a_1x1_proj/bn/beta,conv4_a_1x1_proj/bn/moving_mean,conv4_a_1x1_proj/bn/moving_variance,conv4_b_1x1_reduce/bn/gamma,conv4_b_1x1_reduce/bn/beta,conv4_b_1x1_reduce/bn/moving_mean,conv4_b_1x1_reduce/bn/moving_variance,conv4_b_3x3/bn/gamma,conv4_b_3x3/bn/beta,conv4_b_3x3/bn/moving_mean,conv4_b_3x3/bn/moving_variance,conv4_b_1x1_increase/bn/gamma,conv4_b_1x1_increase/bn/beta,conv4_b_1x1_increase/bn/moving_mean,conv4_b_1x1_increase/bn/moving_variance,conv4_c_1x1_reduce/bn/gamma,conv4_c_1x1_reduce/bn/beta,conv4_c_1x1_reduce/bn/moving_mean,conv4_c_1x1_reduce/bn/moving_variance,conv4_c_3x3/bn/gamma,conv4_c_3x3/bn/beta,conv4_c_3x3/bn/moving_mean,conv4_c_3x3/bn/moving_variance,conv4_c_1x1_increase/bn/gamma,conv4_c_1x1_increase/bn/beta,conv4_c_1x1_increase/bn/moving_mean,conv4_c_1x1_increase/bn/moving_variance,conv5_a_1x1_reduce/bn/gamma,conv5_a_1x1_reduce/bn/beta,conv5_a_1x1_reduce/bn/moving_mean,conv5_a_1x1_reduce/bn/mo
ving_variance,conv5_a_3x3/bn/gamma,conv5_a_3x3/bn/beta,conv5_a_3x3/bn/moving_mean,conv5_a_3x3/bn/moving_variance,conv5_a_1x1_increase/bn/gamma,conv5_a_1x1_increase/bn/beta,conv5_a_1x1_increase/bn/moving_mean,conv5_a_1x1_increase/bn/moving_variance,conv5_a_1x1_proj/bn/gamma,conv5_a_1x1_proj/bn/beta,conv5_a_1x1_proj/bn/moving_mean,conv5_a_1x1_proj/bn/moving_variance,conv5_b_1x1_reduce/bn/gamma,conv5_b_1x1_reduce/bn/beta,conv5_b_1x1_reduce/bn/moving_mean,conv5_b_1x1_reduce/bn/moving_variance,conv5_b_3x3/bn/gamma,conv5_b_3x3/bn/beta,conv5_b_3x3/bn/moving_mean,conv5_b_3x3/bn/moving_variance,conv5_b_1x1_increase/bn/gamma,conv5_b_1x1_increase/bn/beta,conv5_b_1x1_increase/bn/moving_mean,conv5_b_1x1_increase/bn/moving_variance,conv5_c_1x1_reduce/bn/gamma,conv5_c_1x1_reduce/bn/beta,conv5_c_1x1_reduce/bn/moving_mean,conv5_c_1x1_reduce/bn/moving_variance,conv5_c_3x3/bn/gamma,conv5_c_3x3/bn/beta,conv5_c_3x3/bn/moving_mean,conv5_c_3x3/bn/moving_variance,conv5_c_1x1_increase/bn/gamma,conv5_c_1x1_increase/bn/beta,conv5_c_1x1_increase/bn/moving_mean,conv5_c_1x1_increase/bn/moving_variance
    at new ValueError (./AudioReco/node_modules/@tensorflow/tfjs-layers/dist/errors.js:68:28)
    at LayersModel.Container.loadWeights (./AudioReco/node_modules/@tensorflow/tfjs-layers/dist/engine/container.js:569:23)
    at ./AudioReco/node_modules/@tensorflow/tfjs-layers/dist/models.js:303:27
    at step (./AudioReco/node_modules/@tensorflow/tfjs-layers/dist/models.js:54:23)
    at Object.next (./AudioReco/node_modules/@tensorflow/tfjs-layers/dist/models.js:35:53)
    at fulfilled (./AudioReco/node_modules/@tensorflow/tfjs-layers/dist/models.js:26:58)

I understand this has something to do with all the Batch Normalization layers the model uses, but how do I properly set the gamma, beta, moving mean, and moving variance for each of them?
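
For reference, my rough understanding of what "setting" them manually would look like, assuming I had the tensors on my side (the helper below is purely hypothetical and the tensor arguments are placeholders, not anything I have working):

const tf = require('@tensorflow/tfjs-node');

// Purely hypothetical sketch: assign batch-norm parameters to one layer by name.
// gamma, beta, movingMean and movingVariance are placeholder tf.Tensor1D values;
// in practice they would have to come out of the converted weight shards.
function setBatchNormWeights(model, layerName, gamma, beta, movingMean, movingVariance) {
    const layer = model.getLayer(layerName);
    // Keras BatchNormalization layers store their weights in this order:
    // [gamma, beta, moving_mean, moving_variance]
    layer.setWeights([gamma, beta, movingMean, movingVariance]);
}

// e.g. setBatchNormWeights(this.model, 'conv1_1/3x3_s1/bn', gamma, beta, mean, variance);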

Code to reproduce the bug / link to feature request

The model is taken from the work of Wei Xie on VoxCeleb. The original weights of the model can be found on his Google Drive, and the converted ones in .json format can be found at this link.

Code run:

model.js

const tf = require('@tensorflow/tfjs-node')

class VGGVox_Model {
    constructor() {
        this.model = null; // will hold the loaded LayersModel
    }

    async init() {
        // Loading the custom layers
        require('./layers/VladPooling');
        require('./layers/Lambda');

        // Load the converted model (topology + weight manifest) from disk
        this.model = await tf.loadLayersModel('file://resources/model/model.json', false);
        //this.model.summary();
    }
}

(async function main() {
  const myModel = new VGGVox_Model();
  await myModel.init();
})();
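
(One note in case it matters for the repro: my understanding is that the second argument to tf.loadLayersModel is an options object rather than a boolean, so non-strict loading would look roughly like the line below. This is an assumption on my part.)

// Assumed variant: pass { strict: false } so that missing weights produce warnings instead of the error above
this.model = await tf.loadLayersModel('file://resources/model/model.json', { strict: false });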

As for the two custom layers (Vlad Pooling and Lambda), I added them to the Google Drive folder as well. The Lambda layer's code was taken from here, and I didn't include the call() of the Vlad Pooling layer yet since I couldn't check the outputs (the model doesn't load 😄). Anyway, they are required because the model loading won't work without them.
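
My understanding is that a custom layer only becomes resolvable by tf.loadLayersModel once it is registered with the serialization registry; the sketch below shows the pattern I mean (class body elided, and the exact class/config details are assumptions on my part):

const tf = require('@tensorflow/tfjs-node');

// Sketch: register a custom layer under the className that appears in model.json.
class VladPooling extends tf.layers.Layer {
    static get className() { return 'VladPooling'; } // must match the name stored in model.json
    // computeOutputShape(), call() and getConfig() are omitted in this sketch
}
tf.serialization.registerClass(VladPooling);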


Thanks to the devs for the time you'll spend on this issue!
