BatchNorm behavior during inference #17

Open
slala2121 opened this issue Aug 23, 2020 · 1 comment

@slala2121

This is a bit more detailed, but I wanted to check whether the batch norm layer in the computation graph (first picture below) matches the one you get when importing the model.

When zeroing in on the model graph loaded in inference mode, it seems that batch norm doesn't use running_mean and running_var during the forward pass (see the first graph below; I also checked by computing the expected output, and it didn't match).

But when I construct a simpler net, batch norm does use running_mean and running_var during the forward pass (see the second graph; I also verified this by computing the expected output).
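
For reference, here is a minimal sketch of the kind of check I mean (this assumes a standalone tf.keras BatchNormalization layer in TF 2.x, not the imported graph itself): compare the layer's inference-mode output against the output computed directly from its moving statistics.

```python
import numpy as np
import tensorflow as tf

# Build a standalone BatchNormalization layer and give it non-trivial moving stats.
bn = tf.keras.layers.BatchNormalization()
x = np.random.randn(16, 8).astype("float32")
for _ in range(10):
    bn(x, training=True)  # updates moving_mean / moving_variance

# Inference-mode output from the layer.
y = bn(x, training=False).numpy()

# Expected output computed directly from the moving statistics.
gamma = bn.gamma.numpy()
beta = bn.beta.numpy()
mean = bn.moving_mean.numpy()
var = bn.moving_variance.numpy()
expected = gamma * (x - mean) / np.sqrt(var + bn.epsilon) + beta

print(np.allclose(y, expected, atol=1e-5))  # True => moving stats are used in inference
```

In the simpler net this check passes, but against the imported model graph the outputs don't match.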

I'm wondering if this could be an error related to FusedBatchNorm vs. FusedBatchNormV3: when importing the model, the graph uses FusedBatchNorm, yet when constructing a new model, it uses FusedBatchNormV3. I'm not sure what causes this difference in the op used to build the BN layer.
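
To see which variant each graph ends up with, something like the following sketch can enumerate the fused batch-norm op types in a frozen GraphDef ("model.pb" is a placeholder path for the imported model):

```python
import tensorflow as tf

# Load a frozen GraphDef and list the fused batch-norm op types it contains.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

bn_ops = {node.op for node in graph_def.node if "FusedBatchNorm" in node.op}
print(bn_ops)  # e.g. {'FusedBatchNorm'} vs. {'FusedBatchNormV3'}
```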

Thanks.

[Screenshot 1: batch norm subgraph of the imported model in inference mode]

[Screenshot 2: batch norm subgraph of the simpler, newly constructed net]

@mikevoets
Owner

Hi @slala2121, thanks for your question! Sorry for my extremely late reply.

Could you first let me know which Python and TensorFlow versions you're running this with? And could you give me some instructions on how to reproduce this graph?
