Hi, I have a question about the ComplexBN class. At lines 409–410:

```python
if training in {0, False}:
    return input_bn
```

Why don't we need moving_mean and moving_variance at test time?
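For context, here is a minimal real-valued sketch (illustrative names, not the ComplexBN code) of why moving statistics matter at inference: normalizing a single test example with its own batch statistics collapses it to zero, while moving averages give a stable result.

```python
import numpy as np

def bn_train(x, eps=1e-5):
    # Training: normalize with the current batch's statistics.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

def bn_inference(x, moving_mean, moving_var, eps=1e-5):
    # Inference: normalize with the moving averages accumulated during
    # training, so the output does not depend on batch composition.
    return (x - moving_mean) / np.sqrt(moving_var + eps)

# A batch of one normalized with its own statistics is all zeros,
# which is exactly the degenerate behaviour moving statistics avoid.
x = np.array([[2.0, 4.0]])
assert np.allclose(bn_train(x), 0.0)
out = bn_inference(x, moving_mean=np.array([1.0, 3.0]),
                   moving_var=np.array([1.0, 1.0]))
assert np.allclose(out, [[1.0, 1.0]])
```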
I also see another function at line 421:

```python
def normalize_inference():
    if self.center:
        inference_centred = inputs - K.reshape(self.moving_mean, broadcast_mu_shape)
    else:
        inference_centred = inputs
    return ComplexBN(
        inference_centred, self.moving_Vrr, self.moving_Vii,
        self.moving_Vri, self.beta, self.gamma_rr, self.gamma_ri,
        self.gamma_ii, self.scale, self.center, axis=self.axis
    )
```

but it is never called at test time. Is this a mistake?
For comparison, the Keras source code returns normalize_inference() at test time:

```python
if training in {0, False}:
    return normalize_inference()
```
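The dispatch on the `training` flag can be sketched like this (a simplified stand-in for the control flow in Keras' BatchNormalization.call(), with illustrative function names, not the actual library source):

```python
def batchnorm_call(inputs, training, normalize_batch, normalize_inference):
    # When `training` is explicitly 0 or False, take the inference
    # branch, which should rely on the moving statistics.
    if training in {0, False}:
        return normalize_inference(inputs)
    # Otherwise normalize with batch statistics (the moving averages
    # are updated as a side effect during training).
    return normalize_batch(inputs)

# Usage with trivial stand-in branches:
out = batchnorm_call([1.0], training=False,
                     normalize_batch=lambda x: "batch",
                     normalize_inference=lambda x: "inference")
assert out == "inference"
```

The concern in this issue is that ComplexBN returns the batch-normalized tensor on this branch instead of calling normalize_inference().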
Thanks!
I think so. Why not use normalize_inference() at test time?