
Trainable batch normalization #467

Closed · ViliamVadocz opened this issue Aug 16, 2023 · 10 comments

@ViliamVadocz

I am trying to translate some code I wrote with tch-rs into candle as an experiment to see what the library is like.
It looks like I stumbled into a road-block almost immediately. I have a convolutional neural network made up of many residual blocks. Each residual block internally uses batch normalization.

In tch-rs, I could use nn::batch_norm_2d. Is batch normalization not implemented by candle yet?
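
For reference, a minimal sketch of the kind of residual block described above, written with tch-rs since that is the library being ported from. The builder may be spelled nn::batch_norm2d rather than nn::batch_norm_2d depending on the tch-rs version, and the channel count, kernel size, and config fields are placeholders, not a verified API listing.

```rust
use tch::nn::{self, ModuleT};

// Sketch of a residual block: conv -> batch norm -> relu, twice, plus a
// skip connection.
fn res_block(p: &nn::Path, channels: i64) -> impl ModuleT {
    let cfg = || nn::ConvConfig { padding: 1, bias: false, ..Default::default() };
    let conv1 = nn::conv2d(p / "conv1", channels, channels, 3, cfg());
    let bn1 = nn::batch_norm2d(p / "bn1", channels, Default::default());
    let conv2 = nn::conv2d(p / "conv2", channels, channels, 3, cfg());
    let bn2 = nn::batch_norm2d(p / "bn2", channels, Default::default());
    nn::func_t(move |xs, train| {
        // `train` switches the batch-norm layers between batch statistics
        // (training) and the stored running statistics (inference).
        let ys = xs
            .apply(&conv1)
            .apply_t(&bn1, train)
            .relu()
            .apply(&conv2)
            .apply_t(&bn2, train);
        (xs + ys).relu()
    })
}
```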

@LaurentMazare (Collaborator)

Right, batch normalization is not available yet. We started by focusing on language models, where group-norm is far more frequent than batch-norm. We've just started adding the vision bits, e.g. convolutions, so as to get stable-diffusion to run. We would like to add some actual vision models now, so batch norm is likely to be added soonish (a week or two, I would say).

@LaurentMazare (Collaborator)

Not sure if it will be enough for your use case, but I've just merged #508, which adds a batch normalization layer. It can be used in a similar way to nn::batch_norm_2d, but with the limitation that it's only designed for inference and would not work for training (it doesn't keep track of or learn the running stats). I've tested it on some examples against the PyTorch implementation and it seems reasonable, but let me know if you see anything weird with it.
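
To make the inference-only limitation concrete, here is a minimal, crate-agnostic sketch of a single-channel batch norm in both modes; the names (running_mean, momentum, gamma, beta) follow the usual PyTorch conventions and are assumptions for illustration, not candle's API.

```rust
// gamma/beta are the learnable scale and shift; running_mean/running_var are
// the statistics an inference-only layer reads but never updates.
fn batch_norm_1ch(
    xs: &[f32],
    running_mean: &mut f32,
    running_var: &mut f32,
    gamma: f32,
    beta: f32,
    momentum: f32,
    eps: f32,
    training: bool,
) -> Vec<f32> {
    let (mean, var) = if training {
        // Training: normalize with the statistics of the current batch...
        let n = xs.len() as f32;
        let mean = xs.iter().sum::<f32>() / n;
        let var = xs.iter().map(|x| (x - mean).powi(2)).sum::<f32>() / n;
        // ...and update the running estimates that inference will use later.
        *running_mean = (1.0 - momentum) * *running_mean + momentum * mean;
        *running_var = (1.0 - momentum) * *running_var + momentum * var;
        (mean, var)
    } else {
        // Inference (what #508 provides): read the stored statistics only.
        (*running_mean, *running_var)
    };
    xs.iter()
        .map(|x| gamma * (x - mean) / (var + eps).sqrt() + beta)
        .collect()
}
```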

@ViliamVadocz (Author)

I am training networks, so unfortunately this is not enough for my use case.

@LaurentMazare (Collaborator)

Interesting, what models do you actually care about?
I had the feeling that most recent architectures use some form of group/layer norm instead of batch-norm (e.g. dinov2, the unet/vae from stable diffusion), so I was thinking that we would only have batch-norm for inference, as it's a mess to get right for training, contrary to group/layer norms. That said, I'm certainly happy to reconsider if there is much demand for it.

@ViliamVadocz (Author)

I am working with ResNets for AlphaZero / MuZero.

@ViliamVadocz (Author)

Has there been any progress on this front?

ViliamVadocz changed the title from "Where is batch normalization?" to "Trainable batch normalization" on Oct 26, 2023
@Awpteamoose

> Interesting, what models do you actually care about? I had the feeling that most recent architectures use some form of group/layer norm instead of batch-norm (e.g. dinov2, the unet/vae from stable diffusion) and so I was thinking that we would only have batch-norm for inference as it's a mess to get right for training contrary to group/layer norms. That said, certainly happy to reconsider if there is much demand for it.

I'm using MobileNetV3, which needs trainable batchnorms, as well as other mobile-scale realtime classification convnets.

@LaurentMazare (Collaborator)

Not much progress I'm afraid. @Awpteamoose do you have some MobileNetV3 or other model code that you could share? It would be very interesting to point at it as an external resource that uses candle. If I understand correctly, you're training these models? I would have assumed that nowadays even mobile-scale vision models have mostly switched to transformers like tinyvit etc.

@Awpteamoose

Awpteamoose commented Oct 26, 2023

I was porting my implementation from dfdx (coreylowman/dfdx#794) and halfway through noticed that batchnorms aren't trainable, so I don't really have any code to share.

> I would have assumed that nowadays even mobile scale vision models have mostly switched to transformers like tinyvit etc.

I'm probably just out of date as the field moves very fast, but the transformers I have looked at also require an order of magnitude more FLOPS. I'm doing inference on tiny single-core CPUs as part of massively parallelised video analysis, so even real-time is too slow for me.

@nkoppel (Contributor)

nkoppel commented Dec 30, 2023

@LaurentMazare This should be closed due to the merge of #1504
