🚀 Feature
Provide a way to honor the `training` flag passed to `batch_norm`.
Motivation
After this PR, ONNX export started using the `training` parameter passed to `torch.onnx.export` instead of the one passed to `batch_norm`. This made it impossible to export frozen BN in training mode.
Pitch
How about honoring the `training` flag of `batch_norm` when `training=torch.onnx.TrainingMode.PRESERVE`?
Alternatives
I personally prefer the behavior before this change, so reverting it would be best for me. However, changing the behavior for `TrainingMode.EVAL` and `TrainingMode.TRAINING` as well might be confusing, which is why I am proposing to change it only for `TrainingMode.PRESERVE`.
Additional context
I think it's common to use fixed BN even during training, e.g. when training a detection/segmentation model with a pre-trained classification backbone.
See frozen BN in detectron2 for example: https://github.com/facebookresearch/detectron2/blob/84a09834f6d838534951907fd9ef90fec73614d2/detectron2/layers/batch_norm.py#L14
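For readers unfamiliar with the pattern, below is a minimal sketch of a frozen BatchNorm in the spirit of detectron2's `FrozenBatchNorm2d` (the class name and defaults here are illustrative, not the detectron2 implementation): the statistics and affine parameters live in buffers, so the layer behaves identically in train and eval mode and is never updated by the optimizer.

```python
import torch
import torch.nn as nn


class FrozenBatchNorm2d(nn.Module):
    """BatchNorm2d whose statistics and affine parameters are frozen buffers."""

    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        # Buffers, not nn.Parameters: saved/loaded with the state dict,
        # but invisible to the optimizer and never updated in forward().
        self.register_buffer("weight", torch.ones(num_features))
        self.register_buffer("bias", torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))

    def forward(self, x):
        # Always normalize with the frozen statistics, regardless of
        # self.training: out = weight * (x - mean) / sqrt(var + eps) + bias.
        scale = self.weight * (self.running_var + self.eps).rsqrt()
        shift = self.bias - self.running_mean * scale
        return x * scale.view(1, -1, 1, 1) + shift.view(1, -1, 1, 1)
```

Because `forward` never branches on `self.training`, such a layer is exactly the case where an exporter that overrides per-module modes (instead of preserving them) produces a graph the author did not intend.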
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof