When unloading after loading multiple LoRAs on the Flux pipeline, I believe the original norm layers are not restored here: every load that carries norm weights overwrites `transformer._transformer_norm_layers`, so the cached "originals" end up being the norms already patched by the previous LoRA, and `unload_lora_weights` restores those instead.
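For context, here's a minimal sketch of the failure sequence as I understand it (the two LoRA repo ids are placeholders, and this assumes both adapters ship norm-layer weights so `_load_norm_into_transformer` runs on each `load_lora_weights` call):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Snapshot the true, pre-LoRA norm weights.
norm_before = {
    k: v.clone() for k, v in pipe.transformer.state_dict().items() if "norm" in k
}

pipe.load_lora_weights("user/flux-lora-A")  # placeholder; caches the true originals
pipe.load_lora_weights("user/flux-lora-B")  # placeholder; overwrites the cache with A's patched norms
pipe.unload_lora_weights()  # restores A's norms, not the pre-LoRA originals

norm_after = {k: v for k, v in pipe.transformer.state_dict().items() if "norm" in k}
# I'd expect this to pass, but with the current behavior it shouldn't:
assert all(torch.equal(norm_before[k], norm_after[k]) for k in norm_before)
```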
Shouldn't we have something like:
```python
if len(transformer_norm_state_dict) > 0:
    original_norm_layers_state_dict = self._load_norm_into_transformer(
        transformer_norm_state_dict,
        transformer=transformer,
        discard_original_layers=False,
    )
    # Cache the snapshot only on the first load, so unloading restores the
    # true pre-LoRA norm weights instead of norms already patched by an
    # earlier LoRA. The None check matters because unload_lora_weights
    # resets the attribute to None rather than deleting it.
    if getattr(transformer, "_transformer_norm_layers", None) is None:
        transformer._transformer_norm_layers = original_norm_layers_state_dict
```