Add logging for internal adoption tracking (#426)
Summary:
Pull Request resolved: #426

Add adoption logging for the models with the most internal usage. To avoid double counting, logging is enabled only at model entrypoints.
This diff covers: vision transformer, MLP, av_concat_fusion, contrastive loss, and transformer fusion. CLIP logging was enabled previously.
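
The pattern applied at each entrypoint is a single call to torch._C._log_api_usage_once in the module constructor, keyed by the class name; the key is logged at most once per process. A minimal sketch of the same pattern on a hypothetical entrypoint module (ExampleFusionModel is illustrative only and not part of this diff):

import torch
from torch import nn


class ExampleFusionModel(nn.Module):
    """Hypothetical entrypoint module illustrating the adoption-logging pattern."""

    def __init__(self, in_dim: int, out_dim: int) -> None:
        super().__init__()
        # Internal PyTorch hook for usage telemetry; the key is recorded at most
        # once per process, mirroring the calls added in this diff.
        torch._C._log_api_usage_once(f"torchmultimodal.{self.__class__.__name__}")
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)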

Reviewed By: ankitade

Differential Revision: D45532797

fbshipit-source-id: 7e1a2a56a99bc0fe180c1103402003dc1ad5cbe0
Peng Chen authored and facebook-github-bot committed May 4, 2023
1 parent f51c16b commit b5981a4
Showing 2 changed files with 2 additions and 1 deletion.
2 changes: 1 addition & 1 deletion torchmultimodal/modules/layers/mlp.py
@@ -42,7 +42,7 @@ def __init__(
         normalization: Optional[Callable[..., nn.Module]] = None,
     ) -> None:
         super().__init__()
-
+        torch._C._log_api_usage_once(f"torchmultimodal.{self.__class__.__name__}")
         layers = nn.ModuleList()

         if hidden_dims is None:
1 change: 1 addition & 0 deletions
@@ -165,6 +165,7 @@ def __init__(
         logit_scale_max: Optional[float] = math.log(100),
     ):
         super().__init__()
+        torch._C._log_api_usage_once(f"torchmultimodal.{self.__class__.__name__}")

         if not logit_scale_min and not logit_scale_max:
             raise ValueError(
