fixing import bug and gating by torch version
Summary: Migrating fvcore's quantization imports to torch.ao caused #87. This fixes the bug by gating on the torch version (and on the availability of the required attributes) before importing.

Differential Revision: D31706627

fbshipit-source-id: 1b0374325093e8492bb93afd7ea8e8b140d8bcb6
HDCharles authored and facebook-github-bot committed Oct 16, 2021
1 parent 4a39fce commit 21171ec
Showing 2 changed files with 10 additions and 2 deletions.
6 changes: 5 additions & 1 deletion fvcore/common/checkpoint.py
@@ -18,7 +18,11 @@
 if TORCH_VERSION >= (1, 11):
     from torch.ao import quantization
     from torch.ao.quantization import ObserverBase, FakeQuantizeBase
-else:
+elif (
+    TORCH_VERSION >= (1, 8)
+    and hasattr(torch.quantization, "FakeQuantizeBase")
+    and hasattr(torch.quantization, "ObserverBase")
+):
     from torch import quantization
     from torch.quantization import ObserverBase, FakeQuantizeBase

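The gate above can be sketched as pure logic, which makes the two conditions easy to see and test. The helper names below (`parse_torch_version`, `pick_quantization_source`) are hypothetical illustrations, not fvcore's actual API; the real code derives `TORCH_VERSION` elsewhere and performs the imports directly.

```python
import re


def parse_torch_version(version_str):
    """Parse a version string like "1.10.0+cu113" into an int tuple, e.g. (1, 10, 0).

    Strips local build metadata ("+cu113") and stops at pre-release
    suffixes ("a0", "rc1") so tuple comparison against (1, 11) works.
    """
    nums = []
    for part in version_str.split("+")[0].split("."):
        m = re.match(r"\d+", part)
        if not m:
            break
        nums.append(int(m.group()))
    return tuple(nums)


def pick_quantization_source(version_str, has_fake_quant_base, has_observer_base):
    """Mirror the commit's gating: which module should the imports come from?

    Returns "torch.ao.quantization" for torch >= 1.11, "torch.quantization"
    for 1.8+ when both base classes exist there, and None otherwise.
    """
    v = parse_torch_version(version_str)
    if v >= (1, 11):
        return "torch.ao.quantization"
    if v >= (1, 8) and has_fake_quant_base and has_observer_base:
        return "torch.quantization"
    return None
```

The key design point the commit makes is that the pre-1.11 branch is now an `elif` with `hasattr` checks rather than a bare `else`, so old torch builds that lack `FakeQuantizeBase`/`ObserverBase` skip the import entirely instead of raising `ImportError`.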
6 changes: 5 additions & 1 deletion tests/test_checkpoint.py
@@ -24,7 +24,11 @@
     disable_observer,
     enable_fake_quant,
 )
-else:
+elif (
+    TORCH_VERSION >= (1, 8)
+    and hasattr(torch.quantization, "FakeQuantizeBase")
+    and hasattr(torch.quantization, "ObserverBase")
+):
     from torch import quantization
     from torch.quantization import (
         get_default_qat_qconfig,
