This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

Commit

Fixing the unittests 3.8 (#5002)
* skipping the apex test for torch >= 1.13

* retired the starspace model

* clearml import ignored

* Updated config.yml

* Updated config.yml

* Revert "retired the starspace model"

This reverts commit 00391c9.
mojtaba-komeili committed Apr 11, 2023
1 parent c354b89 commit d921d22
Showing 3 changed files with 13 additions and 2 deletions.
2 changes: 1 addition & 1 deletion .circleci/config.yml
@@ -191,7 +191,7 @@ commands:
       - run:
           name: Install checklist and dependencies
           command: |
-            python -m pip install --progress-bar off checklist
+            for i in $(seq 1 3); do python -m pip install --progress-bar off checklist && s=0 && break || s=$? && sleep 10; done; (exit $s)
   setupcuda:
     description: Setup CUDA
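The CI change above wraps `pip install` in a retry loop to survive transient network failures. The same retry-with-delay pattern can be written as a reusable shell function; this is a minimal sketch (the `retry` function name and its arguments are illustrative, not part of the commit):

```shell
#!/bin/sh
# Retry a command up to a maximum number of attempts, sleeping between tries.
# Usage: retry <max_attempts> <sleep_seconds> <command...>
retry() {
    max="$1"; delay="$2"; shift 2
    i=1
    while [ "$i" -le "$max" ]; do
        "$@" && return 0                      # success: stop retrying
        i=$((i + 1))
        [ "$i" -le "$max" ] && sleep "$delay" # pause before the next attempt
    done
    return 1                                  # every attempt failed
}

# Mirroring the CircleCI step above (commented out to avoid a network call):
#   retry 3 10 python -m pip install --progress-bar off checklist
```

Compared with the inline one-liner in the config, a function like this keeps the step readable when several install commands need the same treatment.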
12 changes: 11 additions & 1 deletion parlai/utils/testing.py
@@ -71,7 +71,7 @@


 try:
-    import clearml
+    import clearml  # noqa: F401
 
     CLEARML_AVAILABLE = True
 except ImportError:
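The `try`/`except ImportError` block above is the standard optional-dependency guard: the module-level flag records availability instead of letting the import crash test collection. A minimal self-contained sketch of the pattern (the module name here is deliberately nonexistent so the fallback branch is exercised):

```python
# Optional-import guard: set a module-level availability flag rather than
# failing at import time, so tests can be skipped when the dependency is
# missing. "nonexistent_module_for_demo" is a placeholder name.
try:
    import nonexistent_module_for_demo  # noqa: F401

    DEMO_DEP_AVAILABLE = True
except ImportError:
    DEMO_DEP_AVAILABLE = False

print(DEMO_DEP_AVAILABLE)
```

The `# noqa: F401` added by this commit suppresses flake8's "imported but unused" warning, which is a false positive here because the import exists only to probe availability.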
@@ -102,6 +102,16 @@ def skipUnlessTorch17(testfn, reason='Test requires pytorch 1.7+'):
     return unittest.skipIf(skip, reason)(testfn)
 
 
+def skipUnlessTorch113(testfn, reason='Test requires pytorch older than 1.13'):
+    if not TORCH_AVAILABLE:
+        skip = True
+    else:
+        from packaging import version
+
+        skip = version.parse(torch.__version__) >= version.parse('1.13')
+    return unittest.skipIf(skip, reason)(testfn)
+
+
 def skipIfGPU(testfn, reason='Test is CPU-only'):
     """
     Decorate a test to skip if a GPU is available.
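The new decorator skips a test whenever torch is at 1.13 or newer, matching the commit message ("skipping the apex test for torch >= 1.13"). The version comparison driving it can be exercised on its own; this sketch uses `packaging.version` the same way, with a hypothetical decorator name and example version strings:

```python
import unittest

from packaging import version


def skip_if_version_at_least(current: str, threshold: str):
    """Return a decorator that skips a test when current >= threshold,
    mirroring the comparison inside skipUnlessTorch113 above."""
    skip = version.parse(current) >= version.parse(threshold)
    return unittest.skipIf(skip, f'Test incompatible with {threshold}+')


class Demo(unittest.TestCase):
    @skip_if_version_at_least('1.13.1', '1.13')  # skipped: 1.13.1 >= 1.13
    def test_new_version(self):
        pass

    @skip_if_version_at_least('1.12.0', '1.13')  # runs: 1.12.0 < 1.13
    def test_old_version(self):
        pass
```

Note that `version.parse` handles suffixes like `1.13.0+cu117` correctly, which a plain string comparison would not.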
1 change: 1 addition & 0 deletions tests/test_apex.py
@@ -43,6 +43,7 @@ def test_fused_adam(self):
         )
         create_agent(opt, requireModelExists=True)
 
+    @testing_utils.skipUnlessTorch113
     def test_fp16(self):
         # nice clean fallback if no fp16
         valid, test = testing_utils.eval_model(
