Migrate to enable Python 3.10 #1261
@xuzhao9 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
if self.device == "cuda":
    current_device_name = torch.cuda.get_device_name()
    assert current_device_name, "torch.cuda.get_device_name() returned None when the device is set to cuda; please double-check."
elif self.device == "cpu":
Curious, what is this change for?
We now support specifying a smaller batch size for the CPU device. In the code, we try to use the same batch size as upstream. However, upstream batch sizes are often optimized for GPUs, so for CPU inference tests we want a smaller default batch size to save test time.
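A minimal sketch of what such device-dependent batch-size selection could look like. The function name, the constant, and the `"eval"` test label are hypothetical illustrations, not the actual benchmark code:

```python
# Hypothetical sketch: prefer the upstream (GPU-tuned) batch size,
# but cap it for CPU inference runs to keep test time reasonable.
DEFAULT_CPU_INFERENCE_BATCH_SIZE = 1  # assumed CPU default, not from the PR

def resolve_batch_size(device: str, test: str, upstream_batch_size: int) -> int:
    """Return the batch size to use for a given device and test mode."""
    if device == "cpu" and test == "eval":
        # CPU inference: use the smaller default rather than the GPU-tuned size.
        return min(upstream_batch_size, DEFAULT_CPU_INFERENCE_BATCH_SIZE)
    # GPU runs (and CPU training) keep the upstream batch size.
    return upstream_batch_size

print(resolve_batch_size("cpu", "eval", 64))   # capped for CPU inference
print(resolve_batch_size("cuda", "eval", 64))  # unchanged on GPU
```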
Awesome! Thank you so much for getting this done so quickly!
To support Python 3.10, we need to update a few model dependencies, such as fairseq and spacy.
We need to add a torchaudio dependency because newer versions of fairseq depend on torchaudio: https://github.com/facebookresearch/fairseq/blob/main/setup.py#L190
Torchtext also adds a new dependency, torchdata, so we need to include that as well: pytorch/text#1961
Currently, the CircleCI test runs on Python 3.8, and the GHA test runs on Python 3.10.