Conversation

@xuzhao9 (Contributor) commented Oct 26, 2022

To support Python 3.10, we need to update a few model dependencies such as fairseq and spacy.

We need to add a torchaudio dependency because newer versions of fairseq now depend on torchaudio: https://github.com/facebookresearch/fairseq/blob/main/setup.py#L190
Torchtext also adds a new dependency, torchdata, so we need to include that as well: pytorch/text#1961

Currently, the CircleCI test runs on Python 3.8, and the GHA test runs on Python 3.10.
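
As a rough illustration of the dependency change described above (a sketch in a Python-style list only, not the actual requirements file touched by this PR; no version pins are implied):

# Illustrative sketch only -- not the actual file changed in this PR.
# Package names follow the PR description above.
model_requirements = [
    "fairseq",     # updated for Python 3.10 compatibility
    "spacy",       # updated for Python 3.10 compatibility
    "torchaudio",  # newer fairseq depends on torchaudio
    "torchdata",   # torchtext now depends on torchdata (pytorch/text#1961)
]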

@facebook-github-bot

@xuzhao9 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@xuzhao9 xuzhao9 requested a review from desertfire October 28, 2022 00:16
@facebook-github-bot

@xuzhao9 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

if self.device == "cuda":
current_device_name = torch.cuda.get_device_name()
assert current_device_name, f"torch.cuda.get_device_name() returns None when device is set to cuda, please double check."
elif self.device == "cpu":
Contributor

Curious, what is this change for?

Contributor Author

We now support specifying a smaller batch size for the CPU device. In the code, we try to use the same batch size as upstream, but upstream batch sizes are usually tuned for GPU. For CPU inference tests, we therefore default to a smaller batch size to save test time.
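
As a rough sketch of that default-selection logic (illustrative only, not this PR's actual code; the method name, attribute names, and the CPU default value are hypothetical):

def maybe_shrink_batch_size(self, upstream_batch_size: int) -> int:
    # Reuse the upstream (GPU-tuned) batch size on CUDA, but fall back to a
    # smaller default for CPU inference to keep test time manageable.
    CPU_EVAL_DEFAULT_BATCH_SIZE = 1  # hypothetical default
    if self.device == "cpu" and self.test == "eval":
        return min(upstream_batch_size, CPU_EVAL_DEFAULT_BATCH_SIZE)
    return upstream_batch_size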

@desertfire (Contributor) left a comment


Awesome! Thank you so much for getting this done so quickly!

