
[FIX] fix the wrong nproc_per_node in the multi gpu test #2422

Merged: 2 commits into huggingface:main on Feb 9, 2024

Conversation

faaany (Contributor) commented Feb 7, 2024

What does this PR do?

Running pytest test_multigpu.py on my NVIDIA A100 machine, I got a failure in test_distributed_data_loop. Based on the code, the test is intended to use exactly 2 processes rather than the number of available devices, so nproc_per_node should be hardcoded to 2.

I also changed require_multi_device to require_multi_gpu, because the use of cuda_visible_devices=0,1 shows that this test can only run on GPUs.
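
For reference, here is a minimal sketch of the fixed test as I understand it (illustrative only, not copied verbatim from the repository; the class name DataLoopTester and the data_loop_file_path value are placeholders, and it assumes accelerate's require_multi_gpu, execute_subprocess_async, and patch_environment test utilities):

import os
import unittest

import torch
from accelerate.test_utils import execute_subprocess_async, require_multi_gpu
from accelerate.utils import patch_environment


class DataLoopTester(unittest.TestCase):
    # placeholder path to the script that torchrun launches
    data_loop_file_path = "test_distributed_data_loop.py"

    @require_multi_gpu
    def test_distributed_data_loop(self):
        # launch exactly 2 processes on the first two GPUs,
        # instead of one process per available device
        print(f"Found {torch.cuda.device_count()} devices, using 2 devices only")
        cmd = ["torchrun", "--nproc_per_node=2", self.data_loop_file_path]
        with patch_environment(omp_num_threads=1, cuda_visible_devices="0,1"):
            execute_subprocess_async(cmd, env=os.environ.copy())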

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Please have a review, thanks!
@muellerzr or @pacman100

@faaany faaany changed the title [FIX] fix the wrong nproc_per_node in test_multgpu [FIX] fix the wrong nproc_per_node in the multi gpu test Feb 7, 2024
muellerzr (Collaborator) commented:

Does it pass with two GPUs? (Yes, we only test on two GPUs, but I can't recall if that was meant to be two GPUs or not.) I ask because we've had some people report timeout issues on A100s (due to some hardware issues on later torch versions), so I'm just curious whether it passes or not.

faaany (Contributor, Author) commented Feb 7, 2024

Does it pass with two GPUs? (Yes, we only test on two GPUs, but I can't recall if that was meant to be two GPUs or not.) I ask because we've had some people report timeout issues on A100s (due to some hardware issues on later torch versions), so I'm just curious whether it passes or not.

Yes, it passed on my A100, and I am using the latest PyTorch 2.2.0.

[screenshot of the passing test run]

HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

faaany (Contributor, Author) commented Feb 8, 2024

@muellerzr BTW, do you mind if I add further XPU tests to this file? e.g.

@require_multi_xpu
def test_distributed_data_loop(self):
    # device_count is assumed to hold the number of available XPUs
    print(f"Found {device_count} devices, using 2 devices only")
    # launch on exactly 2 XPUs, pinned via ZE_AFFINITY_MASK
    cmd = ["torchrun", "--nproc_per_node=2", self.data_loop_file_path]
    with patch_environment(omp_num_threads=1, ze_affinity_mask="0,1"):
        execute_subprocess_async(cmd, env=os.environ.copy())

Or would it be better if I created a new file called "test_xpu.py" or "test_multixpu.py" in the test folder?

At the moment, when I run pytest test_multigpu.py on XPU, the test fails. That is also why it is better to change require_multi_device to require_multi_gpu in the current PR. But I am still thinking about the best way to add XPU-related tests...

muellerzr (Collaborator) left a comment


Thanks!

@muellerzr muellerzr merged commit 433d693 into huggingface:main Feb 9, 2024
23 checks passed