Conversation
@wincent8 wincent8 commented Aug 22, 2025

In this PR, we port 4 test files from test/distributed/parallel and 1 test file from test/distributed/debug to run on Intel GPU.
We enable Intel GPU with the following methods, keeping the original code style as much as possible:

  1. Use torch.accelerator for device-agnostic GPU code
  2. Skip cases with known issues when running on XPU
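The two methods above can be sketched roughly as follows. This is an illustrative example, not code from the PR: the helper `current_device_type` and the test class are hypothetical, and it assumes a PyTorch build that provides the `torch.accelerator` module (falling back to CPU otherwise):

```python
import unittest

import torch


def current_device_type() -> str:
    # Method 1: query torch.accelerator for the active GPU backend
    # ("cuda", "xpu", ...) instead of hard-coding "cuda". Falls back to
    # "cpu" when no accelerator (or no torch.accelerator module) exists.
    acc_mod = getattr(torch, "accelerator", None)
    acc = acc_mod.current_accelerator() if acc_mod is not None else None
    return acc.type if acc is not None else "cpu"


class ExampleAcceleratorTest(unittest.TestCase):
    # Method 2: skip a case with a known issue when running on XPU,
    # leaving the test body unchanged for all other backends.
    @unittest.skipIf(current_device_type() == "xpu", "known issue on XPU")
    def test_add_on_accelerator(self):
        device = current_device_type()
        x = torch.ones(4, device=device)
        self.assertTrue(torch.equal(x + x, torch.full((4,), 2.0, device=device)))


if __name__ == "__main__":
    unittest.main()
```

Because the device string is resolved at runtime, the same test file runs unmodified on CUDA, XPU, or CPU-only machines.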

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta

pytorch-bot bot commented Aug 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/161261

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 1619284 with merge base cd87f30:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added oncall: distributed Add this issue/PR to distributed oncall triage queue topic: not user facing topic category labels Aug 22, 2025
@wincent8 wincent8 force-pushed the wliao2/add_tensor_1 branch from eb22371 to d1bc8b7 Compare August 22, 2025 10:54