Update CIFAR10 tutorial device selection for CUDA, MPS, and CPU #3826
amit-chaubey wants to merge 2 commits into pytorch:main
Conversation
Claude finished @svekars's task in 2m 15s: Review of PR #3826
Summary: The intent of this PR is good. Making the CIFAR10 tutorial device-agnostic benefits Apple Silicon and CPU-only users. However, the implementation uses an older device-selection pattern rather than the modern `torch.accelerator` API. Recommendation: use the same `torch.accelerator` one-liner style that `quickstart_tutorial.py` already uses.
Thanks! @claude Updated to use torch.accelerator (same one-liner style as quickstart_tutorial.py), adjusted the surrounding prose so it stays accurate when device is "cpu", and kept num_workers=0 for direct script runs on macOS/Windows. Pushed in the latest commit.
Description
This PR updates `beginner_source/blitz/cifar10_tutorial.py` to make device selection explicit across CUDA, MPS, and CPU.

Changes

- Select `cuda:0` if CUDA is available
- Select `mps` if the Apple MPS backend is available
- Fall back to `cpu` otherwise
- Reword the surrounding prose to refer to `device` usage instead of CUDA-only wording

Why
The tutorial currently frames this section as CUDA-specific. This change keeps the same tutorial flow while making guidance clearer for Apple Silicon and CPU-only users.
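The selection order described in the Changes list above can be sketched as an explicit per-backend fallback (this is the pattern the PR originally proposed, before the review suggested the `torch.accelerator` one-liner):

```python
import torch

# Explicit per-backend fallback, matching the order in the PR description:
# CUDA first, then Apple MPS, then CPU.
if torch.cuda.is_available():
    device = torch.device("cuda:0")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")
```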
cc @subramen @albanD @jbschlosser