
fix(rf3): bind Fabric to caller-specified GPU #222

Merged
k-chrispens merged 3 commits into main from fix/rf3-device-selection
Apr 16, 2026

Conversation

Contributor

@saada saada commented Apr 14, 2026

Closes #37.

RF3Wrapper didn't accept a device argument, so the grid-search dispatcher's `cuda:N` assignment was silently dropped. Lightning Fabric's default `devices=1` always resolves to GPU 0, which meant eight parallel workers all piled onto `cuda:0`.

The wrapper now accepts a `device` argument and forwards the CUDA index to Fabric as `devices_per_node=[idx]`, matching the Boltz and Protenix wrappers.
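
For orientation, a minimal sketch of the shape of that change (hypothetical code: the real implementation lives in src/sampleworks/models/rf3/wrapper.py, and RF3InferenceEngine's constructor signature is assumed from this description):

```python
# Hypothetical sketch of the fix described above; the real RF3Wrapper lives in
# src/sampleworks/models/rf3/wrapper.py, and RF3InferenceEngine's constructor
# signature is assumed from the PR text.
import torch


def _cuda_index(device: torch.device | str) -> int:
    """Normalise a CUDA device spec to its integer index, e.g. "cuda:3" -> 3."""
    dev = torch.device(device)
    if dev.type != "cuda":
        raise ValueError(f"RF3's Fabric backend is CUDA-only, got {dev}")
    return dev.index if dev.index is not None else 0  # bare "cuda" means GPU 0


class RF3Wrapper:
    def __init__(self, device: torch.device | str | None = None, **engine_kwargs):
        self.device = torch.device(device) if device is not None else None
        if self.device is not None:
            # Pin Lightning Fabric to the caller's GPU instead of its default
            # devices=1, which always resolves to GPU 0.
            engine_kwargs["devices_per_node"] = [_cuda_index(self.device)]
        self.engine = RF3InferenceEngine(**engine_kwargs)  # signature assumed
```

With this shape, `RF3Wrapper(device="cuda:3")` yields `devices_per_node=[3]`, so Fabric binds to GPU 3 rather than GPU 0.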

Reproduction & verification

Two RF3 wrappers launched in parallel inside the same container (different processes, no CUDA_VISIBLE_DEVICES hacks):

Before — both pinned to the same GPU:

[worker0] wrapper.device=cuda:0  current_device=0
[worker1] wrapper.device=cuda:0  current_device=0

After (passing device="cuda:0" and device="cuda:3"):

[worker0] target=cuda:0  wrapper.device=cuda:0  current_device=0
[worker1] target=cuda:3  wrapper.device=cuda:3  current_device=3

Unit + regression tests:

tests/models/test_rf3_device_selection.py ....... 7 passed
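
Each worker's check can be approximated like this (hypothetical probe: the TARGET_DEVICE environment variable and the RF3Wrapper sketch above are illustrative, not part of the PR):

```python
# Hypothetical per-worker probe mirroring the log lines above; launch one
# process per GPU, e.g. TARGET_DEVICE=cuda:3 for worker1.
import os

import torch

target = os.environ.get("TARGET_DEVICE", "cuda:0")
wrapper = RF3Wrapper(device=target)    # sketch from the description above
torch.cuda.set_device(wrapper.device)  # emulates the pinning Fabric performs
print(f"target={target}  wrapper.device={wrapper.device}  "
      f"current_device={torch.cuda.current_device()}")
```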

Summary by CodeRabbit

  • New Features

    • RF3Wrapper now accepts an optional device parameter to bind inference to a specific CUDA GPU for improved multi-GPU control.
  • Bug Fixes

    • Device selection logic made consistent so the originally requested device is respected during model setup.
  • Tests

    • Added GPU-focused tests validating CUDA device selection and wrapper device binding (skipped when GPUs unavailable).

RF3Wrapper previously ignored the device passed down from the guidance
dispatcher. Lightning Fabric's default `devices=1` always resolves to
GPU 0, so parallel grid-search workers on a multi-GPU box all landed on
cuda:0 and serialised.

Accept a `device` argument on RF3Wrapper and forward the CUDA index to
RF3InferenceEngine as `devices_per_node=[idx]`, matching the Boltz and
Protenix wrappers. Drop the `getattr(model_wrapper, "device", device)`
fallback in guidance_script_utils since the wrapper now honours what
was asked for.

Closes #37
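
The guidance-utils side of the change, sketched (hypothetical: the real get_model_and_device() in src/sampleworks/utils/guidance_script_utils.py does more than this):

```python
# Hypothetical sketch of the dispatcher-side change; the real
# get_model_and_device() in src/sampleworks/utils/guidance_script_utils.py
# has additional responsibilities.
import torch


def get_model_and_device(requested: str) -> tuple["RF3Wrapper", torch.device]:
    device = torch.device(requested)
    model_wrapper = RF3Wrapper(device=device)  # the wrapper now honours this
    # Old fallback, now dropped -- it silently re-read the wrapper's device,
    # which was always GPU 0 before this fix:
    # device = getattr(model_wrapper, "device", device)
    return model_wrapper, device
```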
@saada saada deployed to gpu-testing April 14, 2026 10:43 — with GitHub Actions Active
Contributor

coderabbitai Bot commented Apr 14, 2026


📥 Commits

Reviewing files that changed from the base of the PR and between b218a3e and 35b2747.

📒 Files selected for processing (1)
  • src/sampleworks/models/rf3/wrapper.py
📝 Walkthrough

RF3Wrapper gained an optional device parameter and a module helper _cuda_index to convert/validate CUDA device inputs; engine construction now conditionally includes devices_per_node. Guidance utilities pass the computed device into RF3Wrapper. New tests verify _cuda_index behavior and RF3Wrapper device binding on multi‑GPU systems.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| RF3 Wrapper Device Support: `src/sampleworks/models/rf3/wrapper.py` | RF3Wrapper accepts an optional `device` parameter; added module helper `_cuda_index` (accepting `torch.device` or `str` inputs) to convert/validate CUDA device specs; engine construction now conditionally includes `devices_per_node`. |
| Guidance Utilities Integration: `src/sampleworks/utils/guidance_script_utils.py` | `get_model_and_device()` now constructs RF3Wrapper with `device=device` and no longer overwrites the local device from the wrapper. |
| Device Selection Tests: `tests/models/test_rf3_device_selection.py` | New tests for `_cuda_index` (various `cuda:N` / `cuda` / `cpu` inputs) and RF3Wrapper device binding; conditional skips when RF3 deps or sufficient GPUs are absent (sketched below). |
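
A hypothetical shape for those tests (the real file is tests/models/test_rf3_device_selection.py; the assertions and skip conditions here are assumed):

```python
# Hypothetical sketch of the new tests; the real assertions in
# tests/models/test_rf3_device_selection.py may differ.
import pytest
import torch


@pytest.mark.parametrize("spec,expected", [("cuda:0", 0), ("cuda:3", 3), ("cuda", 0)])
def test_cuda_index_parses_cuda_specs(spec, expected):
    assert _cuda_index(spec) == expected


def test_cuda_index_rejects_cpu():
    with pytest.raises(ValueError):
        _cuda_index("cpu")


@pytest.mark.skipif(torch.cuda.device_count() < 2, reason="needs >= 2 GPUs")
def test_wrapper_binds_requested_gpu():
    wrapper = RF3Wrapper(device="cuda:1")
    assert wrapper.device == torch.device("cuda:1")
```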

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • marcuscollins

Poem

🐰 I hop to bind the GPU so bright,
I sniff the index, left to right,
"cuda:1" I give a gentle tug,
Fabric and wrapper snug as a bug,
Hooray — the kernels hum tonight! 🥕✨

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 42.86%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title "fix(rf3): bind Fabric to caller-specified GPU" accurately reflects the main change: enabling RF3Wrapper to accept and respect a device parameter for GPU binding. |
| Linked Issues Check | ✅ Passed | The PR fully addresses issue #37 by implementing device parameter support in RF3Wrapper and forwarding it to Lightning Fabric, enabling callers to control GPU placement. |
| Out of Scope Changes Check | ✅ Passed | All changes are directly scoped to implementing device selection for RF3Wrapper; no unrelated modifications are present. |



Collaborator

@k-chrispens k-chrispens left a comment


Looks good. It would be worth adding a few extra comments, though, in case these things break longer term.

msa_manager: MSAManager | None
MSA manager for retrieving MSAs for input structures.
device: torch.device | str | None
CUDA device to bind the underlying Lightning Fabric to (e.g. ``"cuda:3"``).
Collaborator


To make sure I understood this, I went searching in the foundry repo and found this line: https://github.com/RosettaCommons/foundry/blob/b071919caa19ff334bc04b1b41145cac61eba819/src/foundry/trainers/fabric.py#L92

It would probably be worth referencing this here for posterity.

Comment thread src/sampleworks/models/rf3/wrapper.py Outdated
------
ValueError
If ``device`` is not a CUDA device. RF3's Fabric-backed accelerator
only supports GPU execution.
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

It does appear that the latest foundry supports MPS, but my fork (which this repo depends on) does not, and in general sampleworks doesn't support non-CUDA systems right now. I think it'd be worth clarifying that here.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `src/sampleworks/models/rf3/wrapper.py`:
- Around lines 227-236: Update the docstring for the device parameter so it
accurately states that passing device=None resolves to GPU 0 (Fabric's default
behavior), rather than implying it selects a load-balanced "first available"
device. Edit the paragraph referencing "When ``None``, Fabric picks the first
available device" to say explicitly "When ``None``, Fabric's default
(devices=1) binds to GPU 0", and keep the existing references to Lightning
Fabric and the related foundry link so readers see the authoritative sources.
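
Applied, the corrected parameter doc might read as follows (hypothetical rendering: the surrounding numpydoc text is assumed from the review excerpts above):

```python
# Hypothetical corrected docstring fragment for RF3Wrapper.__init__; the
# surrounding parameter documentation is assumed from the review excerpts.
class RF3Wrapper:
    def __init__(self, device=None):
        """
        Parameters
        ----------
        device : torch.device | str | None
            CUDA device to bind the underlying Lightning Fabric to
            (e.g. ``"cuda:3"``). When ``None``, Fabric's default
            (``devices=1``) binds to GPU 0.
        """
```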


📥 Commits

Reviewing files that changed from the base of the PR and between 2af96c7 and b218a3e.

📒 Files selected for processing (1)
  • src/sampleworks/models/rf3/wrapper.py

Comment thread src/sampleworks/models/rf3/wrapper.py
@k-chrispens k-chrispens merged commit ad70806 into main Apr 16, 2026
8 of 11 checks passed
@k-chrispens k-chrispens deleted the fix/rf3-device-selection branch April 16, 2026 23:36


Development

Successfully merging this pull request may close these issues.

For ModelWrappers that use Fabric, etc., figure out how to properly handle device

2 participants