
Improve YOCO static attention: reusable helper, correct tensor op, runtime guard #18545

Merged
meta-codesync[bot] merged 1 commit into pytorch:main from viveknayakatmeta:export-D97637849
Mar 28, 2026

Conversation

@viveknayakatmeta (Contributor) commented Mar 27, 2026

Summary:

  • Replace the inline first_kv_shared index computation in _from_config with a reusable _is_kv_shared_layer() helper that matches llama_transformer.py's pattern and adds a previously missing first_shared <= 0 edge-case guard (see the helper sketch after this list).
  • Fix torch.cat → torch.stack in _process_normal_kv when building kv_to_share for SHA: the per-head K/V tensors are rank-3, so torch.cat(dim=1) wrongly concatenates along the sequence dimension, whereas torch.stack(dim=1) correctly inserts a new heads dimension (see the shape demo below).
  • Change the forward() K/V skip guard from a structural check (if self.is_kv_shared_layer) to a runtime check (if shared_kv is not None), with an added assertion that self.is_kv_shared_layer holds (see the guard sketch below).
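
A minimal sketch of the first bullet, assuming a YOCO-style layout in which the trailing layers reuse the KV cache of the last non-shared layer; the config fields and the sharing rule here are hypothetical stand-ins, not the real ModelArgs:

```python
from dataclasses import dataclass

@dataclass
class _Config:
    # Hypothetical fields for illustration only; the real config differs.
    n_layers: int
    n_kv_shared_layers: int

def _is_kv_shared_layer(layer_id: int, config: _Config) -> bool:
    # YOCO-style rule (assumed): the trailing n_kv_shared_layers layers
    # reuse the KV cache produced by the last non-shared layer.
    first_shared = config.n_layers - config.n_kv_shared_layers
    if first_shared <= 0:
        # Edge-case guard from the PR: a non-positive boundary would leave
        # no layer to produce the shared cache, so treat none as shared.
        return False
    return layer_id >= first_shared

# Example: with 8 layers and 3 shared, layers 5, 6, 7 are KV-shared.
cfg = _Config(n_layers=8, n_kv_shared_layers=3)
assert [_is_kv_shared_layer(i, cfg) for i in range(8)].count(True) == 3
```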
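
The second bullet's shape argument as a runnable demo; the (batch, seq, head_dim) per-head layout is assumed for illustration:

```python
import torch

# Two per-head K tensors in an assumed (batch, seq, head_dim) layout.
k0 = torch.randn(1, 8, 64)
k1 = torch.randn(1, 8, 64)

# torch.cat(dim=1) glues the two sequences end to end: (1, 16, 64).
# For kv_to_share this silently corrupts the cache layout.
wrong = torch.cat([k0, k1], dim=1)
assert wrong.shape == (1, 16, 64)

# torch.stack(dim=1) inserts a new heads axis instead: (1, 2, 8, 64),
# i.e. the (batch, n_heads, seq, head_dim) layout a shared cache expects.
right = torch.stack([k0, k1], dim=1)
assert right.shape == (1, 2, 8, 64)
```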
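
And a sketch of the third bullet's runtime guard; _AttnSketch, wk/wv, and the shared_kv tuple are illustrative stand-ins for the real StaticAttention interface:

```python
import torch
from torch import nn

class _AttnSketch(nn.Module):
    # Illustrative stand-in; not the real StaticAttention module.
    def __init__(self, dim: int, is_kv_shared_layer: bool):
        super().__init__()
        self.is_kv_shared_layer = is_kv_shared_layer
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)

    def forward(self, x, shared_kv=None):
        if shared_kv is not None:
            # Runtime guard: branch on what the caller actually passed,
            # and assert that the structural flag agrees with it.
            assert self.is_kv_shared_layer
            k, v = shared_kv
        else:
            k, v = self.wk(x), self.wv(x)
        return k, v
```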

Reviewed By: billmguo

Differential Revision: D97637849

@pytorch-bot Bot commented Mar 27, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18545

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 2 Unrelated Failures

As of commit c81fb28 with merge base 6fccd5a:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla Bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Mar 27, 2026
@meta-codesync Bot (Contributor) commented Mar 27, 2026

@viveknayakatmeta has exported this pull request. If you are a Meta employee, you can view the originating Diff in D97637849.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

viveknayakatmeta added a commit to viveknayakatmeta/executorch that referenced this pull request Mar 27, 2026
Improve YOCO static attention: reusable helper, correct tensor op, runtime guard (pytorch#18545)

meta-codesync Bot changed the title from Improve YOCO static attention: reusable helper, correct tensor op, runtime guard to Improve YOCO static attention: reusable helper, correct tensor op, runtime guard (#18545) Mar 27, 2026
viveknayakatmeta added a commit to viveknayakatmeta/executorch that referenced this pull request Mar 27, 2026
Improve YOCO static attention: reusable helper, correct tensor op, runtime guard (pytorch#18545)

meta-codesync Bot merged commit 502d2de into pytorch:main Mar 28, 2026
157 of 163 checks passed
rascani pushed a commit to rascani/executorch that referenced this pull request Apr 1, 2026
Improve YOCO static attention: reusable helper, correct tensor op, runtime guard (pytorch#18545)

Differential Revision: D97637849

Pull Request resolved: pytorch#18545

Labels

CLA Signed · fb-exported · meta-exported


2 participants