
Enable infinite generation with RoPE position remapping for attention sink (#19011)

Merged

meta-codesync[bot] merged 1 commit into main from export-D100728748 on Apr 21, 2026
Conversation

Contributor

@kirklandsign commented Apr 20, 2026

Summary:

Previously, attention sink models could not generate beyond max_context_len
because RoPE used the raw monotonic input_pos to index into the pre-computed
freqs_cis table, causing an out-of-bounds read once pos >= max_context_len.

This change adds position remapping in RopeWithAttentionSink:

  • Sink token positions (< sink_size) are preserved as-is
  • Window token positions are wrapped into the ring buffer range
    [sink_size, sink_size + ring_size) using modular arithmetic

The 2x ring buffer (ring_size = 2 * window_size) ensures the live window
of tokens never spans a wrap boundary, preserving correct relative
distances in RoPE space.

This enables attention sink models to generate indefinitely — the KV cache
ring buffer recycles space while RoPE positions stay bounded.
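
A minimal sketch of the remapping described above (a standalone toy function;
the actual logic lives in RopeWithAttentionSink in attention_sink.py):

```python
def remap_rope_pos(pos: int, sink_size: int, window_size: int) -> int:
    """Toy model of the remap: sink positions pass through unchanged,
    window positions wrap into [sink_size, sink_size + ring_size)."""
    ring_size = 2 * window_size
    if pos < sink_size:
        return pos
    return sink_size + (pos - sink_size) % ring_size
```

The result always lies in [0, sink_size + 2 * window_size), so indexing the
freqs_cis table stays in bounds no matter how large the raw position grows.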

Reviewed By: lucylq

Differential Revision: D100728748

Copilot AI review requested due to automatic review settings April 20, 2026 22:15
@kirklandsign requested a review from lucylq as a code owner April 20, 2026 22:15

pytorch-bot Bot commented Apr 20, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19011

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 3 Unrelated Failures

As of commit 9ae6844 with merge base 1d37abd:

NEW FAILURE - The following job has failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following jobs failed but were already failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Apr 20, 2026
Contributor

meta-codesync Bot commented Apr 20, 2026

@kirklandsign has exported this pull request. If you are a Meta employee, you can view the originating Diff in D100728748.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@meta-codesync bot changed the title from "Enable infinite generation with RoPE position remapping for attention sink" to "Enable infinite generation with RoPE position remapping for attention sink (#19011)" on Apr 20, 2026
meta-codesync Bot pushed a commit that referenced this pull request Apr 20, 2026
@meta-codesync bot force-pushed the export-D100728748 branch from 7cce9a4 to 2a34458 April 20, 2026 22:18
meta-codesync Bot pushed a commit that referenced this pull request Apr 20, 2026
@meta-codesync bot force-pushed the export-D100728748 branch from 2a34458 to a6472e5 April 20, 2026 22:19
Contributor

Copilot AI left a comment

Pull request overview

This PR enables “infinite” token generation for LLaMA attention-sink models by remapping RoPE positions into a bounded range aligned with the KV-cache ring buffer, preventing out-of-bounds indexing when decoding past max_context_len.

Changes:

  • Add RoPE position remapping logic in RopeWithAttentionSink.get_freqs (sink positions preserved; window positions wrapped into [sink_size, sink_size + 2*window_size)); see the sketch after this list.
  • Add an end-to-end test that generates beyond max_context_len and validates outputs remain finite.
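
A hedged sketch of what the get_freqs fix amounts to, using the test's constants
(hypothetical standalone code; freqs_cos/freqs_sin here are placeholder tables,
and under export the .item() call needs the torch._check guard shown in the
excerpt further down):

```python
import torch

sink_size, window_size = 4, 16                 # test configuration
ring_size = 2 * window_size
max_context_len = 64                           # precomputed table length
freqs_cos = torch.randn(max_context_len, 4)    # placeholder table contents
freqs_sin = torch.randn(max_context_len, 4)

def get_freqs_sketch(input_pos: torch.Tensor, seq_len: int):
    start = int(input_pos[-1].item())
    if start >= sink_size:
        # Wrap window positions; the remapped start is < sink_size + ring_size = 36.
        start = sink_size + (start - sink_size) % ring_size
    # For single-token decode (seq_len == 1) the slice now always fits inside
    # the 64-row table, however far generation has progressed.
    return freqs_cos.narrow(0, start, seq_len), freqs_sin.narrow(0, start, seq_len)
```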

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

| File | Description |
| --- | --- |
| examples/models/llama/source_transformation/attention_sink.py | Implements RoPE position remapping for attention-sink + ring-buffer KV cache to avoid OOB past max_context_len. |
| examples/models/llama/source_transformation/test_attention_sink.py | Adds E2E regression coverage for generating beyond max_context_len. |


Comment on lines 71 to +73

```python
assert input_pos is not None
# Use torch._check for export compatibility (data-dependent guard)
torch._check(input_pos[0].item() + seq_len <= self.max_context_length)
return super().get_freqs(input_pos, seq_len)

if not self.params.use_kv_cache:
    return self.freqs_cos[:seq_len], self.freqs_sin[:seq_len]

self.sink_size = sink_size
# max_context_len from params is used for RoPE frequencies (should be large)
self.max_context_length = self.params.max_context_len
self.ring_size = window_size * 2
```
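
The torch._check in the excerpt above is the export-friendly replacement for a
plain Python assert on a data-dependent value. A minimal sketch of the pattern
(assumed usage, not the PR's exact code):

```python
import torch

def bounded_start(input_pos: torch.Tensor, seq_len: int, max_len: int) -> int:
    start = input_pos[0].item()  # data-dependent scalar (unbacked under torch.export)
    # torch._check records a guard the exporter can carry, where a bare
    # `assert` on an unbacked value would fail to trace.
    torch._check(start + seq_len <= max_len)
    return start
```
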
Comment on lines +401 to +417

```python
def test_beyond_max_context_len(self):
    """Generate tokens beyond max_context_len with RoPE position remapping."""
    sink_size = 4
    window_size = 16
    # KV cache size = 36, max_context_len = 64
    # Generate 100 tokens — well beyond max_context_len
    args = self._make_args(max_context_len=64)
    model = self._build_model(args, sink_size, window_size, use_custom_sdpa=False)

    outputs = self._run_generation(model, args, num_tokens=100)

    self.assertEqual(len(outputs), 97)  # 1 prefill + 96 decode steps
    for out in outputs:
        self.assertTrue(
            torch.isfinite(out).all(),
            "Output contains non-finite values beyond max_context_len",
        )
```
Comment on lines +76 to +84

```python
# Dynamic shape: input_pos is [start_pos], remap and narrow
input_pos_item = input_pos[-1].item()
if input_pos_item < self.sink_size:
    remapped_item = input_pos_item
else:
    remapped_item = (
        self.sink_size
        + (input_pos_item - self.sink_size) % self.ring_size
    )
```
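
For concreteness, plugging the test's configuration (sink_size = 4,
window_size = 16, so ring_size = 32) into that remap gives the following
(illustrative values, not taken from the PR):

```python
sink_size, ring_size = 4, 32

def remap(pos: int) -> int:
    return pos if pos < sink_size else sink_size + (pos - sink_size) % ring_size

assert remap(2) == 2      # sink token: preserved as-is
assert remap(10) == 10    # first pass through the ring: unchanged
assert remap(36) == 4     # (36 - 4) % 32 == 0: wraps to the window start
assert remap(100) == 4    # (100 - 4) % 32 == 0: same slot, 64 tokens later
```
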
kirklandsign added a commit that referenced this pull request Apr 20, 2026
meta-codesync Bot pushed a commit that referenced this pull request Apr 21, 2026
Copilot AI review requested due to automatic review settings April 21, 2026 02:13
@kirklandsign review requested due to automatic review settings April 21, 2026 02:13
@meta-codesync bot force-pushed the export-D100728748 branch from a451868 to db1328f April 21, 2026 02:13
meta-codesync Bot pushed a commit that referenced this pull request Apr 21, 2026
@meta-codesync bot force-pushed the export-D100728748 branch from db1328f to cdf3644 April 21, 2026 02:14
kirklandsign added a commit that referenced this pull request Apr 21, 2026
meta-codesync Bot pushed a commit that referenced this pull request Apr 21, 2026
Copilot AI review requested due to automatic review settings April 21, 2026 18:38
@meta-codesync bot force-pushed the export-D100728748 branch from 5faf3a6 to bfff183 April 21, 2026 18:38
@kirklandsign review requested due to automatic review settings April 21, 2026 18:38
kirklandsign added a commit that referenced this pull request Apr 21, 2026
Copilot AI review requested due to automatic review settings April 21, 2026 19:16
@meta-codesync bot force-pushed the export-D100728748 branch from 311be20 to 9ae6844 April 21, 2026 19:16
@kirklandsign review requested due to automatic review settings April 21, 2026 19:16
@meta-codesync bot merged commit 239f7cc into main Apr 21, 2026
175 of 179 checks passed
@meta-codesync bot deleted the export-D100728748 branch April 21, 2026 23:29

Labels

CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed), fb-exported, meta-exported
