
Runner to utilize ring buffer #17222

Draft
kirklandsign wants to merge 3 commits into main from context

Conversation

@kirklandsign
Contributor

Summary

Allow exceeding context window

Test plan

CI
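
For orientation (not part of the original description): a minimal sketch of the ring-buffer idea the runner relies on here, assuming a KV cache of fixed capacity max_context_len whose write index wraps around so generation can run past the context window. The struct and member names below are illustrative, not ExecuTorch APIs.

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch (not ExecuTorch code): a fixed-capacity buffer whose write
// index wraps modulo the capacity. With a KV cache laid out this way, the
// absolute token position can grow past max_context_len; new entries simply
// overwrite the oldest cached slots.
struct RingBufferSketch {
  explicit RingBufferSketch(int64_t max_context_len)
      : capacity_(max_context_len),
        slots_(static_cast<size_t>(max_context_len), 0.0f) {}

  // Map an ever-growing absolute position to a physical slot index.
  int64_t slot_for(int64_t pos) const {
    return pos % capacity_;
  }

  void write(int64_t pos, float value) {
    // Once pos >= capacity_, this overwrites the oldest entry.
    slots_[static_cast<size_t>(slot_for(pos))] = value;
  }

  int64_t capacity_;
  std::vector<float> slots_;
};
```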

@pytorch-bot

pytorch-bot bot commented Feb 4, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17222

Note: Links to docs will display an error until the docs builds have been completed.

❌ 17 New Failures

As of commit 9634244 with merge base 6f780c7:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Feb 4, 2026
@github-actions

github-actions bot commented Feb 4, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

model.vocab_size,
llm_config.base.metadata,
use_ring_buffer=llm_config.model.local_global_attention is not None,
Contributor

I don't think this is right. Local global attention uses a sliding window, which may or may not be relevant to the ring buffer.

Contributor

I think the high level seems right; we need to have this metadata serialized into the .pte.

}

auto error = runner->generate(prompt, config);
auto error2 = runner->generate(prompt, config);
Contributor

?


// Resolve max_new_tokens based on config
// Check if ring buffer is enabled - if so, we can exceed context length
bool use_ring_buffer = metadata_.at(kUseRingBuffer);
Contributor

I think we need to set this in llm_runner_helper, not here.
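
One possible shape for that, sketched under the assumption that the metadata is a string-to-int64_t map; the helper name get_bool_metadata and its placement are hypothetical, not the actual llm_runner_helper API. The point is to resolve the flag once, with a safe default for .pte files exported before kUseRingBuffer existed, instead of calling metadata_.at() at generation time for a key that may be missing.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical helper (name and map type assumed): read an optional boolean
// metadata entry with a default, so older exported models keep working.
inline bool get_bool_metadata(
    const std::unordered_map<std::string, int64_t>& metadata,
    const std::string& key,
    bool default_value) {
  auto it = metadata.find(key);
  return it == metadata.end() ? default_value : (it->second != 0);
}

// Usage sketch when constructing the runner:
//   bool use_ring_buffer =
//       get_bool_metadata(metadata, kUseRingBuffer, /*default_value=*/false);
```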

Comment on lines 188 to +195
   int64_t max_context_len =
       metadata_.at(kMaxContextLen) - 0; // No start_pos offset
-  int32_t max_new_tokens = config.resolve_max_new_tokens(max_context_len, pos_);
+  // When ring buffer is enabled, use a large context length to allow unlimited
+  // generation.
+  int64_t effective_context_len =
+      use_ring_buffer ? INT64_MAX : max_context_len;
+  int32_t max_new_tokens =
+      config.resolve_max_new_tokens(effective_context_len, pos_);
Contributor

This logic needs to be applied to text_llm_runner as well.

Comment on lines +156 to +159
+  int64_t effective_context_len =
+      use_ring_buffer ? INT64_MAX : max_context_len;
   int max_new_tokens =
-      config.resolve_max_new_tokens(max_context_len, num_prompt_tokens);
+      config.resolve_max_new_tokens(effective_context_len, num_prompt_tokens);
Contributor

Seems like duplicate code to me.
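
A way to address both points (apply the logic in text_llm_runner too, and avoid duplicating it) could be a small shared helper along these lines; the function name and placement are assumptions, not part of this PR.

```cpp
#include <cstdint>
#include <limits>

// Hypothetical shared helper: when the ring buffer is enabled, report an
// effectively unbounded context length so resolve_max_new_tokens() stops
// clamping generation to max_context_len.
inline int64_t effective_context_len(
    bool use_ring_buffer, int64_t max_context_len) {
  return use_ring_buffer ? std::numeric_limits<int64_t>::max()
                         : max_context_len;
}

// Both runners could then share the same call-site shape, e.g.:
//   int32_t max_new_tokens = config.resolve_max_new_tokens(
//       effective_context_len(use_ring_buffer, max_context_len), pos_);
```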

{llm::kMaxContextLen, 128},
{llm::kUseKVCache, true},
{llm::kUseSDPAWithKVCache, false},
{llm::kUseRingBuffer, true},
Contributor

I think we need to set max_context_len to INT MAX here
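
If that suggestion were taken, the test metadata might look like the sketch below; this only spells out the reviewer's proposal and assumes the metadata map type, it is not what the PR currently does.

```cpp
#include <cstdint>
#include <limits>
#include <string>
#include <unordered_map>

// Sketch of the suggested test metadata: with the ring buffer enabled, the
// test advertises an effectively unbounded context length instead of 128.
// The llm:: constants are the ones already used in the snippet above.
const std::unordered_map<std::string, int64_t> test_metadata = {
    {llm::kMaxContextLen, std::numeric_limits<int64_t>::max()},
    {llm::kUseKVCache, true},
    {llm::kUseSDPAWithKVCache, false},
    {llm::kUseRingBuffer, true},
};
```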

Refactor position shift handling in attention sink to use torch buffers and dynamic shape conditions.
