
[TextGeneration] Split up prep_for_generation operator, handle edge cases, handle kv_cache full during prefill #1562

Merged
11 commits merged into main on Jan 26, 2024

Conversation

@dsikka (Contributor) commented Jan 25, 2024

Summary

A series of improvements:

  • Split up the prep_for_generation operator so that tokenizer setup and token generation no longer happen in the same operator, outside of the generation loop
  • With this split, properly handle edge cases such as max_length=1, max_new_tokens=0, and max_new_tokens=1; also require max_length to be greater than 0, raising a ValueError otherwise
  • With this split, the prep_for_generation operator no longer needs to import the process_output operator, so its custom streaming code can be removed; the non-kv cache and kv_cache pipelines now also share the same output_schema, simplifying generation logic
  • Added a check of kv_cache capacity during prompt inference that exits if capacity is reached before generation begins
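A minimal sketch of the length validation described above. The function and parameter handling here are illustrative assumptions, not deepsparse's actual API; they only show how max_length=1, max_new_tokens=0, and an invalid max_length<=0 might be resolved:

```python
# Hypothetical sketch of generation-length validation; names are
# illustrative and do not reflect deepsparse internals.

def validate_generation_config(max_length=None, max_new_tokens=None):
    """Resolve how many tokens to generate, rejecting invalid lengths."""
    if max_length is not None and max_length <= 0:
        # max_length must be strictly positive
        raise ValueError(f"max_length must be greater than 0, got {max_length}")
    if max_new_tokens is not None:
        # max_new_tokens=0 means "prefill only": process the prompt,
        # emit no new tokens
        return max_new_tokens
    if max_length is not None:
        # max_length bounds the total sequence, so max_length=1 leaves
        # room for at most a single token
        return max_length
    return None  # no explicit bound supplied
```

A caller would then treat a return value of 0 as "run prefill, skip the generation loop entirely" rather than entering the loop and exiting on the first iteration.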

Testing

  • Added new tests to validate the edge cases
  • Added a test to check that an error is thrown when the cache capacity is filled during prefill
  • Updated the non-kv cache unit test to also compare logits
  • All other test cases pass
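The kv_cache-full condition exercised by the new prefill test can be sketched as follows. This is an illustrative model of the behavior, not deepsparse's actual cache implementation; the class and function names are assumptions:

```python
# Illustrative sketch: fail loudly if the KV cache fills up while the
# prompt is still being processed, i.e. before generation can start.
# Not deepsparse's real internals.

class KVCacheFullError(RuntimeError):
    """Raised when the cache is exhausted during prompt inference."""


def prefill(prompt_token_ids, cache_capacity):
    """Process prompt tokens, tracking how many cache slots are used."""
    used = 0
    for _token_id in prompt_token_ids:
        if used >= cache_capacity:
            # Capacity reached before any token was generated: exit
            # with an error instead of silently truncating the prompt.
            raise KVCacheFullError(
                f"kv_cache capacity ({cache_capacity}) reached during prefill"
            )
        used += 1  # each prompt token occupies one cache slot
    return used
```

A test along these lines would assert that a prompt longer than the cache capacity raises, while a prompt that fits returns normally.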

@dsikka dsikka marked this pull request as ready for review January 25, 2024 05:05
@dsikka dsikka merged commit 7b028d4 into main Jan 26, 2024
13 checks passed
@dsikka dsikka deleted the split_prep_operator branch January 26, 2024 18:42
ProExpertProg pushed a commit that referenced this pull request Jan 26, 2024
… cases, handle kv_cache full during prefill (#1562)

* split prep_for_generation operator

* fix logits

* update non-kv cache pipeline and tests

* add tests to address edge cases

* add condition to check of kv_cache full during prompt inference, add test to cover this case, revert debugging changes

* fix typing

* remove commented code

* remove irrelevant condition

* address PR comments
dsikka added a commit that referenced this pull request Jan 29, 2024
… cases, handle kv_cache full during prefill (#1562)

(cherry picked from commit 7b028d4)
@dsikka dsikka mentioned this pull request Jan 29, 2024
bfineran pushed a commit that referenced this pull request Jan 29, 2024
… cases, handle kv_cache full during prefill (#1562)

(cherry picked from commit 7b028d4)