
[Pipeline Refactor] Update routes, text generation initial functionality #1348

Merged
merged 12 commits into v2 from features/v2/prompt_inference on Nov 3, 2023

Conversation


@dsikka dsikka commented Oct 25, 2023

[Diagram: operators and flow of the text generation pipeline]

Summary

  • The operators, pipeline, and routes added for text_generation so far cover the operators and flow shown in the diagram above

Testing

Testing script to evaluate three situations:

  1. Only the single-token engine runs, since prompt_sequence_length > number of prompt tokens
  2. Only the multi-token engine runs, since number of prompt tokens % prompt_sequence_length == 0
  3. A combination of the multi-token and single-token engines runs, since prompt_sequence_length <= number of prompt tokens but number of prompt tokens % prompt_sequence_length != 0

All 3 cases are exercised by the test script below by changing prompt_sequence_length.
Prompt logits are evaluated against the ground truth logits, produced using the transformers model.
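The three routing cases above can be summarized with a small helper (the function name and shape are illustrative only; the actual pipeline routes between engine operators rather than computing counts like this):

```python
def plan_prompt_inference(num_prompt_tokens: int, prompt_sequence_length: int):
    """Split the prompt phase into multi-token passes (each covering
    prompt_sequence_length tokens) plus leftover single-token passes."""
    multi_token_passes = num_prompt_tokens // prompt_sequence_length
    single_token_passes = num_prompt_tokens % prompt_sequence_length
    return multi_token_passes, single_token_passes

print(plan_prompt_inference(5, 16))   # case 1: single-token engine only
print(plan_prompt_inference(32, 16))  # case 2: multi-token engine only
print(plan_prompt_inference(20, 16))  # case 3: both engines
```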

import numpy as np
from transformers import AutoModelForCausalLM, AutoTokenizer

from deepsparse.transformers.pipelines.text_generation import TextGenerationInput
from deepsparse.v2.text_generation.pipeline import TextGenerationPipeline


def create_tokenizer(model_name):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.padding_side = "left"
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    return tokenizer


def get_ground_truth(prompt):
    # Ground-truth prompt logits from the dense transformers model
    model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-1M")
    tokenizer = create_tokenizer("roneneldan/TinyStories-1M")
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    out = model(input_ids=input_ids)
    return out.logits.detach().numpy()


# Vary prompt_sequence_length to hit each of the three cases above
cur_len = 16
prompt = "Hello there, how are you?"
model_path = "hf:mgoin/TinyStories-1M-deepsparse"
pipeline = TextGenerationPipeline(
    model_path,
    engine_kwargs={"engine_type": "onnxruntime"},
    prompt_sequence_length=cur_len,
)
input_values = TextGenerationInput(prompt=prompt)
logits = pipeline(input_values)
ground_truth = get_ground_truth(prompt)

print("All Close?", np.allclose(logits, ground_truth, atol=0.0001))

@dsikka dsikka changed the title Features/v2/prompt inference [Pipeline Refactor] Update routes, text generation initial functionality Oct 25, 2023
@dsikka dsikka marked this pull request as ready for review October 25, 2023 01:12
@dbogunowicz dbogunowicz (Contributor) left a comment

The general design makes sense to me. Added my comments. After we finish honing the logic, let's spend some time on the structure and aesthetics of the code. Since everyone will be building on top of these abstractions, it is very important to establish clean conventions that other pipelines will carry over.

src/deepsparse/v2/operators/engine_operator.py Outdated Show resolved Hide resolved
src/deepsparse/v2/operators/engine_operator.py Outdated Show resolved Hide resolved
src/deepsparse/v2/pipeline.py Show resolved Hide resolved
src/deepsparse/v2/pipeline.py Outdated Show resolved Hide resolved
src/deepsparse/v2/text_generation/pipeline.py Show resolved Hide resolved
src/deepsparse/v2/text_generation/pipeline.py Show resolved Hide resolved
src/deepsparse/v2/utils/state.py Show resolved Hide resolved
@rahul-tuli rahul-tuli (Member) left a comment

The overall structure looks good to me; there is a little cleanup left pertaining to docstrings and typing (did not comment there), plus good comments from Damian. One pain point: reviewing this PR is slightly difficult, so I would at least like to see some more unit tests, with coverage reports if possible.

(P.S: Great work on this)

@dsikka dsikka changed the base branch from feature/v2/text_generation to ds_refactor October 27, 2023 20:56
@dsikka dsikka changed the base branch from ds_refactor to feature/v2/text_generation October 27, 2023 20:57
src/deepsparse/v2/operators/operator.py Outdated Show resolved Hide resolved
src/deepsparse/v2/operators/operator.py Outdated Show resolved Hide resolved
src/deepsparse/v2/routers/router.py Outdated Show resolved Hide resolved
src/deepsparse/v2/schedulers/scheduler.py Outdated Show resolved Hide resolved
# we want to pass the empty kv cache inputs
# (batch_size=0) to the engine. Therefore,
Member

do we still have the logic that handles this?

Contributor Author

as in is the comment still relevant?

pipeline_state = PipelineState()
pipeline_state_vals = {}

# TODO: The code below will be replaced with a transformers set-up Operator.
Member

not sure if we should ever be setting up operators with other operators outside of construction - especially with stuff like engines that should be compiled on construction.

Contributor Author

oh sorry, the comment just means we should add in an operator that handles things specific to transformers (such as setting up the config, tokenizer, model).

Member

anything in setup should happen on init, not an operator that gets run every inference though, right?

Contributor Author

good point

src/deepsparse/v2/text_generation/pipeline.py Outdated Show resolved Hide resolved
__all__ = ["PrepareforPrefill"]


class PrepareforPrefill(Operator):
Member

will we need these "prep" operators after the refactor?

Contributor Author

so far they're responsible for doing things needed before prompt_inference (create a kv_cache) and before generation (set-up the token_generator for example) so they should still be here.
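As a rough illustration of that responsibility (the class name echoes the one under review, but the signature and the dict-based state here are hypothetical, not the real deepsparse operator API):

```python
class PrepareForPrefill:
    """Sketch of a "prep" operator: runs once before prompt inference to
    create state that downstream operators rely on, e.g. a KV cache."""

    def __init__(self, kv_cache_creator):
        self.kv_cache_creator = kv_cache_creator

    def run(self, tokens, **kwargs):
        # Create the kv_cache up front so the prefill/generation operators
        # receive it ready-made instead of each building it themselves.
        return {"tokens": tokens, "kv_cache": self.kv_cache_creator()}

out = PrepareForPrefill(dict).run([1, 2, 3])
```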

@dsikka dsikka force-pushed the features/v2/prompt_inference branch from 625a1c3 to 3f2193d Compare November 1, 2023 16:45
@dsikka dsikka changed the base branch from feature/v2/text_generation to v2 November 1, 2023 16:49
@bfineran bfineran (Member) left a comment

looks great, like the updated diagram, could maybe use a separate graphic that breaks down the KVCache interactions (understand that it may make the main diagram messy)

)
compile_prompt_logits = CompilePromptLogits()
"""
prep_for_single_engine = PrepareforSingleEngine(
Member

intentional?

Contributor Author

fixed in follow-up PR

pipeline_state = PipelineState()
pipeline_state_vals = {}

# TODO: The code below will be replaced with a transformers set-up Operator.
Member

anything in setup should happen on init, not an operator that gets run every inference though, right?

@rahul-tuli rahul-tuli (Member) left a comment

The structure is very good! I really appreciate the diagram in the PR description; most of my comments now are nits, so I'll leave it up to you to address them now or in a follow-up PR.

More tests would obviously be nice, but I ran through the script attached in the PR description and it works for me. Great work on this.

num_cores: int = None,
num_streams: int = None,
scheduler: Scheduler = None,
input_shapes: List[List[int]] = None,
engine_context: Optional[Context] = None,
engine_context: Optional[EngineContext] = None,
engine_kwargs: Dict = None,
Member

Suggested change
engine_kwargs: Dict = None,
engine_kwargs: Optional[Dict] = None,

src/deepsparse/v2/routers/router.py Show resolved Hide resolved
@@ -87,7 +87,7 @@ def __init__(
self._engine_args = engine_args
self._engine_type = engine_type

self.engine = self.create_engine()
self.engine = self.create_engine(**engine_kwargs)
Member

I don't like that we are passing kwargs to create_engine, which also accepts **kwargs; there is no clear way to figure out what arguments should be passed to create_engine without looking at the implementation.
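One way to make the contract discoverable (a sketch under assumed names, not the actual deepsparse signature) is explicit keyword-only parameters instead of an opaque **kwargs pass-through:

```python
from typing import Optional

def create_engine(model_path: str, *, num_cores: Optional[int] = None,
                  num_streams: Optional[int] = None) -> dict:
    # Explicit keyword-only parameters document what callers may pass;
    # an unknown argument now raises a TypeError instead of being
    # silently swallowed by **kwargs.
    return {"model_path": model_path, "num_cores": num_cores,
            "num_streams": num_streams}

engine = create_engine("model.onnx", num_cores=4)
```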

@@ -87,7 +87,7 @@ def __init__(
self._engine_args = engine_args
Member

we have engine_args and engine_kwargs, but both are dicts; aren't the names slightly misleading?

return False

def run(self, tokens: Any, kv_cache: Any, pipeline_state: PipelineState, **kwargs):

Member

docstring?

Combine the prompt logits. Currently relying on the inference state to store the
prompt logits for each token or multi-token batch processed. This operator will
take prompt logits from each iteration run and update the inference state.
"""
Member

params?

"""

def run(self, logits, inference_state: InferenceState, **kwargs):
logit_type = "prompt_logits"
Member

docstring?
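The accumulation described in the CompilePromptLogits docstring above could be sketched like this (a plain dict stands in for the real InferenceState; shapes and helper name are illustrative):

```python
import numpy as np

def compile_prompt_logits(new_logits, inference_state):
    """Append this iteration's prompt logits to the running list kept
    in the inference state."""
    collected = inference_state.get("prompt_logits", [])
    inference_state["prompt_logits"] = collected + [new_logits]
    return inference_state

# one multi-token chunk of 16 tokens, then a 4-token remainder
state = {}
state = compile_prompt_logits(np.zeros((1, 16, 8)), state)
state = compile_prompt_logits(np.zeros((1, 4, 8)), state)
# concatenate along the sequence axis once all chunks are collected
full = np.concatenate(state["prompt_logits"], axis=1)
```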



class KVCacheCreatorOutput(BaseModel):
kv_cache: Any = Field(description="KV Cache Created") # DecoderKVCache
Member

Doesn't setting arbitrary_types_allowed also allow the typing to be DecoderKVCache directly?
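For context, a minimal pydantic sketch (the DecoderKVCache below is a stand-in class, not the real one): with arbitrary_types_allowed set, the field can be annotated as DecoderKVCache instead of Any:

```python
from pydantic import BaseModel, Field

class DecoderKVCache:
    """Stand-in for the real cache class."""

class KVCacheCreatorOutput(BaseModel):
    kv_cache: DecoderKVCache = Field(description="KV Cache Created")

    class Config:
        # required for pydantic to accept non-pydantic types
        arbitrary_types_allowed = True

out = KVCacheCreatorOutput(kv_cache=DecoderKVCache())
```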

class MultiEnginePrefill(Operator):
def __init__(self, prompt_sequence_length, sequence_length):
"""
Prepare the tokens for the multi-token engine. This requires creating the
Member

I'm stopping here with docstring and param comments, kindly check and update

@dsikka dsikka merged commit e1ff108 into v2 Nov 3, 2023
@dsikka dsikka deleted the features/v2/prompt_inference branch November 3, 2023 00:47
bfineran added a commit that referenced this pull request Dec 6, 2023
* Pipelines Refactor - Initial Impl (#1287)

* [Pipeline Refactor] Additional functionality, engine operator, linear router and image classification pipeline/operators/example (#1325)

* initial functionality and working example with image classification

* remove testing image

* update args

* initial functionality and working example with image classification

* remove testing image

* pr comments

* defines schemas for operators and test

* add image classification test, PR comments

* fix input/output handling in pipeline and operator base classes to be more generic; remove context

* add additional operator input message

* typo fix

* [v2] EngineOperator updates to make continuous batching easier (#1371)

* [v2] EngineOperator updates to make continuous batching easier

* test fixes

* [Pipeline Refactor] Update routes, text generation initial functionality (#1348)

* initial functionality and working example with image classification

* remove testing image

* rebase fixes

* initial functionality and working example with image classification

* text gen

* updates func

* prompt inference, initial functionality

* remove image; update state docstring

* Fix typo

* add todo for split/join

* remove context, clean-up args, remove prefill_preprocess_operaator

* fix docstrings

* [Pipeline Refactor] Additional Operators, Route update and completed generation functionality (#1356)

* initial functionality and working example with image classification

* remove testing image

* rebase fixes

* initial functionality and working example with image classification

* text gen

* updates func

* prompt inference, initial functionality

* remove image; update state docstring

* Fix typo

* add todo for split/join

* remove context, clean-up args, remove prefill_preprocess_operaator

* fix docstrings

* initial functionality and working example with image classification

* updates func

* prompt inference, initial functionality

* finish generation operators and update routes

* further breakdown operators

* add operators

* fix can_operate condition

* update can_operate to not rely on the inference_state

* rebase + update

* fix condition

* fix capacity settting again

* typo fixes

* [Pipeline Refactor] Split/Join Functionality for multiple prompts (#1384)

* add split/join functionality

* update router to include split/join in parent class, refactor pipeline code to remove repeat code, update map function

* process multiple generations

* move map to base class

* [Pipeline Refactor] Unit Testing for Text Generation Operators (#1392)

* unit testing for text generation operators

* additional changes

* unit testing completion

* remove debug

* fix

* add todo

* more clean-up

* fix test

* add docstrings/comments

* break out tests to individual unit test files; add conftest and make scope of fixtures module to help with speed

* fix name

* [Continuous Batching] Queue Implementation to support batching grouping and prioritization (#1373)

* [Continuous Batching] Queue Implementation to support batching grouping and prioritization

* has_key method

* thread safety

* add blocking option for pop_batch

* update docstring

* allow mutex to be shared across continuous batching objects

* revert last commit

* [Continuous Batching] Executor thread for running continuous batching (#1374)

* [Continuous Batching] Executor thread for running continuous batching

* quality

* ensure that executor stops when main thread does - clean up test hack

* [ContinuousBatching] ContinuousBatchingScheduler Implementation (#1375)

* [ContinuousBatching] ContinuousBatchingScheduler Implementation

* cleanup unnecessary stop condition

* [continuous batching] singleton pattern for scheduler (#1391)

* [continuous batching] singleton pattern for scheduler

* catch from review

* [Pipeline Refactor][Text-Generation] Create a helper function for creating engine_inputs (#1364)

* rebasing off my initial commit

* cleanups

* unit testing for text generation operators

* additional changes

* unit testing completion

* remove debug

* fix

* add todo

* more clean-up

* fix test

* add docstrings/comments

* break out tests to individual unit test files; add conftest and make scope of fixtures module to help with speed

* Delete tests/deepsparse/v2/unit/text_generation/test_msic.py

---------

Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>

* [Pipeline Refactor][Text-Generation] Refactor `transformers` helpers functions (#1394)

* add split/join functionality

* update router to include split/join in parent class, refactor pipeline code to remove repeat code, update map function

* process multiple generations

* initial commit

* fix error

* unit testing for text generation operators

* additional changes

* unit testing completion

* remove debug

* fix

* add todo

* more clean-up

* fix test

* add docstrings/comments

* break out tests to individual unit test files; add conftest and make scope of fixtures module to help with speed

* Delete tests/deepsparse/v2/unit/text_generation/test_msic.py

* pipeline runs, but incorrectly

* Revert "pipeline runs, but incorrectly"

This reverts commit 51c4ee6.

* PR review comments

---------

Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>

* [Text Generation][V2] End-to-end tests (#1402)

* initial commit

* initial commit

* its working now

* beautification

* thank you Dipika <3

* ready to review

* [Pipeline Refactor][Text Generation][Continuous Batching] Integration (#1409)

* update split/join

* use map

* update

* run end-to-end

* clean-up

* fix bug with batch size, introduce SplitRoute dataclass

* update tests to use new inputs/outputs

* use the normal scheduler for internal kv_cache

* add pipeline inpuits

* clean-up

* change engine type, update docstrings, update override function to be more generic

* move subgraph functionality to its own function; clean-up cont batching in text gen pipeline

* update linear pathway to also use subgraph execution

* rebase fix

* fix tests

* [Pipeline Refactor] Operator Registry (#1420)

* initial registry functionality

* use sparsezoo mixin

* [Pipeline Refactor] Fix Operator scheduling to fix issue with slow execution  (#1453)

* fix scheduling to fix issue with engine running very slowly; introduce new completed attribute for Subgraph instead of checking instance type

* fix warning message

* [Pipeline Refactor] Add `Pipeline.create` method to initialize pipelines (#1457)

* add pipeline create method for pipeline creation using the operator registry

* add instance check

* [Pipeline Refactor] async (#1380)

* initial functionality and working example with image classification

* remove testing image

* rebase fixes

* initial functionality and working example with image classification

* text gen

* updates func

* prompt inference, initial functionality

* remove image; update state docstring

* Fix typo

* add todo for split/join

* remove context, clean-up args, remove prefill_preprocess_operaator

* fix docstrings

* initial functionality and working example with image classification

* updates func

* prompt inference, initial functionality

* finish generation operators and update routes

* further breakdown operators

* add operators

* fix can_operate condition

* update can_operate to not rely on the inference_state

* rebase + update

* fix condition

* async initial functionality

* fix capacity settting again

* add blocking

* more testing

* update to use split/join

* fix

* rebase fix

* remove index

* change event loop

* rebase fix

* update async run to use new operator scheduling properly

* rebase fixes (#1458)

* more fixes (#1459)

---------

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
dbogunowicz pushed a commit that referenced this pull request Dec 18, 2023
…ity (#1348)

dbogunowicz added a commit that referenced this pull request Jan 2, 2024