
Stop Sequence #109

Open · volkerha opened this issue Nov 24, 2022 · 4 comments

@volkerha

Hi DeepSpeed-MII team,

I was wondering if there is a way to implement a stop sequence or stop token in ds-mii to stop generation early.

In the current implementation, the model usually generates the full max_new_tokens tokens. In Hugging Face transformers, it's possible to implement custom stopping criteria, but I did not find that option here.
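
For reference, a custom multi-token stopping criterion in transformers looks roughly like this (a minimal sketch; `stop_ids` would be the tokenized stop sequence):

```python
from transformers import StoppingCriteria, StoppingCriteriaList

class StopSequenceCriteria(StoppingCriteria):
    """Stop once the most recently generated tokens match the stop sequence."""
    def __init__(self, stop_ids):
        self.stop_ids = stop_ids

    def __call__(self, input_ids, scores, **kwargs):
        # compare the tail of the generated ids against the stop sequence
        return input_ids[0, -len(self.stop_ids):].tolist() == self.stop_ids

# usage: model.generate(..., stopping_criteria=StoppingCriteriaList([StopSequenceCriteria(stop_ids)]))
```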

I tried setting the eos_token_id to the desired stop token but somehow the model keeps generating even after producing the stop token.

Cheers, V

@mrwyattii (Contributor)

Hi @volkerha, this is not currently possible in MII. In general, any extra kwargs passed to query() are forwarded to the transformers.pipeline object. For example:

```python
import mii

generator = mii.mii_query_handle("text-generation-deployment")  # handle to an existing deployment (name illustrative)
pipeline_kwargs = {"max_new_tokens": 50, "batch_size": 2}
result = generator.query({"query": ["DeepSpeed is the", "Seattle is"]}, **pipeline_kwargs)
```

The problem is that our current deployment types utilize a gRPC server, and those pipeline_kwargs values must be serialized. Currently, we only support values that are int, float, str, or bool. I see a few ways forward:

  1. We introduce a deployment type that doesn't rely on gRPC (this is currently on our TODO list)
  2. We extend the serialization capabilities (this would be great because it would fix this issue even for deployments that use gRPC)
  3. We add a custom API that generates the stopping criteria from user input and passes it to the pipeline (this would probably be the most fragile and difficult option to maintain; see the sketch below)
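
For (3), the rough idea would be that only a plain string crosses the gRPC boundary and the server reconstructs the criteria object before invoking the pipeline. A sketch (not MII's actual API; names are illustrative):

```python
from transformers import StoppingCriteria, StoppingCriteriaList

def build_stopping_criteria(stop_sequence: str, tokenizer):
    # only the string is serialized over gRPC; the object is rebuilt server-side
    stop_ids = tokenizer.encode(stop_sequence, add_special_tokens=False)

    class _StopOnSequence(StoppingCriteria):
        def __call__(self, input_ids, scores, **kwargs):
            return input_ids[0, -len(stop_ids):].tolist() == stop_ids

    return StoppingCriteriaList([_StopOnSequence()])
```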

In your current usage of MII, do you require the gRPC server capabilities, or would a non-gRPC deployment work for you? I believe (1) is the most likely to be implemented in the near future.

@tokestermw

Looks like there is a stop_sequence argument (though it's limited to one token): huggingface/transformers#18444
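
Usage would be something like this (note that the stop sequence is tokenized and only its first token is actually used):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
# stop_sequence is tokenized internally and its first token becomes eos_token_id
out = generator("DeepSpeed is the", stop_sequence="\n", max_new_tokens=50)
```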

I'm thinking of opening a PR against Hugging Face to add an argument to the generation pipeline for specifying stop tokens (so we don't have to pass objects around).

@volkerha (Author)

As you mentioned, 🤗 transformers simply sets eos_token_id to the first token of the stop_sequence here. We can pass eos_token_id to generate(...) directly, so there's not much benefit.

In my case, I would like to specify a sequence of several tokens as the stopping criterion, e.g. generation should stop only if the model produces something like "User:".
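
For illustration, a phrase like "User:" usually maps to more than one token id, so a single eos_token_id can't capture it:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
print(tok.encode("User:"))  # typically several ids, so one eos_token_id can't match the whole phrase
```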

Also, it would be good to keep the original eos_token.

@tokestermw

This PR is merged: huggingface/transformers#20727

Once released, we should be able to do model.generate(..., eos_token_id=[1, 2]).
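
For example (a sketch, assuming tok, model, and input_ids are already set up):

```python
# eos_token_id accepts a list after this change; generation stops on ANY of the ids
stop_id = tok.encode("User", add_special_tokens=False)[0]
output = model.generate(input_ids, max_new_tokens=50, eos_token_id=[tok.eos_token_id, stop_id])
```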

It doesn't use stopping_criteria, since the JAX and TF code paths don't support it, and beam search doesn't work with stopping_criteria either.
