Use multiple prompts and generate multiple outputs for each prompt:

```python
PROMPTS = [PROMPT, SECOND_PROMPT]
# Run both prompts through the pipeline, sampling 2 sequences per prompt
text_output = text_pipeline(prompt=PROMPTS, num_return_sequences=2, do_sample=True, max_length=100)

prompt_outputs = text_output.generations[0]
second_prompt_outputs = text_output.generations[1]

print("Outputs from the first prompt: ")
for output in prompt_outputs:
    print(output)
    print("\n")

print("Outputs from the second prompt: ")
for output in second_prompt_outputs:
    print(output)
    print("\n")
```
Output:
```
Outputs from the first prompt:
text=" are you coping better with holidays?\nI'm been reall getting good friends and helping friends as much as i can so it's all good." score=None finished=True finished_reason='stop'
text="\nI'm good... minor panic attacks but aside from that I'm good." score=None finished=True finished_reason='stop'

Outputs from the second prompt:
text='\nHAVING A GOOD TIME by Maya Angelou; How to Be a Winner by Peter Enns; BE CAREFUL WHAT YOU WHORE FOR by Sarah Bergman; 18: The Basic Ingredients of a Good Life by Jack Canfield.\nI think you might also read The Sympathy of the earth by Charles Darwin, if you are not interested in reading books. Do you write? I think it will help you to refine your own writing.' score=None finished=True finished_reason='stop'
text=' every school or publication I have looked at has said the same two books.\nIt depends on the school/master. AIS was the New York Times Bestseller forever, kicked an ass in the teen fiction genre for many reasons, a lot of fiction picks like that have been around a while hence popularity. And most science fiction and fantasy titles (but not romance or thriller) are still popular.' score=None finished=True finished_reason='stop'
```
Text Generation GenerationConfig Features Supported

Parameters controlling the output length:

| Feature | Description | Deepsparse Default | HuggingFace Default | Supported |
|---------|-------------|--------------------|---------------------|-----------|
| max_length | Maximum length of the generated tokens, equal to the prompt length plus max_new_tokens. Overridden by max_new_tokens. | None | 20 | Yes |
| max_new_tokens | Maximum number of tokens to generate, ignoring the prompt tokens. | 100 | None | Yes |
| min_length | Minimum length of the generated tokens, equal to the prompt length plus min_new_tokens. Overridden by min_new_tokens. | - | 0 | No |
| min_new_tokens | Minimum number of tokens to generate, ignoring the prompt tokens. | - | None | No |
| max_time | - | - | - | No |
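The interaction between the two length parameters can be sketched as a small helper. `generation_budget` is a hypothetical function (not part of the DeepSparse API) that follows the override rule in the table above: `max_new_tokens`, when set, takes precedence over `max_length`, which counts the prompt tokens as well.

```python
def generation_budget(prompt_tokens, max_length=None, max_new_tokens=None):
    """Return the number of new tokens a generation may produce.

    Illustrative sketch of the length rules in the table above; not
    DeepSparse's actual implementation.
    """
    if max_new_tokens is not None:
        # max_new_tokens ignores the prompt and overrides max_length
        return max_new_tokens
    if max_length is not None:
        # max_length counts the prompt, so subtract it (never negative)
        return max(0, max_length - prompt_tokens)
    return None  # no explicit limit configured


# With a 10-token prompt and max_length=100, up to 90 new tokens fit;
# adding max_new_tokens=30 caps generation at 30 regardless of max_length.
print(generation_budget(10, max_length=100))                     # 90
print(generation_budget(10, max_length=100, max_new_tokens=30))  # 30
```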
Parameters for manipulating the model output logits:

| Feature | Description | Deepsparse Default | HuggingFace Default | Supported |
|---------|-------------|--------------------|---------------------|-----------|
| top_k | The number of highest-probability vocabulary tokens to keep for top-k filtering. | 50 | 50 | Yes |
| top_p | Keep the smallest set of most probable tokens whose cumulative probability is >= top_p. | 1.0 | 1.0 | Yes |
| repetition_penalty | Penalty applied when generating a new token: the frequencies of previously generated tokens are summed and subtracted from the corresponding logit value. | 1.0 | 1.0 | Yes |
| temperature | The temperature used when sampling from the probability distribution computed from the logits. Higher values produce more random samples. Must be greater than 0.0. | 1.0 | 1.0 | Yes |
| typical_p | - | - | - | No |
| epsilon_cutoff | - | - | - | No |
| eta_cutoff | - | - | - | No |
| diversity_penalty | - | - | - | No |
| length_penalty | - | - | - | No |
| bad_words_ids | - | - | - | No |
| force_words_ids | - | - | - | No |
| renormalize_logits | - | - | - | No |
| constraints | - | - | - | No |
| forced_bos_token_id | - | - | - | No |
| forced_eos_token_id | - | - | - | No |
| remove_invalid_values | - | - | - | No |
| exponential_decay_length_penalty | - | - | - | No |
| suppress_tokens | - | - | - | No |
| begin_suppress_tokens | - | - | - | No |
| forced_decoder_ids | - | - | - | No |
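How the supported logit-manipulation parameters compose can be illustrated with a self-contained sketch. This is not DeepSparse's implementation; the repetition penalty here uses the common Hugging Face-style divide/multiply formulation, and the function name `sample_next_token` is hypothetical.

```python
import math
import random


def sample_next_token(logits, top_k=50, top_p=1.0, temperature=1.0,
                      repetition_penalty=1.0, generated=()):
    """Sample a token index after applying the filters from the table above."""
    logits = list(logits)

    # repetition_penalty: dampen logits of tokens that were already generated
    for tok in set(generated):
        if logits[tok] > 0:
            logits[tok] /= repetition_penalty
        else:
            logits[tok] *= repetition_penalty

    # temperature: >1 flattens the distribution, <1 sharpens it
    logits = [l / temperature for l in logits]

    # softmax over the adjusted logits (shift by the max for stability)
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]

    # top_k: keep only the k most probable tokens
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = order[:top_k]

    # top_p: within those, keep the smallest prefix whose cumulative
    # probability reaches top_p (the "nucleus")
    cum, nucleus = 0.0, []
    for i in keep:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # renormalize over the nucleus and draw a sample
    mass = sum(probs[i] for i in nucleus)
    r = random.random() * mass
    for i in nucleus:
        r -= probs[i]
        if r <= 0:
            return i
    return nucleus[-1]
```

With `top_k=1` (or a very small `top_p`) the nucleus collapses to the single most likely token, so sampling becomes greedy decoding; the defaults (`top_k=50`, `top_p=1.0`, `temperature=1.0`) match the Deepsparse defaults in the table.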
Parameters that control the generation strategy used:

| Feature | Description | Deepsparse Default | HuggingFace Default | Supported |
|---------|-------------|--------------------|---------------------|-----------|
| do_sample | If True, will apply sampling from the probability distribution computed from the logits. | | | |