fix SeqGPTPipeline input force cuda issue (#738)
The tokenizer output of SeqGPTPipeline was forced onto CUDA; it now follows the model's device type.
RainJayTsai committed Mar 4, 2024
1 parent f4e01d6 commit a0fb7e6
Showing 1 changed file with 1 addition and 1 deletion: modelscope/pipelines/nlp/text_generation_pipeline.py
```diff
@@ -464,7 +464,7 @@ def forward(self, prompt: str, **forward_params) -> Dict[str, Any]:
             padding=True,
             truncation=True,
             max_length=1024)
-        input_ids = input_ids.input_ids.cuda()
+        input_ids = input_ids.input_ids.to(self.model.device)
         outputs = self.model.generate(
             input_ids, num_beams=4, do_sample=False, max_new_tokens=256)
         decoded_sentences = self.tokenizer.batch_decode(
```
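
The pattern generalizes to any pipeline that prepares inputs on the host: querying the model for its device instead of hard-coding `.cuda()` keeps generation working on CPU-only machines. A minimal sketch of the pattern, using an illustrative Hugging Face checkpoint rather than the actual SeqGPT model the ModelScope pipeline loads:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Illustrative checkpoint; the real pipeline loads a SeqGPT model via ModelScope.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Put the model wherever it can actually run; the inputs will follow it.
model.to("cuda" if torch.cuda.is_available() else "cpu")

inputs = tokenizer(
    "translate English to German: Hello, world",
    return_tensors="pt",
    padding=True,
    truncation=True,
    max_length=1024)

# Before the fix: inputs.input_ids.cuda() raises on CPU-only hosts.
# After the fix: send the inputs to whatever device the model lives on.
input_ids = inputs.input_ids.to(model.device)

outputs = model.generate(
    input_ids, num_beams=4, do_sample=False, max_new_tokens=256)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```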
