IndexError                                Traceback (most recent call last)
Input In [23], in <cell line: 1>()
----> 1 nlp("long text"*300)

File /opt/conda/lib/python3.8/site-packages/transformers/pipelines/text_classification.py:125, in TextClassificationPipeline.__call__(self, *args, **kwargs)
     92 def __call__(self, *args, **kwargs):
     93     """
     94     Classify the text(s) given as inputs.
    (...)
    123     If `self.return_all_scores=True`, one such dictionary is returned per label.
    124     """
--> 125 result = super().__call__(*args, **kwargs)
    126 if isinstance(args[0], str):
    127     # This pipeline is odd, and return a list when single item is run
    128     return [result]

File /opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py:1027, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
   1025     return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
   1026 else:
-> 1027     return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)

File /opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py:1034, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
   1032 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
   1033     model_inputs = self.preprocess(inputs, **preprocess_params)
-> 1034     model_outputs = self.forward(model_inputs, **forward_params)
   1035     outputs = self.postprocess(model_outputs, **postprocess_params)
   1036     return outputs

File /opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py:944, in Pipeline.forward(self, model_inputs, **forward_params)
    942 with inference_context():
    943     model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
--> 944     model_outputs = self._forward(model_inputs, **forward_params)
    945     model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))
    946 else:

File /opt/conda/lib/python3.8/site-packages/transformers/pipelines/text_classification.py:137, in TextClassificationPipeline._forward(self, model_inputs)
    136 def _forward(self, model_inputs):
--> 137     return self.model(**model_inputs)

File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py:1204, in RobertaForSequenceClassification.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
   1196 r"""
   1197 labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
   1198     Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
   1199     config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
   1200     `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
   1201 """
   1202 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1204 outputs = self.roberta(
   1205     input_ids,
   1206     attention_mask=attention_mask,
   1207     token_type_ids=token_type_ids,
   1208     position_ids=position_ids,
   1209     head_mask=head_mask,
   1210     inputs_embeds=inputs_embeds,
   1211     output_attentions=output_attentions,
   1212     output_hidden_states=output_hidden_states,
   1213     return_dict=return_dict,
   1214 )
   1215 sequence_output = outputs[0]
   1216 logits = self.classifier(sequence_output)

File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py:843, in RobertaModel.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
    836 # Prepare head mask if needed
    837 # 1.0 in head_mask indicate we keep the head
    838 # attention_probs has shape bsz x n_heads x N x N
    839 # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
    840 # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
    841 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
--> 843 embedding_output = self.embeddings(
    844     input_ids=input_ids,
    845     position_ids=position_ids,
    846     token_type_ids=token_type_ids,
    847     inputs_embeds=inputs_embeds,
    848     past_key_values_length=past_key_values_length,
    849 )
    850 encoder_outputs = self.encoder(
    851     embedding_output,
    852     attention_mask=extended_attention_mask,
    (...)
    860     return_dict=return_dict,
    861 )
    862 sequence_output = encoder_outputs[0]

File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py:136, in RobertaEmbeddings.forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length)
    134 embeddings = inputs_embeds + token_type_embeddings
    135 if self.position_embedding_type == "absolute":
--> 136     position_embeddings = self.position_embeddings(position_ids)
    137     embeddings += position_embeddings
    138 embeddings = self.LayerNorm(embeddings)

File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/sparse.py:158, in Embedding.forward(self, input)
    157 def forward(self, input: Tensor) -> Tensor:
--> 158     return F.embedding(
    159         input, self.weight, self.padding_idx, self.max_norm,
    160         self.norm_type, self.scale_grad_by_freq, self.sparse)

File /opt/conda/lib/python3.8/site-packages/torch/nn/functional.py:2044, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   2038 # Note [embedding_renorm set_grad_enabled]
   2039 # XXX: equivalent to
   2040 # with torch.no_grad():
   2041 #   torch.embedding_renorm_
   2042 # remove once script supports set_grad_enabled
   2043 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2044 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

IndexError: index out of range in self
Steps to reproduce
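The failing call in the traceback above is the whole reproduction. A minimal, self-contained sketch follows; the issue does not name the checkpoint, so the RoBERTa sentiment model below is an assumption. Any RoBERTa-based sequence-classification checkpoint with the standard 512-token limit should fail the same way.

```python
from transformers import pipeline

# Hypothetical checkpoint: the original issue does not say which model was
# used, only that it is a RoBERTa sequence-classification model.
nlp = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-sentiment",
)

# "long text" * 300 tokenizes to well over 512 tokens, so the generated
# position ids run past the end of the position-embedding table.
nlp("long text" * 300)  # IndexError: index out of range in self
```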
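What the traceback shows: the crash bottoms out in torch.embedding during the position-embedding lookup (modeling_roberta.py:136), which means position_ids contains indices beyond the embedding table. RoBERTa checkpoints ship a fixed table of absolute position embeddings (max_position_embeddings is 514, of which 512 are usable), and the pipeline does not truncate inputs by default, so any text that tokenizes past that limit raises this opaque IndexError instead of a clear length error.

A minimal workaround sketch, assuming the transformers 4.x behavior where extra tokenizer kwargs passed to a text-classification pipeline call are forwarded to the tokenizer; this is worth verifying against the installed version:

```python
# Inspect the hard limit baked into the checkpoint.
print(nlp.model.config.max_position_embeddings)  # 514 for RoBERTa
print(nlp.tokenizer.model_max_length)            # 512 usable positions

# Ask the pipeline to truncate to the model's maximum length. On recent
# transformers 4.x releases, tokenizer kwargs such as truncation are
# passed through to preprocess() (assumption: verify on your version).
result = nlp("long text" * 300, truncation=True)

# Equivalent explicit truncation, if the pipeline call on your version
# does not accept tokenizer kwargs:
encoded = nlp.tokenizer(
    "long text" * 300,
    truncation=True,
    max_length=nlp.tokenizer.model_max_length,
    return_tensors="pt",
)
output = nlp.model(**encoded)
```

Truncation discards everything past the limit; if the full document matters, chunking the text into windows and aggregating per-window predictions, or switching to a long-context model, are the usual alternatives.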