I downloaded the fine-tuned model and dataset (tapex.large.wtq and wtq.preprocess) and ran `python examples/tableqa/run_model.py predict --resource-dir ./tapex.large.wtq --checkpoint-name model.pt`. Then I get the following error:
Traceback (most recent call last):
File "/data2/huchaowen/Table-Pretraining/examples/tableqa/run_model.py", line 181, in <module>
predict_demo(args)
File "/data2/huchaowen/Table-Pretraining/examples/tableqa/run_model.py", line 161, in predict_demo
answer = demo_interface.predict(question=question,
File "/data2/huchaowen/Table-Pretraining/tapex/model_interface.py", line 34, in predict
model_output = self.model.translate(
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/hub_utils.py", line 124, in translate
return self.sample(sentences, beam, verbose, **kwargs)
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/hub_utils.py", line 132, in sample
batched_hypos = self.generate(tokenized_sentences, beam, verbose, **kwargs)
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/models/bart/hub_interface.py", line 107, in generate
results = super().generate(
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/hub_utils.py", line 189, in generate
translations = self.task.inference_step(
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/tasks/fairseq_task.py", line 540, in inference_step
return generator.generate(
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/sequence_generator.py", line 204, in generate
return self._generate(sample, **kwargs)
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/sequence_generator.py", line 274, in _generate
encoder_outs = self.model.forward_encoder(net_input)
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/sequence_generator.py", line 801, in forward_encoder
return [model.encoder.forward_torchscript(net_input) for model in self.models]
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/sequence_generator.py", line 801, in <listcomp>
return [model.encoder.forward_torchscript(net_input) for model in self.models]
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/models/fairseq_encoder.py", line 55, in forward_torchscript
return self.forward_non_torchscript(net_input)
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/models/fairseq_encoder.py", line 62, in forward_non_torchscript
return self.forward(**encoder_input)
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/models/transformer/transformer_encoder.py", line 165, in forward
return self.forward_scriptable(
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/models/transformer/transformer_encoder.py", line 294, in forward_scriptable
lr = layer(x, encoder_padding_mask=encoder_padding_mask_out)
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/modules/transformer_layer.py", line 351, in forward
x, _ = self.self_attn(
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/fairseq/modules/multihead_attention.py", line 538, in forward
return F.multi_head_attention_forward(
File "/data2/huchaowen/anaconda3/envs/tapex/lib/python3.8/site-packages/torch/nn/functional.py", line 5160, in multi_head_attention_forward
attn_output_weights = torch.bmm(q_scaled, k.transpose(-2, -1))
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)`
fairseq==0.12.2, transformers==4.24.0
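One thing worth noting: CUDA errors like this are often reported asynchronously, so the `torch.bmm` line in the traceback may not be where the real failure happened. A common first debugging step (a suggestion, not a confirmed fix) is to rerun the same command synchronously, and then on CPU to rule out a GPU/driver problem:

```shell
# Force synchronous CUDA execution so the traceback points at the real failing kernel.
CUDA_LAUNCH_BLOCKING=1 python examples/tableqa/run_model.py predict \
    --resource-dir ./tapex.large.wtq --checkpoint-name model.pt

# Hide all GPUs to run the same command on CPU; if this succeeds,
# the problem is in the CUDA setup rather than in the model or data.
CUDA_VISIBLE_DEVICES="" python examples/tableqa/run_model.py predict \
    --resource-dir ./tapex.large.wtq --checkpoint-name model.pt
```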
@jhrsya Hello, thanks for your interest in our work! I don't think this error is closely related to TAPEX. Can you check whether PyTorch is installed correctly, and whether CUDA is working properly?
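To check that, a minimal sanity script like the following can help (the tensor shapes are arbitrary; it simply exercises the same `torch.bmm` batched-matmul path that fails in the traceback, falling back to CPU if no GPU is visible):

```python
import torch

# Report the installed PyTorch build and its CUDA toolchain.
# A mismatch between the build's CUDA version and the driver is a
# common cause of CUBLAS_STATUS_INVALID_VALUE errors.
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

# Exercise the same batched matmul used inside multi-head attention.
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4, 8, 16, device=device)
b = torch.randn(4, 16, 8, device=device)
print(torch.bmm(a, b).shape)  # torch.Size([4, 8, 8])
```

If this fails on GPU but works on CPU, the PyTorch/CUDA installation (not TAPEX or fairseq) is the likely culprit.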