
Huggingface model not being torch scriptable #51376

Open
HamidShojanazeri opened this issue Jan 29, 2021 · 5 comments
Labels
oncall: jit Add this issue/PR to JIT oncall triage queue
Projects

Comments

HamidShojanazeri commented Jan 29, 2021

❓ Questions and Help

Following the recommendation in a related issue to try the script API on Hugging Face (HF) models, I tried it and hit the error reported in the attached logs. In general, HF models are not scriptable; there has been a feature request to make them scriptable, and some work was started in a PR, but not much progress has been reported since. Here is a code snippet to reproduce the issue with the latest transformers version. Any thoughts/suggestions are appreciated.

import torch
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# torchscript=True makes the model return tuples instead of ModelOutput objects
config = AutoConfig.from_pretrained('roberta-base', num_labels=2, torchscript=True)
model = AutoModelForSequenceClassification.from_pretrained('roberta-base', config=config)
tokenizer = AutoTokenizer.from_pretrained('roberta-base', do_lower_case=True)

dummy_input = "This is a dummy input for torch jit script"
max_length = 20
# padding='max_length' replaces the deprecated pad_to_max_length=True
inputs = tokenizer.encode_plus(dummy_input, max_length=max_length,
                               padding='max_length', add_special_tokens=True,
                               return_tensors='pt')
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]
outputs = model(**inputs)  # eager forward pass succeeds

model.to(device).eval()
scripted_model = torch.jit.script(model)  # fails here; see attached logs
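As background (a minimal sketch, not taken from this issue's logs): torch.jit.script compiles the Python source of forward, so it rejects Python constructs outside the TorchScript subset. One illustrative example is a **kwargs forward signature; HF models' Optional-heavy signatures and dict-style outputs run into the same class of compiler restrictions.

```python
import torch
import torch.nn as nn

class ExplicitArgs(nn.Module):
    # fully typed tensor arguments: TorchScript can compile this
    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        return (input_ids * attention_mask).sum(dim=-1)

class KwargsForward(nn.Module):
    # **kwargs in forward: one of the Python constructs TorchScript rejects
    def forward(self, **kwargs):
        return kwargs["input_ids"].sum(dim=-1)

scripted = torch.jit.script(ExplicitArgs())  # compiles
ids = torch.ones(2, 4, dtype=torch.long)
mask = torch.ones(2, 4, dtype=torch.long)
result = scripted(ids, mask)  # tensor([4, 4])

kwargs_script_failed = False
try:
    torch.jit.script(KwargsForward())
except Exception:  # scripting raises at compile time, before any forward call
    kwargs_script_failed = True
```

The toy modules here are hypothetical stand-ins, not the RoBERTa code path that actually fails.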

Error logs
HF-scripting-error.logs.txt

cc @gmagogsfm

eellison (Contributor) commented Feb 2, 2021

Work is ongoing in this direction by @nikithamalgifb

@eellison eellison moved this from Need triage to Pending in JIT Triage Feb 2, 2021
@nikithamalgifb nikithamalgifb removed their assignment Mar 12, 2021
isgursoy commented

following

issamemari commented

Very interested

adampauls commented

following


MrRace commented Oct 31, 2022

Sad ~ Who can share the solution? Thanks a lot!
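As a hedged aside (not an official resolution of this issue): the TorchScript path that Hugging Face documents is tracing rather than scripting. A minimal sketch of the trace workflow follows, using a tiny hypothetical stand-in module so it runs without downloading weights; with a real HF model you would load it with torchscript=True and pass the tokenized tensors to torch.jit.trace the same way.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for an HF encoder, just to show the trace mechanics
class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(100, 8)
        self.head = nn.Linear(8, 2)

    def forward(self, input_ids, attention_mask):
        hidden = self.embed(input_ids) * attention_mask.unsqueeze(-1).float()
        return self.head(hidden.mean(dim=1))

model = TinyEncoder().eval()
ids = torch.randint(0, 100, (1, 20))
mask = torch.ones(1, 20, dtype=torch.long)

# trace records one concrete forward pass, sidestepping the Python features
# that break torch.jit.script; the recorded graph is fixed to these shapes
traced = torch.jit.trace(model, (ids, mask))
out_traced = traced(ids, mask)
assert torch.allclose(out_traced, model(ids, mask))
```

The trade-off: a traced model bakes in the example input shapes and any control flow taken during the recorded pass, so padding to a fixed max_length (as in the snippet above) matters.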

Projects
JIT Triage: Pending

8 participants