minilm_jit example doesn't work #32

Closed
powderluv opened this issue May 2, 2022 · 9 comments

@powderluv (Contributor):

(shark.venv) a@debian-1:~/github/dshark$ python -m  shark.examples.minilm_jit
/home/a/github/dshark/shark.venv/lib/python3.7/site-packages/torch/nn/modules/module.py:1403: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
  " and ".join(warn_msg) + " are deprecated. nn.Module.state_dict will not accept them in the future. "
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at microsoft/MiniLM-L12-H384-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
        - Avoid using `tokenizers` before the fork if possible
        - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
Target triple found:x86_64-linux-gnu
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
        - Avoid using `tokenizers` before the fork if possible
        - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
(shark.venv) a@debian-1:~/github/dshark$    
@powderluv (Contributor, Author):

Does no error in the output mean a successful run?

@powderluv (Contributor, Author):

Maybe we can run the inference a few times and output the time it took, similar to:

https://github.com/powderluv/transformer-benchmarks/blob/f8258f751cd3b7e87bf66ffc1b38e4443a70c1e3/benchmark.py#L318-L339
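The linked benchmark.py times repeated inference runs and reports latency statistics. A minimal sketch of that kind of timing loop is below; the `benchmark` helper and the stand-in workload are hypothetical illustrations, not code from the example.

```python
import time
import statistics

def benchmark(fn, *args, num_iterations=10):
    """Run `fn` repeatedly and report per-run latency in milliseconds."""
    latencies = []
    for _ in range(num_iterations):
        start = time.perf_counter()
        fn(*args)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return {
        "runs": num_iterations,
        "mean_ms": statistics.mean(latencies),
        "min_ms": min(latencies),
        "max_ms": max(latencies),
    }

if __name__ == "__main__":
    # Stand-in workload; in the real example this would be the
    # MiniLM forward pass on the compiled module.
    stats = benchmark(lambda: sum(i * i for i in range(100_000)))
    print(f"{stats['runs']} runs, mean latency {stats['mean_ms']:.2f} ms")
```

Printing min/max alongside the mean helps spot first-run compilation overhead, which motivates the warmup-run flag discussed below in the thread.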

@pashu123 (Collaborator) commented May 2, 2022:

Sounds good.

@powderluv (Contributor, Author):

Can we also please add a number-of-iterations flag?

@pashu123 (Collaborator) commented May 2, 2022:

Basically the number of times to run the inference?

@powderluv (Contributor, Author):

yeah

@powderluv (Contributor, Author):

Maybe even a number-of-warmup-runs flag.
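The two flags proposed above could be wired up roughly as follows. This is a hypothetical sketch: the flag names `--num_iterations` and `--warmup_runs`, the `run_with_warmup` helper, and the toy workload are all illustrative assumptions, not part of the minilm_jit example.

```python
import argparse
import time

def run_with_warmup(fn, warmup_runs, num_iterations):
    """Discard `warmup_runs` untimed iterations (JIT compilation, cache
    warm-up), then return the average latency in seconds over
    `num_iterations` measured runs."""
    for _ in range(warmup_runs):
        fn()  # untimed warm-up iterations
    start = time.perf_counter()
    for _ in range(num_iterations):
        fn()
    return (time.perf_counter() - start) / num_iterations

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Toy benchmark harness")
    parser.add_argument("--num_iterations", type=int, default=10,
                        help="number of timed inference runs")
    parser.add_argument("--warmup_runs", type=int, default=2,
                        help="untimed runs before measurement starts")
    args = parser.parse_args()
    # Stand-in for the actual model invocation.
    avg_s = run_with_warmup(lambda: sum(range(10_000)),
                            args.warmup_runs, args.num_iterations)
    print(f"avg latency: {avg_s * 1000:.3f} ms over {args.num_iterations} runs")
```

Separating warmup from measured runs matters most for JIT-compiled paths like this example, where the first invocation includes one-time compilation cost.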
