-
Hi, TL;DR: I want to create my own private zoo. I’ve successfully saved a PyTorch model (actually just a standard BERT) with:
Then I load it (I had to define a custom Translator and specify the engine):
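For reference, loading a local TorchScript file with a hand-written Translator might look like this in Scala. This is a sketch, not the original code: the directory layout, the pass-through Translator, and the NDList-in/NDList-out types are all assumptions (a real BERT Translator would tokenize text into `input_ids` and an attention mask).

```scala
import java.nio.file.Paths

import ai.djl.ndarray.NDList
import ai.djl.repository.zoo.Criteria
import ai.djl.translate.{Batchifier, Translator, TranslatorContext}

// Hypothetical pass-through Translator: NDList in, NDList out, no pre/post-processing.
object RawTranslator extends Translator[NDList, NDList] {
  override def processInput(ctx: TranslatorContext, input: NDList): NDList = input
  override def processOutput(ctx: TranslatorContext, output: NDList): NDList = output
  override def getBatchifier: Batchifier = null // inputs are already batched
}

val criteria = Criteria.builder()
  .setTypes(classOf[NDList], classOf[NDList])
  .optModelPath(Paths.get("models"))   // directory containing traced_bert.pt (assumed layout)
  .optModelName("traced_bert")         // file name without the .pt extension
  .optEngine("PyTorch")                // engine must be named explicitly for a raw file
  .optTranslator(RawTranslator)
  .build()

val model = criteria.loadModel()
```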
Then I can create a predictor and use it and everything works as expected:
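Predictor usage could be sketched as follows; the dummy `input_ids`/`attention_mask` tensors and the `(1, 8)` shape are illustrative assumptions, as is the `model` value loaded earlier:

```scala
import ai.djl.ndarray.{NDList, NDManager}
import ai.djl.ndarray.types.{DataType, Shape}

// `model` is the ZooModel[NDList, NDList] loaded earlier (assumption).
val predictor = model.newPredictor()
val manager   = NDManager.newBaseManager()
try {
  // Fake token ids and attention mask: batch of one, sequence length 8.
  val inputIds = manager.zeros(new Shape(1, 8), DataType.INT64)
  val mask     = manager.ones(new Shape(1, 8), DataType.INT64)
  val output   = predictor.predict(new NDList(inputIds, mask))
  println(output.get(0).getShape) // hidden states, e.g. (1, 8, 768) for bert-base
} finally {
  predictor.close()
  manager.close()
}
```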
Now I want to save it in a more standard DJL way, so I don’t have to redefine a Translator or the engine each time I load it. Is there a way? Ideally, I’d like to create my own private zoo. I’ve looked at the documentation, but it’s quite scarce. Moreover, if I try to call
Thanks! Note: my code examples are in Scala.
Replies: 7 comments
-
Actually, we have quite a few Hugging Face models already imported in the DJL model zoo; you can use our built-in translators by adding a
-
Thanks a lot, I’ll look at the docs and the example you gave me. If I understand correctly, for a base BERT uncased model, I should be able to reuse the sentence-transformers models and not have to write my own custom Translator, right?
-
We imported quite a few popular sentence-transformers models into the DJL model zoo; you can use the following URL to load them:
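A sketch of loading a zoo-hosted sentence-transformers model via a `djl://` URL; the exact model id below is an example and may not match what your DJL version ships with:

```scala
import ai.djl.repository.zoo.Criteria

val criteria = Criteria.builder()
  .setTypes(classOf[String], classOf[Array[Float]]) // text in, embedding out
  // djl:// URLs resolve against DJL's built-in model zoo; this particular
  // model id is an assumption for illustration.
  .optModelUrls("djl://ai.djl.huggingface.pytorch/sentence-transformers/all-MiniLM-L6-v2")
  .build()

val model     = criteria.loadModel()   // fetches model + its built-in translator
val predictor = model.newPredictor()
val embedding: Array[Float] = predictor.predict("Hello world")
println(embedding.length)              // the model's embedding dimension
```

Because the translator ships with the zoo artifact, no custom Translator code is needed on the loading side.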
Use the following command to list the models in our model zoo:
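Alternatively, the zoo contents can be enumerated programmatically; a sketch using DJL's `ModelZoo.listModels()` (the printed format is my own choice):

```scala
import ai.djl.repository.zoo.ModelZoo
import scala.jdk.CollectionConverters._

// Lists every artifact from all model zoos registered on the classpath,
// grouped by application (e.g. nlp/text_embedding).
ModelZoo.listModels().asScala.foreach { case (application, artifacts) =>
  artifacts.asScala.foreach(artifact => println(s"$application -> $artifact"))
}
```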
We also have Python code that helps you download a Hugging Face model and generate the .zip file: https://github.com/deepjavalibrary/djl/blob/master/extensions/tokenizers/src/main/python/model_zoo_importer.py
-
My goal here is to get BERT embeddings (but I don’t want to be limited to BERT; it’s just an example here). So I used
Note: traced_bert.pt is bert-base-uncased that I traced and saved to disk. It seems to load well, but the predictor is not working:
Any idea why?
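One way to debug this kind of mismatch is to inspect what the traced model actually returns. A sketch, assuming a raw NDList-to-NDList predictor whose output is in hand:

```scala
import ai.djl.ndarray.NDList

// `output` is the NDList returned by a raw predictor (assumption).
// For a Hugging Face model traced without torchscript mode, one entry is
// expected to carry the name "last_hidden_state"; a tuple output has no names.
def dumpOutputs(output: NDList): Unit = {
  for (i <- 0 until output.size()) {
    val nd = output.get(i)
    println(s"output[$i] name=${nd.getName} shape=${nd.getShape}")
  }
}
```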
-
If you use the Hugging Face AutoModel or pipeline, they should have "last_hidden_state" as a model output.
-
Ok, thanks a lot!
-
I understood why my output did not have "last_hidden_state": it’s because I was loading my model in Python with BertModel.from_pretrained("bert-base-uncased", torchscript=True), which defines a tuple as output instead of a dict with the names. By removing torchscript=True and using strict=False in torch.jit.trace, it works as expected.