I am trying to use TorchServe (https://pytorch.org/serve/) to deploy a Transformers model as a prediction API on GCP as a custom prediction service.
However, TorchServe requires the model to be in either .bin or .pt format. Is there a way to deactivate safetensors and get the .bin PyTorch file back?
I have tried torch.save(model.state_dict(), PATH) but have been unsuccessful so far. Has anyone encountered similar issues?
I did some digging into the parameters of save_pretrained and the Trainer, and you can actually 🔧 turn off saving models in the safetensors format, as in the following example:
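A minimal sketch of both options: `save_pretrained(..., safe_serialization=False)` writes the classic `pytorch_model.bin` instead of `model.safetensors`, and for the Trainer the equivalent switch is `TrainingArguments(save_safetensors=False)`. The tiny BERT config below is only there so the example runs without downloading a checkpoint; substitute your own model.

```python
import os
from transformers import BertConfig, BertModel

# A tiny randomly initialized model, just for illustration
# (no network download needed; use your own model in practice).
config = BertConfig(
    hidden_size=32,
    num_hidden_layers=1,
    num_attention_heads=2,
    intermediate_size=64,
)
model = BertModel(config)

# safe_serialization=False disables safetensors and writes pytorch_model.bin,
# which TorchServe can consume.
model.save_pretrained("out_dir", safe_serialization=False)

print(os.listdir("out_dir"))  # contains pytorch_model.bin, not model.safetensors

# If you save via the Trainer instead, the analogous flag is:
#   TrainingArguments(output_dir="out_dir", save_safetensors=False)
```

The resulting `out_dir/pytorch_model.bin` is a regular PyTorch state dict, so it can be packaged for TorchServe with torch-model-archiver as usual.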