
Official procedure for adding new trained models #2336

Closed
saskiabosma opened this issue Oct 23, 2023 · 4 comments · Fixed by #2369

Comments

saskiabosma commented Oct 23, 2023

I use a pretrained CLIP model (the base architecture is CLIP-ViT-B-32) that wasn't fine-tuned using the sentence-transformers library. It looks like it is possible to integrate it by adding the files modules.json and config_sentence_transformers.json, copied from sentence-transformers/clip-ViT-B-32, to my model artifacts directory, and setting the correct path in modules.json.

Could you please confirm that this procedure is correct? If so, it might be worth adding it to the docs, as other users may have similar needs!

@tomaarsen
Collaborator

Hello!

That is exactly correct. You can use the following files:
modules.json:

[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.CLIPModel"
  }
]

config_sentence_transformers.json:

{
  "__version__": {
    "sentence_transformers": "2.2.2",
    "transformers": "4.33.0",
    "pytorch": "2.1.0+cu121"
  }
}

Although you should, of course, substitute the versions that you're actually using.
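Since both files are plain JSON, they can also be generated programmatically. A minimal sketch using only the standard library (the helper name is my own, and the version numbers are just the example values from above):

```python
import json
from pathlib import Path

def add_sentence_transformers_files(model_dir: str) -> None:
    """Write modules.json and config_sentence_transformers.json
    (as shown above) into an existing CLIP model directory."""
    root = Path(model_dir)
    root.mkdir(parents=True, exist_ok=True)

    # modules.json: a single CLIPModel module rooted at the directory itself
    modules = [
        {
            "idx": 0,
            "name": "0",
            "path": "",
            "type": "sentence_transformers.models.CLIPModel",
        }
    ]
    (root / "modules.json").write_text(json.dumps(modules, indent=2))

    # config_sentence_transformers.json: record the library versions in use
    config = {
        "__version__": {
            "sentence_transformers": "2.2.2",
            "transformers": "4.33.0",
            "pytorch": "2.1.0+cu121",
        }
    }
    (root / "config_sentence_transformers.json").write_text(
        json.dumps(config, indent=2)
    )

add_sentence_transformers_files("my-clip-model")
```

After this, `SentenceTransformer("my-clip-model")` should pick the directory up as a CLIP model, provided the usual CLIP weight and config files are also present.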

Additionally, you can also load non-SentenceTransformer CLIP models using:

from sentence_transformers import SentenceTransformer
from sentence_transformers.models import CLIPModel

model = SentenceTransformer(modules=[CLIPModel("patrickjohncyh/fashion-clip")])

If you call model.save on this one, the CLIPModel will be placed in a 0_CLIPModel directory and the aforementioned files will be created (alongside a README). Alternatively, you can move the contents of 0_CLIPModel into the root and update the "path" in modules.json to "", like I've done above.
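The manual move-and-edit step can be scripted. A minimal sketch, again standard-library only, where the helper name and the fabricated demo layout are my own (the demo stands in for the output of model.save):

```python
import json
import shutil
from pathlib import Path

def flatten_clip_model(model_dir: str) -> None:
    """Move the contents of 0_CLIPModel into the model root and
    point the module's "path" in modules.json at "" (the root)."""
    root = Path(model_dir)
    sub = root / "0_CLIPModel"
    for item in sub.iterdir():
        shutil.move(str(item), str(root / item.name))
    sub.rmdir()

    modules_file = root / "modules.json"
    modules = json.loads(modules_file.read_text())
    for module in modules:
        if module["type"].endswith("CLIPModel"):
            module["path"] = ""
    modules_file.write_text(json.dumps(modules, indent=2))

# Demo with a fabricated saved-model layout:
demo = Path("demo-clip")
(demo / "0_CLIPModel").mkdir(parents=True, exist_ok=True)
(demo / "0_CLIPModel" / "config.json").write_text("{}")
(demo / "modules.json").write_text(json.dumps([
    {"idx": 0, "name": "0", "path": "0_CLIPModel",
     "type": "sentence_transformers.models.CLIPModel"}
]))
flatten_clip_model("demo-clip")
```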

  • Tom Aarsen

@saskiabosma
Contributor Author

Thanks a lot! (cc @vinid)

@saskiabosma
Contributor Author

@tomaarsen, what do you think of adding a few lines to the docs (specifically hugging_face.md) to guide users who want to use private local models? Can I open an MR?

@saskiabosma saskiabosma reopened this Nov 28, 2023
@tomaarsen
Collaborator

Feel free! I'll be glad to review it.

  • Tom Aarsen
