[Community contributions] Model cards #36979
Comments
Hi. I would like to work on the model card for Gemma 2.
Hi. I would like to work on the model card for Mistral.
Hi @stevhliu, this is my first contribution, so I have a really basic question. Should I clone every repo under mistralai? I just cloned mistralai/Ministral-8B-Instruct-2410, but there are many other repos under mistralai. It's okay if I need to, but I just want to be sure.
Hey, I would like to work on the model card for Llama 3.
Hey @NahieliV, welcome! You only need to modify the mistral.md file. This is just for the model cards in the Transformers docs rather than the Hub.
Hey @stevhliu I would like to work on the model card for qwen2_5_vl. |
@stevhliu Is it not possible to automate with an LLM? |
Hi @stevhliu, I would be super grateful if you could let me work on the model card for code_llama.
Hey @stevhliu, I would like to work on the
Hey @stevhliu, I would like to contribute to
Hey @stevhliu, I would like to contribute to the vitpose model card.
Hey @stevhliu, I would like to work on the
Hey @stevhliu, I would like to contribute to
To the folks who have been raising PRs so far, I just have a doubt: did you get to install
EDIT: Got it up and running; I had to install all the libraries to make it run successfully. I initially felt doubtful about the need to install all the libraries, such as flax, but it seems they have to be installed too.
Hey @stevhliu, I would like to work on the phi3 model card |
As you're just going to edit the docs, you don't need a complete development setup. Fork the
Hi @stevhliu, Continuing with the model card updates, I would like to work on the following models next:
Please let me know if these are still available and okay for me to take on. Thanks! |
Hi @stevhliu, continuing my work, I would love to update the BERTweet model card too; I'll raise a PR ASAP.
Hey @stevhliu,
Hello @stevhliu, I've created a PR for ALIGN (the second model on the list). This is the one.
Hi @stevhliu, I initially started with bartpho but it already has a model card. I would now like to contribute the model card for gemma, which I confirmed is implemented and currently undocumented. |
What is happening? Nobody is reviewing our PRs and merging them into the main codebase. It has been days since @stevhliu was actively reviewing our PRs. Is there anyone else in the community who can do the reviewing instead?
* Update code_llama.md: aims to handle huggingface#36979 (comment), sub part of huggingface#36979
* Update docs/source/en/model_doc/code_llama.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Update docs/source/en/model_doc/code_llama.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Update docs/source/en/model_doc/code_llama.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* make changes as per code review
* chore: make the function smaller for attention mask visualizer
* chore[docs]: update code_llama.md with some more suggested changes
* Update docs/source/en/model_doc/code_llama.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* chore[docs]: update code_llama.md with indentation changes

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Hi everyone! I'd love to contribute as well. Is anyone currently working on
Hi @stevhliu,
Hi everyone and @stevhliu, I'm actually working on
Hey friends, sorry for the delay, I was on vacation but I'm back now and will be working on reviewing all your PRs over the next few days. Thanks for your patience! 🤗 @alvarotorro, bartpho doesn't look like it has been standardized yet whereas gemma has. Would you still like to work on bartpho? |
Hello, I want to take the
Hi @stevhliu, I'd like to work on the altclip model card.
Hey friends! 👋
We are currently in the process of improving the Transformers model cards by making them more directly useful for everyone. The main goal is to:

- show usage examples for `Pipeline`, `AutoModel`, and `transformers-cli`, with available optimizations included
- for large models, provide a quantization example so it's easier for everyone to run the model

Compare the before and after model cards below:
With so many models in Transformers, we could really use a hand with standardizing the existing model cards. If you're interested in making a contribution, pick a model from the list below and get started!
Steps
Each model card should follow the format below. You can copy the text exactly as it is!
For examples, take a look at #36469 or the BERT, Llama, Llama 2, Gemma 3, PaliGemma, ViT, and Whisper model cards on the `main` version of the docs.

Once you're done or if you have any questions, feel free to ping @stevhliu to review. Don't add `fix` to your PR to avoid closing this issue.

I'll also be right there working alongside you and opening PRs to convert the model cards so we can complete this faster together! 🤗
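As a rough illustration of the kind of usage section the standardized cards aim for, here is a minimal sketch. The checkpoint name, prompt, and 4-bit settings are illustrative assumptions for this sketch, not prescribed by this issue, and running it downloads model weights:

```python
# Minimal sketch of the Pipeline / AutoModel usage examples described above.
# NOTE: the checkpoint name and settings are illustrative assumptions only.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
)

checkpoint = "meta-llama/Llama-2-7b-hf"  # hypothetical example model

# 1) Pipeline: the quickest entry point.
pipe = pipeline(
    "text-generation",
    model=checkpoint,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(pipe("The capital of France is")[0]["generated_text"])

# 2) AutoModel, with a quantization example for large models:
# 4-bit loading via bitsandbytes makes the model runnable on smaller GPUs.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    quantization_config=quant_config,
    device_map="auto",
)
```

A card would typically also show the CLI entry point, for example `transformers-cli env` for environment/debug info, though the exact commands shown vary by model.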
Models