Extend AdaptionPrompt and Add Multi-Modal AdaptionPromptV2 #763
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
Hi, thanks for your continued work. Could you please run
The tests pass. I will further improve the documentation and add some more tests, then I think this PR can be reviewed ❤️🔥
I'm wondering: would it be a lot of work for you to split the two changes you suggested into separate PRs? That would make it easier for us to review the changes.
Not at all, I will do it this weekend after my job on auto-gptq is done.
That would be fantastic, big thanks.
Thank you so much @PanQiWei for the impressive work on extending AdaptionPrompt to more models and adding the Multi-Modal AdaptionPromptV2 🔥🚀✨.
It would be great if we could have a Multi-Modal AdaptionPromptV2 example script or notebook in the examples folder so that users can leverage it out of the box. Apart from that, good to merge.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
What does this PR do?
This PR:

1. Extends `AdaptionPrompt` to support `gptj`, `gpt-neox`, and `moss`, and can be further extended to more models in the future;
2. Adds an `AdaptionPromptV2` implementation with a slight modification to the one originally proposed in Support More Models for Adaption Prompt and Add Adaption Prompt V2 with multi-modal ability support #398, so that people can train on multi-modal datasets (such as COCO; see the reference project). Note that the implementation of the multi-modal functionality may differ from the official code of llama_adapter_v2.

@pacman100 @younesbelkada, if this PR looks good to you, I will close #398 and keep working on this one to make it mergeable 🙏
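For context on what both AdaptionPrompt variants do under the hood: the LLaMA-Adapter technique prepends a small number of learnable prompt tokens to the top attention layers and blends their attention output into the frozen model's output through a zero-initialized gate, so the pretrained model's behavior is untouched at the start of training. Below is a minimal NumPy sketch of that gating idea only; the function and variable names (`gated_blend`, `base_attn_out`, `adapter_attn_out`) are illustrative and not taken from this PR or the PEFT codebase.

```python
import numpy as np

def gated_blend(base_attn_out, adapter_attn_out, gate):
    """Blend adapter attention into frozen attention via a learnable gate.

    Because tanh(0) == 0, a zero-initialized gate means the adapter
    contributes nothing at the start of training, preserving the frozen
    model's original outputs; the gate opens gradually as it is trained.
    """
    return base_attn_out + np.tanh(gate) * adapter_attn_out

# Stand-ins for the frozen attention output and the adapter-prompt
# attention output at one layer (batch of 2, hidden size 4).
base = np.ones((2, 4))
adapter = np.full((2, 4), 5.0)

out_closed = gated_blend(base, adapter, gate=0.0)  # gate closed: equals base
out_open = gated_blend(base, adapter, gate=1.0)    # gate open: adapter contributes
```

This is why AdaptionPrompt can be bolted onto already-trained models such as `gptj` or `gpt-neox` without degrading them initially: only the prompt tokens and the gates are trained, and the gates start closed.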