
multimodal adapter #449

Closed
TheShy-Dream opened this issue May 15, 2023 · 5 comments

Comments

@TheShy-Dream

I want to customize an adapter myself so that it can incorporate some multimodal information. Can I use this library for that? If so, could you provide an example?

@PanQiWei
Contributor

Hi, I'm also interested in fine-tuning multi-modal LLMs, which is why I created this PR. I'm working with the algorithm proposed in llama-adapter-v2; what approach are you using?

@TheShy-Dream
Author

Hi, in fact I also followed the idea of llama-adapter-v2. I'm not sure whether peft offers a quick way to implement it.

@PanQiWei
Contributor

Sure. peft already has an adaption_prompt method, which is in fact llama-adapter, and I've extended it to support more models such as gptj and gpt-neox. I've also implemented adaption_prompt_v2, which is llama-adapter-v2, extended to support other models and to integrate multi-modal information during both training and inference.
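The core idea behind the adaption_prompt method discussed above can be sketched roughly: llama-adapter lets queries attend to a small set of learnable prompt vectors in addition to the regular keys/values, and blends that prompt contribution in through a zero-initialized gate, so at the start of training the model behaves exactly like the frozen base model. A minimal single-head NumPy sketch (all names, shapes, and the simplified single-softmax blending here are illustrative, not peft's actual API or implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Plain scaled dot-product attention of the frozen model.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def adapted_attention(q, k, v, prompt_k, prompt_v, gate):
    # llama-adapter-style attention: the learnable prompt vectors are
    # attended separately, and their output is blended in via a tanh
    # gate that is initialized to zero.
    base = attention(q, k, v)
    prompt_scores = q @ prompt_k.T / np.sqrt(q.shape[-1])
    prompt_out = softmax(prompt_scores) @ prompt_v
    return base + np.tanh(gate) * prompt_out

rng = np.random.default_rng(0)
d, seq, plen = 8, 4, 2                       # hidden dim, seq len, prompt len
q, k, v = (rng.normal(size=(seq, d)) for _ in range(3))
prompt_k = rng.normal(size=(plen, d))        # learnable adaption prompt (keys)
prompt_v = rng.normal(size=(plen, d))        # learnable adaption prompt (values)

# With the gate at zero, the adapter is a no-op: output equals the
# frozen model's output, which is what makes training stable.
out0 = adapted_attention(q, k, v, prompt_k, prompt_v, gate=0.0)
assert np.allclose(out0, attention(q, k, v))
```

For multimodal use, the same gating trick lets image (or other modality) features be injected through the prompt vectors without disturbing the pretrained language model at initialization.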

@TheShy-Dream
Author

Hi, thank you for your reply. Could you provide an example? I couldn't find any relevant examples in the examples directory or in README.md.

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
