
instruction tuning on other datasets #16

Closed

ChocoWu opened this issue Aug 18, 2023 · 1 comment

ChocoWu commented Aug 18, 2023

Thanks for providing the code.
I have a question about the training process. While the LLM is learning to generate images during training, it simply reproduces the same text as the input. Nevertheless, during inference the model is able to produce sensible responses to the input. I'm therefore curious whether any other instruction datasets were used to fine-tune the model's ability to follow instructions. If such datasets were indeed employed, could the instruction fine-tuning resources be made publicly accessible?

kohjingyu (Owner) commented

We do not use any instruction datasets; the model is only finetuned on the CC3M image + caption data. I do think that GILL would benefit greatly from finetuning on instructions, though!
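To make the distinction concrete, here is a minimal sketch of why caption-only finetuning (as with CC3M) makes the model echo its input during training, while instruction data decouples input from target. All function names and templates below are hypothetical illustrations, not taken from the GILL codebase:

```python
def caption_example(caption: str) -> dict:
    """Caption-only finetuning: the target repeats the input caption,
    followed by learned image-generation tokens (here shown as [IMG*])."""
    return {
        "input": caption,
        "target": caption + " [IMG0] [IMG1] [IMG2] [IMG3]",
    }


def instruction_example(instruction: str, response: str) -> dict:
    """Instruction tuning: input and target differ, so the model learns
    to follow the instruction rather than reproduce its input."""
    return {
        "input": f"### Instruction:\n{instruction}\n### Response:\n",
        "target": response,
    }


cap = caption_example("a dog running on the beach")
ins = instruction_example("Describe a dog on a beach.",
                          "A dog runs along the shoreline, kicking up sand.")
print(cap["target"].startswith(cap["input"]))  # True: training echoes the input
print(ins["input"] == ins["target"])           # False: response differs from prompt
```

This mirrors the behavior described in the question: with caption-only supervision, reproducing the input text is exactly what the training objective rewards.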
