[ChatLLaMA] Add flan-t5-xl support for local and API model to generate synthetic reward_training_data scores #344

Open
wants to merge 1 commit into base: main

Conversation

@Linus-J commented on May 29, 2023

Added an optional argument -l / --local_directory to allow the use of a locally stored google/flan-t5-xl model from Hugging Face.

Added flan-t5-xl as a valid input option for the -m / --model argument. When used as just -m flan-t5-xl, the model is accessed via the Hugging Face API with an API key, similar to the davinci model. When combined with -l /path_to_locally_stored_flan-t5-xl/, the locally stored model is used to update the reward scores instead.
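
For illustration, here is a minimal sketch of what the two modes roughly correspond to, using the standard transformers and Hugging Face Inference API interfaces. The function names, the prompt argument, and the HF_API_KEY environment variable are placeholders for this example and are not identifiers from the ChatLLaMA code base:

```python
# Illustrative sketch only: the two ways a flan-t5-xl scoring call can be made.
# Names like score_with_local_model and HF_API_KEY are placeholders.
import os

import requests
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer


def score_with_local_model(local_directory: str, prompt: str) -> str:
    # -m flan-t5-xl -l /path_to_locally_stored_flan-t5-xl/: load the checkpoint
    # from the given directory instead of downloading it.
    tokenizer = AutoTokenizer.from_pretrained(local_directory)
    model = AutoModelForSeq2SeqLM.from_pretrained(local_directory)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=8)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


def score_with_hf_api(prompt: str) -> str:
    # -m flan-t5-xl without -l: query the hosted model through the Hugging Face
    # Inference API, authenticated with an API key.
    response = requests.post(
        "https://api-inference.huggingface.co/models/google/flan-t5-xl",
        headers={"Authorization": f"Bearer {os.environ['HF_API_KEY']}"},
        json={"inputs": prompt},
    )
    response.raise_for_status()
    return response.json()[0]["generated_text"]
```

In both modes the model's output is used to update the scores in reward_training_data.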

This commit resolves issues #218, #221, and #241.

@Linus-J changed the title from "Add flan-t5-xl support for local and API model" to "[ChatLLaMA] Add flan-t5-xl support for local and API model to generate synthetic reward_training_data scores" on May 29, 2023