Examples for Llama model architecture #2
Comments
Yes! With my candle-lora-macro library, all you need to do is derive AutoLoraConvert and add replace_layer_fields to all model structs of a Llama model. They will replace the concrete types and automate the conversion process. Then, call the conversion method on each model struct.

I plan on adding an example shortly. If you have any questions, let me know!
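The pattern Eric describes can be sketched roughly as below. The derive and attribute names (AutoLoraConvert, replace_layer_fields) come from his comment; the struct shape, layer types, and the commented-out conversion call are illustrative assumptions, not the crate's confirmed API.

```rust
use candle_lora_macro::{replace_layer_fields, AutoLoraConvert};
// Layer types such as `Linear` come from candle-nn / candle-lora.

// `replace_layer_fields` replaces the concrete layer types so they can
// hold either the base layer or its LoRA counterpart, and
// `AutoLoraConvert` derives the conversion machinery.
#[replace_layer_fields]
#[derive(AutoLoraConvert)]
struct LlamaMlp {
    gate_proj: Linear,
    up_proj: Linear,
    down_proj: Linear,
}

// Hypothetical call site -- the method name and argument list here are
// assumptions for illustration only; check the candle-lora-macro README
// for the actual conversion method:
// mlp.get_lora_model(lora_config, &mut var_builder, Some(linear_config), None, None, None);
```

The same annotations would be repeated on each model struct (attention, MLP, and so on), after which one conversion call per struct swaps the layers for LoRA-wrapped ones.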
This is amazing. Is a training example possible as well?
Yes, once you convert to a LoRA model you can fine-tune it. After fine-tuning, you can merge the weights to speed up inference.
I am closing this so that it does not become a stale issue, but feel free to reopen. I will be adding a LoRA example for Llama soon!
Sounds good, thank you!
Hello Eric, this looks like great work! Thank you!
Can you please add examples for both training and inference for the Llama model using candle-lora? Is that supported through this work?