For now, most open LLM models have a context length of 2048 tokens. I need to expand it to 4096 or more.
Is it possible to expand the context length by fine-tuning with texts longer than 2048 tokens using LoRA?
Or does it require re-training from scratch?
I searched for this but was unable to find any information.
LoRA freezes the original model weights and adds trainable layers, so I feel it might be difficult.
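For context, the "frozen weights plus trainable layers" structure mentioned above can be sketched as follows. This is a minimal illustration with hypothetical dimensions, not code from any particular library: the pretrained weight `W` stays fixed, and only the low-rank factors `A` and `B` would receive gradients during fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 16, 4   # illustrative sizes, not from a real model

W = rng.standard_normal((d_in, d_out))        # frozen pretrained weight
A = rng.standard_normal((d_in, rank)) * 0.01  # trainable low-rank factor
B = np.zeros((rank, d_out))                   # trainable, zero-initialized
alpha = 8.0                                   # LoRA scaling hyperparameter

def lora_forward(x):
    # Frozen path plus low-rank update; W itself is never modified.
    return x @ W + (x @ A @ B) * (alpha / rank)

x = rng.standard_normal((2, d_in))
y = lora_forward(x)
print(y.shape)  # (2, 16)
```

Because `B` starts at zero, the adapted model initially matches the frozen model exactly; training then learns only the small update. Note that this adapts weights, not the positional encoding, which is why LoRA alone does not obviously extend the context window.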
Thank you for the advice.
Actually, I did not end up trying this token expansion,
because Llama 2 supports longer contexts, and a method has appeared that further extends the context length by changing the RoPE frequency.
So I'm closing this issue.
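The RoPE-frequency idea mentioned above can be sketched briefly. This is a hedged illustration (sometimes described as "NTK-aware" base scaling); the head dimension of 128 and the factor-of-4 extension are assumptions for the example, not values from this issue:

```python
import numpy as np

head_dim = 128     # assumed attention head dimension
base = 10000.0     # standard RoPE base used by LLaMA-style models
scale = 4.0        # e.g. extend a 2048-token window toward 8192 tokens

# Raising the base stretches the rotation wavelengths, so positions beyond
# the original training range map back into rotation angles the model has
# already seen during pretraining.
new_base = base * scale ** (head_dim / (head_dim - 2))

def inv_freq(b):
    # Per-dimension inverse frequencies used by rotary embeddings.
    return 1.0 / (b ** (np.arange(0, head_dim, 2) / head_dim))

orig, scaled = inv_freq(base), inv_freq(new_base)
print(scaled[-1] / orig[-1])  # lowest frequency slows by exactly 1/scale
```

The appeal over fine-tuning from scratch is that this only changes how positions are encoded at inference time; combined with a light fine-tune on longer texts, it avoids retraining the model.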