Maybe a link on how to set it up? #6
Comments
Thanks for the suggestion! Since we train this with JAX using our EasyLM framework, we have detailed documentation for using it with EasyLM: https://github.com/young-geng/EasyLM/blob/main/docs/llama.md. As for using it with PyTorch, unfortunately we weren't able to experiment much, as our compute resources are on TPU.
OK, thanks. If you would like to draw more attention to it, you might reconsider this: a lot more people would be able to test it, since not many people I know have TPUs these days.
You can also use the JAX version on GPU. That's how we test it locally.
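For anyone trying the GPU route: a minimal sketch, assuming JAX was installed with CUDA support (e.g. the `jax[cuda]` extra), to confirm the GPU is actually visible before pointing EasyLM at it. This only checks the JAX install; it is not part of EasyLM itself.

```python
# Sanity check that a CUDA-enabled JAX install can see the GPU.
import jax
import jax.numpy as jnp

# Should list a CUDA/GPU device, not just CPU.
print(jax.devices())

# Run a small jit-compiled matmul on the default device to confirm it works.
@jax.jit
def matmul(a, b):
    return a @ b

a = jnp.ones((1024, 1024))
b = jnp.ones((1024, 1024))
print(matmul(a, b).block_until_ready().shape)  # (1024, 1024)
```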
Might be nice to try this, but how does one set it up in a Python Jupyter environment?
(A working step-by-step guide would be nice, or an example notebook.)
I'm no newbie, but I've never had the time to spend a few days on LLMs.
If a 3080 Ti can run it, it'd be fun to give it a try.
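As a rough starting point for a notebook, here is a minimal sketch using Hugging Face `transformers`, assuming the weights are published in transformers-compatible format. The model ID below is an assumption for illustration; check the repository README for the actual checkpoint names. Note that a 7B model in fp16 is tight on a 12 GB 3080 Ti; a smaller checkpoint or 8-bit loading may be needed.

```python
# A minimal Jupyter sketch, assuming transformers-format weights are available.
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_id = "openlm-research/open_llama_7b"  # assumed/hypothetical checkpoint name

tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision; still ~13 GB for a 7B model
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Q: What is the largest animal?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```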