Roadmap (tentative) #12
Comments
[moved inference of prompt-tuned models and priorities from summer to current tasks]
Hey, how hard would it be to extend Petals to support training these models in addition to fine-tuning?
Hi @bionicles, Petals is a system designed specifically for inference of large models; however, it shares a lot of the underlying architecture with SWARM Parallelism (see https://github.com/yandex-research/swarm for a WIP implementation, which I hope to update in the coming weeks). The short answer is "definitely possible", but please keep in mind that pretraining is out of scope for Petals. Hence, it might be more useful to continue the discussion elsewhere (e.g., in the SWARM repo or on our Discord server) if you have specific questions or suggestions.
Hi @bionicles, a small addition to @mryab's response: while Petals does not support training from scratch, both Petals and SWARM are based on hivemind, our library for training over the Internet, which can be used for pre-training. Please see Q3 of the FAQ's "General" section for details.
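For intuition, the core idea behind hivemind-style training over the Internet is that independent peers each compute updates locally and periodically agree on an average of their parameters. The snippet below is a toy simulation of that averaging step in plain Python; it is not the hivemind API, and the function name is purely illustrative.

```python
# Toy sketch of decentralized parameter averaging, the building block
# behind collaborative training libraries like hivemind. NOT real hivemind
# code -- peers are just lists of floats, and the "all-reduce" is a mean.

def average_step(peer_params):
    """Return each peer's parameters replaced by the group mean.

    peer_params: list of per-peer parameter vectors (lists of floats).
    In a real system this happens via a fault-tolerant all-reduce over
    the network; here we compute it directly.
    """
    n = len(peer_params)
    dim = len(peer_params[0])
    mean = [sum(p[i] for p in peer_params) / n for i in range(dim)]
    # Every peer ends up holding the same averaged vector.
    return [list(mean) for _ in peer_params]

# Three peers with divergent local parameters converge to one average:
peers = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
peers = average_step(peers)
# every peer now holds [3.0, 4.0]
```

In practice, systems like hivemind average gradients or optimizer state asynchronously and tolerate peers joining or dropping out mid-run, which this sketch deliberately omits.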
Current tasks:
- End of December: cover more use cases
- End of July–August: make it reliable, test with early adopters
- End of June: build a proof-of-concept
Important, but not urgent: