Any way of porting this to Colab? #12
Comments
This is a really interesting idea! Do you know if Google Colab has any way to listen on a network port that can be reached from the outside world?
I remember a storyteller AI, KoboldAI (an absolute banger). It opens an HTTP server behind a Cloudflare tunnel; you connect to that Cloudflare URL from your browser and interact with it: send a prompt request, get a response back from the server. Maybe something like that could be done in this case?
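The request/response loop described above can be sketched end to end with the Python standard library. This is a toy stand-in, not FauxPilot's actual API: the `complete` function is a hypothetical placeholder for the model, and the JSON shape (`prompt` in, `completion` out) is assumed for illustration. A tunnel (Cloudflare, ngrok, etc.) would simply expose this local port to the outside world.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Hypothetical stand-in for the model: echoes a canned completion.
def complete(prompt: str) -> str:
    return prompt + " # completed"

class CompletionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body of the prompt request.
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        reply = json.dumps({"completion": complete(body["prompt"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep the sketch quiet
        pass

# Port 0 asks the OS for any free port; a tunnel would point at this port.
server = HTTPServer(("127.0.0.1", 0), CompletionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: send a prompt, get a completion back.
url = f"http://127.0.0.1:{server.server_port}/"
req = Request(url, data=json.dumps({"prompt": "def add(a, b):"}).encode(),
              headers={"Content-Type": "application/json"})
with urlopen(req) as resp:
    result = json.loads(resp.read())["completion"]
print(result)

server.shutdown()
```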
I also just found this, which looks like it might be a good fit since the FauxPilot server is already using Flask: https://www.geeksforgeeks.org/how-to-run-flask-app-on-google-colab/ I will look into putting together a notebook! :) Thanks for the great suggestion!
I think Colab has started to ban tunneling (see the Colab FAQ). I used to use a similar tool called colabcode, which let you fire up VSCode in Colab on a remote server, but with recent changes in their policy they don't allow this anymore. You can read more about it here: abhishekkrthakur/colabcode#109
Well, isn't this project a server for code generation? It doesn't launch VSCode, so colabcode might have been banned because of that third rule instead. As for tunneling, the Colab AI application I mentioned above (KoboldAI) is still up and running, even though it uses tunneling. Here's the link to their Colab.
How about serverless? |
Hmm would serverless work when the models are really big though? Loading the 16B model from disk -> GPU takes almost a minute, so I wouldn't want to have to do that on every completion request... |
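The cold-start concern above can be made concrete with a toy sketch: keep the loaded model in a process-level cache so the expensive disk-to-GPU load happens once, not per request. The names here (`get_model`, `handle_request`) and the lambda "model" are hypothetical placeholders, and `LOAD_SECONDS` stands in for the roughly one-minute load. A serverless runtime that tears the process down between invocations would pay that load cost on every completion request, which is exactly the problem.

```python
import time

LOAD_SECONDS = 0.0  # stands in for the ~1 minute disk -> GPU load; 0 keeps the sketch fast

_model = None       # process-level cache: survives across requests, not across cold starts
load_count = 0

def get_model():
    """Load the model once per process and reuse it for every request."""
    global _model, load_count
    if _model is None:
        load_count += 1
        time.sleep(LOAD_SECONDS)                 # the expensive load happens here
        _model = lambda prompt: prompt + "..."   # hypothetical model object
    return _model

def handle_request(prompt: str) -> str:
    return get_model()(prompt)

# Many requests, but only one load while the process stays warm.
for _ in range(100):
    handle_request("def add(a, b):")
print(load_count)
```

With a long-lived server the load happens once; under per-request cold starts, `load_count` would equal the request count, and each request would stall for the full load time.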
Not quite on topic but: if the model layers were broken up people could form small networks to share compute. |
I think the network latency involved would make that pretty slow? |
emm, any progress on this issue? |
Most of us don't have GPUs powerful enough to even run models with 6 billion parameters. Can we port this to Colab in any way so it would be more accessible?