As for what's being blocked: Google detects any instances of /PygmalionAI/ inside the notebook's cells and automatically warns that your account may be banned if you decide to run the code, so we've removed the easy Colab notebook for now to prevent anyone from being unfairly banned. There are obviously workarounds for this, but we'd rather respect Google's wishes. It's their service after all.
Your guess is as good as mine. The ban came without any explanation from Google's end, and when we reached out for clarification the response we got was basically "sorry, we don't offer any sort of support for Colab". The model handles SFW content just fine, and there are plenty of other NSFW Colabs out there (e.g. for Stable Diffusion), so I think it's pretty unlikely that this is the reason.
It should. ggml allows running quantized LMs on CPU at reasonable speeds (see llama.cpp); it'd just be a matter of implementing the relevant model architectures (GPT-J for 6B, NeoX for 1.3B/2.7B, OPT for 350M as of now) and building some sort of UI to interact with the model. This is outside the scope of the project though, so unfortunately I don't have any pointers for that.
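For a rough sense of why quantization makes CPU inference feasible at all, here's a simplified sketch of block-wise 4-bit quantization in the spirit of ggml's q4_0 scheme. To be clear, this is illustrative only: the function names, block size, and rounding here are my own simplifications, not ggml's actual on-disk format.

```python
# Simplified sketch of block-wise 4-bit quantization, loosely inspired by
# ggml's q4_0 scheme. NOT the actual ggml format -- just the core idea:
# store one float scale per block plus 4-bit integers per weight,
# cutting memory (and bandwidth) to roughly a quarter of fp16.

def quantize_block(xs, bits=4):
    """Quantize one block of floats to signed `bits`-bit ints plus a scale."""
    qmax = 2 ** (bits - 1) - 1          # 7 for 4-bit
    # Per-block scale maps the largest magnitude onto the signed range.
    scale = max(abs(x) for x in xs) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(x / scale))) for x in xs]
    return scale, q

def dequantize_block(scale, q):
    """Recover approximate floats from the quantized block."""
    return [scale * v for v in q]

# Toy "weights" standing in for one block of a model tensor.
weights = [0.12, -0.5, 0.33, 0.9, -0.07, 0.0, 0.44, -0.81]
scale, q = quantize_block(weights)
restored = dequantize_block(scale, q)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

In real ggml the blocks are packed two 4-bit values per byte and the matmul kernels operate on the quantized data directly, which is what makes CPU inference fast enough to be usable.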
2 questions (brilliant work btw!)