🐣 Please follow me for new updates https://twitter.com/camenduru
🔥 Please join our discord server https://discord.gg/k5BwmmvJJU
| Model | Info - Model Page |
|---|---|
| vicuna-13b-GPTQ-4bit-128g | https://vicuna.lmsys.org |
| vicuna-13B-1.1-GPTQ-4bit-128g | https://vicuna.lmsys.org |
| stable-vicuna-13B-GPTQ-4bit-128g | https://huggingface.co/CarperAI/stable-vicuna-13b-delta |
| gpt4-x-alpaca-13b-native-4bit-128g | https://huggingface.co/chavinlo/gpt4-x-alpaca |
| pyg-7b-GPTQ-4bit-128g | https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b |
| koala-13B-GPTQ-4bit-128g | https://bair.berkeley.edu/blog/2023/04/03/koala |
| oasst-llama13b-GPTQ-4bit-128g | https://open-assistant.io |
| wizard-lm-uncensored-7b-GPTQ-4bit-128g | https://github.com/nlpxucan/WizardLM |
| mpt-storywriter-7b-GPTQ-4bit-128g | https://www.mosaicml.com |
| wizard-lm-uncensored-13b-GPTQ-4bit-128g | https://github.com/nlpxucan/WizardLM |
| pyg-13b-GPTQ-4bit-128g | https://huggingface.co/PygmalionAI/pygmalion-13b |
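Every checkpoint above follows the `4bit-128g` naming: weights stored at 4-bit precision with one scale per group of 128 weights. The sketch below is only an illustration of what that storage format means, using plain round-to-nearest in NumPy; real GPTQ additionally uses second-order (Hessian-based) error compensation when choosing the quantized values, which this sketch does not implement.

```python
import numpy as np

def quantize_4bit_groupwise(w, group_size=128):
    """Round-to-nearest 4-bit quantization with one scale per group.

    Signed 4-bit integers cover [-8, 7], so each group's scale maps its
    largest-magnitude weight onto that range.
    """
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate fp32 weights: int4 value times its group scale.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024 * 128).astype(np.float32)  # toy weight tensor
q, scale = quantize_4bit_groupwise(w)
w_hat = dequantize(q, scale).reshape(w.shape)

# Storage cost per group: 128 * 4 bits of weights + one scale,
# versus 128 * 32 bits for fp32 -- roughly an 8x reduction.
max_err = float(np.abs(w - w_hat).max())
```

Round-to-nearest guarantees the reconstruction error per weight is at most half a quantization step (`scale / 2`); GPTQ's error compensation lowers the *model-level* loss further, which is why these 13B checkpoints remain usable at 4 bits.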
According to the Facebook Research LLaMA license (a non-commercial bespoke license), we may not be allowed to use these models with a paid Colab Pro account. However, Yann LeCun said "GPL v3" (https://twitter.com/ylecun/status/1629189925089296386), so I am a little confused. Is it possible to use this with a non-free Colab Pro account?
https://www.youtube.com/watch?v=kgA7eKU1XuA
https://github.com/oobabooga/text-generation-webui (Thanks to @oobabooga ❤)
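The notebooks load these checkpoints through text-generation-webui. A minimal launch sketch for a fresh runtime, using the vicuna checkpoint as the example; the `--wbits`/`--groupsize` flags correspond to the 2023-era GPTQ-for-LLaMa loader, and the exact flags and downloaded folder name may differ between webui versions:

```shell
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
# fetch the pre-quantized 4-bit checkpoint into models/
python download-model.py anon8231489123/vicuna-13b-GPTQ-4bit-128g
# tell the loader the weights are 4-bit with group size 128
python server.py --model anon8231489123_vicuna-13b-GPTQ-4bit-128g \
  --wbits 4 --groupsize 128 --share
```

`--share` exposes a public Gradio URL, which is what makes the web UI reachable from a Colab cell.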
| Model | License |
|---|---|
| vicuna-13b-GPTQ-4bit-128g | From https://vicuna.lmsys.org: "The online demo is a research preview intended for non-commercial use only, subject to the model License of LLaMA, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation. The code is released under the Apache License 2.0." |
| gpt4-x-alpaca-13b-native-4bit-128g | https://huggingface.co/chavinlo/alpaca-native -> https://huggingface.co/chavinlo/alpaca-13b -> https://huggingface.co/chavinlo/gpt4-x-alpaca |
Thanks to facebookresearch ❤ for https://github.com/facebookresearch/llama
Thanks to lmsys ❤ for https://huggingface.co/lmsys/vicuna-13b-delta-v0
Thanks to anon8231489123 ❤ for https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/lmsys/vicuna-13b-delta-v0)
Thanks to tatsu-lab ❤ for https://github.com/tatsu-lab/stanford_alpaca
Thanks to chavinlo ❤ for https://huggingface.co/chavinlo/gpt4-x-alpaca
Thanks to qwopqwop200 ❤ for https://github.com/qwopqwop200/GPTQ-for-LLaMa
Thanks to tsumeone ❤ for https://huggingface.co/tsumeone/gpt4-x-alpaca-13b-native-4bit-128g-cuda (GPTQ 4bit quantization of: https://huggingface.co/chavinlo/gpt4-x-alpaca)
Thanks to transformers ❤ for https://github.com/huggingface/transformers
Thanks to gradio-app ❤ for https://github.com/gradio-app/gradio
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ
Thanks to Neko-Institute-of-Science ❤ for https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b
Thanks to gozfarb ❤ for https://huggingface.co/gozfarb/pygmalion-7b-4bit-128g-cuda (GPTQ 4bit quantization of: https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b)
Thanks to young-geng ❤ for https://huggingface.co/young-geng/koala
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/young-geng/koala)
Thanks to dvruette ❤ for https://huggingface.co/dvruette/oasst-llama-13b-2-epochs
Thanks to gozfarb ❤ for https://huggingface.co/gozfarb/oasst-llama13b-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/dvruette/oasst-llama-13b-2-epochs)
Thanks to ehartford ❤ for https://huggingface.co/ehartford/WizardLM-7B-Uncensored
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/ehartford/WizardLM-7B-Uncensored)
Thanks to mosaicml ❤ for https://huggingface.co/mosaicml/mpt-7b-storywriter
Thanks to OccamRazor ❤ for https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/mosaicml/mpt-7b-storywriter)
Thanks to ehartford ❤ for https://huggingface.co/ehartford/WizardLM-13B-Uncensored
Thanks to ausboss ❤ for https://huggingface.co/ausboss/WizardLM-13B-Uncensored-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/ehartford/WizardLM-13B-Uncensored)
Thanks to PygmalionAI ❤ for https://huggingface.co/PygmalionAI/pygmalion-13b
Thanks to notstoic ❤ for https://huggingface.co/notstoic/pygmalion-13b-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/PygmalionAI/pygmalion-13b)