This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
This is super cool! #11
Comments
@turnipsforme which model did you download? There's a bug I found recently; will patch soon to enable models with smaller context windows like Dolly.
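For context on the context-window bug mentioned above: if the app always feeds a prompt sized for a large window into a model with a smaller one (like Dolly), generation can fail or degrade. A minimal sketch of the usual workaround, truncating to the most recent tokens; the function name, the 2048/512 limits, and the reserve size are all illustrative assumptions, not this project's actual code:

```python
# Hypothetical sketch: clip a token sequence to fit a model's context window,
# keeping the most recent tokens and reserving room for the generated output.
# The specific numbers below are illustrative, not local.ai's real values.

def truncate_to_context(tokens, context_size, reserve_for_output=128):
    """Keep only the most recent tokens that fit, leaving room for generation."""
    budget = context_size - reserve_for_output
    if budget <= 0:
        raise ValueError("context window too small for requested output length")
    return tokens[-budget:]

prompt = list(range(1500))                 # pretend prompt of 1500 tokens
ok = truncate_to_context(prompt, 2048)     # fits a larger window untouched
small = truncate_to_context(prompt, 512)   # must be clipped for a small window
print(len(ok), len(small))  # prints: 1500 384
```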
I couldn't get any model to work other than MPT 7B, and that one primarily returned gibberish (speaking in symbols, Chinese, or just random sentences). This was actually pretty funny; at one point it said "Hello, ���� come to my world" in Chinese in response to me saying hello (in English haha)
@turnipsforme how much RAM do you have? Can you try the Wizard model? Those should work very well, at least on my machine, if you can load MPT :-?...
@turnipsforme The MPT models are very bound to their prompt formats, especially if you use the Instruct or Chat versions. Furthermore, this project doesn't use the huggingface tokenizers yet, which results in some weird decoding errors.
Ooh right, I need to add the remote vocab soon; need to embed it into the model look-up somehow :d.....
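On the prompt-format point above: instruction-tuned MPT variants expect their input wrapped in the Alpaca/Dolly-style template they were fine-tuned on, and raw text outside that template tends to produce off-topic output. A hedged sketch of that wrapper; the exact template wording is an assumption based on the MPT-7B-Instruct model card, not code from this project:

```python
# Assumed Alpaca/Dolly-style template used by MPT-7B-Instruct; exact wording
# is taken from the public model card and may differ from what local.ai needs.

INSTRUCT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_instruct_prompt(instruction: str) -> str:
    """Wrap a plain user message in the instruct template before tokenizing."""
    return INSTRUCT_TEMPLATE.format(instruction=instruction)

print(format_instruct_prompt("Say hello."))
```

The model then completes the text after `### Response:`, which is why sending a bare "hello" with no template can yield gibberish.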
@turnipsforme Just shipped an update fix which should enable more models such as Dolly and GPT-J, please try them out. Also I'd test out the Wizard model if your machine can run it :)
Yess, both GPT-J and Dolly work! I only have 8 gigs of RAM, maybe that's why? Wizard works too (it's the closest so far to ChatGPT in terms of smarts; everyone else has been a little.. off haha). Regardless, Dolly responds suuuper quickly and the app feels much better now! Looking forward to seeing this continue to grow!
I just wanted to also give some thoughts on this in terms of uses. I would personally primarily use this to help with text transformation (shorter/longer/remove elements/rewrite as a checklist…) since sensitive work documents can't go into ChatGPT. If the app were better suited for this kind of work it would be amazing (I think a lot of people's work is sensitive and has to stay local)! Either way, best of luck!! :)
@turnipsforme yeah, 8GB of RAM is too little to run a 7B model, and even a 3B model IMO. You need at least 12GB (16GB preferred) for a 3B model to run at an OK speed.
That's one of the core purposes of local.ai IMO!
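A rough back-of-envelope for the RAM guidance above: the model weights must fit in memory alongside the OS and the app itself. A minimal sketch; the 8 bits-per-weight default and the 1.2 overhead factor are assumptions for illustration (quantization level and runtime overhead vary), not measurements from local.ai:

```python
# Back-of-envelope RAM estimate for holding a quantized model in memory.
# bits_per_weight and the overhead multiplier are assumed values; actual
# requirements depend on the quantization format and the runtime.

def estimate_ram_gb(n_params_billion, bits_per_weight=8, overhead=1.2):
    """Approximate GB needed: params * bits/8, plus a fudge factor."""
    bytes_needed = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_needed * overhead / 1e9

for size in (3, 7, 13):
    print(f"{size}B model: ~{estimate_ram_gb(size):.1f} GB")
```

Under these assumptions a 7B model already overflows an 8GB machine once the OS takes its share, which matches the experience reported in this thread.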
Hi! LOOOVE the idea behind this (for privacy reasons, but also the idea of running a model locally is just super cool). I can't get past downloading a model and sending a message; not getting anything back. On an M1 MacBook Air. Thanks!