
[GPT4ALL] Leverage already-downloaded gpt4all models from official GPT4ALL desktop client #84

Open
zudsniper opened this issue Jun 21, 2023 · 4 comments


@zudsniper

Hi Kurtosis team C:

Thank you for adding gpt4all model usage support!
With it comes the problem of model size. An elegant solution/enhancement, which I believe is possible, would be to reuse any models a user has already downloaded with the standard GPT4ALL client application. With the help of LocalAI this is doable as far as I can tell, and it would save a lot of time as well as integrate seamlessly with gpt4all.
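
To illustrate what "reusing already-downloaded models" could look like, here is a minimal Python sketch for locating them. The default GPT4ALL download directories below are assumptions (the desktop client also lets users change the location), so treat them as illustrative:

import platform
from pathlib import Path

def gpt4all_model_dir() -> Path:
    """Best-guess default download directory of the GPT4ALL desktop client.

    NOTE: these paths are assumptions; the client lets users change the
    download location, so a real integration should make this configurable.
    """
    system = platform.system()
    if system == "Windows":
        return Path.home() / "AppData" / "Local" / "nomic.ai" / "GPT4All"
    if system == "Darwin":  # macOS
        return Path.home() / "Library" / "Application Support" / "nomic.ai" / "GPT4All"
    return Path.home() / ".local" / "share" / "nomic.ai" / "GPT4All"  # Linux and others

def list_downloaded_models() -> list[Path]:
    """Return any .bin model files the desktop client has already downloaded."""
    model_dir = gpt4all_model_dir()
    return sorted(model_dir.glob("*.bin")) if model_dir.is_dir() else []

if __name__ == "__main__":
    for model in list_downloaded_models():
        print(model)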

This makes more sense in a macOS or Windows environment, where a desktop environment is much more likely to be involved, as in this example recording of the GPT4ALL client application on Windows 10.

[Video: chat_RwyFxpSIWz.mp4, showing the GPT4ALL client application on Windows 10]

However, especially given the use-cases that autogpt attracts, it is naive to assume instances will have a desktop installed at all. gpt4all is guilty of this assumption, as their README offers no CLI instructions; even their build_and_run instructions are extremely visual, requiring specific dependencies and applications that are not CLI-friendly.

Perhaps this is not as easily done as I am imagining, and that is why gpt4all haven't provided instructions for the process. But personally I think that, especially with the aforementioned build_and_run explanation, it would be worth implementing a system that lets users download gpt4all models through Kurtosis itself,¹ once per model, and then access and utilize them in autogpt-package as desired.

Once again, thank you guys for making this already extremely complicated field a lot more approachable. Having dived head-first into this stuff a while ago, I am very happy to see you guys working on a project like this, and I really appreciate the way in which you have responded to feedback. This project needs more eyes on it.

Jason

Footnotes

  1. Perhaps not directly through the Kurtosis CLI, but through a subcommand of the autogpt-package, or something along those lines.

@zudsniper
Author

I missed the fact that you already simply provide a link to the model you desire, as shown under the Run without OpenAI header in the README. I think my issue still stands, however, as locally caching these models would make the lifecycle of each instance a lot faster.

Let me know if I'm missing something!

@h4ck3rk3y
Collaborator

Thanks for the ticket @zudsniper

The way you describe it is indeed ideal. That way we don't have to download from the passed URL every time, reducing the time it takes to get to something useful.

If we were doing pure Docker we would have used volumes and mounts, but Kurtosis doesn't support that. There's a function to upload files, but that's limited to 100MB. I am chatting with a colleague to figure out why we have the 100MB limit; if we can get past it (or support mounts in the future), the workflow could look like:

git clone git@github.com:kurtosis-tech/autogpt-package.git
cd autogpt-package
cp /dir/with/models/ggml-gpt4all-l13b-snoozy.bin models/
kurtosis run . '{"GPT_4ALL": true}'
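
For completeness, a rough Starlark sketch of how the package side might consume that local models/ directory. The artifact name, image, and mount path here are illustrative, and plan.upload_files is the upload call currently subject to the 100MB cap:

# main.star -- illustrative sketch, not the package's actual code
def run(plan, args):
    # Upload the local models/ directory as a files artifact.
    # This is the upload-files call that is currently capped at 100MB.
    models = plan.upload_files(src = "./models", name = "gpt4all-models")

    # Mount the uploaded artifact where the application expects to find models.
    plan.add_service(
        name = "autogpt",
        config = ServiceConfig(
            image = "significantgravitas/auto-gpt",  # illustrative image name
            files = {"/app/models": models},
        ),
    )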

I am tracking this :)

@mieubrisse
Contributor

Hey @zudsniper - I'm Kevin, one of the Kurtosis cofounders, and I first off wanted to say thank you for taking the time to put as much detail as you did in this ticket and the other one - we're trying to gather all the product feedback we can right now, and your tickets are hugely helpful!

Second, re. the ticket - this idea of "persisting data" (which in this case would be a GPT model) is something we've been thinking deeply about so I wanted to test a prototype: let's say that Kurtosis had the ability to mount files on your local computer's filesystem into the enclave (analogous to Docker's bind-mounts). Would this provide the missing functionality?
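
To make the prototype concrete: the Docker analogue below uses real bind-mount syntax, while the Kurtosis flag is purely hypothetical, invented here for illustration (no such option exists as of this thread):

# Docker bind-mount, for comparison (real syntax)
docker run -v /dir/with/models:/app/models some-autogpt-image

# Hypothetical Kurtosis equivalent; the --mount flag is invented for illustration
kurtosis run . '{"GPT_4ALL": true}' --mount /dir/with/models:/app/models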

(PS checked out your Soundcloud and dug it - you've got a good voice, I liked the composition choices, and the vibe in general is interesting. Jamming to Modernity USA literally right now while writing this)

@zudsniper
Author

zudsniper commented Jun 23, 2023

Hey @zudsniper - I'm Kevin, one of the Kurtosis cofounders, and I first off wanted to say thank you for taking the time to put as much detail as you did in this ticket and the other one - we're trying to gather all the product feedback we can right now, and your tickets are hugely helpful!

Hello @mieubrisse !
Happy to help. I like this project a lot due to the ease with which it enabled me to get started with rather complicated ideas. I realize this is not just the autogpt-package repository but the purpose of the Kurtosis project as a whole. Baader-Meinhof effect, I suspect: after learning about this project and using it a few times, I was fixing a friend's computer, which led to surfing through old BIOS options, and one of them was an Intel Smart (something like that) technology option that made reference to enclaves in a computational context. Frankly, I will be looking into the concept further, as it's one I haven't found myself fully grasping yet, and I'd like to.

this idea of "persisting data" (which in this case would be a GPT model) is something we've been thinking deeply about so I wanted to test a prototype: let's say that Kurtosis had the ability to mount files on your local computer's filesystem into the enclave (analogous to Docker's bind-mounts). Would this provide the missing functionality?

It is an interesting predicament; the idea of "persisting" data for an instance is complicated, as you then have to make that instance stateful in some way, which in my experience is a surprisingly hard problem when working with the real world.
To your question though: yes, support for mounting volumes a la Docker would accomplish the task. For this project specifically, it would be valuable in my opinion to include some wrapper system: if the volume to mount at a static relative location (the place where models are stored locally, which could be as simple as os.path.join(basedir, "models"); I truly have NO idea why I used Python for this... damn that's weird lol) does not exist on the filesystem, then the autogpt-package Kurtosis package might automatically create the folder structure and then initiate the download of a provided model from a provided URL into that new local target directory, as sketched below.
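
A rough sketch of that wrapper logic, in Python since that is what the comment reached for; the directory layout and the model URL are placeholders:

import urllib.request
from pathlib import Path

# Placeholder location; a real wrapper would make this configurable.
MODELS_DIR = Path(__file__).parent / "models"

def ensure_model(url: str) -> Path:
    """Return a local path to the model, downloading it only if absent."""
    MODELS_DIR.mkdir(parents=True, exist_ok=True)  # create the folder structure if missing
    target = MODELS_DIR / url.rsplit("/", 1)[-1]
    if not target.exists():
        print(f"Downloading {url} -> {target}")
        urllib.request.urlretrieve(url, target)
    return target

if __name__ == "__main__":
    # Illustrative URL only; substitute a real gpt4all model download link.
    ensure_model("https://example.com/models/ggml-gpt4all-l13b-snoozy.bin")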

(PS checked out your Soundcloud and dug it - you've got a good voice, I liked the composition choices, and the vibe in general is interesting. Jamming to Modernity USA literally right now while writing this)

Thank you very much man, it means a lot! C: I have a fair bit of music out on streaming under the moniker "phantom fanboy"; SoundCloud is kinda where I dump random things. I need to get back to music, but I've been making excuses. But I gotta.

Anyway, though I really haven't had the free time to use this package the way I intend to, I have at least 3 distinct large-scale ideas I sincerely wish I could drop everything and work on. Looking forward to hopefully meeting you in our meeting Tuesday morning.

Cheers C:
