🪐Feat: Support of local LLMs which are Free to run (DONE)✅ #3
Comments
I second this motion; there are several local LLM software solutions that provide API access, and it'd open this project to the majority of people.
+1, that would be great. I will try my best to find some time, but for now, here is a response I gave in another thread: This is where all the actual API calls to OpenAI are made. You will want to edit the functions there to instead call Llama 2. (You may have to edit utils.py plus a few prompts if they break when used with Llama.)
You can run https://github.com/oobabooga/text-generation-webui with this extension: https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai You can also replace the embeddings with Sentence Transformers by replacing get_embedding(...).
And then override api_base in utils.py to point to your proxy. I have a hacky fork doing this at the moment: main...InconsolableCellist:generative_agents:main However, I'm not seeing any observed actions in my simulation. I suspect my model isn't generating content that can be parsed into useful actions, though I haven't gotten that far in debugging yet.
We have work similar to this Stanford town, but it is in progress. Our team put some open-source LLMs into the scenario and found that they simply do not work without fine-tuning; open-source LLMs like LLaMA and ChatGLM cannot generate reasonable, stable content.
Implemented GPT4All here :) https://github.com/SaturnCassini/gpt4all_generative_agents
It was super easy, so thanks for modularizing all the GPT logic in one place @joonspk-research
Just curious, does the simulation work with GPT4All?
@pjq It does work; it tends to loop, but I think that can be solved with increased temperature and by changing the model's params, like the repetition penalty. Also, I didn't try models with more params, only Orca Mini, which is known not to give the best results.
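The tweaks described above (higher temperature, a repetition penalty) could be sketched with the gpt4all Python bindings roughly as follows. The model file name and the parameter values are illustrative assumptions, not tested settings:

```python
# Illustrative sampling settings aimed at reducing looping output.
GEN_PARAMS = {
    "temp": 0.9,            # higher temperature -> more varied continuations
    "repeat_penalty": 1.3,  # penalize tokens the model keeps repeating
    "max_tokens": 256,
}

def local_generate(prompt: str,
                   model_file: str = "orca-mini-3b-gguf2-q4_0.gguf") -> str:
    # Lazy import; requires `pip install gpt4all` and downloads the model
    # file on first use.
    from gpt4all import GPT4All
    model = GPT4All(model_file)
    return model.generate(prompt, **GEN_PARAMS)
```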
Does this mean replacing the entire utils.py with this code? But the reported error is that openai_api_key does not exist; how should I call this code instead?
If you want to give it a try, you can fork my repo. I had to add api_base in utils.py and then change the GPT calls to include api_base as a param (not sure if I could have used an environment variable or a global variable instead). I replaced get_embedding where it appeared in gpt_structure.py as well. Follow the instructions as normal to create your utils.py and add
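A minimal sketch of what such a utils.py might contain, assuming a local OpenAI-compatible server; the address and key below are placeholders, not values from the fork:

```python
# utils.py (sketch) -- values are placeholders for a local backend.
openai_api_key = "sk-local-placeholder"  # most local servers ignore the key
api_base = "http://127.0.0.1:5001/v1"    # example OpenAI-compatible endpoint

def configure_openai():
    # The repo uses the legacy openai 0.x SDK, which reads these
    # module-level attributes; call this once before any completion request.
    import openai
    openai.api_key = openai_api_key
    openai.api_base = api_base
```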
I found an API interface under Stanford's Alpaca2 that mimics the OpenAI API, and it's now deployed; I only need the URL http://localhost:19327/v1/completions to access the local port. How can I connect to this port now? Do I need to make changes to the gpt_structure.py script or to utils.py?
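As a sketch of one answer: rather than editing every call site in gpt_structure.py, requests can simply be pointed at that URL. The helper below is illustrative (the function names and defaults are assumptions) and posts a raw completion request against the endpoint mentioned above:

```python
def build_payload(prompt: str, max_tokens: int = 128,
                  temperature: float = 0.7) -> dict:
    # Request body shape for an OpenAI-style /v1/completions endpoint.
    return {"prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": temperature}

def complete(prompt: str,
             url: str = "http://localhost:19327/v1/completions") -> str:
    # Lazy import so build_payload stays dependency-free.
    import requests
    resp = requests.post(url, json=build_payload(prompt), timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]
```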
@SaturnCassini Could you open up "Issues" on your repo?
Hello, would it be possible to integrate something like GPT4All, which runs locally and doesn't cost anything, unlike OpenAI?