
Support using other/local LLMs #25

Closed
DataBassGit opened this issue Apr 2, 2023 · 95 comments · Fixed by #7091, #7130 or #7178
Labels
enhancement (New feature or request), function: AI-model, local llm (Related to local llms), re-arch, size/l

Comments

@DataBassGit

You can modify the code to accept a config file as input, and read the Chosen_Model flag to select the appropriate AI model. Here's an example of how to achieve this:

Create a sample config file named config.ini:

[AI]
Chosen_Model = gpt-4

Offload call_ai_function from ai_functions.py to a separate library, and modify the call_ai_function function to read the model from the config file:

import configparser

import openai

def call_ai_function(function, args, description, config_path="config.ini"):
    # Load the configuration file
    config = configparser.ConfigParser()
    config.read(config_path)

    # Get the chosen model from the config file
    model = config.get("AI", "Chosen_Model", fallback="gpt-4")

    # Parse args to comma separated string
    args = ", ".join(args)
    messages = [
        {
            "role": "system",
            "content": f"You are now the following python function: ```# {description}\n{function}```\n\nOnly respond with your `return` value.",
        },
        {"role": "user", "content": args},
    ]

    # Use different AI APIs based on the chosen model
    if model == "gpt-4":
        response = openai.ChatCompletion.create(
            model=model, messages=messages, temperature=0
        )
    elif model == "some_other_api":
        # Add code to call another AI API with the appropriate parameters
        response = some_other_api_call(parameters)
    else:
        raise ValueError(f"Unsupported model: {model}")

    return response.choices[0].message["content"]

In this modified version, the call_ai_function takes an additional parameter config_path which defaults to "config.ini". The function reads the config file, retrieves the Chosen_Model value, and uses it as the model for the OpenAI API call. If the Chosen_Model flag is not found in the config file, it defaults to "gpt-4".

The if/elif structure is used to call different AI APIs based on the chosen model from the configuration file. Replace some_other_api with the name of the API you'd like to use, and replace parameters with the appropriate parameters required by that API. You can extend the if/elif structure to include more AI APIs as needed.

@Torantulino
Member

Excellent work. Lots of people are asking for this; please submit a pull request!

In order to fully support GPT-3.5 (and other models) we need to harden the prompt.

@Koobah had some success by adding this line to the end of prompt.txt:

Before submitting the response, simulate parsing the response with Python json.loads. Don't submit unless it can be parsed.

This would also help out #21

@DataBassGit
Author

DataBassGit commented Apr 2, 2023

I see you looking over my shoulder. Thoughts?

I've got a friend who is going to clone the branch and test for me. (hopefully) I don't have a working environment atm.

@Koobah

Koobah commented Apr 2, 2023

I also changed the user prompt from NEXT COMMAND to GENERATE NEXT COMMAND JSON, basically reminding it to use JSON whenever possible.
It's still not 100% working, though.

@DataBassGit
Author

https://github.com/DataBassGit/Auto-GPT/blob/master/scripts/ai_function_lib.py

@Koobah This is basically what I'm working with atm. I think we can probably add a verify_json function to a gpt-3.5 segment of that function.
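For illustration, a minimal sketch of what such a verify_json helper could look like (the function name and retry idea are hypothetical, not existing code in the repo):

```python
import json

def verify_json(response_text: str) -> bool:
    """Return True if the model's reply parses as valid JSON."""
    try:
        json.loads(response_text)
        return True
    except json.JSONDecodeError:
        return False
```

A gpt-3.5 branch of call_ai_function could then re-prompt the model whenever verify_json returns False, instead of failing on malformed output.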

@DataBassGit
Author

Pull request on this is submitted. I'm going to start looking into more models and platforms that can be incorporated.

@ResourceHog

Would be huge if it can run llama.cpp locally.

@DataBassGit
Author

I might be able to swing that. Let's see if this merge gets approved. I'm also looking at implementing GPT4all.

@MarkSchmidty

@DataBassGit I see that PR got closed. What's the status of your fork?

@DataBassGit
Author

They moved the API call to GPT-4 to an external library in main.py, but there are still some scripts that call openai directly, like chat.py, browse.py etc.

GPT4all doesn't support x64 architecture. I also tried some APIs on hugging face, but it seems that it truncates responses on the free API endpoints.

I just converted BabyAGI to Oobabooga, but it's untested. Should be getting to that tonight. If it works, I will start working on porting AutoGPT to Oobabooga as well. The nice thing about this method is that it allows for local or remote hosting, and can handle many different language models without much issue.

Hugging face should work, though. I need to review Microsoft/Jarvis. They make heavy use of HF APIs.

@MarkSchmidty

GPT4all supports x64 and every architecture llama.cpp supports, which is every architecture (even non-POSIX, and WebAssembly). Their motto is "Can it run Doom LLaMA" for a reason.

Ooga supports GPT4all (and all llama.cpp ggml models), since it packages llama.cpp and the llamacpp python bindings library. So porting it to ooba would effectively resolve this.

@DataBassGit
Author

The Python Client for gpt4all only supports x86 Linux and ARM Mac.

@MarkSchmidty

I'm running it on x64 right now.

@MarkSchmidty

MarkSchmidty commented Apr 5, 2023

To be clear, the x86 architecture for gpt4all should really be called x86/x64. It supports either.

But none of the gpt4all libraries are required to run inference with gpt4all. They have their own fork to load the pre-prompt automatically. But you can load the same pre-prompt with one click in ooba's UI with standard llama.cpp and the regular llama.cpp python bindings as a back-end. You don't need anything but the model .bin and ooba's webui repo.

@alkeryn

alkeryn commented Apr 5, 2023

Wouldn't it be simpler to just make an API call to ooba's GUI instead of managing the loading of models?
It may be easier to just have a standardized API, so you don't have to care about implementation details.
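As a rough sketch of that idea, assuming a local server that exposes an OpenAI-style /v1/chat/completions endpoint (the URL, port, and model name below are placeholders):

```python
import requests

def local_chat(prompt: str, base_url: str = "http://localhost:5000/v1") -> str:
    # Any backend that speaks the OpenAI chat-completions format can sit behind
    # this call; the caller never has to manage model loading at all.
    payload = {
        "model": "local-model",  # placeholder; many local servers ignore this
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,
    }
    resp = requests.post(f"{base_url}/chat/completions", json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```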

@MarkSchmidty

Ooba's UI is a lot of overhead just to send and receive requests from a different model.

AFAIK ooba supports two types of models, HuggingFace models and GGML (llama.cpp) models (like GPT4All). The former with HuggingFace libraries and the latter with these python bindings: https://github.com/thomasantony/llamacpp-python for llama.cpp

Adding even basic support for just one of these would surely bring in waves of developers who want local models and who would then contribute to improving Auto-GPT.

@alkeryn

alkeryn commented Apr 6, 2023

@MarkSchmidty I see what you mean. Though maybe it would be simpler to make a separate project that exposes a standardized API, and perhaps some extensibility through plugins, and not much else, so that other projects can just use the API without having to care about how to implement the various models and techniques.

Either way, in the long run I think it may be better if we have a standard API that was well thought out. Just like language servers made our editors nicer, it would be nice to have a standardized LLM (or even AI) API.

If we could avoid fragmentation, that'd be great, and there is no better time than now to do so.

@alkeryn

alkeryn commented Apr 6, 2023

Well, in the meantime I think I'll fork it to use LLaMA instead. I've got GPT-4 access, but I like the idea of being able to let it run for a very long time without worrying about cost or API overuse.

@MarkSchmidty

> Well, in the meantime I think I'll fork it to use LLaMA instead. I've got GPT-4 access, but I like the idea of being able to let it run for a very long time without worrying about cost or API overuse.

I think a lot of people want this but just don't know it yet. There are lots of interesting use cases which would rack up a huge OpenAI bill that LLaMA-30B or 65B can probably handle fine, for just the cost of powering a 150-watt, $200 Nvidia Tesla P40.

@DataBassGit
Author

DataBassGit commented Apr 6, 2023

There's an issue with this. Auto-GPT relies on specifically structured prompts in order to function correctly. LLaMA does not do well at providing responses structured in the exact format that is required. Vicuna does a much better job. It's not perfect, but could probably get there with some fine-tuning.

I have a fork of an older version of Auto-GPT that I am planning to hook up to vicuna. Right now, I am waiting on Oobabooga to fix a bug in their API. I've been working with BabyAGI at the moment because it is simpler than AutoGPT. Once BabyAGI is working, I will migrate the changes to AutoGPT as well.

@MarkSchmidty

With minimal finetuning LLaMA can easily do better (yes better*) than GPT-4. Finetuning goes a long way and LLaMA is a very capable base model. The Vicuna dataset (ShareGPT) is available for finetuning here: https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/tree/main/HTML_cleaned_raw_dataset

The ideal finetuning would be based on a dataset of GPT-4's interactions with Auto-GPT though.

*To be fair, GPT-4 could do better than it already does "out of the box" with a few tweaks like using embeddings, but that is besides the point.

@DataBassGit
Author

@MarkSchmidty We absolutely could dump the prompts and responses from AutoGPT to a file and use that for fine-tuning Vicuna. I don't have a GPU, however, so I'm not able to perform the operation, and I don't have the money for GPT-4.

@alkeryn

alkeryn commented Apr 6, 2023

@DataBassGit Yes, this is what I found while trying to implement it, and that's before the Pinecone update. After the Pinecone update there is an additional use of OpenAI to generate embeddings, which would also need to be handled differently.

@DataBassGit
Author

@alkeryn I managed to do this using the sentence_transformers library. This appears to work for Vicuna and Pinecone, but you have to change your index dimensions from 1536 to 768 on Pinecone. I think the model dictates the index dimensions; I couldn't find a way to adjust the dimensions otherwise.


```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/LaBSE')

def get_ada_embedding(text):
    # Get the embedding for the given text
    embedding = model.encode([text])
    return embedding[0]
```
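For reference, a sketch of creating a matching 768-dimension index with the Pinecone client as it existed at the time (index name and credentials are placeholders):

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
# LaBSE returns 768-dimensional vectors, so the index dimension must match.
pinecone.create_index("auto-gpt-memory", dimension=768, metric="cosine")
```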

@MarkSchmidty

> @alkeryn I managed to do this using the sentence_transformers library. This appears to work for Vicuna and Pinecone, but you have to change your index dimensions from 1536 to 768 on Pinecone. I think the model dictates the index dimensions; I couldn't find a way to adjust the dimensions otherwise.
>
> ```python
> from sentence_transformers import SentenceTransformer
>
> model = SentenceTransformer('sentence-transformers/LaBSE')
>
> def get_ada_embedding(text):
>     # Get the embedding for the given text
>     embedding = model.encode([text])
>     return embedding[0]
> ```

Awesome!

There are offline embedding replacements for pinecone that might be more ideal. For example, https://github.com/wawawario2/long_term_memory is a fork of ooba which produces and stores embeddings locally using zarr and Numpy. See https://github.com/wawawario2/long_term_memory#how-it-works-behind-the-scenes
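For anyone curious, here is a toy sketch of the general idea of a local vector store (this is not that fork's actual implementation, which uses zarr; it just shows embeddings being stored and searched with NumPy alone):

```python
import numpy as np

class LocalVectorStore:
    """Minimal offline stand-in for a hosted vector database."""

    def __init__(self, dim: int = 768):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.texts = []

    def add(self, embedding: np.ndarray, text: str) -> None:
        # Stack the new embedding onto the existing matrix and remember its text.
        self.vectors = np.vstack([self.vectors, embedding.astype(np.float32)])
        self.texts.append(text)

    def query(self, embedding: np.ndarray, top_k: int = 5) -> list:
        # Cosine similarity against every stored vector.
        norms = np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(embedding)
        sims = (self.vectors @ embedding) / np.maximum(norms, 1e-10)
        return [self.texts[i] for i in np.argsort(-sims)[:top_k]]
```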

@DataBassGit
Author

Unfortunately, that would take a lot of chopping to apply to what I am using it for. This is designed for the webui, which I am not using. We're loading ooba in API mode so no --chat or --cai-chat flag.

python server.py --auto-devices --listen --no-stream

This is how I am initiating the server.

@MarkSchmidty

Right, it would have to be re-implemented specifically for Auto-GPT. I just thought I'd point out that it is a future possibility.

I suppose local embeddings is a separate issue / feature we can look into.

@isaaclepes

isaaclepes commented Sep 2, 2023

I still dream of a day when we can use Petals.dev's API for distributed processing. It even allows you to make your own private swarm. I am picking up old, free computers off Craigslist and adding cheap/free GPUs to them for my private swarm.

Say383 referenced this issue in Say383/Auto-GPT Sep 8, 2023
* windows docs make workspace if not there

* small fixes
@haochuan-li

I'm new to this. I'm wondering if AutoGPT is able to call a custom URL (other than the OpenAI API) to get responses, so that we can use other serving systems like TGI or vLLM to serve our own LLM.

@DGdev91
Contributor

DGdev91 commented Oct 12, 2023

> I'm new to this. I'm wondering if AutoGPT is able to call a custom URL (other than the OpenAI API) to get responses, so that we can use other serving systems like TGI or vLLM to serve our own LLM.

Oh, is this thread still open?

Well, it's possible now. Just set the OPENAI_API_BASE variable and you can use any service which is compliant with OpenAI's API.

...But local LLMs aren't as good as GPT-4, and I never got much out of them, even though it is technically possible to use them. So I gave up some time ago.

Maybe some recent long-context LLMs like llongma and so on can actually work, but I never tried that.

@9cento

9cento commented Oct 12, 2023

> I'm new to this. I'm wondering if AutoGPT is able to call a custom URL (other than the OpenAI API) to get responses, so that we can use other serving systems like TGI or vLLM to serve our own LLM.
>
> Oh, is this thread still open?
>
> Well, it's possible now. Just set the OPENAI_API_BASE variable and you can use any service which is compliant with OpenAI's API.
>
> ...But local LLMs aren't as good as GPT-4, and I never got much out of them, even though it is technically possible to use them. So I gave up some time ago.
>
> Maybe some recent long-context LLMs like llongma and so on can actually work, but I never tried that.

Did you give Llama 2 a try? CodeLlama?

@DataBassGit
Author

The issue with open source models is that they are trained differently. Getting a response in a specific format requires either fine tuning of the model or modification of the prompts. AutoGPT wasn't designed to make it easy to edit the prompts, and fine tuning is expensive. Eventually, I just built my own agent framework.

@chymian

chymian commented Oct 20, 2023

One Proxy to rule them all!

https://github.com/BerriAI/litellm/

is an API proxy with a vast choice of backends, like Replicate, OpenAI, Petals, ...
and it works like a charm.
Please implement!
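For context, litellm's core call mirrors the OpenAI chat API and routes by the model string; a minimal sketch (the Ollama model name is just an example backend):

```python
from litellm import completion

response = completion(
    model="ollama/llama2",  # example: route the call to a local Ollama backend
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response["choices"][0]["message"]["content"])
```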

@Wladastic
Contributor

> One Proxy to rule them all!
>
> https://github.com/BerriAI/litellm/
>
> is an API proxy with a vast choice of backends, like Replicate, OpenAI, Petals, ... and it works like a charm. Please implement!

Thank you for your suggestion,
I will take a look at this.

@alkeryn

alkeryn commented Jan 31, 2024

@Wladastic Just so you know, textgen webui has an OpenAI-like API now; you can look in the wiki for how to point the openai library at its API.

You can see how to set it up here:
https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API

You will want to do something like: export OPENAI_API_BASE=http://localhost:5000/v1
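If you'd rather set it in code, here is a small sketch assuming the pre-1.0 openai Python library (which also reads OPENAI_API_BASE from the environment on its own):

```python
import os
import openai

openai.api_base = os.getenv("OPENAI_API_BASE", "http://localhost:5000/v1")
openai.api_key = os.getenv("OPENAI_API_KEY", "sk-dummy")  # most local servers ignore the key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model name is passed through to the local backend
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message["content"])
```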

@Wladastic
Contributor

I know about that one; I was talking about the project mentioned above.

@Pwuts
Member

Pwuts commented Feb 14, 2024

Coming soon... :)

@impredicative

impredicative commented Mar 9, 2024

This request is more relevant if Claude 3 Opus is actually better than GPT-4, at least for some types of tasks.

Contributor

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

@github-actions github-actions bot added the Stale label Apr 29, 2024
@GoZippy

GoZippy commented Apr 29, 2024

I got your activity

@alkeryn

alkeryn commented Apr 29, 2024

I mean, at that point it is pretty easy to tell it to use other OpenAI-compatible backends with the env variables.
And there must be proxies for other providers too, i.e. something that converts a non-OpenAI-compatible API into an OpenAI-compatible one; if not, it's pretty trivial to build.
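As a sketch of how small such a shim can be, here is a hypothetical FastAPI proxy that exposes an OpenAI-style chat endpoint and forwards to some other backend (the backend URL and its payload shape are made up for illustration):

```python
import requests
from fastapi import FastAPI

app = FastAPI()  # run with: uvicorn proxy:app --port 5000

@app.post("/v1/chat/completions")
def chat_completions(body: dict):
    # Flatten the OpenAI-style messages into one prompt for a hypothetical
    # backend that only accepts {"prompt": ...} and returns {"text": ...}.
    prompt = "\n".join(m["content"] for m in body.get("messages", []))
    backend = requests.post("http://localhost:8080/generate",
                            json={"prompt": prompt}, timeout=120)
    text = backend.json().get("text", "")
    # Answer in the minimal OpenAI-compatible response shape.
    return {
        "id": "proxy-1",
        "object": "chat.completion",
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
    }
```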

@github-actions github-actions bot removed the Stale label Apr 30, 2024
@ntindle
Member

ntindle commented May 9, 2024

We've made lots of progress on this front recently with @Pwuts's work on additional providers. We can now begin the work on more open providers. We will likely be starting with llamafile, then moving out from there.

@Cattacker

Cattacker commented Jul 17, 2024 via email
