
OAI_CONFIG_LIST details in documentation #3

Closed
BeibinLi opened this issue Sep 18, 2023 · 10 comments
@BeibinLi
Collaborator

It would be helpful to add details about OAI_CONFIG_LIST in the documentation so that users can quickly get started with the OAI functions.

@sonichi sonichi transferred this issue from microsoft/FLAML Sep 20, 2023
@sonichi
Collaborator

sonichi commented Sep 20, 2023

Could you read https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#runtime-error and see if the issue is addressed?

@rustyorb

Is there a working example of this file, in JSON format? I can't get this file to work or parse correctly when I create it.

@AaronWard
Collaborator

AaronWard commented Sep 27, 2023

I was a bit confused about how this works. It would be nice to just have something like load_dotenv() handle the keys. But anyway, going off the examples in the /notebooks directory, I made a file called OAI_CONFIG_LIST (with no file extension):

[
    {
        "model": "gpt-4",
        "api_key": "***"
    },
    {
        "model": "gpt-3.5-turbo",
        "api_key": "***"
    }
]

In my notebook:

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    file_location=".",
    filter_dict={
        "model": {
            "gpt-4",
            "gpt4",
            "gpt-4-32k",
            "gpt-4-32k-0314",
            "gpt-4-32k-v0314",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
            "gpt",
        }
    }
)

config_list

When printing config_list:

[{'model': 'gpt-4',
  'api_key': '***'},
 {'model': 'gpt-3.5-turbo',
  'api_key': '***'}]

And then I pass the config_list to the initiate_chat function.

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(assistant, 
    message="Plot a chart of META and TESLA stock price change YTD.", 
    config_list=config_list
)

This worked for me.
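
For anyone wondering what the filter_dict is doing here: as I understand it, each entry in the list is kept only when every field named in the filter takes one of the allowed values. A minimal pure-Python sketch of that matching behavior (filter_entries is my own illustrative helper, not part of autogen):

```python
import json

def filter_entries(config_list, filter_dict):
    """Keep entries whose filtered fields all fall in the allowed-value sets."""
    return [
        entry
        for entry in config_list
        if all(entry.get(key) in allowed for key, allowed in filter_dict.items())
    ]

# Same shape as the OAI_CONFIG_LIST file above, with one extra model to filter out.
raw = json.loads(
    '[{"model": "gpt-4", "api_key": "***"},'
    ' {"model": "text-davinci-003", "api_key": "***"}]'
)
kept = filter_entries(raw, {"model": {"gpt-4", "gpt-3.5-turbo"}})
print([c["model"] for c in kept])  # ['gpt-4']
```

So any entry whose "model" is not in the allowed set simply drops out of the returned list.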


Update

If you'd rather use load_dotenv(), this worked for me:

import os
import json
from pathlib import Path

from dotenv import load_dotenv

load_dotenv(Path('../../.env'))

env_var = [
    {
        'model': 'gpt-4',
        'api_key': os.environ['OPENAI_API_KEY']
    },
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.environ['OPENAI_API_KEY']
    }
]

# needed to convert to str
env_var = json.dumps(env_var)

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    env_or_file=env_var,
    filter_dict={
        "model": {
            "gpt-4",
            "gpt4",
            "gpt-4-32k",
            "gpt-4-32k-0314",
            "gpt-4-32k-v0314",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
            "gpt",
        }
    }
)

@rustyorb

This is tremendously helpful, thank you.

@sonichi
Collaborator

sonichi commented Sep 27, 2023

> I was a bit confused on how this works. […] If you'd rather use load_dotenv() this worked for me. […]

Thanks. You can also have a single env var that contains the entire JSON and load it directly:

load_dotenv(Path('../../.env'))
config_list = autogen.config_list_from_json("YOUR_ENV_VAR_NAME_FOR_JSON")
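
If it helps, a minimal sketch of that pattern; the env var name is a placeholder, and the json.loads line mirrors (as I understand it) what config_list_from_json does when env_or_file names an env var rather than a file:

```python
import json
import os

# Simulate what load_dotenv would normally pull in from the .env file:
os.environ["OAI_CONFIG_LIST_JSON"] = json.dumps([
    {"model": "gpt-4", "api_key": "***"},
    {"model": "gpt-3.5-turbo", "api_key": "***"},
])

# autogen.config_list_from_json("OAI_CONFIG_LIST_JSON") would then
# parse the env var's value as JSON, roughly like this:
config_list = json.loads(os.environ["OAI_CONFIG_LIST_JSON"])
print([c["model"] for c in config_list])  # ['gpt-4', 'gpt-3.5-turbo']
```

The upside is that no OAI_CONFIG_LIST file has to exist on disk at all.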

@EdFries

EdFries commented Sep 27, 2023

I'm trying to solve the same problem using FastChat with a local LLM as in the documentation, but it fails because I don't provide an OpenAI key:

from autogen import AssistantAgent, UserProxyAgent, oai
config_list=[
    {
        "model": "chatglm2-6b",
        "api_base": "http://localhost:8000/v1",
        "api_type": "open_ai",
        "api_key": "NULL", # just a placeholder
    }
]

response = oai.Completion.create(config_list=config_list, prompt="Hi")
print(response) # works fine

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stock price change YTD.", config_list=config_list)
# fails with the error: openai.error.AuthenticationError: No API key provided.

@sonichi
Collaborator

sonichi commented Sep 27, 2023

> I'm trying to solve the same problem using fastchat with a local llm as in the documentation but it fails because I don't provide an openai key: […]

Please add llm_config={"config_list": config_list} in the constructor of AssistantAgent.
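
Spelled out, the fix looks like this; the config entries are the ones from the thread, the agent constructor line is commented out so the sketch runs without a live endpoint, and the key point is the shape of the llm_config dict:

```python
config_list = [
    {
        "model": "chatglm2-6b",
        "api_base": "http://localhost:8000/v1",
        "api_type": "open_ai",
        "api_key": "NULL",  # placeholder; the local endpoint doesn't check it
    }
]

# Passing the config list via llm_config keeps the agent from falling back
# to the default OpenAI key lookup (the source of the AuthenticationError):
llm_config = {"config_list": config_list}
# assistant = AssistantAgent("assistant", llm_config=llm_config)
```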

@EdFries

EdFries commented Sep 27, 2023

Thanks, that worked!

@sonichi
Collaborator

sonichi commented Sep 27, 2023

@AaronWard
Collaborator

Update: I found that my previous example was throwing an error because the JSON was being parsed as a string. Here is a working example of setting up your config list using dotenv. This lets you dynamically create the JSON file required by autogen.config_list_from_json() when you're using a .env file:

import os
import json
import tempfile

import autogen
from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv())

env_var = [
    {
        'model': 'gpt-4',
        'api_key': os.getenv('OPENAI_API_KEY')
    },
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.getenv('OPENAI_API_KEY')
    }
]

# Create a temporary file
# Write the JSON structure to a temporary file and pass it to config_list_from_json
with tempfile.NamedTemporaryFile(mode='w+', delete=True) as temp:
    env_var = json.dumps(env_var)
    temp.write(env_var)
    temp.flush()

    # Setting configurations for autogen
    config_list = autogen.config_list_from_json(
        env_or_file=temp.name,
        filter_dict={
            "model": {
                "gpt-4",
                "gpt-3.5-turbo",
            }
        }
    )

assert len(config_list) > 0 
print("models to use: ", [config_list[i]["model"] for i in range(len(config_list))])

models to use: ['gpt-4', 'gpt-3.5-turbo']
