
[BUG] After configuring the online OpenAI settings, other online models and local models become unusable #1735

Closed
ArlanCooper opened this issue Oct 11, 2023 · 4 comments
Labels
bug Something isn't working

Comments

@ArlanCooper

  1. I pulled the latest code and modified the corresponding configuration in model_config.py, setting up the model entry under ONLINE_LLM_MODEL:
"zhipu-api": {
        "api_key": "mykey",
        "version": "chatglm_lite",  # options include "chatglm_lite", "chatglm_std", "chatglm_pro"
        "provider": "ChatGLMWorker",
    }
  2. I added the configuration I need here, in server/utils.py:
def get_model_worker_config(model_name: str = None) -> dict:
    '''
    Load the configuration for a model worker.
    Priority: FSCHAT_MODEL_WORKERS[model_name] > ONLINE_LLM_MODEL[model_name] > FSCHAT_MODEL_WORKERS["default"]
    '''
    from configs.model_config import ONLINE_LLM_MODEL
    from configs.server_config import FSCHAT_MODEL_WORKERS
    from server import model_workers

    config = FSCHAT_MODEL_WORKERS.get("default", {}).copy()
    config.update(ONLINE_LLM_MODEL.get(model_name, {}))
    config.update(FSCHAT_MODEL_WORKERS.get(model_name, {}))

    # Online model APIs
    if model_name in ONLINE_LLM_MODEL:
        config["online_api"] = True
        if model_name in ['gpt-35-turbo', 'gpt-4', 'gpt-4-32k']:
            from configs.OpenAiConfig import OpenAiConfig
            myopenai = OpenAiConfig.set_openai()
            config["api_key"] = myopenai.api_key
            config['api_base_url'] = myopenai.api_base
        if provider := config.get("provider"):
            try:
                config["worker_class"] = getattr(model_workers, provider)
            except Exception as e:
                msg = f"The provider for online model '{model_name}' is not configured correctly"
                logger.error(f'{e.__class__.__name__}: {msg}',
                             exc_info=e if log_verbose else None)

    config["model_path"] = get_model_path(model_name)
    config["device"] = llm_device(config.get("device"))
    return config
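To make the priority described in the docstring concrete, here is a minimal, self-contained sketch of the same three-way dict merge, using hypothetical stand-in config values (not the project's real defaults): later `update()` calls win, so the per-model worker entry overrides the online-model entry, which overrides the defaults.

```python
# Stand-in configs (hypothetical values, for illustration only).
FSCHAT_MODEL_WORKERS = {
    "default": {"device": "cuda", "port": 20002},
    "zhipu-api": {"port": 21001},
}
ONLINE_LLM_MODEL = {
    "zhipu-api": {"api_key": "mykey", "version": "chatglm_lite"},
}

def merge_config(model_name: str) -> dict:
    # Same merge order as get_model_worker_config: defaults first,
    # then the online-model entry, then the per-model worker entry.
    config = FSCHAT_MODEL_WORKERS.get("default", {}).copy()
    config.update(ONLINE_LLM_MODEL.get(model_name, {}))
    config.update(FSCHAT_MODEL_WORKERS.get(model_name, {}))
    return config

print(merge_config("zhipu-api"))
# {'device': 'cuda', 'port': 21001, 'api_key': 'mykey', 'version': 'chatglm_lite'}
```

Note that `port` ends up as 21001: the `FSCHAT_MODEL_WORKERS["zhipu-api"]` entry is applied last and overrides both the default and the online-model settings.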

However, I still need to pass a new parameter, deployment_id, into get_ChatOpenAI. The problem is that as soon as this parameter is configured, every model is treated as a ChatGPT-style model and all the other models become unusable (the OpenAI-related parameters end up in the environment variables); but if I don't configure it, the GPT models can't be used. My specific change:

def get_ChatOpenAI(
        model_name: str,
        temperature: float,
        streaming: bool = True,
        callbacks: List[Callable] = [],
        verbose: bool = True,
        **kwargs: Any,
) -> ChatOpenAI:
    config = get_model_worker_config(model_name)
    if config.get("api_key", "EMPTY") == 'EMPTY':
        deployment_id = None
    else:
        deployment_id = model_name
    model = ChatOpenAI(
        streaming=streaming,
        verbose=verbose,
        callbacks=callbacks,
        openai_api_key=config.get("api_key", "EMPTY"),
        openai_api_base=config.get("api_base_url", fschat_openai_api_address()),
        model_name=model_name,
        temperature=temperature,
        openai_proxy=config.get("openai_proxy"),
        # deployment_id=deployment_id,  # if set, the other models stop working; if unset, the GPT models stop working
        **kwargs
    )
    return model
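One possible direction (a sketch under my own assumptions, not an official fix from the project) is to build the `deployment_id` keyword argument conditionally and splice it in via `**kwargs`, so only the Azure-deployed GPT models ever receive it and other models are constructed without the parameter. The `AZURE_MODELS` set and the helper name `build_extra_kwargs` below are hypothetical:

```python
# Hypothetical helper: build extra kwargs for ChatOpenAI, adding
# deployment_id only for the Azure-hosted GPT models named in the issue.
AZURE_MODELS = {"gpt-35-turbo", "gpt-4", "gpt-4-32k"}

def build_extra_kwargs(model_name: str, config: dict) -> dict:
    extra = {}
    # Only pass deployment_id when the model is Azure-hosted and an API
    # key is actually configured; local/other models get no extra kwargs.
    if model_name in AZURE_MODELS and config.get("api_key", "EMPTY") != "EMPTY":
        extra["deployment_id"] = model_name
    return extra

# Inside get_ChatOpenAI this could then be spliced in as:
#   model = ChatOpenAI(..., **build_extra_kwargs(model_name, config), **kwargs)
print(build_extra_kwargs("gpt-4", {"api_key": "sk-..."}))       # {'deployment_id': 'gpt-4'}
print(build_extra_kwargs("chatglm2-6b", {"api_key": "EMPTY"}))  # {}
```

This keeps the `ChatOpenAI(...)` call shared between all model types while scoping the Azure-specific parameter to the models that need it.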

How can this be solved? Alternatively, is there a demo script I could reference that can call GPT-style APIs while also supporting local models or other online service APIs?

@ArlanCooper ArlanCooper added the bug Something isn't working label Oct 11, 2023
@c940606

c940606 commented Oct 13, 2023

Same problem here: after calling another model, the local model can no longer be used.

@zRzRzRzRzRzRzR
Collaborator

The local model must be started at launch.
By design, we do not allow switching from an API model to a local model at runtime: this prevents an OOM caused by starting a second local model while a local model process is already running.

@ArlanCooper
Author

> The local model must be started at launch. By design, we do not allow switching from an API model to a local model at runtime: this prevents an OOM caused by starting a second local model while a local model process is already running.

Then if I want to support both local models and online APIs, where should I change the code?

@weixuan2008

The whole thing is half-baked; there are too many problems.
