How to set up various tokens and APIs for the project

This doc explains how to set up the various tokens and APIs used by the project. You will need some of them to run the app and use its features, and you must set up at least one of them to run the app.

OpenAI API key

To get your OpenAI API key, you need to:

  • Go to https://platform.openai.com/account/api-keys
  • Create an account or log in with your existing one
  • Add a payment method to your account (this is not free, sorry 😬)
  • Copy your secret key (sk-...) and save it in ./.env as OPENAI_API_KEY
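
Once you have the key, the corresponding line in ./.env would look something like this sketch (the key shown is a made-up placeholder, not a real key):

```
# .env file
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```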

ChatGPT Free Access token

To get your Access token for ChatGPT 'Free Version', you need to:

Warning: There is a chance of your account being banned if you deploy the app to multiple users with this method. Use at your own risk. 😱

Bing Access Token

To get your Bing Access Token, you have a few options:

  • You can try leaving it blank and see if it works (fingers crossed 🤞)

  • You can follow these new instructions (thanks @danny-avila for sharing 🙌)

  • You can use MS Edge, navigate to bing.com, and do the following:

    • Make sure you are logged in
    • Open the DevTools by pressing F12 on your keyboard
    • Click the "Application" tab
    • In the panel on the left, expand "Cookies" (under "Storage")
    • Copy the value of the "_U" cookie and save it in ./.env as BING_ACCESS_TOKEN
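
The resulting entry in ./.env would look something like this (the value shown is a placeholder for your own "_U" cookie value):

```
# .env file
BING_ACCESS_TOKEN=your_U_cookie_value_here
```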

Anthropic Endpoint (Claude)
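
Claude is accessed via the Anthropic API. You can create an account and generate an API key at https://console.anthropic.com/. A minimal .env sketch, assuming the project reads the key from a variable named ANTHROPIC_API_KEY (check the project's .env.example for the exact name; the value below is a placeholder):

```
# .env file
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxx
```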

Google's PaLM 2

To set up PaLM 2 (via the Google Cloud Vertex AI API), you need to:

  • Enable the Vertex AI API in your Google Cloud project
  • Create a Service Account
  • Create a JSON key for that Service Account, rename the file to auth.json, and save it in /api/data/
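
If you prefer the command line over the Cloud Console, roughly equivalent steps with the gcloud CLI are sketched below. This assumes gcloud is installed and authenticated, that you run it from the repository root, and that my-project-id and librechat-palm are placeholders for your own project ID and service-account name:

```
# Enable the Vertex AI API (project ID is a placeholder)
gcloud services enable aiplatform.googleapis.com --project=my-project-id

# Create a service account (name is a placeholder)
gcloud iam service-accounts create librechat-palm --project=my-project-id

# Give it access to Vertex AI
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:librechat-palm@my-project-id.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"

# Create a JSON key and save it where the app expects it
gcloud iam service-accounts keys create ./api/data/auth.json \
  --iam-account="librechat-palm@my-project-id.iam.gserviceaccount.com"
```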

Azure OpenAI

In order to use Azure OpenAI with this project, specific environment variables must be set in your .env file. These variables will be used for constructing the API URLs.

The variables needed are outlined below:

Required Variables

  • AZURE_API_KEY: Your Azure OpenAI API key.
  • AZURE_OPENAI_API_INSTANCE_NAME: The instance name of your Azure OpenAI API.
  • AZURE_OPENAI_API_DEPLOYMENT_NAME: The deployment name of your Azure OpenAI API.
  • AZURE_OPENAI_API_VERSION: The version of your Azure OpenAI API.
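
A minimal sketch of the required variables in .env; the instance name, deployment name, key, and API version shown below are illustrative placeholders, not defaults:

```
# .env file
AZURE_API_KEY=0123456789abcdef0123456789abcdef
AZURE_OPENAI_API_INSTANCE_NAME=my-instance
AZURE_OPENAI_API_DEPLOYMENT_NAME=gpt-35-turbo
AZURE_OPENAI_API_VERSION=2023-07-01-preview
```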

For example, with these variables, the URL for chat completion would look something like:

```
https://{AZURE_OPENAI_API_INSTANCE_NAME}.openai.azure.com/openai/deployments/{AZURE_OPENAI_API_DEPLOYMENT_NAME}/chat/completions?api-version={AZURE_OPENAI_API_VERSION}
```
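
With the placeholder values from the sketch above, that pattern would resolve to:

```
https://my-instance.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-07-01-preview
```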

You should also consider changing the AZURE_OPENAI_MODELS variable to the models available in your deployment.

Additional Configuration Notes

  • Endpoint Construction: The provided variables help customize the construction of the API URL for Azure.

  • Model Deployment Naming: As of 2023-11-10, the Azure API allows only one model per deployment. It's advisable to name your deployments after the model name (e.g., "gpt-3.5-turbo") for easy deployment switching; this is facilitated by setting AZURE_USE_MODEL_AS_DEPLOYMENT_NAME to TRUE. Alternatively, use custom deployment names and set AZURE_OPENAI_DEFAULT_MODEL for expected functionality.

  • AZURE_OPENAI_MODELS: List the available models, separated by commas without spaces. The first listed model will be the default. If left blank, internal settings will be used. Note that deployment names can't have periods, which are removed when generating the endpoint.

Example use:

```
# .env file
AZURE_OPENAI_MODELS=gpt-3.5-turbo,gpt-4,gpt-5
```

  • AZURE_USE_MODEL_AS_DEPLOYMENT_NAME: Enable using the model name as the deployment name for the API URL.

Example use:

```
# .env file
AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE
```

Note: The Azure API does not actually use the model name in the payload; it is mostly an identifying field for the LibreChat app. If you are using non-model deployment names and the model is not being recognized, you should set this field (see AZURE_OPENAI_DEFAULT_MODEL below). It will not be used as the deployment name when AZURE_USE_MODEL_AS_DEPLOYMENT_NAME is enabled, which prioritizes what the user selects as the model.

  • AZURE_OPENAI_DEFAULT_MODEL: Override the model setting for Azure, useful if using custom deployment names.

Example use:

```
# .env file
AZURE_OPENAI_DEFAULT_MODEL=gpt-3.5-turbo # do include periods in the model name here
```

Optional Variables

  • AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME: The deployment name for completions.
  • AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME: The deployment name for embeddings.

These two variables are optional; they are not currently in use but may be used in future updates of the project.

Using Plugins with Azure

Note: To use the Plugins endpoint with Azure OpenAI, you need a deployment that supports function calling. Otherwise, you need to turn "Functions" off in the Agent settings. When you are not using "Functions" mode, it's recommended to keep "Skip Completion" off as well; this is a review step of what the agent generated.

To use Azure with the Plugins endpoint, make sure the following environment variables are set:

  • PLUGINS_USE_AZURE: If set to "true" or any truthy value, this will enable the program to use Azure with the Plugins endpoint.
  • AZURE_API_KEY: Your Azure API key must be set with an environment variable.
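
Put together, the relevant lines in .env would look something like this (the key value is a placeholder):

```
# .env file
PLUGINS_USE_AZURE=true
AZURE_API_KEY=0123456789abcdef0123456789abcdef
```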

That's it! You're all set. 🎉



⚠️ Note: If you're having trouble, before creating a new issue, please search for similar ones in the #issues thread on our Discord or the troubleshooting discussion on our Discussions page. If you don't find a relevant issue, feel free to create a new one and provide as much detail as possible.