Add OpenLM LLM multi-provider
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP. It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code.
r2d4 committed May 22, 2023
1 parent 467ca6f commit 99fddf9
Showing 6 changed files with 188 additions and 2 deletions.
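For context on the drop-in claim in the commit message, here is a minimal sketch of calling openlm directly, adapted from the openlm README; the response field names are an assumption, mirroring OpenAI's Completion response shape:

    import openlm

    # The model string selects the backend: "huggingface.co/gpt2" routes to the
    # HuggingFace Inference API, "text-davinci-003" to OpenAI's API.
    completion = openlm.Completion.create(
        model="huggingface.co/gpt2", prompt="The quick brown fox"
    )
    print(completion["choices"][0]["text"])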
135 changes: 135 additions & 0 deletions docs/modules/models/llms/integrations/openlm.ipynb
@@ -0,0 +1,135 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# OpenLM\n",
"[OpenLM](https://github.com/r2d4/openlm) is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP. \n",
"\n",
"\n",
"It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code.\n",
"\n",
"This examples goes over how to use LangChain to interact with both OpenAI and HuggingFace. You'll need API keys from both."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setup"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"environ({'ASSETS_PATH': '/Users/matt/.config/raycast/extensions/open-code/assets', 'COMMAND_MODE': 'unix2003', 'COMMAND_NAME': 'index', 'DISPLAY': '/private/tmp/com.apple.launchd.S41rcVrtNR/org.xquartz:0', 'ELECTRON_NO_ATTACH_CONSOLE': '1', 'EXTENSION_NAME': 'open-code', 'HOME': '/Users/matt', 'LC_ALL': 'en_US-u-hc-h12-u-ca-gregory-u-nu-latn', 'LOGNAME': 'matt', 'MallocNanoZone': '0', 'NODE_ENV': 'development', 'NODE_PATH': '/Applications/Raycast.app/Contents/Resources/RaycastCommands_RaycastCommands.bundle/Contents/Resources/api/node_modules', 'ORIGINAL_XDG_CURRENT_DESKTOP': 'undefined', 'PATH': '/Users/matt/code/langchain/.venv/bin:/usr/bin:/bin:/usr/sbin:/sbin', 'PWD': '/', 'RAYCAST_BUNDLE_ID': 'com.raycast.macos', 'RAYCAST_VERSION': '1.51.3', 'SHELL': '/bin/zsh', 'SHLVL': '3', 'SSH_AUTH_SOCK': '/private/tmp/com.apple.launchd.HS0rnuvEbV/Listeners', 'SUPPORT_PATH': '/Users/matt/Library/Application Support/com.raycast.macos/extensions/open-code', 'TMPDIR': '/var/folders/18/8z30tbys2w18jwwvm_wbbrhc0000gn/T', 'USER': 'matt', 'VSCODE_AMD_ENTRYPOINT': 'vs/workbench/api/node/extensionHostProcess', 'VSCODE_CLI': '1', 'VSCODE_CODE_CACHE_PATH': '/Users/matt/Library/Application Support/Code/CachedData/b3e4e68a0bc097f0ae7907b217c1119af9e03435', 'VSCODE_CRASH_REPORTER_PROCESS_TYPE': 'extensionHost', 'VSCODE_CRASH_REPORTER_SANDBOXED_HINT': '1', 'VSCODE_CWD': '/', 'VSCODE_HANDLES_UNCAUGHT_ERRORS': 'true', 'VSCODE_IPC_HOOK': '/Users/matt/Library/Application Support/Code/1.78-main.sock', 'VSCODE_NLS_CONFIG': '{\"locale\":\"en-us\",\"osLocale\":\"en-us\",\"availableLanguages\":{},\"_languagePackSupport\":true}', 'VSCODE_PID': '22699', 'XPC_FLAGS': '0x0', 'XPC_SERVICE_NAME': '0', '__CFBundleIdentifier': 'com.microsoft.VSCode', '__CF_USER_TEXT_ENCODING': '0x1F5:0x0:0x0', 'ELECTRON_RUN_AS_NODE': '1', 'APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL': '1', 'VSCODE_L10N_BUNDLE_LOCATION': '', 'PYTHONUNBUFFERED': '1', 'PYTHONIOENCODING': 'utf-8', 'VIRTUAL_ENV': '/Users/matt/code/langchain/.venv', 'PS1': '(langchain-py3.11) ', '_': '/Users/matt/code/langchain/.venv/bin/python', 'PYDEVD_IPYTHON_COMPATIBLE_DEBUGGING': '1', 'PYDEVD_USE_FRAME_EVAL': 'NO', 'TERM': 'xterm-color', 'CLICOLOR': '1', 'FORCE_COLOR': '1', 'CLICOLOR_FORCE': '1', 'PAGER': 'cat', 'GIT_PAGER': 'cat', 'MPLBACKEND': 'module://matplotlib_inline.backend_inline', 'OPENAI_API_KEY': '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', 'HF_API_TOKEN': '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'})\n"
]
}
],
"source": [
"import os\n",
"import subprocess\n",
"from getpass import getpass\n",
"\n",
"print(os.environ)\n",
"\n",
"# Check if OPENAI_API_KEY environment variable is set\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" print(\"Enter your OpenAI API key:\")\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"# Check if HF_API_TOKEN environment variable is set\n",
"if \"HF_API_TOKEN\" not in os.environ:\n",
" print(\"Enter your HuggingFace Hub API key:\")\n",
" os.environ[\"HF_API_TOKEN\"] = getpass()\n",
"\n",
"try:\n",
" import openlm\n",
" import openai\n",
"except ImportError:\n",
" print(\"openlm package not found. Installing openlm...\")\n",
" subprocess.run([\"pip\", \"install\", \"openlm\", \"openai\"])\n",
"\n",
"from langchain.llms import OpenLM\n",
"from langchain import PromptTemplate, LLMChain"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using LangChain with OpenLM\n",
"\n",
"Here we're going to call two models in an LLMChain, `text-davinci-003` from OpenAI and `gpt2` on HuggingFace."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Choices [{'id': '13807ea7-c2f7-4f6a-a5bb-a21490d8622c', 'model_name': 'openai.com/text-davinci-003', 'created': 1684788067, 'text': ' First, what country are we looking for the capital of? France. The capital of France is Paris.', 'usage': {'prompt_tokens': 20, 'completion_tokens': 21, 'total_tokens': 41}, 'extra': {'id': 'cmpl-7J6cySQCqXIvWNhkeiaGEaezMfme3'}}]\n",
"Model: text-davinci-003\n",
"Result: First, what country are we looking for the capital of? France. The capital of France is Paris.\n",
"Choices [{'id': '5a824a67-4020-4a9b-b09b-b1f8ee61e907', 'model_name': 'huggingface.co/gpt2', 'created': 1684788067, 'text': \"Question: What is the capital of France?\\n\\nAnswer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more\"}]\n",
"Model: huggingface.co/gpt2\n",
"Result: Question: What is the capital of France?\n",
"\n",
"Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more\n"
]
}
],
"source": [
"question = \"What is the capital of France?\"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"for model in [\"text-davinci-003\", \"huggingface.co/gpt2\"]:\n",
" llm = OpenLM(model=model)\n",
" llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
" result = llm_chain.run(question)\n",
" print(\"\"\"Model: {}\n",
"Result: {}\"\"\".format(model, result))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
3 changes: 3 additions & 0 deletions langchain/llms/__init__.py
@@ -24,6 +24,7 @@
from langchain.llms.modal import Modal
from langchain.llms.nlpcloud import NLPCloud
from langchain.llms.openai import AzureOpenAI, OpenAI, OpenAIChat
from langchain.llms.openlm import OpenLM
from langchain.llms.petals import Petals
from langchain.llms.pipelineai import PipelineAI
from langchain.llms.predictionguard import PredictionGuard
@@ -53,6 +54,7 @@
"NLPCloud",
"OpenAI",
"OpenAIChat",
"OpenLM",
"Petals",
"PipelineAI",
"HuggingFaceEndpoint",
@@ -96,6 +98,7 @@
"nlpcloud": NLPCloud,
"human-input": HumanInputLLM,
"openai": OpenAI,
"openlm": OpenLM,
"petals": Petals,
"pipelineai": PipelineAI,
"huggingface_pipeline": HuggingFacePipeline,
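Registering `OpenLM` in this dict (in addition to exporting it) is what lets serialized LLM configs resolve back to the class. A hedged sketch, assuming the `load_llm_from_config` helper in `langchain.llms.loading`, which resolves the `_type` key through this mapping:

    from langchain.llms.loading import load_llm_from_config

    # "_type" selects the class via type_to_cls_dict; the remaining keys
    # become constructor kwargs.
    llm = load_llm_from_config({"_type": "openlm", "model_name": "huggingface.co/gpt2"})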
23 changes: 23 additions & 0 deletions langchain/llms/openlm.py
@@ -0,0 +1,23 @@
from typing import Any, Dict

from pydantic import root_validator

from langchain.llms.openai import BaseOpenAI


class OpenLM(BaseOpenAI):
    """Drop-in OpenAI replacement that routes completions through openlm."""

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        """Include the model name in the parameters BaseOpenAI builds."""
        return {**{"model": self.model_name}, **super()._invocation_params}

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Swap in openlm's Completion client and reject unsupported options."""
        try:
            import openlm

            values["client"] = openlm.Completion
        except ImportError:
            raise ValueError(
                "Could not import openlm python package. "
                "Please install it with `pip install openlm`."
            )
        if values["streaming"]:
            raise ValueError("Streaming not supported with openlm")
        return values
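Because BaseOpenAI does all the request plumbing, the subclass only has to ensure a `model` entry reaches the request parameters and swap the client for `openlm.Completion` at validation time. A minimal usage sketch, mirroring the notebook above (and assuming BaseOpenAI routes unknown kwargs like `model` into `model_kwargs`, as it does for the OpenAI wrapper):

    from langchain.llms import OpenLM

    # `model` flows through model_kwargs and overrides the default
    # model_name inside _invocation_params.
    llm = OpenLM(model="huggingface.co/gpt2", max_tokens=20)
    print(llm("Say foo:"))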
17 changes: 16 additions & 1 deletion poetry.lock

Some generated files are not rendered by default.

4 changes: 3 additions & 1 deletion pyproject.toml
@@ -91,6 +91,7 @@ psychicapi = {version = "^0.2", optional = true}
zep-python = {version="^0.25", optional=true}
chardet = {version="^5.1.0", optional=true}
requests-toolbelt = {version = "^1.0.0", optional = true}
openlm = {version = "^0.0.5", optional = true}

[tool.poetry.group.docs.dependencies]
autodoc_pydantic = "^1.8.0"
@@ -174,7 +175,7 @@ playwright = "^1.28.0"
setuptools = "^67.6.1"

[tool.poetry.extras]
llms = ["anthropic", "cohere", "openai", "nlpcloud", "huggingface_hub", "manifest-ml", "torch", "transformers"]
llms = ["anthropic", "cohere", "openai", "openlm", "nlpcloud", "huggingface_hub", "manifest-ml", "torch", "transformers"]
qdrant = ["qdrant-client"]
openai = ["openai", "tiktoken"]
text_helpers = ["chardet"]
@@ -240,6 +241,7 @@ all = [
"lxml",
"requests-toolbelt",
"neo4j",
"openlm"
]

# An extra used to be able to add extended testing.
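Since `openlm` is wired into both the `llms` and `all` extras above, users should be able to pull it in with `pip install "langchain[llms]"` (assuming the published package exposes these poetry extras), or install it directly with `pip install openlm`.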
8 changes: 8 additions & 0 deletions tests/integration_tests/llms/test_openlm.py
@@ -0,0 +1,8 @@
from langchain.llms.openlm import OpenLM


def test_openlm_call() -> None:
    """Test valid call to openlm."""
    llm = OpenLM(model_name="dolly-v2-7b", max_tokens=10)
    output = llm(prompt="Say foo:")
    assert isinstance(output, str)
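Being an integration test, this makes a live call through openlm; running it in isolation would look like `pytest tests/integration_tests/llms/test_openlm.py`, with network access and whatever credentials the `dolly-v2-7b` route requires assumed to be in place.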
