feat: Remove ModelType enums (#105)
jayolee committed Jul 21, 2023
1 parent 8a4e597 commit 35a280e
Showing 20 changed files with 60 additions and 101 deletions.
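
The migration for callers is mechanical: wherever a `ModelType` member was passed, pass the model's string id instead. A minimal before/after sketch (the credentials value is a placeholder, not a working key):

```python
from genai.credentials import Credentials
from genai.model import Model
from genai.schemas import GenerateParams

creds = Credentials("YOUR_GENAI_API_KEY")  # placeholder for illustration

params = GenerateParams(decoding_method="sample", max_new_tokens=10)

# Before this commit:
#   from genai.schemas import ModelType
#   model = Model(ModelType.FLAN_UL2, params=params, credentials=creds)

# After this commit, models are identified by their string id:
model = Model("google/flan-ul2", params=params, credentials=creds)
```
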
17 changes: 7 additions & 10 deletions GETTING_STARTED.md
@@ -2,7 +2,6 @@

## <a name='TableofContents'></a>Table of Contents

<!-- vscode-markdown-toc -->
* [Table of Contents](#table-of-contents)
* [Installation](#installation)
* [Gen AI Endpoint](#gen-ai-endpoint)
@@ -23,10 +22,13 @@
```bash
pip install ibm-generative-ai
```

#### <a name='KnownIssueFixes:'></a>Known Issue Fixes:

- **[SSL Issue]** If you run into "SSL_CERTIFICATE_VERIFY_FAILED", please run the code snippet provided in [support](SUPPORT.md).

### <a name='Prerequisites'></a>Prerequisites

Python version >= 3.9

Pip version >= 22.0.1
@@ -71,13 +73,11 @@ creds = Credentials(api_key=my_api_key, api_endpoint=my_api_endpoint)

```


## <a name='Examples'></a>Examples

There are a number of examples you can try in the [`examples/user`](examples/user) directory.
Log in to [workbench.res.ibm.com](https://workbench.res.ibm.com/) and get your GenAI API key. Then create a `.env` file and assign the `GENAI_KEY` value as in the example below. [More information](#gen-ai-endpoint)


```ini
GENAI_KEY=YOUR_GENAI_API_KEY
# GENAI_API=GENAI_API_ENDPOINT << for a different endpoint
@@ -258,6 +258,7 @@ To learn more about logging in python, you can follow the tutorial [here](https:

Since generating responses for a large number of prompts can be time-consuming, and unforeseen circumstances such as internet connectivity issues can arise, here are some strategies to work with:

- Start with a small number of prompts to prototype the code. You can enable logging as described above for debugging during prototyping.
- Include exception handling in sensitive sections such as callbacks.
- Checkpoint/save prompts and received responses periodically.
@@ -292,10 +293,13 @@ us if you want support for some framework as an extension or want to design an e
### <a name='LangChainExtension'></a>LangChain Extension

Install the langchain extension as follows:

```bash
pip install "ibm-generative-ai[langchain]"
```

Currently, the LangChain extension allows IBM Generative AI models to be wrapped as LangChain LLMs and supports translation between genai PromptPatterns and LangChain PromptTemplates. Below are sample snippets:

```python
import os
from dotenv import load_dotenv
@@ -327,13 +331,6 @@ print(langchain_model(template.format(question="What is life?")))
print(genai_model.generate([pattern.sub("question", "What is life?")])[0].generated_text)
```

## <a name='[Deprecated] Model Types'></a>[Deprecated] Model Types

Model types can be imported from the [ModelType class](src/genai/schemas/models.py). If you want to use a model that is not included in this class, you can pass it as a string as exemplified [here](src/genai/schemas/models.py).

Models can be selected by passing their string id to the Model class as exemplified [here](src/genai/schemas/models.py).


## <a name='Support'></a>Support

Need help? Check out how to get [support](SUPPORT.md)
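
Putting the guide's pieces together, a hedged end-to-end sketch of the workflow this commit leaves in place, assuming a `.env` file with `GENAI_KEY` (and optionally `GENAI_API`) as described above:

```python
import os

from dotenv import load_dotenv

from genai.credentials import Credentials
from genai.model import Model
from genai.schemas import GenerateParams

# Reads GENAI_KEY (and optionally GENAI_API) from the .env file.
load_dotenv()
creds = Credentials(os.getenv("GENAI_KEY"), api_endpoint=os.getenv("GENAI_API"))

params = GenerateParams(decoding_method="sample", max_new_tokens=10)
model = Model("google/flan-ul2", params=params, credentials=creds)

for response in model.generate(["Hello! How are you?", "How's the weather?"]):
    print(response.generated_text)
```
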
7 changes: 0 additions & 7 deletions documentation/docs/source/rst_source/genai.schemas.models.rst

This file was deleted.

1 change: 0 additions & 1 deletion documentation/docs/source/rst_source/genai.schemas.rst
@@ -10,7 +10,6 @@ Submodules
genai.schemas.descriptions
genai.schemas.generate_params
genai.schemas.history_params
genai.schemas.models
genai.schemas.responses
genai.schemas.token_params
genai.schemas.tunes_params
4 changes: 2 additions & 2 deletions examples/dev/async-flaky-request-handler.py
@@ -7,7 +7,7 @@
from dotenv import load_dotenv

from genai.model import Credentials, Model
from genai.schemas import GenerateParams, ModelType, TokenParams
from genai.schemas import GenerateParams, TokenParams
from genai.services.connection_manager import ConnectionManager
from genai.services.request_handler import RequestHandler

@@ -80,7 +80,7 @@ async def flaky_async_generate(
tokenize_params = TokenParams(return_tokens=True)


flan_ul2 = Model(ModelType.FLAN_UL2, params=generate_params, credentials=creds)
flan_ul2 = Model("google/flan-ul2", params=generate_params, credentials=creds)
prompts = ["Generate a random number > {}: ".format(i) for i in range(25)]
for response in flan_ul2.generate_async(prompts, ordered=True):
pass
6 changes: 3 additions & 3 deletions examples/dev/async-flaky-responses-ordered.py
@@ -6,7 +6,7 @@
from dotenv import load_dotenv

from genai.model import Credentials, GenAiException, Model
from genai.schemas import GenerateParams, ModelType, TokenParams
from genai.schemas import GenerateParams, TokenParams
from genai.services.async_generator import AsyncResponseGenerator

num_requests = 0
@@ -83,7 +83,7 @@ def tokenize_async(self, prompts, ordered=False, callback=None, options=None):
tokenize_params = TokenParams(return_tokens=True)


flan_ul2 = FlakyModel(ModelType.FLAN_UL2_20B, params=generate_params, credentials=creds)
flan_ul2 = FlakyModel("google/flan-ul2", params=generate_params, credentials=creds)
prompts = ["Generate a random number > {}: ".format(i) for i in range(17)]
print("======== Async Generate with ordered=True ======== ")
counter = 0
@@ -97,7 +97,7 @@ def tokenize_async(self, prompts, ordered=False, callback=None, options=None):
num_requests = 0

# Instantiate a model proxy object to send your requests
flan_ul2 = FlakyModel(ModelType.FLAN_UL2_20B, params=tokenize_params, credentials=creds)
flan_ul2 = FlakyModel("google/flan-ul2", params=tokenize_params, credentials=creds)
prompts = ["Generate a random number > {}: ".format(i) for i in range(23)]
print("======== Async Tokenize with ordered=True ======== ")
counter = 0
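
The same string-id change applies to the tokenize path exercised above. A plain (non-flaky) sketch, assuming `tokenize_async` yields one result per prompt, with `.tokens` populated when `return_tokens=True` and `None` yielded for failed requests:

```python
import os

from dotenv import load_dotenv

from genai.model import Credentials, Model
from genai.schemas import TokenParams

load_dotenv()
creds = Credentials(os.getenv("GENAI_KEY"))

tokenize_params = TokenParams(return_tokens=True)
flan_ul2 = Model("google/flan-ul2", params=tokenize_params, credentials=creds)

prompts = ["Generate a random number > {}: ".format(i) for i in range(5)]
# ordered=True preserves prompt order, as in the flaky example above.
for result in flan_ul2.tokenize_async(prompts, ordered=True):
    if result is not None:  # assumption: failed requests yield None
        print(result.tokens)
```
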
5 changes: 3 additions & 2 deletions examples/dev/generate-all-models.py
@@ -3,7 +3,7 @@
from dotenv import load_dotenv

from genai.model import Credentials, Model
from genai.schemas import GenerateParams, ModelType
from genai.schemas import GenerateParams

# make sure you have a .env file under genai root with
# GENAI_KEY=<your-genai-key>
@@ -24,7 +24,8 @@
" during iteration it will do symb1 symb1 symb1 due to how it"
" maps internally. ===="
)
for key, modelid in ModelType.__members__.items():
for model_card in Model.models(credentials=creds):
modelid = model_card.id
model = Model(modelid, params=params, credentials=creds)
responses = [response.generated_text for response in model.generate(prompts)]
print(modelid, ":", responses)
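
This is the one spot where the enum removal changes behavior rather than just spelling: instead of iterating the hard-coded `ModelType.__members__`, the example now asks the service for its live model list. A standalone sketch of that call, assuming `GENAI_KEY` is set in a `.env` file:

```python
import os

from dotenv import load_dotenv

from genai.model import Credentials, Model

load_dotenv()
creds = Credentials(os.getenv("GENAI_KEY"))

# Each model card carries the string id that replaces the old enum members.
for model_card in Model.models(credentials=creds):
    print(model_card.id)
```
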
4 changes: 2 additions & 2 deletions examples/dev/logging_example.py
@@ -4,7 +4,7 @@
from dotenv import load_dotenv

from genai.model import Credentials, Model
from genai.schemas import GenerateParams, ModelType
from genai.schemas import GenerateParams

logging.basicConfig(level=logging.INFO)

@@ -22,7 +22,7 @@
params = GenerateParams(decoding_method="sample", max_new_tokens=10)

# Instantiate a model proxy object to send your requests
flan_ul2 = Model(ModelType.FLAN_UL2, params=params, credentials=creds)
flan_ul2 = Model("google/flan-ul2", params=params, credentials=creds)

prompts = ["Hello! How are you?", "How's the weather?"]
for response in flan_ul2.generate_async(prompts):
7 changes: 4 additions & 3 deletions examples/user/prompt_templating/watsonx-prompt-output.py
@@ -2,9 +2,10 @@

from dotenv import load_dotenv

from genai.model import Credentials, Model
from genai.credentials import Credentials
from genai.model import Model
from genai.prompt_pattern import PromptPattern
from genai.schemas import GenerateParams, ModelType
from genai.schemas import GenerateParams

# make sure you have a .env file under genai root with
# GENAI_KEY=<your-genai-key>
@@ -15,7 +16,7 @@
creds = Credentials(api_key, api_endpoint=api_url)
params = GenerateParams(temperature=0.5)

model = Model(ModelType.FLAN_UL2, params=params, credentials=creds)
model = Model("google/flan-ul2", params=params, credentials=creds)


_template = """
@@ -3,10 +3,11 @@

from dotenv import load_dotenv

from genai.model import Credentials, Model
from genai.credentials import Credentials
from genai.model import Model
from genai.options import Options
from genai.prompt_pattern import PromptPattern
from genai.schemas import GenerateParams, ModelType
from genai.schemas import GenerateParams

# make sure you have a .env file under genai root with
# GENAI_KEY=<your-genai-key>
@@ -18,7 +19,7 @@
creds = Credentials(api_key, api_endpoint=api_url)
params = GenerateParams(temperature=0.5)

model = Model(ModelType.FLAN_UL2, params=params, credentials=creds)
model = Model("google/flan-ul2", params=params, credentials=creds)


_template = """
7 changes: 4 additions & 3 deletions examples/user/prompt_templating/watsonx-prompt-pattern-ux.py
@@ -3,10 +3,11 @@

from dotenv import load_dotenv

from genai.model import Credentials, Model
from genai.credentials import Credentials
from genai.model import Model
from genai.options import Options
from genai.prompt_pattern import PromptPattern
from genai.schemas import GenerateParams, ModelType
from genai.schemas import GenerateParams

# make sure you have a .env file under genai root with
# GENAI_KEY=<your-genai-key>
@@ -18,7 +19,7 @@
creds = Credentials(api_key, api_endpoint=api_url)
params = GenerateParams(temperature=0.5)

model = Model(ModelType.FLAN_UL2, params=params, credentials=creds)
model = Model("google/flan-ul2", params=params, credentials=creds)


_template = """
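
All three prompt-templating examples change in the same way; only the model constructor differs, and the `PromptPattern` plumbing is untouched. A minimal sketch of that plumbing; `PromptPattern.from_str` and the `{{variable}}` placeholder syntax are assumptions about this library version's API (only `pattern.sub` appears in the diff above):

```python
import os

from dotenv import load_dotenv

from genai.credentials import Credentials
from genai.model import Model
from genai.prompt_pattern import PromptPattern
from genai.schemas import GenerateParams

load_dotenv()
creds = Credentials(os.getenv("GENAI_KEY"), api_endpoint=os.getenv("GENAI_API"))

model = Model("google/flan-ul2", params=GenerateParams(temperature=0.5), credentials=creds)

# Assumed API: from_str builds a pattern, sub fills a {{variable}} slot.
pattern = PromptPattern.from_str("Answer the question briefly: {{question}}")
prompt = pattern.sub("question", "What is life?")
print(model.generate([prompt])[0].generated_text)
```
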
10 changes: 5 additions & 5 deletions src/genai/extensions/langchain/llm.py
@@ -1,6 +1,6 @@
"""Wrapper around IBM GENAI APIs for use in langchain"""
import logging
from typing import Any, List, Mapping, Optional, Union
from typing import Any, List, Mapping, Optional

from pydantic import BaseModel, Extra

@@ -11,7 +11,7 @@
raise ImportError("Could not import langchain: Please install ibm-generative-ai[langchain] extension.")

from genai import Credentials, Model
from genai.schemas import GenerateParams, ModelType
from genai.schemas import GenerateParams

logger = logging.getLogger(__name__)

@@ -28,11 +28,11 @@ class LangChainInterface(LLM, BaseModel):
parameter, which is an instance of GenerateParams.
Example:
.. code-block:: python
llm = LangChainInterface(model=ModelType.FLAN_UL2, credentials=creds)
llm = LangChainInterface(model="google/flan-ul2", credentials=creds)
"""

credentials: Credentials = None
model: Optional[Union[ModelType, str]] = None
model: Optional[str] = None
params: Optional[GenerateParams] = None

class Config:
@@ -63,7 +63,7 @@ def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
The string generated by the model.
Example:
.. code-block:: python
llm = LangChainInterface(model_id=ModelType.FLAN_UL2, credentials=creds)
llm = LangChainInterface(model_id="google/flan-ul2", credentials=creds)
response = llm("What is a molecule")
"""
params = self.params or GenerateParams()
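
The docstring examples above now advertise string ids as well. A short usage sketch of the wrapper; the import path `genai.extensions.langchain` is an assumption based on this file's location:

```python
import os

from dotenv import load_dotenv

from genai.credentials import Credentials
from genai.extensions.langchain import LangChainInterface  # assumed import path
from genai.schemas import GenerateParams

load_dotenv()
creds = Credentials(os.getenv("GENAI_KEY"))

llm = LangChainInterface(
    model="google/flan-ul2",  # plain string id, no enum
    params=GenerateParams(max_new_tokens=25),
    credentials=creds,
)
print(llm("What is a molecule?"))
```
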
6 changes: 3 additions & 3 deletions src/genai/model.py
@@ -10,7 +10,7 @@
from genai.metadata import Metadata
from genai.options import Options
from genai.prompt_pattern import PromptPattern
from genai.schemas import GenerateParams, ModelType, TokenParams
from genai.schemas import GenerateParams, TokenParams
from genai.schemas.responses import (
GenerateResponse,
GenerateResult,
@@ -38,14 +38,14 @@ class Model:

def __init__(
self,
model: Union[ModelType, str],
model: str,
params: Union[GenerateParams, TokenParams, Any] = None,
credentials: Credentials = None,
):
"""Instantiates the Model Interface
Args:
model (Union[ModelType, str]): The type of model to use
model (str): The type of model to use
params (Union[GenerateParams, TokenParams]): Parameters to use during generate requests
credentials (Credentials): The API Credentials
"""
2 changes: 0 additions & 2 deletions src/genai/schemas/__init__.py
@@ -7,7 +7,6 @@
ReturnOptions,
)
from genai.schemas.history_params import HistoryParams
from genai.schemas.models import ModelType
from genai.schemas.responses import GenerateResult, TokenizeResult
from genai.schemas.token_params import TokenParams
from genai.schemas.tunes_params import (
@@ -24,7 +23,6 @@
"ReturnOptions",
"TokenParams",
"HistoryParams",
"ModelType",
"GenerateResult",
"TokenizeResult",
"FileListParams",
31 changes: 0 additions & 31 deletions src/genai/schemas/models.py

This file was deleted.

5 changes: 2 additions & 3 deletions src/genai/schemas/responses.py
@@ -6,7 +6,6 @@
from pydantic import BaseModel, Extra, root_validator

from genai.schemas.generate_params import GenerateParams
from genai.schemas.models import ModelType

logger = logging.getLogger(__name__)

@@ -76,7 +75,7 @@ class GenerateResult(GenAiResponseModel):


class GenerateResponse(GenAiResponseModel):
model_id: Union[ModelType, str]
model_id: str
created_at: datetime
results: List[GenerateResult]

@@ -98,7 +97,7 @@ class TokenizeResult(GenAiResponseModel):


class TokenizeResponse(GenAiResponseModel):
model_id: Union[ModelType, str]
model_id: str
created_at: datetime
results: List[TokenizeResult]

2 changes: 1 addition & 1 deletion src/genai/services/async_generator.py
@@ -21,7 +21,7 @@ def __init__(
"""Instantiates the ConcurrentWrapper Interface.
Args:
model_id (ModelType): The type of model to use
model_id (str): The type of model to use
prompts (list): List of prompts
params (GenerateParams): Parameters to use during generate requests
service (ServiceInterface): The service interface