Merge branch 'main' into fix-anthropic-reask
jxnl committed Apr 3, 2024
2 parents 025d02f + 1f1cb5e commit b4c64fa
Showing 14 changed files with 436 additions and 104 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -283,9 +283,9 @@ for user in users:

![iterable](./docs/blog/posts/img/iterable.png)

## [Evals](https://github.com/jxnl/instructor/tree/main/tests/openai/evals)
## [Evals](https://github.com/jxnl/instructor/tree/main/tests/llm/test_openai/evals#how-to-contribute-writing-and-running-evaluation-tests)

We invite you to contribute to evals in `pytest` as a way to monitor the quality of the OpenAI models and the `instructor` library. To get started check out the [jxnl/instructor/tests/evals](https://github.com/jxnl/instructor/tree/main/tests/openai/evals) and contribute your own evals in the form of pytest tests. These evals will be run once a week and the results will be posted.
We invite you to contribute to evals in `pytest` as a way to monitor the quality of the OpenAI models and the `instructor` library. To get started check out the evals for [anthropic](https://github.com/jxnl/instructor/blob/main/tests/llm/test_anthropic/evals/test_simple.py) and [OpenAI](https://github.com/jxnl/instructor/tree/main/tests/llm/test_openai/evals#how-to-contribute-writing-and-running-evaluation-tests) and contribute your own evals in the form of pytest tests. These evals will be run once a week and the results will be posted.
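
For orientation, here is a minimal sketch of the kind of `pytest` eval a contribution could add, assuming an OpenAI client patched with `instructor`; the model name and extraction task are illustrative, not taken from the existing suites:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

# Patch the OpenAI client so create() accepts a response_model.
client = instructor.from_openai(OpenAI())


class User(BaseModel):
    name: str
    age: int


def test_extracts_simple_user():
    # The eval passes if the model reliably extracts the structured fields.
    user = client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=User,
        messages=[{"role": "user", "content": "Extract: Jason is 25 years old."}],
    )
    assert user.name.lower() == "jason"
    assert user.age == 25
```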

## Contributing

59 changes: 59 additions & 0 deletions docs/concepts/retrying.md
@@ -175,3 +175,62 @@ Tenacity features a huge number of different retrying capabilities. A few of the
- `Retrying(wait=(wait_fixed(1) + wait_random(0.2)))`: Wait at least 1 second and add up to 0.2 seconds

Remember that for async clients you need to use `AsyncRetrying` instead of `Retrying`!
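
For reference, here is a minimal sketch of retrying with an async client, assuming `AsyncOpenAI` patched via `instructor.from_openai`; the model and prompt are illustrative:

```python
import asyncio

import instructor
import tenacity
from openai import AsyncOpenAI
from pydantic import BaseModel

client = instructor.from_openai(AsyncOpenAI())


class User(BaseModel):
    name: str
    age: int


async def extract_user() -> User:
    return await client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=User,
        # Async clients take an AsyncRetrying instance instead of Retrying.
        max_retries=tenacity.AsyncRetrying(stop=tenacity.stop_after_attempt(3)),
        messages=[{"role": "user", "content": "Extract: Jason is 18 years old."}],
    )


if __name__ == "__main__":
    print(asyncio.run(extract_user()))
```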


## Retry Callbacks

You can also define callbacks to be called before and after each attempt. This is useful for logging or debugging.

```python
from pydantic import BaseModel, field_validator
from openai import OpenAI
import instructor
import tenacity

client = OpenAI()
client = instructor.from_openai(client)


class User(BaseModel):
name: str
age: int

@field_validator("name")
def name_is_uppercase(cls, v: str):
assert v.isupper(), "Name must be uppercase"
return v


resp = client.messages.create(
model="gpt-3.5-turbo",
max_tokens=1024,
max_retries=tenacity.Retrying(
stop=tenacity.stop_after_attempt(3),
before=lambda _: print("before:", _),
after=lambda _: print("after:", _),
),
messages=[
{
"role": "user",
"content": "Extract John is 18 years old.",
}
],
response_model=User,
) # type: ignore

assert isinstance(resp, User)
assert resp.name == "JOHN" # due to validation
assert resp.age == 18
print(resp)

"""
before: <RetryCallState 4421908816: attempt #1; slept for 0.0; last result: none yet>
after: <RetryCallState 4421908816: attempt #1; slept for 0.0; last result: failed (ValidationError 1 validation error for User
name
Assertion failed, Name must be uppercase [type=assertion_error, input_value='John', input_type=str]
For further information visit https://errors.pydantic.dev/2.6/v/assertion_error)>
before: <RetryCallState 4421908816: attempt #2; slept for 0.0; last result: none yet>
name='JOHN' age=18
"""
```
67 changes: 67 additions & 0 deletions docs/examples/groq.md
@@ -0,0 +1,67 @@
# Structured Outputs using Groq
Instead of using OpenAI or Anthropic, you can now also use Groq for inference via `from_groq`.

The examples below use the `mixtral-8x7b` model.

## GroqCloud API
To use Groq you need a Groq API key.
Go to [GroqCloud](https://console.groq.com) and log in. Select API Keys from the left menu, then select Create API Key to create a new key.

## Usage example
Install the packages needed for the example:
```
pip install instructor groq pydantic openai anthropic
```
Export your Groq API key:
```
export GROQ_API_KEY=<your-api-key>
```

An example:
```python
import os
from pydantic import BaseModel, Field
from typing import List
from groq import Groq
import instructor

class Character(BaseModel):
name: str
fact: List[str] = Field(..., description="A list of facts about the subject")


client = Groq(
api_key=os.environ.get('GROQ_API_KEY'),
)

client = instructor.from_groq(client, mode=instructor.Mode.JSON)

resp = client.chat.completions.create(
model="mixtral-8x7b-32768",
messages=[
{
"role": "user",
"content": "Tell me about the company Tesla",
}
],
response_model=Character,
)
print(resp.model_dump_json(indent=2))
"""
{
"name": "Tesla",
"fact": [
"An American electric vehicle and clean energy company.",
"Co-founded by Elon Musk, JB Straubel, Martin Eberhard, Marc Tarpenning, and Ian Wright in 2003.",
"Headquartered in Austin, Texas.",
"Produces electric vehicles, energy storage solutions, and more recently, solar energy products.",
"Known for its premium electric vehicles, such as the Model S, Model 3, Model X, and Model Y.",
"One of the world's most valuable car manufacturers by market capitalization.",
"Tesla's CEO, Elon Musk, is also the CEO of SpaceX, Neuralink, and The Boring Company.",
"Tesla operates the world's largest global network of electric vehicle supercharging stations.",
"The company aims to accelerate the world's transition to sustainable transport and energy through innovative technologies and products."
]
}
"""
```
You can find another example, `groq_example2.py`, under `examples/groq` in this repository.
1 change: 1 addition & 0 deletions docs/examples/index.md
@@ -17,5 +17,6 @@
13. [How to generate advertising copy from image inputs](image_to_ad_copy.md)
14. [How to use local models from Ollama](ollama.md)
15. [How to store responses in a database with SQLModel](sqlmodel.md)
16. [How to use the GroqCloud API](groq.md)

Explore more!
45 changes: 45 additions & 0 deletions examples/groq/groq_example.py
@@ -0,0 +1,45 @@
import os
from pydantic import BaseModel, Field
from typing import List
from groq import Groq
import instructor


class Character(BaseModel):
name: str
fact: List[str] = Field(..., description="A list of facts about the subject")


client = Groq(
api_key=os.environ.get("GROQ_API_KEY"),
)

client = instructor.from_groq(client, mode=instructor.Mode.JSON)

resp = client.chat.completions.create(
model="mixtral-8x7b-32768",
messages=[
{
"role": "user",
"content": "Tell me about the company Tesla",
}
],
response_model=Character,
)
print(resp.model_dump_json(indent=2))
"""
{
"name": "Tesla",
"fact": [
"An American electric vehicle and clean energy company.",
"Co-founded by Elon Musk, JB Straubel, Martin Eberhard, Marc Tarpenning, and Ian Wright in 2003.",
"Headquartered in Austin, Texas.",
"Produces electric vehicles, energy storage solutions, and more recently, solar energy products.",
"Known for its premium electric vehicles, such as the Model S, Model 3, Model X, and Model Y.",
"One of the world's most valuable car manufacturers by market capitalization.",
"Tesla's CEO, Elon Musk, is also the CEO of SpaceX, Neuralink, and The Boring Company.",
"Tesla operates the world's largest global network of electric vehicle supercharging stations.",
"The company aims to accelerate the world's transition to sustainable transport and energy through innovative technologies and products."
]
}
"""
37 changes: 37 additions & 0 deletions examples/groq/groq_example2.py
@@ -0,0 +1,37 @@
import os
from pydantic import BaseModel, Field
from typing import List
from groq import Groq
import instructor

client = Groq(
api_key=os.environ.get("GROQ_API_KEY"),
)

client = instructor.from_groq(client, mode=instructor.Mode.JSON)


class UserExtract(BaseModel):
name: str
age: int


user: UserExtract = client.chat.completions.create(
model="mixtral-8x7b-32768",
response_model=UserExtract,
messages=[
{"role": "user", "content": "Extract jason is 25 years old"},
],
)

assert isinstance(user, UserExtract), "Should be instance of UserExtract"
assert user.name.lower() == "jason"
assert user.age == 25

print(user.model_dump_json(indent=2))
"""
{
"name": "jason",
"age": 25
}
"""
51 changes: 51 additions & 0 deletions examples/retry/run.py
@@ -0,0 +1,51 @@
from pydantic import BaseModel, field_validator
from openai import OpenAI
import instructor
import tenacity

client = OpenAI()
client = instructor.from_openai(client)


class User(BaseModel):
name: str
age: int

@field_validator("name")
def name_is_uppercase(cls, v: str):
assert v.isupper(), "Name must be uppercase"
return v


resp = client.messages.create(
model="gpt-3.5-turbo",
max_tokens=1024,
max_retries=tenacity.Retrying(
stop=tenacity.stop_after_attempt(3),
before=lambda _: print("before:", _),
after=lambda _: print("after:", _),
),
messages=[
{
"role": "user",
"content": "Extract John is 18 years old.",
}
],
response_model=User,
) # type: ignore

assert isinstance(resp, User)
assert resp.name == "JOHN" # due to validation
assert resp.age == 18
print(resp)

"""
before: <RetryCallState 4421908816: attempt #1; slept for 0.0; last result: none yet>
after: <RetryCallState 4421908816: attempt #1; slept for 0.0; last result: failed (ValidationError 1 validation error for User
name
Assertion failed, Name must be uppercase [type=assertion_error, input_value='John', input_type=str]
For further information visit https://errors.pydantic.dev/2.6/v/assertion_error)>
before: <RetryCallState 4421908816: attempt #2; slept for 0.0; last result: none yet>
name='JOHN' age=18
"""
28 changes: 26 additions & 2 deletions instructor/__init__.py
@@ -12,13 +12,21 @@
from .function_calls import OpenAISchema, openai_schema
from .patch import apatch, patch
from .process_response import handle_parallel_model
from .client import Instructor, from_openai, from_anthropic, from_litellm
from .client import (
Instructor,
AsyncInstructor,
from_openai,
from_litellm,
Provider,
)


__all__ = [
"Instructor",
"from_openai",
"from_anthropic",
"from_litellm",
"AsyncInstructor",
"Provider",
"OpenAISchema",
"CitationMixin",
"IterableModel",
@@ -35,3 +43,19 @@
"handle_parallel_model",
"handle_response_model",
]

try:
import anthropic
from .client_anthropic import from_anthropic

__all__.append("from_anthropic")
except ImportError:
pass

try:
import groq
from .client_groq import from_groq

__all__.append("from_groq")
except ImportError:
pass