feat(api): add optional name argument + improve docs #972

Merged (1 commit) on Dec 15, 2023
8 changes: 6 additions & 2 deletions src/openai/resources/audio/speech.py
@@ -53,7 +53,9 @@ def create(
`tts-1` or `tts-1-hd`

voice: The voice to use when generating the audio. Supported voices are `alloy`,
- `echo`, `fable`, `onyx`, `nova`, and `shimmer`.
+ `echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are
+ available in the
+ [Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options).

response_format: The format to audio in. Supported formats are `mp3`, `opus`, `aac`, and `flac`.

@@ -120,7 +122,9 @@ async def create(
`tts-1` or `tts-1-hd`

voice: The voice to use when generating the audio. Supported voices are `alloy`,
- `echo`, `fable`, `onyx`, `nova`, and `shimmer`.
+ `echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are
+ available in the
+ [Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options).

response_format: The format to audio in. Supported formats are `mp3`, `opus`, `aac`, and `flac`.

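For context on the `voice` parameter documented above, here is a minimal usage sketch (not part of this diff), assuming an openai-python 1.x client with `OPENAI_API_KEY` set; the model, voice, input text, and output path are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # one of: alloy, echo, fable, onyx, nova, shimmer
    input="Hello! This is a text-to-speech preview.",
)

# The default response format is mp3; write the audio bytes to disk.
response.stream_to_file("speech.mp3")
```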
112 changes: 62 additions & 50 deletions src/openai/resources/chat/completions.py

Large diffs are not rendered by default.

24 changes: 12 additions & 12 deletions src/openai/resources/completions.py
@@ -103,7 +103,7 @@ def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -143,7 +143,7 @@ def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
@@ -272,7 +272,7 @@ def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -312,7 +312,7 @@ def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
@@ -434,7 +434,7 @@ def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -474,7 +474,7 @@ def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
@@ -671,7 +671,7 @@ async def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -711,7 +711,7 @@ async def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
@@ -840,7 +840,7 @@ async def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -880,7 +880,7 @@ async def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
@@ -1002,7 +1002,7 @@ async def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -1042,7 +1042,7 @@ async def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

- [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
+ [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
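The updated links all concern `frequency_penalty` and `presence_penalty` on the legacy completions endpoint. A minimal sketch of passing them together with `seed` (the model, prompt, and values are illustrative, not part of this diff):

```python
from openai import OpenAI

client = OpenAI()

completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # illustrative model choice
    prompt="Write a tagline for an ice cream shop.",
    max_tokens=32,
    frequency_penalty=0.5,  # discourage repeating the same lines verbatim
    presence_penalty=0.5,   # encourage the model to move to new topics
    seed=1234,              # best-effort deterministic sampling
)
print(completion.choices[0].text)
```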
6 changes: 4 additions & 2 deletions src/openai/resources/embeddings.py
@@ -51,7 +51,8 @@ def create(
input: Input text to embed, encoded as a string or array of tokens. To embed multiple
inputs in a single request, pass an array of strings or array of token arrays.
The input must not exceed the max input tokens for the model (8192 tokens for
- `text-embedding-ada-002`) and cannot be an empty string.
+ `text-embedding-ada-002`), cannot be an empty string, and any array must be 2048
+ dimensions or less.
[Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
for counting tokens.

@@ -144,7 +145,8 @@ async def create(
input: Input text to embed, encoded as a string or array of tokens. To embed multiple
inputs in a single request, pass an array of strings or array of token arrays.
The input must not exceed the max input tokens for the model (8192 tokens for
- `text-embedding-ada-002`) and cannot be an empty string.
+ `text-embedding-ada-002`), cannot be an empty string, and any array must be 2048
+ dimensions or less.
[Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
for counting tokens.

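A short sketch of a batched request that respects the constraints described above (non-empty strings, at most 2048 inputs per array); the input strings are placeholders:

```python
from openai import OpenAI

client = OpenAI()

inputs = ["first document", "second document"]  # each non-empty, at most 2048 items

response = client.embeddings.create(
    model="text-embedding-ada-002",
    input=inputs,
)
vectors = [item.embedding for item in response.data]
print(len(vectors), len(vectors[0]))  # number of inputs, embedding dimension
```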
16 changes: 8 additions & 8 deletions src/openai/resources/files.py
@@ -46,12 +46,12 @@ def create(
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> FileObject:
- """Upload a file that can be used across various endpoints/features.
+ """Upload a file that can be used across various endpoints.

- The size of
- all the files uploaded by one organization can be up to 100 GB.
+ The size of all the
+ files uploaded by one organization can be up to 100 GB.

- The size of individual files for can be a maximum of 512MB. See the
+ The size of individual files can be a maximum of 512 MB. See the
[Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) to
learn more about the types of files supported. The Fine-tuning API only supports
`.jsonl` files.
@@ -309,12 +309,12 @@ async def create(
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> FileObject:
"""Upload a file that can be used across various endpoints/features.
"""Upload a file that can be used across various endpoints.

- The size of
- all the files uploaded by one organization can be up to 100 GB.
+ The size of all the
+ files uploaded by one organization can be up to 100 GB.

- The size of individual files for can be a maximum of 512MB. See the
+ The size of individual files can be a maximum of 512 MB. See the
[Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) to
learn more about the types of files supported. The Fine-tuning API only supports
`.jsonl` files.
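For reference, uploading a fine-tuning file as described in the docstring might look like the sketch below; the path is a placeholder, and the file must be `.jsonl` for the Fine-tuning API:

```python
from openai import OpenAI

client = OpenAI()

# "training_data.jsonl" is a placeholder; each line is one JSON training example.
uploaded = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)
print(uploaded.id, uploaded.status)
```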
2 changes: 2 additions & 0 deletions src/openai/types/audio/speech_create_params.py
@@ -22,6 +22,8 @@ class SpeechCreateParams(TypedDict, total=False):
"""The voice to use when generating the audio.

Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`.
+ Previews of the voices are available in the
+ [Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options).
"""

response_format: Literal["mp3", "opus", "aac", "flac"]
src/openai/types/chat/chat_completion_assistant_message_param.py
@@ -24,18 +24,28 @@ class FunctionCall(TypedDict, total=False):


class ChatCompletionAssistantMessageParam(TypedDict, total=False):
- content: Required[Optional[str]]
- """The contents of the assistant message."""

role: Required[Literal["assistant"]]
"""The role of the messages author, in this case `assistant`."""

+ content: Optional[str]
+ """The contents of the assistant message.
+
+ Required unless `tool_calls` or `function_call` is specified.
+ """

function_call: FunctionCall
"""Deprecated and replaced by `tool_calls`.

The name and arguments of a function that should be called, as generated by the
model.
"""

+ name: str
+ """An optional name for the participant.
+
+ Provides the model information to differentiate between participants of the same
+ role.
+ """

tool_calls: List[ChatCompletionMessageToolCallParam]
"""The tool calls generated by the model, such as function calls."""
src/openai/types/chat/chat_completion_content_part_image_param.py
@@ -12,7 +12,11 @@ class ImageURL(TypedDict, total=False):
"""Either a URL of the image or the base64 encoded image data."""

detail: Literal["auto", "low", "high"]
- """Specifies the detail level of the image."""
+ """Specifies the detail level of the image.
+
+ Learn more in the
+ [Vision guide](https://platform.openai.com/docs/guides/vision/low-or-high-fidelity-image-understanding).
+ """


class ChatCompletionContentPartImageParam(TypedDict, total=False):
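A sketch of the `detail` field in a vision request; the model name, image URL, and prompt are illustrative and not part of this diff:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # illustrative vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/photo.jpg",  # placeholder URL
                        "detail": "low",  # "auto", "low", or "high"
                    },
                },
            ],
        }
    ],
    max_tokens=100,
)
print(response.choices[0].message.content)
```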
src/openai/types/chat/chat_completion_function_message_param.py
@@ -2,15 +2,14 @@

from __future__ import annotations

- from typing import Optional
from typing_extensions import Literal, Required, TypedDict

__all__ = ["ChatCompletionFunctionMessageParam"]


class ChatCompletionFunctionMessageParam(TypedDict, total=False):
- content: Required[Optional[str]]
- """The return value from the function call, to return to the model."""
+ content: Required[str]
+ """The contents of the function message."""

name: Required[str]
"""The name of the function to call."""
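The function-message shape documented here belongs to the deprecated function-calling flow (superseded by tools). A minimal illustrative message pair with a plain-string `content`, as now required; the function name and arguments are made up:

```python
# Deprecated function-calling flow: the function result is sent back as a
# "function" role message whose content is a plain string.
messages_tail = [
    {
        "role": "assistant",
        "content": None,
        "function_call": {"name": "get_weather", "arguments": '{"city": "Berlin"}'},
    },
    {"role": "function", "name": "get_weather", "content": '{"temp_c": 4}'},
]
```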
src/openai/types/chat/chat_completion_named_tool_choice_param.py
@@ -13,7 +13,7 @@ class Function(TypedDict, total=False):


class ChatCompletionNamedToolChoiceParam(TypedDict, total=False):
- function: Function
+ function: Required[Function]

- type: Literal["function"]
+ type: Required[Literal["function"]]
"""The type of the tool. Currently, only `function` is supported."""
10 changes: 8 additions & 2 deletions src/openai/types/chat/chat_completion_system_message_param.py
@@ -2,15 +2,21 @@

from __future__ import annotations

- from typing import Optional
from typing_extensions import Literal, Required, TypedDict

__all__ = ["ChatCompletionSystemMessageParam"]


class ChatCompletionSystemMessageParam(TypedDict, total=False):
- content: Required[Optional[str]]
+ content: Required[str]
"""The contents of the system message."""

role: Required[Literal["system"]]
"""The role of the messages author, in this case `system`."""

+ name: str
+ """An optional name for the participant.
+
+ Provides the model information to differentiate between participants of the same
+ role.
+ """
src/openai/types/chat/chat_completion_tool_message_param.py
@@ -2,14 +2,13 @@

from __future__ import annotations

- from typing import Optional
from typing_extensions import Literal, Required, TypedDict

__all__ = ["ChatCompletionToolMessageParam"]


class ChatCompletionToolMessageParam(TypedDict, total=False):
- content: Required[Optional[str]]
+ content: Required[str]
"""The contents of the tool message."""

role: Required[Literal["tool"]]
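Taken together, the stricter `ChatCompletionNamedToolChoiceParam`, the relaxed assistant `content`, and the string-typed tool `content` line up in a typical tool-use round trip. A hypothetical sketch, not part of this diff: the `get_weather` tool, call id, arguments, and model name are all made up for illustration:

```python
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

first = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",  # illustrative model choice
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
    # Named tool choice: both "type" and "function" are now Required keys.
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
tool_call = first.choices[0].message.tool_calls[0]

followup = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[
        {"role": "user", "content": "What's the weather in Berlin?"},
        {
            # Assistant turn carrying tool_calls may omit text content.
            "role": "assistant",
            "content": None,
            "tool_calls": [
                {
                    "id": tool_call.id,
                    "type": "function",
                    "function": {
                        "name": tool_call.function.name,
                        "arguments": tool_call.function.arguments,
                    },
                }
            ],
        },
        # Tool result: content is a plain string, matched by tool_call_id.
        {"role": "tool", "tool_call_id": tool_call.id, "content": '{"temp_c": 4}'},
    ],
    tools=tools,
)
print(followup.choices[0].message.content)
```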
src/openai/types/chat/chat_completion_user_message_param.py
@@ -11,8 +11,15 @@


class ChatCompletionUserMessageParam(TypedDict, total=False):
- content: Required[Union[str, List[ChatCompletionContentPartParam], None]]
+ content: Required[Union[str, List[ChatCompletionContentPartParam]]]
"""The contents of the user message."""

role: Required[Literal["user"]]
"""The role of the messages author, in this case `user`."""

+ name: str
+ """An optional name for the participant.
+
+ Provides the model information to differentiate between participants of the same
+ role.
+ """
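Finally, a sketch of the optional `name` field this PR adds to system, user, and assistant messages, used here to distinguish two speakers with the same role; the participant names and prompts are illustrative:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are moderating a debate."},
        {"role": "user", "name": "alice", "content": "Cats are better than dogs."},
        {"role": "user", "name": "bob", "content": "Dogs are better than cats."},
        {"role": "user", "name": "alice", "content": "Who has the stronger case so far?"},
    ],
)
print(response.choices[0].message.content)
```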