fix(openai_dart): Add missing name param in ChatCompletionMessage (#222)

> An optional name for the participant. Provides the model information to differentiate between participants of the same role.
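
For illustration, a minimal sketch of the new parameter in use; the `ChatCompletionModel.modelId` factory and client setup are assumed from the package README, and the participant names are hypothetical:

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient(apiKey: 'OPENAI_API_KEY');

  final res = await client.createChatCompletion(
    request: CreateChatCompletionRequest(
      model: ChatCompletionModel.modelId('gpt-4'),
      messages: [
        // Two participants share the `user` role; `name` tells them apart.
        ChatCompletionMessage.user(
          content: ChatCompletionUserMessageContent.string('I vote for tabs.'),
          name: 'alice', // hypothetical participant name
        ),
        ChatCompletionMessage.user(
          content: ChatCompletionUserMessageContent.string('I vote for spaces.'),
          name: 'bob', // hypothetical participant name
        ),
      ],
    ),
  );
  print(res.choices.first.message.content);
}
```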
davidmigloz committed Nov 16, 2023
1 parent 95369e4 commit 6f18677
Showing 13 changed files with 748 additions and 514 deletions.
60 changes: 45 additions & 15 deletions packages/openai_dart/README.md
@@ -10,18 +10,42 @@ Unofficial Dart client for [OpenAI](https://platform.openai.com/docs/api-referen
## Features

- Generated from the official OpenAI [OpenAPI specification](https://github.com/openai/openai-openapi)
- Fully type-safe, documented and tested
- Fully type-safe, [documented](https://pub.dev/documentation/openai_dart/latest/) and tested
- All platforms supported (including streaming on web)
- Authentication with organization support
- Custom base URL and headers support (e.g. HTTP proxies)
- Custom HTTP client support (e.g. SOCKS5 proxies or advanced use cases)
- Endpoints:
* Chat (with functions and streaming support)
* Completions (with streaming support)
* Embeddings
* Fine-tuning
* Images
* Models
* Moderations

**Supported endpoints:**

- Chat (with functions and streaming support)
- Completions (with streaming support)
- Embeddings
- Fine-tuning
- Images
- Models
- Moderations

## Table of contents

- [Usage](#usage)
  * [Authentication](#authentication)
    + [Organization (optional)](#organization-optional)
  * [Chat](#chat)
  * [Completions](#completions)
  * [Embeddings](#embeddings)
  * [Fine-tuning](#fine-tuning)
  * [Images](#images)
  * [Models](#models)
  * [Moderations](#moderations)
- [Advanced usage](#advanced-usage)
  * [Default HTTP client](#default-http-client)
  * [Custom HTTP client](#custom-http-client)
  * [Using a proxy](#using-a-proxy)
    + [HTTP proxy](#http-proxy)
    + [SOCKS5 proxy](#socks5-proxy)
- [Acknowledgements](#acknowledgements)
- [License](#license)

## Usage

@@ -84,6 +108,12 @@ print(res.choices.first.message.content);
- `ChatCompletionMessage.tool()`: a tool message.
- `ChatCompletionMessage.function()`: a function message.

`ChatCompletionMessage.user()` takes a `ChatCompletionUserMessageContent` object that supports the following content types (a sketch follows the list):
- `ChatCompletionUserMessageContent.string('content')`: string content.
- `ChatCompletionUserMessageContent.parts([...])`: multi-modal content (check the 'Multi-modal prompt' section below).
  * `ChatCompletionMessageContentPart.text('content')`: text content.
  * `ChatCompletionMessageContentPart.image(imageUrl: ...)`: image content.

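A hedged sketch of the multi-modal form; the `gpt-4-vision-preview` model id, the image URL, and the exact `text:`/`imageUrl:` parameter names are assumptions:

```dart
final res = await client.createChatCompletion(
  request: CreateChatCompletionRequest(
    model: ChatCompletionModel.modelId('gpt-4-vision-preview'),
    messages: [
      ChatCompletionMessage.user(
        content: ChatCompletionUserMessageContent.parts([
          ChatCompletionMessageContentPart.text(
            text: 'What fruit is in this image?',
          ),
          ChatCompletionMessageContentPart.image(
            imageUrl: ChatCompletionMessageImageUrl(
              url: 'https://example.com/fruit.png', // placeholder image URL
            ),
          ),
        ]),
      ),
    ],
  ),
);
print(res.choices.first.message.content);
```
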
**Stream chat completion:**

```dart
@@ -535,16 +565,16 @@ print(res.results.first.categoryScores.violence);
- `ModerationInput.string('input')`: the input as string.
- `ModerationInput.listString(['input'])`: batch of string inputs.

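A minimal sketch of a moderation call, assuming the `ModerationModel.model` factory and the `textModerationLatest` enum value from the generated API:

```dart
final res = await client.createModeration(
  request: CreateModerationRequest(
    model: ModerationModel.model(ModerationModels.textModerationLatest),
    input: ModerationInput.string('I want to kill them.'),
  ),
);
print(res.results.first.flagged);
```
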
### Advance
## Advanced usage

#### Default HTTP client
### Default HTTP client

By default, the client uses the following implementation of `http.Client`:

- Non-web: [`IOClient`](https://pub.dev/documentation/http/latest/io_client/IOClient-class.html)
- Web: [`FetchClient`](https://pub.dev/documentation/fetch_client/latest/fetch_client/FetchClient-class.html) (to support streaming on web)

#### Custom HTTP client
### Custom HTTP client

You can always provide your own implementation of `http.Client` for further customization:

@@ -555,9 +585,9 @@ final client = OpenAIClient(
);
```
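
For instance, a self-contained sketch of the idea; `MyHttpClient` is a hypothetical wrapper, and a real implementation might add logging, retries, or tracing:

```dart
import 'package:http/http.dart' as http;
import 'package:openai_dart/openai_dart.dart';

/// Hypothetical custom client that simply delegates to the default one.
class MyHttpClient extends http.BaseClient {
  final http.Client _inner = http.Client();

  @override
  Future<http.StreamedResponse> send(http.BaseRequest request) =>
      _inner.send(request);
}

final client = OpenAIClient(
  apiKey: 'OPENAI_API_KEY',
  client: MyHttpClient(),
);
```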

#### Using a proxy
### Using a proxy

##### HTTP proxy
#### HTTP proxy

You can use your own HTTP proxy by overriding the `baseUrl` and providing your required `headers`:

@@ -572,7 +602,7 @@ final client = OpenAIClient(
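
A hedged sketch of the full call, with placeholder proxy URL and header values:

```dart
final client = OpenAIClient(
  baseUrl: 'https://my-proxy.com/v1', // placeholder proxy base URL
  headers: {'x-my-proxy-header': 'some-value'}, // placeholder header
);
```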

If you need further customization, you can always provide your own `http.Client`.

##### SOCKS5 proxy
#### SOCKS5 proxy

To use a SOCKS5 proxy, you can use the [`socks5_proxy`](https://pub.dev/packages/socks5_proxy) package:

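A sketch under the assumption that `socks5_proxy` exposes `SocksTCPClient.assignToHttpClient` and `ProxySettings`, with a placeholder proxy address:

```dart
import 'dart:io';

import 'package:http/io_client.dart';
import 'package:openai_dart/openai_dart.dart';
import 'package:socks5_proxy/socks_client.dart';

void main() {
  // Route every connection of this HttpClient through the SOCKS5 proxy.
  final baseHttpClient = HttpClient();
  SocksTCPClient.assignToHttpClient(baseHttpClient, [
    ProxySettings(InternetAddress.loopbackIPv4, 1080), // placeholder proxy
  ]);

  // Wrap it in an IOClient so it satisfies the `http.Client` interface.
  final client = OpenAIClient(
    apiKey: 'OPENAI_API_KEY',
    client: IOClient(baseHttpClient),
  );
}
```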
@@ -24,6 +24,9 @@ sealed class ChatCompletionMessage with _$ChatCompletionMessage {

/// The contents of the system message.
required String? content,

/// An optional name for the participant. Provides the model information to differentiate between participants of the same role.
@JsonKey(includeIfNull: false) String? name,
}) = ChatCompletionSystemMessage;

// ------------------------------------------
@@ -38,6 +41,9 @@
/// The contents of the user message.
@_ChatCompletionUserMessageContentConverter()
required ChatCompletionUserMessageContent? content,

/// An optional name for the participant. Provides the model information to differentiate between participants of the same role.
@JsonKey(includeIfNull: false) String? name,
}) = ChatCompletionUserMessage;

// ------------------------------------------
@@ -53,6 +59,9 @@ sealed class ChatCompletionMessage with _$ChatCompletionMessage {
/// The contents of the assistant message.
required String? content,

/// An optional name for the participant. Provides the model information to differentiate between participants of the same role.
@JsonKey(includeIfNull: false) String? name,

/// No Description
@JsonKey(name: 'tool_calls', includeIfNull: false)
ChatCompletionMessageToolCalls? toolCalls,
@@ -88,7 +97,7 @@ sealed class ChatCompletionMessage with _$ChatCompletionMessage {
@Default(ChatCompletionMessageRole.function) ChatCompletionMessageRole role,

/// The return value from the function call, to return to the model.
required String? content,
required String content,

/// The name of the function to call.
required String name,
@@ -74,7 +74,7 @@ class ChatCompletionMessageImageUrl with _$ChatCompletionMessageImageUrl {
/// Either a URL of the image or the base64 encoded image data.
required String url,

/// Specifies the detail level of the image.
/// Specifies the detail level of the image. Learn more in the [Vision guide](https://platform.openai.com/docs/guides/vision/low-or-high-fidelity-image-understanding).
@Default(ChatCompletionMessageImageDetail.auto)
ChatCompletionMessageImageDetail detail,
}) = _ChatCompletionMessageImageUrl;
@@ -104,7 +104,7 @@ class ChatCompletionMessageImageUrl with _$ChatCompletionMessageImageUrl {
// ENUM: ChatCompletionMessageImageDetail
// ==========================================

/// Specifies the detail level of the image.
/// Specifies the detail level of the image. Learn more in the [Vision guide](https://platform.openai.com/docs/guides/vision/low-or-high-fidelity-image-understanding).
enum ChatCompletionMessageImageDetail {
@JsonValue('auto')
auto,
@@ -15,15 +15,15 @@ class CreateChatCompletionRequest with _$CreateChatCompletionRequest {

/// Factory constructor for CreateChatCompletionRequest
const factory CreateChatCompletionRequest({
/// ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
/// ID of the model to use. See the [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
@_ChatCompletionModelConverter() required ChatCompletionModel model,

/// A list of messages comprising the conversation so far. [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
required List<ChatCompletionMessage> messages,

/// Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
///
/// [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
/// [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)
@JsonKey(name: 'frequency_penalty', includeIfNull: false)
@Default(0.0)
double? frequencyPenalty,
@@ -44,7 +44,7 @@

/// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
///
/// [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
/// [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)
@JsonKey(name: 'presence_penalty', includeIfNull: false)
@Default(0.0)
double? presencePenalty,
@@ -53,7 +53,7 @@
///
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
///
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in increased latency and appearance of a "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
@JsonKey(name: 'response_format', includeIfNull: false)
ChatCompletionResponseFormat? responseFormat,

@@ -259,7 +259,7 @@ enum ChatCompletionModels {
// CLASS: ChatCompletionModel
// ==========================================

/// ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
/// ID of the model to use. See the [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
@freezed
sealed class ChatCompletionModel with _$ChatCompletionModel {
const ChatCompletionModel._();
@@ -319,7 +319,7 @@ class _ChatCompletionModelConverter
///
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
///
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in increased latency and appearance of a "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
@freezed
class ChatCompletionResponseFormat with _$ChatCompletionResponseFormat {
const ChatCompletionResponseFormat._();
@@ -15,7 +15,7 @@ class CreateCompletionRequest with _$CreateCompletionRequest {

/// Factory constructor for CreateCompletionRequest
const factory CreateCompletionRequest({
/// ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
/// ID of the model to use. You can use the [List models](https://platform.openai.com/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](https://platform.openai.com/docs/models/overview) for descriptions of them.
@_CompletionModelConverter() required CompletionModel model,

/// The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
@@ -35,7 +35,7 @@

/// Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
///
/// [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
/// [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)
@JsonKey(name: 'frequency_penalty', includeIfNull: false)
@Default(0.0)
double? frequencyPenalty,
@@ -67,7 +67,7 @@

/// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
///
/// [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
/// [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)
@JsonKey(name: 'presence_penalty', includeIfNull: false)
@Default(0.0)
double? presencePenalty,
@@ -258,7 +258,7 @@ enum CompletionModels {
// CLASS: CompletionModel
// ==========================================

/// ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
/// ID of the model to use. You can use the [List models](https://platform.openai.com/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](https://platform.openai.com/docs/models/overview) for descriptions of them.
@freezed
sealed class CompletionModel with _$CompletionModel {
const CompletionModel._();
@@ -15,10 +15,10 @@ class CreateEmbeddingRequest with _$CreateEmbeddingRequest {

/// Factory constructor for CreateEmbeddingRequest
const factory CreateEmbeddingRequest({
/// ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
/// ID of the model to use. You can use the [List models](https://platform.openai.com/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](https://platform.openai.com/docs/models/overview) for descriptions of them.
@_EmbeddingModelConverter() required EmbeddingModel model,

/// Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for `text-embedding-ada-002`) and cannot be an empty string. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
/// Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for `text-embedding-ada-002`), cannot be an empty string, and any array must be 2048 dimensions or less. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
@_EmbeddingInputConverter() required EmbeddingInput input,

/// The format to return the embeddings in. Can be either `float` or [`base64`](https://pypi.org/project/pybase64/).
@@ -72,7 +72,7 @@ enum EmbeddingModels {
// CLASS: EmbeddingModel
// ==========================================

/// ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
/// ID of the model to use. You can use the [List models](https://platform.openai.com/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](https://platform.openai.com/docs/models/overview) for descriptions of them.
@freezed
sealed class EmbeddingModel with _$EmbeddingModel {
const EmbeddingModel._();
@@ -127,7 +127,7 @@ class _EmbeddingModelConverter
// CLASS: EmbeddingInput
// ==========================================

/// Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for `text-embedding-ada-002`) and cannot be an empty string. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
/// Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for `text-embedding-ada-002`), cannot be an empty string, and any array must be 2048 dimensions or less. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
@freezed
sealed class EmbeddingInput with _$EmbeddingInput {
const EmbeddingInput._();
@@ -16,7 +16,7 @@ class CreateFineTuningJobRequest with _$CreateFineTuningJobRequest {
/// Factory constructor for CreateFineTuningJobRequest
const factory CreateFineTuningJobRequest({
/// The name of the model to fine-tune. You can select one of the
/// [supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned).
/// [supported models](https://platform.openai.com/docs/guides/fine-tuning/what-models-can-be-fine-tuned).
@_FineTuningModelConverter() required FineTuningModel model,

/// The ID of an uploaded file that contains training data.
@@ -110,7 +110,7 @@ enum FineTuningModels {
// ==========================================

/// The name of the model to fine-tune. You can select one of the
/// [supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned).
/// [supported models](https://platform.openai.com/docs/guides/fine-tuning/what-models-can-be-fine-tuned).
@freezed
sealed class FineTuningModel with _$FineTuningModel {
const FineTuningModel._();
@@ -21,7 +21,7 @@ class FunctionObject with _$FunctionObject {
/// A description of what the function does, used by the model to choose when and how to call the function.
@JsonKey(includeIfNull: false) String? description,

/// The parameters the functions accepts, described as a JSON Schema object. See the [guide](https://platform.openai.com/docs/guides/gpt/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
/// The parameters the functions accepts, described as a JSON Schema object. See the [guide](https://platform.openai.com/docs/guides/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
///
/// To describe a function that accepts no parameters, provide the value `{"type": "object", "properties": {}}`.
required FunctionParameters parameters,
