feat(openai_dart): Remove OpenAI deprecated models (#290)
davidmigloz committed Jan 10, 2024
1 parent 57eceb9 commit 893b1c5
Showing 8 changed files with 471 additions and 1,285 deletions.
2 changes: 1 addition & 1 deletion packages/openai_dart/lib/src/generated/client.dart
@@ -471,7 +471,7 @@ class OpenAIClient {
// METHOD: createFineTuningJob
// ------------------------------------------

/// Creates a job that fine-tunes a specified model from a given dataset. Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete. [Learn more about fine-tuning](https://platform.openai.com/docs/guides/fine-tuning).
/// Creates a fine-tuning job which begins the process of creating a new model from a given dataset. Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete. [Learn more about fine-tuning](https://platform.openai.com/docs/guides/fine-tuning).
///
/// `request`: Request object for the Create fine-tuning job endpoint.
///
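As a language-agnostic sketch of what this endpoint consumes, the Create fine-tuning job request body boils down to a small JSON object per the OpenAI REST API; the file ID below is a hypothetical placeholder for an ID returned earlier by the Files endpoint, not a value from this diff:

```python
import json

# Minimal JSON body for the Create fine-tuning job endpoint (sketch).
# "file-abc123" is a hypothetical uploaded-file ID; substitute a real one.
request_body = {
    "model": "gpt-3.5-turbo",        # base model to fine-tune
    "training_file": "file-abc123",  # hypothetical file ID from the Files API
}

payload = json.dumps(request_body)
```

The enqueued job's status and the resulting fine-tuned model name come back in the response, as the doc comment above describes.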
@@ -92,7 +92,7 @@ class CreateChatCompletionRequest with _$CreateChatCompletionRequest {
/// Controls which (if any) function is called by the model.
/// `none` means the model will not call a function and instead generates a message.
/// `auto` means the model can pick between generating a message or calling a function.
/// Specifying a particular function via `{"type: "function", "function": {"name": "my_function"}}` forces the model to call that function.
/// Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.
///
/// `none` is the default when no functions are present. `auto` is the default if functions are present.
@_ChatCompletionToolChoiceOptionConverter()
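The three accepted forms of `tool_choice` described in the doc comment above can be sketched as plain JSON values (shown here in Python for brevity; the object shape matches the corrected doc string):

```python
import json

# The three forms `tool_choice` accepts, per the doc comment above:
tool_choice_none = "none"  # never call a function; generate a message instead
tool_choice_auto = "auto"  # model chooses between a message and a function call

# Forcing a specific function ("my_function" is a hypothetical name):
tool_choice_forced = {
    "type": "function",
    "function": {"name": "my_function"},
}

encoded = json.dumps(tool_choice_forced)
```

Note the doc-string fix in this commit: the old text was missing the closing quote after `"type"`, which made the example invalid JSON.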
@@ -443,7 +443,7 @@ enum ChatCompletionToolChoiceMode {
/// Controls which (if any) function is called by the model.
/// `none` means the model will not call a function and instead generates a message.
/// `auto` means the model can pick between generating a message or calling a function.
/// Specifying a particular function via `{"type: "function", "function": {"name": "my_function"}}` forces the model to call that function.
/// Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.
///
/// `none` is the default when no functions are present. `auto` is the default if functions are present.
@freezed
@@ -42,7 +42,7 @@ class CreateCompletionRequest with _$CreateCompletionRequest {

/// Modify the likelihood of specified tokens appearing in the completion.
///
/// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](https://platform.openai.com/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
/// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
///
/// As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
@JsonKey(name: 'logit_bias', includeIfNull: false)
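The `logit_bias` example from the doc comment can be sketched as wire-format JSON (Python here for brevity): token IDs map to bias values in the documented -100..100 range, and -100 on token 50256 bans `<|endoftext|>`:

```python
import json

# Ban the <|endoftext|> token (ID 50256 in the GPT tokenizer) by mapping
# its token ID to the minimum bias of -100, as in the doc comment above.
logit_bias = {"50256": -100}

def clamp_bias(value: int) -> int:
    """Clamp a bias to the documented -100..100 range (illustrative helper)."""
    return max(-100, min(100, value))

wire_value = json.dumps(logit_bias)
```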
@@ -232,26 +232,12 @@ class CreateCompletionRequest with _$CreateCompletionRequest {

/// Available completion models. Mind that the list may not be exhaustive nor up-to-date.
enum CompletionModels {
@JsonValue('babbage-002')
babbage002,
@JsonValue('davinci-002')
davinci002,
@JsonValue('gpt-3.5-turbo-instruct')
gpt35TurboInstruct,
@JsonValue('text-davinci-003')
textDavinci003,
@JsonValue('text-davinci-002')
textDavinci002,
@JsonValue('text-davinci-001')
textDavinci001,
@JsonValue('code-davinci-002')
codeDavinci002,
@JsonValue('text-curie-001')
textCurie001,
@JsonValue('text-babbage-001')
textBabbage001,
@JsonValue('text-ada-001')
textAda001,
@JsonValue('davinci-002')
davinci002,
@JsonValue('babbage-002')
babbage002,
}
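After this change only three wire values remain in `CompletionModels`; a quick sketch (in Python, mirroring the `@JsonValue` strings above) of checking a model ID against the curated set:

```python
# JSON wire values that remain in CompletionModels after this commit.
SUPPORTED_COMPLETION_MODELS = {
    "gpt-3.5-turbo-instruct",
    "davinci-002",
    "babbage-002",
}

def is_supported(model_id: str) -> bool:
    """Return True if model_id is still an accepted completion model."""
    return model_id in SUPPORTED_COMPLETION_MODELS
```

Deprecated IDs such as `text-davinci-003` or `text-ada-001` no longer deserialize into the enum.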

// ==========================================
@@ -51,7 +51,7 @@ mixin _$CreateCompletionRequest {

/// Modify the likelihood of specified tokens appearing in the completion.
///
/// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](https://platform.openai.com/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
/// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
///
/// As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
@JsonKey(name: 'logit_bias', includeIfNull: false)
@@ -504,14 +504,14 @@ class _$CreateCompletionRequestImpl extends _CreateCompletionRequest {

/// Modify the likelihood of specified tokens appearing in the completion.
///
/// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](https://platform.openai.com/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
/// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
///
/// As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
final Map<String, int>? _logitBias;

/// Modify the likelihood of specified tokens appearing in the completion.
///
/// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](https://platform.openai.com/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
/// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
///
/// As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
@override
@@ -733,7 +733,7 @@ abstract class _CreateCompletionRequest extends CreateCompletionRequest {

/// Modify the likelihood of specified tokens appearing in the completion.
///
/// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](https://platform.openai.com/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
/// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
///
/// As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
@JsonKey(name: 'logit_bias', includeIfNull: false)
@@ -3429,7 +3429,7 @@ mixin _$CreateChatCompletionRequest {
/// Controls which (if any) function is called by the model.
/// `none` means the model will not call a function and instead generates a message.
/// `auto` means the model can pick between generating a message or calling a function.
/// Specifying a particular function via `{"type: "function", "function": {"name": "my_function"}}` forces the model to call that function.
/// Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.
///
/// `none` is the default when no functions are present. `auto` is the default if functions are present.
@_ChatCompletionToolChoiceOptionConverter()
@@ -4034,7 +4034,7 @@ class _$CreateChatCompletionRequestImpl extends _CreateChatCompletionRequest {
/// Controls which (if any) function is called by the model.
/// `none` means the model will not call a function and instead generates a message.
/// `auto` means the model can pick between generating a message or calling a function.
/// Specifying a particular function via `{"type: "function", "function": {"name": "my_function"}}` forces the model to call that function.
/// Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.
///
/// `none` is the default when no functions are present. `auto` is the default if functions are present.
@override
@@ -4307,7 +4307,7 @@ abstract class _CreateChatCompletionRequest
/// Controls which (if any) function is called by the model.
/// `none` means the model will not call a function and instead generates a message.
/// `auto` means the model can pick between generating a message or calling a function.
/// Specifying a particular function via `{"type: "function", "function": {"name": "my_function"}}` forces the model to call that function.
/// Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.
///
/// `none` is the default when no functions are present. `auto` is the default if functions are present.
@_ChatCompletionToolChoiceOptionConverter()
11 changes: 2 additions & 9 deletions packages/openai_dart/lib/src/generated/schema/schema.g.dart

Some generated files are not rendered by default.

22 changes: 5 additions & 17 deletions packages/openai_dart/oas/openapi_curated.yaml
@@ -98,7 +98,7 @@ paths:
tags:
- Fine-tuning
summary: |
Creates a job that fine-tunes a specified model from a given dataset.
Creates a fine-tuning job which begins the process of creating a new model from a given dataset.
Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.
@@ -1168,19 +1168,7 @@ components:
title: CompletionModels
description: |
Available completion models. Mind that the list may not be exhaustive nor up-to-date.
enum:
[
"babbage-002",
"davinci-002",
"gpt-3.5-turbo-instruct",
"text-davinci-003",
"text-davinci-002",
"text-davinci-001",
"code-davinci-002",
"text-curie-001",
"text-babbage-001",
"text-ada-001",
]
enum: ["gpt-3.5-turbo-instruct", "davinci-002", "babbage-002"]
prompt:
title: CompletionPrompt
description: &completions_prompt_description |
@@ -1252,7 +1240,7 @@ components:
description: &completions_logit_bias_description |
Modify the likelihood of specified tokens appearing in the completion.

Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](https://platform.openai.com/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
logprobs: &completions_logprobs_configuration
@@ -1624,7 +1612,7 @@ components:
Controls which (if any) function is called by the model.
`none` means the model will not call a function and instead generates a message.
`auto` means the model can pick between generating a message or calling a function.
Specifying a particular function via `{"type: "function", "function": {"name": "my_function"}}` forces the model to call that function.
Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.
`none` is the default when no functions are present. `auto` is the default if functions are present.
oneOf:
@@ -2003,7 +1991,7 @@ components:
description: A list of message content tokens with log probability information.
type: array
items:
$ref: '#/components/schemas/ChatCompletionTokenLogprob'
$ref: "#/components/schemas/ChatCompletionTokenLogprob"
nullable: true
required:
- content
