Merged
2 changes: 1 addition & 1 deletion .claude/skills/tryagi-openai/commands.md
@@ -150,7 +150,7 @@ Files are used to upload documents that can be used with features like Assistant
| `list-files` | `GET /files` | Returns a list of files. |
| `retrieve-file` | `GET /files/{file_id}` | Returns information about a specific file. |
| `retrieve-file-content` | `GET /files/{file_id}/content` | Returns the contents of the specified file. |
-| `upload-file` | `POST /files` | Upload a file that can be used across various endpoints. Individual files can be up to 512 MB, and each project can store up to 2.5 TB of files in total. There is no organization-wide storage limit. Uploads to this endpoint are rate-limited to 2,000 files per minute per organization. - The Assistants API supports files up to 2 million tokens and of specific file types. See the [Assistants Tools guide](/docs/assistants/tools) for details. - The Fine-tuning API only supports `.jsonl` files. The input also has certain required formats for fine-tuning [chat](/docs/api-reference/fine-tuning/chat-input) or [completions](/docs/api-reference/fine-tuning/completions-input) models. - The Batch API only supports `.jsonl` files up to 200 MB in size. The input also has a specific required [format](/docs/api-reference/batch/request-input). - For Retrieval or `file_search` ingestion, upload files here first. If you need to attach multiple uploaded files to the same vector store, use [`/vector_stores/{vector_store_id}/file_batches`](/docs/api-reference/vector-stores-file-batches/createBatch) instead of attaching them one by one. Please [contact us](https://help.openai.com/) if you need to increase these storage limits. |
+| `upload-file` | `POST /files` | Upload a file that can be used across various endpoints. Individual files can be up to 512 MB, and each project can store up to 2.5 TB of files in total. There is no organization-wide storage limit. Uploads to this endpoint are rate-limited to 1,000 requests per minute per authenticated user. - The Assistants API supports files up to 2 million tokens and of specific file types. See the [Assistants Tools guide](/docs/assistants/tools) for details. - The Fine-tuning API only supports `.jsonl` files. The input also has certain required formats for fine-tuning [chat](/docs/api-reference/fine-tuning/chat-input) or [completions](/docs/api-reference/fine-tuning/completions-input) models. - The Batch API only supports `.jsonl` files up to 200 MB in size. The input also has a specific required [format](/docs/api-reference/batch/request-input). - For Retrieval or `file_search` ingestion, upload files here first. If you need to attach multiple uploaded files to the same vector store, use [`/vector_stores/{vector_store_id}/file_batches`](/docs/api-reference/vector-stores-file-batches/createBatch) instead of attaching them one by one. Vector store attachment has separate limits from file upload, including 2,000 attached files per minute per organization. Please [contact us](https://help.openai.com/) if you need to increase these storage limits. |

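The updated `upload-file` row above maps onto the generated files client. A minimal usage sketch follows, with the caveat that the entry-point and parameter names here (`OpenAiClient`, `Files`, `UploadFileAsync`, `CreateFileRequestPurpose.FineTune`) are assumptions inferred from the interfaces in this diff, not verified API:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using tryAGI.OpenAI;

// Hypothetical sketch of the upload-file command described above.
// Individual files can be up to 512 MB; uploads are rate-limited to
// 1,000 requests per minute per authenticated user.
using var client = new OpenAiClient(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

var bytes = await File.ReadAllBytesAsync("training.jsonl");
var uploaded = await client.Files.UploadFileAsync(
    file: bytes,
    filename: "training.jsonl",
    purpose: CreateFileRequestPurpose.FineTune); // .jsonl is required for fine-tuning
```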
## `fine-tuning`

@@ -45,7 +45,8 @@ partial void ProcessUploadFileResponseContent(
/// Upload a file that can be used across various endpoints. Individual files<br/>
/// can be up to 512 MB, and each project can store up to 2.5 TB of files in<br/>
/// total. There is no organization-wide storage limit. Uploads to this<br/>
-/// endpoint are rate-limited to 2,000 files per minute per organization.<br/>
+/// endpoint are rate-limited to 1,000 requests per minute per authenticated<br/>
+/// user.<br/>
/// - The Assistants API supports files up to 2 million tokens and of specific<br/>
/// file types. See the [Assistants Tools guide](/docs/assistants/tools) for<br/>
/// details.<br/>
@@ -59,7 +60,9 @@ partial void ProcessUploadFileResponseContent(
/// - For Retrieval or `file_search` ingestion, upload files here first. If<br/>
/// you need to attach multiple uploaded files to the same vector store, use<br/>
/// [`/vector_stores/{vector_store_id}/file_batches`](/docs/api-reference/vector-stores-file-batches/createBatch)<br/>
-/// instead of attaching them one by one.<br/>
+/// instead of attaching them one by one. Vector store attachment has separate<br/>
+/// limits from file upload, including 2,000 attached files per minute per<br/>
+/// organization.<br/>
/// Please [contact us](https://help.openai.com/) if you need to increase these<br/>
/// storage limits.
/// </summary>
@@ -424,7 +427,8 @@ partial void ProcessUploadFileResponseContent(
/// Upload a file that can be used across various endpoints. Individual files<br/>
/// can be up to 512 MB, and each project can store up to 2.5 TB of files in<br/>
/// total. There is no organization-wide storage limit. Uploads to this<br/>
-/// endpoint are rate-limited to 2,000 files per minute per organization.<br/>
+/// endpoint are rate-limited to 1,000 requests per minute per authenticated<br/>
+/// user.<br/>
/// - The Assistants API supports files up to 2 million tokens and of specific<br/>
/// file types. See the [Assistants Tools guide](/docs/assistants/tools) for<br/>
/// details.<br/>
@@ -438,7 +442,9 @@ partial void ProcessUploadFileResponseContent(
/// - For Retrieval or `file_search` ingestion, upload files here first. If<br/>
/// you need to attach multiple uploaded files to the same vector store, use<br/>
/// [`/vector_stores/{vector_store_id}/file_batches`](/docs/api-reference/vector-stores-file-batches/createBatch)<br/>
-/// instead of attaching them one by one.<br/>
+/// instead of attaching them one by one. Vector store attachment has separate<br/>
+/// limits from file upload, including 2,000 attached files per minute per<br/>
+/// organization.<br/>
/// Please [contact us](https://help.openai.com/) if you need to increase these<br/>
/// storage limits.
/// </summary>
@@ -8,7 +8,8 @@ public partial interface IFilesClient
/// Upload a file that can be used across various endpoints. Individual files<br/>
/// can be up to 512 MB, and each project can store up to 2.5 TB of files in<br/>
/// total. There is no organization-wide storage limit. Uploads to this<br/>
-/// endpoint are rate-limited to 2,000 files per minute per organization.<br/>
+/// endpoint are rate-limited to 1,000 requests per minute per authenticated<br/>
+/// user.<br/>
/// - The Assistants API supports files up to 2 million tokens and of specific<br/>
/// file types. See the [Assistants Tools guide](/docs/assistants/tools) for<br/>
/// details.<br/>
@@ -22,7 +23,9 @@ public partial interface IFilesClient
/// - For Retrieval or `file_search` ingestion, upload files here first. If<br/>
/// you need to attach multiple uploaded files to the same vector store, use<br/>
/// [`/vector_stores/{vector_store_id}/file_batches`](/docs/api-reference/vector-stores-file-batches/createBatch)<br/>
-/// instead of attaching them one by one.<br/>
+/// instead of attaching them one by one. Vector store attachment has separate<br/>
+/// limits from file upload, including 2,000 attached files per minute per<br/>
+/// organization.<br/>
/// Please [contact us](https://help.openai.com/) if you need to increase these<br/>
/// storage limits.
/// </summary>
@@ -39,7 +42,8 @@ public partial interface IFilesClient
/// Upload a file that can be used across various endpoints. Individual files<br/>
/// can be up to 512 MB, and each project can store up to 2.5 TB of files in<br/>
/// total. There is no organization-wide storage limit. Uploads to this<br/>
-/// endpoint are rate-limited to 2,000 files per minute per organization.<br/>
+/// endpoint are rate-limited to 1,000 requests per minute per authenticated<br/>
+/// user.<br/>
/// - The Assistants API supports files up to 2 million tokens and of specific<br/>
/// file types. See the [Assistants Tools guide](/docs/assistants/tools) for<br/>
/// details.<br/>
@@ -53,7 +57,9 @@ public partial interface IFilesClient
/// - For Retrieval or `file_search` ingestion, upload files here first. If<br/>
/// you need to attach multiple uploaded files to the same vector store, use<br/>
/// [`/vector_stores/{vector_store_id}/file_batches`](/docs/api-reference/vector-stores-file-batches/createBatch)<br/>
-/// instead of attaching them one by one.<br/>
+/// instead of attaching them one by one. Vector store attachment has separate<br/>
+/// limits from file upload, including 2,000 attached files per minute per<br/>
+/// organization.<br/>
/// Please [contact us](https://help.openai.com/) if you need to increase these<br/>
/// storage limits.
/// </summary>
Expand Down
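The `IFilesClient` comments above recommend batch attachment over one-by-one calls. A hedged sketch of that pattern; the `VectorStores` client property and `CreateVectorStoreFileBatchAsync` method name are assumptions, not verified against the generated code:

```csharp
// Hypothetical: attach several already-uploaded files to one vector store
// in a single batch call, per the guidance in the doc comments above.
// Attachment is rate-limited separately from upload: 2,000 attached files
// per minute per organization.
var batch = await client.VectorStores.CreateVectorStoreFileBatchAsync(
    vectorStoreId: "vs_abc123",
    fileIds: new[] { "file-aaa", "file-bbb", "file-ccc" });
```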
@@ -3,10 +3,10 @@
namespace tryAGI.OpenAI.JsonConverters
{
/// <inheritdoc />
-public sealed class LocalShellCallOutputStatusEnumJsonConverter : global::System.Text.Json.Serialization.JsonConverter<global::tryAGI.OpenAI.LocalShellCallOutputStatusEnum>
+public sealed class FunctionShellCallOutputStatusEnumJsonConverter : global::System.Text.Json.Serialization.JsonConverter<global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnum>
{
/// <inheritdoc />
-public override global::tryAGI.OpenAI.LocalShellCallOutputStatusEnum Read(
+public override global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnum Read(
ref global::System.Text.Json.Utf8JsonReader reader,
global::System.Type typeToConvert,
global::System.Text.Json.JsonSerializerOptions options)
@@ -18,19 +18,19 @@ public sealed class LocalShellCallOutputStatusEnumJsonConverter : global::System
var stringValue = reader.GetString();
if (stringValue != null)
{
-return global::tryAGI.OpenAI.LocalShellCallOutputStatusEnumExtensions.ToEnum(stringValue) ?? default;
+return global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnumExtensions.ToEnum(stringValue) ?? default;
}

break;
}
case global::System.Text.Json.JsonTokenType.Number:
{
var numValue = reader.GetInt32();
-return (global::tryAGI.OpenAI.LocalShellCallOutputStatusEnum)numValue;
+return (global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnum)numValue;
}
case global::System.Text.Json.JsonTokenType.Null:
{
-return default(global::tryAGI.OpenAI.LocalShellCallOutputStatusEnum);
+return default(global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnum);
}
default:
throw new global::System.ArgumentOutOfRangeException(nameof(reader));
@@ -42,12 +42,12 @@ public sealed class LocalShellCallOutputStatusEnumJsonConverter : global::System
/// <inheritdoc />
public override void Write(
global::System.Text.Json.Utf8JsonWriter writer,
-global::tryAGI.OpenAI.LocalShellCallOutputStatusEnum value,
+global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnum value,
global::System.Text.Json.JsonSerializerOptions options)
{
writer = writer ?? throw new global::System.ArgumentNullException(nameof(writer));

-writer.WriteStringValue(global::tryAGI.OpenAI.LocalShellCallOutputStatusEnumExtensions.ToValueString(value));
+writer.WriteStringValue(global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnumExtensions.ToValueString(value));
}
}
}
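The rename in this file changes only the type names; the converter pattern itself is unchanged. A self-contained sketch of that pattern with an illustrative enum (the enum name, members, and string values here are hypothetical, not taken from the generated code):

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public enum ShellCallStatus { InProgress = 1, Completed = 2 }

// Same shape as the generated converters above: Read accepts a string token
// (mapped through a ToEnum-style lookup), a raw number, or null; Write emits
// the enum's string form.
public sealed class ShellCallStatusJsonConverter : JsonConverter<ShellCallStatus>
{
    public override ShellCallStatus Read(
        ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        => reader.TokenType switch
        {
            JsonTokenType.String => reader.GetString() switch
            {
                "in_progress" => ShellCallStatus.InProgress,
                "completed" => ShellCallStatus.Completed,
                _ => default, // unknown strings fall back, as `ToEnum(...) ?? default` does
            },
            JsonTokenType.Number => (ShellCallStatus)reader.GetInt32(),
            JsonTokenType.Null => default,
            _ => throw new ArgumentOutOfRangeException(nameof(reader)),
        };

    public override void Write(
        Utf8JsonWriter writer, ShellCallStatus value, JsonSerializerOptions options)
        => writer.WriteStringValue(
            value == ShellCallStatus.Completed ? "completed" : "in_progress");
}
```

Registered via `JsonSerializerOptions.Converters.Add(new ShellCallStatusJsonConverter())`, the payloads `"completed"`, `2`, and `null` all flow through the same `Read` path, which is why tolerating all three token types matters for generated clients.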
@@ -3,10 +3,10 @@
namespace tryAGI.OpenAI.JsonConverters
{
/// <inheritdoc />
-public sealed class LocalShellCallOutputStatusEnumNullableJsonConverter : global::System.Text.Json.Serialization.JsonConverter<global::tryAGI.OpenAI.LocalShellCallOutputStatusEnum?>
+public sealed class FunctionShellCallOutputStatusEnumNullableJsonConverter : global::System.Text.Json.Serialization.JsonConverter<global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnum?>
{
/// <inheritdoc />
-public override global::tryAGI.OpenAI.LocalShellCallOutputStatusEnum? Read(
+public override global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnum? Read(
ref global::System.Text.Json.Utf8JsonReader reader,
global::System.Type typeToConvert,
global::System.Text.Json.JsonSerializerOptions options)
@@ -18,19 +18,19 @@ public sealed class LocalShellCallOutputStatusEnumNullableJsonConverter : global
var stringValue = reader.GetString();
if (stringValue != null)
{
-return global::tryAGI.OpenAI.LocalShellCallOutputStatusEnumExtensions.ToEnum(stringValue);
+return global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnumExtensions.ToEnum(stringValue);
}

break;
}
case global::System.Text.Json.JsonTokenType.Number:
{
var numValue = reader.GetInt32();
-return (global::tryAGI.OpenAI.LocalShellCallOutputStatusEnum)numValue;
+return (global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnum)numValue;
}
case global::System.Text.Json.JsonTokenType.Null:
{
-return default(global::tryAGI.OpenAI.LocalShellCallOutputStatusEnum?);
+return default(global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnum?);
}
default:
throw new global::System.ArgumentOutOfRangeException(nameof(reader));
@@ -42,7 +42,7 @@
/// <inheritdoc />
public override void Write(
global::System.Text.Json.Utf8JsonWriter writer,
-global::tryAGI.OpenAI.LocalShellCallOutputStatusEnum? value,
+global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnum? value,
global::System.Text.Json.JsonSerializerOptions options)
{
writer = writer ?? throw new global::System.ArgumentNullException(nameof(writer));
@@ -53,7 +53,7 @@ public override void Write(
}
else
{
-writer.WriteStringValue(global::tryAGI.OpenAI.LocalShellCallOutputStatusEnumExtensions.ToValueString(value.Value));
+writer.WriteStringValue(global::tryAGI.OpenAI.FunctionShellCallOutputStatusEnumExtensions.ToValueString(value.Value));
}
}
}
@@ -3,10 +3,10 @@
namespace tryAGI.OpenAI.JsonConverters
{
/// <inheritdoc />
-public sealed class LocalShellCallStatusJsonConverter : global::System.Text.Json.Serialization.JsonConverter<global::tryAGI.OpenAI.LocalShellCallStatus>
+public sealed class FunctionShellCallStatusJsonConverter : global::System.Text.Json.Serialization.JsonConverter<global::tryAGI.OpenAI.FunctionShellCallStatus>
{
/// <inheritdoc />
-public override global::tryAGI.OpenAI.LocalShellCallStatus Read(
+public override global::tryAGI.OpenAI.FunctionShellCallStatus Read(
ref global::System.Text.Json.Utf8JsonReader reader,
global::System.Type typeToConvert,
global::System.Text.Json.JsonSerializerOptions options)
@@ -18,19 +18,19 @@ public sealed class LocalShellCallStatusJsonConverter : global::System.Text.Json
var stringValue = reader.GetString();
if (stringValue != null)
{
-return global::tryAGI.OpenAI.LocalShellCallStatusExtensions.ToEnum(stringValue) ?? default;
+return global::tryAGI.OpenAI.FunctionShellCallStatusExtensions.ToEnum(stringValue) ?? default;
}

break;
}
case global::System.Text.Json.JsonTokenType.Number:
{
var numValue = reader.GetInt32();
-return (global::tryAGI.OpenAI.LocalShellCallStatus)numValue;
+return (global::tryAGI.OpenAI.FunctionShellCallStatus)numValue;
}
case global::System.Text.Json.JsonTokenType.Null:
{
-return default(global::tryAGI.OpenAI.LocalShellCallStatus);
+return default(global::tryAGI.OpenAI.FunctionShellCallStatus);
}
default:
throw new global::System.ArgumentOutOfRangeException(nameof(reader));
@@ -42,12 +42,12 @@ public sealed class LocalShellCallStatusJsonConverter : global::System.Text.Json
/// <inheritdoc />
public override void Write(
global::System.Text.Json.Utf8JsonWriter writer,
-global::tryAGI.OpenAI.LocalShellCallStatus value,
+global::tryAGI.OpenAI.FunctionShellCallStatus value,
global::System.Text.Json.JsonSerializerOptions options)
{
writer = writer ?? throw new global::System.ArgumentNullException(nameof(writer));

-writer.WriteStringValue(global::tryAGI.OpenAI.LocalShellCallStatusExtensions.ToValueString(value));
+writer.WriteStringValue(global::tryAGI.OpenAI.FunctionShellCallStatusExtensions.ToValueString(value));
}
}
}