feat: Support generateContent for tuned model in googleai_dart client (
davidmigloz committed Apr 2, 2024
1 parent b9b808e commit b4641a0
Showing 17 changed files with 6,744 additions and 5,445 deletions.
31 changes: 27 additions & 4 deletions packages/googleai_dart/README.md
@@ -17,11 +17,10 @@ Unofficial Dart client for [Google AI](https://ai.google.dev) for Developers (Ge

**Supported endpoints:**

-- Generate content (with streaming support)
+- Generate content (with streaming and tuned model support)
+- Embed content (with batch support)
 - Count tokens
-- Embed content
 - Models info
+- Tuned models operations
 - Operations

## Table of contents
@@ -33,6 +32,7 @@ Unofficial Dart client for [Google AI](https://ai.google.dev) for Developers (Ge
+ [Text-and-image input](#text-and-image-input)
+ [Multi-turn conversations (chat)](#multi-turn-conversations-chat)
+ [Streaming generated content](#streaming-generated-content)
+ [Tuned model](#tuned-model)
* [Count tokens](#count-tokens)
* [Embedding](#embedding)
* [Model info](#model-info)
@@ -168,7 +168,7 @@ print(res.candidates?.first.content?.parts?.first.text);
By default, `generateContent` returns a response after completing the entire generation process. You can achieve faster interactions by not waiting for the entire result, and instead use `streamGenerateContent` to handle partial results as they become available.

```dart
-final stream = await client.streamGenerateContent(
+final stream = client.streamGenerateContent(
modelId: 'gemini-pro',
request: const GenerateContentRequest(
contents: [
@@ -190,6 +190,19 @@ stream.listen((final res) {
)
```
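The diff view collapses the middle of the streaming example above. As a hedged end-to-end sketch (the prompt text and the loop body are illustrative, not from the original), the full flow might look like:

```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient(apiKey: 'your-api-key');
  // streamGenerateContent returns a Stream, so no await is needed here.
  final stream = client.streamGenerateContent(
    modelId: 'gemini-pro',
    request: const GenerateContentRequest(
      contents: [
        Content(parts: [Part(text: 'Write a story about a magic backpack.')]),
      ],
    ),
  );
  // Handle each partial result as it arrives.
  await for (final res in stream) {
    print(res.candidates?.first.content?.parts?.first.text);
  }
}
```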

#### Tuned model

Use the `generateContentTunedModel` method to generate content using a tuned model:

```dart
final res = await client.generateContentTunedModel(
tunedModelId: 'my-tuned-model',
request: GenerateContentRequest(
//...
),
);
```
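The request body is elided (`//...`) in the snippet above. A sketch only, assuming the request takes the same `contents` shape as the regular `generateContent` call:

```dart
final res = await client.generateContentTunedModel(
  tunedModelId: 'my-tuned-model',
  request: const GenerateContentRequest(
    contents: [
      Content(parts: [Part(text: 'Classify this support ticket.')]),
    ],
  ),
);
print(res.candidates?.first.content?.parts?.first.text);
```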

### Count tokens

When using long prompts, it might be useful to count tokens before sending any content to the model.
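The concrete call is collapsed in this diff. A minimal sketch, assuming a `countTokens` method and `CountTokensRequest` type analogous to the generate-content API shown above:

```dart
final res = await client.countTokens(
  modelId: 'gemini-pro',
  request: const CountTokensRequest(
    contents: [
      Content(parts: [Part(text: 'Write a story about a magic backpack.')]),
    ],
  ),
);
print(res.totalTokens);
```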
@@ -254,6 +267,16 @@ print(res);
// Model(name: models/gemini-pro, displayName: Gemini Pro, description: The best model...
```

### Operations

The following methods are available to manage operations:

- `listOperations()`
- `deleteOperation(operationId: operationId)`
- `listTunedModelOperations(tunedModelId: tunedModelId)`
- `getTunedModelOperation(tunedModelId: tunedModelId, operationId: operationId)`
- `cancelTunedModelOperation(tunedModelId: tunedModelId, operationId: operationId)`
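For example, a minimal sketch of checking on a tuned model's long-running operations (the tuned-model id is a placeholder, and the shape of the returned list is not shown in this diff):

```dart
final res = await client.listTunedModelOperations(
  tunedModelId: 'my-tuned-model',
);
print(res);
```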

## Advanced Usage

### Default HTTP client
