Merged
4 changes: 2 additions & 2 deletions README.md
Original file line number Diff line number Diff line change
@@ -47,7 +47,7 @@ Just remember that the models do make mistakes at times. They might misunderstan

3. Open Excel, choose a provider from the drop-down menu in the Cellm tab, and plug in your API key.

You can also use local models, e.g., via [Ollama](https://ollama.com/). Download and install [Ollama](https://ollama.com/), open Windows Terminal (open start menu, type `Windows Terminal`, and click `OK`), type `ollama pull gemma3:4b`, and wait for the download to finish. Open Excel, choose the Ollama provider from the drop-down menu in the Cellm tab, and you are good to go.
You can also use local models, e.g., via [Ollama](https://ollama.com/). Download and install [Ollama](https://ollama.com/) and open Excel. Choose the Ollama provider from the drop-down menu in the Cellm tab and select a model. Cellm will prompt you to download it automatically. Alternatively, open Windows Terminal (open start menu, type `Windows Terminal`, and click `OK`), type `ollama pull gemma4:e4b`, and wait for the download to finish.

## Pricing
- **Free tier:** Use local models or your own API keys
@@ -56,7 +56,7 @@ You can also use local models, e.g., via [Ollama](https://ollama.com/). Download

## Basic usage

Select a cell and type `=PROMPT("What model are you and who made you?")`. For Gemma 3 4B, it will tell you that it's called "Gemma" and made by Google DeepMind.
Select a cell and type `=PROMPT("What model are you and who made you?")`. For Gemma 4 E4B, it will tell you that it's called "Gemma 4" and made by Google DeepMind.

You can also use cell references. For example, copy a news article into cell A1 and type in cell B1: `=PROMPT("Extract all person names mentioned in the text", A1)`. You can reference many cells using standard Excel notation, e.g. `=PROMPT("Extract all person names in the cells", A1:F10)`, or reference multiple separate ranges, e.g. `=PROMPT("Compare these datasets", A1:B10, D1:E10)`.

18 changes: 9 additions & 9 deletions docs/api-reference/functions/prompt-model.mdx
@@ -7,7 +7,7 @@ Allows you to call a model from a cell formula and specify the model as the firs
## Arguments

<ParamField body="providerAndModel" type="string" required>
A string on the form "provider/model" (e.g., "openai/gpt-4o-mini").
A string of the form "provider/model" (e.g., "openai/gpt-5.4-mini").

The default model is determined by your configuration settings.
</ParamField>
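The "provider/model" convention described above amounts to splitting on the first slash. A minimal illustrative sketch (the function name is an assumption for illustration, not Cellm's actual implementation):

```python
def parse_provider_and_model(value: str) -> tuple[str, str]:
    """Split a "provider/model" string into its two parts.

    Only the first "/" separates provider from model, so model names
    that themselves contain slashes remain intact.
    """
    provider, sep, model = value.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider/model', got {value!r}")
    return provider, model

print(parse_provider_and_model("openai/gpt-5.4-mini"))  # ('openai', 'gpt-5.4-mini')
```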
@@ -44,39 +44,39 @@ Allows you to call a model from a cell formula and specify the model as the firs
<RequestExample>

```excel Text Instructions
=PROMPTMODEL("openai/gpt-4o-mini", "Extract keywords")
=PROMPTMODEL("openai/gpt-5.4-mini", "Extract keywords")
```

```excel Cell Instructions
=PROMPTMODEL("openai/gpt-4o-mini", A1:D10)
=PROMPTMODEL("openai/gpt-5.4-mini", A1:D10)
```

```excel With Context
=PROMPTMODEL("openai/gpt-4o-mini", "Extract keywords", A1:D10)
=PROMPTMODEL("openai/gpt-5.4-mini", "Extract keywords", A1:D10)
```

```excel Multiple Cell Ranges
=PROMPTMODEL("openai/gpt-4o-mini", "Compare these datasets", A1:B10, D1:E10)
=PROMPTMODEL("openai/gpt-5.4-mini", "Compare these datasets", A1:B10, D1:E10)
```

```excel Mixed Cell References
=PROMPTMODEL("openai/gpt-4o-mini", "Analyze all data", A1, B2:C5, D6)
=PROMPTMODEL("openai/gpt-5.4-mini", "Analyze all data", A1, B2:C5, D6)
```

</RequestExample>

<ResponseExample>

```excel TOROW
=PROMPTMODEL.TOROW("openai/gpt-4o-mini", "Extract keywords", A1:D10)
=PROMPTMODEL.TOROW("openai/gpt-5.4-mini", "Extract keywords", A1:D10)
```

```excel TOCOLUMN
=PROMPTMODEL.TOCOLUMN("openai/gpt-4o-mini", "Extract keywords", A1:D10)
=PROMPTMODEL.TOCOLUMN("openai/gpt-5.4-mini", "Extract keywords", A1:D10)
```

```excel TORANGE
=PROMPTMODEL.TORANGE("openai/gpt-4o-mini", "Extract keywords", A1:D10)
=PROMPTMODEL.TORANGE("openai/gpt-5.4-mini", "Extract keywords", A1:D10)
```

</ResponseExample>
6 changes: 3 additions & 3 deletions docs/get-started/install.mdx
@@ -40,11 +40,11 @@ To install Cellm:
If you want to use the more powerful hosted models right away, you can skip this step. The [Hosted Models](/models/hosted-models) section shows you how.
</Info>

To get started with local models, we recommend you try out the Gemma 3 4B model with quantized aware training. Gemma 3 4B is a wonderful little model that will run fine on your CPU, ensuring no data ever leaves your computer. And it's free.
To get started with local models, we recommend you try out the Gemma 4 E4B model. Gemma 4 E4B is a wonderful little model that will run fine on your CPU, ensuring no data ever leaves your computer. And it's free.

1. Download and install [Ollama](https://ollama.com/). Ollama will start after the install and automatically run whenever you start up your computer.
2. Open the Windows Terminal, type `ollama pull gemma3:4b-it-qat` and hit Enter.
3. Open Excel and type `=PROMPT("What model are you and who made you?")`. The model will respond that is is Gemma 3 and made by Google.
2. When you select an Ollama model in Cellm, it will prompt you to download it automatically. Alternatively, open the Windows Terminal, type `ollama pull gemma4:e4b` and hit Enter.
3. Open Excel and type `=PROMPT("What model are you and who made you?")`. The model will respond that it is Gemma 4 and made by Google.
</Accordion>

<Accordion icon="plug" title="Enable MCP (optional)">
6 changes: 3 additions & 3 deletions docs/get-started/quickstart.mdx
@@ -17,8 +17,8 @@ To get started, you can quickly install Cellm and a local model:
<Step title="Install Ollama">
Download and install [Ollama](https://ollama.com/) to run local AI models.
</Step>
<Step title="Download Gemma 3 model">
Open the Windows Terminal and type `ollama pull gemma3:4b-it-qat` to download the Gemma 3 4B model.
<Step title="Download Gemma 4 model">
When you select an Ollama model in Cellm, it will prompt you to download it automatically. Alternatively, open the Windows Terminal and type `ollama pull gemma4:e4b` to download the Gemma 4 E4B model.
</Step>
</Steps>

@@ -143,7 +143,7 @@ Beyond basic text processing, you can use "Function Calling" to give models acce
````

<Tip>
Gemma 3 4B does not support function calling. For function calling you must use another model, e.g. OpenAI's `gpt-5-mini`.
Gemma 4 E4B does not support function calling. For function calling you must use another model, e.g. OpenAI's `gpt-5.4-mini`.
</Tip>

## Next steps
4 changes: 2 additions & 2 deletions docs/models/choosing-model.mdx
@@ -91,12 +91,12 @@ Imagine you want to analyze customer feedback from column A. Instead of a single

3. Extract Suggestions: In column D, use a Large model to analyze the feedback and suggest improvements. You could also add relevant background information on your product directly to the prompt or to a cell that you reference.
````mdx Analyze feedback
=PROMPTMODEL("openai/gpt-4o-mini", "Analyze user feedback and suggest improvements.", B2)
=PROMPTMODEL("openai/gpt-5.4-mini", "Analyze user feedback and suggest improvements.", B2)
````

4. Extract Topics: In column E, extract relevant topics with a small model, which is efficient for simple extraction tasks.
````mdx Extract topics
=PROMPTMODEL.TOROW("openai/gpt-4o-mini", "Extract relevant software engineering topics, such as UX, Bug, Documentation, or Improvement.", B2)
=PROMPTMODEL.TOROW("openai/gpt-5.4-mini", "Extract relevant software engineering topics, such as UX, Bug, Documentation, or Improvement.", B2)
````

This approach gives you reliable results and granular control of the output format.
10 changes: 5 additions & 5 deletions docs/models/hosted-models.mdx
@@ -20,7 +20,7 @@ We split hosted models into three tiers based on their size and capabilities, ba
| Speed | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /><Icon icon="star" iconType="regular" /> |
| Intelligence | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /><Icon icon="star" iconType="regular" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /> |
| World Knowledge | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /><Icon icon="star" iconType="regular" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /><Icon icon="star" iconType="regular" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /> |
| Recommended model | Gemini 2.5 Flash Lite | Gemini 2.5 Flash | Claude Sonnet 4.5 |
| Recommended model | Gemini 3.1 Flash Lite | Gemini 3 Flash | Claude Opus 4.6 |

## Provider setup

@@ -63,7 +63,7 @@ Mistral offers a generous free tier with access to powerful models.

### OpenAI

OpenAI provides access to GPT models, including GPT-4o and GPT-4o-mini.
OpenAI provides access to GPT models, including GPT-5.4 and GPT-5.4-mini.

<Steps>
<Step title="Create an account">
@@ -79,7 +79,7 @@ OpenAI provides access to GPT models, including GPT-4o and GPT-4o-mini.
In Excel, open Cellm's ribbon menu, select the `openai` provider, click the provider icon, and paste your API key. Try a model like:

````mdx OpenAI example
=PROMPTMODEL("openai/gpt-4o-mini", "Classify sentiment as positive, neutral, or negative", A1)
=PROMPTMODEL("openai/gpt-5.4-mini", "Classify sentiment as positive, neutral, or negative", A1)
````
</Step>
</Steps>
@@ -106,7 +106,7 @@ Google Gemini offers powerful AI models with a generous free tier.
In Excel, open Cellm's ribbon menu, select the `gemini` provider, click the provider icon, and paste your API key. Try a model like:

````mdx Gemini example
=PROMPTMODEL("gemini/gemini-2.5-flash", "Extract person names from text", A1)
=PROMPTMODEL("gemini/gemini-3-flash-preview", "Extract person names from text", A1)
````
</Step>
</Steps>
@@ -136,7 +136,7 @@ Anthropic provides Claude models, known for their strong reasoning capabilities.
In Excel, open Cellm's ribbon menu, select the `anthropic` provider, click the provider icon, and paste your API key. Try a model like:

````mdx Claude example
=PROMPTMODEL("anthropic/claude-sonnet-4.5", "Analyze customer feedback", A1)
=PROMPTMODEL("anthropic/claude-sonnet-4-6", "Analyze customer feedback", A1)
````
</Step>
</Steps>
16 changes: 8 additions & 8 deletions docs/models/local-models.mdx
@@ -16,7 +16,7 @@ We can split local models into three tiers based on their size and capabilities,
| Speed | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /><Icon icon="star" iconType="regular" /> |
| Intelligence | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /><Icon icon="star" iconType="regular" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /> |
| World Knowledge | <Icon icon="star" iconType="regular" /><Icon icon="star" iconType="regular" /><Icon icon="star" iconType="regular" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /><Icon icon="star" iconType="regular" /> | <Icon icon="star" iconType="solid" /><Icon icon="star" iconType="solid" /><Icon icon="star" iconType="regular" /> |
| Recommended model | Gemma 3 4B IT QAT | Mistral Small 3.2 | qwen3-30b-a3b-instruct-2507 |
| Recommended model | Gemma 4 E4B | Gemma 4 26B | Gemma 4 31B |

<Tip>
You need a GPU for any of the medium or large models to be useful in practice. If you don't have a GPU, you can use [Hosted Models](/models/hosted-models) if small ones are insufficient.
@@ -38,29 +38,29 @@ You need to run a program on your computer that serves models to Cellm. We call

### Ollama

To get started with Ollama, we recommend you try out the Gemma 3 4B IT QAT model, which is Cellm's default local model.
To get started with Ollama, we recommend you try out the Gemma 4 E4B model, which is Cellm's default local model.

<Steps>
<Step title="Install Ollama">
Download and install [Ollama](https://ollama.com/). Ollama will start after the install and automatically run whenever you start up your computer.
</Step>
<Step title="Download the model">
Open Windows Terminal (open start menu, type `Windows Terminal`, and click `OK`), then run:
When you select an Ollama model in Cellm, it will prompt you to download it automatically. Alternatively, open Windows Terminal (open start menu, type `Windows Terminal`, and click `OK`), then run:

````bash Download Gemma 3 4B QAT
ollama pull gemma3:4b-it-qat
````bash Download Gemma 4 E4B
ollama pull gemma4:e4b
````

Wait for the download to finish.
</Step>
<Step title="Test in Excel">
In Excel, select `ollama/gemma3:4b-it-qat` from the model dropdown menu, and type:
In Excel, select `ollama/gemma4:e4b` from the model dropdown menu, and type:

````mdx Test prompt
=PROMPT("Which model are you and who made you?")
````

The model will tell you that it is called "Gemma" and made by Google DeepMind.
The model will tell you that it is called "Gemma 4" and made by Google DeepMind.
</Step>
</Steps>

@@ -133,7 +133,7 @@ If you prefer to run models via docker, both Ollama and vLLM are packaged up wit
````
</Step>
<Step title="Configure Cellm">
Start Excel and select the `openaicompatible` provider from the model drop-down on Cellm's ribbon menu. Enter the model name you want to use, e.g., `gemma3:4b-it-qat`.
Start Excel and select the `openaicompatible` provider from the model drop-down on Cellm's ribbon menu. Enter the model name you want to use, e.g., `gemma4:e4b`.

Set the Base Address to `http://localhost:11434`.
</Step>
33 changes: 33 additions & 0 deletions docs/sdk-migration.md
@@ -0,0 +1,33 @@
# SDK Migration: Anthropic, Mistral, and Gemini

## Summary

Three provider SDKs need replacing due to incompatibilities and maintainability concerns. Both community SDKs (`Anthropic.SDK`, `Mistral.SDK`) are maintained by the same author (tghamm) and have become inactive. The Gemini provider uses an OpenAI-compatible endpoint that doesn't fully support tool use schemas.

## ~~Anthropic.SDK (5.10.0)~~ ✅ DONE

Migrated to official `Anthropic` SDK (v12.11.0). The community `Anthropic.SDK` was incompatible with MEAI 10.4.x (`MissingMethodException` on `HostedMcpServerTool.AuthorizationToken`). The official SDK has native IChatClient support and accepts custom HttpClient, so the resilient HttpClient pipeline is preserved. Also fixed a bug where the entitlement check referenced `EnableAzureProvider` instead of `EnableAnthropicProvider`. Removed the `RateLimitsExceeded` exception from `RateLimiterHelpers` (was Anthropic.SDK-specific; 429 status is already handled by `retryableStatusCodes`). All 4 integration tests pass.
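The retry handling referenced above (429 responses absorbed by `retryableStatusCodes`) follows a standard retry-on-status pattern. A hedged Python sketch of the idea — the names, the status set, and the backoff policy are illustrative assumptions, not the actual Cellm pipeline:

```python
import time

# Assumed status set for illustration; the real pipeline defines its own.
RETRYABLE_STATUS_CODES = {429, 500, 502, 503}

def call_with_retries(request, max_attempts=3, base_delay=0.0):
    """Invoke a callable returning (status, body), retrying retryable statuses.

    Backs off exponentially between attempts; returns the last response
    if every attempt hits a retryable status.
    """
    for attempt in range(max_attempts):
        status, body = request()
        if status not in RETRYABLE_STATUS_CODES:
            return status, body
        time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    return status, body

# Simulated provider that rate-limits once, then succeeds.
responses = iter([(429, "rate limited"), (200, "ok")])
print(call_with_retries(lambda: next(responses)))  # (200, 'ok')
```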

## ~~Mistral.SDK (2.3.1)~~ ✅ DONE

Migrated both `AddMistralChatClient()` and `AddCellmChatClient()` to use `OpenAIClient` with custom endpoint, same pattern as DeepSeek and OpenRouter. Removed `Mistral.SDK` dependency entirely. All 4 Mistral integration tests pass (basic prompt, file reader, file search, Playwright MCP).

**Known issue: Magistral thinking models.** The OpenAI .NET SDK cannot deserialize Magistral's `thinking` content part type (`ArgumentOutOfRangeException: Unknown ChatMessageContentPartKind value: thinking`). The failure occurs at the deserialization level before `MistralThinkingBehavior` can process the response. This is a limitation of using the OpenAI SDK with Mistral's extended thinking format. Magistral models (`magistral-small-2509`, `magistral-medium-2509`) are currently broken.

## ~~Gemini (OpenAI-compatible endpoint)~~ ✅ DONE

Migrated to official `Google.GenAI` SDK (v1.6.1). The OpenAI-compatible endpoint rejected `strict: true` / `additionalProperties: false` in tool schemas. The native SDK handles tool schemas correctly. All 4 integration tests pass (basic prompt, file reader, file search, Playwright MCP).

**Tradeoff:** Google.GenAI does not support custom HttpClient injection, so HTTP-level retry/timeout from the resilience pipeline is not available for Gemini. Rate limiting (application layer) is unaffected. `GeminiTemperatureBehavior` (0-1 → 0-2 scaling) is still needed — the native SDK passes temperature as-is.
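The 0-1 → 0-2 scaling performed by `GeminiTemperatureBehavior` can be illustrated with a minimal sketch (function name and input validation are assumptions for illustration, not the actual implementation):

```python
def scale_temperature(temperature: float) -> float:
    """Map a 0-1 temperature onto Gemini's 0-2 range.

    Cellm exposes temperature on a 0-1 scale; since the native
    Google.GenAI SDK passes the value through as-is, the behavior
    must double it before the request is sent.
    """
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be within [0, 1]")
    return temperature * 2.0

print(scale_temperature(0.5))  # 1.0
```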

## Additional considerations

When switching SDKs, provider-specific behaviors and other code may need updating. Examples include but are not limited to:

- `GeminiTemperatureBehavior` — temperature scaling may differ with native SDK
- `AdditionalPropertiesBehavior` — provider-specific additional properties format may change
- `ProviderRequestHandler.UseJsonSchemaResponseFormat()` — structured output support flags
- Provider configuration classes (`SupportsJsonSchemaResponses`, `SupportsStructuredOutputWithTools`) — verify accuracy with new SDKs
- Resilient HTTP client integration — new SDKs may handle HTTP clients differently

A thorough review of all provider-specific code paths is needed during migration.
2 changes: 1 addition & 1 deletion docs/usage/writing-prompts.mdx
@@ -104,7 +104,7 @@ For advanced workflows, you might want to use different AI models for different
The first argument consists of a provider and a model name separated by a forward slash (`/`). For example, if you want to use OpenAI's cheapest model in a particular cell, you can write:

````mdx Specify model
=PROMPTMODEL("openai/gpt-4o-mini", "Rate sentiment as positive, neutral, or negative", A1)
=PROMPTMODEL("openai/gpt-5.4-mini", "Rate sentiment as positive, neutral, or negative", A1)
````

This is useful when you want to use a strong model by default but offload simple tasks to cheaper models.