Merged
1 change: 1 addition & 0 deletions docs/configuration/roles-and-permissions.md
@@ -45,6 +45,7 @@ The following quotas are supported in user roles:
| scripts_running_limit | 2.3 | Total number of _Python_ scripts run within a month: 100 means 100 script runs per month; -1 means unlimited script runs | The script run counter is reset at the beginning of every month. |
| snapshot_days | 2.1 | Retention period for snapshots in days: 180 means a storage period of 180 days; no value means an unlimited retention period | Snapshots older than the retention period are automatically removed. |
| share_limit | | Max number of users a base can be shared with: 100 means a base can be shared with 100 users | |
| ai_credit_per_user | 6.0 | Maximum AI credit quota per user per month, i.e. the number of tokens that can be consumed within a single month, converted into credits; -1 means an unlimited quota | In team mode, the total quota is shared across all members of the team. |


### Standard User Roles
86 changes: 86 additions & 0 deletions docs/installation/advanced/seatable-ai-standalone.md
@@ -0,0 +1,86 @@
# Standalone Deployment of SeaTable AI

This guide describes the standalone deployment of `seatable-ai` on a dedicated server or virtual machine.

## Prerequisites

- You have successfully installed [Docker and Docker-Compose](../basic-setup.md#install-docker-and-docker-compose-plugin)
- You have [downloaded the latest `.yml` files](../basic-setup.md#1-create-basic-structure) from the `seatable-release` GitHub repository
- The hosts running `seatable-ai` and the other SeaTable components are attached to the same private network

## SeaTable AI Configuration

The following section outlines an `.env` file with the settings needed to run `seatable-ai`.
These changes should be made inside `/opt/seatable-compose/.env`:

```ini
COMPOSE_FILE='seatable-ai-standalone.yml'
COMPOSE_PATH_SEPARATOR=','

# system settings
TIME_ZONE='Europe/Berlin'

# database
MARIADB_HOST=
MARIADB_PORT=3306
MARIADB_PASSWORD=

# redis
REDIS_HOST=
REDIS_PORT=6379
REDIS_PASSWORD=

# This private key must have the same value as the JWT_PRIVATE_KEY variable on other SeaTable nodes
JWT_PRIVATE_KEY=

# Public URL of your SeaTable server
SEATABLE_SERVER_URL=https://seatable.your-domain.com

# Cluster-internal URL of dtable-server
INNER_DTABLE_SERVER_URL=http://dtable-server:5000

# Cluster-internal URL of dtable-db
INNER_DTABLE_DB_URL=http://dtable-db:7777

# LLM
SEATABLE_AI_LLM_TYPE=openai
SEATABLE_AI_LLM_URL=
SEATABLE_AI_LLM_KEY=...
SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommended
```

!!! warning
- If you are not using password authentication for Redis, do not specify a value after the equals sign (`=`) of the `REDIS_PASSWORD` variable.
Specifying an empty string (e.g. `REDIS_PASSWORD=""`) will cause problems.

- By default, the ports of `dtable-server` (5000) and `dtable-db` (7777) are not exposed to the host. For a standalone deployment, you must expose them manually in the `.yml` file on the SeaTable Server node.
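Before bringing the stack up, it can be worth checking that no required variable was left empty. The following sketch is a hypothetical helper (not part of SeaTable) that parses `.env`-style text and reports required keys without a value; `REDIS_PASSWORD` is deliberately excluded, since it may legitimately stay unset:

```python
# Hypothetical pre-flight check -- not part of SeaTable. It parses .env-style
# text and reports required variables that are missing or empty.
REQUIRED = [
    "MARIADB_HOST", "MARIADB_PASSWORD", "REDIS_HOST",
    "JWT_PRIVATE_KEY", "SEATABLE_SERVER_URL", "SEATABLE_AI_LLM_KEY",
]  # REDIS_PASSWORD is intentionally absent: it may legitimately stay unset

def find_empty_keys(env_text: str) -> list[str]:
    """Return the REQUIRED keys that are unset or empty in env_text."""
    values = {}
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip("'\"")
    return [key for key in REQUIRED if not values.get(key)]

sample = "MARIADB_HOST=db.internal\nMARIADB_PASSWORD=\nREDIS_PASSWORD=\n"
print(find_empty_keys(sample))
```

Run it against the contents of `/opt/seatable-compose/.env` on the AI host before the first start.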

### LLM Provider Configuration

Please refer to the documentation on [configuring your LLM provider of choice](../components/seatable-ai.md#llm-provider-configuration).
These configuration details do not change depending on the deployment topology of `seatable-server` and `seatable-ai`.

### Start SeaTable AI

You can now start SeaTable AI by running the following command inside your terminal:

```bash
cd /opt/seatable-compose
docker compose up -d
```

## Configuration of SeaTable Server

Since `seatable-ai` is now running on a separate host or virtual machine, the following configuration changes must be made inside the `.env` file on the host running the `seatable-server` container:

```ini
ENABLE_SEATABLE_AI=true
SEATABLE_AI_SERVER_URL='http://seatable-ai.example.com:8888'
```

Restart the `seatable-server` service and test your SeaTable AI:

```bash
cd /opt/seatable-compose
docker compose up -d
```
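To confirm that the `seatable-server` host can actually reach `seatable-ai` on port 8888, a simple TCP check is often enough. This is a minimal sketch; the hostname is taken from the example above and must be adjusted to your environment:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hostname from the example configuration above -- adjust as needed.
print(can_reach("seatable-ai.example.com", 8888))
```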
32 changes: 32 additions & 0 deletions docs/installation/advanced/seatable-ai-token-pricing.md
@@ -0,0 +1,32 @@
# AI Token Pricing

## AI Credits

AI credits serve as an internal unit of currency for measuring AI-related usage within SeaTable.
They are directly linked to the number of tokens consumed by AI-based features, weighted by the [configured price](#pricing-configuration) of each AI model.

SeaTable supports role-based AI credit limits by configuring the `ai_credit_per_user` option on a user role.
Please refer to the documentation on [user quotas](../../configuration/roles-and-permissions.md#user-quotas) for more details.

!!! note "`ai_credit_per_user` for organization users"
AI credits are shared across all users inside a SeaTable organization. The total number of credits can be calculated by multiplying the value of `ai_credit_per_user` by the number of team users.

**Example:** Setting `ai_credit_per_user` to `2` will allow a team with 10 members to have 20 AI credits in total.
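The team calculation above can be written out explicitly; the helper name is illustrative, not a SeaTable API:

```python
def team_credit_pool(ai_credit_per_user: float, team_size: int) -> float:
    """Total monthly AI credits shared by a team (illustrative helper)."""
    if ai_credit_per_user == -1:      # -1 means an unlimited quota
        return float("inf")
    return ai_credit_per_user * team_size

print(team_credit_pool(2, 10))  # the example above: 20 credits in total
```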

## Pricing Configuration

In order to accurately track the number of AI credits used by users and organizations, you must configure token pricing inside `/opt/seatable-server/seatable/conf/dtable_web_settings.py`.
This can be achieved by configuring the `AI_PRICES` variable, which is a dictionary that maps model identifiers (e.g. `gpt-4o-mini`) to token pricing **per thousand tokens**:

```py
AI_PRICES = {
"gpt-4o-mini": {
"input_tokens_1k": 0.01827, # price / 1000 tokens
"output_tokens_1k": 0.07309 # price / 1000 tokens
},
}
```
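Based on this configuration, the credit cost of a request can be derived from its token counts. The following sketch shows the assumed calculation (per-thousand-token prices, as configured above); SeaTable's exact internal formula may differ:

```python
AI_PRICES = {
    "gpt-4o-mini": {
        "input_tokens_1k": 0.01827,   # price per 1000 input tokens
        "output_tokens_1k": 0.07309,  # price per 1000 output tokens
    },
}

def credit_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Assumed credit cost of one request, based on per-1k-token prices."""
    prices = AI_PRICES[model]
    return (input_tokens / 1000 * prices["input_tokens_1k"]
            + output_tokens / 1000 * prices["output_tokens_1k"])

print(round(credit_cost("gpt-4o-mini", 2000, 1000), 5))  # 0.10963
```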

!!! warning "Model Identifiers"
The dictionary key must match **the exact value** of the chosen AI Model, which is configured through the `SEATABLE_AI_LLM_MODEL` variable inside your `.env` file.
In case of a mismatch, AI usage will not count towards any configured credit limits!
115 changes: 115 additions & 0 deletions docs/installation/components/seatable-ai.md
@@ -0,0 +1,115 @@
# SeaTable AI Integration

<!-- md:version 6.0 -->

SeaTable AI is an extension that integrates AI functionality into SeaTable.
Deploying it allows users to execute AI-based automation steps within SeaTable.

At the time of writing, the following types of automation steps are supported:

- **Summarize**
- **Classify**
- **OCR** (Optical character recognition)
- **Extract**
- **Custom** for individual use cases

## Deployment

!!! note "SeaTable AI requires SeaTable 6.0"

The easiest way to deploy SeaTable AI is to deploy it on the same host as SeaTable Server. A standalone deployment (on a separate host or virtual machine) is explained [here](../advanced/seatable-ai-standalone.md).

### Amend the .env file

To install SeaTable AI, include `seatable-ai.yml` in the `COMPOSE_FILE` variable within your `.env` file. This instructs Docker-Compose to include the `seatable-ai` service.

Simply copy and paste (:material-content-copy:) the following code into your command line:

```bash
sed -i "s/COMPOSE_FILE='\(.*\)'/COMPOSE_FILE='\1,seatable-ai.yml'/" /opt/seatable-compose/.env
```

Then add SeaTable AI server configurations in `.env`:

```ini
ENABLE_SEATABLE_AI=true
SEATABLE_AI_SERVER_URL=http://seatable-ai:8888
```

#### LLM Provider Configuration

SeaTable AI runs its AI functions against a Large Language Model (LLM) service.

!!! note "Supported LLM Providers"

SeaTable AI supports a wide variety of LLM providers through [LiteLLM](https://docs.litellm.ai/docs) as well as any LLM services with OpenAI-compatible endpoints. Please refer to [LiteLLM's documentation](https://docs.litellm.ai/docs/providers) in case you run into issues while trying to use a specific provider.

!!! note "Model Selection"

In order to ensure the efficient use of SeaTable AI features, you need to select a **large, multimodal model**.
This requires the chosen model to support image input and recognition (e.g. for running OCR as part of automations).

The following section showcases the required configuration settings for the most popular hosted LLM services.
These must be configured inside your `.env` file:

<a id="llm-configuration"></a>
=== "OpenAI"
```ini
SEATABLE_AI_LLM_TYPE=openai
SEATABLE_AI_LLM_KEY=<your openai LLM access key>
SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommended
```
=== "Deepseek"
```ini
SEATABLE_AI_LLM_TYPE=deepseek
SEATABLE_AI_LLM_KEY=<your LLM access key>
SEATABLE_AI_LLM_MODEL=deepseek-chat # recommended
```
=== "Azure OpenAI"
```ini
SEATABLE_AI_LLM_TYPE=azure
SEATABLE_AI_LLM_URL= # your deployment url, leave blank to use default endpoint
SEATABLE_AI_LLM_KEY=<your API key>
SEATABLE_AI_LLM_MODEL=<your deployment name>
```
=== "Ollama"
```ini
SEATABLE_AI_LLM_TYPE=ollama_chat
SEATABLE_AI_LLM_URL=<your LLM endpoint>
SEATABLE_AI_LLM_KEY=<your LLM access key>
SEATABLE_AI_LLM_MODEL=<your model-id>
```
=== "HuggingFace"
```ini
SEATABLE_AI_LLM_TYPE=huggingface
SEATABLE_AI_LLM_URL=<your huggingface API endpoint>
SEATABLE_AI_LLM_KEY=<your huggingface API key>
SEATABLE_AI_LLM_MODEL=<model provider>/<model-id>
```
=== "Self-Hosted Proxy Server"
```ini
SEATABLE_AI_LLM_TYPE=proxy
SEATABLE_AI_LLM_URL=<your proxy url>
SEATABLE_AI_LLM_KEY=<your proxy virtual key> # optional
SEATABLE_AI_LLM_MODEL=<model-id>
```
=== "Other"
If you are using an LLM service with ***OpenAI-compatible endpoints***, set `SEATABLE_AI_LLM_TYPE` to `other` or `openai`, and fill in the remaining LLM settings as necessary:

```ini
SEATABLE_AI_LLM_TYPE=...
SEATABLE_AI_LLM_URL=...
SEATABLE_AI_LLM_KEY=...
SEATABLE_AI_LLM_MODEL=...
```
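Regardless of the provider, these variables ultimately drive requests against a chat-completions-style API. As a rough illustration of how the `SEATABLE_AI_LLM_*` settings map onto such a request (an assumed OpenAI-compatible payload, not SeaTable's actual client code):

```python
import json

def build_chat_request(llm_url: str, llm_key: str, llm_model: str, prompt: str):
    """Assumed shape of an OpenAI-compatible chat request built from the
    SEATABLE_AI_LLM_* settings. Illustrative only."""
    url = llm_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {llm_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": llm_model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(payload)

url, headers, body = build_chat_request(
    "https://api.example.com", "sk-...", "gpt-4o-mini", "Summarize this row."
)
print(url)  # https://api.example.com/v1/chat/completions
```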

### Download SeaTable AI image and restart

One more step is necessary to download the SeaTable AI image and restart the SeaTable service:

```bash
cd /opt/seatable-compose
docker compose up -d
```

Now SeaTable AI can be used.
4 changes: 4 additions & 0 deletions mkdocs.yml
@@ -151,6 +151,7 @@ nav:
- Our deployment approach: installation/deployment-approach.md
- Single-Node Deployment:
- SeaTable Server: installation/basic-setup.md
- SeaTable AI: installation/components/seatable-ai.md
- Python Pipeline: installation/components/python-pipeline.md
- Whiteboard: installation/components/whiteboard.md
- n8n: installation/components/n8n.md
@@ -178,6 +179,9 @@ nav:
- Webserver Security: installation/advanced/webserver-security.md
- Maintenance Mode: installation/advanced/maintenance-mode.md
- Advanced Settings for Caddy: installation/advanced/settings-caddy.md
- SeaTable AI:
- SeaTable AI (standalone): installation/advanced/seatable-ai-standalone.md
- AI Token Pricing: installation/advanced/seatable-ai-token-pricing.md
- S3 Object Storage:
- Configuration: installation/advanced/s3.md
- Migration: installation/advanced/s3-migration.md