From eff759d3a8e8839f9aabb2b0cd4bbb5c11a1b686 Mon Sep 17 00:00:00 2001
From: Junxiang Huang
Date: Mon, 21 Jul 2025 18:21:08 +0800
Subject: [PATCH 01/13] Docs for deploying SeaTable AI

---
 docs/installation/advanced/seatable-ai.md | 194 ++++++++++++++++++++++
 mkdocs.yml                                |   1 +
 2 files changed, 195 insertions(+)
 create mode 100644 docs/installation/advanced/seatable-ai.md

diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md
new file mode 100644
index 000000000..700715d90
--- /dev/null
+++ b/docs/installation/advanced/seatable-ai.md
@@ -0,0 +1,194 @@
+# SeaTable AI Integration
+
+SeaTable AI is an extension of SeaTable that provides AI functions.
+
+SeaSearch is a file indexer that is more lightweight and efficient than Elasticsearch.
+
+## Deploying SeaTable AI
+
+The easiest way to deploy SeaTable AI is together with the SeaTable server on the same host. If you need to deploy SeaTable AI standalone, follow the next section.
+
+Note: Deploying SeaTable AI requires SeaTable 5.3.
+
+### Change the .env file
+
+To install SeaTable AI, include `seatable-ai.yml` and `seasearch.yml` in the `COMPOSE_FILE` variable within your `.env` file. This instructs Docker to download the required images for SeaTable AI.
+
+Simply copy and paste (:material-content-copy:) the following code into your command line:
+
+```bash
+sed -i "s/COMPOSE_FILE='\(.*\)'/COMPOSE_FILE='\1,seatable-ai.yml,seasearch.yml'/" /opt/seatable-compose/.env
+```
+
+Then add the SeaTable AI and SeaSearch configurations to `.env`:
+
+```env
+ENABLE_SEATABLE_AI=true
+SEATABLE_AI_SERVER_URL=http://seatable-ai:8888
+
+SEATABLE_AI_LLM_TYPE=openai
+SEATABLE_AI_LLM_KEY=
+SEATABLE_AI_LLM_MODEL=gpt-4.1
+
+ENABLE_SEARCH=true
+INIT_SS_ADMIN_USER=
+INIT_SS_ADMIN_PASSWORD=
+SEASEARCH_SERVER_URL=http://seasearch:4080
+SEASEARCH_TOKEN= # get from `echo -n 'INIT_SS_ADMIN_USER:INIT_SS_ADMIN_PASSWORD' | base64`
+```
+
+!!! 
note "Details for `SEASEARCH_TOKEN`"
    Your authorization token (`SEASEARCH_TOKEN`) is the Base64 encoding of the `INIT_SS_ADMIN_USER` and `INIT_SS_ADMIN_PASSWORD` values defined in `.env`, joined with a colon. It is used to authenticate calls to the SeaSearch API:

    ```bash
    echo -n 'username:password' | base64

    # example output
    YWRtaW46YWRtaW5fcGFzc3dvcmQ=
    ```

!!! tip "Use custom models not from OpenAI"
    SeaTable AI also supports large language models (LLMs) that are not provided by OpenAI. However, the selected model service must be compatible with the OpenAI API. To use custom models, make the following changes in `.env`:

    ```env
    SEATABLE_AI_LLM_TYPE=other
    SEATABLE_AI_LLM_URL=https://api.openai.com/v1 # your LLM service endpoint
    SEATABLE_AI_LLM_KEY= # your API key
    SEATABLE_AI_LLM_MODEL=gpt-4.1 # your custom model id
    ```

### Download SeaTable AI and restart

One more step is necessary: download the SeaTable AI image and restart the SeaTable service.

```bash
cd /opt/seatable-compose
docker compose down && docker compose up -d
```

Now SeaTable AI can be used.

## Deploy SeaTable AI standalone

Deploying a separate SeaTable AI is simple. Get seatable-release from GitHub as described in the SeaTable server installation and use only `seatable-ai.yml` and `seasearch.yml`.

### Update `seatable-ai.yml` and expose the service port

Update your `seatable-ai.yml` to expose the service port:

```yml
services:
  seatable-ai:
    ...
    ports:
      - "8888:8888"
    ... 
+``` + +### Update `.env` in the host will deploy SeaTable AI + +Update your `.env`, that it looks like this and add/update the values according to your needs: + +```env +COMPOSE_FILE='seatable-ai,seasearch.yml' +COMPOSE_PATH_SEPARATOR=',' + +# system settings +TIME_ZONE='Europe/Berlin' + +# database +SEATABLE_MYSQL_DB_HOST= +SEATABLE_MYSQL_DB_PORT=3306 +SEATABLE_MYSQL_DB_USER= +SEATABLE_MYSQL_DB_PASSWORD= + +# redis +REDIS_HOST= +REDIS_PORT=6379 +REDIS_PASSWORD= + +# For SeaTable +JWT_PRIVATE_KEY= +SEATABLE_SERVER_URL=https://seatable.your-domain.com # dtable-web's URL + +## dtable-server's url, `http://dtable-inner-proxy/dtable-server` for cluster +INNER_DTABLE_SERVER_URL=https://seatable.your-domain.com/dtable-server/ + +## dtable-db's url, `http://dtable-inner-proxy/dtable-db` for cluster +INNER_DTABLE_DB_URL=https://seatable.your-domain.com/dtable-db/ + +# LLM +SEATABLE_AI_LLM_TYPE=openai +SEATABLE_AI_LLM_KEY= +SEATABLE_AI_LLM_MODEL=gpt-4.1 + +# SeaSearch +ENABLE_SEARCH=true +INIT_SS_ADMIN_USER= +INIT_SS_ADMIN_PASSWORD= +SEASEARCH_SERVER_URL=http://seasearch:4080 +SEASEARCH_TOKEN= # get from `echo -n 'INIT_SS_ADMIN_USER:INIT_SS_ADMIN_PASSWORD' | base64` +``` + +!!! warning + - `JWT_PRIVATE_KEY`, same as the `JWT_PRIVATE_KEY` field in SeaTable `.env` file. + + - If Redis has no REDIS_PASSWORD, leave it as empty after "=", do not use empty string (like REDIS_PASSWORD="") + +!!! note "Details for `SEASEARCH_TOKEN`" + Get your authorization token (SEASEARCH_TOKEN) by base64 code consist of INIT_SS_ADMIN_USER and INIT_SS_ADMIN_PASSWORD defined in .env firstly, which is used to authorize when calling the SeaSearch API: + + ```bash + echo -n 'username:password' | base64 + + # example output + YWRtaW46YWRtaW5fcGFzc3dvcmQ= + ``` + +!!! tip "Use the custom models not from OpenAI Ltc." + SeaTable AI supports users to use large language models (LLM) that are not provided by OpenAI Ltd. 
However, the selected model service must be compatible with the OpenAI API. To use custom models, make the following changes in `.env`:

    ```env
    SEATABLE_AI_LLM_TYPE=other
    SEATABLE_AI_LLM_URL=https://api.openai.com/v1 # your LLM service endpoint
    SEATABLE_AI_LLM_KEY= # your API key
    SEATABLE_AI_LLM_MODEL=gpt-4.1 # your custom model id
    ```

Execute `docker compose up -d` to fire up your separate SeaTable AI.

### Configuration of the SeaTable server

SeaTable must know where to reach SeaTable AI. Add the SeaTable AI configuration to the `.env` file on the host where SeaTable is deployed:

```py
ENABLE_SEATABLE_AI = True
SEATABLE_AI_SERVER_URL = 'http://seatable-ai.example.com:8888'
```

Restart the SeaTable service and test your SeaTable AI:

```bash
docker compose down && docker compose up -d
```

## SeaTable AI directory structure

`/opt/seatable-server`

This is the placeholder for shared volumes. You may elect to store certain persistent information outside of a container; in our case, we keep various log files outside. This allows you to rebuild containers easily without losing important information.

* /opt/seatable-server/conf: This is the directory for SeaTable AI configuration files.
* /opt/seatable-server/logs: This is the directory for SeaTable AI logs.
* /opt/seatable-server/ai-data/assets: This is the directory for SeaTable AI assets.
* /opt/seatable-server/ai-data/index-info: This is the directory for the SeaTable AI index.

`/opt/seasearch-data`

* /opt/seasearch-data/logs: This is the directory for SeaSearch logs.

## Database used by SeaTable AI

SeaTable AI uses several database tables, such as `dtable_db.ai_assistant`, to store records. 
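The `SEASEARCH_TOKEN` derivation described above can also be scripted. This is a minimal illustrative sketch using only the Python standard library — the helper name is our own, not part of SeaTable; it simply mirrors the documented `echo -n 'user:password' | base64` command:

```python
import base64


def seasearch_token(admin_user: str, admin_password: str) -> str:
    """Build the SEASEARCH_TOKEN value: the Base64 encoding of
    'INIT_SS_ADMIN_USER:INIT_SS_ADMIN_PASSWORD', equivalent to
    `echo -n 'user:password' | base64`."""
    raw = f"{admin_user}:{admin_password}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")


# Matches the example output shown in the documentation above.
print(seasearch_token("admin", "admin_password"))
# -> YWRtaW46YWRtaW5fcGFzc3dvcmQ=
```

The resulting string is what goes into `SEASEARCH_TOKEN=` in `.env`.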
diff --git a/mkdocs.yml b/mkdocs.yml index 9d07f9961..fb1ee1a45 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -188,6 +188,7 @@ nav: - Python Pipeline Workflow: installation/advanced/python-pipeline-workflow.md - MariaDB (standalone): installation/advanced/database-standalone.md - Seafile (external): installation/advanced/seafile.md + - SeaTable AI: installation/advanced/seatable-ai.md - Cluster Deployment: - Introduction: installation/cluster/introduction.md From b3ba824fd0fc42a94c7748ccf0388c47db941f48 Mon Sep 17 00:00:00 2001 From: Junxiang Huang Date: Tue, 22 Jul 2025 17:10:38 +0800 Subject: [PATCH 02/13] update descriptions of using custom llm --- docs/installation/advanced/seatable-ai.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md index 700715d90..a37a6778e 100644 --- a/docs/installation/advanced/seatable-ai.md +++ b/docs/installation/advanced/seatable-ai.md @@ -48,7 +48,7 @@ SEASEARCH_TOKEN= # get from `echo -n 'INIT_SS_ADMIN_USER:INIT_SS_ADMIN_PASSWORD ``` !!! tip "Use the custom models not from OpenAI Ltc." - SeaTable AI supports users to use large language models (LLM) that are not provided by OpenAI Ltd. However, the model service selected by the user needs to be compatible with the OpenAI API. To use custom models, please make the following changes in `.env`: + SeaTable AI supports users to use **multimodality large language models** (**multimodality LLM**, i.e., image recognition support is necessary) that are not provided by OpenAI Ltd. However, the model service selected by the user needs to be compatible with the OpenAI API. To use custom models, please make the following changes in `.env`: ```env SEATABLE_AI_LLM_TYPE=other @@ -146,7 +146,7 @@ SEASEARCH_TOKEN= # get from `echo -n 'INIT_SS_ADMIN_USER:INIT_SS_ADMIN_PASSWORD ``` !!! tip "Use the custom models not from OpenAI Ltc." 
- SeaTable AI supports users to use large language models (LLM) that are not provided by OpenAI Ltd. However, the model service selected by the user needs to be compatible with the OpenAI API. To use custom models, please make the following changes in `.env`:
+ SeaTable AI supports using custom **multimodal large language models** (**multimodal LLMs**; image recognition support is necessary) that are not provided by OpenAI. However, the selected model service must be compatible with the OpenAI API. To use custom models, make the following changes in `.env`:

    ```env
    SEATABLE_AI_LLM_TYPE=other

From afcd9be4b62cf6bd3b98c3a1de08ba7064cabb8b Mon Sep 17 00:00:00 2001
From: Junxiang Huang
Date: Wed, 23 Jul 2025 11:34:01 +0800
Subject: [PATCH 03/13] update seatable-ai standlone deployment

---
 docs/installation/advanced/seatable-ai.md | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md
index a37a6778e..0b7117d5e 100644
--- a/docs/installation/advanced/seatable-ai.md
+++ b/docs/installation/advanced/seatable-ai.md
@@ -70,27 +70,15 @@

 ## Deploy SeaTable AI standalone

-The deployment of a separate SeaTable AI is simple. Get seatable-release from github like described in the installation of seatable server and only use `seatable-ai.yml` and `seasearch.yml`.
+Deploying a separate SeaTable AI is simple. Get seatable-release from GitHub as described in the SeaTable server installation and use only `seatable-ai-standlone.yml` and `seasearch.yml`.

-### Update `seatable-ai.yml` and expose service port
-
-Update your `seatable-ai.yml` and expose service port:
-
-```yml
-services:
-  seatable-ai:
-    ...
-    ports:
-      - "8888:8888"
-    ... 
-``` ### Update `.env` in the host will deploy SeaTable AI Update your `.env`, that it looks like this and add/update the values according to your needs: ```env -COMPOSE_FILE='seatable-ai,seasearch.yml' +COMPOSE_FILE='seatable-ai-standlone.yml,seasearch.yml' COMPOSE_PATH_SEPARATOR=',' # system settings From 164df82128cbe27eae611b589f27e987a3d997c8 Mon Sep 17 00:00:00 2001 From: Junxiang Huang Date: Mon, 8 Sep 2025 10:35:17 +0800 Subject: [PATCH 04/13] update seatable-ai deployment --- docs/installation/advanced/seatable-ai.md | 107 +++++++++++++++++----- 1 file changed, 83 insertions(+), 24 deletions(-) diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md index 0b7117d5e..fc9856cc9 100644 --- a/docs/installation/advanced/seatable-ai.md +++ b/docs/installation/advanced/seatable-ai.md @@ -8,7 +8,7 @@ SeaSearch, a file indexer with more lightweight and efficiency than Elasticsearc The easiest way to deployment SeaTable AI is to deploy it with SeaTable server on the same host. If in some situations, you need to deployment SeaTable AI standalone, you can follow the next section. -Note: Deploy SeaTable AI requires SeaTable 5.3. +!!! note "Deploy SeaTable AI requires SeaTable 6.0" ### Change the .env file @@ -20,16 +20,12 @@ Simply copy and paste (:material-content-copy:) the following code into your com sed -i "s/COMPOSE_FILE='\(.*\)'/COMPOSE_FILE='\1,seatable-ai.yml,seasearch.yml'/" /opt/seatable-compose/.env ``` -Then add SeaTable AI and SeaSearch configurations in `.env`: +Then add SeaTable AI server and SeaSearch configurations in `.env`: ```env ENABLE_SEATABLE_AI=true SEATABLE_AI_SERVER_URL=http://seatable-ai:8888 -SEATABLE_AI_LLM_TYPE=openai -SEATABLE_AI_LLM_KEY= -SEATABLE_AI_LLM_MODEL=gpt-4.1 - ENABLE_SEARCH=true INIT_SS_ADMIN_USER= INIT_SS_ADMIN_PASSWORD= @@ -47,17 +43,67 @@ SEASEARCH_TOKEN= # get from `echo -n 'INIT_SS_ADMIN_USER:INIT_SS_ADMIN_PASSWORD YWRtaW46YWRtaW5fcGFzc3dvcmQ= ``` -!!! 
tip "Use the custom models not from OpenAI Ltc."
-    SeaTable AI supports users to use **multimodality large language models** (**multimodality LLM**, i.e., image recognition support is necessary) that are not provided by OpenAI Ltd. However, the model service selected by the user needs to be compatible with the OpenAI API. To use custom models, please make the following changes in `.env`:
+SeaTable AI provides its AI functions in conjunction with a Large Language Model (LLM) service. Therefore, for SeaTable AI to work properly, you also need to add the LLM configuration to `.env`:
+
+
=== "OpenAI"
    ```
    SEATABLE_AI_LLM_TYPE=openai
    SEATABLE_AI_LLM_KEY=
    SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommended
    ```
=== "Deepseek"
    ```
    SEATABLE_AI_LLM_TYPE=deepseek
    SEATABLE_AI_LLM_KEY=
    SEATABLE_AI_LLM_MODEL=deepseek-chat # recommended
    ```
=== "Azure OpenAI"
    ```
    SEATABLE_AI_LLM_TYPE=azure
    SEATABLE_AI_LLM_URL= # your deployment URL; leave blank to use the default endpoint
    SEATABLE_AI_LLM_KEY=
    SEATABLE_AI_LLM_MODEL=
    ```
=== "Ollama"
    ```
    SEATABLE_AI_LLM_TYPE=ollama
    SEATABLE_AI_LLM_URL=
    SEATABLE_AI_LLM_KEY=
    SEATABLE_AI_LLM_MODEL=
    ```
=== "HuggingFace"
    ```
    SEATABLE_AI_LLM_TYPE=huggingface
    SEATABLE_AI_LLM_URL=
    SEATABLE_AI_LLM_KEY=
    SEATABLE_AI_LLM_MODEL=/
    ```
=== "Self-proxy Server"
    ```
    SEATABLE_AI_LLM_TYPE=proxy
    SEATABLE_AI_LLM_URL=
    SEATABLE_AI_LLM_KEY= # optional
    SEATABLE_AI_LLM_MODEL=
    ```
=== "Other"
    SeaTable AI utilizes [LiteLLM](https://docs.litellm.ai/docs/) to interact with LLM services. For a complete list of supported LLM providers, please refer to [this documentation](https://docs.litellm.ai/docs/providers). Then fill in the following fields in your `.env`:

    ```
    SEATABLE_AI_LLM_TYPE=... 
+ SEATABLE_AI_LLM_URL=... + SEATABLE_AI_LLM_KEY=... + SEATABLE_AI_LLM_MODEL=... + ``` + + For example, if you are using a LLM service with ***OpenAI-compatible endpoints***, you should set `SEATABLE_AI_LLM_TYPE` to `other` or `openai`, and set other LLM configuration items accurately. + +!!! note "About model selection" + + SeaTable AI supports using large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) or large model services with OpenAI-compatible endpoints. Therefore, SeaTable AI is compatible with most custom large model services except the default model (*gpt-4.1*), but in order to ensure the normal use of SeaTable AI features, you need to select a **multimodal large model** (such as supporting image input and recognition) -### Download SeaTable AI and restart + +### Download SeaTable AI image and restart One more step is necessary to download the SeaTable AI image and restart the SeaTable service. @@ -72,7 +118,6 @@ Now SeaTable AI can be used. The deployment of a separate SeaTable AI is simple. Get seatable-release from github like described in the installation of seatable server and only use `seatable-ai-standlone.yml` and `seasearch.yml`. - ### Update `.env` in the host will deploy SeaTable AI Update your `.env`, that it looks like this and add/update the values according to your needs: @@ -107,7 +152,8 @@ INNER_DTABLE_DB_URL=https://seatable.your-domain.com/dtable-db/ # LLM SEATABLE_AI_LLM_TYPE=openai -SEATABLE_AI_LLM_KEY= +SEATABLE_AI_LLM_URL= +SEATABLE_AI_LLM_KEY=... SEATABLE_AI_LLM_MODEL=gpt-4.1 # SeaSearch @@ -133,15 +179,11 @@ SEASEARCH_TOKEN= # get from `echo -n 'INIT_SS_ADMIN_USER:INIT_SS_ADMIN_PASSWORD YWRtaW46YWRtaW5fcGFzc3dvcmQ= ``` -!!! tip "Use the custom models not from OpenAI Ltc." - SeaTable AI supports users to use custom **multimodality large language models** (**multimodality LLM**, i.e., image recognition support is necessary) that are not provided by OpenAI Ltd. 
However, the model service selected by the user needs to be compatible with the OpenAI API. To use custom models, please make the following changes in `.env`: +!!! note "About model selection and LLM configurations" - ```env - SEATABLE_AI_LLM_TYPE=other - SEATABLE_AI_LLM_URL=https://api.openai.com/v1 # your LLM service endpoint - SEATABLE_AI_LLM_KEY= # your API key - SEATABLE_AI_LLM_MODEL=gpt-4.1 # your custom model id - ``` + SeaTable AI supports using large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) or large model services with OpenAI-compatible endpoints. Therefore, SeaTable AI is compatible with most custom large model services except the default model (*gpt-4.1*), but in order to ensure the normal use of SeaTable AI features, you need to select a **multimodal large model** (such as supporting image input and recognition). + + We also provide some [reference configurations](#llm-configuration) for the LLM service provider in this manual (it is irrelevant to whether SeaTable AI is deployed standalone). You can also adjust these configurations based on your actual situation. Execute `docker compose up -d` to fire up your separate SeaTable AI. @@ -162,6 +204,23 @@ Restart seatable service and test your SeaTable AI. docker compose down && docker compose up -d ``` +## Advanced operations + +### Context management + +You can manage SeaTable AI's context policies by modifying `/opt/seatable-server/seatable/conf/seatable_ai_settings.py`: + +```py +# The maximum number of entries in the context record, default is 10. When set to 0, the entire history of the current session will be read +CONTEXT_WINDOW_LIMIT = 10 + +# The validity time (hour) of tool calls' history in the context within CONTEXT_WINDOW_LIMIT. 
When set to 0, all tool calls' history will be loaded into the context +CONTEXT_TOOLS_VALID_TIME = 24 # hour + +# The validity time (hour) of common conversations' history (user input and assistant output) in the context within CONTEXT_WINDOW_LIMIT. When set to 0, all common conversations' history will be loaded into the context +CONTEXT_CONVERSATION_VALID_TIME = 168 # hour +``` + ## SeaTable AI directory structure `/opt/seatable-server` From 905713e55a1d64f69f9920e427fb6c94c74e2ea0 Mon Sep 17 00:00:00 2001 From: Junxiang Huang Date: Mon, 8 Sep 2025 10:59:13 +0800 Subject: [PATCH 05/13] update seatable-ai deployment --- docs/installation/advanced/seatable-ai.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md index fc9856cc9..a3cfe6569 100644 --- a/docs/installation/advanced/seatable-ai.md +++ b/docs/installation/advanced/seatable-ai.md @@ -1,5 +1,7 @@ # SeaTable AI Integration + + SeaTable AI is an extension of SeaTable that providing AI functions. SeaSearch, a file indexer with more lightweight and efficiency than Elasticsearch. 
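The context policies configured in `seatable_ai_settings.py` above (a window limit plus separate validity windows for tool calls and conversations) can be illustrated with a small sketch. This is only an illustrative model of how the three settings interact — the helper and entry format are hypothetical, not SeaTable AI's actual implementation:

```python
from datetime import datetime, timedelta

# Illustrative defaults mirroring seatable_ai_settings.py above.
CONTEXT_WINDOW_LIMIT = 10
CONTEXT_TOOLS_VALID_TIME = 24          # hours; 0 means no expiry
CONTEXT_CONVERSATION_VALID_TIME = 168  # hours; 0 means no expiry


def build_context(history, now):
    """history: list of dicts {'kind': 'tool' | 'conversation', 'time': datetime},
    oldest first. Returns the entries that would be loaded into the context:
    at most the last CONTEXT_WINDOW_LIMIT entries, each still within its
    kind-specific validity window."""
    recent = history if CONTEXT_WINDOW_LIMIT == 0 else history[-CONTEXT_WINDOW_LIMIT:]
    kept = []
    for entry in recent:
        limit = (CONTEXT_TOOLS_VALID_TIME if entry["kind"] == "tool"
                 else CONTEXT_CONVERSATION_VALID_TIME)
        if limit == 0 or now - entry["time"] <= timedelta(hours=limit):
            kept.append(entry)
    return kept


now = datetime(2025, 9, 12, 12, 0)
history = [
    {"kind": "tool", "time": now - timedelta(hours=30)},         # older than 24 h
    {"kind": "conversation", "time": now - timedelta(hours=2)},  # recent message
]
print(len(build_context(history, now)))  # the expired tool call is dropped -> 1
```

Setting any of the three values to `0` disables the corresponding cutoff, as described in the comments above.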
@@ -130,10 +132,9 @@ COMPOSE_PATH_SEPARATOR=',' TIME_ZONE='Europe/Berlin' # database -SEATABLE_MYSQL_DB_HOST= -SEATABLE_MYSQL_DB_PORT=3306 -SEATABLE_MYSQL_DB_USER= -SEATABLE_MYSQL_DB_PASSWORD= +MARIADB_HOST= +MARIADB_PORT=3306 +MARIADB_PASSWORD= # redis REDIS_HOST= From 3827cc763b37d4b70d66b12384aa17a478a12d2b Mon Sep 17 00:00:00 2001 From: Junxiang Huang Date: Fri, 12 Sep 2025 16:39:20 +0800 Subject: [PATCH 06/13] add DISABLE_CONTEXT --- docs/installation/advanced/seatable-ai.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md index a3cfe6569..0a5d87cc4 100644 --- a/docs/installation/advanced/seatable-ai.md +++ b/docs/installation/advanced/seatable-ai.md @@ -212,6 +212,9 @@ docker compose down && docker compose up -d You can manage SeaTable AI's context policies by modifying `/opt/seatable-server/seatable/conf/seatable_ai_settings.py`: ```py +# If you would like to disable context, set this variable to True +DISABLE_CONTEXT = False + # The maximum number of entries in the context record, default is 10. 
When set to 0, the entire history of the current session will be read CONTEXT_WINDOW_LIMIT = 10 From 489662859070c283d684d696ffd2081871cb7c03 Mon Sep 17 00:00:00 2001 From: Junxiang Huang Date: Fri, 12 Sep 2025 17:57:50 +0800 Subject: [PATCH 07/13] support custom temperature --- docs/installation/advanced/seatable-ai.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md index 0a5d87cc4..73dd2f320 100644 --- a/docs/installation/advanced/seatable-ai.md +++ b/docs/installation/advanced/seatable-ai.md @@ -225,6 +225,15 @@ CONTEXT_TOOLS_VALID_TIME = 24 # hour CONTEXT_CONVERSATION_VALID_TIME = 168 # hour ``` +### Custom LLM parameters + +SeaTable AI supports customizing the following LLM parameters by modifying `/opt/seatable-server/seatable/conf/seatable_ai_settings.py`: + +- LLM_TEMPERATURE: Temperature is a key floating-point parameter (**ranging from 0 to 1**) in LLM that controls the randomness (creativity) and determinism of generated text. Lower temperature yields more accurate results. + +!!! warning "Temperature for ***GPT-5*** series model" + GPT-5 series models(including ***gpt-5***, ***gpt-5-mini***, ***gpt-5-nano***, and ***gpt-5-chat***) no longer support custom temperature values and only receive `temperature=1`. If you would like to use ***GPT-5*** series model, please set `LLM_TEMPERATURE = 1`. 
+ ## SeaTable AI directory structure `/opt/seatable-server` From e65d71e3a6c3342bb05ab7c9c6969dc47e2d43f9 Mon Sep 17 00:00:00 2001 From: Junxiang Huang Date: Tue, 16 Sep 2025 10:59:53 +0800 Subject: [PATCH 08/13] update seatable-ai docs --- docs/configuration/roles-and-permissions.md | 1 + docs/installation/advanced/seatable-ai.md | 77 ++++++++------------- 2 files changed, 30 insertions(+), 48 deletions(-) diff --git a/docs/configuration/roles-and-permissions.md b/docs/configuration/roles-and-permissions.md index 39601927f..2bc44ae6c 100644 --- a/docs/configuration/roles-and-permissions.md +++ b/docs/configuration/roles-and-permissions.md @@ -45,6 +45,7 @@ The following quotas are supported in user roles: | scripts_running_limit | 2.3 | Total number of _Python_ scripts run within a month: 100 means 100 script runs per month; -1 means unlimited script runs | The script run counter is reset at the beginning of every month. | | snapshot_days | 2.1 | Retention period for snapshots in days: 180 means a storage period of 180 days; no value means an unlimited retention period | Snapshots older than the retention period are automatically removed. | | share_limit | | Max number of users a base can be shared with: 100 means a base can be shared with 100 users | | +| monthly_ai_credit_per_user | 6.0 | The maximum AI quota allowed per user per month (i.e., the maximum amount of tokens that can be used in a single month, converted into an amount. In team mode, the total quota within the team will be shared). `-1` means unlimited quota. | | ### Standard User Roles diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md index 73dd2f320..3792e5996 100644 --- a/docs/installation/advanced/seatable-ai.md +++ b/docs/installation/advanced/seatable-ai.md @@ -4,8 +4,6 @@ SeaTable AI is an extension of SeaTable that providing AI functions. -SeaSearch, a file indexer with more lightweight and efficiency than Elasticsearch. 
- ## Deployment SeaTable AI The easiest way to deployment SeaTable AI is to deploy it with SeaTable server on the same host. If in some situations, you need to deployment SeaTable AI standalone, you can follow the next section. @@ -14,37 +12,21 @@ The easiest way to deployment SeaTable AI is to deploy it with SeaTable server o ### Change the .env file -To install SeaTable AI, include `seatable-ai.yml` and `seasearch.yml`in the `COMPOSE_FILE` variable within your `.env` file. This instructs Docker to download the required images for SeaTable AI. +To install SeaTable AI, include `seatable-ai.yml` in the `COMPOSE_FILE` variable within your `.env` file. This instructs Docker to download the required images for SeaTable AI. Simply copy and paste (:material-content-copy:) the following code into your command line: ```bash -sed -i "s/COMPOSE_FILE='\(.*\)'/COMPOSE_FILE='\1,seatable-ai.yml,seasearch.yml'/" /opt/seatable-compose/.env +sed -i "s/COMPOSE_FILE='\(.*\)'/COMPOSE_FILE='\1,seatable-ai.yml'/" /opt/seatable-compose/.env ``` -Then add SeaTable AI server and SeaSearch configurations in `.env`: +Then add SeaTable AI server configurations in `.env`: ```env ENABLE_SEATABLE_AI=true SEATABLE_AI_SERVER_URL=http://seatable-ai:8888 - -ENABLE_SEARCH=true -INIT_SS_ADMIN_USER= -INIT_SS_ADMIN_PASSWORD= -SEASEARCH_SERVER_URL=http://seasearch:4080 -SEASEARCH_TOKEN= # get from `echo -n 'INIT_SS_ADMIN_USER:INIT_SS_ADMIN_PASSWORD' | base64` ``` -!!! note "Details for `SEASEARCH_TOKEN`" - Get your authorization token (SEASEARCH_TOKEN) by base64 code consist of INIT_SS_ADMIN_USER and INIT_SS_ADMIN_PASSWORD defined in .env firstly, which is used to authorize when calling the SeaSearch API: - - ```bash - echo -n 'username:password' | base64 - - # example output - YWRtaW46YWRtaW5fcGFzc3dvcmQ= - ``` - SeaTable AI will use AI functions in conjunction with the Large Language Model (LLM) service. 
Therefore, in order for SeaTable AI to work properly, you also need to add the LLM configuration to `.env`:

@@ -102,7 +84,7 @@ SeaTable AI will use AI functions in conjunction with the Large Language Model (

 !!! note "About model selection"

-    SeaTable AI supports using large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) or large model services with OpenAI-compatible endpoints. Therefore, SeaTable AI is compatible with most custom large model services except the default model (*gpt-4.1*), but in order to ensure the normal use of SeaTable AI features, you need to select a **multimodal large model** (such as supporting image input and recognition)
+    SeaTable AI supports large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) as well as large model services with OpenAI-compatible endpoints. SeaTable AI is therefore compatible with most custom large model services, but to ensure that all SeaTable AI features work properly, you need to select a **multimodal large model** (one that supports image input and recognition, for example).

 ### Download SeaTable AI image and restart

@@ -118,14 +100,14 @@ Now SeaTable AI can be used.

 ## Deploy SeaTable AI standalone

-The deployment of a separate SeaTable AI is simple. Get seatable-release from github like described in the installation of seatable server and only use `seatable-ai-standlone.yml` and `seasearch.yml`.
+Deploying a separate SeaTable AI is simple. Get seatable-release from GitHub as described in the SeaTable server installation and use only `seatable-ai-standlone.yml`. 
### Update `.env` on the host that will deploy SeaTable AI

Update your `.env` so that it looks like this, and add or adjust the values according to your needs:

```env
-COMPOSE_FILE='seatable-ai-standlone.yml,seasearch.yml'
+COMPOSE_FILE='seatable-ai-standlone.yml'
COMPOSE_PATH_SEPARATOR=','

# system settings
 # LLM
 SEATABLE_AI_LLM_TYPE=openai
 SEATABLE_AI_LLM_URL=
 SEATABLE_AI_LLM_KEY=...
-SEATABLE_AI_LLM_MODEL=gpt-4.1
-
-# SeaSearch
-ENABLE_SEARCH=true
-INIT_SS_ADMIN_USER=
-INIT_SS_ADMIN_PASSWORD=
-SEASEARCH_SERVER_URL=http://seasearch:4080
-SEASEARCH_TOKEN= # get from `echo -n 'INIT_SS_ADMIN_USER:INIT_SS_ADMIN_PASSWORD' | base64`
+SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommended
```

!!! warning
-!!! note "Details for `SEASEARCH_TOKEN`"
-    Get your authorization token (SEASEARCH_TOKEN) by base64 code consist of INIT_SS_ADMIN_USER and INIT_SS_ADMIN_PASSWORD defined in .env firstly, which is used to authorize when calling the SeaSearch API:
-
-    ```bash
-    echo -n 'username:password' | base64
-
-    # example output
-    YWRtaW46YWRtaW5fcGFzc3dvcmQ=
-    ```
-
!!! note "About model selection and LLM configurations"

    SeaTable AI supports large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) as well as large model services with OpenAI-compatible endpoints. 
SeaTable AI is therefore compatible with most custom large model services, but to ensure that all SeaTable AI features work properly, you need to select a **multimodal large model** (one that supports image input and recognition, for example).

    We also provide some [reference configurations](#llm-configuration) for LLM service providers in this manual (regardless of whether SeaTable AI is deployed standalone). You can adjust these configurations based on your actual situation.

Execute `docker compose up -d` to fire up your separate SeaTable AI.

### Custom LLM parameters

SeaTable AI supports customizing the following LLM parameters by modifying `/opt/seatable-server/seatable/conf/seatable_ai_settings.py`:

-- LLM_TEMPERATURE: Temperature is a key floating-point parameter (**ranging from 0 to 1**) in LLM that controls the randomness (creativity) and determinism of generated text. Lower temperature yields more accurate results.
+- **LLM_TEMPERATURE**: Temperature is a key floating-point parameter (**ranging from 0 to 1**) of an LLM that controls the randomness (creativity) and determinism of the generated text. A lower temperature yields more accurate results.

!!! warning "Temperature for ***GPT-5*** series models"
    GPT-5 series models (including ***gpt-5***, ***gpt-5-mini***, ***gpt-5-nano***, and ***gpt-5-chat***) no longer support custom temperature values and only accept `temperature=1`. If you would like to use a ***GPT-5*** series model, please set `LLM_TEMPERATURE = 1`.

### Token usage and fee statistics

SeaTable AI supports enabling token usage and fee statistics (viewable when the user moves the mouse over the avatar).

1. Add the following content to `/opt/seatable-server/seatable/conf/dtable_web_settings.py` to enable token usage and fee statistics:

    ```py
    AI_PRICES = {
        "your_model_id": { # your model name, same as SEATABLE_AI_LLM_MODEL
            "input_tokens_1k": 0.01827, # price / 1000 tokens
            "output_tokens_1k": 0.07309 # price / 1000 tokens
        },
    }
    ```

2. 
Refer to the management of [roles and permissions](../../configuration/roles-and-permissions.md#user-quotas) to specify `monthly_ai_credit_per_user` (`-1` means unlimited); the unit should be the same as in `AI_PRICES`.

!!! note "`monthly_ai_credit_per_user` for organization users"
    For organizational team users, `monthly_ai_credit_per_user` applies to the entire team. For example, if `monthly_ai_credit_per_user` is set to `2` (in dollars, for example) and the team has 10 members, all members of the team share a total quota of 20.

## SeaTable AI directory structure

`/opt/seatable-server`

This is the placeholder for shared volumes. You may elect to store certain persistent information outside of a container; in our case, we keep various log files outside. This allows you to rebuild containers easily without losing important information.

* /opt/seatable-server/conf: This is the directory for SeaTable AI configuration files.
* /opt/seatable-server/logs: This is the directory for SeaTable AI logs.
* /opt/seatable-server/ai-data/assets: This is the directory for SeaTable AI assets.
* /opt/seatable-server/ai-data/index-info: This is the directory for the SeaTable AI index.

-`/opt/seasearch-data`
-
-* /opt/seasearch-data/logs: This is the directory for SeaSearch logs.

## Database used by SeaTable AI

SeaTable AI uses several database tables, such as `dtable_db.ai_assistant`, to store records.

From 7d03d64370bb2efaf820d2ca462f6a0ea0a148be Mon Sep 17 00:00:00 2001
From: Junxiang Huang
Date: Tue, 16 Sep 2025 11:21:27 +0800
Subject: [PATCH 09/13] remove no-merged content

---
 docs/configuration/roles-and-permissions.md |  2 +-
 docs/installation/advanced/seatable-ai.md   | 35 +++------------------
 2 files changed, 5 insertions(+), 32 deletions(-)

diff --git a/docs/configuration/roles-and-permissions.md b/docs/configuration/roles-and-permissions.md
index 2bc44ae6c..83ea86264 100644
--- a/docs/configuration/roles-and-permissions.md
+++ b/docs/configuration/roles-and-permissions.md
@@ -45,7 +45,7 @@ The following quotas are supported in user roles:
 | scripts_running_limit | 2.3 | Total number of _Python_ scripts run within a month: 100 means 100 script runs per month; -1 means unlimited script runs | The script run counter is reset at the beginning of every month. 
|
| snapshot_days | 2.1 | Retention period for snapshots in days: 180 means a storage period of 180 days; no value means an unlimited retention period | Snapshots older than the retention period are automatically removed. |
| share_limit | | Max number of users a base can be shared with: 100 means a base can be shared with 100 users | |
-| monthly_ai_credit_per_user | 6.0 | The maximum AI quota allowed per user per month (i.e., the maximum amount of tokens that can be used in a single month, converted into an amount. In team mode, the total quota within the team will be shared). `-1` means unlimited quota. | |
+| ai_credit_per_user | 6.0 | The maximum AI quota allowed per user per month (i.e., the maximum token usage per month, converted into a monetary amount; in team mode, the total quota is shared within the team). `-1` means unlimited quota. | |

### Standard User Roles

diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md
index 3792e5996..1bacb3fb8 100644
--- a/docs/installation/advanced/seatable-ai.md
+++ b/docs/installation/advanced/seatable-ai.md
@@ -172,36 +172,9 @@ docker compose down && docker compose up -d

 ## Advanced operations

-### Context management
-
-You can manage SeaTable AI's context policies by modifying `/opt/seatable-server/seatable/conf/seatable_ai_settings.py`:
-
-```py
-# If you would like to disable context, set this variable to True
-DISABLE_CONTEXT = False
-
-# The maximum number of entries in the context record, default is 10. When set to 0, the entire history of the current session will be read
-CONTEXT_WINDOW_LIMIT = 10
-
-# The validity time (hour) of tool calls' history in the context within CONTEXT_WINDOW_LIMIT. When set to 0, all tool calls' history will be loaded into the context
-CONTEXT_TOOLS_VALID_TIME = 24 # hour
-
-# The validity time (hour) of common conversations' history (user input and assistant output) in the context within CONTEXT_WINDOW_LIMIT. 
When set to 0, all common conversations' history will be loaded into the context -CONTEXT_CONVERSATION_VALID_TIME = 168 # hour -``` - -### Custom LLM parameters - -SeaTable AI supports customizing the following LLM parameters by modifying `/opt/seatable-server/seatable/conf/seatable_ai_settings.py`: - -- **LLM_TEMPERATURE**: Temperature is a key floating-point parameter (**ranging from 0 to 1**) in LLM that controls the randomness (creativity) and determinism of generated text. Lower temperature yields more accurate results. - -!!! warning "Temperature for ***GPT-5*** series model" - GPT-5 series models(including ***gpt-5***, ***gpt-5-mini***, ***gpt-5-nano***, and ***gpt-5-chat***) no longer support custom temperature values and only receive `temperature=1`. If you would like to use ***GPT-5*** series model, please set `LLM_TEMPERATURE = 1`. - ### Token usage and fee statistics -SeaTable AI supports enabling token usage and fee statistics (viewable when the user moves the mouse over the avatar). +SeaTable AI supports enabling token usage and fee statistics (can view it by moving the mouse to the statistics column when move the mouse to the avatar). 1. Add the following content to `/opt/seatable-server/seatable/conf/dtable_web_settings.py` to enable token usage and fee statistics: @@ -214,10 +187,10 @@ SeaTable AI supports enabling token usage and fee statistics (viewable when the } ``` -2. Refer management of [roles and permission](../../configuration/roles-and-permissions.md#user-quotas) to specify `monthly_ai_credit_per_user` (-1 is unlimited), and the unit should be the same as in `AI_PRICES`. +2. Refer management of [roles and permission](../../configuration/roles-and-permissions.md#user-quotas) to specify `ai_credit_per_user` (-1 is unlimited), and the unit should be the same as in `AI_PRICES`. -!!! note "`monthly_ai_credit_per_user` for organization user" - For organizational team users, `monthly_ai_credit_per_user` will apply to the entire team. 
For example, when `monthly_ai_credit_per_user` is set to `2` (unit of doller for example) and there are 10 members in the team, so all members in the team will share the quota of 20. +!!! note "`ai_credit_per_user` for organization user" + For organizational team users, `ai_credit_per_user` will apply to the entire team. For example, when `ai_credit_per_user` is set to `2` (unit of doller for example) and there are 10 members in the team, so all members in the team will share the quota of 20. ## SeaTable AI directory structure From 6bf9fc27837c626d8282e1d37242ee26dcd4e2c4 Mon Sep 17 00:00:00 2001 From: Simon Hammes Date: Thu, 25 Sep 2025 11:32:27 +0200 Subject: [PATCH 10/13] Improve SeaTable AI docs --- docs/installation/advanced/seatable-ai.md | 58 +++++++++++------------ 1 file changed, 29 insertions(+), 29 deletions(-) diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md index 1bacb3fb8..fc657b2ec 100644 --- a/docs/installation/advanced/seatable-ai.md +++ b/docs/installation/advanced/seatable-ai.md @@ -2,13 +2,13 @@ -SeaTable AI is an extension of SeaTable that providing AI functions. +SeaTable AI is a SeaTable extension that provides AI functions such as support for executing AI-based automation steps within SeaTable. -## Deployment SeaTable AI +## Deployment -The easiest way to deployment SeaTable AI is to deploy it with SeaTable server on the same host. If in some situations, you need to deployment SeaTable AI standalone, you can follow the next section. +The easiest way to deploy SeaTable AI is to deploy it on the same host as SeaTable Server. A standalone deployment (on a separate host or virtual machine) is [explained below](#standalone-deployment). -!!! note "Deploy SeaTable AI requires SeaTable 6.0" +!!! 
note "SeaTable AI requires SeaTable 6.0" ### Change the .env file @@ -27,20 +27,20 @@ ENABLE_SEATABLE_AI=true SEATABLE_AI_SERVER_URL=http://seatable-ai:8888 ``` -SeaTable AI will use AI functions in conjunction with the Large Language Model (LLM) service. Therefore, in order for SeaTable AI to work properly, you also need to add LLM configuration information to `.env`: +SeaTable AI will use AI functions in conjunction with a Large Language Model (LLM) service. Therefore, in order for SeaTable AI to work properly, you also need to configure an LLM service inside your `.env` file: === "OpenAI" ``` SEATABLE_AI_LLM_TYPE=openai SEATABLE_AI_LLM_KEY= - SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommend + SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommended ``` === "Deepseek" ``` SEATABLE_AI_LLM_TYPE=deepseek SEATABLE_AI_LLM_KEY= - SEATABLE_AI_LLM_MODEL=deepseek-chat # recommend + SEATABLE_AI_LLM_MODEL=deepseek-chat # recommended ``` === "Azure OpenAI" ``` @@ -51,7 +51,7 @@ SeaTable AI will use AI functions in conjunction with the Large Language Model ( ``` === "Ollama" ``` - SEATABLE_AI_LLM_TYPE=ollama + SEATABLE_AI_LLM_TYPE=ollama_chat SEATABLE_AI_LLM_URL= SEATABLE_AI_LLM_KEY= SEATABLE_AI_LLM_MODEL= @@ -63,7 +63,7 @@ SeaTable AI will use AI functions in conjunction with the Large Language Model ( SEATABLE_AI_LLM_KEY= SEATABLE_AI_LLM_MODEL=/ ``` -=== "Self-proxy Server" +=== "Self-Hosted Proxy Server" ``` SEATABLE_AI_LLM_TYPE=proxy SEATABLE_AI_LLM_URL= @@ -84,12 +84,12 @@ SeaTable AI will use AI functions in conjunction with the Large Language Model ( !!! note "About model selection" - SeaTable AI supports using large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) or large model services with OpenAI-compatible endpoints. 
Therefore, SeaTable AI is compatible with most custom large model services, but in order to ensure the normal use of SeaTable AI features, you need to select a **multimodal large model** (such as supporting image input and recognition) + SeaTable AI supports using large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) or large model services with OpenAI-compatible endpoints. Therefore, SeaTable AI is compatible with most custom large model services, but in order to ensure the normal use of SeaTable AI features, you need to select a **large multimodal model** (such as supporting image input and recognition). ### Download SeaTable AI image and restart -One more step is necessary to download the SeaTable AI image and restart the SeaTable service. +One more step is necessary to download the SeaTable AI image and restart the SeaTable service: ```bash cd /opt/seatable-compose @@ -98,16 +98,16 @@ docker compose down && docker compose up -d Now SeaTable AI can be used. -## Deploy SeaTable AI standalone +## Standalone Deployment -The deployment of a separate SeaTable AI is simple. Get seatable-release from github like described in the installation of seatable server and only use `seatable-ai-standlone.yml`. +The deployment of a separate SeaTable AI instance is simple. Download `seatable-release` from GitHub as described in the [installation of Seatable Server](../basic-setup.md) and only use `seatable-ai-standalone.yml`. ### Update `.env` in the host will deploy SeaTable AI Update your `.env`, that it looks like this and add/update the values according to your needs: ```env -COMPOSE_FILE='seatable-ai-standlone.yml' +COMPOSE_FILE='seatable-ai-standalone.yml' COMPOSE_PATH_SEPARATOR=',' # system settings @@ -137,34 +137,34 @@ INNER_DTABLE_DB_URL=https://seatable.your-domain.com/dtable-db/ SEATABLE_AI_LLM_TYPE=openai SEATABLE_AI_LLM_URL= SEATABLE_AI_LLM_KEY=... 
-SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommend +SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommended ``` !!! warning - - `JWT_PRIVATE_KEY`, same as the `JWT_PRIVATE_KEY` field in SeaTable `.env` file. + - `JWT_PRIVATE_KEY` must have the same as the `JWT_PRIVATE_KEY` field in SeaTable Server's `.env` file. - - If Redis has no REDIS_PASSWORD, leave it as empty after "=", do not use empty string (like REDIS_PASSWORD="") + - If Redis has no REDIS_PASSWORD, do not specify a value after the equals sign ("="). Specifying an empty string will cause problems (like REDIS_PASSWORD=""). !!! note "About model selection and LLM configurations" - SeaTable AI supports using large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) or large model services with OpenAI-compatible endpoints. Therefore, SeaTable AI is compatible with most custom large model services, but in order to ensure the normal use of SeaTable AI features, you need to select a **multimodal large model** (such as supporting image input and recognition). + SeaTable AI supports using large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) or large model services with OpenAI-compatible endpoints. Therefore, SeaTable AI is compatible with most custom large model services, but in order to ensure the normal use of SeaTable AI features, you need to select a **large multimodal model** (such as supporting image input and recognition). - We also provide some [reference configurations](#llm-configuration) for the LLM service provider in this manual (it is irrelevant to whether SeaTable AI is deployed standalone). You can also adjust these configurations based on your actual situation. + We also provide some [reference configurations](#llm-configuration) for the LLM service provider in this manual. These configuration details do not change depending on your deployment topology. You can also adjust these configurations based on your needs. 
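The provider-specific examples above all reduce to the same handful of `SEATABLE_AI_LLM_*` variables. As a rough pre-flight check before starting the containers, a parsed `.env` can be validated along these lines — an illustrative sketch only; the variable names come from this page, while the per-provider rules are an assumption rather than an official SeaTable check:

```python
# Illustrative sketch: sanity-check SEATABLE_AI_LLM_* settings before
# starting the stack. The required-field rules below are an assumption
# derived from the provider examples in this document, not an official
# SeaTable validation.

def check_llm_settings(env: dict) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    llm_type = env.get("SEATABLE_AI_LLM_TYPE", "")
    if not llm_type:
        problems.append("SEATABLE_AI_LLM_TYPE is not set")
    if not env.get("SEATABLE_AI_LLM_MODEL"):
        problems.append("SEATABLE_AI_LLM_MODEL is not set")
    # Hosted providers need an API key; self-hosted endpoints need a URL.
    if llm_type in ("openai", "deepseek", "azure") and not env.get("SEATABLE_AI_LLM_KEY"):
        problems.append("SEATABLE_AI_LLM_KEY is required for type '%s'" % llm_type)
    if llm_type in ("ollama_chat", "proxy", "other") and not env.get("SEATABLE_AI_LLM_URL"):
        problems.append("SEATABLE_AI_LLM_URL is required for type '%s'" % llm_type)
    return problems

issues = check_llm_settings({
    "SEATABLE_AI_LLM_TYPE": "openai",
    "SEATABLE_AI_LLM_KEY": "sk-example",   # placeholder, not a real key
    "SEATABLE_AI_LLM_MODEL": "gpt-4o-mini",
})
```

A check like this only catches missing values; whether the endpoint actually answers still has to be verified against the running service.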
Execute `docker compose up -d` to fire up your separate SeaTable AI. ### Configurations of SeaTable Server -SeaTable must know where to get the SeaTable AI. +SeaTable must know how to access SeaTable AI. -Add SeaTable AI configurations to `.env` file where deployed SeaTable. +Add the following configuration settings to your `.env` file on SeaTable Server's host: -```py -ENABLE_SEATABLE_AI = True -SEATABLE_AI_SERVER_URL = 'http://seatable-ai.example.com:8888' +```env +ENABLE_SEATABLE_AI=true +SEATABLE_AI_SERVER_URL='http://seatable-ai.example.com:8888' ``` -Restart seatable service and test your SeaTable AI. +Restart the `seatable-server` service and test your SeaTable AI. ```bash docker compose down && docker compose up -d @@ -187,10 +187,10 @@ SeaTable AI supports enabling token usage and fee statistics (can view it by mov } ``` -2. Refer management of [roles and permission](../../configuration/roles-and-permissions.md#user-quotas) to specify `ai_credit_per_user` (-1 is unlimited), and the unit should be the same as in `AI_PRICES`. +2. Refer to management of [roles and permission](../../configuration/roles-and-permissions.md#user-quotas) to specify `ai_credit_per_user` (-1 is unlimited), and the unit should be the same as in `AI_PRICES`. -!!! note "`ai_credit_per_user` for organization user" - For organizational team users, `ai_credit_per_user` will apply to the entire team. For example, when `ai_credit_per_user` is set to `2` (unit of doller for example) and there are 10 members in the team, so all members in the team will share the quota of 20. +!!! note "`ai_credit_per_user` for organization users" + For organizational team users, `ai_credit_per_user` will apply to the entire team. For example, when `ai_credit_per_user` is set to `2` (unit of dollars for example) and there are 10 members in the team, all members in the team will share the same quota of 20 AI credits per month. 
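The sharing rule in the note above can be sketched as a tiny helper: the team's pool is simply `ai_credit_per_user` multiplied by the head count, and every member draws from that one pool (the function names here are hypothetical, not SeaTable code):

```python
# Illustrative sketch of the shared team quota described above.
# -1 means unlimited, mirroring the ai_credit_per_user convention.

def team_credit_pool(ai_credit_per_user, members):
    """Total monthly AI credits shared by the whole team."""
    if ai_credit_per_user == -1:
        return float("inf")
    return ai_credit_per_user * members

def remaining_credits(pool, used_by_members):
    """Any member's usage draws from the same shared pool."""
    return pool - sum(used_by_members)

pool = team_credit_pool(2, 10)                # the example above: 20 credits
left = remaining_credits(pool, [1.5, 0.25])   # two members consumed credits
```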
## SeaTable AI directory structure

@@ -205,4 +205,4 @@ Placeholder spot for shared volumes. You may elect to store certain persistent i

 ## Database used by SeaTable AI

-SeaTable AI used several database tables like `dtable_db.ai_assistant` to store records.
+SeaTable AI uses several database tables such as `ai_assistant` inside the `dtable_db` database to store records.

From 9e8a6fc24d2cc49176906c888a927747d0a328d4 Mon Sep 17 00:00:00 2001
From: Simon Hammes
Date: Fri, 26 Sep 2025 12:34:24 +0200
Subject: [PATCH 11/13] Split SeaTable AI docs into two pages

---
 .../advanced/seatable-ai-standalone.md        |  86 ++++++++
 docs/installation/advanced/seatable-ai.md     | 208 ------------------
 docs/installation/components/seatable-ai.md   | 137 ++++++++++++
 mkdocs.yml                                    |   4 +-
 4 files changed, 226 insertions(+), 209 deletions(-)
 create mode 100644 docs/installation/advanced/seatable-ai-standalone.md
 delete mode 100644 docs/installation/advanced/seatable-ai.md
 create mode 100644 docs/installation/components/seatable-ai.md

diff --git a/docs/installation/advanced/seatable-ai-standalone.md b/docs/installation/advanced/seatable-ai-standalone.md
new file mode 100644
index 000000000..6bba63e66
--- /dev/null
+++ b/docs/installation/advanced/seatable-ai-standalone.md
@@ -0,0 +1,86 @@
+# Standalone Deployment of SeaTable AI
+
+This guide describes the standalone deployment of `seatable-ai` on a dedicated server or virtual machine.
+
+## Prerequisites
+
+- You have successfully installed [Docker and Docker-Compose](../basic-setup.md#install-docker-and-docker-compose-plugin)
+- You have [downloaded the latest `.yml` files](../basic-setup.md#1-create-basic-structure) from the `seatable-release` GitHub repository
+- The hosts intended to run `seatable-ai` and the other SeaTable components are attached to the same private network
+
+## SeaTable AI Configuration
+
+The following section outlines an `.env` file with the settings needed to run `seatable-ai`. 
+These changes should be made inside `/opt/seatable-compose/.env`: + +```ini +COMPOSE_FILE='seatable-ai-standalone.yml' +COMPOSE_PATH_SEPARATOR=',' + +# system settings +TIME_ZONE='Europe/Berlin' + +# database +MARIADB_HOST= +MARIADB_PORT=3306 +MARIADB_PASSWORD= + +# redis +REDIS_HOST= +REDIS_PORT=6379 +REDIS_PASSWORD= + +# This private key must have the same value as the JWT_PRIVATE_KEY variable on other SeaTable nodes +JWT_PRIVATE_KEY= + +# Public URL of your SeaTable server +SEATABLE_SERVER_URL=https://seatable.your-domain.com + +# Cluster-internal URL of dtable-server +INNER_DTABLE_SERVER_URL=http://dtable-server:5000 + +# Cluster-internal URL of dtable-db +INNER_DTABLE_DB_URL=http://dtable-db:7777 + +# LLM +SEATABLE_AI_LLM_TYPE=openai +SEATABLE_AI_LLM_URL= +SEATABLE_AI_LLM_KEY=... +SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommended +``` + +!!! warning + - In case you are not using password authentication for Redis, you should not specify a value after the equals sign (`=`) for the `REDIS_PASSWORD` variable. + Specifying an empty string (e.g. `REDIS_PASSWORD=""`) will cause problems. + + - By default, the ports of `dtable-server` (5000) and `dtable-db` (7777) are not exposed to the host. This requires a manual change inside the `.yml` file. + +### LLM Provider Configuration + +Please refer to the documentation on [configuring your LLM provider of choice](../components/seatable-ai.md#llm-provider-configuration). +These configuration details do not change depending on the deployment topology of `seatable-server` and `seatable-ai`. 
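The `JWT_PRIVATE_KEY` in the file above must be byte-for-byte identical on every SeaTable node; beyond that, any sufficiently long random string works. One possible way to generate such a key (an illustrative suggestion, not an official requirement) is Python's `secrets` module:

```python
# Generate a random shared secret suitable for JWT_PRIVATE_KEY.
# Any cryptographically random string of sufficient length will do,
# as long as every node is configured with the exact same value.
import secrets

def generate_jwt_private_key(num_bytes: int = 32) -> str:
    """Return a hex-encoded random secret (64 characters for 32 bytes)."""
    return secrets.token_hex(num_bytes)

key = generate_jwt_private_key()
```

Generate the key once, then copy the same value into the `.env` file on every host.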
+ +### Start SeaTable AI + +You can now start SeaTable AI by running the following command inside your terminal: + +```bash +cd /opt/seatable-compose +docker compose up -d +``` + +## Configuration of SeaTable Server + +Since `seatable-ai` is now running on a separate host or virtual machine, the following configuration changes must be made inside the `.env` file on the host running the `seatable-server` container: + +```ini +ENABLE_SEATABLE_AI=true +SEATABLE_AI_SERVER_URL='http://seatable-ai.example.com:8888' +``` + +Restart the `seatable-server` service and test your SeaTable AI: + +```bash +cd /opt/seatable-compose +docker compose up -d +``` diff --git a/docs/installation/advanced/seatable-ai.md b/docs/installation/advanced/seatable-ai.md deleted file mode 100644 index fc657b2ec..000000000 --- a/docs/installation/advanced/seatable-ai.md +++ /dev/null @@ -1,208 +0,0 @@ -# SeaTable AI Integration - - - -SeaTable AI is a SeaTable extension that provides AI functions such as support for executing AI-based automation steps within SeaTable. - -## Deployment - -The easiest way to deploy SeaTable AI is to deploy it on the same host as SeaTable Server. A standalone deployment (on a separate host or virtual machine) is [explained below](#standalone-deployment). - -!!! note "SeaTable AI requires SeaTable 6.0" - -### Change the .env file - -To install SeaTable AI, include `seatable-ai.yml` in the `COMPOSE_FILE` variable within your `.env` file. This instructs Docker to download the required images for SeaTable AI. - -Simply copy and paste (:material-content-copy:) the following code into your command line: - -```bash -sed -i "s/COMPOSE_FILE='\(.*\)'/COMPOSE_FILE='\1,seatable-ai.yml'/" /opt/seatable-compose/.env -``` - -Then add SeaTable AI server configurations in `.env`: - -```env -ENABLE_SEATABLE_AI=true -SEATABLE_AI_SERVER_URL=http://seatable-ai:8888 -``` - -SeaTable AI will use AI functions in conjunction with a Large Language Model (LLM) service. 
Therefore, in order for SeaTable AI to work properly, you also need to configure an LLM service inside your `.env` file: - - -=== "OpenAI" - ``` - SEATABLE_AI_LLM_TYPE=openai - SEATABLE_AI_LLM_KEY= - SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommended - ``` -=== "Deepseek" - ``` - SEATABLE_AI_LLM_TYPE=deepseek - SEATABLE_AI_LLM_KEY= - SEATABLE_AI_LLM_MODEL=deepseek-chat # recommended - ``` -=== "Azure OpenAI" - ``` - SEATABLE_AI_LLM_TYPE=azure - SEATABLE_AI_LLM_URL= # your deployment url, leave blank to use default endpoint - SEATABLE_AI_LLM_KEY= - SEATABLE_AI_LLM_MODEL= - ``` -=== "Ollama" - ``` - SEATABLE_AI_LLM_TYPE=ollama_chat - SEATABLE_AI_LLM_URL= - SEATABLE_AI_LLM_KEY= - SEATABLE_AI_LLM_MODEL= - ``` -=== "HuggingFace" - ``` - SEATABLE_AI_LLM_TYPE=huggingface - SEATABLE_AI_LLM_URL= - SEATABLE_AI_LLM_KEY= - SEATABLE_AI_LLM_MODEL=/ - ``` -=== "Self-Hosted Proxy Server" - ``` - SEATABLE_AI_LLM_TYPE=proxy - SEATABLE_AI_LLM_URL= - SEATABLE_AI_LLM_KEY= # optional - SEATABLE_AI_LLM_MODEL= - ``` -=== "Other" - Seafile AI utilizes [LiteLLM](https://docs.litellm.ai/docs/) to interact with LLM services. For a complete list of supported LLM providers, please refer to [this documentation](https://docs.litellm.ai/docs/providers). Then fill the following fields in your `.env`: - - ``` - SEATABLE_AI_LLM_TYPE=... - SEATABLE_AI_LLM_URL=... - SEATABLE_AI_LLM_KEY=... - SEATABLE_AI_LLM_MODEL=... - ``` - - For example, if you are using a LLM service with ***OpenAI-compatible endpoints***, you should set `SEATABLE_AI_LLM_TYPE` to `other` or `openai`, and set other LLM configuration items accurately. - -!!! note "About model selection" - - SeaTable AI supports using large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) or large model services with OpenAI-compatible endpoints. 
Therefore, SeaTable AI is compatible with most custom large model services, but in order to ensure the normal use of SeaTable AI features, you need to select a **large multimodal model** (such as supporting image input and recognition). - - -### Download SeaTable AI image and restart - -One more step is necessary to download the SeaTable AI image and restart the SeaTable service: - -```bash -cd /opt/seatable-compose -docker compose down && docker compose up -d -``` - -Now SeaTable AI can be used. - -## Standalone Deployment - -The deployment of a separate SeaTable AI instance is simple. Download `seatable-release` from GitHub as described in the [installation of Seatable Server](../basic-setup.md) and only use `seatable-ai-standalone.yml`. - -### Update `.env` in the host will deploy SeaTable AI - -Update your `.env`, that it looks like this and add/update the values according to your needs: - -```env -COMPOSE_FILE='seatable-ai-standalone.yml' -COMPOSE_PATH_SEPARATOR=',' - -# system settings -TIME_ZONE='Europe/Berlin' - -# database -MARIADB_HOST= -MARIADB_PORT=3306 -MARIADB_PASSWORD= - -# redis -REDIS_HOST= -REDIS_PORT=6379 -REDIS_PASSWORD= - -# For SeaTable -JWT_PRIVATE_KEY= -SEATABLE_SERVER_URL=https://seatable.your-domain.com # dtable-web's URL - -## dtable-server's url, `http://dtable-inner-proxy/dtable-server` for cluster -INNER_DTABLE_SERVER_URL=https://seatable.your-domain.com/dtable-server/ - -## dtable-db's url, `http://dtable-inner-proxy/dtable-db` for cluster -INNER_DTABLE_DB_URL=https://seatable.your-domain.com/dtable-db/ - -# LLM -SEATABLE_AI_LLM_TYPE=openai -SEATABLE_AI_LLM_URL= -SEATABLE_AI_LLM_KEY=... -SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommended -``` - -!!! warning - - `JWT_PRIVATE_KEY` must have the same as the `JWT_PRIVATE_KEY` field in SeaTable Server's `.env` file. - - - If Redis has no REDIS_PASSWORD, do not specify a value after the equals sign ("="). Specifying an empty string will cause problems (like REDIS_PASSWORD=""). - -!!! 
note "About model selection and LLM configurations" - - SeaTable AI supports using large model providers from [LiteLLM](https://docs.litellm.ai/docs/providers) or large model services with OpenAI-compatible endpoints. Therefore, SeaTable AI is compatible with most custom large model services, but in order to ensure the normal use of SeaTable AI features, you need to select a **large multimodal model** (such as supporting image input and recognition). - - We also provide some [reference configurations](#llm-configuration) for the LLM service provider in this manual. These configuration details do not change depending on your deployment topology. You can also adjust these configurations based on your needs. - -Execute `docker compose up -d` to fire up your separate SeaTable AI. - -### Configurations of SeaTable Server - -SeaTable must know how to access SeaTable AI. - -Add the following configuration settings to your `.env` file on SeaTable Server's host: - -```env -ENABLE_SEATABLE_AI=true -SEATABLE_AI_SERVER_URL='http://seatable-ai.example.com:8888' -``` - -Restart the `seatable-server` service and test your SeaTable AI. - -```bash -docker compose down && docker compose up -d -``` - -## Advanced operations - -### Token usage and fee statistics - -SeaTable AI supports enabling token usage and fee statistics (can view it by moving the mouse to the statistics column when move the mouse to the avatar). - -1. Add the following content to `/opt/seatable-server/seatable/conf/dtable_web_settings.py` to enable token usage and fee statistics: - - ```py - AI_PRICES = { - "your_model_id": { # your model name, same as SEATABLE_AI_LLM_MODEL - "input_tokens_1k": 0.01827, # price / 1000 tokens - "output_tokens_1k": 0.07309 # price / 1000 tokens - }, - } - ``` - -2. Refer to management of [roles and permission](../../configuration/roles-and-permissions.md#user-quotas) to specify `ai_credit_per_user` (-1 is unlimited), and the unit should be the same as in `AI_PRICES`. - -!!! 
note "`ai_credit_per_user` for organization users" - For organizational team users, `ai_credit_per_user` will apply to the entire team. For example, when `ai_credit_per_user` is set to `2` (unit of dollars for example) and there are 10 members in the team, all members in the team will share the same quota of 20 AI credits per month. - -## SeaTable AI directory structure - -`/opt/seatable-server` - -Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files outside. This allows you to rebuild containers easily without losing important information. - -* /opt/seatable-server/conf: This is the directory for SeaTable AI configuration files. -* /opt/seatable-server/logs: This is the directory for SeaTable AI logs. -* /opt/seatable-server/ai-data/assets: This is the directory for SeaTable AI assets. -* /opt/seatable-server/ai-data/index-info: This is the directory for SeaTable AI index. - -## Database used by SeaTable AI - -SeaTable AI uses several database tables such as `ai_assistant` inside the `dtable_db` database to store records. diff --git a/docs/installation/components/seatable-ai.md b/docs/installation/components/seatable-ai.md new file mode 100644 index 000000000..f7a3354f9 --- /dev/null +++ b/docs/installation/components/seatable-ai.md @@ -0,0 +1,137 @@ +# SeaTable AI Integration + + + +SeaTable AI is a SeaTable extension that integrates AI functionality into SeaTable. +Deploying SeaTable AI allows users to execute AI-based automation steps within SeaTable. + +At the time of writing, the following types of automation steps are supported: + +- **Summarize** +- **Classify** +- **OCR** (Optical character recognition) +- **Extract** +- **Custom** for individual use cases + +## Deployment + +!!! note "SeaTable AI requires SeaTable 6.0" + +The easiest way to deploy SeaTable AI is to deploy it on the same host as SeaTable Server. 
A standalone deployment (on a separate host or virtual machine) is explained [here](../advanced/seatable-ai-standalone.md). + +### Amend the .env file + +To install SeaTable AI, include `seatable-ai.yml` in the `COMPOSE_FILE` variable within your `.env` file. This instructs Docker-Compose to include the `seatable-ai` service. + +Simply copy and paste (:material-content-copy:) the following code into your command line: + +```bash +sed -i "s/COMPOSE_FILE='\(.*\)'/COMPOSE_FILE='\1,seatable-ai.yml'/" /opt/seatable-compose/.env +``` + +Then add SeaTable AI server configurations in `.env`: + +```ini +ENABLE_SEATABLE_AI=true +SEATABLE_AI_SERVER_URL=http://seatable-ai:8888 +``` + +#### LLM Provider Configuration + +SeaTable AI will use AI functions in conjunction with a Large Language Model (LLM) service. + +!!! note "Supported LLM Providers" + + SeaTable AI supports a wide variety of LLM providers through [LiteLLM](https://docs.litellm.ai/docs) as well as any LLM services with OpenAI-compatible endpoints. Please refer to [LiteLLM's documentation](https://docs.litellm.ai/docs/providers) in case you run into issues while trying to use a specific provider. + +!!! note "Model Selection" + + In order to ensure the efficient use of SeaTable AI features, you need to select a **large, multimodal model**. + This requires the chosen model to support image input and recognition (e.g. for running OCR as part of automations). + +The following section showcases the required configuration settings for the most popular hosted LLM services. 
+These must be configured inside your `.env` file: + + +=== "OpenAI" + ```ini + SEATABLE_AI_LLM_TYPE=openai + SEATABLE_AI_LLM_KEY= + SEATABLE_AI_LLM_MODEL=gpt-4o-mini # recommended + ``` +=== "Deepseek" + ```ini + SEATABLE_AI_LLM_TYPE=deepseek + SEATABLE_AI_LLM_KEY= + SEATABLE_AI_LLM_MODEL=deepseek-chat # recommended + ``` +=== "Azure OpenAI" + ```ini + SEATABLE_AI_LLM_TYPE=azure + SEATABLE_AI_LLM_URL= # your deployment url, leave blank to use default endpoint + SEATABLE_AI_LLM_KEY= + SEATABLE_AI_LLM_MODEL= + ``` +=== "Ollama" + ```ini + SEATABLE_AI_LLM_TYPE=ollama_chat + SEATABLE_AI_LLM_URL= + SEATABLE_AI_LLM_KEY= + SEATABLE_AI_LLM_MODEL= + ``` +=== "HuggingFace" + ```ini + SEATABLE_AI_LLM_TYPE=huggingface + SEATABLE_AI_LLM_URL= + SEATABLE_AI_LLM_KEY= + SEATABLE_AI_LLM_MODEL=/ + ``` +=== "Self-Hosted Proxy Server" + ```ini + SEATABLE_AI_LLM_TYPE=proxy + SEATABLE_AI_LLM_URL= + SEATABLE_AI_LLM_KEY= # optional + SEATABLE_AI_LLM_MODEL= + ``` +=== "Other" + If you are using an LLM service with ***OpenAI-compatible endpoints***, you should set `SEATABLE_AI_LLM_TYPE` to `other` or `openai`, and set other LLM configuration settings as necessary: + + ```ini + SEATABLE_AI_LLM_TYPE=... + SEATABLE_AI_LLM_URL=... + SEATABLE_AI_LLM_KEY=... + SEATABLE_AI_LLM_MODEL=... + ``` + +### Download SeaTable AI image and restart + +One more step is necessary to download the SeaTable AI image and restart the SeaTable service: + +```bash +cd /opt/seatable-compose +docker compose up -d +``` + +Now SeaTable AI can be used. + +## Advanced operations + +### Token usage and fee statistics + +SeaTable AI supports enabling token usage and fee statistics (can view it by moving the mouse to the statistics column when move the mouse to the avatar). + +1. 
Add the following content to `/opt/seatable-server/seatable/conf/dtable_web_settings.py` to enable token usage and fee statistics: + + ```py + AI_PRICES = { + "your_model_id": { # your model name, same as SEATABLE_AI_LLM_MODEL + "input_tokens_1k": 0.01827, # price / 1000 tokens + "output_tokens_1k": 0.07309 # price / 1000 tokens + }, + } + ``` + +2. Refer to management of [roles and permission](../../configuration/roles-and-permissions.md#user-quotas) to specify `ai_credit_per_user` (-1 is unlimited), and the unit should be the same as in `AI_PRICES`. + +!!! note "`ai_credit_per_user` for organization users" + For organizational team users, `ai_credit_per_user` will apply to the entire team. For example, when `ai_credit_per_user` is set to `2` (unit of dollars for example) and there are 10 members in the team, all members in the team will share the same quota of 20 AI credits per month. diff --git a/mkdocs.yml b/mkdocs.yml index fb1ee1a45..0d5200062 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -151,6 +151,7 @@ nav: - Our deployment approach: installation/deployment-approach.md - Single-Node Deployment: - SeaTable Server: installation/basic-setup.md + - SeaTable AI: installation/components/seatable-ai.md - Python Pipeline: installation/components/python-pipeline.md - Whiteboard: installation/components/whiteboard.md - n8n: installation/components/n8n.md @@ -178,6 +179,8 @@ nav: - Webserver Security: installation/advanced/webserver-security.md - Maintenance Mode: installation/advanced/maintenance-mode.md - Advanced Settings for Caddy: installation/advanced/settings-caddy.md + - SeaTable AI: + - SeaTable AI (standalone): installation/advanced/seatable-ai-standalone.md - S3 Object Storage: - Configuration: installation/advanced/s3.md - Migration: installation/advanced/s3-migration.md @@ -188,7 +191,6 @@ nav: - Python Pipeline Workflow: installation/advanced/python-pipeline-workflow.md - MariaDB (standalone): installation/advanced/database-standalone.md - Seafile 
(external): installation/advanced/seafile.md - - SeaTable AI: installation/advanced/seatable-ai.md - Cluster Deployment: - Introduction: installation/cluster/introduction.md From 7a70ef7d508124d77230822d87e7ad5ec17c1ba7 Mon Sep 17 00:00:00 2001 From: Simon Hammes Date: Fri, 26 Sep 2025 14:13:41 +0200 Subject: [PATCH 12/13] Move AI token pricing section to a dedicated page --- .../advanced/seatable-ai-token-pricing.md | 32 +++++++++++++++++++ docs/installation/components/seatable-ai.md | 22 ------------- mkdocs.yml | 1 + 3 files changed, 33 insertions(+), 22 deletions(-) create mode 100644 docs/installation/advanced/seatable-ai-token-pricing.md diff --git a/docs/installation/advanced/seatable-ai-token-pricing.md b/docs/installation/advanced/seatable-ai-token-pricing.md new file mode 100644 index 000000000..dd1244f0d --- /dev/null +++ b/docs/installation/advanced/seatable-ai-token-pricing.md @@ -0,0 +1,32 @@ +# AI Token Pricing + +## AI Credits + +SeaTable's AI credits can be understood as its own dedicated currency that is used to price AI usage within SeaTable. +AI Credits directly map to the number of tokens used by using AI-based features through the [configured prices](#pricing-configuration) per AI model. + +SeaTable supports role-based AI credit limits by configuring the `ai_credit_per_user` option on a user role. +Please refer to the documentation on [user quotas](../../configuration/roles-and-permissions.md#user-quotas) for more details. + +!!! note "`ai_credit_per_user` for organization users" + AI credits are shared across all users inside a SeaTable organization. The total number of credits can be calculated by multiplying the value of `ai_credit_per_user` by the number of team users. + + **Example:** Setting `ai_credit_per_user` to `2` will allow a team with 10 members to have 20 AI credits in total. 
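The link between token usage and AI credits can be made concrete with a short sketch. The per-1000-token price format mirrors the `AI_PRICES` setting documented on this page; the helper function itself is hypothetical, not SeaTable code:

```python
# Illustrative sketch: convert a request's token usage into AI credits
# using per-1000-token prices in the same shape as the AI_PRICES setting.

AI_PRICES = {
    "gpt-4o-mini": {
        "input_tokens_1k": 0.01827,   # credits per 1000 input tokens
        "output_tokens_1k": 0.07309,  # credits per 1000 output tokens
    },
}

def credits_for_request(model, input_tokens, output_tokens):
    prices = AI_PRICES[model]
    return (input_tokens / 1000) * prices["input_tokens_1k"] \
        + (output_tokens / 1000) * prices["output_tokens_1k"]

cost = credits_for_request("gpt-4o-mini", input_tokens=2000, output_tokens=1000)
```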
+
+## Pricing Configuration
+
+In order to accurately track the number of AI credits used by users and organizations, you must configure token pricing inside `/opt/seatable-server/seatable/conf/dtable_web_settings.py`.
+This can be achieved by configuring the `AI_PRICES` variable, which is a dictionary that maps model identifiers (e.g. `gpt-4o-mini`) to token pricing **per thousand tokens**:
+
+```py
+AI_PRICES = {
+    "gpt-4o-mini": {
+        "input_tokens_1k": 0.01827, # price / 1000 tokens
+        "output_tokens_1k": 0.07309 # price / 1000 tokens
+    },
+}
+```
+
+!!! warning "Model Identifiers"
+    The dictionary key must match **the exact value** of the chosen AI model, which is configured through the `SEATABLE_AI_LLM_MODEL` variable inside your `.env` file.
+    In case of a mismatch, AI usage will not count towards any configured credit limits!
diff --git a/docs/installation/components/seatable-ai.md b/docs/installation/components/seatable-ai.md
index f7a3354f9..aedbc6605 100644
--- a/docs/installation/components/seatable-ai.md
+++ b/docs/installation/components/seatable-ai.md
@@ -113,25 +113,3 @@ docker compose up -d
 ```
 
 Now SeaTable AI can be used.
-
-## Advanced operations
-
-### Token usage and fee statistics
-
-SeaTable AI supports enabling token usage and fee statistics (can view it by moving the mouse to the statistics column when move the mouse to the avatar).
-
-1. Add the following content to `/opt/seatable-server/seatable/conf/dtable_web_settings.py` to enable token usage and fee statistics:
-
-    ```py
-    AI_PRICES = {
-        "your_model_id": { # your model name, same as SEATABLE_AI_LLM_MODEL
-            "input_tokens_1k": 0.01827, # price / 1000 tokens
-            "output_tokens_1k": 0.07309 # price / 1000 tokens
-        },
-    }
-    ```
-
-2. Refer to management of [roles and permission](../../configuration/roles-and-permissions.md#user-quotas) to specify `ai_credit_per_user` (-1 is unlimited), and the unit should be the same as in `AI_PRICES`.
-
-!!!
note "`ai_credit_per_user` for organization users" - For organizational team users, `ai_credit_per_user` will apply to the entire team. For example, when `ai_credit_per_user` is set to `2` (unit of dollars for example) and there are 10 members in the team, all members in the team will share the same quota of 20 AI credits per month. diff --git a/mkdocs.yml b/mkdocs.yml index 0d5200062..dba84c0fe 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -181,6 +181,7 @@ nav: - Advanced Settings for Caddy: installation/advanced/settings-caddy.md - SeaTable AI: - SeaTable AI (standalone): installation/advanced/seatable-ai-standalone.md + - AI Token Pricing: installation/advanced/seatable-ai-token-pricing.md - S3 Object Storage: - Configuration: installation/advanced/s3.md - Migration: installation/advanced/s3-migration.md From 6069d0f78ac9f091ab7a269c469579c1b502c7c9 Mon Sep 17 00:00:00 2001 From: Simon Hammes Date: Fri, 26 Sep 2025 14:16:55 +0200 Subject: [PATCH 13/13] Improve section on AI credits --- docs/installation/advanced/seatable-ai-token-pricing.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/installation/advanced/seatable-ai-token-pricing.md b/docs/installation/advanced/seatable-ai-token-pricing.md index dd1244f0d..ea8fbca07 100644 --- a/docs/installation/advanced/seatable-ai-token-pricing.md +++ b/docs/installation/advanced/seatable-ai-token-pricing.md @@ -2,8 +2,8 @@ ## AI Credits -SeaTable's AI credits can be understood as its own dedicated currency that is used to price AI usage within SeaTable. -AI Credits directly map to the number of tokens used by using AI-based features through the [configured prices](#pricing-configuration) per AI model. +AI credits serve as an internal unit of currency for measuring AI-related usage within SeaTable. +They are directly linked to the number of tokens consumed by using AI-based features according to the [configured price](#pricing-configuration) of each AI model. 
SeaTable supports role-based AI credit limits by configuring the `ai_credit_per_user` option on a user role. Please refer to the documentation on [user quotas](../../configuration/roles-and-permissions.md#user-quotas) for more details.
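
The shared organization quota described in the note above can be sketched as follows. This is a simplified illustration: `team_ai_credit_quota` is a hypothetical helper, not part of SeaTable, and `-1` denotes an unlimited quota as in the role configuration.

```python
def team_ai_credit_quota(ai_credit_per_user: float, team_size: int) -> float:
    """Illustration of the shared organization quota: all members of a team
    draw from a combined pool of ai_credit_per_user * team_size credits
    per month. A value of -1 means the quota is unlimited."""
    if ai_credit_per_user == -1:
        return float("inf")
    return ai_credit_per_user * team_size

# The example from the note above: 2 credits per user, 10 team members.
print(team_ai_credit_quota(2, 10))  # → 20
```

Note that the pool is shared: a single heavy user can exhaust the entire team's monthly credits.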