diff --git a/docs/ai.md b/docs/ai.md index 580a243c8..a3da9382d 100644 --- a/docs/ai.md +++ b/docs/ai.md @@ -53,7 +53,7 @@ With the database connected, three additional skills become available for schema To manage database connections later, use `storm db` for the global connection library and `storm mcp` for project-level configuration. See [Database Connections & MCP](database-and-mcp.md) for the full guide. :::tip Looking for a database MCP server for Python, Go, Ruby, or any other language? -The Storm MCP server works standalone — no Storm ORM required. Run `npx @storm-orm/cli mcp init` to set up schema access and optional read-only data queries without installing Storm rules or skills. See [Using Without Storm ORM](database-and-mcp.md#using-without-storm-orm). +The Storm MCP server works standalone — no Storm ORM required. Run `npx @storm-orm/cli mcp` to set up schema access and optional read-only data queries without installing Storm rules or skills. See [Using Without Storm ORM](database-and-mcp.md#using-without-storm-orm). ::: --- diff --git a/docs/database-and-mcp.md b/docs/database-and-mcp.md index 8c7466326..a4232a9f9 100644 --- a/docs/database-and-mcp.md +++ b/docs/database-and-mcp.md @@ -108,7 +108,7 @@ Run `storm mcp remove reporting` to remove an alias from the project. This unreg ### Re-registering connections -If your AI tool's MCP configuration gets out of sync (for example, after switching branches or resetting editor config files), run `storm mcp` without arguments. This re-registers all connections from `databases.json` for every configured AI tool. +If your AI tool's MCP configuration gets out of sync (for example, after switching branches or resetting editor config files), run `storm mcp update`. This re-registers all connections from `databases.json` for every configured AI tool. --- @@ -191,7 +191,7 @@ Global connections are stored in `~/.storm/connections/`. 
Project-level configur
 The Storm MCP server is a standalone database tool — it does not require Storm ORM in your project. If you use Python, Go, Ruby, or any other language and just want your AI tool to have schema awareness and optional data access, run:
 
 ```bash
-npx @storm-orm/cli mcp init
+npx @storm-orm/cli mcp
 ```
 
 This walks you through:
@@ -324,11 +324,11 @@ Excluded tables still appear in `list_tables` and can be described with `describ
 ### `storm mcp` — Project MCP servers
 
-#### `storm mcp init`
+#### `storm mcp`
 
-Standalone setup for the MCP database server, intended for projects that do not use Storm ORM. Walks you through AI tool selection, database connections, data access, and MCP registration. No Storm rules or language-specific configuration is installed.
+Set up an MCP database server (default). Walks you through AI tool selection, database connections, data access, and MCP registration. Works standalone — no Storm ORM required. `storm mcp init` is an alias for this command.
 
-#### `storm mcp`
+#### `storm mcp update`
 
 Re-register all MCP servers defined in `.storm/databases.json` with your AI tools. Useful after switching branches, resetting editor config files, or when MCP registrations get out of sync.
 
diff --git a/docs/index.md b/docs/index.md
index dcc8087e7..79f536b0b 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -9,7 +9,7 @@ import TabItem from '@theme/TabItem';
 # ST/ORM
 
 :::tip Give your AI tool access to your database schema
-Storm includes a schema-aware MCP server that exposes your table definitions, column types, and foreign keys to AI coding tools like Claude Code, Cursor, Copilot and Codex. Run `npx @storm-orm/cli` for full Storm ORM support including AI skills, conventions, and schema access. Using Python, Go, Ruby, or another language? Run `npx @storm-orm/cli mcp init` to set up the MCP server standalone.
+Storm includes a schema-aware MCP server that exposes your table definitions, column types, and foreign keys to AI coding tools like Claude Code, Cursor, Copilot and Codex. Run `npx @storm-orm/cli` for full Storm ORM support including AI skills, conventions, and schema access. Using Python, Go, Ruby, or another language? Run `npx @storm-orm/cli mcp` to set up the MCP server standalone. ::: **Storm** is a modern, high-performance ORM for Kotlin 2.0+ and Java 21+, built around a powerful SQL template engine. It focuses on simplicity, type safety, and predictable performance through immutable models and compile-time metadata. diff --git a/pom.xml b/pom.xml index a1ba27b3c..9a756a7a5 100644 --- a/pom.xml +++ b/pom.xml @@ -3,7 +3,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> 4.0.0 - 1.11.2 + 1.11.3 21 ${java.version} ${java.version} diff --git a/storm-cli/package.json b/storm-cli/package.json index bd97952d5..07077ea85 100644 --- a/storm-cli/package.json +++ b/storm-cli/package.json @@ -1,6 +1,6 @@ { "name": "@storm-orm/cli", - "version": "1.11.2", + "version": "1.11.3", "description": "Storm ORM - AI assistant configuration tool", "type": "module", "bin": { diff --git a/storm-cli/storm.mjs b/storm-cli/storm.mjs index 4add496f5..2ae907586 100644 --- a/storm-cli/storm.mjs +++ b/storm-cli/storm.mjs @@ -9,7 +9,7 @@ import { basename, join, dirname } from 'path'; import { homedir } from 'os'; import { execSync, spawn } from 'child_process'; -const VERSION = '1.11.2'; +const VERSION = '1.11.3'; // ─── ANSI ──────────────────────────────────────────────────────────────────── @@ -905,13 +905,18 @@ async function fetchSkill(name) { } } -function installSkill(name, content, toolConfig, created) { +function installSkill(name, content, toolConfig, created, appended) { const cwd = process.cwd(); const fullPath = join(cwd, toolConfig.skillPath(name)); - if (existsSync(fullPath) && readFileSync(fullPath, 'utf-8') === 
content) return; + const exists = existsSync(fullPath); + if (exists && readFileSync(fullPath, 'utf-8') === content) return; mkdirSync(dirname(fullPath), { recursive: true }); writeFileSync(fullPath, content); - created.push(toolConfig.skillPath(name)); + if (exists && appended) { + appended.push(toolConfig.skillPath(name)); + } else { + created.push(toolConfig.skillPath(name)); + } } function cleanStaleSkills(toolConfigs, installedSkillNames, skipped) { @@ -1389,6 +1394,18 @@ function resolveColumnName(validColumns, name) { return null; } +function coerceArray(value) { + if (value == null) return value; + if (Array.isArray(value)) return value; + if (typeof value === 'string') { + var trimmed = value.trim(); + if (trimmed.charAt(0) === '[') { + try { var parsed = JSON.parse(trimmed); if (Array.isArray(parsed)) return parsed; } catch (e) { /* fall through */ } + } + } + return value; +} + async function selectData(args) { if (excludedTables.has((args.table || '').toLowerCase())) throw new Error('Data access is excluded for table: ' + args.table); var tables = await listTables(); @@ -1397,7 +1414,7 @@ async function selectData(args) { var validColumns = await resolveColumns(tableName); - var columns = args.columns; + var columns = coerceArray(args.columns); if (columns && columns.length > 0) { for (var i = 0; i < columns.length; i++) { var resolved = resolveColumnName(validColumns, columns[i]); @@ -1410,10 +1427,11 @@ async function selectData(args) { var sql = 'SELECT ' + selectClause + ' FROM ' + quoteIdentifier(tableName); var params = []; - if (args.where && args.where.length > 0) { + var where = coerceArray(args.where); + if (where && where.length > 0) { var conditions = []; - for (var i = 0; i < args.where.length; i++) { - var w = args.where[i]; + for (var i = 0; i < where.length; i++) { + var w = where[i]; var resolvedCol = resolveColumnName(validColumns, w.column); if (!resolvedCol) throw new Error('Unknown column: ' + w.column + ' in table ' + tableName); 
w.column = resolvedCol; @@ -1425,6 +1443,7 @@ async function selectData(args) { } else if (op === 'IS NOT NULL') { conditions.push(col + ' IS NOT NULL'); } else if (op === 'IN') { + w.value = coerceArray(w.value); if (!Array.isArray(w.value)) throw new Error('IN operator requires an array value'); var placeholders = w.value.map(function(v) { params.push(v); return ph(params.length); }); conditions.push(col + ' IN (' + placeholders.join(', ') + ')'); @@ -1436,10 +1455,11 @@ async function selectData(args) { sql += ' WHERE ' + conditions.join(' AND '); } - if (args.orderBy && args.orderBy.length > 0) { + var orderBy = coerceArray(args.orderBy); + if (orderBy && orderBy.length > 0) { var orderParts = []; - for (var i = 0; i < args.orderBy.length; i++) { - var o = args.orderBy[i]; + for (var i = 0; i < orderBy.length; i++) { + var o = orderBy[i]; var resolvedOrderCol = resolveColumnName(validColumns, o.column); if (!resolvedOrderCol) throw new Error('Unknown column: ' + o.column + ' in table ' + tableName); var dir = (o.direction || 'ASC').toUpperCase(); @@ -1453,7 +1473,7 @@ async function selectData(args) { var offset = Math.max(0, Math.floor(args.offset || 0)); if (dbType === 'mssql') { if (offset > 0) { - if (!args.orderBy || args.orderBy.length === 0) { + if (!orderBy || orderBy.length === 0) { sql += ' ORDER BY (SELECT NULL)'; } sql += ' OFFSET ' + offset + ' ROWS FETCH NEXT ' + limit + ' ROWS ONLY'; @@ -1601,6 +1621,28 @@ rl.on('line', async function(line) { const MARKER_START = ''; const MARKER_END = ''; +function installSchemaRules(filePath, schemaRules, appended) { + if (!existsSync(filePath)) return; + const existing = readFileSync(filePath, 'utf-8'); + const endMarker = existing.indexOf(MARKER_END); + if (endMarker === -1) return; + const cleanRules = schemaRules.replace('\n' + STORM_SKILL_MARKER, ''); + const schemaStart = existing.indexOf('## Database Schema Access'); + if (schemaStart !== -1 && schemaStart < endMarker) { + // Replace existing schema 
rules (from start to just before MARKER_END). + const updated = existing.substring(0, schemaStart) + cleanRules + '\n' + existing.substring(endMarker); + if (updated !== existing) { + writeFileSync(filePath, updated); + if (!appended.includes(filePath.replace(process.cwd() + '/', ''))) appended.push(filePath.replace(process.cwd() + '/', '')); + } + } else { + // First time — insert before MARKER_END. + const updated = existing.substring(0, endMarker) + '\n' + cleanRules + '\n' + existing.substring(endMarker); + writeFileSync(filePath, updated); + if (!appended.includes(filePath.replace(process.cwd() + '/', ''))) appended.push(filePath.replace(process.cwd() + '/', '')); + } +} + function installRulesBlock(filePath, content, created, appended) { const block = `${MARKER_START}\n${content.trim()}\n${MARKER_END}`; mkdirSync(dirname(filePath), { recursive: true }); @@ -1998,7 +2040,7 @@ async function update() { for (const config of skillToolConfigs) { for (const [name, content] of fetchedSkills) { - installSkill(name, content, config, created); + installSkill(name, content, config, created, appended); } } @@ -2008,18 +2050,7 @@ async function update() { for (const toolId of tools) { const config = TOOL_CONFIGS[toolId]; if (config.rulesFile && schemaRules) { - const rulesPath = join(process.cwd(), config.rulesFile); - if (existsSync(rulesPath)) { - const existing = readFileSync(rulesPath, 'utf-8'); - if (!existing.includes('Database Schema Access')) { - const endMarker = existing.indexOf(MARKER_END); - if (endMarker !== -1) { - const updated = existing.substring(0, endMarker) + '\n' + schemaRules.replace('\n' + STORM_SKILL_MARKER, '') + '\n' + existing.substring(endMarker); - writeFileSync(rulesPath, updated); - if (!appended.includes(config.rulesFile)) appended.push(config.rulesFile); - } - } - } + installSchemaRules(join(process.cwd(), config.rulesFile), schemaRules, appended); } } @@ -2028,7 +2059,7 @@ async function update() { if (!content) { skipped.push(skillName + 
' (fetch failed)'); continue; } installedSkillNames.push(skillName); for (const config of skillToolConfigs) { - installSkill(skillName, content, config, created); + installSkill(skillName, content, config, created, appended); } } } @@ -2040,6 +2071,7 @@ async function update() { // Update MCP server script if databases are configured. if (Object.keys(readDatabases()).length > 0) { ensureGlobalDir(); + appended.push('~/.storm/server.mjs'); } const uniqueCreated = [...new Set(created)]; @@ -2549,8 +2581,10 @@ async function updateMcp(subArgs) { mcpList(); } else if (subcommand === 'remove' || subcommand === 'rm') { await mcpRemove(subArgs[1]); - } else { + } else if (subcommand === 'update') { await mcpReregisterAll(); + } else { + await mcpInit(); } } @@ -2649,7 +2683,7 @@ async function setup() { // Install fetched skills into each tool's directory. for (const config of skillToolConfigs) { for (const [name, content] of fetchedSkills) { - installSkill(name, content, config, created); + installSkill(name, content, config, created, appended); } } } @@ -2726,19 +2760,7 @@ async function setup() { for (const toolId of tools) { const config = TOOL_CONFIGS[toolId]; if (config.rulesFile && schemaRules) { - const rulesPath = join(process.cwd(), config.rulesFile); - if (existsSync(rulesPath)) { - const existing = readFileSync(rulesPath, 'utf-8'); - // Insert schema rules inside the STORM block if not already present. 
- if (!existing.includes('Database Schema Access')) { - const endMarker = existing.indexOf(MARKER_END); - if (endMarker !== -1) { - const updated = existing.substring(0, endMarker) + '\n' + schemaRules.replace('\n' + STORM_SKILL_MARKER, '') + '\n' + existing.substring(endMarker); - writeFileSync(rulesPath, updated); - appended.push(config.rulesFile); - } - } - } + installSchemaRules(join(process.cwd(), config.rulesFile), schemaRules, appended); } } @@ -2749,7 +2771,7 @@ async function setup() { if (!content) { skipped.push(skillName + ' (fetch failed)'); continue; } installedSkillNames.push(skillName); for (const config of skillToolConfigs) { - installSkill(skillName, content, config, created); + installSkill(skillName, content, config, created, appended); } } } @@ -2873,7 +2895,7 @@ async function demo() { installedSkillNames.push(skillName); } for (const [name, content] of fetchedSkills) { - installSkill(name, content, config, created); + installSkill(name, content, config, created, appended); } } @@ -2930,18 +2952,7 @@ async function demo() { // Fetch and install schema rules into the rules block. const schemaRules = await fetchSkill('storm-schema-rules'); if (config.rulesFile && schemaRules) { - const rulesPath = join(cwd, config.rulesFile); - if (existsSync(rulesPath)) { - const existing = readFileSync(rulesPath, 'utf-8'); - if (!existing.includes('Database Schema Access')) { - const endMarker = existing.indexOf(MARKER_END); - if (endMarker !== -1) { - const updated = existing.substring(0, endMarker) + '\n' + schemaRules.replace('\n' + STORM_SKILL_MARKER, '') + '\n' + existing.substring(endMarker); - writeFileSync(rulesPath, updated); - appended.push(config.rulesFile); - } - } - } + installSchemaRules(join(cwd, config.rulesFile), schemaRules, appended); } // Fetch and install schema-dependent skills. 
@@ -2950,7 +2961,7 @@ async function demo() { for (const skillName of schemaSkillNames) { const content = await fetchSkill(skillName); if (!content) { skipped.push(skillName + ' (fetch failed)'); continue; } - installSkill(skillName, content, config, created); + installSkill(skillName, content, config, created, appended); installedSkillNames.push(skillName); } } @@ -3031,8 +3042,8 @@ async function run() { storm db add [name] Add a global database connection storm db remove [name] Remove a global database connection storm db config [name] Configure data access and table exclusions - storm mcp init Set up MCP database server (no Storm ORM required) - storm mcp Re-register MCP servers for configured tools + storm mcp Set up MCP database server (default, no Storm ORM required) + storm mcp update Re-register MCP servers for configured tools storm mcp add [alias] Add a database connection to this project storm mcp list List project database connections storm mcp remove [alias] Remove a database connection diff --git a/website/static/skills/storm-query-java.md b/website/static/skills/storm-query-java.md index a3d6b4920..85a9a0fd5 100644 --- a/website/static/skills/storm-query-java.md +++ b/website/static/skills/storm-query-java.md @@ -20,6 +20,8 @@ The `Operator` enum is in `st.orm` and contains: `EQUALS`, `NOT_EQUALS`, `LESS_T Ask what data they need, filters, ordering, or pagination. +**DI preference:** In Spring Boot projects, repositories should be constructor-injected (see /storm-repository-java). Use `orm.entity(T.class)` and `orm.repository(T.class)` lookups only in standalone (non-DI) contexts and tests. In DI environments, write queries on injected repository instances. 
+ ## API Design: Builder Methods vs Convenience Methods Repository/entity methods fall into two categories: diff --git a/website/static/skills/storm-query-kotlin.md b/website/static/skills/storm-query-kotlin.md index e9b62f3f3..fbba91a63 100644 --- a/website/static/skills/storm-query-kotlin.md +++ b/website/static/skills/storm-query-kotlin.md @@ -20,6 +20,30 @@ All infix predicate operators (`eq`, `neq`, `like`, `greater`, `less`, `inList`, Ask what data they need, filters, ordering, or pagination. +**Repository rule:** All database queries must live in repository interfaces, not inline in services or other classes. In Spring Boot or Ktor projects, repositories are constructor-injected (see /storm-repository-kotlin). Use `orm.entity()` and `orm.repository()` lookups only in standalone (non-DI) contexts and tests. + +**Code-first WHERE clauses:** Always express WHERE conditions using metamodel-based predicates (`eq`, `isFalse()`, `isNotNull()`, etc.) instead of template strings. Only fall back to template expressions for conditions that predicates cannot express (e.g., `COALESCE`, date arithmetic, aggregate functions). When a WHERE clause mixes expressible and inexpressible conditions, split it: use code-based predicates for what you can, templates only for what you must. Multiple `where()` calls are AND-combined automatically. FK paths through the object graph (e.g., `User_.city eq city`) do not require explicit joins. 
+
+```kotlin
+// ❌ Wrong — WHERE conditions as a template string
+fun findActiveWithEmail(city: City, minAge: Int): List<User> =
+    select {
+        where {
+            """${User_.city} = ${city.id()}
+            AND ${User_.active} = true
+            AND ${User_.email} IS NOT NULL
+            AND TIMESTAMPDIFF(YEAR, ${User_.birthDate}, CURDATE()) >= $minAge"""
+        }
+    }.resultList
+
+// ✅ Correct — code-based predicates where possible, template only when no alternative exists
+fun findActiveWithEmail(city: City, minAge: Int): List<User> =
+    select {
+        where((User_.city eq city) and (User_.active.isTrue()) and (User_.email.isNotNull()))
+        where { "TIMESTAMPDIFF(YEAR, ${User_.birthDate}, CURDATE()) >= $minAge" }
+    }.resultList
+```
+
 ## Kotlin Infix Predicate Operators
 
 All operators are extension functions on `Metamodel` (generated metamodel fields like `User_.name`):
diff --git a/website/static/skills/storm-repository-java.md b/website/static/skills/storm-repository-java.md
index aa6af3020..14d77acba 100644
--- a/website/static/skills/storm-repository-java.md
+++ b/website/static/skills/storm-repository-java.md
@@ -24,8 +24,34 @@ Ask: which entity, what custom queries?
 
 Detect the project's framework from its build file (pom.xml or build.gradle): look for `storm-spring-boot-starter` or `spring-boot-starter` (Spring Boot) or neither (standalone). Use the detected framework to suggest the appropriate repository registration pattern.
 
+**DI preference:** In Spring Boot projects, always prefer constructor-injected repositories over `orm.entity(T.class)` or `orm.repository(T.class)` lookups. Repository lookup via `orm` is for standalone (non-DI) use and tests only. In DI environments, repositories are beans — inject them.
+
 ## Getting a Repository
 
+### Spring Boot (preferred in DI environments)
+
+Inject repositories via constructor injection. The Spring Boot Starter (or a `RepositoryBeanFactoryPostProcessor`) auto-registers repository interfaces as beans:
+
+```java
+@Service
+public class UserService {
+    private final UserRepository userRepository;
+    public UserService(UserRepository userRepository) { this.userRepository = userRepository; }
+    public Optional<User> findUser(String email) { return userRepository.findByEmail(email); }
+}
+
+// For generic entity access without a custom repository, inject EntityRepository directly:
+@Service
+public class CityService {
+    private final EntityRepository<City> cities;
+    public CityService(EntityRepository<City> cities) { this.cities = cities; }
+}
+```
+
+### Standalone / Tests
+
+Create repositories directly from the `ORMTemplate` (no DI container):
+
 ```java
 // Generic entity access (no custom interface needed)
 var users = orm.entity(User.class); // EntityRepository<User>
diff --git a/website/static/skills/storm-repository-kotlin.md b/website/static/skills/storm-repository-kotlin.md
index 5ca5473c7..ccfd729a0 100644
--- a/website/static/skills/storm-repository-kotlin.md
+++ b/website/static/skills/storm-repository-kotlin.md
@@ -23,10 +23,46 @@ import org.junit.jupiter.api.Assertions.* // assertEquals, assertTrue, as
 
 Ask: which entity, what custom queries?
 
+**Repository rule:** All database queries must live in repository interfaces, not inline in services or other classes. Services orchestrate by calling repository methods — they never build queries directly. When a skill or tool generates a query, always place it in the appropriate repository interface.
+
 Detect the project's framework from its build file (pom.xml or build.gradle.kts): look for `storm-kotlin-spring-boot-starter` or `spring-boot-starter` (Spring Boot), `storm-ktor` or `ktor-server-core` (Ktor), or neither (standalone). Use the detected framework to suggest the appropriate repository registration pattern below.
 
+**DI preference:** In Spring Boot or Ktor projects, always prefer constructor-injected repositories over `orm.entity()` or `orm.repository()` lookups. Repository lookup via `orm` is for standalone (non-DI) use and tests only. In DI environments, repositories are beans/components — inject them.
+
 ## Getting a Repository
 
+### Spring Boot (preferred in DI environments)
+
+Inject repositories via constructor injection. The Spring Boot Starter (or a `RepositoryBeanFactoryPostProcessor`) auto-registers repository interfaces as beans:
+
+```kotlin
+@Service
+class UserService(private val userRepository: UserRepository) {
+    fun findUser(email: String) = userRepository.findByEmail(email)
+}
+
+// For generic entity access without a custom repository, inject EntityRepository directly:
+@Service
+class CityService(private val cities: EntityRepository<City>) {
+    fun findAll() = cities.findAll()
+}
+```
+
+### Ktor
+
+Access repositories via `call.repository()` after registering them with `stormRepositories { }`:
+
+```kotlin
+get("/users/{email}") {
+    val users = call.repository<UserRepository>()
+    call.respond(users.findByEmail(call.parameters.getOrFail("email")))
+}
+```
+
+### Standalone / Tests
+
+Create repositories directly from the `ORMTemplate` (no DI container):
+
 ```kotlin
 // Generic entity access (no custom interface needed)
 val users = orm.entity<User>() // preferred — reified, import st.orm.repository.entity
@@ -51,12 +87,6 @@ interface UserRepository : EntityRepository<User> {
     fun findActiveInCity(city: City): List<User> =
         findAll((User_.city eq city) and (User_.active eq true))
 }
-
-// Obtain the repository
-val userRepository: UserRepository = orm.repository()
-
-// Or use the generic entity repository for simple CRUD
-val users = orm.entity(User::class)
 ```
 
 Key rules:
diff --git a/website/static/skills/storm-schema-rules.md b/website/static/skills/storm-schema-rules.md
index 4afc6845d..4471e1110 100644
--- a/website/static/skills/storm-schema-rules.md
+++ b/website/static/skills/storm-schema-rules.md
@@ -4,7 +4,31 @@ This project has a Storm Schema MCP server configured. Use the following tools t
 
 - `list_tables` - List all tables in the database
 - `describe_table(table)` - Describe a table's columns, types, nullability, primary key, foreign keys (with cascade rules), and unique constraints
-- `select_data(table, ...)` - Query individual records from a table (only available when data access is enabled for this connection)
+- `select_data` - Query individual records from a table (only available when data access is enabled for this connection)
+
+
+### select_data parameters
+
+All parameters except `table` are optional. Pass arrays and objects as native JSON types — never as stringified JSON.
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| `table` | `string` | **Required.** Table name. |
+| `columns` | `string[]` | Columns to return. Omit for all columns. Example: `["id", "name"]` |
+| `where` | `object[]` | Filter conditions (AND). Each object: `{ "column": "name", "operator": "=", "value": "x" }`. Operators: `=`, `!=`, `<`, `>`, `<=`, `>=`, `LIKE`, `IN`, `IS NULL`, `IS NOT NULL`. |
+| `orderBy` | `object[]` | Sort order. Each object: `{ "column": "name", "direction": "DESC" }`. Direction: `ASC` (default) or `DESC`. **Not** `sort`. |
+| `offset` | `integer` | Rows to skip (default: 0). |
+| `limit` | `integer` | Max rows (default: 50, max: 500). |
+
+Example call:
+```json
+{
+  "table": "USER",
+  "columns": ["id", "name", "timestamp"],
+  "orderBy": [{"column": "timestamp", "direction": "DESC"}],
+  "limit": 10
+}
+```
 
 Use these tools when:
 - Asked about the database schema or data model
@@ -16,10 +40,10 @@ Use these tools when:
 
 The `list_tables` and `describe_table` tools return structural metadata only — no data is exposed.
 
-The `select_data` tool is only available when the developer has explicitly enabled data access for this connection.
If the tool is not listed in `tools/list`, data access is disabled — do not attempt to call it. When available, `select_data` accepts a structured request (table, columns, filters, sort, offset, limit) and returns individual rows formatted as a markdown table. It does not accept raw SQL. Results default to 50 rows (max 500), and cell values longer than 200 characters are truncated. +The `select_data` tool is only available when the developer has explicitly enabled data access for this connection. If the tool is not listed in `tools/list`, data access is disabled — do not attempt to call it. When available, `select_data` accepts a structured request (table, columns, where, orderBy, offset, limit) and returns individual rows formatted as a markdown table. It does not accept raw SQL. Results default to 50 rows (max 500), and cell values longer than 200 characters are truncated. Use `select_data` when sample data would inform a decision — for example, to determine whether a `VARCHAR` column contains enum-like values, whether a `TEXT` column stores JSON, or what value ranges a numeric column holds. Do not query data speculatively or in bulk; use it when a specific question about the data would change the entity design. -When presenting `select_data` results to the user, always display them as a table with column names as column headers and one row per record. Never transpose the data (columns as rows). The response already contains a markdown table — present it directly or reformat it, but always keep the column-per-column, row-per-row orientation. +When presenting `select_data` results to the user, always show the actual data rows as a table — column names as headers, one row per record. Do not summarize, describe, or narrate the data in prose. The user asked to see the data, so show it. The response already contains a markdown table — present it directly. 
Never transpose the data (columns as rows), and never replace the table with a written description of what the data contains. Some tables may be excluded from data queries by the developer. If `select_data` returns an error about an excluded table, the table's schema is still available through `describe_table` — only data access is restricted. diff --git a/website/versioned_docs/version-1.11.2/ai.md b/website/versioned_docs/version-1.11.2/ai.md index 580a243c8..a3da9382d 100644 --- a/website/versioned_docs/version-1.11.2/ai.md +++ b/website/versioned_docs/version-1.11.2/ai.md @@ -53,7 +53,7 @@ With the database connected, three additional skills become available for schema To manage database connections later, use `storm db` for the global connection library and `storm mcp` for project-level configuration. See [Database Connections & MCP](database-and-mcp.md) for the full guide. :::tip Looking for a database MCP server for Python, Go, Ruby, or any other language? -The Storm MCP server works standalone — no Storm ORM required. Run `npx @storm-orm/cli mcp init` to set up schema access and optional read-only data queries without installing Storm rules or skills. See [Using Without Storm ORM](database-and-mcp.md#using-without-storm-orm). +The Storm MCP server works standalone — no Storm ORM required. Run `npx @storm-orm/cli mcp` to set up schema access and optional read-only data queries without installing Storm rules or skills. See [Using Without Storm ORM](database-and-mcp.md#using-without-storm-orm). ::: --- diff --git a/website/versioned_docs/version-1.11.2/database-and-mcp.md b/website/versioned_docs/version-1.11.2/database-and-mcp.md index 6ae1734e3..e0d3db2da 100644 --- a/website/versioned_docs/version-1.11.2/database-and-mcp.md +++ b/website/versioned_docs/version-1.11.2/database-and-mcp.md @@ -108,7 +108,7 @@ Run `storm mcp remove reporting` to remove an alias from the project. 
This unreg
 ### Re-registering connections
 
-If your AI tool's MCP configuration gets out of sync (for example, after switching branches or resetting editor config files), run `storm mcp` without arguments. This re-registers all connections from `databases.json` for every configured AI tool.
+If your AI tool's MCP configuration gets out of sync (for example, after switching branches or resetting editor config files), run `storm mcp update`. This re-registers all connections from `databases.json` for every configured AI tool.
 
 ---
 
@@ -191,7 +191,7 @@ Global connections are stored in `~/.storm/connections/`. Project-level configur
 The Storm MCP server is a standalone database tool — it does not require Storm ORM in your project. If you use Python, Go, Ruby, or any other language and just want your AI tool to have schema awareness and optional data access, run:
 
 ```bash
-npx @storm-orm/cli mcp init
+npx @storm-orm/cli mcp
 ```
 
 This walks you through:
@@ -324,11 +324,11 @@ Excluded tables still appear in `list_tables` and can be described with `describ
 ### `storm mcp` — Project MCP servers
 
-#### `storm mcp init`
+#### `storm mcp`
 
-Standalone setup for the MCP database server, intended for projects that do not use Storm ORM. Walks you through AI tool selection, database connections, data access, and MCP registration. No Storm rules or language-specific configuration is installed.
+Set up an MCP database server (default). Walks you through AI tool selection, database connections, data access, and MCP registration. Works standalone — no Storm ORM required. `storm mcp init` is an alias for this command.
 
-#### `storm mcp`
+#### `storm mcp update`
 
 Re-register all MCP servers defined in `.storm/databases.json` with your AI tools. Useful after switching branches, resetting editor config files, or when MCP registrations get out of sync.
diff --git a/website/versioned_docs/version-1.11.2/index.md b/website/versioned_docs/version-1.11.2/index.md index dcc8087e7..79f536b0b 100644 --- a/website/versioned_docs/version-1.11.2/index.md +++ b/website/versioned_docs/version-1.11.2/index.md @@ -9,7 +9,7 @@ import TabItem from '@theme/TabItem'; # ST/ORM :::tip Give your AI tool access to your database schema -Storm includes a schema-aware MCP server that exposes your table definitions, column types, and foreign keys to AI coding tools like Claude Code, Cursor, Copilot and Codex. Run `npx @storm-orm/cli` for full Storm ORM support including AI skills, conventions, and schema access. Using Python, Go, Ruby, or another language? Run `npx @storm-orm/cli mcp init` to set up the MCP server standalone. +Storm includes a schema-aware MCP server that exposes your table definitions, column types, and foreign keys to AI coding tools like Claude Code, Cursor, Copilot and Codex. Run `npx @storm-orm/cli` for full Storm ORM support including AI skills, conventions, and schema access. Using Python, Go, Ruby, or another language? Run `npx @storm-orm/cli mcp` to set up the MCP server standalone. ::: **Storm** is a modern, high-performance ORM for Kotlin 2.0+ and Java 21+, built around a powerful SQL template engine. It focuses on simplicity, type safety, and predictable performance through immutable models and compile-time metadata. diff --git a/website/versions.json b/website/versions.json index 3db499e91..d2a0f60f8 100644 --- a/website/versions.json +++ b/website/versions.json @@ -1,4 +1,5 @@ [ + "1.11.3", "1.11.2", "1.11.1", "1.11.0",