docs: add showcase examples with screenshots #289
Conversation
- Standardize tool count to 50+ across all docs (was 99+/55+)
- Update install command to unscoped `altimate-code` package
- Remove stale Python/uv auto-setup claims (all-native TypeScript now)
- Update docs badge URL to docs.altimate.sh
- Remove altimate-core npm badge from README
- Add --yolo flag to CLI reference and builder mode subtext
- Add new env vars (YOLO, MEMORY, TRAINING) to CLI docs
- Add prompt enhancement keybind (leader+i) to TUI and keybinds docs
- Add tool_lookup to tools index
- Add built-in skills table (sql-review, schema-migration, pii-audit, etc.)
- Add altimate-dbt CLI section to dbt-tools.md
- Add Oracle and SQLite to warehouse lists
- Update security FAQ: replace Python engine FAQ with native engine, add sensitive_write FAQ
- Update telemetry docs to remove Python engine references
- Add v0.4.2 to README "What's New" section
- Update llms.txt URLs to docs.altimate.sh and bump version to v0.4.2

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Replace 304 em-dashes across 38 docs files with natural sentence structures (colons, commas, periods, split sentences) to avoid AI-generated content appearance
- Fix pill-grid CSS: increase gap/padding, add responsive breakpoints at 768px and 480px for reliable scaling across viewport sizes
- Simplify quickstart /discover step to brief description + link to Full Setup; add (Optional) marker to getting-started warehouse step

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rewrite quickstart as a full Setup page covering warehouse connections, LLM provider switching, agent modes, skills, and permissions. Update overview page with ADE-Bench results (74.4%), fix install command, and change 70+ to 50+ tools. Replace query example with NYC taxi cab analytics prompt. Remove time blocks from step headings and trim redundant sections.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Move CI page from data-engineering/guides to usage/. Remove duplicate non-interactive and tracing sections from CLI page, link to CI instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove data-engineering-specific agent table from agents.md (now covered elsewhere), replace grid cards in quickstart with a compact link list, and rename "Complete Setup" → "Setup" in nav.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Rename "What's New" to "Changelog" across docs, nav, and README
- Populate changelog with full release history (v0.1.0 through v0.4.9)
- Add inline permission examples to security-faq (permission table, JSON configs)
- Add Data Engineering agents table and Agent Permissions example to agents page
- Add Non-interactive Usage and Tracing sections to CLI docs
- Add missing nav entries: Web UI, Claude Code/Codex guides, Memory Tools, Observability (Tracing/Telemetry), Training, and Extend (SDK/Server/Plugins/Ecosystem)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix broken anchor link to #step-3-configure-your-warehouse-optional
- Add inline "Adding Custom Skills" section in skills.md
- Fix changelog upgrade command to use unscoped package name
- Split merged 0.4.1/0.4.2 changelog into separate sections
- Update tool count from 70+ to 100+ in configure/tools pages
- Move Guides to bottom of Use section in nav
- Change hero tagline to "Open-source data engineering harness."
- Simplify install command to just npm install

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… stub web UI

- Reduce agent modes from 7 to 3 (builder, analyst, plan) per PR #282
- Add SQL Write Access Control section with query classification table
- Add sql_execute_write permission to permissions reference
- Update /data to /altimate in Claude Code guide, add /configure-claude setup
- Add Codex CLI skill integration and /configure-codex setup
- Add /configure-claude and /configure-codex to commands reference
- Stub web UI page with Coming Soon notice
- Update all cross-references (getting-started, quickstart, index, tui, training, migration)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
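The SQL Write Access Control described in this commit classifies each query before execution. A minimal sketch of such a statement-type classifier (the keyword set and function are illustrative assumptions, not the shipped implementation):

```python
# Illustrative read/write classifier, approximating the query
# classification table referenced in the commit above.
WRITE_KEYWORDS = {"insert", "update", "delete", "merge", "create",
                  "alter", "drop", "truncate", "grant", "revoke", "copy"}

def classify(sql: str) -> str:
    """Return 'write' if the statement's leading keyword mutates state."""
    stripped = sql.strip()
    if not stripped:
        return "read"
    first = stripped.split(None, 1)[0].lower()
    return "write" if first in WRITE_KEYWORDS else "read"

print(classify("SELECT * FROM trips"))       # read
print(classify("DROP TABLE staging.trips"))  # write
```

A gate like the documented `sql_execute_write` permission could then allow "read" results unconditionally and require approval for "write".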
- commands.md: keep capitalized "Integration" heading
- cli.md: keep --yolo flag in global flags table

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix 4 broken URLs in llms.txt (network, telemetry, security-faq, troubleshooting) to match reference/ paths in mkdocs nav
- Update llms.txt version from v0.4.2 to v0.5.0
- Add missing v0.5.0 changelog entry with features and fixes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the generic bulleted grid cards with six detailed showcase examples, each with a real prompt and screenshot: NYC Taxi, Olist E-Commerce, Global CO2 Explorer, Spotify Analytics Migration, US Home Sales Data Science, and Snowflake vs Databricks Benchmark.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

Comprehensive documentation refactor: updates product positioning from 99+ to 100+ tools, redesigns the agent system from seven modes to three (Builder, Analyst, Plan), introduces SQL write access control with new permission mechanisms, adds extensive configuration and tooling reference pages, normalizes punctuation/formatting across all docs, and updates packaging references to unscoped npm packages.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~35 minutes
Pre-merge checks: ✅ 3 passed. Important: merge conflicts detected (Beta).
Pull request overview
This PR substantially updates the documentation site: it introduces a new “Showcase” examples page with screenshots, while also reorganizing MkDocs navigation and refreshing multiple docs pages to reflect recent product changes (agents, tooling, CI/headless usage, and references).
Changes:
- Add a new Examples “Showcase” page with six detailed prompts and corresponding screenshots.
- Restructure the MkDocs navigation and expand documentation structure (Getting Started / Use / Configure / Governance / Reference).
- Update/refresh many docs pages (CLI, permissions, telemetry, security FAQ, training, tools) and add new reference pages (network, changelog, CI/headless).
Reviewed changes
Copilot reviewed 57 out of 64 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| README.md | Updates badges/install instructions and adds a short changelog section. |
| docs/mkdocs.yml | Reworks site navigation structure and page grouping. |
| docs/docs/usage/web.md | Rewrites Web UI documentation content. |
| docs/docs/usage/tui.md | Updates TUI layout wording and keybinds/agent list. |
| docs/docs/usage/cli.md | Re-formats CLI flags and adds env var sections. |
| docs/docs/usage/ci-headless.md | Adds new CI/headless usage guide with examples. |
| docs/docs/reference/windows-wsl.md | Updates Windows install snippet and wording tweaks. |
| docs/docs/reference/troubleshooting.md | Clarifies troubleshooting wording and links. |
| docs/docs/reference/telemetry.md | Updates telemetry event descriptions and references. |
| docs/docs/reference/security-faq.md | Updates security FAQ content and internal links; adds sensitive_write section. |
| docs/docs/reference/network.md | Adds a new network/proxy/firewall reference page. |
| docs/docs/reference/changelog.md | Adds a new docs-native changelog page. |
| docs/docs/quickstart.md | Updates quickstart copy and install command. |
| docs/docs/llms.txt | Updates llms.txt metadata and canonical docs URLs. |
| docs/docs/index.md | Updates homepage copy, agent section, warehouse list, and formatting. |
| docs/docs/getting-started/quickstart.md | Adds a new “Setup” page under Getting Started. |
| docs/docs/getting-started/quickstart-new.md | Adds a new fast “Quickstart” page under Getting Started. |
| docs/docs/getting-started/index.md | Adds a new Getting Started landing page with hero section. |
| docs/docs/getting-started.md | Updates legacy getting-started page content and links. |
| docs/docs/examples/index.md | Adds new showcase examples page referencing screenshot assets. |
| docs/docs/drivers.md | Updates driver documentation wording and formatting. |
| docs/docs/develop/server.md | Re-formats server endpoint bullet list. |
| docs/docs/develop/sdk.md | Re-formats SDK import table descriptions. |
| docs/docs/develop/plugins.md | Re-formats plugin spec list and clarifies hook wording. |
| docs/docs/develop/ecosystem.md | Re-formats ecosystem integration/community lists. |
| docs/docs/data-engineering/training/team-deployment.md | Wording and heading style normalization. |
| docs/docs/data-engineering/training/index.md | Updates training narrative to match current agent set and wording. |
| docs/docs/data-engineering/tools/warehouse-tools.md | Removes “Python Engine” section from environment scan example. |
| docs/docs/data-engineering/tools/sql-tools.md | Re-formats parameter lists and example text. |
| docs/docs/data-engineering/tools/schema-tools.md | Re-formats parameter lists and output examples. |
| docs/docs/data-engineering/tools/memory-tools.md | Improves wording/formatting and clarifies defaults. |
| docs/docs/data-engineering/tools/index.md | Updates tool count language and adds tool_lookup mention. |
| docs/docs/data-engineering/tools/finops-tools.md | Re-formats parameters and example outputs. |
| docs/docs/data-engineering/tools/dbt-tools.md | Adds altimate-dbt CLI section and updates formatting. |
| docs/docs/data-engineering/guides/using-with-codex.md | Minor wording tweak. |
| docs/docs/data-engineering/guides/migration.md | Updates migration guide to reflect new agent usage and formatting. |
| docs/docs/data-engineering/guides/cost-optimization.md | Re-formats common findings list. |
| docs/docs/data-engineering/guides/ci-headless.md | Updates examples/install command and link fixes. |
| docs/docs/data-engineering/agent-modes.md | Updates agent modes (3-mode model) and adds SQL write control section. |
| docs/docs/configure/warehouses.md | Adds new warehouses configuration reference page. |
| docs/docs/configure/tracing.md | Rewords tracing intro and adjusts links. |
| docs/docs/configure/tools/index.md | Adds new Tools Reference index under Configure. |
| docs/docs/configure/tools/custom.md | Adds new Custom Tools documentation. |
| docs/docs/configure/tools/core-tools.md | Adds new Core Tools documentation page. |
| docs/docs/configure/tools/config.md | Adds Built-in Tools documentation page. |
| docs/docs/configure/tools.md | Updates tool counts/phrasing and re-formats bullets. |
| docs/docs/configure/skills.md | Adds built-in skills table and expands custom/remote skills docs. |
| docs/docs/configure/rules.md | Re-formats rules list and writing tips. |
| docs/docs/configure/providers.md | Minor wording tweaks. |
| docs/docs/configure/permissions.md | Clarifies matching semantics and adds sql_execute_write permission. |
| docs/docs/configure/keybinds.md | Adds Leader+i prompt enhancement keybind doc. |
| docs/docs/configure/index.md | Adds new Configure landing page. |
| docs/docs/configure/governance.md | Adds new Governance overview page. |
| docs/docs/configure/context-management.md | Re-formats lists and clarifies pruning/compaction text. |
| docs/docs/configure/config.md | Updates telemetry link and clarifies defaults formatting. |
| docs/docs/configure/commands.md | Re-formats command descriptions and headings. |
| docs/docs/configure/agents.md | Updates built-in agent definitions and adds SQL write access control table. |
| docs/docs/assets/css/extra.css | Adjusts card padding and improves pill-grid responsiveness. |
From `docs/mkdocs.yml`:

```yaml
- Getting Started:
  - Overview: getting-started/index.md
  - Quickstart: getting-started/quickstart-new.md
  - Setup: getting-started/quickstart.md
- Examples:
  - Showcase: examples/index.md
- Use:
  - Agents:
```
From `docs/docs/usage/web.md`:

```diff
-Altimate Web is a browser-based interface for interacting with altimate's data engineering tools without the terminal. It provides the same conversational agent experience as the TUI, accessible from any browser.
-
-- Full chat interface with streaming responses
-- Agent switching between builder, analyst, and plan modes
-- File references and tool call results
-- Session management and history
-
-!!! note
-    The web UI is the general-purpose agent interface. For data-engineering-specific UIs, see the [Data Engineering guides](../data-engineering/guides/index.md).
+!!! info "Coming Soon"
+    The web UI is currently under development. For now, use the [TUI](tui.md) or [CLI](cli.md) to interact with altimate.
```
From `docs/docs/configure/warehouses.md`:

````markdown
# Warehouses

Altimate Code connects to 8 warehouse types. Configure them in the `warehouses` section of your config file or in `.altimate-code/connections.json`.

## Configuration

Each warehouse has a key (the connection name) and a config object:

```json
{
  "warehouses": {
    "my-connection-name": {
      "type": "<warehouse-type>",
      ...
    }
  }
```
````
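The key-plus-config-object shape described above can be checked with a small validation sketch. The helper name and the set of known types are illustrative assumptions (eight types are inferred from the warehouse lists elsewhere in this PR, not from a published schema):

```python
# Assumed set of 8 warehouse types; the authoritative list lives in
# the product's own schema, not here.
KNOWN_TYPES = {"snowflake", "bigquery", "databricks", "postgres",
               "duckdb", "redshift", "oracle", "sqlite"}

def validate_warehouses(config):
    """Ensure every connection name maps to an object with a known type."""
    problems = []
    for name, conn in config.get("warehouses", {}).items():
        if not isinstance(conn, dict) or "type" not in conn:
            problems.append(f"{name}: missing 'type'")
        elif conn["type"] not in KNOWN_TYPES:
            problems.append(f"{name}: unknown type {conn['type']!r}")
    return problems

print(validate_warehouses({"warehouses": {"prod": {"type": "snowflake"}}}))  # []
```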
````markdown
### Manual configuration

Add a warehouse connection to `altimate-code.json`:

=== "Snowflake"

    ```json
    {
      "warehouses": {
````
````markdown
Add to `altimate-code.json` in your project root:

=== "Snowflake"

    ```json
    {
      "connections": {
        "snowflake": {
          "type": "snowflake",
          "account": "xy12345.us-east-1",
          "user": "dbt_user",
          "password": "${SNOWFLAKE_PASSWORD}",
          "warehouse": "TRANSFORM_WH",
          "database": "ANALYTICS",
          "schema": "PUBLIC",
          "role": "TRANSFORMER"
        }
      }
    }
    ```
````
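The `${SNOWFLAKE_PASSWORD}` placeholder above implies environment-variable substitution at load time. A minimal Python sketch of that expansion (the recursive helper is an assumption for illustration, not altimate's documented loader):

```python
import json
import os
import re

def expand_env(value):
    """Recursively expand ${VAR} placeholders from the environment."""
    if isinstance(value, dict):
        return {k: expand_env(v) for k, v in value.items()}
    if isinstance(value, str):
        return re.sub(r"\$\{(\w+)\}",
                      lambda m: os.environ.get(m.group(1), ""), value)
    return value

os.environ["SNOWFLAKE_PASSWORD"] = "s3cr3t"
raw = '{"connections": {"snowflake": {"type": "snowflake", "password": "${SNOWFLAKE_PASSWORD}"}}}'
config = expand_env(json.loads(raw))
print(config["connections"]["snowflake"]["password"])  # s3cr3t
```

Keeping secrets in the environment rather than in `altimate-code.json` means the config file stays safe to commit.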
```diff
 ## Changelog

 - **v0.4.2** (March 2026) — yolo mode, Python engine elimination (all-native TypeScript), tool consolidation, path sandboxing hardening, altimate-dbt CLI, unscoped npm package
 - **v0.4.1** (March 2026) — env-based skill selection, session caching, tracing improvements
-- **v0.4.0** (Feb 2026) — data visualization skill, 99+ tools, training system
+- **v0.4.0** (Feb 2026) — data visualization skill, 100+ tools, training system
```
From `docs/docs/configure/index.md` (new file):

```diff
@@ -0,0 +1,57 @@
+# Configure
+
+Set up your warehouses, LLM providers, and preferences. For agents, tools, skills, and commands, see the [Use](../data-engineering/agent-modes.md) section. For rules, permissions, and context management, see [Governance](rules.md).
```
Actionable comments posted: 17
🧹 Nitpick comments (10)
docs/docs/assets/css/extra.css (1)
**170-175:** Consider slightly increasing the tiny-mobile pill font size. On line 173, `font-size: 0.72rem` can become hard to read on small devices; consider a `0.75rem` minimum for readability.

```diff
 @media (max-width: 480px) {
   .pill-grid ul li {
     padding: 0.4rem 0.8rem;
-    font-size: 0.72rem;
+    font-size: 0.75rem;
   }
 }
```

docs/docs/examples/index.md (2)
**61-61:** Optional: hyphenate compound adjectives. Static analysis suggests hyphenating "data-science-style" and "BI-style" when used as compound adjectives before nouns, per standard English style guides.

```diff
-> Download all available public US home sales data sets. Process and merge them into a unified format. Perform advanced data science on it to bring to the surface interesting insights. K-means, OLS regressions, and more. Build a single interactive dashboard with data science style charts, think violin plots, Q-Q plots and lollipop charts. Use a R/ggplot2 aesthetic. No BI style charts.
+> Download all available public US home sales data sets. Process and merge them into a unified format. Perform advanced data science on it to bring to the surface interesting insights. K-means, OLS regressions, and more. Build a single interactive dashboard with data-science-style charts, think violin plots, Q-Q plots and lollipop charts. Use a R/ggplot2 aesthetic. No BI-style charts.
```
**3-3:** Minor: consider replacing the em-dash for consistency. Line 3 uses an em-dash, while other files in this PR are removing em-dashes in favor of commas, colons, or periods.

```diff
-Real-world examples showing what altimate can do across data engineering workflows. Each example demonstrates end-to-end automation — from discovery to implementation.
+Real-world examples showing what altimate can do across data engineering workflows. Each example demonstrates end-to-end automation, from discovery to implementation.
```

docs/docs/getting-started/quickstart-new.md (1)
**141-141:** Consider consistent capitalization of the product name. Line 141 uses lowercase "altimate" in "What altimate does:", while the product is typically capitalized as "Altimate Code" or "Altimate" throughout the documentation.

```diff
-**What altimate does:**
+**What Altimate does:**
```

docs/docs/configure/warehouses.md (2)
**9-18:** Add a language specifier to the placeholder code block. The fenced code block showing the warehouse configuration structure should specify `json` as the language for proper syntax highlighting.

````diff
-```
+```json
 {
   "warehouses": {
     "my-connection-name": {
       "type": "<warehouse-type>",
-      ...
+      // ... additional config
     }
   }
 }
````

Note: JSON doesn't support `...` syntax, so consider using a comment instead.

**336-350:** Add a language specifier to the test output example. The connection test output example should specify `text` or `bash` as the language for proper rendering.

````diff
-```
+```text
 > warehouse_test prod-snowflake
 Testing connection to prod-snowflake (snowflake)...
````

docs/docs/getting-started/index.md (1)
**45-47:** Consider varying sentence structure for improved readability. Three consecutive sentences begin with "Your," which can feel repetitive. While this may be intentional for rhetorical emphasis, varying the structure could improve flow.

```diff
-Your transformation logic is in dbt. Your orchestration is in Airflow or Dagster. Your warehouses span Snowflake and BigQuery (and maybe that Redshift cluster nobody wants to talk about). Your governance requirements cross every platform boundary.
+Your transformation logic is in dbt. Orchestration runs in Airflow or Dagster. Warehouses span Snowflake and BigQuery (and maybe that Redshift cluster nobody wants to talk about). Governance requirements cross every platform boundary.
```

docs/docs/getting-started/quickstart.md (3)
**179-274:** Fix indented code blocks in the warehouse configuration sections. Like the LLM provider sections, the warehouse configuration tabs use indented code blocks that should be fenced for consistency.
**58-150:** Fix indented code blocks to use fenced style. The code blocks inside the tabbed sections (=== "Anthropic", === "OpenAI", etc.) use indented style, but markdownlint expects fenced blocks for consistency. For each provider section (lines 60, 73, 86, 101, 115, 128, 141), wrap the JSON in fenced code blocks:

````markdown
=== "Anthropic"

    ```json
    {
      "provider": {
        "anthropic": {
          "apiKey": "{env:ANTHROPIC_API_KEY}"
        }
      },
      "model": "anthropic/claude-sonnet-4-6"
    }
    ```
````

Repeat for all provider tabs.
**280-280:** Add language specifiers to fenced code blocks. The code blocks at lines 280, 299, and 427 should specify their language for proper syntax highlighting.

````diff
-```
+```bash
 > warehouse_test snowflake
 ✓ Connected successfully
````

````diff
-```
+```bash
 /agent analyst
````

````diff
-```
+```text
 Build a NYC taxi analytics dashboard using BigQuery public data and dbt for transformations. Include geographic demand analysis with pickup/dropoff hotspots, top routes, airport traffic, and borough comparisons. Add revenue analytics with fare breakdowns, fare distribution, tip analysis, payment trends, and revenue-per-mile by route.
````

Also applies to: 299-299, 427-427
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/docs/configure/agents.md`:
- Line 11: The docs currently contradict on the `plan` permission: update the
description for the `plan` permission (the table row labeled `plan`) so it is
consistent with the later statement that `plan` can "edit plan files"; choose
one canonical boundary (either "no edits" or "can edit plan files") and apply it
to both places, ensuring both the table entry for `plan` and the explanatory
sentence that mentions editing plan files use the same wording and clarify that
edits are limited to plan-related files only; update any adjacent phrasing to
match the chosen behavior so users see a single, unambiguous definition of the
`plan` permission.
- Line 19: The "analyst" role is described as "Truly read-only" but the allowed
command list includes "dbt deps", which mutates the repo by writing to
dbt_packages and package-lock.yml; fix this by either removing "dbt deps" from
the allowed commands for the analyst role or changing the description from
"Truly read-only" to a softer phrase (e.g., "primarily read-only") and add an
explicit note that "dbt deps" writes packages to dbt_packages and updates
package-lock.yml and therefore is an exception to read-only behavior; update the
"analyst" claim text and the allowed commands list accordingly to keep the doc
consistent.
In `@docs/docs/configure/skills.md`:
- Around line 107-131: The "Remote Skills" section duplicates the discovery
order but omits entries from the earlier list; update the Remote Skills text so
the load-order matches the first list by explicitly listing: 1)
".altimate-code/skill/" (project), 2) "~/.altimate-code/skills/" (global), 3)
custom paths via config (e.g. "skills.paths"), and then show how remote URLs
(e.g. "skills.urls") are loaded at startup—ensure the same precedence language
and examples are used as in the initial "Skills are loaded from these paths"
block so readers see a single consistent discovery order.
In `@docs/docs/configure/tools.md`:
- Line 27: Replace the incorrect tool count in the sentence "In addition to
built-in tools, altimate provides 100+ specialized data engineering tools." with
the accurate number reflecting the tally in data-engineering/tools/index.md
(43), so it reads something like "In addition to built-in tools, altimate
provides 43 specialized data engineering tools." Verify the wording remains
consistent with existing phrasing around counts.
In `@docs/docs/configure/tools/core-tools.md`:
- Around line 65-68: Update the documentation text for the
altimate_core_classify_pii entry to fix the PII expansion typo: replace the
phrase "personal identifiable information" with the correct term "personally
identifiable information" in the description for altimate_core_classify_pii.
In `@docs/docs/data-engineering/agent-modes.md`:
- Line 143: The sentence about Plan mode is contradictory; update the wording
around "Plan mode" and "plan files" to clearly state the permission boundary:
either allow editing only plan files or forbid file modifications entirely.
Replace the current line with a single clear sentence such as "Plan mode
restricts the agent to reading files and editing only plan files; no SQL, no
bash, and no other file modifications are allowed," or, if the intent is to
forbid edits, change it to "Plan mode restricts the agent to reading files only;
no editing of plan files, no SQL, no bash, and no file modifications." Ensure
the terms "Plan mode" and "plan files" are used exactly as in the doc so readers
can locate the change.
In `@docs/docs/data-engineering/tools/index.md`:
- Line 11: Check and reconcile the skills count in the "dbt Tools" table row
with the actual capabilities listed in dbt-tools.md: open dbt-tools.md,
enumerate the distinct skills/features documented (e.g., Run, manifest parsing,
test generation, scaffolding, altimate-dbt CLI, etc.), confirm whether one was
removed or a new skill (altimate-dbt) was added, then update the table entry
string in index.md so the numeric count matches the true number of skills and/or
adjust the descriptive text to accurately reflect the listed capabilities
(update the "5 skills" number or the description as appropriate).
In `@docs/docs/examples/index.md`:
- Line 15: The docs reference six missing screenshot files (nyc_taxi.png,
ADF_Snowflake_Pipeline.png, global_co_explorer.png, spotify_analytics.png,
us_home_sales.png, dbrx_snowflake_benchmark.png) causing broken images in
docs/examples/index.md; fix by either committing the actual image files with
those exact filenames into the repository assets or removing/replacing the
markdown image links in index.md to point to valid images/URLs (update the alt
text if replacing) so the markdown no longer references nonexistent files.
In `@docs/docs/getting-started/index.md`:
- Around line 75-79: The heading ":material-shield-check:{ .lg .middle }
**Governed by design — five agent modes**" contradicts the body which lists
"Three agent modes — Builder, Analyst, and Plan"; update the heading to reflect
three agent modes (e.g., change "five" to "three" or reword to "Governed by
design — three agent modes") so it matches the body content, and verify any
nearby text referencing agent counts is consistent with the updated phrase.
In `@docs/docs/llms.txt`:
- Around line 16-20: Update the stale capability metadata in llms.txt: locate
the list items referencing "Agent Modes" and "SQL Tools" (and the other bullet
entries) and replace legacy agent mode names/counts and the SQL tool
count/anti-pattern rules with the current, authoritative values from the system
(e.g., current agent mode list/permissions and the present number of SQL tools
and rules); ensure the descriptive text for "Agent Modes", "SQL Tools", "Schema
Tools", and related links matches the latest product/state and remove or
annotate any deprecated modes so the AI-facing index reflects the up-to-date
capabilities.
In `@docs/docs/reference/changelog.md`:
- Line 29: Replace the word "builtin" with the hyphenated "built-in" in the
changelog entry text "Ship builtin skills to customers via postinstall (`#279`)"
so it reads "Ship built-in skills to customers via postinstall (`#279`)" to match
the docs style; update the single line in docs/reference/changelog.md
accordingly.
- Around line 205-233: The changelog dates are out of order: the headings "##
[0.2.3] - 2026-03-04" and "## [0.2.2] - 2026-03-05" must reflect chronological
release order; update the date on either the "0.2.3" or "0.2.2" heading so that
0.2.2 is not later than 0.2.3 (e.g., change "## [0.2.3] - 2026-03-04" to a date
after 2026-03-05 or correct "## [0.2.2] - 2026-03-05" to the intended earlier
date), leaving the rest of the section content unchanged.
In `@docs/docs/reference/security-faq.md`:
- Around line 256-258: The wording is inconsistent about whether a "Blocked"
command is absolutely forbidden or simply denied by default; update the sentence
that currently reads 'Blocked means the agent cannot run it at all; you must
override in config.' to clearly state that "Blocked" is a default-deny that can
be overridden via the altimate-code.json configuration (or, if some commands are
truly immutable, mark them explicitly as "hard-blocked"); reference the terms
"Prompted" and "Blocked" in the same paragraph and link to the Permissions page
so readers know how to change rules via altimate-code.json and where to find
which commands are hard-blocked versus configurable.
- Around line 185-190: Remove the stale Python-specific guidance that references
the ALTIMATE_CLI_PYTHON environment variable in the FAQ: delete or replace the
sentence on Line 189 that recommends ALTIMATE_CLI_PYTHON and instead reference
general secret-handling best practices (e.g., using environment variables and
your cloud provider's secret manager) consistent with the earlier note that
there is no Python dependency; ensure the project-level config mention
(altimate-code.json) and the guidance to not commit credentials remain intact
and consistent.
In `@docs/docs/reference/telemetry.md`:
- Line 82: The sentence "Personally identifiable information (your email is
SHA-256 hashed before sending and is used only for anonymous user correlation)"
contradicts the "never collected" claim; replace or relocate that sentence so
the docs clearly state that a non-reversible SHA-256 hash of the email is
collected as a stable identifier (not raw PII), explain its narrow purpose
("anonymous user correlation"), and clarify retention/processing rules and that
it is not intended to be personally identifiable; update the "never collected"
list to remove this wording and add a precise note that hashed emails are
collected as described.
In `@docs/docs/usage/ci-headless.md`:
- Around line 30-46: The CI env snippet contains GitHub Actions templating for
SNOWFLAKE_PASSWORD (`${{ secrets.SNOWFLAKE_PASSWORD }}`) which is not valid
shell; replace that token with a shell-valid placeholder or environment variable
reference (e.g., set SNOWFLAKE_PASSWORD=your-password or export
SNOWFLAKE_PASSWORD) so the block remains copy/pasteable; update the example
lines referencing SNOWFLAKE_PASSWORD and keep other keys (ALTIMATE_PROVIDER,
ALTIMATE_ANTHROPIC_API_KEY, ALTIMATE_OPENAI_API_KEY, SNOWFLAKE_ACCOUNT,
SNOWFLAKE_USER, SNOWFLAKE_DATABASE, SNOWFLAKE_SCHEMA, SNOWFLAKE_WAREHOUSE)
unchanged.
- Around line 123-135: The pre-commit example only captures newly added files
because STAGED_MODELS uses git diff --cached --name-only --diff-filter=A; change
the diff filter to include modified files as well (e.g., use --diff-filter=AM)
so STAGED_MODELS picks up both added and modified SQL files, ensuring the
altimate run "/generate-tests for: $STAGED_MODELS" covers edited models too.
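The difference between the two diff filters can be sanity-checked in a throwaway repository (a self-contained sketch; the file names and the `models/` layout are illustrative, not taken from the repo under review):

```shell
#!/bin/sh
# Show that --diff-filter=A misses modified files while
# --diff-filter=AM catches both added and modified ones.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name CI

mkdir -p models
echo "select 1" > models/existing.sql
git add . && git commit -qm "baseline"

# Stage one modified model and one brand-new model.
echo "select 2" > models/existing.sql     # modified
echo "select 3" > models/new_model.sql    # added
git add models

echo "A only:"
git diff --cached --name-only --diff-filter=A   # misses existing.sql
echo "AM:"
git diff --cached --name-only --diff-filter=AM  # lists both files
```

Running it prints only `models/new_model.sql` under "A only" but both files under "AM", which is why the suggested `--diff-filter=AM` change makes the hook cover edited models too.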
---
Nitpick comments:
In `@docs/docs/assets/css/extra.css`:
- Around line 170-175: Update the small-screen CSS rule for .pill-grid ul li
inside the `@media` (max-width: 480px) block to increase the font-size from
0.72rem to 0.75rem to improve readability on tiny devices; locate the .pill-grid
ul li selector in docs/assets/css/extra.css and adjust only the font-size value
while keeping the existing padding intact.
In `@docs/docs/configure/warehouses.md`:
- Around line 9-18: Update the fenced example showing the warehouse
configuration to specify the JSON language and replace the invalid "..."
placeholder with a JSON-friendly comment, i.e., change the code fence to
indicate json and replace the ellipsis inside the "my-connection-name" object
with a JSON comment like "// ... additional config" so the example in the
warehouses example is both highlighted and syntactically sensible.
- Around line 336-350: The fenced code block in the "## Testing Connections"
example (the block beginning with "> warehouse_test prod-snowflake" and the
following connection test output) is missing a language specifier; update the
opening fence to include a language such as "text" or "bash" (e.g., ```text) so
the example renders correctly in the docs.
In `@docs/docs/examples/index.md`:
- Line 61: Update the sentence that currently reads "Build a single interactive
dashboard with data science style charts, think violin plots, Q-Q plots and
lollipop charts. Use a R/ggplot2 aesthetic. No BI style charts." to hyphenate
the compound adjectives: replace "data science style charts" with
"data-science-style charts" and "BI style charts" with "BI-style charts" so the
phrases are correctly treated as compound modifiers; preserve the rest of the
wording and punctuation.
- Line 3: The sentence containing "end-to-end automation — from discovery to
implementation" uses an em-dash; replace the em-dash with a comma (or other
punctuation consistent with this PR) so it reads "end-to-end automation, from
discovery to implementation" to match the other files' punctuation style.
In `@docs/docs/getting-started/index.md`:
- Around line 45-47: The three consecutive sentences starting with "Your" ("Your
transformation logic...", "Your orchestration...", "Your warehouses...") feel
repetitive; rewrite them to vary sentence openings and improve flow while
preserving emphasis on dbt/Airflow/Dagster, Snowflake/BigQuery/Redshift, and
governance across platforms and keep bold formatting for **entire** and **any
LLM**; for example, combine or rephrase one or two sentences (e.g.,
"Transformations run in dbt, orchestration in Airflow or Dagster, and warehouses
span Snowflake and BigQuery (and maybe that Redshift cluster)"), then follow
with a sentence about governance crossing platform boundaries and the final
sentence about Altimate Code connecting the **entire** stack and supporting
**any LLM**.
In `@docs/docs/getting-started/quickstart-new.md`:
- Line 141: Change the header text "What altimate does:" to use the project's
standard capitalization—e.g., "What Altimate does:" or "What Altimate Code
does:"—so it matches other occurrences; update the exact string found in the
docs file (the header "What altimate does:") to the chosen capitalized form and
ensure consistency with other references like "Altimate" and "Altimate Code".
In `@docs/docs/getting-started/quickstart.md`:
- Around line 179-274: The warehouse configuration sections currently use
indented code blocks; replace each indented block under the "Snowflake",
"BigQuery", "Databricks", "PostgreSQL", "DuckDB", and "Redshift" headings with
fenced code blocks using triple backticks and the json language specifier (e.g.,
```json ... ```), ensuring the fences surround the JSON payload exactly as the
LLM provider sections do so formatting is consistent and syntax-highlighted.
- Around line 58-150: The tabbed provider sections (e.g., the headings ===
"Anthropic", === "OpenAI", === "AWS Bedrock", === "Azure OpenAI", === "Google
Gemini", === "Ollama (Local)", === "OpenRouter") currently use indented code
blocks; change each to fenced code blocks using triple backticks with the
language specifier (```json) before the JSON and closing triple backticks after
it so markdownlint accepts them. Locate the JSON blocks under each of those
section headings and replace the indented-style block delimiters with
fenced-style ```json ... ``` blocks while preserving the JSON content and
indentation.
- Line 280: Three fenced code blocks in quickstart.md lack language specifiers;
update the three blocks containing "> warehouse_test snowflake" to use ```bash,
the block containing "/agent analyst" to use ```bash (or appropriate shell), and
the descriptive block starting "Build a NYC taxi analytics dashboard..." to use
```text so they render with proper highlighting; locate and edit the three
fenced code blocks in the file and add the language tags to their opening
backticks.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: ae52f9f5-ff4b-4a47-ad61-4fba3e1ec26b
⛔ Files ignored due to path filters (6)
- docs/docs/assets/images/ADF_Snowflake_Pipeline.png is excluded by !**/*.png
- docs/docs/assets/images/dbrx_snowflake_benchmark.png is excluded by !**/*.png
- docs/docs/assets/images/global_co_explorer.png is excluded by !**/*.png
- docs/docs/assets/images/nyc_taxi.png is excluded by !**/*.png
- docs/docs/assets/images/spotify_analytics.png is excluded by !**/*.png
- docs/docs/assets/images/us_home_sales.png is excluded by !**/*.png
📒 Files selected for processing (58)
README.md, docs/docs/assets/css/extra.css, docs/docs/configure/agents.md, docs/docs/configure/commands.md, docs/docs/configure/config.md, docs/docs/configure/context-management.md, docs/docs/configure/governance.md, docs/docs/configure/index.md, docs/docs/configure/keybinds.md, docs/docs/configure/permissions.md, docs/docs/configure/providers.md, docs/docs/configure/rules.md, docs/docs/configure/skills.md, docs/docs/configure/tools.md, docs/docs/configure/tools/config.md, docs/docs/configure/tools/core-tools.md, docs/docs/configure/tools/custom.md, docs/docs/configure/tools/index.md, docs/docs/configure/tracing.md, docs/docs/configure/warehouses.md, docs/docs/data-engineering/agent-modes.md, docs/docs/data-engineering/guides/ci-headless.md, docs/docs/data-engineering/guides/cost-optimization.md, docs/docs/data-engineering/guides/migration.md, docs/docs/data-engineering/guides/using-with-codex.md, docs/docs/data-engineering/tools/dbt-tools.md, docs/docs/data-engineering/tools/finops-tools.md, docs/docs/data-engineering/tools/index.md, docs/docs/data-engineering/tools/memory-tools.md, docs/docs/data-engineering/tools/schema-tools.md, docs/docs/data-engineering/tools/sql-tools.md, docs/docs/data-engineering/tools/warehouse-tools.md, docs/docs/data-engineering/training/index.md, docs/docs/data-engineering/training/team-deployment.md, docs/docs/develop/ecosystem.md, docs/docs/develop/plugins.md, docs/docs/develop/sdk.md, docs/docs/develop/server.md, docs/docs/drivers.md, docs/docs/examples/index.md, docs/docs/getting-started.md, docs/docs/getting-started/index.md, docs/docs/getting-started/quickstart-new.md, docs/docs/getting-started/quickstart.md, docs/docs/index.md, docs/docs/llms.txt, docs/docs/quickstart.md, docs/docs/reference/changelog.md, docs/docs/reference/network.md, docs/docs/reference/security-faq.md, docs/docs/reference/telemetry.md, docs/docs/reference/troubleshooting.md, docs/docs/reference/windows-wsl.md, docs/docs/usage/ci-headless.md, docs/docs/usage/cli.md, docs/docs/usage/tui.md, docs/docs/usage/web.md, docs/mkdocs.yml
💤 Files with no reviewable changes (1)
- docs/docs/data-engineering/tools/warehouse-tools.md
|-------|------------|-------------|
| `builder` | Create and modify dbt models, SQL pipelines, and data transformations | Full read/write. SQL mutations prompt for approval. |
| `analyst` | Explore data, run SELECT queries, inspect schemas, generate insights | Read-only (enforced). SQL writes denied. Safe bash commands auto-allowed. |
| `plan` | Plan before acting, restricted to planning files only | Minimal: no edits, no bash, no SQL |
Resolve contradictory plan permissions description
Line 11 says plan has “no edits,” but Line 32 says it can “edit plan files.” Please make this consistent so users understand the exact boundary.
Suggested doc fix
-| `plan` | Plan before acting, restricted to planning files only | Minimal: no edits, no bash, no SQL |
+| `plan` | Plan before acting, restricted to planning files only | Minimal: can edit plan files only; no bash; no SQL |
-Planning mode with minimal permissions. Can only read files and edit plan files. No SQL, no bash, no file modifications.
+Planning mode with minimal permissions. Can read all files and edit plan files only. No SQL, no bash, and no non-plan file modifications.

Also applies to: 32-32
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/configure/agents.md` at line 11, The docs currently contradict on
the `plan` permission: update the description for the `plan` permission (the
table row labeled `plan`) so it is consistent with the later statement that
`plan` can "edit plan files"; choose one canonical boundary (either "no edits"
or "can edit plan files") and apply it to both places, ensuring both the table
entry for `plan` and the explanatory sentence that mentions editing plan files
use the same wording and clarify that edits are limited to plan-related files
only; update any adjacent phrasing to match the chosen behavior so users see a
single, unambiguous definition of the `plan` permission.
| `migrator` | Cross-warehouse SQL translation and migration | Read/write for migration |

### Analyst

Truly read-only mode for safe data exploration:
🧩 Analysis chain
🌐 Web query:
Does `dbt deps` write files to the project (for example under `dbt_packages`), or is it strictly read-only?
💡 Result:
No, dbt deps is not strictly read-only. It writes files to the dbt project, specifically installing packages into the dbt_packages directory (by default) and generating or updating the package-lock.yml file in the project root.
Citations:
- 1: https://docs.getdbt.com/reference/commands/deps
- 2: https://docs.getdbt.com/docs/building-a-dbt-project/package-management
- 3: https://docs.getdbt.com/docs/deps
- 4: https://docs.getdbt.com/docs/build/packages
- 5: https://docs.getdbt.com/reference/dbt-commands
🏁 Script executed:
find . -name "agents.md" -type f
Repository: AltimateAI/altimate-code
Length of output: 99
🏁 Script executed:
cat -n ./docs/docs/configure/agents.md | head -50
Repository: AltimateAI/altimate-code
Length of output: 2929
analyst "read-only" claim conflicts with dbt deps allowance
Line 19 defines analyst as "Truly read-only," but Line 23 allows dbt deps, which writes packages to the dbt_packages directory and generates/updates package-lock.yml. Either remove dbt deps from allowed commands or soften the read-only claim.
Suggested doc fix (strict read-only interpretation)
-- **Bash**: Safe commands auto-allowed (`ls`, `grep`, `cat`, `head`, `tail`, `find`, `wc`). dbt read commands allowed (`dbt list`, `dbt ls`, `dbt debug`, `dbt deps`). Everything else denied.
+- **Bash**: Safe commands auto-allowed (`ls`, `grep`, `cat`, `head`, `tail`, `find`, `wc`). dbt read commands allowed (`dbt list`, `dbt ls`, `dbt debug`). Everything else denied.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/configure/agents.md` at line 19, The "analyst" role is described as
"Truly read-only" but the allowed command list includes "dbt deps", which
mutates the repo by writing to dbt_packages and package-lock.yml; fix this by
either removing "dbt deps" from the allowed commands for the analyst role or
changing the description from "Truly read-only" to a softer phrase (e.g.,
"primarily read-only") and add an explicit note that "dbt deps" writes packages
to dbt_packages and updates package-lock.yml and therefore is an exception to
read-only behavior; update the "analyst" claim text and the allowed commands
list accordingly to keep the doc consistent.
Skills are loaded from these paths (highest priority first):

1. `.altimate-code/skill/` (project)
2. `~/.altimate-code/skills/` (global)
3. Custom paths via config:

```json
{
  "skills": {
    "paths": ["./my-skills", "~/shared-skills"]
  }
}
```

### Remote Skills

Host skills at a URL and load them at startup:

```json
{
  "skills": {
    "urls": ["https://example.com/skills-registry.json"]
  }
}
```
Keep the second load-order list aligned with the first one.
Line 107 reintroduces the discovery order but drops .altimate-code/skills/ and the external directories documented above. Readers end up with two different precedence rules depending on which section they follow.
📝 Suggested wording
-Skills are loaded from these paths (highest priority first):
+For the full discovery order, see [Discovery Paths](#discovery-paths). The most common custom locations are:

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
For the full discovery order, see [Discovery Paths](#discovery-paths). The most common custom locations are:

1. `.altimate-code/skill/` (project)
2. `~/.altimate-code/skills/` (global)
3. Custom paths via config:

```json
{
  "skills": {
    "paths": ["./my-skills", "~/shared-skills"]
  }
}
```

### Remote Skills

Host skills at a URL and load them at startup:

```json
{
  "skills": {
    "urls": ["https://example.com/skills-registry.json"]
  }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/configure/skills.md` around lines 107 - 131, The "Remote Skills"
section duplicates the discovery order but omits entries from the earlier list;
update the Remote Skills text so the load-order matches the first list by
explicitly listing: 1) ".altimate-code/skill/" (project), 2)
"~/.altimate-code/skills/" (global), 3) custom paths via config (e.g.
"skills.paths"), and then show how remote URLs (e.g. "skills.urls") are loaded
at startup—ensure the same precedence language and examples are used as in the
initial "Skills are loaded from these paths" block so readers see a single
consistent discovery order.
## Data Engineering Tools

-In addition to built-in tools, altimate provides 55+ specialized data engineering tools. See the [Data Engineering Tools](../data-engineering/tools/index.md) section for details.
+In addition to built-in tools, altimate provides 100+ specialized data engineering tools. See the [Data Engineering Tools](../data-engineering/tools/index.md) section for details.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check the actual tool counts documented in the data engineering tools index
cat docs/docs/data-engineering/tools/index.md | grep -E '^\|.*\|.*tools.*\|' | head -20
Repository: AltimateAI/altimate-code
Length of output: 884
🏁 Script executed:
wc -l docs/docs/data-engineering/tools/index.md
Repository: AltimateAI/altimate-code
Length of output: 112
🏁 Script executed:
cat docs/docs/data-engineering/tools/index.md
Repository: AltimateAI/altimate-code
Length of output: 1336
Correct the tool count: data-engineering tools total 43, not 100+
The categorized tools in docs/docs/data-engineering/tools/index.md sum to 43 tools (12 SQL + 7 Schema + 8 FinOps + 1 Lineage + 2 dbt + 6 Warehouse + 3 Altimate Memory + 3 Training + 1 tool_lookup), plus 8 additional skills. The claim of "100+ specialized data engineering tools" in this file is inaccurate and should be corrected to reflect the actual count.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/configure/tools.md` at line 27, Replace the incorrect tool count in
the sentence "In addition to built-in tools, altimate provides 100+ specialized
data engineering tools." with the accurate number reflecting the tally in
data-engineering/tools/index.md (43), so it reads something like "In addition to
built-in tools, altimate provides 43 specialized data engineering tools." Verify
the wording remains consistent with existing phrasing around counts.
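The per-category tallies quoted in the finding can be checked with simple arithmetic (the category counts are taken from the finding above):

```shell
# Sum the per-category tool counts cited in the review finding:
# 12 SQL + 7 Schema + 8 FinOps + 1 Lineage + 2 dbt
# + 6 Warehouse + 3 Altimate Memory + 3 Training + 1 tool_lookup
total=$((12 + 7 + 8 + 1 + 2 + 6 + 3 + 3 + 1))
echo "$total"   # prints 43
```

So neither "55+" nor "100+" matches the categorized inventory; 43 is the figure the docs should carry (plus the separately counted skills).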
### altimate_core_classify_pii

Classify PII columns in a schema by name patterns and data types. Identifies columns likely containing personal identifiable information.
Fix the PII expansion typo.
Line 67 should say "personally identifiable information". personal identifiable information is incorrect terminology.
📝 Suggested fix
-Classify PII columns in a schema by name patterns and data types. Identifies columns likely containing personal identifiable information.
+Classify PII columns in a schema by name patterns and data types. Identifies columns likely containing personally identifiable information.

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
### altimate_core_classify_pii

Classify PII columns in a schema by name patterns and data types. Identifies columns likely containing personally identifiable information.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/configure/tools/core-tools.md` around lines 65 - 68, Update the
documentation text for the altimate_core_classify_pii entry to fix the PII
expansion typo: replace the phrase "personal identifiable information" with the
correct term "personally identifiable information" in the description for
altimate_core_classify_pii.
1. **Use project-level config.** Place `altimate-code.json` in your project root with appropriate permission defaults. This ensures consistent security settings across the team.

-2. **Restrict dangerous operations** — Deny destructive SQL and shell commands at the project level so individual users can't accidentally bypass them.
+2. **Restrict dangerous operations.** Deny destructive SQL and shell commands at the project level so individual users can't accidentally bypass them.

-3. **Use environment variables for secrets** — Never commit credentials. Use `ALTIMATE_CLI_PYTHON`, warehouse connection env vars, and your cloud provider's secret management.
+3. **Use environment variables for secrets.** Never commit credentials. Use `ALTIMATE_CLI_PYTHON`, warehouse connection env vars, and your cloud provider's secret management.
Remove the stale ALTIMATE_CLI_PYTHON guidance.
Line 189 still recommends a Python-specific env var, but Lines 155-157 now say there is no Python dependency. Keeping both claims in the same FAQ is confusing.
📝 Suggested fix
-3. **Use environment variables for secrets.** Never commit credentials. Use `ALTIMATE_CLI_PYTHON`, warehouse connection env vars, and your cloud provider's secret management.
+3. **Use environment variables for secrets.** Never commit credentials. Use warehouse connection env vars and your cloud provider's secret management.

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
1. **Use project-level config.** Place `altimate-code.json` in your project root with appropriate permission defaults. This ensures consistent security settings across the team.

2. **Restrict dangerous operations.** Deny destructive SQL and shell commands at the project level so individual users can't accidentally bypass them.

3. **Use environment variables for secrets.** Never commit credentials. Use warehouse connection env vars and your cloud provider's secret management.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/reference/security-faq.md` around lines 185 - 190, Remove the stale
Python-specific guidance that references the ALTIMATE_CLI_PYTHON environment
variable in the FAQ: delete or replace the sentence on Line 189 that recommends
ALTIMATE_CLI_PYTHON and instead reference general secret-handling best practices
(e.g., using environment variables and your cloud provider's secret manager)
consistent with the earlier note that there is no Python dependency; ensure the
project-level config mention (altimate-code.json) and the guidance to not commit
credentials remain intact and consistent.
**"Prompted"** means you'll see the command and can approve or reject it. **"Blocked"** means the agent cannot run it at all; you must override in config.

-To override defaults, add rules in `altimate-code.json`. See [Permissions](configure/permissions.md) for the full configuration reference.
+To override defaults, add rules in `altimate-code.json`. See [Permissions](../configure/permissions.md) for the full configuration reference.
Clarify whether Blocked is absolute or just default-deny.
Line 256 says blocked commands "cannot run at all" and then says users can override them in config. Those are opposite guarantees, and the distinction matters because the rest of the docs also use stronger terms like hard-blocked.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/reference/security-faq.md` around lines 256 - 258, The wording is
inconsistent about whether a "Blocked" command is absolutely forbidden or simply
denied by default; update the sentence that currently reads 'Blocked means the
agent cannot run it at all; you must override in config.' to clearly state that
"Blocked" is a default-deny that can be overridden via the altimate-code.json
configuration (or, if some commands are truly immutable, mark them explicitly as
"hard-blocked"); reference the terms "Prompted" and "Blocked" in the same
paragraph and link to the Permissions page so readers know how to change rules
via altimate-code.json and where to find which commands are hard-blocked versus
configurable.
- Credentials, API keys, or tokens
- Database connection strings or hostnames
- Personally identifiable information (your email is SHA-256 hashed before sending and is used only for anonymous user correlation)
Don't list a hashed email under "never collected" PII.
Line 82 says telemetry never collects PII, then immediately documents sending a SHA-256 email hash. That's still a stable identifier derived from personal data, so the current wording overstates anonymity/compliance.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/reference/telemetry.md` at line 82, The sentence "Personally
identifiable information (your email is SHA-256 hashed before sending and is
used only for anonymous user correlation)" contradicts the "never collected"
claim; replace or relocate that sentence so the docs clearly state that a
non-reversible SHA-256 hash of the email is collected as a stable identifier
(not raw PII), explain its narrow purpose ("anonymous user correlation"), and
clarify retention/processing rules and that it is not intended to be personally
identifiable; update the "never collected" list to remove this wording and add a
precise note that hashed emails are collected as described.
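A quick illustration of why a hashed email remains a stable identifier rather than anonymous data: the same address always produces the same digest (a sketch assuming a system with coreutils `sha256sum`; the address is made up):

```shell
# The same input yields the same 64-hex-character digest every time,
# so the hash works as a persistent per-user key even though the
# original address cannot be recovered from the digest itself.
h1=$(printf '%s' 'user@example.com' | sha256sum | cut -d' ' -f1)
h2=$(printf '%s' 'user@example.com' | sha256sum | cut -d' ' -f1)
echo "$h1"
```

This stability is exactly what makes "anonymous user correlation" possible, and also why the docs should disclose the hash as a collected identifier instead of listing it under "never collected".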
```bash
# LLM provider
ALTIMATE_PROVIDER=anthropic
ALTIMATE_ANTHROPIC_API_KEY=your-key-here

# Or OpenAI
ALTIMATE_PROVIDER=openai
ALTIMATE_OPENAI_API_KEY=your-key-here

# Warehouse (Snowflake example)
SNOWFLAKE_ACCOUNT=myorg-myaccount
SNOWFLAKE_USER=ci_user
SNOWFLAKE_PASSWORD=${{ secrets.SNOWFLAKE_PASSWORD }}
SNOWFLAKE_DATABASE=analytics
SNOWFLAKE_SCHEMA=public
SNOWFLAKE_WAREHOUSE=compute_wh
```
Keep the generic CI env snippet shell-valid.
Line 42 is inside a bash block, but ${{ secrets.SNOWFLAKE_PASSWORD }} is GitHub Actions templating, not shell syntax. Copy/pasting this into another CI system or a local shell will produce a broken example.
📝 Suggested fix
-SNOWFLAKE_PASSWORD=${{ secrets.SNOWFLAKE_PASSWORD }}
+SNOWFLAKE_PASSWORD=<from-your-ci-secret-store>

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/usage/ci-headless.md` around lines 30 - 46, The CI env snippet
contains GitHub Actions templating for SNOWFLAKE_PASSWORD
(`${{ secrets.SNOWFLAKE_PASSWORD }}`) which is not valid shell; replace that token with
a shell-valid placeholder or environment variable reference (e.g., set
SNOWFLAKE_PASSWORD=your-password or export SNOWFLAKE_PASSWORD) so the block
remains copy/pasteable; update the example lines referencing SNOWFLAKE_PASSWORD
and keep other keys (ALTIMATE_PROVIDER, ALTIMATE_ANTHROPIC_API_KEY,
ALTIMATE_OPENAI_API_KEY, SNOWFLAKE_ACCOUNT, SNOWFLAKE_USER, SNOWFLAKE_DATABASE,
SNOWFLAKE_SCHEMA, SNOWFLAKE_WAREHOUSE) unchanged.
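One shell-valid way to replace the Actions-only templating token is to read the password from the environment and fail fast when it is absent (a sketch; the function name is mine, and the variable names follow the doc's Snowflake example):

```shell
# The CI secret store (GitHub Actions env:, GitLab masked variables,
# etc.) is expected to inject SNOWFLAKE_PASSWORD before this runs.
require_password() {
  if [ -z "${SNOWFLAKE_PASSWORD:-}" ]; then
    echo "SNOWFLAKE_PASSWORD not set; inject it from your CI secret store" >&2
    return 1
  fi
}

# Without the secret the guard fails with a clear message...
unset SNOWFLAKE_PASSWORD
require_password || echo "guard tripped"

# ...and passes once the secret store has injected it.
SNOWFLAKE_PASSWORD=example-placeholder
require_password && echo "guard passed"
```

Failing fast here is preferable to silently connecting with an empty password, and the snippet stays copy/pasteable in any CI system or local shell.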
### Example 3: Automated Test Generation (Pre-commit)

```bash
#!/bin/bash
# .git/hooks/pre-commit
# Generate tests for any staged SQL model files

STAGED_MODELS=$(git diff --cached --name-only --diff-filter=A | grep "models/.*\.sql")

if [ -n "$STAGED_MODELS" ]; then
  echo "Generating tests for new models..."
  altimate run "/generate-tests for: $STAGED_MODELS" --no-color
fi
```
The pre-commit hook misses modified models.
Line 130 uses --diff-filter=A, so the example only regenerates tests for newly added SQL files. Edited models are skipped even though the comment says "any staged SQL model files".
📝 Suggested fix
-STAGED_MODELS=$(git diff --cached --name-only --diff-filter=A | grep "models/.*\.sql")
+STAGED_MODELS=$(git diff --cached --name-only --diff-filter=ACMRTUXB | grep -E '^models/.*\.sql$' || true)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/usage/ci-headless.md` around lines 123 - 135, The pre-commit
example only captures newly added files because STAGED_MODELS uses git diff
--cached --name-only --diff-filter=A; change the diff filter to include modified
files as well (e.g., use --diff-filter=AM) so STAGED_MODELS picks up both added
and modified SQL files, ensuring the altimate run "/generate-tests for:
$STAGED_MODELS" covers edited models too.
✅ Tests: All Passed. TypeScript: passed. cc @ppradnesh
Summary
Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Changes
altimate-code).

Documentation