CidLucas/platform
AI-Powered Data Platform

A production-grade, multi-tenant platform that centralizes business data and enables AI agents to analyze, query, and act on it.

Built from scratch by a solo full-stack engineer: 20+ shared libraries and 6 microservices in Python and TypeScript.

Python 3.11+ FastAPI React 18 LangGraph PostgreSQL Docker GCP Cloud Run


The Problem

Small and medium businesses generate data across multiple platforms (ERPs, e-commerce, spreadsheets) but lack the tools to centralize, analyze, and act on it. Hiring data teams is expensive. Generic BI tools require technical expertise.

The Solution

A data-centralization and analysis platform that builds a context layer so AI agents can work effectively: answering natural-language questions about sales data, generating reports, and managing knowledge bases, all scoped per tenant with strict data isolation.

Dashboard Home
Dashboard: real-time KPI scorecards, charts, and AI chat in a unified interface

Platform Features

📊 Data Analysis & Visualization

Ingest data from multiple sources (BigQuery, Shopify, VTEX, CSV/XLSX uploads), transform it into a star-schema analytics layer, and visualize it through interactive dashboards with scorecards, bar charts, and detail views.

Product Detail View
Detail view: drill-down into individual product analytics with AI-generated insights

πŸ—£οΈ Natural Language to SQL

Users ask questions in plain language; the platform converts them to safe, validated SQL queries. A defense-in-depth pipeline ensures security:

  1. Parse: AST validation via sqlglot (only SELECT allowed)
  2. Validate: table/column allowlists, mandatory filters, PII masking
  3. Rewrite: expand SELECT *, inject LIMIT, enforce client_id filter
  4. Execute: via PostgREST with RLS enforcement
SQL Agent
Text-to-SQL: natural language query converted to validated SQL with results rendered in the chat

📚 Knowledge Base (Hybrid RAG)

Upload documents (PDF, DOCX, TXT, CSV) to build per-tenant knowledge bases. The retrieval pipeline combines multiple strategies for high-quality answers:

  • Semantic search: pgvector cosine similarity with multilingual embeddings
  • Keyword search: PostgreSQL full-text search (BM25)
  • Reciprocal Rank Fusion: merges semantic + keyword results
  • Reranking: Cohere, CrossEncoder, or LLM-based reranking
  • MMR diversification: Maximal Marginal Relevance to avoid redundant results
Knowledge Base RAG
RAG pipeline: hybrid retrieval with source attribution and confidence scores
Knowledge Management
Knowledge base management: upload, chunk, embed, and search documents per tenant
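Reciprocal Rank Fusion, the merge step above, is compact enough to sketch (k=60 is the conventional constant; the platform's actual parameters may differ):

```python
def rrf_merge(rankings: list, k: int = 60) -> list:
    """Fuse ranked lists: score(doc) = sum over lists of 1 / (k + rank)."""
    scores: dict = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["doc_a", "doc_b", "doc_c"]  # pgvector cosine-similarity order
keyword = ["doc_b", "doc_d", "doc_a"]   # full-text search order
fused = rrf_merge([semantic, keyword])  # doc_b wins: it ranks high in both lists
```

RRF needs no score calibration between retrievers, which is why it works well for fusing cosine distances with BM25-style scores.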

🔧 MCP Tool Server (20+ Tools)

A centralized FastMCP server exposes tools that agents can invoke at runtime. Tools are registered as modular packages, each with its own auth, validation, and tier gating:

| Module | Tools | Description |
| --- | --- | --- |
| rag_module | executar_rag_cliente | Hybrid semantic + BM25 document search |
| sql_module | executar_sql_agent | Safe text-to-SQL with defense-in-depth |
| csv_module | CSV analysis | Statistics, distributions, column profiling |
| google_module | Sheets, Gmail, Calendar | Full Google Workspace integration via OAuth |
| common_module | File retrieval, context | Utility tools for agent context |
| web_monitor_module | URL monitoring | Track website changes |
| prompt_module | MCP prompts | Langfuse-versioned prompt resources |
| structured_data_formatter | Output formatting | Deterministic formatting for reports |
| config_helper_module | Tool validation | Availability checks per tier |
MCP Server
MCP tool server: modular tool registration with health introspection
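Modular registration with per-tier gating can be sketched in plain Python; the class and tool names below are illustrative, not the platform's actual API:

```python
from dataclasses import dataclass
from typing import Callable

TIERS = ["BASIC", "PRO", "ENTERPRISE", "ADMIN"]  # ordered from lowest to highest

@dataclass
class Tool:
    name: str
    handler: Callable
    min_tier: str = "BASIC"

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def available_for(self, tier: str) -> list:
        # A tool is visible when the caller's tier ranks at or above its minimum
        rank = TIERS.index(tier)
        return [t.name for t in self._tools.values() if TIERS.index(t.min_tier) <= rank]

registry = ToolRegistry()
registry.register(Tool("executar_rag_cliente", lambda q: "docs", "BASIC"))
registry.register(Tool("executar_sql_agent", lambda q: "rows", "PRO"))
```

Keeping the gating check inside the registry means every caller, agent or health probe, sees the same availability answer.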

🤖 Multi-Agent Architecture

The platform runs specialized agents built with LangGraph, orchestrated through a supervisor pattern:

  • Orchestrator (Atendente Core): LangGraph state machine with 4 nodes (init → supervisor → execute_tools → elicit). Routes between tool execution, knowledge retrieval, data analysis, and clarification requests.
  • Standalone Agents: catalog-driven factory that dynamically builds agents from database definitions. Each agent gets its own session, tools, and context.
  • Sales Agent / Support Agent: specialized lightweight agents using the shared AgentBuilder fluent API.
User message → Supervisor Node → Route decision
                    ├── execute_tools → MCP Server → Tool result → Response
                    ├── elicit → Clarification question → User
                    └── respond → Direct LLM response
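The route decision above reduces to a small conditional; a sketch of what the supervisor's edge function might look like (the state keys are hypothetical):

```python
def supervisor_route(state: dict) -> str:
    """Pick the next graph node from the current conversation state."""
    if state.get("needs_clarification"):
        return "elicit"            # ask the user a clarifying question
    if state.get("pending_tool_calls"):
        return "execute_tools"     # hand off to the MCP server
    return "respond"               # answer directly with the LLM
```

In LangGraph this kind of function is typically wired in as a conditional edge out of the supervisor node.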

🔐 Multi-Tenant Security & Context Isolation

Every layer enforces tenant isolation:

  • PostgreSQL Row-Level Security (RLS) on all tables; 62 migrations maintain the schema
  • JWT validation supporting HS256, ES256, and RS256 (Supabase Auth)
  • Per-request context injection: ClientContext carries tenant config, enabled tools, tier, and brand voice
  • Tool-level auth: each MCP tool extracts and validates its JWT independently
  • Tier-based access control: tools, agents, and features gated by subscription tier (BASIC → PRO → ENTERPRISE → ADMIN)
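Per-request context injection can be sketched with the standard library's contextvars; the real ClientContext carries more fields, and the names below are illustrative:

```python
from contextvars import ContextVar
from dataclasses import dataclass

@dataclass(frozen=True)
class ClientContext:
    client_id: str
    tier: str
    enabled_tools: tuple = ()

# One context variable per request; middleware would set it after JWT validation
_current: ContextVar = ContextVar("client_context", default=None)

def bind_context(ctx: ClientContext):
    return _current.set(ctx)

def require_context() -> ClientContext:
    ctx = _current.get()
    if ctx is None:
        raise RuntimeError("no tenant context bound to this request")
    return ctx
```

Because contextvars are task-local, concurrent requests in the same async worker cannot see each other's tenant context.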

📈 Observability & Prompt Management

  • OpenTelemetry traces exported to Grafana Cloud (Tempo, Loki, Mimir)
  • Langfuse as the prompt management system: version-controlled prompts with A/B testing labels, Redis-cached with built-in fallbacks
  • One-line bootstrap: setup_observability(app, service_name) instruments any service
  • End-to-end tracing: from HTTP request → agent graph → tool call → LLM invocation → database query

🔄 Data Connectors & Ingestion

A factory-based connector system integrates with external data sources:

  • BigQuery: federated queries via Foreign Data Wrappers
  • Shopify / VTEX / Loja Integrada: e-commerce platform connectors
  • CSV/XLSX uploads: automatic parsing, column detection, and schema inference
  • Column mapping: AI-assisted mapping of source columns to the star schema
Column Mapping
Column mapping: AI-assisted mapping of imported data to the analytics schema
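The factory behind the connectors can be sketched as a registration decorator; the class and parameter names are hypothetical, not the platform's actual API:

```python
class ConnectorFactory:
    _registry: dict = {}

    @classmethod
    def register(cls, source: str):
        """Class decorator: map a source name to its connector class."""
        def wrap(connector_cls):
            cls._registry[source] = connector_cls
            return connector_cls
        return wrap

    @classmethod
    def create(cls, source: str, **config):
        if source not in cls._registry:
            raise ValueError(f"unknown data source: {source}")
        return cls._registry[source](**config)

@ConnectorFactory.register("shopify")
class ShopifyConnector:
    def __init__(self, shop_url: str):
        self.shop_url = shop_url
```

Adding a new source is then a matter of dropping in one decorated class; no call site changes.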

💬 Human-in-the-Loop (HITL)

An elicitation service handles cases where the agent needs clarification or human approval:

  • Multiple elicitation types: yes_no, multiple_choice, free_text
  • Priority queue for human review (Streamlit UI)
  • Audit trail for all decisions
  • Integrated into the agent graph as a first-class node
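The three elicitation types suggest a small request model; a sketch with hypothetical field names:

```python
from dataclasses import dataclass, field
from enum import Enum

class ElicitationType(str, Enum):
    YES_NO = "yes_no"
    MULTIPLE_CHOICE = "multiple_choice"
    FREE_TEXT = "free_text"

@dataclass
class ElicitationRequest:
    question: str
    kind: ElicitationType
    choices: list = field(default_factory=list)
    priority: int = 0  # higher values surface first in the review queue

    def is_valid_answer(self, answer: str) -> bool:
        if self.kind is ElicitationType.YES_NO:
            return answer.lower() in ("yes", "no")
        if self.kind is ElicitationType.MULTIPLE_CHOICE:
            return answer in self.choices
        return bool(answer.strip())  # free text: any non-empty reply
```

Validating the answer at the model level keeps the agent graph simple: a rejected reply just re-enters the elicit node.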

Architecture Overview

┌─────────────────────────────────────────────────────────────────────┐
│                        FRONTEND LAYER                               │
│  React 18 + TypeScript + Vite + Chakra UI                           │
│  ┌───────────────┐  ┌───────────────┐  ┌──────────────────┐         │
│  │  Dashboard    │  │  Chat Panel   │  │  HITL Review     │         │
│  │  (Scorecards, │  │  (SSE Stream) │  │  (Streamlit)     │         │
│  │   Charts)     │  │               │  │                  │         │
│  └──────┬────────┘  └──────┬────────┘  └────────┬─────────┘         │
└─────────┼──────────────────┼────────────────────┼───────────────────┘
          │                  │                    │
          ▼                  ▼                    ▼
┌─────────────────────────────────────────────────────────────────────┐
│                      SERVICE LAYER (FastAPI)                        │
│  ┌───────────────┐  ┌───────────────┐  ┌──────────────────┐         │
│  │  Atendente    │  │  Standalone   │  │  File Upload     │         │
│  │  Core         │  │  Agent API    │  │  API             │         │
│  │  (LangGraph)  │  │  (Catalog)    │  │  (Ingestion)     │         │
│  └──────┬────────┘  └──────┬────────┘  └────────┬─────────┘         │
│         │                  │                    │                   │
│         ▼                  ▼                    │                   │
│  ┌─────────────────────────────┐                │                   │
│  │   Tool Pool API (FastMCP)   │◄───────────────┘                   │
│  │   20+ tools, JWT per-tool   │                                    │
│  └──────────────┬──────────────┘                                    │
└─────────────────┼───────────────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────────────────┐
│                       LIBRARY LAYER (20 packages)                   │
│  ┌──────────────┐ ┌──────────┐ ┌────────────┐ ┌─────────────────┐   │
│  │ Agent        │ │ RAG      │ │ SQL        │ │ LLM Service     │   │
│  │ Framework    │ │ Factory  │ │ Factory    │ │ (multi-provider)│   │
│  ├──────────────┤ ├──────────┤ ├────────────┤ ├─────────────────┤   │
│  │ Auth (JWT)   │ │ Context  │ │ Prompt     │ │ Observability   │   │
│  │              │ │ Service  │ │ Management │ │ Bootstrap       │   │
│  ├──────────────┤ ├──────────┤ ├────────────┤ ├─────────────────┤   │
│  │ MCP Commons  │ │ Parsers  │ │ Tool       │ │ Data            │   │
│  │              │ │          │ │ Registry   │ │ Connectors      │   │
│  └──────────────┘ └──────────┘ └────────────┘ └─────────────────┘   │
└─────────────────────────────────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────────────────┐
│                       DATA LAYER                                    │
│  ┌──────────────────┐ ┌──────────┐ ┌───────────┐ ┌─────────────┐    │
│  │ PostgreSQL       │ │ pgvector │ │ Redis     │ │ Supabase    │    │
│  │ (RLS, analytics  │ │ (RAG     │ │ (cache,   │ │ (Auth, Edge │    │
│  │  star-schema)    │ │  chunks) │ │  checkpts)│ │  Functions) │    │
│  └──────────────────┘ └──────────┘ └───────────┘ └─────────────┘    │
└─────────────────────────────────────────────────────────────────────┘

Shared Library Ecosystem (20 packages)

One of the core engineering decisions: every reusable capability is a library, not duplicated code. All services depend on the same shared packages:

| Library | Purpose |
| --- | --- |
| _agent_framework | LangGraph builder pattern, state machines, node registry |
| _auth | JWT decode (HS256/ES256/RS256), RLS context injection |
| _context_service | Per-tenant context loading with Redis cache (5 min TTL) |
| _data_connectors | Factory for BigQuery, Shopify, VTEX, Loja Integrada |
| _db_connector | SQLAlchemy async engine management |
| _elicitation_service | Agent clarification requests (yes/no, multiple choice, free text) |
| _experiment_service | Experiment manifests, batch evaluation, classification |
| _google_suite_client | Google Sheets, Gmail, Calendar with OAuth token management |
| _hitl_service | Human-in-the-loop review queue with Streamlit UI |
| _llm_service | Provider abstraction (OpenAI, Anthropic, Google, Ollama) with tier budgets |
| _mcp_commons | MCP tool dataclasses, executor with parallel invocation |
| _models | Shared Pydantic/SQLModel domain models |
| _observability_bootstrap | One-line OpenTelemetry + Langfuse + Grafana setup |
| _parsers | PDF, DOCX, CSV, TXT parsing + semantic chunking |
| _prompt_management | Langfuse prompt fetching with Redis cache and built-in fallbacks |
| _rag_factory | Hybrid retrieval (semantic + BM25 + RRF + reranking + MMR) |
| _shared_utils | Common utilities across all services |
| _sql_factory | Text-to-SQL with AST validation, allowlists, PII masking |
| _supabase_client | Typed Supabase SDK wrapper |
| _tool_registry | Tool discovery, tier validation, Docker MCP bridge |
| _twilio_client | WhatsApp webhook integration |

Engineering Practices

| Practice | Implementation |
| --- | --- |
| Monorepo structure | Single repo with libs/, services/, apps/, supabase/; shared dependencies via path imports |
| Factory patterns | ConnectorFactory, StandaloneAgentFactory, RAGFactory; pluggable components |
| Builder pattern | AgentBuilder fluent API: .with_llm().with_mcp().with_checkpointer().build() |
| Dependency injection | FastAPI Depends() for auth, context, and services |
| Defense-in-depth | SQL validation has 4 security layers; tools validate JWT independently |
| 12-factor config | All config via environment variables and .env files; no hardcoded secrets |
| Database migrations | 62 Alembic/Supabase migrations for versioned schema evolution |
| Code quality | ruff for formatting + linting, enforced via make fmt / make lint |
| Testing | Unit tests, E2E smoke tests, persona tests, batch evaluation with Langfuse traces |
| Streaming | Server-Sent Events (SSE) for real-time agent responses |
| Caching | Redis for context (5 min TTL), prompts, agent checkpoints, tool results |
| Observability | OpenTelemetry → Grafana Cloud; Langfuse for LLM traces; structured logging |
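The AgentBuilder chain from the table can be sketched as a minimal fluent builder; the method names follow the table, but the internals below are guessed:

```python
class AgentBuilder:
    def __init__(self) -> None:
        self._config: dict = {}

    def with_llm(self, model: str) -> "AgentBuilder":
        self._config["llm"] = model
        return self  # returning self is what makes the chain fluent

    def with_mcp(self, server_url: str) -> "AgentBuilder":
        self._config["mcp"] = server_url
        return self

    def with_checkpointer(self, backend: str) -> "AgentBuilder":
        self._config["checkpointer"] = backend
        return self

    def build(self) -> dict:
        if "llm" not in self._config:
            raise ValueError("an agent needs an LLM before it can be built")
        return dict(self._config)

agent = (
    AgentBuilder()
    .with_llm("gpt-4o")
    .with_mcp("http://tool-pool:8000")
    .with_checkpointer("redis")
    .build()
)
```

The `build()` step is the natural place to validate the configuration once, so partially configured agents can never escape the builder.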

Tech Stack

Backend: Python 3.11+, FastAPI, Pydantic, SQLModel, LangGraph, LangChain, FastMCP

Frontend: React 18, TypeScript, Vite, Chakra UI, Recharts

AI/ML: LangGraph agents, pgvector embeddings, hybrid RAG (BM25 + semantic + RRF), Cohere reranking, multi-provider LLM (OpenAI, Anthropic, Google, Ollama)

Database: PostgreSQL with RLS, pgvector, Supabase (Auth, Edge Functions, Storage, PostgREST)

Infrastructure: Docker Compose (dev), Google Cloud Run (prod), Artifact Registry, Redis, Nginx

Observability: OpenTelemetry, Grafana Cloud (Tempo, Loki, Mimir, Faro), Langfuse

Auth: Supabase Auth, JWT (HS256/ES256/RS256), PostgreSQL RLS, per-tool tier gating


Repository Structure

apps/
├──  _dashboard/            # React 18 + TypeScript admin dashboard
├── hitl_dashboard/          # Streamlit HITL review interface
└── landing/                 # Marketing landing page

services/
├── atendente_core/          # Main LangGraph agent orchestrator
├── tool_pool_api/           # FastMCP server (20+ tools)
├── standalone_agent_api/    # Catalog-driven agent builder
├── file_upload_api/         # Document ingestion + vector pipeline
├── vendas_agent/            # Sales-specialized agent
└── support_agent/           # Support-specialized agent

libs/                        # 20 shared Python packages (see table above)

supabase/
├── migrations/              # 62 SQL migrations (RLS, star-schema, vector DB)
└── functions/               # 5 Edge Functions (search, process, sync, enrich, match)

scripts/                     # Evaluation, seeding, and utility scripts
docs/                        # Architecture documentation

Quick Start

# 1. Clone and configure
git clone https://github.com/ br/ -mono.git
cd  -mono
cp .env.example .env          # fill in your keys

# 2. Start the development stack
make dev

# 3. Open the dashboard
open http://localhost:8080

Services run with hot reload and connect to a remote Supabase instance; no local database setup required.

Available Commands

# Development
make dev               # Start core stack (dashboard + backend + tools + redis)
make dev-logs          # Tail all service logs
make dev-rebuild       # Rebuild after dependency changes

# Testing & Evaluation
make test              # Unit tests
make smoke-test        # End-to-end integration
make batch-run         # Batch test with Langfuse traces
make experiment-run    # Run evaluation experiments

# Database
make migrate           # Apply Alembic migrations
make migrate-prod      # Apply to production (with confirmation)

# Code Quality
make fmt               # Format with ruff
make lint              # Lint with ruff
make lint-fix          # Auto-fix lint issues

# Deployment
make cloudrun-build    # Build Docker images
make cloudrun-push-all # Push to GCP Artifact Registry

About

This platform was designed and implemented by me as the sole engineer, using Copilot as an assistant. The idea is to deliver business management and productivity solutions for SMBs.

The goal: enable non-technical business users to ask questions, get reports, and manage their data through natural conversation, with AI doing the heavy lifting, securely scoped to each tenant's data.


Architected and built by Lucas Cruz
