diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/LICENSE.txt b/ai/gen-ai-agents/sql_graph_generator_dashboard/LICENSE.txt
deleted file mode 100644
index 46c0c79d9..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/LICENSE.txt
+++ /dev/null
@@ -1,35 +0,0 @@
-Copyright (c) 2025 Oracle and/or its affiliates.
-
-The Universal Permissive License (UPL), Version 1.0
-
-Subject to the condition set forth below, permission is hereby granted to any
-person obtaining a copy of this software, associated documentation and/or data
-(collectively the "Software"), free of charge and under any and all copyright
-rights in the Software, and any and all patent rights owned or freely
-licensable by each licensor hereunder covering either (i) the unmodified
-Software as contributed to or provided by such licensor, or (ii) the Larger
-Works (as defined below), to deal in both
-
-(a) the Software, and
-(b) any piece of software and/or hardware listed in the lrgrwrks.txt file if
-one is included with the Software (each a "Larger Work" to which the Software
-is contributed by such licensors),
-
-without restriction, including without limitation the rights to copy, create
-derivative works of, display, perform, and distribute the Software and make,
-use, sell, offer for sale, import, export, have made, and have sold the
-Software and the Larger Work(s), and to sublicense the foregoing rights on
-either these or other terms.
-
-This license is subject to the following condition:
-The above copyright notice and either this complete permission notice or at
-a minimum a reference to the UPL must be included in all copies or
-substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/README.md b/ai/gen-ai-agents/sql_graph_generator_dashboard/README.md
deleted file mode 100644
index 20185e3c6..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/README.md
+++ /dev/null
@@ -1,231 +0,0 @@
-# SQL Graph Generator Dashboard
-
-SQL Graph Generator Dashboard is an AI-powered assistant that enables natural language database queries and intelligent chart generation.
-It extracts data from your database through conversational queries, automatically generates appropriate visualizations, and maintains multi-turn conversational context for data exploration.
-It runs as an interactive Next.js web app backed by a FastAPI server, LangChain orchestration, and Oracle Cloud Infrastructure GenAI models.
-
-Reviewed: October 13, 2025
-
-# When to use this asset?
-
-Use this asset when you want to:
-
-- Query databases using natural language instead of SQL
-- Automatically generate charts and visualizations from query results
-- Build conversational data exploration interfaces
-- Integrate OCI GenAI models with database operations
-- Demonstrate intelligent routing between data queries, visualizations, and insights
-
-Ideal for:
-
-- AI engineers building conversational data analytics tools
-- Data teams needing natural language database interfaces
-- OCI customers integrating GenAI into business intelligence workflows
-- Anyone showcasing LangChain + OCI GenAI + dynamic visualization generation
-
-# How to use this asset?
-
-This assistant can be launched via:
-
-- Next.js Web UI
-
-It supports:
-
-- Natural language to SQL conversion
-- Automatic chart generation from query results
-- Multi-turn conversations with context preservation
-- Multiple chart types: bar, line, pie, scatter, heatmap
-- Real-time data visualization using matplotlib/seaborn
-- Intelligent routing between data queries, visualizations, and insights
-
-## Setup Instructions
-
-### OCI Generative AI Model Configuration
-
-1. Go to: OCI Console → Generative AI
-2. Select your model (this demo uses OpenAI GPT OSS 120B):
- `ocid1.generativeaimodel.oc1.eu-frankfurt-1.amaaaaaask7dceyav...`
-3. Set up an OCI Agent Runtime endpoint for SQL queries
-4. Copy the following values:
- - MODEL_ID
- - AGENT_ENDPOINT_ID
- - COMPARTMENT_ID
- - SERVICE_ENDPOINT (e.g., `https://inference.generativeai.eu-frankfurt-1.oci.oraclecloud.com`)
-5. Configure them in `backend/utils/config.py`
-
-Documentation:
-[OCI Generative AI Documentation](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm)
-
-No API key is required — authentication is handled via OCI identity.
-
-Ensure your OCI CLI credentials are configured.
-Edit or create the following config file at `~/.oci/config`:
-
-```
-[DEFAULT]
-user=ocid1.user.oc1..exampleuniqueID
-fingerprint=c6:4f:66:e7:xx:xx:xx:xx
-tenancy=ocid1.tenancy.oc1..exampleuniqueID
-region=eu-frankfurt-1
-key_file=~/.oci/oci_api_key.pem
-```
-
-### Install Dependencies
-
-Backend:
-
-```bash
-cd backend
-pip install -r requirements.txt
-```
-
-Frontend:
-
-```bash
-cd ..
-npm install
-```
-
-### Configure Database
-
-1. Set up your database connection in OCI Agent Runtime
-2. The demo uses a sample e-commerce database with tables:
- - orders
- - customers
- - products
- - order_items
-
-### Start the Application
-
-Backend (FastAPI):
-
-```bash
-cd backend
-python -m uvicorn api.main:app --reload --host 0.0.0.0 --port 8000
-```
-
-Frontend (Next.js):
-
-```bash
-npm run dev
-```
-
-Access the application at: http://localhost:3000
-
-## Key Features
-
-| Feature | Description |
-| ------------------------ | ---------------------------------------------------------------- |
-| Natural Language Queries | Ask questions like "show me the top 5 orders" |
-| Intelligent Routing | GenAI-powered routing between data queries, charts, and insights |
-| Auto Chart Generation | Automatically creates appropriate visualizations from data |
-| Multi-Turn Conversations | Maintains context across multiple queries |
-| Real-Time Visualization | Generates matplotlib/seaborn charts as base64 images |
-| Multiple Chart Types | Supports bar, line, pie, scatter, and heatmap charts |
-| OCI GenAI Integration | Uses OCI Agent Runtime and Chat API |
-| LangChain Runnables | Clean integration pattern wrapping OCI SDK calls |
-| Conversation Management | Tracks query history and data state |
-| Error Handling | Clear error messages and fallback behavior |
-
-## Architecture
-
-### Backend Components
-
-1. **Router Agent** (OCI Chat API)
-
- - Intelligent query routing using GenAI
- - Routes: DATA_QUERY, CHART_EDIT, INSIGHT_QA
- - Returns structured JSON decisions
-
-2. **SQL Agent** (OCI Agent Runtime)
-
- - Natural language to SQL conversion
- - Database query execution
- - Structured data extraction
-
-3. **Chart Generator** (OCI Chat API + Python Execution)
-
- - GenAI generates matplotlib/seaborn code
- - Safe code execution in sandboxed environment
- - Returns base64-encoded chart images
-
-4. **Orchestrator**
- - Coordinates agents based on routing decisions
- - Manages conversation state
- - Handles multi-turn context
-
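The Chart Generator's render-to-base64 step follows a common matplotlib pattern. A minimal sketch of that pattern (independent of the actual `genai_chart_generator.py` implementation, which produces this kind of code via GenAI):

```python
import base64
import io

import matplotlib
matplotlib.use("Agg")  # headless backend; no display required on a server
import matplotlib.pyplot as plt

# Render a simple bar chart into an in-memory PNG buffer.
fig, ax = plt.subplots()
ax.bar(["North America", "Europe"], [16499.98, 16499.95])
ax.set_title("Revenue by region")

buf = io.BytesIO()
fig.savefig(buf, format="png")
plt.close(fig)

# Encode the PNG bytes as base64, ready to embed in a JSON API response.
chart_base64 = base64.b64encode(buf.getvalue()).decode("ascii")
```

The frontend can then display the string directly in an `<img>` tag via a `data:image/png;base64,...` URL.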
-### Frontend Components
-
-1. **Chat Interface**
-
- - Real-time message display
- - Support for text, tables, and images
- - Speech recognition integration
-
-2. **Service Layer**
-
- - API communication with backend
- - Response transformation
- - Error handling
-
-3. **Context Management**
- - User session handling
- - Message history
- - State management
-
-## Example Queries
-
-```
-"Show me the top 5 orders"
-→ Returns table with order data
-
-"Make a bar chart of those orders by total amount"
-→ Generates bar chart visualization
-
-"Show me orders grouped by region"
-→ Returns data aggregated by region
-
-"Create a pie chart of the order distribution"
-→ Generates pie chart from current data
-
-"What insights can you provide about these sales?"
-→ Provides AI-generated analysis
-```
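
Under the hood, each query is a JSON round trip to the backend. A minimal sketch of the request and one possible "data" style response, with illustrative values; the field names mirror the `QueryRequest`/`QueryResponse` models served by the FastAPI backend:

```python
import json

# Request body sent to POST /query.
payload = {"question": "Show me the top 5 orders", "context": ""}
body = json.dumps(payload)

# An illustrative "data" response; real values come from your database.
raw = '{"success": true, "response_type": "data", "data": [{"ORDER_ID": 3, "TOTAL_AMOUNT": 12499.97}]}'
resp = json.loads(raw)
if resp["success"] and resp["response_type"] == "data":
    rows = resp["data"]  # list of row dicts, ready to render as a table
```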
-
-## Configuration Files
-
-Key configuration in `backend/utils/config.py`:
-
-- MODEL_ID: Your OCI GenAI model OCID
-- AGENT_ENDPOINT_ID: Your OCI Agent Runtime endpoint
-- COMPARTMENT_ID: Your OCI compartment
-- SERVICE_ENDPOINT: GenAI service endpoint URL
-- DATABASE_SCHEMA: Database table definitions
-
-## Notes
-
-- Prompts can be customized in `backend/orchestration/oci_direct_runnables.py`
-- Chart generation code is dynamically created by GenAI
-- Designed specifically for Oracle Cloud Infrastructure + Generative AI
-- Sample database schema included for e-commerce use case
-- Frontend uses Material-UI for consistent design
-
-# Useful Links
-
-- [OCI Generative AI](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm)
- Official documentation for Oracle Generative AI
-
-- [OCI Agent Runtime](https://docs.oracle.com/en-us/iaas/Content/generative-ai/agent-runtime.htm)
- Documentation for OCI Agent Runtime
-
-- [LangChain Documentation](https://python.langchain.com/docs/get_started/introduction)
- LangChain framework documentation
-
-- [Next.js Documentation](https://nextjs.org/docs)
- Next.js framework documentation
-
-# License
-
-Copyright (c) 2025 Oracle and/or its affiliates.
-
-Licensed under the Universal Permissive License (UPL), Version 1.0.
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/DATABASE_SETUP.md b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/DATABASE_SETUP.md
deleted file mode 100644
index 8b07ba82a..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/DATABASE_SETUP.md
+++ /dev/null
@@ -1,130 +0,0 @@
-# Database Setup
-
-## Overview
-
-This application uses OCI Agent Runtime to query your database. The sample schema is for an e-commerce database.
-
-## Database Schema
-
-### CUSTOMERS Table
-```sql
-CREATE TABLE CUSTOMERS (
- CUSTOMER_ID NUMBER PRIMARY KEY,
- CUSTOMER_NAME VARCHAR2(100),
- EMAIL VARCHAR2(100),
- SIGNUP_DATE DATE,
- SEGMENT VARCHAR2(50),
- COUNTRY VARCHAR2(50),
- LIFETIME_VALUE NUMBER(10,2),
- CREATION_DATE DATE,
- CREATED_BY VARCHAR2(50),
- LAST_UPDATED_DATE DATE,
- LAST_UPDATED_BY VARCHAR2(50)
-);
-```
-
-### PRODUCTS Table
-```sql
-CREATE TABLE PRODUCTS (
- PRODUCT_ID NUMBER PRIMARY KEY,
- PRODUCT_NAME VARCHAR2(200),
- CATEGORY VARCHAR2(100),
- PRICE NUMBER(10,2),
- COST NUMBER(10,2),
- STOCK_QUANTITY NUMBER,
- LAUNCH_DATE DATE,
- CREATION_DATE DATE,
- CREATED_BY VARCHAR2(50),
- LAST_UPDATED_DATE DATE,
- LAST_UPDATED_BY VARCHAR2(50)
-);
-```
-
-### ORDERS Table
-```sql
-CREATE TABLE ORDERS (
- ORDER_ID NUMBER PRIMARY KEY,
- CUSTOMER_ID NUMBER,
- ORDER_DATE DATE,
- TOTAL_AMOUNT NUMBER(10,2),
- STATUS VARCHAR2(50),
- REGION VARCHAR2(100),
- SALES_REP VARCHAR2(100),
- CREATION_DATE DATE,
- CREATED_BY VARCHAR2(50),
- LAST_UPDATED_DATE DATE,
- LAST_UPDATED_BY VARCHAR2(50),
- FOREIGN KEY (CUSTOMER_ID) REFERENCES CUSTOMERS(CUSTOMER_ID)
-);
-```
-
-### ORDER_ITEMS Table
-```sql
-CREATE TABLE ORDER_ITEMS (
- ORDER_ITEM_ID NUMBER PRIMARY KEY,
- ORDER_ID NUMBER,
- PRODUCT_ID NUMBER,
- QUANTITY NUMBER,
- UNIT_PRICE NUMBER(10,2),
- DISCOUNT_PERCENT NUMBER(5,2),
- CREATION_DATE DATE,
- CREATED_BY VARCHAR2(50),
- LAST_UPDATED_DATE DATE,
- LAST_UPDATED_BY VARCHAR2(50),
- FOREIGN KEY (ORDER_ID) REFERENCES ORDERS(ORDER_ID),
- FOREIGN KEY (PRODUCT_ID) REFERENCES PRODUCTS(PRODUCT_ID)
-);
-```
-
-## Sample Data
-
-### Sample Customers
-```sql
-INSERT INTO CUSTOMERS VALUES (1, 'Acme Corp', 'contact@acme.com', DATE '2023-01-15', 'Enterprise', 'USA', 150000, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO CUSTOMERS VALUES (2, 'TechStart Inc', 'info@techstart.com', DATE '2023-03-20', 'SMB', 'UK', 45000, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO CUSTOMERS VALUES (3, 'Global Solutions', 'sales@global.com', DATE '2023-02-10', 'Enterprise', 'Germany', 200000, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-```
-
-### Sample Products
-```sql
-INSERT INTO PRODUCTS VALUES (1, 'Enterprise Security Suite', 'Software', 3499.99, 1200, 100, DATE '2023-01-01', SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO PRODUCTS VALUES (2, 'AI Analytics Platform', 'Software', 2999.99, 1000, 150, DATE '2023-02-01', SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO PRODUCTS VALUES (3, 'Cloud Storage Pro', 'Cloud', 999.99, 300, 500, DATE '2023-03-01', SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO PRODUCTS VALUES (4, 'Premium Consulting', 'Services', 5000, 2000, 50, DATE '2023-01-15', SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO PRODUCTS VALUES (5, 'Training Program', 'Services', 2500, 800, 100, DATE '2023-02-20', SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-```
-
-### Sample Orders
-```sql
-INSERT INTO ORDERS VALUES (1, 1, DATE '2024-01-15', 8999.98, 'Completed', 'North America', 'John Smith', SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDERS VALUES (2, 2, DATE '2024-01-20', 2999.99, 'Completed', 'Europe', 'Sarah Johnson', SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDERS VALUES (3, 3, DATE '2024-02-01', 12499.97, 'Completed', 'Europe', 'Mike Davis', SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDERS VALUES (4, 1, DATE '2024-02-15', 7500, 'Processing', 'North America', 'John Smith', SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDERS VALUES (5, 2, DATE '2024-03-01', 999.99, 'Completed', 'Europe', 'Sarah Johnson', SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-```
-
-### Sample Order Items
-```sql
-INSERT INTO ORDER_ITEMS VALUES (1, 1, 1, 2, 3499.99, 0, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDER_ITEMS VALUES (2, 1, 3, 2, 999.99, 10, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDER_ITEMS VALUES (3, 2, 2, 1, 2999.99, 0, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDER_ITEMS VALUES (4, 3, 1, 1, 3499.99, 0, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDER_ITEMS VALUES (5, 3, 2, 2, 2999.99, 10, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDER_ITEMS VALUES (6, 3, 5, 1, 2500, 0, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDER_ITEMS VALUES (7, 4, 4, 1, 5000, 0, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDER_ITEMS VALUES (8, 4, 5, 1, 2500, 0, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-INSERT INTO ORDER_ITEMS VALUES (9, 5, 3, 1, 999.99, 0, SYSDATE, 'SYSTEM', SYSDATE, 'SYSTEM');
-```
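
With this sample data loaded, a question like "Show me orders grouped by region" should aggregate TOTAL_AMOUNT per REGION. A quick plain-Python check of the expected totals (values taken from the sample ORDERS inserts above):

```python
# Sample ORDERS rows, copied from the INSERT statements above.
orders = [
    {"ORDER_ID": 1, "REGION": "North America", "TOTAL_AMOUNT": 8999.98},
    {"ORDER_ID": 2, "REGION": "Europe", "TOTAL_AMOUNT": 2999.99},
    {"ORDER_ID": 3, "REGION": "Europe", "TOTAL_AMOUNT": 12499.97},
    {"ORDER_ID": 4, "REGION": "North America", "TOTAL_AMOUNT": 7500.0},
    {"ORDER_ID": 5, "REGION": "Europe", "TOTAL_AMOUNT": 999.99},
]

# Sum TOTAL_AMOUNT per region, rounding to cents at each step.
totals = {}
for o in orders:
    totals[o["REGION"]] = round(totals.get(o["REGION"], 0) + o["TOTAL_AMOUNT"], 2)
```

This yields 16499.98 for North America and 16499.95 for Europe, a useful sanity check against the agent's SQL results.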
-
-## OCI Agent Runtime Configuration
-
-1. Create a database connection in OCI Agent Runtime
-2. Configure the database tool/function with:
-   - Connection string
-   - User credentials
-   - Query permissions
-3. Test the connection with a simple query
-4. Update AGENT_ENDPOINT_ID in config.py
-
-
-
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/README_FILES.md b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/README_FILES.md
deleted file mode 100644
index 3278b89a6..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/README_FILES.md
+++ /dev/null
@@ -1,180 +0,0 @@
-# Files Directory - Quick Start Guide
-
-This directory contains all necessary files to run the SQL Graph Generator Dashboard.
-
-## Directory Structure
-
-```
-files/
-├── backend/
-│ ├── api/
-│ │ └── main.py # FastAPI server entry point
-│ ├── orchestration/
-│ │ ├── langchain_orchestrator_v2.py # Main orchestrator with routing logic
-│ │ ├── oci_direct_runnables.py # OCI GenAI Chat API wrappers
-│ │ ├── oci_runnables.py # OCI Agent Runtime wrappers
-│ │ └── conversation_manager.py # Conversation state management
-│ ├── tools/
-│ │ └── genai_chart_generator.py # Chart generation with GenAI
-│ ├── utils/
-│ │ └── config.py # OCI configuration (UPDATE THIS)
-│ └── requirements.txt # Python dependencies
-├── frontend/
-│ ├── services/
-│ │ └── genaiAgentService.js # Backend API communication
-│ ├── contexts/
-│ │ └── ChatContext.js # Chat state management
-│ └── package.json # Node.js dependencies
-├── database/
-│ ├── customers.csv # Sample customer data
-│ ├── products.csv # Sample product data
-│ ├── orders.csv # Sample order data
-│ └── order_items.csv # Sample order items data
-├── SETUP_GUIDE.md # Detailed setup instructions
-├── DATABASE_SETUP.md # Database schema and setup
-└── README_FILES.md # This file
-```
-
-## Quick Start (5 Steps)
-
-### 1. Update OCI Configuration
-
-Edit `backend/utils/config.py`:
-```python
-MODEL_ID = "ocid1.generativeaimodel.oc1.YOUR_REGION.YOUR_MODEL_ID"
-AGENT_ENDPOINT_ID = "ocid1.genaiagentendpoint.oc1.YOUR_REGION.YOUR_ENDPOINT_ID"
-COMPARTMENT_ID = "ocid1.compartment.oc1..YOUR_COMPARTMENT_ID"
-SERVICE_ENDPOINT = "https://inference.generativeai.YOUR_REGION.oci.oraclecloud.com"
-```
-
-### 2. Set Up the OCI CLI
-
-Create `~/.oci/config`:
-```
-[DEFAULT]
-user=ocid1.user.oc1..YOUR_USER_OCID
-fingerprint=YOUR_FINGERPRINT
-tenancy=ocid1.tenancy.oc1..YOUR_TENANCY_OCID
-region=YOUR_REGION
-key_file=~/.oci/oci_api_key.pem
-```
-
-### 3. Install Dependencies
-
-Backend:
-```bash
-cd backend
-pip install -r requirements.txt
-```
-
-Frontend (in project root):
-```bash
-npm install
-```
-
-### 4. Set Up the Database
-
-The database CSV files are included in `database/` directory.
-Configure your OCI Agent Runtime to access these files or load them into your database.
-
-See `DATABASE_SETUP.md` for SQL schema.
-
-### 5. Run the Application
-
-Terminal 1 - Backend:
-```bash
-cd backend
-python -m uvicorn api.main:app --reload --host 0.0.0.0 --port 8000
-```
-
-Terminal 2 - Frontend (from project root):
-```bash
-npm run dev
-```
-
-Open: http://localhost:3000
-
-## Key Files Explained
-
-### Backend
-
-**main.py** - FastAPI server with `/query` endpoint
-- Receives natural language questions
-- Returns data, charts, or text responses
-
-**langchain_orchestrator_v2.py** - Main orchestration logic
-- Routes queries to appropriate agents
-- Manages conversation state
-- Coordinates data retrieval and chart generation
-
-**oci_direct_runnables.py** - OCI GenAI Chat API integration
-- Router for intelligent query routing
-- Uses GenAI for decision making
-
-**oci_runnables.py** - OCI Agent Runtime integration
-- SQL Agent for database queries
-- Extracts structured data from tool outputs
-
-**genai_chart_generator.py** - Chart generation
-- Uses GenAI to create matplotlib code
-- Executes code safely
-- Returns base64-encoded images
-
-**conversation_manager.py** - State management
-- Tracks conversation history
-- Maintains data context
-
-### Frontend
-
-**genaiAgentService.js** - API client
-- Communicates with backend
-- Maps response fields (chart_base64 → diagram_base64)
-
-**ChatContext.js** - React context
-- Manages chat state
-- Processes responses for display
-- Handles different message types
-
-## Configuration Tips
-
-1. **Region Consistency**: Ensure all OCIDs and endpoints use the same region
-2. **Model Selection**: OpenAI GPT OSS 120B recommended for routing and generation
-3. **Agent Tools**: Configure database tools in OCI Agent Runtime console
-4. **Permissions**: Ensure OCI user has GenAI and Agent Runtime permissions
-
-## Common Issues
-
-**Authentication Error:**
-- Check `~/.oci/config` file
-- Verify API key is uploaded to OCI Console
-- Test with: `oci iam region list`
-
-**Module Import Error:**
-- Ensure you're in the correct directory
-- Check all `__init__.py` files exist
-- Verify Python path includes backend directory
-
-**Chart Not Displaying:**
-- Check browser console for errors
-- Verify chart_base64 field in API response
-- Ensure frontend compiled successfully
-
-**SQL Agent Timeout:**
-- Check AGENT_ENDPOINT_ID is correct
-- Verify agent is deployed and active
-- Test agent in OCI Console first
-
-## Next Steps
-
-1. Customize DATABASE_SCHEMA in config.py for your database
-2. Adjust prompts in oci_direct_runnables.py for your use case
-3. Add custom chart types in genai_chart_generator.py
-4. Extend routing logic for additional query types
-
-## Support
-
-For OCI GenAI documentation:
-https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm
-
-For OCI Agent Runtime:
-https://docs.oracle.com/en-us/iaas/Content/generative-ai/agent-runtime.htm
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/SETUP_GUIDE.md b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/SETUP_GUIDE.md
deleted file mode 100644
index 72ae21fc4..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/SETUP_GUIDE.md
+++ /dev/null
@@ -1,118 +0,0 @@
-# Setup Guide
-
-## Prerequisites
-
-1. Oracle Cloud Infrastructure (OCI) account
-2. Python 3.8+
-3. Node.js 18+
-4. OCI CLI configured
-
-## Step 1: OCI Configuration
-
-Create `~/.oci/config`:
-
-```
-[DEFAULT]
-user=ocid1.user.oc1..YOUR_USER_OCID
-fingerprint=YOUR_FINGERPRINT
-tenancy=ocid1.tenancy.oc1..YOUR_TENANCY_OCID
-region=eu-frankfurt-1
-key_file=~/.oci/oci_api_key.pem
-```
-
-Generate API key:
-```bash
-openssl genrsa -out ~/.oci/oci_api_key.pem 2048
-openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
-```
-
-Upload the public key to OCI Console → User Settings → API Keys
-
-## Step 2: OCI GenAI Setup
-
-1. Go to OCI Console → Generative AI
-2. Create or select a model (e.g., OpenAI GPT OSS 120B)
-3. Note the MODEL_ID
-4. Create an Agent Runtime endpoint for SQL queries
-5. Note the AGENT_ENDPOINT_ID
-6. Get your COMPARTMENT_ID
-
-## Step 3: Update Configuration
-
-Edit `backend/utils/config.py`:
-- Replace MODEL_ID with your model OCID
-- Replace AGENT_ENDPOINT_ID with your agent endpoint OCID
-- Replace COMPARTMENT_ID with your compartment OCID
-- Update region if different from eu-frankfurt-1
-
-## Step 4: Install Dependencies
-
-Backend:
-```bash
-cd backend
-pip install -r requirements.txt
-```
-
-Frontend:
-```bash
-cd ..
-npm install
-```
-
-## Step 5: Database Setup
-
-This demo uses OCI Agent Runtime with database tools.
-Configure your database connection in the OCI Agent Runtime console:
-1. Go to OCI Console → Generative AI → Agents
-2. Create or configure your agent
-3. Add database tool/function
-4. Configure connection to your database
-
-Sample schema is provided in `config.py` for reference.
-
-## Step 6: Run the Application
-
-Terminal 1 (Backend):
-```bash
-cd backend
-python -m uvicorn api.main:app --reload --host 0.0.0.0 --port 8000
-```
-
-Terminal 2 (Frontend):
-```bash
-npm run dev
-```
-
-Access: http://localhost:3000
-
-## Troubleshooting
-
-**OCI Authentication Error:**
-- Verify ~/.oci/config is correct
-- Check API key permissions in OCI Console
-- Ensure key_file path is absolute
-
-**Model Not Found:**
-- Verify MODEL_ID matches your OCI model OCID
-- Check model is in same region as config
-- Ensure compartment access permissions
-
-**Agent Endpoint Error:**
-- Verify AGENT_ENDPOINT_ID is correct
-- Check agent is deployed and active
-- Ensure database tools are configured
-
-**Chart Generation Fails:**
-- Check matplotlib/seaborn are installed
-- Verify Python code execution permissions
-- Check logs for specific errors
-
-## Environment Variables (Optional)
-
-Instead of editing config.py, you can use environment variables:
-
-```bash
-export MODEL_ID="ocid1.generativeaimodel..."
-export AGENT_ENDPOINT_ID="ocid1.genaiagentendpoint..."
-export COMPARTMENT_ID="ocid1.compartment..."
-```
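
A minimal sketch of how `config.py` could honor these variables while keeping defaults (the actual `backend/utils/config.py` may read them differently; the placeholder OCIDs below are hypothetical):

```python
import os

# Prefer environment variables; fall back to placeholder defaults.
# The default OCID values are hypothetical placeholders, not real resources.
MODEL_ID = os.environ.get("MODEL_ID", "ocid1.generativeaimodel.oc1..CHANGE_ME")
AGENT_ENDPOINT_ID = os.environ.get("AGENT_ENDPOINT_ID", "ocid1.genaiagentendpoint.oc1..CHANGE_ME")
COMPARTMENT_ID = os.environ.get("COMPARTMENT_ID", "ocid1.compartment.oc1..CHANGE_ME")
```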
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/api/main.py b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/api/main.py
deleted file mode 100644
index e706a936a..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/api/main.py
+++ /dev/null
@@ -1,119 +0,0 @@
-"""
-FastAPI server for SQL Graph Generator Dashboard
-"""
-
-from fastapi import FastAPI, HTTPException
-from fastapi.middleware.cors import CORSMiddleware
-from pydantic import BaseModel
-from typing import Dict, Any, List, Optional
-import json
-import logging
-
-from orchestration.langchain_orchestrator_v2 import LangChainOrchestratorV2
-
-# Setup logging
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-app = FastAPI(title="SQL Graph Generator Dashboard", version="1.0.0")
-
-# CORS configuration
-app.add_middleware(
- CORSMiddleware,
- allow_origins=["http://localhost:3000", "http://localhost:3001"],
- allow_credentials=True,
- allow_methods=["*"],
- allow_headers=["*"],
-)
-
-# Initialize LangChain orchestrator
-orchestrator = LangChainOrchestratorV2()
-
-# Request/Response models
-class QueryRequest(BaseModel):
- question: str
- context: Optional[str] = ""
-
-class QueryResponse(BaseModel):
- success: bool
- response_type: str # "visualization", "data", "error"
- query: Optional[str] = None
- agent_response: Optional[str] = None
- dashboard: Optional[Dict] = None
- data: Optional[List[Dict]] = None
- insights: Optional[List[str]] = None
- text_response: Optional[str] = None
- error: Optional[str] = None
- chart_base64: Optional[str] = None
- chart_config: Optional[Dict] = None
- method: Optional[str] = None
- generated_sql: Optional[str] = None
- additional_info: Optional[str] = None
-
-@app.get("/")
-async def root():
- return {
- "message": "SQL Graph Generator Dashboard API",
- "version": "1.0.0",
- "status": "active"
- }
-
-@app.get("/health")
-async def health_check():
- return {"status": "healthy", "service": "sql-graph-generator"}
-
-@app.post("/query", response_model=QueryResponse)
-async def process_query(request: QueryRequest):
- """
- Process a user query and return data, visualization, or text response
- """
- try:
- logger.info(f"Processing query: {request.question}")
-
- result = orchestrator.process_natural_language_query(request.question)
-
- return QueryResponse(**result)
-
- except Exception as e:
- logger.error(f"Error processing query: {str(e)}")
- raise HTTPException(status_code=500, detail=str(e))
-
-@app.get("/sample-questions")
-async def get_sample_questions():
- """
- Get sample questions that users can ask
- """
- return {
- "questions": orchestrator.get_sample_questions(),
- "description": "Sample questions you can ask the SQL Graph Generator"
- }
-
-@app.get("/database-schema")
-async def get_database_schema():
- """
- Get the database schema information
- """
- from utils.config import DATABASE_SCHEMA
- return {
- "schema": DATABASE_SCHEMA,
- "description": "E-commerce database schema with orders, customers, products, and order_items"
- }
-
-@app.get("/chart-types")
-async def get_supported_chart_types():
- """
- Get supported chart types
- """
- return {
- "chart_types": [
- {"type": "bar", "description": "Bar charts for category comparisons"},
- {"type": "line", "description": "Line charts for trends over time"},
- {"type": "pie", "description": "Pie charts for distributions"},
- {"type": "scatter", "description": "Scatter plots for correlations"},
- {"type": "heatmap", "description": "Heatmaps for correlation analysis"}
- ]
- }
-
-if __name__ == "__main__":
- import uvicorn
- uvicorn.run(app, host="0.0.0.0", port=8000)
\ No newline at end of file
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/conversation_manager.py b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/conversation_manager.py
deleted file mode 100644
index fba047a15..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/conversation_manager.py
+++ /dev/null
@@ -1,211 +0,0 @@
-"""
-Conversation History Manager for Multi-Turn Conversational Flow
-Tracks context across multiple GenAI calls for intelligent responses
-"""
-
-from typing import Dict, Any, List, Optional
-from dataclasses import dataclass, asdict
-from datetime import datetime
-import json
-
-
-@dataclass
-class ConversationTurn:
- """Single conversation turn with full context"""
- user_query: str
- route: str
- data: Optional[List[Dict]]
- chart_config: Optional[Dict]
- response_type: str
- agent_response: str
- generated_sql: Optional[str]
- chart_base64: Optional[str]
- timestamp: datetime
- success: bool
- method: str
-
- def to_dict(self) -> Dict[str, Any]:
- """Convert to dictionary for JSON serialization"""
- return {
- **asdict(self),
- 'timestamp': self.timestamp.isoformat(),
- 'data_summary': {
- 'count': len(self.data) if self.data else 0,
- 'columns': list(self.data[0].keys()) if self.data else [],
- 'sample': self.data[:2] if self.data else []
- } if self.data else None
- }
-
- def to_context_string(self) -> str:
- """Convert to concise context string for prompts"""
- context_parts = [
- f"Q: {self.user_query}",
- f"Route: {self.route}",
- f"Response: {self.agent_response[:100]}..." if len(self.agent_response) > 100 else f"Response: {self.agent_response}"
- ]
-
- if self.data:
- context_parts.append(f"Data: {len(self.data)} rows with columns {list(self.data[0].keys())}")
-
- if self.chart_config:
- chart_type = self.chart_config.get('chart_type', 'unknown')
- context_parts.append(f"Chart: {chart_type} chart created")
-
- return " | ".join(context_parts)
-
-
-class ConversationManager:
- """
- Manages conversation history and context for multi-turn interactions
- """
-
- def __init__(self, max_history: int = 10):
- self.conversation_history: List[ConversationTurn] = []
- self.max_history = max_history
- self.session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
-
- def add_turn(self,
- user_query: str,
- route: str,
- result: Dict[str, Any]) -> None:
- """Add a new conversation turn"""
-
- turn = ConversationTurn(
- user_query=user_query,
- route=route,
- data=result.get('data'),
- chart_config=result.get('chart_config'),
- response_type=result.get('response_type', 'unknown'),
- agent_response=result.get('agent_response', ''),
- generated_sql=result.get('generated_sql'),
- chart_base64=result.get('chart_base64'),
- timestamp=datetime.now(),
- success=result.get('success', False),
- method=result.get('method', 'unknown')
- )
-
- self.conversation_history.append(turn)
-
- # Keep only recent history
- if len(self.conversation_history) > self.max_history:
- self.conversation_history = self.conversation_history[-self.max_history:]
-
- print(f"Added conversation turn: {user_query} → {route}")
-
- def get_context_for_prompt(self, context_window: int = 3) -> str:
- """
- Get formatted conversation context for GenAI prompts
- """
- if not self.conversation_history:
- return "No previous conversation history."
-
- recent_turns = self.conversation_history[-context_window:] if context_window else self.conversation_history
-
- context_lines = ["Previous conversation context:"]
- for i, turn in enumerate(recent_turns, 1):
- context_lines.append(f"{i}. {turn.to_context_string()}")
-
- return "\n".join(context_lines)
-
- def get_current_data(self) -> Optional[List[Dict]]:
- """Get data from the most recent turn that has data"""
- for turn in reversed(self.conversation_history):
- if turn.data and turn.success:
- return turn.data
- return None
-
- def get_current_chart_config(self) -> Optional[Dict]:
- """Get chart config from the most recent turn that has a chart"""
- for turn in reversed(self.conversation_history):
- if turn.chart_config and turn.success:
- return turn.chart_config
- return None
-
- def get_current_chart_base64(self) -> Optional[str]:
- """Get the most recent chart image"""
- for turn in reversed(self.conversation_history):
- if turn.chart_base64 and turn.success:
- return turn.chart_base64
- return None
-
- def has_data_context(self) -> bool:
- """Check if we have data in recent context"""
- return self.get_current_data() is not None
-
- def has_chart_context(self) -> bool:
- """Check if we have a chart in recent context"""
- return self.get_current_chart_config() is not None
-
- def get_data_summary(self) -> Dict[str, Any]:
- """Get summary of current data context"""
- data = self.get_current_data()
- if not data:
- return {"has_data": False}
-
- return {
- "has_data": True,
- "row_count": len(data),
- "columns": list(data[0].keys()) if data else [],
- "sample_row": data[0] if data else None
- }
-
- def get_chart_summary(self) -> Dict[str, Any]:
- """Get summary of current chart context"""
- chart_config = self.get_current_chart_config()
- if not chart_config:
- return {"has_chart": False}
-
- return {
- "has_chart": True,
- "chart_type": chart_config.get("chart_type", "unknown"),
- "x_axis": chart_config.get("x_axis", "unknown"),
- "y_axis": chart_config.get("y_axis", "unknown"),
- "title": chart_config.get("title", "")
- }
-
- def clear_history(self) -> None:
- """Clear conversation history"""
- self.conversation_history = []
- print("Conversation history cleared")
-
- def export_history(self) -> List[Dict]:
- """Export conversation history as JSON-serializable format"""
- return [turn.to_dict() for turn in self.conversation_history]
-
- def get_recent_queries(self, count: int = 5) -> List[str]:
- """Get recent user queries for context"""
- recent_turns = self.conversation_history[-count:] if count else self.conversation_history
- return [turn.user_query for turn in recent_turns]
-
- def get_last_successful_sql(self) -> Optional[str]:
- """Get the most recent successful SQL query"""
- for turn in reversed(self.conversation_history):
- if turn.generated_sql and turn.success and turn.route == "DATA_QUERY":
- return turn.generated_sql
- return None
-
- def should_use_existing_data(self, user_query: str) -> bool:
- """
- Determine if the query can use existing data or needs new data
- """
- query_lower = user_query.lower()
-
- # Keywords that suggest working with existing data
- chart_keywords = ["chart", "graph", "plot", "visualize", "show", "display"]
- edit_keywords = ["change", "modify", "edit", "update", "make it", "convert to"]
- analysis_keywords = ["analyze", "explain", "what does", "tell me about", "insights", "trends"]
-
- has_data = self.has_data_context()
-
- # If we have data and query suggests chart/analysis work
- if has_data and any(keyword in query_lower for keyword in chart_keywords + edit_keywords + analysis_keywords):
- return True
-
- # If query explicitly asks for new data
- new_data_keywords = ["get", "find", "show me", "list", "select", "data"]
- specific_requests = ["orders", "customers", "products", "sales"]
-
- if any(keyword in query_lower for keyword in new_data_keywords + specific_requests):
- return False
-
- return has_data # Default to using existing data if available
\ No newline at end of file
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/langchain_orchestrator_v2.py b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/langchain_orchestrator_v2.py
deleted file mode 100644
index 771c2b510..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/langchain_orchestrator_v2.py
+++ /dev/null
@@ -1,443 +0,0 @@
-"""
-LangChain orchestrator using RunnableSequence for SQL Graph Dashboard
-Router → branch(DATA_QUERY→OCI, CHART_EDIT→viz_edit, INSIGHT_QA→insight)
-"""
-
-from langchain_core.runnables import Runnable, RunnableLambda, RunnableBranch
-from typing import Dict, Any, List, Optional
-import base64
-import json
-
-from .oci_runnables import OciSqlAgentRunnable
-from .oci_direct_runnables import RouterRunnable, VizGeneratorRunnable, InsightQARunnable
-from .conversation_manager import ConversationManager
-from tools.genai_chart_generator import GenAIChartGenerator
-
-
-class ChartEditRunnable(Runnable):
- """
- Runnable for editing existing chart configurations
- """
-
- def invoke(self, input_data: Dict[str, Any], config: Any = None) -> Dict[str, Any]:
- """
- Modify existing chart based on user request
- """
- current_config = input_data.get("current_chart_config", {})
- question = input_data.get("question", "")
- data = input_data.get("data", [])
-
- # Simple chart type modifications
- if "pie" in question.lower():
- current_config["chart_type"] = "pie"
- elif "bar" in question.lower():
- current_config["chart_type"] = "bar"
- elif "line" in question.lower():
- current_config["chart_type"] = "line"
- elif "scatter" in question.lower():
- current_config["chart_type"] = "scatter"
-
- # Sorting modifications
- if "sort" in question.lower():
- if "desc" in question.lower() or "highest" in question.lower():
- current_config["sort_direction"] = "desc"
- else:
- current_config["sort_direction"] = "asc"
-
- return {
- "success": True,
- "config": current_config,
- "data": data,
- "method": "chart_edit",
- "response_type": "visualization"
- }
-
-
-# Superseded by the InsightQARunnable imported from oci_direct_runnables above;
-# renamed so the import is no longer shadowed by this local definition.
-class LegacyInsightQARunnable(Runnable):
- """
- Runnable for generating insights about current data
- """
-
- def __init__(self):
- try:
- from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI
- from langchain_core.messages import HumanMessage
- from utils import config
-
- self.genai_client = ChatOCIGenAI(
- model_id=config.MODEL_ID,
- service_endpoint=config.SERVICE_ENDPOINT,
- compartment_id=config.COMPARTMENT_ID,
- model_kwargs={
- "temperature": 0.7,
- "top_p": 0.9,
- "max_tokens": 500
- }
- )
- self.oci_available = True
- print("Insight QA Runnable initialized")
- except Exception as e:
- print(f"⚠️ Insight QA fallback mode: {e}")
- self.genai_client = None
- self.oci_available = False
-
- def invoke(self, input_data: Dict[str, Any], config: Any = None) -> Dict[str, Any]:
- """
- Generate insights about the current data
- """
- data = input_data.get("data", [])
- question = input_data.get("question", "")
-
- if not data:
- return {
- "success": False,
- "error": "No data available for analysis",
- "response_type": "text_response"
- }
-
- # Create analysis prompt
- data_summary = {
- "total_rows": len(data),
- "columns": list(data[0].keys()) if data else [],
- "sample_data": data[:3]
- }
-
- prompt = f"""Analyze this data and answer the user's question with insights.
-
-User Question: "{question}"
-
-Data Summary:
-- Total rows: {data_summary['total_rows']}
-- Columns: {data_summary['columns']}
-- Sample data: {data_summary['sample_data']}
-
-Provide a concise analysis with specific insights, trends, or patterns you observe in the data.
-"""
-
- if self.oci_available:
- try:
- from langchain_core.messages import HumanMessage
- messages = [HumanMessage(content=prompt)]
- response = self.genai_client.invoke(messages)
-
- # Extract content
- if hasattr(response, 'content'):
- insight_text = response.content
- else:
- insight_text = str(response)
-
- return {
- "success": True,
- "text_response": insight_text,
- "data": data,
- "response_type": "text_response",
- "method": "genai_analysis"
- }
-
- except Exception as e:
- print(f"Insight generation error: {e}")
- return self._fallback_insight(data, question)
- else:
- return self._fallback_insight(data, question)
-
- def _fallback_insight(self, data: List[Dict], question: str) -> Dict[str, Any]:
- """Generate simple fallback insights"""
- if not data:
- return {
- "success": True,
- "text_response": "No data available for analysis.",
- "response_type": "text_response"
- }
-
- insights = [
- f"Dataset contains {len(data)} records",
- f"Available fields: {', '.join(data[0].keys()) if data else 'None'}"
- ]
-
- # Simple numeric analysis
- numeric_fields = []
- for field in data[0].keys() if data else []:
- try:
- values = [float(row.get(field, 0)) for row in data[:10]]
- if values:
- avg_val = sum(values) / len(values)
- insights.append(f"{field} average: {avg_val:.2f}")
- numeric_fields.append(field)
- except (ValueError, TypeError):
- pass
-
- if not numeric_fields:
- insights.append("No numeric fields found for statistical analysis.")
-
- return {
- "success": True,
- "text_response": "\n".join(insights),
- "data": data,
- "response_type": "text_response",
- "method": "fallback_analysis"
- }
-
-
-class LangChainOrchestratorV2:
- """
- Clean LangChain orchestrator using RunnableSequence architecture
- """
-
- def __init__(self):
- print("🚀 Initializing LangChain Orchestrator V2...")
-
- # Initialize all runnables
- self.router = RouterRunnable()
- self.sql_agent = OciSqlAgentRunnable()
- self.viz_generator = VizGeneratorRunnable()
- self.chart_editor = ChartEditRunnable()
- self.insight_qa = InsightQARunnable() # Now using direct OCI calls
- self.chart_generator = GenAIChartGenerator()
-
- # Conversation history manager
- self.conversation = ConversationManager()
-
- # Track current state (for backward compatibility)
- self.current_data = None
- self.current_chart_config = None
-
- print("LangChain Orchestrator V2 initialized")
-
- def process_natural_language_query(self, user_question: str) -> Dict[str, Any]:
- """
- Main entry point - processes user query through the complete pipeline
- """
- try:
- print(f"Processing query: {user_question}")
-
- # Step 1: Route the query with conversation context
- route_input = {
- "question": user_question,
- "context": {
- "has_data": self.conversation.has_data_context(),
- "has_chart": self.conversation.has_chart_context(),
- "conversation_history": self.conversation.get_context_for_prompt(3),
- "data_summary": self.conversation.get_data_summary(),
- "chart_summary": self.conversation.get_chart_summary()
- }
- }
-
- routing_result = self.router.invoke(route_input)
- route = routing_result.get("route", "DATA_QUERY")
- print(f"Router decision: {route} (confidence: {routing_result.get('confidence', 0.5)})")
- print(f"Reasoning: {routing_result.get('reasoning', 'No reasoning')}")
-
- # Step 2: Branch based on route
- if route == "DATA_QUERY":
- result = self._handle_data_query(user_question)
- elif route == "CHART_EDIT":
- result = self._handle_chart_edit(user_question)
- elif route == "INSIGHT_QA":
- result = self._handle_insight_qa(user_question)
- else:
- # Fallback to data query
- result = self._handle_data_query(user_question)
-
- # Step 3: Record this conversation turn
- self.conversation.add_turn(user_question, route, result)
-
- # Update backward compatibility state
- if result.get('data'):
- self.current_data = result['data']
- if result.get('chart_config'):
- self.current_chart_config = result['chart_config']
-
- return result
-
- except Exception as e:
- print(f"Orchestrator error: {e}")
- import traceback
- traceback.print_exc()
- return {
- "success": False,
- "error": str(e),
- "response_type": "error"
- }
-
- def _handle_data_query(self, user_question: str) -> Dict[str, Any]:
- """
- Handle DATA_QUERY route: SQL Agent → Viz Generator → Chart Generator
- """
- try:
- # Step 1: Get data from OCI SQL Agent
- sql_input = {"question": user_question}
- sql_result = self.sql_agent.invoke(sql_input)
-
- if not sql_result.get("success", False):
- return {
- "success": False,
- "error": sql_result.get("error", "SQL query failed"),
- "response_type": "error"
- }
-
- data = sql_result.get("data", [])
- if not data:
- return {
- "success": True,
- "query": user_question,
- "agent_response": sql_result.get("agent_response", "No data found"),
- "response_type": "text_response",
- "text_response": sql_result.get("agent_response", "No data found"),
- "data": []
- }
-
- # Update current state (conversation manager handles this)
-
- # DATA_QUERY only returns data - no automatic chart generation
- # Charts should only be created when explicitly requested via CHART_EDIT
-
- # Store data for conversation context
- self.current_data = data
-
- # Conversation history is recorded once by process_natural_language_query;
- # adding a turn here as well would record this query twice.
-
- # Return data without chart
- return {
- "success": True,
- "query": user_question,
- "agent_response": sql_result.get("agent_response", ""),
- "response_type": "data",
- "data": data,
- "generated_sql": sql_result.get("generated_sql"),
- "additional_info": sql_result.get("additional_info"),
- "method": "data_only"
- }
-
- except Exception as e:
- print(f"Data query handling error: {e}")
- return {
- "success": False,
- "error": str(e),
- "response_type": "error"
- }
-
- def _handle_chart_edit(self, user_question: str) -> Dict[str, Any]:
- """
- Handle CHART_EDIT route: modify existing chart
- """
- # Always get fresh data for chart requests to ensure we're using the right dataset
- print("Getting fresh data for chart...")
- sql_input = {"question": user_question}
- sql_result = self.sql_agent.invoke(sql_input)
-
- if not sql_result.get("success", False):
- return {
- "success": False,
- "error": f"Failed to get data for chart: {sql_result.get('error', 'Unknown error')}",
- "response_type": "error"
- }
-
- current_data = sql_result.get("data", [])
- if not current_data:
- return {
- "success": False,
- "error": "No data available for chart creation",
- "response_type": "error"
- }
-
- # Store the new data
- self.current_data = current_data
- print(f"Retrieved {len(current_data)} rows for chart generation")
-
- # Get current chart config for potential reuse
- current_chart_config = self.conversation.get_current_chart_config()
-
- # If we have data but no chart config, create a new chart (don't redirect to data query)
-
- try:
- # Generate chart directly using GenAI Chart Generator
- chart_result = self.chart_generator.generate_chart(
- user_request=user_question,
- data=current_data,
- chart_params=current_chart_config or {}
- )
-
- if chart_result.get("success", False):
- # Store the chart config for future use
- self.current_chart_config = chart_result.get("chart_config", {})
-
- # Conversation history is recorded once by process_natural_language_query;
- # adding a turn here as well would record this query twice.
-
- return {
- "success": True,
- "query": user_question,
- "agent_response": f"Chart created: {user_question}",
- "response_type": "visualization",
- "data": current_data,
- "chart_base64": chart_result.get("chart_base64"),
- "chart_config": chart_result.get("chart_config", {}),
- "method": f"chart_generated_{chart_result.get('method', 'unknown')}"
- }
- else:
- return {
- "success": False,
- "error": f"Failed to update chart: {chart_result.get('error', 'Unknown error')}",
- "response_type": "error"
- }
-
- except Exception as e:
- print(f"Chart edit handling error: {e}")
- return {
- "success": False,
- "error": str(e),
- "response_type": "error"
- }
-
- def _handle_insight_qa(self, user_question: str) -> Dict[str, Any]:
- """
- Handle INSIGHT_QA route: analyze current data
- """
- if not self.current_data:
- # No data to analyze, redirect to data query
- return self._handle_data_query(user_question)
-
- try:
- insight_input = {
- "question": user_question,
- "data": self.current_data
- }
-
- insight_result = self.insight_qa.invoke(insight_input)
-
- return {
- "success": insight_result.get("success", True),
- "query": user_question,
- "agent_response": insight_result.get("text_response", "No insights generated"),
- "response_type": "text_response",
- "text_response": insight_result.get("text_response", "No insights generated"),
- "data": self.current_data,
- "method": insight_result.get("method", "insight_analysis")
- }
-
- except Exception as e:
- print(f"Insight QA handling error: {e}")
- return {
- "success": False,
- "error": str(e),
- "response_type": "error"
- }
-
- def get_current_data(self) -> Optional[List[Dict]]:
- """Get current data for transparency"""
- return self.current_data
-
- def get_current_chart_config(self) -> Optional[Dict]:
- """Get current chart config for transparency"""
- return self.current_chart_config
-
- def clear_context(self):
- """Clear current context"""
- self.current_data = None
- self.current_chart_config = None
- print("Context cleared")
\ No newline at end of file
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/oci_direct_runnables.py b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/oci_direct_runnables.py
deleted file mode 100644
index aefda1ecf..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/oci_direct_runnables.py
+++ /dev/null
@@ -1,412 +0,0 @@
-"""
-LangChain Runnables using direct OCI SDK calls for GenAI models
-Pure OCI SDK wrapped as LangChain Runnables - no langchain-community dependencies
-"""
-
-from langchain_core.runnables import Runnable
-from typing import Dict, Any, List
-import oci
-import json
-from utils import config
-
-
-class OciGenAIRunnable(Runnable):
- """
- Direct OCI GenAI model calls wrapped as LangChain Runnable
- """
-
- def __init__(self, purpose: str = "general"):
- self.purpose = purpose
- try:
- # Initialize OCI GenAI client with correct endpoint
- oci_config = oci.config.from_file()
- # Override endpoint to match the model's region
- oci_config['region'] = 'eu-frankfurt-1'
- self.genai_client = oci.generative_ai_inference.GenerativeAiInferenceClient(oci_config)
-
- # Set correct service endpoint
- self.genai_client.base_client.endpoint = config.SERVICE_ENDPOINT
-
- self.model_id = config.MODEL_ID
- self.service_endpoint = config.SERVICE_ENDPOINT
- self.compartment_id = config.COMPARTMENT_ID
- self.oci_available = True
- print(f"OCI GenAI Direct Runnable ({purpose}) initialized with endpoint: {config.SERVICE_ENDPOINT}")
- except Exception as e:
- print(f"OCI GenAI Direct Runnable ({purpose}) failed: {e}")
- self.genai_client = None
- self.oci_available = False
-
- def invoke(self, input_data: Dict[str, Any], config: Any = None) -> Dict[str, Any]:
- """
- Call OCI GenAI model directly
- """
- prompt = input_data.get("prompt", "")
- max_tokens = input_data.get("max_tokens", 500)
- temperature = input_data.get("temperature", 0.7)
-
- if not self.oci_available:
- return {
- "success": False,
- "error": "OCI GenAI not available",
- "response": "",
- "method": "error"
- }
-
- try:
- # Create chat request using Oracle demo format for OpenAI GPT OSS 120B
- content = oci.generative_ai_inference.models.TextContent()
- content.text = prompt
-
- message = oci.generative_ai_inference.models.Message()
- message.role = "USER"
- message.content = [content]
-
- chat_request = oci.generative_ai_inference.models.GenericChatRequest()
- chat_request.api_format = oci.generative_ai_inference.models.BaseChatRequest.API_FORMAT_GENERIC
- chat_request.messages = [message]
- chat_request.max_tokens = max_tokens
- chat_request.temperature = temperature
- chat_request.frequency_penalty = 0
- chat_request.presence_penalty = 0
- chat_request.top_p = 1
- chat_request.top_k = 0
-
- chat_detail = oci.generative_ai_inference.models.ChatDetails()
- chat_detail.serving_mode = oci.generative_ai_inference.models.OnDemandServingMode(model_id=self.model_id)
- chat_detail.chat_request = chat_request
- chat_detail.compartment_id = self.compartment_id
-
- # Call OCI GenAI
- response = self.genai_client.chat(chat_detail)
-
- # Extract response text
- response_text = ""
- if hasattr(response.data, 'chat_response') and response.data.chat_response:
- if hasattr(response.data.chat_response, 'choices') and response.data.chat_response.choices:
- choice = response.data.chat_response.choices[0]
- if hasattr(choice, 'message') and choice.message:
- if hasattr(choice.message, 'content') and choice.message.content:
- for content in choice.message.content:
- if hasattr(content, 'text'):
- response_text += content.text
-
- return {
- "success": True,
- "response": response_text.strip(),
- "method": "oci_direct",
- "model_id": self.model_id
- }
-
- except Exception as e:
- error_msg = str(e)
- print(f"OCI GenAI Direct call failed ({self.purpose}): {error_msg}")
-
- # Check for specific error types
- if "does not support" in error_msg:
- return {
- "success": False,
- "error": f"Model {self.model_id} API format incompatible",
- "response": "",
- "method": "model_error"
- }
-
- return {
- "success": False,
- "error": error_msg,
- "response": "",
- "method": "call_error"
- }
-
-
-class RouterRunnable(Runnable):
- """
- Intelligent routing using direct OCI GenAI calls
- """
-
- def __init__(self):
- self.genai_runnable = OciGenAIRunnable("router")
-
- def invoke(self, input_data: Dict[str, Any], config: Any = None) -> Dict[str, Any]:
- """
- Route user query and return routing decision
- """
- user_question = input_data.get("question", "")
- context = input_data.get("context", {})
-
- # Create routing prompt
- prompt = f"""You are an intelligent router for a data dashboard. Analyze the user query and decide which tool to use.
-
-Tools Available:
-1. DATA_QUERY: For getting NEW data from database (show orders, get customers, list products, etc.)
-2. CHART_EDIT: For creating ANY charts or visualizations (make chart, graph, pie chart, bar chart, etc.) - Will automatically get data if needed
-3. INSIGHT_QA: For analyzing current data (trends, patterns, outliers)
-
-IMPORTANT: If user asks for ANY chart/graph/visualization, always choose CHART_EDIT regardless of whether data exists or not.
-
-Context:
-- Has existing data: {context.get('has_data', False)}
-- Has existing chart: {context.get('has_chart', False)}
-
-User Query: "{user_question}"
-
-Respond with ONLY a JSON object:
-{{"route": "DATA_QUERY|CHART_EDIT|INSIGHT_QA", "reasoning": "Brief explanation", "confidence": 0.0-1.0}}"""
-
- if not self.genai_runnable.oci_available:
- return self._fallback_route(user_question)
-
- # Call OCI GenAI
- genai_input = {
- "prompt": prompt,
- "max_tokens": 200,
- "temperature": 0.3
- }
-
- result = self.genai_runnable.invoke(genai_input)
-
- if result.get("success"):
- try:
- # Parse JSON response
- route_data = json.loads(result["response"])
- return {
- "route": route_data.get("route", "DATA_QUERY"),
- "reasoning": route_data.get("reasoning", "GenAI routing"),
- "confidence": route_data.get("confidence", 0.9),
- "method": "oci_genai"
- }
- except json.JSONDecodeError:
- print(f"Failed to parse GenAI response: {result['response']}")
- return self._fallback_route(user_question)
- else:
- print(f"GenAI routing failed: {result.get('error')}")
- return self._fallback_route(user_question)
-
- def _fallback_route(self, user_question: str) -> Dict[str, Any]:
- """Simple rule-based fallback routing"""
- user_lower = user_question.lower()
-
- if any(word in user_lower for word in ["show", "get", "find", "list", "data"]):
- return {
- "route": "DATA_QUERY",
- "reasoning": "Fallback: Detected data request",
- "confidence": 0.5,
- "method": "fallback"
- }
- elif any(word in user_lower for word in ["chart", "pie", "bar", "line", "graph"]):
- return {
- "route": "CHART_EDIT",
- "reasoning": "Fallback: Detected chart modification",
- "confidence": 0.5,
- "method": "fallback"
- }
- else:
- return {
- "route": "INSIGHT_QA",
- "reasoning": "Fallback: Default to analysis",
- "confidence": 0.3,
- "method": "fallback"
- }
-
-
-class VizGeneratorRunnable(Runnable):
- """
- Generate visualization configs using direct OCI GenAI calls
- """
-
- def __init__(self):
- self.genai_runnable = OciGenAIRunnable("viz_generator")
-
- def invoke(self, input_data: Dict[str, Any], config: Any = None) -> Dict[str, Any]:
- """
- Generate chart config from data and user question
- """
- data = input_data.get("data", [])
- question = input_data.get("question", "")
-
- if not data:
- return {
- "success": False,
- "error": "No data provided for visualization"
- }
-
- # Analyze data structure
- sample_row = data[0] if data else {}
- columns = list(sample_row.keys()) if sample_row else []
-
- # Generate chart config prompt
- prompt = f"""Generate a chart configuration for this data visualization request.
-
-User Question: "{question}"
-Data Columns: {columns}
-Data Sample (first 2 rows): {data[:2]}
-Total Rows: {len(data)}
-
-Respond with ONLY a JSON object:
-{{"chart_type": "bar|line|pie|scatter", "x_axis": "column_name", "y_axis": "column_name", "title": "Chart Title", "caption": "Brief insight"}}"""
-
- if not self.genai_runnable.oci_available:
- return self._fallback_config(data, question)
-
- # Call OCI GenAI
- genai_input = {
- "prompt": prompt,
- "max_tokens": 300,
- "temperature": 0.3
- }
-
- result = self.genai_runnable.invoke(genai_input)
-
- if result.get("success"):
- try:
- # Parse JSON response
- config_data = json.loads(result["response"])
- return {
- "success": True,
- "config": config_data,
- "method": "oci_genai"
- }
- except json.JSONDecodeError:
- print(f"Failed to parse viz config: {result['response']}")
- return self._fallback_config(data, question)
- else:
- print(f"Viz generation failed: {result.get('error')}")
- return self._fallback_config(data, question)
-
- def _fallback_config(self, data: List[Dict], question: str) -> Dict[str, Any]:
- """Generate simple fallback chart config"""
- if not data:
- return {"success": False, "error": "No data"}
-
- sample_row = data[0]
- columns = list(sample_row.keys())
-
- # Find numeric columns
- numeric_cols = []
- for col in columns:
- try:
- float(str(sample_row[col]))
- numeric_cols.append(col)
- except (ValueError, TypeError):
- pass
-
- # Simple config generation
- if len(columns) >= 2:
- x_axis = columns[0]
- y_axis = numeric_cols[0] if numeric_cols else columns[1]
- chart_type = "bar"
- else:
- x_axis = columns[0]
- y_axis = columns[0]
- chart_type = "bar"
-
- return {
- "success": True,
- "config": {
- "chart_type": chart_type,
- "x_axis": x_axis,
- "y_axis": y_axis,
- "title": f"Chart for: {question}",
- "caption": "Fallback visualization configuration"
- },
- "method": "fallback"
- }
-
-
-class InsightQARunnable(Runnable):
- """
- Generate insights using direct OCI GenAI calls
- """
-
- def __init__(self):
- self.genai_runnable = OciGenAIRunnable("insight_qa")
-
- def invoke(self, input_data: Dict[str, Any], config: Any = None) -> Dict[str, Any]:
- """
- Generate insights about the current data
- """
- data = input_data.get("data", [])
- question = input_data.get("question", "")
-
- if not data:
- return {
- "success": False,
- "error": "No data available for analysis",
- "response_type": "text_response"
- }
-
- # Create analysis prompt
- data_summary = {
- "total_rows": len(data),
- "columns": list(data[0].keys()) if data else [],
- "sample_data": data[:3]
- }
-
- prompt = f"""Analyze this data and answer the user's question with insights.
-
-User Question: "{question}"
-
-Data Summary:
-- Total rows: {data_summary['total_rows']}
-- Columns: {data_summary['columns']}
-- Sample data: {data_summary['sample_data']}
-
-Provide a concise analysis with specific insights, trends, or patterns you observe in the data.
-"""
-
- if not self.genai_runnable.oci_available:
- return self._fallback_insight(data, question)
-
- # Call OCI GenAI
- genai_input = {
- "prompt": prompt,
- "max_tokens": 400,
- "temperature": 0.7
- }
-
- result = self.genai_runnable.invoke(genai_input)
-
- if result.get("success"):
- return {
- "success": True,
- "text_response": result["response"],
- "data": data,
- "response_type": "text_response",
- "method": "oci_genai"
- }
- else:
- print(f"⚠️ Insight generation failed: {result.get('error')}")
- return self._fallback_insight(data, question)
-
- def _fallback_insight(self, data: List[Dict], question: str) -> Dict[str, Any]:
- """Generate simple fallback insights"""
- if not data:
- return {
- "success": True,
- "text_response": "No data available for analysis.",
- "response_type": "text_response",
- "method": "fallback"
- }
-
- insights = [
- f"Dataset contains {len(data)} records",
- f"Available fields: {', '.join(data[0].keys()) if data else 'None'}"
- ]
-
- # Simple numeric analysis
- for field in data[0].keys() if data else []:
- try:
- values = [float(row.get(field, 0)) for row in data[:10]]
- if values:
- avg_val = sum(values) / len(values)
- insights.append(f"{field} average: {avg_val:.2f}")
- except (ValueError, TypeError):
- pass
-
- return {
- "success": True,
- "text_response": "\n".join(insights),
- "data": data,
- "response_type": "text_response",
- "method": "fallback"
- }
\ No newline at end of file
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/oci_runnables.py b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/oci_runnables.py
deleted file mode 100644
index 212854ad1..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/orchestration/oci_runnables.py
+++ /dev/null
@@ -1,374 +0,0 @@
-"""
-LangChain Runnables that wrap OCI SDK calls for clean integration
-"""
-
-from langchain_core.runnables import Runnable
-try:
- from langchain_oci.chat_models import ChatOCIGenAI
-except ImportError:
- try:
- from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI
- except ImportError:
- print("⚠️ Neither langchain-oci nor langchain-community ChatOCIGenAI available")
- ChatOCIGenAI = None
-from langchain_core.messages import HumanMessage
-from typing import Dict, Any, List
-import oci
-from utils import config
-import json
-
-class OciSqlAgentRunnable(Runnable):
- """
- LangChain Runnable that wraps OCI Agent Runtime SDK to extract tool_outputs reliably
- """
-
- def __init__(self):
- # Initialize OCI Agent Runtime client
- try:
- oci_config = oci.config.from_file()
- # Override region to match the agent endpoint
- oci_config['region'] = 'eu-frankfurt-1'
- self.client = oci.generative_ai_agent_runtime.GenerativeAiAgentRuntimeClient(oci_config)
- self.agent_endpoint_id = config.AGENT_ENDPOINT_ID
- print("OCI SQL Agent Runnable initialized with eu-frankfurt-1")
- except Exception as e:
- print(f"Failed to initialize OCI Agent Runtime: {e}")
- self.client = None
- self.agent_endpoint_id = None
-
- def invoke(self, input_data: Dict[str, Any], config: Any = None) -> Dict[str, Any]:
- """
- Call OCI Agent and extract tool_outputs[0].result for reliable data
- """
- user_question = input_data.get("question", "") if isinstance(input_data, dict) else str(input_data)
-
- if not self.client or not self.agent_endpoint_id:
- return {
- "success": False,
- "error": "OCI Agent Runtime not available",
- "data": [],
- "agent_response": "Agent not initialized"
- }
-
- try:
- print(f"OCI SQL Agent: Executing query: {user_question}")
-
- # Step 1: Create a session first (required for sessionId)
- create_session_response = self.client.create_session(
- create_session_details=oci.generative_ai_agent_runtime.models.CreateSessionDetails(
- display_name="SQL Query Session",
- description="Session for SQL query execution"
- ),
- agent_endpoint_id=self.agent_endpoint_id
- )
- session_id = create_session_response.data.id
- print(f"Created session: {session_id}")
-
- # Step 2: Create chat request with required sessionId
- chat_request = oci.generative_ai_agent_runtime.models.ChatDetails(
- user_message=user_question,
- session_id=session_id,
- should_stream=False
- )
-
- # Step 3: Call OCI Agent
- response = self.client.chat(
- agent_endpoint_id=self.agent_endpoint_id,
- chat_details=chat_request
- )
-
- # Extract message content
- message_content = ""
- if hasattr(response.data, 'message') and response.data.message:
- if hasattr(response.data.message, 'content') and response.data.message.content:
- if hasattr(response.data.message.content, 'text'):
- message_content = response.data.message.content.text or ""
-
- # Extract tool outputs (where SQL data lives)
- tool_outputs = getattr(response.data, 'tool_outputs', []) or []
- data = []
- generated_sql = None
- additional_info = None
-
- if tool_outputs and len(tool_outputs) > 0:
- result = tool_outputs[0].result if hasattr(tool_outputs[0], 'result') else None
- if result:
- try:
- # Parse JSON data from tool output
- if isinstance(result, str):
- parsed_result = json.loads(result)
- else:
- parsed_result = result
-
- if isinstance(parsed_result, list):
- data = parsed_result
- elif isinstance(parsed_result, dict):
- data = parsed_result.get('data', [])
- generated_sql = parsed_result.get('generated_sql')
- additional_info = parsed_result.get('additional_info')
- except json.JSONDecodeError:
- # If not JSON, treat as raw data
- data = [{"result": result}]
-
- return {
- "success": True,
- "agent_response": message_content.strip(),
- "data": data,
- "generated_sql": generated_sql,
- "additional_info": additional_info,
- "tool_outputs": tool_outputs # Pass through for transparency
- }
-
- except Exception as e:
- print(f"OCI SQL Agent error: {e}")
- return {
- "success": False,
- "error": str(e),
- "data": [],
- "agent_response": f"Error calling SQL Agent: {str(e)}"
- }
-
-
-class RouterRunnable(Runnable):
- """
- LangChain Runnable for intelligent routing using ChatOCIGenAI
- """
-
- def __init__(self):
- self.genai_client = None
- self.oci_available = False
-
- if ChatOCIGenAI is None:
- print("ChatOCIGenAI not available - Router using fallback")
- return
-
- try:
- self.genai_client = ChatOCIGenAI(
- model_id=config.MODEL_ID,
- service_endpoint=config.SERVICE_ENDPOINT,
- compartment_id=config.COMPARTMENT_ID,
- model_kwargs={
- "temperature": config.TEMPERATURE,
- "top_p": config.TOP_P,
- "max_tokens": config.MAX_TOKENS
- }
- )
- self.oci_available = True
- print("Router Runnable with ChatOCIGenAI initialized")
- except Exception as e:
- print(f"Router Runnable fallback mode: {e}")
- self.genai_client = None
- self.oci_available = False
-
- def invoke(self, input_data: Dict[str, Any]) -> Dict[str, Any]:
- """
- Route user query and return routing decision
- """
- user_question = input_data.get("question", "") if isinstance(input_data, dict) else str(input_data)
- context = input_data.get("context", {}) if isinstance(input_data, dict) else {}
-
- # Routing prompt
- prompt = f"""You are an intelligent router for a data dashboard. Analyze the user query and decide which tool to use.
-
-Tools Available:
-1. DATA_QUERY: For getting new data from database (show orders, get customers, etc.)
-2. CHART_EDIT: For modifying existing charts (make it pie chart, sort by amount, etc.)
-3. INSIGHT_QA: For analyzing current data (trends, patterns, outliers)
-
-User Query: "{user_question}"
-
-Respond with ONLY a JSON object:
-{{
- "route": "DATA_QUERY|CHART_EDIT|INSIGHT_QA",
- "reasoning": "Brief explanation",
- "confidence": 0.0-1.0,
- "params": {{}}
-}}"""
-
- if self.oci_available:
- try:
- messages = [HumanMessage(content=prompt)]
- response = self.genai_client.invoke(messages)
-
- # Extract content from response
- if hasattr(response, 'content'):
- content = response.content
- else:
- content = str(response)
-
- # Parse JSON response
- try:
- route_data = json.loads(content)
- return {
- "route": route_data.get("route", "DATA_QUERY"),
- "reasoning": route_data.get("reasoning", "GenAI routing"),
- "confidence": route_data.get("confidence", 0.9),
- "params": route_data.get("params", {})
- }
- except json.JSONDecodeError:
- print(f"Failed to parse GenAI response: {content}")
- return self._fallback_route(user_question)
-
- except Exception as e:
- print(f"GenAI routing error: {e}")
- return self._fallback_route(user_question)
- else:
- return self._fallback_route(user_question)
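The `json.loads(content)` call above assumes the model returns bare JSON; in practice models often wrap the object in a ```json fence, which would trip the `JSONDecodeError` path. A hedged sketch of a more tolerant parse (the helper name is hypothetical, not part of this repo):

```python
import json
import re

def parse_routing_json(content: str) -> dict:
    """Pull the first {...} object out of a model reply, tolerating
    markdown fences or prose around it; {} when nothing parses."""
    match = re.search(r"\{.*\}", content, re.DOTALL)
    return json.loads(match.group(0)) if match else {}

decision = parse_routing_json('```json\n{"route": "CHART_EDIT", "confidence": 0.9}\n```')
print(decision["route"])
```

A greedy `\{.*\}` works here because the routing reply contains a single object; replies with multiple objects would need a stricter extractor.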
-
- def _fallback_route(self, user_question: str) -> Dict[str, Any]:
- """Simple rule-based fallback routing"""
- user_lower = user_question.lower()
-
- if any(word in user_lower for word in ["show", "get", "find", "list", "data"]):
- return {
- "route": "DATA_QUERY",
- "reasoning": "Fallback: Detected data request",
- "confidence": 0.5,
- "params": {}
- }
- elif any(word in user_lower for word in ["chart", "pie", "bar", "line", "graph"]):
- return {
- "route": "CHART_EDIT",
- "reasoning": "Fallback: Detected chart modification",
- "confidence": 0.5,
- "params": {}
- }
- else:
- return {
- "route": "INSIGHT_QA",
- "reasoning": "Fallback: Default to analysis",
- "confidence": 0.3,
- "params": {}
- }
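The fallback routing above is order-sensitive, which is easiest to see when the branches are isolated; a minimal sketch of the same keyword logic:

```python
def fallback_route(user_question: str) -> str:
    """Keyword fallback mirroring _fallback_route: data verbs are
    checked first, chart words second, analysis is the default."""
    q = user_question.lower()
    if any(word in q for word in ("show", "get", "find", "list", "data")):
        return "DATA_QUERY"
    if any(word in q for word in ("chart", "pie", "bar", "line", "graph")):
        return "CHART_EDIT"
    return "INSIGHT_QA"
```

Note the ordering caveat: "show this as a pie chart" routes to DATA_QUERY because "show" is matched before any chart word, which is one reason the GenAI router is preferred when available.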
-
-
-class VizGeneratorRunnable(Runnable):
- """
- LangChain Runnable for generating visualization configs from data
- """
-
- def __init__(self):
- try:
- self.genai_client = ChatOCIGenAI(
- model_id=config.MODEL_ID,
- service_endpoint=config.SERVICE_ENDPOINT,
- compartment_id=config.COMPARTMENT_ID,
- model_kwargs={
- "temperature": 0.3,
- "top_p": 0.9,
- "max_tokens": 1000
- }
- )
- self.oci_available = True
- print("Viz Generator Runnable initialized")
- except Exception as e:
- print(f"Viz Generator fallback mode: {e}")
- self.genai_client = None
- self.oci_available = False
-
- def invoke(self, input_data: Dict[str, Any]) -> Dict[str, Any]:
- """
- Generate chart config from data and user question
- """
- data = input_data.get("data", [])
- question = input_data.get("question", "")
- suggested_type = input_data.get("chart_type", "auto")
-
- if not data:
- return {
- "success": False,
- "error": "No data provided for visualization"
- }
-
- # Analyze data structure
- sample_row = data[0] if data else {}
- columns = list(sample_row.keys()) if sample_row else []
-
- # Generate chart config prompt
- prompt = f"""Generate a chart configuration for this data visualization request.
-
-User Question: "{question}"
-Suggested Chart Type: {suggested_type}
-Data Columns: {columns}
-Data Sample (first 3 rows): {data[:3]}
-Total Rows: {len(data)}
-
-Respond with ONLY a JSON object:
-{{
- "chart_type": "bar|line|pie|scatter",
- "x_axis": "column_name",
- "y_axis": "column_name",
- "title": "Chart Title",
- "caption": "Brief insight about the data",
- "color_field": "optional_column_for_colors"
-}}"""
-
- if self.oci_available:
- try:
- messages = [HumanMessage(content=prompt)]
- response = self.genai_client.invoke(messages)
-
- # Extract content
- if hasattr(response, 'content'):
- content = response.content
- else:
- content = str(response)
-
- # Parse JSON response
- try:
- config_data = json.loads(content)
- return {
- "success": True,
- "config": config_data,
- "method": "genai_generated"
- }
- except json.JSONDecodeError:
- print(f"Failed to parse viz config: {content}")
- return self._fallback_config(data, question)
-
- except Exception as e:
- print(f"Viz generation error: {e}")
- return self._fallback_config(data, question)
- else:
- return self._fallback_config(data, question)
-
- def _fallback_config(self, data: List[Dict], question: str) -> Dict[str, Any]:
- """Generate simple fallback chart config"""
- if not data:
- return {"success": False, "error": "No data"}
-
- sample_row = data[0]
- columns = list(sample_row.keys())
-
- # Find numeric columns
- numeric_cols = []
- for col in columns:
- try:
- float(str(sample_row[col]))
- numeric_cols.append(col)
- except (ValueError, TypeError):
- pass
-
- # Simple config generation
- if len(columns) >= 2:
- x_axis = columns[0]
- y_axis = numeric_cols[0] if numeric_cols else columns[1]
- chart_type = "bar"
- else:
- x_axis = columns[0]
- y_axis = columns[0]
- chart_type = "bar"
-
- return {
- "success": True,
- "config": {
- "chart_type": chart_type,
- "x_axis": x_axis,
- "y_axis": y_axis,
- "title": f"Chart for: {question}",
- "caption": "Fallback visualization configuration"
- },
- "method": "fallback"
- }
\ No newline at end of file
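The axis-picking heuristic in `_fallback_config` probes the first row with a `float` cast, so numeric strings from the database count as numeric; a standalone sketch of that heuristic (helper names are illustrative):

```python
def is_numeric(value) -> bool:
    # Same cast-based probe as _fallback_config: strings like "1599.99" count too
    try:
        float(str(value))
        return True
    except (ValueError, TypeError):
        return False

def pick_axes(rows):
    """First column as x-axis, first numeric-looking column as y-axis."""
    sample = rows[0]
    cols = list(sample.keys())
    numeric = [c for c in cols if is_numeric(sample[c])]
    x_axis = cols[0]
    y_axis = numeric[0] if numeric else (cols[1] if len(cols) > 1 else cols[0])
    return x_axis, y_axis

print(pick_axes([{"REGION": "Europe", "TOTAL_AMOUNT": "1599.99"}]))
```

One limitation worth knowing: when the first column is a numeric ID (e.g. CUSTOMER_ID), it is both the x-axis and the first y-axis candidate, so charts of ID-first result sets can look odd.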
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/requirements.txt b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/requirements.txt
deleted file mode 100644
index d9e66384a..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/requirements.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-# Core dependencies
-fastapi==0.104.1
-uvicorn==0.24.0
-pydantic==2.5.0
-python-multipart==0.0.6
-
-# OCI SDK
-oci==2.119.1
-
-# LangChain
-langchain==0.1.0
-langchain-core==0.1.10
-langchain-community==0.0.13
-
-# Data visualization
-matplotlib==3.8.2
-seaborn==0.13.0
-pandas==2.1.4
-numpy==1.26.2
-
-# Utilities
-python-dotenv==1.0.0
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/tools/genai_chart_generator.py b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/tools/genai_chart_generator.py
deleted file mode 100644
index 1b42fe021..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/tools/genai_chart_generator.py
+++ /dev/null
@@ -1,341 +0,0 @@
-"""
-GenAI-Powered Chart Generator
-Uses OCI GenAI to generate custom visualization code based on data and user requirements
-"""
-
-import json
-import oci
-import matplotlib.pyplot as plt
-import seaborn as sns
-import pandas as pd
-import numpy as np
-import io
-import base64
-from typing import Dict, Any, List, Optional
-from utils import config
-
-
-class GenAIChartGenerator:
- """
- Generate custom charts using OCI GenAI to create Python visualization code
- """
-
- CHART_GENERATION_PROMPT = """You are an expert data visualization developer. Generate Python code to create beautiful, insightful charts.
-
-User Request: "{user_request}"
-
-Available Data (first 3 rows shown):
-{data_preview}
-
-Data Summary:
-- Total rows: {total_rows}
-- Columns: {columns}
-- Numeric columns: {numeric_columns}
-
-Requirements:
-1. Create a matplotlib/seaborn visualization
-2. Use the provided data variable called 'df' (pandas DataFrame)
-3. Make the chart beautiful with proper titles, labels, colors
-4. Return the chart as base64 image
-5. Handle any data preprocessing needed
-6. Choose the most appropriate chart type for the data and request
-
-Generate ONLY Python code in this format:
-```python
-import matplotlib.pyplot as plt
-import seaborn as sns
-import pandas as pd
-import numpy as np
-import io
-import base64
-
-# Set style for beautiful charts
-plt.style.use('seaborn-v0_8')
-sns.set_palette("husl")
-
-# Your visualization code here
-# Use df as the DataFrame variable
-# Example:
-fig, ax = plt.subplots(figsize=(12, 8))
-
-# Create your chart (customize based on user request and data)
-# ... your chart code ...
-
-# Finalize chart
-plt.title("Your Chart Title", fontsize=16, fontweight='bold')
-plt.tight_layout()
-
-# Convert to base64
-img_buffer = io.BytesIO()
-plt.savefig(img_buffer, format='png', dpi=150, bbox_inches='tight',
- facecolor='white', edgecolor='none')
-img_buffer.seek(0)
-img_base64 = base64.b64encode(img_buffer.getvalue()).decode('utf-8')
-plt.close()
-
-# Return the base64 string
-chart_base64 = img_base64
-```
-
-Generate the complete Python code that will create an appropriate visualization."""
-
- def __init__(self):
- # Initialize direct OCI GenAI client using chat API
- try:
- oci_config = oci.config.from_file()
- oci_config['region'] = 'eu-frankfurt-1'  # keep in sync with the region in config.SERVICE_ENDPOINT
- self.genai_client = oci.generative_ai_inference.GenerativeAiInferenceClient(oci_config)
- self.genai_client.base_client.endpoint = config.SERVICE_ENDPOINT
-
- self.model_id = config.MODEL_ID
- self.compartment_id = config.COMPARTMENT_ID
- self.oci_available = True
- print("OCI GenAI Chart Generator initialized successfully")
- except Exception as e:
- print(f"OCI GenAI Chart Generator not available: {e}")
- self.genai_client = None
- self.oci_available = False
-
- def generate_chart(self, user_request: str, data: List[Dict], chart_params: Dict[str, Any] = None) -> Dict[str, Any]:
- """
- Generate custom chart using GenAI-generated code
- """
- try:
- print(f"GenAI Chart Generator: Creating chart for: {user_request}")
-
- if not data:
- return {
- "success": False,
- "error": "No data provided for chart generation"
- }
-
- # Prepare data summary for GenAI
- df = pd.DataFrame(data)
- data_preview = df.head(3).to_dict('records')
- columns = list(df.columns)
- numeric_columns = list(df.select_dtypes(include=[np.number]).columns)
-
- # Create GenAI prompt
- prompt = self.CHART_GENERATION_PROMPT.format(
- user_request=user_request,
- data_preview=json.dumps(data_preview, indent=2, default=str),
- total_rows=len(df),
- columns=columns,
- numeric_columns=numeric_columns
- )
-
- # Call GenAI to generate code
- genai_response = self._call_genai(prompt)
- print(f"GenAI Response length: {len(genai_response)} chars")
- print(f"GenAI Response preview: {genai_response[:200]}...")
-
- # Extract Python code from response
- python_code = self._extract_code(genai_response)
- print(f"Extracted code length: {len(python_code) if python_code else 0} chars")
-
- if not python_code:
- print("No Python code extracted, using fallback")
- return self._fallback_chart(df, user_request)
-
- print(f"Code preview: {python_code[:100]}...")
-
- # Execute the generated code
- print("Executing generated Python code...")
- chart_result = self._execute_chart_code(python_code, df)
- print(f"Chart execution result: {chart_result.get('success', False)}")
-
- if chart_result["success"]:
- return {
- "success": True,
- "chart_base64": chart_result["chart_base64"],
- "generated_code": python_code,
- "method": "genai_generated",
- "chart_config": {
- "title": f"GenAI Chart: {user_request}",
- "type": "custom",
- "description": "Custom chart generated using GenAI"
- }
- }
- else:
- print(f"Generated code failed, using fallback: {chart_result['error']}")
- return self._fallback_chart(df, user_request)
-
- except Exception as e:
- print(f"GenAI Chart Generation error: {e}")
- return self._fallback_chart(pd.DataFrame(data) if data else pd.DataFrame(), user_request)
-
- def _call_genai(self, prompt: str) -> str:
- """
- Call OCI GenAI model to generate chart code using direct Chat API
- """
- try:
- print("Creating chat request...")
- # Create chat request using Oracle demo format for OpenAI GPT OSS 120B
- content = oci.generative_ai_inference.models.TextContent()
- content.text = prompt
-
- message = oci.generative_ai_inference.models.Message()
- message.role = "USER"
- message.content = [content]
-
- chat_request = oci.generative_ai_inference.models.GenericChatRequest()
- chat_request.api_format = oci.generative_ai_inference.models.BaseChatRequest.API_FORMAT_GENERIC
- chat_request.messages = [message]
- chat_request.max_tokens = 2000
- chat_request.temperature = 0.3
- chat_request.frequency_penalty = 0
- chat_request.presence_penalty = 0
- chat_request.top_p = 1
- chat_request.top_k = 0
-
- chat_detail = oci.generative_ai_inference.models.ChatDetails()
- chat_detail.serving_mode = oci.generative_ai_inference.models.OnDemandServingMode(model_id=self.model_id)
- chat_detail.chat_request = chat_request
- chat_detail.compartment_id = self.compartment_id
-
- # Call OCI GenAI
- print("Calling OCI GenAI Chat API...")
- response = self.genai_client.chat(chat_detail)
- print("Got response from OCI GenAI")
-
- # Extract response text
- response_text = ""
- if hasattr(response.data, 'chat_response') and response.data.chat_response:
- if hasattr(response.data.chat_response, 'choices') and response.data.chat_response.choices:
- choice = response.data.chat_response.choices[0]
- if hasattr(choice, 'message') and choice.message:
- if hasattr(choice.message, 'content') and choice.message.content:
- for part in choice.message.content:
- if hasattr(part, 'text'):
- response_text += part.text
-
- return response_text.strip()
-
- except Exception as e:
- print(f"GenAI API call failed: {e}")
- return f"Error: {str(e)}"
-
- def _extract_code(self, genai_response: str) -> Optional[str]:
- """
- Extract Python code from GenAI response
- """
- try:
- # Look for code blocks
- if "```python" in genai_response:
- start = genai_response.find("```python") + 9
- end = genai_response.find("```", start)
- if end != -1:
- return genai_response[start:end].strip()
- elif "```" in genai_response:
- start = genai_response.find("```") + 3
- end = genai_response.find("```", start)
- if end != -1:
- return genai_response[start:end].strip()
-
- # If no code blocks, try to find code patterns
- lines = genai_response.split('\n')
- code_lines = []
- in_code = False
-
- for line in lines:
- if any(keyword in line for keyword in ['import ', 'plt.', 'sns.', 'fig,', 'ax =']):
- in_code = True
- if in_code:
- code_lines.append(line)
-
- return '\n'.join(code_lines) if code_lines else None
-
- except Exception as e:
- print(f"Code extraction error: {e}")
- return None
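The fence-handling in `_extract_code` (prefer a ```python fence, fall back to any fence) can also be expressed with two regexes; a compact sketch of the same priority order (`extract_code` here is a standalone illustration, not the method itself):

```python
import re

def extract_code(text: str):
    """Prefer a ```python fence, fall back to any fence;
    None when the reply contains no fenced block at all."""
    match = re.search(r"```python\s*\n(.*?)```", text, re.DOTALL)
    if match is None:
        match = re.search(r"```\s*\n(.*?)```", text, re.DOTALL)
    return match.group(1).strip() if match else None

print(extract_code("Here is the chart code:\n```python\nprint('hello')\n```\nDone."))
```

The lazy `(.*?)` stops at the first closing fence, which matters when the model appends prose after the code block.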
-
- def _execute_chart_code(self, python_code: str, df: pd.DataFrame) -> Dict[str, Any]:
- """
- Safely execute the generated Python code
- """
- try:
- # Create a safe execution environment
- safe_globals = {
- 'plt': plt,
- 'sns': sns,
- 'pd': pd,
- 'np': np,
- 'io': io,
- 'base64': base64,
- 'df': df,
- 'chart_base64': None
- }
-
- # Execute the code
- exec(python_code, safe_globals)
-
- # Get the result
- chart_base64 = safe_globals.get('chart_base64')
-
- if chart_base64:
- return {
- "success": True,
- "chart_base64": chart_base64
- }
- else:
- return {
- "success": False,
- "error": "No chart_base64 variable found in generated code"
- }
-
- except Exception as e:
- return {
- "success": False,
- "error": f"Code execution error: {str(e)}"
- }
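The execution helper above runs model-generated code with `exec` against a curated globals dict. That dict is a namespace, not a sandbox: the generated code can still `import os` or do anything the process can, so production use would want a subprocess or real sandbox. A minimal sketch of the round-trip pattern itself, with a hypothetical stand-in for the generated plotting code:

```python
import base64
import io

# Hypothetical stand-in for model-generated code; by convention it
# must assign its result to the chart_base64 name
generated_code = (
    "buf = io.BytesIO(b'fake-png-bytes')\n"
    "chart_base64 = base64.b64encode(buf.getvalue()).decode('utf-8')\n"
)

namespace = {"io": io, "base64": base64, "chart_base64": None}
exec(generated_code, namespace)  # names the code assigns land in this dict

print(namespace["chart_base64"])
```

Reading the result back out of the same dict is exactly how `_execute_chart_code` recovers `chart_base64` after execution.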
-
- def _fallback_chart(self, df: pd.DataFrame, user_request: str) -> Dict[str, Any]:
- """
- Generate a simple fallback chart when GenAI fails
- """
- try:
- fig, ax = plt.subplots(figsize=(10, 6))
-
- # Choose chart based on data
- if not df.empty:
- numeric_cols = df.select_dtypes(include=[np.number]).columns
- if len(numeric_cols) >= 2:
- # Scatter plot for numeric data
- ax.scatter(df[numeric_cols[0]], df[numeric_cols[1]], alpha=0.7)
- ax.set_xlabel(numeric_cols[0])
- ax.set_ylabel(numeric_cols[1])
- elif len(numeric_cols) == 1:
- # Bar chart for small datasets, line chart for larger ones
- if len(df) <= 20:
- df[numeric_cols[0]].plot(kind='bar', ax=ax)
- else:
- df[numeric_cols[0]].plot(kind='line', ax=ax)
- ax.set_ylabel(numeric_cols[0])
- else:
- # No numeric columns: plot value counts of the first column
- df[df.columns[0]].value_counts().head(20).plot(kind='bar', ax=ax)
- ax.set_ylabel('count')
-
- plt.title(f"Chart for: {user_request}", fontsize=14)
- plt.tight_layout()
-
- # Convert to base64
- img_buffer = io.BytesIO()
- plt.savefig(img_buffer, format='png', dpi=150, bbox_inches='tight')
- img_buffer.seek(0)
- chart_base64 = base64.b64encode(img_buffer.getvalue()).decode('utf-8')
- plt.close()
-
- return {
- "success": True,
- "chart_base64": chart_base64,
- "method": "fallback",
- "chart_config": {
- "title": f"Fallback Chart: {user_request}",
- "type": "auto",
- "description": "Simple fallback visualization"
- }
- }
-
- except Exception as e:
- return {
- "success": False,
- "error": f"Fallback chart error: {str(e)}"
- }
\ No newline at end of file
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/utils/config.py b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/utils/config.py
deleted file mode 100644
index 35278f713..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/backend/utils/config.py
+++ /dev/null
@@ -1,45 +0,0 @@
-MODEL_ID = "ocid1.generativeaimodel.oc1.eu-frankfurt-1.YOUR_MODEL_ID"
-SERVICE_ENDPOINT = "https://inference.generativeai.eu-frankfurt-1.oci.oraclecloud.com"
-COMPARTMENT_ID = "ocid1.compartment.oc1..YOUR_COMPARTMENT_ID"
-AGENT_ENDPOINT_ID = "ocid1.genaiagentendpoint.oc1.eu-frankfurt-1.YOUR_AGENT_ENDPOINT_ID"
-SQL_AGENT_ID = "ocid1.genaiagentendpoint.oc1.eu-frankfurt-1.YOUR_AGENT_ENDPOINT_ID"
-SQL_AGENT_ENDPOINT = "https://agent-runtime.generativeai.eu-frankfurt-1.oci.oraclecloud.com"
-
-TEMPERATURE = 0.1
-MAX_TOKENS = 1024
-TOP_P = 0.9
-MAX_ROWS_IN_CHART = 50
-CHART_EXPORT_FORMAT = "json"
-DEBUG = False
-AUTH = "API_KEY"
-
-# Database Schema - Customize for your database
-DATABASE_SCHEMA = {
- "CUSTOMERS": [
- "CUSTOMER_ID", "CUSTOMER_NAME", "EMAIL", "SIGNUP_DATE", "SEGMENT",
- "COUNTRY", "LIFETIME_VALUE", "CREATION_DATE", "CREATED_BY",
- "LAST_UPDATED_DATE", "LAST_UPDATED_BY"
- ],
- "PRODUCTS": [
- "PRODUCT_ID", "PRODUCT_NAME", "CATEGORY", "PRICE", "COST",
- "STOCK_QUANTITY", "LAUNCH_DATE", "CREATION_DATE", "CREATED_BY",
- "LAST_UPDATED_DATE", "LAST_UPDATED_BY"
- ],
- "ORDERS": [
- "ORDER_ID", "CUSTOMER_ID", "ORDER_DATE", "TOTAL_AMOUNT", "STATUS",
- "REGION", "SALES_REP", "CREATION_DATE", "CREATED_BY",
- "LAST_UPDATED_DATE", "LAST_UPDATED_BY"
- ],
- "ORDER_ITEMS": [
- "ORDER_ITEM_ID", "ORDER_ID", "PRODUCT_ID", "QUANTITY", "UNIT_PRICE",
- "DISCOUNT_PERCENT", "CREATION_DATE", "CREATED_BY",
- "LAST_UPDATED_DATE", "LAST_UPDATED_BY"
- ]
-}
-
-ECOMMERCE_CORE_FIELDS = {
- "CUSTOMERS": ["CUSTOMER_ID", "CUSTOMER_NAME", "SEGMENT", "COUNTRY", "LIFETIME_VALUE"],
- "PRODUCTS": ["PRODUCT_ID", "PRODUCT_NAME", "CATEGORY", "PRICE"],
- "ORDERS": ["ORDER_ID", "CUSTOMER_ID", "ORDER_DATE", "TOTAL_AMOUNT", "STATUS", "REGION"],
- "ORDER_ITEMS": ["ORDER_ITEM_ID", "ORDER_ID", "PRODUCT_ID", "QUANTITY", "UNIT_PRICE"]
-}
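A schema dict like `DATABASE_SCHEMA` is typically flattened into prompt text when briefing the SQL agent on available tables; a hypothetical sketch of that step (`schema_to_prompt` is not part of this repo, and the two-table schema below is a trimmed sample):

```python
DATABASE_SCHEMA = {
    "CUSTOMERS": ["CUSTOMER_ID", "CUSTOMER_NAME", "SEGMENT"],
    "ORDERS": ["ORDER_ID", "CUSTOMER_ID", "TOTAL_AMOUNT"],
}

def schema_to_prompt(schema: dict) -> str:
    """Render each table as NAME(col, col, ...) on its own line."""
    return "\n".join(f"{table}({', '.join(cols)})" for table, cols in schema.items())

print(schema_to_prompt(DATABASE_SCHEMA))
```

Keeping the rendering in one helper means schema changes in `config.py` flow into agent prompts without touching prompt templates.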
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/customers.csv b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/customers.csv
deleted file mode 100644
index c01f396a4..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/customers.csv
+++ /dev/null
@@ -1,16 +0,0 @@
-CUSTOMER_ID,CUSTOMER_NAME,EMAIL,SIGNUP_DATE,SEGMENT,COUNTRY,LIFETIME_VALUE
-1001,Tech Innovators Inc,contact@techinnovators.com,2023-01-15,Enterprise,USA,25000
-1002,Global Retail Corp,orders@globalretail.com,2023-02-20,Enterprise,Canada,18500
-1003,Startup Solutions,hello@startupsol.com,2023-01-30,SMB,UK,8500
-1004,Digital Commerce Co,sales@digitalcom.com,2023-03-10,Enterprise,Australia,22000
-1005,Local Business Hub,info@localbiz.com,2023-02-05,SMB,USA,6200
-1006,European Distributors,contact@eudist.com,2023-04-12,SMB,Germany,7800
-1007,Premium Brands Ltd,premium@brands.com,2023-03-25,Enterprise,Spain,28500
-1008,Creative Studios,studio@creative.com,2023-01-08,SMB,France,9200
-1009,Asia Pacific Trade,trade@apac.com,2023-02-18,Enterprise,Japan,31000
-1010,Market Leaders Inc,leaders@market.com,2023-04-05,Enterprise,Mexico,24800
-1011,Regional Partners,partners@regional.com,2023-05-12,SMB,Brazil,5900
-1012,Innovation Labs,labs@innovation.com,2023-06-08,Enterprise,Singapore,19500
-1013,Growth Ventures,growth@ventures.com,2023-07-15,SMB,India,7100
-1014,Excellence Corp,corp@excellence.com,2023-08-22,Enterprise,South Korea,26800
-1015,Future Tech,future@tech.com,2023-09-10,SMB,Netherlands,8900
\ No newline at end of file
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/order_items.csv b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/order_items.csv
deleted file mode 100644
index 933cab359..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/order_items.csv
+++ /dev/null
@@ -1,36 +0,0 @@
-order_item_id,order_id,product_id,quantity,unit_price,discount_percent
-4001,3001,2001,2,2999.99,10
-4002,3001,2009,1,5000.00,0
-4003,3002,2002,2,1899.99,0
-4004,3003,2012,1,1799.99,0
-4005,3004,2004,1,3499.99,5
-4006,3004,2010,1,2500.00,0
-4007,3005,2005,1,1299.99,0
-4008,3005,2006,1,599.99,0
-4009,3006,2007,1,1599.99,0
-4010,3007,2001,3,2999.99,15
-4011,3007,2009,1,5000.00,0
-4012,3008,2003,2,899.99,10
-4013,3008,2010,1,2500.00,0
-4014,3009,2008,2,899.99,0
-4015,3009,2011,2,1200.00,5
-4016,3010,2004,2,3499.99,10
-4017,3010,2015,1,7500.00,0
-4018,3011,2002,2,1899.99,5
-4019,3011,2005,1,1299.99,0
-4020,3012,2013,1,999.99,0
-4021,3012,2006,2,599.99,10
-4022,3013,2005,1,1299.99,0
-4023,3014,2001,1,2999.99,0
-4024,3014,2009,1,5000.00,0
-4025,3015,2012,1,1799.99,0
-4026,3015,2011,1,1200.00,0
-4027,3016,2015,1,7500.00,5
-4028,3016,2001,1,2999.99,0
-4029,3016,2010,1,2500.00,0
-4030,3017,2004,2,3499.99,8
-4031,3017,2014,1,2299.99,0
-4032,3018,2001,1,2999.99,0
-4033,3019,2012,1,1799.99,0
-4034,3020,2002,2,1899.99,5
-4035,3020,2011,1,1200.00,0
\ No newline at end of file
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/orders.csv b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/orders.csv
deleted file mode 100644
index ff180257f..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/orders.csv
+++ /dev/null
@@ -1,21 +0,0 @@
-order_id,customer_id,order_date,total_amount,status,region,sales_rep
-3001,1001,2024-01-15,8999.97,DELIVERED,North America,Sarah Chen
-3002,1002,2024-01-20,3799.98,DELIVERED,North America,Mike Johnson
-3003,1003,2024-01-25,1799.98,SHIPPED,Europe,Emma Rodriguez
-3004,1004,2024-02-01,6299.97,DELIVERED,Asia Pacific,David Kim
-3005,1005,2024-02-10,2199.98,PROCESSING,North America,Sarah Chen
-3006,1006,2024-02-15,1599.99,DELIVERED,Europe,Emma Rodriguez
-3007,1007,2024-02-20,11999.96,SHIPPED,Europe,Emma Rodriguez
-3008,1008,2024-03-01,3699.98,DELIVERED,Europe,Emma Rodriguez
-3009,1009,2024-03-05,4499.98,DELIVERED,Asia Pacific,David Kim
-3010,1010,2024-03-10,9799.97,PROCESSING,North America,Sarah Chen
-3011,1001,2024-03-15,5999.98,SHIPPED,North America,Sarah Chen
-3012,1003,2024-03-20,2699.98,DELIVERED,Europe,Emma Rodriguez
-3013,1005,2024-04-01,1299.99,PENDING,North America,Sarah Chen
-3014,1007,2024-04-05,7999.98,PROCESSING,Europe,Emma Rodriguez
-3015,1009,2024-04-10,3199.98,SHIPPED,Asia Pacific,David Kim
-3016,1012,2024-05-01,12499.97,DELIVERED,Asia Pacific,David Kim
-3017,1014,2024-05-15,8799.98,DELIVERED,Asia Pacific,David Kim
-3018,1011,2024-06-01,2999.98,SHIPPED,South America,Carlos Lopez
-3019,1013,2024-06-10,1799.99,DELIVERED,Asia Pacific,David Kim
-3020,1015,2024-07-01,4299.98,PROCESSING,Europe,Emma Rodriguez
\ No newline at end of file
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/products.csv b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/products.csv
deleted file mode 100644
index 4139547bf..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/database/products.csv
+++ /dev/null
@@ -1,16 +0,0 @@
-product_id,product_name,category,price,cost,stock_quantity,launch_date
-2001,AI Analytics Platform,Software,2999.99,1200.00,100,2023-01-15
-2002,Cloud Infrastructure,Software,1899.99,800.00,150,2023-02-01
-2003,Data Visualization Tool,Software,899.99,350.00,200,2023-01-20
-2004,Enterprise Security Suite,Software,3499.99,1500.00,75,2023-03-01
-2005,Mobile App Framework,Software,1299.99,550.00,120,2023-02-15
-2006,IoT Sensor Kit,Hardware,599.99,250.00,300,2023-04-01
-2007,Smart Dashboard Display,Hardware,1599.99,700.00,80,2023-03-15
-2008,Network Monitoring Device,Hardware,899.99,400.00,150,2023-05-01
-2009,Premium Consulting,Services,5000.00,2000.00,999,2023-01-01
-2010,Training Program,Services,2500.00,800.00,999,2023-02-01
-2011,Support Package,Services,1200.00,400.00,999,2023-01-15
-2012,API Gateway,Software,1799.99,750.00,90,2023-06-01
-2013,Backup Solution,Software,999.99,420.00,180,2023-04-15
-2014,Load Balancer,Hardware,2299.99,1000.00,60,2023-07-01
-2015,Custom Integration,Services,7500.00,3000.00,999,2023-03-01
\ No newline at end of file
diff --git a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/frontend/app/components/Chat/Chat.js b/ai/gen-ai-agents/sql_graph_generator_dashboard/files/frontend/app/components/Chat/Chat.js
deleted file mode 100644
index 2bfa28082..000000000
--- a/ai/gen-ai-agents/sql_graph_generator_dashboard/files/frontend/app/components/Chat/Chat.js
+++ /dev/null
@@ -1,352 +0,0 @@
-"use client";
-
-import { APP_CONFIG } from "../../config/app";
-import DynamicThemeProvider from "../../contexts/DynamicThemeProvider";
-import {
- Alert,
- alpha,
- Box,
- Container,
- lighten,
- Paper,
- Snackbar,
- Typography,
-} from "@mui/material";
-import { AnimatePresence, motion } from "framer-motion";
-import { useState } from "react";
-import { useChat } from "../../contexts/ChatContext";
-import { useProject } from "../../contexts/ProjectsContext";
-import ChatHeader from "./ChatHeader";
-import ChatInputBar from "./ChatInputBar";
-import MessageList from "./MessageList";
-
-const containerVariants = {
- initial: {
- scale: 0.8,
- opacity: 0,
- },
- animate: {
- scale: 1,
- opacity: 1,
- transition: {
- type: "spring",
- stiffness: 260,
- damping: 20,
- },
- },
-};
-
-const dynamicIslandVariants = {
- initial: {
- y: 100,
- opacity: 0,
- },
- animate: {
- y: 0,
- opacity: 1,
- transition: {
- type: "spring",
- stiffness: 350,
- damping: 25,
- delay: 0.3,
- },
- },
-};
-
-const logoVariants = {
- initial: {
- opacity: 0,
- },
- animate: {
- opacity: 1,
- transition: {
- duration: 0.3,
- },
- },
- exit: {
- opacity: 0,
- transition: {
- duration: 0.2,
- },
- },
-};
-
-export default function Chat({ onAddProject, onEditProject, onDeleteProject }) {
- const {
- messages,
- connected,
- loading,
- error,
- isListening,
- isWaitingForResponse,
- sendMessage,
- sendAttachment,
- clearChat,
- toggleSpeechRecognition,
- setError,
- currentSpeechProvider,
- } = useChat();
-
- const { getCurrentProject } = useProject();
- const currentProject = getCurrentProject();
-
- const [isDragOver, setIsDragOver] = useState(false);
-
- const isOracleRecording = currentSpeechProvider === "oracle" && isListening;
-
- const handleDragOver = (e) => {
- e.preventDefault();
- e.stopPropagation();
- };
-
- const handleDragEnter = (e) => {
- e.preventDefault();
- e.stopPropagation();
- setIsDragOver(true);
- };
-
- const handleDragLeave = (e) => {
- e.preventDefault();
- e.stopPropagation();
- if (!e.currentTarget.contains(e.relatedTarget)) {
- setIsDragOver(false);
- }
- };
-
- const handleDrop = (e) => {
- e.preventDefault();
- e.stopPropagation();
- setIsDragOver(false);
-
- const files = e.dataTransfer.files;
- if (files.length > 0) {
- const file = files[0];
- const isValidType =
- file.type.startsWith("image/") || file.type === "application/pdf";
-
- if (isValidType) {
- window.dispatchEvent(new CustomEvent("fileDropped", { detail: file }));
- }
- }
- };
-
- const getBackgroundStyle = () => {
- if (currentProject.backgroundImage) {
- return {
- backgroundImage: `url(${currentProject.backgroundImage})`,
- backgroundSize: "cover",
- backgroundPosition: "center",
- backgroundRepeat: "no-repeat",
- };
- }
- return {
- backgroundColor: lighten(
- currentProject.backgroundColor || APP_CONFIG.defaults.backgroundColor,
- 0.5
- ),
- };
- };
-
- const hasMessages = messages.length > 0 || isWaitingForResponse;
-
- return (
-
-
-
- {messages.map((msg, idx) => (
-
-
- ${block.code}`;
- processedText = processedText.replace(placeholder, formattedCode);
- });
-
- return processedText;
-};
-
-export const formatConversationTime = (dateString) => {
- try {
- const date = new Date(dateString);
- const now = new Date();
- const diffInHours = Math.floor((now - date) / (1000 * 60 * 60));
-
- if (diffInHours < 24) {
- return date.toLocaleTimeString([], {
- hour: "2-digit",
- minute: "2-digit",
- });
- } else if (diffInHours < 48) {
- return "Yesterday";
- } else {
- return date.toLocaleDateString([], { month: "short", day: "numeric" });
- }
- } catch (e) {
- return "";
- }
-};
-
-export const truncateText = (text, maxLength = 60) => {
- if (!text) return "";
- if (text.length <= maxLength) return text;
-
- return text.substring(0, maxLength).trim() + "...";
-};
-
-export const sanitizeHtml = (html) => {
- if (!html) return "";
-
- return html
- .replace(/