diff --git a/examples/financial_advisor/README.md b/examples/financial_advisor/README.md
new file mode 100644
index 0000000..c6e1f27
--- /dev/null
+++ b/examples/financial_advisor/README.md
@@ -0,0 +1,220 @@
+# Financial Advisory AI with Versioned Memory
+
+A demonstration of an AI-powered financial advisory system using ProllyTree for versioned memory management. This example showcases how to build a secure, auditable AI agent that maintains consistent memory across time and can handle complex financial recommendations with full traceability.
+
+## Features
+
+- 🤖 **AI-Powered Recommendations**: Uses OpenAI's API to generate intelligent investment advice
+- 📊 **Multi-Source Data Validation**: Cross-validates market data from multiple sources
+- 🔒 **Security Monitoring**: Detects and prevents injection attacks and anomalies
+- 📚 **Versioned Memory**: Uses ProllyTree to maintain git-like versioned storage of all data
+- 🕐 **Temporal Queries**: Query recommendations and data as they existed at any point in time
+- 🌿 **Branch Management**: Create memory branches for different scenarios or clients
+- 📝 **Audit Trail**: Complete audit logs for compliance and debugging
+- 🎯 **Risk-Aware**: Adapts recommendations based on client risk tolerance
+
+## Prerequisites
+
+- Rust (latest stable version)
+- Git (for memory versioning)
+- OpenAI API key (optional, for AI-enhanced reasoning)
+
+## Quick Start
+
+### 1. Initialize Storage Directory
+
+First, create a directory with a git repository for the advisor's memory:
+
+```bash
+# Create a directory for the advisor's memory
+mkdir -p /tmp/advisor
+cd /tmp/advisor
+
+# Initialize git repository (required for versioned memory)
+git init
+
+# Return to the project directory
+cd /path/to/prollytree
+```
+
+### 2. Set Environment Variables (Optional)
+
+For AI-enhanced recommendations, set your OpenAI API key:
+
+```bash
+export OPENAI_API_KEY="your-api-key-here"
+```
+
+### 3. Run the Financial Advisor
+
+```bash
+# Basic usage with temporary storage
+cargo run --example financial_advisor -- --storage /tmp/advisor/data advise
+
+# Or use the shorter form
+cargo run -- --storage /tmp/advisor/data advise
+```
+
+## Usage
+
+### Interactive Commands
+
+Once the advisor is running, you can use these commands:
+
+#### Core Operations
+- `recommend <symbol>` - Get an AI-powered recommendation for a stock symbol (e.g., `recommend AAPL`)
+- `profile` - Show current client profile
+- `risk <level>` - Set risk tolerance (`conservative`, `moderate`, or `aggressive`)
+
+#### History and Analysis
+- `history` - Show recent recommendations
+- `history <commit>` - Show recommendations at a specific git commit
+- `history --branch <branch>` - Show recommendations from a specific branch
+- `memory` - Show memory system status and statistics
+- `audit` - Show complete audit trail
+
+#### Advanced Features
+- `branch <name>` - Create a new memory branch
+- `visualize` - Show memory tree visualization
+- `test-inject <input>` - Test security monitoring (try malicious inputs)
+
+#### Other Commands
+- `help` - Show all available commands
+- `exit` or `quit` - Exit the advisor
+
+### Example Session
+
+```bash
+🏦> recommend AAPL
+📊 Recommendation Generated
+Symbol: AAPL
+Action: BUY
+Confidence: 52.0%
+Reasoning: Analysis of AAPL at $177.89 with P/E ratio 28.4...
+ +đŸĻ> risk aggressive +✅ Risk tolerance set to: Aggressive + +đŸĻ> recommend AAPL +📊 Recommendation Generated +Symbol: AAPL +Action: BUY +Confidence: 60.0% +(Notice higher confidence for aggressive risk tolerance) + +đŸĻ> history +📜 Recent Recommendations +📊 Recommendation #1 + Symbol: AAPL + Action: BUY + Confidence: 60.0% + ... +📊 Recommendation #2 + Symbol: AAPL + Action: BUY + Confidence: 52.0% + ... + +đŸĻ> memory +🧠 Memory Status +✅ Memory validation: ACTIVE +đŸ›Ąī¸ Security monitoring: ENABLED +📝 Audit trail: ENABLED +đŸŒŋ Current branch: main +📊 Total commits: 15 +💡 Recommendations: 2 +``` + +## Command Line Options + +```bash +cargo run -- [OPTIONS] + +Commands: + advise Start interactive advisory session + visualize Visualize memory evolution + attack Run attack simulations + benchmark Run performance benchmarks + memory Git memory operations + examples Show integration examples + audit Audit memory for compliance + +Options: + -s, --storage Path to store agent memory [default: ./advisor_memory/data] + -h, --help Print help +``` + +## Architecture + +### Memory System +- **ProllyTree Storage**: Git-like versioned storage for all data +- **Multi-table Schema**: Separate tables for recommendations, market data, client profiles +- **Cross-validation**: Data integrity through hash validation and cross-references +- **Temporal Queries**: Query data as it existed at any commit or branch + +### Security Features +- **Input Sanitization**: Prevents SQL injection and other attacks +- **Anomaly Detection**: Monitors for suspicious patterns in data +- **Attack Simulation**: Built-in testing for security vulnerabilities +- **Audit Logging**: Complete trail of all operations + +### AI Integration +- **Market Analysis**: Real-time analysis of market conditions +- **Risk Assessment**: Adapts to client risk tolerance +- **Reasoning Generation**: Explains the logic behind recommendations +- **Multi-source Validation**: Cross-checks data from multiple financial sources + +## Advanced Usage + +### Branch Management + +Create branches for different scenarios: + +```bash +đŸĻ> branch conservative-strategy +✅ Created branch: conservative-strategy + +đŸĻ> risk conservative +đŸĻ> recommend MSFT +# Generate recommendations for conservative strategy + +đŸĻ> history --branch main +# Compare with main branch recommendations +``` + +### Temporal Analysis + +Analyze how recommendations changed over time: + +```bash +# Get commit history +đŸĻ> memory + +# Query specific time points +đŸĻ> history abc1234 # Recommendations at specific commit +đŸĻ> history def5678 # Compare with different commit +``` + +### Security Testing + +Test the system's security: + +```bash +đŸĻ> test-inject "'; DROP TABLE recommendations; --" +đŸ›Ąī¸ Security Alert: Potential SQL injection detected and blocked + +đŸĻ> test-inject "unusual market manipulation data" +🚨 Anomaly detected in data pattern +``` + +## Troubleshooting## License + +This example is part of the ProllyTree project and follows the same license terms. + +## Contributing + +Contributions are welcome! Please see the main project's contributing guidelines. + +## Disclaimer + +This is a demonstration system for educational purposes. Do not use for actual financial decisions without proper validation and compliance review. 
\ No newline at end of file diff --git a/examples/financial_advisor/src/advisor/interactive.rs b/examples/financial_advisor/src/advisor/interactive.rs index 701bb46..a4dbcd4 100644 --- a/examples/financial_advisor/src/advisor/interactive.rs +++ b/examples/financial_advisor/src/advisor/interactive.rs @@ -176,9 +176,9 @@ impl<'a> InteractiveSession<'a> { if parts.len() >= 2 { // history or history --branch if parts[1] == "--branch" && parts.len() >= 3 { - self.show_history_at_branch(&parts[2]).await?; + self.show_history_at_branch(parts[2]).await?; } else { - self.show_history_at_commit(&parts[1]).await?; + self.show_history_at_commit(parts[1]).await?; } } else { self.show_history().await?; @@ -453,12 +453,27 @@ impl<'a> InteractiveSession<'a> { ); println!("{}", "━".repeat(50).dimmed()); - // For now, show current branch recommendations (could be extended to support branch switching) - println!( - "{} Branch-specific history not yet implemented, showing current branch", - "â„šī¸".yellow() - ); - self.show_history().await?; + match self.advisor.get_recommendations_at_branch(branch, 10).await { + Ok(recommendations) => { + if recommendations.is_empty() { + println!( + "{} No recommendations found on branch {}", + "â„šī¸".blue(), + branch + ); + } else { + self.display_recommendations(&recommendations).await?; + } + } + Err(e) => { + println!( + "{} Failed to retrieve history on branch {}: {}", + "❌".red(), + branch, + e + ); + } + } Ok(()) } @@ -608,7 +623,7 @@ impl<'a> InteractiveSession<'a> { }; let response_info = if let Some(ms) = source.response_time_ms { - format!(" ({}ms)", ms) + format!(" ({ms}ms)") } else { String::new() }; diff --git a/examples/financial_advisor/src/advisor/mod.rs b/examples/financial_advisor/src/advisor/mod.rs index 1e77bbd..52c1431 100644 --- a/examples/financial_advisor/src/advisor/mod.rs +++ b/examples/financial_advisor/src/advisor/mod.rs @@ -6,7 +6,7 @@ use chrono::{DateTime, Utc}; use serde::{Deserialize, Serialize}; use uuid::Uuid; -use crate::memory::{MemoryStore, MemoryType, ValidatedMemory}; +use crate::memory::{MemoryStore, MemoryType, Storable, ValidatedMemory}; use crate::security::SecurityMonitor; use crate::validation::{MemoryValidator, ValidationResult}; @@ -279,6 +279,16 @@ impl FinancialAdvisor { .await } + pub async fn get_recommendations_at_branch( + &self, + branch: &str, + limit: usize, + ) -> Result> { + self.memory_store + .get_recommendations(Some(branch), None, Some(limit)) + .await + } + pub async fn get_memory_status(&self) -> Result { self.memory_store.get_memory_status().await } @@ -682,4 +692,253 @@ impl RecommendationType { RecommendationType::Rebalance => "REBALANCE", } } + + pub fn as_serde_str(&self) -> &str { + match self { + RecommendationType::Buy => "Buy", + RecommendationType::Sell => "Sell", + RecommendationType::Hold => "Hold", + RecommendationType::Rebalance => "Rebalance", + } + } +} + +impl Storable for Recommendation { + fn table_name() -> &'static str { + "recommendations" + } + + fn get_id(&self) -> String { + self.id.clone() + } + + fn store_to_db( + &self, + glue: &mut gluesql_core::prelude::Glue>, + memory: &ValidatedMemory, + ) -> impl std::future::Future> { + let sql = format!( + r#"INSERT INTO recommendations + (id, client_id, symbol, recommendation_type, reasoning, confidence, + validation_hash, memory_version, timestamp) + VALUES ('{}', '{}', '{}', '{}', '{}', {}, '{}', '{}', {})"#, + self.id, + self.client_id, + self.symbol, + self.recommendation_type.as_serde_str(), + self.reasoning.replace('\'', "''"), + 
self.confidence, + hex::encode(memory.validation_hash), + self.memory_version, + self.timestamp.timestamp() + ); + + async move { + glue.execute(&sql).await?; + Ok(()) + } + } + + fn load_from_db( + glue: &mut gluesql_core::prelude::Glue>, + limit: Option, + ) -> impl std::future::Future>> { + let sql = if let Some(limit) = limit { + format!("SELECT id, client_id, symbol, recommendation_type, reasoning, confidence, validation_hash, memory_version, timestamp FROM recommendations ORDER BY timestamp DESC LIMIT {limit}") + } else { + "SELECT id, client_id, symbol, recommendation_type, reasoning, confidence, validation_hash, memory_version, timestamp FROM recommendations ORDER BY timestamp DESC".to_string() + }; + + async move { + let results = glue.execute(&sql).await?; + + use gluesql_core::data::Value; + let mut memories = Vec::new(); + + for payload in results { + if let gluesql_core::prelude::Payload::Select { labels: _, rows } = payload { + for row in rows { + if row.len() >= 9 { + let id = match &row[0] { + Value::Str(s) => s.clone(), + _ => continue, + }; + + let client_id = match &row[1] { + Value::Str(s) => s.clone(), + _ => continue, + }; + + let symbol = match &row[2] { + Value::Str(s) => s.clone(), + _ => continue, + }; + + let recommendation_type = match &row[3] { + Value::Str(s) => s.clone(), + _ => "Unknown".to_string(), + }; + + let reasoning = match &row[4] { + Value::Str(s) => s.clone(), + _ => "".to_string(), + }; + + let confidence = match &row[5] { + Value::F64(f) => *f, + _ => 0.0, + }; + + let memory_version = match &row[7] { + Value::Str(s) => s.clone(), + _ => "".to_string(), + }; + + let timestamp = match &row[8] { + Value::I64(ts) => chrono::DateTime::from_timestamp(*ts, 0) + .unwrap_or_else(chrono::Utc::now) + .to_rfc3339(), + _ => chrono::Utc::now().to_rfc3339(), + }; + + let validation_hash = match &row[6] { + Value::Str(s) => hex::decode(s) + .unwrap_or_default() + .try_into() + .unwrap_or([0u8; 32]), + _ => [0u8; 32], + }; + + let content = serde_json::json!({ + "id": id, + "client_id": client_id, + "symbol": symbol, + "recommendation_type": recommendation_type, + "reasoning": reasoning, + "confidence": confidence, + "memory_version": memory_version, + "timestamp": timestamp, + "validation_result": { + "is_valid": true, + "confidence": confidence, + "hash": validation_hash.to_vec(), + "cross_references": [], + "issues": [] + } + }) + .to_string(); + + let memory = ValidatedMemory { + id, + content, + validation_hash, + sources: vec!["recommendation_engine".to_string()], + confidence, + timestamp: match &row[8] { + Value::I64(ts) => chrono::DateTime::from_timestamp(*ts, 0) + .unwrap_or_else(chrono::Utc::now), + _ => chrono::Utc::now(), + }, + cross_references: vec![], + }; + + memories.push(memory); + } + } + } + } + + Ok(memories) + } + } +} + +impl Storable for ClientProfile { + fn table_name() -> &'static str { + "client_profiles" + } + + fn get_id(&self) -> String { + self.id.clone() + } + + fn store_to_db( + &self, + glue: &mut gluesql_core::prelude::Glue>, + memory: &ValidatedMemory, + ) -> impl std::future::Future> { + let sql = format!( + r#"INSERT INTO client_profiles + (id, content, timestamp, validation_hash, sources, confidence) + VALUES ('{}', '{}', {}, '{}', '{}', {})"#, + memory.id, + memory.content.replace('\'', "''"), + memory.timestamp.timestamp(), + hex::encode(memory.validation_hash), + memory.sources.join(","), + memory.confidence + ); + + async move { + glue.execute(&sql).await?; + Ok(()) + } + } + + fn load_from_db( + glue: &mut 
gluesql_core::prelude::Glue>, + _limit: Option, + ) -> impl std::future::Future>> { + let sql = "SELECT id, content, timestamp, validation_hash, sources, confidence FROM client_profiles ORDER BY timestamp DESC LIMIT 1"; + + async move { + let results = glue.execute(sql).await?; + + use gluesql_core::data::Value; + let mut memories = Vec::new(); + + for payload in results { + if let gluesql_core::prelude::Payload::Select { labels: _, rows } = payload { + for row in rows { + if row.len() >= 6 { + let memory = ValidatedMemory { + id: match &row[0] { + Value::Str(s) => s.clone(), + _ => continue, + }, + content: match &row[1] { + Value::Str(s) => s.clone(), + _ => continue, + }, + timestamp: match &row[2] { + Value::I64(ts) => chrono::DateTime::from_timestamp(*ts, 0) + .unwrap_or_else(chrono::Utc::now), + _ => chrono::Utc::now(), + }, + validation_hash: match &row[3] { + Value::Str(s) => hex::decode(s) + .unwrap_or_default() + .try_into() + .unwrap_or([0u8; 32]), + _ => [0u8; 32], + }, + sources: match &row[4] { + Value::Str(s) => s.split(',').map(String::from).collect(), + _ => vec![], + }, + confidence: match &row[5] { + Value::F64(f) => *f, + _ => 0.0, + }, + cross_references: vec![], + }; + memories.push(memory); + } + } + } + } + + Ok(memories) + } + } } diff --git a/examples/financial_advisor/src/advisor/recommendations.rs b/examples/financial_advisor/src/advisor/recommendations.rs index 898a183..140ed68 100644 --- a/examples/financial_advisor/src/advisor/recommendations.rs +++ b/examples/financial_advisor/src/advisor/recommendations.rs @@ -116,77 +116,487 @@ impl RecommendationEngine { let mut factors = Vec::new(); let mut score: f64 = 0.0; - // Analyze P/E ratio - if pe_ratio < 15.0 { - factors.push("Low P/E ratio indicates undervaluation"); - score += 0.3; - } else if pe_ratio > 30.0 { - factors.push("High P/E ratio suggests overvaluation"); - score -= 0.2; - } else { - factors.push("P/E ratio within reasonable range"); - score += 0.1; + // Get stock-specific data for more detailed analysis + let stock_data = self.get_stock_specific_data(symbol); + + // Sector-based analysis with realistic weightings + let sector_score = self.analyze_sector_outlook(&stock_data.sector, symbol, pe_ratio); + score += sector_score.0; + factors.push(sector_score.1); + + // Company-specific fundamental analysis + let fundamental_score = + self.analyze_fundamentals(symbol, price, pe_ratio, &stock_data.sector); + score += fundamental_score.0; + factors.push(fundamental_score.1); + + // Risk tolerance alignment + let risk_score = self.analyze_risk_alignment(client, pe_ratio, &stock_data.sector, symbol); + score += risk_score.0; + factors.push(risk_score.1); + + // Market conditions and valuation + let valuation_score = self.analyze_valuation(symbol, price, pe_ratio, &stock_data.sector); + score += valuation_score.0; + factors.push(valuation_score.1); + + // Determine recommendation with more nuanced thresholds + let (recommendation_type, confidence) = + self.determine_recommendation(score, symbol, &stock_data.sector); + + let reasoning = format!( + "Analysis of {} (${:.2}, P/E: {:.1}) in {} sector: {}. \ + Client risk profile ({:?}) consideration: {}. 
\ + Key factors: {}", + symbol, + price, + pe_ratio, + stock_data.sector, + match recommendation_type { + RecommendationType::Buy => + "Strong positive outlook with favorable risk-reward ratio", + RecommendationType::Sell => "Concerns about valuation and sector headwinds", + RecommendationType::Hold => "Balanced outlook, current position appropriate", + RecommendationType::Rebalance => "Portfolio optimization opportunity identified", + }, + client.risk_tolerance, + self.get_risk_alignment_summary(client, &stock_data.sector), + factors.join("; ") + ); + + (recommendation_type, reasoning, confidence) + } + + fn get_stock_specific_data(&self, symbol: &str) -> super::StockData { + // Simulate stock data lookup - in real implementation, this would query a database or API + match symbol.to_uppercase().as_str() { + "AAPL" => super::StockData { + price: 175.43, + volume: 45_230_000, + pe_ratio: 28.5, + market_cap: 2_800_000_000_000, + sector: "Technology".to_string(), + }, + "GOOGL" => super::StockData { + price: 142.56, + volume: 22_150_000, + pe_ratio: 24.8, + market_cap: 1_750_000_000_000, + sector: "Technology".to_string(), + }, + "MSFT" => super::StockData { + price: 415.26, + volume: 18_940_000, + pe_ratio: 32.1, + market_cap: 3_100_000_000_000, + sector: "Technology".to_string(), + }, + "AMZN" => super::StockData { + price: 155.89, + volume: 35_670_000, + pe_ratio: 45.2, + market_cap: 1_650_000_000_000, + sector: "Consumer Discretionary".to_string(), + }, + "TSLA" => super::StockData { + price: 248.42, + volume: 78_920_000, + pe_ratio: 65.4, + market_cap: 780_000_000_000, + sector: "Automotive".to_string(), + }, + "META" => super::StockData { + price: 501.34, + volume: 15_230_000, + pe_ratio: 22.7, + market_cap: 1_250_000_000_000, + sector: "Technology".to_string(), + }, + "NVDA" => super::StockData { + price: 875.28, + volume: 28_450_000, + pe_ratio: 55.8, + market_cap: 2_150_000_000_000, + sector: "Technology".to_string(), + }, + "JPM" => super::StockData { + price: 195.43, + volume: 12_450_000, + pe_ratio: 12.8, + market_cap: 570_000_000_000, + sector: "Financial Services".to_string(), + }, + "JNJ" => super::StockData { + price: 156.78, + volume: 8_750_000, + pe_ratio: 15.2, + market_cap: 410_000_000_000, + sector: "Healthcare".to_string(), + }, + "V" => super::StockData { + price: 278.94, + volume: 6_230_000, + pe_ratio: 31.4, + market_cap: 590_000_000_000, + sector: "Financial Services".to_string(), + }, + "PYPL" => super::StockData { + price: 78.94, + volume: 14_560_000, + pe_ratio: 18.9, + market_cap: 85_000_000_000, + sector: "Financial Services".to_string(), + }, + // Default case for unknown symbols + _ => super::StockData { + price: 95.50 + (symbol.len() as f64 * 3.25), + volume: 5_000_000 + (symbol.len() as u64 * 250_000), + pe_ratio: 20.0 + (symbol.len() as f64 * 0.8), + market_cap: 50_000_000_000 + (symbol.len() as u64 * 2_000_000_000), + sector: "Mixed".to_string(), + }, } + } - // Analyze based on client risk tolerance - match client.risk_tolerance { - RiskTolerance::Conservative => { + fn analyze_sector_outlook( + &self, + sector: &str, + symbol: &str, + pe_ratio: f64, + ) -> (f64, &'static str) { + match sector { + "Technology" => match symbol { + "AAPL" => ( + 0.15, + "Technology leader with strong ecosystem and services growth", + ), + "MSFT" => ( + 0.25, + "Cloud dominance and enterprise AI positioning driving growth", + ), + "GOOGL" => (0.10, "Search monopoly but facing AI disruption challenges"), + "META" => ( + 0.05, + "Metaverse investments weighing on near-term 
profitability", + ), + "NVDA" => (0.35, "AI semiconductor leader with unprecedented demand"), + "INTC" => ( + -0.15, + "Struggling to compete in advanced chip manufacturing", + ), + "ADBE" => ( + 0.20, + "Creative software dominance with growing AI integration", + ), + "CRM" => (0.10, "Enterprise software growth but increased competition"), + _ => (0.05, "Technology sector showing mixed fundamentals"), + }, + "Healthcare" => { if pe_ratio < 20.0 { - factors.push("Suitable for conservative investor due to stable valuation"); - score += 0.2; + ( + 0.20, + "Healthcare defensive characteristics with reasonable valuation", + ) } else { - factors.push("May be too volatile for conservative profile"); - score -= 0.3; + (0.10, "Healthcare stability but premium valuation concerns") } } - RiskTolerance::Moderate => { - factors.push("Fits moderate risk tolerance"); - score += 0.1; + "Financial Services" => match symbol { + "JPM" => ( + 0.15, + "Benefiting from higher interest rates and strong credit quality", + ), + "V" | "MA" => ( + 0.25, + "Payment network dominance with recession-resistant model", + ), + "PYPL" => (-0.10, "Facing increased competition in digital payments"), + _ => (0.05, "Financial sector mixed amid rate environment"), + }, + "Consumer Discretionary" => match symbol { + "AMZN" => (0.10, "E-commerce leader but cloud growth slowing"), + "TSLA" => ( + -0.05, + "EV pioneer but intensifying competition and valuation concerns", + ), + "HD" => ( + 0.15, + "Home improvement demand resilient despite economic headwinds", + ), + "DIS" => ( + 0.00, + "Streaming wars and theme park recovery offsetting each other", + ), + _ => (-0.05, "Consumer discretionary facing economic headwinds"), + }, + "Consumer Staples" => ( + 0.10, + "Defensive characteristics attractive in uncertain environment", + ), + "Communication Services" => ( + 0.05, + "Mixed outlook with streaming competition intensifying", + ), + "Automotive" => (-0.10, "Traditional auto facing EV transition challenges"), + _ => (0.00, "Sector outlook neutral"), + } + } + + fn analyze_fundamentals( + &self, + symbol: &str, + _price: f64, + pe_ratio: f64, + _sector: &str, + ) -> (f64, &'static str) { + // Symbol-specific fundamental analysis + match symbol { + "AAPL" => { + if pe_ratio > 30.0 { + ( + -0.10, + "Premium valuation limits upside despite strong fundamentals", + ) + } else { + ( + 0.20, + "Strong balance sheet and ecosystem moat justify valuation", + ) + } + } + "MSFT" => { + if pe_ratio < 35.0 { + ( + 0.25, + "Exceptional fundamentals with AI and cloud leadership", + ) + } else { + (0.10, "Strong fundamentals but valuation becoming stretched") + } + } + "GOOGL" => { + if pe_ratio < 25.0 { + (0.15, "Search dominance and attractive valuation multiple") + } else { + ( + 0.05, + "Core business strong but AI disruption risks emerging", + ) + } + } + "NVDA" => { + if pe_ratio > 60.0 { + ( + 0.10, + "Revolutionary AI demand but extreme valuation multiples", + ) + } else { + (0.30, "AI revolution driving unprecedented earnings growth") + } + } + "TSLA" => { + if pe_ratio > 50.0 { + ( + -0.20, + "Growth slowing while valuation remains extremely high", + ) + } else { + ( + 0.10, + "EV leadership position but facing increased competition", + ) + } + } + "AMZN" => { + if pe_ratio < 50.0 { + ( + 0.15, + "E-commerce dominance and cloud profitability improving", + ) + } else { + (0.05, "Growth slowing and competition intensifying") + } + } + "JPM" => { + if pe_ratio < 15.0 { + ( + 0.20, + "Strong credit quality and attractive valuation in rate 
environment", + ) + } else { + ( + 0.10, + "Solid fundamentals but limited upside at current levels", + ) + } + } + "META" => { + if pe_ratio < 25.0 { + (0.15, "Social media dominance and efficiency improvements") + } else { + ( + 0.00, + "Metaverse investments creating uncertainty about returns", + ) + } } - RiskTolerance::Aggressive => { - if pe_ratio > 25.0 { - factors.push("High growth potential suitable for aggressive investor"); - score += 0.2; + _ => { + // Generic P/E analysis for unknown stocks + if pe_ratio < 15.0 { + ( + 0.15, + "Attractive valuation multiple suggests potential upside", + ) + } else if pe_ratio > 30.0 { + ( + -0.10, + "High valuation multiple limits risk-adjusted returns", + ) + } else { + (0.05, "Valuation multiple within reasonable range") + } + } + } + } + + fn analyze_risk_alignment( + &self, + client: &ClientProfile, + pe_ratio: f64, + sector: &str, + symbol: &str, + ) -> (f64, &'static str) { + match client.risk_tolerance { + RiskTolerance::Conservative => match sector { + "Healthcare" | "Consumer Staples" | "Financial Services" => { + if pe_ratio < 20.0 { + ( + 0.15, + "Defensive sector characteristics align with conservative approach", + ) + } else { + ( + 0.05, + "Defensive sector but valuation limits conservative appeal", + ) + } + } + "Technology" => match symbol { + "AAPL" | "MSFT" => { + (0.10, "Quality tech names suitable for conservative growth") + } + _ => ( + -0.15, + "Technology volatility exceeds conservative risk parameters", + ), + }, + _ => ( + -0.10, + "Sector volatility may not align with conservative objectives", + ), + }, + RiskTolerance::Moderate => { + if pe_ratio > 40.0 { + ( + 0.00, + "Moderate risk tolerance accommodates some growth premium", + ) } else { - factors.push("May lack growth potential for aggressive strategy"); - score -= 0.1; + ( + 0.10, + "Balanced risk-reward profile fits moderate investment approach", + ) } } + RiskTolerance::Aggressive => match symbol { + "NVDA" | "TSLA" | "META" => ( + 0.20, + "High growth potential aligns with aggressive risk appetite", + ), + _ if pe_ratio > 30.0 => (0.15, "Growth premium acceptable for aggressive strategy"), + _ => ( + 0.05, + "May lack sufficient growth potential for aggressive allocation", + ), + }, } + } + + fn analyze_valuation( + &self, + _symbol: &str, + _price: f64, + pe_ratio: f64, + sector: &str, + ) -> (f64, &'static str) { + // More sophisticated valuation analysis + let sector_avg_pe = match sector { + "Technology" => 28.0, + "Healthcare" => 18.0, + "Financial Services" => 13.0, + "Consumer Discretionary" => 25.0, + "Consumer Staples" => 22.0, + "Communication Services" => 20.0, + _ => 20.0, + }; + + let pe_premium = pe_ratio / sector_avg_pe; - // Simple sector analysis (in real implementation, this would be more sophisticated) - if symbol.starts_with("AAPL") || symbol.starts_with("MSFT") || symbol.starts_with("GOOGL") { - factors.push("Technology sector showing strong fundamentals"); - score += 0.2; + if pe_premium < 0.8 { + ( + 0.20, + "Trading at discount to sector average suggests value opportunity", + ) + } else if pe_premium > 1.5 { + ( + -0.15, + "Significant premium to sector average limits margin of safety", + ) + } else if pe_premium > 1.2 { + (-0.05, "Modest premium to sector average warrants caution") + } else { + (0.10, "Valuation reasonable relative to sector comparables") } + } - // Determine recommendation - let (recommendation_type, confidence) = if score > 0.3 { - (RecommendationType::Buy, (score * 0.8 + 0.2).min(0.95)) - } else if score < -0.2 { - 
(RecommendationType::Sell, ((-score) * 0.7 + 0.3).min(0.9)) + fn determine_recommendation( + &self, + score: f64, + symbol: &str, + _sector: &str, + ) -> (RecommendationType, f64) { + // More nuanced recommendation logic with symbol-specific thresholds + let (rec_type, base_confidence) = if score > 0.4 { + (RecommendationType::Buy, 0.75 + (score - 0.4) * 0.5) + } else if score > 0.2 { + (RecommendationType::Buy, 0.60 + (score - 0.2) * 0.75) + } else if score > -0.1 { + (RecommendationType::Hold, 0.55 + score.abs() * 0.3) + } else if score > -0.3 { + (RecommendationType::Hold, 0.65 + score.abs() * 0.2) } else { - (RecommendationType::Hold, 0.6 + score.abs() * 0.2) + (RecommendationType::Sell, 0.70 + score.abs() * 0.25) }; - let reasoning = format!( - "Analysis of {} at ${:.2} with P/E ratio {:.1}: {}. \ - Recommendation based on client risk tolerance ({:?}) and market conditions. \ - Factors considered: {}", - symbol, - price, - pe_ratio, - match recommendation_type { - RecommendationType::Buy => "Positive outlook with good value proposition", - RecommendationType::Sell => "Concerns about current valuation and risks", - RecommendationType::Hold => - "Mixed signals, maintaining current position recommended", - RecommendationType::Rebalance => "Portfolio adjustment recommended", - }, - client.risk_tolerance, - factors.join("; ") - ); + // Cap confidence and add some symbol-specific variance + let symbol_variance = (symbol.len() % 7) as f64 * 0.02; + let final_confidence = (base_confidence + symbol_variance).clamp(0.30, 0.95); - (recommendation_type, reasoning, confidence) + (rec_type, final_confidence) + } + + fn get_risk_alignment_summary(&self, client: &ClientProfile, sector: &str) -> &'static str { + match (&client.risk_tolerance, sector) { + (RiskTolerance::Conservative, "Healthcare") => "matches defensive investment criteria", + (RiskTolerance::Conservative, "Consumer Staples") => { + "aligns with stability requirements" + } + (RiskTolerance::Conservative, "Technology") => { + "requires careful selection within sector" + } + (RiskTolerance::Aggressive, "Technology") => "leverages high-growth sector exposure", + (RiskTolerance::Moderate, _) => "provides balanced risk-reward profile", + _ => "consideration of risk parameters integrated", + } } } diff --git a/examples/financial_advisor/src/memory/mod.rs b/examples/financial_advisor/src/memory/mod.rs index a89be67..58c0bad 100644 --- a/examples/financial_advisor/src/memory/mod.rs +++ b/examples/financial_advisor/src/memory/mod.rs @@ -24,6 +24,47 @@ pub use types::{ const PROLLY_CONFIG_FILE: &str = "prolly_config_tree_config"; +/// Trait for types that can be stored in versioned memory +pub trait Storable: serde::Serialize + serde::de::DeserializeOwned + Clone { + /// Get the table name for this type + fn table_name() -> &'static str; + + /// Get the unique identifier for this instance + fn get_id(&self) -> String; + + /// Store this instance to the database + fn store_to_db( + &self, + glue: &mut Glue>, + memory: &ValidatedMemory, + ) -> impl std::future::Future>; + + /// Load instances from the database + fn load_from_db( + glue: &mut Glue>, + limit: Option, + ) -> impl std::future::Future>>; + + /// Handle updates (default: delete and insert) + fn update_in_db( + &self, + glue: &mut Glue>, + memory: &ValidatedMemory, + ) -> impl std::future::Future> { + async move { + // Default implementation: delete existing and insert new + let delete_sql = format!( + "DELETE FROM {} WHERE id = '{}'", + Self::table_name(), + self.get_id() + ); + let _ = 
glue.execute(&delete_sql).await; // Ignore error if doesn't exist + + self.store_to_db(glue, memory).await + } + } +} + /// Core memory store with versioning capabilities pub struct MemoryStore { store_path: String, @@ -37,7 +78,7 @@ impl MemoryStore { // Create directory if it doesn't exist if !path.exists() { - println!("Creating {}", store_path); + println!("Creating {store_path}"); std::fs::create_dir_all(path)?; } @@ -59,10 +100,10 @@ impl MemoryStore { // Initialize ProllyStorage - it will create its own VersionedKvStore accessing the same prolly tree files let storage = if current_dir.join(PROLLY_CONFIG_FILE).exists() { - println!("Opening existing storage: {}", PROLLY_CONFIG_FILE); + println!("Opening existing storage: {PROLLY_CONFIG_FILE}"); ProllyStorage::<32>::open(¤t_dir)? } else { - println!("Creating new storage: {}", PROLLY_CONFIG_FILE); + println!("Creating new storage: {PROLLY_CONFIG_FILE}"); ProllyStorage::<32>::init(¤t_dir)? }; @@ -165,101 +206,111 @@ impl MemoryStore { // Table doesn't exist, create it glue.execute(create_sql).await?; glue.storage - .commit_with_message(&format!("Create table: {}", table_name)) + .commit_with_message(&format!("Create table: {table_name}")) .await?; } Ok(()) } - pub async fn store(&mut self, memory: &ValidatedMemory) -> Result { - let path = std::env::current_dir()?; + /// Generic store method that uses the Storable trait + pub async fn store_typed( + &mut self, + item: &T, + memory: &ValidatedMemory, + ) -> Result { + let path = Path::new(&self.store_path); let storage = if path.join(PROLLY_CONFIG_FILE).exists() { - ProllyStorage::<32>::open(&path)? + ProllyStorage::<32>::open(path)? } else { - ProllyStorage::<32>::init(&path)? + ProllyStorage::<32>::init(path)? }; - // We need to load instead of creating a new Glue instance let mut glue = Glue::new(storage); - // Ensure schema exists (this should be safe to run multiple times) + // Ensure schema exists Self::init_schema(&mut glue).await?; - // Try to parse content as JSON to determine memory type - if let Ok(json_value) = serde_json::from_str::(&memory.content) { - // Try to store as recommendation first - if let Ok(rec) = serde_json::from_str::(&memory.content) - { - let sql = format!( - r#"INSERT INTO recommendations - (id, client_id, symbol, recommendation_type, reasoning, confidence, - validation_hash, memory_version, timestamp) - VALUES ('{}', '{}', '{}', '{}', '{}', {}, '{}', '{}', {})"#, - rec.id, - rec.client_id, - rec.symbol, - rec.recommendation_type.as_str(), - rec.reasoning.replace('\'', "''"), - rec.confidence, - hex::encode(memory.validation_hash), - rec.memory_version, - rec.timestamp.timestamp() - ); - glue.execute(&sql).await?; - } - // Try to store as client profile - else if let Ok(_profile) = - serde_json::from_str::(&memory.content) - { - let sql = format!( - r#"INSERT INTO client_profiles - (id, content, timestamp, validation_hash, sources, confidence) - VALUES ('{}', '{}', {}, '{}', '{}', {})"#, - memory.id, - memory.content.replace('\'', "''"), - memory.timestamp.timestamp(), - hex::encode(memory.validation_hash), - memory.sources.join(","), - memory.confidence - ); - glue.execute(&sql).await?; - } - // Default to market data if it has a symbol field or fallback - else { - let symbol = json_value - .get("symbol") - .and_then(|s| s.as_str()) - .unwrap_or("UNKNOWN"); - - let sql = format!( - r#"INSERT INTO market_data - (id, symbol, content, validation_hash, sources, confidence, timestamp) - VALUES ('{}', '{}', '{}', '{}', '{}', {}, {})"#, - memory.id, - symbol, - 
memory.content.replace('\'', "''"), - hex::encode(memory.validation_hash), - memory.sources.join(","), - memory.confidence, - memory.timestamp.timestamp() - ); - glue.execute(&sql).await?; - } - } else { - // If not valid JSON, store as market data with unknown symbol + // Use the item's update method to handle storage + item.update_in_db(&mut glue, memory).await?; + + // Store cross-references + for reference in &memory.cross_references { + // First try to delete if exists, then insert (GlueSQL doesn't support UPSERT) + let delete_sql = format!( + "DELETE FROM cross_references WHERE source_id = '{}' AND target_id = '{}'", + memory.id, reference + ); + let _ = glue.execute(&delete_sql).await; // Ignore if record doesn't exist + let sql = format!( - r#"INSERT INTO market_data - (id, symbol, content, validation_hash, sources, confidence, timestamp) - VALUES ('{}', 'UNKNOWN', '{}', '{}', '{}', {}, {})"#, - memory.id, - memory.content.replace('\'', "''"), - hex::encode(memory.validation_hash), - memory.sources.join(","), - memory.confidence, - memory.timestamp.timestamp() + r#"INSERT INTO cross_references + (source_id, target_id, reference_type, confidence) + VALUES ('{}', '{}', 'validation', {})"#, + memory.id, reference, memory.confidence ); glue.execute(&sql).await?; } + let version = memory.clone().id; + glue.storage + .commit_with_message(&format!("Store memory: {}", memory.id)) + .await?; + + Ok(version) + } + + /// Legacy store method for backward compatibility + pub async fn store(&mut self, memory: &ValidatedMemory) -> Result { + // Try to determine type and delegate to typed methods + if let Ok(rec) = serde_json::from_str::(&memory.content) { + self.store_typed(&rec, memory).await + } else if let Ok(profile) = + serde_json::from_str::(&memory.content) + { + self.store_typed(&profile, memory).await + } else { + // Fallback to market data storage + self.store_as_market_data(memory).await + } + } + + /// Fallback method for market data and unknown types + async fn store_as_market_data(&mut self, memory: &ValidatedMemory) -> Result { + let path = Path::new(&self.store_path); + let storage = if path.join(PROLLY_CONFIG_FILE).exists() { + ProllyStorage::<32>::open(path)? + } else { + ProllyStorage::<32>::init(path)? + }; + let mut glue = Glue::new(storage); + + Self::init_schema(&mut glue).await?; + + // Determine symbol from JSON if possible + let symbol = + if let Ok(json_value) = serde_json::from_str::(&memory.content) { + json_value + .get("symbol") + .and_then(|s| s.as_str()) + .unwrap_or("UNKNOWN") + .to_string() + } else { + "UNKNOWN".to_string() + }; + + let sql = format!( + r#"INSERT INTO market_data + (id, symbol, content, validation_hash, sources, confidence, timestamp) + VALUES ('{}', '{}', '{}', '{}', '{}', {}, {})"#, + memory.id, + &symbol, + memory.content.replace('\'', "''"), + hex::encode(memory.validation_hash), + memory.sources.join(","), + memory.confidence, + memory.timestamp.timestamp() + ); + glue.execute(&sql).await?; + // Store cross-references for reference in &memory.cross_references { // First try to delete if exists, then insert (GlueSQL doesn't support UPSERT) @@ -304,11 +355,11 @@ impl MemoryStore { } pub async fn query_related(&self, content: &str, limit: usize) -> Result> { - let path = std::env::current_dir()?; + let path = Path::new(&self.store_path); let storage = if path.join(PROLLY_CONFIG_FILE).exists() { - ProllyStorage::<32>::open(&path)? + ProllyStorage::<32>::open(path)? } else { - ProllyStorage::<32>::init(&path)? + ProllyStorage::<32>::init(path)? 
}; let mut glue = Glue::new(storage); @@ -455,7 +506,7 @@ impl MemoryStore { glue.execute(&sql).await?; glue.storage - .commit_with_message(&format!("Commit Audit log: {}", action)) + .commit_with_message(&format!("Commit Audit log: {action}")) .await?; Ok(()) } @@ -523,13 +574,13 @@ impl MemoryStore { // Add limit if specified let limit_str; if let Some(limit) = limit { - limit_str = format!("-{}", limit); + limit_str = format!("-{limit}"); args.insert(1, &limit_str); } let log_output = std::process::Command::new("git") .args(&args) - .current_dir(&git_dir) + .current_dir(git_dir) .output() .map_err(|e| anyhow::anyhow!("Failed to get git log: {}", e))?; @@ -608,7 +659,7 @@ impl MemoryStore { let show_output = std::process::Command::new("git") .args(["show", "--format=%H|%s|%at|%an", "--name-only", commit]) - .current_dir(&git_dir) + .current_dir(git_dir) .output() .map_err(|e| anyhow::anyhow!("Failed to show commit '{}': {}", commit, e))?; @@ -660,7 +711,7 @@ impl MemoryStore { let git_dir = Path::new(&self.store_path); let reset_output = std::process::Command::new("git") .args(["reset", "--hard", commit]) - .current_dir(&git_dir) + .current_dir(git_dir) .output() .map_err(|e| anyhow::anyhow!("Failed to reset to commit '{}': {}", commit, e))?; @@ -690,7 +741,7 @@ impl MemoryStore { let git_dir = Path::new(&self.store_path); let output = std::process::Command::new("git") .args(["rev-parse", "HEAD"]) - .current_dir(&git_dir) + .current_dir(git_dir) .output() .map_err(|e| anyhow::anyhow!("Failed to get commit ID: {}", e))?; @@ -729,7 +780,7 @@ impl MemoryStore { // Temporarily checkout the target commit let checkout_output = std::process::Command::new("git") .args(["checkout", commit]) - .current_dir(&git_dir) + .current_dir(git_dir) .output() .map_err(|e| anyhow::anyhow!("Failed to checkout commit '{}': {}", commit, e))?; @@ -741,9 +792,9 @@ impl MemoryStore { // Query recommendations from this point in time let path = Path::new(&self.store_path); let storage = if path.join(PROLLY_CONFIG_FILE).exists() { - ProllyStorage::<32>::open(&path)? + ProllyStorage::<32>::open(path)? } else { - ProllyStorage::<32>::init(&path)? + ProllyStorage::<32>::init(path)? }; let mut glue = Glue::new(storage); @@ -755,7 +806,7 @@ impl MemoryStore { // Restore original state std::process::Command::new("git") .args(["checkout", ¤t_branch]) - .current_dir(&git_dir) + .current_dir(git_dir) .output() .map_err(|e| anyhow::anyhow!("Failed to restore branch '{}': {}", current_branch, e))?; @@ -785,7 +836,7 @@ impl MemoryStore { // Temporarily checkout the target commit let checkout_output = std::process::Command::new("git") .args(["checkout", commit]) - .current_dir(&git_dir) + .current_dir(git_dir) .output() .map_err(|e| anyhow::anyhow!("Failed to checkout commit '{}': {}", commit, e))?; @@ -797,9 +848,9 @@ impl MemoryStore { // Query market data from this point in time let path = Path::new(&self.store_path); let storage = if path.join(PROLLY_CONFIG_FILE).exists() { - ProllyStorage::<32>::open(&path)? + ProllyStorage::<32>::open(path)? } else { - ProllyStorage::<32>::init(&path)? + ProllyStorage::<32>::init(path)? 
}; let mut glue = Glue::new(storage); @@ -815,7 +866,7 @@ impl MemoryStore { // Restore original state std::process::Command::new("git") .args(["checkout", ¤t_branch]) - .current_dir(&git_dir) + .current_dir(git_dir) .output() .map_err(|e| anyhow::anyhow!("Failed to restore branch '{}': {}", current_branch, e))?; @@ -912,7 +963,7 @@ impl MemoryStore { // Temporarily checkout the target commit std::process::Command::new("git") .args(["checkout", commit]) - .current_dir(&git_dir) + .current_dir(git_dir) .output() .map_err(|e| anyhow::anyhow!("Failed to checkout commit for audit: {}", e))?; @@ -922,7 +973,7 @@ impl MemoryStore { // Restore original state std::process::Command::new("git") .args(["checkout", ¤t_branch]) - .current_dir(&git_dir) + .current_dir(git_dir) .output() .map_err(|e| anyhow::anyhow!("Failed to restore branch for audit: {}", e))?; @@ -998,27 +1049,64 @@ impl MemoryStore { }; // Create content from recommendation fields + let recommendation_type = match &row[3] { + Value::Str(s) => s.clone(), + _ => "UNKNOWN".to_string(), + }; + + let reasoning = match &row[4] { + Value::Str(s) => s.clone(), + _ => "".to_string(), + }; + + let confidence = match &row[5] { + Value::F64(f) => *f, + _ => 0.0, + }; + + let memory_version = match &row[7] { + Value::Str(s) => s.clone(), + _ => "".to_string(), + }; + + let timestamp = match &row[8] { + Value::I64(ts) => DateTime::from_timestamp(*ts, 0) + .unwrap_or_else(Utc::now) + .to_rfc3339(), + _ => Utc::now().to_rfc3339(), + }; + + let validation_hash = match &row[6] { + Value::Str(s) => hex::decode(s) + .unwrap_or_default() + .try_into() + .unwrap_or([0u8; 32]), + _ => [0u8; 32], + }; + let content = serde_json::json!({ "id": id, "client_id": client_id, "symbol": symbol, - "recommendation_type": row[3], - "reasoning": row[4], - "confidence": row[5], - "memory_version": row[7] + "recommendation_type": recommendation_type, + "reasoning": reasoning, + "confidence": confidence, + "memory_version": memory_version, + "timestamp": timestamp, + "validation_result": { + "is_valid": true, + "confidence": confidence, + "hash": validation_hash.to_vec(), + "cross_references": [], + "issues": [] + } }) .to_string(); let memory = ValidatedMemory { id, content, - validation_hash: match &row[6] { - Value::Str(s) => hex::decode(s) - .unwrap_or_default() - .try_into() - .unwrap_or([0u8; 32]), - _ => [0u8; 32], - }, + validation_hash, sources: vec!["recommendation_engine".to_string()], confidence: match &row[5] { Value::F64(f) => *f, @@ -1151,43 +1239,94 @@ impl MemoryStore { /// Get recommendations with optional branch/commit and limit pub async fn get_recommendations( &self, - _branch: Option<&str>, + branch: Option<&str>, commit: Option<&str>, limit: Option, ) -> Result> { - if let Some(_commit_hash) = commit { - // For now, temporal querying is complex - return empty for specific commits - // This could be implemented later with proper git checkout and query - return Ok(vec![]); + if let Some(commit_hash) = commit { + // Use the existing temporal query method + let memories = self.get_recommendations_at_commit(commit_hash).await?; + + let mut recommendations = Vec::new(); + for memory in memories { + match serde_json::from_str::(&memory.content) { + Ok(rec) => recommendations.push(rec), + Err(e) => eprintln!("Warning: Failed to parse recommendation: {e}"), + } + } + + // Apply limit if specified + if let Some(limit) = limit { + recommendations.truncate(limit); + } + + return Ok(recommendations); + } + + // Handle branch-specific queries + if let 
Some(branch_name) = branch { + let current_branch = self.current_branch().to_string(); + let git_dir = Path::new(&self.store_path); + + // Temporarily checkout the target branch + let checkout_output = std::process::Command::new("git") + .args(["checkout", branch_name]) + .current_dir(git_dir) + .output() + .map_err(|e| { + anyhow::anyhow!("Failed to checkout branch '{}': {}", branch_name, e) + })?; + + if !checkout_output.status.success() { + let error = String::from_utf8_lossy(&checkout_output.stderr); + return Err(anyhow::anyhow!("Git checkout failed: {}", error)); + } + + // Query recommendations on this branch (call internal method to avoid recursion) + let result = self.get_recommendations_internal(limit).await; + + // Restore original branch + std::process::Command::new("git") + .args(["checkout", ¤t_branch]) + .current_dir(git_dir) + .output() + .map_err(|e| { + anyhow::anyhow!("Failed to restore branch '{}': {}", current_branch, e) + })?; + + return result; } // Query current branch for recommendations + self.get_recommendations_internal(limit).await + } + + /// Internal method to get recommendations from current branch (no branch/commit switching) + async fn get_recommendations_internal( + &self, + limit: Option, + ) -> Result> { let path = Path::new(&self.store_path); if !path.exists() { return Ok(vec![]); } let storage = if path.join(PROLLY_CONFIG_FILE).exists() { - ProllyStorage::<32>::open(&path)? + ProllyStorage::<32>::open(path)? } else { return Ok(vec![]); }; let mut glue = Glue::new(storage); - let sql = if let Some(limit) = limit { - format!("SELECT id, client_id, symbol, recommendation_type, reasoning, confidence, validation_hash, memory_version, timestamp FROM recommendations ORDER BY timestamp DESC LIMIT {}", limit) - } else { - "SELECT id, client_id, symbol, recommendation_type, reasoning, confidence, validation_hash, memory_version, timestamp FROM recommendations ORDER BY timestamp DESC".to_string() - }; - let results = glue.execute(&sql).await?; - let memories = self.parse_recommendation_results(results)?; + // Use the Storable trait method + let memories = crate::advisor::Recommendation::load_from_db(&mut glue, limit).await?; let mut recommendations = Vec::new(); for memory in memories { match serde_json::from_str::(&memory.content) { Ok(rec) => recommendations.push(rec), - Err(e) => eprintln!("Warning: Failed to parse recommendation: {}", e), + Err(e) => eprintln!("Warning: Failed to parse recommendation: {e}"), } } @@ -1274,7 +1413,7 @@ impl MemoryStore { ]) } - /// Store client profile in versioned memory + /// Store client profile in versioned memory using the Storable trait pub async fn store_client_profile( &mut self, profile: &crate::advisor::ClientProfile, @@ -1289,18 +1428,23 @@ impl MemoryStore { cross_references: vec![], }; - // Store in versioned memory with audit trail - self.store_with_audit( - MemoryType::ClientProfile, - &validated_memory, - &format!("Updated client profile for {}", profile.id), - ) - .await?; + // Store using the typed method + self.store_typed(profile, &validated_memory).await?; + + // Log audit entry if enabled + if self.audit_enabled { + self.log_audit( + &format!("Updated client profile for {}", profile.id), + MemoryType::ClientProfile, + &profile.id, + ) + .await?; + } Ok(()) } - /// Load client profile from versioned memory using direct SQL query + /// Load client profile from versioned memory using the Storable trait pub async fn load_client_profile(&self) -> Result> { let path = Path::new(&self.store_path); if 
!path.exists() || !path.join(PROLLY_CONFIG_FILE).exists() { @@ -1308,44 +1452,23 @@ impl MemoryStore { } // Create a fresh storage instance to ensure we see the latest committed state - let storage = ProllyStorage::<32>::open(&path)?; + let storage = ProllyStorage::<32>::open(path)?; let mut glue = Glue::new(storage); - // First, let's test if the table exists and has any data - let count_sql = "SELECT COUNT(*) FROM client_profiles"; - let count_results = glue.execute(&count_sql).await?; - - use gluesql_core::prelude::Payload; - if let Some(count_result) = count_results.first() { - if let Payload::Select { labels: _, rows } = count_result { - if let Some(row) = rows.first() { - if let Some(count_value) = row.first() { - eprintln!("Debug: client_profiles table has {:?} rows", count_value); - } - } - } - } - - let sql = "SELECT content FROM client_profiles ORDER BY timestamp DESC LIMIT 1"; - let results = glue.execute(&sql).await?; + // Use the Storable trait method + let memories = crate::advisor::ClientProfile::load_from_db(&mut glue, Some(1)).await?; - if let Some(result) = results.first() { - match result { - Payload::Select { labels: _, rows } => { - if let Some(row) = rows.first() { - if let Some(content_value) = row.first() { - if let gluesql_core::data::Value::Str(content) = content_value { - let profile: crate::advisor::ClientProfile = - serde_json::from_str(content)?; - return Ok(Some(profile)); - } - } - } + if let Some(memory) = memories.first() { + match serde_json::from_str::(&memory.content) { + Ok(profile) => Ok(Some(profile)), + Err(e) => { + eprintln!("Warning: Failed to parse client profile: {e}"); + Ok(None) } - _ => {} } + } else { + Ok(None) } - Ok(None) } /// Simple content hashing for validation diff --git a/src/git/versioned_store.rs b/src/git/versioned_store.rs index 5ef32ce..0e39984 100644 --- a/src/git/versioned_store.rs +++ b/src/git/versioned_store.rs @@ -104,12 +104,9 @@ impl VersionedKvStore { // Save initial configuration let _ = store.tree.save_config(); - // Create initial commit + // Create initial commit (which will include prolly metadata files) store.commit("Initial commit")?; - // Auto-commit prolly metadata files after initialization - store.commit_prolly_metadata(" after init")?; - Ok(store) } @@ -275,7 +272,7 @@ impl VersionedKvStore { .save_config() .map_err(|e| GitKvError::GitObjectError(format!("Failed to save config: {e}")))?; - // Create tree object in Git + // Create tree object in Git (this will include prolly metadata files) let tree_id = self.create_git_tree()?; // Create commit @@ -287,9 +284,6 @@ impl VersionedKvStore { // Clear staging area file since we've committed self.save_staging_area()?; - // Auto-commit prolly metadata files to git - self.commit_prolly_metadata(&format!(" after commit: {message}"))?; - Ok(commit_id) } @@ -484,20 +478,65 @@ impl VersionedKvStore { /// Create a Git tree object from the current ProllyTree state fn create_git_tree(&self) -> Result { - // Create an empty tree - the ProllyTree state is managed through GitNodeStorage - // We don't need to create a prolly_tree_root file since the tree structure - // is stored in Git blobs and managed through the NodeStorage interface - let tree_entries = vec![]; + // Actually, we should let git handle the tree creation properly + // Use git's index to stage files and create tree from the index - let tree = gix::objs::Tree { - entries: tree_entries, - }; + // Get the git root directory + let git_root = Self::find_git_root(self.git_repo.path().parent().unwrap()).unwrap(); + 
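+        // Determine the dataset directory relative to the git root so the prolly
+        // metadata files can be staged below with `git add` run from the repository root.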
let current_dir = std::env::current_dir().map_err(|e| { + GitKvError::GitObjectError(format!("Failed to get current directory: {e}")) + })?; - let tree_id = self - .git_repo - .objects - .write(&tree) - .map_err(|e| GitKvError::GitObjectError(format!("Failed to write tree: {e}")))?; + // Get relative path from git root to current directory + let relative_dir = current_dir.strip_prefix(&git_root).unwrap_or(¤t_dir); + + // Stage the prolly metadata files using git add + let config_file = "prolly_config_tree_config"; + let mapping_file = "prolly_hash_mappings"; + + for filename in &[config_file, mapping_file] { + let file_path = current_dir.join(filename); + if file_path.exists() { + // Get relative path from git root + let relative_path = relative_dir.join(filename); + let relative_path_str = relative_path.to_string_lossy(); + + let add_cmd = std::process::Command::new("git") + .args(["add", &relative_path_str]) + .current_dir(&git_root) + .output() + .map_err(|e| { + GitKvError::GitObjectError(format!("Failed to run git add: {e}")) + })?; + + if !add_cmd.status.success() { + let stderr = String::from_utf8_lossy(&add_cmd.stderr); + eprintln!("Warning: git add failed for {filename}: {stderr}"); + } + } + } + + // Use git write-tree to create tree from the current index + let write_tree_cmd = std::process::Command::new("git") + .args(["write-tree"]) + .current_dir(&git_root) + .output() + .map_err(|e| { + GitKvError::GitObjectError(format!("Failed to run git write-tree: {e}")) + })?; + + if !write_tree_cmd.status.success() { + let stderr = String::from_utf8_lossy(&write_tree_cmd.stderr); + return Err(GitKvError::GitObjectError(format!( + "git write-tree failed: {stderr}" + ))); + } + + let tree_hash = String::from_utf8_lossy(&write_tree_cmd.stdout) + .trim() + .to_string(); + let tree_id = gix::ObjectId::from_hex(tree_hash.as_bytes()) + .map_err(|e| GitKvError::GitObjectError(format!("Invalid tree hash: {e}")))?; Ok(tree_id) } @@ -627,78 +666,6 @@ impl VersionedKvStore { Ok(()) } - /// Stage and commit prolly metadata files to git - fn commit_prolly_metadata(&self, additional_message: &str) -> Result<(), GitKvError> { - // Get relative paths to the prolly files from git root - let git_root = Self::find_git_root(self.git_repo.path().parent().unwrap()).unwrap(); - let current_dir = std::env::current_dir().map_err(|e| { - GitKvError::GitObjectError(format!("Failed to get current directory: {e}")) - })?; - - let config_file = "prolly_config_tree_config"; - let mapping_file = "prolly_hash_mappings"; - - // Check if files exist before trying to stage them - let config_path = current_dir.join(config_file); - let mapping_path = current_dir.join(mapping_file); - - let mut files_to_stage = Vec::new(); - - if config_path.exists() { - // Get relative path from git root - if let Ok(relative_path) = config_path.strip_prefix(&git_root) { - files_to_stage.push(relative_path.to_string_lossy().to_string()); - } - } - - if mapping_path.exists() { - // Get relative path from git root - if let Ok(relative_path) = mapping_path.strip_prefix(&git_root) { - files_to_stage.push(relative_path.to_string_lossy().to_string()); - } - } - - if files_to_stage.is_empty() { - return Ok(()); // Nothing to commit - } - - // Stage the files using git add - for file in &files_to_stage { - let add_cmd = std::process::Command::new("git") - .args(["add", file]) - .current_dir(&git_root) - .output() - .map_err(|e| GitKvError::GitObjectError(format!("Failed to run git add: {e}")))?; - - if !add_cmd.status.success() { - let stderr = 
String::from_utf8_lossy(&add_cmd.stderr); - return Err(GitKvError::GitObjectError(format!( - "git add failed: {stderr}" - ))); - } - } - - // Commit the staged files - let commit_message = format!("Update prolly metadata{additional_message}"); - let commit_cmd = std::process::Command::new("git") - .args(["commit", "-m", &commit_message]) - .current_dir(&git_root) - .output() - .map_err(|e| GitKvError::GitObjectError(format!("Failed to run git commit: {e}")))?; - - if !commit_cmd.status.success() { - let stderr = String::from_utf8_lossy(&commit_cmd.stderr); - // It's okay if there's nothing to commit - if !stderr.is_empty() && !stderr.contains("nothing to commit") { - return Err(GitKvError::GitObjectError(format!( - "git commit failed: {stderr}" - ))); - } - } - - Ok(()) - } - /// Load the staging area from a file fn load_staging_area(&mut self) -> Result<(), GitKvError> { let staging_file = self.get_staging_file_path()?; @@ -811,4 +778,57 @@ mod tests { let status = store.status(); assert_eq!(status.len(), 0); } + + #[test] + fn test_single_commit_behavior() { + let temp_dir = TempDir::new().unwrap(); + + // Initialize git repository + gix::init(temp_dir.path()).unwrap(); + + // Create subdirectory for dataset + let dataset_dir = temp_dir.path().join("dataset"); + std::fs::create_dir_all(&dataset_dir).unwrap(); + + let mut store = VersionedKvStore::<32>::init(&dataset_dir).unwrap(); + + // Get initial commit count + let log_output = std::process::Command::new("git") + .args(&["log", "--oneline"]) + .current_dir(temp_dir.path()) + .output() + .unwrap(); + let initial_commits = String::from_utf8_lossy(&log_output.stdout).lines().count(); + + // Insert some data and commit + store + .insert(b"test_key".to_vec(), b"test_value".to_vec()) + .unwrap(); + store.commit("Test single commit").unwrap(); + + // Get commit count after our commit + let log_output = std::process::Command::new("git") + .args(&["log", "--oneline"]) + .current_dir(temp_dir.path()) + .output() + .unwrap(); + let final_commits = String::from_utf8_lossy(&log_output.stdout).lines().count(); + + // Should have exactly one more commit (no separate metadata commit) + assert_eq!( + final_commits, + initial_commits + 1, + "Expected exactly one new commit, but got {} new commits", + final_commits - initial_commits + ); + + // Verify the prolly metadata files exist in the dataset directory + let config_path = dataset_dir.join("prolly_config_tree_config"); + let mapping_path = dataset_dir.join("prolly_hash_mappings"); + assert!( + config_path.exists(), + "prolly_config_tree_config should exist" + ); + assert!(mapping_path.exists(), "prolly_hash_mappings should exist"); + } }
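
The new `test_single_commit_behavior` test pins down the behavioral change in this patch: `commit()` now writes the prolly metadata files into the same git commit instead of creating a follow-up metadata commit. Below is a caller-side sketch of that flow, mirroring the test above; the `prollytree::git::VersionedKvStore` import path and the repository path are assumptions for illustration, not verified API.

```rust
// Sketch only: one logical commit() call should yield exactly one git commit.
use prollytree::git::VersionedKvStore; // assumed export path

fn main() {
    // The store expects an existing git repository around the dataset directory.
    let repo_dir = std::path::Path::new("/tmp/prolly_kv_demo");
    std::fs::create_dir_all(repo_dir).unwrap();
    gix::init(repo_dir).unwrap();

    // Keep the dataset in a subdirectory, as the test does.
    let dataset_dir = repo_dir.join("dataset");
    std::fs::create_dir_all(&dataset_dir).unwrap();

    let mut store = VersionedKvStore::<32>::init(&dataset_dir).unwrap();
    store.insert(b"test_key".to_vec(), b"test_value".to_vec()).unwrap();

    // Single commit; prolly metadata is included via the git index (git add + write-tree).
    let commit_id = store.commit("Add test_key").unwrap();
    println!("created commit {commit_id}");
}
```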