AI-Powered TensorFlow Debugging with ReAct Reasoning & Self-Learning capabilities.
- ReAct Reasoning Framework - Advanced reasoning with Thought-Action-Observation cycles
- Knowledge Graph Integration - Entity relationship mapping for better context
- Self-Reflection - Quality assessment and confidence scoring
- Continuous Learning - System improves from user feedback
- Agentic Updates - Intelligent knowledge base expansion
- Python 3.8 or higher
- pip package manager
- Clone or navigate to the project directory:
```bash
cd /Users/gowtham/CascadeProjects/GraphMind/GraphMind
```
- Create a virtual environment (recommended):
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
- Install dependencies:
```bash
pip install -r requirements.txt
```
- Ensure artifacts are present:
Make sure the following files exist in the project directory:
- embeddings.npy
- faiss_index.index
- processed_docs.json
- kg_networkx.gpickle
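Before starting the server, you can optionally sanity-check that the artifacts load. This is a minimal sketch; it assumes only the four filenames above and the libraries from requirements.txt:

```python
# Quick sanity check that the required artifact files exist and load correctly.
import json
import os
import pickle

import faiss
import numpy as np

ARTIFACTS = ["embeddings.npy", "faiss_index.index", "processed_docs.json", "kg_networkx.gpickle"]

missing = [name for name in ARTIFACTS if not os.path.exists(name)]
if missing:
    raise SystemExit(f"Missing artifacts: {missing}")

embeddings = np.load("embeddings.npy")              # document embedding matrix
index = faiss.read_index("faiss_index.index")       # FAISS vector index
with open("processed_docs.json") as f:
    docs = json.load(f)                             # knowledge base documents
with open("kg_networkx.gpickle", "rb") as f:
    graph = pickle.load(f)                          # NetworkX knowledge graph

print(f"{index.ntotal} vectors indexed, embeddings shape {embeddings.shape}")
```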
- Start the server:
```bash
python server.py
```
The server will start on http://localhost:5000
- Open your browser and navigate to http://localhost:5000
- Enter your TensorFlow error message in the input field
- Click "Analyze Error" to get AI-powered debugging assistance
- Review the solution with:
  - Root cause analysis
  - Step-by-step fixes
  - Code examples
  - Confidence scores
- Provide feedback to help the system learn
```
POST /api/analyze
Content-Type: application/json

{
  "error_message": "Your TensorFlow error here",
  "session_id": "optional-session-id"
}
```

```
POST /api/feedback
Content-Type: application/json

{
  "session_id": "session-id-from-analyze",
  "worked": true,
  "new_error": "Optional additional info if it didn't work"
}
```

Other endpoints:
- GET /api/learning/stats
- POST /api/learning/apply
- GET /api/system/stats
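If you prefer to exercise the API outside the web UI, a call from Python could look like the following. This is a minimal sketch: it assumes the server is running locally on port 5000 and that the analyze response includes the session_id used by /api/feedback.

```python
# Minimal client sketch for /api/analyze and /api/feedback (response fields assumed).
import requests

BASE_URL = "http://localhost:5000"

# Ask the agent to analyze a TensorFlow error.
analysis = requests.post(
    f"{BASE_URL}/api/analyze",
    json={"error_message": "InvalidArgumentError: Incompatible shapes: [32,10] vs. [32,5]"},
    timeout=120,
).json()
print(analysis)

# Report whether the suggested fix worked so the system can learn from it.
requests.post(
    f"{BASE_URL}/api/feedback",
    json={"session_id": analysis.get("session_id"), "worked": True},
    timeout=30,
)
```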
Update the OPENROUTER_API_KEY in app_core.py with your OpenRouter API key:
```python
OPENROUTER_API_KEY = "your-api-key-here"
```
Get your free API key from OpenRouter.
Change the model in app_core.py:
```python
MODEL_NAME = "nvidia/nemotron-nano-9b-v2:free"  # or any other OpenRouter model
```
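To sanity-check the key and model outside the app, you can call OpenRouter's OpenAI-compatible chat completions endpoint directly. This is an illustrative standalone snippet, not the project's OpenRouterLLM client:

```python
# Standalone check that the configured OpenRouter key and model respond.
import requests

OPENROUTER_API_KEY = "your-api-key-here"
MODEL_NAME = "nvidia/nemotron-nano-9b-v2:free"

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {OPENROUTER_API_KEY}"},
    json={
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": "Reply with one word."}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```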
If your artifacts are in a different location, update:
```python
ARTIFACTS_DIR = "./path/to/artifacts"
```
- OpenRouterLLM - API client for language model interactions
- ReasoningPlanner - Plans reasoning strategy using ReAct framework
- EnhancedRetriever - Multi-stage retrieval with knowledge graph awareness
- AdvancedReActAgent - Main agent with reasoning and reflection
- FeedbackLearningSystem - Handles user feedback and system updates
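To make the ReAct flow concrete, the Thought-Action-Observation cycle can be pictured roughly as below. This is an illustrative sketch only; the helper names (llm.complete, retriever.search) are hypothetical, and the real AdvancedReActAgent in app_core.py is more involved:

```python
# Illustrative ReAct loop; names and signatures are hypothetical.
def react_debug(error_message, llm, retriever, max_steps=4):
    context = []
    for _ in range(max_steps):
        # Thought: reason about what information is still missing.
        thought = llm.complete(
            f"Error: {error_message}\nContext so far: {context}\nWhat should we look up next?"
        )
        # Action: query the retriever / knowledge graph based on that thought.
        observation = retriever.search(thought)
        # Observation: fold the retrieved evidence back into the working context.
        context.append({"thought": thought, "observation": observation})
        # Reflection: stop once the evidence looks sufficient.
        verdict = llm.complete(f"Is this enough to explain the error? {context}")
        if verdict.strip().lower().startswith("yes"):
            break
    # Final answer: root cause, step-by-step fixes, and a confidence estimate.
    return llm.complete(
        f"Explain the root cause and fixes for: {error_message}\nEvidence: {context}"
    )
```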
```
GraphMind/
├── app_core.py            # Core logic (no UI dependencies)
├── server.py              # Flask server with API endpoints
├── templates/
│   └── index.html         # Web interface
├── static/
│   ├── css/
│   │   └── style.css      # Styling
│   └── js/
│       └── app.js         # Frontend JavaScript
├── requirements.txt       # Python dependencies
├── README.md              # This file
├── embeddings.npy         # Document embeddings
├── faiss_index.index      # FAISS vector index
├── processed_docs.json    # Knowledge base documents
└── kg_networkx.gpickle    # Knowledge graph
```
The system continuously improves through:
- Feedback Collection - Users report if solutions worked
- ReAct Analysis - AI analyzes failures to extract insights
- Knowledge Base Updates - New documents added from validated feedback
- Graph Enhancement - Entity relationships strengthened
- Embedding Updates - FAISS index expanded with new knowledge
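As a rough picture of what one such update involves, here is an illustrative sketch, not the actual FeedbackLearningSystem code. It assumes processed_docs.json holds a list of documents and that embeddings come from a Sentence Transformer such as all-MiniLM-L6-v2 (both assumptions, not confirmed by this README):

```python
# Illustrative sketch: fold one validated feedback document into the artifacts.
import json
import pickle

import faiss
from sentence_transformers import SentenceTransformer

new_doc = {"text": "Incompatible shapes errors are often fixed by matching the final Dense layer size to the label shape."}

# 1. Embed the new document and append its vector to the FAISS index.
model = SentenceTransformer("all-MiniLM-L6-v2")            # assumed embedding model
vector = model.encode([new_doc["text"]]).astype("float32")
index = faiss.read_index("faiss_index.index")
index.add(vector)
faiss.write_index(index, "faiss_index.index")

# 2. Append the document to the knowledge base.
with open("processed_docs.json") as f:
    docs = json.load(f)
docs.append(new_doc)
with open("processed_docs.json", "w") as f:
    json.dump(docs, f)

# 3. Strengthen the knowledge graph with a new entity relationship.
with open("kg_networkx.gpickle", "rb") as f:
    graph = pickle.load(f)
graph.add_edge("InvalidArgumentError", "Dense layer", relation="often_caused_by")
with open("kg_networkx.gpickle", "wb") as f:
    pickle.dump(graph, f)
```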
Ensure all artifact files are in the correct directory:
```bash
ls -la *.npy *.index *.json *.gpickle
```
Change the port in server.py:
```python
app.run(debug=True, host='0.0.0.0', port=5001)  # Use different port
```
Check your OpenRouter API key is valid and has credits.
For large knowledge bases, you can switch Python to the system memory allocator:
```bash
export PYTHONMALLOC=malloc
```
The server runs in debug mode by default. Disable it for production:
```python
app.run(debug=False, host='0.0.0.0', port=5000)
```
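For a longer-running deployment you may also prefer a production WSGI server over Flask's built-in one. A minimal sketch using waitress (an extra dependency, not listed in requirements.txt; it also assumes server.py exposes the Flask app as app):

```python
# Optional: serve the app with a production WSGI server.
from waitress import serve  # pip install waitress

from server import app  # assumes server.py exposes the Flask app as `app`

serve(app, host="0.0.0.0", port=5000)
```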
- Core logic goes in app_core.py
- API endpoints go in server.py
- UI updates go in templates/index.html and static/
MIT License
- Powered by NVIDIA Nemotron via OpenRouter
- Built with Flask, FAISS, and NetworkX
- Uses Sentence Transformers for embeddings