A comprehensive note-taking application with integrated AI assistance powered by LM Studio for local AI inference.
NoteFlow+ is a full-stack note-taking application that provides intelligent writing assistance, grammar checking, text summarization, research capabilities, and more through a locally hosted AI model served by LM Studio.
project/
├── sandboxdemo/
│ ├── client/ # React frontend application
│ ├── server/ # Node.js backend API
│ └── README.md # Detailed setup guide
├── llm/ # LLM-related utilities
├── models/ # AI model storage
└── visualizations/ # Data visualization components
- No external API dependencies - All AI functionality runs locally
- Privacy-focused - Your data never leaves your machine
- Cost-effective - No per-token charges or API limits
- Customizable - Use any compatible LLM model
- Intelligent Chat Assistant - NoteFlow+ AI assistant for writing help
- Grammar & Spelling Check - Real-time text analysis and suggestions
- Text Summarization - Automatic content summarization
- Research Assistant - Topic research and insights
- Math Research Agent - Specialized mathematical assistance
- Style Suggestions - Writing style improvements
- Text Analysis - Comprehensive content analysis
- Node.js (v16 or higher)
- npm or yarn
- LM Studio - Download from https://lmstudio.ai/
- Download and install LM Studio
- Download a compatible model (recommended: Llama 2, Code Llama, or similar)
- Start the local server in LM Studio:
- Open LM Studio
- Go to the "Local Server" tab
- Load your preferred model
- Start the server on http://127.0.0.1:1234
- Ensure the server is running before starting the application
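To confirm the server is actually up before launching the app, you can probe LM Studio's OpenAI-compatible `/v1/models` endpoint. A minimal sketch (the base URL matches the default above; requires Node 18+ for the built-in `fetch`):

```javascript
// checkLmStudio.js - reports whether the LM Studio local server responds.
// Probes the OpenAI-compatible /v1/models endpoint that LM Studio exposes.
async function checkLmStudio(baseUrl = "http://127.0.0.1:1234") {
  try {
    const res = await fetch(`${baseUrl}/v1/models`);
    return res.ok;
  } catch {
    // Connection refused: LM Studio (or its local server tab) is not running.
    return false;
  }
}

checkLmStudio().then((up) =>
  console.log(up
    ? "LM Studio is reachable"
    : "LM Studio is not running - start its local server first")
);
```

Run it with `node checkLmStudio.js`; if it reports the server as unreachable, revisit the LM Studio setup steps above.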
```bash
git clone <repository-url>
cd project
cd sandboxdemo/client
npm install --legacy-peer-deps
cd ../server
npm install
```
Create .env files in both the client and server directories:
Client (.env):
```
VITE_API_URL=http://localhost:5000
VITE_LM_STUDIO_URL=http://127.0.0.1:1234
```
Server (.env):
```
PORT=5000
LM_STUDIO_URL=http://127.0.0.1:1234/v1
NODE_ENV=development
```
Terminal 1 - Start LM Studio:
- Open LM Studio application
- Load your preferred model
- Start local server on port 1234
Terminal 2 - Start Backend:
```bash
cd sandboxdemo/server
npm run dev
```
Terminal 3 - Start Frontend:
```bash
cd sandboxdemo/client
npm run dev
```
- Frontend: http://localhost:5173
- Backend API: http://localhost:5000
- LM Studio API: http://127.0.0.1:1234
- LM Studio Test: http://localhost:5173/lmstudio-test
- `POST /api/ai/generate` - Content generation
- `POST /api/ai/grammar-check` - Grammar and spelling analysis
- `POST /api/ai/summarize` - Text summarization
- `POST /api/ai/research` - Topic research
- `POST /api/ai/analyze` - Text analysis
- `POST /api/ai/style-suggestions` - Writing style improvements
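As an illustration, the grammar-check endpoint can be called with a plain `fetch`. The `text` request field and the response shape here are assumptions, not a documented contract; check `aiController.js` for the actual schema:

```javascript
// Hypothetical client call to the grammar-check endpoint.
// The request/response field names are illustrative assumptions.
const API_URL = "http://localhost:5000"; // matches VITE_API_URL above

function buildGrammarCheckRequest(text) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  };
}

async function checkGrammar(text) {
  const res = await fetch(
    `${API_URL}/api/ai/grammar-check`,
    buildGrammarCheckRequest(text)
  );
  if (!res.ok) throw new Error(`Grammar check failed: ${res.status}`);
  return res.json();
}
```

The other AI endpoints follow the same POST-JSON pattern, differing only in path and payload fields.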
- `GET /api/notes` - Get all notes
- `POST /api/notes` - Create new note
- `GET /api/notes/:id` - Get specific note
- `PUT /api/notes/:id` - Update note
- `DELETE /api/notes/:id` - Delete note
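A typical create-then-update round trip against the notes API might look like the sketch below. The `title`/`content`/`_id` field names are assumptions (the real note schema lives in the server models):

```javascript
const API_URL = "http://localhost:5000";

// Build a JSON request init for the notes API.
function jsonRequest(method, body) {
  return {
    method,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  };
}

// Create a note, then update it via the id the server returns.
// Field names (title, content, _id) are illustrative assumptions.
async function createAndUpdateNote() {
  const created = await fetch(
    `${API_URL}/api/notes`,
    jsonRequest("POST", { title: "Draft", content: "First pass" })
  ).then((r) => r.json());

  return fetch(
    `${API_URL}/api/notes/${created._id}`,
    jsonRequest("PUT", { content: "Revised pass" })
  ).then((r) => r.json());
}
```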
sandboxdemo/
├── client/src/
│ ├── components/
│ │ ├── ai-assistant/ # AI chat and assistance components
│ │ ├── math/ # Math research agent
│ │ └── ...
│ ├── services/
│ │ ├── lmStudioService.js # Main LM Studio integration
│ │ ├── bardService.js # Math-specific AI service
│ │ └── ...
│ └── pages/ # Main application pages
└── server/src/
├── controllers/
│ ├── aiController.js # AI endpoint handlers
│ └── ...
├── config/
│ ├── ai.js # AI configuration
│ └── ...
└── routes/ # API route definitions
- `sandboxdemo/client/src/services/lmStudioService.js` - Frontend AI service
- `sandboxdemo/server/src/controllers/aiController.js` - Backend AI controller
- `sandboxdemo/server/src/config/ai.js` - AI configuration
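At its core, talking to LM Studio means posting OpenAI-style chat completions to its local server. A minimal sketch of that pattern, not the project's actual `lmStudioService.js` (the system prompt and model name are placeholders; LM Studio serves whichever model is currently loaded):

```javascript
// Minimal sketch of an LM Studio chat call via its OpenAI-compatible API.
// LM_STUDIO_URL matches the server .env above.
const LM_STUDIO_URL = "http://127.0.0.1:1234/v1";

function buildChatPayload(userMessage) {
  return {
    // Placeholder name - LM Studio serves the loaded model regardless.
    model: "local-model",
    messages: [
      { role: "system", content: "You are the NoteFlow+ writing assistant." },
      { role: "user", content: userMessage },
    ],
    temperature: 0.7,
  };
}

async function chat(userMessage) {
  const res = await fetch(`${LM_STUDIO_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatPayload(userMessage)),
  });
  if (!res.ok) throw new Error(`LM Studio API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the API is OpenAI-compatible, swapping in a different local model requires no code changes - only loading a different model in LM Studio.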
1. LM Studio Connection Failed
```
Error: connect ECONNREFUSED 127.0.0.1:1234
```
- Ensure LM Studio is running and server is started
- Check that the model is loaded in LM Studio
- Verify the port number (default: 1234)
2. Model Loading Issues
- Ensure sufficient RAM for the model
- Try a smaller model if experiencing memory issues
- Check LM Studio logs for model loading errors
3. API Response Errors
```
Error: LM Studio API error
```
- Check LM Studio server status
- Verify model compatibility
- Review server logs for detailed error messages
- Model Selection: Use quantized models for better performance
- Memory Management: Monitor RAM usage, especially with larger models
- Response Times: Larger models provide better quality but slower responses
- Fork the repository
- Create a feature branch
- Make your changes
- Test with LM Studio integration
- Submit a pull request
[Add your license information here]
For issues and questions:
- Check the troubleshooting section
- Review LM Studio documentation
- Open an issue in the repository
Note: This application requires LM Studio to be running locally for AI functionality. Without LM Studio, the app will function as a regular note-taking application without AI features.