anushka369/MuseLens

MuseLens - Multimodal Design Assistant

A cloud-native multimodal design assistant that combines computer vision, natural language processing, and vector similarity search to help designers discover inspiration, analyze compositions, and generate design improvements.

Architecture

MuseLens consists of three main services:

  • Frontend: React 18 + TypeScript + Tailwind CSS + Vite
  • Backend: Node.js 18 + Express + TypeScript
  • AI Service: Python 3.10 + FastAPI + PyTorch + FAISS

Prerequisites

  • Node.js 18+
  • Python 3.10+
  • Docker and Docker Compose
  • npm or yarn

Quick Start

1. Install Dependencies

# Install all dependencies
npm run install:all

2. Set Up Environment Variables

# Backend
cp backend/.env.example backend/.env

# AI Service
cp ai-service/.env.example ai-service/.env

# Frontend
cp frontend/.env.example frontend/.env

3. Start LocalStack (AWS Services Emulation)

# Start LocalStack with S3, DynamoDB, and Cognito
docker-compose up -d localstack

# Wait for initialization (check logs)
docker-compose logs -f localstack

4. Start Services

Option A: Using Docker Compose (Recommended)

# Start all services
docker-compose up -d

# View logs
docker-compose logs -f

Option B: Run Services Individually

# Terminal 1: Backend
cd backend
npm run dev

# Terminal 2: AI Service
cd ai-service
uvicorn main:app --reload --port 5000

# Terminal 3: Frontend
cd frontend
npm run dev

Service URLs

  • Backend: http://localhost:4000
  • AI Service: http://localhost:5000
  • LocalStack: http://localhost:4566
  • Frontend: http://localhost:5173 (Vite dev server default; confirm in vite.config.ts)

Project Structure

muselens/
├── frontend/               # React frontend application
│   ├── src/
│   │   ├── components/    # React components
│   │   ├── services/      # API client services
│   │   └── test/          # Frontend tests
│   ├── package.json
│   └── vite.config.ts
│
├── backend/               # Node.js backend service
│   ├── src/
│   │   ├── config/       # AWS and service configuration
│   │   ├── services/     # Business logic services
│   │   ├── routes/       # API route handlers
│   │   └── utils/        # Utility functions
│   ├── package.json
│   └── tsconfig.json
│
├── ai-service/           # Python AI microservice
│   ├── main.py          # FastAPI application
│   ├── vector_store.py  # FAISS vector store
│   └── requirements.txt
│
├── scripts/             # Utility scripts
│   └── init-aws.sh     # LocalStack initialization
│
└── docker-compose.yml  # Docker Compose configuration

Development

Running Tests

# Frontend tests
npm run test:frontend

# Backend tests
npm run test:backend

# AI Service tests
npm run test:ai

# All tests
npm run test:all

Building for Production

# Frontend
npm run build:frontend

# Backend
npm run build:backend

AWS Services (LocalStack)

The project uses LocalStack to emulate AWS services locally:

  • S3: Design asset storage
  • DynamoDB: Metadata and session storage
  • Cognito: User authentication

DynamoDB Tables

  1. muselens-designs
     • Partition Key: designId
     • GSI: UserDesignsIndex (userId, timestamp)
  2. muselens-user-sessions
     • Partition Key: userId
     • Sort Key: sessionId
     • GSI: SessionActivityIndex (userId, lastActivity)
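To illustrate the schemas above, items in the two tables might look like the following sketch. Only the key and GSI attributes (designId; userId + sessionId; timestamp, lastActivity) come from the table definitions; the remaining fields are hypothetical examples:

```python
# Illustrative item shapes for the two DynamoDB tables.
# Key/GSI attributes come from the schema above; s3Key is a hypothetical
# example of an additional attribute.

design_item = {
    "designId": "d-001",                   # partition key
    "userId": "u-123",                     # UserDesignsIndex partition key
    "timestamp": "2024-01-01T00:00:00Z",   # UserDesignsIndex sort key
    "s3Key": "uploads/u-123/d-001.png",    # hypothetical attribute
}

session_item = {
    "userId": "u-123",                        # partition key
    "sessionId": "s-456",                     # sort key
    "lastActivity": "2024-01-01T00:05:00Z",   # SessionActivityIndex sort key
}
```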

S3 Bucket Structure

muselens-designs/
├── uploads/
│   └── {userId}/
│       └── {designId}.{ext}
└── exports/
    └── {userId}/
        └── {designId}/
            ├── tokens.css
            ├── tokens.json
            └── tokens.figma.json
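The bucket layout above maps to two key patterns. A minimal sketch of helpers that build these keys (function names are assumptions, not the backend's actual API):

```python
def upload_key(user_id: str, design_id: str, ext: str) -> str:
    """Build the S3 object key for an uploaded design asset."""
    return f"uploads/{user_id}/{design_id}.{ext}"


def export_key(user_id: str, design_id: str, filename: str) -> str:
    """Build the S3 object key for an exported token file."""
    return f"exports/{user_id}/{design_id}/{filename}"


print(upload_key("u-123", "d-001", "png"))
# uploads/u-123/d-001.png
print(export_key("u-123", "d-001", "tokens.css"))
# exports/u-123/d-001/tokens.css
```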

API Documentation

Backend API Endpoints

  • GET /health - Health check
  • POST /api/upload - Upload design asset
  • GET /api/search - Search for similar designs
  • POST /api/critique - Generate design critique
  • POST /api/palette - Extract color palette
  • POST /api/layout - Generate layout suggestions
  • GET /api/design/:id - Get design metadata
  • GET /api/clusters - Get design clusters
  • GET /api/export - Export design tokens
  • GET /api/history - Get user history

AI Service Endpoints

  • GET /health - Health check
  • POST /embed - Generate embeddings
  • POST /critique - Generate design critique
  • POST /palette - Extract color palette
  • POST /layout - Generate layout suggestions
  • POST /cluster - Cluster designs
  • GET /vector-store/stats - Get vector store statistics
  • POST /vector-store/add - Add vector to store
  • POST /vector-store/search - Search similar vectors
  • POST /vector-store/save - Save vector store to disk
  • POST /vector-store/load - Load vector store from disk
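The actual service extracts palettes with OpenCV and scikit-learn; purely to illustrate the idea behind POST /palette, here is a dependency-free sketch that quantizes pixel colors into coarse bins and returns the most common ones (the function name, bin step, and pixel format are assumptions):

```python
from collections import Counter


def extract_palette(pixels, n_colors=5, step=32):
    """Quantize RGB pixels to a coarse grid and return the most common bins."""
    quantized = [tuple((c // step) * step for c in px) for px in pixels]
    return [color for color, _ in Counter(quantized).most_common(n_colors)]


# A mostly-red image with a few blue pixels:
pixels = [(250, 10, 10)] * 8 + [(10, 10, 250)] * 2
print(extract_palette(pixels, n_colors=2))
# [(224, 0, 0), (0, 0, 224)]
```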

Technology Stack

Frontend

  • React 18
  • TypeScript
  • Tailwind CSS
  • Vite
  • React Router
  • AWS Amplify (Cognito integration)
  • Axios

Backend

  • Node.js 18
  • Express
  • TypeScript
  • AWS SDK v3 (S3, DynamoDB)
  • Multer (file uploads)
  • Winston (logging)
  • JWT authentication

AI Service

  • Python 3.10
  • FastAPI
  • PyTorch
  • CLIP (OpenAI)
  • OpenCV
  • FAISS
  • scikit-learn

Environment Variables

Backend (.env)

# AWS Configuration (LocalStack)
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test
AWS_ENDPOINT=http://localhost:4566

# S3 Configuration
S3_BUCKET_NAME=muselens-designs
S3_ENDPOINT=http://localhost:4566

# DynamoDB Configuration
DYNAMODB_DESIGNS_TABLE=muselens-designs
DYNAMODB_SESSIONS_TABLE=muselens-user-sessions
DYNAMODB_ENDPOINT=http://localhost:4566

# Cognito Configuration
COGNITO_USER_POOL_ID=us-east-1_test123
COGNITO_CLIENT_ID=test-client-id
COGNITO_REGION=us-east-1

# AI Service Configuration
AI_SERVICE_URL=http://localhost:5000

# Server Configuration
PORT=4000
NODE_ENV=development

# JWT Configuration
JWT_SECRET=your-secret-key-change-in-production
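The JWT_SECRET variable implies HMAC-signed (HS256) tokens. The backend presumably uses a library for this; purely as a sketch of what such a token contains, here is the HS256 construction in stdlib Python:

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per the JWT spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_jwt(payload: dict, secret: str) -> str:
    """Build a header.payload.signature token signed with HMAC-SHA256."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(),
                   hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"


token = sign_jwt({"sub": "u-123"}, "your-secret-key-change-in-production")
print(token.count("."))
# 2
```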

AI Service (.env)

# Model Configuration
CLIP_MODEL=openai/clip-vit-base-patch32
EMBEDDING_DIMENSION=512

# Vector Store Configuration
VECTOR_STORE_PATH=./data/vector_store.index
VECTOR_STORE_METADATA_PATH=./data/vector_metadata.json

# Server Configuration
HOST=0.0.0.0
PORT=5000

# Performance Configuration
MAX_BATCH_SIZE=32
DEVICE=cpu  # or 'cuda' for GPU
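MAX_BATCH_SIZE caps how many inputs are embedded per forward pass. A minimal sketch of the batching this implies (helper name is an assumption):

```python
def batches(items, max_batch_size=32):
    """Split a list of inputs into chunks no larger than max_batch_size."""
    return [items[i:i + max_batch_size]
            for i in range(0, len(items), max_batch_size)]


chunks = batches(list(range(70)), max_batch_size=32)
print([len(c) for c in chunks])
# [32, 32, 6]
```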

Frontend (.env)

# API Configuration
VITE_BACKEND_API_URL=http://localhost:4000
VITE_AI_SERVICE_URL=http://localhost:5000

# AWS Cognito Configuration
VITE_COGNITO_REGION=us-east-1
VITE_COGNITO_USER_POOL_ID=us-east-1_test123
VITE_COGNITO_CLIENT_ID=test-client-id

# Feature Flags
VITE_ENABLE_CLUSTERING=true
VITE_ENABLE_LAYOUT_GENERATION=true

Troubleshooting

Common Issues

1. LocalStack Not Starting

Problem: LocalStack container fails to start or AWS services are unavailable.

Solutions:

  • Check if port 4566 is already in use: lsof -i :4566
  • Ensure Docker is running: docker ps
  • Check LocalStack logs: docker-compose logs localstack
  • Restart LocalStack: docker-compose restart localstack
  • Clear LocalStack data: docker-compose down -v && docker-compose up -d localstack

2. DynamoDB Tables Not Created

Problem: Backend fails with "Table does not exist" error.

Solutions:

  • Run initialization script: ./scripts/init-aws.sh
  • Manually create the base table using the AWS CLI (note: this omits the GSIs described above; the init script creates the full schema):
    aws dynamodb create-table \
      --table-name muselens-designs \
      --attribute-definitions AttributeName=designId,AttributeType=S \
      --key-schema AttributeName=designId,KeyType=HASH \
      --billing-mode PAY_PER_REQUEST \
      --endpoint-url http://localhost:4566
  • Verify tables exist: aws dynamodb list-tables --endpoint-url http://localhost:4566

3. S3 Upload Failures

Problem: File uploads fail with S3 errors.

Solutions:

  • Verify S3 bucket exists: aws s3 ls --endpoint-url http://localhost:4566
  • Create bucket manually: aws s3 mb s3://muselens-designs --endpoint-url http://localhost:4566
  • Check CORS configuration on the bucket
  • Verify AWS credentials in backend .env file

4. AI Service Model Download Issues

Problem: AI service fails to start due to model download errors.

Solutions:

  • Ensure stable internet connection for initial model download
  • Models are cached in ~/.cache/huggingface/
  • Manually download models:
    python -c "from transformers import CLIPModel, CLIPProcessor; \
      CLIPModel.from_pretrained('openai/clip-vit-base-patch32'); \
      CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32')"
  • Check disk space (models require ~500MB)

5. CORS Errors in Frontend

Problem: Frontend shows CORS errors when calling backend API.

Solutions:

  • Verify backend CORS configuration allows frontend origin
  • Check that backend is running on correct port (4000)
  • Ensure VITE_BACKEND_API_URL in frontend .env matches backend URL
  • Clear browser cache and restart frontend dev server

6. Authentication Failures

Problem: Login/registration fails or tokens are invalid.

Solutions:

  • Verify Cognito user pool exists in LocalStack
  • Check Cognito configuration in both backend and frontend .env files
  • Ensure JWT_SECRET is set in backend .env
  • Clear browser local storage and try again
  • Check backend logs for authentication errors

7. Vector Search Returns No Results

Problem: Search functionality returns empty results.

Solutions:

  • Verify vector store is initialized: Check AI service logs
  • Ensure designs have been uploaded and embeddings generated
  • Check vector store statistics: GET http://localhost:5000/vector-store/stats
  • Restart AI service to reload vector store
  • Verify FAISS index file exists at configured path

8. High Memory Usage

Problem: AI service consumes excessive memory.

Solutions:

  • Reduce MAX_BATCH_SIZE in AI service .env
  • Use CPU instead of GPU if memory is limited
  • Restart AI service periodically
  • Monitor with: docker stats (if using Docker)

9. Port Conflicts

Problem: Services fail to start due to port already in use.

Solutions:

  • Check which process is using the port: lsof -i :<port>
  • Kill the process: kill -9 <PID>
  • Change port in respective .env file
  • Update the other services' configuration to point at the new port

10. Docker Build Failures

Problem: Docker images fail to build.

Solutions:

  • Clear Docker cache: docker system prune -a
  • Check Dockerfile syntax
  • Ensure all dependencies are listed in package.json/requirements.txt
  • Verify base images are accessible
  • Check disk space: df -h

Debugging Tips

Enable Verbose Logging

Backend:

# In backend/.env
LOG_LEVEL=debug

AI Service:

# Run with debug logging
uvicorn main:app --reload --port 5000 --log-level debug

Frontend:

# In browser console
localStorage.setItem('debug', 'muselens:*')

Check Service Health

# Backend health
curl http://localhost:4000/health

# AI Service health
curl http://localhost:5000/health

# LocalStack health
curl http://localhost:4566/_localstack/health

Monitor Logs

# All services
docker-compose logs -f

# Specific service
docker-compose logs -f backend
docker-compose logs -f ai-service
docker-compose logs -f frontend

# Backend (if running locally)
cd backend && npm run dev

# AI Service (if running locally)
cd ai-service && uvicorn main:app --reload --log-level debug

Reset Everything

If all else fails, reset the entire environment:

# Stop all services
docker-compose down -v

# Remove node_modules and reinstall
rm -rf node_modules frontend/node_modules backend/node_modules
npm run install:all

# Remove Python cache
cd ai-service && rm -rf __pycache__ .pytest_cache

# Restart from scratch
docker-compose up -d localstack
./scripts/init-aws.sh
docker-compose up -d

Performance Optimization

Production Recommendations

  1. Use GPU for AI Service: Set DEVICE=cuda in AI service .env for faster inference
  2. Enable Caching: Configure Redis for caching embeddings and search results
  3. Use CDN: Serve frontend static assets through CloudFront
  4. Database Optimization: Use DynamoDB on-demand billing or provisioned capacity based on usage
  5. Vector Store: Consider using managed vector databases (Pinecone, Weaviate) for production scale
  6. Load Balancing: Use Application Load Balancer for backend and AI service
  7. Monitoring: Set up CloudWatch dashboards and alarms

Scaling Considerations

  • Horizontal Scaling: Backend and AI service can be scaled independently using ECS
  • Vector Store: FAISS IndexFlatIP works well up to 100K vectors; consider IndexIVFFlat for larger datasets
  • S3: Scales automatically; no manual intervention needed
  • DynamoDB: Auto-scales with on-demand billing mode
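IndexFlatIP is an exact, brute-force inner-product search. As a point of reference for what that computation is (not the service's actual code), a dependency-free sketch:

```python
def top_k_inner_product(query, vectors, k=2):
    """Exact inner-product search: the computation FAISS IndexFlatIP performs.

    Scores every stored vector against the query and returns the indices
    of the k highest-scoring ones. IndexIVFFlat trades this exhaustive
    scan for an approximate search over clustered partitions.
    """
    scores = [(sum(q * v for q, v in zip(query, vec)), idx)
              for idx, vec in enumerate(vectors)]
    scores.sort(reverse=True)
    return [idx for _, idx in scores[:k]]


store = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(top_k_inner_product([1.0, 0.0], store, k=2))
# [0, 2]
```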

Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/my-feature
  3. Commit changes: git commit -am 'Add new feature'
  4. Push to branch: git push origin feature/my-feature
  5. Submit a pull request

Support

For issues and questions, please open an issue on the repository.

License

MIT
