Deployment link: https://neusearch-ei5c888y3-vandit98s-projects.vercel.app/
An e-commerce platform with an AI-powered shopping assistant that helps users find the right hair care products.
- Product Catalog: Browse 48+ hair care products scraped from Traya.health
- AI Chat Assistant: Ask questions like "I have dry scalp, what should I use?" and get personalized recommendations
- Product Details: View full product info including price, description, and features
- Frontend: React + TypeScript + Vite
- Backend: FastAPI + PostgreSQL (Supabase)
- AI: Google Gemini for chat and embeddings
- Search: pgvector for semantic product search
cd backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Add your credentials
echo "DATABASE_URL=your_supabase_url" > .env
echo "GEMINI_API_KEY=your_gemini_key" >> .env
# Run
uvicorn app.main:app --reload --port 8000

cd frontend
npm install
npm run dev

To populate the catalog, trigger the scraper:

curl -X POST http://localhost:8000/api/scraper/run

How the chat assistant works:

- User asks a question
- System searches products using text matching (or vector search if embeddings are available)
- Relevant products are sent to Gemini as context
- Gemini generates a helpful response with product recommendations
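The steps above can be sketched in Python. This is an illustrative version of the prompt-assembly step only; the product fields and function name are assumptions, not the project's actual code:

```python
# Sketch of packing retrieved products into the LLM prompt as context.
# Field names (name, price, description) are illustrative.

def build_prompt(query: str, products: list[dict]) -> str:
    """Format the top matching products as context for the assistant."""
    context = "\n".join(
        f"- {p['name']} (₹{p['price']}): {p['description']}" for p in products
    )
    return (
        "You are a hair care shopping assistant. Using only the products "
        "below, recommend what fits the user's need.\n\n"
        f"Products:\n{context}\n\nUser: {query}"
    )

products = [
    {"name": "Scalp Oil", "price": 499, "description": "Nourishes a dry scalp"},
]
prompt = build_prompt("I have dry scalp, what should I use?", products)
```

The resulting string is what gets sent to Gemini as the grounded context for its answer.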
| Endpoint | Description |
|---|---|
| GET /api/products | List products |
| GET /api/products/:id | Get product |
| POST /api/chat | Chat with assistant |
| POST /api/scraper/run | Scrape products |
Products are fetched from Traya.health's Shopify storefront API (/products.json), which returns structured JSON directly, so no HTML parsing is needed.
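A minimal sketch of normalizing that payload into flat product records. The input field names (`products`, `title`, `body_html`, `variants[].price`) follow Shopify's public /products.json schema; the output record shape is an assumption for illustration:

```python
import re

def normalize(payload: dict) -> list[dict]:
    """Flatten a Shopify /products.json payload into simple product records."""
    records = []
    for p in payload.get("products", []):
        # Take the first variant's price; Shopify returns prices as strings
        variant = (p.get("variants") or [{}])[0]
        records.append({
            "title": p.get("title", ""),
            "price": float(variant.get("price", 0) or 0),
            # body_html is HTML; strip tags to get a plain-text description
            "description": re.sub(r"<[^>]+>", "", p.get("body_html") or "").strip(),
        })
    return records

sample = {"products": [{"title": "Hair Vitamin",
                        "body_html": "<p>Daily gummy</p>",
                        "variants": [{"price": "499.00"}]}]}
records = normalize(sample)
```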
backend/
app/
api/ # Routes
models/ # Database models
services/ # Business logic
scraper/ # Product scraper
frontend/
src/
components/ # UI components
pages/ # Page views
services/ # API calls
The Render free tier provides only 512 MB of RAM, which isn't enough to load the sentence-transformers model (it needs ~800 MB with PyTorch). In production, the LLM therefore handles both the conversation and the product recommendations directly.
Locally, the suggestion workflow embeds the user's query, refines it, and then runs a semantic similarity match between the query embedding and the product embeddings to return the top 3 results.
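That local top-3 step can be sketched with plain cosine similarity. The toy 3-dimensional vectors and product names below are made up for illustration; in the real workflow the vectors come from the embedding model (or pgvector does this ranking in SQL):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec: list[float], product_vecs: dict, k: int = 3) -> list[str]:
    """Rank products by similarity to the query and keep the best k."""
    scored = [(name, cosine(query_vec, vec)) for name, vec in product_vecs.items()]
    return [name for name, _ in sorted(scored, key=lambda t: t[1], reverse=True)[:k]]

vecs = {"Scalp Oil": [0.9, 0.1, 0.0],
        "Shampoo":   [0.7, 0.3, 0.1],
        "Comb":      [0.0, 0.1, 0.9],
        "Serum":     [0.8, 0.2, 0.05]}
best = top_k([1.0, 0.0, 0.0], vecs, k=3)  # → ["Scalp Oil", "Serum", "Shampoo"]
```

pgvector performs the same ranking server-side (e.g. with the `<=>` cosine-distance operator), which avoids pulling every embedding into the application.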