Nurexcoder/zrift

Zrift

An AI-powered ecommerce search bot that understands natural language queries.

Instead of searching by exact keywords, Zrift understands what you mean:

"something pretty under $200 for my girl bestfriend"

It extracts intent (budget, vibe, occasion, recipient) and returns the most relevant products using vector similarity search.
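"Vector similarity" here boils down to comparing embedding vectors by angle. A minimal pure-Python cosine-similarity sketch with toy 3-dimensional vectors (real embeddings from text-embedding-3-small have 1536 dimensions, and the project uses ChromaDB's index rather than this brute-force computation):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors: the query's embedding sits closer to the perfume than to the drill.
query   = [0.9, 0.1, 0.0]
perfume = [0.8, 0.2, 0.1]
drill   = [0.1, 0.9, 0.3]

print(cosine_similarity(query, perfume) > cosine_similarity(query, drill))  # True
```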


How It Works

User Query
    ↓
Intent Extractor (gpt-4o-mini)
    → extracts: budget, vibe, category, occasion, recipient
    ↓
Embedding (text-embedding-3-small)
    → converts vibe keywords into a vector
    ↓
Vector Search (ChromaDB)
    → finds semantically similar products
    → applies metadata filters (price, stock, rating)
    ↓
Response Generator (gpt-4o-mini)
    → formats results into a natural, helpful reply

Tech Stack

| Layer           | Tool                          |
| --------------- | ----------------------------- |
| Backend         | FastAPI                       |
| Vector DB (dev) | ChromaDB                      |
| Vector DB (prod)| Qdrant                        |
| Embeddings      | OpenAI text-embedding-3-small |
| LLM             | OpenAI gpt-4o-mini            |
| Agent framework | LangChain → LangGraph         |
| Package manager | Poetry                        |

Project Structure

zrift/
├── app/
│   ├── main.py              # FastAPI app entry point
│   ├── routers/             # API route handlers
│   │   └── search.py        # /search and /chat endpoints
│   ├── models/              # Pydantic + DB models
│   └── core/
│       ├── intent.py        # Extract search intent from query
│       ├── vector_store.py  # ChromaDB connection
│       └── config.py        # App settings
├── scripts/
│   ├── fetch_products.py    # Fetch products from dummyjson API
│   ├── index_products.py    # Embed + store products in ChromaDB
│   └── test_search.py       # Test raw vector search
├── dataset/
│   └── products.json        # Product data
├── db/                      # ChromaDB local storage
├── plan/
│   └── PLAN.md              # Full project plan
├── .env                     # API keys (not committed)
├── .gitignore
└── pyproject.toml

Getting Started

1. Clone and install dependencies

git clone <repo-url>
cd zrift
poetry install

2. Set up environment variables

Create a .env file in the root:

OPENAI_API_KEY=sk-your-key-here

3. Fetch products

poetry run python scripts/fetch_products.py

4. Index products into ChromaDB

poetry run python scripts/index_products.py

5. Test raw search

poetry run python scripts/test_search.py

6. Run the API

poetry run uvicorn app.main:app --reload

API docs available at: http://localhost:8000/docs


Example Query

curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "something pretty under $200 for my girl bestfriend"}'

Response:

{
  "response": "Here are some great gift ideas for your bestfriend under $200...",
  "products": [
    {
      "title": "Chanel Coco Noir Eau De",
      "price": 129.99,
      "rating": 4.26,
      "thumbnail": "..."
    }
  ]
}

Build Plan

| Phase   | Goal                            | Status      |
| ------- | ------------------------------- | ----------- |
| Phase 1 | Working vector search           | In Progress |
| Phase 2 | Natural language chat bot       | Pending     |
| Phase 3 | Smarter bot with memory + retry | Pending     |
| Phase 4 | Image search (CLIP)             | Pending     |
| Phase 5 | Fine-tuned embeddings + Qdrant  | Pending     |

See plan/PLAN.md for full details.


Scripts Reference

# Fetch all products from dummyjson API
poetry run python scripts/fetch_products.py

# Index products into ChromaDB (run after fetching)
poetry run python scripts/index_products.py

# Test vector search directly (no LLM)
poetry run python scripts/test_search.py

# Test intent extractor
poetry run python app/core/intent.py
