Text Analyzer

Simple FastAPI + React project that analyzes text with an LLM-backed backend and a lightweight frontend for submitting and browsing analyses.

Prerequisites

  • Python 3.11+
  • Node.js 18+ (for local frontend development)
  • Docker (optional)

Run Standalone

Backend

  1. cd backend
  2. cp .env.example .env (optional: edit .env to add API keys)
  3. python -m venv .venv && source .venv/bin/activate
  4. pip install -r requirements.txt
  5. uvicorn app.main:app --reload --port 8000

The API lives at http://localhost:8000/api/v1. Interactive docs: http://localhost:8000/docs.

Environment variables (see .env.example):

  • OPENAI_API_KEY — uses OpenAI via LangChain if set
  • ANTHROPIC_API_KEY — uses Anthropic via LangChain if set
  • GOOGLE_API_KEY — uses Google Gemini via LangChain if set
  • If none of these keys is set, the backend falls back to a built-in MockLLM
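The precedence among providers when multiple keys are set isn't stated here; a minimal sketch of this kind of key-based provider selection (the function name and ordering are illustrative, not the project's actual code):

```python
import os

def pick_provider() -> str:
    """Choose an LLM backend from whichever API key is present.

    The precedence below is an assumption; the real service layer
    may order providers differently.
    """
    if os.getenv("OPENAI_API_KEY"):
        return "openai"
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic"
    if os.getenv("GOOGLE_API_KEY"):
        return "gemini"
    return "mock"  # MockLLM fallback when no keys are set
```

This keeps deployment flexible: the same image runs against any provider, or fully offline for tests, purely via environment configuration.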

Frontend

  1. cd frontend
  2. cp env.example .env (optional: edit if backend URL differs)
  3. npm install
  4. npm run dev

The app will be available at http://localhost:5173. See env.example for configuration options.

Run with Docker

Backend container

docker build -t llm-analyzer-backend ./backend
docker run --rm -p 8000:8000 llm-analyzer-backend

Frontend container

docker build -t llm-analyzer-frontend ./frontend
docker run --rm -p 5173:80 llm-analyzer-frontend

The frontend image serves the production build via Nginx. It expects the backend to be reachable from your browser at http://localhost:8000.

Design Choices

  • Clean layered architecture with clear separation between API handlers, business logic (services), and data access layers
  • FastAPI chosen for async capabilities, automatic OpenAPI documentation, and modern Python type hints
  • React + Vite provides lightweight frontend with fast development iteration
  • LangChain abstracts multiple LLM providers (OpenAI, Anthropic, Google Gemini) with MockLLM fallback for testing, enabling flexible deployment without vendor lock-in
  • NLTK handles deterministic noun extraction with POS tagging to meet non-LLM keyword extraction requirements
  • SQLAlchemy ORM ensures database portability from SQLite to Postgres
  • Structured logging (JSON output) and comprehensive error handling (graceful LLM failures, input validation) for production readiness
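The structured-logging bullet can be illustrated with a stdlib-only sketch; the field names are illustrative, and the project may well use a dedicated library instead:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("text_analyzer")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("analysis stored")  # one JSON object on stderr
```

One-object-per-line output is what log aggregators expect, which is why JSON formatting is usually preferred over free-form messages in production.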

Trade-offs & Time Constraints

  • SQLite over Postgres for simplicity, though SQLAlchemy abstraction makes migration trivial
  • No authentication/authorization as not specified in requirements
  • Minimal frontend styling to prioritize functionality over aesthetics
  • Simple confidence heuristic (keyword overlap) rather than sophisticated NLP approach
  • Focused test coverage on critical paths rather than exhaustive edge cases
  • Production features such as rate limiting, response caching, and comprehensive input sanitization are missing and would need to be added before deployment at scale
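The exact confidence formula isn't given in this README; a minimal sketch of what a keyword-overlap heuristic could look like (the function name and scoring are hypothetical, not the project's actual code):

```python
def overlap_confidence(keywords: list[str], text: str) -> float:
    """Hypothetical heuristic: fraction of extracted keywords that
    also appear as whole words in the source text (case-insensitive).
    """
    if not keywords:
        return 0.0
    words = set(text.lower().split())
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return hits / len(keywords)

overlap_confidence(["fastapi", "react"], "FastAPI and React project")
# → 1.0
```

A score like this is cheap and deterministic but blind to morphology and synonyms, which is exactly the trade-off the bullet above acknowledges.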
