Prototype platform to detect and mitigate malign information operations powered by large language models.
Combines advanced AI detection with multi-layered analysis:
- Ollama Semantic Analysis (40% weight) - Deep contextual risk assessment using local LLMs
- Hugging Face AI Detection (35% weight) - State-of-the-art AI-generated content detection
- Behavioral Analysis (15% weight) - Metadata, urgency, and manipulation tactics
- Stylometric Analysis (10% weight) - Linguistic fingerprinting and patterns
Plus threat graph intelligence, provenance checks, and federated sharing scaffolding.
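The four layer scores can be combined into one composite using the published weights; a minimal illustrative sketch (the real blending logic lives in the orchestrator under `app/services/` and may normalise or gate scores differently):

```python
# Illustrative blend of the four analysis layers using the weights above.
WEIGHTS = {
    "ollama_semantic": 0.40,
    "hf_ai_detection": 0.35,
    "behavioral": 0.15,
    "stylometric": 0.10,
}

def composite_risk(scores: dict) -> float:
    """Weighted average of per-layer risk scores, each in [0, 1]."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
```

A content item scoring high on semantic and AI-detection layers but low on stylometry still lands well above 0.7 overall, reflecting the heavier weighting of the first two layers.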
This project requires Python 3.11.x
Python 3.12+ is not supported due to FastAPI + Pydantic v1 compatibility issues. The codebase enforces this requirement at runtime.
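The runtime gate amounts to a fail-fast version check; a sketch of the kind of check performed (the actual enforcement in the codebase may differ):

```python
import sys

def check_python_version(version_info=sys.version_info) -> None:
    """Raise unless running on Python 3.11.x."""
    major, minor = version_info[0], version_info[1]
    if (major, minor) != (3, 11):
        raise RuntimeError(
            f"Python 3.11.x required, found {major}.{minor} "
            "(3.12+ breaks FastAPI + Pydantic v1 compatibility)"
        )
```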
macOS (Homebrew)

```bash
brew install python@3.11
```

macOS (pyenv)

```bash
brew install pyenv
pyenv install 3.11.9
pyenv local 3.11.9  # Uses .python-version file
```

Windows (Chocolatey)

```bash
choco install python --version=3.11.9 -y
```

Ubuntu/Debian

```bash
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
sudo apt install -y python3.11 python3.11-venv
```

Fedora/RHEL

```bash
sudo dnf install -y python3.11  # the venv module ships with Fedora's python3.11 package
```
```bash
# Linux
curl -fsSL https://ollama.com/install.sh | sh

# macOS
brew install ollama

# Windows: download from https://ollama.com/download/windows
```

Start Ollama and download the model:

```bash
# Start Ollama server
ollama serve

# In another terminal, download the recommended model
ollama pull llama3.2:3b
```

Detailed Ollama setup: see docs/OLLAMA_SETUP.md
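To confirm the pull succeeded you can query Ollama's local API (`/api/tags` on the default port 11434); a small helper, assuming the standard tags response shape:

```python
import json
import urllib.request

def parse_model_names(tags_json: str) -> list:
    """Extract model names from an Ollama /api/tags response body."""
    data = json.loads(tags_json)
    return [m.get("name", "") for m in data.get("models", [])]

def ollama_has_model(model: str, host: str = "http://localhost:11434") -> bool:
    """Return True if `model` appears in the local Ollama tag list."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        names = parse_model_names(resp.read().decode())
    return any(name.startswith(model) for name in names)
```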
```bash
# Clone the repo
git clone https://github.com/Team-ASHTOJ/TattvaDrishti.git
cd TattvaDrishti

# Create virtual environment with Python 3.11
python3.11 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\Activate.ps1

# Install dependencies
python -m pip install --upgrade pip
python -m pip install -r requirements.txt

# Create .env file (copy from example)
cp .env.example .env

# Start the backend server
uvicorn app.main:app --reload
```

The backend will be available at http://127.0.0.1:8000.
```bash
cd frontend

# Create environment file
echo 'NEXT_PUBLIC_API_BASE_URL=http://localhost:8000' > .env.local

# Install dependencies
npm install

# Start development server
npm run dev
```

The frontend will be available at http://localhost:3000.
```
TattvaDrishti/
├── app/                  # FastAPI backend
│   ├── main.py           # API routes and server
│   ├── config.py         # Settings and environment config
│   ├── schemas.py        # Pydantic models
│   ├── integrations/     # HuggingFace, Ollama clients
│   ├── models/           # Detection, graph, watermark engines
│   ├── services/         # Orchestrator
│   └── storage/          # SQLite database layer
├── frontend/             # Next.js dashboard
│   ├── app/              # Pages and layouts
│   ├── components/       # React components
│   └── lib/              # API client
├── templates/            # Jinja2 templates
├── tests/                # Unit tests
├── .python-version       # Python version for pyenv/asdf
├── requirements.txt      # Python dependencies
└── README.md             # This file
```
Create a .env file in the project root (optional):

```bash
APP_ENV=dev
DATABASE_URL=sqlite:///./data/app.db
WATERMARK_SEED=your-secret-seed
HF_MODEL_NAME=roberta-base-openai-detector
HF_TOKENIZER_NAME=roberta-base-openai-detector
HF_DEVICE=-1  # -1 for CPU, 0+ for GPU
OLLAMA_ENABLED=false
OLLAMA_MODEL=gpt-oss:20b
```

The frontend requires .env.local (already gitignored):

```bash
NEXT_PUBLIC_API_BASE_URL=http://localhost:8000
```

```bash
# Activate virtual environment
source .venv/bin/activate

# Run tests
pytest
```

- FastAPI 0.104.1 - Web framework
- Pydantic 1.10.13 - Data validation (v1 for Python 3.11 compatibility)
- Uvicorn 0.23.2 - ASGI server
- NetworkX 3.1 - Graph intelligence
- Transformers - HuggingFace models
- PyTorch - ML framework
- Jinja2 3.1.2 - Template engine
- Next.js 14.2.3 - React framework
- React 18.2.0
- Tailwind CSS 3.4.4 - Styling
- SWR 2.2.4 - Data fetching
- Ensure Python 3.11 is installed (see above)
- Clone the repo:

  ```bash
  git clone https://github.com/Team-ASHTOJ/TattvaDrishti.git
  cd TattvaDrishti
  ```

- Backend setup:

  ```bash
  python3.11 -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt
  ```

- Frontend setup:

  ```bash
  cd frontend
  echo 'NEXT_PUBLIC_API_BASE_URL=http://localhost:8000' > .env.local
  npm install
  ```
Terminal 1 - Backend:

```bash
cd TattvaDrishti
source .venv/bin/activate
uvicorn app.main:app --reload
```

Terminal 2 - Frontend:

```bash
cd TattvaDrishti/frontend
npm run dev
```

- Database file (`data/app.db`) is gitignored
- Environment files (`.env`, `.env.local`) are gitignored
- Virtual environments (`.venv`) are gitignored
- Node modules are gitignored
[Add your license here]
- Ensure Python 3.11.x is installed
- Create a feature branch
- Make your changes
- Run tests: `pytest`
- Submit a pull request
- Install Python 3.11 (see installation instructions above)
- Recreate your virtual environment with Python 3.11
- Verify Python version: `python --version` (should show 3.11.x)
- Reinstall dependencies: `pip install -r requirements.txt`
- Ensure backend is running on port 8000
- Check that `.env.local` has the correct `NEXT_PUBLIC_API_BASE_URL`
Built with ❤️ by Team ASHTOJ
```bash
cd frontend
npm install
npm run dev
```

The app expects the API to be reachable at http://localhost:8000 by default. To point at a different backend, set NEXT_PUBLIC_API_BASE_URL before running npm run dev:

```bash
NEXT_PUBLIC_API_BASE_URL=https://shield.example.com npm run dev
```

Key experiences showcased:
- Live ingestion form that posts to `/api/v1/intake`
- Real-time event stream over `/api/v1/events/stream`
- Case drill-down that hydrates via `/api/v1/cases/{intake_id}`
- One-click sharing package generation via `/api/v1/share`
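The intake endpoint can also be driven from Python; a sketch that builds the POST request with the standard library (the payload fields here are placeholders; see `samples/intake_example.json` for the real shape):

```python
import json
import urllib.request

def build_intake_request(base_url: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a POST to /api/v1/intake."""
    return urllib.request.Request(
        f"{base_url}/api/v1/intake",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it requires the backend to be running:
# with urllib.request.urlopen(build_intake_request("http://127.0.0.1:8000", payload)) as resp:
#     print(resp.read())
```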
```bash
curl -X POST http://127.0.0.1:8000/api/v1/intake \
  -H "Content-Type: application/json" \
  -d @samples/intake_example.json
```

- Model prep (offline-friendly): download once and cache locally.

  ```bash
  python -c "from transformers import AutoTokenizer, AutoModelForSequenceClassification; \
  tokenizer = AutoTokenizer.from_pretrained('roberta-base-openai-detector'); \
  model = AutoModelForSequenceClassification.from_pretrained('roberta-base-openai-detector')"
  ```
To point at a custom or fine-tuned model directory, set environment variables:

```bash
export HF_MODEL_NAME=/path/to/model
export HF_TOKENIZER_NAME=/path/to/model
export HF_DEVICE=0  # GPU id or -1 for CPU
```

- Run with GPU: ensure PyTorch detects your CUDA device (`python -c "import torch; print(torch.cuda.is_available())"`).
- Threshold tuning (default 0.6):

  ```bash
  export HF_SCORE_THRESHOLD=0.55
  ```
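The threshold is compared against the detector's AI-probability; a sketch of how a two-class detector's logits typically map to that score (the AI-class index is an assumption here, so verify it against the model's `id2label` config):

```python
import math

HF_SCORE_THRESHOLD = 0.6  # default; override via the env var above

def ai_score(logits, ai_index=0):
    """Softmax over two-class logits; return the AI-class probability.
    ai_index encodes an assumed label ordering."""
    exps = [math.exp(x) for x in logits]
    return exps[ai_index] / sum(exps)

def flags_as_ai(logits, threshold=HF_SCORE_THRESHOLD):
    """True when the AI-probability clears the configured threshold."""
    return ai_score(logits) >= threshold
```

Lowering the threshold (e.g. 0.55) catches more borderline content at the cost of more false positives.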
Enable qualitative risk scoring through a local Ollama model (e.g., mistral, codellama, or a fine-tuned guard model).
```bash
ollama pull mistral
export OLLAMA_ENABLED=true
export OLLAMA_MODEL=mistral
export OLLAMA_TIMEOUT=20  # seconds
```

The detector will prompt the model for a JSON risk rating (0-1) and blend it with heuristics/Hugging Face probabilities.
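Local models do not always return clean JSON, so the rating usually needs defensive parsing before blending; a sketch (the response format is an assumption; the actual contract lives in `app/integrations/`):

```python
import json
import re

def extract_risk(raw: str, default: float = 0.5) -> float:
    """Pull a 0-1 risk rating out of a model reply such as '{"risk": 0.8}'.
    Falls back to `default` when no usable number is found."""
    try:
        value = json.loads(raw).get("risk")
    except (json.JSONDecodeError, AttributeError):
        match = re.search(r"[01](?:\.\d+)?", raw)
        value = float(match.group()) if match else None
    if value is None:
        return default
    return min(1.0, max(0.0, float(value)))  # clamp into [0, 1]
```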
```bash
pytest
```

- `POST /api/v1/intake` – analyse content.
- `GET /api/v1/cases/{intake_id}` – retrieve stored case summary.
- `POST /api/v1/share` – generate a federated sharing package.
- `GET /api/v1/events/stream` – Server-Sent Events feed for live updates.
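Clients of the `/api/v1/events/stream` feed parse standard Server-Sent Events framing (`data:` lines separated by blank lines); a minimal parser sketch, independent of the backend's actual payloads:

```python
def parse_sse(stream_text: str):
    """Yield the data payload of each event in an SSE stream chunk."""
    for block in stream_text.split("\n\n"):
        data_lines = [
            line[len("data:"):].strip()
            for line in block.splitlines()
            if line.startswith("data:")
        ]
        if data_lines:
            yield "\n".join(data_lines)
```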
SQLite database stored at `data/app.db` (configurable via `DATABASE_URL`).
- Swap the Hugging Face model for a proprietary fine-tuned detector.
- Enrich graph intelligence with live social telemetry.
- Integrate blockchain-backed sharing receipts.
- Add automated remediation playbooks triggered by high-risk scores.