An advanced multi-agent AI-powered fact verification system that analyzes claims using state-of-the-art language models to determine their truthfulness, detect bias, and evaluate source credibility.
[Frontend Demo] (deploy link coming soon)
The primary user-facing interface showcasing the complete AI verification workflow with modern UI/UX design.
[API Docs] (deploy link coming soon)
Explore the FastAPI auto-generated documentation with Swagger UI. Test endpoints in real-time and view request/response schemas.
[Streamlit App](https://huggingface.co/spaces/vidushi-agarwal/ai-truth-engine)
A rapid-prototyping interface designed for experimentation and internal demonstration of AI workflows. Ideal for testing and validation.
- Intelligent Claim Extraction: Automatically extracts verifiable claims from noisy human text
- Multi-Source Verification: Cross-references claims against real-world knowledge
- Truth Scoring: Computes a comprehensive truth score (0-100) based on multiple factors
- Bias Detection: Identifies and classifies bias levels in reasoning
- Source Credibility Analysis: Evaluates the reliability of sources using domain reputation
- Modern UI: Beautiful, responsive interface with real-time results
- FastAPI Backend: Built with FastAPI for high-performance asynchronous operations
The system uses a multi-agent pipeline approach:
- Claim Extraction Agent: Converts raw user input into structured, verifiable claims
- Verification Agent: Validates claims against grounded web knowledge using AI
- Credibility Scoring Agent: Evaluates source reliability based on domain reputation
- Bias Detection Agent: Analyzes reasoning for potential bias
- Truth Score Aggregator: Combines all metrics into a final truth score
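The hand-off between these agents can be sketched as a simple pipeline. Every function below is a hypothetical stub (in the real backend each stage prompts the LLM); the point is the data flow, where each agent consumes the previous agent's output:

```python
# Illustrative orchestration of the five agents. All stubs and weight values
# here are placeholders, not the project's actual implementation.

def extract_claim(text: str) -> str:
    """Claim Extraction Agent (stub): normalize input to a declarative claim."""
    return text.strip().rstrip(".!?") + "."

def verify(claim: str) -> dict:
    """Verification Agent (stub): verdict, confidence, reasoning, sources."""
    return {"verdict": "TRUE", "confidence": 85, "reasoning": "...",
            "sources": [{"url": "https://www.nasa.gov", "credibility": "HIGH"}]}

def score_sources(sources: list) -> int:
    """Credibility Scoring Agent (stub): adjust score by source quality."""
    bonus = {"HIGH": 5, "MEDIUM": 0, "LOW": -5}
    return sum(bonus[s["credibility"]] for s in sources)

def detect_bias(reasoning: str) -> str:
    """Bias Detection Agent (stub)."""
    return "NEUTRAL"

def analyze(text: str) -> dict:
    """Truth Score Aggregator: combine all agent outputs into one score."""
    claim = extract_claim(text)
    verification = verify(claim)
    adjustment = score_sources(verification["sources"])
    bias = detect_bias(verification["reasoning"])
    penalty = {"NEUTRAL": 0, "SLIGHTLY_BIASED": 10, "HIGHLY_BIASED": 25}[bias]
    score = max(0, min(100, verification["confidence"] + adjustment - penalty))
    return {"extracted_claim": claim, "verification": verification,
            "bias_level": bias, "final_truth_score": score}
```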
- FastAPI: Modern, fast web framework for building APIs
- Cerebras Cloud SDK: Powers AI inference with Llama 3.3 70B model
- Pydantic: Data validation and settings management
- Python-dotenv: Environment variable management
- HTML5/CSS3: Modern semantic markup and styling
- Vanilla JavaScript: No framework overhead, pure performance
- Google Fonts (Inter): Clean, professional typography
- Streamlit: Rapid prototyping framework for data apps
- Requests: HTTP library for API communication
- Python-dotenv: Environment configuration
- Python 3.8 or higher
- pip (Python package manager)
- A Cerebras API key (get one at Cerebras Cloud)
git clone https://github.com/Vidushi-code/AI_Internet_Truth_Analyzer.git
cd "AI Internet Truth Engine(ui)"

# Navigate to backend directory
cd backend
# Create a virtual environment (recommended)
python -m venv venv
# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt

Create a `.env` file in the backend directory:
CEREBRAS_API_KEY=your_cerebras_api_key_here
Security Note: Never commit your `.env` file to version control. It's already included in `.gitignore`.
# From the backend directory
uvicorn main:app --reload --host 0.0.0.0 --port 8000

The API will be available at http://localhost:8000
Open frontend_ui/index.html in your web browser, or use a local server:
# Using Python's built-in server
cd frontend_ui
python -m http.server 3000

Then navigate to http://localhost:3000
For an alternative Streamlit-based interface:
# Navigate to streamlit_app directory
cd streamlit_app
# Install Streamlit dependencies
pip install -r requirements.txt
# Run the Streamlit app
streamlit run app.py

The Streamlit app will be available at http://localhost:8501
Tip: The Streamlit interface provides a cleaner, research-focused UI ideal for demonstrations and rapid testing.
1. Enter a Claim: Type or paste any claim you want to verify in the text area
   - Example: "The Earth revolves around the Sun"
   - Example: "Drinking 8 glasses of water daily is necessary"
2. Analyze: Click the "Analyze Truth" button or press Ctrl+Enter
3. Review Results: The system will display:
   - Truth Score: 0-100 rating of claim accuracy
   - Verdict: TRUE, FALSE, or UNCERTAIN
   - Bias Level: NEUTRAL, SLIGHTLY_BIASED, or HIGHLY_BIASED
   - Reasoning: Clear explanation of the verdict
   - Sources: List of references with credibility ratings
`GET /`: Health check endpoint.
Response:
{
"message": "AI Truth Engine Running"
}

`POST /analyze`: Analyzes a claim and returns verification results.
Request Body:
{
"text": "Your claim here"
}

Response:
{
"original_text": "Your original input",
"extracted_claim": "Extracted verifiable claim",
"verification": {
"verdict": "TRUE | FALSE | UNCERTAIN",
"confidence": 85,
"reasoning": "Explanation of the verdict",
"sources": [
{
"title": "Source Title",
"url": "https://example.com",
"credibility": "HIGH | MEDIUM | LOW"
}
]
},
"bias_level": "NEUTRAL | SLIGHTLY_BIASED | HIGHLY_BIASED",
"final_truth_score": 82,
"status": "COMPLETED"
}

FastAPI provides automatic interactive API documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
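The request and response schemas above map naturally onto Pydantic models (the backend uses Pydantic for validation). The classes below mirror the JSON shown; the actual class names in `backend/main.py` may differ:

```python
from typing import List
from pydantic import BaseModel

class AnalyzeRequest(BaseModel):
    text: str

class Source(BaseModel):
    title: str
    url: str
    credibility: str        # "HIGH" | "MEDIUM" | "LOW"

class Verification(BaseModel):
    verdict: str            # "TRUE" | "FALSE" | "UNCERTAIN"
    confidence: int
    reasoning: str
    sources: List[Source]

class AnalyzeResponse(BaseModel):
    original_text: str
    extracted_claim: str
    verification: Verification
    bias_level: str         # "NEUTRAL" | "SLIGHTLY_BIASED" | "HIGHLY_BIASED"
    final_truth_score: int
    status: str
```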
Test the backend API using curl:
curl -X POST "http://localhost:8000/analyze" \
-H "Content-Type: application/json" \
-d '{"text":"The Earth is flat"}'

AI Internet Truth Engine/
├── backend/
│   ├── main.py              # FastAPI application and core logic
│   ├── requirements.txt     # Python dependencies
│   └── .env                 # Environment variables (not in repo)
├── frontend_ui/
│   ├── index.html           # Main UI markup
│   ├── styles.css           # Styling and animations
│   ├── script.js            # Frontend logic
│   └── README.md            # Frontend documentation
├── streamlit_app/
│   ├── app.py               # Streamlit application
│   └── requirements.txt     # Streamlit dependencies
├── requirements.txt         # Root dependencies
├── .gitignore               # Git ignore rules
└── README.md                # This file
The system first processes raw user input to extract a single, clear, verifiable claim:
- Removes opinions and emotions
- Identifies the primary factual statement
- Rewrites as a concise declarative sentence
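An extraction step like this is typically driven by a system prompt. The template below is a hypothetical sketch of what such a prompt could look like, not the project's actual wording:

```python
# Hypothetical prompt template for the Claim Extraction Agent.
EXTRACTION_PROMPT = """\
You are a claim-extraction assistant. From the user text below:
1. Remove opinions and emotional language.
2. Identify the primary factual statement.
3. Rewrite it as one concise, verifiable declarative sentence.

User text: {user_text}
Claim:"""

prompt = EXTRACTION_PROMPT.format(user_text="I'm sure the Earth is flat!!!")
```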
Uses the Cerebras Llama 3.3 70B model to:
- Cross-reference the claim with grounded knowledge
- Determine verdict (TRUE/FALSE/UNCERTAIN)
- Provide confidence score
- Cite relevant sources
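A minimal sketch of such a verification call with the Cerebras Cloud SDK (the function name and prompt are illustrative; check the current Cerebras documentation for available model ids):

```python
MODEL = "llama-3.3-70b"  # Cerebras-hosted Llama 3.3 70B; verify against current docs

def verify_claim(claim: str, api_key: str) -> str:
    """Ask the model for a verdict. Requires `pip install cerebras_cloud_sdk`."""
    # Imported lazily so this module loads even without the SDK installed.
    from cerebras.cloud.sdk import Cerebras

    client = Cerebras(api_key=api_key)
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Fact-check the claim. Reply TRUE, FALSE, or UNCERTAIN "
                        "with reasoning and sources."},
            {"role": "user", "content": claim},
        ],
    )
    return resp.choices[0].message.content
```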
Evaluates sources based on domain reputation:
- HIGH: .gov, .edu, WHO, NASA, CDC, NIH, Mayo Clinic, etc.
- MEDIUM: BBC, Reuters, NYTimes, The Guardian
- LOW: All other domains
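This tiering can be expressed as a small lookup on the source's domain. The domain lists below are drawn from the tiers above; the exact domains and logic in the backend may differ:

```python
from urllib.parse import urlparse

# Illustrative tier lists based on the reputation tiers described above.
HIGH_DOMAINS = {"who.int", "nasa.gov", "cdc.gov", "nih.gov", "mayoclinic.org"}
MEDIUM_DOMAINS = {"bbc.com", "reuters.com", "nytimes.com", "theguardian.com"}

def credibility(url: str) -> str:
    """Map a source URL to a HIGH/MEDIUM/LOW credibility tier."""
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    if host.endswith((".gov", ".edu")) or host in HIGH_DOMAINS:
        return "HIGH"
    if host in MEDIUM_DOMAINS:
        return "MEDIUM"
    return "LOW"
```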
Analyzes the reasoning text to detect:
- Neutral language
- Slight bias indicators
- Highly biased explanations
Combines all factors:
Final Score = Base Confidence
+ Source Credibility Adjustment
- Bias Penalty
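As a concrete sketch of that formula (the penalty values here are hypothetical; the real weights live in `backend/main.py`):

```python
def final_truth_score(confidence: int, source_adjustment: int, bias_level: str) -> int:
    """Combine base confidence, source credibility, and bias into 0-100."""
    # Hypothetical penalty table for illustration only.
    bias_penalty = {"NEUTRAL": 0, "SLIGHTLY_BIASED": 10, "HIGHLY_BIASED": 25}
    raw = confidence + source_adjustment - bias_penalty[bias_level]
    return max(0, min(100, raw))  # clamp to the 0-100 range
```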
- API keys are stored in environment variables
- CORS is configured (update in production for specific domains)
- Input validation prevents malformed requests
- No sensitive data is logged
Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- API rate limits may apply based on your Cerebras subscription
- Source discovery depends on the model's training-data cutoff
- Add support for multiple AI providers (OpenAI, Google Gemini)
- Implement user authentication and history tracking
- Add multi-language support
- Create browser extension
- Implement caching for frequently verified claims
- Add export functionality (PDF, CSV)
For issues, questions, or suggestions:
- Open an issue on GitHub
- Contact: [Your contact information]
- Cerebras for providing powerful AI infrastructure
- FastAPI for the excellent web framework
- Open-source community for continuous inspiration
This tool is designed to assist with fact-checking but should not be considered 100% accurate. Always cross-reference important claims with multiple authoritative sources. The system's accuracy depends on the AI model's training data and may not reflect the most recent information.
Made with ❤️ by Vidushi
⭐ Star this repo if you find it helpful!