TrustLens is an explainable and trust-aware AI prototype that classifies images while revealing how confident the model really is. Built for transparency and reliability in AI decision-making.
- Image Classification: CNN trained on CIFAR-10 with 10 classes
- Uncertainty Quantification: Entropy-based confidence scoring
- Anomaly Detection: Multiple methods including ODIN and Mahalanobis distance
- Explainable AI: Grad-CAM visualizations showing decision reasoning
- Trust Calibration: Temperature scaling for confidence calibration
- Interactive Interface: Streamlit-based web application
- Real-time Analysis: Upload images and get instant predictions with explanations
- Python 3.8+
- PyTorch
- CUDA (optional, for GPU acceleration)
- Clone the repository:

```bash
git clone https://github.com/Leptons1618/TrustNet.git
cd TrustNet
```

- Create a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Download CIFAR-10 data (it will be downloaded automatically on first run)
- Launch the app:

```bash
streamlit run app.py
```

The application will open in your default web browser at http://localhost:8501.
- Upload any image or select from sample images
- Get predicted class with confidence score
- View preprocessing steps
- Entropy Score: Measures prediction uncertainty
- ODIN Score: Out-of-distribution detection
- Mahalanobis Distance: Anomaly detection based on feature space
- Calibrated Confidence: Temperature-scaled probability
- Grad-CAM Heatmaps: Visual explanation of model decisions
- Feature Importance: Which parts of the image matter most
- Decision Reasoning: Step-by-step explanation
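For a network whose head is global average pooling followed by a linear layer, the Grad-CAM channel weights reduce to that layer's class weights, so the heatmap can be sketched without autograd. This is a hedged illustration under that architectural assumption (the function name is made up; the project's actual `gradcam.py` hooks into PyTorch activations and gradients):

```python
import numpy as np

def gradcam_heatmap(feature_maps: np.ndarray, class_weights: np.ndarray,
                    class_idx: int) -> np.ndarray:
    """Grad-CAM sketch for a GAP + linear head.

    feature_maps:  (K, H, W) activations from the last conv layer.
    class_weights: (num_classes, K) weights of the linear layer after GAP.
    With this head, d(score_c)/d(A_k) is constant and proportional to
    class_weights[c, k], so the map is ReLU(sum_k w_ck * A_k).
    """
    cam = np.tensordot(class_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)      # ReLU keeps only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()       # normalize to [0, 1] for overlay display
    return cam
```

Upsampled to the input resolution, the map highlights which image regions pushed the score of the chosen class up.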
- Real-time trust metrics
- Historical analysis
- Confidence calibration plots
```
TrustNet/
├── src/
│   ├── models/
│   │   ├── __init__.py
│   │   ├── cnn.py              # CNN architecture
│   │   └── trust_model.py      # Trust-aware model wrapper
│   ├── trust_methods/
│   │   ├── __init__.py
│   │   ├── entropy.py          # Entropy-based uncertainty
│   │   ├── odin.py             # ODIN method
│   │   ├── mahalanobis.py      # Mahalanobis distance
│   │   └── temperature.py      # Temperature scaling
│   ├── utils/
│   │   ├── __init__.py
│   │   ├── data_loader.py      # Data preprocessing
│   │   ├── gradcam.py          # Grad-CAM implementation
│   │   └── visualization.py    # Plotting utilities
│   └── app.py                  # Main Streamlit application
├── assets/                     # Static files and sample images
├── models/                     # Trained model files
├── tests/                      # Unit tests
├── cookbook/                   # Jupyter notebooks for development
├── requirements.txt
├── README.md
└── .gitignore
```
Measures prediction uncertainty using Shannon entropy:
H(p) = -Σ p_i * log(p_i)
- Low entropy = High confidence
- High entropy = High uncertainty
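The entropy score can be sketched in a few lines of NumPy (the function name is illustrative, not the actual `src/trust_methods/entropy.py` API):

```python
import numpy as np

def entropy_score(probs: np.ndarray, eps: float = 1e-12) -> float:
    """Shannon entropy of a softmax output; higher means more uncertain."""
    p = np.clip(probs, eps, 1.0)   # avoid log(0)
    return float(-np.sum(p * np.log(p)))

# A peaked distribution (confident) scores low; a uniform one scores
# the maximum, log(K) -- about 2.30 for the 10 CIFAR-10 classes.
```

Dividing by `log(K)` gives a normalized score in [0, 1] that is easy to threshold in a dashboard.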
Uses input preprocessing and temperature scaling to detect anomalous inputs:
- Applies small perturbations to inputs
- Uses temperature scaling on logits
- Effective for detecting domain shift
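The steps above can be sketched on a linear classifier, where the input gradient has a closed form, so no autograd is needed. This is a hedged illustration with made-up parameter values (the project's `odin.py` applies the same recipe to the CNN via backpropagation):

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def odin_score(x, W, b, temperature=1000.0, epsilon=0.001) -> float:
    """ODIN score for a linear model with logits = W @ x + b.

    1) Temperature-scale the logits.
    2) Perturb the input against the gradient of the predicted class's NLL.
    3) Return the max softmax probability on the perturbed input.
    In-distribution inputs tend to score higher than OOD ones.
    """
    p = softmax((W @ x + b) / temperature)
    pred = int(np.argmax(p))
    onehot = np.zeros_like(p)
    onehot[pred] = 1.0
    # Closed-form gradient of -log p_pred w.r.t. x for a linear model.
    grad = (W.T @ (p - onehot)) / temperature
    x_pert = x - epsilon * np.sign(grad)   # FGSM-style step that *helps* the prediction
    return float(softmax((W @ x_pert + b) / temperature).max())
```

Thresholding this score separates in-distribution from out-of-distribution inputs; `temperature` and `epsilon` are tuned on a validation split.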
Measures distance from class centroids in feature space:
- Computes distance to nearest class centroid
- Uses feature representations from penultimate layer
- Effective for detecting novel classes
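A minimal NumPy sketch of this score over penultimate-layer features, using per-class means and a shared (tied) covariance (illustrative names; the project's `mahalanobis.py` fits these statistics on training features):

```python
import numpy as np

def fit_mahalanobis(features: np.ndarray, labels: np.ndarray):
    """Fit per-class centroids and a shared covariance on training features."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    centered = features - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / len(features)
    # Small ridge term keeps the inverse stable for near-singular covariances.
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def mahalanobis_score(f: np.ndarray, means: np.ndarray,
                      precision: np.ndarray) -> float:
    """Distance to the nearest class centroid; larger suggests a novel input."""
    diffs = means - f
    d2 = np.einsum('ij,jk,ik->i', diffs, precision, diffs)
    return float(np.sqrt(d2.min()))
```

Inputs far from every class centroid (in the whitened feature space) get a large score and can be flagged as anomalous.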
Calibrates model confidence to match actual accuracy:
- Post-processing technique
- Learns optimal temperature parameter
- Improves reliability of confidence scores
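The fit can be sketched as a one-parameter search minimizing validation NLL. This is a simplified stand-in for the usual gradient-based (e.g. LBFGS) fit, with illustrative names:

```python
import numpy as np

def nll(logits: np.ndarray, labels: np.ndarray, T: float) -> float:
    """Mean negative log-likelihood of the labels at temperature T."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

def fit_temperature(logits: np.ndarray, labels: np.ndarray) -> float:
    """Pick the temperature minimizing NLL on held-out logits.

    T > 1 softens overconfident predictions; since all logits are divided
    by the same scalar, the argmax (and thus accuracy) is unchanged.
    """
    grid = np.linspace(0.25, 10.0, 400)
    return float(grid[np.argmin([nll(logits, labels, T) for T in grid])])
```

After fitting on a validation set, serve `softmax(logits / T)` instead of `softmax(logits)` as the calibrated confidence.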
- Clean Accuracy: ~85%
- Domain-Shifted Accuracy: ~70%
- Calibration Error: <5% (after temperature scaling)
- Entropy AUC: 0.85+ for OOD detection
- ODIN AUC: 0.90+ for OOD detection
- Mahalanobis AUC: 0.88+ for OOD detection
- Quality Inspection: Detect defective products with confidence scores
- Medical Triage: Classify medical images with uncertainty quantification
- Security Screening: Identify suspicious items with explainable decisions
- Autonomous Systems: Make safety-critical decisions with trust metrics
Run the unit tests:

```bash
pytest tests/
```

Train the model:

```bash
python src/train.py --config config/train_config.yaml
```

Explore the cookbook/ directory for development notebooks and experiments.
- Support for additional datasets (ImageNet, custom datasets)
- Ensemble methods for improved trust
- Bayesian neural networks
- Active learning integration
- Mobile app deployment
- REST API for integration
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- CIFAR-10 dataset creators
- PyTorch team for the deep learning framework
- Streamlit team for the web app framework
- Research papers on uncertainty quantification and explainable AI
For questions or support, please open an issue on GitHub or contact anishgiri163@gmail.com.
Made with ❤️ for transparent and trustworthy AI