AnikS22/DR_Project

Diabetic Retinopathy Early Detection System

A professional iOS application designed to make early detection of diabetic retinopathy more accessible through real-time, on-device machine learning. This app enables healthcare providers and patients to perform preliminary retinal screening using standard iOS devices, helping identify diabetic retinopathy in its early stages when treatment is most effective.

Mission

Making Early Detection Accessible: Diabetic retinopathy is a leading cause of blindness, but early detection and treatment can prevent up to 90% of vision loss. This application aims to democratize access to early screening by bringing advanced AI-powered detection capabilities to standard mobile devices, making it available in underserved communities and remote areas where specialized equipment may not be readily available.

Overview

This iOS application leverages Core ML and real-time camera processing to perform instant classification of retinal images. The system can identify diabetic retinopathy across five severity levels, providing immediate feedback to help guide clinical decision-making and patient care.

Classification Categories

  • No Diabetic Retinopathy Detected: Healthy retina with no signs of diabetic retinopathy
  • Early Stage DR Detected - Stage 1: Mild non-proliferative diabetic retinopathy
  • Moderate DR Detected - Stage 2: Moderate non-proliferative diabetic retinopathy
  • Severe DR Detected - Stage 3: Severe non-proliferative diabetic retinopathy
  • Proliferative DR Detected - Stage 4: Advanced proliferative diabetic retinopathy
  • Image Quality Feedback: Guidance for optimal image capture
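These categories line up with the class folders in the training dataset (DR_Healthy, DR_1 through DR_4, and Zoom_In). A minimal sketch of how model output labels might map to the user-facing messages above (the exact label strings and type name are assumptions; match them to the actual .mlmodel class labels):

```swift
import Foundation

// Hypothetical mapping from model output labels to user-facing results.
// The raw values mirror the training folder names in "Images 2/".
enum DRResult: String {
    case healthy = "DR_Healthy"
    case stage1  = "DR_1"
    case stage2  = "DR_2"
    case stage3  = "DR_3"
    case stage4  = "DR_4"
    case zoom    = "Zoom_In"

    var message: String {
        switch self {
        case .healthy: return "No Diabetic Retinopathy Detected"
        case .stage1:  return "Early Stage DR Detected - Stage 1"
        case .stage2:  return "Moderate DR Detected - Stage 2"
        case .stage3:  return "Severe DR Detected - Stage 3"
        case .stage4:  return "Proliferative DR Detected - Stage 4"
        case .zoom:    return "Adjust zoom for a clearer retinal image"
        }
    }
}
```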

Key Features

🎯 Early Detection Focus

  • Real-time Analysis: Instant classification as images are captured
  • On-Device Processing: Privacy-preserving, no data transmission required
  • Accessible Technology: Works on standard iOS devices with built-in cameras

📱 User Experience

  • Intuitive Interface: Clean, professional design optimized for clinical use
  • Clear Visual Feedback: Color-coded diagnostic results for quick interpretation
  • Live Camera Preview: Real-time view of retinal images being analyzed

🤖 Advanced AI

  • Core ML Integration: Optimized machine learning models for mobile devices
  • Multiple Model Support: Flexible architecture supporting different model versions
  • Continuous Improvement: Framework supports model updates and enhancements

🔒 Privacy & Security

  • Local Processing: All analysis performed on-device
  • No Data Transmission: Images never leave the device
  • HIPAA Considerations: Designed with healthcare privacy requirements in mind

Requirements

System Requirements

  • iOS: 16.0 or later
  • Device: iPhone or iPad with camera
  • Storage: Minimal space required (app + models)
  • Camera Permission: Required for image capture

Development Requirements

  • Xcode: Latest version (15.0+ recommended)
  • Swift: 5.0 or later
  • Apple Developer Account: For device deployment

Installation

For End Users

  1. Download from App Store (when available)
    • Search for "Diabetic Retinopathy Early Detection"
    • Install and launch the application
    • Grant camera permissions when prompted

For Developers

  1. Clone the Repository

    git clone <repository-url>
    cd DR_Project
  2. Open in Xcode

    open VideoContinuousImageDetection.xcodeproj
  3. Configure Project Settings

    • Update Development Team in project settings
    • Verify bundle identifier: com.gprof.DR2
    • Ensure camera permissions are configured
  4. Build and Run

    • Select target device (iPhone/iPad)
    • Press Cmd + R to build and run
    • Test on physical device for camera functionality

Usage Guide

Getting Started

  1. Launch Application: Open the app on your iOS device

  2. Grant Permissions: Allow camera access when prompted

  3. Position Retinal Image:

    • Use with fundus camera or retinal imaging device
    • Ensure proper lighting and image clarity
    • Position image within camera frame
  4. Start Analysis:

    • Tap "Start Analysis" button
    • Camera feed will begin processing frames
    • Results appear in real-time
  5. Interpret Results:

    • Green: No diabetic retinopathy detected
    • Orange: Early stage detection (Stage 1)
    • Red: Moderate to severe detection (Stages 2-4)
    • Blue: Image quality feedback
  6. Stop Analysis: Tap "Stop Analysis" when finished

Best Practices

  • Image Quality: Ensure retinal images are clear and well-lit
  • Positioning: Maintain consistent distance and angle
  • Multiple Readings: Consider multiple analyses for confirmation
  • Professional Consultation: Always consult with ophthalmologists for final diagnosis

Project Structure

DR_Project/
├── VideoContinuousImageDetection/          # Main iOS application
│   ├── ContentView.swift                   # Main UI and camera integration
│   ├── VideoContinuousImageDetectionApp.swift  # Application entry point
│   ├── Assets.xcassets/                    # Application assets and icons
│   ├── DR_Image_Classifer 1.mlmodel       # Alternative ML model
│   └── DRDetection 1.mlmodel              # Alternative ML model
│
├── Images 2/                               # Training dataset
│   ├── DR_1/                               # Stage 1 DR images (50+)
│   ├── DR_2/                               # Stage 2 DR images (50+)
│   ├── DR_3/                               # Stage 3 DR images (50)
│   ├── DR_4/                               # Stage 4 DR images (50)
│   ├── DR_Healthy/                         # Healthy retinal images (50)
│   └── Zoom_In/                            # Quality control images (47)
│
├── Images_2024_0509.mlproj/                # Core ML training project
│   ├── Models/                             # Trained model files
│   ├── Data Sources/                       # Training data configuration
│   └── Checkpoints/                        # Model training checkpoints
│
└── VideoContinuousImageDetection.xcodeproj/  # Xcode project configuration

Technical Architecture

Framework & Technologies

  • SwiftUI: Modern declarative UI framework
  • AVFoundation: Camera capture and video processing
  • Core ML: On-device machine learning inference
  • Vision Framework: Image processing capabilities

Model Configuration

The application currently uses the Images_2024_0509_1 model, trained with Create ML on the project's curated retinal image dataset (see Model Training below). The model architecture supports:

  • Real-time inference on mobile devices
  • Low latency processing (< 100ms per frame)
  • High accuracy classification across DR stages
  • Efficient memory usage
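Loading the bundled model for real-time inference might look like the following sketch. The class name `Images_2024_0509_1` follows Xcode's auto-generated Core ML interface for the model file; the configuration and Vision wrapping are standard Core ML APIs, but the completion handler here is illustrative:

```swift
import CoreML
import Vision

// Load the compiled Core ML model and wrap it for Vision-based inference.
// `Images_2024_0509_1` is the class Xcode auto-generates from the .mlmodel file.
func makeClassificationRequest() throws -> VNCoreMLRequest {
    let config = MLModelConfiguration()
    config.computeUnits = .all   // allow Neural Engine / GPU where available

    let coreMLModel = try Images_2024_0509_1(configuration: config).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        // Map top.identifier to a user-facing message and publish to the UI.
        print("\(top.identifier): \(top.confidence)")
    }
    request.imageCropAndScaleOption = .centerCrop
    return request
}
```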

Camera Processing Pipeline

  1. Capture: Continuous video frame capture via AVFoundation
  2. Preprocessing: Frame extraction and pixel buffer conversion
  3. Inference: Core ML model prediction on each frame
  4. Post-processing: Result mapping to user-friendly messages
  5. Display: Real-time UI updates with diagnostic feedback
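Steps 1-3 of the pipeline correspond roughly to a sample-buffer delegate like this (a simplified sketch; the actual `VideoCapture` implementation and its `classificationRequest` property may differ):

```swift
import AVFoundation
import Vision

// Sketch of the per-frame path: each captured frame is handed to Vision,
// which runs the Core ML model and reports results via the request's
// completion handler (post-processing and display happen there).
extension VideoCapture: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
        try? handler.perform([classificationRequest])
    }
}
```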

Performance Optimization

  • Background Processing: ML inference on dedicated queue
  • Frame Dropping: Late frames discarded to maintain responsiveness
  • Memory Management: Efficient buffer handling
  • Battery Optimization: Optimized processing to minimize power consumption
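The background-processing and frame-dropping behaviors above are standard AVFoundation configuration; a sketch (the queue label and `videoCapture` delegate are illustrative names):

```swift
import AVFoundation

// Sketch of the capture-output configuration behind the optimizations above.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.alwaysDiscardsLateVideoFrames = true   // drop late frames to stay responsive

// Run ML inference off the main thread on a dedicated serial queue.
let inferenceQueue = DispatchQueue(label: "dr.inference", qos: .userInitiated)
videoOutput.setSampleBufferDelegate(videoCapture, queue: inferenceQueue)
```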

Model Training

Training Dataset

The models were trained using Create ML on a curated dataset of retinal images:

  • Total Images: 300+ annotated retinal images
  • Distribution: Balanced across all DR stages
  • Quality Control: Images validated for diagnostic accuracy
  • Augmentation: Data augmentation techniques applied

Retraining Models

To update or retrain models:

  1. Open Images_2024_0509.mlproj in Xcode
  2. Import updated training data from Images 2/
  3. Configure training parameters
  4. Train and validate model performance
  5. Export new .mlmodel file
  6. Replace model in application bundle

Development

Code Organization

  • MARK Comments: Organized code sections for navigation
  • Documentation: Comprehensive inline documentation
  • Error Handling: Robust error handling throughout
  • Type Safety: Strong typing and Swift best practices

Key Components

VideoCapture Class

  • Manages camera session lifecycle
  • Handles real-time frame processing
  • Coordinates ML model inference
  • Publishes results for UI updates

ContentView

  • Main user interface
  • Camera preview integration
  • Diagnostic result display
  • User interaction handling

Adding Features

The codebase is structured to support:

  • Additional ML models
  • Enhanced UI components
  • Export functionality
  • Historical tracking
  • Confidence score display

Permissions

Required Permissions

  • Camera Access (NSCameraUsageDescription)
    • Purpose: Capture retinal images for analysis
    • Usage: Real-time video frame processing
    • Privacy: All processing occurs on-device
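The camera permission is declared in the app's Info.plist; the usage string shown here is illustrative wording, not necessarily the app's exact text:

```xml
<!-- Info.plist entry for camera access -->
<key>NSCameraUsageDescription</key>
<string>The camera is used to capture retinal images for on-device analysis.</string>
```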

Medical Disclaimer

⚠️ IMPORTANT MEDICAL DISCLAIMER

This application is intended for screening and educational purposes only. It is NOT a replacement for professional medical diagnosis, evaluation, or treatment.

  • Not a Diagnostic Tool: This app provides preliminary screening results only
  • Professional Consultation Required: All results must be reviewed by qualified ophthalmologists
  • No Medical Advice: The application does not provide medical advice or treatment recommendations
  • Limitations: Results may not be accurate in all cases and should not be used as the sole basis for medical decisions
  • Regulatory Status: This application is for research and educational use

Always consult with qualified healthcare professionals for proper diagnosis and treatment of diabetic retinopathy or any other medical condition.

Contributing

We welcome contributions that improve accessibility, accuracy, and usability. Areas of interest:

  • Model accuracy improvements
  • User experience enhancements
  • Accessibility features
  • Documentation improvements
  • Performance optimizations
  • Localization support

Contribution Guidelines

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes with proper documentation
  4. Test thoroughly on physical devices
  5. Submit a pull request with detailed description

Authors

  • Danika Gupta - Core Development, ML Integration
  • Amit Gupta - Application Architecture, UI/UX

Acknowledgments

  • Medical professionals who provided guidance on diabetic retinopathy classification
  • Research community for open datasets and methodologies
  • Apple for Core ML and SwiftUI frameworks

Future Roadmap

Planned Enhancements

  • Confidence scores and probability distributions
  • Image capture and save functionality
  • Historical analysis tracking
  • Export results with metadata
  • Enhanced UI/UX with accessibility features
  • Model accuracy metrics and calibration
  • Support for photo library import
  • Multi-language support
  • Integration with electronic health records
  • Telemedicine capabilities

Research Directions

  • Improved model accuracy through larger datasets
  • Faster inference times
  • Support for additional retinal conditions
  • Integration with wearable devices
  • Cloud-based model updates

License

[Specify license - Consider MIT, Apache 2.0, or proprietary license]

Support & Contact

For technical support, feature requests, or medical inquiries:

  • Technical Issues: [GitHub Issues]
  • Medical Questions: Consult with qualified healthcare professionals
  • General Inquiries: [Contact Information]

Version History

  • v1.0 (Current): Initial release with real-time detection capabilities
    • Core ML integration
    • Real-time camera processing
    • Five-stage DR classification
    • Professional UI/UX

Making early detection accessible, one device at a time.
