OCTsense is a web platform (SPA) for the automated analysis of OCT (Optical Coherence Tomography) images using artificial intelligence.
It is designed for ophthalmologists who need support diagnosing ocular diseases such as Choroidal Neovascularization (CNV), Age-Related Macular Degeneration (AMD), and macular edema, without relying on image interpretation experts.

OCTsense automatically classifies OCT scans into four categories:
- Choroidal Neovascularization (CNV)
- Diabetic Macular Edema (DME)
- Drusen lesions (DRUSEN)
- Healthy retinal tissue
Related repositories:
- Backend (Django + REST API): https://github.com/gaoux/OCT-diagnosis-backend
- AI Model (Hugging Face – OCT Classification): https://huggingface.co/gaoux/OCT_class
| Area | Technology |
|---|---|
| Frontend | React.js (SPA) with Vite |
| Backend | Django + Django REST Framework (Python) |
| AI Model | TensorFlow 2.x, Keras, OpenCV |
| Database | PostgreSQL |
Main features:
- User and authentication management: registration for ophthalmologists and admins, credential validation, password recovery, and role control.
- Landing page: interactive welcome screen with guides and access to the main features.
- AI-driven image analysis: pretrained TensorFlow models process OCT images to generate preliminary diagnostic predictions.
- Results and reports: view analysis results, generate medical reports as PDF, download or store reports, and compare historical images.
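As an illustration of the analysis step, here is a minimal sketch of how a client might map the model's output probabilities to the four categories listed above. The class order and the `NORMAL` label are assumptions, not confirmed by the repository; check the model card before relying on them.

```python
# Hedged sketch: mapping model output probabilities to the four diagnostic
# categories this README lists. The class order and the "NORMAL" label are
# assumptions; verify them against the model at huggingface.co/gaoux/OCT_class.
CLASSES = ["CNV", "DME", "DRUSEN", "NORMAL"]

def top_prediction(probs):
    """Return (label, probability) for the highest-scoring class."""
    if len(probs) != len(CLASSES):
        raise ValueError(f"expected {len(CLASSES)} probabilities, got {len(probs)}")
    idx = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[idx], probs[idx]
```

For example, `top_prediction([0.05, 0.85, 0.05, 0.05])` returns `("DME", 0.85)`.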
Follow these steps to install and run the project locally:
```
git clone https://your-repository-url.git
cd your-repository-folder
```

The backend code for OCTsense lives in a separate repository: https://github.com/gaoux/OCT-diagnosis-backend. Follow the instructions there to set up and run the backend server locally.
Create a `.env` file in the frontend folder:

```
VITE_API_BASE_URL=http://127.0.0.1:8000/
```

Then build and start the frontend:

```
docker-compose up --build
```

The frontend will be available at http://localhost:3000.
| Task | Command |
|---|---|
| Build and run backend | Refer to backend repository instructions |
| Build and run frontend | docker-compose up --build (in frontend folder) |
| Access backend API | http://127.0.0.1:8000/api/ |
| Access frontend app | http://localhost:3000 |
- AI models are managed separately inside the backend (`oct/predict/` endpoint).
- Make sure ports 8000 (backend) and 3000 (frontend) are open.
- For production, serving the frontend with Nginx is recommended (already set up).
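To illustrate how a script might submit an image to the backend, here is a minimal sketch using only the Python standard library. It assumes the prediction route sits under the `/api/` root shown above at `oct/predict/` and accepts a raw image body; the exact URL shape, request format, and response schema are assumptions, not confirmed by the backend repository.

```python
import json
import urllib.request

API_BASE = "http://127.0.0.1:8000/api"  # backend API root from the table above

def predict_url(base: str = API_BASE) -> str:
    # 'oct/predict/' is the endpoint named in the notes above; the exact
    # URL shape is an assumption.
    return base.rstrip("/") + "/oct/predict/"

def send_image(image_path: str) -> dict:
    """POST an OCT image to the backend and parse the JSON response."""
    with open(image_path, "rb") as f:
        req = urllib.request.Request(
            predict_url(),
            data=f.read(),
            headers={"Content-Type": "application/octet-stream"},
        )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In practice the endpoint may expect multipart form data or require an auth token from the login flow; adjust the headers and body accordingly.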
This project is licensed under the GPL-3.0 license.
For questions, contributions, or collaboration inquiries:
Gustavo Parra | parrat-ga@javeriana.edu.co