DrBenjamin/Dissertation
Dissertation

This is the Dissertation project at the University of Edinburgh for the MSc Data Science for Health and Social Care programme. Please use s2616861@ed.ac.uk for any questions or inquiries about the project.

It uses a Kaggle image dataset of 300 high-quality images, which was collected and used by a student in his final project for the Deep Learning School course at MIPT – GitHub repo.

Project structure:

.
├── .github/
│   └── copilot-instructions.md
├── .gitignore
├── Dissertation.qmd
├── LICENSE
├── Proposal.md
├── README.md
├── README_Kaggle.md
├── code/
│   ├── Jenkinsfile
│   ├── Dockerfile
│   ├── Dockerfile_mediapipe
│   ├── custom_image_classifier_model_training.ipynb
│   ├── custom_pose_model_training.ipynb
│   ├── docker-compose.yml
│   ├── docker-compose_frontend_backend.yml
│   ├── environment.yml
│   ├── human_posture_analysis.ipynb
│   ├── human_posture_analysis.py
│   ├── image_classification.ipynb
│   ├── mediapipe_pose.py
│   ├── requirements.txt
│   ├── scripts/
│   │   ├── custom_tflite_image_classifier.py
│   │   ├── custom_tflite_pose.py
│   │   ├── mediapipe_api.py
│   │   └── ...
│   └── posture-keypoints-detection/
│       ├── inference.ipynb
│       ├── train.ipynb
│       ├── docker-compose.yml
│       ├── README_en.md
│       ├── README.md
│       ├── frontend/
│       │   ├── Dockerfile
│       │   ├── main_app.py
│       │   └── requirements.txt
│       ├── backend/
│       │   ├── best.pt
│       │   ├── Dockerfile
│       │   ├── main_api.py
│       │   └── requirements.txt
│       ├── images/
│       │   └── ...
│       └── models/
│           ├── best.pt
│           └── yolo11s-pose.pt
├── docs/
│   ├── Bachelor Thesis/
│   │   └── Bachelor.Thesis.pdf
│   ├── Ethics/
│   │   ├── Data Management Flow Chart.docx
│   │   ├── Gross_UMREG_Ethical_Considerations_Form_Export.pdf
│   │   ├── Gross_UMREG_Ethical_Considerations_Form.docx
│   │   ├── Introduction to Research Integrity Resources.pdf
│   │   ├── Local Ethics approval.pdf
│   │   └── MSc Data Science for Health and Social Care - UMREG Application Form - 2025:26 AY.pdf
│   ├── Proposal/
│   │   ├── Dissertation_proposal_B248593_converted.qmd
│   │   ├── Dissertation_proposal_B248593.docx
│   │   ├── Dissertation_proposal_B248593.pdf
│   │   ├── MSc_DSHSC_Supervisor_Appraisal_of_Project_Risk.docx
│   │   ├── MSc_DSHSC_Supervisor_Appraisal_of_Project_Risk.pdf
│   │   └── receipt_Dissertation_proposal_B248593.pdf.pdf
│   ├── Reflective Blog/
│   │   ├── B248593_Blog_1_instructions.qmd
│   │   ├── B248593_Blog_1.qmd
│   │   ├── Reflective_Blog.qmd
│   │   └── Reflective_Writing.pdf
│   ├── MSc_DSHSC_Supervision_research_meeting_diary_template.docx
│   └── ...
└── literature/
    ├── apa.csl
    ├── bibliography.bib
    └── ...
  • Research Proposal for the dissertation (Word document): Dissertation_proposal_B248593.docx
  • Dissertation (Quarto file): Dissertation.qmd
  • Reflective Blog 1: [B248593_Blog_1.qmd](docs/Reflective Blog/B248593_Blog_1.qmd)
  • README (Markdown): README.md
  • LICENSE (Creative Commons Attribution 4.0 International Public License)
  • Code (folder): code
    • Jenkins pipeline configuration file: Jenkinsfile
    • Docker Compose file for setting up the environment: docker-compose.yml
    • Docker Compose file for setting up the frontend and backend services: docker-compose_frontend_backend.yml
    • Dockerfile for the Streamlit app environment: Dockerfile
    • Dockerfile for the Jupyter TensorFlow Lite Model Maker lab environment: Dockerfile_mediapipe
    • Conda environment specification: environment.yml
    • Requirements for the Streamlit service: requirements.txt
    • Jupyter notebook for fine-tuning the image classification model: custom_image_classifier_model_training.ipynb
    • Jupyter notebook for training custom Pose Landmarker Model with MediaPipe Model Maker (!!!BROKEN!!!): custom_pose_model_training.ipynb
    • Jupyter notebook for the baseline image classification workflow: image_classification.ipynb
    • Jupyter notebook for converting images or videos to annotated posture analytics: human_posture_analysis.ipynb
    • Python script for converting images or videos to annotated posture analytics: human_posture_analysis.py
    • Streamlit app for pose detection and analysis: mediapipe_pose.py
    • Subfolder code/posture-keypoints-detection/ containing the code for fine-tuning and deploying the YOLO11s-pose model:
      • Jupyter notebook for fine-tuning a YOLO11s-pose model on a custom, CVAT-annotated dataset of 300 side-view posture images to learn spinal keypoint detection for automated posture assessment: train.ipynb
      • Jupyter notebook for model inference and evaluation of the fine-tuned YOLO11s-pose model: inference.ipynb
      • Frontend folder for the Streamlit app: frontend
      • Backend folder for the FastAPI service: backend
    • Subfolder code/scripts/ containing helper functions for the Streamlit app: scripts
      • custom_tflite_pose.py: helper functions for loading and running inference with a custom TensorFlow Lite model for pose classification
      • custom_tflite_image_classifier.py: helper functions for loading and running inference with a custom TensorFlow Lite model for image classification
      • mediapipe_api.py: n8n API wrapper functions for MediaPipe Pose
  • Documents (folder): docs
  • Literature (folder): literature
    • bibliography.bib
    • apa.csl

Build dissertation artefacts

The dissertation artefacts can be built using Quarto. The generated PDF will be available as Dissertation.pdf after the build.

quarto render Dissertation.qmd --to pdf

Build development environments (Jenkins & Docker)

The repository ships with a Docker setup tailored for TensorFlow Lite Model Maker and the surrounding tooling. Set a password for the bundled Jupyter server by exporting JUPYTER_PASSWORD or defining it in a .env file next to code/docker-compose.yml before launching the containers. The container will abort startup if the variable is empty. Use Docker Compose to build and launch the lab environment locally (or rely on the Jenkins build that is automatically triggered on pushes to the repository):

# Stopping any running container
docker compose -f code/docker-compose.yml down

# Building container
docker compose -f code/docker-compose.yml build  #--progress plain

# Starting both services in detached mode
docker compose -f code/docker-compose.yml up -d

# Logging container output
docker compose -f code/docker-compose.yml logs -f tflite-lab
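The abort-on-empty startup behaviour described above can be mirrored in a few lines of Python. This is a hedged sketch of such a check (function name assumed), not the container's actual entrypoint code:

```python
import os


def require_jupyter_password(env: dict) -> str:
    """Return the Jupyter password from the given environment mapping,
    or raise, mirroring the container's abort-on-empty startup check."""
    password = env.get("JUPYTER_PASSWORD", "").strip()
    if not password:
        raise RuntimeError(
            "JUPYTER_PASSWORD is empty; set it in the environment "
            "or in a .env file next to code/docker-compose.yml"
        )
    return password
```

Calling it with `os.environ` before starting any service fails fast instead of exposing an unprotected Jupyter server.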

Open a shell inside the container to retrieve the login URL, then authenticate in the browser with the password stored in JUPYTER_PASSWORD:

# Getting the Jupyter url with token
docker compose -f code/docker-compose.yml exec tflite-lab jupyter notebook list

The Streamlit pose explorer is available at http://localhost:8501 once the streamlit service is running.

Compatibility note: The Docker image pins TensorFlow to 2.8.0 and constrains the scientific stack to versions compatible with tflite-model-maker==0.4.3. If you also need the newer mediapipe-model-maker pipeline, consider creating a parallel container with an updated TensorFlow stack to avoid conflicting requirements.
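As a quick sanity check before installing packages into a parallel container, the pin can be compared against an installed TensorFlow version string. A minimal sketch (helper name assumed; pin value taken from the note above):

```python
def matches_tf_pin(installed: str, pin: str = "2.8.0") -> bool:
    """Check whether an installed TensorFlow version string satisfies the
    exact pin used by the tflite-model-maker==0.4.3 Docker image."""
    # Drop any local build suffix (e.g. "2.8.0+cpu") before comparing.
    release = installed.split("+")[0]
    return tuple(release.split(".")[:3]) == tuple(pin.split(".")[:3])
```

A mismatch here is a signal to use the separate, newer-stack container rather than force-installing into the pinned one.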

MediaPipe Pose

Webapp demo: MediaPipe Pose Tracking Demo

TFLite Model Maker for Image Classification

Training a custom image classification model using MediaPipe Model Maker.

code/custom_image_classifier_model_training.ipynb

Server-first workflow to commit and push trained models:

# 1) Login to the server
ssh -i ~/.ssh/id_rsa root@ssh.seriousbenentertainment.org

# 2) Open a shell in the running Jupyter container
docker exec -it dissertation-tflite-lab-1 bash

# 3) Go to the project repository mounted in the container
cd /workspace/project

# 4) Ensure git can operate in this mounted path and set identity (first time only)
git config --global --add safe.directory /workspace/project
git config user.name "DrBenjamin"
git config user.email "s2616861@ed.ac.uk"

# 5) Start a dedicated branch for generated model artifacts
git switch -c server-model-build-$(date +%Y%m%d)

# 6) Stage notebooks and newly trained model artifacts
git add code/custom_image_classifier_model_training.ipynb
git add data/models/efficientnet_lite0 data/models/efficientnet_lite2 data/models/efficientnet_lite4 data/models/mobilenet_v2

# 7) Commit
git commit -m "Add server container model build artifacts"

# 8) Push branch to GitHub
git push -u origin "$(git rev-parse --abbrev-ref HEAD)"

Optional one-time HTTPS push with token (if origin credentials are not configured):

git push "https://<GITHUB_USERNAME>:<GITHUB_TOKEN>@github.com/DrBenjamin/dissertation-movement-analysis.git" "$(git rev-parse --abbrev-ref HEAD)"

After push, open the PR page:

echo "https://github.com/DrBenjamin/dissertation-movement-analysis/pull/new/$(git rev-parse --abbrev-ref HEAD)"

The exported models can be used in the Streamlit pose detection app (code/mediapipe_pose.py).

Images or videos posture analysis

Converts images and videos to annotated outputs that measure posture from the MediaPipe Pose landmarks, following this OpenCV Tutorial.

Files: code/human_posture_analysis.ipynb and code/human_posture_analysis.py.

To run the Python script:

# for images
python code/human_posture_analysis.py --mode image --api-base-url http://seriousbenentertainment.org:8000 --input-video ./data/images/input.png --output-video ./data/images/output.png

# for videos
python code/human_posture_analysis.py --mode video --api-base-url http://seriousbenentertainment.org:8000 --input-video ./data/video/input.mp4 --output-video ./data/video/output.mp4

# for videos with the worst posture frame extracted as image
python code/human_posture_analysis.py --mode video --api-base-url http://seriousbenentertainment.org:8000 --input-video ./data/video/input.mp4 --output-video ./data/video/output.mp4 --output-image ./data/video/output_worst_frame.png
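The core posture measurement in the referenced tutorial is the inclination of a body segment (e.g. shoulder to ear for the neck) relative to the vertical. A minimal sketch of that geometry (function name assumed; pixel coordinates with y increasing downwards):

```python
import math


def inclination_deg(x1: float, y1: float, x2: float, y2: float) -> float:
    """Angle in degrees between the segment (x1, y1) -> (x2, y2) and the
    vertical axis: 0 means perfectly upright, 90 means horizontal."""
    return math.degrees(math.atan2(abs(x2 - x1), abs(y2 - y1)))
```

Thresholds on such angles (e.g. neck and torso inclination) are then used to classify frames as good or bad posture, which is also how the "worst posture frame" of a video can be selected.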

Streamlit MediaPipe Pose App

You can now run a local Streamlit application to experiment with MediaPipe Pose on your own images.

Run the app:

python -m streamlit run code/mediapipe_pose.py

Features:

  • Multiple image upload
  • Configurable model complexity and confidence thresholds
  • Optional segmentation mask blending
  • Display of pixel nose coordinates and sample world landmark
  • Download of annotated images as a zip archive

Planned enhancements (not yet implemented): video support, CSV export of all landmarks, comparative analytics view.
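The planned CSV export of all landmarks could be done with the standard library alone. Everything in this sketch (function name, column names, landmark tuple layout) is an assumption for illustration, not the app's actual code:

```python
import csv
import io


def landmarks_to_csv(landmarks: list) -> str:
    """Serialise (index, x, y, z) pose landmark tuples to a CSV string."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["landmark_index", "x", "y", "z"])
    for idx, x, y, z in landmarks:
        writer.writerow([idx, x, y, z])
    return buffer.getvalue()
```

In Streamlit, the returned string could be wired straight into a download button alongside the existing zip archive of annotated images.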

References

All references and resources used in the project are listed below.

Mediapipe-model-maker

MediaPipe Model Maker – Getting Started
MediaPipe Model Maker – Image Classifier Customisation

In Colab: MediaPipe Model Maker Colab Example

Custom TensorFlow Lite models for image classification on-device using MediaPipe Model Maker: DeepWiki – Custom TensorFlow Lite Models

Human 3D models compatibility

Human Mesh Recovery Survey (arXiv 2212.14474)
PosePile Dataset
Pose Dataset Viewer

Demo:

MediaPipe Studio

About

This dissertation explores the application of AI-based posture recognition using Convolutional Neural Networks (CNNs) to detect and analyse habitual patterns of human movement and posture. The work focuses on transfer learning with TensorFlow Lite and Google MediaPipe for identifying head-neck-torso imbalances.
