I use the TensorFlow framework to build a U-Net deep learning model that detects the knee position in X-ray images. You can detect other body parts by providing your own data in the data folder.
The architecture follows the paper U-Net: Convolutional Networks for Biomedical Image Segmentation.
I cannot upload the full dataset because it is private, so please find or use your own.
My dataset contains two parts: X-ray images and their knee positions as labels. Here is an example from my dataset:
X-ray image | Label |
---|---|
The idea of the U-Net model is simple. When you classify dog and cat images, the model outputs a single number (the probability of cat vs. dog). A U-Net instead outputs 256 * 256 numbers (one per pixel of the image), each giving the probability that the pixel belongs to the knee.
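To make the per-pixel output concrete, here is a minimal sketch in TensorFlow of a miniature U-Net-style network (one down/up level with a skip connection instead of the four levels in the original paper; the layer sizes are illustrative, not the ones used in this repo):

```python
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(size=256):
    # Hypothetical miniature U-Net: contract once, expand once,
    # and reuse the early feature map through a skip connection.
    inputs = layers.Input((size, size, 1))
    c1 = layers.Conv2D(8, 3, activation="relu", padding="same")(inputs)
    p1 = layers.MaxPooling2D(2)(c1)                  # 256x256 -> 128x128
    c2 = layers.Conv2D(16, 3, activation="relu", padding="same")(p1)
    u1 = layers.UpSampling2D(2)(c2)                  # back to 256x256
    m = layers.concatenate([u1, c1])                 # skip connection
    # One sigmoid per pixel: a 256x256 grid of probabilities,
    # not a single classification score.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(m)
    return tf.keras.Model(inputs, outputs)

model = tiny_unet()
print(model.output_shape)  # (None, 256, 256, 1)
```

The key difference from a classifier is that last layer: instead of flattening to one number, it keeps the spatial grid and scores every pixel.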
U-Net takes 50 epochs to reach nearly 95% accuracy, but training takes a long time because we are dealing with an image dataset.
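The training setup can be sketched as follows. This is an assumption about the general recipe (per-pixel binary cross-entropy, Adam, accuracy metric), not the exact code in training.py; the random arrays stand in for real X-rays and masks, and the real run would use epochs=50 on 256x256 images:

```python
import numpy as np
import tensorflow as tf

# Stand-in data: 4 grayscale "images" and binary "masks" (64x64 for speed).
X = np.random.rand(4, 64, 64, 1).astype("float32")
Y = (np.random.rand(4, 64, 64, 1) > 0.5).astype("float32")

# Tiny stand-in network; the real model would be the full U-Net.
model = tf.keras.Sequential([
    tf.keras.layers.Input((64, 64, 1)),
    tf.keras.layers.Conv2D(4, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),
])

# Per-pixel binary cross-entropy: each pixel is its own yes/no prediction.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
history = model.fit(X, Y, epochs=2, verbose=0)
```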
My model can remove details that are not part of the knee in an X-ray image:
Before model | After model |
---|---|
```
git clone https://github.com/hoangcaobao/U-net.git
cd U-net
pip install -r requirements.txt
```
Go to the data folder, put your X-ray images in the image folder and their masks in the label folder.
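A loader for that folder layout might look like the sketch below. This is a hypothetical helper, not the repo's actual loading code: it assumes image/label files are paired by filename, and here it demonstrates itself on dummy files written to a temporary directory:

```python
import tempfile
from pathlib import Path

import numpy as np
from PIL import Image

def load_pairs(data_dir, size=256):
    # Assumed layout: <data_dir>/image/* holds X-rays,
    # <data_dir>/label/* holds masks with matching filenames.
    images, labels = [], []
    for img_path in sorted(Path(data_dir, "image").glob("*")):
        lbl_path = Path(data_dir, "label", img_path.name)
        img = Image.open(img_path).convert("L").resize((size, size))
        lbl = Image.open(lbl_path).convert("L").resize((size, size))
        images.append(np.asarray(img, dtype="float32") / 255.0)  # [0, 1] pixels
        labels.append((np.asarray(lbl, dtype="float32") > 0).astype("float32"))
    # Stack and add a channel axis: (N, size, size, 1)
    return np.stack(images)[..., None], np.stack(labels)[..., None]

# Demo on dummy files in a temporary "data" directory.
tmp = Path(tempfile.mkdtemp())
(tmp / "image").mkdir()
(tmp / "label").mkdir()
Image.new("L", (300, 300), 128).save(tmp / "image" / "knee1.png")
Image.new("L", (300, 300), 255).save(tmp / "label" / "knee1.png")
X, Y = load_pairs(tmp)
print(X.shape, Y.shape)  # (1, 256, 256, 1) (1, 256, 256, 1)
```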
This step takes a long time, but it is required to obtain the model weights before the next step.
```
python3 training.py
```
```
python3 test.py
```