The "run_all" script runs all the necessary scripts in the right order to produce an estimated segmentation score for each image in the test set of the DRIVE dataset.
Here is what "run_all" does for each image in the test set:
- image_selection.py : select the test image whose segmentation is going to be evaluated
- generate_results.py --solo : generate the segmentation of the selected image, using a model already trained on the training set of the DRIVE dataset
- analyze_results_by_img.py --solo : compute the segmentation score of that segmentation. In a real situation we could not do this, because we would not have the ground truth of the test image to compare against; for that reason I call this score the "hidden score"
- train.py : train a small model that deliberately overfits on the pair (selected_image, generated_segmentation), where generated_segmentation is the output of generate_results.py two steps earlier
- generate_results.py --train : call the same script again, this time to generate a segmentation of each image in the training set, using the overfitted model just created by train.py
- analyze_results_by_img.py --train : call this script once more, this time to compute the segmentation score of each image in the training set, then pick the best score and save it. Because these segmentations come from an overfitted model, I call these scores the "overfitted scores"
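The text above does not say which segmentation metric the scores use; a common choice for vessel segmentation is the Dice coefficient, sketched here on toy binary masks represented as flat lists of 0/1 pixels (the masks and values are illustrative, not from the pipeline):

```python
def dice_score(pred, gt):
    """Dice = 2|P ∩ G| / (|P| + |G|) on binary masks."""
    intersection = sum(p * g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2 * intersection / total if total else 1.0

# Toy 6-pixel masks: prediction overlaps ground truth on 2 pixels.
pred = [1, 1, 0, 0, 1, 0]
gt   = [1, 0, 0, 0, 1, 1]
score = dice_score(pred, gt)  # 2*2 / (3+3) = 0.666...
```

Both the "hidden score" (test image vs. its ground truth) and the "overfitted scores" (training images vs. their ground truths) would be computed the same way.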
We then save the tested image, the hidden score, the overfitted score, and the training-set image that achieved the highest overfitted score.
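A minimal sketch of how these four saved fields could be appended to a results file; the file name results_demo.csv, the column names, and the example values are assumptions for illustration:

```python
import csv
import os

def save_result(path, test_image, hidden_score, overfitted_score, best_train_image):
    """Append one pipeline result row, writing a header if the file is new."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["test_image", "hidden_score",
                             "overfitted_score", "best_train_image"])
        writer.writerow([test_image, hidden_score, overfitted_score, best_train_image])

# Hypothetical scores and image names, just to show the row layout.
save_result("results_demo.csv", "01_test", 0.81, 0.77, "21_training")
```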
We can then call "correlation_analysis.py" to analyze the correlation between the hidden scores and the overfitted scores.