Reactivate tests and clear models #306

Merged
merged 2 commits into drivendataorg:master from vessemer:refactoring Feb 1, 2018

Conversation
@vessemer
Contributor

vessemer commented Jan 31, 2018

Refactored evaluate_detection.py and uncommented the tests in test_classification.py, as was mentioned in this comment. Removed the redundant self.scale_factor, which depends neither on DATA_SHAPE nor on self.input_shape, in this and this.

CLA

  • I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well

@vessemer vessemer referenced this pull request Jan 31, 2018

Closed

Chose an appropriate DATA_SHAPE #304

1 of 1 task complete
predicted = trained_model.predict(dicom_paths[0], nodule_locations, model_path)
assert predicted
assert 0 <= predicted[0]['p_concerning'] <= 1


@reubano

reubano Jan 31, 2018

Contributor

I'm wondering if this will pass Travis or hit the memory error issue... I think we may need to move this hunk over to #307.

@vessemer

vessemer Jan 31, 2018

Contributor

According to the current Travis log and the previous one from PR #298 with the tests commented out, the classification model is not responsible for the memory errors.

@reubano

reubano Jan 31, 2018

Contributor

hmmm, looks like they all failed with a memory error...

@reubano

reubano Jan 31, 2018

Contributor

FWIW, the latest commit failed as well... so at least we know the error isn't due to these PRs.

@vessemer

vessemer Jan 31, 2018

Contributor

This error occurred due to re-estimation of coordinates and was fixed in PR #308.

@vessemer

vessemer Jan 31, 2018

Contributor

@reubano, thank you for the remarks. I've checked locally, and it turns out that the issue was not only in DATA_SHAPE but also in a memory leak when using TensorFlow, so I've added clear_session() as suggested, and it fixes the situation for me.
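
For reference, a minimal sketch of the clear_session() fix, assuming a Keras model on the TensorFlow backend; build_model, weights_path, and volumes are illustrative stand-ins, not names from this PR:

from keras import backend as K

def run_inference(build_model, weights_path, volumes):
    """Predict on each volume, clearing the TF graph between runs.

    build_model and volumes are hypothetical; the key line is
    K.clear_session(), which releases graph state that otherwise
    accumulates across repeated model constructions.
    """
    predictions = []
    for volume in volumes:
        model = build_model()             # hypothetical factory returning a compiled Keras model
        model.load_weights(weights_path)
        predictions.append(model.predict(volume))
        K.clear_session()                 # drop the old graph so its memory can be reclaimed
    return predictions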

@vessemer

vessemer Jan 31, 2018

Contributor

I've also included the ability to clear the model along with pull_ct and pull_patches, which may be useful if feed is called more than once on the same Model instance (see the sketch below). Those properties can't be responsible for the memory errors, though. I'll rerun locally with htop to provide information about the RAM load.
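
A rough sketch of the idea; only the pull_ct and pull_patches names come from the comment above, while the surrounding Model structure is assumed, not taken from this PR:

class Model:
    """Sketch only: caches a CT scan and extracted patches between feed() calls."""

    def __init__(self):
        self.model = None         # the underlying Keras model
        self.pull_ct = None       # cached CT volume
        self.pull_patches = None  # cached patches extracted from the CT

    def clear(self, clear_model=True):
        # Reset cached data so a second feed() on the same instance
        # starts from a clean state instead of stale intermediate results.
        self.pull_ct = None
        self.pull_patches = None
        if clear_model:
            self.model = None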

@vessemer

vessemer Feb 1, 2018

Contributor

As you can see, the main contribution to the memory error came from the segmentation itself.

@vessemer vessemer force-pushed the vessemer:refactoring branch 2 times, most recently from dacd9e5 to 2df7b6c Jan 31, 2018


correct_detection = compare_results(detections, annotations)
print 'number of correct_detection:', correct_detection

@lamby

lamby Feb 1, 2018

Contributor

Odd, how did this work before? I mean, Py2 vs Py3 etc.

@vessemer

vessemer Feb 1, 2018

Contributor

It didn't :) What's more, I don't understand the purpose of evaluate_detection.py, because in PR #301:

We extend the IoU calculation from 2D to 3D but in order to be simple, we only deal with 3D boxes / cubes. Although the detections we have are 3D spheres output from the grt123 system. The threshold we currently use is 0.5 but can easily be changed (‘correct_detection_threshold’ in evaluate_detection.py). We also did some experiments using different thresholds, please check the evaluation results below.

The thing is that doing the same for 3D spheres is even easier, because you only have to deal with centroids and diameters (which are given); there's no need to compute bounding boxes (see the sketch below). That evaluation was proposed by the LUNA16 authors and was adjusted in PR #292.
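
To illustrate, here is a hypothetical helper (not code from this PR) that computes IoU directly for two spheres from centroids and diameters alone, using the closed-form volume of the spherical lens formed by their intersection:

import math

def sphere_iou(c1, d1, c2, d2):
    """IoU of two spheres given (x, y, z) centroids and diameters."""
    r1, r2 = d1 / 2.0, d2 / 2.0
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))
    if dist >= r1 + r2:                # spheres are disjoint
        inter = 0.0
    elif dist <= abs(r1 - r2):         # one sphere contains the other
        inter = (4.0 / 3.0) * math.pi * min(r1, r2) ** 3
    else:                              # partial overlap: spherical lens volume
        inter = (math.pi * (r1 + r2 - dist) ** 2
                 * (dist ** 2 + 2 * dist * (r1 + r2) - 3 * (r1 - r2) ** 2)
                 / (12.0 * dist))
    v1 = (4.0 / 3.0) * math.pi * r1 ** 3
    v2 = (4.0 / 3.0) * math.pi * r2 ** 3
    return inter / (v1 + v2 - inter)

Since LUNA16-style annotations already provide exactly these centroid and diameter fields, no bounding boxes need to be constructed at all.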

@reubano reubano referenced this pull request Feb 1, 2018

Merged

Fix memory and styling errors #307

1 of 1 task complete

@vessemer vessemer force-pushed the vessemer:refactoring branch from 2df7b6c to 34494f6 Feb 1, 2018

@vessemer vessemer force-pushed the vessemer:refactoring branch from 34494f6 to 701fc26 Feb 1, 2018

@reubano reubano changed the title Refactoring Reactivate tests and clear keras models Feb 1, 2018

@reubano reubano changed the title Reactivate tests and clear keras models Reactivate tests and clear models Feb 1, 2018

@vessemer


Contributor

vessemer commented Feb 1, 2018

All rebased after #307; the tests have passed.

@reubano reubano merged commit a434336 into drivendataorg:master Feb 1, 2018

2 checks passed

concept-to-clinic/cla @vessemer has signed the CLA.
continuous-integration/travis-ci/pr The Travis CI build passed

@vessemer vessemer deleted the vessemer:refactoring branch Feb 1, 2018
