What does it mean when a different model or parameters achieve a higher score? #9

Closed
pythonokai opened this issue Nov 19, 2016 · 2 comments

@pythonokai
Contributor

pythonokai commented Nov 19, 2016

Thank you for open-sourcing the code.

I made some changes to your model and used the same training settings as the 'test' experiment, which has the following settings:
[training settings]
#number of total patches:
N_subimgs = 190000
#Number of training epochs
N_epochs = 150
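
(For context, here is a minimal sketch of how such INI-style settings could be read with Python's built-in configparser; the file name "configuration.txt" is an assumption on my part, not necessarily how the repo loads them.)

import configparser

# Read the two values from the [training settings] section shown above.
config = configparser.ConfigParser()
config.read("configuration.txt")  # assumed file name

n_subimgs = int(config.get("training settings", "N_subimgs"))  # e.g. 190000
n_epochs = int(config.get("training settings", "N_epochs"))    # e.g. 150
print(n_subimgs, n_epochs)
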
I ran into a weird problem with that model, which I just call the g-model: when training reaches around 110~120 epochs, it causes a GPU error. I have trained the model TEN times and the error appears every time.
I don't know why; maybe something goes wrong in the GPU's calculation, a division by zero or a data overflow?
Fortunately, the g-model achieves a higher test score, 0.97945 on average (from 0.9794, 0.9795, 0.9799, 0.9797, 0.9793, 0.9789). Yes, all of these runs finished at 110~120 epochs with 190000 subimgs, and all of them had their last improvement at 60~70 epochs; after that, nothing improved up to the end (when training stopped with the GPU error). So I don't know whether the model could do better with these settings or not.
At the moment I am training the g-model with subimgs=25000 and epochs=100.

Now I want to ask about the 'test' experiment and your source model: did you choose those parameters because they make the model converge, or could higher parameters give a higher score? (I feel ashamed to ask, since I didn't test the source model with higher parameters.)
Or, does a higher score, obtained with reasonable settings and costs, mean that a model is better than another model with a lower score? By the way, the g-model takes about 3 times as long to train as the source model.

In other words, I want to ask about the evaluation standard for models: should scores be compared under the same settings, or does only the final score matter?

I just found that cuDNN 5105 is faster than 5005: the source model with the 'test' settings took 650s per epoch, and after updating to cuDNN 5105 it dropped to 550s per epoch. Theano reports a warning, but I haven't found any problem. (P.S. The problem I described above appeared before I updated cuDNN.)

Can I use your code in my graduation project? My project is called 'Retina image processing and vessels segmentation' and is implemented in Python.

Thank you for reading my question.

@dcorti
Contributor

dcorti commented Nov 25, 2016

Hi,

Nice to know you achieved a higher score :)
May I ask what you changed in the g-model (with respect to the "source" model)?
I have never had a GPU crash after so many epochs; I don't really know what the reason could be.

I chose the parameters basically by trial and error:

  • The number of epochs, 150, is reasonable because the best performance is usually reached after 70-80 epochs, as you noticed as well. I tried higher numbers (up to 300) but did not observe any change.
  • I tried with more than 190000 patches, but with no improvement.

After updating cuDNN I got a warning from Theano as well; it looks like this new cuDNN version is not supported yet. However, everything is still running as smoothly as before.

Sure, you can use the code. Just out of curiosity, can you share your project with me? And good luck!

@pythonokai
Contributor Author

Thanks for the reply.

I am glad to tell you what I changed with respect to the "source" model.
I added an upsampling layer and a max-pooling layer before the "source" model, the same pair of layers after it, and a new merge layer like the one the "source" model uses.
If you want, I can add a get_gnet function beside get_unet, or split a get_net function out of your source code and add unet and gnet functions to it.
I didn't think the g-model was good enough, because it costs 3x the training time of your source model and only increases the AUC of the ROC by 0.0005 (about 0.05%), so I didn't tell you about it before.
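To make the change concrete, here is a rough, hypothetical sketch of the idea (written against the current tf.keras API rather than the Keras/Theano setup of the repo; the layer widths, the 48x48 single-channel patch shape and the name get_gnet are placeholders, not my actual code):

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate
from tensorflow.keras.models import Model

def get_gnet(patch_height=48, patch_width=48, n_ch=1):
    inputs = Input((patch_height, patch_width, n_ch))

    # extra upsampling stage added *before* the "source" model
    up0 = UpSampling2D(size=(2, 2))(inputs)

    # a tiny U-Net-like body standing in for the "source" model
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(up0)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)
    up1 = UpSampling2D(size=(2, 2))(conv2)
    merge1 = concatenate([conv1, up1])            # merge layer, as in the source model
    conv3 = Conv2D(32, (3, 3), activation='relu', padding='same')(merge1)

    # extra max-pooling stage added *after* the "source" model, back to the input size,
    # followed by the new merge with the original input
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv3)
    merge2 = concatenate([inputs, pool2])

    outputs = Conv2D(1, (1, 1), activation='sigmoid')(merge2)
    return Model(inputs=inputs, outputs=outputs)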

I also tested some models "like" the source model, such as a "W-net", a "V-net" and the paper's model, the "U-net"; they are all named after their shape, but their results are not better than the g-model (g-net, whose shape looks like a Gaussian curve).

I guess the reason for the GPU crash is a buffer overflow; I will check it out.
And I hope you can test it too, to make sure the crash is not because my GPU is damaged :)

I am an undergraduate at an ordinary university, and this is my graduation project, so it has to be easy to do. It will contain some simple image processing, such as channel separation, rotation, flipping, some filters, color adjustment, enhancement and so on. Of course it will also contain retinal vessel segmentation using THIS code. I have not started coding yet, so I can't share it with you now, but I think I will open-source the implementation after graduation.
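Just to illustrate the kind of simple processing I mean, a toy sketch with Pillow (the file names are placeholders):

from PIL import Image, ImageOps

img = Image.open("retina.png").convert("RGB")
r, g, b = img.split()                    # channel separation (the green channel is common for vessel work)
rotated = img.rotate(90, expand=True)    # rotation
flipped = ImageOps.mirror(img)           # horizontal flip
g.save("green_channel.png")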

Thanks again for the Q&A.
