Why v1.1 does not have deploy.prototxt #12

Closed
changyun79 opened this issue Jun 15, 2016 · 10 comments

@changyun79

Hello,
I am wondering why v1.1 does not have a deploy.prototxt. Was it missed in a commit? Thank you.

@forresti (Owner)

We can add that. Thanks to @terrychenism for creating one for SqueezeNet v1.0.

Nobody on our team uses the deploy.prototxt interface very often... we mostly load custom training/testing sets into LMDBs to avoid being bottlenecked by I/O.

@forresti reopened this on Jun 16, 2016
@changyun79 (Author)

Thanks for the quick response. I'd appreciate it if it could be added.

@besirkurtulmus

I'd also appreciate it if you could add deploy.prototxt as well.

@austingg

@besirkurtulmus it's quite easy to make a deploy.prototxt using train_val.prototxt as a reference. You just modify the input and output layers; there are examples in Caffe's models/ directory.
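
A minimal sketch of what those edits might look like (the 227x227 input shape is assumed from the v1.0 deploy file, and the final layer name "pool10" is assumed to match the v1.1 train_val.prototxt; this is a sketch, not an official deploy file):

```
# Replace the TRAIN/TEST "Data" layers at the top of train_val.prototxt
# with a fixed-size input (batch size 1 and 3x227x227 assumed here):
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }
}

# ... keep the convolutional body of the network unchanged ...

# At the end, drop the "SoftmaxWithLoss" and "Accuracy" layers and
# finish with a plain Softmax over the pooled class scores:
layer {
  name: "prob"
  type: "Softmax"
  bottom: "pool10"
  top: "prob"
}
```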

psyhtest added a commit to dividiti/ck-caffe that referenced this issue on Jul 7, 2016:
"Note there's no deploy.prototxt as per: forresti/SqueezeNet#12"

@psyhtest commented Jul 7, 2016

@forresti

I thought one needs a deploy.prototxt to measure execution time with caffe time? I appreciate that your research is mostly about achieving the same accuracy, but what about performance?

@forresti (Owner) commented Jul 7, 2016

I often run caffe time on train_val.prototxt files... it works great. :)
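
For example, something along these lines (a hypothetical invocation; the exact paths depend on your Caffe build and checkout, and the LMDB sources referenced by the data layers in train_val.prototxt must exist or be edited to point at valid databases):

```
# Benchmark layer-by-layer forward/backward timings on the train_val network.
./build/tools/caffe time \
    --model=SqueezeNet_v1.1/train_val.prototxt \
    --iterations=10 \
    --gpu=0
```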

@psyhtest commented Jul 8, 2016

@forresti

Thanks! Do you know which batch size gets picked up for benchmarking - from the TRAIN or TEST specification?

@forresti (Owner) commented Jul 8, 2016

I think TRAIN. But caffe time will print the dimensions of all layers, including the batch size.

@rmekdma mentioned this issue on Jul 26, 2016
@AliaMYH commented Mar 3, 2017

What exactly is the difference between the train_val and the deploy prototxt? What are their different purposes?

@amroamroamro

@forresti
there are now three PRs to add deploy.prototxt for v1.1: #20, #21, #43
