
Weird prototxt layer ... #1

Open
MyVanitar opened this issue Apr 16, 2017 · 11 comments

MyVanitar commented Apr 16, 2017

Hello,

Where did you get these .prototxt and .caffemodel files?

For example, look at this layer in FCN-8s-OBG-4s:

layer {
  name: "obg-fcn-fuse"
  type: "Eltwise"
  bottom: "mask_projection_no_b"
  bottom: "score_fcn"
  top: "score"
  eltwise_param {
    operation: PROD
  }
}

What does this top: "score" refer to? There is no layer named "score" anywhere in the file. I think the lower accuracy might come from this, or from similar problems inside the prototxt files.

twtygqyy (Owner) commented:

@VanitarNordic Hi, top: "score" is just the name of the output blob computed from the two bottom inputs; you can use any name you want.
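
In Caffe, top and bottom refer to blobs (the data flowing between layers) rather than to layers themselves, so a top name does not need to match any layer name. As a minimal sketch of how a later layer could consume the "score" blob, here is an illustrative loss layer; it is not taken from this repository's prototxt files and assumes a "label" blob supplied by a data layer:

layer {
  # Hypothetical layer for illustration only: it consumes the "score" blob
  # produced as the top of "obg-fcn-fuse", together with an assumed "label" blob.
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "score"
  bottom: "label"
  top: "loss"
}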

MyVanitar (Author) commented:

@twtygqyy

Did you download these from somewhere, or did you design them yourself from the paper?

twtygqyy (Owner) commented:

@VanitarNordic The code in this repository was implemented according to my understanding of the first version of the OBG-FCN paper. The paper and algorithm were later updated for the ACCV publication.

MyVanitar (Author) commented:

@twtygqyy

Good man, it is difficult to do, especially when the paper authors give no feedback! A simple mistake or parameter change can lead to totally different results, better or worse.

May I ask where I can learn about layers and the Caffe structure in a practical way, so that I can design or implement networks from a paper and become proficient like you? Any sources that explain these well?

twtygqyy (Owner) commented:

@VanitarNordic
https://github.com/kjw0612/awesome-deep-vision contains almost all the related papers and learning material.
I would recommend http://cs231n.stanford.edu/ as a starting point.

MyVanitar (Author) commented Apr 17, 2017

@twtygqyy

Thank you.

I suggest you consider this paper instead of OBG-FCN. They achieved very good accuracy by modifying FCN-8s. I asked the authors, but they did not share their trained model or prototxt files; still, I think it would be easy for you to modify the existing FCN-8s following their instructions.

https://arxiv.org/pdf/1611.08986.pdf

twtygqyy (Owner) commented:

@VanitarNordic Thanks for the suggestion. I know this paper but haven't tried to implement it. I will take a deeper look.

MyVanitar (Author) commented:

@twtygqyy

They have skipped some connections; if I understand correctly, they have skipped the classification layers.

MyVanitar (Author) commented:

@twtygqyy
Hi man, have you reached any results with that paper?

twtygqyy (Owner) commented:

@VanitarNordic Sorry for the late reply, I was on a long vacation. I've heard that one of my colleagues implemented the improved FCN, but I haven't tried it myself.

MyVanitar (Author) commented:

@twtygqyy

So please ask him for the materials. The implementation is more straightforward than OBG-FCN, since they improved the existing FCN, named it iFCN-Resnet, and achieved very good results.
