Per Class Accuracy During Training/Validation #506
Comments
Cool, I didn't notice that PR go through. I'll add it to the TODOs.
If you just add a second top to your accuracy layer:

```
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  top: "accuracies"  # <- new
  include { stage: "val" }
}
```

then Caffe will print out per-class accuracies.
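To make the semantics concrete, here is a minimal plain-Python sketch (not Caffe code, and independent of DIGITS) of what "per-class accuracy" means: for each class, the fraction of that class's examples the model predicted correctly. The function name and the sample data are hypothetical.

```python
from collections import defaultdict

def per_class_accuracy(predictions, labels):
    """Return a dict mapping each class label to its accuracy.

    A class's accuracy is (# examples of that class predicted
    correctly) / (# examples of that class), which is what the
    second top of Caffe's Accuracy layer reports per class.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label in zip(predictions, labels):
        total[label] += 1
        if pred == label:
            correct[label] += 1
    return {c: correct[c] / total[c] for c in total}

# Hypothetical predictions/labels for a 3-class problem:
preds  = [0, 0, 1, 1, 2, 2, 2]
labels = [0, 1, 1, 1, 2, 2, 0]
print(per_class_accuracy(preds, labels))
# class 0: 1/2 correct, class 1: 2/3 correct, class 2: 2/2 correct
```

Note that per-class accuracy can differ sharply from overall accuracy when the classes are imbalanced, which is exactly why a per-class breakdown is useful during validation.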
Unfortunately, DIGITS doesn't know how to interpret this kind of output. I'll leave this open.
@lukeyeager did you successfully use this PR? I'm still stuck on how to actually show accuracies for each class like you did. Can you help?
In PyCaffe, you can set your accuracy layer to something like:

```python
n.accur, n.accur_by_class = L.Accuracy(n.fc8, n.label,
                                       include=dict(phase=caffe.TEST),
                                       ntop=2)
```

...where `n` is my net name from `n = caffe.NetSpec()`, and `fc8` is my last fully connected layer. Setting `ntop=2` provides a second output from the accuracy layer (which I called `accur_by_class` in this example).
Caffe supports providing two top output blobs for the final accuracy layer, in which case it also displays a per-class accuracy during the testing phase.
Is it possible to get DIGITS to display this information?
related issue: BVLC/caffe#2935