Adding quality metrics and opening discussion #3912
base: main
Conversation
Hello @alexdesiqueira! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
Comment last updated at 2020-02-28 00:08:14 UTC
Hi @scikit-image/core,
On the other hand, some of these measures only work on binary images... silly me 😌
There seem to be a few segmentation/clustering comparison algorithms out there, including Normalized Probabilistic Rand: https://ieeexplore.ieee.org/abstract/document/1565332. I guess there are others too; we should try to find a review paper.
@stefanv I have one or two reviews I used during previous research; I'll paste their names and possible links soon.
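For reference, the plain Rand index (the quantity that measures like Normalized Probabilistic Rand build on) can be computed by pair counting over a contingency table. This is only an illustrative sketch, not code from this PR or from the cited paper; `rand_index` and its inputs are hypothetical names, and the labels are assumed to be non-negative integers:

```python
import numpy as np

def rand_index(seg1, seg2):
    """Plain Rand index between two label images (illustrative sketch only).

    Counts pixel pairs that the two segmentations agree on, i.e. pairs
    placed in the same segment by both, or in different segments by both.
    """
    s1 = np.ravel(seg1).astype(np.int64)
    s2 = np.ravel(seg2).astype(np.int64)
    n = s1.size
    # Contingency counts: pixels with label i in seg1 and label j in seg2,
    # encoded as a single flat index for np.bincount.
    nij = np.bincount(s1 * (s2.max() + 1) + s2).astype(np.float64)
    ai = np.bincount(s1).astype(np.float64)  # row sums (seg1 segment sizes)
    bj = np.bincount(s2).astype(np.float64)  # column sums (seg2 segment sizes)

    def comb2(x):
        # number of unordered pairs, C(x, 2)
        return x * (x - 1) / 2

    agree_same = comb2(nij).sum()          # same segment in both
    total = comb2(np.float64(n))           # all pixel pairs
    agree_diff = total - comb2(ai).sum() - comb2(bj).sum() + agree_same
    return (agree_same + agree_diff) / total
```

The index is invariant to label permutations, since only co-membership of pixel pairs matters, not the label values themselves.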
Quick question: some of these features seem to intersect with scikit-learn. Are the functions in https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics not usable for our images?
I missed it 🤦‍♂️ I saw only confusion_matrix before. I believe they all are usable for our images, and I think they're prepared to deal with more than two classes.
Are there any useful ones we can add?
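As noted in the exchange above, the sklearn.metrics functions do apply to label images once they are flattened to 1-D label vectors, since they have no notion of pixel neighborhoods and handle more than two classes. A small sketch under that assumption (the arrays below are made up for illustration, not taken from the PR):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

# Hypothetical example: a small ground-truth label image and a
# segmentation result with three classes and one mislabeled pixel.
truth = np.array([[0, 0, 1],
                  [1, 1, 2],
                  [2, 2, 2]])
result = np.array([[0, 0, 1],
                   [1, 2, 2],
                   [2, 2, 2]])

# sklearn's metrics expect 1-D label vectors, so flatten the images first.
cm = confusion_matrix(truth.ravel(), result.ravel())
f1 = f1_score(truth.ravel(), result.ravel(), average='macro')
```

The diagonal of `cm` counts correctly labeled pixels per class; `average='macro'` averages the per-class F1 scores without weighting by class size.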
Closed due to overlapping with
I think I'd like this reopened. I'd also appreciate reviews on #3354!
@jni any opinions on this one?
Reopening this; @jni, would you like to discuss that?
Description
This PR adds a basic implementation of several metrics: precision, recall, specificity, accuracy, and the coefficients of Matthews, Dice (F1 score), and informedness. Open to discussion (see the first comment).
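As a rough sketch of how the listed metrics relate to the binary confusion counts (illustrative only, not the PR's actual implementation; `binary_metrics` is a hypothetical name, and the masks are assumed boolean):

```python
import numpy as np

def binary_metrics(truth, result):
    """Illustrative sketch: metrics from binary confusion counts.

    Assumes boolean masks; divisions can hit 0/0 for degenerate inputs
    (e.g. an empty positive class), which is not handled here.
    """
    truth = np.asarray(truth, dtype=bool).ravel()
    result = np.asarray(result, dtype=bool).ravel()
    tp = np.sum(truth & result)      # true positives
    tn = np.sum(~truth & ~result)    # true negatives
    fp = np.sum(~truth & result)     # false positives
    fn = np.sum(truth & ~result)     # false negatives

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                   # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)        # identical to the F1 score
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))  # Matthews coeff.
    informedness = recall + specificity - 1   # Youden's J statistic
    return dict(precision=precision, recall=recall, specificity=specificity,
                accuracy=accuracy, dice=dice, mcc=mcc,
                informedness=informedness)
```

Note that Dice and F1 coincide in the binary case, and informedness reduces to recall + specificity − 1, so all seven metrics fall out of the same four counts.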
Checklist
- Gallery example in ./doc/examples (new features only)
- Benchmark in ./benchmarks, if your changes aren't covered by an existing benchmark

For reviewers
- Check that the PR title will still make sense later.
- Check that new functions are imported in the corresponding __init__.py.
- Check that new features are mentioned in doc/release/release_dev.rst.
- Consider backporting with @meeseeksdev backport to v0.14.x