Reference Issues/PRs
Fixes #26965.
What does this implement/fix? Explain your changes.
Case when `y_true` contains a single class and `y_true == y_pred`.
In `confusion_matrix`, the labels are computed as:

```python
labels = unique_labels(y_true, y_pred)
n_labels = labels.size
```

i.e., the number of unique values in the given `y_true` and `y_pred`. I have added a condition so that, when only one label is present, the matrix is padded to 2x2:

```python
if n_labels == 1:
    return coo_matrix(
        (sample_weight, (y_true, y_pred)), shape=(2, 2), dtype=dtype
    ).toarray()
```
Example:

```python
y_true = [1, 1, 1, 1]
y_pred = [1, 1, 1, 1]
```

Before, `confusion_matrix(y_true, y_pred)` returned `[[4]]`; now it returns `[[4, 0], [0, 0]]`. So this issue is now fixed.
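For illustration, here is a minimal, self-contained sketch of the padding idea (it mimics the internal label-to-index mapping; `label_to_ind` is a hypothetical helper name, not the actual sklearn code):

```python
import numpy as np
from scipy.sparse import coo_matrix

y_true = np.array([1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 1])

# Map labels to indices, as confusion_matrix does internally.
labels = np.unique(np.concatenate([y_true, y_pred]))
label_to_ind = {label: i for i, label in enumerate(labels)}
true_ind = np.array([label_to_ind[l] for l in y_true])
pred_ind = np.array([label_to_ind[l] for l in y_pred])

sample_weight = np.ones(y_true.shape[0], dtype=np.int64)

# Pad to a 2x2 shape when only one class is present;
# coo_matrix sums the duplicate (0, 0) entries on toarray().
shape = (2, 2) if labels.size == 1 else (labels.size, labels.size)
cm = coo_matrix(
    (sample_weight, (true_ind, pred_ind)), shape=shape, dtype=np.int64
).toarray()
print(cm)  # [[4 0]
           #  [0 0]]
```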
```python
y_true = np.array([0, 0])
y_pred = np.array([0, 0])
print(f1_score(y_true, y_pred, zero_division=1))  # division by zero should be triggered, resulting in 1.0
```

With the fix, the confusion matrix is `[[2, 0], [0, 0]]`, so the zero division is triggered as expected and precision, recall, and f1_score return 1.0 with `zero_division=1`. So this issue is also solved.
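A simplified sketch of why the padded matrix lets `zero_division` take effect for the positive class 1 (`safe_div` is an illustrative stand-in, not sklearn's implementation):

```python
import numpy as np

# Padded matrix for y_true = y_pred = [0, 0]; rows = true, cols = predicted.
cm = np.array([[2, 0], [0, 0]])
tp, fp, fn = cm[1, 1], cm[0, 1], cm[1, 0]

def safe_div(num, den, zero_division=1.0):
    # Mirrors the intent of the zero_division parameter (simplified).
    return zero_division if den == 0 else num / den

precision = safe_div(tp, tp + fp)        # 0/0 -> zero_division -> 1.0
recall = safe_div(tp, tp + fn)           # 0/0 -> zero_division -> 1.0
f1 = safe_div(2 * tp, 2 * tp + fp + fn)  # 0/0 -> zero_division -> 1.0
print(precision, recall, f1)  # 1.0 1.0 1.0
```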
Another issue, in `class_likelihood_ratios`:
LR+ ranges from 1 to infinity. A LR+ of 1 indicates that the probability of predicting the positive class is the same for samples belonging to either class; therefore, the test is useless. The greater LR+ is, the more a positive prediction is likely to be a true positive when compared with the pre-test probability. A value of LR+ lower than 1 is invalid as it would indicate that the odds of a sample being a true positive decrease with respect to the pre-test odds.
LR- ranges from 0 to 1. The closer it is to 0, the lower the probability of a given sample to be a false negative. A LR- of 1 means the test is useless because the odds of having the condition did not change after the test. A value of LR- greater than 1 invalidates the classifier as it indicates an increase in the odds of a sample belonging to the positive class after being classified as negative. This is the case when the classifier systematically predicts the opposite of the true label.
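For reference, these ratios follow from the standard definitions, LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity; a small illustrative helper (not the library code):

```python
import numpy as np

def likelihood_ratios(cm):
    # cm layout: rows = true class, cols = predicted class, labels [0, 1].
    tn, fp, fn, tp = cm.ravel()
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    lr_plus = sensitivity / (1 - specificity) if specificity < 1 else float("inf")
    lr_minus = (1 - sensitivity) / specificity if specificity > 0 else float("nan")
    return lr_plus, lr_minus

cm = np.array([[30, 10],   # 30 TN, 10 FP
               [5, 55]])   # 5 FN, 55 TP
print(likelihood_ratios(cm))  # LR+ ~ 3.67 (> 1), LR- ~ 0.11 (< 1)
```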
This issue is also fixed. First, compute the number of unique labels; if it is 1, handle that case explicitly:
```python
if labels is None:
    labels = unique_labels(y_true, y_pred)
else:
    labels = np.asarray(labels)
    n_labels = labels.size
    if n_labels == 0:
        raise ValueError("'labels' should contains at least one label.")
    elif y_true.size == 0:
        return np.zeros((n_labels, n_labels), dtype=int)
    elif len(np.intersect1d(y_true, labels)) == 0:
        raise ValueError("At least one label specified must be in y_true")

n_labels = labels.size
if n_labels == 1:
    positive_likelihood_ratio = float("inf")
    negative_likelihood_ratio = 0.0
```
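With this patch, the single-class perfect-prediction case would return `(inf, 0.0)` directly; a usage sketch (the printed result reflects the patched behavior proposed here, not necessarily a released version):

```python
import numpy as np
from sklearn.metrics import class_likelihood_ratios

# Single class, perfect prediction: every sample is a true positive,
# so the patched function returns (inf, 0.0) instead of warning about
# an ill-defined ratio.
y_true = np.array([1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 1])
print(class_likelihood_ratios(y_true, y_pred))  # expected: (inf, 0.0)
```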
One test case was removed because it checked for the previous behavior, which is now fixed.
Any other comments?
So all the issues related to precision, recall, f1_score, confusion_matrix, and class_likelihood_ratios are solved.