
PR addressing Issue #43 #191

Merged: Max-Bladen merged 2 commits into master from issue-43 on Dec 8, 2022

Conversation

Max-Bladen
Collaborator

After extensive exploration, it was determined that the overall error rates were consistent across the two functions (perf.mint.splsda() and tune.mint.splsda()). Hence, only the BER calculation needed to be rectified.

The inconsistency stemmed from how each function calculated the global BER:

  • tune.mint.splsda(): calculated a confusion matrix for each study separately, then took the mean of the resulting study-specific BER values.
  • perf.mint.splsda(): calculated a single confusion matrix over all samples, irrespective of study, yielding one pooled BER (see the sketch after this list).
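
For illustration, here is a minimal R sketch contrasting the two approaches. The toy data and the `ber()` helper are hypothetical, not mixOmics internals:

```r
# Toy data: 100 samples from two studies of different sizes.
set.seed(42)
study <- factor(rep(c("A", "B"), times = c(30, 70)))
truth <- factor(sample(c("g1", "g2"), 100, replace = TRUE))
pred  <- truth
flip  <- sample(100, 20)                      # introduce some misclassifications
pred[flip] <- ifelse(truth[flip] == "g1", "g2", "g1")

# BER: mean of the per-class error rates (assumes every class
# appears in every subset it is applied to).
ber <- function(truth, pred) {
  per.class.err <- sapply(levels(truth), function(cl) {
    idx <- truth == cl
    mean(pred[idx] != truth[idx])
  })
  mean(per.class.err)
}

# tune.mint.splsda()-style: one BER per study, then a plain mean
ber.per.study <- sapply(levels(study), function(s) {
  idx <- study == s
  ber(truth[idx], pred[idx])
})
global.ber.tune <- mean(ber.per.study)

# perf.mint.splsda()-style (pre-fix): a single BER pooled over all samples
global.ber.perf <- ber(truth, pred)

# The two disagree whenever study sizes or per-study error rates differ.
c(tune = global.ber.tune, perf = global.ber.perf)
```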

perf.mint.splsda() was changed to bring its global BER calculation in line with that of tune.mint.splsda().

The calculation of global$overall in perf.mint.splsda() was adjusted only for consistency with global$BER; the previous and new overall methods output identical values.

A minor addition was made to BOTH functions (in the case of tune.mint.splsda(), this occurred in LOGOCV()). Rather than the global BER being a plain average of the study-specific BERs, both functions now average the BER across studies weighted by study sample size (see the sketch below).
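
Continuing the toy sketch above, the weighted version of the idea (a sketch, not the package code) is simply:

```r
# Weight each study-specific BER by that study's sample size.
# table(study) is in level order, matching ber.per.study above.
n.per.study <- as.numeric(table(study))
global.ber.weighted <- weighted.mean(ber.per.study, w = n.per.study)
```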

@Max-Bladen added the bug-fix label on Mar 15, 2022
@Max-Bladen self-assigned this on Mar 15, 2022
@aljabadi
Collaborator

Thanks @Max-Bladen.

Can you please add a unit test that tests for this issue, where the test fails before the fix and passes after it? If this behaviour might be affecting other functions, it would be great to add tests for those as well.

Included a test to check that the error rate values yielded by `perf.mint.splsda()` and `tune.mint.splsda()` are the same (to 4 decimal places). This is checked for all three distance metrics, using both standard and balanced error rates.
@Max-Bladen
Collaborator Author

@aljabadi the latest commit adds a test which checks for homogeneity between the two methods across all distance metrics (max.dist, centroids.dist and mahalanobis.dist) and both error rate measures (overall and BER).
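
A hedged sketch of such a test in testthat style is below. The stemcells data ships with mixOmics, but the output slots compared ($global.error$BER from perf(), $error.rate from tune.mint.splsda()) are assumptions about the return structure and may differ across package versions; this is not the exact test added in this PR.

```r
library(testthat)
library(mixOmics)

test_that("perf.mint.splsda() and tune.mint.splsda() report matching BERs", {
  data(stemcells)
  X <- stemcells$gene
  Y <- stemcells$celltype
  study <- stemcells$study

  keepX <- 10
  model <- mint.splsda(X, Y, study = study, ncomp = 1, keepX = keepX)

  for (dist in c("max.dist", "centroids.dist", "mahalanobis.dist")) {
    perf.res <- perf(model, dist = dist)
    tune.res <- tune.mint.splsda(X, Y, study = study, ncomp = 1,
                                 test.keepX = keepX, dist = dist,
                                 measure = "BER")
    # Slot names below are assumed; agreement checked to 4 decimal places
    expect_equal(unname(perf.res$global.error$BER[1, 1]),
                 unname(tune.res$error.rate[1, 1]),
                 tolerance = 1e-4)
  }
})
```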

@Max-Bladen added the ready-to-review label on Sep 22, 2022
@Max-Bladen changed the title from Fix for Issue #43 to Bug Fix, Issue #43 on Nov 28, 2022
@Max-Bladen changed the title from Bug Fix, Issue #43 to PR addressing Issue #43 on Nov 28, 2022
@Max-Bladen merged commit 5b1c57a into master on Dec 8, 2022
@Max-Bladen deleted the issue-43 branch on Dec 8, 2022 at 22:41
@Max-Bladen removed the ready-to-review label on Dec 8, 2022
Successfully merging this pull request may close these issues.

inconsistent error rates when using perf.mint.splsda and tune.mint.splsda