If we have learning curves whose scores are very close to the maximum for the given metric (see the attached example below), the curves can end up with y-limits that make no theoretical sense. It's not a big issue, but we should probably fix it, especially since SKLL already has some intelligent computation of y-limits that ought to handle this case.
I think I see what's going on. The y-limit computation function uses the following statement:
upper_limit = 1.1 if max_score <= 1 else math.ceil(max_score)
where max_score is computed as the mean plus the standard deviation.
In the above example, the mean is 0.996401028277635 and the standard deviation is 0.011380947877212683, so their sum is 1.0077819761548477, whose ceiling is 2, and hence the weird limits. We should be able to fix this.
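One possible fix, sketched below: when the mean itself is within the metric's [0, 1] range and mean plus standard deviation only barely exceeds 1, keep the usual 1.1 upper limit instead of taking the ceiling. This is a hypothetical standalone function, not SKLL's actual implementation; the `tolerance` parameter is an assumption about how much overshoot to forgive.

```python
import math


def compute_upper_limit(mean_score, std_dev, tolerance=0.05):
    """Return an upper y-limit for a learning curve plot.

    Hypothetical sketch of a fix: if the mean is at most 1 and
    mean + std only slightly exceeds 1 (within ``tolerance``),
    treat the metric as bounded by 1 and use the standard 1.1
    headroom rather than ``ceil()``, which would jump to 2.
    """
    max_score = mean_score + std_dev
    if mean_score <= 1 and max_score <= 1 + tolerance:
        # Scores are effectively bounded by 1; leave a little headroom.
        return 1.1
    # Genuinely unbounded (or far above 1): round up as before.
    return math.ceil(max_score)


# The case from this issue: mean + std barely crosses 1.
print(compute_upper_limit(0.996401028277635, 0.011380947877212683))  # 1.1
```

With this guard, the example above gets an upper limit of 1.1 instead of 2, while metrics that legitimately exceed 1 still fall through to the ceiling computation.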