Yes, we need to decide how we want to represent the bootstrapped result (e.g. mean + std, or maybe the 5%, 50%, and 95% percentiles, possibly even configurable), but your compute function is approximately what I have in mind.
I don't have much bandwidth to do it right now, so I'm throwing it out there in case someone wants to take it on. I might get back to it in the future though.
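One way the configurable representation mentioned above could look, as a rough standard-library sketch (the function name `summarize` and the returned keys are invented here, not an agreed-on API):

```python
import statistics

def summarize(values, quantiles=(0.05, 0.5, 0.95)):
    """Collapse a list of bootstrapped metric values into a summary dict.

    Hypothetical helper: both the name and the returned keys are
    assumptions, not part of any existing metrics API.
    """
    grid = statistics.quantiles(values, n=100)  # 99 percentile cut points
    summary = {
        "mean": statistics.mean(values),
        "std": statistics.stdev(values),
    }
    for q in quantiles:
        # grid[i] is the (i + 1)-th percentile, so q=0.05 -> grid[4]
        summary[f"q{q:g}"] = grid[round(q * 100) - 1]
    return summary
```

Making `quantiles` a parameter keeps both styles available: callers who only want mean + std can ignore the extra keys, and callers who want other coverage levels can pass their own tuple.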
🚀 Feature
We should provide the ability to compute bootstrapped confidence intervals for metrics.
Motivation
Confidence intervals are important, and we should make it easy for people to increase the rigor of their research and model evaluations.
Pitch
I'm thinking we can have something like this (very high level):
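A minimal self-contained sketch of that idea (all names here — `MeanMetric`, `BootstrappedMetric` — are invented for illustration, not the project's actual API, and Poisson resampling is just one possible resampling scheme):

```python
import math
import random

class MeanMetric:
    """Toy class-based metric: a running mean over everything it has seen."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, values):
        for v in values:
            self.total += v
            self.count += 1

    def compute(self):
        return self.total / self.count

class BootstrappedMetric:
    """Hypothetical wrapper: keeps `num_bootstraps` internal copies of a
    metric, each fed a Poisson(1) resample of every incoming batch, so the
    full dataset never has to be held in memory."""
    def __init__(self, metric_cls, num_bootstraps=100, seed=0):
        self.copies = [metric_cls() for _ in range(num_bootstraps)]
        self.rng = random.Random(seed)

    def _poisson1(self):
        # Knuth's algorithm for a Poisson(1) draw.
        p, k, limit = 1.0, 0, math.exp(-1)
        while True:
            p *= self.rng.random()
            if p <= limit:
                return k
            k += 1

    def update(self, values):
        for copy in self.copies:
            # Repeating each sample k ~ Poisson(1) times approximates
            # sampling the stream with replacement.
            resampled = [v for v in values for _ in range(self._poisson1())]
            if resampled:
                copy.update(resampled)

    def compute(self):
        # One value per internal copy: a distribution over the metric.
        return [copy.compute() for copy in self.copies]
```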
This would let people wrap any metric, keep a set of internal copies of the metric updated with different resamples of the data, and thereby obtain a distribution of metric values.
Alternatives
We could skip this on the class-based metrics side and assume that anyone doing bootstrapping will load everything into memory and bootstrap using the functional metrics.
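The in-memory alternative is straightforward to do by hand: resample index lists with replacement and recompute a functional metric on each resample. A sketch (the `bootstrap` helper and the `accuracy` stand-in are invented names, not existing library functions):

```python
import random

def bootstrap(metric_fn, preds, targets, num_bootstraps=1000, seed=0):
    """Resample (pred, target) pairs with replacement and recompute the
    metric on each resample; returns the list of bootstrapped values."""
    rng = random.Random(seed)
    n = len(preds)
    values = []
    for _ in range(num_bootstraps):
        idx = [rng.randrange(n) for _ in range(n)]
        values.append(metric_fn([preds[i] for i in idx],
                                [targets[i] for i in idx]))
    return values

def accuracy(preds, targets):
    # Simple stand-in for any functional metric.
    return sum(p == t for p, t in zip(preds, targets)) / len(preds)
```

The trade-off is exactly the one named above: this requires all predictions and targets in memory at once, which is what the class-based wrapper would avoid.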