
[solidago] gbt: estimate asymmetrical uncertainties based on increase of loss by 1 #1973

Merged: 10 commits into neurips24 from neurips24-gbt-uncertainty on Jun 1, 2024

Conversation

amatissart (Member) commented on May 18, 2024:

Based on #1970

I struggled with the sign conventions, but I think I got something that works as expected.

TODO:

  • review the definition
    @lenhoanglnh the paper suggests considering only the negative log-likelihood term to estimate the uncertainties. Don't we need to consider the regularization term too? That's what is done on this branch, because I observed very high values when it was not present. (A hedged sketch of this loss-increase criterion follows this list.)

  • the test data need to be updated with new uncertainties, after some sanity checks on the actual values

  • adapt the L-BFGS implementation to use the new uncertainties too (or split the tests)
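
For context, a minimal sketch of the criterion referenced in the title and in the first TODO item: the left/right uncertainty of a score is the shift at which the chosen loss rises by 1 above its value at the optimum. The names, the toy loss, and the use of scipy's `brentq` root finder are assumptions for illustration only, not the PR's implementation; whether `loss` should include the regularization term is exactly the open question above.

```python
# Illustrative sketch only (not the PR's code): asymmetric uncertainties as the
# shifts at which the loss exceeds its minimum by 1, found by a 1-D root search.
from scipy.optimize import brentq

def asymmetric_uncertainties(loss, theta_opt, upper=1e3):
    """Return (delta_left, delta_right) with loss(theta_opt -/+ delta) == loss(theta_opt) + 1."""
    target = loss(theta_opt) + 1.0

    def one_side(sign):
        g = lambda delta: loss(theta_opt + sign * delta) - target
        return brentq(g, 0.0, upper)  # assumes the threshold is reached within [0, upper]

    return one_side(-1.0), one_side(+1.0)

# Toy asymmetric loss: flatter on the left of the optimum, so the left uncertainty is larger.
toy_loss = lambda t: t ** 2 if t >= 0 else (t / 2) ** 2
delta_left, delta_right = asymmetric_uncertainties(toy_loss, 0.0)  # ~ (2.0, 1.0)
```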

Checklist

  • I added the related issue(s) id in the related issues section (if any)
    • if not, delete the related issues section
  • I described my changes and my decisions in the PR description
  • I read the development guidelines of the CONTRIBUTING.md
  • The tests pass and have been updated if relevant
  • The code quality checks pass

@amatissart changed the title from "[solidago] gbt: estimate uncertainties based on increase of loss by 1" to "[solidago] gbt: estimate asymmetrical uncertainties based on increase of loss by 1" on May 18, 2024
lenhoanglnh (Contributor) commented:

> @lenhoanglnh the paper suggests considering only the negative log-likelihood term to estimate the uncertainties. Don't we need to consider the regularization term too? That's what is done on this branch, because I observed very high values when it was not present.

This is intended. One interesting implication of this is that if a user says A is maximally better than B, then the comparison will yield an infinite right uncertainty on A, and an infinite uncertainty on B. Does this break something?

If the uncertainty is too large (perhaps a feature rather than a bug in principle), the value + 1 in the equation may be changed to a smaller value.

amatissart (Member, Author) commented on May 18, 2024:

> This is intended. One interesting implication of this is that if a user says A is maximally better than B, then the comparison will yield an infinite right uncertainty on A, and an infinite uncertainty on B. Does this break something?
>
> If the uncertainty is too large (perhaps a feature rather than a bug in principle), the value + 1 in the equation may be changed to a smaller value.

Ok 👍 I pushed the modification in bf95cbb.

It seems to work. I was just a bit surprised to see such a big difference compared to the current expected uncertainty values in the test files. For example, in "data_3.py":

```
Index | Obtained          | Expected
1     | 6.156474787981032 | 0.03096002542681708 ± 1.0e-01
2     | 700.7408689591132 | 0.03096002542681708 ± 1.0e-01
```

@GresilleSiffle added the "Solidago" label (Tournesol algorithms library) on May 23, 2024
@amatissart marked this pull request as ready for review on May 30, 2024 at 15:15
amatissart (Member, Author) commented:
@lenhoanglnh The uncertainty values close to 700 were actually due to a numerical issue. In practice, there are cases where the log-likelihood term never reaches the threshold $\mathcal L + 1$. So I adapted the implementation to use $10^3$ as the default uncertainty. (I fear storing $+\infty$ in the database would create other problems.)
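
A sketch of the fallback described in this comment, under assumed names (not the PR's code): when the translated objective never crosses zero within the search bracket, a large finite default is returned instead of $+\infty$.

```python
# Illustrative sketch of the fallback (names are assumptions, not the PR's code).
from scipy.optimize import brentq

DEFAULT_UNCERTAINTY = 1e3  # large finite default instead of +inf

def bounded_uncertainty(g, upper=DEFAULT_UNCERTAINTY):
    """g(delta) = loss(theta + delta) - (loss_min + 1); root searched on [0, upper]."""
    if g(upper) < 0.0:
        # The +1 threshold is never reached within the bracket: cap the uncertainty.
        return upper
    return brentq(g, 0.0, upper)
```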

```diff
@@ -29,7 +30,7 @@ def solve(
     -------
     out: float
     """
-    ymin, ymax = f(xmin) - value, f(xmax) - value
+    ymin, ymax = f(xmin, *args) - value, f(xmax, *args) - value
```
Review comment (Member):
minor: Another way to do something similar would be to leave solve unchanged and call it with a partial: https://docs.python.org/3/library/functools.html#functools.partial
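
As an illustration of this suggestion, reusing the names from the diff above (nothing here is the PR's actual code), the extra parameters could be bound up front so that `solve` keeps receiving a single-argument callable:

```python
from functools import partial

# f, theta_diff, r, coord_indicator and ll_actual are the names from the diff above.
f_bound = partial(f, theta_diff=theta_diff, r=r,
                  coord_indicator=coord_indicator, ll_actual=ll_actual)
# solve(f_bound, ...) then only ever evaluates f_bound(x), so its body needs no *args.
```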


```python
@njit
def f(delta, theta_diff, r, coord_indicator, ll_actual):
    return ll_function(theta_diff + delta * coord_indicator, r) - ll_actual - 1.0
```
Review comment (Member):
minor: Should this -1.0 be a named constant, e.g. HIGH_LIKELIHOOD_RANGE_THRESHOLD = 1.0?
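
A sketch of how the snippet above would read with the suggested constant; only the constant is new, everything else is taken from the diff:

```python
HIGH_LIKELIHOOD_RANGE_THRESHOLD = 1.0  # loss increase that defines the uncertainty

@njit
def f(delta, theta_diff, r, coord_indicator, ll_actual):
    return (
        ll_function(theta_diff + delta * coord_indicator, r)
        - ll_actual
        - HIGH_LIKELIHOOD_RANGE_THRESHOLD
    )
```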

```python
pass

@cached_property
def loss_increase_to_solve(self):
```
Review comment (Member):
minor: Naming suggestion: translated_negative_log_likelihood (name it for what it is, not for what it is meant to be used for). Also, it's a log-likelihood, not a loss.
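
For illustration, the property from the diff above under the suggested name (body elided, surrounding class assumed):

```python
@cached_property
def translated_negative_log_likelihood(self):
    # Same body as loss_increase_to_solve above; only the name changes,
    # per the review suggestion.
    ...
```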

@amatissart merged commit 82e9c4f into neurips24 on Jun 1, 2024
8 checks passed
@amatissart deleted the neurips24-gbt-uncertainty branch on June 1, 2024 at 16:41