

Some questions about the implementation of the Waudby-Smith–Ramdas (WSR) bound #2

Closed
christian-cahig opened this issue Dec 21, 2022 · 2 comments


@christian-cahig

christian-cahig commented Dec 21, 2022

If I understand correctly, bounds.WSR_mu_plus() implements the Waudby-Smith–Ramdas (WSR) bound.

rcps/core/bounds.py

Lines 126 to 137 in b400457

```python
# imports required to run this excerpt:
import numpy as np
from scipy.optimize import brentq

def WSR_mu_plus(x, delta, maxiters):  # this one is different.
    n = x.shape[0]
    muhat = (np.cumsum(x) + 0.5) / (1 + np.array(range(1, n + 1)))
    sigma2hat = (np.cumsum((x - muhat)**2) + 0.25) / (1 + np.array(range(1, n + 1)))
    sigma2hat[1:] = sigma2hat[:-1]
    sigma2hat[0] = 0.25
    nu = np.minimum(np.sqrt(2 * np.log(1 / delta) / n / sigma2hat), 1)
    def _Kn(mu):
        return np.max(np.cumsum(np.log(1 - nu * (x - mu)))) + np.log(delta)
    if _Kn(1) < 0:
        return 1
    return brentq(_Kn, 1e-10, 1 - 1e-10, maxiter=maxiters)
```

I have some (seemingly trivial) questions about this function. Any guidance is appreciated.

  1. In the subroutine _Kn(), how does one get the + np.log(delta) term from Proposition 5?
  2. Why do we check whether _Kn(1) is negative, and, if it is, subsequently return a WSR bound of 1?
  3. When invoking scipy.optimize.brentq(), why do we use a “smaller” search interval (i.e., between 1e-10 and 1-1e-10) instead of between 0 and 1?
@christian-cahig christian-cahig changed the title from “Some questions about the implementation of the Waudby-Smith–Ramdas (WSR) bound bounds.WSR_mu_plus()” to “Some questions about the implementation of the Waudby-Smith–Ramdas (WSR) bound” on Dec 21, 2022
@aangelopoulos
Owner

  1. We're inverting the capital process _Kn, so we need to find when it equals log(delta). Looking at Theorem 3, we know that this happens when the process equals 1/delta. Take the log of both sides and set them equal.

  2. If _Kn(1) is negative, that means we can never achieve an error of delta, which means our upper bound is 1. (This is basically just clipping.)

  3. This was just for numerical stability. I don't recall at this point exactly why this was needed, but I think some warnings were raised if you plugged in 0 or 1.
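
For anyone following along later, here is a runnable sketch illustrating points 1 and 2 on synthetic data. The function body is copied from the excerpt above; the data, `delta` values, and seed are made up for illustration:

```python
import numpy as np
from scipy.optimize import brentq

# Copied from rcps/core/bounds.py (commit b400457).
def WSR_mu_plus(x, delta, maxiters):
    n = x.shape[0]
    muhat = (np.cumsum(x) + 0.5) / (1 + np.array(range(1, n + 1)))
    sigma2hat = (np.cumsum((x - muhat)**2) + 0.25) / (1 + np.array(range(1, n + 1)))
    sigma2hat[1:] = sigma2hat[:-1]
    sigma2hat[0] = 0.25
    nu = np.minimum(np.sqrt(2 * np.log(1 / delta) / n / sigma2hat), 1)
    def _Kn(mu):
        return np.max(np.cumsum(np.log(1 - nu * (x - mu)))) + np.log(delta)
    if _Kn(1) < 0:
        return 1
    return brentq(_Kn, 1e-10, 1 - 1e-10, maxiter=maxiters)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=50)  # bounded losses in [0, 1]

# Point 1: _Kn(mu) is the running max of the log capital process plus
# log(delta), so its root is where the process first hits 1/delta;
# brentq finds that crossing, giving an upper bound in (0, 1).
b1 = WSR_mu_plus(x, delta=0.1, maxiters=1000)
print(b1)

# Point 2: with an absurdly small delta, the process can never reach
# 1/delta even at mu = 1, so _Kn(1) < 0 and the bound clips to 1.
b2 = WSR_mu_plus(x, delta=1e-30, maxiters=1000)
print(b2)  # 1
```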

@christian-cahig
Author

@aangelopoulos thank you very much for the explanation. It's a big help.
