Help needed for evaluation of latest wiki page equations #126

Closed
rudolph-git-acc opened this Issue Dec 28, 2018 · 154 comments

@rudolph-git-acc

rudolph-git-acc commented Dec 28, 2018

Grateful for any help to properly evaluate the sequence of equations on this Wiki page: http://michaelnielsen.org/polymath1/index.php?title=Polymath15_test_problem#Large_negative_values_of_.5Bmath.5Dt.5B.2Fmath.5D

Below is the pari/gp code I have used so far. At the end of the code is a simple rootfinder that should (approximately) find this sequence of roots:

-10.00000000, 906.72649548
-10.00000000, 931.47009390
-10.00000000, 976.80075639
-10.00000000, 990.45190417

However, from eq 318 onwards it finds many more roots (also at different t, e.g. t = -100). I have experimented with the integration limits, the integral's fixed parameter, the real-precision setting of pari and the size of the sums, but without success so far.

I have re-injected some of the non-zero multiplicative factors; these can be switched on/off if needed.

One thing I noticed is that after eq 307 the function no longer appears to be real (see data at the end). I'm not sure whether this should matter, though, since I use the real part of Ht for root finding.

default(realprecision, 40)

Xi(s)=return((s/2)*(s-1)*Pi^(-s/2)*gamma(s/2)*zeta(s));
M0(s)=return(s*(s-1)/2*Pi^(-s/2)*sqrt(2*Pi)*exp((s/2-1/2)*log(s/2)-s/2));

Ht302(x,t) = {
    A=intnum(v=-8,8,Xi((1+I*x)/2+I*sqrt(abs(t))*v)*exp(-v^2),1);
    return(A);
}

Ht303(x,t) = {
    A=intnum(v=-8,8,Xi((1+I*x)/2+I*sqrt(abs(t))*v-Pi*I*abs(t)/8)*exp(-(v-Pi*sqrt(abs(t))/8)^2),1);
    return(A);
}

Ht304(x,t) = {
    xt = x-Pi*abs(t)/4;
    mulfact = exp(-Pi^2*abs(t)/64);
    A=mulfact*intnum(v=-8,8,Xi((1+I*xt)/2+I*sqrt(abs(t))*v)*exp(-v^2+Pi*sqrt(abs(t))*v/4),1);
    return(A);
}

Ht307(x,t) = {
    xt = x-Pi*abs(t)/4;
    mulfact = exp(-Pi^2*abs(t)/64)*2;
    A=mulfact*intnum(v=-8,8,real(M0((1+I*xt)/2+I*sqrt(abs(t))*v))*zeta((1+I*xt)/2+I*sqrt(abs(t))*v)*exp(-v^2+Pi*sqrt(abs(t))*v/4),1);
    return(A);
}

Ht312(x,t) = {
    xt = x-Pi*abs(t)/4;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2));
    A= mulfact*intnum(v=-8,8,exp((I*sqrt(abs(t))*v)/2*log(xt/(4*Pi))-Pi*sqrt(abs(t))*v/4+(I*abs(t)*v^2)/(2*xt))*zeta((1+I*xt)/2+I*sqrt(abs(t))*v)*exp(-v^2+Pi*sqrt(abs(t))*v/4),1);
    return(A);
}

Ht315(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2));
    A=mulfact*intnum(v=-8,8,exp(I*sqrt(abs(t))*v*log(N)+I*u*v^2/(8*Pi))*zeta((1+I*xt)/2+I*sqrt(abs(t))*v)*exp(-v^2),1);
    return(A);
}

Ht316(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2));
    A=mulfact*sum(n=1,1000, intnum(v=-8,8,exp(I*sqrt(abs(t))*v*log(N)+I*u*v^2/(8*Pi))*n^(-((1+I*xt)/2+I*sqrt(abs(t))*v))*exp(-v^2),1));
    return(A);
}

Ht317(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2))*N^((1+I*xt)/2)/8;
    A=mulfact*sum(n=1,1000, intnum(v=-8,8,exp(-I*sqrt(abs(t))*v*log(n/N)+I*u*v^2/(8*Pi)-((1+I*xt)/2)*log(n/N))*exp(-v^2),1));
    return(A);
}

Ht318(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2))*N^((1+I*xt)/2)/8*sqrt(Pi);
    A=mulfact*sum(n=1,1000,exp(-abs(t)*(log(n/N))^2/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n/N)));
    return(A);
}

Ht320(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2))*N^((1+I*xt)/2)/8*sqrt(Pi);
    A = mulfact*sum(n=1,1000,exp(-(abs(t)*(n-N)^2)/(4*N^2*(1-I*u/(8*Pi)))-I*xt/2*(n-N)/N+I*xt*(n-N)^2/(4*N^2)));
    return(A);
}

Ht321(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2))*N^((1+I*xt)/2)/8*sqrt(Pi);
    A = mulfact*sum(n=1,1000,exp(-(2*Pi*u*(n-N)^2)/(8*Pi-I*u)-2*Pi*I*N*(n-N)+Pi*I*(n-N)^2));
    return(A);
}

Ht323(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2))*N^((1+I*xt)/2)/8*sqrt(Pi);
    A = mulfact*sum(n=1,1000,exp(-(2*Pi*u*(n-N)^2)/(8*Pi-I*u)-Pi*I*n^2+2*Pi*I*(n-N)^2));
    return(A);
}

Ht324(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2))*N^((1+I*xt)/2)/8*sqrt(Pi);
    A = mulfact*sum(n=1,1000,exp((16*Pi^2*I*(n-N)^2)/(8*Pi-I*u))*exp(Pi*I*n));
    return(A);
}

Ht328(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2))*N^((1+I*xt)/2)/8*sqrt(Pi);
    A = mulfact*sum(n=1,1000,exp(-Pi*I*n*(n+1)/2)*exp(2*Pi*I*(n+1/2)*N)*exp(-u*n*(n+1)/16));
    return(A);
}

print("eq.302 ",Ht302(1000, -10))
print("eq.303 ",Ht303(1000, -10))
print("eq.304 ",Ht304(1000, -10))
print("eq.307 ",Ht307(1000, -10))
print("eq.315 ",Ht315(1000, -10))
print("eq.316 ",Ht316(1000, -10))
print("eq.317 ",Ht317(1000, -10))
print("eq.318 ",Ht318(1000, -10))
print("eq.320 ",Ht320(1000, -10))
print("eq.321 ",Ht321(1000, -10))
print("eq.323 ",Ht323(1000, -10))
print("eq.324 ",Ht324(1000, -10))
print("eq.328 ",Ht324(1000, -10))

xs = 900; xe = 1010;
ts = -10; te = -1;
f=Ht318;
forstep(t=ts, te, 10, {
    xc = xs;
    while(xc <= xe,
        if(real(f(xc,t))*real(f(xc+1,t)) < 0, root=solve(a=xc, xc+1, real(f(a,t))); printf("%3.8f, %3.8f\n",t, root));
        xc=xc+1);
})
eq.302 1.201485926838305713580758240853488704456 E-165 + 3.514303326279329051 E-221*I
eq.303 1.201485926838305713582303927841938470920 E-165 + 3.522103826014908131 E-221*I
eq.304 1.201485926838305713582303927841938470920 E-165 + 3.522103826014922598 E-221*I
eq.307 1.201477416607903787215861199656638682224 E-165 + 3.992587328505261451855248837227037420869 E-169*I
eq.312 1.200242494404236645648308584282451391036 E-165 + 4.014384445003979486997306873566995218158 E-169*I
eq.315 1.200242494404236645648308584282451391036 E-165 + 4.014384445003979486997306873566995218158 E-169*I
eq.316 1.200242494404236645648308630967087228353 E-165 + 4.014384445003979487000178658198142256762 E-169*I
eq.317 1.210630457223022714918883693478442584040 E-165 - 5.581422172204223838770738013241198104262 E-166*I
eq.318 1.209227903617935860382758702133562335051 E-165 - 5.611945144292097211229689587084475461023 E-166*I
eq.320 2.791674127351571571455964690537038072397 E-167 - 2.126109533152847673586778474944806978351 E-167*I
eq.321 2.791674127351571571455964690537038072397 E-167 - 2.126109533152847673586778474944806978351 E-167*I
eq.323 -3.076893561828845392623560432368639943370 E-167 + 1.687160985046708488717820653125387727510 E-167*I
eq.324 -3.076893561828845392623560432368639943370 E-167 + 1.687160985046708488717820653125387727510 E-167*I
eq.328 -3.076893561828845392623560432368639943370 E-167 + 1.687160985046708488717820653125387727510 E-167*I
@teorth

teorth commented Dec 28, 2018

Thanks for this - it's easy for me to step through it myself now. The multiplicative factor for 317 should have N^(-(1+I*xt)/2) rather than N^((1+I*xt)/2)/8 (I guess the 1/8 factor was just dropped at the very start, as it wasn't important), which seems to fix that step at least.

Similarly, the multiplicative factor for 318 should be N^(-(1+I*xt)/2)*sqrt(Pi/(1-I*u/(8*Pi))) rather than N^((1+I*xt)/2)/8*sqrt(Pi) .

Now checking 3.20...

@teorth

teorth commented Dec 28, 2018

OK, it looks like even after fixing the multiplicative factor, the 3.20 approximation is not very accurate unless x and t are extremely large (of size 10^5 or so). However, it does become reasonably good at that scale. What this could mean is that the wiki asymptotics are for a regime which is not the one seen in the numerics.

On the other hand, since 3.18 remains reasonably accurate and is significantly faster to calculate than the preceding expressions (no integration needed), perhaps it could be used to explore the zeroes over a larger range than previously done, e.g. for x, t of size around 10^5 or 10^6. I don't know right now whether the patterns we are seeing are transient; it could be that different dynamics take over at larger scales.

@rudolph-git-acc

rudolph-git-acc commented Dec 28, 2018

Many thanks for your help! I now indeed get the correct zeros for eq 3.18, however the equations beyond that remain unstable.

Do you know where the imaginary parts are coming from beyond 3.07?

Great idea to use the fast 3.18 to explore very large t, x. Will try this out.

P.S.:
print("eq.328 ",Ht324(1000, -10)) should have been print("eq.328 ",Ht328(1000, -10)) of course.

@teorth

teorth commented Dec 28, 2018

I think the small imaginary parts are coming from the lower order terms in the Stirling approximation that we are omitting (they are of order about O(1/(12s)), which is consistent with the fact that the imaginary part is four orders of magnitude smaller than the real part from 3.07 onwards). It can be fixed by adding additional terms to the definition of M_0, but given that we have more serious accuracy issues, it probably isn't worth expending the effort to fix this at this stage.
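As an aside, the size of that omitted Stirling correction is easy to illustrate for a real argument (a Python sketch with an arbitrary test point s = 20, not part of the original discussion): multiplying the leading approximation by exp(1/(12s)) shrinks the relative error by a few orders of magnitude.

```python
import math

def stirling(s, correction=False):
    # Leading-order Stirling approximation to Gamma(s) for real s > 0,
    # optionally multiplied by the first correction factor exp(1/(12s)).
    g = math.sqrt(2 * math.pi / s) * (s / math.e) ** s
    if correction:
        g *= math.exp(1.0 / (12.0 * s))
    return g

s = 20.0
exact = math.gamma(s)
err0 = abs(stirling(s) / exact - 1)        # ~ 1/(12s) ~ 4e-3
err1 = abs(stirling(s, True) / exact - 1)  # ~ 1/(288 s^2) ~ 9e-6
print(err0, err1)
```

Adding the analogous factor to the complex M0 would presumably shrink the spurious imaginary part in the same way.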

@teorth

teorth commented Dec 28, 2018

The equation (3.18), by the way, is already almost entirely in (N,u) coordinates (replace xt by 4*Pi*N^2) after discarding the multiplicative factor, so we should for instance now be able to fill in the missing portions of https://drive.google.com/file/d/101wtC7VyM9aHTst2F-X1Xf62FQT0m6e9/view relatively easily.

@rudolph-git-acc

rudolph-git-acc commented Dec 29, 2018

Have played quite a bit with 3.18. It is really fast and also appears to be sufficiently accurate as long as x / |t| > 10 (-ish). For other values it quickly drifts away from the correct value of Ht.

I see what you mean with the (N,u) coordinates; however, once the multiplicative factor is removed, the zeros of 3.18 immediately become very 'noisy' again. I saw a similar effect when I first tested the (unbounded) Ht-integral for larger negative t and noticed that a 'normaliser' (e.g. multiplying by 10^166) yielded more stable results.

The Ht-zeros used for the graph you refer to were in the range t=-200...0, x=20...600, and we have to go higher to test the new equation. Therefore I just kicked off a real root finding run for t=-100...0, x=10000...10600 and will overlay this with the Ht data (did some test runs and expect it all to match).

@rudolph-git-acc

rudolph-git-acc commented Dec 29, 2018

There is a perfect fit between the zeros of Ht-actual and eq.3.18 (red + green turns into brownish):

eq318plot

Will now expand the data in the t-direction to see if the pattern re-emerges; however, I expect divergence to start when t decreases below -800.

@teorth

teorth commented Dec 29, 2018

The perfect fit is very promising! It suggests that much of the numerical deviation between H_t and 3.18 is coming from the multiplier rather than the sum.

By the way, I think I know why your root finding algorithm creates spurious zeroes when the multiplier is removed - it's because the multiplier and sum are both complex numbers, with approximately opposite phase; only the product is (approximately) real. So any attempt to find zeroes based on detecting sign changes in the sum is going to create artificial zeroes when the phase is close to \pm \pi/2. On the other hand this means that one can remove any purely real components from the multiplier (in particular M0 can be replaced by its phase M0/abs(M0)), which will remove the exponentially small magnitude in these sums and perhaps lead to slightly more accurate numerics.
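A toy illustration of that phase cancellation (a Python sketch; the magnitudes and phases are invented for illustration, not taken from the actual H_t computation):

```python
import cmath

# Toy model of the situation described above: multiplier M and sum S have
# tiny/huge magnitudes and (nearly) opposite phases, so each one's real part
# oscillates in sign while the product M*S stays essentially real.
for theta in [0.1, 1.0, 1.5, 2.0, 3.0]:
    M = 1e-80 * cmath.exp(1j * theta)    # stands in for the multiplier
    S = 2.5e79 * cmath.exp(-1j * theta)  # stands in for the sum
    prod = M * S
    print(theta, M.real > 0, prod.real)
# M.real changes sign as theta crosses pi/2 -- a sign-change root finder
# applied to M (or S) alone would report a spurious zero there -- while
# prod.real is 0.25 throughout.  Replacing M by its phase M/abs(M) keeps
# the sign information without the exponentially small magnitude.
```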

I'm curious to see exactly where the breakdown in accuracy from (3.18) to (3.20) is coming from. There are three substitutions being made here:

  • replacing 1 + I*xt/2 with I*xt/2;
  • replacing the first log(n/N) with (n-N)/N;
  • replacing the second log(n/N) with (n-N)/N - (n-N)^2/2N^2.

From my own primitive testing it seemed that the first change was fairly accurate but the second one was not for smallish values of t,x, but it should get more accurate for larger values of t,x. (Amusingly, replacing the first log(n/N) with the seemingly more accurate (n-N)/N - (n-N)^2/2N^2 gives far worse results due to the fact that the latter approximation has an additional zero leading to some new very large terms in the sum.)
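The quality of those log substitutions can be checked directly (a quick sketch; N = 1000 is just an illustrative scale):

```python
import math

N = 1000.0
for n in [1010.0, 1050.0, 1100.0, 1200.0]:
    y = (n - N) / N
    exact = math.log(n / N)
    first = y              # the (n-N)/N replacement
    second = y - y**2 / 2  # the (n-N)/N - (n-N)^2/2N^2 replacement
    print(n, first - exact, second - exact)
# The first-order error grows like y^2/2.  Since the exponent multiplies
# log(n/N) by quantities of size ~ xt, even an absolute error of 1e-3 here
# can rotate the phase of a summand by many radians, which is consistent
# with the replacement of the squared log being the dominant error source.
```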

@rudolph-git-acc

rudolph-git-acc commented Dec 29, 2018

The phase M0/abs(M0) works great, and there is no longer a need for any multiplicative factor other than that. I have also tried Newton-Raphson instead of sign-change based root finding; however, this only worked successfully for 3.18 with the new factor (still drift in 3.20).

The graph has been extended to t=-300 and still has a perfect fit with the Ht-actual zeros. At t=-1000 the drift becomes ~0.3 at each zero (i.e. Ht-actual is 0.3 higher than 3.18). This increases to ~0.7 at t=-1800 (here only a single 'straight line' is left to detect). Will generate the roots for x=10000...10600, t=-1800...-10 in steps of 10, so we have that ready when we break through the 3.20 Barrier :)

@km-git-acc

km-git-acc commented Dec 30, 2018

Rudolph, just to confirm, is this the eqn 3.18 function you are using post the changes?

Ht318_new(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    m0val = M0((1+I*xt)/2);
    mulfact = (m0val/abs(m0val))*N^(-(1+I*xt)/2)*sqrt(Pi/(1-I*u/(8*Pi)));
    A=mulfact*sum(n=1,1000,exp(-abs(t)*(log(n/N))^2/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n/N)));
    return(A);
}
Ht318_new(1000,-10) -> 1.55003 + 0.00051*I

Using the root finder at the end, the output is close to the one in the first post.

-10.00000000, 906.76541539
-10.00000000, 931.50799535
-10.00000000, 976.83695739
-10.00000000, 990.48752071
@rudolph-git-acc

rudolph-git-acc commented Dec 30, 2018

Yes, that's the one. You could simplify the mulfact by only using M0/abs(M0). Eq. 3.18 does deliver very well as long as x / |t| > 10; it is eqs. 3.20 onwards that are still unstable when establishing the zeros.

@km-git-acc

km-git-acc commented Dec 30, 2018

It seems mulfact still requires the N^() term. On removing it,

Ht318_new(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    m0val = M0((1+I*xt)/2);
    mulfact = (m0val/abs(m0val));
    A=mulfact*sum(n=1,1000,exp(-abs(t)*(log(n/N))^2/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n/N)));
    return(A);
}
Ht318_new(1000,-10) -> -2.54492 + 0.56468*I
the imaginary term becomes significant, unlike in the previous version.

The root finder also gives different, and many more, zeros:

-10.00000000, 902.09670707
-10.00000000, 904.48290536
-10.00000000, 909.25164595
-10.00000000, 911.63424081
-10.00000000, 914.01563999
..
..

Also, while dealing with large x, it may be useful to change the number of terms in the sum dynamically (e.g. sqrt(x) instead of 1000):

Ht318_new2(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    m0val = M0((1+I*xt)/2);
    mulfact = (m0val/abs(m0val))*N^(-(1+I*xt)/2)*sqrt(Pi/(1-I*u/(8*Pi)));
    A=mulfact*sum(n=1,floor(sqrt(x)),exp(-abs(t)*(log(n/N))^2/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n/N)));
    return(A);
}

For example, check the behavior of the new vs new2 functions at x=10^7 or higher.
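For scale, a quick Python check of the quantities involved (illustrative values): the summand is concentrated near n = N = sqrt(xt/(4*Pi)), and sqrt(x) is always comfortably larger than N, so the dynamic cutoff keeps the peak while dropping far-out terms.

```python
import math

t = -10.0
for x in [1000.0, 1e7, 1e8]:
    xt = x - math.pi * abs(t) / 4
    N = math.sqrt(xt / (4 * math.pi))  # centre of the Gaussian weight in the sum
    cutoff = math.floor(math.sqrt(x))  # proposed dynamic number of terms
    print(x, round(N, 1), cutoff)
# x = 1e7 gives N ~ 892 but sqrt(x) ~ 3162: a fixed 1000-term sum barely
# clears the peak there, and misses it entirely for x = 1e8 where N ~ 2821.
```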

@rudolph-git-acc

rudolph-git-acc commented Dec 30, 2018

That is strange. I am currently running x=10000...11000, t=-1800...-10 in steps of 10, with the code below (at realprecision=40) and don't have issues with the root finding. Note that I already use sqrt(x) and count to 100 rather than 1000 (could that explain the difference?):

EDIT 1:
Oops. You are correct. I use log(n) in the last part and therefore have implicitly done the N^() multiplication !

Ht318(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2))*N^(-0*(1+I*xt)/2)*sqrt(Pi);
    A=M0((1+I*xt)/2)/abs(M0((1+I*xt)/2))*sum(n=1,100,exp(-abs(t)*(log(n/N))^2/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n)));
    return(A);
}

EDIT 2:
If I have done the math correctly, moving from 3.16 to 3.17 just gives log(n) as the last log in 3.17. The non-zero multiplicative N^() factor is then brought in for the later steps. However, when I skip this multiplication and simply carry the last log(n) through to 3.20, it does produce the correct zeros (this works all the way to 3.23):

Ht320(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    A = M0((1+I*xt)/2)/abs(M0((1+I*xt)/2))*sum(n=1,100,exp(-(abs(t)*(n-N)^2)/(4*N^2*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n)));
    return(A);
}
@km-git-acc

km-git-acc commented Dec 30, 2018

Great. I was able to match the output with the above Ht318 function after removing the sqrt(Pi/(1-I*u/(8*Pi))) factor. Had kept it since it is complex, but the imaginary part is small.

The Ht320 function is giving the correct zeros in the test range, although the imaginary part here is not insignificant, which may require caution.

Ht320(1000,-10) -> 0.69424 - 0.20181*I
@rudolph-git-acc

rudolph-git-acc commented Dec 31, 2018

Here is a plot of both eq 3.18 and Ht-actual in the range: x=10000..15000, t=-3000...-10 (step 10). The brownish color indicates the surprisingly good fit (< 1 difference) even at t=-3000 and using n=1...100.

graph10kto15k

Normalized version:

graph10kto15knorm

@km-git-acc

km-git-acc commented Dec 31, 2018

Great. Let me know if you want me to evaluate a particular x range. Currently planning to check an x range at a much greater height, say near 10^8.

@rudolph-git-acc

rudolph-git-acc commented Dec 31, 2018

The evaluation of 3.18 is very fast, even in pari/gp (have not yet built it in ARB). To get some good visuals of the patterns at 10^8, you probably could pick a relatively wide x-range like 10^8...10^8 + 100000 since the t-value has to be substantially negative (I guess < -10000) to capture the lowest peaks and to clearly identify the straight lines.

@teorth

teorth commented Dec 31, 2018

Thanks for the plot! It does strongly look like the normalised zeroes are rapidly becoming periodic in the N variable with period 1. I'll have another crack at trying for an asymptotic expansion, maybe dropping fewer terms than what I initially tried on the wiki. I probably won't get a theta function any more, but hopefully it will be something still computable.

Added later: actually, the following numerics would be helpful:

  1. To what extent does the above picture change if one replaces the (1+I*xt)/2 factor in 3.18 with just I*xt/2? (I feel like this should be relatively negligible.)

  2. After doing this, to what extent does the above picture change if one replaces the second factor of log(n/N) in 3.18 with (n-N)/N - (n-N)^2 / 2N^2? (This change should be slightly less negligible, but should become more accurate as N gets large, I think.)

  3. Finally, what happens when one replaces the first log(n/N) factor (the one being squared) with (n-N)/N? This is of course 3.20. The numerics I did suggest that it is this change which is causing most of the deviation, but I'd like to confirm this.

@p15-git-acc

p15-git-acc commented Dec 31, 2018

Here are four pictures with axes like in Rudolph's normalized plot above. These have only one evaluation per pixel so some roots might be missed, and only 1000 summation terms were taken per evaluation. I'm pretty sure the first picture is correct but the rest are relatively untested so they might have typos. The same multiplier was used for all four plots.

The first and second pictures are indeed almost identical, but both of the approximations of the log factors appear to introduce significant deviation in the third and fourth pictures.

3.18 with multiplier:
approximation 0

and (1+I*xt)/2 -> I*xt/2:
approximation 1

and second log(n/N) -> (n-N)/N - (n-N)^2 / 2N^2:
approximation 2

and first log(n/N) -> (n-N)/N
approximation 3

@teorth

teorth commented Jan 1, 2019

Wow, those last two approximations are significantly worse than I expected, with no significant sign of improvement as N goes to infinity. Will have to rethink how to make a usable approximation.

Could you try changing 1+I*xt to I*xt and then changing the second log(n/N) to (n-N)/N - (n-N)^2/2N^2 + (n-N)^3/3N^3? I guess what must be happening is that the convergence of the Taylor series for log(1+x) is a lot worse than I expected.

@p15-git-acc

p15-git-acc commented Jan 1, 2019

Using the third order expansion of the second log fixes both of the last two approximations.

3.18 with multiplier, and changing (1+I*xt)/2 to I*xt/2 and changing second log(n/N) to (n-N)/N - (n-N)^2 / 2N^2 + (n-N)^3/3N^3:
approximation 2 with third order expansion

and changing first log(n/N) to (n-N)/N:
approximation 3 with third order expansion

@teorth

teorth commented Jan 1, 2019

Ah! Now we're getting somewhere. Many thanks!

The next approximation I want to make (on top of all the previous ones) is to separate out the exp( I*xt/2 * (n-N)^3/3N^3 ) term and replace it with (1 + I*xt/2 * (n-N)^3/3N^3). That is to say, exactly the same expression as (3.20) except that the summand is multiplied by (1 + I*xt/2 * (n-N)^3/3N^3). Could you make a similar plot for this approximation also? In the meantime I will try to repeat the calculations from (3.20) onwards with this new correction term and see what comes out.
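For reference, the error of the truncation exp(z) ≈ 1 + z grows quadratically in |z|, so this step is only harmless while the cubic phase term stays well below 1 in magnitude (a quick Python check, not tied to specific x, t):

```python
import cmath

# First-order truncation error of exp(z) for purely imaginary z = i*m,
# the shape of the separated cubic phase factor.
for m in [0.01, 0.1, 0.5, 1.0, 2.0]:
    z = 1j * m
    err = abs(cmath.exp(z) - (1 + z))
    print(m, err)
# err grows roughly like m**2/2; note also that |1 + i*m| = sqrt(1 + m*m) > 1,
# so the truncation inflates the magnitude of an exactly unimodular factor.
```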

@p15-git-acc

p15-git-acc commented Jan 1, 2019

In this picture I've separated out the exp( -I*xt/2 * (n-N)^3/3N^3 ) term and replaced it with (1 - I*xt/2 * (n-N)^3/3N^3).

modified 320

@p15-git-acc

p15-git-acc commented Jan 1, 2019

Here's the result of taking one more term of the exp approximation, separating out exp( -I*xt/2 * (n-N)^3/3N^3 ) and replacing it with (1 - I*xt/2 * (n-N)^3/3N^3 + (I*xt/2 * (n-N)^3/3N^3)^2/2):

modified 320

@teorth

teorth commented Jan 2, 2019

Thanks for this! I guess the numerics are telling us that it's not a good idea to Taylor expand this particular exponential factor. (It's interesting, though, that the 1-periodicity survives even in these lousy approximations. I'm also slightly curious to see how well each of these approximations maintains the real-valuedness of $H_t(x)$, though this is somewhat secondary.)

I'll start writing a revised version of the wiki computations in which I keep this exponential phase factor. The Theta function is now replaced by something a bit weirder - a series of Airy functions - but perhaps that's simply what the answer for what the asymptotic behaviour is.

UPDATE: I've got a first draft of this at

http://michaelnielsen.org/polymath1/index.php?title=Second_attempt_at_computing_H_t(x)_for_negative_t

I've now kept the multiplier rather than discarded it as in the previous version (in particular there was a factor of 1/8 sqrt(Pi) that was missing in previous code). The calculation is still not at a final stage because I haven't yet figured out how best to evaluate the Airy type integral that I end up with, but perhaps first one should check that these formulae are consistent with each other and still producing essentially the same pattern of zeros that we are seeing above.

@p15-git-acc

p15-git-acc commented Jan 2, 2019

I looked up generalised Airy functions but https://dlmf.nist.gov/ is offline because the federal government of the USA is currently shut down.

@rudolph-git-acc

rudolph-git-acc commented Jan 2, 2019

Have tried the second version in pari/gp, and at 1.20 the value already deviates by a factor of 3 (for x=1000, t=-10). The roots also become incorrect at 1.20. I have not gotten any good value for 1.24.

Could an issue be that the Taylor expansion for log(n/N) (i.e. 1.19) is only valid for |-1 + n/N| < 1, whilst n becomes much larger than N?

default(realprecision, 40);

Xi(s)=return((s/2)*(s-1)*Pi^(-s/2)*gamma(s/2)*zeta(s));
M0(s)=return(s*(s-1)/2*Pi^(-s/2)*sqrt(2*Pi)*exp((s/2-1/2)*(log(s/2))-s/2));

Ht11(x,t) = {
    A=1/(8*sqrt(Pi))*intnum(v=-8,8,Xi((1+I*x)/2+I*sqrt(abs(t))*v)*exp(-v^2),1);
    return(A);
}

Ht116(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)));
    A=mulfact*sum(n=1,1000,exp(-(abs(t)*log(n/N)^2)/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n/N)));
    return(A);
}

Ht118(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)));
    A=mulfact*sum(n=1,1000,exp(-u*(n-N)^2/(4*(1-I*u/(8*Pi)))-2*Pi*I*N^2*log(n/N)));
    return(A);
}

Ht120(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)));
    A=mulfact*sum(n=1,1000,exp(-u*(n-N)^2/(4*(1-I*u/(8*Pi)))-2*Pi*I*N*(n-N)+Pi*I*(n-N)^2-2*Pi*I/(3*N)*(n-N)^3));
    return(A);
}

Ht124(x,t) = {
    xt = x-Pi*abs(t)/4;
    N = sqrt(xt/(4*Pi));
    u = 4*Pi*abs(t)/xt;
    mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)));
    A=mulfact*sum(n=1,1000,exp(-(2*Pi*I*(n-N)^2)/(1-I*u/(8*Pi))+Pi*I*n-2*Pi*I/(3*N)*(n-N)^3));
    return(A);
}

print("Ht11 : ", Ht11(1000, -10));
print("Ht116: ", Ht116(1000, -10));
print("Ht118: ", Ht118(1000, -10));
print("Ht120: ", Ht120(1000, -10));
print("Ht124: ", Ht124(1000, -10));

xs = 900; xe = 1000;
ts = -10; te = -9;
f=Ht120;
forstep(t=ts, te, 2, {
    xc = xs;
    while(xc <= xe,
        if(real(f(xc,t))*real(f(xc+1,t)) < 0, root=solve(a=xc, xc+1, real(f(a,t))); printf("%3.8f, %3.8f\n",t, root));
        xc=xc+1);
})
Ht11 : 8.473323058767420988462095093867083351079 E-167 + 2.4784166626425366748 E-222*I
Ht116: 8.464553913428080983922546057446293276061 E-167 + 2.831092360284232422609199398263948176428 E-170*I
Ht118: 7.555897718962981045167509895563492320651 E-167 - 8.678666255397601416299821448091805557430 E-168*I
Ht120: 2.571676382003351323029996737756072336631 E-167 - 2.171715290532354466079839597858283138264 E-167*I
Ht124: -1.397262230331464853130873538396745824946 E13341 + 4.141701909039241843563469909225129565188 E13341*I
@teorth

teorth commented Jan 2, 2019

Hmm, this doesn't quite mesh with what p15 was reporting, but perhaps this has to do with a different range of x and t. In principle the regions where n is far from N should not be significant because the first term exp( -u(n-N)^2 / 4(1-iu/8*Pi) ) should make those terms small regardless of what is going on with the imaginary part of the exponent.

Do things get better for (1.20) if one adds in the next term in the Taylor expansion, namely +2*pi*i/(4*N^2) * (n-N)^4?

Something is also weird with the discrepancy between (1.20) and (1.24) because these two expressions should be identical. Could you also try evaluating (1.23) to try to pin this down? Thanks! EDIT: Ah, one problem at least is that there was a multiplier of exp( - pi i N^2 ) missing, which I have now restored, though this doesn't explain the enormous magnitude.
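The damping argument above can be quantified with a quick Python sketch at the x=1000, t=-10 test point used throughout the thread:

```python
import math, cmath

x, t = 1000.0, -10.0
xt = x - math.pi * abs(t) / 4
N = math.sqrt(xt / (4 * math.pi))  # ~ 8.9 here
u = 4 * math.pi * abs(t) / xt      # ~ 0.13 here

def weight(n):
    # Magnitude of the Gaussian factor exp(-u*(n-N)^2/(4*(1-I*u/(8*Pi))))
    return abs(cmath.exp(-u * (n - N) ** 2 / (4 * (1 - 1j * u / (8 * math.pi)))))

for dn in [0, 10, 50, 100]:
    print(dn, weight(N + dn))
# Terms 100 past the peak are already below exp(-300), so the region where
# the log(n/N) Taylor series diverges (n > 2N) should indeed contribute
# nothing numerically -- unless a sign or phase error elsewhere undoes
# the damping.
```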

@rudolph-git-acc

rudolph-git-acc commented Jan 8, 2019

Closing in on the right edge of the shark fin, but not entirely there yet. I noted that the imaginary part of 1.65 stays around 6300 and moves only a little when changing N, u. The first term 2*Pi*I*N in 1.65 seems to be responsible for this 'height' (and gives empty plots); however, when I remove it I get:

plot165

Is there possibly something wrong with the first term? Or is it related to the mod-part (that I have taken out)?

@p15-git-acc

p15-git-acc commented Jan 9, 2019

I've added the right edge of the sharkfin, where the real part of -(c_0 a^{-1/3} + b^2 a^{-4/3} + 2\pi a^{-1/3}) ) is zero:
right edge of envelope
To be clear, this is superimposed on the green plot of zeros of the real part of the Ht approximation, and also superimposed on the red plot of the left side of the shark fin where the magnitudes of the two summands of the approximation are equal.

@rudolph-git-acc

rudolph-git-acc commented Jan 9, 2019

This is a plot of the derivative of Im(S) (1.65 as per wiki, i.e. no terms removed except the mod):

plot165der

@teorth

teorth commented Jan 9, 2019

Huh, it appears that there is some sort of "phase transition" near the right edge where just about anything we do to the imaginary part will pick up that edge.

The 2 Pi i N term in (1.65) can be replaced for instance by 2 Pi i {N} (at the cost of introducing one of these date line discontinuities). It's not so much the zero set of Im(S) that is relevant, but rather a contour plot of where Im(S) hits a multiple of 2 Pi (or maybe Pi), as this should start creating the oscillations that one sees in the sharkfin.

p15's graph that Re( -(c_0 a^{-1/3} + b^2 a^{-4/3} + 2\pi a^{-1/3}) ) appears to change sign at the right edge of the sharkfin is convincing. I don't yet have a theoretical explanation as to why this ought to be the case, but assuming it we do at least have fairly short descriptions of both sides of the sharkfin now. One could now compute the point (N,u) where these two curves cross; this is something one could do for many ranges of N by computer rather than by eyeballing the sharkfin, and maybe we can start understanding the way in which u decays in N as well (this may help figure out how to do the asymptotics properly, since the first attempt I had was completely wrong).

@p15-git-acc

p15-git-acc commented Jan 9, 2019

One could now compute the point (N,u) where these two curves cross;

The point where the right edge curve intersects the vertical line past the top of the sharkfin could be interesting too, and probably even more tractable.

Edit: I think this vertical line intersection is u=pi*sqrt(32*(1-(1+sqrt(N^2-6*N+1))/N))
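Taking that conjectured expression at face value, its large-N behaviour follows from sqrt(N^2-6N+1) = N - 3 - 4/N + O(1/N^2), giving u ≈ 8*pi/sqrt(N), i.e. N*u^2 → 64*pi^2 (my own expansion, worth double-checking):

```python
import math

def u_edge(N):
    # The conjectured intersection of the right-edge curve with the
    # vertical line past the top of the sharkfin (expression above).
    return math.pi * math.sqrt(32 * (1 - (1 + math.sqrt(N * N - 6 * N + 1)) / N))

for N in [100.0, 1000.0, 10000.0]:
    approx = 8 * math.pi / math.sqrt(N)  # leading-order expansion for large N
    print(N, u_edge(N), approx, N * u_edge(N) ** 2)
# The ratio u_edge/approx tends to 1, and N*u^2 approaches 64*pi^2 ~ 631.7,
# which would make the sharkfin height scale like N^(-1/2).
```

If correct, this is one concrete way the decay of u in N could be pinned down.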

@teorth

teorth commented Jan 9, 2019

OK, I think I understand what is going on now.

The quantity (1.58) has two terms A+B, both involving an Airy function. But in the A term, the Airy function is applied to something with a positive real part and so there isn't much oscillation; in the B term, the Airy function is applied to something with a positive real part to the right of the right sharkfin edge and a negative real part to the left of this edge, so it oscillates to the left of this edge and doesn't oscillate to the right. This creates all the ridges in the zeros. But once one crosses the Re(S)=0 line, the amplitude of the oscillatory B term falls below that of the non-oscillatory A term and so now there are no more zeroes.

(I think a plot of Re(A) and Re(B) for a fixed choice of u, e.g., u=0.2, should illustrate the above behaviour.)

The right shark fin edge is basically a parabola in each strip corresponding to a fixed value of integer part [2N+1/2]; see (1.75) in the writeup.

Am still working on getting a good approximation to the left edge. EDIT: I now have a crude approximation, (1.82). The approximations suggest that actually (N, Nu^2) coordinates may give a better plot than (N,u) coordinates, in particular the (N,Nu^2) plot should be quite stable as one varies N (right now we are focusing on N ~ 1000, but I think the (N,Nu^2) plots for other values of N should give similar results).
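The oscillate-versus-decay dichotomy above is just the standard Airy asymptotics; here is a quick sketch using the textbook leading-order formulas (these asymptotic expressions are standard facts, not something derived in this thread):

```python
import math

# Leading-order Airy asymptotics for large z > 0:
#   Ai(z)  ~ exp(-(2/3) z^(3/2)) / (2 sqrt(pi) z^(1/4))       (decaying)
#   Ai(-z) ~ sin((2/3) z^(3/2) + pi/4) / (sqrt(pi) z^(1/4))   (oscillating)
def ai_pos(z):
    return math.exp(-(2 / 3) * z**1.5) / (2 * math.sqrt(math.pi) * z**0.25)

def ai_neg(z):
    return math.sin((2 / 3) * z**1.5 + math.pi / 4) / (math.sqrt(math.pi) * z**0.25)

def sign_changes(f, lo=1.0, hi=20.0, steps=1900):
    vals = [f(lo + (hi - lo) * k / steps) for k in range(steps + 1)]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

print(sign_changes(ai_pos))  # 0: no zeros where the argument is positive
print(sign_changes(ai_neg))  # many: oscillation for negative arguments
```

This matches the picture above: the B term oscillates (and produces ridges of zeros) only where its Airy argument has negative real part, i.e. to the left of the right sharkfin edge.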

rudolph-git-acc commented Jan 9, 2019

That logic makes a lot of sense!

Here are the plots of the real parts of the A and B (approximate) Airy-functions as they appear in 1.58 with u fixed at 0.2:

plot1581stairy

plot1582ndairy

teorth commented Jan 9, 2019

Thanks for the plots! (One could superimpose the first plot against the negative of the second plot to graphically illustrate where zeroes of the sum come from, but it's already clear what's going on from the two separate plots.) So inside the sharkfin, the first term is basically negligible and one is just seeing the zeroes of the Airy function of the second term.

As one increases u, the region where the oscillation of the second term dominates the magnitude of the first term shrinks, and at some point the sharkfin disappears and one either gets a lone zero (near the half-integers) or no zero (near the integers). I'm not sure exactly what the difference is; the u=0.2 plots above seem to give no distinction between the half-integers and integers. But perhaps the situation will be clearer if we also make similar plots for higher values of u.

p15-git-acc commented Jan 9, 2019

The approximations suggest that actually (N, Nu^2) coordinates may give a better plot than (N,u) coordinates

Here are plots of sign changes of the real part of (1.58) in the coordinates (N, v) where v=Nu^2.

N=[1000 - 1/4, 1000 + 1/4], v=[0, 300]:
N=1000

N=[1000000 - 1/4, 1000000 + 1/4], v=[0, 300]:
N=1000000

In the second image the fin is mostly green with some patterns in the upper right and lower left. The upper right loops are real, but the patterns in the lower left are aliasing artifacts.

rudolph-git-acc commented Jan 9, 2019

Here are a few more plots for the second Airy function at different u. Zeros indeed vanish when u increases.

plot158u03
plot158u05
plot15807
plot158u08

rudolph-git-acc commented Jan 9, 2019

The distinction between the half-integers and integers indeed doesn't come out from the graphs above; however, when I include the weighting factor of exp(2*Pi*I*N + 2*Pi*I*b(u)/a(N)) for the second Airy function, these patterns emerge:

158noisya2u02

p158noisyu08

p15-git-acc commented Jan 10, 2019

The lighter part of this plot is supposed to be where the inequality (1.80) is true in N=[1000-1/4, 1000+1/4], v=[0, 300]:
1.80

It looks wrong, but I'm not sure if I wrote the code accurately.

teorth commented Jan 10, 2019

Hmm, this looks weird... the LHS of (1.80) should be increasing as N increases (because c_0 becomes more and more negative), so the shaded region should be to the right of the curve rather than to the left. But the curve does seem to fit the sharkfin to first order on the left, it's the quadratic term that may need some correction. Perhaps if you post the code we could diagnose it? One could also try some earlier versions of this constraint (1.80) such as (1.77), (1.78), or (1.79).

p15-git-acc commented Jan 10, 2019

The darker region is supposed to be where the inequality is false, but yeah I know it looks wrong. I mainly wanted to post it in case other people are making plots but not posting them because they are afraid they are wrong.

p15-git-acc commented Jan 10, 2019

It looks like the b^2 ~ -(u^2)/16 approximation is too aggressive in the step from (1.77) to (1.78). Independent confirmation would be welcome!

(1.77):
1.77

(1.78):
1.78

rudolph-git-acc commented Jan 10, 2019

If the main purpose of both conditions in 1.72 is to encapsulate the oscillations within a shark fin, then it appears that evaluating 1.66 with only the (simplified) condition 1.75 is already sufficient:

plot158cond175

rudolph-git-acc commented Jan 10, 2019

Here is the plot for 1.77:

plot158cond177

and getting the same issue as P15 for 1.78:

plot158cond178

which appears to be induced by a wrong sign before each Nu^2/(32pi) term in 1.78. Here is the plot with a minus sign before each of these factors:

p158cond178corr

EDIT: Working this through to 1.82 I get:

Ht182(N,u)=if(frac(2*N+1/2) > -N*u^2/(64*Pi^2)+(9/(256*Pi^2)*N*u^2)^(1/3),frac(2*N+1/2), 0)

which gives:

plot158cond182

however with the corrected signs, the assumption in the step between 1.79 and 1.80 (on the region in 1.73) seems to be 'just true' for capturing the oscillations, but above those the red line now bends further backwards.

teorth commented Jan 10, 2019

Thanks for fixing all the signs!

After all the approximations, the sharkfin region is now simplified to

(9 N u^2 / (256 Pi^2))^{1/3} - N u^2/(64 Pi^2) < {2N + 1/2} < 1 - N u^2/(64 Pi^2)

so in particular the tip should be reached when

N u^2 = 256 Pi^2 / 9 ~ 280.7
{2N+1/2} = 5/9

which looks roughly in line with expectations (noting that our right edge of the sharkfin seems to extend by an extra wavelength or so from the last zero). Also if we use ({2N+1/2}, Nu^2) coordinates (maybe using Nu^2/(64 Pi^2) rather than Nu^2 is very slightly cleaner) then we now have determined the asymptotic shape of the sharkfin.
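A quick numerical restatement of the tip computation (assuming the simplified edge inequalities above): the two edges meet exactly when the cube-root term equals 1.

```python
import math

# Tip of the sharkfin: the left and right edges coincide when
# (9 N u^2 / (256 pi^2))^(1/3) = 1, i.e. N u^2 = 256 pi^2 / 9,
# at which point {2N + 1/2} = 1 - (256 pi^2 / 9) / (64 pi^2) = 1 - 4/9 = 5/9.
Nu2_tip = 256 * math.pi**2 / 9
frac_tip = 1 - Nu2_tip / (64 * math.pi**2)  # value of {2N + 1/2} at the tip

print(Nu2_tip)   # about 280.7
print(frac_tip)  # 5/9, about 0.5556
```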

So I think we now have a pretty satisfactory description of the sharkfin region. As for the zeroes inside the sharkfin, we now know that the second term in (1.58) dominates, so I think more or less the zeroes should correspond to where (the real part of) -(c_0 a^{-1/3} + b^2 a^{-4/3} + 2 pi a^{-1/3}) matches a zero of the Airy function. These zeroes occur at -z where 2/3 z^{3/2} + pi/4 is close to a multiple of pi, so I guess the zeroes inside the sharkfin should be roughly where 2/3 (c_0 a^{-1/3} + b^2 a^{-4/3} + 2 pi a^{-1/3})^{3/2} + pi/4 is a multiple of pi? (I did some quick writeup on this on the wiki, may be sign errors though.)
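The phase condition for the Airy zeros can be sanity-checked against tabulated values; a minimal sketch (the `z_approx` helper and the listed zeros of Ai are our illustrative additions, not from the thread):

```python
import math

# Approximate Airy zeros from the phase condition (2/3) z^(3/2) + pi/4 = n*pi,
# which rearranges to z_n = (3 pi (n - 1/4) / 2)^(2/3).
def z_approx(n):
    return (1.5 * math.pi * (n - 0.25)) ** (2 / 3)

# First few tabulated zeros of Ai(-z) for comparison.
known = [2.33811, 4.08795, 5.52056, 6.78671, 7.94413]
for n, z in enumerate(known, start=1):
    print(n, round(z_approx(n), 4), z)   # agreement improves with n
```

Already at n = 1 the phase condition is within about 1% of the true zero, so it should be plenty accurate for locating the trajectories inside the sharkfin.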

rudolph-git-acc commented Jan 10, 2019

This works great! Here are a few real zero trajectories computed by 1.89:

plot189

Since these curved trajectories are now numbered by n, with n=1 being the outermost curve, does this mean we could calculate exactly how many of these curves will fit within a shark fin at a certain N?

teorth commented Jan 10, 2019

Yes, I think so. As {2N+1/2} increases from 0 to 1, setting u=0 we see that the left-hand side of (1.89) decreases from (4/3) \sqrt{N} + 1/4 to 1/4. This suggests that there should be about (4/3) \sqrt{N} zeroes picked up, which from a visual count looks reasonably accurate (depending on what is going on at the bottom left corner of your more recent image).

On the other hand, the Riemann–von Mangoldt formula tells us that the number of zeroes up to N at t=0 should be something like N^2 log N^2 - N^2, so in each interval [N - 1/4, N + 1/4] there should actually be about 2 N log N zeroes initially. So only a small fraction of the zeroes are part of this sharkfin - most of the zeroes are off doing something else, such as joining one of those complex curves we discovered previously.

It does occur to me though that one could at least hope to use this phenomenon to give a proof that there are infinitely many zeroes of zeta on the critical line; there are other proofs of this type of result (and in fact a positive fraction of zeroes are known to be on the critical line), but I haven't seen an argument of this form before.

EDIT: the plots might look a bit straighter if one uses (N, Nu^2) (or N, Nu^2/64 pi^2) coordinates instead of (N,u) coordinates.
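To put numbers to "a small fraction" at N = 1000, here is an illustrative back-of-envelope comparison using the two counts quoted above:

```python
import math

# Sharkfin zeroes per strip vs. total zeroes per interval [N - 1/4, N + 1/4].
N = 1000
sharkfin = (4 / 3) * math.sqrt(N)   # ~ (4/3) sqrt(N) zeroes in the sharkfin
total = 2 * N * math.log(N)         # ~ 2 N log N zeroes of H_0 in the interval

print(round(sharkfin))       # about 42
print(round(total))          # about 13800
print(sharkfin / total)      # well under 1% of the zeroes
```

So at N = 1000 only a few dozen of the roughly fourteen thousand zeroes in the strip take part in the sharkfin.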

rudolph-git-acc commented Jan 10, 2019

Using (N, Nu^2/(64 pi^2)) coordinates indeed straightens the plots quite a bit:

plot158with189str

rudolph-git-acc commented Jan 10, 2019

To check that we still have a reasonable fit with the actual zeros of Ht, over the last 4 days I have computed a larger set of zeros of eq 1.1 in the domain N=97...102, u = 0...1.4, which roughly corresponds to x=125,000...139,000, t = -18,000...-100.

The results already look pretty good even at this relatively low size of N and we have seen improving asymptotics for higher N:

actualeq118

actualeq140

actualeq158

p15-git-acc commented Jan 10, 2019

Here's what I get for (1.58) and (1.83) for N=[1000-1/4, 1000+1/4], w=N*u^2/(64*pi^2)=[0, 1/2]:
1.58 + 1.83

rudolph-git-acc commented Jan 10, 2019

So only a small fraction of the zeroes are being part of this sharkfin - most of the zeroes are off doing something else, such as joining one of those complex curves we discovered previously.

In the spirit of 'all information about H_t originates from H_0', these straight vertical lines at the half-integers actually do appear to send a tiny bit of information back to H_0, since two of these lines always seem to enclose an even number of zeros at t=0. I.e., they can't just 'split up a pair of zeros' on their journey back to t=0 and have to pick their pass-through location with a certain care. Or, reasoning from t=0: does a zero at t=0 already know it will stay single its entire life?

p15-git-acc commented Jan 11, 2019

Here's a composite visualisation of (1.58), (1.83), and (1.89) for N=[1000-1/4, 1000+1/4], w=N*u^2/(64*pi^2)=[0, 1/2]:
oscillation roots composite image

p15-git-acc commented Jan 11, 2019

Is the asymptotic behaviour clear enough now that this GitHub issue can be closed, or are there still theoretical approximations, numerical calculations, or visualisations to make before we reach that point?

teorth commented Jan 11, 2019

As far as I can see I think we've reached a satisfactory state of affairs with the "sharkfin" zeroes. At some point it might be worth writing this up as a relatively short paper for an experimental mathematics journal showing the numerical phenomena and the heuristic justifications of them, but we don't have to decide on that now. I have a graduate student who is interested in making the heuristic calculations rigorous (for instance, to make the complex zero phenomena rigorous enough that one can give an alternate proof of Newman's conjecture), but this would be separate from the Polymath project.

Rudolph: I don't know of any good way to identify at time zero which of the (presumably entirely real) zeroes of H_0 will survive to be one of the zeroes that stay real for all negative time. The fact that these zeroes exist does give some very weak information on H_0, but at this point I don't think we can say very much (unless there is an interesting new numerical phenomenon about them that demands some explanation). I did realise, by the way, that the original work of Polya had some results in this direction; in particular I think Polya was able to show that zeta had infinitely many zeroes on the critical line by working with something like H_t (will try to look into this more).

rudolph-git-acc commented Jan 11, 2019

Agree, this seems indeed a good point to return from this 'surprisingly fishy' ;) subroutine and go back to the main thread. The exploration of the negative t domain has sparked off a few more thoughts, but I will post them on the blog later.

Probably time to shift focus back to finishing the write-up. KM has continued progressing the Lemma-bound piece and the conditional computation results, so we should have good momentum.

Will formally close this issue tomorrow.
