Finalising write-up of the Dirichlet bound numerics and the RH-conditional DBN-results #131
@teorth @km-git-acc In this section we collect some numerical results verifying the second two hypotheses of Theorem \ref{ubc-0} for larger values of. Having the freedom to assume the RH has been verified up to a certain height allows a Barrier location. Include here the derivation of the moll-2 Triangle bound from: With this improved Triangle bound, more feasible Barrier locations and associated. Proposed 'caption':
A Barrier location can be chosen from the curve and then be further optimised using an offset, as explained in Section \ref{newup-sec}. The 'only' computational task remaining is to verify that no complex zeros have passed through the Barrier. For this task, exactly the same math and software scripts could be applied as were used for the proof of. Our conclusions may be summarised as follows: (insert here the table with the results, in LaTeX). The following plots illustrate that for increasing. Proposed 'caption':
All Barrier runs generated a winding number of zero for each rectangle and the scripts completed successfully without any errors. For all Barrier locations, the mesh points were calculated at 20 digits of accuracy, except for the highest two where 10 digits were used (to be able to compute them within a reasonable time). Checks were made before each formal run to ensure the target accuracy would be achieved. The computations for
@teorth @km-git-acc I believe the line segment N=69098 ... 1.5 mln was originally split into two parts for computational reasons. However, when we started using the ARB software and had developed the faster "sawtooth" mechanism, the associated speed increase made this split range obsolete and we managed to easily process the entire run with a mollifier of 4 primes. Here is the link to the output (we even went up to N=1.7 mln): https://github.com/km-git-acc/dbn_upper_bound/blob/master/output/eulerbounds/lemmasawtooth_bounds_t_0.20_y_0.2_N_69098_1.7mil_moll_2%2C3%2C5%2C7.txt Would it therefore be ok to combine (ii) and (iii) in the write-up, or should we leave it as is?
Rudolph, this all looks great, and thanks for taking the initiative on this. Certainly if the numerics no longer proceed using the subdivision (ii), (iii), (iv) that was in an obsolete version of the computation then we can rearrange it. I have been busy with several other things in the last few weeks, but should have some time later this week to look at your revisions and make more suggestions.
Great, no hurry. Since there is quite a bit of additional LaTeX involved, I decided to include the updates on II) and III) in a temporary debruijn.pdf to make them easier to review. https://drive.google.com/file/d/1JkYv3-lCUaMovPedRvZ5uSCJjqBp7TGx/view?usp=sharing The proposed write-up on II) starts at the bottom of p49 and the section on III) starts at the top of p62.
@rudolph-git-acc |
Hey KM, thanks for the 'sign of life' and very happy to hear from you :) Please put your prime focus on getting your work/life balance back to normal again! I believe I can manage the write-up work, unless some very specific technical questions come up.
Hi, sorry for being absent for so long - there was a serious personal matter that consumed several months of my time. I have now returned to the task of finishing off this writeup. There is a somewhat serious issue in the verification of what is now (ii) - the region where N is between 69098 and 1.5 x 10^6 - arising from the fact that the Lemma bound is in fact not monotone in y as previously thought (this is a subtle issue coming from the fact that the Euler-mollified coefficients \tilde alpha_n vary in a strange way with respect to y). Morally it should still be that the y=0.2 case dominates, however. I will try to fix this issue (it will probably need an appeal to the argument principle which, morally speaking, reduces matters to verifying the two sides y=0.2, y=1 of the region, and the y=1 region should be very easy) but it is going to be a bit complicated. In the meantime can you supply me with a link to the ARB code used to verify (ii)? I may eventually need to modify it for the fix I had in mind. Will keep you posted.
Sorry to hear about the personal matter, but great to have you back :) For updating the code it is probably easiest to update the pari-gp script first: https://github.com/km-git-acc/dbn_upper_bound/blob/master/dbn_upper_bound/pari/abbeff_largex_bounds.txt. This script was replicated in ARB here (which I am happy to update once the new logic is ready): https://github.com/km-git-acc/dbn_upper_bound/blob/master/dbn_upper_bound/arb/LemmaSawtoothFinalv1.c As an ultimate fallback for the issue in (ii), we could try to verify the RH a bit further than Platt's study. Around
OK, I think I have mostly fixed the issue. We have to show that H_t(x+iy) does not vanish in a long rectangle to the right of the barrier. We modify this to E_{t,7}(x+iy) H_t(x+iy) / B_t(x+iy) where E_{t,7} is the Euler mollifier for primes up to 7. By the argument principle it will suffice to show that this expression stays away not only from zero, but from the negative real axis. It took a little bit of work but the lemma bound that gives a lower bound on the magnitude of this quantity (or more precisely on the Dirichlet series-like approximation to it) also gives a lower bound on the distance to the negative real axis. One does also have to treat the other three sides of the rectangle than just the most important lower side (in which y=0.2); I can handle two of them without difficulty but one of them (the edge on the barrier in which x ~ 6 x 10^10 and y ranges between 0.2 and 1) may need some variant of the calculations already done in the barrier. I have started to modify the writeup to reflect this new strategy. Right now I am trying to modify the pari-gp code to do the new verification. I think I found a way to proceed that is simpler than the sawtooth method. Namely, I can get a uniform lower bound for the distance of E_{t,7}(x+iy) H_t(x+iy) / B_t(x+iy) that works for all N in a given range [N_-, N_+], by using N_- to lower bound various quantities that show up in the lemma bound but N_+ (or more precisely DN_+) to determine the number of terms in the summation. My hope is then to cover the range [69098, 1.5 x 10^6] by a small number of intervals [N_-, N_+] in which this bound gives what one wants; I should be able to do this by hand once I configure the PARI code a little bit to reflect some slight changes in the lemma bound that I made to simplify the expressions somewhat. Will keep you posted.
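The argument-principle step described above (if the expression stays off the negative real axis on the boundary of the rectangle, the winding number around 0 is zero, so there are no zeros inside) can be illustrated with a generic winding-number check. This is a toy sketch only: the function `f` below is a stand-in, not the actual E_{t,7} H_t / B_t, and the mesh size is illustrative.

```python
import cmath

def winding_number(f, corners, mesh=200):
    """Approximate the winding number of f around 0 along the closed
    polygonal contour through the given corner points. Assumes f is
    nonvanishing on the contour and the mesh is fine enough that the
    argument changes by less than pi between consecutive points."""
    pts = []
    for a, b in zip(corners, corners[1:] + corners[:1]):
        for k in range(mesh):
            pts.append(a + (b - a) * k / mesh)
    pts.append(pts[0])  # close the loop
    total = 0.0
    for p, q in zip(pts, pts[1:]):
        # phase increment of f along one mesh step, taken in (-pi, pi]
        total += cmath.phase(f(q) / f(p))
    return round(total / (2 * cmath.pi))

# Toy example on the unit square: one zero inside, then none inside
corners = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
print(winding_number(lambda z: z - (0.5 + 0.5j), corners))  # 1
print(winding_number(lambda z: z - (2 + 2j), corners))      # 0
```

The point of the lemma bound in the text is precisely to certify, without tracing the contour numerically, that the phase can never reach the negative real axis, so the winding number is forced to be zero.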
I have a question. You indicated in your first post in this thread that you used the primes 2,3,5,7 to get a lower bound of 0.067. But it seems to me (after running abbeff_largex_ep_bound(69098,0.2,0.2,"L",5) ) that just using 2,3,5 one can get a lower bound of 0.042 and this might be enough. Do you recall the reason why you went to 7 instead of 5? On my little laptop I have a certain amount of difficulty running the code at 7 but 5 is still manageable.
Good question. Just ran the Lemmabound at t=y=0.2 at N=69098 in ARB and indeed get:
The reason for opting for 4 primes was that we wanted the bound to be large enough to:
With your new approach, which potentially replaces the sawtooth mechanism, and also knowing that 0.04 is multiple orders of magnitude above the error terms, I believe we should be ok using 3 primes to cover this range. P.S. I do recall 4 primes made pari/gp struggle a bit, but it was not an issue for ARB.
OK, I am close to finalising the verification to the right of the barrier (Section 8.5 in the newest version of the writeup). One has to show that E_{t,5}(x+iy) H_t(x+iy) / B_t(x+iy) avoids the negative real axis for a certain long rectangle R = { X_0 - 0.5 < x; 0.2 <= y <= 1; N <= 1.5 x 10^6 } with t = 0.2. I can do this for three of the sides of the rectangle; the one missing piece is the left side x = X_0 - 0.5, 0.2 <= y <= 1 with t = 0.2. This however should follow from the analogue of Figure 16, with t=0.2 instead of t=0. Would it be possible to draw this figure? One just needs f_t(x+iy) to have magnitude well away from zero (e.g. larger than 0.1 will be fine) and to stay well away from the negative axis (e.g. having an argument of at most 2.58 radians in magnitude would be fine). If the t=0.2 picture is anything like the t=0 picture of Figure 16 then this should be true with lots of room to spare. p.s. I adapted the PARI file (the new file is called new_abeff_largex_bounds.txt in the pari directory) to verify that E_{t,5}(x+iy) H_t(x+iy) / B_t(x+iy) avoids the negative real axis in a range of the form t = 0.2, y = 0.2, N1 <= N <= N2, as long as the output of abbeff_largex_ep_bound2(mtype,N1,N2,y,t) exceeds a threshold (0.00205, as it turns out, for this choice of parameters). For instance abbeff_largex_ep_bound2(5,69098,80000,0.2,0.2) = 0.026... so this clears the region 69098 <= N <= 80000. I was able to cover the region 69098 <= N <= 1.5 x 10^6 by the five intervals [69098, 8 x 10^4], [8 x 10^4, 1 x 10^5], [1 x 10^5, 2 x 10^5], [2 x 10^5, 1 x 10^6], [1 x 10^6, 1.5 x 10^6]; it may be possible to do it with fewer (there is a lot of room to spare) but I wasn't being very efficient. The output here is pretty close to the output from the old abbeff_largex_ep_bound data, so I would imagine that all the conditional results you've obtained would also go through.
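The interval-covering strategy above (clear [N1, N2] whenever the uniform lower bound exceeds the threshold) can be sketched generically. Everything here is a stand-in: `toy_bound` is a made-up monotone-ish bound, not abbeff_largex_ep_bound2, so the intervals it produces are illustrative only, not the five actually used.

```python
def cover_range(lower_bound, n_min, n_max, threshold, factor=2.0):
    """Greedily cover [n_min, n_max] by contiguous intervals [a, b]
    on which lower_bound(a, b) stays above threshold. The right
    endpoint is grown geometrically and shrunk back when the bound
    dips below the threshold."""
    intervals = []
    a = n_min
    while a < n_max:
        b = min(int(a * factor), n_max)
        while b > a + 1 and lower_bound(a, b) <= threshold:
            b = a + (b - a) // 2  # shrink until the bound clears
        if lower_bound(a, b) <= threshold:
            raise ValueError(f"bound fails near N = {a}")
        intervals.append((a, b))
        a = b
    return intervals

# Hypothetical bound: improves slowly with a, degrades with width
toy_bound = lambda a, b: 0.05 * (a / 69098) ** 0.1 - 1e-8 * (b - a)
print(cover_range(toy_bound, 69098, 1_500_000, 0.00205))
```

The same shape of argument works because the real bound also improves as N grows while only the interval width works against it, so a handful of geometrically growing intervals suffices.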
Unfortunately one also now has to do some verifications on the other three sides of the rectangle but they should be doable with a lot of room to spare (and maybe we don't need to do them if we just want to give a tentative indication of what conditional results ought to be possible with our approach).
Great. Here is the plot for t=0.2 analogous to Figure 16. The number of mesh points required for a rectangle at this height of t is only 52. Data used: Polygonplotmeshpointst02.txt The new approach looks like a very nice simplification. As a next step I will try to replicate it in ARB. P.S.
Thanks for this! Interesting that some of the edges have very wide spacing but I guess this comes from the derivative bound getting considerably better when t and y are large. I've added this to the writeup (and the preceding changes have also been implemented). Actually we now look very close to being able to finalise the writeup. I have just one question about the final table. The final column is referring to the "Moll2 bound", does this mean the Euler mollifier just using the prime 2 and nothing else, combined with what we called the "Lemma bound" (and has now become Lemma 8.5 in the latest writeup)? |
The "Moll2 bound" refers to the (fast) Triangle bound and not to the Lemma bound. Last year you posted the derivation of a prime 2 Triangle bound on the Wiki page, which enabled us to considerably lower the heights of the range end points ( P.S.
The write-up starts to really look good :) Some more info on the RH-conditional computations: Just to be sure: is it actually correct to assume that an analytical proof of non-vanishing will always exist beyond the point at which a prime 2 (i.e. mollified) Triangle bound has become sufficiently positive?
Thanks for this! I have now merged in the figures and the text accordingly, and I think the writeup is now nearing a final form. It is technically true that in order to properly clear out the region to the right of the barrier, it's not quite enough to do the calculation at N_0; one should start clearing out intervals as in what we now do in Section 8.5, then do an analytic calculation past some extremely large cutoff N_1, but given how the bounds always get easier as N increases, and the conservative safety margin we have in the lower bound for f_t, this should not be a problem and I am not really concerned about verifying this carefully (though it would not be too difficult to do if, say, a referee insists on it). |
Great! I went through the write-up one more time and spotted these typos:
Thanks! I've now implemented these corrections. |
In the spirit of wrapping up the Polymath15 project, I remembered there was an outstanding action in the "Sharkfin" document ( https://github.com/km-git-acc/dbn_upper_bound/tree/master/Writeup/Sharkfin) to describe the numerical experiments we had done in the negative t domain. I have now proposed a short description in the document (on p2/3). Grateful if you could have a quick look on whether this meets your needs. P.S. |
Thanks for this! Looks great, and I have added a ref to it from the main writeup. My student was able to justify the curves of complex zeroes for large negative values of t (precisely how negative depends on which curve one wants to justify), but something funny happens for small negative values of t in which the zeroes still peel off but are distributed in a much less ordered way than along a discrete family of curves. There seems to be some interesting mathematics going on here that is not yet fully explained. |
Strange to think it's been 2 years... time flies by too quickly. Stating briefly, I finally started my own business in 2019. Turns out to be much tougher than one hopes for. But happy to say it can ride tricycles now, and hopefully soon bicycles too. I have resisted 'math temptations' during this time, but since the last month or two, have found myself yielding to them again.
@teorth @km-git-acc
I am opening a new issue to help us coordinate and drive the efforts required to finalise the write-up. The OP of the 11th thread nicely summarises the topics still to be addressed:
I) Is my understanding correct that the write-up for the Barrier numerics is pretty much complete now or is there more to be done?
II) The write-up for the numerics that need to show that the 'right side of the Barrier' is zero-free is still incomplete. KM started work on this over Christmas, however I am not sure how far he has progressed (I am slightly concerned since he no longer seems active on the blog and doesn't respond to e-mail; hopefully he reads this message and chips in again :) ). We do have an idea on how to complete this piece (also based on the polymath15 10th thread OP):
(insert math descriptions from OP 10th thread of the sawtooth process here)
Plot of Dirichlet and error bounds:
Proposed text as 'caption' for the figure:
III) For the conditional runs up to DBN $< 0.1$, we propose something along the lines of:
So we have to find those
Does this make sense? Any other data/information to provide?