
Some enhancements #32

Merged
merged 8 commits, Dec 22, 2023
Conversation

@hammerfunctor (Contributor) commented Dec 4, 2023

1. I grouped some exports and changed some functions to one-liners.
2. The function principal_log_branch is simplified using the standard library rem.
3. I replaced the functions validateq and validatetau with two similar macros, since their functionality is determined at compile time.

UPDATE 12/18:
4. I introduced IrrationalConstants.jl and rewrote many places involving pi, e.g. 1/pi is now invπ and 2pi is now twoπ, with no loss of digits (see the sketch after this list).
5. I added a benchmark system to the tests. To use it, run (on Linux), e.g.,

env BENCHMARK=1 BASELINE_COMMIT=bd39fe8 julia -e 'import Pkg; Pkg.activate("."); Pkg.test()'

The output is shown in the Results section below.
6. The input of jtheta_ab is changed from q to tau.
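A minimal sketch (not code from this PR) of why the IrrationalConstants.jl constants matter here: the literal 2pi is evaluated as a Float64 before anything else happens, while twoπ stays an AbstractIrrational and is only converted at the precision of the surrounding computation.

```julia
using IrrationalConstants  # provides invπ, twoπ, ...

x = BigFloat(10)

BigFloat(2pi)   # 2pi is already a Float64; the extra BigFloat digits are meaningless
BigFloat(twoπ)  # full-precision 2π at the current BigFloat precision

# The same applies when twoπ is used as a modulus: the reduction is
# carried out entirely at the precision of x.
rem(x, 2pi, RoundNearest)   # reduced against a 53-bit approximation of 2π
rem(x, twoπ, RoundNearest)  # reduced against 2π at BigFloat precision
```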

Results

A ratio greater than 1.0 denotes a possible regression (marked with ❌), while a ratio less
than 1.0 denotes a possible improvement (marked with ✅). All results are shown below.

| ID | time ratio | memory ratio |
|----|------------|--------------|
| ["Elliptic functions", "Aid functions of jacobi theta", "argtheta3"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Aid functions of jacobi theta", "calctheta3"] | 0.99 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Aid functions of jacobi theta", "dologtheta3"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Carlson functions", "RC"] | 1.00 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Carlson functions", "RD"] | 0.99 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Carlson functions", "RF"] | 0.99 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Carlson functions", "RG"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Carlson functions", "RJ"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Eisenstein series", "E2 on 0.005+0.005im"] | 1.00 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Jacobi thetas", "jtheta1"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Jacobi thetas", "jtheta1dash"] | 0.95 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Jacobi thetas", "jtheta2"] | 0.96 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Jacobi thetas", "jtheta3"] | 0.99 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Jacobi thetas", "jtheta4"] | 0.97 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Klein J", "kleinj(2im)"] | 0.97 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Klein J", "kleinjinv(0.1+0.2im)"] | 0.99 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Lambda", "on 1im * sqrt(1/x)"] | 0.99 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Lambda", "on 1im * sqrt(x)"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Log-Jacobi thetas", "ljtheta1"] | 1.03 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Log-Jacobi thetas", "ljtheta2"] | 1.02 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Log-Jacobi thetas", "ljtheta3"] | 1.03 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Log-Jacobi thetas", "ljtheta4"] | 1.03 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Neville theta functions", "thetaC on big"] | 1.00 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Neville theta functions", "thetaC"] | 0.99 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Neville theta functions", "thetaD"] | 0.99 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Neville theta functions", "thetaN"] | 1.00 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Neville theta functions", "thetaS"] | 1.00 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Others", "agm"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Others", "elliptic Invariants"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Others", "jellip"] | 0.99 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Others", "wp"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Others", "wsigma"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Others", "wzeta 1"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Others", "wzeta 2"] | 0.99 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Various elliptic's", "ellipticE on 0.5"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Various elliptic's", "ellipticE on 2-3im"] | 1.01 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Various elliptic's", "ellipticF(big\"0.15\", big\"0.81\")"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Various elliptic's", "ellipticK on 0.5"] | 1.00 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Various elliptic's", "ellipticK on big\"0.5\""] | 0.97 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Various elliptic's", "ellipticPI(1+im, 1, 2-im)"] | 0.98 (5%) | 1.00 (1%) |
| ["Elliptic functions", "Various elliptic's", "ellipticZ(-5+3im,-4-9im)"] | 1.00 (5%) | 1.00 (1%) |
| ["Elliptic functions", "etaDedekind", "on 1im / 2"] | 0.95 (5%) ✅ | 1.00 (1%) |
| ["Elliptic functions", "etaDedekind", "on 2im"] | 0.86 (5%) ✅ | 1.00 (1%) |

@simonp0420 (Contributor)

I like most of the changes in your first commit "change styles of some functions...". But when you replaced principal_log_branch with a one-liner, you changed twopi = 2 * one(y) * π to 2pi. The latter will be evaluated as a Float64 and will lead to subtle errors in some cases when the function argument z is an extended precision number, such as a BigFloat. Instead of 2pi I suggest 2 * one(imag(z)) * pi.
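A minimal sketch of the suggested precision-preserving form, assuming principal_log_branch reduces imag(z) modulo 2π with rem as the discussion below suggests (the actual function body is not shown in this thread):

```julia
# Hypothetical sketch, not the package's actual source: keep the modulus
# at the precision of imag(z) instead of collapsing it to Float64.
principal_log_branch(z::Complex) =
    complex(real(z), rem(imag(z), 2 * one(imag(z)) * pi, RoundNearest))
```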

I don't like the change of the validation functions to macros. A macro is intended to transform source code prior to compilation. There is no such transformation being accomplished here. The validity checks still need to be made at run time, and using the macro does not even reduce the typing required by the user at all, so I don't see the point here. It simply makes the code a little harder to understand without any benefit, IMO.

@hammerfunctor (Contributor, Author)

When z is a Complex{BigFloat}, rem(imag(z), 2pi) should return a BigFloat, right? 2 * one(imag(z)) * pi is nothing but a manual conversion of 2pi, and that conversion happens automatically in rem: https://github.com/JuliaLang/julia/blob/8e5136fa2979885081cd502d2210633dff1d2a1a/base/math.jl#L1059

julia> z = BigFloat(2) + im * BigFloat(10)
2.0 + 10.0im

julia> typeof(z)
Complex{BigFloat}

julia> EllipticFunctions.principal_log_branch(z)
2.0 - 2.5663706143591724639918538741767406463623046875im

julia> ans |> typeof
Complex{BigFloat}

On the one hand, a macro helps the compiler emit less code by transforming the function into

function something(...)
  abs(q) < 1 || error("...")
  ...
end

On the other hand, it throws the error right from where the error happens (the function something above). It may be overkill here; I consider it part of Julia fashion. See also https://github.com/JuliaLang/julia/blob/8e5136fa2979885081cd502d2210633dff1d2a1a/base/error.jl#L218. Anyway.
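For concreteness, here is a hypothetical sketch of such a validation macro (the PR's actual definitions are not shown in this thread); the point is that the check expands inline, so the error is raised directly from the calling function:

```julia
# Hypothetical sketch, not the PR's actual code.
macro validateq(q)
    return :(abs($(esc(q))) < 1 || error("the nome `q` must satisfy abs(q) < 1"))
end

# Usage inside some function taking the nome `q` (placeholder name):
function some_theta_function(z, q)
    @validateq q
    # ... the actual computation would follow here
end
```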

@simonp0420 (Contributor)

Regarding the use of 2pi in the rem function: the problem is that 2pi is immediately evaluated as a Float64, before being passed to rem, so the remainder is computed with respect to a lower-precision approximation of $2\pi$. Here is an example where it matters:

julia> z = 1 + (BigFloat(pi) - BigFloat("1e-18"))*im
1.0 + 3.141592653589793237462643383279502884197169399375105820974944592307816406286212im

julia> rem(imag(z), 2pi, RoundNearest)
-3.141592653589792994533283553808867438983982944374894179025055407692183593713788

julia> rem(imag(z), 2*BigFloat(pi), RoundNearest)
3.141592653589793237462643383279502884197169399375105820974944592307816406286212

@simonp0420 (Contributor)

Regarding the use of macros for validation. I understand that the advantage is that the test is performed inline in the function where the argument occurs, so that a traceback will show this function at the top of the stack. Also, the slight overhead of a function call is avoided. The latter advantage is probably moot because LLVM will inline the short validation function anyway. I understand your point, but my philosophy is to avoid macros when the functionality can be achieved easily by a function. YMMV, of course. Again, it's @stla's repo, he can make the call here. I'm OK either way.

@stla (Owner) commented Dec 4, 2023

I'm waiting for an agreement between you two. You know Julia better than I do.

@hammerfunctor (Contributor, Author)

Sorry, I overlooked the pi issue. pi is actually an Irrational and gets promoted to Float64 by default. I'll change 2pi to 2convert(typeof(imag(z)), pi) later.
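For example (a small illustration, not code from the PR), both forms give the full-precision 2π when imag(z) is a BigFloat:

```julia
y = BigFloat(10)            # stands in for imag(z)
2 * convert(typeof(y), pi)  # the form proposed here
2 * one(y) * pi             # the form suggested earlier; same BigFloat value
```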

@stla You decide whether to keep the functions validateq and validatetau or replace them with the macros @validateq and @validatetau. Changes to that will be made together with the pi fix.

@simonp0420 (Contributor)

I'm fine with all the changes, after fixing the pi issue. Thanks for your useful contributions, @hammerfunctor!

@stla (Owner) commented Dec 12, 2023

I'm OK with the macros. This way I will learn something.

@stla (Owner) commented Dec 18, 2023

@hammerfunctor Have you made the change regarding pi?

@hammerfunctor (Contributor, Author)

@stla @simonp0420 It's done now. Please check out the changes.

@simonp0420 (Contributor)

This PR is very impressive. Adding a benchmark suite is a great contribution, as is the use of IrrationalConstants, which I was unaware of. I won't be able to provide a detailed review, if desired, until after the holidays.

@stla (Owner) commented Dec 22, 2023

Looks good. I'm going to merge. Thanks @hammerfunctor

@stla merged commit e4b24c2 into stla:master on Dec 22, 2023