
Refactor functions #99

Merged: 19 commits from refactor_functions into master on Mar 25, 2017

Conversation

@lbenet (Member) commented Mar 19, 2017

This PR aims to build a more consistent API, in the sense that functions of Taylor1 and functions of TaylorN should be treated as similarly as possible. Now, most of the functions have more or less the same structure (except maybe for the first coefficient) and call a related function which mutates the coefficients of the result. The functions that mutate the coefficients (which implement the recurrence relations on the coefficients) have nicer names and a more homogeneous structure across all functions; note that not all functions have been implemented in this way yet. By doing this, we have extended the inverse trigonometric functions #49 and the hyperbolic functions #64 to TaylorN with (essentially) no effort.

I have also deleted some documentation of functions (e.g., exp(a) for a::Taylor1 or a::TaylorN) which did not really provide anything new beyond the extension to these types.

There are still some tests to be added, and the example notebooks should also be updated.

Luis Benet added 10 commits March 16, 2017 19:06
Among others, mulHomogCoeff -> mul! and divHomogCoeff -> div!, and
internal changes.
fixorder is not needed anymore, since getindex was fixed and returns a zero of the appropriate type whenever the requested coefficient is within the maximum order but not explicitly defined.
…fficients

The following functions for the homogeneous coefficients
have been renamed: powerHomogCoef -> pow!, squareHomogCoef -> sqr!,
sqrtHomogCoef -> sqrt!
This allows a smoother approach for Taylor1 and TaylorN functions.
@coveralls commented Mar 19, 2017

Coverage decreased (-8.9%) to 89.465% when pulling f24f911 on refactor_functions into 487cb8b on master.

@dpsanders (Contributor):

Overall this looks like a very nice improvement, thanks!

I think it can be made even better following some of my comments, by trying to unify the functions for Taylor1 and TaylorN even further. I gave some concrete suggestions that I think will help achieve this to a large extent.

Basically, wherever there is nearly-duplicated code, it often just takes a couple of extra function definitions to handle the small differences and reduce it to a single function definition.

@lbenet (Member Author) commented Mar 20, 2017

Did you submit your review observations? I haven't received anything yet and I am very much looking forward to it. In any case, thanks a lot for the feedback and the review!

I agree with you that there is indeed a pattern in the mutating functions, and that this should be exploited.

@eval function *{T<:$W}(a::$T{T}, b::$T{T})
    corder = max(a.order, b.order)
    if $T == Taylor1
        c = Taylor1(zero(a[1]), corder)
Contributor:

There must be a way of unifying this line and the next, it seems to me.
Is the definition of zero not right?

Contributor:

Just add a method zero(a, order) to make an object of that order filled with zeros, then this should work here and in the next line.

Member Author:

Good point! I'll implement it. This should definitely get rid of some of these kinds of ifs.
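
For the record, a minimal sketch of what such a method could look like (hypothetical; the actual definition adopted in the PR may differ), reusing the constructors that already appear in this diff:

# Hypothetical sketch of zero(a, order): an all-zero series of the same kind as `a`,
# with the requested order (inside the package this extends Base.zero).
Base.zero(a::Taylor1, order::Int) = Taylor1(zero(a[1]), order)
Base.zero(a::TaylorN, order::Int) = TaylorN(zero(a[1][1]), order)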

@inbounds for k = 0:a.order
coeffs[k+1] = mulHomogCoef(k, a.coeffs, b.coeffs)
@inbounds for ord = 0:corder
mul!(c, a, b, ord) # updates c[ord+1]
Contributor:

Is there not a nicer way of writing this?

Member Author:

Let me try to explain the "design principle". Essentially, we need the same mutating functions for the coefficients of either Taylor1 or TaylorN, which must include "what to update" (c), the information used to update it (a and b here, because this is a binary function), and the order of the coefficient to be updated. So, as far as I can see, this is more or less the minimum required. For mul! I tried to stay as close as possible to what we already had for HomogeneousPolynomial.

Do you have any suggestions on this?
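
For reference, a minimal sketch of what this in-place recursion can look like for Taylor1 (the names and 1-based coefficient indexing follow the snippets above; the actual method in the PR may differ in detail):

# Sketch: update the k-th order coefficient of c = a*b in place,
# via the Cauchy product c_k = sum_{i=0}^{k} a_i * b_{k-i}.
function mul!(c::Taylor1, a::Taylor1, b::Taylor1, k::Int)
    coef = zero(c[1])
    @inbounds for i = 0:k
        coef += a[i+1] * b[k-i+1]
    end
    @inbounds c[k+1] = coef
    return nothing
end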

order = a.order + b.order

order > get_order() && return HomogeneousPolynomial([zero(a[1])], get_order())
order > get_order() &&
return HomogeneousPolynomial([zero(a[1])], get_order())
Contributor:

This line and the next can be combined with something like min(order, get_order()).
Isn't there a better way of writing this zero?

Member Author:

I'm not sure if I understand your point here.

The situation is the following: the product of two HomogeneousPolynomials is another one whose order is the sum of the orders of the polynomials involved. Since we have a maximum order for HomogeneousPolynomial (and TaylorN), whenever the resulting polynomial would be of larger degree we return a zero of the maximum degree. That is what is implemented here. We could certainly return a zero with another order, but I'm not sure that helps at all...

Contributor:

I was thinking something like zero(HomogeneousPolynomial{T}, get_order(a))
for whatever is a suitable T. Maybe no better.


@eval /{T<:NumberNotSeries}(a::$T{T}, b::T) = a * inv(b)

@eval function /{T<:NumberNotSeries,S<:NumberNotSeries}(a::$T{T}, b::S)
Contributor:

Any particular reason for splitting up the @eval begin...end into separate @eval here?

Member Author:

Not really...

end

@eval /{T<:NumberNotSeries}(a::$T, b::T) = a * inv(b)
Contributor:

Why is there this one and the one on line 369?

Member Author:

Good catch! Indeed, line 369 can be deleted.

src/functions.jl Outdated
c = $T( aux, order )
else
order = get_order()
@inbounds aux = exp( a[1][1] )
Contributor:

What is aux for here?

Contributor:

Same comments presumably apply to all the following functions.

Member Author:

You are right, this appears all over the place. aux is some sort of initial value which allows initializing the returned Taylor1 or TaylorN, and also defines the initial value for the recurrence relations.

src/functions.jl Outdated
@inbounds c = $T( cos(a[1]), order )
else
order = get_order()
@inbounds s = $T( sin(a[1][1]), order )
Contributor:

How about defining a function constant_term (or similar name) that does this correctly for Taylor1 and TaylorN.

Contributor:

i.e.

constant_term(a::Taylor1) = a[1]
constant_term(a::TaylorN) = a[1][1]

Contributor:

I think this is the key comment that will eliminate code duplication here!

Member Author:

Very nice! I'll do it!

src/functions.jl Outdated
# Recursive functions (homogeneous coefficients)
for T in (:Taylor1, :TaylorN)
@eval begin
function exp!(c::$T, a::$T, k::Int)
Contributor:

How does exp! differ from exp?
The usual trick is to define exp! correctly, and then exp is just exp(x) = (y = copy(x); exp!(y)).

Member Author:

exp(a) returns a polynomial of type typeof(a).

exp!(c, a, k) mutates c[k+1] with the k-th order coefficient of c = exp(a). The recursion relations are actually coded in exp!.
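
For concreteness, a sketch of the kind of recurrence coded in exp! for Taylor1 (an illustration only; the actual implementation may differ): differentiating c = exp(a) gives c' = a'·c, i.e. $k\,c_k = \sum_{j=0}^{k-1} (k-j)\,a_{k-j}\,c_j$.

# Sketch of the recurrence relation for c = exp(a), updating c[k+1] in place.
function exp!(c::Taylor1, a::Taylor1, k::Int)
    if k == 0
        @inbounds c[1] = exp(a[1])
        return nothing
    end
    @inbounds c[k+1] = zero(c[1])
    @inbounds for j = 0:k-1
        c[k+1] += (k-j) * a[k-j+1] * c[j+1]
    end
    @inbounds c[k+1] = c[k+1] / k
    return nothing
end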

Contributor:

Maybe it would be useful to use https://github.com/alsam/OffsetArrays.jl
to do away with all these annoying +1 everywhere.

Member Author:

We should indeed consider it, but let's leave it (again) for another PR.
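
Purely as an aside (not part of this PR), a toy illustration of the idea:

using OffsetArrays
coeffs = OffsetArray([1.0, 0.5, 0.25], 0:2)   # coefficient of order k lives at coeffs[k]
coeffs[0]   # constant term, no "+1" shift needed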

src/functions.jl Outdated
doc"""
tanHomogCoef(kcoef, ac, coeffst2)
reverse(f)
Contributor:

inverse?

Contributor:

Or can we overload inv?

Contributor:

What is the correct term for this function?

Member Author:

I guess inverse or inversion would be ok; it implements Lagrange series inversion.

Cc: @MikaelSlevinsky
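
For reference, the classical Lagrange inversion (reversion) formula: given $f(t) = a_1 t + a_2 t^2 + \cdots$ with $a_1 \neq 0$, the series $g$ with $f(g(t)) = t$ is

$$g(t) = \sum_{k \ge 1} \frac{t^k}{k!} \left.\frac{d^{\,k-1}}{dw^{\,k-1}} \left(\frac{w}{f(w)}\right)^{\!k} \right|_{w=0}.$$

(The implementation presumably uses a recurrence on the coefficients rather than this closed form.)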

Contributor:

That's why I was wondering if overloading inv makes sense here?

Member Author:

I prefer not to use inv, since it usually refers to the multiplicative inverse, i.e. 1/a.

Contributor:

I'm ok with whatever term you choose. For the case that is implemented (which is numerically stable), I believe inversion and reversion are equivalent.

Member Author:

Ok. So, I'll change it into inverse.

end
## real, imag, conj and ctranspose ##
for f in (:real, :imag, :conj)
fdot = Symbol(f, ".")
Contributor:

You don't seem to use fdot.

Member Author:

Good catch! Indeed, that was somewhat problematic, and in the end $f.(...) worked fine. I'll remove it.

@dpsanders (Contributor):

Sorry, done. Not used to the new interface!

@lbenet (Member Author) commented Mar 20, 2017

Thanks! I'll read it now!

@lbenet (Member Author) commented Mar 20, 2017

Thank you very much for the good suggestions and the whole review.

I just pushed a few commits addressing your observations. There are some I haven't implemented yet (e.g., get_order being replaced by order), but I got rid of lots of code duplication and, in particular, of the nasty if-blocks inside some @eval blocks. I introduced a couple of new functions (whose names we may indeed change).

Lots of tests are still to be implemented, but this is left for tomorrow.

Once again, thanks a lot for the review and ideas!

@coveralls commented Mar 20, 2017

Coverage decreased (-5.2%) to 93.145% when pulling 6f51e98 on refactor_functions into 487cb8b on master.

@dpsanders (Contributor) left a review comment:

Very nice work! Just a few more small comments here, but feel free to merge!

src/functions.jl Outdated
# Functions
for T in (:Taylor1, :TaylorN)
@eval begin
## exp ##
Contributor:

These comments can now be removed ;)

Member Author:

Sure!

src/functions.jl Outdated
## exp ##
function exp(a::$T)
order = max_order(a)
@inbounds aux = exp( constant_term(a) )
Contributor:

The @inbounds can be moved inside the constant_term function.

Member Author:

You are right; I thought I removed them from here...

src/functions.jl Outdated
function exp(a::$T)
order = max_order(a)
@inbounds aux = exp( constant_term(a) )
c = $T( aux, order )
Contributor:

Maybe get rid of aux everywhere and just write

c = $T( exp(constant_term(a)), order )

Can we think of a better name for constant_term?

Member Author:

I agree. There is at least one exception: tan, where there is an auxiliary quantity (the square), so we save one (expensive?) calculation...

Member Author:

Actually, there are other cases where it is convenient to have aux or a0... sorry for the randomness in naming this.

"""))

order = max_order(a)
c = $T( asin(a0), order )
Contributor:

acos?

Member Author:

Do you mean the definition of r? I'm not sure...

Contributor:

I mean that this is in the acos function, but the definition of c has asin in it.

Member Author:

It's actually a nice trick that @blas-ko came up with: the recursion formula is essentially the same as for asin except for a sign and the constant term.

Contributor:

OK, thanks. Maybe this should be mentioned in a comment somewhere.

Member Author:

Good point.
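
For reference, the relation behind the trick: $\frac{d}{dx}\arccos(x) = -\frac{1}{\sqrt{1-x^2}} = -\frac{d}{dx}\arcsin(x)$, so the recursion for the coefficients of $\arccos(a)$ coincides with the one for $\arcsin(a)$ up to an overall sign, and only the constant term differs ($\arccos(a_0)$ instead of $\arcsin(a_0)$).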

src/functions.jl Outdated

Return the Taylor expansion of $\sin(a)$, of order `a.order`, for
`a::Taylor1` polynomial.
c[k+1] = zero_korder(c, k)
Contributor:

What does zero_korder do?

Member Author:

It avoids the nasty if $T == Taylor1 ... end blocks that were there. For Taylor1 it simply returns zero(a[1]), while for TaylorN it returns HomogeneousPolynomial(zero(a[1][1]), k).
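
A minimal sketch of what that can look like (hypothetical, following the description above):

# Sketch: a zero suitable for the k-th order slot of c.
zero_korder(c::Taylor1, k::Int) = zero(c[1])
zero_korder(c::TaylorN, k::Int) = HomogeneousPolynomial(zero(c[1][1]), k)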

src/power.jl Outdated
n == 1 && return a
n == 2 && return square(a)
n < 0 && return inv( a^(-n) )
return Base.power_by_squaring(a, n)
Contributor:

Maybe use some kind of binomial expansion instead?
Since after normalising we have something of the form [1 + f(t)]^n.
Hmm, sounds complicated.

Contributor:

Or just use exp(n*log(a)) here too -- have you compared efficiency?

Member Author:

The advantage of power_by_squaring is that, if you have a::Taylor1{Int}, the result is of the same type, as would be expected. This is not preserved by exp(n*log(a)), which would return a Taylor1{Float64}.

A long time ago I did some comparisons between what we have now and the same type of implementation as for ^(a, r::Real); power_by_squaring was better, at least for small n.

The same subtlety about the type of the result appears here. The nice thing about preserving the type is that you get exact answers for Fateman-like tests.
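
A small illustration of the type-preservation point (the numbers are just for illustration):

a = Taylor1([1, 2, 3])    # Taylor1{Int}
a^3                       # power_by_squaring: stays Taylor1{Int}, coefficients exact
exp(3 * log(a))           # promotes to Taylor1{Float64}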

src/power.jl Outdated

with $a$ a `Taylor1` polynomial.
if $T == Taylor1
@inbounds c[k+1] = zero(c[1])
Contributor:

Can these two lines be unified?

Member Author:

I completely forgot about this file; I'll implement the same changes as in functions.jl here.

src/power.jl Outdated
@inbounds posTb = pos_table[c.order+1]

for na = 1:num_coeffs_a
@inbounds ca = a[na]
Contributor:

I think you can just put the @inbounds outside the for loop.

Member Author:

Done!

src/power.jl Outdated
for na = 1:num_coeffs_a
@inbounds ca = a[na]
ca == zero(T) && continue
@inbounds inda = index_table[a.order+1][na]
Contributor:

Try giving a name to index_table[a.order+1] outside the loop. I expect that would be a bit more efficient. (Just like with posTb.)

Member Author:

Good point; done!

return c
end

function sqrt(a::TaylorN)
Contributor:

Can this function now be unified with the previous one?

Member Author:

The catch here is that it is simple to factorize Taylor1 polynomials of the form a2 t^2 + a3 t^3 + ... into something like t^2 (a2 + a3 t + ...), so taking the sqrt becomes simpler. The same trick doesn't apply in a straightforward way to TaylorN polynomials. (Yet, Taylor1{Taylor1{T}} behaves nicely again!)
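
In formulas, the Taylor1 trick referred to above: if the first nonzero coefficient has even order $2k$, then

$$a(t) = t^{2k}\left(a_{2k} + a_{2k+1}\,t + \cdots\right), \qquad \sqrt{a(t)} = t^{k}\,\sqrt{a_{2k} + a_{2k+1}\,t + \cdots},$$

and the second factor has a nonzero constant term, so the standard recursion applies; as noted, there is no straightforward analogue of this factorization for TaylorN.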

…verse

Some tests have been modified to reflect current functionality.
@lbenet (Member Author) commented Mar 20, 2017

I just pushed the changes we discussed. I'll add some more tests related to the new functions, perhaps some docstrings, before merging. Thanks again for the code review!

@coveralls:

Coverage decreased (-4.3%) to 93.998% when pulling 957d7be on refactor_functions into 487cb8b on master.

2 similar comments

@dpsanders (Contributor):

Looks good and tests pass! Shall I merge?

@lbenet (Member Author) commented Mar 20, 2017

Hold on... I'm checking performance, and it seems that it is worse.

@lbenet (Member Author) commented Mar 20, 2017

Memory allocation has jumped ~10x... I'll check this later.

@coveralls commented Mar 20, 2017

Coverage increased (+0.2%) to 98.502% when pulling 4c24a05 on refactor_functions into 487cb8b on master.

@lbenet (Member Author) commented Mar 23, 2017

I just pushed another commit which addresses some optimizations in memory allocation I observed using perf/fateman.jl. The problem was that expressions like c[i] += a[j] * b[k] for TaylorNs were creating temporary HomogeneousPolynomials and therefore allocating memory. So I replaced them with mul!(c[i], a[j], b[k]), which does not allocate such temporaries.

This allocates much less memory and performs better than current master for perf/fateman.jl.
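
For illustration, the pattern described above (a sketch; here c[i], a[j], b[k] are HomogeneousPolynomials inside TaylorN objects):

# Allocating version: a[j] * b[k] creates a temporary HomogeneousPolynomial,
# and += creates another before storing into c[i].
c[i] += a[j] * b[k]

# In-place version: accumulate the product of a[j] and b[k] directly into c[i].
mul!(c[i], a[j], b[k])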

@coveralls:

Coverage decreased (-0.3%) to 98.068% when pulling 9e77bd8 on refactor_functions into 487cb8b on master.

1 similar comment

@coveralls:

Coverage decreased (-0.7%) to 97.591% when pulling e98f3fd on refactor_functions into 487cb8b on master.

@coveralls:

Coverage decreased (-1.4%) to 96.924% when pulling e5122ca on refactor_functions into 487cb8b on master.

@lbenet merged commit 0697deb into master on Mar 25, 2017
@lbenet (Member Author) commented Mar 25, 2017

I'm merging... Whatever is left will be the subject of a new PR.
