
Commentary and Corrections #227

Merged
merged 2 commits into JuliaApproximation:master on Jul 31, 2015
Conversation

cmarcotte
Contributor

Added front commentary and explanations throughout; corrected several issues with initial estimates and Falkner-Skan boundary conditions.

Added front commentary and explanations throughout. Removed non-convergent initial estimate, and corrected boundary conditions in Falkner-Skan.
Minor corrections.
dlfivefifty added a commit that referenced this pull request Jul 31, 2015
Commentary and Corrections
@dlfivefifty dlfivefifty merged commit 098b85a into JuliaApproximation:master Jul 31, 2015
@dlfivefifty
Member

Thanks! Do you think it's worthwhile adding a `nonlinsolve` that implements Newton iteration?

The limitation at the moment is no automatic differentiation to determine the Jacobian.
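For reference, the iteration itself is simple once a Jacobian is available. Below is a minimal sketch in plain Python (not ApproxFun code), on a hypothetical 2×2 system with the Jacobian coded by hand, which is exactly the part automatic differentiation would otherwise supply:

```python
# Minimal Newton iteration on a 2x2 system with a hand-coded Jacobian.
def newton(F, J, x, tol=1e-12, maxit=50):
    for _ in range(maxit):
        fx = F(x)
        if max(abs(v) for v in fx) < tol:
            break
        # Solve J(x) * dx = -F(x) for the 2x2 case by Cramer's rule.
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        dx0 = (-fx[0] * d + fx[1] * b) / det
        dx1 = (-fx[1] * a + fx[0] * c) / det
        x = (x[0] + dx0, x[1] + dx1)
    return x

# Hypothetical example: x^2 + y^2 = 1 and y = x, i.e. x = y = 1/sqrt(2).
F = lambda p: (p[0]**2 + p[1]**2 - 1, p[1] - p[0])
J = lambda p: ((2 * p[0], 2 * p[1]), (-1.0, 1.0))
root = newton(F, J, (1.0, 0.5))
```

The linear solve here is a toy 2×2 Cramer's rule; in a `nonlinsolve` the corresponding step would be a `\` solve with the boundary rows appended.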

@cmarcotte
Contributor Author

Without automatic differentiation, the most cumbersome part is still left to the user. I think it makes sense to implement a variant of Broyden's method, as it requires only evaluations of the function itself.
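A rough sketch of what such a Broyden solver could look like, in plain Python on a hypothetical 2×2 system. Note it still needs an initial Jacobian guess; the identity is a common (if crude) choice:

```python
# Broyden's "good" method: after the first step only function
# evaluations are used; a rank-one update maintains the Jacobian guess.
def solve2(J, r):
    # Solve the 2x2 system J * x = r by Cramer's rule.
    (a, b), (c, d) = J
    det = a * d - b * c
    return ((r[0] * d - b * r[1]) / det, (a * r[1] - r[0] * c) / det)

def broyden(F, x, tol=1e-12, maxit=100):
    J = ((1.0, 0.0), (0.0, 1.0))          # identity as initial Jacobian
    fx = F(x)
    for _ in range(maxit):
        if max(abs(v) for v in fx) < tol:
            break
        dx = solve2(J, (-fx[0], -fx[1]))
        xn = (x[0] + dx[0], x[1] + dx[1])
        fn = F(xn)
        df = (fn[0] - fx[0], fn[1] - fx[1])
        # Rank-one update: J += (df - J*dx) dx^T / (dx . dx)
        Jdx = (J[0][0]*dx[0] + J[0][1]*dx[1], J[1][0]*dx[0] + J[1][1]*dx[1])
        denom = dx[0]**2 + dx[1]**2
        u = ((df[0] - Jdx[0]) / denom, (df[1] - Jdx[1]) / denom)
        J = ((J[0][0] + u[0]*dx[0], J[0][1] + u[0]*dx[1]),
             (J[1][0] + u[1]*dx[0], J[1][1] + u[1]*dx[1]))
        x, fx = xn, fn
    return x

# Hypothetical example: x + 0.1*y^2 = 1, y = x/2.
F = lambda p: (p[0] + 0.1 * p[1]**2 - 1, p[1] - 0.5 * p[0])
root = broyden(F, (0.0, 0.0))
```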

@dlfivefifty
Member

According to Wikipedia, Broyden’s method still requires an initial Jacobian. Is this correct?

@cmarcotte
Contributor Author

So it does. I had in mind the secant method, which Broyden's method generalizes to multiple dimensions, and which does not use a Jacobian matrix.
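For comparison, the scalar secant method needs two starting points but no derivative at all; a minimal sketch in plain Python:

```python
# Scalar secant method: two starting points, no derivatives.
def secant(f, x0, x1, tol=1e-14, maxit=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        if abs(f1) < tol:
            break
        # Replace f' by the slope through the last two iterates.
        x0, f0, x1 = x1, f1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f1 = f(x1)
    return x1

root = secant(lambda x: x**2 - 2, 1.0, 2.0)   # converges to sqrt(2)
```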

@dlfivefifty
Member

Let me know if you find a derivative-free version of the secant method in higher dimensions.

There was some talk about "steepest descent methods in function space" a while back to avoid derivatives, but I don't think anyone figured out how that would work.

@cmarcotte
Contributor Author

Is there any intuition for the effectiveness of Krylov subspace methods for Fun-like objects? Depending on the method, we can form an approximate derivative by judicious finite differences, and the storage could still be small.
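The attraction is that Krylov solvers never need the Jacobian itself, only its action on a vector, and that action can be approximated by a single extra residual evaluation: J(u)·v ≈ (F(u + εv) − F(u))/ε. A sketch of that finite-difference product in plain Python, on a made-up residual:

```python
# Matrix-free Jacobian-vector product via a directional finite difference.
def jac_vec(F, u, v, eps=1e-7):
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(fp - fu) / eps for fp, fu in zip(Fp, Fu)]

# Hypothetical residual F(u) = (u0^2 - u1, u0 + u1^2); its Jacobian at
# u = (1, 2) applied to v = (1, 0) is exactly (2, 1).
F = lambda u: (u[0]**2 - u[1], u[0] + u[1]**2)
Jv = jac_vec(F, [1.0, 2.0], [1.0, 0.0])
```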

@ajt60gaibb
Contributor

Steffensen's method is quite close to a derivative-free Newton's method. It has already been generalized to functions acting on Banach spaces.
https://en.wikipedia.org/wiki/Steffensen%27s_method
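A minimal scalar sketch in plain Python, replacing f′(x) by the divided difference g(x) = (f(x + f(x)) − f(x))/f(x):

```python
# Steffensen's method: Newton with f'(x) replaced by a divided
# difference, so only function evaluations are needed, while the
# quadratic convergence rate is retained.
def steffensen(f, x, tol=1e-14, maxit=50):
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = (f(x + fx) - fx) / fx   # divided-difference slope estimate
        x -= fx / g
    return x

root = steffensen(lambda x: x**2 - 2, 1.5)   # converges to sqrt(2)
```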

@dlfivefifty
Member

I don't see how boundary conditions get incorporated in Steffensen's method: it looks like it divides by a function, so there's no `\` solve where the boundary conditions could be imposed.

@ajt60gaibb
Contributor

OK. That's disappointing.

@MikaelSlevinsky
Member

Don't we just want a more sophisticated version of this?

```julia
using ApproxFun, Calculus

N = parse("2D^3*u + u*D^2*u")
J = differentiate(N,:u)

x=Fun(identity,[0.,4π])
d=domain(x)
B=[ldirichlet(d),neumann(d)]
Bcs=[0.;0.;1.]
D=Derivative(d)
κ = 0.33205733621519630     # This is diff(u,2)[0.], due to Boyd.
u0 = κ*x^2/2                # First-order approximation

u = 0.5x^2                  # Other initial solutions may fail to converge
for k=1:10
    u -= [B;eval(J)]\[B*u-Bcs;eval(N)]
    chop!(u,norm(u.coefficients,Inf)*eps(eltype(u))^2)
    println(norm(eval(N)))
end
norm(eval(N))               # should be zero
abs(u''[first(d)]-κ)        # should also be zero
```

@MikaelSlevinsky
Member

This means we can handle nonlinear boundary conditions too! There's still a small bug, but it shouldn't be too hard to get right. Here's the nonlinear pendulum equation with a unit-norm constraint:

```julia
using ApproxFun, Calculus

N = parse("D^2*u + sin(u)")
J = differentiate(N,:u)
B = [parse("LD*u-0.0"),parse("∫*abs2(u)-1.0")]
BJ = map(b->differentiate(b,:u),B)
BJ[2] = :(∫[2u])      # almost. We need ∫[2u] since it's a Functional, ∫*(2u) is a number.

d=Interval(0,2π)
x=Fun(identity,d)
∫=DefiniteIntegral(space(x))
LD=ldirichlet(d)
D=Derivative(d)

u = x/norm(x)                  # Other initial solutions may fail to converge
while norm(eval(N)) > 10eps()
    u -= [map(eval,BJ)...;eval(J)]\[map(eval,B)...;eval(N)]
    chop!(u,norm(u.coefficients,Inf)*eps(eltype(u))^2)
    println(norm(eval(N)))
end
norm(eval(N))               # should be zero
norm(u)                     # should be 1
```

@ajt60gaibb
Contributor

I did not know about Calculus.jl. Looks great to me.
