Commentary and Corrections #227
Conversation
Added front commentary and explanations throughout. Removed a non-convergent initial estimate and corrected boundary conditions in Falkner-Skan.
Minor corrections.
Thanks! Do you think it's worthwhile adding? The limitation at the moment is that there's no automatic differentiation to determine the Jacobian.
Without automatic differentiation, the most cumbersome part is still left to the user. I think it makes sense to implement a variant of Broyden's method, as it requires only evaluations of the function itself.
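For concreteness, here is a minimal sketch of Broyden's "good" method on plain vectors (not `Fun` objects — extending it to function space is exactly what's under discussion). As noted below, it still needs an initial Jacobian; this sketch seeds one by finite differences and thereafter only evaluates the function itself:

```julia
using LinearAlgebra

# Broyden's "good" method for F(x) = 0. The Jacobian estimate B is
# seeded once by finite differences, then updated rank-one from the
# observed secant pairs; only evaluations of F are needed afterwards.
function broyden(F, x; tol=1e-10, maxit=50, ε=1e-7)
    n = length(x)
    Fx = F(x)
    B = zeros(n, n)
    for j in 1:n                       # one-time finite-difference seed
        e = zeros(n); e[j] = ε
        B[:, j] = (F(x .+ e) .- Fx) ./ ε
    end
    for _ in 1:maxit
        s = -(B \ Fx)                  # quasi-Newton step
        x += s
        Fnew = F(x)
        B += (Fnew .- Fx .- B*s) * s' / dot(s, s)  # rank-one secant update
        Fx = Fnew
        norm(Fx) < tol && break
    end
    return x
end

# Example: intersection of the unit circle with the line y = x
F(v) = [v[1]^2 + v[2]^2 - 1, v[1] - v[2]]
broyden(F, [1.0, 0.5])  # ≈ [1/√2, 1/√2]
```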
According to Wikipedia, Broyden’s method still requires an initial Jacobian. Is this correct?
So it does. I had in mind the secant method, which Broyden's method generalizes to multiple dimensions, and which does not use a Jacobian matrix.
Let me know if you find a derivative-free version of the secant method in higher dimensions. There was some talk about "steepest descent methods in function space" a while back to avoid derivatives, but I don't think anyone figured out how that would work.
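For reference, the one-dimensional secant method mentioned above replaces the derivative with the slope through the two most recent iterates; a minimal sketch:

```julia
# Secant iteration for scalar f(x) = 0: Newton's method with f'
# replaced by a difference quotient of the two latest iterates,
# so no derivative is ever evaluated.
function secant(f, x0, x1; tol=1e-12, maxit=100)
    f0, f1 = f(x0), f(x1)
    for _ in 1:maxit
        abs(f1) < tol && break
        x0, x1 = x1, x1 - f1*(x1 - x0)/(f1 - f0)
        f0, f1 = f1, f(x1)
    end
    return x1
end

secant(x -> x^2 - 2, 1.0, 2.0)  # ≈ √2
```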
Is there any intuition for the effectiveness of Krylov subspace methods for Fun-like objects? Depending on the method, we can form an approximate derivative by judicious finite differences, and the storage could still be small.
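One way to read this suggestion is a matrix-free Newton–Krylov scheme: the Jacobian is never formed, its action on a vector is approximated by one extra function evaluation, J(u)·v ≈ (F(u+εv) − F(u))/ε, and the Newton system is solved by a few GMRES iterations, so storage stays at a handful of vectors. A sketch on plain vectors (the hand-rolled `gmres` and the names here are illustrative, not any library's API):

```julia
using LinearAlgebra

# Finite-difference Jacobian-vector product: one extra evaluation of F.
jv(F, u, Fu, v; ε=1e-7) = (F(u .+ ε .* v) .- Fu) ./ ε

# Minimal GMRES via Arnoldi: builds an orthonormal Krylov basis Q and a
# small Hessenberg matrix H, then solves a tiny least-squares problem.
function gmres(Av, b; m=20, tol=1e-12)
    n = length(b); m = min(m, n)
    Q = zeros(n, m+1); H = zeros(m+1, m)
    β = norm(b); Q[:, 1] = b ./ β
    k = m
    for j in 1:m
        w = Av(Q[:, j])
        for i in 1:j
            H[i, j] = dot(Q[:, i], w); w -= H[i, j] .* Q[:, i]
        end
        H[j+1, j] = norm(w)
        H[j+1, j] < tol && (k = j; break)   # Krylov space exhausted
        Q[:, j+1] = w ./ H[j+1, j]
    end
    y = H[1:k+1, 1:k] \ [β; zeros(k)]       # small least-squares solve
    return Q[:, 1:k] * y
end

function newton_krylov(F, u; tol=1e-10, maxit=20)
    for _ in 1:maxit
        Fu = F(u)
        norm(Fu) < tol && break
        u -= gmres(v -> jv(F, u, Fu, v), Fu)  # approximate Newton step
    end
    return u
end

F(v) = [v[1]^2 + v[2]^2 - 1, v[1] - v[2]]
newton_krylov(F, [1.0, 0.5])  # ≈ [1/√2, 1/√2]
```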
Steffensen's method is quite close to a derivative-free Newton's method. It has already been generalized to functions acting on Banach spaces.
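In one dimension, Steffensen's iteration is Newton's method with f′(x) replaced by the difference quotient g(x) = (f(x + f(x)) − f(x))/f(x), which retains quadratic convergence without any derivative; a minimal scalar sketch:

```julia
# Steffensen's iteration for scalar f(x) = 0: the step size f(x) itself
# is used as the finite-difference increment, so the method remains
# second order while using only evaluations of f.
function steffensen(f, x; tol=1e-12, maxit=100)
    for _ in 1:maxit
        fx = f(x)
        abs(fx) < tol && break
        g = (f(x + fx) - fx) / fx   # divided-difference stand-in for f'(x)
        x -= fx / g
    end
    return x
end

steffensen(x -> x^2 - 2, 1.5)  # ≈ √2
```

The Banach-space generalization replaces this scalar divided difference with a divided-difference operator, which is where the division-by-a-function concern below comes in.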
I don't see how boundary conditions get incorporated in Steffensen's method: it looks like it divides by a function, so there's no \.
OK. That's disappointing.
Don't we just want a more sophisticated version of this?

```julia
using ApproxFun, Calculus

# Blasius equation 2u''' + u*u'' = 0, with a symbolic Jacobian from Calculus.jl
N = parse("2D^3*u + u*D^2*u")
J = differentiate(N,:u)

x = Fun(identity,[0.,4π])
d = domain(x)
B = [ldirichlet(d),neumann(d)]  # u(0) = 0, u'(0) = 0, u'(4π) = 1
Bcs = [0.;0.;1.]
D = Derivative(d)

κ = 0.33205733621519630  # This is diff(u,2)[0.], due to Boyd.
u0 = κ*x^2/2             # First-order approximation
u = 0.5x^2               # Other initial solutions may fail to converge
for k = 1:10
    u -= [B;eval(J)]\[B*u-Bcs;eval(N)]  # Newton step with boundary rows appended
    chop!(u,norm(u.coefficients,Inf)*eps(eltype(u))^2)
    println(norm(eval(N)))
end
norm(eval(N))         # should be zero
abs(u''[first(d)]-κ)  # should also be zero
```
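The loop above is a plain Newton iteration in function space: linearize about the current iterate and solve a linear BVP for the correction, with the boundary functionals appended as extra rows. In symbols (with c the boundary data `Bcs` and N′[uₙ] the Fréchet derivative that `eval(J)` builds):

```latex
u_{n+1} = u_n - \begin{bmatrix} B \\ N'[u_n] \end{bmatrix}^{-1}
                \begin{bmatrix} B u_n - c \\ N(u_n) \end{bmatrix}
```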
This means we can handle nonlinear boundary conditions too! There's still a small bug, but it shouldn't be too hard to get it right. Here's the nonlinear pendulum equation with a unit-norm constraint:

```julia
using ApproxFun, Calculus

# Pendulum equation u'' + sin(u) = 0 with u(0) = 0 and unit L² norm
N = parse("D^2*u + sin(u)")
J = differentiate(N,:u)
B = [parse("LD*u-0.0"),parse("∫*abs2(u)-1.0")]
BJ = map(b->differentiate(b,:u),B)
BJ[2] = :(∫[2u])  # almost. We need ∫[2u] since it's a Functional, ∫*(2u) is a number.

d = Interval(0,2π)
x = Fun(identity,d)
∫ = DefiniteIntegral(space(x))
LD = ldirichlet(d)
D = Derivative(d)

u = x/norm(x)  # Other initial solutions may fail to converge
while norm(eval(N)) > 10eps()
    u -= [map(eval,BJ)...;eval(J)]\[map(eval,B)...;eval(N)]
    chop!(u,norm(u.coefficients,Inf)*eps(eltype(u))^2)
    println(norm(eval(N)))
end
norm(eval(N))  # should be zero
norm(u)        # should be 1.
```
I did not know about
Added front-commentary and explanations throughout, corrected several issues with initial estimates and Falkner-Skan boundary conditions.