
Remove deprecations for breaking release #715

Closed
wants to merge 24 commits

Conversation

@ChrisRackauckas (Member) commented Jun 4, 2022:

Fixes #707

src/DiffEqFlux.jl (outdated review thread, resolved)
@ChrisRackauckas (Member Author):

@Abhishek-1Bhatt can you take this one from here? All that needs to be done is to update the tests in the same way the tutorials are being updated (a lot of the tests and examples should be the same).

@Abhishek-1Bhatt (Contributor):

Sure 👍

@prbzrg (Member) commented Jun 5, 2022:

Issues related to sciml_train should be closed or moved after this PR is merged.
https://github.com/search?q=sciml_train+repo%3ASciML%2FDiffEqFlux.jl+state%3Aopen+in%3Atitle&type=Issues

src/hnn.jl (outdated), comment on lines 45 to 46:

```julia
end
return new{typeof(model), typeof(re), typeof(p)}(model, re, p)
```
@ChrisRackauckas (Member Author):

This is missing Lux compatibility dispatches.
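A minimal, self-contained sketch of what such a Lux compatibility dispatch could look like (the `Lux.AbstractExplicitLayer` method, the use of `Lux.setup`, and the `ComponentArray` conversion are assumptions for illustration, not the implementation in this PR):

```julia
using Lux, Random, ComponentArrays

# Stand-in struct mirroring the field layout above; the real one lives in src/hnn.jl.
struct HamiltonianNN{M, R, P}
    model::M
    re::R
    p::P
end

# Hypothetical Lux dispatch: Lux layers carry no parameters, so there is no
# Flux.destructure/`re` closure; parameters come from Lux.setup instead.
function HamiltonianNN(model::Lux.AbstractExplicitLayer; p = nothing)
    if p === nothing
        ps, _st = Lux.setup(Random.default_rng(), model)
        p = ComponentArray(ps)
    end
    re = nothing
    return HamiltonianNN{typeof(model), typeof(re), typeof(p)}(model, re, p)
end
```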

@@ -11,19 +11,16 @@ derivatives of the loss backwards in time.

```julia
NeuralODE(model,tspan,alg=nothing,args...;kwargs...)
```
@ChrisRackauckas (Member Author):

Need to mention Lux
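
A hedged example of the kind of Lux usage the docstring could mention (the layer sizes, solver choice, and the exact calling convention shown here are illustrative assumptions, not the documented API):

```julia
using DiffEqFlux, Lux, OrdinaryDiffEq, Random, ComponentArrays

rng = Random.default_rng()
dudt = Lux.Chain(Lux.Dense(2, 16, tanh), Lux.Dense(16, 2))
node = NeuralODE(dudt, (0.0f0, 1.0f0), Tsit5(); saveat = 0.1f0)

# With Lux, parameters and state are explicit and come from Lux.setup.
ps, st = Lux.setup(rng, dudt)
ps = ComponentArray(ps)

u0 = Float32[2.0, 0.0]
# Calling convention assumed here: the solve and the updated layer state are returned.
sol, st = node(u0, ps, st)
```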

@ChrisRackauckas (Member Author):

Did a lot of issue cleanup. Most were solved already.

@Abhishek-1Bhatt (Contributor) commented Jun 15, 2022:

It still doesn't run the full CI. Also, there are some conflicts with master; I can't see what the conflicts are because of access issues. Maybe we need to merge master into this branch, since it says this branch is 10 commits behind.

test/cnf_test.jl (outdated review thread, resolved)
ChrisRackauckas and others added 2 commits June 18, 2022 05:50
Co-authored-by: Abhishek Bhatt <46929125+Abhishek-1Bhatt@users.noreply.github.com>
Co-authored-by: Abhishek Bhatt <46929125+Abhishek-1Bhatt@users.noreply.github.com>
@Abhishek-1Bhatt (Contributor):

This line errors in the Newton NeuralODE test:

```julia
res = Optimization.solve(optprob, Optim.KrylovTrustRegion(), maxiters = 100, callback=cb)
```

Is this a compatibility issue with Lux?

```
ERROR: MethodError: no method matching initial_state(::Optim.KrylovTrustRegion{Float64}, ::Optim.Options{Float64, OptimizationOptimJL.var"#_cb#11"{var"#5#6", Optim.KrylovTrustRegion{Float64}, Base.Iterators.Cycle{Tuple{Optimization.NullData}}}}, ::NLSolversBase.TwiceDifferentiableHV{Float32, ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:10, Axis(weight = ViewAxis(1:5, ShapedAxis((5, 1), NamedTuple())), bias = ViewAxis(6:10, ShapedAxis((5, 1), NamedTuple())))), layer_2 = ViewAxis(11:16, Axis(weight = ViewAxis(1:5, ShapedAxis((1, 5), NamedTuple())), bias = ViewAxis(6:6, ShapedAxis((1, 1), NamedTuple())))))}}}, ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:10, Axis(weight = ViewAxis(1:5, ShapedAxis((5, 1), NamedTuple())), bias = ViewAxis(6:10, ShapedAxis((5, 1), NamedTuple())))), layer_2 = ViewAxis(11:16, Axis(weight = ViewAxis(1:5, ShapedAxis((1, 5), NamedTuple())), bias = ViewAxis(6:6, ShapedAxis((1, 1), NamedTuple())))))}}}, ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:10, Axis(weight = ViewAxis(1:5, ShapedAxis((5, 1), NamedTuple())), bias = ViewAxis(6:10, ShapedAxis((5, 1), NamedTuple())))), layer_2 = ViewAxis(11:16, Axis(weight = ViewAxis(1:5, ShapedAxis((1, 5), NamedTuple())), bias = ViewAxis(6:6, ShapedAxis((1, 1), NamedTuple())))))}}}}, ::ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:10, Axis(weight = ViewAxis(1:5, ShapedAxis((5, 1), NamedTuple())), bias = ViewAxis(6:10, ShapedAxis((5, 1), NamedTuple())))), layer_2 = ViewAxis(11:16, Axis(weight = ViewAxis(1:5, ShapedAxis((1, 5), NamedTuple())), bias = ViewAxis(6:6, ShapedAxis((1, 1), NamedTuple())))))}}})
Closest candidates are:
  initial_state(::AcceleratedGradientDescent, ::Any, ::Any, ::AbstractArray{T}) where T at C:\Users\user\.julia\packages\Optim\6Lpjy\src\multivariate\solvers\first_order\accelerated_gradient_descent.jl:35
  initial_state(::Optim.KrylovTrustRegion, ::Any, ::Any, ::Array{T}) where T at C:\Users\user\.julia\packages\Optim\6Lpjy\src\multivariate\solvers\second_order\krylov_trust_region.jl:39
  initial_state(::SimulatedAnnealing, ::Any, ::Any, ::AbstractArray{T}) where T at C:\Users\user\.julia\packages\Optim\6Lpjy\src\multivariate\solvers\zeroth_order\simulated_annealing.jl:62
```

@ChrisRackauckas (Member Author):

See if you can isolate it to Optim.KrylovTrustRegion() just requiring standard Array types: that would be my guess.

@Abhishek-1Bhatt (Contributor):

What would be the best way to allow initial_x above to be a ComponentVector?

@ChrisRackauckas (Member Author):

Just use Flux.jl for that for now. The real answer is an upstream fix to Optim.jl, but that shouldn't block this.
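
A minimal sketch of that Flux-based workaround. The model and loss here are illustrative stand-ins for the actual test, and it assumes Optimization.AutoZygote can supply the Hessian-vector products KrylovTrustRegion needs; the point is that Flux.destructure hands Optim a plain Vector, which is what the `initial_state(::Optim.KrylovTrustRegion, ..., ::Array{T})` candidate above accepts:

```julia
using Flux, Zygote, Optimization, OptimizationOptimJL, Optim

# Illustrative model; destructure flattens the parameters into a plain Vector{Float32}.
model = Flux.Chain(Flux.Dense(1, 5, tanh), Flux.Dense(5, 1))
p0, re = Flux.destructure(model)

# Illustrative loss: rebuild the model from the flat parameters and evaluate it.
loss(p, _) = sum(abs2, re(p)(ones(Float32, 1, 10)))

optf = OptimizationFunction(loss, Optimization.AutoZygote())
optprob = OptimizationProblem(optf, p0)
res = Optimization.solve(optprob, Optim.KrylovTrustRegion(); maxiters = 100)
```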

@avik-pal (Member):

Just a heads up here: I will be deprecating quite a few functionalities in v0.4.8 for removal in v0.5 (most of these were undocumented, but they ended up in user code; see https://github.com/avik-pal/Lux.jl/blob/ap/tests/src/deprecated.jl). It might be worthwhile to avoid using these (most notably ActivationFunction).
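
For illustration, a hedged sketch of the kind of replacement this implies (using WrappedFunction to broadcast the activation is an assumption about a reasonable substitute, not a quote of Lux's deprecation message):

```julia
using Lux

# Deprecated per the note above:
# act = Lux.ActivationFunction(tanh)

# One way to avoid the deprecated layer: wrap a plain broadcasting function.
act = Lux.WrappedFunction(x -> tanh.(x))

model = Lux.Chain(Lux.Dense(2, 16), act, Lux.Dense(16, 2))
```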

@ChrisRackauckas (Member Author):

Continued in #794

Successfully merging this pull request may close these issues:

- Remove Requires.jl usage by directly depending on GPUArrays.jl