
Deprecate FastChain and sciml_train for v2.0 #794

Merged · 36 commits · Jan 20, 2023
Changes from 2 commits
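For downstream users, the gist of this PR is the move from implicit-parameter `FastChain` networks and the bundled `sciml_train` loop to explicit-parameter Lux models driven by Optimization.jl. Below is a minimal migration sketch of an assumed post-2.0 workflow; the data and loss function are hypothetical placeholders, not code from this PR's diff, and the optimizer package choice (OptimizationOptimisers) is an assumption:

```julia
using Lux, ComponentArrays, Random
using Optimization, OptimizationOptimisers, Zygote

rng = Random.default_rng()

# v1.x style, removed by this PR:
#   model = FastChain(FastDense(2, 16, tanh), FastDense(16, 2))
#   p     = initial_params(model)
#   res   = DiffEqFlux.sciml_train(loss, p, ADAM(0.01))

# v2.0 style: explicit-parameter Lux model plus a direct Optimization.jl solve.
model  = Lux.Chain(Lux.Dense(2 => 16, tanh), Lux.Dense(16 => 2))
ps, st = Lux.setup(rng, model)
ps     = ComponentArray(ps)  # flat, differentiable parameter vector

x = rand(rng, Float32, 2, 32)                # hypothetical input batch
loss(p) = sum(abs2, first(model(x, p, st)))  # hypothetical loss

optf = OptimizationFunction((p, _) -> loss(p), Optimization.AutoZygote())
prob = OptimizationProblem(optf, ps)
sol  = solve(prob, OptimizationOptimisers.Adam(0.01); maxiters = 100)
```

The `ComponentArray` step stands in for the old `initial_params` flat vector, in line with the "use a componentarray" commit below.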
Commits

7c1ce87  Deprecate FastChain and sciml_train for v2.0 (ChrisRackauckas, Jan 17, 2023)
f979816  Update test/multiple_shoot.jl (ChrisRackauckas, Jan 17, 2023)
f6bf0d2  Update test/neural_dae.jl (ChrisRackauckas, Jan 17, 2023)
894c759  Lux (ChrisRackauckas, Jan 17, 2023)
ce1d4a0  bring back the reexport (ChrisRackauckas, Jan 17, 2023)
8f5adea  Update Project.toml (ChrisRackauckas, Jan 17, 2023)
f2a9834  remove optimization compat (ChrisRackauckas, Jan 17, 2023)
f71c7fb  Merge remote-tracking branch 'origin/depremoval2' into depremoval2 (ChrisRackauckas, Jan 17, 2023)
22cf368  a few fixes (ChrisRackauckas, Jan 17, 2023)
eaf8efa  more fixes (ChrisRackauckas, Jan 17, 2023)
4b1c6d4  add using Zygotes (ChrisRackauckas, Jan 18, 2023)
8f3b52f  fix loss signature (ChrisRackauckas, Jan 18, 2023)
efbd9b9  fix a few usings (ChrisRackauckas, Jan 18, 2023)
9f6b63c  fix some namespacing (ChrisRackauckas, Jan 18, 2023)
2b6c9df  fix activation definition (ChrisRackauckas, Jan 18, 2023)
1a7c4d3  use a componentarray (ChrisRackauckas, Jan 18, 2023)
89088cb  typo (ChrisRackauckas, Jan 18, 2023)
bc0f86d  add the optimizers (ChrisRackauckas, Jan 18, 2023)
441c90e  fix multiple shoot tests (ChrisRackauckas, Jan 18, 2023)
ecb1dbb  Flux it (ChrisRackauckas, Jan 18, 2023)
f492645  typo (ChrisRackauckas, Jan 18, 2023)
dfbd38c  mark Flux GPU (ChrisRackauckas, Jan 18, 2023)
e019eb7  Fix doc build (ChrisRackauckas, Jan 18, 2023)
6e002f4  Update test/mnist_gpu.jl (ChrisRackauckas, Jan 19, 2023)
11015c7  Update mnist_gpu.jl (Abhishek-1Bhatt, Jan 19, 2023)
27317e1  Merge pull request #796 from Abhishek-1Bhatt/patch-1 (ChrisRackauckas, Jan 19, 2023)
24a8005  Flux.cpu (Abhishek-1Bhatt, Jan 19, 2023)
6faea34  Update mnist_conv_gpu.jl (Abhishek-1Bhatt, Jan 19, 2023)
0242cb2  Flux.Conv (Abhishek-1Bhatt, Jan 19, 2023)
537507f  Merge pull request #797 from Abhishek-1Bhatt/patch-1 (ChrisRackauckas, Jan 19, 2023)
184a182  Specify Flux in all layers (Abhishek-1Bhatt, Jan 20, 2023)
6147fbf  Update mnist_conv_neural_ode.md (Abhishek-1Bhatt, Jan 20, 2023)
ebd3b47  Update mnist_neural_ode.md (Abhishek-1Bhatt, Jan 20, 2023)
04c0886  Merge pull request #799 from Abhishek-1Bhatt/patch-1 (ChrisRackauckas, Jan 20, 2023)
3555634  fix Flux designations in docs (ChrisRackauckas, Jan 20, 2023)
035d19e  try this (ChrisRackauckas, Jan 20, 2023)
16 changes: 1 addition & 15 deletions Project.toml
@@ -1,16 +1,14 @@
name = "DiffEqFlux"
uuid = "aae7a2af-3d4f-5e19-a356-7da93b79d9d0"
authors = ["Chris Rackauckas <accounts@chrisrackauckas.com>"]
version = "1.54.0"
version = "2.0.0"

[deps]
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
Cassette = "7057c7e9-c182-5462-911a-8362d720325c"
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
ConsoleProgressMonitor = "88cd18e8-d9cc-4ea6-8889-5259c0d15c8b"
DataInterpolations = "82cc6244-b520-54b8-b5a6-8a565e85f1d0"
DiffEqBase = "2b5f629d-d688-5b77-993f-72d75c75574e"
DiffResults = "163ba53b-c6d8-5494-b064-1a9d43ac40c5"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
DistributionsAD = "ced4e74d-a319-5a8a-b0ac-84af2272839c"
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
@@ -20,8 +18,6 @@ LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Logging = "56ddb016-857b-54e1-b83d-db4d58db5568"
LoggingExtras = "e6f89c97-d47a-5376-807f-9c37f3926c36"
Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
Optim = "429524aa-4258-5aef-a3af-852621145aeb"
Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
OptimizationFlux = "253f991c-a7b2-45f8-8852-8b9a9df78a86"
OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
@@ -34,39 +30,29 @@ Reexport = "189a3867-3050-52da-a836-e630ba90ab69"
Requires = "ae029012-a4dd-5104-9daa-d747884805df"
SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462"
SciMLSensitivity = "1ed8b502-d754-442c-8d5d-10ac956f44a1"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
TerminalLoggers = "5d786b92-1e48-4d6f-9151-6b4477ca9bed"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
ZygoteRules = "700de1a5-db45-46bc-99cf-38207098b444"

[compat]
Adapt = "3"
Cassette = "0.3.7"
ChainRulesCore = "1"
ConsoleProgressMonitor = "0.1"
DataInterpolations = "3.3"
DiffEqBase = "6.41"
DiffResults = "1.0"
Distributions = "0.23, 0.24, 0.25"
DistributionsAD = "0.6"
Flux = "0.12, 0.13"
ForwardDiff = "0.10"
Functors = "0.4"
LoggingExtras = "0.4, 1"
Lux = "0.4"
NNlib = "0.7, 0.8"
Optim = "1"
Optimization = "3"
OptimizationFlux = "0.1"
OptimizationOptimJL = "0.1"
OptimizationPolyalgorithms = "0.1"
ProgressLogging = "0.1"
RecursiveArrayTools = "2"
Reexport = "0.2, 1"
Requires = "0.5, 1.0"
SciMLBase = "1"
SciMLSensitivity = "7"
StaticArrays = "0.11, 0.12, 1"
TerminalLoggers = "0.1"
Zygote = "0.5, 0.6"
ZygoteRules = "0.2"
1 change: 0 additions & 1 deletion docs/src/layers/CNFLayer.md
@@ -3,7 +3,6 @@
The following layers are helper functions for easily building neural differential equation architectures specialized for the task of density estimation through Continuous Normalizing Flows (CNF).

```@docs
-DeterministicCNF
FFJORD
FFJORDDistribution
```
27 changes: 6 additions & 21 deletions src/DiffEqFlux.jl
@@ -1,23 +1,14 @@
module DiffEqFlux

using Adapt, Base.Iterators, ConsoleProgressMonitor, DataInterpolations,
-    DiffEqBase, SciMLSensitivity, DiffResults, Distributions, DistributionsAD,
-    ForwardDiff, Optimization, OptimizationPolyalgorithms, LinearAlgebra,
+    DiffEqBase, SciMLSensitivity, Distributions, DistributionsAD,
+    ForwardDiff, LinearAlgebra,
Logging, LoggingExtras, Printf, ProgressLogging, Random, RecursiveArrayTools,
-    Reexport, SciMLBase, StaticArrays, TerminalLoggers, Zygote, ZygoteRules
+    Reexport, SciMLBase, TerminalLoggers, Zygote, ZygoteRules

@reexport using SciMLSensitivity
@reexport using Zygote

# deprecate

import OptimizationFlux
import NNlib
import Lux
using Requires
using Cassette
@reexport using Flux
Review thread on the `@reexport using Flux` line:

prbzrg (Member), Jan 17, 2023:

    Now that we support Lux, reexporting Flux can cause qualification errors in code that only uses Lux. For example:

    WARNING: both Lux and Flux export "Chain"; uses of it in module Main must be qualified
    ERROR: UndefVarError: Chain not defined

ChrisRackauckas (Member, Author):

    Yeah, we should remove the reexport. Add that to the list of things.
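A short sketch of the clash prbzrg describes, assuming both packages end up in scope (the model constructors are arbitrary placeholders, not code from this repo):

```julia
using Lux, Flux  # e.g. Lux directly, Flux via DiffEqFlux's `@reexport using Flux`

# Chain(Dense(2, 2))  # WARNING: both Lux and Flux export "Chain";
#                     # uses of it in module Main must be qualified

# Qualifying the names resolves the ambiguity:
lux_model  = Lux.Chain(Lux.Dense(2 => 2))
flux_model = Flux.Chain(Flux.Dense(2 => 2))
```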

@reexport using OptimizationOptimJL
using Functors

import ChainRulesCore
@@ -43,25 +34,19 @@ ZygoteRules.@adjoint ZygoteRules.literal_getproperty(A::Tridiagonal, ::Val{:du})
ZygoteRules.@adjoint Tridiagonal(dl, d, du) = Tridiagonal(dl, d, du), p̄ -> (diag(p̄[2:end, 1:end-1]), diag(p̄), diag(p̄[1:end-1, 2:end]))

include("ffjord.jl")
include("train.jl")
include("fast_layers.jl")
include("neural_de.jl")
include("require.jl")
include("spline_layer.jl")
include("tensor_product_basis.jl")
include("tensor_product_layer.jl")
include("collocation.jl")
include("hnn.jl")
include("multiple_shooting.jl")

-Flux.device(::FastLayer) = @warn "device(f::FastLayer) is a no-op: to move FastChain computations to a GPU, apply gpu(x) to the weight vector"
-Flux.gpu(::FastLayer) = @warn "device(f::FastLayer) is a no-op: to move FastChain computations to a GPU, apply gpu(x) to the weight vector"
-Flux.cpu(::FastLayer) = @warn "device(f::FastLayer) is a no-op: to move FastChain computations to a CPU, apply cpu(x) to the weight vector"

-export DeterministicCNF, FFJORD, NeuralODE, NeuralDSDE, NeuralSDE, NeuralCDDE, NeuralDAE, NeuralODEMM, TensorLayer, AugmentedNDELayer, SplineLayer, NeuralHamiltonianDE
+export FFJORD, NeuralODE, NeuralDSDE, NeuralSDE, NeuralCDDE, NeuralDAE,
+       NeuralODEMM, TensorLayer, AugmentedNDELayer, SplineLayer, NeuralHamiltonianDE
export HamiltonianNN
export ChebyshevBasis, SinBasis, CosBasis, FourierBasis, LegendreBasis, PolynomialBasis
-export FastDense, StaticDense, FastChain, initial_params
+export FastDense, StaticDense, initial_params
export FFJORDDistribution
export DimMover, FluxBatchOrder
