```@meta
CurrentModule = NeuralOperators
```

# NeuralOperators

*(Animation: ground truth (left) vs. inferred solution (right).)*

The demonstration above shows the Navier-Stokes equation learned by the `MarkovNeuralOperator` using information from only a single time step. The full example can be found in `example/FlowOverCircle`.

## Installation

To install NeuralOperators.jl, use the Julia package manager:

```julia
using Pkg
Pkg.add("NeuralOperators")
```
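After installing, load the package. The usage examples below also call Flux directly (for `Chain`, `Dense`, and the training loop), so load it as well:

```julia
using NeuralOperators, Flux
```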

## Usage

### Fourier Neural Operator

```julia
model = Chain(
    # lift the (d + 1)-dimensional vector field to an n-dimensional vector field
    # here, d == 1 and n == 64
    Dense(2, 64),
    # map each hidden representation to the next by an integral kernel operator
    OperatorKernel(64 => 64, (16,), FourierTransform, gelu),
    OperatorKernel(64 => 64, (16,), FourierTransform, gelu),
    OperatorKernel(64 => 64, (16,), FourierTransform, gelu),
    OperatorKernel(64 => 64, (16,), FourierTransform),
    # project back to the scalar field of interest
    Dense(64, 128, gelu),
    Dense(128, 1)
)
```
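As a rough sketch of the expected data layout (the sizes here are illustrative, not from the package docs), the model maps arrays of shape `(channels, spatial points, batch)`:

```julia
# hypothetical sizes: 2 input channels (sampled function value plus grid
# coordinate), 1024 spatial points, and a batch of 5 samples
𝐱 = rand(Float32, 2, 1024, 5)
𝐲 = model(𝐱)    # Dense and OperatorKernel layers act on the channel dimension
size(𝐲)         # (1, 1024, 5): one output channel per spatial point
```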

Or one can just call:

```julia
model = FourierNeuralOperator(ch = (2, 64, 64, 64, 64, 64, 128, 1),
                              modes = (16,),
                              σ = gelu)
```

Then train it as a Flux model:

```julia
# `data` is an iterable of (𝐱, 𝐲) batches; l₂loss is exported by NeuralOperators
loss(𝐱, 𝐲) = l₂loss(model(𝐱), 𝐲)
opt = Flux.Optimiser(WeightDecay(1.0f-4), Flux.Adam(1.0f-3))
Flux.@epochs 50 Flux.train!(loss, params(model), data, opt)
```
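Here `data` can be any iterable of `(𝐱, 𝐲)` tuples. A minimal sketch using Flux's `DataLoader` (dataset names and sizes are hypothetical; on older Flux versions the loader lives at `Flux.Data.DataLoader`):

```julia
# hypothetical full dataset of N = 1000 samples
xdata = rand(Float32, 2, 1024, 1000)
ydata = rand(Float32, 1, 1024, 1000)
data = Flux.DataLoader((xdata, ydata), batchsize = 100, shuffle = true)
```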

### DeepONet

```julia
# tuples of Ints for the branch and trunk net architectures,
# followed by the activations for branch and trunk respectively
model = DeepONet((32, 64, 72), (24, 64, 72), σ, tanh)
```

Or specify the branch and trunk nets as separate Flux `Chain`s and pass them to `DeepONet`:

```julia
branch = Chain(Dense(32, 64, σ), Dense(64, 72, σ))
trunk = Chain(Dense(24, 64, tanh), Dense(64, 72, tanh))
model = DeepONet(branch, trunk)
```
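For reference, a rough sketch of the calling convention (shapes are illustrative): the first argument feeds the branch net and the second feeds the trunk net, matching the `model(xtrain, sensor)` call in the training loop below.

```julia
# hypothetical data: 100 samples with 32 sensor readings each,
# evaluated at 50 query locations described by 24 coordinates each
xtrain = rand(Float32, 32, 100)   # branch input: (branch input width, batch)
grid   = rand(Float32, 24, 50)    # trunk input: (trunk input width, # query points)
ŷ = model(xtrain, grid)           # one prediction per (sample, query point) pair
```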

You can again specify loss, optimization, and training parameters just as you would for a simple neural network with Flux.

```julia
loss(xtrain, ytrain, sensor) = Flux.Losses.mse(model(xtrain, sensor), ytrain)
# xval/yval: held-out validation data evaluated on the same grid
evalcb() = @show(loss(xval, yval, grid))

learning_rate = 0.001
opt = Adam(learning_rate)
parameters = params(model)
Flux.@epochs 400 Flux.train!(loss, parameters, [(xtrain, ytrain, grid)], opt, cb = evalcb)
```

A more complete example using the DeepONet architecture to solve Burgers' equation can be found in the `example` directory.

## Contributing

## Reproducibility

```@raw html
<details><summary>The documentation of this SciML package was built using these direct dependencies,</summary>
```

```@example
using Pkg # hide
Pkg.status() # hide
```

```@raw html
</details>
```

```@raw html
<details><summary>and using this machine and Julia version.</summary>
```

```@example
using InteractiveUtils # hide
versioninfo() # hide
```

```@raw html
</details>
```

```@raw html
<details><summary>A more complete overview of all dependencies and their versions is also provided.</summary>
```

```@example
using Pkg # hide
Pkg.status(; mode = PKGMODE_MANIFEST) # hide
```

```@raw html
</details>
```
```@eval
using TOML
using Markdown
version = TOML.parse(read("../../Project.toml", String))["version"]
name = TOML.parse(read("../../Project.toml", String))["name"]
link_manifest = "https://github.com/SciML/" * name * ".jl/tree/gh-pages/v" * version *
                "/assets/Manifest.toml"
link_project = "https://github.com/SciML/" * name * ".jl/tree/gh-pages/v" * version *
               "/assets/Project.toml"
Markdown.parse("""You can also download the
[manifest]($link_manifest)
file and the
[project]($link_project)
file.
""")
```