The ADCME library (Automatic Differentiation Library for Computational and Mathematical Engineering) aims at generic and scalable inverse modeling with gradient-based optimization techniques. It uses TensorFlow as its backend for automatic differentiation and parallel computing. The dataflow model adopted by the framework enables researchers to perform high-performance inverse modeling without substantial extra effort once the forward simulation is implemented.
Several features of the library:
- **MATLAB-style syntax.** Write `A*B` for matrix multiplication instead of `tf.matmul(A,B)`.
- **Custom operators.** Implement operators in C/C++ for bottleneck parts; incorporate legacy or specially designed C/C++ code into ADCME.
- **Numerical schemes.** Easy to implement numerical schemes for solving PDEs.
- **Static graphs.** Compile-time computational graph optimization; automatic parallelism for your simulation codes.
- **Custom optimizers.** Large-scale constrained optimization? Use `CustomOptimizer` to integrate your favorite optimizer.
Start building your forward and inverse modeling on top of the million-dollar TensorFlow project with ADCME today!
Note: ADCME forces `PyCall` to use its default Python interpreter. Do not try to reset the interpreter by rebuilding `PyCall`.
- Install Julia
- Install ADCME
  ```julia
  julia> ]
  pkg> add ADCME
  ```
- (Optional) Test ADCME.jl
  ```julia
  julia> ]
  pkg> test ADCME
  ```
- (Optional) Enable GPU support

  To enable GPU support, first make sure `nvcc` is available in your environment (e.g., running `which nvcc` in your shell should print the location of the executable). Then build with:

  ```julia
  ENV["GPU"] = 1
  Pkg.build("ADCME")
  ```
Consider solving the following problem:

-b u''(x) + u(x) = f(x),  x ∈ [0,1],  u(0) = u(1) = 0

where

f(x) = 8 + 4x - 4x²

Assume that we have observed u(0.5) = 1, and we want to estimate b. The true value in this case is b = 1; the corresponding exact solution is u(x) = 4x(1 - x), which indeed satisfies u(0.5) = 1.
```julia
using LinearAlgebra
using ADCME

n = 101                      # number of grid nodes in [0,1]
h = 1/(n-1)
x = LinRange(0,1,n)[2:end-1] # interior nodes

b = Variable(10.0)           # mark the unknown with the Variable keyword

# Tridiagonal finite-difference discretization of -u''
A = diagm(0=>2/h^2*ones(n-2), -1=>-1/h^2*ones(n-3), 1=>-1/h^2*ones(n-3))
B = b*A + I                  # I stands for the identity matrix
f = @. 4*(2 + x - x^2)
u = B\f                      # solve the linear system with the built-in solver
ue = u[div(n-1,2)]           # extract the value at x = 0.5 (interior index 50)
loss = (ue-1.0)^2

# Optimization
sess = Session(); init(sess)
BFGS!(sess, loss)
println("Estimated b = ", run(sess, b))
```
Expected output (up to the optimizer's tolerance):
```
Estimated b ≈ 1.0
```
Gradients can be obtained very easily. For example, to get the gradient of `loss` with respect to `b`, the following code creates a tensor for the gradient:

```julia
julia> gradients(loss, b)
PyObject <tf.Tensor 'gradients_1/Mul_grad/Reshape:0' shape=() dtype=float64>
```
Under the hood, a computational graph is created for gradients back-propagation.
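For intuition, reverse-mode differentiation through the linear solve in the example above amounts to a single adjoint solve. Below is a standalone NumPy sketch (the variable names and the finite-difference cross-check are ours, not ADCME API) of the adjoint gradient of the loss with respect to b:

```python
import numpy as np

# Same discretization as the example above, in NumPy
n = 101
h = 1 / (n - 1)
x = np.linspace(0, 1, n)[1:-1]
A = (np.diag(2 * np.ones(n - 2)) +
     np.diag(-np.ones(n - 3), -1) +
     np.diag(-np.ones(n - 3), 1)) / h**2
f = 4 * (2 + x - x**2)
k = n // 2 - 1                      # index of the node x = 0.5

b = 10.0                            # same initial guess as the example
B = b * A + np.eye(n - 2)
u = np.linalg.solve(B, f)
r = u[k] - 1.0                      # residual of the observation

# Reverse mode through a linear solve: solve the adjoint system
# B^T lam = dJ/du, then d(loss)/db = -lam^T (dB/db) u with dB/db = A.
rhs = np.zeros(n - 2)
rhs[k] = 2 * r
lam = np.linalg.solve(B.T, rhs)
adj_grad = -lam @ (A @ u)

# Cross-check against a central finite difference of the loss
def loss(bb):
    uu = np.linalg.solve(bb * A + np.eye(n - 2), f)
    return (uu[k] - 1.0) ** 2

eps = 1e-6
fd_grad = (loss(b + eps) - loss(b - eps)) / (2 * eps)
assert np.isclose(adj_grad, fd_grad, rtol=1e-4)
```

The adjoint solve is what makes the gradient cost comparable to one extra forward solve, regardless of how many parameters enter the system matrix.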
For more documentation, see here.
It is recommended that you use the default build script. However, in some cases, you may want to install the package and configure the environment manually.
Step 1: Install ADCME on a computer with Internet access and zip all files from the following paths:

```julia
julia> using Pkg
julia> Pkg.depots()
```

The files will contain all the dependencies.
Step 2: Build ADCME manually.

```julia
using Pkg
ENV["manual"] = 1
Pkg.build("ADCME")
```
However, in this case you are responsible for configuring the environment by modifying the file whose path is printed by:

```julia
using ADCME
print(joinpath(splitdir(pathof(ADCME))[1], "deps/deps.jl"))
```
- Stochastic Inversion with Adversarial Training [Documentation]
- Calibrating Lévy Processes [Documentation]
- Learning Constitutive Relations [Documentation]
- Time-lapse FWI [Documentation]
- Intelligent Automatic Differentiation [Documentation]
ADCME.jl is released under MIT License. See License for details.