ADCME.jl
The ADCME library (Automatic Differentiation Library for Computational and Mathematical Engineering) aims at generic and scalable inverse modeling with gradient-based optimization techniques. It uses TensorFlow as its automatic differentiation and parallel computing backend. The dataflow model adopted by the framework lets researchers do high-performance inverse modeling with little extra effort once the forward simulation is implemented.

Key features of the library include:

  • MATLAB-style syntax. Write A*B for matrix multiplication instead of tf.matmul(A,B) (see the short example below).
  • Custom operators. Implement operators in C/C++ for performance-critical parts; incorporate legacy or specially designed C/C++ code into ADCME.
  • Numerical schemes. Easily implement numerical schemes for solving PDEs.
  • Static graphs. Compile-time computational-graph optimization and automatic parallelism for your simulation codes.
  • Custom optimizers. Large-scale constrained optimization? Use CustomOptimizer to integrate your favorite optimizer.
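
As a quick taste of the MATLAB-style syntax, the minimal sketch below multiplies two constant tensors with the ordinary * operator (constant wraps a plain Julia array as a tensor; the matrix values are arbitrary and only for illustration):

using ADCME

A = constant(rand(10, 10))  # wrap a Julia matrix as a tensor
B = constant(rand(10, 10))
C = A * B                   # matrix multiplication, instead of tf.matmul(A, B)

sess = Session(); init(sess)
run(sess, C)                # evaluate the tensor and get back a Julia array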

Start building your forward and inverse models on top of the million-dollar TensorFlow project with ADCME today!

Documentation

Installation

  1. Install Julia

  2. Install ADCME

julia> ]
pkg> add ADCME

  3. (Optional) Test ADCME.jl

julia> ]
pkg> test ADCME

  4. (Optional) Test custom operator functionality. If you want to use customop(), test the utility with the following command:

julia> test_custom_op()

For custom operators, the compiler used to build the TensorFlow shared library (see here) must be consistent with the compiler used to compile the custom operators. By default, ADCME installs the Anaconda GCC on Linux systems. If you use a custom build of TensorFlow and encounter compile, load, or link errors, check the GCC and libtensorflow_framework.so versions. The following information may be useful (see the short snippet after the installation steps for how to inspect these values):

Variable      Description
ADCME.CXX     C++ compiler
ADCME.CC      C compiler
ADCME.TFLIB   libtensorflow_framework.so location
ADCME.CMAKE   CMake binary location
ADCME.MAKE    Make binary location

  5. (Optional) Enable GPU support. To enable GPU support, first make sure nvcc is available in your environment (e.g., type nvcc in your shell; you should get the location of the executable binary).

using ADCME
enable_gpu()
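
The build-related values in the table above can be inspected from the Julia REPL. This is a minimal sketch, assuming the entries are exposed as constants in the ADCME module as the table suggests; the actual paths depend on your installation:

using ADCME

println(ADCME.CXX)    # C++ compiler
println(ADCME.CC)     # C compiler
println(ADCME.TFLIB)  # location of libtensorflow_framework.so
println(ADCME.CMAKE)  # CMake binary location
println(ADCME.MAKE)   # Make binary location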

Tutorial

Consider solving the following problem

-bu''(x)+u(x) = f(x), x∈[0,1], u(0)=u(1)=0

where

f(x) = 8 + 4x - 4x²

Assume that we have observed u(0.5)=1 and we want to estimate b; the true value in this case is b=1. In the code below, u'' is approximated by the standard second-order central difference (u(x-h) - 2u(x) + u(x+h))/h² on a uniform grid with spacing h, so -u'' corresponds to the tridiagonal matrix A with 2/h² on the diagonal and -1/h² on the off-diagonals, and the discrete system is (bA + I)u = f.

using LinearAlgebra
using ADCME

n = 101 # number of grid nodes in [0,1]
h = 1/(n-1)
x = LinRange(0,1,n)[2:end-1]

b = Variable(10.0) # mark the unknown b as a trainable Variable
A = diagm(0=>2/h^2*ones(n-2), -1=>-1/h^2*ones(n-3), 1=>-1/h^2*ones(n-3)) # finite-difference matrix approximating -u''
B = b*A + I  # I stands for the identity matrix
f = @. 4*(2 + x - x^2) # right-hand side f(x) = 8 + 4x - 4x^2
u = B\f # solve the equation using built-in linear solver
ue = u[div(n+1,2)] # extract values at x=0.5

loss = (ue-1.0)^2 

# Optimization
sess = Session(); init(sess) 
BFGS!(sess, loss)

println("Estimated b = ", run(sess, b))

Expected output

Estimated b = 0.9995582304494237
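
As an optional sanity check (a minimal sketch reusing the variables defined above), the discrete solution can also be evaluated and compared with the exact solution u(x) = 4x(1-x) for b = 1:

u_est = run(sess, u)                    # discrete solution at the interior grid nodes
u_exact = @. 4*x*(1 - x)                # exact solution for b = 1
println(maximum(abs.(u_est - u_exact))) # should be small if the fit is good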

Gradients can be obtained very easily. For example, if we want the gradient of loss with respect to b, the following code creates a Tensor for the gradient:

julia> gradients(loss, b)
PyObject <tf.Tensor 'gradients_1/Mul_grad/Reshape:0' shape=() dtype=float64>

Under the hood, a computational graph is created for gradient back-propagation.
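
The gradient tensor can be evaluated in a session like any other tensor. The minimal sketch below reuses sess from the tutorial above:

g = gradients(loss, b)  # build the gradient node d(loss)/db
println(run(sess, g))   # evaluate the gradient; returns a scalar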

For more documentation, see here.

Research Work


[1] Kailai Xu and Eric Darve. "Adversarial Numerical Analysis for Inverse Problems"

[2] Kailai Xu and Eric Darve. "Calibrating Multivariate Lévy Processes with Neural Networks"

[3] Kailai Xu and Daniel Z. Huang (co-first authors), Charbel Farhat, and Eric Darve. "Learning Constitutive Relations from Indirect Observations Using Deep Neural Networks"


LICENSE

ADCME.jl is released under GNU GENERAL PUBLIC LICENSE Version 3. See License for details.
