FBPINNs v0.2.0

@benmoseley released this 30 Jul 17:26

New features

🔥 This is a major new update to the FBPINN library 🔥. We have rewritten the entire library in JAX and added a much more flexible, easier-to-use high-level interface.

Speed-up

  • The library now runs 10-1000X faster than the original PyTorch implementation. The core reason is the use of jax.vmap, which parallelises subdomain forward and gradient computations on the GPU (previously they ran sequentially). This allows us to scale to thousands of subdomains, whereas previously fewer than ~100 subdomains were practical.
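The batching pattern behind this speed-up can be sketched with a toy example (the network and shapes here are illustrative, not the library's internals): jax.vmap maps a single-subdomain forward pass over a stacked axis of per-subdomain parameters, so every subdomain is evaluated in one batched GPU call instead of a Python loop.

```python
import jax
import jax.numpy as jnp

def mlp_forward(params, x):
    # Tiny per-subdomain MLP: one hidden tanh layer (stands in for a real network).
    (W1, b1), (W2, b2) = params
    h = jnp.tanh(x @ W1 + b1)
    return h @ W2 + b2

def init_params(key, n_in=1, n_hidden=8, n_out=1):
    # Random weights for one subdomain network.
    k1, k2 = jax.random.split(key)
    return ((jax.random.normal(k1, (n_in, n_hidden)), jnp.zeros(n_hidden)),
            (jax.random.normal(k2, (n_hidden, n_out)), jnp.zeros(n_out)))

n_subdomains = 100
keys = jax.random.split(jax.random.PRNGKey(0), n_subdomains)
# vmap over keys stacks each parameter leaf along a leading subdomain axis.
params = jax.vmap(init_params)(keys)

x = jnp.linspace(0.0, 1.0, 50).reshape(-1, 1)  # shared collocation points
# vmap over the parameter axis only: all subdomain forward passes in one call.
batched_forward = jax.vmap(mlp_forward, in_axes=(0, None))
out = batched_forward(params, x)  # shape (n_subdomains, 50, 1)
```

The same pattern composes with jax.grad, which is how gradient computations are batched across subdomains as well.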

Flexibility

The high-level interface is much more flexible. In particular you can now:

  • Define irregular and multilevel domain decompositions
  • Define custom subdomain neural network architectures
  • Add arbitrary types of boundary/data constraints, including training FBPINNs with "soft" boundary losses
  • Solve inverse problems
  • Learn domain decompositions
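As background for the decomposition options above, here is a minimal 1D sketch of the overlapping-window idea underlying FBPINNs (the cosine window and the normalisation below are illustrative choices, not the library's implementation): each subdomain network is confined to its subdomain by a smooth window, and the windows are normalised so they form a partition of unity over the domain.

```python
import jax.numpy as jnp

def cosine_window(x, center, width):
    # Smooth bump supported on [center - width/2, center + width/2].
    t = jnp.clip((x - center) / (width / 2), -1.0, 1.0)
    return jnp.cos(jnp.pi * t / 2) ** 2

centers = jnp.array([0.0, 0.5, 1.0])  # three overlapping 1D subdomains
width = 1.0
x = jnp.linspace(0.0, 1.0, 101)

w = jnp.stack([cosine_window(x, c, width) for c in centers])  # (3, 101)
w_norm = w / w.sum(0)  # normalise so the windows sum to 1 everywhere
```

Irregular or multilevel decompositions then amount to choosing different sets of window centers, widths, and levels, which is what the new interface exposes.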

Ease-of-use

Furthermore, the interface is easier to use. Compared to the previous code:

  • There is no need to update the gradients of the FBPINN by hand when applying constraining operators; this is now done automatically using autodiff
  • The Domain, Problem, Decomposition and Network classes are designed to be intuitive and minimal
  • Python logging is now used to control the level of output
  • The library is pip installable
  • More examples have been added
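The first point can be illustrated with a toy hard-constraint ansatz (the "network" and constraining operator here are hypothetical stand-ins, not the library's API): multiplying the network output by tanh(x) enforces u(0) = 0 exactly, and jax.grad differentiates through the product automatically, so no hand-derived gradient updates are needed.

```python
import jax
import jax.numpy as jnp

def network(theta, x):
    # Toy "network": a cubic with learnable coefficients (stands in for an MLP).
    return theta[0] * x + theta[1] * x**2 + theta[2] * x**3

def constrained_u(theta, x):
    # Constraining operator: tanh(x) * network enforces the "hard" BC u(0) = 0.
    return jnp.tanh(x) * network(theta, x)

theta = jnp.array([1.0, -0.5, 0.2])
# Autodiff applies the product rule through the constraining operator for us.
dudx = jax.grad(constrained_u, argnums=1)
```

In the old PyTorch code this derivative through the constraining operator had to be maintained by hand; here any differentiable operator works unchanged.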