Releases: ITMO-NSS-team/torch_DE_solver

v0.4.0

14 Feb 15:51
6a85f42

An interface and callbacks were added for a better user experience. We have also added a PSO (particle swarm optimization) optimizer as an alternative to gradient-based training.
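
To illustrate what a particle swarm optimizer does as a gradient-free alternative, here is a minimal, self-contained sketch of the classic PSO update; the function and parameter names are hypothetical illustrations and do not reflect the torch_DE_solver API.

```python
import torch

def pso_minimize(loss_fn, dim, n_particles=20, iters=100,
                 w=0.5, c1=1.5, c2=1.5):
    """Gradient-free minimization of loss_fn over R^dim with classic PSO."""
    x = torch.randn(n_particles, dim)            # particle positions
    v = torch.zeros_like(x)                      # particle velocities
    pbest = x.clone()                            # per-particle best positions
    pbest_f = torch.tensor([loss_fn(p) for p in x])
    gbest = pbest[pbest_f.argmin()].clone()      # global best position
    for _ in range(iters):
        r1, r2 = torch.rand(2)
        # Pull each particle toward its own best and the global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = torch.tensor([loss_fn(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].clone()
    return gbest

# Toy usage: the minimum of the shifted quadratic is at (2, 2, 2).
best = pso_minimize(lambda p: ((p - 2.0) ** 2).sum().item(), dim=3)
```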

A ton of refactoring work was done; it opens the road for the tensor trains ride, so stay tuned.

v0.3.0

26 Jul 10:40
a19e359

In the previous episode we added Fourier layers. As the big guys say, they do not work without the adaptive lambdas magic.

We fully revamped the adaptive lambdas routine that is usual for PINNs: the lambdas are now computed directly from the dispersion (variance) part using Sobol indices (one may refer to the https://github.com/ITMO-NSS-team/torch_DE_solver/blob/adaptive_lambdas_sobol/examples/adaptive_disp_ODE.py and https://github.com/ITMO-NSS-team/torch_DE_solver/blob/adaptive_lambdas_sobol/examples/adaptive_disp_wave_eq.py examples with my experiments), not from the neural tangent kernel (NTK) eigenvalue analogue. This was done because NTK does not work for anything except a single PDE; in the NTK case we would have left ODEs and systems of equations out.
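
As a rough illustration of variance-based sensitivity, below is a textbook Saltelli-style estimator of first-order Sobol indices, with a toy weighting at the end; it is a sketch of the general technique, not the routine used in the examples above.

```python
import torch

def first_order_sobol(f, dim, n=4096):
    """Saltelli-style estimate of first-order Sobol indices of f on [0, 1]^dim."""
    A, B = torch.rand(n, dim), torch.rand(n, dim)
    fA, fB = f(A), f(B)
    var_y = torch.cat([fA, fB]).var()
    s = torch.empty(dim)
    for i in range(dim):
        AB_i = A.clone()
        AB_i[:, i] = B[:, i]                 # resample only coordinate i
        s[i] = (fB * (f(AB_i) - fA)).mean() / var_y
    return s

# Toy usage: the first coordinate dominates, so S[0] >> S[1].
f = lambda x: 4.0 * x[:, 0] ** 2 + x[:, 1] ** 2
S = first_order_sobol(f, dim=2)
lambdas = S / S.sum()        # one plausible way to turn indices into weights
```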

Secondly, we reworked the loss: it is now computed in two faces, one weighted with lambdas for gradient descent and one normalized for the stopping criterion. Even though this pulls everything back a bit (namely, the training process is not quite connected with the stop criterion anymore), it benefits parameter unification.
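
A minimal sketch of this split, under the assumption that "normalized" means unit weights; the names are illustrative, not the library's internals.

```python
import torch

def two_face_loss(term_losses, lambdas):
    """term_losses: list of scalar loss tensors, one per operator/condition."""
    # Weighted face: this is what gradient descent actually minimizes.
    weighted = sum(lam * L for lam, L in zip(lambdas, term_losses))
    # Normalized face: unit weights, detached from autograd, used only to
    # decide when to stop, so it is comparable across lambda settings.
    with torch.no_grad():
        normalized = sum(term_losses)
    return weighted, normalized
```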

Additionally, we split the Dirichlet and initial conditions in terms of lambdas, like the big guys do (we made a step further and split Dirichlet, operator, and periodic conditions). Adaptive lambdas are split accordingly.
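
One plausible way to hold such per-condition-type weights is a plain mapping; the keys follow the condition types named above, while the structure and values are hypothetical.

```python
# Per-condition-type weights as a plain mapping.
lambdas = {
    "operator": 1.0,     # equation residual terms
    "dirichlet": 10.0,   # Dirichlet boundary conditions
    "initial": 10.0,     # initial conditions
    "periodic": 1.0,     # periodic boundary conditions
}

def weighted_loss(term_losses):
    """term_losses maps a condition type to its scalar loss tensor."""
    return sum(lambdas[k] * L for k, L in term_losses.items())
```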

So in this release:

  • Adaptive lambdas

Minor:

  • More predictable cache location and behaviour

New layers and better performance

06 Jul 13:25
ee1e5c5

We added Fourier layers and fixed some performance issues, such as better work with the cache.
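
For context, below is a minimal random Fourier feature embedding in the spirit commonly used for PINNs (Tancik et al.); it is an assumed illustration of the technique, not necessarily the repository's layer.

```python
import math
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Map coordinates x to [sin(2*pi*Bx), cos(2*pi*Bx)] with fixed random B."""
    def __init__(self, in_dim, n_features=64, scale=1.0):
        super().__init__()
        # B is sampled once and frozen; scale controls the frequency spread.
        self.register_buffer("B", torch.randn(in_dim, n_features) * scale)

    def forward(self, x):
        proj = 2 * math.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

# Usage: prepend the embedding to an MLP that approximates the solution.
net = nn.Sequential(FourierFeatures(2, 64), nn.Linear(128, 64),
                    nn.Tanh(), nn.Linear(64, 1))
```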