Support for symbolic variable type ::Num ? #175

Open
mbuze opened this issue May 9, 2024 · 6 comments

mbuze commented May 9, 2024

Hi Christoph, together with @Fraser-Birks and @jjbraun we are looking at implementing the $\mathcal{O}(r^{-1})$ boundary conditions for Mode I fracture, and we believe we can derive an explicit form for them in the general anisotropic case. This is somewhat optimistic at this stage, but provided we can do it, assembling such a boundary condition requires the user to supply 2nd order and 3rd order elastic constants. It would be beautiful to compute these automatically for a given interatomic potential, so that the user gets them "for free". We are playing around with the idea of doing it symbolically, since these days Symbolics.jl seems to work surprisingly well on code that was not developed with symbolic variables in mind.
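As a toy illustration of what I mean, Symbolics.jl happily differentiates an ordinary Julia function that knows nothing about symbolic types (a minimal sketch, not JuLIP code):

using Symbolics

# an ordinary Julia function, written without symbolic variables in mind
phi(r) = 1/r^12 - 2/r^6        # Lennard-Jones-like pair potential, minimum at r = 1

@variables r
dphi  = Symbolics.derivative(phi(r), r)    # symbolic first derivative
ddphi = Symbolics.derivative(dphi, r)      # symbolic second derivative
substitute(ddphi, Dict(r => 1.0))          # numeric curvature at the minimum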

So I gave it a go with JuLIP, as follows:

using JuLIP
using Symbolics
using LinearAlgebra

# simple system definition
at = bulk(:Si, cubic=true) * 4
set_calculator!(at, StillingerWeber())
energy(at)
X = positions(at)

# changing atom positions numerically:
X_new = [x + [1.0,0.0,0.0] for x in X]
set_positions!(at,X_new)
energy(at)

# symbolic variables defining a homogeneous deformation of atoms
YY = Symbolics.variables(:YY, 1:3, 1:3)

# symbolic positions of atoms
X_sym = [YY*x for x in X]

# trying to pass it to JuLIP
set_positions!(at,X_sym)

This of course errors, and we get

ERROR: MethodError: no method matching set_positions!(::Atoms{Float64}, ::Vector{Vector{Num}})

Closest candidates are:
  set_positions!(::AbstractAtoms, ::Any, ::Any, ::Any)
   @ JuLIP ~/.julia/packages/JuLIP/NgfTg/src/abstractions.jl:141
  set_positions!(::Atoms{T}, ::AbstractArray{SVector{3, T}, 1}) where T
   @ JuLIP ~/.julia/packages/JuLIP/NgfTg/src/atoms.jl:164
  set_positions!(::AbstractAtoms, ::AbstractMatrix)
   @ JuLIP ~/.julia/packages/JuLIP/NgfTg/src/abstractions.jl:140

I initially thought that it might have to do with ::Num not being compatible with StaticArrays.jl, but I can run

using StaticArrays

@variables yy # defining a variable

# assembling static vectors in two ways:
@SVector [yy^2, yy^3, yy^4]
SA[yy^2, yy^3, yy^4]

and it works. So I also tried

X = positions(at)
X_sym = [SA[YY * x...] for x in X]
set_positions!(at,X_sym)

but it results in the same error. I think the problem is simply that Atoms requires a type parameter that is a subtype of AbstractFloat, and Num is not.
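A quick check is consistent with that (Num is a Real, but not an AbstractFloat):

using Symbolics

Num <: AbstractFloat   # false, which (if Atoms{T} indeed requires T <: AbstractFloat) rules it out
Num <: Real            # true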

However, if I just take bits of code from JuLIP for computing energies for pair potentials, then everything works rather well:

# define a simple 2D triangular lattice system (so that I can compare with manual calculations)
r0 = 1.0  # atomic spacing 
N = 30    # number of atoms in each direction 

# generate positions  (for real crystals this is automated)
t = range(0.0,(N-1)*r0,N)

o = ones(N)
x, y = t * o', o * t'
X = [ [1.0 0.5; 0.0 √3/2] * [x[:]'; y[:]']; zeros(N^2)' ]
X = X |> vecs
# generate the atoms object  (H is just a place-holder)
at = Atoms(:H, X) 
set_cell!(at, diagm([2.0*N, 2.0*N, 1.0]))
set_pbc!(at, (true, true, true))

YY = Symbolics.variables(:YY, 1:3, 1:3)
X = positions(at)
aa = norm(X[1]-X[2])
NNs = neighbourlist(at, 1.05*aa)

X_sym = [YY * x for x in X]


lljj = LennardJones(; r0 = 1.0, e0 = 1.0)

# periodic b.c.s don't seem to work, so pick an atom well inside the domain that has its full 6 nearest neighbours
ii = 100

rr = NNs.first[ii]:(NNs.first[ii+1]-1)
# symbolic site energy
Es = zero(Num)
for (i,j) in zip(NNs.i[rr],NNs.j[rr])
    Es += Num(0.5)*lljj(norm(X_sym[i]-X_sym[j]))
end

# Compute third order elastic constant:
dd1 = Symbolics.derivative(Es,YY[1,1])
dd2 = Symbolics.derivative(dd1,YY[1,1])
dd3 = Symbolics.derivative(dd2,YY[1,1])

# substitute identity matrix
substitute(dd3, Dict([YY[1,1] => 1.0, YY[1,2] => 0.0, YY[1,3] => 0.0,
                        YY[2,1] => 0.0, YY[2,2] => 1.0, YY[2,3] => 0.0,
                        YY[3,1] => 0.0, YY[3,2] => 0.0, YY[3,3] => 1.0]))

and it gives us what we want and very quickly too!

I think, with some effort, this should also work for n-body interactions, especially, say, with ACE potentials. I would be very happy to try to code it up, but I was wondering if you had any input on whether including support for ::Num is feasible. I can envision quite a lot of issues with cut-offs, both because symbolic differentiation adds an awful lot of extra terms and because symbolic variables usually don't work too well with conditional statements, which cut-offs typically involve.
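For example, something as simple as a hard cutoff already fails on a symbolic input (a minimal sketch):

using Symbolics

@variables r
rcut = 2.0
# the comparison returns a symbolic expression rather than a Bool,
# so the branch cannot be taken:
r < rcut ? 1/r^6 : 0.0
# ERROR: TypeError: non-boolean (Num) used in boolean context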

cortner (Member) commented May 9, 2024

JuLIP is quite old and possibly not flexible enough to easily adapt to abstract number types. E.g. I've long wanted to allow dual and hyper-dual numbers; those would give you the same thing, I think. I was planning to re-work Stillinger-Weber and EAM to allow this, but within the new EmpiricalPotentials.jl instead of JuLIP.jl.
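E.g. nesting dual numbers already gives exact higher derivatives of ordinary Julia code (a sketch with ForwardDiff, nothing JuLIP-specific):

using ForwardDiff

# any plain Julia scalar function, written with no AD in mind
f(x) = exp(x) * x^2

# third derivative via nested dual numbers
d3f(x) = ForwardDiff.derivative(
             a -> ForwardDiff.derivative(
                      b -> ForwardDiff.derivative(f, b), a), x)

d3f(0.0)   # f'''(0) = 6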

mbuze (Author) commented May 24, 2024

Sounds good, I will keep an eye on EmpiricalPotentials.jl then. Would it work with ACEpotentials.jl?

cortner (Member) commented May 24, 2024

that's the goal

cortner (Member) commented May 24, 2024

If you look at EmpiricalPotentials now, the SW potential can already accept general number types as inputs.

mbuze (Author) commented May 25, 2024

I had a look and indeed types are no longer the problem, but, as expected, conditional statements involving cut-offs are. Here is an MWE, adapted from the tests in EmpiricalPotentials:

using Symbolics
using LinearAlgebra

using AtomsBase
using AtomsCalculators
using AtomsCalculators.AtomsCalculatorsTesting
using EmpiricalPotentials
using ExtXYZ
using FiniteDiff
using Test
using Unitful
using JSON, StaticArrays


sw = StillingerWeber()
fname = joinpath(pkgdir(EmpiricalPotentials),"test", "data", "test_sw.json")
D = JSON.parsefile(fname)
tests = D["tests"]

function read_test(t::Dict)
    Rs = SVector{3, Float64}.(t["Rs"])
    Zs = Int.(t["Zs"])
    z0 = Int(t["z0"])
    v = Float64(t["val"])
    return Rs, Zs, z0, v
end

Rs, Zs, z0, _ = read_test(tests[2])

YY = Symbolics.variables(:YY, 1:3, 1:3)
Rs_sym = [YY * rs for rs in Rs]  # create symbolic Rs

EmpiricalPotentials.eval_site(sw, Rs_sym, Zs, z0) # evaluate the site potential

Then we get an error

ERROR: TypeError: non-boolean (Num) used in boolean context
Stacktrace:
 [1] (::EmpiricalPotentials.var"#60#62"{Float64, Float64, Float64, Float64, Float64, Float64})(r::Num)
   @ EmpiricalPotentials ~/.julia/packages/EmpiricalPotentials/APr5T/src/stillingerweber/stillingerweber.jl:124
 [2] eval_site(calc::StillingerWeber{…}, Rs::Vector{…}, Zs::Vector{…}, z0::Int64)
   @ EmpiricalPotentials ~/.julia/packages/EmpiricalPotentials/APr5T/src/stillingerweber/stillingerweber.jl:154
 Some type information was truncated. Use `show(err)` to see complete types.

and the responsible line is

V3 = r -> r >= rcut ? 0.0 : sqrt(ϵ * λ) * exp( γ / (r/σ - a) )

When I manually remove the conditional statements in the definitions of V2 and V3 (by defining a new function StillingerWeber_sym mimicking the one in the repo), everything works:

sw_sym = StillingerWeber_sym()
Es = EmpiricalPotentials.eval_site(sw_sym,Rs_sym,Zs,z0)

dd1 = Symbolics.derivative(Es,YY[1,1])
dd2 = Symbolics.derivative(dd1,YY[1,1])
dd3 = Symbolics.derivative(dd2,YY[1,1])

dd2_sub = substitute(dd2, Dict([YY[1,1] => 1.0, YY[1,2] => 0.0, YY[1,3] => 0.0,
                        YY[2,1] => 0.0, YY[2,2] => 1.0, YY[2,3] => 0.0,
                        YY[3,1] => 0.0, YY[3,2] => 0.0, YY[3,3] => 1.0]))

dd3_sub = substitute(dd3, Dict([YY[1,1] => 1.0, YY[1,2] => 0.0, YY[1,3] => 0.0,
                        YY[2,1] => 0.0, YY[2,2] => 1.0, YY[2,3] => 0.0,
                        YY[3,1] => 0.0, YY[3,2] => 0.0, YY[3,3] => 1.0]))

The output for dd3 is ridiculously gigantic, and it seems like the computation of dd3 takes about a minute, but in fact it is the printing of the output that takes so long! If the output is suppressed, it is almost instantaneous. Ultimately it spells out numbers which are kind of believable: dd3_sub = -4.166279257769397e6 and dd2_sub = 151351.422520672. The magnitude makes me a bit worried that it is not correct though... Also, entries such as C[1,1,1,2] come out as non-zero, but I guess that might have something to do with the fact that the system is a multi-lattice, so the Cauchy-Born rule works differently?
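(For the record, the suppression is nothing clever, just a trailing semicolon in the REPL, reusing the names from the block above:)

dd3 = Symbolics.derivative(dd2, YY[1,1]);   # ';' suppresses display of the huge expression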

But the bottom line is, it is super quick and works (at least in principle), provided we make sure no conditional statements are involved :)

cortner (Member) commented May 25, 2024

We could remove the cutoff check by multiplying with a characteristic function that could then be overloaded for your number type. In principle I'm open to this, but I suspect this won't go well for more general potentials, especially ones of ACE-like complexity ...
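Roughly what I have in mind, as a sketch only (cutstep is a made-up name, not EmpiricalPotentials code, and the parameter values are dummies just to make it runnable):

using Symbolics

# dummy stand-ins for the SW parameters appearing in the closure quoted above
ϵ, λ, γ, σ, a = 1.0, 1.0, 1.0, 1.0, 1.8
rcut = a * σ

# hard cutoff written as a multiplicative characteristic function ...
cutstep(r, rcut) = r < rcut ? one(r) : zero(r)

# ... which can then be overloaded for the symbolic case, where the
# neighbourlist already guarantees r < rcut
cutstep(r::Num, rcut) = one(r)

V3 = r -> cutstep(r, rcut) * sqrt(ϵ * λ) * exp(γ / (r/σ - a))

@variables rr
V3(rr)     # no boolean branch is hit, so this now stays symbolic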

Have you considered symbolic regression instead?
