
ConstitutiveMaterial: Threaded evaluation of the gradient/hessian of the strain energy function #680

Closed

adtzlr opened this issue on Mar 5, 2024 · 2 comments
Labels: performance (runtimes related stuff)

adtzlr (Owner) commented on Mar 5, 2024

Try to run the evaluation of the gradient/hessian of a constitutive material formulation in parallel (threaded). This will be very hard to implement because of state variables, broadcasting, out-support, etc.
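For illustration, the core idea is to split the trailing (cells) axis of the input array into chunks and evaluate each chunk in its own thread, each writing into a view of a pre-allocated output. A minimal NumPy-only sketch of that pattern, independent of felupe (the `threaded_apply` helper, the doubling kernel, `nthreads` and the last-axis convention are illustrative assumptions, not felupe API):

import numpy as np
from threading import Thread

def threaded_apply(kernel, x, out, nthreads=4):
    # split the last (cell) axis into nthreads chunks and collect
    # [start, stop) bounds for each chunk
    sizes = [c.shape[-1] for c in np.array_split(x, nthreads, axis=-1)]
    bounds = np.append(0, np.cumsum(sizes))
    # one thread per chunk; each thread writes into its own view of ``out``
    threads = [
        Thread(target=kernel, args=(x[..., a:b], out[..., a:b]))
        for a, b in zip(bounds[:-1], bounds[1:])
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out

# hypothetical elementwise kernel which writes into the given output view
x = np.random.rand(3, 3, 8, 1000)
out = threaded_apply(lambda a, o: np.multiply(a, 2.0, out=o), x, np.zeros_like(x))
assert np.allclose(out, 2 * x)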

adtzlr added the enhancement label and self-assigned this on Mar 5, 2024.

adtzlr (Owner) commented on Mar 5, 2024

import felupe as fem
import numpy as np
from threading import Thread

# hexahedron mesh, region and a 3d vector field
mesh = fem.Cube(n=51)
region = fem.RegionHexahedron(mesh)
field = fem.FieldContainer([fem.Field(region, dim=3)])

# clamped uniaxial load case
boundaries, loadcase = fem.dof.uniaxial(field, clamped=True)


class Threaded(fem.constitution.ConstitutiveMaterial):
    "Evaluate the gradient/hessian of a material in thread-parallel chunks."

    def __init__(self, material, axis, nthreads, **kwargs):
        self.material = material
        self.x = self.material.x
        self.axis = axis
        # trailing full slices so that ``[..., slice(start, stop), *axes]``
        # addresses the chunked axis from the end (assumes a negative axis)
        self._axes = [slice(None)] * abs(self.axis + 1)
        self.nthreads = nthreads
        # pre-allocated output arrays, shared by all threads
        self._P = None
        self._A = None
        for key, value in kwargs.items():
            setattr(self, key, value)

    def _kernel_grad(self, F, chunk):
        # evaluate one chunk and write it into a view of the output array
        self.material.gradient([F, None], out=self._P[..., slice(*chunk), *self._axes])

    def _kernel_hess(self, F, chunk):
        self.material.hessian([F, None], out=self._A[..., slice(*chunk), *self._axes])

    def chunks(self, F):
        # split the deformation gradient along the given axis and build
        # [start, stop) index-pairs for each chunk
        defgrads = np.array_split(F, self.nthreads, axis=self.axis)
        chunksizes = np.cumsum([chunk.shape[self.axis] for chunk in defgrads])
        chunks = np.vstack([np.append(0, chunksizes[:-1]), chunksizes]).T
        return defgrads, chunksizes, chunks

    def evaluate(self, target, defgrads, chunks):
        # one thread per chunk; each thread writes into its own slice of
        # the shared output array
        threads = [
            Thread(target=target, args=(F, chunk)) for F, chunk in zip(defgrads, chunks)
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    def gradient(self, x):
        F, statevars = x
        # note the return order of chunks(): the [start, stop) pairs are the
        # third item, not the second
        defgrads, chunksizes, chunks = self.chunks(F)

        if self._P is None:
            self._P = np.zeros_like(F)

        self.evaluate(self._kernel_grad, defgrads, chunks)
        return [self._P, None]

    def hessian(self, x):
        F, statevars = x
        defgrads, chunksizes, chunks = self.chunks(F)

        if self._A is None:
            self._A = np.zeros((*F.shape[:2], *F.shape))

        self.evaluate(self._kernel_hess, defgrads, chunks)
        return [self._A]


# threaded wrapper vs. plain reference material
umat_threaded = Threaded(fem.NeoHooke(mu=1, bulk=2), axis=-1, nthreads=16)
solid_threaded = fem.SolidBody(umat_threaded, field)

umat = fem.NeoHooke(mu=1, bulk=2)
solid = fem.SolidBody(umat, field)

gives

>>> %timeit -r1 -n1 solid.assemble.matrix(field)
12.6 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

>>> %timeit -r1 -n1 solid_threaded.assemble.matrix(field)
10 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
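
The speedup is modest, which is plausible since NumPy releases the GIL only inside its vectorized kernels and the evaluation is largely memory-bound, so the threads mostly contend for the same memory bandwidth. For completeness, the wrapped material behaves like any other umat in a full solve; a sketch using felupe's Step/Job API, assuming the "move" boundary created by fem.dof.uniaxial (the ramp values are illustrative):

# drive the clamped uniaxial load case with the threaded material
move = fem.math.linsteps([0, 0.5], num=5)
step = fem.Step(
    items=[solid_threaded], ramp={boundaries["move"]: move}, boundaries=boundaries
)
fem.Job(steps=[step]).evaluate()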

adtzlr added the performance (runtimes related stuff) label and removed the enhancement label on Mar 5, 2024.
adtzlr (Owner) commented on Mar 8, 2024

Nice idea, but it seems the effort is not worth it: the threaded assembly is only about 20 % faster (10 s vs. 12.6 s).

adtzlr closed this as completed on Mar 8, 2024.