
Increasing memory consumption during optimization when using ROL #77

Closed
GregorWautischer opened this issue Mar 3, 2022 · 1 comment

GregorWautischer commented Mar 3, 2022

Hi,

I have recently started using the ROL optimization package. When using Limited-Memory BFGS as the optimizer, I noticed that memory consumption increases steadily during the optimization. This happens even though the "Maximum Secant Storage" parameter is set. For larger problems this leads to an out-of-memory error. I tracked the memory usage of the example below over 500 iterations, using both scipy L-BFGS-B and ROL as optimizers. The results are plotted here:

[Plot: memory usage over 500 iterations for scipy L-BFGS-B vs. ROL]

As can be seen, the memory usage increases stepwise for ROL, while it pretty much saturates for scipy.
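For reference, per-iteration memory growth like this can be measured from Python itself with the standard-library tracemalloc module. The sketch below is illustrative only: `leaky_step` is a hypothetical stand-in for one optimizer iteration (not scipy or ROL code), chosen to mimic the unbounded growth observed with ROL.

```python
import tracemalloc

def run_with_memory_trace(step, n_iters):
    """Run `step` n_iters times, sampling the traced Python-heap size
    after each iteration. `step` stands in for one optimizer iteration."""
    tracemalloc.start()
    samples = []
    for _ in range(n_iters):
        step()
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
    tracemalloc.stop()
    return samples

# Toy "iteration" that leaks by appending to a module-level list,
# mimicking the stepwise growth seen in the ROL run.
leak = []
def leaky_step():
    leak.append([0.0] * 1000)

samples = run_with_memory_trace(leaky_step, 20)
print(samples[-1] > samples[0])  # → True: traced memory grows every iteration
```

Note that tracemalloc only sees allocations made through the Python allocator; memory held by C++ objects (such as ROL's internal vectors) would need an OS-level tool such as `resource.getrusage` or an external profiler.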

Any idea why this is happening?

Best regards,

Gregor

Example used:

import os
os.environ["OMP_NUM_THREADS"] = "1"
import time
from firedrake import *
from firedrake_adjoint import *

family = "CG"
degree = 1
mesh = Mesh("AirboxMesh.msh")

FS = FunctionSpace(mesh, family, degree)
VFS = VectorFunctionSpace(mesh, family, degree)

m = interpolate(Constant((0., 0., 0.)), VFS)

u = TrialFunction(FS)
v = TestFunction(FS)

a = inner(grad(u), grad(v)) * dx
L = inner(m, grad(v)) * dx(subdomain_id = 2)

bc = DirichletBC(FS, Constant(0), "on_boundary")

u = Function(FS)
solver = LinearVariationalSolver(LinearVariationalProblem(a, L, u, bc))

print("solve")
stt = time.time()
solver.solve()
print("solving took ", time.time()-stt, " seconds")

print("Define J")
htarget = interpolate(Constant((1.,0.,0.)), VFS)
J = assemble(inner(grad(u)-htarget, grad(u)-htarget)*dx(subdomain_id = 1))
mc = Control(m)
Jhat = ReducedFunctional(J, mc)

print("start Optimization")
stt = time.time()
m_opt = minimize(Jhat, options = {"maxiter": 500, "disp": True, "ftol": 1e-20, "gtol": 1e-20})
print("optimization took ", time.time()-stt, " seconds")

'''
params_dict = {
    'General': {
        'Print Verbosity': 10,
        'Secant': {
            'Type': 'Limited-Memory BFGS',
            'Maximum Secant Storage': 10
        }
    },
    'Step': {
        'Type': 'Line Search',
        'Line Search': {
            'Descent Method': {
                'Type': 'Quasi-Newton Method'
            },
            'Curvature Condition': {
                'Type': 'Wolfe Conditions'
            },
            'Line-Search Method': {
                'Type': 'Backtracking',
            }
        }
    },
    'Status Test': {
        'Gradient Tolerance': 1e-20,
        'Step Tolerance': 1e-20,
        'Relative Step Tolerance': 1e-20,
        'Iteration Limit': 500
    }
}

# `problem` and `inner_product` are assumed to be defined elsewhere,
# e.g. problem = MinimizationProblem(Jhat)
solver = ROLSolver(problem, params_dict, inner_product=inner_product)
solver.checkGradient()
print("start Optimization")
stt = time.time()
m_opt = solver.solve()
print("optimization took ", time.time()-stt, " seconds")
'''
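With 'Maximum Secant Storage' set to 10, one would expect the secant pairs to live in a bounded buffer, so memory should plateau after the first 10 iterations. A minimal sketch of how such a bounded store behaves (illustrative only, not ROL's actual implementation), using a deque with `maxlen`:

```python
from collections import deque

class BoundedSecantStorage:
    """Keeps only the most recent `max_storage` (s, y) secant pairs,
    as a limited-memory BFGS implementation is expected to do."""
    def __init__(self, max_storage=10):
        self.pairs = deque(maxlen=max_storage)

    def update(self, s, y):
        # Once maxlen is reached, the oldest pair is discarded
        # automatically, so memory stays bounded.
        self.pairs.append((s, y))

store = BoundedSecantStorage(max_storage=10)
for k in range(500):
    s = [float(k)]           # placeholder step vector s_k = x_{k+1} - x_k
    y = [float(k) + 1.0]     # placeholder gradient difference y_k
    store.update(s, y)

print(len(store.pairs))  # → 10, regardless of the iteration count
```

If memory nevertheless grows with the iteration count, either the old pairs are not being released or (as it turned out here) something outside the secant storage is accumulating state.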

AirboxMesh.zip

GregorWautischer (Author) commented:

I was able to track the problem down to firedrake_adjoint. I opened a new issue here for whoever is interested. I am closing this issue.
