Driver has access to metadata of distributed I/O #1631

Closed · 1 of 2 tasks · JustinSGray opened this issue Aug 18, 2020 · 0 comments · Fixed by #1659

Comments

@JustinSGray (Member)

Summary of Issue

Add support for distributed design variables (dvs) in the Group get_design_vars method.

Issue Type

  • Bug
  • Enhancement

Description

The current behavior reports the size of the DV on the root proc to ALL procs. This is correct for non-distributed dvs, but incorrect for distributed ones.

For distributed dvs, each proc should report its local dv size.
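For context, a minimal sketch (plain mpi4py, not OpenMDAO internals; the helper name is hypothetical) of how a driver-side consumer would use the local sizes. Summing the per-rank local sizes gives the global design-vector length, which only comes out right if each proc reports its own local size:

from mpi4py import MPI

def global_dv_size(local_size, comm=MPI.COMM_WORLD):
    # Sum the per-rank local sizes to get the length of the full
    # distributed design vector.
    return comm.allreduce(local_size, op=MPI.SUM)

# With local sizes 3 (rank 0) and 2 (rank 1) the total should be 5; if every
# rank instead reports the root proc's size of 3, a 2-proc run totals 6.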

Example

from pprint import pprint
import numpy as np
import openmdao.api as om
from mpi4py import MPI

class DistribParaboloid(om.ExplicitComponent):

    def setup(self):
        self.options['distributed'] = True

        if self.comm.rank == 0:
            ndvs = 3
        else:
            ndvs = 2

        self.add_input('w', val=1.) # this will connect to a non-distributed IVC
        self.add_input('x', shape=ndvs) # this will connect to a distributed IVC

        self.add_output('y', shape=1) # all-gathered output, duplicated on all procs
        self.add_output('z', shape=ndvs) # distributed output
        self.declare_partials('y', 'x')
        self.declare_partials('y', 'w')
        self.declare_partials('z', 'x')

    def compute(self, inputs, outputs):
        x = inputs['x']
        local_y = np.sum((x-5)**2)
        y_g = np.zeros(self.comm.size)
        self.comm.Allgather(local_y, y_g)
        outputs['y'] = np.sum(y_g) + (inputs['w']-10)**2
        outputs['z'] = x**2

    def compute_partials(self, inputs, J):
        x = inputs['x']
        J['y', 'x'] = 2*(x-5)
        J['y', 'w'] = 2*(inputs['w']-10)
        J['z', 'x'] = np.diag(2*x)

if __name__ == "__main__":
    comm = MPI.COMM_WORLD

    p = om.Problem()
    d_ivc = p.model.add_subsystem('distrib_ivc',
                                   om.IndepVarComp(distributed=True),
                                   promotes=['*'])
    if comm.rank == 0:
        ndvs = 3
    else:
        ndvs = 2
    d_ivc.add_output('x', 2*np.ones(ndvs))

    ivc = p.model.add_subsystem('ivc',
                                om.IndepVarComp(distributed=False),
                                promotes=['*'])
    ivc.add_output('w', 2.0)
    p.model.add_subsystem('dp', DistribParaboloid(), promotes=['*'])

    p.model.add_design_var('x', lower=-100, upper=100)
    p.model.add_objective('y')
    p.setup()
    p.run_model()
    # p.model.list_outputs(print_arrays=True)
    # p.check_totals(of=['y'], wrt=['x'])
    # J = p.compute_totals(of=['y'], wrt=['x'])
    # pprint(J)

    # Check the local size of the design variables on each proc
    dvs = p.model.get_design_vars()
    for name, meta in dvs.items():
        model_size = dvs[name]['size']
        # print("Local model dv size = {0}".format(model_size))
        
        assert model_size == ndvs
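To reproduce, the script above would be run on two processes (e.g. mpiexec -n 2 python distrib_dv_example.py, file name assumed). With the current behavior the assertion should trip on the non-root proc, since the reported size (3, taken from the root proc) does not match that proc's local ndvs of 2.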
        
@DKilkenny self-assigned this Aug 25, 2020