
yt SlicePlot inline analysis fails with Enzo embedded Python 2.7 when running with more than one MPI rank #2889

Open
cindytsai opened this issue Aug 28, 2020 · 2 comments
Labels
parallelism (MPI-based parallelism)

Comments

cindytsai commented Aug 28, 2020

Bug report

Bug summary
yt fails while running SlicePlot in Enzo inline analysis with embedded Python.
In yt/frontends/enzo/io.py, class IOHandlerInMemory cannot handle the case where a grid requested inside _read_fluid_selection does not exist in self.grids_in_memory on that particular MPI rank.
In contrast, both find_max and ProjectionPlot work correctly with more than one MPI rank.
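For illustration only, here is a minimal, self-contained sketch (with made-up data, not actual yt source) of why the lookup in _read_fluid_selection raises the KeyError shown in the traceback below: with more than one rank, each rank's grids_in_memory dictionary only holds the grids Enzo owns locally, while the slice selector can request grids owned by other ranks.

import numpy as np

# Suppose rank 0 only holds grids 1-3 in memory (hypothetical grid ids)...
grids_in_memory = {
    1: {"density": np.ones((4, 4, 4))},
    2: {"density": np.ones((4, 4, 4))},
    3: {"density": np.ones((4, 4, 4))},
}

# ...but the slice selector asks for grid 4, which lives on another rank.
requested_grid_id = 4

try:
    data = grids_in_memory[requested_grid_id]["density"]
except KeyError as err:
    # This is the "KeyError: 4" reported by rank 0 in the traceback below.
    print("KeyError:", err)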

Code for reproduction

  1. Get Enzo
  2. Configure the machine Makefile
    Fill in LOCAL_PYTHON_INSTALL, LOCAL_HDF5_INSTALL, LOCAL_INCLUDES_PYTHON, and LOCAL_LIBS_PYTHON, and make sure the last two variables also include the NumPy headers and libraries.
    • A working makefile on the machine I'm using:
      Make.mach.eureka-python2
    • Enable embedded Python in the build configuration (this must be switched on):
      make python-yes
      
    • Show the configuration of the current build system:
      make show-config
      
  3. Name the inline analysis script user_script.py:
    Here we perform a SlicePlot operation.
import yt
yt.enable_parallelism()

def main():
    ds = yt.frontends.enzo.EnzoDatasetInMemory()
    
    # Do slice plot, but won't work when Enzo Embedded Python MPI rank > 1
    sz = yt.SlicePlot(ds, 'z', 'density')

    # Output the result on root only
    if yt.is_root():
        sz.save()

Some useful background can be found at Embedded Python in Enzo. (Beware that this page is outdated and has not been updated for Enzo 2.6.)
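As a point of comparison (and since, as noted in the summary above, find_max and ProjectionPlot do work with more than one MPI rank), a user_script.py along the following lines can serve as a sanity check that the inline analysis setup itself is fine; only the SlicePlot path fails. This is a sketch based on the script above, not part of the original report.

import yt
yt.enable_parallelism()

def main():
    ds = yt.frontends.enzo.EnzoDatasetInMemory()

    # Both of these operations are reported to work with MPI rank > 1.
    value, location = ds.find_max('density')
    pz = yt.ProjectionPlot(ds, 'z', 'density')

    # Output the result on root only
    if yt.is_root():
        print("max density %s at %s" % (value, location))
        pz.save()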

Actual outcome

  File "<string>", line 1, in <module>
  File "./user_script.py", line 8, in main
    sz = yt.SlicePlot(ds, 'z', 'density')
  File "/work1/cindytsai/Software/SysPython27/lib/python2.7/site-packages/yt/visualization/plot_window.py", line 2039, in SlicePlot
    return AxisAlignedSlicePlot(ds, normal, fields, *args, **kwargs)
  File "/work1/cindytsai/Software/SysPython27/lib/python2.7/site-packages/yt/visualization/plot_window.py", line 1314, in __init__
    slc.get_data(fields)
  File "/work1/cindytsai/Software/SysPython27/lib/python2.7/site-packages/yt/data_objects/data_containers.py", line 1621, in get_data
    fluids, self, self._current_chunk)
  File "/work1/cindytsai/Software/SysPython27/lib/python2.7/site-packages/yt/geometry/geometry_handler.py", line 304, in _read_fluid_fields
    chunk_size)
  File "/work1/cindytsai/Software/SysPython27/lib/python2.7/site-packages/yt/frontends/enzo/io.py", line 279, in _read_fluid_selection
    data_view = self.grids_in_memory[g.id][fname][self.my_slice].swapaxes(0, 2)
P000 yt : [ERROR    ] 2020-08-27 20:32:24,411 KeyError: 4
P000 yt : [ERROR    ] 2020-08-27 20:32:24,411 Error occured on rank 0.

Expected outcome
A slice plot image is produced successfully while Enzo is running.

Version Information

  • Operating System: CentOS 7
  • Python Version: Python 2.7
  • yt version: yt-3.6.0
  • Enzo version: 2.6
welcome bot commented Aug 28, 2020

Hi, and welcome to yt! Thanks for opening your first issue. We have an issue template that helps us gather the relevant information for diagnosing and fixing the issue.

matthewturk (Member) commented

Hi @cindytsai , this is indeed a change in behavior.

What used to happen was that we would chunk up the slice object and then replicate that slice object across processors. With the transition to yt-3.0 this changed, and we no longer parallelized the slicing operation.

What might work best moving forward is to have the parallel operation happen inside the generation of the fixed resolution buffer. This would be fairly straightforward to implement: you could add a parallel reduction operation in the coordinate handler following the call to the pixelization operation.
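To sketch that idea (a rough illustration only, not an actual yt patch): each rank would pixelize the slice data it holds locally, with grids it does not have contributing zeros, and a reduction right after the pixelization call would sum the per-rank partial buffers into the complete fixed resolution buffer. The example below uses mpi4py directly for clarity; inside yt this would presumably go through the existing parallel analysis interface instead, and the function and buffer names here are hypothetical.

import numpy as np
from mpi4py import MPI

def reduce_pixelized_buffer(local_buff):
    """Sum per-rank partial pixelization buffers in place across all ranks."""
    comm = MPI.COMM_WORLD
    comm.Allreduce(MPI.IN_PLACE, local_buff, op=MPI.SUM)
    return local_buff

if __name__ == "__main__":
    # Stand-in for the output of the pixelization call on this rank:
    # zeros everywhere except the pixels covered by locally held grids.
    local_buff = np.zeros((800, 800), dtype="float64")
    local_buff[MPI.COMM_WORLD.rank, :] = 1.0  # fake local contribution
    full_buff = reduce_pixelized_buffer(local_buff)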

neutrinoceros added the parallelism (MPI-based parallelism) label on Oct 20, 2021