Moving comments to use the word primary (#718)
john-science committed Jun 17, 2022
1 parent 37e4788 commit c7ff593
Showing 18 changed files with 63 additions and 63 deletions.
2 changes: 1 addition & 1 deletion armi/__init__.py
@@ -173,7 +173,7 @@ def init(choice=None, fName=None, cs=None):
try:
return armiCase.initializeOperator()
except: # Catch any and all errors. Naked exception on purpose.
- # Concatenate errors to the master log file.
+ # Concatenate errors to the primary log file.
runLog.close()
raise

2 changes: 1 addition & 1 deletion armi/bookkeeping/db/database3.py
@@ -293,7 +293,7 @@ def interactDistributeState(self) -> None:
"""
Reconnect to pre-existing database.
- DB is created and managed by the master node only but we can still connect to it
+ DB is created and managed by the primary node only but we can still connect to it
from workers to enable things like history tracking.
"""
if context.MPI_RANK > 0:
2 changes: 1 addition & 1 deletion armi/cases/case.py
@@ -372,7 +372,7 @@ def run(self):

with o:
if self.cs["trace"] and context.MPI_RANK == 0:
- # only trace master node.
+ # only trace primary node.
tracer = trace.Trace(ignoredirs=[sys.prefix, sys.exec_prefix], trace=1)
tracer.runctx("o.operate()", globals(), locals())
else:
2 changes: 1 addition & 1 deletion armi/cli/__init__.py
@@ -231,7 +231,7 @@ def executeCommand(self, command, args) -> Optional[int]:


def splash():
"""Emit a the active App's splash text to the runLog for the master node."""
"""Emit a the active App's splash text to the runLog for the primary node."""
from armi import getApp # pylint: disable=import-outside-toplevel

app = getApp()
2 changes: 1 addition & 1 deletion armi/context.py
@@ -99,7 +99,7 @@ def setMode(cls, mode):

MPI_COMM = None
# MPI_RANK represents the index of the CPU that is running.
- # 0 is typically the master CPU, while 1+ are typically workers.
+ # 0 is typically the primary CPU, while 1+ are typically workers.
# MPI_SIZE is the total number of CPUs
MPI_RANK = 0
MPI_SIZE = 1
4 changes: 2 additions & 2 deletions armi/interfaces.py
@@ -204,7 +204,7 @@ def attachReactor(self, o, r):
Notes
-----
- This runs on all worker nodes as well as the master.
+ This runs on all worker nodes as well as the primary.
"""
self.r = r
self.cs = o.cs
@@ -316,7 +316,7 @@ def interactError(self):
pass

def interactDistributeState(self):
"""Called after this interface is copied to a different (non-master) MPI node."""
"""Called after this interface is copied to a different (non-primary) MPI node."""
pass

def isRequestedDetailPoint(self, cycle=None, node=None):
52 changes: 25 additions & 27 deletions armi/mpiActions.py
@@ -16,8 +16,8 @@
This module provides an abstract class to be used to implement "MPI actions."
MPI actions are tasks, activities, or work that can be executed on the worker nodes. The standard
- workflow is essentially that the master node creates an :py:class:`~armi.mpiActions.MpiAction`,
- sends it to the workers, and then both the master and the workers
+ workflow is essentially that the primary node creates an :py:class:`~armi.mpiActions.MpiAction`,
+ sends it to the workers, and then both the primary and the workers
:py:meth:`invoke() <armi.mpiActions.MpiAction.invoke>` together. For example:
.. list-table:: Sample MPI Action Workflow
@@ -28,23 +28,23 @@
- Code
- Notes
* - 1
- - **master**: :py:class:`distributeState = DistributeStateAction() <armi.mpiActions.MpiAction>`
+ - **primary**: :py:class:`distributeState = DistributeStateAction() <armi.mpiActions.MpiAction>`
**worker**: :code:`action = context.MPI_COMM.bcast(None, root=0)`
- - **master**: Initializing a distribute state action.
+ - **primary**: Initializing a distribute state action.
- **worker**: Waiting for something to do, as determined by the master, this happens within the
+ **worker**: Waiting for something to do, as determined by the primary, this happens within the
worker's :py:meth:`~armi.operators.MpiOperator.workerOperate`.
* - 2
- - **master**: :code:`context.MPI_COMM.bcast(distributeState, root=0)`
+ - **primary**: :code:`context.MPI_COMM.bcast(distributeState, root=0)`
**worker**: :code:`action = context.MPI_COMM.bcast(None, root=0)`
- - **master**: Broadcasts a distribute state action to all the worker nodes
+ - **primary**: Broadcasts a distribute state action to all the worker nodes
- **worker**: Receives the action from the master, which is a
+ **worker**: Receives the action from the primary, which is a
:py:class:`~armi.mpiActions.DistributeStateAction`.
* - 3
- - **master**: :code:`distributeState.invoke(self.o, self.r, self.cs)`
+ - **primary**: :code:`distributeState.invoke(self.o, self.r, self.cs)`
**worker**: :code:`action.invoke(self.o, self.r, self.cs)`
- Both invoke the action, and are in sync. Any broadcast or receive within the action should
@@ -93,7 +93,7 @@ def parallel(self):

@classmethod
def invokeAsMaster(cls, o, r, cs):
"""Simplified method to call from the master process.
"""Simplified method to call from the primary process.
This can be used in place of:
@@ -103,8 +103,8 @@ def invokeAsMaster(cls, o, r, cs):
Interestingly, the code above can be used in two ways:
- 1. Both the master and worker can call the above code at the same time, or
- 2. the master can run the above code, which will be handled by the worker's main loop.
+ 1. Both the primary and worker can call the above code at the same time, or
+ 2. the primary can run the above code, which will be handled by the worker's main loop.
Option number 2 is the most common usage.
@@ -146,8 +146,8 @@ def _mpiOperationHelper(self, obj, mpiFunction):

def broadcast(self, obj=None):
"""
- A wrapper around ``bcast``, on the master node can be run with an equals sign, so that it
- can be consistent within both master and worker nodes.
+ A wrapper around ``bcast``, on the primary node can be run with an equals sign, so that it
+ can be consistent within both primary and worker nodes.
Parameters
----------
@@ -163,14 +163,14 @@ def broadcast(self, obj=None):
-----
The standard ``bcast`` method creates a new instance even for the root process. Consequently,
when passing an object, references can be broken to the original object. Therefore, this
- method, returns the original object when called by the master node, or the broadcasted
+ method, returns the original object when called by the primary node, or the broadcasted
object when called on the worker nodes.
"""
if self.serial:
return obj if obj is not None else self
if context.MPI_SIZE > 1:
result = self._mpiOperationHelper(obj, context.MPI_COMM.bcast)
- # the following if-branch prevents the creation of duplicate objects on the master node
+ # the following if-branch prevents the creation of duplicate objects on the primary node
# if the object is large with lots of links, it is prudent to call gc.collect()
if obj is None and context.MPI_RANK == 0:
return self
@@ -460,13 +460,12 @@ def invokeHook(self):
Notes
-----
- This is run by all workers and the master any time the code needs to sync all processors.
+ This is run by all workers and the primary any time the code needs to sync all processors.
"""

if context.MPI_SIZE <= 1:
runLog.extra("Not distributing state because there is only one processor")
return

# Detach phase:
# The Reactor and the interfaces have links to the Operator, which contains Un-MPI-able objects
# like the MPI Comm and the SQL database connections.
@@ -548,10 +547,10 @@ def _distributeReactor(self, cs):
raise RuntimeError("Failed to transmit reactor, received: {}".format(r))

if context.MPI_RANK == 0:
- # on the master node this unfortunately created a __deepcopy__ of the reactor, delete it
+ # on the primary node this unfortunately created a __deepcopy__ of the reactor, delete it
del r
else:
- # maintain original reactor object on master
+ # maintain original reactor object on primary
self.r = r
self.o.r = r

@@ -589,7 +588,7 @@ def _distributeInterfaces(self):
Interface copy description
Since interfaces store information that can influence a calculation, it is important
in branch searches to make sure that no information is carried forward from these
- runs on either the master node or the workers. However, there are interfaces that
+ runs on either the primary node or the workers. However, there are interfaces that
cannot be distributed, making this a challenge. To solve this problem, any interface
that cannot be distributed is simply re-initialized. If any information needs to be
given to the worker nodes on a non-distributable interface, additional function definitions
@@ -598,13 +597,12 @@ def _distributeInterfaces(self):
See Also
--------
- armi.interfaces.Interface.preDistributeState : runs on master before DS
- armi.interfaces.Interface.postDistributeState : runs on master after DS
+ armi.interfaces.Interface.preDistributeState : runs on primary before DS
+ armi.interfaces.Interface.postDistributeState : runs on primary after DS
armi.interfaces.Interface.interactDistributeState : runs on workers after DS
"""
if context.MPI_RANK == 0:
- # These run on the master node. (Worker nodes run sychronized code below)
+ # These run on the primary node. (Worker nodes run sychronized code below)
toRestore = {}
for i in self.o.getInterfaces():
if i.distributable() == interfaces.Interface.Distribute.DUPLICATE:
@@ -652,7 +650,7 @@ def _distributeInterfaces(self):
for i in self.o.getInterfaces():
runLog.warning(i)
raise RuntimeError(
"Non-distributable interface {0} exists on the master MPI process "
"Non-distributable interface {0} exists on the primary MPI process "
"but not on the workers. "
"Cannot distribute state.".format(iName)
)
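The module docstring above lays out the broadcast-and-invoke workflow between the primary node and the workers. The sketch below illustrates that pattern with plain mpi4py; the Action class, the run() driver, and the payload are illustrative stand-ins, not ARMI's MpiAction or Operator.

```python
# Sketch of the primary/worker broadcast-and-invoke pattern, assuming plain
# mpi4py. Run with something like: mpiexec -n 4 python sketch.py
from mpi4py import MPI

COMM = MPI.COMM_WORLD
RANK = COMM.Get_rank()


class Action:
    """A picklable unit of work that every rank executes together."""

    def invoke(self):
        # Collective calls inside invoke() must be made by every rank,
        # primary and workers alike, so they stay in sync.
        payload = COMM.bcast({"step": 1} if RANK == 0 else None, root=0)
        return payload["step"] * RANK


def run():
    if RANK == 0:
        # primary: build the action and broadcast it to the workers
        action = COMM.bcast(Action(), root=0)
    else:
        # worker: block until the primary sends something to do
        action = COMM.bcast(None, root=0)
    # both sides invoke the same action, in lock step
    return action.invoke()


if __name__ == "__main__":
    print("rank", RANK, "->", run())
```

As the broadcast() docstring notes, assigning the bcast result back on the primary keeps both sides holding a usable Action without special-casing the root rank.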
6 changes: 4 additions & 2 deletions armi/operators/operator.py
@@ -497,14 +497,16 @@ def _checkCsConsistency(self):
cs = settings.getMasterCs()
wrong = (self.cs is not cs) or any((i.cs is not cs) for i in self.interfaces)
if wrong:
msg = ["Master cs ID is {}".format(id(cs))]
msg = ["Primary cs ID is {}".format(id(cs))]
for i in self.interfaces:
msg.append("{:30s} has cs ID: {:12d}".format(str(i), id(i.cs)))
msg.append("{:30s} has cs ID: {:12d}".format(str(self), id(self.cs)))
raise RuntimeError("\n".join(msg))

runLog.debug(
"Reactors, operators, and interfaces all share master cs: {}".format(id(cs))
"Reactors, operators, and interfaces all share primary cs: {}".format(
id(cs)
)
)

def interactAllInit(self):
16 changes: 9 additions & 7 deletions armi/operators/operatorMPI.py
@@ -17,7 +17,7 @@
See :py:class:`~armi.operators.operator.Operator` for the parent class.
- This sets up the main Operator on the master MPI node and initializes worker
+ This sets up the main Operator on the primary MPI node and initializes worker
processes on all other MPI nodes. At certain points in the run, particular interfaces
might call into action all the workers. For example, a depletion or
subchannel T/H module may ask the MPI pool to perform a few hundred
@@ -56,7 +56,7 @@ def __init__(self, cs):
Operator.__init__(self, cs)
except:
# kill the workers too so everything dies.
runLog.important("Master node failed on init. Quitting.")
runLog.important("Primary node failed on init. Quitting.")
if context.MPI_COMM: # else it's a single cpu case.
context.MPI_COMM.bcast("quit", root=0)
raise
@@ -70,14 +70,16 @@ def operate(self):
"""
runLog.debug("OperatorMPI.operate")
if context.MPI_RANK == 0:
- # this is the master
+ # this is the primary
try:
# run the regular old operate function
Operator.operate(self)
runLog.important(time.ctime())
except Exception as ee:
runLog.error(
"Error in Master Node. Check STDERR for a traceback.\n{}".format(ee)
"Error in Primary Node. Check STDERR for a traceback.\n{}".format(
ee
)
)
raise
finally:
@@ -120,8 +122,8 @@ def workerOperate(self):
Notes
-----
This method is what worker nodes are in while they wait for instructions from
- the master node in a parallel run. The nodes will sit, waiting for a "worker
- command". When this comes (from a bcast from the master), a set of if statements
+ the primary node in a parallel run. The nodes will sit, waiting for a "worker
+ command". When this comes (from a bcast from the primary), a set of if statements
are evaluated, with specific behaviors defined for each command. If the operator
doesn't understand the command, it loops through the interface stack to see if
any of the interfaces understand it.
Expand All @@ -138,7 +140,7 @@ def workerOperate(self):
"""
while True:
- # sit around waiting for a command from the master
+ # sit around waiting for a command from the primary
runLog.extra("Node {0} ready and waiting".format(context.MPI_RANK))
cmd = context.MPI_COMM.bcast(None, root=0)
runLog.extra("worker received command {0}".format(cmd))
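The workerOperate docstring above describes how worker nodes sit in a loop waiting for commands broadcast by the primary. A minimal sketch of that loop with plain mpi4py follows; the command names and the interface hook are illustrative, not ARMI's actual protocol.

```python
# Sketch of a worker command loop, assuming plain mpi4py. The "quit"/"sync"
# commands and the workerOperate(cmd) interface hook are illustrative.
from mpi4py import MPI

COMM = MPI.COMM_WORLD


def worker_operate(interfaces):
    """Sit in a loop handling commands broadcast from the primary (rank 0)."""
    while True:
        # block here until the primary broadcasts the next command
        cmd = COMM.bcast(None, root=0)
        if cmd == "quit":
            break  # the primary is shutting the worker pool down
        elif cmd == "sync":
            COMM.Barrier()  # collective call; the primary makes the same call
        else:
            # unrecognized command: give each interface a chance to handle it
            for i in interfaces:
                if i.workerOperate(cmd):
                    break
```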
2 changes: 1 addition & 1 deletion armi/reactor/blueprints/reactorBlueprint.py
@@ -163,7 +163,7 @@ def construct(self, cs, bp, reactor, geom=None, loadAssems=True):
)
system.spatialLocator = spatialLocator
if context.MPI_RANK != 0:
- # on non-master nodes we don't bother building up the assemblies
+ # on non-primary nodes we don't bother building up the assemblies
# because they will be populated with DistributeState.
return None

7 changes: 3 additions & 4 deletions armi/reactor/reactors.py
@@ -1309,7 +1309,7 @@ def getLocationContents(self, locs, assemblyLevel=False, locContents=None):
assemblyLevel : bool, optional
If True, will find assemblies rather than blocks
locContents : dict, optional
- A master lookup table with location string keys and block/assembly values
+ A lookup table with location string keys and block/assembly values
useful if you want to call this function many times and would like a speedup.
Returns
@@ -1834,7 +1834,7 @@ def updateAxialMesh(self):
See Also
--------
- processLoading : sets up the master mesh that this perturbs.
+ processLoading : sets up the primary mesh that this perturbs.
"""
# most of the time, we want fuel, but they should mostly have the same number of blocks
# if this becomes a problem, we might find either the
@@ -2078,7 +2078,7 @@ def getAllNuclidesIn(self, mats):
Parameters
----------
mats : iterable or Material
- List (or single) of materials to scan the full core for, accumulating a master nuclide list
+ List (or single) of materials to scan the full core for, accumulating a nuclide list
Returns
-------
@@ -2095,7 +2095,6 @@ def getAllNuclidesIn(self, mats):
If you need to know the nuclides in a fuel pin, you can't just use the sample returned
from getDominantMaterial, because it may be a fresh fuel material (U and Zr) even though
there are burned materials elsewhere (with U, Zr, Pu, LFP, etc.).
"""
if not isinstance(mats, list):
# single material passed in
2 changes: 1 addition & 1 deletion armi/reactor/tests/test_reactors.py
@@ -61,7 +61,7 @@ def buildOperatorOfEmptyHexBlocks(customSettings=None):

customSettings["db"] = False # stop use of database
cs = cs.modified(newSettings=customSettings)
- settings.setMasterCs(cs) # reset so everything matches master
+ settings.setMasterCs(cs) # reset so everything matches the primary Cs

r = tests.getEmptyHexReactor()
r.core.setOptionsFromCs(cs)
6 changes: 3 additions & 3 deletions armi/settings/__init__.py
@@ -159,7 +159,7 @@ def getMasterCs():
"""
Return the global case-settings object (cs).
- This can be called at any time to create or obtain the master Cs, a module-level CS
+ This can be called at any time to create or obtain the primary Cs, a module-level CS
intended to be shared by many other objects.
It can have multiple instances in multiprocessing cases.
@@ -178,9 +178,9 @@ def getMasterCs():

def setMasterCs(cs):
"""
- Set the master Cs to be the one that is passed in.
+ Set the primary Cs to be the one that is passed in.
These are kept track of independently on a PID basis to allow independent multiprocessing.
"""
Settings.instance = cs
runLog.debug("Master cs set to {} with ID: {}".format(cs, id(cs)))
runLog.debug("Primary cs set to {} with ID: {}".format(cs, id(cs)))
2 changes: 1 addition & 1 deletion armi/settings/caseSettings.py
@@ -22,7 +22,7 @@
A settings object can be saved as or loaded from an YAML file. The ARMI GUI is designed to
create this settings file, which is then loaded by an ARMI process on the cluster.
- A master case settings is created as ``masterCs``
+ A primary case settings is created as ``masterCs``
"""
import io
4 changes: 2 additions & 2 deletions armi/settings/fwSettings/globalSettings.py
@@ -330,7 +330,7 @@ def defineSettings() -> List[setting.Setting]:
CONF_BRANCH_VERBOSITY,
default="error",
label="Worker Log Verbosity",
description="Verbosity of the non-master MPI nodes",
description="Verbosity of the non-primary MPI nodes",
options=[
"debug",
"extra",
@@ -664,7 +664,7 @@ def defineSettings() -> List[setting.Setting]:
setting.Setting(
CONF_VERBOSITY,
default="info",
label="Master Log Verbosity",
label="Primary Log Verbosity",
description="How verbose the output will be",
options=[
"debug",
