Refactor/ support both Pydantic 1 and 2 (#1135)
* kinda large initial commit

* avoid recursive-validation issue by validators setting attributes by dict. lock to >= current pydantic

* fix no-attribute by .__dict__.get()

* ConfigDict for Platform model, plus apparently new pydantic warns about issues with "del"ing attributes

* skip more steps in deps cache-hit on CI

* tentative change to support both pydantic 1 and pydantic 2

* attempts to set specs and platforms modules based on pydantic version

* Created submodules that import/set versions of specs, platforms based on detected pydantic version. Other model ops are performed based on pydantic version

* typo

* better variables for determining pydantic version, and specs_dump function for dumping specs based on pydantic version
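
  The version-detection and version-aware dump ideas above can be sketched roughly as follows — an illustrative sketch only, assuming just that pydantic 2 models expose `model_dump()` and pydantic 1 models expose `dict()` (the real helper lives in `libensemble/utils/misc.py`):

  ```python
  # Rough sketch of the cross-version helpers described in this commit.
  # Names are illustrative, not the project's exact implementations.
  import importlib


  def pydantic_major_version() -> int:
      """Return the installed pydantic major version, or 0 if not installed."""
      try:
          version = importlib.import_module("pydantic.version").VERSION
      except ModuleNotFoundError:
          return 0
      return int(version.split(".")[0])


  def specs_dump(specs, **kwargs):
      """Dump a specs model to a dict under either pydantic version."""
      if hasattr(specs, "model_dump"):  # pydantic 2
          return specs.model_dump(**kwargs)
      return specs.dict(**kwargs)  # pydantic 1
  ```

  Duck-typing on `model_dump` keeps the wrapper independent of the import-time version check, so the same call site works regardless of which pydantic is installed.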

* starting universal field and model validators

* huge refactoring. only a single specs.py for now. validators.py contains validators for both pydantic versions. pydantic_bindings.py binds validators and other configs to their corresponding classes.
only one specs_checkers, utility functions to get/set attributes of pydantic objects
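
  The shared-validator idea can be sketched like this: validation logic is written as plain functions, so the same code can later be bound to models under either pydantic version. A minimal sketch mirroring one of the checks in `platforms.py` (names are illustrative, not the exact contents of `validators.py`):

  ```python
  # Version-agnostic validation function; binding it to a model happens
  # elsewhere, per detected pydantic version.
  _VALID_GPU_SETTING_TYPES = {
      "runner_default",
      "env",
      "option_gpus_per_node",
      "option_gpus_per_task",
  }


  def check_gpu_setting_type(value):
      if value is not None:
          assert value in _VALID_GPU_SETTING_TYPES, "Invalid label for GPU specification type"
      return value
  ```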

* refactoring docs, turning both platforms.py files back into a single file

* universal platforms.py

* desperately trying to figure out how to do cross-version aliases

* with Pydantic2, we need to use merge_field_infos and model_rebuild to retain field defaults and aliases
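
  A hedged sketch of what that might look like under Pydantic 2 — assuming `FieldInfo.merge_field_infos` and `model_rebuild` behave as documented; the model and field here are hypothetical, not the project's actual code:

  ```python
  # Illustrative only: merge new field metadata into an existing model's
  # FieldInfo, then rebuild the model so defaults and aliases are retained.
  # Skips silently if pydantic 2 is not installed.
  try:
      from pydantic import BaseModel, Field
      from pydantic.fields import FieldInfo

      HAVE_PYDANTIC2 = hasattr(FieldInfo, "merge_field_infos")
  except ImportError:
      HAVE_PYDANTIC2 = False

  if HAVE_PYDANTIC2:

      class Spec(BaseModel):
          inputs: list = Field(default=[], alias="in")

      # Later FieldInfos win for attributes they explicitly set; unset ones
      # (like the alias) carry over from the original definition.
      merged = FieldInfo.merge_field_infos(
          Spec.model_fields["inputs"], FieldInfo(default=["x"])
      )
      Spec.model_fields["inputs"] = merged
      Spec.model_rebuild(force=True)
  ```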

* fix test_models

* using create_model to create new model definitions with validators; setting validator attributes directly didn't work since they didn't get bound to fields. various other cross-version fixes and improvements
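
  The `create_model` approach can be sketched like this — a sketch under the assumption that pydantic is installed; the field and validator below are illustrative, not libEnsemble's actual bindings:

  ```python
  # Rebuild a model via create_model so the validator binds to its field
  # (attaching validators as plain attributes afterwards does not bind them).
  # Skips silently if pydantic is not installed.
  try:
      from pydantic import create_model

      try:  # pydantic 2
          from pydantic import field_validator

          def bind(field_name, func):
              return field_validator(field_name)(func)

      except ImportError:  # pydantic 1
          from pydantic import validator

          def bind(field_name, func):
              return validator(field_name, allow_reuse=True)(func)

  except ModuleNotFoundError:
      create_model = None


  def check_mpi_runner(cls, value):
      if value is not None:
          assert value in {"mpich", "openmpi", "aprun", "srun", "jsrun", "msmpi", "custom"}
      return value


  if create_model is not None:
      Platform2 = create_model(
          "Platform2",
          mpi_runner=(str, "mpich"),
          __validators__={"check_mpi_runner": bind("mpi_runner", check_mpi_runner)},
      )
  ```

  The nested try/except picks the right decorator for the installed version, while the validator function itself stays shared.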

* tiny fix

* workflow_dir checking bugfix for pydantic2 validator version

* may have just validated a validator

* dunno why we always need to do this?

* what in tarnation, are we missing a dependency for compilation somehow?

* unpin autodoc pydantic

* fix pydantic2 validator bug with enabling final_save when save_every_k is enabled

* adjust required pydantic version, add jobs for testing old-pydantic

* presumably fix matrix?

* missing matrices

* more matrix fixes

* use cross-version wrapper function instead of model_dump in unit test setup

* cross-version test_models

* fix

* adjusts

* fix

* really don't know why the number of errors is variable, but at least it's erroring and displaying multiple?

* i just want this to work already...

* change these unsets back to del, adjust cache

* refactoring

* moving some logic to specs_checkers

* ignoring a flaky warning from Pydantic, which will presumably be fixed soon? debugging strings

* specs_checker_getattr can now return a default value
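
  The getter/setter helpers can be sketched as follows — illustrative only; per the messages above, the real versions live in `specs_checkers` and read attributes through the instance `__dict__` to avoid re-triggering validation recursively:

  ```python
  # Read/write pydantic model attributes via the instance __dict__, which
  # works under both pydantic versions and sidesteps validate-on-assignment.
  def specs_checker_getattr(obj, key, default=None):
      return obj.__dict__.get(key, default)


  def specs_checker_setattr(obj, key, value):
      obj.__dict__[key] = value
  ```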

* coverage

* adjust new unit test for pydantic 2

* adjust dependency listings for autodoc_pydantic and pydantic 1.10 lower bound
jlnav committed Dec 7, 2023
1 parent 637adbd commit 3e03f8d
Showing 18 changed files with 581 additions and 320 deletions.
22 changes: 18 additions & 4 deletions .github/workflows/basic.yml
@@ -18,15 +18,28 @@ jobs:
os: [ubuntu-latest]
mpi-version: [mpich]
python-version: [3.9, "3.10", "3.11", "3.12"]
pydantic-version: ["2.5.2"]
comms-type: [m, l]
include:
- os: macos-latest
python-version: 3.11
python-version: "3.11"
mpi-version: "mpich=4.0.3"
pydantic-version: "2.5.2"
comms-type: m
- os: macos-latest
python-version: 3.11
python-version: "3.11"
mpi-version: "mpich=4.0.3"
pydantic-version: "2.5.2"
comms-type: l
- os: ubuntu-latest
mpi-version: mpich
python-version: "3.10"
pydantic-version: "1.10.13"
comms-type: m
- os: ubuntu-latest
mpi-version: mpich
python-version: "3.10"
pydantic-version: "1.10.13"
comms-type: l

env:
@@ -61,7 +74,7 @@ jobs:
/usr/share/miniconda3/bin
/usr/share/miniconda3/lib
/usr/share/miniconda3/include
key: libe-${{ github.ref_name }}-${{ matrix.python-version }}-${{ matrix.comms-type }}-basic
key: libe-${{ github.ref_name }}-${{ matrix.python-version }}-${{ matrix.comms-type }}-${{ matrix.pydantic-version }}-basic

- name: Force-update certifi
run: |
@@ -116,8 +129,9 @@ jobs:
/usr/share/miniconda3/include
key: libe-${{ github.ref_name }}-${{ matrix.python-version }}-${{ matrix.comms-type }}

- name: Install libEnsemble, flake8, lock environment
- name: Install libEnsemble, flake8
run: |
pip install pydantic==${{ matrix.pydantic-version }}
pip install -e .
flake8 libensemble
2 changes: 1 addition & 1 deletion docs/requirements.txt
@@ -1,6 +1,6 @@
sphinx<8
sphinxcontrib-bibtex
autodoc_pydantic<2
autodoc_pydantic
sphinx-design
numpy
sphinx_rtd_theme>1
20 changes: 5 additions & 15 deletions libensemble/ensemble.py
@@ -14,6 +14,7 @@
from libensemble.tools import add_unique_random_streams
from libensemble.tools import parse_args as parse_args_f
from libensemble.tools import save_libE_output
from libensemble.utils.misc import specs_dump

ATTR_ERR_MSG = 'Unable to load "{}". Is the function or submodule correctly named?'
ATTR_ERR_MSG = "\n" + 10 * "*" + ATTR_ERR_MSG + 10 * "*" + "\n"
@@ -325,8 +326,8 @@ def libE_specs(self, new_specs):
return

# Cast new libE_specs temporarily to dict
if isinstance(new_specs, LibeSpecs):
new_specs = new_specs.dict(by_alias=True, exclude_none=True, exclude_unset=True)
if not isinstance(new_specs, dict):
new_specs = specs_dump(new_specs, by_alias=True, exclude_none=True, exclude_unset=True)

# Unset "comms" if we already have a libE_specs that contains that field, that came from parse_args
if new_specs.get("comms") and hasattr(self._libE_specs, "comms") and self.parsed:
@@ -464,14 +465,7 @@ def _parse_spec(self, loaded_spec):

if len(userf_fields):
for f in userf_fields:
if f == "inputs":
loaded_spec["in"] = field_f[f](loaded_spec[f])
loaded_spec.pop("inputs")
elif f == "outputs":
loaded_spec["out"] = field_f[f](loaded_spec[f])
loaded_spec.pop("outputs")
else:
loaded_spec[f] = field_f[f](loaded_spec[f])
loaded_spec[f] = field_f[f](loaded_spec[f])

return loaded_spec

@@ -487,13 +481,9 @@ def _parameterize(self, loaded):
old_spec.pop("inputs") # avoid clashes
elif old_spec.get("out") and old_spec.get("outputs"):
old_spec.pop("inputs") # avoid clashes
elif isinstance(old_spec, ClassType):
old_spec.__dict__.update(**loaded_spec)
old_spec = old_spec.dict(by_alias=True)
setattr(self, f, ClassType(**old_spec))
else: # None. attribute not set yet
setattr(self, f, ClassType(**loaded_spec))
return
setattr(self, f, ClassType(**old_spec))

def from_yaml(self, file_path: str):
"""Parameterizes libEnsemble from ``yaml`` file"""
16 changes: 8 additions & 8 deletions libensemble/libE.py
@@ -118,7 +118,6 @@
from typing import Callable, Dict

import numpy as np
from pydantic import validate_arguments

from libensemble.comms.comms import QCommProcess, QCommThread, Timeout
from libensemble.comms.logs import manager_logging_config
@@ -133,6 +132,8 @@
from libensemble.tools.alloc_support import AllocSupport
from libensemble.tools.tools import _USER_SIM_ID_WARNING
from libensemble.utils import launcher
from libensemble.utils.misc import specs_dump
from libensemble.utils.pydantic_bindings import libE_wrapper
from libensemble.utils.timer import Timer
from libensemble.version import __version__
from libensemble.worker import worker_main
@@ -142,7 +143,7 @@
# logger.setLevel(logging.DEBUG)


@validate_arguments
@libE_wrapper
def libE(
sim_specs: SimSpecs,
gen_specs: GenSpecs,
@@ -230,12 +231,11 @@ def libE(
exit_criteria=exit_criteria,
)

# get corresponding dictionaries back (casted in libE() def)
sim_specs = ensemble.sim_specs.dict(by_alias=True)
gen_specs = ensemble.gen_specs.dict(by_alias=True)
exit_criteria = ensemble.exit_criteria.dict(by_alias=True, exclude_none=True)
alloc_specs = ensemble.alloc_specs.dict(by_alias=True)
libE_specs = ensemble.libE_specs.dict(by_alias=True)
(sim_specs, gen_specs, alloc_specs, libE_specs) = [
specs_dump(spec, by_alias=True)
for spec in [ensemble.sim_specs, ensemble.gen_specs, ensemble.alloc_specs, ensemble.libE_specs]
]
exit_criteria = specs_dump(ensemble.exit_criteria, by_alias=True, exclude_none=True)

# Extract platform info from settings or environment
platform_info = get_platform(libE_specs)
55 changes: 15 additions & 40 deletions libensemble/resources/platforms.py
@@ -12,9 +12,9 @@
import subprocess
from typing import Optional

from pydantic import BaseConfig, BaseModel, root_validator, validator
from pydantic import BaseModel

BaseConfig.validate_assignment = True
from libensemble.utils.misc import specs_dump


class PlatformException(Exception):
@@ -28,25 +28,25 @@ class Platform(BaseModel):
All are optional, and any not defined will be determined by libEnsemble's auto-detection.
"""

mpi_runner: Optional[str]
mpi_runner: Optional[str] = None
"""MPI runner: One of ``"mpich"``, ``"openmpi"``, ``"aprun"``,
``"srun"``, ``"jsrun"``, ``"msmpi"``, ``"custom"`` """

runner_name: Optional[str]
runner_name: Optional[str] = None
"""Literal string of MPI runner command. Only needed if different to the default
Note that ``"mpich"`` and ``"openmpi"`` runners have the default command ``"mpirun"``
"""
cores_per_node: Optional[int]
cores_per_node: Optional[int] = None
"""Number of physical CPU cores on a compute node of the platform"""

logical_cores_per_node: Optional[int]
logical_cores_per_node: Optional[int] = None
"""Number of logical CPU cores on a compute node of the platform"""

gpus_per_node: Optional[int]
gpus_per_node: Optional[int] = None
"""Number of GPU devices on a compute node of the platform"""

gpu_setting_type: Optional[str]
gpu_setting_type: Optional[str] = None
""" How GPUs will be assigned.
Must take one of the following string options.
@@ -82,14 +82,14 @@ class Platform(BaseModel):
"""

gpu_setting_name: Optional[str]
gpu_setting_name: Optional[str] = None
"""Name of GPU setting
See :attr:`gpu_setting_type` for more details.
"""

gpu_env_fallback: Optional[str]
gpu_env_fallback: Optional[str] = None
"""GPU fallback environment setting if not using an MPI runner.
For example:
Expand All @@ -106,7 +106,7 @@ class Platform(BaseModel):
"""

scheduler_match_slots: Optional[bool]
scheduler_match_slots: Optional[bool] = True
"""
Whether the libEnsemble resource scheduler should only assign matching slots when
there are multiple (partial) nodes assigned to a sim function.
@@ -121,31 +121,6 @@
(allowing for more efficient scheduling when MPI runs cross nodes).
"""

@validator("gpu_setting_type")
def check_gpu_setting_type(cls, value):
if value is not None:
assert value in [
"runner_default",
"env",
"option_gpus_per_node",
"option_gpus_per_task",
], "Invalid label for GPU specification type"
return value

@validator("mpi_runner")
def check_mpi_runner_type(cls, value):
if value is not None:
assert value in ["mpich", "openmpi", "aprun", "srun", "jsrun", "msmpi", "custom"], "Invalid MPI runner name"
return value

@root_validator
def check_logical_cores(cls, values):
if values.get("cores_per_node") and values.get("logical_cores_per_node"):
assert (
values["logical_cores_per_node"] % values["cores_per_node"] == 0
), "Logical cores doesn't divide evenly into cores"
return values


# On SLURM systems, let srun assign free GPUs on the node
class Crusher(Platform):
@@ -298,9 +273,9 @@ def known_envs():
platform_info = {}
if os.environ.get("NERSC_HOST") == "perlmutter":
if os.environ.get("SLURM_JOB_PARTITION").startswith("gpu_"):
platform_info = PerlmutterGPU().dict(by_alias=True)
platform_info = specs_dump(PerlmutterGPU(), by_alias=True)
else:
platform_info = PerlmutterCPU().dict(by_alias=True)
platform_info = specs_dump(PerlmutterCPU(), by_alias=True)
return platform_info


@@ -314,7 +289,7 @@ def known_system_detect(cmd="hostname -d"):
platform_info = {}
try:
domain_name = subprocess.check_output(run_cmd).decode().rstrip()
platform_info = detect_systems[domain_name]().dict(by_alias=True)
platform_info = specs_dump(detect_systems[domain_name](), by_alias=True)
except Exception:
platform_info = known_envs()
return platform_info
@@ -333,7 +308,7 @@ def get_platform(libE_specs):
name = libE_specs.get("platform") or os.environ.get("LIBE_PLATFORM")
if name:
try:
known_platforms = Known_platforms().dict()
known_platforms = specs_dump(Known_platforms(), exclude_none=True)
platform_info = known_platforms[name]
except KeyError:
raise PlatformException(f"Error. Unknown platform requested {name}")
