Refactor/ support both Pydantic 1 and 2 #1135

Merged
merged 52 commits into develop from refactor/pydantic2 on Dec 7, 2023
Commits (52)
fd6ae59
kinda large initial commit
jlnav Oct 16, 2023
b13c693
avoid recursive-validation issue by validators setting attributes by …
jlnav Oct 17, 2023
471ba01
fix no-attribute by .__dict__.get()
jlnav Oct 17, 2023
8155551
ConfigDict for Platform model, plus apparently new pydantic warns abo…
jlnav Oct 17, 2023
665c88f
skip more steps in deps cache-hit on CI
jlnav Oct 17, 2023
8601273
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 6, 2023
e8f20db
tentative change to support both pydantic 1 and pydantic 2
jlnav Nov 6, 2023
e86fc54
attempts to set specs and platforms modules based on pydantic version
jlnav Nov 6, 2023
3058092
Created submodules that import/set versions of specs, platforms based…
jlnav Nov 6, 2023
01f124b
typo
jlnav Nov 7, 2023
584e16e
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 7, 2023
9833c9d
better variables for determining pydantic version, and specs_dump fun…
jlnav Nov 7, 2023
d47ab2d
starting universal field and model validators
jlnav Nov 7, 2023
a51d742
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 20, 2023
b68d013
huge refactoring. only a single specs.py for now. validators.py conta…
jlnav Nov 20, 2023
526b508
refactoring docs, turning both platforms.py files back into a single …
jlnav Nov 21, 2023
06f4889
universal platforms.py
jlnav Nov 21, 2023
0b10b91
desperately trying to figure out how to do cross-version aliases
jlnav Nov 21, 2023
8aa1890
with Pydantic2, we need to use merge_field_infos and model_rebuild to…
jlnav Nov 27, 2023
fe381d5
fix test_models
jlnav Nov 27, 2023
73564ac
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 27, 2023
c915bb1
using create_model to create new model definitions with validators; s…
jlnav Nov 27, 2023
113dc1e
tiny fix
jlnav Nov 27, 2023
c563e6d
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 27, 2023
c9b0e86
workflow_dir checking bugfix for pydantic2 validator version
jlnav Nov 27, 2023
245a919
may have just validated a validator
jlnav Nov 27, 2023
fcbce07
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 27, 2023
d9914f5
dunno why we always need to do this?
jlnav Nov 27, 2023
39f6af2
what in tarnation, are we missing a dependency for compilation somehow?
jlnav Nov 28, 2023
0c3e541
unpin autodoc pydantic
jlnav Nov 28, 2023
112253e
fix pydantic2 validator bug with enabling final_save when save_every_…
jlnav Nov 28, 2023
10b1378
adjust required pydantic version, add jobs for testing old-pydantic
jlnav Nov 28, 2023
04072e5
presumably fix matrix?
jlnav Nov 28, 2023
1f6e3de
missing matrices
jlnav Nov 28, 2023
e2a9bd8
more matrix fixes
jlnav Nov 28, 2023
c44ecd1
use cross-version wrapper function instead of model_dump in unit test…
jlnav Nov 28, 2023
e9f75be
cross-version test_models
jlnav Nov 28, 2023
9876a28
fix
jlnav Nov 28, 2023
e5fae6e
adjusts
jlnav Nov 28, 2023
bcc3340
fix
jlnav Nov 28, 2023
6c3345e
really dont know why the number of errors is variable, but at least i…
jlnav Nov 28, 2023
0ea1735
i just want this to work already...
jlnav Nov 28, 2023
468845f
change these unsets back to del, adjust cache
jlnav Nov 29, 2023
d2f63a3
refactoring
jlnav Nov 29, 2023
54ac7cc
moving some logic to specs_checkers
jlnav Nov 29, 2023
601fb8a
ignoring a flakey warning from Pydantic, which will presumably be fix…
jlnav Nov 30, 2023
76d0e92
specs_checker_getattr can now return a default value
jlnav Nov 30, 2023
ec4b706
coverage
jlnav Dec 1, 2023
4d4d349
Merge branch 'develop' into refactor/pydantic2
jlnav Dec 4, 2023
180ab0c
Merge branch 'develop' into refactor/pydantic2
jlnav Dec 4, 2023
bfcd48d
adjust new unit test for pydantic 2
jlnav Dec 4, 2023
38e715d
adjust dependency listings for autodoc_pydantic and pydantic 1.10 low…
jlnav Dec 6, 2023
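
Note: several of the commits above ("tentative change to support both pydantic 1 and pydantic 2", "better variables for determining pydantic version") hinge on detecting the installed Pydantic major version and branching on it. A minimal sketch of such a flag, assuming it is derived from Pydantic's own version string (the helper module itself is not part of the diffs below, so the name is illustrative only):

# Hypothetical sketch of a version flag; the real helper used by this PR is not shown below.
from pydantic.version import VERSION as PYDANTIC_VERSION

# True for Pydantic 2.x, False for the 1.10.x series.
PYDANTIC_V2 = int(PYDANTIC_VERSION.partition(".")[0]) >= 2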
22 changes: 18 additions & 4 deletions .github/workflows/basic.yml
@@ -18,15 +18,28 @@ jobs:
os: [ubuntu-latest]
mpi-version: [mpich]
python-version: [3.9, "3.10", "3.11", "3.12"]
pydantic-version: ["2.5.2"]
comms-type: [m, l]
include:
- os: macos-latest
python-version: 3.11
python-version: "3.11"
mpi-version: "mpich=4.0.3"
pydantic-version: "2.5.2"
comms-type: m
- os: macos-latest
python-version: 3.11
python-version: "3.11"
mpi-version: "mpich=4.0.3"
pydantic-version: "2.5.2"
comms-type: l
- os: ubuntu-latest
mpi-version: mpich
python-version: "3.10"
pydantic-version: "1.10.13"
comms-type: m
- os: ubuntu-latest
mpi-version: mpich
python-version: "3.10"
pydantic-version: "1.10.13"
comms-type: l

env:
@@ -61,7 +74,7 @@ jobs:
/usr/share/miniconda3/bin
/usr/share/miniconda3/lib
/usr/share/miniconda3/include
key: libe-${{ github.ref_name }}-${{ matrix.python-version }}-${{ matrix.comms-type }}-basic
key: libe-${{ github.ref_name }}-${{ matrix.python-version }}-${{ matrix.comms-type }}-${{ matrix.pydantic-version }}-basic

- name: Force-update certifi
run: |
@@ -116,8 +129,9 @@ jobs:
/usr/share/miniconda3/include
key: libe-${{ github.ref_name }}-${{ matrix.python-version }}-${{ matrix.comms-type }}

- name: Install libEnsemble, flake8, lock environment
- name: Install libEnsemble, flake8
run: |
pip install pydantic==${{ matrix.pydantic-version }}
pip install -e .
flake8 libensemble

2 changes: 1 addition & 1 deletion docs/requirements.txt
@@ -1,6 +1,6 @@
sphinx<8
sphinxcontrib-bibtex
autodoc_pydantic<2
autodoc_pydantic
sphinx-design
numpy
sphinx_rtd_theme>1
20 changes: 5 additions & 15 deletions libensemble/ensemble.py
@@ -14,6 +14,7 @@
from libensemble.tools import add_unique_random_streams
from libensemble.tools import parse_args as parse_args_f
from libensemble.tools import save_libE_output
from libensemble.utils.misc import specs_dump

ATTR_ERR_MSG = 'Unable to load "{}". Is the function or submodule correctly named?'
ATTR_ERR_MSG = "\n" + 10 * "*" + ATTR_ERR_MSG + 10 * "*" + "\n"
@@ -325,8 +326,8 @@ def libE_specs(self, new_specs):
return

# Cast new libE_specs temporarily to dict
if isinstance(new_specs, LibeSpecs):
new_specs = new_specs.dict(by_alias=True, exclude_none=True, exclude_unset=True)
if not isinstance(new_specs, dict):
new_specs = specs_dump(new_specs, by_alias=True, exclude_none=True, exclude_unset=True)

# Unset "comms" if we already have a libE_specs that contains that field, that came from parse_args
if new_specs.get("comms") and hasattr(self._libE_specs, "comms") and self.parsed:
@@ -464,14 +465,7 @@ def _parse_spec(self, loaded_spec):

if len(userf_fields):
for f in userf_fields:
if f == "inputs":
loaded_spec["in"] = field_f[f](loaded_spec[f])
loaded_spec.pop("inputs")
elif f == "outputs":
loaded_spec["out"] = field_f[f](loaded_spec[f])
loaded_spec.pop("outputs")
else:
loaded_spec[f] = field_f[f](loaded_spec[f])
loaded_spec[f] = field_f[f](loaded_spec[f])

return loaded_spec

@@ -487,13 +481,9 @@ def _parameterize(self, loaded):
old_spec.pop("inputs") # avoid clashes
elif old_spec.get("out") and old_spec.get("outputs"):
old_spec.pop("inputs") # avoid clashes
elif isinstance(old_spec, ClassType):
old_spec.__dict__.update(**loaded_spec)
old_spec = old_spec.dict(by_alias=True)
setattr(self, f, ClassType(**old_spec))
else: # None. attribute not set yet
setattr(self, f, ClassType(**loaded_spec))
return
setattr(self, f, ClassType(**old_spec))

def from_yaml(self, file_path: str):
"""Parameterizes libEnsemble from ``yaml`` file"""
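Note: the specs_dump imported above comes from libensemble.utils.misc, which is not included in the visible diffs. A minimal sketch of such a cross-version dump helper, assuming it only needs to dispatch between the Pydantic 1 and 2 serialization APIs:

# Hypothetical sketch; the real libensemble.utils.misc.specs_dump is not shown in this PR's visible diffs.
from pydantic import BaseModel
from pydantic.version import VERSION as PYDANTIC_VERSION

PYDANTIC_V2 = int(PYDANTIC_VERSION.partition(".")[0]) >= 2


def specs_dump(specs: BaseModel, **kwargs) -> dict:
    """Serialize a specs model to a dict under either Pydantic generation."""
    if PYDANTIC_V2:
        return specs.model_dump(**kwargs)  # Pydantic 2 API
    return specs.dict(**kwargs)  # Pydantic 1 API

The keyword arguments used in the diffs (by_alias, exclude_none, exclude_unset) are accepted by both model_dump() and dict(), so a thin pass-through like this suffices.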
16 changes: 8 additions & 8 deletions libensemble/libE.py
@@ -118,7 +118,6 @@
from typing import Callable, Dict

import numpy as np
from pydantic import validate_arguments

from libensemble.comms.comms import QCommProcess, QCommThread, Timeout
from libensemble.comms.logs import manager_logging_config
@@ -133,6 +132,8 @@
from libensemble.tools.alloc_support import AllocSupport
from libensemble.tools.tools import _USER_SIM_ID_WARNING
from libensemble.utils import launcher
from libensemble.utils.misc import specs_dump
from libensemble.utils.pydantic_bindings import libE_wrapper
from libensemble.utils.timer import Timer
from libensemble.version import __version__
from libensemble.worker import worker_main
@@ -142,7 +143,7 @@
# logger.setLevel(logging.DEBUG)


@validate_arguments
@libE_wrapper
def libE(
sim_specs: SimSpecs,
gen_specs: GenSpecs,
@@ -230,12 +231,11 @@ def libE(
exit_criteria=exit_criteria,
)

# get corresponding dictionaries back (casted in libE() def)
sim_specs = ensemble.sim_specs.dict(by_alias=True)
gen_specs = ensemble.gen_specs.dict(by_alias=True)
exit_criteria = ensemble.exit_criteria.dict(by_alias=True, exclude_none=True)
alloc_specs = ensemble.alloc_specs.dict(by_alias=True)
libE_specs = ensemble.libE_specs.dict(by_alias=True)
(sim_specs, gen_specs, alloc_specs, libE_specs) = [
specs_dump(spec, by_alias=True)
for spec in [ensemble.sim_specs, ensemble.gen_specs, ensemble.alloc_specs, ensemble.libE_specs]
]
exit_criteria = specs_dump(ensemble.exit_criteria, by_alias=True, exclude_none=True)

# Extract platform info from settings or environment
platform_info = get_platform(libE_specs)
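Note: libE() now uses @libE_wrapper from libensemble.utils.pydantic_bindings in place of Pydantic 1's @validate_arguments; that bindings module is not part of the visible diffs. One way such a cross-version decorator could be assembled, assuming it simply defers to whichever call validator the installed Pydantic provides (validate_call in 2.x, validate_arguments in 1.x):

# Hypothetical sketch; the real libensemble.utils.pydantic_bindings.libE_wrapper is not shown here.
from pydantic.version import VERSION as PYDANTIC_VERSION

PYDANTIC_V2 = int(PYDANTIC_VERSION.partition(".")[0]) >= 2

if PYDANTIC_V2:
    from pydantic import validate_call as _validate_call
else:
    from pydantic import validate_arguments as _validate_call


def libE_wrapper(func):
    """Validate libE() arguments with whichever decorator the installed Pydantic provides."""
    return _validate_call(func)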
55 changes: 15 additions & 40 deletions libensemble/resources/platforms.py
@@ -12,9 +12,9 @@
import subprocess
from typing import Optional

from pydantic import BaseConfig, BaseModel, root_validator, validator
from pydantic import BaseModel

BaseConfig.validate_assignment = True
from libensemble.utils.misc import specs_dump


class PlatformException(Exception):
@@ -28,25 +28,25 @@
All are optional, and any not defined will be determined by libEnsemble's auto-detection.
"""

mpi_runner: Optional[str]
mpi_runner: Optional[str] = None
"""MPI runner: One of ``"mpich"``, ``"openmpi"``, ``"aprun"``,
``"srun"``, ``"jsrun"``, ``"msmpi"``, ``"custom"`` """

runner_name: Optional[str]
runner_name: Optional[str] = None
"""Literal string of MPI runner command. Only needed if different to the default

Note that ``"mpich"`` and ``"openmpi"`` runners have the default command ``"mpirun"``
"""
cores_per_node: Optional[int]
cores_per_node: Optional[int] = None
"""Number of physical CPU cores on a compute node of the platform"""

logical_cores_per_node: Optional[int]
logical_cores_per_node: Optional[int] = None
"""Number of logical CPU cores on a compute node of the platform"""

gpus_per_node: Optional[int]
gpus_per_node: Optional[int] = None
"""Number of GPU devices on a compute node of the platform"""

gpu_setting_type: Optional[str]
gpu_setting_type: Optional[str] = None
""" How GPUs will be assigned.

Must take one of the following string options.
@@ -82,14 +82,14 @@

"""

gpu_setting_name: Optional[str]
gpu_setting_name: Optional[str] = None
"""Name of GPU setting

See :attr:`gpu_setting_type` for more details.

"""

gpu_env_fallback: Optional[str]
gpu_env_fallback: Optional[str] = None
"""GPU fallback environment setting if not using an MPI runner.

For example:
@@ -106,7 +106,7 @@

"""

scheduler_match_slots: Optional[bool]
scheduler_match_slots: Optional[bool] = True
"""
Whether the libEnsemble resource scheduler should only assign matching slots when
there are multiple (partial) nodes assigned to a sim function.
@@ -121,31 +121,6 @@
(allowing for more efficient scheduling when MPI runs cross nodes).
"""

@validator("gpu_setting_type")
def check_gpu_setting_type(cls, value):
if value is not None:
assert value in [
"runner_default",
"env",
"option_gpus_per_node",
"option_gpus_per_task",
], "Invalid label for GPU specification type"
return value

@validator("mpi_runner")
def check_mpi_runner_type(cls, value):
if value is not None:
assert value in ["mpich", "openmpi", "aprun", "srun", "jsrun", "msmpi", "custom"], "Invalid MPI runner name"
return value

@root_validator
def check_logical_cores(cls, values):
if values.get("cores_per_node") and values.get("logical_cores_per_node"):
assert (
values["logical_cores_per_node"] % values["cores_per_node"] == 0
), "Logical cores doesn't divide evenly into cores"
return values


# On SLURM systems, let srun assign free GPUs on the node
class Crusher(Platform):
@@ -298,9 +273,9 @@
platform_info = {}
if os.environ.get("NERSC_HOST") == "perlmutter":
if os.environ.get("SLURM_JOB_PARTITION").startswith("gpu_"):
platform_info = PerlmutterGPU().dict(by_alias=True)
platform_info = specs_dump(PerlmutterGPU(), by_alias=True)

Codecov warning: added line libensemble/resources/platforms.py#L276 was not covered by tests
else:
platform_info = PerlmutterCPU().dict(by_alias=True)
platform_info = specs_dump(PerlmutterCPU(), by_alias=True)

Codecov warning: added line libensemble/resources/platforms.py#L278 was not covered by tests
return platform_info


@@ -314,7 +289,7 @@
platform_info = {}
try:
domain_name = subprocess.check_output(run_cmd).decode().rstrip()
platform_info = detect_systems[domain_name]().dict(by_alias=True)
platform_info = specs_dump(detect_systems[domain_name](), by_alias=True)
except Exception:
platform_info = known_envs()
return platform_info
@@ -333,7 +308,7 @@
name = libE_specs.get("platform") or os.environ.get("LIBE_PLATFORM")
if name:
try:
known_platforms = Known_platforms().dict()
known_platforms = specs_dump(Known_platforms(), exclude_none=True)
platform_info = known_platforms[name]
except KeyError:
raise PlatformException(f"Error. Unknown platform requested {name}")
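
Note: the -121,31 hunk above removes the Pydantic 1-only @validator/@root_validator methods from Platform; per the commit messages, their logic moved into shared cross-version validators, which are not part of the visible diffs. A minimal sketch, assuming the shared module simply aliases the per-version field-validator decorator:

# Hypothetical sketch of a cross-version field validator; the real shared validators
# module introduced by this PR is not shown in the visible diffs.
from typing import Optional

from pydantic import BaseModel
from pydantic.version import VERSION as PYDANTIC_VERSION

PYDANTIC_V2 = int(PYDANTIC_VERSION.partition(".")[0]) >= 2

if PYDANTIC_V2:
    from pydantic import field_validator as _field_validator
else:
    from pydantic import validator as _field_validator

_GPU_SETTING_TYPES = {"runner_default", "env", "option_gpus_per_node", "option_gpus_per_task"}


class PlatformSketch(BaseModel):
    """Cut-down stand-in for the real Platform model, for illustration only."""

    gpu_setting_type: Optional[str] = None

    @_field_validator("gpu_setting_type")
    def check_gpu_setting_type(cls, value):
        # Same check the removed Pydantic 1 validator performed.
        if value is not None:
            assert value in _GPU_SETTING_TYPES, "Invalid label for GPU specification type"
        return value

A model-level check such as the removed check_logical_cores root validator would follow the same pattern, pairing model_validator (2.x) with root_validator (1.x).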