
Refactor/ support both Pydantic 1 and 2 #1135

Merged · 52 commits · Dec 7, 2023
Changes from 5 commits
fd6ae59
kinda large initial commit
jlnav Oct 16, 2023
b13c693
avoid recursive-validation issue by validators setting attributes by …
jlnav Oct 17, 2023
471ba01
fix no-attribute by .__dict__.get()
jlnav Oct 17, 2023
8155551
ConfigDict for Platform model, plus apparently new pydantic warns abo…
jlnav Oct 17, 2023
665c88f
skip more steps in deps cache-hit on CI
jlnav Oct 17, 2023
8601273
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 6, 2023
e8f20db
tentative change to support both pydantic 1 and pydantic 2
jlnav Nov 6, 2023
e86fc54
attempts to set specs and platforms modules based on pydantic version
jlnav Nov 6, 2023
3058092
Created submodules that import/set versions of specs, platforms based…
jlnav Nov 6, 2023
01f124b
typo
jlnav Nov 7, 2023
584e16e
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 7, 2023
9833c9d
better variables for determining pydantic version, and specs_dump fun…
jlnav Nov 7, 2023
d47ab2d
starting universal field and model validators
jlnav Nov 7, 2023
a51d742
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 20, 2023
b68d013
huge refactoring. only a single specs.py for now. validators.py conta…
jlnav Nov 20, 2023
526b508
refactoring docs, turning both platforms.py files back into a single …
jlnav Nov 21, 2023
06f4889
universal platforms.py
jlnav Nov 21, 2023
0b10b91
desperately trying to figure out how to do cross-version aliases
jlnav Nov 21, 2023
8aa1890
with Pydantic2, we need to use merge_field_infos and model_rebuild to…
jlnav Nov 27, 2023
fe381d5
fix test_models
jlnav Nov 27, 2023
73564ac
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 27, 2023
c915bb1
using create_model to create new model definitions with validators; s…
jlnav Nov 27, 2023
113dc1e
tiny fix
jlnav Nov 27, 2023
c563e6d
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 27, 2023
c9b0e86
workflow_dir checking bugfix for pydantic2 validator version
jlnav Nov 27, 2023
245a919
may have just validated a validator
jlnav Nov 27, 2023
fcbce07
Merge branch 'develop' into refactor/pydantic2
jlnav Nov 27, 2023
d9914f5
dunno why we always need to do this?
jlnav Nov 27, 2023
39f6af2
what in tarnation, are we missing a dependency for compilation somehow?
jlnav Nov 28, 2023
0c3e541
unpin autodoc pydantic
jlnav Nov 28, 2023
112253e
fix pydantic2 validator bug with enabling final_save when save_every_…
jlnav Nov 28, 2023
10b1378
adjust required pydantic version, add jobs for testing old-pydantic
jlnav Nov 28, 2023
04072e5
presumably fix matrix?
jlnav Nov 28, 2023
1f6e3de
missing matrices
jlnav Nov 28, 2023
e2a9bd8
more matrix fixes
jlnav Nov 28, 2023
c44ecd1
use cross-version wrapper function instead of model_dump in unit test…
jlnav Nov 28, 2023
e9f75be
cross-version test_models
jlnav Nov 28, 2023
9876a28
fix
jlnav Nov 28, 2023
e5fae6e
adjusts
jlnav Nov 28, 2023
bcc3340
fix
jlnav Nov 28, 2023
6c3345e
really dont know why the number of errors is variable, but at least i…
jlnav Nov 28, 2023
0ea1735
i just want this to work already...
jlnav Nov 28, 2023
468845f
change these unsets back to del, adjust cache
jlnav Nov 29, 2023
d2f63a3
refactoring
jlnav Nov 29, 2023
54ac7cc
moving some logic to specs_checkers
jlnav Nov 29, 2023
601fb8a
ignoring a flakey warning from Pydantic, which will presumably be fix…
jlnav Nov 30, 2023
76d0e92
specs_checker_getattr can now return a default value
jlnav Nov 30, 2023
ec4b706
coverage
jlnav Dec 1, 2023
4d4d349
Merge branch 'develop' into refactor/pydantic2
jlnav Dec 4, 2023
180ab0c
Merge branch 'develop' into refactor/pydantic2
jlnav Dec 4, 2023
bfcd48d
adjust new unit test for pydantic 2
jlnav Dec 4, 2023
38e715d
adjust dependency listings for autodoc_pydantic and pydantic 1.10 low…
jlnav Dec 6, 2023
4 changes: 2 additions & 2 deletions .github/workflows/basic.yml
@@ -69,7 +69,7 @@ jobs:
pip install -I --upgrade certifi

- name: Install Ubuntu compilers
-        if: matrix.os == 'ubuntu-latest'
+        if: matrix.os == 'ubuntu-latest' && steps.cache.outputs.cache-hit != 'true'
run: conda install gcc_linux-64

# Roundabout solution on macos for proper linking with mpicc
@@ -86,7 +86,7 @@ jobs:
pip install -r install/misc_feature_requirements.txt

- name: Install mpi4py and MPI from conda
-        if: (matrix.python-version != '3.10' && matrix.os == 'ubuntu-latest') || matrix.os == 'macos-latest'
+        if: (matrix.python-version != '3.10' && matrix.os == 'ubuntu-latest') || matrix.os == 'macos-latest' && steps.cache.outputs.cache-hit != 'true'
run: |
conda install mpi4py ${{ matrix.mpi-version }}

17 changes: 3 additions & 14 deletions libensemble/ensemble.py
@@ -322,7 +322,7 @@ def libE_specs(self, new_specs):

# Cast new libE_specs temporarily to dict
if not isinstance(new_specs, dict):
-            new_specs = new_specs.dict(by_alias=True, exclude_none=True, exclude_unset=True)
+            new_specs = new_specs.model_dump(by_alias=True, exclude_none=True, exclude_unset=True)

# Unset "comms" if we already have a libE_specs that contains that field, that came from parse_args
if new_specs.get("comms") and hasattr(self._libE_specs, "comms") and self.parsed:
@@ -459,14 +459,7 @@ def _parse_spec(self, loaded_spec):

if len(userf_fields):
for f in userf_fields:
-                if f == "inputs":
-                    loaded_spec["in"] = field_f[f](loaded_spec[f])
-                    loaded_spec.pop("inputs")
-                elif f == "outputs":
-                    loaded_spec["out"] = field_f[f](loaded_spec[f])
-                    loaded_spec.pop("outputs")
-                else:
-                    loaded_spec[f] = field_f[f](loaded_spec[f])
+                loaded_spec[f] = field_f[f](loaded_spec[f])

return loaded_spec

@@ -482,13 +475,9 @@ def _parameterize(self, loaded):
old_spec.pop("inputs") # avoid clashes
elif old_spec.get("out") and old_spec.get("outputs"):
old_spec.pop("inputs") # avoid clashes
-            elif isinstance(old_spec, ClassType):
-                old_spec.__dict__.update(**loaded_spec)
-                old_spec = old_spec.dict(by_alias=True)
-                setattr(self, f, ClassType(**old_spec))
-            else:  # None. attribute not set yet
-                setattr(self, f, ClassType(**loaded_spec))
-                return
+            setattr(self, f, ClassType(**old_spec))

def from_yaml(self, file_path: str):
"""Parameterizes libEnsemble from ``yaml`` file"""
10 changes: 5 additions & 5 deletions libensemble/libE.py
@@ -229,11 +229,11 @@ def libE(
)

# get corresponding dictionaries back (casted in libE() def)
-    sim_specs = ensemble.sim_specs.dict(by_alias=True)
-    gen_specs = ensemble.gen_specs.dict(by_alias=True)
-    exit_criteria = ensemble.exit_criteria.dict(by_alias=True, exclude_none=True)
-    alloc_specs = ensemble.alloc_specs.dict(by_alias=True)
-    libE_specs = ensemble.libE_specs.dict(by_alias=True)
+    sim_specs = ensemble.sim_specs.model_dump(by_alias=True)
+    gen_specs = ensemble.gen_specs.model_dump(by_alias=True)
+    exit_criteria = ensemble.exit_criteria.model_dump(by_alias=True, exclude_none=True)
+    alloc_specs = ensemble.alloc_specs.model_dump(by_alias=True)
+    libE_specs = ensemble.libE_specs.model_dump(by_alias=True)

# Extract platform info from settings or environment
platform_info = get_platform(libE_specs)
49 changes: 26 additions & 23 deletions libensemble/resources/platforms.py
@@ -12,9 +12,7 @@
import subprocess
from typing import Optional

-from pydantic import BaseConfig, BaseModel, root_validator, validator
-
-BaseConfig.validate_assignment = True
+from pydantic import BaseModel, ConfigDict, field_validator, model_validator


class PlatformException(Exception):
@@ -28,25 +26,25 @@ class Platform(BaseModel):
All are optional, and any not defined will be determined by libEnsemble's auto-detection.
"""

-    mpi_runner: Optional[str]
+    mpi_runner: Optional[str] = None
"""MPI runner: One of ``"mpich"``, ``"openmpi"``, ``"aprun"``,
``"srun"``, ``"jsrun"``, ``"msmpi"``, ``"custom"`` """

-    runner_name: Optional[str]
+    runner_name: Optional[str] = None
"""Literal string of MPI runner command. Only needed if different to the default

Note that ``"mpich"`` and ``"openmpi"`` runners have the default command ``"mpirun"``
"""
-    cores_per_node: Optional[int]
+    cores_per_node: Optional[int] = None
"""Number of physical CPU cores on a compute node of the platform"""

-    logical_cores_per_node: Optional[int]
+    logical_cores_per_node: Optional[int] = None
"""Number of logical CPU cores on a compute node of the platform"""

-    gpus_per_node: Optional[int]
+    gpus_per_node: Optional[int] = None
"""Number of GPU devices on a compute node of the platform"""

-    gpu_setting_type: Optional[str]
+    gpu_setting_type: Optional[str] = None
""" How GPUs will be assigned.

Must take one of the following string options.
@@ -82,14 +80,14 @@

"""

-    gpu_setting_name: Optional[str]
+    gpu_setting_name: Optional[str] = None
"""Name of GPU setting

See :attr:`gpu_setting_type` for more details.

"""

-    gpu_env_fallback: Optional[str]
+    gpu_env_fallback: Optional[str] = None
"""GPU fallback environment setting if not using an MPI runner.

For example:
@@ -106,7 +104,7 @@

"""

-    scheduler_match_slots: Optional[bool]
+    scheduler_match_slots: Optional[bool] = None
"""
Whether the libEnsemble resource scheduler should only assign matching slots when
there are multiple (partial) nodes assigned to a sim function.
@@ -120,8 +118,12 @@
application-level scheduler to manage GPUs, then ``match_slots`` can be **False**
(allowing for more efficient scheduling when MPI runs cross nodes).
"""
+    model_config = ConfigDict(
+        arbitrary_types_allowed=True, populate_by_name=True, extra="forbid", validate_assignment=True
+    )

-    @validator("gpu_setting_type")
+    @field_validator("gpu_setting_type")
+    @classmethod
def check_gpu_setting_type(cls, value):
if value is not None:
assert value in [
@@ -132,19 +134,20 @@ def check_gpu_setting_type(cls, value):
], "Invalid label for GPU specification type"
return value

-    @validator("mpi_runner")
+    @field_validator("mpi_runner")
+    @classmethod
def check_mpi_runner_type(cls, value):
if value is not None:
assert value in ["mpich", "openmpi", "aprun", "srun", "jsrun", "msmpi", "custom"], "Invalid MPI runner name"
return value

-    @root_validator
-    def check_logical_cores(cls, values):
-        if values.get("cores_per_node") and values.get("logical_cores_per_node"):
+    @model_validator(mode="after")
+    def check_logical_cores(self):
+        if self.__dict__.get("cores_per_node") and self.__dict__.get("logical_cores_per_node"):
             assert (
-                values["logical_cores_per_node"] % values["cores_per_node"] == 0
+                self.logical_cores_per_node % self.cores_per_node == 0
             ), "Logical cores doesn't divide evenly into cores"
-        return values
+        return self
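The validator changes above are the trickiest part of dual-version support: Pydantic 1 uses `@validator`/`@root_validator`, Pydantic 2 uses `@field_validator`/`@model_validator`, and the commit log mentions "starting universal field and model validators". One way to pick the right decorators at runtime is a small dispatch helper. This is a sketch, not the PR's implementation; it assumes only that `pydantic.VERSION` exists (it does in both 1.x and 2.x) and falls back to no-op decorators so the sketch runs even without pydantic installed:

```python
def get_validator_decorators():
    """Return (field_validator, model_validator)-style decorators matching the
    installed pydantic major version; no-op decorators if pydantic is absent."""
    try:
        import pydantic
    except ImportError:
        noop = lambda *args, **kwargs: (lambda fn: fn)
        return noop, noop
    if int(pydantic.VERSION.partition(".")[0]) >= 2:
        return pydantic.field_validator, pydantic.model_validator
    # pydantic 1.x: root_validator plays the role of model_validator
    return pydantic.validator, pydantic.root_validator
```

Note that a decorator shim alone is not sufficient: the decorated functions' signatures also differ between majors (v1 root validators receive a `values` dict, v2 `mode="after"` validators receive the model instance), which is why the v2 form of `check_logical_cores` above reads attributes from `self` rather than `values`.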


# On SLURM systems, let srun assign free GPUs on the node
Expand Down Expand Up @@ -298,9 +301,9 @@ def known_envs():
platform_info = {}
if os.environ.get("NERSC_HOST") == "perlmutter":
if os.environ.get("SLURM_JOB_PARTITION").startswith("gpu_"):
-            platform_info = PerlmutterGPU().dict(by_alias=True)
+            platform_info = PerlmutterGPU().model_dump(by_alias=True)
         else:
-            platform_info = PerlmutterCPU().dict(by_alias=True)
+            platform_info = PerlmutterCPU().model_dump(by_alias=True)
return platform_info


@@ -314,7 +317,7 @@ def known_system_detect(cmd="hostname -d"):
platform_info = {}
try:
domain_name = subprocess.check_output(run_cmd).decode().rstrip()
-        platform_info = detect_systems[domain_name]().dict(by_alias=True)
+        platform_info = detect_systems[domain_name]().model_dump(by_alias=True)
except Exception:
platform_info = known_envs()
return platform_info
@@ -333,7 +336,7 @@ def get_platform(libE_specs):
name = libE_specs.get("platform") or os.environ.get("LIBE_PLATFORM")
if name:
try:
-            known_platforms = Known_platforms().dict()
+            known_platforms = Known_platforms().model_dump()
platform_info = known_platforms[name]
except KeyError:
raise PlatformException(f"Error. Unknown platform requested {name}")
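The final hunk resolves a platform name to a known-platform spec and raises `PlatformException` on an unknown name. The lookup shape can be sketched stdlib-only; the dataclass, its fields, and the registry contents below are illustrative stand-ins, not libEnsemble's actual models:

```python
from dataclasses import dataclass, asdict


class PlatformException(Exception):
    """Raised for unknown platform names (mirrors the PR's exception type)."""


@dataclass
class _PlatformSpec:
    # Illustrative fields echoing the Platform model shown above
    mpi_runner: str
    gpus_per_node: int


# Hypothetical registry standing in for Known_platforms().model_dump()
_KNOWN = {"example_gpu_system": _PlatformSpec(mpi_runner="srun", gpus_per_node=4)}


def get_platform_info(name):
    """Look up a named platform spec, raising on unknown names."""
    try:
        return asdict(_KNOWN[name])
    except KeyError:
        raise PlatformException(f"Error. Unknown platform requested {name}")
```

The real code builds the registry by dumping a model of known platforms, so the same dict-lookup-plus-`KeyError` pattern applies.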