➖ Drop support for python 3.7 (#391)
Galileo-Galilei committed Jul 23, 2023
1 parent d0a80d3 commit f19f750
Showing 14 changed files with 41 additions and 63 deletions.
10 changes: 5 additions & 5 deletions .github/workflows/test.yml
Original file line number Diff line number Diff line change
@@ -14,7 +14,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.7, 3.8]
python-version: ["3.8", "3.9", "3.10"]
os: [ubuntu-latest, macos-latest, windows-latest]
env:
OS: ${{ matrix.os }}
@@ -30,15 +30,15 @@ jobs:
python -m pip install --upgrade pip
pip install .[test,extras]
- name: Check code formatting with black
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.7' # linting should occur only once in the loop
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.10' # linting should occur only once in the loop
run: |
black . --check
- name: Check import order with isort
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.7' # linting should occur only once in the loop
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.10' # linting should occur only once in the loop
run: |
isort . --check-only
- name: Lint with flake8
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.7' # linting should occur only once in the loop
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.10' # linting should occur only once in the loop
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics --exclude kedro_mlflow/template/project/run.py
@@ -49,7 +49,7 @@ jobs:
pytest --cov=./ --cov-report=xml
- name: Upload coverage report to Codecov
uses: codecov/codecov-action@v1
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.7' # upload should occur only once in the loop
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.10' # upload should occur only once in the loop
with:
token: ${{ secrets.CODECOV_TOKEN }} # token is not mandatory but makes access more stable
file: ./coverage.xml
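The updated matrix runs the test suite over the cross product of Python versions and operating systems, while the lint and coverage-upload steps are gated to a single cell (`ubuntu-latest` + `3.10`). A quick sketch of the resulting job fan-out, using the version and OS lists from the workflow above:

```python
from itertools import product

# Matrix values from .github/workflows/test.yml above
python_versions = ["3.8", "3.9", "3.10"]
oses = ["ubuntu-latest", "macos-latest", "windows-latest"]

jobs = list(product(oses, python_versions))
print(len(jobs))  # 9 jobs: 3 runners x 3 interpreters

# Linting and coverage upload run in exactly one cell of the matrix
lint_cells = [(os_, py) for os_, py in jobs
              if os_ == "ubuntu-latest" and py == "3.10"]
print(len(lint_cells))  # 1
```

Gating these steps to one cell avoids redundant lint runs and duplicate Codecov uploads across the other eight jobs.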
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,10 @@


- :bug: Make ``kedro-mlflow`` hook log parameters when the project is configured with the ``OmegaConfigLoader`` instead of raising an error ([#430](https://github.com/Galileo-Galilei/kedro-mlflow/issues/430))
### Removed


- :bug: Drop support for ``python=3.7`` which has [reached end-of-life status](https://devguide.python.org/versions/) to prepare 0.19 ([#391](https://github.com/Galileo-Galilei/kedro-mlflow/issues/391))

## [0.11.8] - 2023-02-13

2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -7,7 +7,7 @@ The current workflow is the following:
3. Develop locally:
- Install the precommit file (`pip install pre-commit`, then `pre-commit install`)
- Create a branch based on the master branch (``git checkout -b <prefix-branchname> master``)
- Create a conda environment (conda create -n <your-env-name> python==3.7)
- Create a conda environment (conda create -n <your-env-name> python==3.10)
- Activate this environment (`conda activate <your-env-name>`)
- Install the extra dependencies for tests (`pip install kedro-mlflow[dev,test]`)
- Apply your changes
2 changes: 1 addition & 1 deletion docs/source/05_framework_ml/03_framework_solutions.md
@@ -70,7 +70,7 @@ mlflow.pyfunc.log_model(
artifact_path="model",
python_model=kedro_model,
artifacts=artifacts,
conda_env={"python": "3.7.0", dependencies: ["kedro==0.16.5"]},
conda_env={"python": "3.10.0", dependencies: ["kedro==0.18.11"]},
signature=model_signature,
)
```
Original file line number Diff line number Diff line change
@@ -37,7 +37,7 @@ mlflow.pyfunc.log_model(
artifact_path="model",
python_model=kedro_pipeline_model,
artifacts=artifacts,
conda_env={"python": "3.7.0", dependencies: ["kedro==0.16.5"]},
conda_env={"python": "3.10.0", dependencies: ["kedro==0.18.11"]},
model_signature=model_signature,
)
```
2 changes: 1 addition & 1 deletion docs/source/05_pipeline_serving/04_hook_pipeline_ml.md
@@ -23,7 +23,7 @@ For consistency, you may want to log an inference pipeline (including some data
log_model_kwargs=dict(
artifact_path="kedro_mlflow_tutorial",
conda_env={
"python": 3.7,
"python": 3.10,
"dependencies": [f"kedro_mlflow_tutorial=={PROJECT_VERSION}"],
},
signature="auto",
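One caveat with the snippet above: in Python source an unquoted `3.10` literal is the float `3.1`, so the bumped value should be the string `"3.10"` — otherwise conda receives a meaningless version. A minimal demonstration:

```python
# Float literals drop the trailing zero: 3.10 is just 3.1
print(3.10 == 3.1)   # True
print(str(3.10))     # 3.1

# Version numbers must therefore be strings
print("3.10" != "3.1")              # True
conda_env = {"python": "3.10"}      # correct: quoted version string
```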
6 changes: 3 additions & 3 deletions docs/source/07_python_objects/01_DataSets.md
@@ -94,7 +94,7 @@ mlflow_model_logger = MlflowModelLoggerDataSet(
flavor="mlflow.sklearn",
run_id="<the-model-run-id>",
save_args={
"conda_env": {"python": "3.7.0", "dependencies": ["kedro==0.16.5"]},
"conda_env": {"python": "3.10.0", "dependencies": ["kedro==0.18.11"]},
"input_example": data.iloc[0:5, :],
},
)
@@ -110,9 +110,9 @@ my_model:
run_id: <the-model-run-id>,
save_args:
conda_env:
python: "3.7.0"
python: "3.10.0"
dependencies:
- "kedro==0.16.5"
- "kedro==0.18.11"
```

### ``MlflowModelSaverDataSet``
2 changes: 1 addition & 1 deletion docs/source/07_python_objects/03_Pipelines.md
@@ -61,7 +61,7 @@ mlflow.pyfunc.log_model(
artifact_path="model",
python_model=KedroPipelineModel(pipeline=pipeline_training, catalog=catalog),
artifacts=artifacts,
conda_env={"python": "3.7.0", "dependencies": ["kedro==0.16.5"]},
conda_env={"python": "3.10.0", dependencies: ["kedro==0.18.11"]},
signature=model_signature,
)
```
13 changes: 3 additions & 10 deletions kedro_mlflow/framework/hooks/mlflow_hook.py
@@ -89,22 +89,15 @@ def after_context_created(
mlflow_config.server._mlflow_client = MlflowClient(
tracking_uri=mlflow_config.server.mlflow_tracking_uri
)
# BEWARE: kedro supports python=3.7 which does not accept the shorthand f"{x=}" for f"x={x}"
LOGGER.warning(
f"mlflow_config.server.mlflow_tracking_uri={mlflow_config.server.mlflow_tracking_uri}"
)
LOGGER.warning(f"{mlflow_config.server.mlflow_tracking_uri=}")

mlflow_config.tracking.run.id = active_run_info.run_id
LOGGER.warning(
f"mlflow_config.tracking.run.id={mlflow_config.tracking.run.id}"
)
LOGGER.warning(f"{mlflow_config.tracking.run.id=}")

mlflow_config.tracking.experiment.name = mlflow.get_experiment(
experiment_id=active_run_info.experiment_id
).name
LOGGER.warning(
f"mlflow_config.tracking.experiment.name={mlflow_config.tracking.experiment.name}"
)
LOGGER.warning(f"{mlflow_config.tracking.experiment.name=}")

else:
# we infer and setup the configuration only if there is no active run:
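The hook cleanup above is possible because, with Python 3.7 gone, the self-documenting f-string specifier `f"{x=}"` (added in Python 3.8) can replace the manual `f"x={x}"` spelling. Note the two are not byte-identical for strings, since `=` renders the value with `repr()` by default:

```python
tracking_uri = "sqlite:///mlflow.db"

manual = f"tracking_uri={tracking_uri}"  # 3.7-compatible spelling
shorthand = f"{tracking_uri=}"           # 3.8+ shorthand, uses repr()

print(manual)     # tracking_uri=sqlite:///mlflow.db
print(shorthand)  # tracking_uri='sqlite:///mlflow.db'
```

For log lines the extra quotes are harmless, which is why the diff can collapse each three-line `LOGGER.warning(...)` call into one.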
5 changes: 2 additions & 3 deletions setup.py
@@ -7,7 +7,7 @@


def _parse_requirements(path, encoding="utf-8"):
with open(path, mode="r", encoding=encoding) as file_handler:
with open(path, encoding=encoding) as file_handler:
requirements = [
x.strip() for x in file_handler if x.strip() and not x.startswith("-r")
]
@@ -31,7 +31,7 @@ def _parse_requirements(path, encoding="utf-8"):
long_description=README,
long_description_content_type="text/markdown",
url="https://github.com/Galileo-Galilei/kedro-mlflow",
python_requires=">=3.7, <3.11",
python_requires=">=3.8, <3.11",
packages=find_packages(exclude=["docs*", "tests*"]),
setup_requires=["setuptools_scm"],
include_package_data=True,
@@ -73,7 +73,6 @@ def _parse_requirements(path, encoding="utf-8"):
keywords="kedro-plugin, mlflow, model versioning, model packaging, pipelines, machine learning, data pipelines, data science, data engineering",
classifiers=[
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
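`python_requires=">=3.8, <3.11"` is evaluated by pip against the running interpreter at install time. The bound check amounts to a tuple comparison along these lines (an illustrative sketch, not pip's actual resolver logic):

```python
import sys

def satisfies_bounds(version=None, lower=(3, 8), upper=(3, 11)):
    """Return True when lower <= version < upper (illustrative only)."""
    v = tuple(sys.version_info[:2]) if version is None else tuple(version[:2])
    return lower <= v < upper

print(satisfies_bounds((3, 10)))  # True
print(satisfies_bounds((3, 7)))   # False: dropped by this commit
print(satisfies_bounds((3, 11)))  # False: above the upper bound
```

Removing the `3.7` classifier keeps the PyPI metadata consistent with this specifier.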
8 changes: 4 additions & 4 deletions tests/framework/hooks/test_hook_pipeline_ml.py
@@ -147,9 +147,9 @@ def convert_probs_to_pred(data, threshold):
training=full_pipeline.only_nodes_with_tags("training"),
inference=full_pipeline.only_nodes_with_tags("inference"),
input_name="data",
log_model_kwargs={
"conda_env": {"python": "3.7.0", "dependencies": ["kedro==0.16.5"]},
},
log_model_kwargs=dict(
conda_env=dict(python="3.10.0", dependencies=["kedro==0.18.11"])
),
)
return pipeline_ml_with_parameters

@@ -300,7 +300,7 @@ def test_mlflow_hook_save_pipeline_ml_with_copy_mode(
input_name=dummy_pipeline_ml.input_name,
log_model_kwargs={
"artifact_path": dummy_pipeline_ml.log_model_kwargs["artifact_path"],
"conda_env": {"python": "3.7.0", "dependencies": ["kedro==0.16.5"]},
"conda_env": {"python": "3.10.0", "dependencies": ["kedro==0.18.11"]},
},
kpm_kwargs={
"copy_mode": copy_mode,
32 changes: 11 additions & 21 deletions tests/io/models/test_mlflow_model_logger_dataset.py
@@ -114,7 +114,6 @@ def dummy_catalog(tmp_path):

@pytest.fixture
def kedro_pipeline_model(pipeline_ml_obj, dummy_catalog):

kedro_pipeline_model = KedroPipelineModel(
pipeline=pipeline_ml_obj,
catalog=dummy_catalog,
@@ -175,7 +174,6 @@ def test_save_sklearn_flavor_with_run_id_and_already_active_run(tracking_uri):
def test_save_and_load_sklearn_flavor_with_run_id(
tracking_uri, mlflow_client, linreg_model, active_run_when_loading
):

mlflow.set_tracking_uri(tracking_uri)
# close all opened mlflow runs to avoid interference between tests
while mlflow.active_run():
@@ -217,7 +215,6 @@ def test_save_and_load_sklearn_flavor_with_run_id(
def test_save_and_load_sklearn_flavor_without_run_id(
tracking_uri, mlflow_client, linreg_model, initial_active_run
):

mlflow.set_tracking_uri(tracking_uri)
# close all opened mlflow runs to avoid interference between tests
while mlflow.active_run():
@@ -265,7 +262,6 @@ def test_save_and_load_sklearn_flavor_without_run_id(


def test_load_without_run_id_nor_active_run(tracking_uri):

mlflow.set_tracking_uri(tracking_uri)
# close all opened mlflow runs to avoid interference between tests
while mlflow.active_run():
@@ -303,7 +299,6 @@ def test_pyfunc_flavor_python_model_save_and_load(
pipeline,
dummy_catalog,
):

kedro_pipeline_model = KedroPipelineModel(
pipeline=pipeline,
catalog=dummy_catalog,
@@ -320,7 +315,7 @@ def test_pyfunc_flavor_python_model_save_and_load(
"artifact_path": "test_model",
"save_args": {
"artifacts": artifacts,
"conda_env": {"python": "3.7.0", "dependencies": ["kedro==0.16.5"]},
"conda_env": {"python": "3.10.0", "dependencies": ["kedro==0.18.11"]},
},
},
}
@@ -348,7 +343,6 @@ def test_pyfunc_flavor_python_model_save_and_load(


def test_pyfunc_flavor_wrong_pyfunc_workflow(tracking_uri):

model_config = {
"name": "kedro_pipeline_model",
"config": {
@@ -373,23 +367,19 @@ def test_mlflow_model_logger_logging_deactivation(tracking_uri, linreg_model):

mlflow_model_logger_dataset._logging_activated = False

all_runs_id_beginning = set(
[
run.run_id
for k in range(len(mlflow_client.search_experiments()))
for run in mlflow_client.search_runs(experiment_ids=f"{k}")
]
)
all_runs_id_beginning = {
run.run_id
for k in range(len(mlflow_client.search_experiments()))
for run in mlflow_client.search_runs(experiment_ids=f"{k}")
}

mlflow_model_logger_dataset.save(linreg_model)

all_runs_id_end = set(
[
run.run_id
for k in range(len(mlflow_client.search_experiments()))
for run in mlflow_client.search_runs(experiment_ids=f"{k}")
]
)
all_runs_id_end = {
run.run_id
for k in range(len(mlflow_client.search_experiments()))
for run in mlflow_client.search_runs(experiment_ids=f"{k}")
}

assert all_runs_id_beginning == all_runs_id_end

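The test refactor above swaps `set([... for ...])` for a direct set comprehension — same result, but without materialising an intermediate list. A minimal before/after:

```python
run_ids = ["run-a", "run-b", "run-a"]

# Before: builds a throwaway list, then a set from it
as_set_call = set([rid for rid in run_ids])

# After: the set comprehension deduplicates directly
as_set_comp = {rid for rid in run_ids}

print(as_set_call == as_set_comp)  # True
print(sorted(as_set_comp))         # ['run-a', 'run-b']
```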
4 changes: 1 addition & 3 deletions tests/io/models/test_mlflow_model_saver_dataset.py
@@ -94,7 +94,6 @@ def dummy_catalog(tmp_path):

@pytest.fixture
def kedro_pipeline_model(tmp_path, pipeline_ml_obj, dummy_catalog):

kedro_pipeline_model = KedroPipelineModel(
pipeline=pipeline_ml_obj,
catalog=dummy_catalog,
@@ -158,7 +157,6 @@ def test_pyfunc_flavor_python_model_save_and_load(
def test_pyfunc_flavor_python_model_save_and_load(
tmp_path, tmp_folder, pipeline, dummy_catalog
):

kedro_pipeline_model = KedroPipelineModel(
pipeline=pipeline,
catalog=dummy_catalog,
@@ -177,7 +175,7 @@ def test_pyfunc_flavor_python_model_save_and_load(
"pyfunc_workflow": "python_model",
"save_args": {
"artifacts": artifacts,
"conda_env": {"python": "3.7.0", "dependencies": ["kedro==0.16.5"]},
"conda_env": {"python": "3.10.0", "dependencies": ["kedro==0.18.11"]},
},
},
}
12 changes: 3 additions & 9 deletions tests/mlflow/test_kedro_pipeline_model.py
@@ -263,7 +263,6 @@ def dummy_catalog(tmp_path):
def test_model_packaging_with_copy_mode(
tmp_path, tmp_folder, pipeline_inference_dummy, dummy_catalog, copy_mode, expected
):

dummy_catalog._data_sets["model"].save(2) # emulate model fitting

kedro_model = KedroPipelineModel(
@@ -282,7 +281,7 @@ def test_model_packaging_with_copy_mode(
artifact_path="model",
python_model=kedro_model,
artifacts=artifacts,
conda_env={"python": "3.7.0", "dependencies": ["kedro==0.16.5"]},
conda_env={"python": "3.10.0", "dependencies": ["kedro==0.18.11"]},
)
run_id = mlflow.active_run().info.run_id

@@ -320,7 +319,6 @@ def test_kedro_pipeline_model_with_wrong_copy_mode_type(


def test_model_packaging_too_many_artifacts(tmp_path, pipeline_inference_dummy):

catalog = DataCatalog(
{
"raw_data": PickleDataSet(
@@ -356,7 +354,7 @@ def test_model_packaging_too_many_artifacts(tmp_path, pipeline_inference_dummy):
artifact_path="model",
python_model=kedro_model,
artifacts=artifacts,
conda_env={"python": "3.7.0", "dependencies": ["kedro==0.16.5"]},
conda_env={"python": "3.10.0", "dependencies": ["kedro==0.18.11"]},
)
run_id = mlflow.active_run().info.run_id

Expand All @@ -369,7 +367,6 @@ def test_model_packaging_too_many_artifacts(tmp_path, pipeline_inference_dummy):


def test_model_packaging_missing_artifacts(tmp_path, pipeline_inference_dummy):

catalog = DataCatalog(
{
"raw_data": MemoryDataSet(),
@@ -391,7 +388,7 @@ def test_model_packaging_missing_artifacts(tmp_path, pipeline_inference_dummy):
artifact_path="model",
python_model=kedro_model,
artifacts=None, # no artifacts provided
conda_env={"python": "3.7.0", "dependencies": ["kedro==0.16.5"]},
conda_env={"python": "3.10.0", "dependencies": ["kedro==0.18.11"]},
)
run_id = mlflow.active_run().info.run_id

@@ -404,7 +401,6 @@ def test_model_packaging_missing_artifacts(tmp_path, pipeline_inference_dummy):


def test_kedro_pipeline_ml_loading_deepcopiable_catalog(tmp_path, tmp_folder):

# create pipeline and catalog. The training will not be triggered
def fit_fun(data):
pass
@@ -510,7 +506,6 @@ def test_catalog_extraction(pipeline, catalog, input_name, result):


def test_catalog_extraction_missing_inference_input(pipeline_inference_dummy):

catalog = DataCatalog({"raw_data": MemoryDataSet(), "data": MemoryDataSet()})
# "model" is missing in the catalog
with pytest.raises(
@@ -599,7 +594,6 @@ def test_kedro_pipeline_model_save_and_load(


def test_kedro_pipeline_model_too_many_outputs():

catalog = DataCatalog(
{
"data": MemoryDataSet(),
