Merged
Commits
33 commits
62aadce
Rename overwrite_working_directory => overwrite_root_directory
emaMekic Jun 9, 2025
876e5f6
neps.plot fixed
emaMekic Jun 11, 2025
aa1df0c
Merge remote-tracking branch 'origin/master' into trace_csv_file
emaMekic Jun 24, 2025
1c7e99c
Added documentation and logging of incumbent
emaMekic Jun 24, 2025
b17306e
Removed comment
emaMekic Jun 24, 2025
222955f
Renaming max_evalutaions_total to evaluations_to_spend
emaMekic Jun 24, 2025
23e5130
Renaming max_cost_total to cost_to_spend
emaMekic Jun 24, 2025
092d771
Introduction of fidelities_to_spend
emaMekic Jun 29, 2025
459cb88
Mf example
emaMekic Jun 29, 2025
f56f433
Optimizer - fidelities to spend
emaMekic Jun 29, 2025
1e8fdbd
Added trajectory and best incumbent. Solved warning in plot
emaMekic Jul 6, 2025
e2639f4
Fixed multi-fidelity stopping criteria
emaMekic Jul 6, 2025
d5b9285
Merged branch with new txt files
emaMekic Jul 7, 2025
f98d310
Merge branch 'master' into mo-txt-files-fidelity-stopping-crit
Sohambasu07 Jul 16, 2025
eff33c7
feat: update api with MO and fix runtime issues
Sohambasu07 Jul 17, 2025
177a800
Merge branch 'master' into mo-txt-files-fidelity-stopping-crit
Sohambasu07 Jul 17, 2025
5fae231
feat: update primo
Sohambasu07 Jul 17, 2025
376feeb
Logging messages changed
emaMekic Jul 17, 2025
a4c5f43
Removed info_dict from logging
emaMekic Jul 17, 2025
fb9930b
feat: allow None confidence centers for MO priors
Sohambasu07 Jul 18, 2025
b50a462
fix: disable info_dict logging
Sohambasu07 Jul 18, 2025
0d08ad4
fix: disable info_dict logging
Sohambasu07 Jul 18, 2025
4da5e8d
Fixed trace error in case of multiple runs
emaMekic Jul 24, 2025
d1c0af0
Fixed tests
emaMekic Jul 24, 2025
f21aa69
Example fixed
emaMekic Jul 24, 2025
fb9b146
Numpy version update
emaMekic Jul 25, 2025
5092f93
Merge branch 'mo-txt-files-fidelity-stopping-crit' into txt-files-fid…
Sohambasu07 Jul 31, 2025
4c01c02
feat: Modify tests for MOMF opts
Sohambasu07 Jul 31, 2025
285b440
feat: fix ruff-format in status.py
Sohambasu07 Jul 31, 2025
9d1f451
feat: fix more ruff-formatting
Sohambasu07 Jul 31, 2025
0e39bbe
fix: skip tests for PriMO for now
Sohambasu07 Jul 31, 2025
7d983fe
numpy>=2
Sohambasu07 Aug 5, 2025
e5ad72d
Merge branch 'master' into txt-files-fidelity-stopping-crit
Sohambasu07 Aug 24, 2025
Empty file added .trace.lock
Empty file.
2 changes: 1 addition & 1 deletion README.md
Original file line number Diff line number Diff line change
Expand Up @@ -79,7 +79,7 @@ neps.run(
evaluate_pipeline=evaluate_pipeline,
pipeline_space=pipeline_space,
root_directory="path/to/save/results", # Replace with the actual path.
max_evaluations_total=100,
evaluations_to_spend=100,
)
```

Expand Down
2 changes: 1 addition & 1 deletion docs/index.md
Original file line number Diff line number Diff line change
Expand Up @@ -87,7 +87,7 @@ neps.run(
evaluate_pipeline=evaluate_pipeline,
pipeline_space=pipeline_space,
root_directory="path/to/save/results", # Replace with the actual path.
max_evaluations_total=100,
evaluations_to_spend=100,
)
```

Expand Down
16 changes: 9 additions & 7 deletions docs/reference/analyse.md
Original file line number Diff line number Diff line change
Expand Up @@ -40,10 +40,10 @@ Currently, this creates one plot that shows the best error value across the numb

## What's on disk?
In the root directory, NePS maintains several files at all times that are human-readable and can be useful.
If you pass the `post_run_summary=` argument to [`neps.run()`][neps.api.run],
NePS will also generate a summary CSV file for you.
If you pass the `write_summary_to_disk=` argument to [`neps.run()`][neps.api.run],
NePS will generate summary CSV and TXT files for you.

=== "`neps.run(..., post_run_summary=True)`"
=== "`neps.run(..., write_summary_to_disk=True)`"

```
ROOT_DIRECTORY
Expand All @@ -54,13 +54,15 @@ NePS will also generate a summary CSV file for you.
│ └── report.yaml
├── summary
│ ├── full.csv
│ └── short.csv
│ ├── short.csv
│ ├── best_config_trajectory.txt
│ └── best_config.txt
├── optimizer_info.yaml
└── optimizer_state.pkl
```


=== "`neps.run(..., post_run_summary=False)`"
=== "`neps.run(..., write_summary_to_disk=False)`"

```
ROOT_DIRECTORY
Expand All @@ -77,8 +79,8 @@ NePS will also generate a summary CSV file for you.
The `full.csv` contains all configuration details in CSV format.
Details include configuration hyperparameters and any returned result and cost from the `evaluate_pipeline` function.

The `run_status.csv` provides general run details, such as the number of failed and successful configurations,
and the best configuration with its corresponding objective value.
The `best_config_trajectory.txt` file contains a log of the incumbent trajectory.
The `best_config.txt` file records the current incumbent.
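
Since the summary lives in plain CSV files, it can be inspected with standard tooling. A minimal sketch, assuming the directory layout documented above; the column name `objective_to_minimize` is an assumption and may differ between NePS versions:

```python
# Minimal sketch of inspecting the summary files NePS writes to disk.
# The directory layout follows the tree above; the column name
# "objective_to_minimize" is an assumption, not a guaranteed schema.
import csv
from pathlib import Path

summary_dir = Path("ROOT_DIRECTORY") / "summary"


def best_row(full_csv: Path) -> dict:
    """Return the row with the lowest objective value from full.csv."""
    with full_csv.open() as f:
        rows = list(csv.DictReader(f))
    return min(rows, key=lambda r: float(r["objective_to_minimize"]))
```

For example, `best_row(summary_dir / "full.csv")` would return the configuration row that matches the incumbent recorded in `best_config.txt`.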

# TensorBoard Integration

Expand Down
6 changes: 3 additions & 3 deletions docs/reference/evaluate_pipeline.md
Original file line number Diff line number Diff line change
Expand Up @@ -63,12 +63,12 @@ def evaluate_pipeline(

#### Cost

Along with the return of the `loss`, the `evaluate_pipeline=` function would optionally need to return a `cost` in certain cases. Specifically when the `max_cost_total` parameter is being utilized in the `neps.run` function.
Along with the return of the `loss`, the `evaluate_pipeline=` function would optionally need to return a `cost` in certain cases. Specifically when the `cost_to_spend` parameter is being utilized in the `neps.run` function.


!!! note

`max_cost_total` sums the cost from all returned configuration results and checks whether the maximum allowed cost has been reached (if so, the search will come to an end).
`cost_to_spend` sums the cost from all returned configuration results and checks whether the maximum allowed cost has been reached (if so, the search will come to an end).

```python
import neps
Expand Down Expand Up @@ -97,7 +97,7 @@ if __name__ == "__main__":
evaluate_pipeline=evaluate_pipeline,
pipeline_space=pipeline_space, # Assuming the pipeline space is defined
root_directory="results/bo",
max_cost_total=10,
cost_to_spend=10,
optimizer="bayesian_optimization",
)
```
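
When `cost_to_spend=` is used, each evaluation must report its own cost alongside the objective. A minimal sketch of such a pipeline function, following the return shape used elsewhere in these docs; the "loss" computation and the use of wall-clock time as cost are illustrative assumptions:

```python
import time


def evaluate_pipeline(learning_rate: float, epochs: int) -> dict:
    start = time.monotonic()
    # Placeholder for real training; this "loss" is an illustrative stand-in.
    loss = learning_rate * epochs
    duration = time.monotonic() - start
    # NePS sums the returned "cost" values across evaluations and stops
    # the search once cost_to_spend is exhausted.
    return {"objective_function_to_minimize": loss, "cost": duration}
```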
Expand Down
37 changes: 18 additions & 19 deletions docs/reference/neps_run.md
Original file line number Diff line number Diff line change
Expand Up @@ -45,9 +45,9 @@ See the following for more:
* What goes in and what goes out of [`evaluate_pipeline()`](../reference/evaluate_pipeline.md)?

## Budget, how long to run?
To define a budget, provide `max_evaluations_total=` to [`neps.run()`][neps.api.run],
To define a budget, provide `evaluations_to_spend=` to [`neps.run()`][neps.api.run],
to specify the total number of evaluations to conduct before halting the optimization process,
or `max_cost_total=` to specify a cost threshold for your own custom cost metric, such as time, energy, or monetary, as returned by each evaluation of the pipeline .
or `cost_to_spend=` to specify a threshold for your own custom cost metric, such as time, energy, or monetary cost, as returned by each evaluation of the pipeline.


```python
Expand All @@ -60,8 +60,8 @@ def evaluate_pipeline(learning_rate: float, epochs: int) -> float:
return {"objective_function_to_minimize": loss, "cost": duration}

neps.run(
max_evaluations_total=10, # (1)!
max_cost_total=1000, # (2)!
evaluations_to_spend=10, # (1)!
cost_to_spend=1000, # (2)!
)
```

Expand All @@ -87,7 +87,7 @@ Please refer to Python's [logging documentation](https://docs.python.org/3/libra

## Continuing Runs
To continue a run, all you need to do is provide the same `root_directory=` to [`neps.run()`][neps.api.run] as before,
with an increased `max_evaluations_total=` or `max_cost_total=`.
with an increased `evaluations_to_spend=` or `cost_to_spend=`.

```python
def run(learning_rate: float, epochs: int) -> float:
Expand All @@ -100,21 +100,21 @@ def run(learning_rate: float, epochs: int) -> float:

neps.run(
# Increase the total number of trials from 10 as set previously to 50
max_evaluations_total=50,
evaluations_to_spend=50,
)
```

If the run previously stopped due to reaching a budget and you specify the same budget, the worker will immediately stop, as it remembers the amount of budget it used previously.

## Overwriting a Run

To overwrite a run, simply provide the same `root_directory=` to [`neps.run()`][neps.api.run] as before, with the `overwrite_working_directory=True` argument.
To overwrite a run, simply provide the same `root_directory=` to [`neps.run()`][neps.api.run] as before, with the `overwrite_root_directory=True` argument.

```python
neps.run(
...,
root_directory="path/to/previous_result_dir",
overwrite_working_directory=True,
overwrite_root_directory=True,
)
```

Expand All @@ -125,9 +125,6 @@ neps.run(
## Getting the results
The results of the optimization process are stored in the `root_directory=`
provided to [`neps.run()`][neps.api.run].
To obtain a summary of the optimization process, you can enable the
`post_run_summary=True` argument in [`neps.run()`][neps.api.run],
which will generate a summary CSV after the run has finished.

=== "Result Directory"

Expand All @@ -143,17 +140,19 @@ while will generate a summary csv after the run has finished.
│ └── config_2
│ ├── config.yaml
│ └── metadata.json
├── summary # Only if post_run_summary=True
├── summary
│ ├── full.csv
│ └── short.csv
│ ├── best_config_trajectory.txt
│ └── best_config.txt
├── optimizer_info.yaml # The optimizer's configuration
└── optimizer_state.pkl # The optimizer's state, shared between workers
```

=== "python"

```python
neps.run(..., post_run_summary=True)
neps.run(..., write_summary_to_disk=True)
```

To capture the results of the optimization process, you can use tensorboard logging with various utilities to integrate
Expand All @@ -174,20 +173,20 @@ Any new workers that come online will automatically pick up work and work togeth
evaluate_pipeline=...,
pipeline_space=...,
root_directory="some/path",
max_evaluations_total=100,
evaluations_to_spend=100,
max_evaluations_per_run=10, # (1)!
continue_until_max_evaluation_completed=True, # (2)!
overwrite_working_directory=False, #!!!
overwrite_root_directory=False, #!!!
)
```

1. Limits the number of evaluations for this specific call of [`neps.run()`][neps.api.run].
2. Evaluations in-progress count towards max_evaluations_total, halting new ones when this limit is reached.
Setting this to `True` enables continuous sampling of new evaluations until the total of completed ones meets max_evaluations_total, optimizing resource use in time-sensitive scenarios.
2. Evaluations in-progress count towards `evaluations_to_spend`, halting new ones when this limit is reached.
Setting this to `True` enables continuous sampling of new evaluations until the total of completed ones meets `evaluations_to_spend`, optimizing resource use in time-sensitive scenarios.

!!! warning

Ensure `overwrite_working_directory=False` to prevent newly spawned workers from deleting the shared directory!
Ensure `overwrite_root_directory=False` to prevent newly spawned workers from deleting the shared directory!


=== "Shell"
Expand Down Expand Up @@ -227,7 +226,7 @@ neps.run(

!!! note

Any runs that error will still count towards the total `max_evaluations_total` or `max_evaluations_per_run`.
Any runs that error will still count towards the total `evaluations_to_spend` or `max_evaluations_per_run`.

### Re-running Failed Configurations

Expand Down
8 changes: 4 additions & 4 deletions docs/reference/optimizers.md
Original file line number Diff line number Diff line change
Expand Up @@ -72,7 +72,7 @@ neps.run(
evaluate_pipeline=run_function,
pipeline_space=pipeline_space,
root_directory="results/",
max_evaluations_total=25,
evaluations_to_spend=25,
# no optimizer specified
)
```
Expand All @@ -87,7 +87,7 @@ neps.run(
evaluate_pipeline=run_function,
pipeline_space=pipeline_space,
root_directory="results/",
max_evaluations_total=25,
evaluations_to_spend=25,
# optimizer specified, along with an argument
optimizer=neps.algorithms.bayesian_optimization, # or as string: "bayesian_optimization"
)
Expand All @@ -104,7 +104,7 @@ neps.run(
evaluate_pipeline=run_function,
pipeline_space=pipeline_space,
root_directory="results/",
max_evaluations_total=25,
evaluations_to_spend=25,
optimizer=("bayesian_optimization", {"initial_design_size": 5})
)
```
Expand Down Expand Up @@ -137,7 +137,7 @@ neps.run(
evaluate_pipeline=run_function,
pipeline_space=pipeline_space,
root_directory="results/",
max_evaluations_total=25,
evaluations_to_spend=25,
optimizer=MyOptimizer,
)
```
Expand Down