diff --git a/README.md b/README.md index 65c34e3..2b419c7 100644 --- a/README.md +++ b/README.md @@ -24,7 +24,7 @@ * D = (X,y) is a dataset derived from f, * x* is the true optimum of f in O (minimum or maximum). -The testing framework feeds your algorithm constraints and data (O,D) and collects its predicted optimum. The algorithm's predicted optimal value can then be compared to the true optimal value f(x*). By comparing the two over multiple randomly generated optimization problems, `doframework` produces a **prediction profile** for your algorithm. +`doframework` feeds your algorithm constraints and data (O,D) and collects its predicted optimum. The algorithm's predicted optimal value can then be compared to the true optimal value f(x*). By comparing the two over multiple randomly generated optimization problems, `doframework` produces a **prediction profile** for your algorithm. `doframework` integrates with your algorithm (written in Python). @@ -40,7 +40,7 @@ The testing framework feeds your algorithm constraints and data (O,D) and collec `doframework` can run either locally or remotely. For optimal performance, run it on a Kubernetes cluster. Cloud configuration is currently available for AWS and IBM Cloud [OpenShift](https://docs.openshift.com/ "RedHat OpenShift Documentation") clusters. -The framework relies on Cloud Object Storage (COS) to interact with simulation products. Configuration is currently available for [AWS](https://aws.amazon.com/s3/ "AWS S3") or [IBM COS](https://www.ibm.com/cloud/object-storage "IBM Cloud Object Storage"). +The framework uses storage (local or S3) to interact with simulation products. Configuration is currently available for [AWS](https://aws.amazon.com/s3/ "AWS S3") or [IBM Cloud Object Storage COS](https://www.ibm.com/cloud/object-storage "IBM Cloud Object Storage"). # Install @@ -52,14 +52,23 @@ $ pip install doframework # Configs -COS specifications are provided in a `configs.yaml`. 
+Storage specifications are provided in a `configs.yaml`. You'll find examples under `./configs/*`. -The `configs.yaml` includes the list of source and target bucket names (under `s3:buckets`). Credentials are added under designated fields. - -Currently, two cloud service providers are available under `s3:cloud_service_provider`: `aws` and `ibm`. - -`s3:endpoint_url` is optional for AWS. +The `configs.yaml` includes the list of source and target bucket names (under `buckets`). If necessary, S3 credentials are added under designated fields. +Here is the format of the `configs.yaml`, either for local storage +``` +local: + buckets: + inputs: '' + inputs_dest: '' + objectives: '' + objectives_dest: '' + data: '' + data_dest: '' + solutions: '' +``` +or S3 ``` s3: buckets: @@ -75,19 +84,20 @@ s3: endpoint_url: 'https://xxx.xxx.xxx' region: 'xx-xxxx' cloud_service_provider: 'aws' - ``` -**Bucket names above must be distinct**. +Currently, two S3 providers are available under `s3:cloud_service_provider`: either `aws` or `ibm`. The `endpoint_url` is _optional_ for AWS. + +**Bucket / folder names must be distinct**. # Inputs `input.json` files provide the necessary metadata for the random generation of optimization problems. -`doframework` will run end to end, once `input.json` files are uploaded to ``. +`doframework` will run end to end, once `input.json` files are uploaded to `` / ``. -The jupyter notebook `./notebooks/inputs.ipynb` allows you to automatically generate input files and upload them to ``. +The jupyter notebook `./notebooks/inputs.ipynb` allows you to automatically generate input files and upload them to ``. -Here is an example of an input file (see input samples `input_basic.json` and `input_all.json` under `./inputs`). +Here is an example of an input file (see input samples `input_basic.json` under `./inputs`). 
``` @@ -102,8 +112,7 @@ Here is an example of an input file (see input samples `input_basic.json` and `i }, }, "omega" : { - "ratio": 0.8, - "scale": 0.01 + "ratio": 0.8 }, "data" : { "N": 750, @@ -111,15 +120,14 @@ Here is an example of an input file (see input samples `input_basic.json` and `i "policy_num": 2, "scale": 0.4 }, - "input_file_name": "input.json" + "input_file_name": "input_basic.json" } ``` `f:vertices:num`: number of vertices in the piece-wise linear graph of f.
-`f:vertices:range`: f domain will be inside this box range.
+`f:vertices:range`: f domain will be inside this range.
`f:values:range`: range of f values.
`omega:ratio`: vol(O) / vol(dom(f)) >= ratio.
-`omega:scale`: scale of jitter when sampling feasibility regions (as a ratio of domain diameter).
`data:N`: number of data points to sample.
`data:noise`: response variable noise.
`data:policy_num`: number of centers in Gaussian mix distribution of data.
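The field list above maps one-to-one onto the JSON keys. As a sketch, an `input.json` like the sample can be assembled and written programmatically — the values below simply mirror the example above (the `noise` value is illustrative, since it is elided in the hunk shown):

```python
import json

# Build an input spec for a 2D problem, mirroring the sample above.
# Values are illustrative; see ./inputs/input_basic.json for a real sample.
input_spec = {
    "f": {
        "vertices": {
            "num": 7,                             # vertices of the piece-wise linear graph of f
            "range": [[-1.0, 1.0], [-1.0, 1.0]],  # dom(f) will sit inside this range
        },
        "values": {"range": [-5.0, 5.0]},         # range of f values
    },
    "omega": {"ratio": 0.8},                      # vol(O) / vol(dom(f)) >= 0.8
    "data": {"N": 750, "noise": 0.01, "policy_num": 2, "scale": 0.4},
    "input_file_name": "input_basic.json",
}

# Write the spec to the file named inside it.
with open(input_spec["input_file_name"], "w") as fp:
    json.dump(input_spec, fp, indent=2)
```

Uploading the resulting file to the inputs bucket (or folder) is what triggers an end-to-end run.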
@@ -129,11 +137,11 @@ It's a good idea to start experimenting on low-dimensional problems. # User App Integration -Your algorithm will be integrated together with `doframework` once it is decorated with `doframework.resolve`. +Your algorithm will be integrated into `doframework` once it is decorated with `doframework.resolve`. -A `doframework` experiment runs with `doframework.run()`. The `run()` utility accepts the decorated model and a path to the `configs.yaml`. +A `doframework` experiment runs with `doframework.run()`. The `run()` utility accepts the decorated model and an absolute path to the `configs.yaml`. -Here is an example user application `module.py`. +Here is an example of a user application `module.py`. ``` import doframework as dof @@ -148,10 +156,13 @@ if __name__ == '__main__': dof.run(alg, 'configs.yaml', objectives=5, datasets=3, **kwargs) ``` -The testing framework supports the following inputs to your algorithm: +`doframework` provides the following inputs to your algorithm: `data`: 2D np.array with features X = data[ : , :-1] and response variable y = data[ : ,-1].
`constraints`: linear constraints as a 2D numpy array A. A data point x satisfies the constraints when A[ : , :-1]*x + A[ : ,-1] <= 0.
+ +`doframework` passes additional inputs to your algorithm in `kwargs`: + `lower_bound`: lower bound per feature variable.
`upper_bound`: upper bound per feature variable.
`init_value`: optional initial value.
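Putting these pieces together, here is a hedged sketch of a model honoring this call contract — a naive baseline that fits a linear surrogate by least squares and random-searches the bounded, constrained region. It is illustrative only: the `@doframework.resolve` decorator is omitted so the snippet stands alone, and random search is just a placeholder for a real solver.

```python
import numpy as np

def alg(data, constraints, **kwargs):
    """Naive baseline honoring the documented inputs:
    data[:, :-1] are features X, data[:, -1] is the response y,
    and x is feasible when constraints[:, :-1] @ x + constraints[:, -1] <= 0."""
    X, y = data[:, :-1], data[:, -1]
    A = constraints
    lo = np.asarray(kwargs["lower_bound"], dtype=float)
    hi = np.asarray(kwargs["upper_bound"], dtype=float)

    # Fit a linear surrogate y ~ X @ beta + b by least squares.
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

    # Random-search the box, keeping only constraint-feasible candidates.
    rng = np.random.default_rng(0)
    cand = rng.uniform(lo, hi, size=(5000, lo.size))
    feasible = cand[(cand @ A[:, :-1].T + A[:, -1] <= 0.0).all(axis=1)]

    # Score the surrogate on feasible points and keep the best one.
    values = feasible @ coef[:-1] + coef[-1]
    best = values.argmin()  # assuming a minimization problem
    return feasible[best], values[best]
```

The returned pair matches the `(predicted_optimum, predicted_optimal_value)` contract shown in `module.py` above.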
@@ -160,26 +171,28 @@ The `run()` utility accepts the arguments: `objectives`: number of objective targets to generate per input file.
`datasets`: number of datasets to generate per objective target.
-`feasibility_regions`: number of feasibility regions to generate per objective and dataset.
`distribute`: True to run distributively, False to run sequentially.
-`logger`: True to see logs, False otherwise.
+`logger`: True to see `doframework` logs, False otherwise.
`after_idle_for`: stop running when event stream is idle after this many seconds.
+`alg_num_cpus`: number of CPUs to dedicate to your algorithm on each optimization task.
+`data_num_cpus`: number of CPUs to dedicate to data generation (useful in high dimensions). + # Algorithm Prediction Profile -Once you are done running a `doframework` experiment, run the notebook `notebooks/profile.ipynb`. It will fetch the relevant experiment products from the target COS buckets and produce the algorithm's prediction profile and prediction probabilities. +Once you are done running a `doframework` experiment, run the notebook `notebooks/profile.ipynb`. It will fetch the relevant experiment products from the target buckets and produce the algorithm's prediction profile and prediction probabilities. -`doframework` produces three types of experiment products files: +`doframework` produces three types of experiment product files: * `objective.json`: containing information on (f,O,x*) * `data.csv`: containing the dataset the algorithm accepts as input * `solution.json`: containing the algorithm's predicted optimum -See sample files under `./outputs`/ +See sample files under `./outputs`. # Kubernetes Cluster -To run `doframework` on a K8S cluster, make sure you are on the cluster's local `kubectl` context. Log into your cluster, if necessary (applicable to OpenShift, see doc). +To run `doframework` on a K8S cluster, make sure you are on the cluster's local `kubectl` context. Log into your cluster, if necessary (applicable to OpenShift, see `./doc/openshift.md`). You can check your local `kubectl` context and change it if necessary with ``` @@ -189,7 +202,7 @@ $ kubectl config use-context cluster_name >> Switched to context "cluster_name". ``` -Now `cd` into your project's folder and run the setup bash script `doframework-setup.sh`. The setup script will generate the cluster configuration file `doframework.yaml` in your project's folder. The setup script requires the absolute path to your `configs.yaml`. Otherwise, it assumes a file `configs.yaml` is located under your project's folder. Running the setup script will establish the `ray` cluster. 
+Now `cd` into your project's folder and run the setup bash script `doframework-setup.sh`. The setup script will generate the cluster configuration file `doframework.yaml` in your project's folder. The setup script requires the absolute path to your `configs.yaml`. Running the setup script will establish the `ray` cluster. ``` $ cd @@ -220,7 +233,7 @@ $ ray submit doframework.yaml module.py # Ray Cluster -To observe the `ray` dashboard, connect to `http://localhost:8265` in your browser. See the OpenShift doc for OpenShift-specific instructions. +To observe the `ray` dashboard, connect to `http://localhost:8265` in your browser. See `./doc/openshift.md` for OpenShift-specific instructions. Some useful health-check commands: @@ -276,5 +289,5 @@ $ ray submit doframework.yaml doframework_example.py --configs configs.yaml ``` [NOTE: we are using the path to the `configs.yaml` that was mounted on cluster nodes under `$HOME`.] -Make sure to upload input json files to `` once you run `doframework_example.py`. +Make sure to upload `input.json` files to `` / `` once you run `doframework_example.py`. 
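Returning to the profile section above: the three product files pair up naturally for analysis. Here is a minimal sketch of the comparison at the heart of a prediction profile. The field names (`value`, `predicted_value`) are assumptions for illustration — consult the sample files under `./outputs` for the real schemas.

```python
def relative_errors(pairs):
    """pairs: iterable of (objective_dict, solution_dict), one per experiment run.

    Field names are illustrative -- check the ./outputs samples for real schemas."""
    errs = []
    for objective, solution in pairs:
        true_val = objective["value"]           # true optimal value f(x*)
        pred_val = solution["predicted_value"]  # algorithm's predicted optimal value
        # Relative error, guarded against a true value of (near) zero.
        errs.append(abs(pred_val - true_val) / max(abs(true_val), 1e-12))
    return errs
```

Aggregated over many randomly generated problems, the distribution of these errors is essentially what the prediction profile visualizes.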
diff --git a/configs/aws_configs.yaml b/configs/aws_configs.yaml new file mode 100644 index 0000000..13bd60b --- /dev/null +++ b/configs/aws_configs.yaml @@ -0,0 +1,13 @@ +s3: + buckets: + inputs: '' + inputs_dest: '' + objectives: '' + objectives_dest: '' + data: '' + data_dest: '' + solutions: '' + aws_secret_access_key: '' + aws_access_key_id: '' + region: '' + cloud_service_provider: 'aws' \ No newline at end of file diff --git a/configs/dir_configs.yaml b/configs/dir_configs.yaml index 028470d..bfa1937 100644 --- a/configs/dir_configs.yaml +++ b/configs/dir_configs.yaml @@ -6,5 +6,4 @@ local: objectives_dest: '' data: '' data_dest: '' - solutions: '' - solutions_dest: '' \ No newline at end of file + solutions: '' \ No newline at end of file diff --git a/configs/ibm_configs.yaml b/configs/ibm_configs.yaml new file mode 100644 index 0000000..f855641 --- /dev/null +++ b/configs/ibm_configs.yaml @@ -0,0 +1,14 @@ +s3: + buckets: + inputs: '' + inputs_dest: '' + objectives: '' + objectives_dest: '' + data: '' + data_dest: '' + solutions: '' + aws_secret_access_key: '' + aws_access_key_id: '' + endpoint_url: '' + region: '' + cloud_service_provider: 'ibm' \ No newline at end of file diff --git a/docs/ocl_lab.md b/docs/ocl_lab.md new file mode 100644 index 0000000..c88ad06 --- /dev/null +++ b/docs/ocl_lab.md @@ -0,0 +1,60 @@ + + +# AAAI 2023 OCL Lab Instructions + +Here are the installation instructions for participants of the OCL Lab. + +## `doframework` Installation + +We recommend installing `doframework` in a dedicated Python 3.8.0 environment. `doframework` has many dependencies that may override package versions in your current Python environment. 
+ +For example, if you're using `pyenv` in combination with `virtualenv` as your Python environment manager, you can type the following in your terminal +``` +$ pyenv virtualenv 3.8.0 dof +$ pyenv local dof +``` +[Here](https://realpython.com/intro-to-pyenv/#virtual-environments-and-pyenv "pyenv and virtualenv") is a good source on `pyenv` and `virtualenv` by Logan Jones. + +Now that you've set up a dedicated Python environment, simply install +``` +$ pip install doframework +``` +Run a simple sanity check with +``` +$ python +>>> import doframework +>>> exit() +``` +The import command may take a while. Once it's finished (successfully, hopefully) you can exit. + +## `doframework` Cloning + +We will be running `doframework` Jupyter Notebooks as well as using other `doframework` material. Therefore, we'll clone a local copy of `doframework`. From your terminal, run + +``` +$ git clone https://github.com/IBM/doframework.git +``` +To launch the OCL Lab Jupyter Notebooks, we'll need to add `jupyter` to our new Python environment +``` +$ pip install jupyter +``` +Note that `jupyter` does not come with `doframework`. We want to keep `doframework` light for cloud distribution. Once we're done installing `jupyter`, let's launch the OCL Lab notebooks +``` +$ cd doframework/notebooks +$ jupyter notebook +``` +Now we can begin ... 
\ No newline at end of file diff --git a/doframework/api.py b/doframework/api.py index 2bbebc6..515dc6d 100644 --- a/doframework/api.py +++ b/doframework/api.py @@ -1,5 +1,3 @@ -import os -import yaml import json from io import StringIO from collections import namedtuple diff --git a/notebooks/example.ipynb b/notebooks/example.ipynb index a3ff4f0..c0c2e90 100644 --- a/notebooks/example.ipynb +++ b/notebooks/example.ipynb @@ -26,7 +26,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "d490dbde", + "id": "692bb66b", "metadata": {}, "outputs": [], "source": [ @@ -64,7 +64,7 @@ }, { "cell_type": "markdown", - "id": "7ddb5500", + "id": "502cc3d2", "metadata": {}, "source": [ "# DOFramework Example\n", @@ -144,7 +144,7 @@ }, { "cell_type": "markdown", - "id": "39407f1d", + "id": "3c88b444", "metadata": {}, "source": [ "and evaluate $f$ at the vertices of $\\mbox{dom}(f)$." @@ -153,7 +153,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "8e923df2", + "id": "a3dc16c3", "metadata": {}, "outputs": [], "source": [ @@ -162,7 +162,7 @@ }, { "cell_type": "markdown", - "id": "221e2260", + "id": "d8025fab", "metadata": {}, "source": [ "## -- Constraints\n", @@ -177,7 +177,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "a72eb22e", + "id": "4c1046e6", "metadata": {}, "outputs": [], "source": [ @@ -187,7 +187,7 @@ { "cell_type": "code", "execution_count": 8, - "id": "eae2f001", + "id": "19300e2e", "metadata": {}, "outputs": [], "source": [ @@ -196,7 +196,7 @@ }, { "cell_type": "markdown", - "id": "dff9634f", + "id": "ad02ae20", "metadata": {}, "source": [ "We'll sample vertics for $\\Omega$ within $\\mbox{dom}(f)$." 
@@ -205,7 +205,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "d9df74df", + "id": "2f7c5253", "metadata": {}, "outputs": [], "source": [ @@ -236,7 +236,7 @@ { "cell_type": "code", "execution_count": 10, - "id": "421bcdd8", + "id": "94c8d2d4", "metadata": {}, "outputs": [], "source": [ @@ -256,7 +256,7 @@ { "cell_type": "code", "execution_count": 12, - "id": "6bd2d762", + "id": "f1a8d77d", "metadata": {}, "outputs": [], "source": [ @@ -267,7 +267,7 @@ { "cell_type": "code", "execution_count": 13, - "id": "3a029407", + "id": "6bd798c0", "metadata": {}, "outputs": [], "source": [ @@ -277,7 +277,7 @@ }, { "cell_type": "markdown", - "id": "d341c7de", + "id": "ea770d16", "metadata": {}, "source": [ "We can use the PWL object $f$ to sample points in its domain." @@ -285,8 +285,8 @@ }, { "cell_type": "code", - "execution_count": null, - "id": "c185e381", + "execution_count": 14, + "id": "4134e870", "metadata": {}, "outputs": [], "source": [ @@ -295,7 +295,7 @@ }, { "cell_type": "markdown", - "id": "80c387f3", + "id": "e0fc31d7", "metadata": {}, "source": [ "or evaluate points" @@ -303,17 +303,28 @@ }, { "cell_type": "code", - "execution_count": null, - "id": "52e1182c", + "execution_count": 15, + "id": "e28fee3f", "metadata": {}, - "outputs": [], + "outputs": [ + { + "data": { + "text/plain": [ + "array([ 1.09292907, -1.02811853, 0.5257558 ])" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ "f.evaluate(xs)" ] }, { "cell_type": "markdown", - "id": "a632859c", + "id": "59f666dc", "metadata": {}, "source": [ "## -- Ground Truth\n", @@ -323,8 +334,8 @@ }, { "cell_type": "code", - "execution_count": 14, - "id": "df8ed6a3", + "execution_count": 16, + "id": "9f36a741", "metadata": {}, "outputs": [], "source": [ @@ -333,8 +344,8 @@ }, { "cell_type": "code", - "execution_count": 15, - "id": "31aa42bb", + "execution_count": 17, + "id": "9a8ad118", "metadata": {}, "outputs": [], "source": [ @@ -343,8 +354,8 @@ }, { 
"cell_type": "code", - "execution_count": 16, - "id": "93286523", + "execution_count": 18, + "id": "67788105", "metadata": {}, "outputs": [], "source": [ @@ -353,8 +364,8 @@ }, { "cell_type": "code", - "execution_count": 17, - "id": "19c295d7", + "execution_count": 19, + "id": "29b6a2f8", "metadata": {}, "outputs": [], "source": [ @@ -363,8 +374,8 @@ }, { "cell_type": "code", - "execution_count": 18, - "id": "144d0ed1", + "execution_count": 20, + "id": "3031a9b4", "metadata": {}, "outputs": [], "source": [ @@ -385,7 +396,7 @@ }, { "cell_type": "code", - "execution_count": 19, + "execution_count": 21, "id": "65b88609", "metadata": {}, "outputs": [], @@ -395,7 +406,7 @@ }, { "cell_type": "markdown", - "id": "df3ea6c4", + "id": "de864d4b", "metadata": {}, "source": [ "and how much noise to add to functions values in relative terms (```noise=0.05``` means $5\\%$ of $f$ value range in $\\mbox{dom}(f)$)." @@ -403,8 +414,8 @@ }, { "cell_type": "code", - "execution_count": 20, - "id": "bb4a1602", + "execution_count": 22, + "id": "654fbb36", "metadata": {}, "outputs": [], "source": [ @@ -413,7 +424,7 @@ }, { "cell_type": "markdown", - "id": "8194728b", + "id": "1623a135", "metadata": {}, "source": [ "We'll sample some means for the Gaussians in the mix from $\\mbox{dom}(f)$." @@ -421,7 +432,7 @@ }, { "cell_type": "code", - "execution_count": 21, + "execution_count": 23, "id": "76c948a9", "metadata": {}, "outputs": [], @@ -431,8 +442,8 @@ }, { "cell_type": "code", - "execution_count": 22, - "id": "9297899c", + "execution_count": 24, + "id": "35b3ff11", "metadata": {}, "outputs": [], "source": [ @@ -441,7 +452,7 @@ }, { "cell_type": "markdown", - "id": "2267b8a8", + "id": "1fa3f0eb", "metadata": {}, "source": [ "and sample some non-spherical covariance matrices." 
@@ -449,8 +460,8 @@ }, { "cell_type": "code", - "execution_count": 23, - "id": "d3c4dd39", + "execution_count": 25, + "id": "0e059207", "metadata": {}, "outputs": [], "source": [ @@ -459,7 +470,7 @@ }, { "cell_type": "markdown", - "id": "4f872ca3", + "id": "052c0f61", "metadata": {}, "source": [ "We'll also sample ```weights``` for the Gaussians in the mix." @@ -467,8 +478,8 @@ }, { "cell_type": "code", - "execution_count": 24, - "id": "4ed58cdc", + "execution_count": 26, + "id": "c6ef03f3", "metadata": {}, "outputs": [], "source": [ @@ -477,7 +488,7 @@ }, { "cell_type": "markdown", - "id": "82c845dc", + "id": "6eb64832", "metadata": {}, "source": [ "We'll decide on the number of data points $N$ to sample." @@ -485,8 +496,8 @@ }, { "cell_type": "code", - "execution_count": 25, - "id": "5bff16cf", + "execution_count": 27, + "id": "6c384255", "metadata": {}, "outputs": [], "source": [ @@ -495,7 +506,7 @@ }, { "cell_type": "markdown", - "id": "12847e06", + "id": "7ff15760", "metadata": {}, "source": [ "and finally get some samples." @@ -503,7 +514,7 @@ }, { "cell_type": "code", - "execution_count": 26, + "execution_count": 28, "id": "215be556", "metadata": {}, "outputs": [ @@ -511,7 +522,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2022-12-11 15:18:34,137\tINFO worker.py:1519 -- Started a local Ray instance. View the dashboard at \u001b[1m\u001b[32m127.0.0.1:8265 \u001b[39m\u001b[22m\n" + "2022-12-12 11:10:27,508\tINFO worker.py:1519 -- Started a local Ray instance. View the dashboard at \u001b[1m\u001b[32m127.0.0.1:8265 \u001b[39m\u001b[22m\n" ] } ], @@ -521,7 +532,7 @@ }, { "cell_type": "markdown", - "id": "28840f47", + "id": "f263a2d0", "metadata": {}, "source": [ "We'll make sure all data points are indeed in $\\mbox{dom}(f)$." 
@@ -529,8 +540,8 @@ }, { "cell_type": "code", - "execution_count": 27, - "id": "d4ab9d94", + "execution_count": 29, + "id": "3d3e72f8", "metadata": {}, "outputs": [ { @@ -539,7 +550,7 @@ "True" ] }, - "execution_count": 27, + "execution_count": 29, "metadata": {}, "output_type": "execute_result" } @@ -570,7 +581,7 @@ }, { "cell_type": "code", - "execution_count": 28, + "execution_count": 30, "id": "36bc427e", "metadata": {}, "outputs": [], @@ -596,7 +607,7 @@ }, { "cell_type": "code", - "execution_count": 29, + "execution_count": 31, "id": "dd48e7b5", "metadata": {}, "outputs": [], @@ -618,7 +629,7 @@ }, { "cell_type": "code", - "execution_count": 30, + "execution_count": 32, "id": "37f214cc", "metadata": {}, "outputs": [], @@ -628,7 +639,7 @@ }, { "cell_type": "code", - "execution_count": 31, + "execution_count": 33, "id": "e513a87a", "metadata": {}, "outputs": [], @@ -669,7 +680,7 @@ }, { "cell_type": "code", - "execution_count": 32, + "execution_count": 34, "id": "fa7542b3", "metadata": {}, "outputs": [], @@ -681,18 +692,18 @@ }, { "cell_type": "code", - "execution_count": 33, - "id": "4d605964", + "execution_count": 35, + "id": "b380c921", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "True optimum: [ 0.62165481 -0.73054224 -0.92673598 -0.39387107]\n", - "Predicted optimum: [ 0.6216548 -0.73054224 -0.926736 -0.39387107]\n", - "True optimal values: -1.429494476393192\n", - "Predicted optimal value: -1.4525388210010832\n" + "True optimum: [ 0.04689817 -0.15037332 -0.85110166 -0.718292 ]\n", + "Predicted optimum: [ 0.04689817 -0.15037332 -0.85110164 -0.718292 ]\n", + "True optimal values: -1.6728688087017412\n", + "Predicted optimal value: -1.6223043561609414\n" ] } ], @@ -718,8 +729,8 @@ }, { "cell_type": "code", - "execution_count": 34, - "id": "506bf769", + "execution_count": 36, + "id": "597675b3", "metadata": {}, "outputs": [], "source": [ @@ -728,7 +739,7 @@ }, { "cell_type": "code", - "execution_count": 35, + 
"execution_count": 37, "id": "7d46d5c0", "metadata": {}, "outputs": [], @@ -757,7 +768,7 @@ }, { "cell_type": "markdown", - "id": "f9952dfd", + "id": "9354a90f", "metadata": {}, "source": [ "To integrate our simple algorithm into ```doframework```, we need to **resolve** it." @@ -765,8 +776,8 @@ }, { "cell_type": "code", - "execution_count": 36, - "id": "5af09629", + "execution_count": 38, + "id": "08f68293", "metadata": {}, "outputs": [], "source": [ @@ -777,7 +788,7 @@ }, { "cell_type": "markdown", - "id": "5f636898", + "id": "3818b15b", "metadata": {}, "source": [ "## -- Configs\n", @@ -811,8 +822,8 @@ }, { "cell_type": "code", - "execution_count": 37, - "id": "69b5c623", + "execution_count": 39, + "id": "b49bf547", "metadata": {}, "outputs": [], "source": [ @@ -825,7 +836,7 @@ }, { "cell_type": "markdown", - "id": "7a5ab5ea", + "id": "2393e682", "metadata": {}, "source": [ "## -- Inputs\n", @@ -870,7 +881,7 @@ }, { "cell_type": "markdown", - "id": "aabc20e9", + "id": "234d048a", "metadata": {}, "source": [ "## -- Run\n", @@ -881,7 +892,7 @@ { "cell_type": "code", "execution_count": null, - "id": "3e353b5c", + "id": "f74f829e", "metadata": {}, "outputs": [], "source": [