From 296ee040d727bbc685e617f976fe83b421c2033f Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Sat, 10 Dec 2022 10:09:25 -0500 Subject: [PATCH 01/49] Simplify README example --- README.md | 21 +++++++++++++++++---- 1 file changed, 17 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 6a242337c..854929eb4 100644 --- a/README.md +++ b/README.md @@ -89,30 +89,32 @@ If none of these folders contain your Julia binary, then you need to add Julia's Let's create a PySR example. First, let's import numpy to generate some test data: + ```python import numpy as np X = 2 * np.random.randn(100, 5) y = 2.5382 * np.cos(X[:, 3]) + X[:, 0] ** 2 - 0.5 ``` + We have created a dataset with 100 datapoints, with 5 features each. The relation we wish to model is $2.5382 \cos(x_3) + x_0^2 - 0.5$. Now, let's create a PySR model and train it. PySR's main interface is in the style of scikit-learn: + ```python from pysr import PySRRegressor model = PySRRegressor( - model_selection="best", # Result is mix of simplicity+accuracy - niterations=40, + niterations=40, # < Increase me for better results binary_operators=["+", "*"], unary_operators=[ "cos", "exp", "sin", "inv(x) = 1/x", - # ^ Custom operator (julia syntax) + # ^ Custom operator (julia syntax) ], extra_sympy_mappings={"inv": lambda x: 1 / x}, # ^ Define operator for SymPy as well @@ -120,12 +122,15 @@ model = PySRRegressor( # ^ Custom loss function (julia syntax) ) ``` + This will set up the model for 40 iterations of the search code, which contains hundreds of thousands of mutations and equation evaluations. Let's train this model on our dataset: + ```python model.fit(X, y) ``` + Internally, this launches a Julia process which will do a multithreaded search for equations to fit the dataset. Equations will be printed during training, and once you are satisfied, you may @@ -135,10 +140,13 @@ After the model has been fit, you can run `model.predict(X)` to see the predictions on a given dataset. 
You may run: + ```python print(model) ``` + to print the learned equations: + ```python PySRRegressor.equations_ = [ pick score equation loss complexity @@ -150,6 +158,7 @@ PySRRegressor.equations_ = [ 5 >>>> inf (((cos(x3) + -0.19699033) * 2.5382123) + (x0 *... 0.000000 10 ] ``` + This arrow in the `pick` column indicates which equation is currently selected by your `model_selection` strategy for prediction. (You may change `model_selection` after `.fit(X, y)` as well.) @@ -165,6 +174,7 @@ This will cause problems if significant changes are made to the search parameter You will notice that PySR will save two files: `hall_of_fame...csv` and `hall_of_fame...pkl`. The csv file is a list of equations and their losses, and the pkl file is a saved state of the model. You may load the model from the `pkl` file with: + ```python model = PySRRegressor.from_file("hall_of_fame.2022-08-10_100832.281.pkl") ``` @@ -254,22 +264,25 @@ model = PySRRegressor( ) ``` - # Docker You can also test out PySR in Docker, without installing it locally, by running the following command in the root directory of this repo: + ```bash docker build -t pysr . ``` + This builds an image called `pysr` for your system's architecture, which also contains IPython. You can then run this with: + ```bash docker run -it --rm -v "$PWD:/data" pysr ipython ``` + which will link the current directory to the container's `/data` directory and then launch ipython. 
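Editorial note: the patch above simplifies the README quickstart, and its data-generation step can be exercised on its own. A minimal, reproducible sketch follows — note the seeded generator is an illustrative assumption (the README itself uses the unseeded legacy `np.random.randn`):

```python
import numpy as np

# Seed chosen arbitrarily so the dataset is reproducible; the README's
# version is unseeded and will differ from run to run.
rng = np.random.default_rng(0)

# 100 datapoints with 5 features each, matching the README example.
X = 2 * rng.standard_normal((100, 5))

# Target relation from the README: 2.5382*cos(x3) + x0^2 - 0.5
y = 2.5382 * np.cos(X[:, 3]) + X[:, 0] ** 2 - 0.5

print(X.shape, y.shape)
```

This `X` and `y` pair can then be passed directly to `PySRRegressor.fit(X, y)` as shown in the patched README.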
From 52c49ddacd7d3ec887ce8f0ab64d5f0bc757301a Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Tue, 20 Dec 2022 19:29:09 -0500 Subject: [PATCH 02/49] Disable precompilation in PySR colab demo --- examples/pysr_demo.ipynb | 13 ++++--------- 1 file changed, 4 insertions(+), 9 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 337eb6ac0..1e3bc84e9 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -53,8 +53,8 @@ "\n", "#---------------------------------------------------#\n", "JULIA_VERSION=\"1.7.2\"\n", - "JULIA_PACKAGES=\"PyCall SymbolicRegression\"\n", "JULIA_NUM_THREADS=4\n", + "export JULIA_PKG_PRECOMPILE_AUTO=0\n", "#---------------------------------------------------#\n", "\n", "if [ -z `which julia` ]; then\n", @@ -66,12 +66,6 @@ " wget -nv $URL -O /tmp/julia.tar.gz # -nv means \"not verbose\"\n", " tar -x -f /tmp/julia.tar.gz -C /usr/local --strip-components 1\n", " rm /tmp/julia.tar.gz\n", - "\n", - " for PKG in `echo $JULIA_PACKAGES`; do\n", - " echo \"Installing Julia package $PKG...\"\n", - " julia -e 'using Pkg; pkg\"add '$PKG'; precompile;\"'\n", - " done\n", - " \n", " julia -e 'println(\"Success\")'\n", "fi" ] @@ -121,12 +115,13 @@ ] }, { + "attachments": {}, "cell_type": "markdown", "metadata": { "id": "6u2WhbVhht-G" }, "source": [ - "Let's install the backend of PySR, and all required libraries. 
We will also precompile them so they are faster at startup.\n", + "Let's install the backend of PySR, and all required libraries.\n", "\n", "**(This may take some time)**" ] @@ -1223,7 +1218,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.9" + "version": "3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]" }, "widgets": { "application/vnd.jupyter.widget-state+json": { From cc21173bb03a83c2b6c16f371842a6ce9ff5d655 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Tue, 20 Dec 2022 19:33:56 -0500 Subject: [PATCH 03/49] Fix installation of PyCall.jl --- examples/pysr_demo.ipynb | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 1e3bc84e9..e91c76fde 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -66,7 +66,11 @@ " wget -nv $URL -O /tmp/julia.tar.gz # -nv means \"not verbose\"\n", " tar -x -f /tmp/julia.tar.gz -C /usr/local --strip-components 1\n", " rm /tmp/julia.tar.gz\n", + "\n", + " echo \"Installing PyCall.jl...\"\n", + " julia -e 'using Pkg; Pkg.add(\"PyCall\"); Pkg.build(\"PyCall\")'\n", " julia -e 'println(\"Success\")'\n", + "\n", "fi" ] }, From 7c8d588fe262e33b38f01be39dfde5139c17bdad Mon Sep 17 00:00:00 2001 From: Miles Cranmer Date: Fri, 30 Dec 2022 06:51:44 -0500 Subject: [PATCH 04/49] Fix bug with `cluster_manager` not loading --- pysr/sr.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pysr/sr.py b/pysr/sr.py index 2238e6420..1d619bc11 100644 --- a/pysr/sr.py +++ b/pysr/sr.py @@ -1493,7 +1493,7 @@ def _run(self, X, y, mutated_params, weights, seed): Main = init_julia(self.julia_project, julia_kwargs=julia_kwargs) if cluster_manager is not None: - cluster_manager = _load_cluster_manager(cluster_manager) + cluster_manager = _load_cluster_manager(Main, cluster_manager) if self.update: _, is_shared = _process_julia_project(self.julia_project) From 
3af12da2c5f1411bc312fd2d411554d0fb763f56 Mon Sep 17 00:00:00 2001 From: Miles Cranmer Date: Wed, 11 Jan 2023 12:07:31 -0500 Subject: [PATCH 05/49] Mention expression selection in `predict` --- README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 854929eb4..b31763bd2 100644 --- a/README.md +++ b/README.md @@ -137,7 +137,8 @@ Equations will be printed during training, and once you are satisfied, you may quit early by hitting 'q' and then \. After the model has been fit, you can run `model.predict(X)` -to see the predictions on a given dataset. +to see the predictions on a given dataset using the automatically-selected expression, +or, for example, `model.predict(X, 3)` to see the predictions of the 3rd equation. You may run: From d04558686078f4b182ad01ca8fe589918883dab1 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Sat, 14 Jan 2023 15:28:49 -0500 Subject: [PATCH 06/49] Fix latex string tests --- pysr/test/test.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pysr/test/test.py b/pysr/test/test.py index 4d718d909..b19d0ebcf 100644 --- a/pysr/test/test.py +++ b/pysr/test/test.py @@ -838,13 +838,13 @@ def test_latex_float_precision(self): x = sympy.Symbol("x") expr = x * 3232.324857384 - 1.4857485e-10 self.assertEqual( - to_latex(expr, prec=2), "3.2 \cdot 10^{3} x - 1.5 \cdot 10^{-10}" + to_latex(expr, prec=2), r"3.2 \cdot 10^{3} x - 1.5 \cdot 10^{-10}" ) self.assertEqual( - to_latex(expr, prec=3), "3.23 \cdot 10^{3} x - 1.49 \cdot 10^{-10}" + to_latex(expr, prec=3), r"3.23 \cdot 10^{3} x - 1.49 \cdot 10^{-10}" ) self.assertEqual( - to_latex(expr, prec=8), "3232.3249 x - 1.4857485 \cdot 10^{-10}" + to_latex(expr, prec=8), r"3232.3249 x - 1.4857485 \cdot 10^{-10}" ) def test_latex_break_long_equation(self): From 04a27399203b419e0c2769a5db03fdfdc161de5e Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Sat, 14 Jan 2023 15:29:32 -0500 Subject: [PATCH 07/49] Create interactive docs page --- 
docs/interactive-docs.md | 8 ++++++++ mkdocs.yml | 1 + 2 files changed, 9 insertions(+) create mode 100644 docs/interactive-docs.md diff --git a/docs/interactive-docs.md b/docs/interactive-docs.md new file mode 100644 index 000000000..ff11cae31 --- /dev/null +++ b/docs/interactive-docs.md @@ -0,0 +1,8 @@ +# Interactive Reference ⭐ + + + +The following docs are interactive, and, based on your selections, +will create a snippet of Python code at the bottom which you can execute locally. + + \ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index 4fbd94c77..d040ef144 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -34,6 +34,7 @@ nav: - api.md - api-advanced.md - backend.md + - interactive-docs.md extra: homepage: https://astroautomata.com/PySR From 2c3945fb3651fa3684b89bf75648fe3721fa17b5 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Sat, 14 Jan 2023 15:45:48 -0500 Subject: [PATCH 08/49] Link to API --- docs/interactive-docs.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/interactive-docs.md b/docs/interactive-docs.md index ff11cae31..65463ba2a 100644 --- a/docs/interactive-docs.md +++ b/docs/interactive-docs.md @@ -4,5 +4,7 @@ The following docs are interactive, and, based on your selections, will create a snippet of Python code at the bottom which you can execute locally. +Note that this is an incomplete list of options; for the full list, +see the [API Reference](api.md). 
\ No newline at end of file From 3c7634e72afd9a813e5a4fee67a766cb0fb5d656 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Sat, 14 Jan 2023 15:48:25 -0500 Subject: [PATCH 09/49] Use `requirements.txt` for docs requirements --- .github/workflows/docs.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml index 93a6fde92..42c5975a5 100644 --- a/.github/workflows/docs.yml +++ b/.github/workflows/docs.yml @@ -27,7 +27,7 @@ jobs: python-version: 3.9 cache: pip - name: "Install packages for docs building" - run: pip install mkdocs-material mkdocs-autorefs 'mkdocstrings[python]' docstring_parser + run: pip install -r docs/requirements.txt - name: "Install PySR" run: pip install -e . - name: "Build API docs" From bcaeda25b52023c9ee1563fef64b0f2efe53b561 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Sat, 14 Jan 2023 15:52:43 -0500 Subject: [PATCH 10/49] Add missing requirements.txt file --- docs/requirements.txt | 4 ++++ 1 file changed, 4 insertions(+) create mode 100644 docs/requirements.txt diff --git a/docs/requirements.txt b/docs/requirements.txt new file mode 100644 index 000000000..29381320e --- /dev/null +++ b/docs/requirements.txt @@ -0,0 +1,4 @@ +mkdocs-material +mkdocs-autorefs +mkdocstrings[python] +docstring_parser \ No newline at end of file From 07da2ce03b43f011b595a84bf6df02bf28879046 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Sat, 14 Jan 2023 15:55:39 -0500 Subject: [PATCH 11/49] Add README to docs folder --- docs/README.md | 8 ++++++++ 1 file changed, 8 insertions(+) create mode 100644 docs/README.md diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 000000000..34f4e7d85 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,8 @@ +# PySR Documentation + +## Building locally + +1. In the base directory, run `pip install -r docs/requirements.txt`. +2. Install PySR in editable mode: `pip install -e .`. +3. Build doc source with `cd docs && ./gen_docs.sh && cd ..`. 
+4. Create and serve docs with mkdocs: `mkdocs serve -w pysr`. From 6b34f102ffc2de458ee762a972d9c8ebaa393b04 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Mon, 16 Jan 2023 15:31:24 -0500 Subject: [PATCH 12/49] Bump backend version with stream fix; fixes #250 --- pysr/version.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pysr/version.py b/pysr/version.py index 7a2c5b6c1..4793f7df2 100644 --- a/pysr/version.py +++ b/pysr/version.py @@ -1,2 +1,2 @@ -__version__ = "0.11.11" -__symbolic_regression_jl_version__ = "0.14.4" +__version__ = "0.11.12" +__symbolic_regression_jl_version__ = "0.15.0" From f39bca3d6c65f890cc0e8e0a1e7fe777bf6fae5c Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Mon, 16 Jan 2023 15:36:01 -0500 Subject: [PATCH 13/49] Update name to `elementwise_loss` --- pysr/sr.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pysr/sr.py b/pysr/sr.py index 1d619bc11..40df6b902 100644 --- a/pysr/sr.py +++ b/pysr/sr.py @@ -1573,7 +1573,7 @@ def _run(self, X, y, mutated_params, weights, seed): complexity_of_constants=self.complexity_of_constants, complexity_of_variables=self.complexity_of_variables, nested_constraints=nested_constraints, - loss=custom_loss, + elementwise_loss=custom_loss, maxsize=int(self.maxsize), output_file=_escape_filename(self.equation_file_), npopulations=int(self.populations), From 5df9acb33d6c54db11d973c305fb2390a3b64289 Mon Sep 17 00:00:00 2001 From: Miles Cranmer Date: Mon, 16 Jan 2023 16:46:45 -0500 Subject: [PATCH 14/49] Update pypi_deploy.yml --- .github/workflows/pypi_deploy.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/workflows/pypi_deploy.yml b/.github/workflows/pypi_deploy.yml index 52b9d92dc..2cb4c47d3 100644 --- a/.github/workflows/pypi_deploy.yml +++ b/.github/workflows/pypi_deploy.yml @@ -3,6 +3,7 @@ on: push: tags: - 'v*.*.*' + workflow_dispatch: jobs: pypi: From a4ab4f431a891056cef585f3b29ecbd599e71259 Mon Sep 17 00:00:00 2001 From: Miles Cranmer Date: Thu, 19 
Jan 2023 16:48:47 -0500 Subject: [PATCH 15/49] Fix latex_table assertion for multi-output --- pysr/sr.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pysr/sr.py b/pysr/sr.py index 40df6b902..760ec1c84 100644 --- a/pysr/sr.py +++ b/pysr/sr.py @@ -2231,7 +2231,7 @@ def latex_table( if indices is not None: assert isinstance(indices, list) assert isinstance(indices[0], list) - assert isinstance(len(indices), self.nout_) + assert len(indices) == self.nout_ generator_fnc = generate_multiple_tables else: From bfbeb98245255a807ee6b8370baf94c095604488 Mon Sep 17 00:00:00 2001 From: Miles Cranmer Date: Sat, 21 Jan 2023 16:17:54 -0500 Subject: [PATCH 16/49] Update interactive-docs.md --- docs/interactive-docs.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/interactive-docs.md b/docs/interactive-docs.md index 65463ba2a..3c87a5107 100644 --- a/docs/interactive-docs.md +++ b/docs/interactive-docs.md @@ -4,7 +4,8 @@ The following docs are interactive, and, based on your selections, will create a snippet of Python code at the bottom which you can execute locally. +Clicking on each parameter's name will display a description. Note that this is an incomplete list of options; for the full list, see the [API Reference](api.md). - \ No newline at end of file + From 955078fae6564fa73af9a17310927b48dc838ae9 Mon Sep 17 00:00:00 2001 From: Miles Cranmer Date: Mon, 30 Jan 2023 18:16:06 -0500 Subject: [PATCH 17/49] Remove colab link until working again --- README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index b31763bd2..921176da2 100644 --- a/README.md +++ b/README.md @@ -13,9 +13,9 @@ PySR uses evolutionary algorithms to search for symbolic expressions which optim
-| **Docs** | **colab** | **pip** | **conda** | **Stats** | -|---|---|---|---|---| -|[![Documentation](https://github.com/MilesCranmer/PySR/actions/workflows/docs.yml/badge.svg)](https://astroautomata.com/PySR/)|[![Colab](https://img.shields.io/badge/colab-notebook-yellow)](https://colab.research.google.com/github/MilesCranmer/PySR/blob/master/examples/pysr_demo.ipynb)|[![PyPI version](https://badge.fury.io/py/pysr.svg)](https://badge.fury.io/py/pysr)|[![Conda Version](https://img.shields.io/conda/vn/conda-forge/pysr.svg)](https://anaconda.org/conda-forge/pysr)|
pip: [![Downloads](https://pepy.tech/badge/pysr)](https://badge.fury.io/py/pysr)
conda: [![Anaconda-Server Badge](https://anaconda.org/conda-forge/pysr/badges/downloads.svg)](https://anaconda.org/conda-forge/pysr)
| +| **Docs** | **pip** | **conda** | **Stats** | +|---|---|---|---| +|[![Documentation](https://github.com/MilesCranmer/PySR/actions/workflows/docs.yml/badge.svg)](https://astroautomata.com/PySR/)|[![PyPI version](https://badge.fury.io/py/pysr.svg)](https://badge.fury.io/py/pysr)|[![Conda Version](https://img.shields.io/conda/vn/conda-forge/pysr.svg)](https://anaconda.org/conda-forge/pysr)|
pip: [![Downloads](https://pepy.tech/badge/pysr)](https://badge.fury.io/py/pysr)
conda: [![Anaconda-Server Badge](https://anaconda.org/conda-forge/pysr/badges/downloads.svg)](https://anaconda.org/conda-forge/pysr)
|
From 9ae2825cd2df6a7f2283ad7cc5467c7579845ed7 Mon Sep 17 00:00:00 2001 From: Miles Cranmer Date: Tue, 31 Jan 2023 16:35:20 -0500 Subject: [PATCH 18/49] Update examples.md --- docs/examples.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/docs/examples.md b/docs/examples.md index 2b35b6294..14d3321fb 100644 --- a/docs/examples.md +++ b/docs/examples.md @@ -249,9 +249,11 @@ Next, let's use this list of primes to create a dataset of $x, y$ pairs: import numpy as np X = np.random.randint(0, 100, 100)[:, None] -y = [primes[3*X[i, 0] + 1] - 5 for i in range(100)] +y = [primes[3*X[i, 0] + 1] - 5 + np.random.randn()*0.001 for i in range(100)] ``` +Note that we have also added a tiny bit of noise to the dataset. + Finally, let's create a PySR model, and pass the custom operator. We also need to define the sympy equivalent, which we can leave as a placeholder for now: ```python @@ -264,7 +266,7 @@ class sympy_p(sympy.Function): model = PySRRegressor( binary_operators=["+", "-", "*", "/"], unary_operators=["p"], - niterations=1000, + niterations=100, extra_sympy_mappings={"p": sympy_p} ) ``` @@ -276,10 +278,10 @@ model.fit(X, y) ``` if all works out, you should be able to see the true relation (note that the constant offset might not be exactly 1, since it is allowed to round to the nearest integer). -You can get the sympy version of the last row with: +You can get the sympy version of the best equation with: ```python -model.sympy(index=-1) +model.sympy() ``` ## 8. 
Additional features From d3b8ee8ce090ff61ee4f6ddb4ba2aebbe2df7b06 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 1 Feb 2023 21:05:16 -0500 Subject: [PATCH 19/49] Patch colab with fallback runtime --- README.md | 6 +++--- examples/pysr_demo.ipynb | 19 +++++++++++++------ 2 files changed, 16 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index 921176da2..b31763bd2 100644 --- a/README.md +++ b/README.md @@ -13,9 +13,9 @@ PySR uses evolutionary algorithms to search for symbolic expressions which optim
-| **Docs** | **pip** | **conda** | **Stats** | -|---|---|---|---| -|[![Documentation](https://github.com/MilesCranmer/PySR/actions/workflows/docs.yml/badge.svg)](https://astroautomata.com/PySR/)|[![PyPI version](https://badge.fury.io/py/pysr.svg)](https://badge.fury.io/py/pysr)|[![Conda Version](https://img.shields.io/conda/vn/conda-forge/pysr.svg)](https://anaconda.org/conda-forge/pysr)|
pip: [![Downloads](https://pepy.tech/badge/pysr)](https://badge.fury.io/py/pysr)
conda: [![Anaconda-Server Badge](https://anaconda.org/conda-forge/pysr/badges/downloads.svg)](https://anaconda.org/conda-forge/pysr)
| +| **Docs** | **colab** | **pip** | **conda** | **Stats** | +|---|---|---|---|---| +|[![Documentation](https://github.com/MilesCranmer/PySR/actions/workflows/docs.yml/badge.svg)](https://astroautomata.com/PySR/)|[![Colab](https://img.shields.io/badge/colab-notebook-yellow)](https://colab.research.google.com/github/MilesCranmer/PySR/blob/master/examples/pysr_demo.ipynb)|[![PyPI version](https://badge.fury.io/py/pysr.svg)](https://badge.fury.io/py/pysr)|[![Conda Version](https://img.shields.io/conda/vn/conda-forge/pysr.svg)](https://anaconda.org/conda-forge/pysr)|
pip: [![Downloads](https://pepy.tech/badge/pysr)](https://badge.fury.io/py/pysr)
conda: [![Anaconda-Server Badge](https://anaconda.org/conda-forge/pysr/badges/downloads.svg)](https://anaconda.org/conda-forge/pysr)
|
diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index e91c76fde..30550a158 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -10,6 +10,7 @@ ] }, { + "attachments": {}, "cell_type": "markdown", "metadata": { "id": "tQ1r1bbb0yBv" @@ -18,13 +19,19 @@ "\n", "## Instructions\n", "1. Work on a copy of this notebook: _File_ > _Save a copy in Drive_ (you will need a Google account).\n", - "2. (Optional) If you would like to do the deep learning component of this tutorial, turn on the GPU with Edit->Notebook settings->Hardware accelerator->GPU\n", - "3. Execute the following cell (click on it and press Ctrl+Enter) to install Julia, IJulia and other packages (if needed, update `JULIA_VERSION` and the other parameters). This takes a couple of minutes.\n", - "4. Continue to the next section.\n", + "2. (Optional) If you would like to do the deep learning component of this tutorial, ensure the GPU accelerator is turned on, with Edit->Notebook settings->Hardware accelerator->GPU\n", + "3. **Use fallback runtime**. Open the command pallette (bottom left -> second icon from the bottom), and search for \"use fallback runtime\", and hit enter.\n", + " - This is a temporary workaround for a bug in Colab, until the current runtime is patched.\n", + "4. Execute the following cell (click on it and press Ctrl+Enter) to install Julia, IJulia and other packages. This takes a couple of minutes.\n", + "5. Continue to the next section.\n", "\n", "_Notes_:\n", - "* If your Colab Runtime gets reset (e.g., due to inactivity), repeat steps 3, 4.\n", - "* After installation, if you want to change the Julia version or activate/deactivate the GPU, you will need to reset the Runtime: _Runtime_ > _Delete and disconnect runtime_ and repeat steps 2-4." 
+ "* If your Colab Runtime gets reset (e.g., due to inactivity), repeat steps 4-5.\n", + "* After installation, if you want to change the Julia version or activate/deactivate the GPU, you will need to reset the Runtime: _Runtime_ > _Delete and disconnect runtime_ and repeat steps 2-5.\n", + "\n", + "> **Warning**\n", + "> \n", + "> Ensure that you have done step 3 above, otherwise you will get an error when you try to use Julia." ] }, { @@ -1222,7 +1229,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]" + "version": "3.10.9 (main, Dec 15 2022, 17:11:09) [Clang 14.0.0 (clang-1400.0.29.202)]" }, "widgets": { "application/vnd.jupyter.widget-state+json": { From 93f789fd5bd41223cd8cc308c19ca2977a43a638 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 1 Feb 2023 21:12:04 -0500 Subject: [PATCH 20/49] Clean up colab notebook --- examples/pysr_demo.ipynb | 21 ++++++++++++--------- 1 file changed, 12 insertions(+), 9 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 30550a158..00dbfbbc4 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -10,7 +10,15 @@ ] }, { - "attachments": {}, + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "!echo \"Runtime started.\"" + ] + }, + { "cell_type": "markdown", "metadata": { "id": "tQ1r1bbb0yBv" @@ -20,9 +28,9 @@ "## Instructions\n", "1. Work on a copy of this notebook: _File_ > _Save a copy in Drive_ (you will need a Google account).\n", "2. (Optional) If you would like to do the deep learning component of this tutorial, ensure the GPU accelerator is turned on, with Edit->Notebook settings->Hardware accelerator->GPU\n", - "3. **Use fallback runtime**. Open the command pallette (bottom left -> second icon from the bottom), and search for \"use fallback runtime\", and hit enter.\n", + "3. 
**Use fallback runtime**. Run the above cell (`!echo \"Runtime started.\"`) to start the runtime. Now, open the command pallette (bottom left -> second icon from the bottom), and search for \"use fallback runtime\", and hit enter.\n", " - This is a temporary workaround for a bug in Colab, until the current runtime is patched.\n", - "4. Execute the following cell (click on it and press Ctrl+Enter) to install Julia, IJulia and other packages. This takes a couple of minutes.\n", + "4. Execute the following cell (click on it and press Ctrl+Enter or Shift+Enter) to install Julia, IJulia and other packages. This takes a couple of minutes.\n", "5. Continue to the next section.\n", "\n", "_Notes_:\n", @@ -1214,11 +1222,6 @@ "name": "pysr_demo.ipynb", "provenance": [] }, - "kernelspec": { - "display_name": "Python (main_ipynb)", - "language": "python", - "name": "main_ipynb" - }, "language_info": { "codemirror_mode": { "name": "ipython", @@ -1229,7 +1232,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.9 (main, Dec 15 2022, 17:11:09) [Clang 14.0.0 (clang-1400.0.29.202)]" + "version": "3.10.9" }, "widgets": { "application/vnd.jupyter.widget-state+json": { From 1f0b56077cc5386af507ac24765ef7e51199e50e Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Thu, 2 Feb 2023 07:20:53 -0500 Subject: [PATCH 21/49] Remove metadata from colab notebook --- examples/pysr_demo.ipynb | 868 ++------------------------------------- 1 file changed, 28 insertions(+), 840 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 00dbfbbc4..a35c51569 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -55,11 +55,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "GIeFXS0F0zww", - "outputId": "f25272ac-a660-42fd-d739-82778e6d7415" + "id": "GIeFXS0F0zww" }, "outputs": [], "source": [ @@ -111,7 +107,9 @@ }, { "cell_type": 
"markdown", - "metadata": {}, + "metadata": { + "id": "etTMEV0wDqld" + }, "source": [ "The following step is not normally required, but colab's printing is non-standard and we need to manually set it up PyJulia:\n" ] @@ -134,7 +132,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": { "id": "6u2WhbVhht-G" @@ -149,11 +146,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "J-0QbxyK1_51", - "outputId": "3742d548-3574-4739-80b1-ee9e906e4b57" + "id": "J-0QbxyK1_51" }, "outputs": [], "source": [ @@ -255,11 +248,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, "id": "p4PSrO-NK1Wa", - "outputId": "474b4780-5d94-4795-88b9-225030b17abe", "scrolled": true }, "outputs": [], @@ -288,11 +277,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "4HR8gknlZz4W", - "outputId": "606c26ad-6a7d-42e0-a014-fc84bc898ef7" + "id": "4HR8gknlZz4W" }, "outputs": [], "source": [ @@ -312,12 +297,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 38 - }, - "id": "IQKOohdpztS7", - "outputId": "b0538aca-3916-4d2b-f5f2-f394cdba3dad" + "id": "IQKOohdpztS7" }, "outputs": [], "source": [ @@ -337,12 +317,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 38 - }, - "id": "GRcxq-TTlpRX", - "outputId": "5e13599e-d469-4110-94a4-689023b40717" + "id": "GRcxq-TTlpRX" }, "outputs": [], "source": [ @@ -371,12 +346,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 35 - }, - "id": "HFGaNL6tbDgi", - "outputId": "260b0db4-862f-4101-c494-8b03756ed126" + "id": "HFGaNL6tbDgi" }, "outputs": [], "source": [ @@ -398,11 +368,7 @@ 
"cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "Vbz4IMsk2NYH", - "outputId": "c0eb8aeb-6656-40a2-eeaf-cd733a2593b8" + "id": "Vbz4IMsk2NYH" }, "outputs": [], "source": [ @@ -462,11 +428,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, "id": "PoEkpvYuGUdy", - "outputId": "18493db1-67e3-4493-f5e7-19277dd003d9", "scrolled": true }, "outputs": [], @@ -485,12 +447,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 38 - }, - "id": "emn2IajKbDgy", - "outputId": "7bfade39-b95e-4314-8d46-854e2421c496" + "id": "emn2IajKbDgy" }, "outputs": [], "source": [ @@ -612,12 +569,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 298 - }, - "id": "sqMqb4nJ5ZR5", - "outputId": "7f06a215-f3b6-4053-fc98-5da227a388a3" + "id": "sqMqb4nJ5ZR5" }, "outputs": [], "source": [ @@ -650,11 +602,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "v8WBYtcZbDhC", - "outputId": "de926074-1742-4fa3-fe44-188307e9214c" + "id": "v8WBYtcZbDhC" }, "outputs": [], "source": [ @@ -674,11 +622,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, "id": "a07K3KUjOxcp", - "outputId": "629898f5-36aa-4616-bdb7-b41e07619a02", "scrolled": true }, "outputs": [], @@ -706,11 +650,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "oHyUbcg6ggmx", - "outputId": "a261b520-5a89-42c9-982d-b05b905885fb" + "id": "oHyUbcg6ggmx" }, "outputs": [], "source": [ @@ -730,12 +670,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - 
"height": 38 - }, - "id": "PB67POLr8b_L", - "outputId": "69e4d94b-a00a-4e59-a781-38571b5f42e0" + "id": "PB67POLr8b_L" }, "outputs": [], "source": [ @@ -767,12 +702,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 265 - }, - "id": "ezCC0IkS8zFf", - "outputId": "d928f975-3843-4430-93fa-4ae3a18abb51" + "id": "ezCC0IkS8zFf" }, "outputs": [], "source": [ @@ -813,11 +743,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "SXJGXySlbDhL", - "outputId": "ea1af43c-3823-4779-f981-f8ccc431d055" + "id": "SXJGXySlbDhL" }, "outputs": [], "source": [ @@ -981,11 +907,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "1ldN0999bDhU", - "outputId": "269efa86-3331-4ba8-a763-4f26823c8532" + "id": "1ldN0999bDhU" }, "outputs": [], "source": [ @@ -1008,11 +930,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "33R2nrv-b62w", - "outputId": "20a8f626-bb4b-4d08-d8ed-e14fa80b2402" + "id": "33R2nrv-b62w" }, "outputs": [], "source": [ @@ -1032,36 +950,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 240, - "referenced_widgets": [ - "8f7ca3dc340c4b6ca7607fd5c94c79d0", - "cbf3650d65dc4b8986a569d9700207a2", - "b8f6f05741f94fd49901887de806eb1d", - "ad40f001306a44ecb54f62ab0adda91f", - "e4b89b77f1c94abdbffc3ae9a931a148", - "ca1e9af3973845c1b2daee8df04f5050", - "d638dbb7e0a846ebaca5487f0f384b75", - "441f56016ea143d98f2185b42009cc68", - "957b8217f89d449ea226c8d211ca055d", - "c5956a7501e649319193f6899e6f94af", - "e1cc5d5e17ed43ebbd1420cbfbd06758", - "0882e10f5ceb4917b6455b74cf4facfb", - "5867c0acc2a84f7196aa782f0dc6d4bc", - "a243c86070ac4590a99ec844c2cbf677", - "397d0a9190f042d8969d96b766e93b90", - 
"76abe6940c3642ea90bbab0409d47f80", - "c3d9456987764530a49f80fd08a8a058", - "4419e5228ebb46578d550c3f24096c92", - "c1b5d69fe13445179345852b4c5c4a3f", - "ff544275991b474981f0e55a01a4739a", - "d8747da8b9984e12bf4d7f352a3c0a21", - "6c06f7dfa9d84d62b218d8182c164bf9" - ] - }, - "id": "TXZdF8k1bDhY", - "outputId": "6b1b7f68-5dd8-4613-95a7-776fc71c841f" + "id": "TXZdF8k1bDhY" }, "outputs": [], "source": [ @@ -1083,11 +972,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "s2sQLla5bDhb", - "outputId": "5b48a0b0-6c5e-4e9a-8bfe-e9f888af4b9d" + "id": "s2sQLla5bDhb" }, "outputs": [], "source": [ @@ -1117,11 +1002,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, "id": "51QdHVSkbDhc", - "outputId": "3058d58a-dbbd-4e78-c810-439a5fca76e6", "scrolled": true }, "outputs": [], @@ -1218,710 +1099,17 @@ "metadata": { "accelerator": "GPU", "colab": { - "collapsed_sections": [], "name": "pysr_demo.ipynb", "provenance": [] }, "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.9" - }, - "widgets": { - "application/vnd.jupyter.widget-state+json": { - "0882e10f5ceb4917b6455b74cf4facfb": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "HBoxModel", - "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "HBoxModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", - "_view_name": "HBoxView", - "box_style": "", - "children": [ - "IPY_MODEL_5867c0acc2a84f7196aa782f0dc6d4bc", - "IPY_MODEL_a243c86070ac4590a99ec844c2cbf677", - "IPY_MODEL_397d0a9190f042d8969d96b766e93b90" 
- ], - "layout": "IPY_MODEL_76abe6940c3642ea90bbab0409d47f80" - } - }, - "397d0a9190f042d8969d96b766e93b90": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "HTMLModel", - "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "HTMLModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", - "_view_name": "HTMLView", - "description": "", - "description_tooltip": null, - "layout": "IPY_MODEL_d8747da8b9984e12bf4d7f352a3c0a21", - "placeholder": "​", - "style": "IPY_MODEL_6c06f7dfa9d84d62b218d8182c164bf9", - "value": " 2120/2442 [00:18<00:02, 116.15it/s, loss=6.39, v_num=14]" - } - }, - "4419e5228ebb46578d550c3f24096c92": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "DescriptionStyleModel", - "state": { - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "DescriptionStyleModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "StyleView", - "description_width": "" - } - }, - "441f56016ea143d98f2185b42009cc68": { - "model_module": "@jupyter-widgets/base", - "model_module_version": "1.2.0", - "model_name": "LayoutModel", - "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": null, - "flex": "2", - "flex_flow": null, - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": 
null, - "grid_template_columns": null, - "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, - "right": null, - "top": null, - "visibility": null, - "width": null - } - }, - "5867c0acc2a84f7196aa782f0dc6d4bc": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "HTMLModel", - "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "HTMLModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", - "_view_name": "HTMLView", - "description": "", - "description_tooltip": null, - "layout": "IPY_MODEL_c3d9456987764530a49f80fd08a8a058", - "placeholder": "​", - "style": "IPY_MODEL_4419e5228ebb46578d550c3f24096c92", - "value": "Epoch 0: 87%" - } - }, - "6c06f7dfa9d84d62b218d8182c164bf9": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "DescriptionStyleModel", - "state": { - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "DescriptionStyleModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "StyleView", - "description_width": "" - } - }, - "76abe6940c3642ea90bbab0409d47f80": { - "model_module": "@jupyter-widgets/base", - "model_module_version": "1.2.0", - "model_name": "LayoutModel", - "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - 
"_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": "inline-flex", - "flex": null, - "flex_flow": "row wrap", - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, - "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, - "right": null, - "top": null, - "visibility": null, - "width": "100%" - } - }, - "8f7ca3dc340c4b6ca7607fd5c94c79d0": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "HBoxModel", - "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "HBoxModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", - "_view_name": "HBoxView", - "box_style": "", - "children": [ - "IPY_MODEL_cbf3650d65dc4b8986a569d9700207a2", - "IPY_MODEL_b8f6f05741f94fd49901887de806eb1d", - "IPY_MODEL_ad40f001306a44ecb54f62ab0adda91f" - ], - "layout": "IPY_MODEL_e4b89b77f1c94abdbffc3ae9a931a148" - } - }, - "957b8217f89d449ea226c8d211ca055d": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "ProgressStyleModel", - "state": { - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "ProgressStyleModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "StyleView", - 
"bar_color": null, - "description_width": "" - } - }, - "a243c86070ac4590a99ec844c2cbf677": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "FloatProgressModel", - "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "FloatProgressModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", - "_view_name": "ProgressView", - "bar_style": "", - "description": "", - "description_tooltip": null, - "layout": "IPY_MODEL_c1b5d69fe13445179345852b4c5c4a3f", - "max": 2442, - "min": 0, - "orientation": "horizontal", - "style": "IPY_MODEL_ff544275991b474981f0e55a01a4739a", - "value": 2120 - } - }, - "ad40f001306a44ecb54f62ab0adda91f": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "HTMLModel", - "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "HTMLModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", - "_view_name": "HTMLView", - "description": "", - "description_tooltip": null, - "layout": "IPY_MODEL_c5956a7501e649319193f6899e6f94af", - "placeholder": "​", - "style": "IPY_MODEL_e1cc5d5e17ed43ebbd1420cbfbd06758", - "value": " 2/2 [00:00<00:00, 55.34it/s]" - } - }, - "b8f6f05741f94fd49901887de806eb1d": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "FloatProgressModel", - "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "FloatProgressModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", - "_view_name": "ProgressView", - "bar_style": "", - "description": "", - "description_tooltip": null, - "layout": 
"IPY_MODEL_441f56016ea143d98f2185b42009cc68", - "max": 2, - "min": 0, - "orientation": "horizontal", - "style": "IPY_MODEL_957b8217f89d449ea226c8d211ca055d", - "value": 2 - } - }, - "c1b5d69fe13445179345852b4c5c4a3f": { - "model_module": "@jupyter-widgets/base", - "model_module_version": "1.2.0", - "model_name": "LayoutModel", - "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": null, - "flex": "2", - "flex_flow": null, - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, - "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, - "right": null, - "top": null, - "visibility": null, - "width": null - } - }, - "c3d9456987764530a49f80fd08a8a058": { - "model_module": "@jupyter-widgets/base", - "model_module_version": "1.2.0", - "model_name": "LayoutModel", - "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": null, - "flex": null, - "flex_flow": null, - 
"grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, - "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, - "right": null, - "top": null, - "visibility": null, - "width": null - } - }, - "c5956a7501e649319193f6899e6f94af": { - "model_module": "@jupyter-widgets/base", - "model_module_version": "1.2.0", - "model_name": "LayoutModel", - "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": null, - "flex": null, - "flex_flow": null, - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, - "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, - "right": null, - "top": null, - "visibility": null, - "width": null - } - }, - "ca1e9af3973845c1b2daee8df04f5050": { - "model_module": 
"@jupyter-widgets/base", - "model_module_version": "1.2.0", - "model_name": "LayoutModel", - "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": null, - "flex": null, - "flex_flow": null, - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, - "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, - "right": null, - "top": null, - "visibility": null, - "width": null - } - }, - "cbf3650d65dc4b8986a569d9700207a2": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "HTMLModel", - "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "HTMLModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", - "_view_name": "HTMLView", - "description": "", - "description_tooltip": null, - "layout": "IPY_MODEL_ca1e9af3973845c1b2daee8df04f5050", - "placeholder": "​", - "style": "IPY_MODEL_d638dbb7e0a846ebaca5487f0f384b75", - "value": "Sanity Checking DataLoader 0: 100%" - } - }, - "d638dbb7e0a846ebaca5487f0f384b75": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - 
"model_name": "DescriptionStyleModel", - "state": { - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "DescriptionStyleModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "StyleView", - "description_width": "" - } - }, - "d8747da8b9984e12bf4d7f352a3c0a21": { - "model_module": "@jupyter-widgets/base", - "model_module_version": "1.2.0", - "model_name": "LayoutModel", - "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": null, - "flex": null, - "flex_flow": null, - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, - "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, - "right": null, - "top": null, - "visibility": null, - "width": null - } - }, - "e1cc5d5e17ed43ebbd1420cbfbd06758": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "DescriptionStyleModel", - "state": { - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "DescriptionStyleModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - 
"_view_name": "StyleView", - "description_width": "" - } - }, - "e4b89b77f1c94abdbffc3ae9a931a148": { - "model_module": "@jupyter-widgets/base", - "model_module_version": "1.2.0", - "model_name": "LayoutModel", - "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": "inline-flex", - "flex": null, - "flex_flow": "row wrap", - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, - "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, - "right": null, - "top": null, - "visibility": null, - "width": "100%" - } - }, - "ff544275991b474981f0e55a01a4739a": { - "model_module": "@jupyter-widgets/controls", - "model_module_version": "1.5.0", - "model_name": "ProgressStyleModel", - "state": { - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "ProgressStyleModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "StyleView", - "bar_color": null, - "description_width": "" - } - } - } - } + "name": "python" + }, + "kernelspec": { + "name": "python3", + "display_name": "Python 3" + }, + "gpuClass": "standard" }, "nbformat": 4, "nbformat_minor": 0 From 
4068c1816887bd28b928677ca1d24ed8dac24c76 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Thu, 2 Feb 2023 11:51:29 -0500 Subject: [PATCH 22/49] Add more examples to colab notebook --- examples/pysr_demo.ipynb | 270 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 270 insertions(+) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index a35c51569..e4bf434b1 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -711,6 +711,276 @@ "plt.scatter(X[:, 0], y_prediction)\n" ] }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Multiple outputs" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For multiple outputs, multiple equations are returned:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "X = 2 * np.random.randn(100, 5)\n", + "y = 1 / X[:, [0, 1, 2]]\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model = PySRRegressor(\n", + " binary_operators=[\"+\", \"*\"],\n", + " unary_operators=[\"inv(x) = 1/x\"],\n", + " extra_sympy_mappings={\"inv\": lambda x: 1/x},\n", + ")\n", + "model.fit(X, y)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Julia packages and types" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "PySR uses [SymbolicRegression.jl](https://github.com/MilesCranmer/SymbolicRegression.jl)\n", + "as its search backend. 
This is a pure Julia package, and so can interface easily with any other\n", + "Julia package.\n", + "For some tasks, it may be necessary to load such a package.\n", + "\n", + "For example, let's say we wish to discover the following relationship:\n", + "\n", + "$$ y = p_{3x + 1} - 5, $$\n", + "\n", + "where $p_i$ is the $i$th prime number, and $x$ is the input feature.\n", + "\n", + "Let's see if we can discover this using\n", + "the [Primes.jl](https://github.com/JuliaMath/Primes.jl) package.\n", + "\n", + "First, let's get the Julia backend\n", + "(here, we manually specify 8 threads and `-O3` - although this will only work if PySR has not yet started):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import pysr\n", + "jl = pysr.julia_helpers.init_julia(julia_kwargs={\"threads\": 8, \"optimize\": 3})" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "`jl` stores the Julia runtime.\n", + "\n", + "Now, let's run some Julia code to add the Primes.jl\n", + "package to the PySR environment:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "jl.eval(\"\"\"\n", + "import Pkg\n", + "Pkg.add(\"Primes\")\n", + "\"\"\")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This imports the Julia package manager, and uses it to install\n", + "`Primes.jl`. 
Now let's import `Primes.jl`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "jl.eval(\"import Primes\")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "Now, we define a custom operator:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "jl.eval(\"\"\"\n", + "function p(i::T) where T\n", + " if (0.5 < i < 1000)\n", + " return T(Primes.prime(round(Int, i)))\n", + " else\n", + " return T(NaN)\n", + " end\n", + "end\n", + "\"\"\")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "We have created a a function `p`, which takes an arbitrary number as input.\n", + "`p` first checks whether the input is between 0.5 and 1000.\n", + "If out-of-bounds, it returns `NaN`.\n", + "If in-bounds, it rounds it to the nearest integer, compures the corresponding prime number, and then\n", + "converts it to the same type as input.\n", + "\n", + "Next, let's generate a list of primes for our test dataset.\n", + "Since we are using PyJulia, we can just call `p` directly to do this:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "primes = {i: jl.p(i*1.0) for i in range(1, 999)}" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, let's use this list of primes to create a dataset of $x, y$ pairs:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "X = np.random.randint(0, 100, 100)[:, None]\n", + "y = [primes[3*X[i, 0] + 1] - 5 + np.random.randn()*0.001 for i in range(100)]" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that we have also added a tiny bit of noise to the 
dataset.\n", + "\n", + "Finally, let's create a PySR model, and pass the custom operator. We also need to define the sympy equivalent, which we can leave as a placeholder for now:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from pysr import PySRRegressor\n", + "import sympy\n", + "\n", + "class sympy_p(sympy.Function):\n", + " pass\n", + "\n", + "model = PySRRegressor(\n", + " binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n", + " unary_operators=[\"p\"],\n", + " niterations=100,\n", + " extra_sympy_mappings={\"p\": sympy_p}\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We are all set to go! Let's see if we can find the true relation:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model.fit(X, y)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If all works out, you should be able to see the true relation (note that the constant offset might not be exactly 1, since it is allowed to round to the nearest integer).\n", + "\n", + "You can get the sympy version of the best equation with:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model.sympy()" + ] + }, { "cell_type": "markdown", "metadata": { From 0b335aec2d46e3d7f487e159af86e0c8e635f55b Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Mon, 6 Feb 2023 16:39:04 -0500 Subject: [PATCH 23/49] Revert "Patch colab with fallback runtime" This reverts commit d3b8ee8ce090ff61ee4f6ddb4ba2aebbe2df7b06. 
--- README.md | 6 +++--- examples/pysr_demo.ipynb | 25 +++++-------------------- 2 files changed, 8 insertions(+), 23 deletions(-) diff --git a/README.md b/README.md index b31763bd2..921176da2 100644 --- a/README.md +++ b/README.md @@ -13,9 +13,9 @@ PySR uses evolutionary algorithms to search for symbolic expressions which optim
-| **Docs** | **colab** | **pip** | **conda** | **Stats** | -|---|---|---|---|---| -|[![Documentation](https://github.com/MilesCranmer/PySR/actions/workflows/docs.yml/badge.svg)](https://astroautomata.com/PySR/)|[![Colab](https://img.shields.io/badge/colab-notebook-yellow)](https://colab.research.google.com/github/MilesCranmer/PySR/blob/master/examples/pysr_demo.ipynb)|[![PyPI version](https://badge.fury.io/py/pysr.svg)](https://badge.fury.io/py/pysr)|[![Conda Version](https://img.shields.io/conda/vn/conda-forge/pysr.svg)](https://anaconda.org/conda-forge/pysr)|
pip: [![Downloads](https://pepy.tech/badge/pysr)](https://badge.fury.io/py/pysr)
conda: [![Anaconda-Server Badge](https://anaconda.org/conda-forge/pysr/badges/downloads.svg)](https://anaconda.org/conda-forge/pysr)
| +| **Docs** | **pip** | **conda** | **Stats** | +|---|---|---|---| +|[![Documentation](https://github.com/MilesCranmer/PySR/actions/workflows/docs.yml/badge.svg)](https://astroautomata.com/PySR/)|[![PyPI version](https://badge.fury.io/py/pysr.svg)](https://badge.fury.io/py/pysr)|[![Conda Version](https://img.shields.io/conda/vn/conda-forge/pysr.svg)](https://anaconda.org/conda-forge/pysr)|
pip: [![Downloads](https://pepy.tech/badge/pysr)](https://badge.fury.io/py/pysr)
conda: [![Anaconda-Server Badge](https://anaconda.org/conda-forge/pysr/badges/downloads.svg)](https://anaconda.org/conda-forge/pysr)
|
diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index e4bf434b1..63abb2f90 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -9,15 +9,6 @@ "# Setup" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "!echo \"Runtime started.\"" - ] - }, { "cell_type": "markdown", "metadata": { @@ -27,19 +18,13 @@ "\n", "## Instructions\n", "1. Work on a copy of this notebook: _File_ > _Save a copy in Drive_ (you will need a Google account).\n", - "2. (Optional) If you would like to do the deep learning component of this tutorial, ensure the GPU accelerator is turned on, with Edit->Notebook settings->Hardware accelerator->GPU\n", - "3. **Use fallback runtime**. Run the above cell (`!echo \"Runtime started.\"`) to start the runtime. Now, open the command pallette (bottom left -> second icon from the bottom), and search for \"use fallback runtime\", and hit enter.\n", - " - This is a temporary workaround for a bug in Colab, until the current runtime is patched.\n", - "4. Execute the following cell (click on it and press Ctrl+Enter or Shift+Enter) to install Julia, IJulia and other packages. This takes a couple of minutes.\n", - "5. Continue to the next section.\n", + "2. (Optional) If you would like to do the deep learning component of this tutorial, turn on the GPU with Edit->Notebook settings->Hardware accelerator->GPU\n", + "3. Execute the following cell (click on it and press Ctrl+Enter) to install Julia, IJulia and other packages (if needed, update `JULIA_VERSION` and the other parameters). This takes a couple of minutes.\n", + "4. 
Continue to the next section.\n", "\n", "_Notes_:\n", - "* If your Colab Runtime gets reset (e.g., due to inactivity), repeat steps 4-5.\n", - "* After installation, if you want to change the Julia version or activate/deactivate the GPU, you will need to reset the Runtime: _Runtime_ > _Delete and disconnect runtime_ and repeat steps 2-5.\n", - "\n", - "> **Warning**\n", - "> \n", - "> Ensure that you have done step 3 above, otherwise you will get an error when you try to use Julia." + "* If your Colab Runtime gets reset (e.g., due to inactivity), repeat steps 3, 4.\n", + "* After installation, if you want to change the Julia version or activate/deactivate the GPU, you will need to reset the Runtime: _Runtime_ > _Delete and disconnect runtime_ and repeat steps 2-4." ] }, { From a54d0fabe92fa0dd55f87667d4a2418d07921737 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Mon, 6 Feb 2023 16:58:22 -0500 Subject: [PATCH 24/49] Ensure multithreaded and optimized init in demo --- examples/pysr_demo.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 63abb2f90..fa129de6d 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -109,7 +109,7 @@ "source": [ "from julia import Julia\n", "\n", - "julia = Julia(compiled_modules=False)\n", + "julia = Julia(compiled_modules=False, threads='auto', optimize=3)\n", "from julia import Main\n", "from julia.tools import redirect_output_streams\n", "\n", From de1e8dd618223a032d965753bd234089bbd97c46 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Mon, 6 Feb 2023 17:36:57 -0500 Subject: [PATCH 25/49] Better explain pure-Julia function --- examples/pysr_demo.ipynb | 32 +++++++++++++++++++++++--------- 1 file changed, 23 insertions(+), 9 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index fa129de6d..8e1e87943 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -846,7 +846,7 @@ "source": [ 
"jl.eval(\"\"\"\n", "function p(i::T) where T\n", - " if (0.5 < i < 1000)\n", + " if 0.5 < i < 1000\n", " return T(Primes.prime(round(Int, i)))\n", " else\n", " return T(NaN)\n", @@ -861,12 +861,26 @@ "metadata": {}, "source": [ "\n", - "We have created a a function `p`, which takes an arbitrary number as input.\n", + "We have created a function `p`, which takes a number `i` of type `T` (e.g., `T=Float64`).\n", "`p` first checks whether the input is between 0.5 and 1000.\n", "If out-of-bounds, it returns `NaN`.\n", - "If in-bounds, it rounds it to the nearest integer, compures the corresponding prime number, and then\n", + "If in-bounds, it rounds it to the nearest integer, computes the corresponding prime number, and then\n", "converts it to the same type as input.\n", "\n", + "The equivalent function in Python would be:\n", + "\n", + "```python\n", + "import sympy\n", + "\n", + "def p(i):\n", + " if 0.5 < i < 1000:\n", + " return float(sympy.prime(int(round(i))))\n", + " else:\n", + " return float(\"nan\")\n", + "```\n", + "\n", + "(However, note that this version assumes 64-bit float input, rather than any input type `T`)\n", + "\n", "Next, let's generate a list of primes for our test dataset.\n", "Since we are using PyJulia, we can just call `p` directly to do this:\n" ] @@ -1357,14 +1371,14 @@ "name": "pysr_demo.ipynb", "provenance": [] }, - "language_info": { - "name": "python" - }, + "gpuClass": "standard", "kernelspec": { - "name": "python3", - "display_name": "Python 3" + "display_name": "Python 3", + "name": "python3" }, - "gpuClass": "standard" + "language_info": { + "name": "python" + } }, "nbformat": 4, "nbformat_minor": 0 From 48009833aef981bebcdecddbdb04184f26702f82 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Mon, 6 Feb 2023 19:21:02 -0500 Subject: [PATCH 26/49] Add back colab link --- README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 921176da2..b31763bd2 100644 --- a/README.md +++ 
b/README.md @@ -13,9 +13,9 @@ PySR uses evolutionary algorithms to search for symbolic expressions which optim
-| **Docs** | **pip** | **conda** | **Stats** | -|---|---|---|---| -|[![Documentation](https://github.com/MilesCranmer/PySR/actions/workflows/docs.yml/badge.svg)](https://astroautomata.com/PySR/)|[![PyPI version](https://badge.fury.io/py/pysr.svg)](https://badge.fury.io/py/pysr)|[![Conda Version](https://img.shields.io/conda/vn/conda-forge/pysr.svg)](https://anaconda.org/conda-forge/pysr)|
pip: [![Downloads](https://pepy.tech/badge/pysr)](https://badge.fury.io/py/pysr)
conda: [![Anaconda-Server Badge](https://anaconda.org/conda-forge/pysr/badges/downloads.svg)](https://anaconda.org/conda-forge/pysr)
| +| **Docs** | **colab** | **pip** | **conda** | **Stats** | +|---|---|---|---|---| +|[![Documentation](https://github.com/MilesCranmer/PySR/actions/workflows/docs.yml/badge.svg)](https://astroautomata.com/PySR/)|[![Colab](https://img.shields.io/badge/colab-notebook-yellow)](https://colab.research.google.com/github/MilesCranmer/PySR/blob/master/examples/pysr_demo.ipynb)|[![PyPI version](https://badge.fury.io/py/pysr.svg)](https://badge.fury.io/py/pysr)|[![Conda Version](https://img.shields.io/conda/vn/conda-forge/pysr.svg)](https://anaconda.org/conda-forge/pysr)|
pip: [![Downloads](https://pepy.tech/badge/pysr)](https://badge.fury.io/py/pysr)
conda: [![Anaconda-Server Badge](https://anaconda.org/conda-forge/pysr/badges/downloads.svg)](https://anaconda.org/conda-forge/pysr)
|
From 67c22c772228297c04e8eaa4df4e4d981606e669 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 21:15:24 -0500 Subject: [PATCH 27/49] Tweak PySR demo --- examples/pysr_demo.ipynb | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 8e1e87943..54a18ffdc 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -990,6 +990,7 @@ ] }, { + "attachments": {}, "cell_type": "markdown", "metadata": { "id": "3hS2kTAbbDhL" @@ -999,9 +1000,9 @@ "\n", "Let's consider a time series problem:\n", "\n", - "$$ z = y^2,\\quad y = \\frac{1}{100} \\sum(y_i),\\quad y_i = x_{i0}^2 + 6 \\cos(2*x_{i2})$$\n", + "$$ z = y^2,\\quad y = \\frac{1}{10} \\sum(y_i),\\quad y_i = x_{i0}^2 + 6 \\cos(2*x_{i2})$$\n", "\n", - "Imagine our time series is 100 timesteps. That is very hard for symbolic regression, even if we impose the inductive bias of $$z=f(\\sum g(x_i))$$ - it is the square of the number of possible equations!\n", + "Imagine our time series is 10 timesteps. That is very hard for symbolic regression, even if we impose the inductive bias of $$z=f(\\sum g(x_i))$$ - it is the square of the number of possible equations!\n", "\n", "But, as in our paper, **we can break this problem down into parts with a neural network. 
Then approximate the neural network with the symbolic regression!**\n", "\n", @@ -1018,7 +1019,7 @@ "source": [ "###### np.random.seed(0)\n", "N = 100000\n", - "Nt = 100\n", + "Nt = 10\n", "X = 6 * np.random.rand(N, Nt, 5) - 3\n", "y_i = X[..., 0] ** 2 + 6 * np.cos(2 * X[..., 2])\n", "y = np.sum(y_i, axis=1) / y_i.shape[1]\n", @@ -1299,6 +1300,7 @@ ] }, { + "attachments": {}, "cell_type": "markdown", "metadata": { "id": "6WuaeqyqbDhe" @@ -1306,7 +1308,7 @@ "source": [ "Recall we are searching for $y_i$ above:\n", "\n", - "$$ z = y^2,\\quad y = \\frac{1}{100} \\sum(y_i),\\quad y_i = x_{i0}^2 + 6 \\cos(2 x_{i2})$$" + "$$ z = y^2,\\quad y = \\frac{1}{10} \\sum(y_i),\\quad y_i = x_{i0}^2 + 6 \\cos(2 x_{i2})$$" ] }, { @@ -1373,11 +1375,13 @@ }, "gpuClass": "standard", "kernelspec": { - "display_name": "Python 3", - "name": "python3" + "display_name": "Python (main_ipynb)", + "language": "python", + "name": "main_ipynb" }, "language_info": { - "name": "python" + "name": "python", + "version": "3.10.9" } }, "nbformat": 4, From 1c3eec518e9246e5f82ead39aa5989abeede088a Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 21:43:28 -0500 Subject: [PATCH 28/49] Make precompilation optional --- pysr/julia_helpers.py | 16 ++++++++++++++-- pysr/version.py | 2 +- 2 files changed, 15 insertions(+), 3 deletions(-) diff --git a/pysr/julia_helpers.py b/pysr/julia_helpers.py index 2eafa67c6..de3212fe7 100644 --- a/pysr/julia_helpers.py +++ b/pysr/julia_helpers.py @@ -65,7 +65,7 @@ def _get_io_arg(quiet): return io_arg -def install(julia_project=None, quiet=False): # pragma: no cover +def install(julia_project=None, quiet=False, precompile=True): # pragma: no cover """ Install PyCall.jl and all required dependencies for SymbolicRegression.jl. 
@@ -78,17 +78,29 @@ def install(julia_project=None, quiet=False): # pragma: no cover processed_julia_project, is_shared = _process_julia_project(julia_project) _set_julia_project_env(processed_julia_project, is_shared) + if not precompile: + os.environ["JULIA_PKG_PRECOMPILE_AUTO"] = "0" + julia.install(quiet=quiet) Main = init_julia(julia_project, quiet=quiet) io_arg = _get_io_arg(quiet) + if not precompile: + Main.eval('ENV["JULIA_PKG_PRECOMPILE_AUTO"] = 0') + if is_shared: # Install SymbolicRegression.jl: _add_sr_to_julia_project(Main, io_arg) Main.eval("using Pkg") Main.eval(f"Pkg.instantiate({io_arg})") - Main.eval(f"Pkg.precompile({io_arg})") + + if precompile and ( + "JULIA_PKG_PRECOMPILE_AUTO" not in os.environ + or str(os.environ["JULIA_PKG_PRECOMPILE_AUTO"]) != "0" + ): + Main.eval(f"Pkg.precompile({io_arg})") + if not quiet: warnings.warn( "It is recommended to restart Python after installing PySR's dependencies," diff --git a/pysr/version.py b/pysr/version.py index 4793f7df2..32d1262c5 100644 --- a/pysr/version.py +++ b/pysr/version.py @@ -1,2 +1,2 @@ -__version__ = "0.11.12" +__version__ = "0.11.13" __symbolic_regression_jl_version__ = "0.15.0" From a74e3083de0ca94831bdae67df57308685802dd2 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 21:54:30 -0500 Subject: [PATCH 29/49] Automatically disable precompilation for static binaries --- pysr/julia_helpers.py | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/pysr/julia_helpers.py b/pysr/julia_helpers.py index de3212fe7..41f04a5c6 100644 --- a/pysr/julia_helpers.py +++ b/pysr/julia_helpers.py @@ -65,7 +65,7 @@ def _get_io_arg(quiet): return io_arg -def install(julia_project=None, quiet=False, precompile=True): # pragma: no cover +def install(julia_project=None, quiet=False, precompile=None): # pragma: no cover """ Install PyCall.jl and all required dependencies for SymbolicRegression.jl. 
@@ -82,9 +82,12 @@ def install(julia_project=None, quiet=False, precompile=True): # pragma: no cov os.environ["JULIA_PKG_PRECOMPILE_AUTO"] = "0" julia.install(quiet=quiet) - Main = init_julia(julia_project, quiet=quiet) + Main, init_log = init_julia(julia_project, quiet=quiet, return_aux=True) io_arg = _get_io_arg(quiet) + if precompile is None and not init_log["compiled_modules"]: + precompile = False + if not precompile: Main.eval('ENV["JULIA_PKG_PRECOMPILE_AUTO"] = 0') @@ -157,7 +160,7 @@ def _check_for_conflicting_libraries(): # pragma: no cover ) -def init_julia(julia_project=None, quiet=False, julia_kwargs=None): +def init_julia(julia_project=None, quiet=False, julia_kwargs=None, return_aux=False): """Initialize julia binary, turning off compiled modules if needed.""" global julia_initialized global julia_kwargs_at_initialization @@ -195,6 +198,10 @@ def init_julia(julia_project=None, quiet=False, julia_kwargs=None): julia_kwargs = {**julia_kwargs, "compiled_modules": False} Julia(**julia_kwargs) + using_compiled_modules = (not "compiled_modules" in julia_kwargs) or julia_kwargs[ + "compiled_modules" + ] + from julia import Main as _Main Main = _Main @@ -234,6 +241,8 @@ def init_julia(julia_project=None, quiet=False, julia_kwargs=None): julia_kwargs_at_initialization = julia_kwargs julia_initialized = True + if return_aux: + return Main, {"compiled_modules": using_compiled_modules} return Main From 883c4d81ea768b22c5e2d7d03232826e58e16e8c Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 21:56:03 -0500 Subject: [PATCH 30/49] Turn off precompilation in colab demo --- examples/pysr_demo.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 54a18ffdc..40408cce1 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -137,7 +137,7 @@ "source": [ "import pysr\n", "\n", - "pysr.install()\n" + "pysr.install(precompile=False)\n" ] }, { From 
bce8e64c154b1fca224037b321b1766380e5abfa Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 21:58:25 -0500 Subject: [PATCH 31/49] Only test Julia 1.6 on Ubuntu --- .github/workflows/CI_Windows.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/CI_Windows.yml b/.github/workflows/CI_Windows.yml index baa9a1b8e..b6751e51d 100644 --- a/.github/workflows/CI_Windows.yml +++ b/.github/workflows/CI_Windows.yml @@ -29,7 +29,7 @@ jobs: shell: bash strategy: matrix: - julia-version: ['1.6', '1.8.2'] + julia-version: ['1.8.2'] python-version: ['3.9'] os: [windows-latest] From b31f59413f7554d410320a9449013b1e0d53d9ed Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 22:19:08 -0500 Subject: [PATCH 32/49] Clean up colab notebook --- examples/pysr_demo.ipynb | 71 ++++++++++++++++++++-------------------- 1 file changed, 35 insertions(+), 36 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 40408cce1..372ffcffd 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -109,11 +109,11 @@ "source": [ "from julia import Julia\n", "\n", - "julia = Julia(compiled_modules=False, threads='auto', optimize=3)\n", + "julia = Julia(compiled_modules=False, threads='auto')\n", "from julia import Main\n", "from julia.tools import redirect_output_streams\n", "\n", - "redirect_output_streams()\n" + "redirect_output_streams()" ] }, { @@ -137,7 +137,8 @@ "source": [ "import pysr\n", "\n", - "pysr.install(precompile=False)\n" + "# We don't precompile in colab because compiled modules are incompatible with static Python libraries:\n", + "pysr.install(precompile=False)" ] }, { @@ -157,7 +158,7 @@ "from torch.nn import functional as F\n", "from torch.utils.data import DataLoader, TensorDataset\n", "import pytorch_lightning as pl\n", - "from sklearn.model_selection import train_test_split\n" + "from sklearn.model_selection import train_test_split" ] }, { @@ -191,7 +192,7 @@ "# Dataset\n",
"np.random.seed(0)\n", "X = 2 * np.random.randn(100, 5)\n", - "y = 2.5382 * np.cos(X[:, 3]) + X[:, 0] ** 2 - 2\n" + "y = 2.5382 * np.cos(X[:, 3]) + X[:, 0] ** 2 - 2" ] }, { @@ -215,7 +216,7 @@ " populations=30,\n", " procs=4,\n", " model_selection=\"best\",\n", - ")\n" + ")" ] }, { @@ -246,7 +247,7 @@ " **default_pysr_params\n", ")\n", "\n", - "model.fit(X, y)\n" + "model.fit(X, y)" ] }, { @@ -266,7 +267,7 @@ }, "outputs": [], "source": [ - "model\n" + "model" ] }, { @@ -286,7 +287,7 @@ }, "outputs": [], "source": [ - "model.sympy()\n" + "model.sympy()" ] }, { @@ -306,7 +307,7 @@ }, "outputs": [], "source": [ - "model.sympy(2)\n" + "model.sympy(2)" ] }, { @@ -335,7 +336,7 @@ }, "outputs": [], "source": [ - "model.latex()\n" + "model.latex()" ] }, { @@ -361,7 +362,7 @@ "ypredict_simpler = model.predict(X, 2)\n", "\n", "print(\"Default selection MSE:\", np.power(ypredict - y, 2).mean())\n", - "print(\"Manual selection MSE for index 2:\", np.power(ypredict_simpler - y, 2).mean())\n" + "print(\"Manual selection MSE for index 2:\", np.power(ypredict_simpler - y, 2).mean())" ] }, { @@ -395,7 +396,7 @@ }, "outputs": [], "source": [ - "y = X[:, 0] ** 4 - 2\n" + "y = X[:, 0] ** 4 - 2" ] }, { @@ -425,7 +426,7 @@ " unary_operators=[\"cos\", \"exp\", \"sin\", \"quart(x) = x^4\"],\n", " extra_sympy_mappings={\"quart\": lambda x: x**4},\n", ")\n", - "model.fit(X, y)\n" + "model.fit(X, y)" ] }, { @@ -436,7 +437,7 @@ }, "outputs": [], "source": [ - "model.sympy()\n" + "model.sympy()" ] }, { @@ -538,7 +539,7 @@ "X = 2 * np.random.rand(N, 5)\n", "sigma = np.random.rand(N) * (5 - 0.1) + 0.1\n", "eps = sigma * np.random.randn(N)\n", - "y = 5 * np.cos(3.5 * X[:, 0]) - 1.3 + eps\n" + "y = 5 * np.cos(3.5 * X[:, 0]) - 1.3 + eps" ] }, { @@ -560,7 +561,7 @@ "source": [ "plt.scatter(X[:, 0], y, alpha=0.2)\n", "plt.xlabel(\"$x_0$\")\n", - "plt.ylabel(\"$y$\")\n" + "plt.ylabel(\"$y$\")" ] }, { @@ -580,7 +581,7 @@ }, "outputs": [], "source": [ - "weights = 1 / sigma ** 2\n" + "weights = 1 / 
sigma ** 2" ] }, { @@ -591,7 +592,7 @@ }, "outputs": [], "source": [ - "weights[:5]\n" + "weights[:5]" ] }, { @@ -619,7 +620,7 @@ " binary_operators=[\"plus\", \"mult\"],\n", " unary_operators=[\"cos\"],\n", ")\n", - "model.fit(X, y, weights=weights)\n" + "model.fit(X, y, weights=weights)" ] }, { @@ -639,7 +640,7 @@ }, "outputs": [], "source": [ - "model\n" + "model" ] }, { @@ -662,7 +663,7 @@ "best_idx = model.equations_.query(\n", " f\"loss < {2 * model.equations_.loss.min()}\"\n", ").score.idxmax()\n", - "model.sympy(best_idx)\n" + "model.sympy(best_idx)" ] }, { @@ -693,7 +694,7 @@ "source": [ "plt.scatter(X[:, 0], y, alpha=0.1)\n", "y_prediction = model.predict(X, index=best_idx)\n", - "plt.scatter(X[:, 0], y_prediction)\n" + "plt.scatter(X[:, 0], y_prediction)" ] }, { @@ -719,7 +720,7 @@ "outputs": [], "source": [ "X = 2 * np.random.randn(100, 5)\n", - "y = 1 / X[:, [0, 1, 2]]\n" + "y = 1 / X[:, [0, 1, 2]]" ] }, { @@ -1024,7 +1025,7 @@ "y_i = X[..., 0] ** 2 + 6 * np.cos(2 * X[..., 2])\n", "y = np.sum(y_i, axis=1) / y_i.shape[1]\n", "z = y**2\n", - "X.shape, y.shape\n" + "X.shape, y.shape" ] }, { @@ -1117,7 +1118,7 @@ " ),\n", " \"interval\": \"step\",\n", " }\n", - " return [optimizer], [scheduler]\n" + " return [optimizer], [scheduler]" ] }, { @@ -1152,7 +1153,7 @@ "train_set = TensorDataset(X_train, z_train)\n", "train = DataLoader(train_set, batch_size=128, num_workers=2)\n", "test_set = TensorDataset(X_test, z_test)\n", - "test = DataLoader(test_set, batch_size=256, num_workers=2)\n" + "test = DataLoader(test_set, batch_size=256, num_workers=2)" ] }, { @@ -1184,7 +1185,7 @@ "pl.seed_everything(0)\n", "model = SumNet()\n", "model.total_steps = total_steps\n", - "model.max_lr = 1e-2\n" + "model.max_lr = 1e-2" ] }, { @@ -1204,7 +1205,7 @@ }, "outputs": [], "source": [ - "trainer = pl.Trainer(max_steps=total_steps, gpus=1, benchmark=True)\n" + "trainer = pl.Trainer(max_steps=total_steps, gpus=1, benchmark=True)" ] }, { @@ -1224,7 +1225,7 @@ }, "outputs": [], 
"source": [ - "trainer.fit(model, train_dataloaders=train, val_dataloaders=test)\n" + "trainer.fit(model, train_dataloaders=train, val_dataloaders=test)" ] }, { @@ -1254,7 +1255,7 @@ "y_for_pysr = torch.sum(y_i_for_pysr, dim=1) / y_i_for_pysr.shape[1]\n", "z_for_pysr = zt[idx] # Use true values.\n", "\n", - "X_for_pysr.shape, y_i_for_pysr.shape\n" + "X_for_pysr.shape, y_i_for_pysr.shape" ] }, { @@ -1287,7 +1288,7 @@ " binary_operators=[\"plus\", \"sub\", \"mult\"],\n", " unary_operators=[\"cos\", \"square\", \"neg\"],\n", ")\n", - "model.fit(X=tmpX[idx2], y=tmpy[idx2])\n" + "model.fit(X=tmpX[idx2], y=tmpy[idx2])" ] }, { @@ -1319,7 +1320,7 @@ }, "outputs": [], "source": [ - "model\n" + "model" ] }, { @@ -1375,9 +1376,7 @@ }, "gpuClass": "standard", "kernelspec": { - "display_name": "Python (main_ipynb)", - "language": "python", - "name": "main_ipynb" + "language": "python" }, "language_info": { "name": "python", From b77fef8f75e7affed0e70a188bf16b6ee4201581 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 22:32:30 -0500 Subject: [PATCH 33/49] Disable broken step of deployment action --- .github/workflows/pypi_deploy.yml | 6 ------ 1 file changed, 6 deletions(-) diff --git a/.github/workflows/pypi_deploy.yml b/.github/workflows/pypi_deploy.yml index 2cb4c47d3..9124e35a3 100644 --- a/.github/workflows/pypi_deploy.yml +++ b/.github/workflows/pypi_deploy.yml @@ -9,12 +9,6 @@ jobs: pypi: runs-on: ubuntu-latest steps: - - name: Wait for tests to pass - uses: lewagon/wait-on-check-action@v1.2.0 - with: - ref: ${{ github.ref }} - check-name: 'Linux' - repo-token: ${{ secrets.GITHUB_TOKEN }} - name: "Checkout" uses: actions/checkout@v3 - name: "Set up Python" From a6b35617f2c0398874cb7020c4478e413ee3bb23 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 22:41:21 -0500 Subject: [PATCH 34/49] Keep precompilation on if compiled modules enabled --- pysr/julia_helpers.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/pysr/julia_helpers.py 
b/pysr/julia_helpers.py index 41f04a5c6..e4638dbdd 100644 --- a/pysr/julia_helpers.py +++ b/pysr/julia_helpers.py @@ -87,6 +87,8 @@ def install(julia_project=None, quiet=False, precompile=None): # pragma: no cov if precompile is None and not init_log["compiled_modules"]: precompile = False + else: + precompile = True if not precompile: Main.eval('ENV["JULIA_PKG_PRECOMPILE_AUTO"] = 0') From 73e83d45b280ec7cf06267735adabe073f762aee Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 22:56:01 -0500 Subject: [PATCH 35/49] Fix precompilation logic --- pysr/julia_helpers.py | 13 ++++--------- 1 file changed, 4 insertions(+), 9 deletions(-) diff --git a/pysr/julia_helpers.py b/pysr/julia_helpers.py index e4638dbdd..b0ec30d24 100644 --- a/pysr/julia_helpers.py +++ b/pysr/julia_helpers.py @@ -78,17 +78,15 @@ def install(julia_project=None, quiet=False, precompile=None): # pragma: no cov processed_julia_project, is_shared = _process_julia_project(julia_project) _set_julia_project_env(processed_julia_project, is_shared) - if not precompile: + if precompile == False: os.environ["JULIA_PKG_PRECOMPILE_AUTO"] = "0" julia.install(quiet=quiet) Main, init_log = init_julia(julia_project, quiet=quiet, return_aux=True) io_arg = _get_io_arg(quiet) - if precompile is None and not init_log["compiled_modules"]: - precompile = False - else: - precompile = True + if precompile is None: + precompile = init_log["compiled_modules"] if not precompile: Main.eval('ENV["JULIA_PKG_PRECOMPILE_AUTO"] = 0') @@ -100,10 +98,7 @@ def install(julia_project=None, quiet=False, precompile=None): # pragma: no cov Main.eval("using Pkg") Main.eval(f"Pkg.instantiate({io_arg})") - if precompile and ( - "JULIA_PKG_PRECOMPILE_AUTO" not in os.environ - or str(os.environ["JULIA_PKG_PRECOMPILE_AUTO"]) != "0" - ): + if precompile: Main.eval(f"Pkg.precompile({io_arg})") if not quiet: From 09c365aede9b1bce5c903d62c54df22576426160 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 22:58:30 
-0500 Subject: [PATCH 36/49] Fix name of CI for conda --- .github/workflows/CI.yml | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/.github/workflows/CI.yml b/.github/workflows/CI.yml index e238de2c8..8be9ae637 100644 --- a/.github/workflows/CI.yml +++ b/.github/workflows/CI.yml @@ -83,7 +83,6 @@ jobs: shell: bash -l {0} strategy: matrix: - julia-version: ['1.7.1'] python-version: ['3.9'] os: ['ubuntu-latest'] @@ -108,7 +107,7 @@ jobs: - name: "Cache Julia" uses: julia-actions/cache@v1 with: - cache-name: ${{ matrix.os }}-conda-${{ matrix.julia-version }}-${{ matrix.python-version }} + cache-name: ${{ matrix.os }}-conda-${{ matrix.python-version }} cache-packages: false - name: "Install PySR" run: | From ff70e94b36c45d60e9d19cb37010ffc38083e95f Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 23:30:43 -0500 Subject: [PATCH 37/49] Update Julia version in colab --- examples/pysr_demo.ipynb | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 372ffcffd..31a6335de 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -48,8 +48,7 @@ "set -e\n", "\n", "#---------------------------------------------------#\n", - "JULIA_VERSION=\"1.7.2\"\n", - "JULIA_NUM_THREADS=4\n", + "JULIA_VERSION=\"1.8.5\"\n", "export JULIA_PKG_PRECOMPILE_AUTO=0\n", "#---------------------------------------------------#\n", "\n", @@ -214,7 +213,6 @@ "source": [ "default_pysr_params = dict(\n", " populations=30,\n", - " procs=4,\n", " model_selection=\"best\",\n", ")" ] @@ -1376,7 +1374,9 @@ }, "gpuClass": "standard", "kernelspec": { - "language": "python" + "display_name": "Python (main_ipynb)", + "language": "python", + "name": "main_ipynb" }, "language_info": { "name": "python", From 1e15ee2fc36973c981c07f1f29c1edacb0aef9fa Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 23:39:40 -0500 Subject: [PATCH 38/49] Tweak threading settings in colab --- 
examples/pysr_demo.ipynb | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 31a6335de..653854e21 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -772,7 +772,8 @@ "the [Primes.jl](https://github.com/JuliaMath/Primes.jl) package.\n", "\n", "First, let's get the Julia backend\n", - "(here, we manually specify 4 threads and `-O3` - although this will only work if PySR has not yet started):" + "Here, we might choose to manually specify unlimited threads, `-O2`,\n", + "and `compiled_modules=False`, although this will only propagate if Julia has not yet started:" ] }, { @@ -782,7 +783,9 @@ "outputs": [], "source": [ "import pysr\n", - "jl = pysr.julia_helpers.init_julia(julia_kwargs={\"threads\": 8, \"optimize\": 3})" + "jl = pysr.julia_helpers.init_julia(\n", + " julia_kwargs={\"threads\": \"auto\", \"optimize\": 2, \"compiled_modules\": False}\n", + ")" ] }, { From bb70c4282a63f403b7bd1c5cce699847d5515ed5 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 23:41:04 -0500 Subject: [PATCH 39/49] Tweak default niterations in colab --- examples/pysr_demo.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 653854e21..788dee2a6 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -941,7 +941,7 @@ "model = PySRRegressor(\n", " binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n", " unary_operators=[\"p\"],\n", - " niterations=100,\n", + " niterations=20,\n", " extra_sympy_mappings={\"p\": sympy_p}\n", ")" ] From 0d22412b8875885cf26c6abb144e56fc2479578d Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 23:45:52 -0500 Subject: [PATCH 40/49] Update deprecated lightning call --- examples/pysr_demo.ipynb | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 788dee2a6..0e413d696 100644 
--- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -1206,7 +1206,9 @@ }, "outputs": [], "source": [ - "trainer = pl.Trainer(max_steps=total_steps, gpus=1, benchmark=True)" + "trainer = pl.Trainer(\n", + " max_steps=total_steps, accelerator=\"gpu\", devices=1, benchmark=True\n", + ")\n" ] }, { From 1ec3ee8f6b9c4a12728b430cbc539edea4505cba Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Wed, 8 Feb 2023 23:59:45 -0500 Subject: [PATCH 41/49] Reduce length of DL part of tutorial --- examples/pysr_demo.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 0e413d696..3f85e0c97 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -1064,7 +1064,7 @@ "outputs": [], "source": [ "hidden = 128\n", - "total_steps = 50000\n", + "total_steps = 10_000\n", "\n", "\n", "def mlp(size_in, size_out, act=nn.ReLU):\n", @@ -1284,7 +1284,7 @@ "np.random.seed(1)\n", "tmpX = X_for_pysr.detach().numpy().reshape(-1, 5)\n", "tmpy = y_i_for_pysr.detach().numpy().reshape(-1)\n", - "idx2 = np.random.randint(0, tmpy.shape[0], size=3000)\n", + "idx2 = np.random.randint(0, tmpy.shape[0], size=500)\n", "\n", "model = PySRRegressor(\n", " niterations=20,\n", From 86d9c0b5d45e72ddabece61aa5fdc5f11e65ce22 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Thu, 9 Feb 2023 00:30:06 -0500 Subject: [PATCH 42/49] Expand colab notebook --- examples/pysr_demo.ipynb | 79 ++++++++++++++++++++++++++++++++++++---- 1 file changed, 71 insertions(+), 8 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 3f85e0c97..67c112dda 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -1262,6 +1262,7 @@ ] }, { + "attachments": {}, "cell_type": "markdown", "metadata": { "id": "nCCIvvAGuyFi" @@ -1269,7 +1270,60 @@ "source": [ "## Learning over the network:\n", "\n", - "Now, let's fit `g` using PySR:" + "Now, let's fit `g` using PySR.\n", + "\n", + "> **Warning**\n", + 
">\n", + "> First, let's save the data, because sometimes PyTorch and PyJulia's C bindings interfere and cause the colab kernel to crash. If we need to restart, we can just load the data without having to retrain the network:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "nnet_recordings = {\n", + " \"g_input\": X_for_pysr.detach().cpu().numpy().reshape(-1, 5),\n", + " \"g_output\": y_i_for_pysr.detach().cpu().numpy().reshape(-1),\n", + " \"f_input\": y_for_pysr.detach().cpu().numpy().reshape(-1, 1),\n", + " \"f_output\": z_for_pysr.detach().cpu().numpy().reshape(-1),\n", + "}\n", + "\n", + "# Save the data for later use:\n", + "import pickle as pkl\n", + "\n", + "with open(\"nnet_recordings.pkl\", \"wb\") as f:\n", + " pkl.dump(nnet_recordings, f)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now load the data:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "nnet_recordings = pkl.load(open(\"nnet_recordings.pkl\", \"rb\"))\n", + "f_input = nnet_recordings[\"f_input\"]\n", + "f_output = nnet_recordings[\"f_output\"]\n", + "g_input = nnet_recordings[\"g_input\"]\n", + "g_output = nnet_recordings[\"g_output\"]" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "And now fit using a subsample of the data (symbolic regression only needs a small sample to find the best equation):" ] }, { @@ -1281,17 +1335,15 @@ }, "outputs": [], "source": [ - "np.random.seed(1)\n", - "tmpX = X_for_pysr.detach().numpy().reshape(-1, 5)\n", - "tmpy = y_i_for_pysr.detach().numpy().reshape(-1)\n", - "idx2 = np.random.randint(0, tmpy.shape[0], size=500)\n", + "rstate = np.random.RandomState(0)\n", + "f_sample_idx = rstate.choice(f_input.shape[0], size=500, replace=False)\n", "\n", "model = PySRRegressor(\n", " niterations=20,\n", " 
binary_operators=[\"plus\", \"sub\", \"mult\"],\n", " unary_operators=[\"cos\", \"square\", \"neg\"],\n", ")\n", - "model.fit(X=tmpX[idx2], y=tmpy[idx2])" + "model.fit(g_input[f_sample_idx], g_output[f_sample_idx])" ] }, { @@ -1310,9 +1362,12 @@ "id": "6WuaeqyqbDhe" }, "source": [ - "Recall we are searching for $y_i$ above:\n", + "Recall we are searching for $f$ and $g$ such that:\n", + "$$z=f(\\sum g(x_i))$$ \n", + "which approximates the true relation:\n", + "$$ z = y^2,\\quad y = \\frac{1}{10} \\sum(y_i),\\quad y_i = x_{i0}^2 + 6 \\cos(2 x_{i2})$$\n", "\n", - "$$ z = y^2,\\quad y = \\frac{1}{10} \\sum(y_i),\\quad y_i = x_{i0}^2 + 6 \\cos(2 x_{i2})$$" + "Let's see how well we did in recovering $g$:" ] }, { @@ -1384,7 +1439,15 @@ "name": "main_ipynb" }, "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", "version": "3.10.9" } }, From 906587429cccaf44b3f61f40ea1371ad4445baf5 Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Thu, 9 Feb 2023 00:42:02 -0500 Subject: [PATCH 43/49] Add missing pickle import --- examples/pysr_demo.ipynb | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 67c112dda..24397f61e 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -1302,7 +1302,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We can now load the data:" + "We can now load the data, including after a crash (be sure to re-run the import cells at the top of this notebook, including the one that starts PyJulia)." 
] }, { @@ -1311,6 +1311,8 @@ "metadata": {}, "outputs": [], "source": [ + "import pickle as pkl\n", + "\n", "nnet_recordings = pkl.load(open(\"nnet_recordings.pkl\", \"rb\"))\n", "f_input = nnet_recordings[\"f_input\"]\n", "f_output = nnet_recordings[\"f_output\"]\n", From 1f233a4738d4b3537114ede52b388fcf5886f96b Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Thu, 9 Feb 2023 00:53:05 -0500 Subject: [PATCH 44/49] Expand docs in colab notebook --- examples/pysr_demo.ipynb | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 24397f61e..4434f502d 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -1391,7 +1391,9 @@ "source": [ "A neural network can easily undo a linear transform, so this is fine: the network for $f$ will learn to undo the linear transform.\n", "\n", - "Then, we can learn another analytic equation for $z$." + "This likely won't find the exact result, but it should find something similar. You may wish to try again but with many more `total_steps` for the neural network (10,000 is quite small!).\n", + "\n", + "Then, we can learn another analytic equation for $f$." 
] }, { From a1a766e11ed2b9c3b9e19232dcedac49e5b91d2e Mon Sep 17 00:00:00 2001 From: MilesCranmer Date: Thu, 9 Feb 2023 17:49:01 -0500 Subject: [PATCH 45/49] Fix torch segfault in colab example --- examples/pysr_demo.ipynb | 85 ++++++++++++++++++++++------------------ 1 file changed, 46 insertions(+), 39 deletions(-) diff --git a/examples/pysr_demo.ipynb b/examples/pysr_demo.ipynb index 4434f502d..68271b1aa 100644 --- a/examples/pysr_demo.ipynb +++ b/examples/pysr_demo.ipynb @@ -152,11 +152,6 @@ "import numpy as np\n", "from matplotlib import pyplot as plt\n", "from pysr import PySRRegressor\n", - "import torch\n", - "from torch import nn, optim\n", - "from torch.nn import functional as F\n", - "from torch.utils.data import DataLoader, TensorDataset\n", - "import pytorch_lightning as pl\n", "from sklearn.model_selection import train_test_split" ] }, @@ -232,8 +227,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "id": "p4PSrO-NK1Wa", - "scrolled": true + "id": "p4PSrO-NK1Wa" }, "outputs": [], "source": [ @@ -412,8 +406,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "id": "PoEkpvYuGUdy", - "scrolled": true + "id": "PoEkpvYuGUdy" }, "outputs": [], "source": [ @@ -606,8 +599,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "id": "a07K3KUjOxcp", - "scrolled": true + "id": "a07K3KUjOxcp" }, "outputs": [], "source": [ @@ -947,8 +939,8 @@ ] }, { - "attachments": {}, "cell_type": "markdown", + "id": "ee30bd41", "metadata": {}, "source": [ "We are all set to go! 
Let's see if we can find the true relation:" @@ -1019,10 +1011,13 @@ }, "outputs": [], "source": [ - "###### np.random.seed(0)\n", + "import numpy as np\n", + "\n", + "rstate = np.random.RandomState(0)\n", + "\n", "N = 100000\n", "Nt = 10\n", - "X = 6 * np.random.rand(N, Nt, 5) - 3\n", + "X = 6 * rstate.rand(N, Nt, 5) - 3\n", "y_i = X[..., 0] ** 2 + 6 * np.cos(2 * X[..., 2])\n", "y = np.sum(y_i, axis=1) / y_i.shape[1]\n", "z = y**2\n", @@ -1055,6 +1050,17 @@ "Then, we will fit `g` and `f` **separately** using symbolic regression." ] }, + { + "cell_type": "markdown", + "metadata": { + "id": "aca54ffa" + }, + "source": [ + "> **Warning**\n", + ">\n", + "> We import torch *after* already starting PyJulia. This is required due to interference between their C bindings. If you use torch, and then run PyJulia, you will likely hit a segfault. So keep this in mind for mixed deep learning + PyJulia/PySR workflows." + ] + }, { "cell_type": "code", "execution_count": null, @@ -1063,9 +1069,14 @@ }, "outputs": [], "source": [ - "hidden = 128\n", - "total_steps = 10_000\n", + "import torch\n", + "from torch import nn, optim\n", + "from torch.nn import functional as F\n", + "from torch.utils.data import DataLoader, TensorDataset\n", + "import pytorch_lightning as pl\n", "\n", + "hidden = 128\n", + "total_steps = 30_000\n", "\n", "def mlp(size_in, size_out, act=nn.ReLU):\n", " return nn.Sequential(\n", @@ -1148,13 +1159,14 @@ }, "outputs": [], "source": [ + "from multiprocessing import cpu_count\n", "Xt = torch.tensor(X).float()\n", "zt = torch.tensor(z).float()\n", "X_train, X_test, z_train, z_test = train_test_split(Xt, zt, random_state=0)\n", "train_set = TensorDataset(X_train, z_train)\n", - "train = DataLoader(train_set, batch_size=128, num_workers=2)\n", + "train = DataLoader(train_set, batch_size=128, num_workers=cpu_count(), shuffle=True, pin_memory=True)\n", "test_set = TensorDataset(X_test, z_test)\n", - "test = DataLoader(test_set, batch_size=256, num_workers=2)" + 
"test = DataLoader(test_set, batch_size=256, num_workers=cpu_count(), pin_memory=True)"
   ]
  },
  {
@@ -1207,8 +1219,8 @@
   "outputs": [],
   "source": [
    "trainer = pl.Trainer(\n",
-    "    max_steps=total_steps, accelerator=\"gpu\", devices=1, benchmark=True\n",
-    ")\n"
+    "    max_steps=total_steps, accelerator=\"gpu\", devices=1\n",
+    ")"
   ]
  },
  {
@@ -1262,7 +1274,6 @@
    ]
   },
   {
-   "attachments": {},
    "cell_type": "markdown",
    "metadata": {
     "id": "nCCIvvAGuyFi"
@@ -1332,8 +1343,7 @@
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
-    "id": "51QdHVSkbDhc",
-    "scrolled": true
+    "id": "51QdHVSkbDhc"
   },
   "outputs": [],
   "source": [
@@ -1348,6 +1358,15 @@
     "model.fit(g_input[f_sample_idx], g_output[f_sample_idx])"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "1a738a33"
+   },
+   "source": [
+    "If this segfaults, restart the notebook, and run the initial imports and PyJulia part, but skip the PyTorch training. This is because PyTorch's C bindings tend to interfere with PyJulia. You can then re-run the `pkl.load` cell to import the data."
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {
@@ -1380,7 +1399,7 @@
    },
    "outputs": [],
    "source": [
-    "model"
+    "model.equations_[[\"complexity\", \"loss\", \"equation\"]]"
    ]
   },
@@ -1389,7 +1408,7 @@
    "id": "mlU1hidZkgCY"
   },
   "source": [
-    "A neural network can easily undo a linear transform, so this is fine: the network for $f$ will learn to undo the linear transform.\n",
+    "A neural network can easily undo a linear transform (which commutes with the summation), so any affine transform in $g$ is to be expected. The network for $f$ has learned to undo the linear transform.\n",
     "\n",
     "This likely won't find the exact result, but it should find something similar. 
You may wish to try again but with many more `total_steps` for the neural network (10,000 is quite small!).\n", "\n", @@ -1438,21 +1457,9 @@ }, "gpuClass": "standard", "kernelspec": { - "display_name": "Python (main_ipynb)", + "display_name": "Python 3", "language": "python", - "name": "main_ipynb" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.9" + "name": "python3" } }, "nbformat": 4, From f5d41d2bebde7b4d62ac8ff1cfb09e3bd6863784 Mon Sep 17 00:00:00 2001 From: Miles Cranmer Date: Sun, 12 Feb 2023 19:34:07 -0500 Subject: [PATCH 46/49] Link colab in introduction --- README.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index b31763bd2..c3e0c57a0 100644 --- a/README.md +++ b/README.md @@ -87,8 +87,10 @@ If none of these folders contain your Julia binary, then you need to add Julia's # Introduction -Let's create a PySR example. First, let's import -numpy to generate some test data: +You might wish to try the interactive tutorial [here](https://colab.research.google.com/github/MilesCranmer/PySR/blob/master/examples/pysr_demo.ipynb), which uses the notebook in `examples/pysr_demo.ipynb`. + +We also give a quick demo here which you can paste into a Python runtime. 
+
First, let's import numpy to generate some test data:
 
 ```python
 import numpy as np

From 25164ef901c00697f320ac7f59883fa40ac84ee7 Mon Sep 17 00:00:00 2001
From: Miles Cranmer
Date: Sun, 12 Feb 2023 19:43:08 -0500
Subject: [PATCH 47/49] Update README.md

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c3e0c57a0..e6adb436f 100644
--- a/README.md
+++ b/README.md
@@ -89,7 +89,8 @@ If none of these folders contain your Julia binary, then you need to add Julia's
 
 You might wish to try the interactive tutorial [here](https://colab.research.google.com/github/MilesCranmer/PySR/blob/master/examples/pysr_demo.ipynb), which uses the notebook in `examples/pysr_demo.ipynb`.
 
-We also give a quick demo here which you can paste into a Python runtime.
+In practice, I highly recommend using IPython rather than Jupyter, as the printing is much nicer.
+Below is a quick demo which you can paste into a Python runtime.
 First, let's import numpy to generate some test data:
 
 ```python

From 093e55a7000bf0b608c6fd2f1f7d881ab8287597 Mon Sep 17 00:00:00 2001
From: MilesCranmer
Date: Sun, 12 Feb 2023 21:18:06 -0500
Subject: [PATCH 48/49] Update backend with constraints fix

---
 pysr/version.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pysr/version.py b/pysr/version.py
index 32d1262c5..673f644c9 100644
--- a/pysr/version.py
+++ b/pysr/version.py
@@ -1,2 +1,2 @@
-__version__ = "0.11.13"
-__symbolic_regression_jl_version__ = "0.15.0"
+__version__ = "0.11.14"
+__symbolic_regression_jl_version__ = "0.15.1"

From b3b408f0f125527f208c2e54fb893b0ebdb89dfc Mon Sep 17 00:00:00 2001
From: MilesCranmer
Date: Sat, 18 Feb 2023 01:27:03 -0500
Subject: [PATCH 49/49] Bump backend version with data race fix

---
 pysr/version.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pysr/version.py b/pysr/version.py
index 673f644c9..3c98bbfc8 100644
--- a/pysr/version.py
+++ b/pysr/version.py
@@ -1,2 +1,2 @@ 
-__version__ = "0.11.14" -__symbolic_regression_jl_version__ = "0.15.1" +__version__ = "0.11.15" +__symbolic_regression_jl_version__ = "0.15.2"