
Devel #2578

Merged
merged 146 commits from devel into master
Jan 14, 2023

Conversation


@kmantel kmantel commented Jan 14, 2023

No description provided.

jvesely and others added 30 commits August 18, 2022 18:32
Merge pull request from PrincetonUniversity/master
…fer_mech_integration_rate_0_8 (#2471)

EX encapsulates compiled CPU, GPU, or Python execution;
use that instead of always executing the Python version.

Fixes: cf57224
("test/mechanisms/TransferMechanism: Only run benchmarks if enabled")

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Add missing pytest.mark.composition to tests that construct and run Composition.

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Pulls in pytorch[0] so it needs the same constraints.

[0] https://github.com/ModECI/MDF/blob/main/setup.cfg

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
… bumped package before installing PNL (#2481)

Some packages may pull in updated dependencies that cause issues when rolled back.

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…p packages (#2482)

The packages should be rebuilt if some of the shared dependencies change

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…for not installed packages (#2483)

This happens if the package is an 'extras' dependency.
e.g. a 'doc' or 'cuda' dependency not installed in 'dev' CI jobs.
Fixes package bumps of docs dependencies.

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Many psyneulink Functions must be split into several MDF
functions/expressions to reproduce their behavior. Instead of adding
these supplemental functions on the owners of Functions, add
Function._assign_to_mdf_model that adds anything necessary to the owner
MDF: support DriftDiffusionIntegrator
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…on test

Pass the modulator value directly from input.

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…llvm_is_finished_cond'

Passing base parameters matches invocation by the scheduler.
Any modulation needs to be applied explicitly.

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…threshold.

Matches Python semantics.
Add threshold modulation test for integrator based DDM.

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…ant (#2490)

Consistently use modulated mechanism parameters (e.g. threshold) of TransferMechanism in is_finished function whether it's called from inside the mechanism loop, or by the scheduler.
Use modulated function parameters in DDM's 'is_finished' function in both internal and scheduler invocation.
scipy doesn't provide 32-bit wheels for >=1.9.2

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…0,<1.13.0 (#2504)

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…2505)

The only supported evaluation type at the moment is "evaluate_type_objective",
which uses the result of ocm.objective_mechanism as output (combined
with costs).

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
kmantel and others added 20 commits December 6, 2022 01:45
jupyter pulls in jupyter-server-terminals which requires pywinpty>=2.0.3
on windows [0]

[0] https://github.com/jupyter-server/jupyter_server_terminals/blob/main/pyproject.toml

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…tation (#2560)

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…anch name (#2566)

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…d in MechanismError (#2569)

String values of errors are not stable and can change between versions.
Fixes 'input_list_of_strings' tests when using numpy>=1.22.x.
Refactor handling of input port execution exceptions to only check calls
to 'execute'; parameter set/get do not throw exceptions.

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…tions

Individual terms of the sum are close in magnitude.
Instead of accumulating all the positive terms, alternate between positive and
negative terms. This keeps the intermediate/partial results small in magnitude,
improving the accuracy of the floating point expression.

This is enough to make "test_drift_difussion_analytical_shenhav_compat_mode"
pass when using numpy-1.22 with its new AVX512-based floating point
operations [0].

[0] numpy/numpy@1eff1c5

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
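The reordering described in this commit can be sketched generically. The function names below are illustrative, not PsyNeuLink's actual code; the point is only the ordering of the floating point operations:

```python
def split_sum(pos, neg):
    # Accumulate all positive terms, then subtract all negative terms.
    # When the two partial sums are large and nearly equal, the low-order
    # bits of small terms are lost before the subtraction happens.
    return sum(pos) - sum(neg)

def alternating_sum(pos, neg):
    # Interleave positive and negative terms so the running total stays
    # small in magnitude, preserving the contribution of small terms.
    total = 0.0
    for p, n in zip(pos, neg):
        total += p
        total -= n
    return total
```

For example, with `pos = [1e16, 1.0]` and `neg = [1e16, 0.0]`, the split sum loses the `1.0` inside the large positive partial and returns `0.0`, while the alternating sum returns the exact `1.0`.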
… operations

Individual terms of the sum are close in magnitude.
Instead of accumulating all the positive terms, alternate between positive and
negative terms. This keeps the intermediate/partial results small in magnitude,
improving the accuracy of the floating point expression.
This matches the updated Python version.

Fix use of incorrect terms. Skew calculation needs cubic terms.

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…22 and AVX512 CPUs

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Updates the requirements on [numpy](https://github.com/numpy/numpy) to permit the latest 1.22.x release.
 - [Release notes](https://github.com/numpy/numpy/releases)
 - [Changelog](https://github.com/numpy/numpy/blob/main/doc/RELEASE_WALKTHROUGH.rst)
 - [Commits](numpy/numpy@v1.17.0...v1.22.4)

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…function and update numpy to <1.22.5 (#2476)

numpy-1.22 introduced a new set of optimized floating point routines on CPUs that support AVX512 instructions [0].
Reassociate operations in the DriftDiffusionAnalytical function to improve accuracy; this is enough to pass the
"test_drift_difussion_analytical_shenhav_compat_mode" test that otherwise fails when using numpy 1.22.x on CPUs that support AVX512 instructions.

Update results in tests/functions/test_distribution.py::test_execute[DriftDiffusionAnalytical..] to match the new reassociated routines for small drift rate and fp32 precision.
Use windows/mac expected results on CPUs that support AVX512 when numpy>=1.22 is used.

Allow numpy < 1.22.5

Closes: #2572

[0] numpy/numpy@1eff1c5
* • function.py
  - Function_Base:
    move arguments referring to parameters in call to function to params arg
    (to be treated as runtime_params)

• memoryfunctions.py
  - ContentAddressableMemory:  add store and retrieve convenience methods
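The store and retrieve convenience methods mentioned above can be illustrated with a minimal, generic content-addressable store. This is a simplified sketch, not PsyNeuLink's ContentAddressableMemory implementation; the class name, Euclidean distance metric, and nearest-match retrieval rule are all assumptions for illustration:

```python
import math

class SimpleCAM:
    """Minimal content-addressable memory: entries are (key, value) vectors;
    retrieval returns the value whose key is nearest the cue (Euclidean)."""

    def __init__(self):
        self._entries = []

    def store(self, key, value):
        # Append the (key, value) pair to memory.
        self._entries.append((list(key), list(value)))

    def retrieve(self, cue):
        # Return the stored value whose key is closest to the cue,
        # or None if memory is empty.
        if not self._entries:
            return None
        def dist(key):
            return math.sqrt(sum((k - c) ** 2 for k, c in zip(key, cue)))
        _, value = min(self._entries, key=lambda e: dist(e[0]))
        return value

mem = SimpleCAM()
mem.store([1.0, 0.0], [10.0])
mem.store([0.0, 1.0], [20.0])
```

A noisy cue such as `[0.9, 0.1]` then retrieves `[10.0]`, the value stored under the nearest key.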

* -

@github-advanced-security github-advanced-security bot left a comment


CodeQL found more than 10 potential problems in the proposed changes. Check the Files changed tab for more details.

* master:
  Add CodeQL workflow for GitHub code scanning (#2533)
@kmantel kmantel requested a review from jdcpni January 14, 2023 00:32
@kmantel kmantel temporarily deployed to github-pages January 14, 2023 01:27 — with GitHub Actions Inactive
@github-actions

This PR causes the following changes to the html docs (ubuntu-latest-3.7-x64):

diff -r docs-base/AutodiffComposition.html docs-head/AutodiffComposition.html
221a222,224
> <li><p><a class="reference internal" href="#autodiffcomposition-llvm"><span class="std std-ref">LLVM mode</span></a></p></li>
> <li><p><a class="reference internal" href="#autodiffcomposition-pytorch"><span class="std std-ref">PyTorch mode</span></a></p></li>
> <li><p><a class="reference internal" href="#autodiffcomposition-nested-modulation"><span class="std std-ref">Nested Execution and Modulation</span></a></p></li>
223d225
< <li><p><a class="reference internal" href="#autodiffcomposition-nested-execution"><span class="std std-ref">Nested Execution</span></a></p></li>
227a230
> <li><p><a class="reference internal" href="#autodiffcomposition-examples"><span class="std std-ref">Examples</span></a></p></li>
234,242c237,245
< <div class="admonition warning">
< <p class="admonition-title">Warning</p>
< <p>As of PsyNeuLink 0.7.5, the API for using AutodiffCompositions has been slightly changed! Please see <a class="reference internal" href="RefactoredLearningGuide.html"><span class="doc">this link</span></a> for more details!</p>
< </div>
< <p>AutodiffComposition is a subclass of <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a> used to train feedforward neural network models through integration
< with <a class="reference external" href="https://pytorch.org/">PyTorch</a>, a popular machine learning library, which executes considerably more quickly
< than using the <a class="reference internal" href="Composition.html#composition-learning-standard"><span class="std std-ref">standard implementation of learning</span></a> in a Composition, using its
< <a class="reference internal" href="Composition.html#composition-learning-methods"><span class="std std-ref">learning methods</span></a>. An AutodiffComposition is configured and run similarly to a standard
< Composition, with some exceptions that are described below.</p>
---
> <p>AutodiffComposition is a subclass of <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a> for constructing and training feedforward neural network
> either, using either direct compilation (to LLVM) or automatic conversion to <a class="reference external" href="https://pytorch.org/">PyTorch</a>,
> both of which considerably accelerate training (by as much as three orders of magnitude) compared to the
> <a class="reference internal" href="Composition.html#composition-learning-standard"><span class="std std-ref">standard implementation of learning</span></a> in a Composition.  Although an
> AutodiffComposition is constructed and executed in much the same way as a standard Composition, it largely restricted
> to feedforward neural networks using <a class="reference internal" href="Composition.html#composition-learning-supervised"><span class="std std-ref">supervised learning</span></a>, and in particular the
> the <a class="reference external" href="https://en.wikipedia.org/wiki/Backpropagation">backpropagation learning algorithm</a>. although it can be used for
> some forms of <a class="reference internal" href="Composition.html#composition-learning-unsupervised"><span class="std std-ref">unsupervised learning</span></a> that are supported in PyTorch (e.g.,
> <a class="reference external" href="https://github.com/giannisnik/som">self-organized maps</a>).</p>
246,249c249,257
< <p>An AutodiffComposition can be created by calling its constructor, and then adding <a class="reference internal" href="Component.html"><span class="doc">Components</span></a> using the
< standard <a class="reference internal" href="Composition.html#composition-creation"><span class="std std-ref">Composition methods</span></a> for doing so.  The constructor also includes an number of
< parameters that are specific to the AutodiffComposition. See the &lt;class reference <a class="reference internal" href="#"><span class="doc">AutodiffComposition</span></a>&gt; for a list of these parameters.</p>
< <div class="admonition warning">
---
> <p>An AutodiffComposition can be created by calling its constructor, and then adding <a class="reference internal" href="Component.html"><span class="doc">Components</span></a> using
> the standard <a class="reference internal" href="Composition.html#composition-creation"><span class="std std-ref">Composition methods</span></a> for doing so (e.g., <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.add_node" title="psyneulink.core.compositions.composition.Composition.add_node"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">add_node</span></code></a>,
> <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.add_projections" title="psyneulink.core.compositions.composition.Composition.add_projections"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">add_projection</span></code></a>,  <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.add_linear_processing_pathway" title="psyneulink.core.compositions.composition.Composition.add_linear_processing_pathway"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">add_linear_processing_pathway</span></code></a>, etc.).  The constructor also includes a number of parameters that are
> specific to the AutodiffComposition (see <a class="reference internal" href="#autodiffcomposition-class-reference"><span class="std std-ref">Class Reference</span></a> for a list of these parameters,
> and <a class="reference internal" href="#autodiffcomposition-examples"><span class="std std-ref">examples</span></a> below).  Note that all of the Components in an AutodiffComposition
> must be able to be subject to <a class="reference internal" href="Composition.html#composition-learning"><span class="std std-ref">learning</span></a>, but cannot include any <a class="reference internal" href="Composition.html#composition-learning-components"><span class="std std-ref">learning components</span></a> themselves.  Specifically, it cannot include any <a class="reference internal" href="ModulatoryMechanism.html"><span class="doc">ModulatoryMechanisms</span></a>, <a class="reference internal" href="LearningProjection.html"><span class="doc">LearningProjections</span></a>, or the ObjectiveMechanism &lt;OBJECTIVE_MECHANISM&gt;`
> used to compute the loss for learning.</p>
> <blockquote>
> <div><div class="admonition warning" id="autodiff-learning-components-warning">
251,253c259,279
< <p>Mechanisms or Projections should not be added to or deleted from an AutodiffComposition after it has
< been run for the first time. Unlike an ordinary Composition, AutodiffComposition does not support this
< functionality.</p>
---
> <p>When an AutodiffComposition is constructed, it creates all of the learning Components
> that are needed, and thus <strong>cannot include</strong> any that are prespecified.</p>
> </div>
> </div></blockquote>
> <p>This means that an AutodiffComposition also cannot itself include a <a class="reference internal" href="Composition.html#composition-controller"><span class="std std-ref">controller</span></a> or any
> <a class="reference internal" href="ControlMechanism.html"><span class="doc">ControlMechanisms</span></a>.  However, it can include Mechanisms that are subject to modulatory control
> (see <a class="reference internal" href="ModulatorySignal.html#modulatorysignal-anatomy-figure"><span class="std std-ref">Figure</span></a>, and <a class="reference internal" href="ModulatorySignal.html#modulatorysignal-modulation"><span class="std std-ref">modulation</span></a>) by ControlMechanisms
> <em>outside</em> the Composition, including the controller of a Composition within which the AutodiffComposition is nested.
> That is, an AutodiffComposition can be <a class="reference internal" href="Composition.html#composition-nested"><span class="std std-ref">nested in a Composition</span></a> that has such other Components
> (see <a class="reference internal" href="#autodiffcomposition-nested-modulation"><span class="std std-ref">Nested Execution and Modulation</span></a> below).</p>
> <p>A few other restrictions apply to the construction and modification of AutodiffCompositions:</p>
> <blockquote>
> <div><div class="admonition hint">
> <p class="admonition-title">Hint</p>
> <p>AutodiffComposition does not (currently) support the <em>automatic</em> construction of separate bias parameters.
> Thus, when comparing a model constructed using an AutodiffComposition to a corresponding model in PyTorch, the
> <code class="xref any docutils literal notranslate"><span class="pre">bias</span></code> parameter of PyTorch modules should be set
> to <code class="xref any docutils literal notranslate"><span class="pre">False</span></code>.  Trainable biases <em>can</em> be specified explicitly in an AutodiffComposition by including a
> TransferMechanism that projects to the relevant Mechanism (i.e., implementing that layer of the network to
> receive the biases) using a <a class="reference internal" href="MappingProjection.html"><span class="doc">MappingProjection</span></a> with a <a class="reference internal" href="MappingProjection.html#psyneulink.core.components.projections.pathway.mappingprojection.MappingProjection.matrix" title="psyneulink.core.components.projections.pathway.mappingprojection.MappingProjection.matrix"><code class="xref any py py-attr docutils literal notranslate"><span class="pre">matrix</span></code></a> parameter that
> implements a diagnoal matrix with values corresponding to the initial value of the biases.</p>
257,258c283,284
< <p>When comparing models built in PyTorch to those using AutodiffComposition,
< the <code class="xref any docutils literal notranslate"><span class="pre">bias</span></code> parameter of PyTorch modules should be set to <code class="xref any docutils literal notranslate"><span class="pre">False</span></code>, as AutodiffComposition does not currently support trainable biases.</p>
---
> <p>Mechanisms or Projections should not be added to or deleted from an AutodiffComposition after it
> has been executed. Unlike an ordinary Composition, AutodiffComposition does not support this functionality.</p>
259a286
> </div></blockquote>
263,265c290,361
< <p>An AutodiffComposition’s <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.run" title="psyneulink.core.compositions.composition.Composition.run"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">run</span></code></a>, <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.execute" title="psyneulink.core.compositions.composition.Composition.execute"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">execute</span></code></a>, and <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> methods are the same as for a <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a>.</p>
< <p>The following is an example showing how to create a
< simple AutodiffComposition, specify its inputs and targets, and run it with learning enabled and disabled.</p>
---
> <p>An AutodiffComposition’s <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.run" title="psyneulink.core.compositions.composition.Composition.run"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">run</span></code></a>, <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.execute" title="psyneulink.core.compositions.composition.Composition.execute"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">execute</span></code></a>, and <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a>
> methods are the same as for a <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a>.  However, the <strong>execution_mode</strong> in the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a>
> method has different effects than for a standard Composition, that determine whether it uses <a class="reference internal" href="#autodiffcomposition-llvm"><span class="std std-ref">LLVM compilation</span></a> or <a class="reference internal" href="#autodiffcomposition-pytorch"><span class="std std-ref">translation to PyTorch</span></a> to execute learning.
> This <a class="reference internal" href="Composition.html#composition-compilation-table"><span class="std std-ref">table</span></a> provides a summary and comparison of these different modes of execution,
> that are described in greater detail below.</p>
> <section id="llvm-mode">
> <span id="autodiffcomposition-llvm"></span><h3><em>LLVM mode</em><a class="headerlink" href="#llvm-mode" title="Permalink to this headline">¶</a></h3>
> <p>This is specified by setting <strong>execution_mode</strong> = <a class="reference internal" href="LLVM.html#psyneulink.core.llvm.__init__.ExecutionMode.LLVMRun" title="psyneulink.core.llvm.__init__.ExecutionMode.LLVMRun"><code class="xref any py py-attr docutils literal notranslate"><span class="pre">ExecutionMode.LLVMRun</span></code></a> in the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> method
> of an AutodiffCompositon.  This provides the fastest performance, but is limited to <a class="reference internal" href="Composition.html#composition-learning-supervised"><span class="std std-ref">supervised learning</span></a> using the <a class="reference internal" href="LearningFunctions.html#psyneulink.core.components.functions.learningfunctions.BackPropagation" title="psyneulink.core.components.functions.learningfunctions.BackPropagation"><code class="xref any py py-class docutils literal notranslate"><span class="pre">BackPropagation</span></code></a> algorithm. This can be run using standard forms of
> loss, including mean squared error (MSE) and cross entropy, by specifying this in the <strong>loss_spec</strong> argument of
> the constructor (see <a class="reference internal" href="#autodiffcomposition-class-reference"><span class="std std-ref">AutodiffComposition</span></a> for additional details, and
> <a class="reference internal" href="Composition.html#composition-compiled-modes"><span class="std std-ref">Compilation Modes</span></a> for more information about executing a Composition in compiled mode.</p>
> <blockquote>
> <div><div class="admonition note">
> <p class="admonition-title">Note</p>
> <p>Specifying <code class="xref any docutils literal notranslate"><span class="pre">ExecutionMode.LLVMRUn</span></code> in either the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> and <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.run" title="psyneulink.core.compositions.composition.Composition.run"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">run</span></code></a>
> methods of an Au
...

See CI logs for the full diff.

@coveralls

Coverage Status

Coverage: 84.118% (-0.05%) from 84.164% when pulling 3bf2289 on devel into a0f8766 on master.

@jdcpni jdcpni removed their request for review January 14, 2023 02:23
@kmantel kmantel merged commit 06f3006 into master Jan 14, 2023
Labels: none yet
Projects: none yet
Linked issues: none yet
4 participants