Branch: master
Commits on Jan 26, 2020
  1. Gengolgi (#365)

    chewxy committed Jan 26, 2020
    * Added a bit of code to generate some Golgi code
    * Corrected the autogenerator for the APIs for Golgi
Commits on Jan 19, 2020
  1. Fixes #363 (#364)

    chewxy committed Jan 19, 2020
    The main problem was in the diff function of `Neg`: it should increment the derivative of x, but the wrong method (`UnsafeDo`) was called, which made it fail for scalar values. This has been fixed.
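For illustration, the accumulation semantics the `Neg` fix restores can be sketched in plain Go. The function name here is illustrative, not Gorgonia's actual op:

```go
package main

import "fmt"

// For y = -x, dy/dx = -1, so the incoming gradient dz contributes -dz to dx.
// The fix amounts to *accumulating* into dx (dx += -dz) rather than
// overwriting it, which is what an unsafe in-place write would do.
func accumNegGrad(dx, dz []float64) {
	for i := range dx {
		dx[i] += -dz[i]
	}
}

func main() {
	dx := []float64{0.5} // existing gradient, e.g. from another branch of the graph
	dz := []float64{2.0}
	accumNegGrad(dx, dz)
	fmt.Println(dx) // [-1.5]
}
```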
  2. Refactor monads (#361)

    chewxy committed Jan 19, 2020
    * Changed `Err{}` to be a function instead.
    The underlying type is now gErr. It implements Result and error.
    * Added `TransformResult` which respects the input type. It's designed
    as a decorator over `LiftResult`. Examples to come
    * Added example (and a raison d'être for the complicated monad-y stuff)
    * Added more clarity to the example
Commits on Jan 5, 2020
  1. Changed `Err{}` to be a function instead. (#360)

    chewxy committed Jan 5, 2020
    The underlying type is now gErr. It implements Result and error.
Commits on Jan 4, 2020
  1. Fixed a bug in Reshape, where a slice of a slice cannot be reshaped. (#…

    chewxy committed Jan 4, 2020
    This is fixed in two ways:
        1. If the input is a `View`, it is materialized. This requires
        more memory, but it is better to allocate than to stress over
        calculating how much extra overhead is needed when sharing memory.
        2. `ShallowClone` is fixed in package `tensor` to make sure that
        views are correctly shallow cloned as well.
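The materialization step in fix 1 amounts to copying a strided view into contiguous storage; a minimal plain-Go sketch for a 1-D view (illustrative, not package `tensor`'s implementation):

```go
package main

import "fmt"

// A view over strided data cannot be reshaped in place, because its elements
// are not contiguous in the backing array. Materializing copies them into
// fresh contiguous storage first, after which reshaping is just a matter of
// reinterpreting dimensions.
func materialize(backing []float64, offset, length, stride int) []float64 {
	out := make([]float64, length)
	for i := 0; i < length; i++ {
		out[i] = backing[offset+i*stride]
	}
	return out
}

func main() {
	backing := []float64{0, 1, 2, 3, 4, 5, 6, 7}
	view := materialize(backing, 1, 3, 2) // every second element from index 1
	fmt.Println(view)                     // [1 3 5]
}
```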
Commits on Dec 30, 2019
  1. Fix reduction bugs (#357)

    chewxy committed Dec 30, 2019
    * Fixed bug for Sum
    * added tests
Commits on Dec 8, 2019
  1. A bunch of supporting functions for Golgi (#306)

    chewxy committed Dec 8, 2019
    * Added KeepDims as an additional function to "decorate" another function
    Cleaned up Ones and ones
    * Added broadcasted operations to api_gen
    Wrote program to generate those broadcasted ops
    Renamed BroadcastMul to BroadcastHadamardProd. BroadcastMul is coming soon
    * added an example to show how one may use the broadcasting operations to create dense triangular matrices
    * Added better support for BatchedMatMul. Now more than 3D tensors are supported!
    * Added unaryOp interface to genapi. Generating the interfaces makes them more consistent. Previously inversef32 gave the wrong ʘUnaryOperatorType
    * Allow axis to be defined in SoftMax. Furthermore the default axis is now the last axis. This allows for SoftMax to be done across ndarrays
    Added more examples
    * Ported Unconcat to Gorgonia. Added tests
    * Added some things for future
    * Added more support functions for Golgi
    * added some statistics generation for genapi
    * Added monad-y error handling to Gorgonia
    * Let's do away with the DoXXX functions
    * Changed the definition of LiftResult a bit.
    * added some helper functions
    * Updated Unconcat to use Nodes instead of []*Node
    This allows for easier lifting of the return value, however its
    utility is not known at the moment.
    * Added HeEtAl InitWFn
    * Ugh. Copy and pasting sux when you can only type with one hand
    * Squashed commit of the following:
    commit 592126c
    Author: Ben Leitner <>
    Date:   Sun Nov 17 15:09:08 2019 -0800
        Refactor the max/sum ops to share common code. Have the type/inferShape/Do methods behave in a consistent manner: (#346)
        * Dimensions specified in the "along" parameter are reduced to size 1, but not removed. (Note: this caused TestRepeatOpDoDiff to fail, but this version fixes it. Perhaps we should make preserving the size-1 dimensions an option of the reduction op?)
        * If all dimensions are included, the result will be a scalar.
        * If all dimensions but 1 are included, the result is a vector, regardless of which dimension is left intact.
        Tests verify that the resulting nodes have the expected shape.
        Note: While here, fix a warning on Max's SymDiff where retVal[0] is set when retVal has not been initialized. I wonder if this is related to #323, where SymDiff for StableSoftMax (which uses Max) was failing with a panic (probably not, as the error message there seems unrelated, but probably a good fix anyway).
        Closes #326
    commit 6fd05db
    Author: Olivier Wulveryck <>
    Date:   Tue Nov 12 09:15:56 2019 +0100
        Examples/readme (#351)
        * chore(readme): add references to the gorgonia website
    commit e6bc7dd
    Merge: 9ecd7d0 d1d231f
    Author: gareth <>
    Date:   Sat Nov 9 06:47:29 2019 +1100
        Merge pull request #350 from mattn/fix-gomod
        Fix go.mod
    commit d1d231f
    Author: Yasuhiro Matsumoto <>
    Date:   Fri Nov 8 21:35:58 2019 +0900
        Fix go.mod
    commit 9ecd7d0
    Author: Olivier Wulveryck <>
    Date:   Thu Nov 7 09:59:37 2019 +0100
        Gap operator (#302)
        * feat(wip): scratch space for a Global Average Pooling operator
        * chore: skeleton of the operator
        * feat: Global Average Pool
    commit 6cc7466
    Author: mattn <>
    Date:   Sat Nov 2 03:16:02 2019 +0900
        Improvement of example/iris (#348)
    commit 6f8c10a
    Author: Olivier Wulveryck <>
    Date:   Thu Oct 31 22:10:37 2019 +0100
        Iris example (#347)
        * fix: do not overwrite the channel if it already exists
        * feat: multivariate linear regression
    commit b7b4b2c
    Author: Olivier Wulveryck <>
    Date:   Wed Oct 16 15:34:26 2019 +0200
        Create FUNDING.yml (#342)
    * Fixed Softmax
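The reduction-shape rules quoted in the squashed commit above (#346) can be sketched as a plain shape function; names and details are illustrative, not the ops' actual code:

```go
package main

import "fmt"

// reducedShape applies the rules from #346: dimensions in "along" are reduced
// to size 1 but kept, unless the result collapses to a scalar (all dims
// reduced) or a vector (all but one reduced).
func reducedShape(shape []int, along []int) []int {
	reduced := make(map[int]bool)
	for _, a := range along {
		reduced[a] = true
	}
	out := make([]int, len(shape))
	copy(out, shape)
	for d := range out {
		if reduced[d] {
			out[d] = 1
		}
	}
	switch len(shape) - len(along) {
	case 0:
		return []int{} // all dimensions reduced: scalar
	case 1:
		for d, n := range out {
			if !reduced[d] {
				return []int{n} // one dimension survives: vector
			}
		}
	}
	return out
}

func main() {
	fmt.Println(reducedShape([]int{2, 3, 4}, []int{1}))       // [2 1 4]
	fmt.Println(reducedShape([]int{2, 3, 4}, []int{0, 1, 2})) // []
	fmt.Println(reducedShape([]int{2, 3, 4}, []int{0, 2}))    // [3]
}
```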
Commits on Jun 29, 2019
  1. Added BroadcastAdd and BroadcastMul as operations (#300)

    chewxy committed Jun 29, 2019
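The shape arithmetic behind broadcasting can be sketched as follows. This is a NumPy-style rule for illustration only, not Gorgonia's API, which handles broadcasting through its own broadcast operations:

```go
package main

import "fmt"

// broadcastShape aligns two shapes from the right; at each position the sizes
// must match, or one of them must be 1 (and is stretched to the other).
func broadcastShape(a, b []int) ([]int, error) {
	n := len(a)
	if len(b) > n {
		n = len(b)
	}
	out := make([]int, n)
	for i := 1; i <= n; i++ {
		da, db := 1, 1 // missing leading dimensions count as size 1
		if i <= len(a) {
			da = a[len(a)-i]
		}
		if i <= len(b) {
			db = b[len(b)-i]
		}
		switch {
		case da == db:
			out[n-i] = da
		case da == 1:
			out[n-i] = db
		case db == 1:
			out[n-i] = da
		default:
			return nil, fmt.Errorf("shapes %v and %v do not broadcast", a, b)
		}
	}
	return out, nil
}

func main() {
	s, _ := broadcastShape([]int{3, 1}, []int{1, 4})
	fmt.Println(s) // [3 4]
}
```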
Commits on Jun 7, 2019
  1. Added @durp to CONTRIBUTORS (#293)

    chewxy committed Jun 7, 2019
    * bugfix: removes circular import
    * bugfix: module needs fully qualified name to avoid importing itself
    * Added @durp to CONTRIBUTORS
Commits on Jun 1, 2019
  1. Updated `Div` such that it will check to see if both operands are the same shape and thus use HadamardDiv (#288)

    chewxy committed Jun 1, 2019
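The dispatch this commit describes hinges on a shape-equality check before using plain elementwise (Hadamard) division; a plain-Go sketch with illustrative names, not Gorgonia's implementation:

```go
package main

import "fmt"

// sameShape reports whether two shapes are identical, dimension by dimension.
func sameShape(a, b []int) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

// hadamardDiv divides elementwise; it assumes the operands passed the
// sameShape check above.
func hadamardDiv(a, b []float64) []float64 {
	out := make([]float64, len(a))
	for i := range a {
		out[i] = a[i] / b[i]
	}
	return out
}

func main() {
	if sameShape([]int{2, 2}, []int{2, 2}) {
		fmt.Println(hadamardDiv([]float64{6, 9}, []float64{2, 3})) // [3 3]
	}
}
```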
  2. In repeated operations the VM spends a lot of time on integer equality in the ExprGraph.Node() method. This fixes it (#285)

    chewxy committed Jun 1, 2019
  3. Updated travis (#287)

    chewxy committed Jun 1, 2019
Commits on May 17, 2019
  1. New logo (#284)

    chewxy authored and owulveryck committed May 17, 2019
    * feat: new logo
    * update the contributors
Commits on Apr 19, 2019
  1. Merge branch 'master' into broadcast-elements

    chewxy committed Apr 19, 2019
Commits on Apr 16, 2019
  1. Merge branch 'master' into possible-race-condition-in-dimsizer

    chewxy committed Apr 16, 2019
Commits on Feb 28, 2019
  1. Updated contributors list. (#272)

    chewxy committed Feb 28, 2019
Commits on Jan 25, 2019
  1. Merge branch 'master' into noasm

    chewxy committed Jan 25, 2019
  2. V0.9.0 working 2 (#227)

    chewxy committed Jan 25, 2019
    * Added Issue 217 to known issues
    Renamed the ConstDeriv test to 182
    * Added Reset() to BatchNormOp to conform to the same interface as the native version would
    * Clean up, and plugged some leaks
    * Fixed a slight issue with the compilation. DoWork() is emitted before a linalg op now
    * [fix] The test for Reshape Operator was incorrect
    * Temporarily commented away the tests for #217
    * Added error message for VM Genera
    * Added go1.11 to travis
    Fixed CUDA Conv backprop shape error
    * [fix] Using NewOrderedNodes as an implementation of gonum.Nodes
    * Fixed the tests to use the new gonum interface
    * Fixed an error in the `errors.Wrapf` call
    * [fix] Back to the idea that graph options should not be exported
    * [fix] More idiomatic way to handle error msg
    * Merged from v0.9.2-working2
    * New stuff? I don't know when I did this
    * Fixes #257 I think
    * Renamed examples/stacked autoencoder to examples/stacked_autoencoder
    * Added tests. Added better error support for LISPMachine. Removed sanity check for now because there are some weird things wrt dimensions in dual values
    * Updated with 237fix
    There seems to be a problem with the concurrent training example, so it has been commented out for now.
    The `sanity()` method of the dual values has been temporarily suspended. Turns out lispMachines don't play well when there are differently shaped (but legal) derivations.
  3. Merge branch 'master' into noasm

    chewxy committed Jan 25, 2019
Commits on Jan 8, 2019
  1. Fixes #260

    chewxy committed Jan 8, 2019
Commits on Dec 30, 2018
  1. Fixes #257 I think (#258)

    chewxy committed Dec 30, 2018
    * Fixes #257 I think
    * Renamed examples/stacked autoencoder to examples/stacked_autoencoder
Commits on Sep 8, 2018
  1. 233fix (#235)

    chewxy committed Sep 8, 2018
    * Start to touch #233
    * Fixes #233
    * Fixes #233 - col2im matches in algorithm
  2. Fixes #233 (#234)

    chewxy committed Sep 8, 2018
    * Start to touch #233
    * Fixes #233
Commits on Aug 19, 2018
  1. Merge branch 'master' into noasm

    chewxy committed Aug 19, 2018
  2. V0.9.0 (#195)

    chewxy committed Aug 19, 2018
    Ongoing notes:
    * **CUDA**: Better CUDA support (IN PROGRESS)
        * ~ColMajor used by default if engine is CUDA.~ (ColMajor is supported, but defaults to using RowMajor for all the major cuBLAS versions. Careful reasoning of the parameters obviates the need for ColMajor by default, which causes more headaches. It is still supported)
        * Transposition will be automatically done when performing transports back to CPU.
        * cudnn operations supported (IN PROGRESS) (note: these are the ones I use more often hence gets bigger attention):
            * [x] Conv2d
            * [x] Dropout
            * [x] Maxpool2d 
            * [x] BatchNorm
            * [x] Rectify
        * Other CUDA related optimizations
            *  [x] full cuBLAS support
    * **New Ops**:
        * BatchNorm 
        * InvSqrt
        * CUDA enabled ops in `ops/nn` (preview for how things will start to look in v0.10.0)
    * **New Features**:
        * Limited shape inference.  Working towards a calculus for shapes (first raised in #96 and #97).
    * **Optimizations**:
        * Optimizations of basic ops to use engine functions if available, otherwise, fall back to using `Apply`, which adds a penalty from repeatedly calling functions.
        * Faster VMs (1 of 2 VMs): ~greedy goroutines grab gigs from a priority queue. This causes faster execution of code in general.~ (this is moved to a future version of 0.9.xx):
    benchmark                           old ns/op      new ns/op      delta
    BenchmarkTapeMachineExecution-8     3129074510     2695304022     -13.86%

    benchmark                           old allocs     new allocs     delta
    BenchmarkTapeMachineExecution-8     25745          25122          -2.42%

    benchmark                           old bytes      new bytes      delta
    BenchmarkTapeMachineExecution-8     4804578705     4803784111     -0.02%
    * **Code generation**: some exported API is now auto generated
    * **New Solver** : @ynqa added the Momentum solver.
    * **Breaking API**: `Solver` now takes a slice of `ValueGrad` instead of `Nodes`. `ValueGrad` is an interface, which a `*Node` fulfils. An additional utility function `NodesToValueGrads` has been added to aid with refactoring. This was done for two reasons:
        * ~The support for the BatchNorm operation, which is a very impure and highly stateful function. The BatchNorm Op has internal states that need to have their gradients updated as well. But the internal state of BatchNorm isn't really part of the expression graph, and really it shouldn't be.~ Turns out there was a better API for `BatchNorm`.
        * In the next version, v0.10.0, we aim to do [better package organization](#91) for manageability. With this API-breaking change, the solver is less dependent on the other parts of Gorgonia and can be easily separated.
    * **Breaking Semantics**: A `gorgonia.VM` now implements `io.Closer`. It should be treated as a resource as well as a computation device - the VM must be `Close()`d in order for the resources acquired by the VM to actually be released. Turns out, automatic resource management is too difficult. Who'd thunk that?
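The `io.Closer` semantics described under **Breaking Semantics** can be illustrated with a stand-in type (not Gorgonia's actual VM):

```go
package main

import (
	"fmt"
	"io"
)

// A VM is now a resource as well as a computation device: resources it
// acquires are only released on Close. This fakeVM stands in for the real
// thing to show the intended usage pattern.
type fakeVM struct{ closed bool }

func (m *fakeVM) RunAll() error { return nil }
func (m *fakeVM) Close() error  { m.closed = true; return nil }

var _ io.Closer = (*fakeVM)(nil) // compile-time check: it is an io.Closer

func run(vm *fakeVM) error {
	defer vm.Close() // release the VM's resources when done
	return vm.RunAll()
}

func main() {
	vm := &fakeVM{}
	_ = run(vm)
	fmt.Println("closed:", vm.closed) // closed: true
}
```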
Commits on Jul 2, 2018
  1. Merge pull request #215 from ifraixedes/if-214

    chewxy committed Jul 2, 2018
    Remove warning of dep, update Gonum & minor fix in example
  2. Merge branch 'master' into if-214

    chewxy committed Jul 2, 2018
  3. Merge pull request #212 from ifraixedes/if-minor-readme-fixes

    chewxy committed Jul 2, 2018
    README: Fix link & specific output in example
Commits on Jun 9, 2018
  1. Merge pull request #211 from gorgonia/WithGrad

    chewxy committed Jun 9, 2018
    With grad
  2. Added WithGrad

    chewxy committed Jun 9, 2018
  3. Added WithGrad

    chewxy committed Jun 9, 2018
Commits on May 11, 2018
  1. Merge pull request #202 from trigun117/master

    chewxy committed May 11, 2018
    Corrected typos, fixed ineffassign, gofmt, go_vet
Commits on May 6, 2018
  1. Merge pull request #200 from gorgonia/variousbugfixes

    chewxy committed May 6, 2018
    Fixed a tiny bug in UnbindNonInputs.
  2. Fixed a tiny bug in UnbindNonInputs.

    chewxy committed May 6, 2018
    Removed pre-go1.8 from travis because Gonum no longer supports pre1.8
Commits on Mar 27, 2018
  1. Merge pull request #198 from gorgonia/gonumgraphupdate

    chewxy committed Mar 27, 2018
    Fixes #197