This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Split up tensor operators, minor build adjustments #3702

Closed
wants to merge 52 commits into from

Conversation

cjolivier01
Member

No description provided.

tqchen and others added 30 commits October 31, 2016 13:48
* Init nnvm change

* temp checkin

* Move TShape to NNVM

* Redirect Symbolic API to NNVM

* Add Op Prop Adapter

* Finish migrate in shape infer

* Pass all symbolic test

* temp commit

* enable aux data

* [EXEC] Basic version of exec for forward only

* [EXEC] Enable most optimizations, still wait grad and context

* fix legacy op with latest one

* Update NNVM NodeRef

* Adapt to newer interface

* All registration of backward ops is complete

* temp commit

* Hack finish backward pass

* [EXEC] One day pass

* [EXEC] Pass all operator unittest

* [EXEC] enable model parallel

* Fully pass all legacy tests

* Remove legacy symbolic code

* update news

* Make travis compile

* Fix python3

* Update viz module to new json format
* [Engine] Deduplicate Variable Util

* [NNVM] NNVM Imperative Invoke

* [NNVM] Imperative improve speed

* fix

* fix
* [CYTHON] Checkin cython enhancement

* fix lint

* [DOC] Move common doc to base
* [EXEC] Support fcompute

* Fix lint

* fix lint
* Fix path in setup.py

* revert the nnvm version
* [OPERATOR] Refactor Unary Ops

* [OPERATOR] Refactor Binary Scalar Ops

* Use alias
* [NDARRAY] Cython module for ndarray

* More strict tests
* [WIP] binary broadcast wip

[OPERATOR] Binary Broadcast ops

fix lint

lint

fix

max and min

update submodule

before removing reduce axis

broadcast reduce ops

* update

* fix

* fix warning

* fix
* [IO] Python based ImageIter and Augumenter

* fix

* fix

* fix
* [scala] nnvm op support

* [scala] remove unused codes

* fix scala native code style
* [R] Fix the R interface. remove man

* Fix BN legacy issue
* Update legacy op FBackwardInGradIndex

* fix test
- gamma
- gammaln
- log1p
- expm1
piiswrong and others added 22 commits October 31, 2016 13:49
* move matrix op to nnvm

* lint
* NNVM Refactor (#3194)

* Init nnvm change

* temp checkin

* Move TShape to NNVM

* Redirect Symbolic API to NNVM

* Add Op Prop Adapter

* Finish migrate in shape infer

* Pass all symbolic test

* temp commit

* enable aux data

* [EXEC] Basic version of exec for forward only

* [EXEC] Enable most optimizations, still wait grad and context

* fix legacy op with latest one

* Update NNVM NodeRef

* Adapt to newer interface

* All registration of backward ops is complete

* temp commit

* Hack finish backward pass

* [EXEC] One day pass

* [EXEC] Pass all operator unittest

* [EXEC] enable model parallel

* Fully pass all legacy tests

* Remove legacy symbolic code

* update news

* Make travis compile

* Fix python3

* Update viz module to new json format

* [NNVM] Imperative Invoke (#3208)

* [Engine] Deduplicate Variable Util

* [NNVM] NNVM Imperative Invoke

* [NNVM] Imperative improve speed

* fix

* fix

* [scala] link libnnvm.a (#3214)

* [PYTHON] Optional Cython Module for Symbols (#3242)

* [CYTHON] Checkin cython enhancement

* fix lint

* [DOC] Move common doc to base

* [EXEC] Support fcompute (#3249)

* [EXEC] Support fcompute

* Fix lint

* fix lint

* [OP] Add alias support (#3261)

* Fix path in setup.py (#3276)

* Fix path in setup.py

* revert the nnvm version

* [WIP] Element wise op refactor (#3245)

* [OPERATOR] Refactor Unary Ops

* [OPERATOR] Refactor Binary Scalar Ops

* Use alias

* update nnvm version (#3290)

* Fix breaking changes after pull master (#3291)

* [CYTHON] Cython module for NDArray (#3292)

* [NDARRAY] Cython module for ndarray

* More strict tests

* [NNVM] change of attr to set_attr (#3303)

* Update run_test.sh

* add nnvm cmake with windows (#3255)

* [WIP] binary broadcast wip (#3301)

* [WIP] binary broadcast wip

[OPERATOR] Binary Broadcast ops

fix lint

lint

fix

max and min

update submodule

before removing reduce axis

broadcast reduce ops

* update

* fix

* fix warning

* fix

* x (#3308)

* [IO] Python based ImageIter and Augumenter (#3227)

* [IO] Python based ImageIter and Augumenter

* fix

* fix

* fix

* [OPT] NNVM Optimizer (#3314)

* fix cpython in windows (#3309)

* Add Mathematical functions (#3317)

* fix image io

* add hypot degrees radians cosh sinh tanh arcsinh arccosh arctanh (#3335)
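
A rough sketch of a few of these operators on NDArray, assuming they follow the usual mx.nd naming used by the other unary/binary math ops (mx.nd.degrees, mx.nd.radians, mx.nd.hypot); this is an illustration, not code from the PR:

import mxnet as mx
import math

x = mx.nd.array([0.0, math.pi / 2, math.pi])
a = mx.nd.array([3.0, 5.0, 8.0])
b = mx.nd.array([4.0, 12.0, 15.0])

# Convert radians to degrees and back again, elementwise.
deg = mx.nd.degrees(x)
rad = mx.nd.radians(deg)
print(deg.asnumpy(), rad.asnumpy())

# hypot computes sqrt(a^2 + b^2) elementwise: [5, 13, 17].
print(mx.nd.hypot(a, b).asnumpy())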

* add recent examples, collect some missing tutorials (#3340)

* Improving docs & utilities for distributed training example. (#3341)

* add init dict

* disable SSE for arm hardware e.g. Raspberry Pi (#3346)

* Add channel_ to Shape2D calculation (#3181)

* Add channel_ to Shape2D calculation

* scalapkg, add example multitask (#3186)

* RNN cell demo with ptb LSTM language model (#3197)

* rnn-cell demo (push to server for testing)

* a running example with cuDNN RNN cell

* Bulk lint fix (#3211)

* [TENSOR] Add FlatTo1D for all elementwise ops (#3238)

* Fix little bug on context (#3202)

* add PennTreeBank Language Model using lstm model in R (#2659)

* Add function 'print_summary' and some revise (#3161)

* Add function 'print_summary' and some revise

Add function 'print_summary' to print detailed information about the network; a format argument was also added to 'plot_network'.
You can use 'print_summary' like:
"""
net = get_symbol(1000)
shape = {'softmax_label': (64, 12), 'data': (64, 3, 224, 224)}
mx.viz.print_summary(net, shape=shape)
"""
Without a shape, the reported number of arguments is currently not meaningful.
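
A minimal self-contained sketch of the same call, with a small hypothetical two-layer network standing in for get_symbol (which comes from an example script and is not defined here):

import mxnet as mx

# Build a tiny symbolic network just to have something to summarize.
data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data, num_hidden=128, name='fc1')
act1 = mx.sym.Activation(fc1, act_type='relu', name='relu1')
fc2 = mx.sym.FullyConnected(act1, num_hidden=10, name='fc2')
net = mx.sym.SoftmaxOutput(fc2, name='softmax')

# Supplying input shapes is what lets print_summary report per-layer
# output shapes and parameter counts.
mx.viz.print_summary(net, shape={'data': (64, 784), 'softmax_label': (64,)})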

* Update visualization.py

* Update visualization.py

* Update visualization.py

* Update visualization.py

* Update visualization.py

* Update visualization.py

* Update visualization.py

* Update visualization.py

* Update visualization.py

* Update visualization.py

* Update visualization.py

* Added my CMakeLists.txt for the caffe plugin, etc.

* Revert "fix travis scala test config" (#3246)

This reverts parts of commit 3e15f62 and re-enables testing of the Julia bindings.

* [Scala] Code generation for Symbol (#3217)


[scala] auto-generate Symbol functions

* fix spelling errors (#3258)

Also align grammar and punctuation in short descriptions of features

* fix typo in run_test.sh (#3260)

* Copy slice along arbitrary axis (#3259)

* rnn-cell demo (push to server for testing)

* a running example with cuDNN RNN cell

* add copyslice along arbitrary axis for NDArray

* copy_slice_to as an ndarray operator

* Python interface to the _copy_slice_to operator

* fix lint error

* Enable concatenation for dim-1 vectors (#3264)

* fix PReLU backward computing (#3277)

* Add `reverse` option in Reshape (#3280)
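
A sketch of what the flag changes, assuming the documented semantics where the special values 0 (copy this input dimension) and -1 (infer the rest) are matched from right to left when reverse is set; illustrative only:

import mxnet as mx

x = mx.nd.ones((10, 5, 4))
# Special values matched left to right: 0 copies the dim of size 5, -1 infers 40.
print(mx.nd.reshape(x, shape=(-1, 0)).shape)                 # (40, 5)
# With reverse=True they are matched right to left: 0 copies the dim of size 4, -1 infers 50.
print(mx.nd.reshape(x, shape=(-1, 0), reverse=True).shape)   # (50, 4)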

* add scala example, end2end neural-style (#3267)

add scala example, end2end neural-style

* Improve multi-GPU performance (#3241)

* update kvstore

* update model.py

* bandwidth tool

* update readme

* tiny

* fix lint

* fix batch size of dist_device_sync

* fix

* fix perf problem of kvstore when only using a single device

* roll back to the previous strategy for choosing update_on_kvstore

* add an optional MXNET_ENABLE_GPU_P2P to control whether or not to use P2P
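
A hedged sketch of how this switch would typically be used, assuming (per the commit message) that MXNET_ENABLE_GPU_P2P is an environment variable read at startup and that setting it to 0 disables GPU peer-to-peer copies:

import os
# Must be set before mxnet is imported so the engine sees it at initialization.
os.environ['MXNET_ENABLE_GPU_P2P'] = '0'

import mxnet as mx
# The 'device' kvstore aggregates gradients on the GPUs, where P2P would otherwise be used.
kv = mx.kvstore.create('device')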

* update dmlccore (#3293)

* Fix newer version of gtest and cpptest (#3294)

* when set use_global_stats then do not use cudnn (#3289)

* when set use_global_stats then do not use cudnn

* fix batch norm with use_global_stats

* Fix req+reserve_space in cudnn_rnn (#3274)

Fix req

Fix reserve_space

Allocate reserve_space using Storage

* add cudnn off option in Convolution (#3270)

* add support for building on power (#3302)

* add recent examples, collect some missing tutorials (#3340)

* CMake for caffe plugin

* Fix metric & im2rec.py

* [Scala] Nnvm ops for NDArray & Symbol (#3361)

* [scala] nnvm op support

* [scala] remove unused codes

* fix scala native code style

* [R] Fix the R interface (#3334)

* [R] Fix the R interface. remove man

* Fix BN legacy issue

* Locate compiled library on Windows (#3369)

* Fix metric & im2rec.py (#3375)

image io fix

* Update legacy op FBackwardInGradIndex (#3376)

* Update legacy op FBackwardInGradIndex

* fix test

* Fix for LRN Layer (#3366)

* fixed cpu forward bug

* added out_data[lrn_enum::kOut] as backward req.

* removed lint

* removed duplicate out_data[lrn_enum::kTmpNorm],

* removed inplace option

* add backward index

* include some special functions (#3337)

- gamma
- gammaln
- log1p
- expm1
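
A short sketch of the new special functions, assuming they are exposed as elementwise NDArray operators under mx.nd like the other unary math ops:

import mxnet as mx

x = mx.nd.array([0.5, 1.0, 2.0, 1e-6])
print(mx.nd.gamma(x).asnumpy())    # gamma function, elementwise
print(mx.nd.gammaln(x).asnumpy())  # log of the absolute value of the gamma function
print(mx.nd.log1p(x).asnumpy())    # log(1 + x), numerically stable for small x
print(mx.nd.expm1(x).asnumpy())    # exp(x) - 1, numerically stable for small x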

* fix kv build (#3385)

* initial profiler branch based on dmlc/mxnet:nnvm

* [profiler] add profiler & modify engine API

* [profiler] add USE_PROFILER compile flag & modify code for changed engine api

* [profiler] add c_api interface & modify graph_executor

* [profiler] add python api

* [profiler] typo & lint error

* [profiler] reduce overhead & add PROFIELR_MESSAGE_FUNCNAME macro

* [profiler] remove profiling argument from PushSync/PushAsync

* [profiler] refactor profiler.h/.cc

* [profiler] improve readability

* [profiler] typo && add TODO comment

* [profiler] fix ndarray op name & add WaitForVar back

* [profiler] add example/profiler/profiler_ndarray.py

* [profiler] fix memleak by using op->name

* [profiler] fix lint

* [profiler] fix lint
* remove ccsgd

remove cc optimizer

fix

* fix slice

* Move broadcast reduce op to nnvm

* move smooth l1 and softmax cross entropy to nnvm

* remove simple op

* lint

* fix

* fix

* fix

* fix
* fix nnvm scala compile err

* scala test on jenkins
* fix nnvm scala compile err

* scala test in jenkins

* [scala] init NDArray with DType

* [scala] dtype support in Executor
* added log10 operator

* added log2 operator with test

* added rint and fix

* remove gradient calculation for rounding tests
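
A brief sketch of the operators added in these commits, assuming they follow the existing unary-op naming on NDArray (mx.nd.log10, mx.nd.log2, mx.nd.rint):

import mxnet as mx

x = mx.nd.array([1.0, 8.0, 100.0, 2.4])
print(mx.nd.log10(x).asnumpy())  # base-10 logarithm, elementwise
print(mx.nd.log2(x).asnumpy())   # base-2 logarithm, elementwise
print(mx.nd.rint(x).asnumpy())   # round each element to the nearest integer
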
@piiswrong
Contributor

squash and rebase to resolve the conflicts?

@piiswrong
Contributor

closed in favor of #4150

@piiswrong closed this Dec 9, 2016