This repository has been archived by the owner on Jul 1, 2024. It is now read-only.

[WIP]Update benchmark result #97

Merged: 3 commits merged into awslabs:dev on May 18, 2018

Conversation

roywei commented on May 17, 2018

TF 1.8
MX 1.2
Keras 2.1.6
No major difference; the conclusion should not change.
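
For context, a minimal sketch of the kind of version check behind these numbers (assumes keras-mxnet, mxnet, and tensorflow are installed in the same environment; this is not taken from the benchmark scripts themselves):

```python
# Hypothetical environment check, not part of this PR.
import keras
import mxnet as mx
import tensorflow as tf

print("Keras:", keras.__version__)                # 2.1.6 for this run
print("Keras backend:", keras.backend.backend())  # 'mxnet' or 'tensorflow'
print("MXNet:", mx.__version__)                   # 1.2 for this run
print("TensorFlow:", tf.__version__)              # 1.8 for this run
```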

roywei (Author) commented on May 18, 2018

@sandeep-krishnamurthy please merge the PR, thanks!

sandeep-krishnamurthy left a comment

LGTM. Thanks for redoing all the benchmarks with the latest MXNet and TF in 2 days, working day and night.

@aaronmarkham - Here are the latest results; you will likely only be interested in the ImageNet and Synthetic Data results.

sandeep-krishnamurthy merged commit de0a1e3 into awslabs:dev on May 18, 2018
roywei added a commit that referenced this pull request May 18, 2018
* Improve tests by designating dtype of sample data (keras-team#9834)

* Document that "same" is inconsistent across backends with strides!=1 (keras-team#9629)

* Document that `"same"` is inconsistent across backends with `strides` != 1

* Use "[here](...)"

* keras-team#9642 Add kwarg and documentation for dilation_rate to SeparableConvs (keras-team#9844)

* Add kwarg and documentation for dilation_rate to SeparableConvs

* Fix pep8 complaint

I forgot to check the style before committing. Pep8 was complaining about a missing whitespace after comma, now it's fixed.
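
A minimal usage sketch of the new kwarg (layer sizes are illustrative):

```python
from keras.models import Sequential
from keras.layers import SeparableConv2D

# SeparableConv2D now accepts dilation_rate, like the regular Conv layers.
model = Sequential([
    SeparableConv2D(32, kernel_size=3, dilation_rate=2, padding='same',
                    input_shape=(32, 32, 3))
])
model.summary()
```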

* fit/evaluate_generator supporting native tensors (keras-team#9816)

Currently, `fit/evaluate_generator` don't support this case without this fix.
But framework-native data tensors are already supported by `_fit_loop` and `_test_loop`.

Signed-off-by: CUI Wei <ghostplant@qq.com>

* Add h5py to dependencies

* Fixed typo. (keras-team#9866)

* Fix image_ocr.py example ValueError (keras-team#9869)

* Fixed the NASNet issue. (keras-team#9865)

* Fixed the NASNet issue.

* Nasnet doesn't require flatten.

* Updated documentation accordingly.

* Removed generate dropout ones from recurrent. (keras-team#9892)

* Removed generate dropout ones from recurrent.

* Fixed index issue.

* Fix `in_test_phase` of CNTK and Add its tests (keras-team#9902)

* Fix dtype designation for `variable` of CNTK and Add its tests (keras-team#9903)

* import `pydot`, improve error messages about `pydot` and GraphViz, bump to `pydot >= 1.2.4` (keras-team#9904)

* REL: bump to `pydot >= 1.2.4` in `extras_require`

* MAI: import pydot (as required in `extras_require`)

* MAI: refine error messages for `pydot` and GraphViz

distinguish between absence of `pydot` and failure to find
the executables of GraphViz in the $PATH.

* DEV: ignore `.pytest_cache`

* Fix documentation of flow_from_directory() (keras-team#9910)

The way the documentation is parsed for the Keras website made some lines of the documentation beginning with "Default:" look funny. Also changed the documentation of return value to be clear that it always returns a batch of images.
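
For context, a minimal sketch of the call whose docstring was fixed (the directory path is a placeholder with one sub-directory per class):

```python
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)
generator = datagen.flow_from_directory('data/train/',       # placeholder path
                                        target_size=(150, 150),
                                        batch_size=32,
                                        class_mode='categorical')
# The generator always yields a *batch* of images plus labels, never a single image.
x_batch, y_batch = next(generator)
```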

* ModelCheckpoint: print previous best (keras-team#9911)

* multi_gpu_model supporting legacy/fullCPU/fullGPU (keras-team#9638)

Signed-off-by: CUI Wei <ghostplant@qq.com>
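
A minimal usage sketch of the API touched here, showing only the basic form that predates this change (the GPU count is illustrative):

```python
from keras.applications.resnet50 import ResNet50
from keras.utils import multi_gpu_model

model = ResNet50(weights=None)
# Replicate the model across 2 GPUs; sub-batch results are merged on the CPU by default.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer='sgd', loss='categorical_crossentropy')
```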

* Fix `batch_dot` of Theano when `axes=0` (keras-team#9920)

* Fix `batch_dot` of CNTK when `axes=None` (keras-team#9921)

* Fix `batch_dot` of TensorFlow when `axes=None` (keras-team#9922)
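
The three fixes above all target corner cases of `K.batch_dot`; a minimal sketch of the basic call for reference (shapes are illustrative):

```python
import numpy as np
from keras import backend as K

x = K.variable(np.random.random((3, 4)))
y = K.variable(np.random.random((3, 4)))
# Per-sample dot product along axis 1: result has shape (3, 1).
out = K.eval(K.batch_dot(x, y, axes=1))
print(out.shape)
```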

* Fix stateful metrics when passing dict to compile (keras-team#9894)

* Added note to manually install h5py where needed (keras-team#9830)

* Added notes to manually install h5py if needed

* Added FAQ entry on h5py

* deleted redundant remark about h5py

* updated FAQ to reflect dependency change

* fixed comment format to pass failing test

* removed new trailing whitespaces

* improved docstring format

* reverted callbacks.py

* fixed links in model.py

* updated faq.py

* link pointing to FAQ
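
The note being added amounts to: model saving and loading need h5py at runtime; a minimal sketch (assumes h5py has been installed, e.g. via pip):

```python
from keras.models import Sequential, load_model
from keras.layers import Dense

model = Sequential([Dense(8, input_shape=(4,))])
model.compile(optimizer='sgd', loss='mse')

# Both calls below require the h5py package; without it they raise an ImportError.
model.save('tiny_model.h5')
restored = load_model('tiny_model.h5')
```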

* Add support for `constants` in Bidirectional wrapper (keras-team#9260)

* Add support for `constants` in Bidirectional wrapper

* Add more tests for Bidirectional wrapper

* Fix `compute_mask` for Bidirectional with return_state=True

Fix `compute_mask` to properly support `return_state` introduced in Bidirectional with keras-team#8977

* Add test for Bidirectional with unknown timestamps

* Skip test for CNTK for unknown timestamps with Bidirectional

* avoid overriding the input constant when the sequential axis needs broadcasting on the RNN's constants

* Move _standardize_args to recurrent, remove duplication

* Fix for Bidirectional when multiple masks are passed
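
One of the fixes above concerns `return_state=True` on the wrapper; a minimal sketch of what that combination returns (sizes are illustrative):

```python
from keras.layers import Input, LSTM, Bidirectional

inputs = Input(shape=(10, 16))
# With return_state=True, Bidirectional(LSTM) returns the merged output plus the
# forward and backward hidden/cell states: 5 tensors in total.
outputs = Bidirectional(LSTM(8, return_state=True))(inputs)
print(len(outputs))  # 5
```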

* Updated for TF 1.7 (keras-team#9937)

* fix TimeSeriesGenerator glitch (keras-team#9899)

* Added an error message for undefined shape on NASNet. (keras-team#9891)

* Added an error message for undefined shape on NASNet.

* Forgot that the message should be present only when loading imagenet weights.

* Changed the message.

* Fix PEP8

* Allow shift_range to be 1-D array-like or int (keras-team#8869)

* Allow shift_range to be 1-D array-like or int

* Add docstrings

* Fix conflict resolution merge minor disaster

* remove stray line from merge

* Remove extra "tabs"

* Exclude multi-gpu utils when reporting coverages (keras-team#9942)

* Make conv_invalid_use and pooling_invalid_use efficient (keras-team#9944)

* Chenta/cntk bn (keras-team#9952)

* fix cntk static learning phase issue; add a test

* fix code style;add more comments

* add boolean support

* fix code style issue

* Immigrate reference operations to a separate module (keras-team#9948)

* Add MXNet Backend (#59)

* Adding MXNet backend template. Adding all basic Variable and Tensor operations (#1)

* add activation functions

* add activation functions

* fix some legacy

* fix some legacy

* cross entropy

* cross entropy

* fix name scoping introduced in 2.0

* fix name scoping introduced in 2.0

* Add dropout, l2_normalization, random_normal/uniform/binomial (#2)

* remove the logic for hacking RNN

* remove the logic for hacking RNN

* add pooling with utils

* add pooling with utils

* minor

* lint and name scope fix

* fix access protected var

* fix add neighbor, removed __eq__ in KerasSymbol

* fix eval function, unittest for placeholder and variable

* add unittests

* fix bug

* fix bug

* fix

* add some temporary fixes in mxnet backend. undo change to the pytest.ini

* mxnet_backend graph fix, layer support  (#3)

* add activation functions

* fix some legacy

* cross entropy

* fix name scoping introduced in 2.0

* Add dropout, l2_normalization, random_normal/uniform/binomial (#2)

* remove the logic for hacking RNN

* add pooling with utils

* add activation functions

* fix some legacy

* cross entropy

* fix name scoping introduced in 2.0

* remove the logic for hacking RNN

* add pooling with utils

* minor

* lint and name scope fix

* fix access protected var

* fix add neighbor, removed __eq__ in KerasSymbol

* fix eval function, unittest for placeholder and variable

* add unittests

* fix bug

* fix bug

* fix

* add some temporary fixes in mxnet backend. undo change to the pytest.ini

* Keras function not working is a known issue, add skip in the test

* fix random_uniform/constant

* fix legacy randomize methods

* Fix MXNet backend operator bugs. Enabled Keras backend tests

* add bias

* Add Amazon copyrights to License (#6)

* fix

* fix

* fix backend for mlp

* fix context management, add optimizers

* minor change

* undo changes on example

* fix eval

* minor cleanup

* fix some property usage

* fixing AlphaDropout, not finished yet

* add mx model instantiate

* modify training model construction logic, fix some tests, fix reshape layer

* minor fix

* fix bias_add

* more fix on Dense and bias_add

* In progress commit

* fix comment

* small fix

* remove pytest.skip in conv3d, though it still fails with the Theano backend in my workspace.

* Add conv2d and in_topk operator for mxnet backend (#11)

* Skip BatchDot tests for Theano backend. (#12)

* BatchDot, Basic Batchnorm, Fix BiasAdd, Fix Conv2D, CodeCleanup (#14)

* Fix Conv2d shape issues and enable Conv2D UTs

* Remove redundant mxnet only unit tests

* Adding batch_dot, remove deconv, code comments and cleanup

* Remove buggy conv1d implementation

* Fix CR comments. Fix lint check issues

* Move mxnet specific code from keras engine to mxnet_backend. (#15)

* Move MXNet optimizers from keras optimizers to mxnet backend (#16)

* Fix bug in reshape. Minor rename to avoid local conflicts

* Bug fixes and enable/skip all Keras tests for mxnet backend (#21)

* test results - 374 passed, 235 skipped in 114.44 seconds

* fix/skip keras tests - tests/integration_tests, tests/keras/applications

* fix/skip keras tests - tests/keras/engine/test_topology

* fix/skip keras tests - tests/keras/engine/test_training

* fix/skip keras tests - tests/keras/legacy/

* fix/skip keras tests - tests/keras/preprocessing

* fix/skip keras tests - tests/keras/utils/

* Fix CR comments

* Fix issues in zero_padding. Fix/Enable tests/layers/convolutional_test

* Add momentum to batchnorm. Enable/skip tests in layers/core, local, merge, noise, normalization

* Skip RNN tests in keras/tests/layers/recurrent_test, wrappers_test

* Fix bug in spatial padding, enable/skip tests in loss,optimizers,callback,loss_weighting, model_saving

* Fix mxnet backend multi-gpu training (#31)

Fixing bug for mxnet backend to use multiple gpus.

* Fix performance issue - Batchnormalization, Conv operator (#35)

* Fix default axis for batchnorm layer for channels_first data_format

* Performance improvement by avoiding kernel transpose in conv operation for channels_first format

* Fix model - architecture, weights and both, load and save. (#36)

* Prepare initial version of mxnet related documentation in keras (#38)

* Skip failing unit tests for unsupported functionality in mxnet backend

* Fix pep tests reported by CI

* Use pytest module skip, revert kernel_shape logic

* remove data_format param from bias_add API

* Allow Predict() without compile for mxnet backend and enable tests.

contributor - roywei@

* Fix bug - mxnet backend should not override keras config data_format to channels_first. Only warn of low performance

* Conv3d() operator implementation for Keras2.0 using MXNet backend (#40)

* conv3d implementation for keras2.0 as MXNet backend

* conv3d implementation/testing for keras2.0 using MXNet backend

* keeping -n option in pytest.ini file

* fixed comments given by Sandeep

* Add Conv1D support for MXNet backend (#44)

* Add Conv1D support for MXNet backend

* Fix CR comments

* Conv2d transpose (#47)

* add conv2d_transpose

* conv2d transpose for both channels, enabled test case

* add detailed comments and examples, fix style issue

* enable test case in topology
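
At the layer level this backend operator is exercised through `Conv2DTranspose`; a minimal sketch (shapes are illustrative and backend-agnostic):

```python
from keras.models import Sequential
from keras.layers import Conv2DTranspose

# Transposed convolution upsampling 8x8 feature maps to 16x16 (channels_last layout).
model = Sequential([
    Conv2DTranspose(16, kernel_size=3, strides=2, padding='same',
                    input_shape=(8, 8, 4))
])
print(model.output_shape)  # (None, 16, 16, 16)
```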

* Enable performance optimization for conv operators with MXNet backend. Make MXNet default backend with this branch (#48)

* Fix conv kernel shape bug for TF backend. (#50)

* Add support for keras multi_gpu_model() API with MXNet backend (#49)

* Add support for keras multi_gpu_model() API with MXNet backend. Autoset GPU0 context on GPU machine

* Fix typo

* Add SAME padding mode support for pooling operator. (#51)

* Add rnn() operator for MXNet backend with unrolling and masking feature (#46)

* Adding rnn() operator in Keras 2.0 with MXNet as backend, with unroll=True and Masking=True/False, and enabled relevant test cases. Also modified a couple of operators.

* Modified comments

* Added comments to a method

* Enable categorical crossentropy testcases and made minor changes

* Modified message

* nit

* Added detail description of handling variable length input in RNN

* Skip conv2d_transpose and conv3d_transpose test-case for MXNet backend and minor changes in rnn()

* Adamax and NAdam optimizer for MXNet backend (#54)

* Add Adamax optimizer for MXNet backend

* Fix lr and adamax params

* Add Nadam optimizer for mxnet backend

* Add Conv3d transpose (#52)

* conv3d transpose, enabled test case

* update kernel shape

* replace conv2d_transpose/conv3d_transpose with convnd_transpose

* update value errors with MXNet Backend info, fix typo

* add check for conv3d transpose only supports gpu with cudnn

* update context check

* disable conv3d transpose test

* fix typo in comment

* Rebase to latest Keras - April 3, 2018

* Add build badges

* Fix multi_gpu API bug for CPU. Fix PEP. (#64)

* Fix multi_gpu API bug for CPU. Fix PEP.

* fix embedding layer bug (#61)

* fix embedding bug

* addressed comments, enabled more test cases

* add keras test

* reduce line length

* fix style, add blank lines

* Benchmark (#55)

* add conv2d_transpose

* conv2d transpose for both channels, enabled test case

* add detailed comments and examples, fix style issue

* add benchmark scripts for resnet and imagenet data

* combine scripts

* fix args

* fix num of gpus

* update log

* multi_gpu_model only support tf

* add benchmark scripts for synthetic data

* update readme and scripts

* add mxnet training result table

* update on readme

* add cifar10 dataset and enable various resnet layers

* fix compile for mxnet multiple gpu

* update callbacks

* update synthetic data script, add credits

* undo new line

* update readme, addressed pr comments

* update readme

* benchmark scripts style fix (#66)

* style fix

* remove unused import, fix line too long

* addressed PR comments

* Added keras util API for conversion of data tensor from channels_last to channels_first using MXNet backend (#65)

* Added keras util API for conversion of data tensor from channels_last to channels_first using MXNet backend

* Modified comments

* Addressed review comments and made the API more generic across backends

* Removed shape check

* Modified comments

* Added edge cases

* moved helper method as nested
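
The conversion itself is an axis transpose; a minimal numpy illustration of what such a util does (this is not the keras-mxnet util's actual name or signature, which the log above does not spell out):

```python
import numpy as np

# A batch of 32 RGB images in channels_last layout: (N, H, W, C).
x_last = np.random.random((32, 64, 64, 3))

# channels_first layout, which the MXNet backend prefers for conv performance: (N, C, H, W).
x_first = np.transpose(x_last, (0, 3, 1, 2))
print(x_first.shape)  # (32, 3, 64, 64)
```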

* Added RNN benchmark scripts (#69)

* Added RNN benchmark scripts

* Fixed new line in bash script

* Removed different backend code and modified comments

* Removed spacing

* Automated the wikiText2 download script

* Added dataset_util functionality to have more flexible code

* Added minor comments

* modified minor comments

* Fixed the multi-gpu context (#68)

* Update benchmark result (#70)

* update benchmark result

* update result

* simplify folder structure

* add image result

* add note

* add note

* rebase to latest Keras - April 20, 2018, fix bug and unit tests

* Added detailed RNN results (#73)

* Added detailed RNN results

* Modified table content and added CUDA version

* fix keras examples (#72)

* fix auto encoder examples

* update other examples

* fix style and add ctc not implemented error

* Added Detailed RNN results (#77)

* Modified RNN benchmark document

* Added minor comments

* fixed broken image link

* Added API to extract metrics from a test and also added epoch parameter (#78)

* Add mxnet backend tutorial documents (#76)

* add performance tips document

* update warning

* add docs from wiki

* add initial multi gpu doc, simplified installation doc, fix benchmark doc typo

* update install steps

* add multi_gpu_model tutorial

* Support exporting model as MXNet model (sym, params). (#80)

* Support exporting model as MXNet model (sym, params).

* Return data_names and data_shapes

* add unit tests for mxnet model save API

* Add test with LSTM layer for mxnet model save API

* Add support for functional Model graphs in save_mxnet_model API
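
A minimal usage sketch of the export flow described above (the `save_mxnet_model` name and its return values come from this log; the import path, exact signature, and prefix/epoch values are assumptions for illustration):

```python
from keras.models import Sequential, save_mxnet_model  # keras-mxnet only; import path assumed
from keras.layers import Dense

model = Sequential([Dense(8, activation='relu', input_shape=(4,)),
                    Dense(1)])
model.compile(optimizer='sgd', loss='mse')

# Export as MXNet symbol/params files (MXNet's usual prefix-symbol.json / prefix-epoch.params naming);
# the call returns the data_names and data_shapes mentioned in the commits above.
data_names, data_shapes = save_mxnet_model(model=model, prefix='my_model', epoch=0)
```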

* add multi gpu model example (#85)

* add multi gpu model

* specify param name

* Add additional logging for cnn benchmarks (#89)

* add extra logging

* add logging for cnn synthetic

* fix log name

* fix file name

* Log RNN benchmark results (#90)

* Make benchmark result logging available in RNN scripts

* Make log file name consistent across CNN and RNN benchmarks

* fix pytest errors (#93)

* Cherry-pick 3 keras-team/keras 2.1.6 commits missing from awslabs/keras-apache-mxnet (#96)

* update multi_gpu api in benchmark scripts (#95)

* update multi_gpu

* update logging

* fix logging

* fix logging

* fix speed format

* remove learning rate log

* Revamp keras-mxnet docs (#82)

* Update main README and move mxnet_backend_docs under docs

* revisit installation mxnet backend docs

* revisit multi_gpu_training mxnet backend docs

* revisit performance_guide mxnet backend docs

* revisit using rnn with mxnet backend in mxnet backend docs

* add save_mxnet_model tutorials in mxnet backend docs

* Fixing review comments from aaron

* Resolve CR comments on save_mxnet_model tutorial

* Fix broken links, update tutorial links in the mxnet_backend code

* revamp benchmark results readme

* Benchmark results README page revamp

* Add library versions

* Remove too detailed benchmark results. Summarize in README

* Get back detailed results document

* Remove experimental RNN benchmarks from README

* addressed review comments on benchmark results

* Set latest stable dependency of h5py to avoid warnings

* Update CNN benchmark result (#97)

* update benchmark numbers

* update number

* update result

* Update RNN benchmark results (#98)

* Fix pep failures

* Add 8 GPUs RNN benchmark results

* remove checking data format (#102)

* update imagenet result (#103)
sandeep-krishnamurthy pushed a commit that referenced this pull request Jun 15, 2018
* update benchmark numbers

* update number

* update result
sandeep-krishnamurthy pushed a commit that referenced this pull request Aug 14, 2018