This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
[DO NOT REVIEW] Numpy unary functions #14754
Closed
Conversation
haojin2 force-pushed the numpy_ops branch 3 times, most recently from b1067ec to 5554146 on April 22, 2019 at 20:48.
* Updates Ubuntu GPU CI image base image to cuda10-devel and manually installs cuDNN version 7.3.1.20
* Updates CentOS 7 GPU CI image base image to cuda10-devel and manually installs cuDNN version 7.3.1.20

* remove unneeded test files
* add test files to gitignore
…apache#14868) This reverts commit 369b66d.
Fix the typo.
* Fix sample_multinomial number of outputs bug
* Fix lint

* modified: docs/api/python/gluon/contrib.md
  modified: python/mxnet/gluon/contrib/__init__.py
  new file: python/mxnet/gluon/contrib/cnn/__init__.py
  new file: python/mxnet/gluon/contrib/cnn/conv_layers.py
  new file: tests/python/gpu/test_gluon_contrib_gpu.py
* modified: python/mxnet/gluon/contrib/cnn/conv_layers.py
* modified: python/mxnet/gluon/contrib/cnn/conv_layers.py
* Update conv_layers.py
* Update conv_layers.py

* cpu optimized data loader
* Fix CI
* Fix CI
* Fix ci
* Fix doc

* Relax constexpr restriction
* Change imagenet_gen_qsym_mkldnn
* Revert constexpr change
* Add dummy image for test
* Fix unreverted change
* Remove url
* Fix imagenet change
* Catch only std::exception
* Fix const, remove dmlc::Error catch
* Retrigger CI
…ache#13226)
* Added "factor" and "like" modes into BilinearResize2D operator. Also added tests and some fixes to visualization needed due to added modes.
* Lint fix
* Test fix
* retrigger CI
* retrigger CI
* Retrigger CI
* retrigger CI
* retrigger ci
* retrigger ci again
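The "factor" and "like" modes mentioned above determine the output size of the resize in different ways. A minimal sketch of that shape computation, assuming the operator scales both spatial dimensions in "factor" mode and matches a reference array in "like" mode (the helper name and signature here are hypothetical, not the operator's actual API):

```python
def bilinear_output_shape(in_shape, mode, scale=None, like_shape=None):
    """Compute the (H, W) a BilinearResize2D-style op would produce.

    Hypothetical helper: only the 'factor' and 'like' modes added by
    the commit above are sketched; the plain 'size' mode is omitted.
    """
    h, w = in_shape
    if mode == "factor":
        # scale both spatial dims by a common factor
        return int(round(h * scale)), int(round(w * scale))
    if mode == "like":
        # match the spatial dims of a reference array
        return like_shape
    raise ValueError("unknown mode: %s" % mode)
```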
…pache#14769)
* Rework Bert examples to include QA infer and finetuning
* update notebook example and exported markdown
* add integration test for the classification
* fix tests
* add RAT
* add another RAT
* fix all the typos
* Clojure BERT finetuning example: fix CSV parsing
* update readme and gitignore
* add fix from @daveliepmann’s notebook parsing
* feedback from @daveliepmann
* fix running of example and don’t show very first batch on callback speedometer
* rerun the notebook and save results
* remove bert stuff from main .gitignore
* re-putting the license back on after regen
* fix integration test
* fix layer norm for large input shape
* try to fix
* use a larger eps
* try to fix test
* try to fix
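The "use a larger eps" fix above is about numerical stability: the epsilon keeps the variance-based divisor away from zero, and a larger value trades a little accuracy for robustness on large inputs. A reference sketch of the computation in plain NumPy (not MXNet's kernel):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last axis; eps keeps the divisor away from zero.
    # A larger eps trades a little accuracy for numerical stability.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```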
* upgrade openssl to 1.1.0k
* fix the wrong version
* upgrade to the latest version
* avoid lib being treated as a file instead of a folder

* upgrade cuDNN & NCCL
* retrigger CI
…4919)
* Add API documentation for upsampling operator with examples
* Update src/operator/nn/upsampling.cc
  Co-Authored-By: Aaron Markham <markhama@amazon.com>
* Update src/operator/nn/upsampling.cc
  Co-Authored-By: Aaron Markham <markhama@amazon.com>
* Update src/operator/nn/upsampling.cc
  Co-Authored-By: Aaron Markham <markhama@amazon.com>
* Make the API doc example pseudocode rather than code
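For context on what the documented upsampling operator computes, here is a minimal NumPy sketch of nearest-neighbour upsampling on an NCHW array (an illustration of the idea, not the operator's implementation, which also supports a bilinear mode):

```python
import numpy as np

def upsample_nearest(x, scale):
    # Repeat each pixel 'scale' times along H and W (NCHW layout),
    # which is what nearest-neighbour upsampling computes.
    return x.repeat(scale, axis=2).repeat(scale, axis=3)
```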
* [MXNET-857] Enable CUDA NVTX extensions for profiler
  These extensions mark readable ranges in the NVIDIA Visual Profiler, which helps show correlations between kernel launches and graph node executions. Example shown here: https://user-images.githubusercontent.com/7443219/33946110-34296d18-e021-11e7-8d18-6d40b797405c.png
  The additional information enabled is in the 'Markers and Ranges' row.
* [MXNET-857] Add initial NVTX profiler implementation
  This commit removes NVTX headers from the Amalgamation build process, but this is a CUDA/CMake-only feature, so it's not relevant to Amalgamation builds.
* [MXNET-857] Use macro for NVTX-specific code
* [MXNET-857] Add integration test.
* Turn on NVTX by default on Unix.
* Fixed typos and added NVTX info to profiler.md
* Add NVTX example to profiling tutorial
* Add NVTX flags for make
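NVTX's core primitive is a named, nestable push/pop range that the profiler overlays on the timeline. As a rough illustration of that model, here is a pure-Python stand-in (this is not the NVTX C API, just a sketch of the push/pop range idea it provides):

```python
from contextlib import contextmanager
import time

TRACE = []  # recorded (event, name[, duration]) tuples

@contextmanager
def nvtx_range(name):
    # Stand-in for NVTX's push/pop range markers: records a named,
    # nestable time range, analogous to how the profiler extension
    # marks ranges in the NVIDIA Visual Profiler timeline.
    start = time.perf_counter()
    TRACE.append(("push", name))
    try:
        yield
    finally:
        TRACE.append(("pop", name, time.perf_counter() - start))
```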
* Generalizes centos7 cudnn download and install script
* Updates setting of cudnn version to a position in the Dockerfile that will have the least impact on caching

…pache#14393)
* Initial commit
* Rebase
* WIP for fixing rebase issues
* WIP for fixing rebase issues
* fix wip
* wip fix
* wip fix
* wip fix
* wip fix
* wip fix
* wip fix
* should be good to go
* wip remove debug info
* wip remove debug info
* linter
* linter
* Retrigger
* Address comments from Da
* add linspace operator
* add test
* fix bug
* register gpu op
* fix lint
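The linspace operator follows NumPy semantics: `num` evenly spaced samples between `start` and `stop`, inclusive of both endpoints by default. In plain NumPy (shown here as the reference behaviour the MXNet operator mirrors):

```python
import numpy as np

# num evenly spaced samples, endpoints included by default
pts = np.linspace(0.0, 1.0, 5)
```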
* fix reshape
* save
* fix reshape
* clean code
* fix memory allocation & lint
* add unit test
* req type
* add comments to describe the logic

* Pins libnvinfer versions
* Sets cudnn to version 7.5.0 in tensorrt environment
* Re-enables TensorRT stages
* Add numpy namespace and initial impl of np.sum (not complete)
* Clean up
* Fix import error
* numpy sum
* add test and backward data type support
* add license to test_numpy_op.py
* improve test to reduce flakiness
* fix sanity build
* extra numeric test and imperative test
* add error message for initial argument
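The `np.sum` being implemented above targets NumPy's behaviour, including the accumulation `dtype` and the `initial` argument mentioned in the last bullet. The reference semantics in plain NumPy:

```python
import numpy as np

x = np.ones((2, 3), dtype=np.float32)
total = np.sum(x)                     # reduce over all axes
per_col = np.sum(x, axis=0)           # reduce over axis 0, shape (3,)
widened = np.sum(x, dtype=np.float64) # accumulate in a wider dtype
offset = np.sum(x, initial=10)        # start the reduction from 10
```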
…PIs (apache#14758)
* Infra of new ndarray and symbol types for numpy operators
* Rename
* Fix import problem
* Refactor
* Remove redundant code
* Add docstring
* More on numpy ndarray and symbol
* Override unimplemented methods for ndarray and _NumpySymbol
* Fix built-in methods of ndarray and _NumpySymbol
* Fix test and sanity check
* Fix pylint
* Address cr comments
* Add unit tests for ndarray and _NumpySymbol
* Add _true_divide
* Fix gpu build
* Add future import division
* More correct way of checking if an output is from a np-compat op
* Fix gpu build
* Fix output ndarray/symbol types with at least one new ndarray/symbol
* Modify true_divide doc
* Fix flaky copying of zero-size arrays via gpus
* Fix zero size in gluon hybridize and zeros/ones symbol not creating new symbol type
* Fix doc
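The `_true_divide` work above mirrors NumPy's true-division semantics, which the `from __future__ import division` bullet also points at: division always produces a floating-point result, even for integer inputs. The reference behaviour in plain NumPy:

```python
import numpy as np

a = np.array([1, 2, 3])
# true_divide (the `/` operator in Python 3) always returns a
# floating result, even when both inputs are integers
q = np.true_divide(a, 2)
```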
* Numpy Dot case 1-4 + case 3.5 forward and 0.5 backward
* Backward computation and test coverage
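`np.dot` dispatches on the ranks of its inputs, which is presumably what the numbered "cases" above refer to (the commit's exact case numbering is not spelled out here). The main variants in plain NumPy:

```python
import numpy as np

s = np.dot(2.0, 3.0)                          # scalar * scalar
v = np.dot(np.ones(3), np.ones(3))            # 1-D . 1-D -> inner product
m = np.dot(np.ones((2, 3)), np.ones((3, 4)))  # 2-D . 2-D -> matrix product
t = np.dot(np.ones((2, 3)), np.ones(3))       # N-D . 1-D -> sum over last axis
```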
haojin2 requested review from anirudh2290, gigasquid, nswamy, szha and yzhliu as code owners on May 15, 2019 at 18:21.
Description
As title.
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes
Comments
Related Issue #14327