This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
Gluon InstanceNorm and ReflectancePadding #7570
Closed
Conversation
InstanceNorm and ReflectancePadding are important for generative models and style transfer.
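As context for reviewers, the two layers can be sketched in plain NumPy. The function names below are hypothetical illustrations, not the actual Gluon API: instance normalization standardizes each (sample, channel) slice over its spatial dimensions, and reflection padding mirrors the border instead of zero-filling it.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # Normalize each (sample, channel) slice of an NCHW tensor over its
    # spatial dims; the learnable affine scale/shift is omitted for brevity.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def reflection_pad2d(x, pad):
    # Pad the two spatial dims of an NCHW tensor by mirroring the border,
    # the behaviour a 2-D reflection-padding layer provides.
    return np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode="reflect")

x = np.random.randn(2, 3, 8, 8)
y = instance_norm(x)          # each 8x8 slice now has ~zero mean, unit std
z = reflection_pad2d(x, 2)    # spatial size grows from 8x8 to 12x12
```

Unlike BatchNorm, the statistics here are per-sample, which is why InstanceNorm is the usual choice for style transfer.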
Please add docs; see http://pytorch.org/docs/master/nn.html#torch.nn.InstanceNorm1d
…#7527) * FP16-I/O conv and deconv will use pseudo-fp16, ignoring MSHADOW_USE_PASCAL. * Fixing cpplint error. * Empty commit to trigger CI.
* Set dev_id in streams, also update mshadow. * Fix cpplint error. * Empty commit to trigger CI. * Further update of mshadow to match current hash.
* add ctx to begin_state * fix image classification
* contrib ctc interface changes for compatibility * cudnn ctc * update per comments
* Fix a spelling mistake. * FIX pad example * fix smooth l1 comment * Fix rcnn multi-gpu bucketing warning
…l variable $seq_size. (#7521)
* Relaxing condition in slice * Update ndarray.cc
* Fixing loss function code in tutorial * Updating pull request with feedback
* nightly build stochastically choose optimizer (#7559) * Only call MKL script once * Fix 'momentum' and 'multi_precision' optimizer args * fix cmake build for active kvstore * stochastic choice of optimizer for mnist training * Run all three optimizers * Add just lenet test * Trigger CI
Indices are optional; custom C++ iterators that provide data batches without indices should work when used through MXDataIter.
* Expands linalg_gemm use; falls back to legacy mshadow::dot only when no cblas is available. * Fix cpplint.
* fix linalg_impl * fix * fix * fix
The earlier code marked the status as success initially, so any new PR showed a green Jenkins check mark on GitHub even though, on opening the full build status, the builds had not yet started or were still running. The variable only changed to failure once something actually failed. So even without this merge, a red mark on GitHub correctly indicated a failed build; that behavior is unchanged.
Installs `bc`, which is required by sh2ju.sh, and changes the regex to match capital letters only, since the lowercase match clashed with a warning thrown by the OpenCV driver.
* add unit test for csv iter * fix lint * add libsvm to mxnet.io doc * update libsvm doc
* gpu access of ndarray * gpu access from C++ api * gpu access fix * Update c_api.cc * Update c_api.cc
* refactor cudnn algo reg to no use string * refactor ctx list * fix * refactor save_inputs
* Fix "missing CUDA device" on non-GPU hosts. The error "no CUDA-capable device is detected" occurs when calling MXNet from Java/Scala on a host without a GPU. The user scenario is running an MXNet model on a CPU-only host with the Context input set to cpu(0). The default GPU device shares the same index, zero, with the default CPU device. With MXNET_USE_CUDA enabled by default, the logic proceeds to the "CUDA_CALL" line, and the call to cudaSetDevice throws an exception since there is no GPU at index zero. The proposal is to invoke "CUDA_CALL" only when the device type of the Context is not CPU. * Revise the conditional statement governing when cudaSetDevice may be called to be more readable. * Trivial change to re-run the CI test. * Keep lines at or under 100 characters. * Fix trailing whitespace at line endings.
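The guard described in that commit can be sketched in Python (names here are hypothetical; the actual fix lives in the C++ engine): only touch the CUDA runtime when the context is not a CPU context.

```python
def set_device(ctx_dev_type, dev_id, cuda_set_device):
    # Skip the CUDA runtime entirely for CPU contexts: calling
    # cudaSetDevice(0) on a GPU-less host raises
    # "no CUDA-capable device is detected" even for cpu(0),
    # because CPU and GPU both use device index 0 by default.
    if ctx_dev_type != "cpu":
        cuda_set_device(dev_id)

calls = []
set_device("cpu", 0, calls.append)  # no CUDA call for a CPU context
set_device("gpu", 0, calls.append)  # CUDA runtime selects device 0
```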
* add sparse elementwise test to gpu operator unit tests * fix bug in elemwise_sum unit test and increase number of test runs * add elementwise sum operator for rowsparse tensors on GPU * add density=1.0 to unit test densities * elemwise sum interface change to provide temporary resource (storage) * use OpContext resource for temp storage instead of cudaMalloc * adding fallback call * minor changes suggested in code review Conflicts: tests/python/gpu/test_operator_gpu.py tests/python/unittest/test_sparse_operator.py
* add sparse ftrl optimizer * add back * update * update * update * update * Update optimizer.py
* fix bug where 3.995 gets rounded to 3. * added back static_cast, and use lround rather than round.
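The rounding bug can be illustrated in Python: plain truncation (what a `static_cast` to an integer type does for positive values) drops the fractional part, while a half-up round (what `lround` does for positive input) gives the intended nearest integer.

```python
import math

x = 3.995
truncated = int(x)             # truncation toward zero, like static_cast<int>
rounded = math.floor(x + 0.5)  # round half up, like lround for positive input
# truncated is 3, rounded is 4
```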
* removed unnecessary restriction on sequence ops that bans matrix inputs. * added tests for matrix case for SequenceReverse and SequenceMask. Removed unnecessary grad binding in SequenceMask test.
* add warning to global norm clip * stacklevel
* add mobilenet to gluon model zoo * update * simplify * doc string for multiplier * rename variables
* add advanced indexing * fix * fix * fix
Hey folks, I was busy with my thesis defense last week. Any further feedback on this PR?
I suppose this means you passed. Congrats :) Could you do a rebase onto the latest master to let the tests run?
zhanghang1989 requested review from cjolivier01, yzhliu, mli and thirdwing as code owners on September 18, 2017 19:36
Thanks! I have just rebased. Let me know if I am doing something wrong.
Got messed up ... Creating a new PR