
Gluon InstanceNorm and ReflectancePadding #7570

Closed

wants to merge 173 commits into from

Conversation

zhanghang1989
Contributor

InstanceNorm and ReflectancePadding are important for generative models and style transfer.
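For context, a minimal NumPy sketch of the two operations this PR adds, assuming NCHW layout (illustrative only, not the PR's Gluon code; the function names here are hypothetical). Instance normalization standardizes each channel of each sample over its own spatial dimensions, unlike BatchNorm, which averages over the batch; reflection padding mirrors the border instead of zero-filling, which is why it is popular in style-transfer networks.

```python
import numpy as np

def instance_norm(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W); per-sample, per-channel statistics over H and W only
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # gamma/beta are per-channel affine parameters of shape (C,)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)

def reflection_pad2d(x, pad):
    # Pad H and W by mirroring interior pixels across the border
    return np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='reflect')

x = np.random.randn(2, 3, 8, 8).astype(np.float32)
y = instance_norm(x, np.ones(3), np.zeros(3))
z = reflection_pad2d(x, 2)   # spatial dims grow from 8x8 to 12x12
```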
@piiswrong
stefanhenneking and others added 28 commits August 23, 2017 12:10
…#7527)

* FP16-I/O conv and deconv will use pseudo-fp16, ignoring MSHADOW_USE_PASCAL.

* Fixing cpplint error.

* Empty commit to trigger CI.
* Set dev_id in streams, also update mshadow.

* Fix cpplint error.

* Empty commit to trigger CI.

* Further update of mshadow to match current hash.
* add ctx to begin_state

* fix image classification
* contrib ctc interface changes for compatibility

* cudnn ctc

* update per comments
* Fix a spelling mistake.

* FIX pad example

* fix smooth l1 comment

* Fix rcnn multi-gpu bucketing warning
* Relaxing condition in slice

* Update ndarray.cc
* Fixing loss function code in tutorial

* Updating pull request with feedback
* nightly build stochastically choose optimizer (#7559)

* Only call MKL script once

* Fix 'momentum' and 'multi_precision' optimizer args

* fix cmake build for active kvstore

* stochastic choice of optimizer for mnist training

* Run all three optimizers

* Add just lenet test

* Trigger CI
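A sketch of what the stochastic optimizer choice in the commit above could look like, assuming the nightly script draws from the standard MXNet optimizer registry (the exact optimizer set and hyperparameters are the script's own; this is illustrative):

```python
import random
import mxnet as mx

# Pick one optimizer per nightly run instead of testing all on every run.
opt_name = random.choice(['sgd', 'nag', 'adam'])
kwargs = {'learning_rate': 0.01}
if opt_name in ('sgd', 'nag'):
    kwargs['momentum'] = 0.9     # 'momentum' is only valid for sgd/nag
optimizer = mx.optimizer.create(opt_name, **kwargs)
```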
Indices are optional: custom C++ iterators that provide data batches without indices should work when used through MXDataIter.
* Expands linalg_gemm use. Legacy mshadow::dot use only if no cblas.

* Fix cpplint.
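For reference, the GEMM contract that linalg_gemm implements, written as a NumPy sketch (the MXNet operator takes the transpose flags and scalars as attributes; this only shows the math):

```python
import numpy as np

def gemm(A, B, C, alpha=1.0, beta=1.0, tA=False, tB=False):
    # General matrix multiply: C <- alpha * op(A) @ op(B) + beta * C,
    # where op() optionally transposes its argument.
    opA = A.T if tA else A
    opB = B.T if tB else B
    return alpha * (opA @ opB) + beta * C

A = np.random.randn(4, 3)
B = np.random.randn(3, 5)
C = np.zeros((4, 5))
out = gemm(A, B, C, alpha=2.0)   # shape (4, 5)
```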
* fix linalg_impl

* fix

* fix

* fix
Earlier, the code marked the status as success initially, so any new PR showed a green Jenkins check mark on GitHub even when, on opening the full build status, the builds had not started or were still running.

If something fails, the variable changes to failure, so even without this merge a red mark on GitHub correctly indicates that the build failed. That behavior is unchanged.
Installs bc, which is required by sh2ju.sh, and changes the regex match to uppercase letters because the existing match clashes with a warning printed by the OpenCV driver.
* add unit test for csv iter

* fix lint

* add libsvm to mxnet.io doc

* update libsvm doc
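A minimal usage sketch for the CSV iterator that the new unit test exercises (the file name and shapes here are hypothetical):

```python
import mxnet as mx

# 'data.csv' is a hypothetical file with one 3-element sample per row.
data_iter = mx.io.CSVIter(data_csv='data.csv', data_shape=(3,), batch_size=4)
for batch in data_iter:
    print(batch.data[0].shape)   # (4, 3)
```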
* gpu access of ndarray

* gpu access from C++ api

* gpu access fix

* Update c_api.cc

* Update c_api.cc
* refactor cudnn algo reg to not use string

* refactor ctx list

* fix

* refactor save_inputs
cjolivier01 and others added 11 commits September 13, 2017 22:20
* Fix missing of CUDA device on non-GPU host

The issue "no CUDA-capable device is detected" occurs when calling MXNet from Java/Scala on a host without a GPU.

The user scenario is running an MXNet model on a CPU-only host with a Context input of cpu(0). The default GPU device shares index zero with the default CPU device. MXNET_USE_CUDA is enabled by default, so the logic proceeds to the "CUDA_CALL" line, and the call to cudaSetDevice exits with an exception because no GPU is found at index zero. The proposal here is to make the "CUDA_CALL" only when the Context's device type is not CPU.

* Revise conditional statement to be more readable

Revise the conditional statement about when the cudaSetDevice could be called to be more readable.

* Re-run the CI test

Trivial change to re-run the continuous integration test.

* Lines should be equal to or less than 100 characters

* Fix line ending in whitespace
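The user scenario from the commit message above, as a Python sketch on an assumed CPU-only host; after the fix this runs without touching the CUDA runtime:

```python
import mxnet as mx

# cpu(0) and gpu(0) share device index 0; only the device *type*
# should decide whether cudaSetDevice is called.
ctx = mx.cpu(0)
x = mx.nd.ones((2, 3), ctx=ctx)
print((x * 2).asnumpy())   # should succeed on a host with no GPU
```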
* add sparse elementwise test to gpu operator unit tests

* fix bug in elemwise_sum unit test and increase number of test runs

* add elementwise sum operator for rowsparse tensors on GPU

* add density=1.0 to unit test densities

* elemwise sum interface change to provide temporary resource (storage)

* use OpContext resource for temp storage instead of cudaMalloc

* adding fallback call

* minor changes suggested in code review

Conflicts:
	tests/python/gpu/test_operator_gpu.py
	tests/python/unittest/test_sparse_operator.py
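A conceptual NumPy sketch of the row_sparse format and the elementwise sum these commits implement on GPU (illustrative; a row_sparse tensor stores a dense block of values plus the indices of its non-zero rows):

```python
import numpy as np

def rowsparse_sum(shape, arrays):
    # Each row_sparse array is a pair (row indices, dense values for
    # those rows); the sum scatters each block into a dense result.
    out = np.zeros(shape)
    for indices, values in arrays:
        out[indices] += values
    return out

a = (np.array([0, 2]), np.array([[1., 1.], [2., 2.]]))
b = (np.array([2, 3]), np.array([[3., 3.], [4., 4.]]))
dense = rowsparse_sum((4, 2), [a, b])   # rows 0, 2, 3 are non-zero
```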
* add sparse ftrl optimizer

* add back

* update

* update

* update

* update

* Update optimizer.py
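The commit messages above are terse, so for reference here is the standard FTRL-Proximal update (McMahan et al., 2013) that an FTRL optimizer implements, as a NumPy sketch; a sparse variant applies it lazily, only to rows with non-zero gradients. This is not necessarily MXNet's exact code:

```python
import numpy as np

def ftrl_update(w, z, n, grad, lr=0.1, l1=0.01, l2=1.0, beta=1.0):
    # Accumulators: z (shifted gradient sum), n (sum of squared gradients)
    sigma = (np.sqrt(n + grad ** 2) - np.sqrt(n)) / lr
    z += grad - sigma * w
    n += grad ** 2
    # Closed-form proximal step with L1/L2 regularization; weights whose
    # |z| falls below the L1 threshold are set exactly to zero.
    w_new = np.where(np.abs(z) <= l1, 0.0,
                     -(z - np.sign(z) * l1) / ((beta + np.sqrt(n)) / lr + l2))
    return w_new, z, n

w = np.zeros(3); z = np.zeros(3); n = np.zeros(3)
w, z, n = ftrl_update(w, z, n, grad=np.array([0.1, -0.2, 0.0]))
```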
* fix bug where 3.995 gets rounded to 3.

* added back static_cast, and used lround rather than round.
* removed unnecessary restriction on sequence ops that banned matrix inputs.

* added tests for matrix case for SequenceReverse and SequenceMask. Removed unnecessary grad binding in SequenceMask test.
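A sketch of the newly allowed matrix case, assuming the usual (max_seq_len, batch_size) layout with time on axis 0:

```python
import mxnet as mx

x = mx.nd.arange(12).reshape((4, 3))   # 4 time steps, batch of 3: a matrix
lengths = mx.nd.array([2, 3, 4])       # valid length per batch element
# Positions beyond each sequence length are masked with the default value 0.
y = mx.nd.SequenceMask(x, sequence_length=lengths, use_sequence_length=True)
```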
* add warning to global norm clip

* stacklevel
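A sketch of global-norm clipping and where the new warning fits (illustrative; the function name and warning text here are hypothetical, and stacklevel makes the warning point at the caller rather than this helper):

```python
import warnings
import mxnet as mx

def clip_global_norm_sketch(arrays, max_norm):
    # One norm over all gradient arrays together, not per-array.
    total_norm = sum(float((a ** 2).sum().asscalar()) for a in arrays) ** 0.5
    if not total_norm < float('inf'):       # catches nan and inf
        warnings.warn('nan or inf found in gradients', stacklevel=2)
    scale = max_norm / (total_norm + 1e-8)
    if scale < 1.0:
        for a in arrays:
            a *= scale                      # rescale in place
    return total_norm

grads = [mx.nd.ones((2, 2)) * 10, mx.nd.ones((3,)) * 10]
norm = clip_global_norm_sketch(grads, max_norm=1.0)
```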
* add mobilenet to gluon model zoo

* update

* simplify

* doc string for multiplier

* rename variables
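Usage sketch, assuming the model-zoo entry names that shipped; the suffix is the width multiplier documented in the new docstring, scaling the channel count of every layer (0.25, 0.5, 0.75, 1.0 variants):

```python
import mxnet as mx
from mxnet.gluon.model_zoo import vision

net = vision.mobilenet1_0()     # width multiplier 1.0
net.initialize()
out = net(mx.nd.random_normal(shape=(1, 3, 224, 224)))   # (1, 1000)
```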
* add advanced indexing

* fix

* fix

* fix
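The commit titles above are bare, so for reference, the NumPy advanced-indexing semantics such NDArray support typically mirrors (a NumPy sketch, not the PR's code):

```python
import numpy as np

x = np.arange(12).reshape(4, 3)
rows = np.array([0, 2])
print(x[rows])             # integer-array index selects whole rows 0 and 2
print(x[x > 6])            # boolean-mask index selects matching elements
print(x[[0, 2], [1, 2]])   # paired indices pick elements (0,1) and (2,2)
```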
@zhanghang1989
Contributor Author

Hey folks, I was busy with my thesis defense last week. Any further feedback on this PR?

@szha
Member

szha commented Sep 18, 2017

I suppose this means you passed. Congrats :)

Could you do a rebase onto the latest master to let the tests run?

@zhanghang1989
Contributor Author

Thanks! I have just rebased. Let me know if I am doing something wrong.

@zhanghang1989
Contributor Author

Got messed up ... Creating a new PR
