This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Generalized reshape_like operator #11928

Merged
merged 5 commits into apache:master on Aug 11, 2018

Conversation

sbodenstein
Contributor

@sbodenstein commented Jul 30, 2018

This PR implements this proposal for generalizing the reshape_like operator, so that it has more flexible handling of which dimensions of the input and target tensors are used in the reshaping process.

There are a few differences from the proposal in the previous discussion. For example, to be consistent with the input names of reshape_like (lhs and rhs rather than src and target), the parameters were named lhs_begin rather than src_begin, etc.
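For concreteness, here is a minimal sketch of the shape rule the new parameters express (the helper function and example shapes are illustrative, not the operator's actual implementation):

#include <vector>

// Keep lhs dims outside [lhs_begin, lhs_end); splice in rhs dims
// taken from [rhs_begin, rhs_end). Illustrative helper only.
std::vector<int> ReshapeLikeShape(const std::vector<int>& lhs,
                                  const std::vector<int>& rhs,
                                  int lhs_begin, int lhs_end,
                                  int rhs_begin, int rhs_end) {
  std::vector<int> out(lhs.begin(), lhs.begin() + lhs_begin);
  out.insert(out.end(), rhs.begin() + rhs_begin, rhs.begin() + rhs_end);
  out.insert(out.end(), lhs.begin() + lhs_end, lhs.end());
  return out;
}

// e.g. lhs shape (30, 7), rhs shape (15, 2, 4) with lhs_begin=0, lhs_end=1,
// rhs_begin=0, rhs_end=2 yields (15, 2, 7); both hold 210 elements.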

I've also added tests for reshape_like, which didn't exist before, along with better documentation.

@ThomasDelteil, @taliesinb

if (*cbegin < 0)
  *cbegin += ndims;

if (!static_cast<bool>(end)) {
Member

You can use has_value() for better readability.
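A minimal sketch of the two spellings side by side (assuming end is a dmlc::optional<int>, as in the diff above):

#include <dmlc/optional.h>

void Example(const dmlc::optional<int>& end) {
  // Current: relies on the explicit operator bool().
  if (!static_cast<bool>(end)) { /* no end index given */ }
  // Suggested: the same check, stated directly.
  if (!end.has_value()) { /* no end index given */ }
}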

Contributor Author

Ok, that is cleaner. Fixed.

Btw: the reason I used this casting method is that it was done here. Should this be changed as well?

DMLC_DECLARE_FIELD(lhs_end)
    .set_default(dmlc::optional<int>())
    .describe("Defaults to None. The ending index to be used, "
              "The ending index along which the lhs dimensions are to be "
Contributor

This describe comment seems to have a spurious sentence fragment in it.

Contributor Author

Fixed.

NNVM_REGISTER_OP(reshape_like)
-.describe("Reshape lhs to have the same shape as rhs.")
+.describe(R"code(Reshape `lhs` to have the same shape as `rhs`.
Contributor

I would alter this line to say something more along the lines of:
"Reshape some or all dimensions of lhs to have the same shape as some or all dimensions of rhs."

Contributor Author

Agreed, much better. Changed.

@taliesinb
Contributor

Please see this 'sister' issue, which proposes what is effectively the dual of this feature: https://discuss.mxnet.io/t/proposal-new-merge-dims-reshaping-op/1524

@@ -476,6 +476,31 @@ void HardSigmoidBackward(const nnvm::NodeAttrs& attrs,
});
}

struct ReshapeLikeParam : public dmlc::Parameter<ReshapeLikeParam> {
int lhs_begin, rhs_begin;
Member

I wonder if these two parameters should be made optional too, given that the reshape_like op has been around for 10 months.

Contributor Author
@sbodenstein Aug 1, 2018

They are optional in the sense that they have default values (DMLC_DECLARE_FIELD(lhs_begin).set_default(0)) that match the old behaviour when not explicitly specified, so no backward compatibility is broken. I thought dmlc::optional<int> was simply for the case where you have to handle the None-value case, which lhs_begin and rhs_begin don't need to support. Note that the same is done for slice_axis: begin is an int and end is a dmlc::optional<int>.

Or am I misunderstanding something?
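For reference, a minimal sketch of the two declaration styles under discussion (ExampleParam is hypothetical):

#include <dmlc/optional.h>
#include <dmlc/parameter.h>

struct ExampleParam : public dmlc::Parameter<ExampleParam> {
  int lhs_begin;                // plain int: always carries a value
  dmlc::optional<int> lhs_end;  // optional: can additionally be None
  DMLC_DECLARE_PARAMETER(ExampleParam) {
    DMLC_DECLARE_FIELD(lhs_begin).set_default(0)
        .describe("Plain int; omitting it keeps the old behaviour.");
    DMLC_DECLARE_FIELD(lhs_end).set_default(dmlc::optional<int>())
        .describe("Optional int; None is representable.");
  }
};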

Member

Considering the case where the parameters may be in some serialized format, it may be necessary to support null values to ensure compatibility there too.
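As a rough sketch of the point (assuming dmlc/optional.h's stream operators, which round-trip the literal "None"):

#include <dmlc/optional.h>
#include <iostream>
#include <sstream>

int main() {
  dmlc::optional<int> end;
  std::istringstream src("None");  // e.g. a value read back from a serialized graph
  src >> end;                      // parses cleanly; a plain int field would fail here
  std::cout << end << std::endl;   // prints "None" again
  return 0;
}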

Contributor Author

Ok, if it's necessary, I will change it.

Contributor

@szha Can I clarify something? Maybe I am misunderstanding. Are you saying that it is required to make new parameters on existing layers optional<...> for backward-compatibility reasons, even if they have defaults?

Member

@taliesinb I think it doesn't hurt in this case, and I was suggesting that we lean on the safer side. There might be cases where the default-value filling on the frontend fails, such as when deserializing a graph.

Contributor

@szha OK, so it sounds like there isn't a particular policy at the moment about how to add new parameters to existing layers. Do you mind if I ask on the mailing list about what such a policy should be, just for the next time this happens?

Member

of course not. good idea :)

Contributor Author

@szha: I've changed lhs_begin and rhs_begin to use optional (and added tests for this case). Can we merge?

@sbodenstein
Contributor Author

@szha: if you are happy with the changes, can we merge this PR?

@anirudh2290 added the Operator, Backend (Issues related to the backend of MXNet), and pr-awaiting-review (PR is waiting for code review) labels on Aug 9, 2018
@szha merged commit c44f16b into apache:master on Aug 11, 2018
aaronmarkham added a commit to aaronmarkham/incubator-mxnet that referenced this pull request Aug 17, 2018
XinYao1994 pushed a commit to XinYao1994/incubator-mxnet that referenced this pull request Aug 29, 2018
* first commit

* fix documentation

* changed static_cast<bool>(end) to end.has_value()
fixed documentation issues

* change begin from int to optional

* test None as lhs
Labels
Backend (Issues related to the backend of MXNet), Operator, pr-awaiting-review (PR is waiting for code review)
Projects
None yet

Development
Successfully merging this pull request may close these issues: none yet.

5 participants