This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Decouple LOG_FATAL_THROW preprocessor variables between TVM and MXNet #17878

Merged

merged 2 commits into apache:master on Mar 23, 2020

Conversation

leezu
Contributor

@leezu leezu commented Mar 20, 2020

Description

Don't share the DMLC_LOG_FATAL_THROW preprocessor variable between TVM and MXNet, as the TVMOP build is not stable with it set. Instead, use MXNET_LOG_FATAL_THROW for MXNet.

Workaround for #17875

@leezu leezu requested a review from szha as a code owner March 20, 2020 00:45
@leezu leezu force-pushed the fixtvm branch 2 times, most recently from 5c3b398 to c14eda3 on March 21, 2020 18:03
@leezu
Contributor Author

leezu commented Mar 21, 2020

I updated the PR to preserve DMLC_LOG_FATAL_THROW, but to set it only on the mxnet target.
Previously, DMLC_LOG_FATAL_THROW was configured globally, affecting both the tvm and mxnet builds.
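
The difference between the two approaches can be sketched in CMake terms. This is a simplified illustration with an assumed target name, not the actual incubator-mxnet CMakeLists.txt:

```cmake
# Before: a directory-wide definition leaks into every target configured
# afterwards, including the TVM subproject.
#   add_definitions(-DDMLC_LOG_FATAL_THROW=1)

# After: scope the definition to the mxnet target only, so TVM is built
# with its default behavior. "mxnet" is an assumed target name here.
target_compile_definitions(mxnet PRIVATE DMLC_LOG_FATAL_THROW=1)
```

`target_compile_definitions` with `PRIVATE` visibility keeps the flag out of any other target's compile flags, which is the scoping this PR relies on.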

@leezu
Contributor Author

leezu commented Mar 21, 2020

@yzhliu I found that we always build TVM with OpenMP, even if OpenMP is disabled in MXNet. The comment

# Use OPENMP thread pool to be compatible with MXNet
set(USE_OPENMP ON)

suggests that this is unintended. I added a fix to disable TVM's OpenMP build when OpenMP is disabled on the MXNet side.
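
The fix described here can be sketched as follows. This is a hypothetical simplification of the tvmop configuration; `MXNET_USE_OPENMP` is an assumed name for MXNet's top-level option, not necessarily the real variable:

```cmake
# Propagate MXNet's OpenMP choice into the TVM build instead of
# unconditionally enabling it.
if(MXNET_USE_OPENMP)  # assumed name for MXNet's OpenMP option
  # Use OPENMP thread pool to be compatible with MXNet
  set(USE_OPENMP ON)
else()
  # MXNet is built without OpenMP; don't force it on in TVM
  set(USE_OPENMP OFF)
endif()
```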

@yzhliu
Member

yzhliu commented Mar 23, 2020

@leezu thanks for finding and fixing this.

@leezu leezu merged commit 3840786 into apache:master Mar 23, 2020
@leezu leezu deleted the fixtvm branch March 23, 2020 18:25
anirudh2290 added a commit to anirudh2290/mxnet that referenced this pull request Mar 27, 2020
* 'master' of https://github.com/apache/incubator-mxnet: (192 commits)
  * impl - FFI for np einsum (apache#17869)
  [Numpy] FFI for diag/diagonal/diag_indices_from (apache#17789)
  [Numpy] Kron operator (apache#17323)
  cmake: Set DMLC_LOG_FATAL_THROW only for building mxnet and not for tvm (apache#17878)
  Add simplified HybridBlock.forward without F (apache#17530)
  Use FP32 copy of weights for norm (multitensor LAMB optimizer) (apache#17700)
  Use multi-tensor sumSQ in clip_global_norm (apache#17652)
  [Numpy] Add op fmax, fmin, fmod (apache#17567)
  Adding sparse support to MXTensor for custom operators (apache#17569)
  Update 3rdparty/mkldnn to v1.2.2 (apache#17313)
  Dynamic subgraph compile support (apache#17623)
  Refactor cpp-package CMakeLists.txt & add missing inference/imagenet_inference (apache#17835)
  staticbuild: Fix potential user-assisted execution of arbitrary code  (apache#17860)
  * FFI for np.argmax and np.argmin (apache#17843)
  ffi for roll/rot90 (apache#17861)
  Skip test_multi_worker_dataloader_release_pool on OS X (apache#17797)
  add ffi for full_like, binary (apache#17811)
  HybridBlock.export() to return created filenames (apache#17758)
  Fix SoftReLU fused operator numerical stability (apache#17849)
  CI: Test clang10 cpu & gpu builds with -WError (apache#17830)
  ...
MoisesHer pushed a commit to MoisesHer/incubator-mxnet that referenced this pull request Apr 10, 2020
…vm (apache#17878)

Building TVM with DMLC_LOG_FATAL_THROW=0 is unsupported and causes `tvmop/compile.py` to crash.

Further

* remove duplicate "if(MSVC)" in CMakeLists.txt
* Don't set USE_OPENMP=1 in TVM if building MXNet with USE_OPENMP=0
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request May 29, 2020
…vm (apache#17878)

Building TVM with DMLC_LOG_FATAL_THROW=0 is unsupported and causes `tvmop/compile.py` to crash.

Further

* remove duplicate "if(MSVC)" in CMakeLists.txt
* Don't set USE_OPENMP=1 in TVM if building MXNet with USE_OPENMP=0

3 participants