This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Poor performance of the libmxnet if OMP_PLACES environment variable is present #14087

Open
stsukrov opened this issue Feb 7, 2019 · 18 comments
Labels: Bug, Environment Variables (Issues related to setting env vars), Performance

Comments

@stsukrov

stsukrov commented Feb 7, 2019

Description

The default build depends on both the GOMP and Intel OMP runtimes. With OMP_PLACES set, inference performance drops by roughly an order of magnitude.
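
A quick way to see the double dependency is to inspect the shared library (a sketch; the path to libmxnet.so is illustrative and depends on where you built or installed MXNet):

ldd /path/to/incubator-mxnet/lib/libmxnet.so | grep -i omp
# a problematic build lists both libgomp.so.1 and libiomp5.so here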

Environment info (Required)

----------Python Info----------
Version : 3.5.2
Compiler : GCC 5.4.0 20160609
Build : ('default', 'Nov 12 2018 13:43:14')
Arch : ('64bit', 'ELF')
------------Pip Info-----------
Version : 8.1.1
Directory : /usr/lib/python3/dist-packages/pip
----------MXNet Info-----------
Version : 1.5.0
Directory : /home/stsukrov/workspace/mxnet/python/mxnet
Hashtag not found. Not installed from pre-built package.
----------System Info----------
Platform : Linux-4.4.0-1074-aws-x86_64-with-Ubuntu-16.04-xenial
system : Linux
node : ip-172-31-1-212
release : 4.4.0-1074-aws
version : #84-Ubuntu SMP Thu Dec 6 08:57:58 UTC 2018
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz
Stepping: 4
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single kaiser fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f rdseed adx smap clflushopt clwb avx512cd xsaveopt xsavec xgetbv1 ida arat pku
----------Network Test----------
Setting timeout: 10
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0155 sec, LOAD: 0.0708 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0348 sec, LOAD: 0.2342 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0010 sec, LOAD: 0.3944 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0896 sec, LOAD: 0.7671 sec.
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0006 sec, LOAD: 0.9998 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.2689 sec, LOAD: 1.0834 sec.

Compiler (gcc/clang/mingw/visual studio):
gcc-5
MXNet commit hash: d684c59

Build info (Required if built from source)

Build config:
make -j20

Minimum reproducible example

  1. Get MXNet
  2. Build from scratch
  3. Run the benchmark: at batch size 1 you get around 200 img/sec
  4. Set OMP_PLACES=cores
  5. Rerun the benchmark
  6. Get performance around 40 img/sec, which is poor

Steps to reproduce


  1. git clone ...
  2. make
  3. python3 benchmark_score.py
  4. export OMP_PLACES=cores
  5. python3 benchmark_score.py  (rerun)

Rerunning with export OMP_DISPLAY_ENV=verbose gives:

OPENMP DISPLAY ENVIRONMENT BEGIN
_OPENMP = '201511'
OMP_DYNAMIC = 'FALSE'
OMP_NESTED = 'FALSE'
OMP_NUM_THREADS = '72'
OMP_SCHEDULE = 'DYNAMIC'
OMP_PROC_BIND = 'TRUE'
OMP_PLACES = '{0,36},{1,37},{2,38},{3,39},{4,40},{5,41},{6,42},{7,43},{8,44},{9,45},{10,46},{11,47},{12,48},{13,49},{14,50},{15,51},{16,52},{17,53},{18,54},{19,55},{20,56},{21,57},{22,58},{23,59},{24,60},{25,61},{26,62},{27,63},{28,64},{29,65},{30,66},{31,67},{32,68},{33,69},{34,70},{35,71}'
OMP_STACKSIZE = '0'
OMP_WAIT_POLICY = 'PASSIVE'
OMP_THREAD_LIMIT = '4294967295'
OMP_MAX_ACTIVE_LEVELS = '2147483647'
OMP_CANCELLATION = 'FALSE'
OMP_DEFAULT_DEVICE = '0'
OMP_MAX_TASK_PRIORITY = '0'
GOMP_CPU_AFFINITY = ''
GOMP_STACKSIZE = '0'
GOMP_SPINCOUNT = '300000'
OPENMP DISPLAY ENVIRONMENT END

OPENMP DISPLAY ENVIRONMENT BEGIN
_OPENMP='201611'
[host] KMP_ABORT_DELAY='0'
[host] KMP_ADAPTIVE_LOCK_PROPS='1,1024'
[host] KMP_ALIGN_ALLOC='64'
[host] KMP_ALL_THREADPRIVATE='288'
[host] KMP_ATOMIC_MODE='2'
[host] KMP_BLOCKTIME='200'
[host] KMP_CPUINFO_FILE: value is not defined
[host] KMP_DETERMINISTIC_REDUCTION='FALSE'
[host] KMP_DEVICE_THREAD_LIMIT='2147483647'
[host] KMP_DISP_HAND_THREAD='FALSE'
[host] KMP_DISP_NUM_BUFFERS='7'
[host] KMP_DUPLICATE_LIB_OK='FALSE'
[host] KMP_FORCE_REDUCTION: value is not defined
[host] KMP_FOREIGN_THREADS_THREADPRIVATE='TRUE'
[host] KMP_FORKJOIN_BARRIER='2,2'
[host] KMP_FORKJOIN_BARRIER_PATTERN='hyper,hyper'
[host] KMP_FORKJOIN_FRAMES='TRUE'
[host] KMP_FORKJOIN_FRAMES_MODE='3'
[host] KMP_GTID_MODE='3'
[host] KMP_HANDLE_SIGNALS='FALSE'
[host] KMP_HOT_TEAMS_MAX_LEVEL='1'
[host] KMP_HOT_TEAMS_MODE='0'
[host] KMP_INIT_AT_FORK='TRUE'
[host] KMP_INIT_WAIT='2048'
[host] KMP_ITT_PREPARE_DELAY='0'
[host] KMP_LIBRARY='throughput'
[host] KMP_LOCK_KIND='queuing'
[host] KMP_MALLOC_POOL_INCR='1M'
[host] KMP_NEXT_WAIT='1024'
[host] KMP_NUM_LOCKS_IN_BLOCK='1'
[host] KMP_PLAIN_BARRIER='2,2'
[host] KMP_PLAIN_BARRIER_PATTERN='hyper,hyper'
[host] KMP_REDUCTION_BARRIER='1,1'
[host] KMP_REDUCTION_BARRIER_PATTERN='hyper,hyper'
[host] KMP_SCHEDULE='static,balanced;guided,iterative'
[host] KMP_SETTINGS='FALSE'
[host] KMP_SPIN_BACKOFF_PARAMS='4096,100'
[host] KMP_STACKOFFSET='64'
[host] KMP_STACKPAD='0'
[host] KMP_STACKSIZE='4M'
[host] KMP_STORAGE_MAP='FALSE'
[host] KMP_TASKING='2'
[host] KMP_TASKLOOP_MIN_TASKS='0'
[host] KMP_TASK_STEALING_CONSTRAINT='1'
[host] KMP_TEAMS_THREAD_LIMIT='72'
[host] KMP_TOPOLOGY_METHOD='all'
[host] KMP_USER_LEVEL_MWAIT='FALSE'
[host] KMP_VERSION='FALSE'
[host] KMP_WARNINGS='TRUE'
[host] OMP_AFFINITY_FORMAT='OMP: pid %P tid %T thread %n bound to OS proc set {%a}'
[host] OMP_ALLOCATOR='omp_default_mem_alloc'
[host] OMP_CANCELLATION='FALSE'
[host] OMP_DEFAULT_DEVICE='0'
[host] OMP_DISPLAY_AFFINITY='FALSE'
[host] OMP_DISPLAY_ENV='VERBOSE'
[host] OMP_DYNAMIC='FALSE'
[host] OMP_MAX_ACTIVE_LEVELS='2147483647'
[host] OMP_MAX_TASK_PRIORITY='0'
[host] OMP_NESTED='FALSE'
[host] OMP_NUM_THREADS: value is not defined
[host] OMP_PLACES='cores'
[host] OMP_PROC_BIND='spread'
[host] OMP_SCHEDULE='static'
[host] OMP_STACKSIZE='4M'
OMP_TARGET_OFFLOAD=DEFAULT
[host] OMP_THREAD_LIMIT='2147483647'
[host] OMP_TOOL='enabled'
[host] OMP_TOOL_LIBRARIES: value is not defined
[host] OMP_WAIT_POLICY='PASSIVE'
[host] KMP_AFFINITY='noverbose,warnings,respect,granularity=core,compact,0,0'
OPENMP DISPLAY ENVIRONMENT END

The first block is probably from GOMP.
The second one is from Intel OMP (IOMP).

Setting KMP_AFFINITY=verbose gives this:
OMP: Info #212: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #210: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0,36
OMP: Info #156: KMP_AFFINITY: 2 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #179: KMP_AFFINITY: 1 packages x 1 cores/pkg x 2 threads/core (1 total cores)
OMP: Info #214: KMP_AFFINITY: OS proc to physical thread map:
OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 36 maps to package 0 thread 1

which may mean that Intel OMP suddenly thinks there is just one core (OS procs 0 and 36 are the two hyperthreads of a single core)

Without OMP_PLACES it looks fine:
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #159: KMP_AFFINITY: 2 packages x 18 cores/pkg x 2 threads/core (36 total cores)

What have you tried to solve it?

  1. Symlinked libgomp.so.1 to the Intel OMP runtime (a sketch follows below)
  2. Got reasonable performance back
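
A minimal sketch of that workaround, assuming the Intel runtime shipped in the MXNet lib directory (the paths are illustrative, and this redirects the system-wide GOMP soname, so keep a backup):

sudo mv /usr/lib/x86_64-linux-gnu/libgomp.so.1 /usr/lib/x86_64-linux-gnu/libgomp.so.1.bak
sudo ln -s /path/to/incubator-mxnet/lib/libiomp5.so /usr/lib/x86_64-linux-gnu/libgomp.so.1

After this, anything that asks for libgomp.so.1 resolves to libiomp5.so, so only one OpenMP runtime gets initialized.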
@mxnet-label-bot
Contributor

Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so that the appropriate MXNet community members can help resolve it.
Here are my recommended labels: Performance

@stsukrov
Author

stsukrov commented Feb 7, 2019

@mxnet-label-bot add [bug]

@marcoabreu marcoabreu added the Bug label Feb 7, 2019
@stsukrov
Author

stsukrov commented Feb 7, 2019

@mxnet-label-bot add [bug, performance, openmp]

@stsukrov
Author

stsukrov commented Feb 7, 2019

@mxnet-label-bot add [Environment Variables]

@marcoabreu marcoabreu added the Environment Variables Issues related to setting env vars label Feb 7, 2019
@pengzhao-intel
Contributor

@stsukrov could you try the environment settings below?

export KMP_AFFINITY=granularity=fine,compact,1,0
export OMP_NUM_THREADS=<# of physical cores>
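
For the 2x18-core machine from the hardware info above, that would be, for example:

export KMP_AFFINITY=granularity=fine,compact,1,0
export OMP_NUM_THREADS=36   # 36 physical cores on that box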

@stsukrov
Author

stsukrov commented Feb 7, 2019

@pengzhao-intel
Thanks for the response, Patric!

Tried.
Same story if OMP_PLACES is present AND 2 OMP runtimes are loaded.

This issue is about a regression that appears IF something is set, not about finding a good configuration.

My assumption: it's caused by the multiple OMP runtimes, since the problem is gone as soon as I symlink gomp to iomp5.

To my knowledge (source: https://www.youtube.com/watch?v=Y7eVVfyo0U4, Michael Klemm), KMP_AFFINITY is deprecated for use cases other than verbose output.

KMP_AFFINITY=granularity=fine,compact,1,0

is roughly

OMP_PLACES=cores
OMP_PROC_BIND=close

Right?

Recall that I'm talking about a 10x performance regression.
A missing KMP_AFFINITY gives roughly 1.5x worse performance, which is understandable.

@pengzhao-intel
Contributor

@stsukrov thanks for the analysis.
I have done several experiments locally and I can reproduce your results in step 2.
I agree with you that OMP_PLACES should be similar to the two OMP env settings, but the result is totally different; I will look into the details next week (this week we are on vacation).

In step 3, the two env vars work well on my side; could you double check?
In step 4, you can leverage the latest subgraph optimization to get further performance.
In step 5, if you are running the case with 2 sockets and a big batch size, it is a useful trick.

Hope this helps unblock your work while we are debugging the OMP_PLACES issue.

==============================================================

  1. Without OMP_PLACES setting, resnetv1-50: 83.5 images/sec
(base) [patric@mlt-skx057 image-classification]$ python benchmark_score.py --network resnetv1-50 --batch-size 1
INFO:root:network: resnetv1-50
INFO:root:device: cpu(0)
/home/patric/develop/incubator-mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
INFO:root:batch size  1, dtype float32, images/sec: 83.457609
  2. Set OMP_PLACES=56; yes, the performance drops to 2.5 images/sec (a >30x regression)
(base) [patric@mlt-skx057 image-classification]$ export OMP_PLACES=56
(base) [patric@mlt-skx057 image-classification]$ python benchmark_score.py --network resnetv1-50 --batch-size 1
INFO:root:network: resnetv1-50
INFO:root:device: cpu(0)
/home/patric/develop/incubator-mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
INFO:root:batch size  1, dtype float32, images/sec: 2.511049
  3. Set OMP_NUM_THREADS and KMP_AFFINITY and got 104.0 images/sec
(base) [patric@mlt-skx057 image-classification]$ export KMP_AFFINITY=granularity=fine,compact,1,0
(base) [patric@mlt-skx057 image-classification]$ export OMP_NUM_THREADS=56
(base) [patric@mlt-skx057 image-classification]$ python benchmark_score.py --network resnetv1-50 --batch-size 1
INFO:root:network: resnetv1-50
INFO:root:device: cpu(0)
/home/patric/develop/incubator-mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
INFO:root:batch size  1, dtype float32, images/sec: 104.027203

  4. Using the new MKLDNN subgraph optimization via MXNET_SUBGRAPH_BACKEND=MKLDNN got 176.4 images/sec
(base) [patric@mlt-skx057 image-classification]$ export  MXNET_SUBGRAPH_BACKEND=MKLDNN
(base) [patric@mlt-skx057 image-classification]$ python benchmark_score.py --network resnetv1-50 --batch-size 1
INFO:root:network: resnetv1-50
INFO:root:device: cpu(0)
/home/patric/develop/incubator-mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
[13:57:30] src/operator/subgraph/mkldnn/mkldnn_conv_property.cc:138: Start to execute MKLDNN Convolution optimization pass.
INFO:root:batch size  1, dtype float32, images/sec: 176.417314
  5. Sometimes NNVM_EXEC_MATCH_RANGE can help for big batch sizes on 2 sockets.
    For example, mobilenet performance can be improved from 1941.6 images/sec to 2690.5 images/sec
(base) [patric@mlt-skx057 image-classification]$ python benchmark_score.py --network mobilenet --batch-size 128
INFO:root:network: mobilenet
INFO:root:device: cpu(0)
/home/patric/develop/incubator-mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
[14:17:54] src/operator/subgraph/mkldnn/mkldnn_conv_property.cc:138: Start to execute MKLDNN Convolution optimization pass.
INFO:root:batch size 128, dtype float32, images/sec: 1941.671165
(base) [patric@mlt-skx057 image-classification]$ export NNVM_EXEC_MATCH_RANGE=1
(base) [patric@mlt-skx057 image-classification]$ python benchmark_score.py --network mobilenet --batch-size 128
INFO:root:network: mobilenet
INFO:root:device: cpu(0)
/home/patric/develop/incubator-mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
[14:18:07] src/operator/subgraph/mkldnn/mkldnn_conv_property.cc:138: Start to execute MKLDNN Convolution optimization pass.
INFO:root:batch size 128, dtype float32, images/sec: 2690.545505

@stsukrov
Author

stsukrov commented Feb 8, 2019

@pengzhao-intel
Thanks, Patric. Appreciate your help during the vacation.

NNVM_EXEC_MATCH_RANGE and MXNET_SUBGRAPH_BACKEND are additional optimizations.
Great to know.
Let's put them aside for a while to investigate just the issue.

My results:

  1. Nothing was set:
INFO:root:network: resnetv1-50
INFO:root:device: cpu(0)
/home/stsukrov/workspace/mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
INFO:root:batch size  1, dtype float32, images/sec: 85.332118
  2. OMP_NUM_THREADS was set to 36 (my physical core count)
INFO:root:network: resnetv1-50
INFO:root:device: cpu(0)
/home/stsukrov/workspace/mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
INFO:root:batch size  1, dtype float32, images/sec: 89.931452
  3. OMP_NUM_THREADS and OMP_PLACES=cores are both set
INFO:root:network: resnetv1-50
INFO:root:device: cpu(0)
/home/stsukrov/workspace/mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
INFO:root:batch size  1, dtype float32, images/sec: 4.841513
  4. Set OMP_NUM_THREADS and KMP_AFFINITY. Keep OMP_PLACES
OMP: Warning #181: OMP_PLACES: ignored because KMP_AFFINITY has been defined
INFO:root:network: resnetv1-50
INFO:root:device: cpu(0)
/home/stsukrov/workspace/mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
INFO:root:batch size  1, dtype float32, images/sec: 4.848529

Note the message about ignoring OMP_PLACES.

  5. Set OMP_NUM_THREADS and KMP_AFFINITY. Remove OMP_PLACES
INFO:root:network: resnetv1-50
INFO:root:device: cpu(0)
/home/stsukrov/workspace/mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
INFO:root:batch size  1, dtype float32, images/sec: 99.267594

Looks good to me.

So, yes: OMP_NUM_THREADS and KMP_AFFINITY alone do the job. OMP_PLACES kills the performance.
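
Putting the working settings for this box in one place (a sketch based on the runs above; 36 is the physical core count here):

unset OMP_PLACES
export OMP_NUM_THREADS=36
export KMP_AFFINITY=granularity=fine,compact,1,0
python3 benchmark_score.py --network resnetv1-50 --batch-size 1   # ~99 images/sec as in step 5 above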

@stsukrov
Author

stsukrov commented Feb 8, 2019

Now I'm creating the symlink:

ldd $MXNET_LIBRARY_PATH | grep omp
	libiomp5.so => /home/stsukrov/workspace/incubator-mxnet/lib/libiomp5.so (0x00007f5db6813000)
	libXcomposite.so.1 => /usr/lib/x86_64-linux-gnu/libXcomposite.so.1 (0x00007f5da7ded000)
	libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f5d9d20c000)
ln -s /home/stsukrov/workspace/incubator-mxnet/lib/libiomp5.so 
ldd $MXNET_LIBRARY_PATH | grep omp
	libiomp5.so => /home/stsukrov/workspace/incubator-mxnet/lib/libiomp5.so (0x00007f2398f06000)
	libXcomposite.so.1 => /usr/lib/x86_64-linux-gnu/libXcomposite.so.1 (0x00007f238a4e0000)
stsukrov@ip-172-31-1-212:0:~/mxnet-blog$ 

And OMP_PLACES is no longer a troublemaker.

OMP: Warning #181: OMP_PLACES: ignored because KMP_AFFINITY has been defined
INFO:root:network: resnetv1-50
INFO:root:device: cpu(0)
/home/stsukrov/workspace/mxnet/python/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
INFO:root:batch size  1, dtype float32, images/sec: 99.306609

So I interpret this as a sign that the issue is caused by two conflicting OMP runtimes.
As you can see in the original post, both runtimes are loaded and initialized.

Quoting from: https://github.com/intel/mkl-dnn:

Having more than one OpenMP runtime initialized may lead to undefined behavior including incorrect results or crashes.

Yes, I know that Intel OpenMP exports both GOMP_ and kmpc_ entry points, but something still seems to clash.

@stsukrov
Author

@pengzhao-intel
Any feedback on the topic?

@pengzhao-intel
Contributor

@stsukrov I am a little busy this week and I will look into the issue later.
Luckily, there is a method to unblock your experiments in the meantime.

@stsukrov
Author

@pengzhao-intel
Thanks a lot. It's not a blocker for us.

@sandeep-krishnamurthy
Contributor

@pengzhao-intel - Any suggestion here? Thank you.

@pengzhao-intel
Contributor

pengzhao-intel commented Apr 2, 2019

Sorry for the delay. @NeoZhangJianyu @yinghu5 will help to figure out the reason and fix the issue :)

@NeoZhangJianyu
Contributor

NeoZhangJianyu commented Apr 3, 2019

@sandeep-krishnamurthy
"This issue is caused by 2 conflicting OMP runtimes" as mentioned by @stsukrov.

For MXNet installed via pip/conda at the moment:
(It is recommended to install into a virtual Python environment, such as Conda. That helps reduce the impact of other installed library files.)

  1. 'mxnet-mkl' does not have the problem
    pip install mxnet-mkl or conda install mxnet-mkl

  2. 'mxnet' has this issue.
    pip install mxnet or conda install mxnet

For MXNet installed by source:

  1. Set PYTHONPATH
    $ export PYTHONPATH=/path/to/incubator-mxnet/python
  2. Set Library Path
    $ export LD_LIBRARY_PATH=/path/to/incubator-mxnet/lib:$LD_LIBRARY_PATH

Setting these after compiling avoids the issue of conflicting OMP runtimes.

If you want to use OMP_PLACES to improve the performance in this case (see the sketch below):

  1. export OMP_PLACES='{0:8:1}' could help you. You need to adjust it for your CPU.
  2. export OMP_PLACES=56 is not supported by GOMP. It only triggers one thread if it works at all.
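
If you do experiment with OMP_PLACES values, one way to verify what each loaded runtime actually derives from them is to reuse the display switch from earlier in this issue, for example:

export OMP_PLACES='{0:8:1}'      # value from point 1 above; adjust for your CPU
export OMP_DISPLAY_ENV=verbose   # both GOMP and Intel OMP print their resolved settings at startup
python3 benchmark_score.py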

@stsukrov
Author

stsukrov commented Apr 3, 2019

@NeoZhangJianyu

Thanks for providing workaround recommendations.

I was actually dreaming of a build that just works, without venv or LD_LIBRARY_PATH thaumaturgy.

@NeoZhangJianyu
Contributor

@stsukrov
Yes, I agree with you. In fact, I don't like setting LD_LIBRARY_PATH either.

On a development server, we always install many different release libs, so conflicts happen easily. venv and LD_LIBRARY_PATH are easy ways to avoid/clear conflicts between library files.

On a deployment server, it is better to copy the libs into /usr/lib, provided the install/upgrade tools and scripts are good enough.

venv: it can remove the conflict entirely if we cannot otherwise resolve it.

LD_LIBRARY_PATH:

  1. It avoids copying the files into the /usr/lib folder, which is hard to maintain.
  2. If there are several release builds from source, it is easy to switch between different local builds.

@lebeg
Contributor

lebeg commented Apr 3, 2019

Just to make a few connections between tickets:

Here was an attempt to remove the bundled conflicting OpenMP library: #12160

Link to the stale dev list discussion.

A proposal from my side for a robust and stable solution is to make an OpenMP environment-checking script (a rough sketch follows the list) that would:

  • install a performant openmp (from Intel, for example) if not already present
  • issue a warning that conflicting libs were found and ask for uninstallation (check LD_LIBRARY_PATH)
  • symlink all variants of openmp lib (gomp, iomp, omp) to the same lib
  • should work in virtual environments (pipenv, virtualenv, conda etc)
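
A rough sketch of what such a checking script could look like (hypothetical, not part of MXNet; the default library path is an assumption):

#!/bin/bash
# Warn if libmxnet.so would pull in more than one OpenMP runtime.
LIB="${1:-lib/libmxnet.so}"
runtimes=$(ldd "$LIB" | grep -oE 'lib(gomp|iomp5|omp)\.so[^ ]*' | sort -u)
count=$(printf '%s\n' "$runtimes" | grep -c .)
echo "OpenMP runtimes linked by $LIB:"
printf '%s\n' "$runtimes"
if [ "$count" -gt 1 ]; then
    echo "WARNING: multiple OpenMP runtimes detected; this can trigger the OMP_PLACES slowdown described above."
fi

The install/symlink steps from the bullet list would then only run when more than one runtime shows up.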
