New MacOS R v3.4.0 CRAN binary support for OpenMP: error with mclapply #2137

Closed · mattdowle opened this issue May 2, 2017 · 7 comments · Fixed by #2410

Comments

@mattdowle
Member

This was reported on r-devel:

https://stat.ethz.ch/pipermail/r-devel/2017-May/074178.html

This is the first I knew that CRAN binaries for MacOS now support OpenMP. Great news!

Either the fork catch isn't working in that environment for some reason, the use case differs from the tests somehow, or the fork catch is working but something somewhere is asserting its surprise at being single-threaded.
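
In the meantime, the workaround mentioned on r-devel can be sketched roughly like this (a minimal sketch only, assuming the exported setDTthreads()/getDTthreads() functions behave as documented): drop data.table to a single OpenMP thread before the explicit parallelism, and restore afterwards.

library(data.table)
library(parallel)

old <- getDTthreads()   # remember the current data.table thread setting
setDTthreads(1)         # single-threaded data.table before the fork

res <- mclapply(1:4, function(i) {
  # data.table work inside the forked children
  data.table(x = rnorm(100))[, .(m = mean(x))]
})

setDTthreads(old)       # restore multi-threading in the parent afterwards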

@mattdowle mattdowle changed the title New MacOS binary support for OpenMP error with mclapply New MacOS R v3.4.0 binary support for OpenMP: error with mclapply May 2, 2017
@mattdowle mattdowle changed the title New MacOS R v3.4.0 binary support for OpenMP: error with mclapply New MacOS R v3.4.0 CRAN binary support for OpenMP: error with mclapply May 2, 2017
@mattdowle mattdowle added this to the v1.10.6 milestone May 2, 2017
@RoyalTS
Contributor

RoyalTS commented May 23, 2017

You mention in the r-devel thread that

In the meantime setDTthreads(1) can be called manually before the explicit parallelism as a workaround.

I may be misunderstanding how exactly this should be done, but the following trivial example produces the same errors for me despite setDTthreads(1):

library(data.table)

DT <- data.table(group = rep(letters, each = 3),
                 val = rnorm(26 * 3))

get_group_means <- function(DT) {
  DT[, .(mean(val)), .(group)]
}

setDTthreads(1)
parallel::mclapply(1:10, function(x) get_group_means(DT))

@mattdowle
Member Author

@RoyalTS Thanks for this report. Is there any chance you tried this in an R session that had been doing other data.table tasks before trying this call to mclapply? There's a reason why the setDTthreads(1) call might need to be done earlier, straight after the call to library(data.table). Could you try that again in a fresh R session please?
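
Concretely, here's the ordering to try in the fresh session (a minimal sketch based on your example above):

# Fresh R session: limit data.table to one thread immediately after attaching
# it, before any data.table call has run.
library(data.table)
setDTthreads(1)

DT <- data.table(group = rep(letters, each = 3),
                 val = rnorm(26 * 3))
get_group_means <- function(DT) {
  DT[, .(mean(val)), .(group)]
}
parallel::mclapply(1:10, function(x) get_group_means(DT))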

Please also set these environment variables in the command window before starting R. When you run your R example, diagnostic messages should be printed by Intel's OpenMP library / KMP. Please paste the output to this issue.

$ export KMP_AFFINITY=verbose
$ export KMP_SETTINGS=1
$ export KMP_VERSION=1
$ R
> # your test

@mattdowle
Member Author

mattdowle commented Jul 11, 2017

I'm having trouble reproducing this one. I've obtained a fresh Mac running macOS Sierra (10.12.5) with R 3.4.1, using the CRAN binary of data.table 1.10.4 for Mac. It all seems fine.

I start with a fresh R session and invoke some usage of OpenMP with fwrite, since I thought it might only be a problem after OpenMP has been invoked and created a thread team. htop shows fwrite using 8 threads. Then I call the mclapply provided above, but it works fine. Here is my output. Can anybody with a Mac who's able to reproduce the problem see what the difference is?

Here's what to paste into a fresh R session:

require(data.table)

# invoke some usage of OpenMP :
DT = setDT(lapply(1:10, function(x)sample(100,1e8,replace=TRUE)))
fwrite(DT,"/tmp/DT.csv")

# now do a fork with parallel :
DT <- data.table(group = rep(letters, each=3),
                 val = rnorm(26*3))
get_group_means <- function(DT) {
  DT[, .(mean(val)), .(group)]
}
parallel::mclapply(1:10, function(x) get_group_means(DT))

Here's the output:

$ export KMP_AFFINITY=verbose
$ export KMP_SETTINGS=1
$ export KMP_VERSION=1
$ R
R version 3.4.1 (2017-06-30) -- "Single Candle"
Copyright (C) 2017 The R Foundation for Statistical Computing
Platform: x86_64-apple-darwin15.6.0 (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> require(data.table)
Loading required package: data.table
data.table 1.10.4
  The fastest way to learn (by data.table authors): https://www.datacamp.com/courses/data-analysis-the-data-table-way
  Documentation: ?data.table, example(data.table) and browseVignettes("data.table")
  Release notes, videos and slides: http://r-datatable.com
> DT = setDT(lapply(1:10, function(x)sample(100,1e8,replace=TRUE)))
> fwrite(DT,"/tmp/DT.csv")
Intel(R) OMP version: 5.0.20140926
Intel(R) OMP library type: performance
Intel(R) OMP link type: dynamic
Intel(R) OMP build time: no_timestamp
Intel(R) OMP build compiler: Clang 8.0
Intel(R) OMP alternative compiler support: yes
Intel(R) OMP API version: 4.5 (201511)
Intel(R) OMP dynamic error checking: no
Intel(R) OMP plain barrier branch bits: gather=2, release=2
Intel(R) OMP forkjoin barrier branch bits: gather=2, release=2
Intel(R) OMP reduction barrier branch bits: gather=1, release=1
Intel(R) OMP plain barrier pattern: gather=hyper, release=hyper
Intel(R) OMP forkjoin barrier pattern: gather=hyper, release=hyper
Intel(R) OMP reduction barrier pattern: gather=hyper, release=hyper
Intel(R) OMP lock type: run time selectable
Intel(R) OMP thread affinity support: no

User settings:

   KMP_AFFINITY=verbose
   KMP_SETTINGS=1
   KMP_VERSION=1

Effective settings:

   KMP_ABORT_DELAY=0
   KMP_ADAPTIVE_LOCK_PROPS='1,1024'
   KMP_ALIGN_ALLOC=64
   KMP_ALL_THREADPRIVATE=128
   KMP_ALL_THREADS=2147483647
   KMP_ATOMIC_MODE=2
   KMP_A_DEBUG=0
   KMP_BLOCKTIME=200
   KMP_B_DEBUG=0
   KMP_CONSISTENCY_CHECK=none
   KMP_C_DEBUG=0
   KMP_DEBUG_BUF=false
   KMP_DEBUG_BUF_ATOMIC=false
   KMP_DEBUG_BUF_CHARS=128
   KMP_DEBUG_BUF_LINES=512
   KMP_DETERMINISTIC_REDUCTION=false
   KMP_DIAG=0
   KMP_DISP_NUM_BUFFERS=7
   KMP_DUPLICATE_LIB_OK=false
   KMP_DYNAMIC_MODE: value is not defined 
   KMP_D_DEBUG=0
   KMP_E_DEBUG=0
   KMP_FORCE_REDUCTION: value is not defined
   KMP_FOREIGN_THREADS_THREADPRIVATE=true
   KMP_FORKJOIN_BARRIER='2,2'
   KMP_FORKJOIN_BARRIER_PATTERN='hyper,hyper'
   KMP_FORKJOIN_FRAMES=true
   KMP_FORKJOIN_FRAMES_MODE=3
   KMP_F_DEBUG=0
   KMP_GTID_MODE=0
   KMP_HANDLE_SIGNALS=false
   KMP_HOT_TEAMS_MAX_LEVEL=1
   KMP_HOT_TEAMS_MODE=0
   KMP_INHERIT_FP_CONTROL=true
   KMP_INIT_AT_FORK=true
   KMP_INIT_WAIT=2048
   KMP_ITT_PREPARE_DELAY=0
   KMP_LIBRARY=throughput
   KMP_LOAD_BALANCE_INTERVAL=1.000000
   KMP_LOCK_KIND=queuing
   KMP_MALLOC_POOL_INCR=1M
   KMP_NEXT_WAIT=1024
   KMP_NUM_LOCKS_IN_BLOCK=1
   KMP_PLAIN_BARRIER='2,2'
   KMP_PLAIN_BARRIER_PATTERN='hyper,hyper'
   KMP_REDUCTION_BARRIER='1,1'
   KMP_REDUCTION_BARRIER_PATTERN='hyper,hyper'
   KMP_SCHEDULE='static,balanced;guided,iterative'
   KMP_SETTINGS=true
   KMP_SPIN_BACKOFF_PARAMS='4096,100'
   KMP_STACKOFFSET=0
   KMP_STACKPAD=0
   KMP_STACKSIZE=4M
   KMP_STORAGE_MAP=false
   KMP_TASKING=2
   KMP_TASK_STEALING_CONSTRAINT=1
   KMP_VERSION=true
   KMP_WARNINGS=true
   OMP_CANCELLATION=false
   OMP_DEFAULT_DEVICE=0
   OMP_DISPLAY_ENV=false
   OMP_DYNAMIC=false
   OMP_MAX_ACTIVE_LEVELS=2147483647
   OMP_MAX_TASK_PRIORITY=0
   OMP_NESTED=false
   OMP_NUM_THREADS: value is not defined
   OMP_PROC_BIND='false'
   OMP_SCHEDULE='static'
   OMP_STACKSIZE=4M
   OMP_THREAD_LIMIT=2147483647
   OMP_WAIT_POLICY=PASSIVE

> DT <- data.table(group = rep(letters, each=3),
+                              val = rnorm(26*3))
> get_group_means <- function(DT) {
+   DT[, .(mean(val)), .(group)]
+ }
> parallel::mclapply(1:10, function(x) get_group_means(DT))

[The forked children each print the same KMP "User settings" / "Effective settings" block again, identical to the block above, with output from different children interleaved mid-line in the terminal. The repeated settings output is omitted here.]

[[1]]
    group          V1
 1:     a  0.26302390
 2:     b  0.42977665
 3:     c -0.10900981
 4:     d  0.92687218
 5:     e -0.64800822
 6:     f  0.36601525
 7:     g -0.43251288
 8:     h -0.84954195
 9:     i -0.04290132
10:     j  0.28345015
11:     k  0.43719925
12:     l  0.43637074
13:     m  0.37049567
14:     n -0.55731122
15:     o  0.26104105
16:     p  0.33104576
17:     q -0.95024337
18:     r  1.32241759
19:     s -0.44203967
20:     t -0.20102535
21:     u -0.48977614
22:     v -0.97433810
23:     w -1.10154462
24:     x -0.07125898
25:     y  0.20903872
26:     z  0.55188585
    group          V1

[[2]] through [[10]] are identical to [[1]]; repeated output omitted.
> sessionInfo()
R version 3.4.1 (2017-06-30)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Sierra 10.12.5

Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] data.table_1.10.4

loaded via a namespace (and not attached):
[1] compiler_3.4.1 parallel_3.4.1
> 

@xhdong-umd

I can reproduce this error without using fwrite. However, my code uses a lot of other functions and packages, so I'm not sure how helpful it will be. If you can debug the environment rather than the source code, maybe it's still useful.

Basically I have a package that depends on data.table (it uses fread in one data-reading function). I then run two optimization functions in mclapply: the 1st always passes, the 2nd always hits this error. Neither optimization function uses fread. If I save the environment, load it into a new session, and run the 2nd one directly, the error is gone.

My guess is that my code used data.table at some point, then mclapply forked multiple workers to run the optimization function. The forked workers share the parent process's state, and the fork hits the error once some settings have been changed by data.table code (not necessarily by running any parallel work in data.table — I think only fread uses OpenMP, right?)
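
If it helps, here's a small diagnostic sketch (hypothetical, using only the exported getDTthreads()) to compare what the parent and the forked children report after data.table has been used in the parent:

library(data.table)
library(parallel)

# Exercise an OpenMP-enabled data.table routine in the parent first (fwrite,
# as in the repro above), then compare the thread counts reported by the
# parent and by each forked child.
tmp <- tempfile(fileext = ".csv")
fwrite(data.table(x = rnorm(1e6)), tmp)

cat("parent threads:", getDTthreads(), "\n")
mclapply(1:2, function(i) getDTthreads())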

@xhdong-umd

xhdong-umd commented Aug 10, 2017

These parallel bugs are tricky. At one point I thought I could produce or avoid the bug repeatably with slightly different code, then the bug disappeared altogether with clean R sessions.

@dracodoc
Contributor

@mattdowle When I installed data.table on a Mac earlier, it asked me to enable OpenMP and I followed the instructions in the wiki. Now that R 3.4 enables OpenMP by default, is it possible there is some conflict between R's OpenMP setting and data.table's OpenMP setting?
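
One quick way to check which build is actually being loaded (a sketch only; getDTthreads() is expected to report 1 if the installed data.table was compiled without OpenMP):

library(data.table)

getDTthreads()                          # 1 would suggest no OpenMP support
find.package("data.table")              # which installed copy is being loaded
packageDescription("data.table")$Built  # build info for that copy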

@mattdowle
Member Author

Looks like this one is ongoing on MacOS with parallel::mclapply after loading data.table. Let's continue in #2418 where I added links to the NEWS items from 1.10.4-1 and 1.10.4-2.
