BLAS threads should default to physical not logical core count? #33409

Open
ChrisRackauckas opened this issue Sep 28, 2019 · 14 comments

@ChrisRackauckas (Contributor) commented Sep 28, 2019

On an i7-8550U, OpenBLAS defaults to 8 threads. I was comparing against RecursiveFactorization.jl and saw performance like this:

using BenchmarkTools
import LinearAlgebra, RecursiveFactorization
# check how many threads OpenBLAS is currently using
ccall((:openblas_get_num_threads64_, Base.libblas_name), Cint, ())
LinearAlgebra.BLAS.set_num_threads(4)
BenchmarkTools.DEFAULT_PARAMETERS.seconds = 0.5

# approximate flop count for the LU factorization of an m×n matrix
luflop(m, n) = n^3÷3 - 3 + m*n^2
luflop(n) = luflop(n, n)

bas_mflops = Float64[]
rec_mflops = Float64[]
ns = 50:50:800
for n in ns
    A = rand(n, n)
    bt = @belapsed LinearAlgebra.lu!($(copy(A)))
    rt = @belapsed RecursiveFactorization.lu!($(copy(A)))
    push!(bas_mflops, luflop(n)/bt/1e9)
    push!(rec_mflops, luflop(n)/rt/1e9)
end

using Plots
plt = plot(ns, bas_mflops, legend=:bottomright, lab="OpenBLAS", title="LU Factorization Benchmark", marker=:auto)
plot!(plt, ns, rec_mflops, lab="RecursiveFactorization", marker=:auto)
xaxis!(plt, "size (N x N)")
yaxis!(plt, "GFLOPS")

1 thread: [plot: lubenchbnumthreads1]

4 threads: [plot: lubenchbnumthreads4]

8 threads: [plot: lubench]

Conclusion: the default Julia ends up with, 8 threads, is the worst; even 1 thread does better, and using the number of physical cores, 4, is best.

There have been a lot of threads on Discourse and in Slack #gripes along the lines of "setting BLAS threads to 1 is better than the default!", and it looks like that's because the default should be the number of physical cores, not logical threads. I am actually very surprised it's not set that way, so I was wondering why, and also where this default is set (I couldn't find it).
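
In the meantime, a minimal workaround sketch, assuming Hwloc.jl is installed (num_physical_cores is that package's API as I understand it, not something in Base):

using Hwloc, LinearAlgebra

# physical cores, ignoring hyper-threads; Sys.CPU_THREADS reports logical CPUs
nphys = Hwloc.num_physical_cores()
LinearAlgebra.BLAS.set_num_threads(nphys)

# sanity check: ask OpenBLAS how many threads it will actually use
ccall((:openblas_get_num_threads64_, Base.libblas_name), Cint, ())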

@ChrisRackauckas (Contributor, Author) commented Sep 28, 2019

Interestingly, checking our Linux benchmarking server, I see:

processor       : 15
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
stepping        : 0
microcode       : 0x3d
cpu MHz         : 2394.455
cache size      : 35840 KB
physical id     : 30
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 30
initial apicid  : 30
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid xsaveopt arat spec_ctrl intel_stibp flush_l1d arch_capabilities
bogomips        : 4788.91
clflush size    : 64
cache_alignment : 64
address sizes   : 42 bits physical, 48 bits virtual
power management:

[crackauckas@pumas ~]$ julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.2.0 (2019-08-20)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> ccall((:openblas_get_num_threads64_, Base.libblas_name), Cint, ())
8

and it's correct there, so this may be an issue with Windows.

@YingboMa (Contributor) commented Sep 28, 2019

This doesn't seem like a Windows-only problem.

julia> versioninfo(verbose=true)
Julia Version 1.3.0-rc2.0
Commit a04936e3e0 (2019-09-12 19:49 UTC)
Platform Info:
  OS: Linux (x86_64-linux-gnu)
      Ubuntu 19.04
  uname: Linux 5.0.0-29-generic #31-Ubuntu SMP Thu Sep 12 13:05:32 UTC 2019 x86_64 x86_64
  CPU: Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz:
              speed         user         nice          sys         idle          irq
       #1  2700 MHz     384385 s       2674 s     128341 s    7458186 s          0 s
       #2  2700 MHz     381396 s       3631 s     125158 s    1466384 s          0 s
       #3  2700 MHz     304236 s       3488 s      99122 s    1470836 s          0 s
       #4  2700 MHz     421380 s       4702 s     137860 s    1454895 s          0 s
       #5  2700 MHz     192528 s       2218 s      83784 s    1472282 s          0 s
       #6  2700 MHz     267616 s       1661 s      82347 s    1470275 s          0 s
       #7  2700 MHz     309143 s       1778 s      98877 s    1478079 s          0 s
       #8  2700 MHz     287978 s       3788 s      85621 s    1470172 s          0 s

  Memory: 15.530269622802734 GB (4015.9765625 MB free)
  Uptime: 238530.0 sec
  Load Avg:  0.39208984375  0.30615234375  0.16015625
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
Environment:
  HOME = /home/scheme
  OPTFLAGS = -load=/home/scheme/julia/usr/lib/libjulia.so
  PATH = /home/scheme/.cargo/bin:/home/scheme/Downloads/VSCode-linux-x64/code-insiders:/sbin:/home/scheme/.cargo/bin:/home/scheme/Downloads/VSCode-linux-x64/code-insiders:/sbin:/home/scheme/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
  TERM = screen-256color
  WINDOWPATH = 2

julia> ccall((:openblas_get_num_threads64_, Base.libblas_name), Cint, ())
8

My Linux laptop has four physical cores.

@tkf (Contributor) commented Sep 28, 2019

Isn't 8 coming from here?

julia/base/Base.jl

Lines 395 to 403 in 318affa

# And try to prevent openblas from starting too many threads, unless/until specifically requested
if !haskey(ENV, "OPENBLAS_NUM_THREADS") && !haskey(ENV, "OMP_NUM_THREADS")
    cpu_threads = Sys.CPU_THREADS::Int
    if cpu_threads > 8 # always at most 8
        ENV["OPENBLAS_NUM_THREADS"] = "8"
    elseif haskey(ENV, "JULIA_CPU_THREADS") # or exactly as specified
        ENV["OPENBLAS_NUM_THREADS"] = cpu_threads
    end # otherwise, trust that openblas will pick CPU_THREADS anyways, without any intervention
end

See also

julia/base/sysinfo.jl

Lines 56 to 63 in 318affa

Sys.CPU_THREADS
The number of logical CPU cores available in the system, i.e. the number of threads
that the CPU can run concurrently. Note that this is not necessarily the number of
CPU cores, for example, in the presence of
[hyper-threading](https://en.wikipedia.org/wiki/Hyper-threading).
See Hwloc.jl or CpuId.jl for extended information, including number of physical cores.

Ref: #27856
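
To make that concrete, here is a sketch that just mirrors the quoted branch (illustration only, not the actual startup path):

# Mirrors the startup logic quoted above; returns the value that would be written
# to ENV["OPENBLAS_NUM_THREADS"], or nothing when OpenBLAS is left to decide.
function default_openblas_threads(cpu_threads::Int = Sys.CPU_THREADS)
    if haskey(ENV, "OPENBLAS_NUM_THREADS") || haskey(ENV, "OMP_NUM_THREADS")
        return nothing           # the user already chose
    elseif cpu_threads > 8
        return 8                 # always at most 8
    elseif haskey(ENV, "JULIA_CPU_THREADS")
        return cpu_threads       # exactly as specified
    else
        return nothing           # OpenBLAS detects the (logical) CPU count itself
    end
end

default_openblas_threads(8)  # nothing: on an i7-8550U OpenBLAS picks all 8 logical threads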

@macd (Contributor) commented Sep 28, 2019

Here is a comparison for the 4-thread case on an Intel i7-6770HQ (Skull Canyon NUC), using the Intel MKL:
[plot: plt]

@ChrisRackauckas (Contributor, Author) commented Sep 28, 2019

What does that show? It seems RecursiveFactorization is only faster when the BLAS is not well-tuned, but I don't understand how your plot relates here, @macd. Could you show the 8-thread version with MKL, to see whether MKL is also not smart when the setting is at the logical count?

@macd (Contributor) commented Sep 28, 2019

Here are the MKL results as a function of the number of threads on the Skull Canyon NUC, which has 4 cores and 8 threads. There is clearly a stumble with 8 threads around n = 500, but it is still better than 4. Apparently Intel knows how to use the hyper-threads more effectively.
[plot: mkl_threads]

@ChrisRackauckas (Contributor, Author) commented Sep 28, 2019

That explains a lot. Thanks

@macd (Contributor) commented Sep 29, 2019

So I have egg all over my face on this one. I just looked at the code in the window I left open this morning and saw that I incorrectly used the loop index rather than the indexed value for the number of threads, so the graph above is actually for 1, 2, 3, and 4 threads. The following graph shows no significant difference between 4 and 8 threads. (I never was an early morning person.)
[plot: mkl_threads_v2]

@ChrisRackauckas (Contributor, Author) commented Sep 29, 2019

Ah yes, so it looks like MKL is just smart and doesn't actually run more threads than is useful, or something like that. Since OpenBLAS doesn't do that, this would account for a good chunk of the difference under the default settings.

@StefanKarpinski (Member) commented Oct 1, 2019

Can we easily detect the number of physical cores versus logical cores? Fortunately we renamed the old Sys.CPU_CORES to Sys.CPU_THREADS, so the name Sys.CPU_CORES is available for that value, but I'm not sure how best to get it.
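
The Sys.CPU_THREADS docstring quoted above points at Hwloc.jl for exactly this; a hedged sketch (num_physical_cores / num_virtual_cores are that package's API as I understand it, not Base's):

import Hwloc

# Hwloc.jl walks the hardware topology via libhwloc.
Hwloc.num_physical_cores()  # physical cores (what a Sys.CPU_CORES could report)
Hwloc.num_virtual_cores()   # logical CPUs (what Sys.CPU_THREADS reports today)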

@chriselrod commented Oct 1, 2019

Is lscpu reliably available on Linux machines?
It correctly identified 10 cores per socket and 2 threads per core:

> lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       46 bits physical, 48 bits virtual
CPU(s):              20
On-line CPU(s) list: 0-19
Thread(s) per core:  2
Core(s) per socket:  10
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Core(TM) i9-7900X CPU @ 3.30GHz
Stepping:            4
CPU MHz:             3980.712
CPU max MHz:         4500.0000
CPU min MHz:         1200.0000
BogoMIPS:            6600.00
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            14080K
NUMA node0 CPU(s):   0-19
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d

> lscpu | grep "Core(s) per socket"
Core(s) per socket:  10
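
If lscpu is considered reliable enough, a hedged sketch of consuming it from Julia (Linux only, and it assumes the English field labels shown above):

# Multiply "Core(s) per socket" by "Socket(s)" from lscpu's output (Linux only).
function physical_cores_lscpu()
    cores_per_socket, sockets = 0, 0
    for line in eachline(`lscpu`)
        startswith(line, "Core(s) per socket:") && (cores_per_socket = parse(Int, strip(split(line, ':')[2])))
        startswith(line, "Socket(s):")          && (sockets = parse(Int, strip(split(line, ':')[2])))
    end
    cores_per_socket * sockets
end

physical_cores_lscpu()  # 10 on the machine above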
@StefanKarpinski (Member) commented Oct 1, 2019

Some possibilities for figuring out the number of physical cores:

We may be able to borrow something from https://github.com/m-j-w/CpuId.jl
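
If we borrow from CpuId.jl, the relevant queries appear to be these (its exported API as I understand it, not anything in Base):

using CpuId

cpucores()    # physical cores, straight from the cpuid instruction
cputhreads()  # logical processors, including hyper-threads
cputhreads() > cpucores()  # true when hyper-threading is active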

@StefanKarpinski (Member) commented Oct 11, 2019

Another possible library for figuring out the number of cores (from @RoyiAvital):

@KristofferC (Contributor) commented Oct 12, 2019

Just as a note, we already do a bunch of cpuid work to support picking functions optimized for a feature set at runtime, e.g.

// CPUID
extern "C" JL_DLLEXPORT void jl_cpuid(int32_t CPUInfo[4], int32_t InfoType)
{
#if defined _MSC_VER
    __cpuid(CPUInfo, InfoType);
#else
    asm volatile (
#if defined(__i386__) && defined(__PIC__)
        "xchg %%ebx, %%esi;"
        "cpuid;"
        "xchg %%esi, %%ebx;" :
        "=S" (CPUInfo[1]),
#else
        "cpuid" :
        "=b" (CPUInfo[1]),
#endif
        "=a" (CPUInfo[0]),
        "=c" (CPUInfo[2]),
        "=d" (CPUInfo[3]) :
        "a" (InfoType)
    );
#endif
}

Perhaps extending that to pick out the number of cores, rather than bundling a whole other library, is the "cleaner" option.
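
For reference, that entry point should already be reachable from Julia via ccall; a minimal sketch that only checks the leaf-1 HTT flag (a real physical-core count would need the topology leaves, e.g. 0x0B/0x1F on Intel, which also need a subleaf in ecx that jl_cpuid as written doesn't set):

# Call the exported jl_cpuid (x86 only) and inspect CPUID leaf 1.
info = zeros(Int32, 4)                        # eax, ebx, ecx, edx
ccall(:jl_cpuid, Cvoid, (Ptr{Int32}, Int32), info, 1)
htt_capable = ((info[4] >> 28) & 1) == 1      # edx bit 28: hyper-threading capable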
