
Performance issue with Winston #27083

Closed · tknopp opened this issue May 12, 2018 · 11 comments
Labels: latency (Compiler latency)

@tknopp (Contributor) commented May 12, 2018

Hi,

I am facing a major performance issue that I have now been able to track down to a very basic example:

               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: https://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.6.0 (2017-06-19 13:05 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
|__/                   |  x86_64-apple-darwin13.4.0

julia> using Winston

julia> @time plot(1:10)
 15.767232 seconds (45.28 M allocations: 2.131 GiB, 7.43% gc time)

julia> 

I am not sure if posting here is correct, but I am pretty sure that under 0.5 Winston was not this slow to compile the plot function. Requiring more than 2 GiB of RAM suggests that something is going wrong there. Almost all of the work seems to be done within inference. If the performance issue is not a Julia regression, I would appreciate any hints on what could be changed in Winston to make this fast again. (@vtjnash: this is where I thought that Gtk was the root issue, but it seems to be within Winston.)
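For what it's worth, the simplest way I know to separate the first-call (compile/inference) cost from the steady-state cost is to time the call twice in a fresh session. A minimal sketch of that measurement (no output reproduced here):

    # run in a fresh session; the gap between the two timings is dominated by
    # compilation and inference of plot() and everything it calls
    using Winston
    @time plot(1:10)   # first call: compile + run
    @time plot(1:10)   # second call: essentially steady-state run time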

@lobingera commented

Why do you think this is a Julia issue (and not something local to Winston or the modules it imports)? And why v0.6.0?

btw:

               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: https://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.6.2 (2017-12-13 18:08 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
|__/                   |  x86_64-pc-linux-gnu

julia> using Winston
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.6/IniFile.ji for module IniFile.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.6/StatsBase.ji for module StatsBase.

julia> @time plot(1:10)
  5.124027 seconds (11.21 M allocations: 530.531 MiB, 3.44% gc time)

@tknopp (Contributor, Author) commented May 12, 2018

Hm ok, I am using 0.6.0 since 0.6.2 has another performance issue: #24383

@tknopp (Contributor, Author) commented May 12, 2018

And Winston has basically not changed from 0.3 to 0.6, which is why it is a good benchmark for spotting language regressions.

@tknopp (Contributor, Author) commented May 24, 2018

Here are the numbers from Julia 0.5:

               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: https://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.5.2 (2017-05-06 16:34 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
|__/                   |  x86_64-apple-darwin13.4.0

julia> using Winston

julia> @time plot(1:10)
  1.925793 seconds (2.90 M allocations: 121.176 MB, 2.17% gc time)

This really is a strong regression. And as I said, the code in Winston basically did not change. My feeling is that inference got a lot slower in the transition from 0.5 to 0.6. Might that be the case @JeffBezanson @Keno @vtjnash? The code in Winston is not very type stable, but the compiler cost for such code should not be larger than for inferable code, should it?
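One way to see how much type-unstable code inference has to chew through is to inspect the entry point directly. A minimal sketch, assuming Julia ≥ 0.7 where @code_warntype lives in InteractiveUtils:

    # Any/Union annotations in the output mark type-unstable spots that force
    # extra inference work in plot() and its callees
    using Winston, InteractiveUtils
    @code_warntype plot(1:10)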

@kshyatt added the performance (Must go faster) label on May 28, 2018
@nalimilan (Member) commented

Maybe use the profiler to identify spots where allocations increased most?
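For example, something like this (a minimal sketch with the standard-library Profile module, assuming Julia ≥ 0.7; the first call is profiled so the inference work actually shows up):

    using Profile, Winston
    Profile.clear()
    @profile plot(1:10)                                 # profile the first call, where inference runs
    Profile.print(format = :flat, sortedby = :count)    # Core.Compiler frames dominate if inference is the bottleneck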

@tknopp (Contributor, Author) commented May 28, 2018

I have done so, but you just see deep inference calls. The profiler docs are not very helpful here either, because they suggest applying the profiler only to the second call (when everything is already compiled). I would have tried digging deeper into it, but compiler performance and its characteristics are an area where not a lot of information can be found.

@tknopp (Contributor, Author) commented Aug 11, 2018

For reference, I ported Winston to 0.7 and now get:

julia> @time plot(1:10)
  5.083859 seconds (8.61 M allocations: 427.708 MiB, 5.71% gc time)

@KristofferC added the latency (Compiler latency) label and removed the performance (Must go faster) label on Oct 24, 2018
@tknopp (Contributor, Author) commented Jul 14, 2020

For reference, on Julia 1.5.0-rc1 we now have:

julia> @time plot(1:10)
  1.753081 seconds (2.44 M allocations: 127.253 MiB, 1.56% gc time)

and

julia> @time using Winston
  8.119557 seconds (6.07 M allocations: 354.985 MiB, 0.82% gc time)

when everything is precompiled. The Cairo window actually appears after about 5 seconds, but that might be because the final drawing seems to be done asynchronously, so it is not included in the TTFP time reported above. Anyway, this is a nice improvement!
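For anyone reproducing these numbers: running the measurement in a fresh process keeps earlier compilation in an interactive session from skewing it. A minimal sketch (the file name is made up):

    # ttfp.jl -- run as `julia ttfp.jl` in a fresh process
    @time using Winston   # package load time, with precompiled caches already built
    @time plot(1:10)      # time to first plot; asynchronous drawing is still not captured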

@laborg (Contributor) commented Feb 23, 2022

@tknopp do you still see this as a specific performance issue (other than Julia's inherent TTFP)?

julia> @time using Winston
  1.082901 seconds (2.39 M allocations: 150.433 MiB, 1.90% gc time, 18.34% compilation time)

julia> @time plot(1:10)
  1.559175 seconds (2.25 M allocations: 128.291 MiB, 2.50% gc time, 99.71% compilation time)

julia> versioninfo()
Julia Version 1.7.1
Commit ac5cc99908 (2021-12-22 19:35 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, tigerlake)

@KristofferC (Sponsor Member) commented

Those numbers don't make it seem like there is something especially wrong with this example. I think this can be closed.

@tknopp (Contributor, Author) commented Feb 23, 2022

Yes, I just wanted to confirm that I see similar numbers. It was a specific issue during the transition from 0.4 to 0.6, but quite a few things have changed since then.
