alioth.debian.org #660

Closed
mictadlo opened this Issue Apr 2, 2012 · 18 comments

mictadlo commented Apr 2, 2012

Hello,
Could you write Julia versions of the examples at http://alioth.debian.org/scm/viewvc.php/shootout/bench/?root=shootout? This would show where the problems are and also help newbies understand the language better.

Owner

JeffBezanson commented Apr 2, 2012

It would be great to have these, but there are a lot of benchmarks there and we're probably not going to budget time to go through and implement all of them. It will probably take a group of people over a period of time to get this done.

Owner

StefanKarpinski commented Apr 2, 2012

@mictadlo: if you're interested in getting more familiar with Julia, implementing some of these benchmarks would be an excellent way to start. Another issue is that it may be fairly difficult to convince the shootout to include Julia.

Owner

ViralBShah commented Apr 2, 2012

This is a good idea, but it clearly needs to be a community effort, as pointed out. @mictadlo, would you like to anchor this: post to the mailing list, and maybe start on a couple? These can live in our performance tests repository, and once they are complete, we can see if the folks at alioth will add them. I think they will if we have a Debian package by then. All in all, it will take a while, but let's get started.

Member

dcjones commented Apr 3, 2012

To get you started, here's the "fasta" benchmark, which I had implemented to get to know Julia: https://gist.github.com/2288846

This one is informative because the performance is a bit lagging. (It's slower than the Python program, for example.) There are a lot of operations on strings, which makes me think it's related to #661.
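
For reference, the usual fix for string-heavy shootout code is to work on raw bytes instead of String values. Here is a minimal sketch of fasta's repeat-sequence loop in that style (modern Julia syntax, which postdates this thread; LINE_WIDTH and write_repeat are illustrative names, not code from the gist above):

    # Emit n characters of src, cycling through it, in 60-column lines,
    # touching only raw bytes so no intermediate String is allocated.
    const LINE_WIDTH = 60

    function write_repeat(io::IO, src::AbstractString, n::Integer)
        bytes = codeunits(src)                 # src as a read-only byte view
        len = length(bytes)
        buf = Vector{UInt8}(undef, LINE_WIDTH) # one reusable line buffer
        pos = 0
        while n > 0
            width = min(n, LINE_WIDTH)
            for i in 1:width
                buf[i] = bytes[pos % len + 1]  # wrap around the source
                pos += 1
            end
            write(io, view(buf, 1:width))
            write(io, '\n')
            n -= width
        end
    end

    write_repeat(stdout, "GGCCGGGCGCGGTGGCTCACGCCTGTAATCCCAGCA", 200)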

Owner

ViralBShah commented Apr 3, 2012

Excellent! Can you create a pull request and add it to examples?

-viral

Owner

ViralBShah commented Apr 3, 2012

Also, I think we should have a new issue open to track this specific performance issue.

-viral

Owner

JeffBezanson commented Apr 7, 2012

The global rng_state will be a problem. Are we not allowed to use the usual rand() for this? If not, putting a declaration on uses of that global might fix it completely.

markhend commented Apr 7, 2012

No, we can't use the usual rand(). They call out the required random generator at the bottom of the page.
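
For reference, the required generator is a small linear congruential generator seeded with 42. Here is a sketch in modern Julia; the const Ref wrapper is my workaround, not part of the spec, and it sidesteps the untyped-global problem Jeff mentions because a const binding gives the compiler a concrete type:

    # The benchmarks game's specified PRNG: a linear congruential
    # generator with modulus 139968, multiplier 3877, increment 29573.
    const IM = 139968
    const IA = 3877
    const IC = 29573
    const rng_state = Ref(42)    # const Ref: typed, mutable global state

    function gen_random(max::Float64)
        rng_state[] = (rng_state[] * IA + IC) % IM
        max * rng_state[] / IM
    end

    gen_random(1.0)              # first draw from seed 42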

Contributor

dcampbell24 commented Jul 23, 2012

The chameneos-redux and thread-ring benchmarks both require pre-emptive threads, but I don't see any documentation about pre-emptive threads in Julia. How should I do these? Use C calls to pthreads? I could also implement versions using tasks; those might be considered "interesting alternatives" and could help tell us how well tasks are working. Let me know, thanks.
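
As a starting point for the task-based alternative, here is a sketch of thread-ring on cooperative tasks. It is written against the modern Channel API, which postdates this thread (in 2012 the rough equivalent would be produce/consume), and thread_ring is an illustrative name:

    # nagents tasks in a ring pass a decrementing token; the task
    # that receives 0 reports its id, as the thread-ring spec asks.
    function thread_ring(nagents::Int, hops::Int)
        chans = [Channel{Int}(1) for _ in 1:nagents]
        done = Channel{Int}(1)
        for id in 1:nagents
            next = chans[mod1(id + 1, nagents)]
            @async for token in chans[id]
                if token == 0
                    put!(done, id)           # holder of the final token
                else
                    put!(next, token - 1)    # pass it along the ring
                end
            end
        end
        put!(chans[1], hops)                 # inject the token at agent 1
        take!(done)
    end

    println(thread_ring(503, 1000))  # the game uses 503 agents, 50_000_000 hops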

Member

quinnj commented Apr 8, 2013

So I took a stab at improving the spectral-norm implementation, and I think it's a significant improvement (293s -> 10-12s).
I'd like to get some other eyeballs on it, though, to see if there's anything non-idiomatic or any other tweaks worth making before I open a pull request.

https://gist.github.com/karbarcca/5340631
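
For anyone reviewing, the kernel is small enough to restate. This is my own condensed sketch in modern Julia, not the code from the gist: the answer is the largest singular value of the implicit matrix A, approximated by ten rounds of power iteration on A'A.

    # A[i,j] for zero-based i and j, computed on the fly (never stored).
    A(i, j) = 1.0 / ((i + j) * (i + j + 1) / 2 + i + 1)

    # w[i] = sum_j f(i-1, j-1) * v[j]: multiply v by the matrix given by f.
    function mult(f, v)
        n = length(v)
        [sum(f(i - 1, j - 1) * v[j] for j in 1:n) for i in 1:n]
    end

    function spectralnorm(n)
        u = ones(n)
        local v
        for _ in 1:10                                # power iteration on A'A
            v = mult((i, j) -> A(j, i), mult(A, u))  # v = A'(A u)
            u = mult((i, j) -> A(j, i), mult(A, v))  # u = A'(A v)
        end
        sqrt(sum(u .* v) / sum(v .* v))
    end

    println(spectralnorm(100))   # the game formats this to 9 decimal places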

Member

pao commented Apr 8, 2013

@karbarcca Please do go ahead and make it a pull request; that makes it easiest to review. Use "RFC:" at the beginning of the pull request title if you are looking for feedback.

Owner

timholy commented Apr 9, 2013

That's a nice improvement!

Member

quinnj commented May 28, 2013

The last two benchmarks (chameneos-redux and thread-ring) require pre-emptive threads, which Julia, AFAIK, doesn't officially support, so we can probably close this issue.

chameneos: http://benchmarksgame.alioth.debian.org/u32/performance.php?test=chameneosredux#about
thread-ring: http://benchmarksgame.alioth.debian.org/u32/performance.php?test=threadring

Owner

ViralBShah commented May 29, 2013

Should we keep these in base and integrate them into our rudimentary perf framework, or move them into a separate package, as a first step towards encouraging more benchmarks to be written in Julia?

@ViralBShah ViralBShah closed this May 29, 2013

Member

quinnj commented May 29, 2013

There seem to be quite a few "code example" repos of various kinds; I wonder if it would make sense to consolidate them into a single "Code-Examples" package. A few candidates:

  • The shootout benchmarks
  • The homepage benchmarks
  • julia-tutorial repo under JuliaLang
  • The examples folder in the julia repo
  • The Rosetta-Code repo I created (we're above 70 tasks now)
  • Possibly even the RDatasets.jl package (I think the majority of R tutorials/examples I see reference one of these datasets)

This is quite a lot when listed out. We should probably do some organizing and trimming; maybe have a few categories like Popular Benchmarks (focused on performance), Popular Algorithms, and Common Tasks.
We would probably also want a standard of sorts for how the code is formatted (generous commenting? expected results in comments or as assertions?). The tutorials should probably be farmed out to the various packages, or broken into chunks that fit the categories above. I think we can advertise the Manual here too, since it has a very "tutorial" feel with plenty of examples.

Owner

ViralBShah commented May 30, 2013

They all serve different purposes. Package tags would help address this considerably.

ViralBShah added a commit that referenced this issue Jul 6, 2013

Refactor and consolidate the performance tests
Run perf tests by running make in test/perf
Factor out timing code into test/perf/perfutil.jl
Micro benchmarks are now in test/perf/micro/
perf2 benchmarks are now in test/perf/kernel
shootout benchmarks now run (not all yet) as part of the perf tests (#660)
cat benchmarks now run as part of the perf tests