
Share how README.md performance measures were taken #187

Closed
AndreaCorallo opened this issue Mar 18, 2021 · 12 comments
Labels
discussion Discussion/Requested Feedback

Comments

@AndreaCorallo
Contributor

Hi all,

I'd be interested to know how the fibonacci benchmark reported in the README.md was measured, both for V8 and for native-comp.

Thanks

Andrea

@AndreaCorallo AndreaCorallo changed the title Share how readme performance measures were taken Share how README.md performance measures were taken Mar 18, 2021
@DavidDeSimone
Member

I had a small write-up on this that I made a couple of months ago and need to find. Once I find it, I'll include it in our documentation so the measurements can be reproduced.

Also, thank you for all your effort on native-comp; it's really an amazing addition.

@DavidDeSimone
Member

DavidDeSimone commented Mar 18, 2021

@AndreaCorallo I found the write up:

#94 (comment)

I'll include this in our docs. My rationale for running initialization before we calculate anything is that JS is lazily initialized, and I wanted to compare just the cost of calculating fib(40), without the initialization overhead.

That code snippet doesn't cover the code I used to native-compile the fib function; I'm still looking for that. However, I remember it wasn't fancy: I was just invoking native-compile on the fib function and calling it using the same methodology. I made sure not to include the cost of running native-comp itself; I was only measuring the cost of running fib(40).
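
A minimal sketch of that methodology, assuming an Emacs 28 build with native compilation; the function name and exact calls are illustrative, not the original snippet:

;; Hypothetical reconstruction of the measurement, not the original code.
(defun fib (n)
  (if (<= n 1)
      n
    (+ (fib (- n 1)) (fib (- n 2)))))

;; Compile first so the compilation cost stays out of the measurement,
;; and make sure the symbol now carries the native code.
(fset 'fib (native-compile 'fib))

;; Time only the fib(40) call itself.
(benchmark-run 1 (fib 40))  ; => (elapsed-seconds gc-runs gc-elapsed-seconds)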

@AndreaCorallo
Contributor Author

AndreaCorallo commented Mar 19, 2021

@DavidDeSimone

Also, thank you for all your effort on native-comp; it's really an amazing addition.

Thanks, appreciated.

FWIW the equivalent of this fib.js

const fib = (n) => {
    if (n <= 1) {
	return n;
    }

    return fib(n - 1) + fib(n - 2);
};

fib(40);

would be something like this fib.el

 ;;; -*- lexical-binding: t -*-

 (defun fibonacci (n)
   (if (<= n 1)
       n
     (+ (fibonacci (- n 1)) (fibonacci (- n 2)))))

 (defun fib-run ()
   (fibonacci 40))

 ;; Local Variables:
 ;; comp-speed: 3
 ;; End:

Running then:
(benchmark 1 '(fib-run)) => Elapsed time: 0.000002s
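
One way to see that this time reflects compile-time folding rather than execution speed (a check added here for illustration, assuming the fib.el above has been native-compiled and loaded) is to pass the argument only at run time:

(benchmark 1 '(fib-run))       ; call with the literal folded in => near-zero time
(benchmark 1 '(fibonacci 40))  ; argument supplied at run time => real recursion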

That said, I've no doubt V8 can perform very well depending on the task, but I think (and as this example proves) a recursive, pure function of this kind is really not a very representative benchmark.

Thanks

Andrea

@AndreaCorallo
Contributor Author

AndreaCorallo commented Mar 19, 2021

PS: Not sure if, in light of this, you want to update the homepage; I think it translates into native-comp being about 600000 times faster than JS ;-) ;-)

@kiennq
Contributor

kiennq commented Mar 19, 2021

@AndreaCorallo Can you also post the disassembly of fib-run in your case here?
Since comp-speed is 3, I suspect fib-run just got optimized to return the value directly, thus trading compile time for run-time calculation.
Here is my result (run in the *scratch* buffer), with a little tweak to prevent that kind of extreme optimization:

(native-compile "./tmp/fib.el")
"/home/xyz/.emacs.d/eln-cache/28.0.50-x86_64-pc-linux-gnu-0c739d4ea6a4f3b714632778d0df4cd8/fib-5f0dec25d5ddb3503168abc12dac2d2d-2fc48f6fd5481f981c72d06bac598a60.eln"

(load"/home/xyz/.emacs.d/eln-cache/28.0.50-x86_64-pc-linux-gnu-0c739d4ea6a4f3b714632778d0df4cd8/fib-5f0dec25d5ddb3503168abc12dac2d2d-2fc48f6fd5481f981c72d06bac598a60.eln")
t

(fibonacci 30)
832040

(benchmark 1 '(fibonacci 30))
"Elapsed time: 0.297596s"

(eval-js-file "./tmp/b.js")
nil

(fib-js 30)
832040

(benchmark 1 '(fib-js 30))
"Elapsed time: 0.015845s"

Content of fib.el

;;; -*- lexical-binding: t -*-

(defun fibonacci (n)
  (if (<= n 1)
      n
    (+ (fibonacci (- n 1)) (fibonacci (- n 2)))))

;; Local Variables:
;; comp-speed: 3
;; End:

Content of b.js

const fib = (n) => {
    if (n <= 1) {
        return n;
    }

    return fib(n - 1) + fib(n - 2);
};

lisp.defun({
    name: "fib-js",
    func: (x) => fib(x)
});

The Deno runtime is likely about 10x faster than Elisp native compilation in this case.
There's a deb package for emacs-ng here in case you want to test it: https://github.com/kiennq/emacs-ng/releases/tag/v0.1

@AndreaCorallo
Contributor Author

AndreaCorallo commented Mar 19, 2021

@AndreaCorallo Can you also post the disassembly of fib-run in your case here?
Since comp-speed is 3, I suspect fib-run just got optimized to return the value directly, thus trading compile time for run-time calculation.

Indeed, this is optimized at compile time; I see nothing wrong with that (I wrote the code that does this transformation :-) ).

But, as I wrote, this was an example to prove that the specific case of a single micro-benchmark composed of one recursive/pure function is really not meaningful. I hope that's clear and that my "funny" example proves it; OTOH, it was really nothing more than the exact 1:1 translation of the JS code...

A priori one could decide which optimizations are fair and which are extreme, but that's simply arbitrary.

Even more, any expert (or semi-expert, me included) with some understanding of the two systems can design a micro-benchmark that makes either of the two the clear winner.

The real issue here is that a good number of people reading a homepage don't have this understanding, so I think claiming performance based on such a test can be misleading for many readers :)

@brotzeit
Member

Maybe we should add Emacs-specific benchmarks instead.
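
For instance, a hedged sketch of what an editor-centric micro-benchmark could look like (purely illustrative, exercising buffer editing and regexp search rather than arithmetic):

;; Illustrative editor-centric workload: fill a buffer, then search and replace.
(benchmark-run 10
  (with-temp-buffer
    (dotimes (_ 1000)
      (insert "the quick brown fox jumps over the lazy dog\n"))
    (goto-char (point-min))
    (while (re-search-forward "fox" nil t)
      (replace-match "cat"))))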

@appetrosyan

I believe that the README provides a very misleading example. Firstly, as you mentioned, because the evaluation is replaced at compile time, the two implementations aren't really comparable. A small modification, requesting the number at runtime, would prevent the optimisation and make the two functions at least asymptotically comparable.
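
A minimal sketch of such a modification, assuming the fibonacci definition from earlier in the thread; the wrapper name and read-number prompt are illustrative:

;; Asking for n at run time keeps the compiler from folding the call
;; into a precomputed constant.
(defun fib-run-prompt ()
  (interactive)
  (let ((n (read-number "n: " 40)))
    (message "fib(%d) = %d" n (fibonacci n))))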

Secondly, while arithmetic is an interesting example to look at, most people want to see an actual Emacs package made better by using JS/V8. The lack of such a package suggests that either porting or distributing such an improved package is problematic, which doesn't do emacs-ng any favours.

Thirdly, any performance advantage JavaScript has over native-comp is temporary. What isn't temporary is its handling of asynchronous processing. Even if emacs-ng were to lose in terms of raw performance, being able to do some computations in an asynchronous, non-blocking fashion is an obvious advantage that is not represented in the README. It is mentioned, but not explored. For example, I'd be sold on emacs-ng if it said that it could do font-locking in a parallel, multi-threaded fashion without making the editor unresponsive, or that it helped LSP-UI be less sluggish.

@Chaformbintrano

Maybe we should add Emacs-specific benchmarks instead.

Maybe reimplement elisp-benchmarks in ES?

@DavidDeSimone
Member

DavidDeSimone commented Mar 19, 2021

@AndreaCorallo thank you for looking into this further. I wrote this benchmark a while ago, when the only people following the project were mostly the core contributors, and obviously I missed a degree of rigor here. I never meant to deceive the community; I was simply trying to show what was possible with the initial product. In light of this community feedback, I will remove the performance claim from the README for the time being. That being said, I will spend some effort following up on the ins and outs of the performance differences (if any, for better or worse) between V8 and native-compile with more practical workloads. In addition, I will be more transparent about methodology.

Edit: Clarified my language

@ericdallo ericdallo added the discussion Discussion/Requested Feedback label Mar 19, 2021
@black7375
Contributor

What isn't temporary is its handling of asynchronous processing. Even if emacs-ng were to lose in terms of raw performance, being able to do some computations in an asynchronous, non-blocking fashion is an obvious advantage that is not represented in the README.

I am also expecting non-blocking behaviour thanks to the asynchronous processing.
It is still an early project, but once it stabilizes, I would like a feature to handle file I/O asynchronously, like DavidDeSimone/ng-async-files.

The reasons I think this project will become a game changer in the future are as follows:

This is why I wanted to participate in emacs-ng; other members might have different opinions.

@brotzeit
Member

Let's keep this open to discuss which benchmarks would be appropriate.

@emacs-ng emacs-ng locked and limited conversation to collaborators Feb 20, 2023
@declantsien declantsien converted this issue into discussion #475 Feb 20, 2023

This issue was moved to a discussion.

You can continue the conversation there.
