
get rid of warning on 'npm run bench' #998

Closed · Feder1co5oave wants to merge 1 commit

Conversation

Feder1co5oave (Contributor)

This fixes #977 (comment)

UziTech (Member) commented Jan 5, 2018

Do we want to change it to return false if we couldn't bench something?

Feder1co5oave (Contributor, author)

As in?

UziTech (Member) commented Jan 5, 2018

Like if showdown or robotskirt fails? (like they do now)

Or maybe set a threshold, and if the marked bench is higher than that it should fail? That way we could actually use it in our automated testing and flag a PR that makes it too much slower.
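A minimal sketch of that idea, assuming a plain Node wrapper around the existing bench command (the command, budget, and timing logic below are illustrative, not marked's actual bench script):

```js
// Hypothetical CI gate: time the bench run and exit non-zero if it is too
// slow. The command and THRESHOLD_MS budget are placeholders for illustration.
const { execSync } = require('child_process');

const THRESHOLD_MS = 10000; // example budget; a real value would need tuning

const start = Date.now();
execSync('npm run bench', { stdio: 'inherit' }); // placeholder for the real command
const elapsed = Date.now() - start;

if (elapsed > THRESHOLD_MS) {
  console.error(`bench took ${elapsed}ms, over the ${THRESHOLD_MS}ms budget`);
  process.exit(1); // non-zero exit flags the PR in automated testing
}
```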

Feder1co5oave (Contributor, author)

Benching against other parsers is optional, so I wouldn't exit 1.
About the threshold... I don't really like it.

UziTech (Member) commented Jan 5, 2018

It seems like there is not really any point in benching then.

joshbruce (Member)

From the product/marketing perspective: We claim to be built for speed, which raises the question: compared to what? I think Apple does this pretty well. When they release a new version of Safari, for example, they do comparative analysis against the previous version and the other major browsers.

@worker8 posted an interesting link over on #963 that does some comparative analysis.

If we don't bench, can we really make the claim honestly?

From the developer perspective, what I would like to see as a consumer of Marked is the bench results for the supported parsers, and if one fails or times out, a message saying something like: "This parser failed. If you want to use this parser, please submit a GitHub issue." Maybe include the URL to the issues page.
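A rough sketch of that behavior, assuming the bench loads each competing parser with require (the parser list, message wording, and URL are illustrative):

```js
// Hypothetical: treat third-party parsers as optional benches. A parser that
// fails to load is skipped with a pointer to the issue tracker.
const ISSUES_URL = 'https://github.com/markedjs/marked/issues'; // illustrative
const parserNames = ['marked', 'showdown', 'robotskirt']; // example list

for (const name of parserNames) {
  let parser;
  try {
    parser = require(name); // throws if the module is missing or broken
  } catch (err) {
    console.log(
      `${name} failed to load and was skipped. If you want this parser ` +
      `benched, please submit a GitHub issue: ${ISSUES_URL}`
    );
    continue;
  }
  // ...run the bench against `parser` here...
}
```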

Re threshold: The issue there is device dependence, I think. @Feder1co5oave ran his benches and they were all 10 seconds longer than mine. I can't think of an elegant way to deal with that when choosing a quality threshold. For 8fold Component (at least when I was still doing benchmarking) I used 3ms, no matter the scenario under test. I do appreciate the idea of a threshold because, like @UziTech said, without a target, why bench? I'm just not sure whether that is a separate issue from the benches throwing the error.
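For what it's worth, one way a threshold could sidestep device dependence, purely as an illustration and not something settled in this thread, is to make it relative: measure a baseline on the same machine and fail only on the ratio:

```js
// Hypothetical relative threshold: fail only if the current bench is more
// than RATIO times slower than a baseline measured on the same machine, so
// absolute device speed cancels out. Both bench callbacks are placeholders.
const RATIO = 1.25; // example: allow up to a 25% regression

function timeIt(fn) {
  const start = Date.now();
  fn();
  return Date.now() - start;
}

const baselineMs = timeIt(() => { /* bench a known-good reference version */ });
const currentMs = timeIt(() => { /* bench the current working tree */ });

if (currentMs > baselineMs * RATIO) {
  console.error(`regression: ${currentMs}ms vs baseline ${baselineMs}ms`);
  process.exit(1);
}
```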

My throwing copper moment on this for now.

UziTech mentioned this pull request Jan 6, 2018
Feder1co5oave deleted the bench_warning branch on January 15, 2018.