Ability to handle object with own .toString() method #170

Merged (7 commits) on Mar 9, 2020
Conversation

@resetko (Contributor) commented Jul 19, 2018

This allows objects with a custom .toString() method to be used to generate class names.
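
For illustration, here is roughly what this enables (a minimal sketch with a hypothetical object):

const classNames = require('classnames');

const button = {
    toString() {
        return 'btn btn--large';
    },
};

classNames(button, 'active');
// with this change: "btn btn--large active"
// before: objects are treated only as { className: condition } maps,
// so the result of the custom toString() is never used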

From a performance standpoint, existing scenarios are not affected (the scenario with an object that has its own .toString() method will obviously be a bit slower, because we need to call that function).

Here is JSPerf for this: https://jsperf.com/classnames-benchmark-for-tostring
From JSPerf I can see that it's hard to find any performance impact; it's within the statistical error rate. From time to time you get results like this:
[screenshot of JSPerf results]

Closes #153 and the same problem for SemanticUI: Semantic-Org/Semantic-UI-React#2599

@dcousens, @JedWatson can you take a look?

@dcousens (Collaborator) commented Jul 20, 2018

Why not do this yourself before providing them as input?
It smells like over-engineering for one use case which you could trivially handle by calling .toString() yourself?

@resetko (Contributor, Author) commented Jul 20, 2018

From my understanding, the .toString() method is designed not to be called "by hand". Take ReactJS as an example: if you pass an object to className, it will call toString on it if it exists.

Moreover, without this behaviour you cannot properly use custom class-name generators. Imagine you have a BEM component in React with a block containing, say, 5 elements: you would need to call the toString method manually each time, and all your JSX would be polluted by those calls.

Basically, this PR just makes the behaviour consistent between this library and React.
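
To make the React side concrete (a hypothetical object, just for illustration): React coerces the className value to a string, so an own toString() is picked up automatically:

const css = { toString: () => 'o-header' };

<header className={css} />
// React stringifies the attribute value, so this renders class="o-header";
// but once className goes through classNames() (as in SemanticUI), the object
// is treated only as a { className: condition } map and the toString() result is lost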

@dcousens (Collaborator)

@resetko I haven't run into this myself, can you provide the example you speak of?

@resetko (Contributor, Author) commented Jul 20, 2018

@dcousens, I can quickly put together a slightly simplified real-world example of BEM usage (from ft.com), with one remark: instead of divs I will use elements from SemanticUI, which depends on this library and passes the className property through it (with pure React it would work as expected, since .toString() is called by React).

We have markup like this:

<header class="o-header">

    <div class="o-header__row">
        <div class="o-header__container">
            <div class="o-header__top-wrapper">
                <div class="o-header__top-column o-header__top-column--left">
                    ...
                </div>
                <div class="o-header__top-column o-header__top-column--center">
                    ...
                </div>
                <div class="o-header__top-column o-header__top-column--right">
                    ...
                </div>
            </div>
        </div>
    </div>

</header>

I'm proposing to write something like this:

const css = new Bem('o-header');

return (
    <header className={css}>

        <div className={css.el('row')}>
            <div className={css.el('container')}>
                <div className={css.el('top-wrapper')}>
                    <div className={css.el('top-column').mod('left')}>
                        ...
                    </div>
                    <div className={css.el('top-column').mod('center')}>
                        ...
                    </div>
                    <div className={css.el('top-column').mod('right')}>
                        ...
                    </div>
                </div>
            </div>
        </div>

    </header>
);

And with your suggestion it would look like the following:

const css = new Bem('o-header');

return (
    <header className={css.toString()}>

        <div className={css.el('row').toString()}>
            <div className={css.el('container').toString()}>
                <div className={css.el('top-wrapper').toString()}>
                    <div className={css.el('top-column').mod('left').toString()}>
                        ...
                    </div>
                    <div className={css.el('top-column').mod('center').toString()}>
                        ...
                    </div>
                    <div className={css.el('top-column').mod('right').toString()}>
                        ...
                    </div>
                </div>
            </div>
        </div>

    </header>
);

If you want an example outside the SemanticUI scope, just imagine that I want to use this library like this:

<div className={[css.el('row'), css.el('search-form')]}>

This will not work right now.

You can find an example implementation of the BEM class in the SemanticUI issue I linked in the first comment.
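
(For reference, a hypothetical, heavily simplified sketch of such a Bem helper; the real implementation is in the linked issue:)

class Bem {
    constructor(block, element = null, modifiers = []) {
        this.block = block;
        this.element = element;
        this.modifiers = modifiers;
    }

    el(element) {
        // start a new chain for the given element of this block
        return new Bem(this.block, element);
    }

    mod(modifier) {
        // accumulate modifiers immutably
        return new Bem(this.block, this.element, [...this.modifiers, modifier]);
    }

    toString() {
        const base = this.element ? `${this.block}__${this.element}` : this.block;
        return [base, ...this.modifiers.map((m) => `${base}--${m}`)].join(' ');
    }
}

// new Bem('o-header').el('top-column').mod('left').toString()
//   -> "o-header__top-column o-header__top-column--left"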

@dcousens (Collaborator) commented Jul 20, 2018

Why not wrap it yourself?

function cssel (s) {
  return css.el(s).toString()
}

That wrapper function would resolve 99% of your issue?

@dcousens (Collaborator) commented Jul 20, 2018

Take ReactJS as an example: if you pass an object to className, it will call toString on it if it exists.

I guess this is probably the most convincing point; as you said, supporting toString would be a return to the norm for React users (which this library is primarily targeted at).

I'm willing to say +1, but could you show a performance benchmark result?

@resetko (Contributor, Author) commented Jul 20, 2018

Sorry, I meant

<div className={classNames(css.el('row'), css.el('search-form'))}>

About performance:

Here is JSPerf for this: https://jsperf.com/classnames-benchmark-for-tostring
From JSPerf I can see that it's hard to find any performance impact; it's within the statistical error rate. From time to time you get results like this:
[screenshot of JSPerf results]

Or do you want me to run the internal benchmark tool on my local Node?

@dcousens (Collaborator)

Or do you want me to run the internal benchmark tool on my local Node?

Aye, we have a benchmark suite in the repository to test this.
It'd be great if you tested it in both Node and the browser.

@resetko (Contributor, Author) commented Jul 20, 2018

Will do. Give me some time :)

I also added one commit in which I changed the condition to handle an inherited toString() properly. I think this will even reduce any impact, because the condition now has no && (see the sketch below).

I'll run the benchmarks a bit later (tomorrow, probably).
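
Roughly, the check I mean looks like this (a standalone sketch of the idea only; see the commit for the actual change):

// hypothetical helper, just to illustrate the condition
function toClassString(arg) {
    // any toString that differs from Object.prototype.toString wins,
    // whether it is an own property or inherited from a class prototype,
    // so no hasOwnProperty && check is needed
    if (arg.toString !== Object.prototype.toString) {
        return arg.toString();
    }
    // otherwise keep the usual { className: condition } handling
    return Object.keys(arg).filter((key) => arg[key]).join(' ');
}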

@dcousens (Collaborator)

Thanks for your effort @resetko, it is appreciated 👍

@resetko (Contributor, Author) commented Jul 23, 2018

So, I've run the benchmarks 3 times and got 3 slightly different results, but from what I can see it looks more like fluctuations. First run:

* local#strings x 1,622,816 ops/sec ±2.17% (88 runs sampled)
*   npm#strings x 1,608,798 ops/sec ±3.23% (91 runs sampled)
* local/dedupe#strings x 880,888 ops/sec ±2.37% (93 runs sampled)
*   npm/dedupe#strings x 874,382 ops/sec ±3.53% (93 runs sampled)

> Fastest is local#strings |   npm#strings

* local#object x 1,896,686 ops/sec ±1.65% (92 runs sampled)
*   npm#object x 1,957,842 ops/sec ±0.64% (95 runs sampled)
* local/dedupe#object x 1,574,989 ops/sec ±1.62% (95 runs sampled)
*   npm/dedupe#object x 1,590,080 ops/sec ±0.72% (96 runs sampled)

> Fastest is npm#object

* local#strings, object x 1,661,285 ops/sec ±1.68% (95 runs sampled)
*   npm#strings, object x 1,669,424 ops/sec ±1.41% (91 runs sampled)
* local/dedupe#strings, object x 991,957 ops/sec ±1.22% (96 runs sampled)
*   npm/dedupe#strings, object x 972,987 ops/sec ±1.41% (92 runs sampled)

> Fastest is npm#strings, object | local#strings, object

* local#mix x 1,189,498 ops/sec ±0.90% (95 runs sampled)
*   npm#mix x 1,259,613 ops/sec ±1.34% (96 runs sampled)
* local/dedupe#mix x 406,665 ops/sec ±2.87% (90 runs sampled)
*   npm/dedupe#mix x 426,520 ops/sec ±2.41% (94 runs sampled)

> Fastest is npm#mix

* local#arrays x 479,434 ops/sec ±1.13% (96 runs sampled)
*   npm#arrays x 477,077 ops/sec ±2.90% (92 runs sampled)
* local/dedupe#arrays x 504,051 ops/sec ±0.76% (95 runs sampled)
*   npm/dedupe#arrays x 500,452 ops/sec ±2.08% (94 runs sampled)

> Fastest is local/dedupe#arrays | npm/dedupe#arrays

Second run:

* local#strings x 1,785,777 ops/sec ±1.43% (93 runs sampled)
*   npm#strings x 1,745,872 ops/sec ±0.94% (98 runs sampled)
* local/dedupe#strings x 901,360 ops/sec ±2.40% (93 runs sampled)
*   npm/dedupe#strings x 938,255 ops/sec ±1.47% (94 runs sampled)

> Fastest is local#strings

* local#object x 1,962,125 ops/sec ±1.41% (98 runs sampled)
*   npm#object x 1,854,244 ops/sec ±2.61% (86 runs sampled)
* local/dedupe#object x 1,537,853 ops/sec ±1.75% (91 runs sampled)
*   npm/dedupe#object x 1,510,016 ops/sec ±2.50% (91 runs sampled)

> Fastest is local#object

* local#strings, object x 1,615,926 ops/sec ±2.10% (95 runs sampled)
*   npm#strings, object x 1,661,069 ops/sec ±1.80% (92 runs sampled)
* local/dedupe#strings, object x 969,819 ops/sec ±2.22% (91 runs sampled)
*   npm/dedupe#strings, object x 975,455 ops/sec ±1.73% (94 runs sampled)

> Fastest is npm#strings, object

* local#mix x 1,145,552 ops/sec ±1.82% (90 runs sampled)
*   npm#mix x 1,171,823 ops/sec ±2.79% (93 runs sampled)
* local/dedupe#mix x 422,436 ops/sec ±1.75% (93 runs sampled)
*   npm/dedupe#mix x 432,739 ops/sec ±2.02% (94 runs sampled)

> Fastest is npm#mix

* local#arrays x 495,071 ops/sec ±1.03% (96 runs sampled)
*   npm#arrays x 504,153 ops/sec ±1.16% (97 runs sampled)
* local/dedupe#arrays x 506,855 ops/sec ±1.74% (95 runs sampled)
*   npm/dedupe#arrays x 525,883 ops/sec ±0.78% (96 runs sampled)

> Fastest is npm/dedupe#arrays

Third run:

* local#strings x 1,837,572 ops/sec ±0.96% (97 runs sampled)
*   npm#strings x 1,787,208 ops/sec ±1.23% (96 runs sampled)
* local/dedupe#strings x 931,696 ops/sec ±2.41% (92 runs sampled)
*   npm/dedupe#strings x 990,948 ops/sec ±1.08% (96 runs sampled)

> Fastest is local#strings

* local#object x 2,034,714 ops/sec ±1.30% (94 runs sampled)
*   npm#object x 2,034,265 ops/sec ±0.81% (96 runs sampled)
* local/dedupe#object x 1,625,148 ops/sec ±1.12% (95 runs sampled)
*   npm/dedupe#object x 1,624,291 ops/sec ±1.49% (96 runs sampled)

> Fastest is npm#object | local#object

* local#strings, object x 1,692,427 ops/sec ±2.73% (93 runs sampled)
*   npm#strings, object x 1,741,600 ops/sec ±1.21% (94 runs sampled)
* local/dedupe#strings, object x 1,024,940 ops/sec ±1.18% (94 runs sampled)
*   npm/dedupe#strings, object x 1,012,584 ops/sec ±1.64% (93 runs sampled)

> Fastest is npm#strings, object | local#strings, object

* local#mix x 1,332,043 ops/sec ±1.11% (95 runs sampled)
*   npm#mix x 1,292,486 ops/sec ±1.55% (94 runs sampled)
* local/dedupe#mix x 445,150 ops/sec ±2.34% (94 runs sampled)
*   npm/dedupe#mix x 446,158 ops/sec ±2.30% (91 runs sampled)

> Fastest is local#mix

* local#arrays x 460,591 ops/sec ±3.10% (89 runs sampled)
*   npm#arrays x 488,087 ops/sec ±2.47% (91 runs sampled)
* local/dedupe#arrays x 514,440 ops/sec ±1.63% (95 runs sampled)
*   npm/dedupe#arrays x 492,114 ops/sec ±3.74% (90 runs sampled)

> Fastest is local/dedupe#arrays |   npm/dedupe#arrays

@dcousens (Collaborator)

Difference appears marginal.

ACK from me.

@dcousens (Collaborator) left a review comment

LGTM

@resetko (Contributor, Author) commented Jul 28, 2018

Any plans on merging this? :)

@chrismendis

My company would also benefit from getting this merged. Are there plans to merge it?

@kinda-neat

@dcousens When will this PR be merged? We're waiting :)

@knobo commented Dec 4, 2019

Can I help in some way to make this happen?

@dcousens (Collaborator) commented Dec 4, 2019

Without objection, I think I'll merge this.

@knobo commented Mar 9, 2020

Have you given up on this request?

@dcousens merged commit b166fd1 into JedWatson:master on Mar 9, 2020
@knobo commented Mar 10, 2020

Thanx :)

@vasyan commented Apr 15, 2020

Great! Any plans to release it?

@iernie commented Oct 8, 2020

Bump

Will this be released any time soon?

@knobo commented Oct 8, 2020

@iernie look at #220
