Section for peer-reviewed Custom Elements #77

Closed
addyosmani opened this Issue Mar 24, 2014 · 31 comments

addyosmani commented Mar 24, 2014

Recently, a number of members of the front-end community expressed an interest in a community-driven path for vetting Web Components. Many have (rightly) voiced concerns about this new ecosystem of custom elements repeating the mistakes of past ecosystems, such as the jQuery plugin registry.

In particular: a lack of community-driven code reviews, poor re-use, and a failure to treat accessibility, documentation, performance benchmarking, internationalization and security audits as core features.

Mairead Buchan has also raised concerns about:

  • Lack of collaboration
  • Increasing complexity
  • Wasted effort
  • Proliferation of code
  • Lack of re-use
  • Fragmentation
  • Lack of quality control

I wanted to focus on the lack of a platform for quality control and ways to promote the best of what we have to offer. These are paramount issues that we as a community need to evolve solutions for if we're to avoid repeating jQuery's issues of old.

WebComponents.org recap

Over the past few months, I, @zenorocha and members of the Web Components community (including representatives from Mozilla and Opera) have been working on an as-yet-unreleased resource called WebComponents.org. The goals behind the project are to offer a community-driven resource for discussing and evolving best practices around components, to highlight educational content, and eventually to host a Bower-driven gallery of elements that are community-rated on a number of quality factors.

We initially put together a set of best practices that elements should target, which keep accessibility and performance in mind. This list is undoubtedly going to improve over time, and I welcome members of the community to help us hammer it out into a golden list of must-haves that can be used when we're building and reviewing elements.

A section for peer-reviewed elements

As Mairead calls out, one thing we're lacking is a platform for reviewing custom elements, which could potentially use our best practices list as a checklist.

What I would like to propose is that WebComponents.org contain a section listing elements which are "blessed" by the community as being high-quality. Perhaps it could live at http://webcomponents.org/elements. These elements would be generally applicable enough to be re-used by others (e.g. not a <dancing-bear> element, but sliders, buttons, menus, tabs, etc.).

The process for landing an element on the site could be as follows:

  • The authors of these elements either submit a pull-request asking for a code review of their element or make a request for a review on their own repository.
  • A team of front-enders (who are knowledgeable about performance, re-usability, accessibility, UX pattern design, security and so on) then review the source and provide feedback. This team can grow over time and include those interested in peer reviewing.
  • Once the element has passed review, it appears on the site. Developers who use it can be assured that what they are using is high-quality.

This solution factors in:

  • This system has a specific scope. We are not aiming to be a reviewer for every element in the ecosystem, and as such have a reasonable ability to scale our efforts.
  • The peer-review system not being a bottleneck to someone releasing something on GitHub without a review. It only appears as 'recommended' on the site once it has passed review.
  • We have an existing list of pseudo-requirements (best practices) which we can use as a starting point for element review. Even if a developer doesn't want their element to appear on wc.org, they can use the list as a sanity check.

Alternatives

If this sounds like a bad path to solving this issue, please let me know.

I've considered the idea of a third-party application/destination or a GitHub repository where those working on custom elements would submit their work for review; however, I believe that in order to scale correctly we need to focus on elements that are going to be heavily used: elements that are core and likely to be reused to build other elements.

addyosmani changed the title from Custom Elements peer review to Section for peer-reviewed Custom Elements Mar 24, 2014

paullewis commented Mar 24, 2014

I think this is a great idea. Thoughts:

We should try to ensure that there is a balanced team. Let's say there are 5 key criteria on which a component is to be measured:

  • Accessibility
  • Security
  • Performance
  • API quality
  • Customizability

We should aim for there to be several motivated experts capable of reviewing a component for those criteria. A component shouldn't get through based on the popularity of its author, nor should it be blocked simply because people didn't get round to reviewing it.

The criteria should be clear, very clear, so that anyone reviewing (or building) a component is super clear on what's expected and why. For performance I can immediately think of a bunch of things I would want to see, and I imagine it'd be good to standardize on those things for all reviewers and across criteria.

I'm also thinking it might be helpful to rotate teams a bit so there's no "old guard" mentality. I don't really like the idea of voting people in, but there's something to the notion that you're not doing the job for any reason but to serve the large community.

addyosmani commented Mar 24, 2014

> We should aim for there to be several motivated experts capable of reviewing a component for those criteria. A component shouldn't get through based on the popularity of its author, nor should it be blocked simply because people didn't get round to reviewing it.

I completely agree. Several knowledgeable experts for each criterion we care about would be the ideal. If there's sufficient interest in this idea, we can reach out to members of the community we know to be well versed in each bucket and gauge their interest in participating.

> The criteria should be clear, very clear, so that anyone reviewing (or building) a component is super clear on what's expected and why.

+9001. What we have today (the best practices list) should be viewed as a potential starting point for a super-clear list of criteria. We could easily expand each bucket to include a number of more specific points and accept pull requests for any requirements we miss for a criterion.

> I'm also thinking it might be helpful to rotate teams a bit so there's no "old guard" mentality. I don't really like the idea of voting people in, but there's something to the notion that you're not doing the job for any reason but to serve the large community.

I would be more than happy to see this evolve into a group that's large enough to support rotations. The "old guard" issue in particular is something such an idea needs to avoid if it's to truly succeed.

sindresorhus commented Mar 24, 2014

Great idea! I think we should try it. And I agree with your points, @paullewis.

> The criteria should be clear, very clear, so that anyone reviewing (or building) a component is super clear on what's expected and why.

I think it would be interesting to have a separate resource just for Web Components best practices, which we also make use of for reviewing.

> nor should it be blocked simply because people didn't get round to reviewing it.

I don't get this. The whole point is to only feature reviewed high-quality elements.

paullewis commented Mar 24, 2014

My last point was more around making sure that there are expectations on responsiveness of participants. The last thing any project like this needs is people saying "I submitted my component three weeks ago and I've not heard anything." or similar. Basically, it's more me saying things need good reviews by people who are willing to make a good time investment and be responsive to the component authors.

sindresorhus commented Mar 24, 2014

> My last point was more around making sure that there are expectations on responsiveness of participants. The last thing any project like this needs is people saying "I submitted my component three weeks ago and I've not heard anything." or similar. Basically, it's more me saying things need good reviews by people who are willing to make a good time investment and be responsive to the component authors.

Agreed. That will require us to be super clear about what's expected before submitting for review, otherwise it will become a maintenance nightmare.

Munter commented Mar 24, 2014

The pull request interface seems a little too limited. I'd like to see a service where I can submit my repository for review and choose whether I want my module to be part of a listing. The service should provide a search engine that lists if a review is pending and, if done, what score in different categories the component has.

The service should provide a badge that authors can include on their sites to proudly display their score. This might add an element of gamification to improve the score.

We need to figure out how to handle updates, as modules most likely won't be static, and once an author has received a review it will probably trigger a lot of new updates.

paullewis commented Mar 24, 2014

@Munter I worry about the gaming aspect a little, not because I think it's wrong per se, but because it requires judgement calls. All the conversations I've had around accessibility (and I know this from performance work, too) imply that you can't checklist everything. There's a certain amount of nuance to assessment.

Munter commented Mar 24, 2014

But in the end there has to be some sort of score applied for it to make sense. Otherwise it's hard for a consumer of this list to figure out which component to choose to get the best support.

The parts that can't be automated or fit in a checklist will be subjective, and I think this is ok.
Actually I think that components that don't tick all the tickable boxes should not even be on such a curated list.

addyosmani commented Mar 24, 2014

@Munter

A different interface for submitting requests for reviews isn't out of scope at all. I just suggest GitHub as it allows instant two-way communication and avoids the need to set up additional backends to enable a review system (re-using the tools we have where possible).

Per-criterion scoring sounds very interesting. It provides additional metadata that could be used to rank/order the components listed, but I would share @paullewis's concerns about gamification of the system. If we did end up going for the overall idea, we could set boundaries on the scores an element would need to meet in order to be listed.

justmarkup commented Mar 24, 2014

For the front end I would like to have something like http://html5please.com/, where you are able to search for "slider" and see all sliders, with their scores (overall and for each section, like accessibility), dependencies, browser support, and so on.

So we should also think about how we handle the information about dependencies and support, as many people (well, at least me) are interested in seeing this information immediately.

robdodson commented Mar 24, 2014

Is the thinking that there will be 3-5 blessed elements per category, or is the hope that this system scales such that anyone writing elements can participate?

@Munter raises a good point about updates. It's very easy to add one line of code that fries your performance or ruins accessibility. Even after a component has been given the gold star, how do you ensure that future updates don't invalidate that?

Maybe you tie the reviews to a specific version of a component? This means authors will have to resubmit their component whenever they feel it's evolved far enough from that initial point.

Just trying to imagine something that scales beyond the inevitable bottleneck of manual review...
Would it be possible to build a testing harness that drives a component and rates it in the various categories? It would be up to the author to write tests for the different categories (perf, security, accessibility) and users would also be able to star or upvote/downvote an element. This means you could have an element that has a 100% score in perf but only has 1 test. This is kind of like a restaurant that has 5 stars but only 1 review (doesn't necessarily mean the restaurant is bad, but you should be cautious about eating there). Perhaps users could also submit tests. A failing test means the score gets dropped in that category and incentivizes the developer to fix it.
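
To make that concrete, a rough sketch of what such a harness might look like; everything here is hypothetical (`x-slider`, the category names and the scoring rule are placeholders, not an existing tool):

```js
// Hypothetical harness: run author-supplied tests per category and report
// a score alongside the number of tests backing it up.
const CATEGORIES = ['performance', 'accessibility', 'security'];

async function rateElement(elementName, testsByCategory) {
  const report = {};
  for (const category of CATEGORIES) {
    const tests = testsByCategory[category] || [];
    let passed = 0;
    for (const test of tests) {
      try {
        await test(); // each test throws on failure
        passed++;
      } catch (err) {
        console.warn(`${elementName} [${category}] failed: ${err.message}`);
      }
    }
    report[category] = {
      score: tests.length ? Math.round((passed / tests.length) * 100) : null,
      testCount: tests.length // a 100% score backed by 1 test is still weak evidence
    };
  }
  return report;
}

// Usage: the author registers tests; users could contribute more over time.
rateElement('x-slider', {
  accessibility: [async () => {
    const el = document.createElement('x-slider');
    document.body.appendChild(el);
    if (!el.hasAttribute('role')) throw new Error('missing ARIA role');
  }]
}).then(report => console.log(report));
```

Surfacing the test count next to the score is what would let the listing convey the "5 stars but only 1 review" caveat.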

paullewis commented Mar 24, 2014

I think we should definitely consider some form of linter for a first pass. It would be good to capture the basics, and that would hopefully reduce the noise for the manual reviewers. Not sure what this looks like in reality though.
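
As a strawman, a first pass could be little more than a handful of static checks run over an element's markup before any human looks at it (the rules below are purely illustrative, not an existing linter):

```js
// Strawman first-pass lint: cheap static checks on an element's template.
function lintElementMarkup(html) {
  const warnings = [];

  // a11y basics: clickable non-interactive markup without a role
  if (/<(div|span)[^>]*onclick/i.test(html) && !/role=/i.test(html)) {
    warnings.push('Clickable div/span without a role attribute');
  }
  // a11y basics: images without alternative text
  if (/<img(?![^>]*alt=)/i.test(html)) {
    warnings.push('Image without alt text');
  }
  // perf basics: blocking anti-patterns
  if (/document\.write\(/.test(html)) {
    warnings.push('Uses document.write');
  }

  return warnings;
}

console.log(lintElementMarkup('<div onclick="open()">Open</div>'));
// -> ['Clickable div/span without a role attribute']
```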

addyosmani commented Mar 24, 2014

I agree that a first-pass linter would be hugely useful. There are challenges in just how much we can automate validation of issues to do with accessibility, security and internationalisation. Performance may be a little more straightforward, but we can take the list of best practices we have and look at what could practically be checked automatically via tooling before a manual review.

addyosmani commented Mar 24, 2014

> Maybe you tie the reviews to a specific version of a component? This means authors will have to resubmit their component whenever they feel it's evolved far enough from that initial point.

I think that tying reviews/scores to a specific version of a component makes a great deal of sense.

> Is the thinking that there will be 3-5 blessed elements per category, or is the hope that this system scales such that anyone writing elements can participate?

That's a good question and I don't believe we have an answer for it just yet. It may make sense to consider whether elements are sufficiently generally useful to be considered for inclusion, and make a call on a case-by-case basis based on how different they really are from elements currently in a category.

> Perhaps users could also submit tests. A failing test means the score gets dropped in that category and incentivizes the developer to fix it.

+1 on this idea.

Munter commented Mar 24, 2014

> I think that tying reviews/scores to a specific version of a component makes a great deal of sense.

Me too. At least that would give the consumer a better guarantee of what they are getting. A git tag might make sense, or checking out the version from npm or Bower, where the target isn't moving.

The only bad thing about it is that you have to release before you get a review. So most components will start out with really bad reviews, assuming the authors know as little as I do about accessibility.
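
A minimal sketch of what keying reviews to an exact released version (a git tag / Bower version) could look like; the data shapes here are made up:

```js
// Reviews keyed by "<element>@<version>", so a new release starts unreviewed.
const reviews = new Map();

function recordReview(name, version, scores) {
  reviews.set(`${name}@${version}`, { scores, reviewedAt: new Date().toISOString() });
}

function lookupReview(name, version) {
  // Exact match only: a review of 0.2.1 says nothing about 0.3.0.
  return reviews.get(`${name}@${version}`) || null;
}

recordReview('x-slider', '0.2.1', { accessibility: 85, performance: 90 });
console.log(lookupReview('x-slider', '0.3.0')); // null -> needs a fresh review
```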

robdodson commented Mar 24, 2014

> There are challenges in just how much we can automate validation of issues to do with accessibility, security and internationalisation.

Maybe this is the bigger problem that we're trying to address, then. If it's so hard to test accessibility et al. that we have to resort to manual testing with experts, then that says to me that the tools need to be upgraded. Developers not going through the "blessed" gauntlet will fail because these things are just super hard to get right when you're working with primitive tools.

I realize automating these things is a bigger task than what this thread was initially created to address so we don't have to hash it out here, but I want to put it on everyone's minds. Even basic a11y linting could go a long way.

paullewis commented Mar 24, 2014

> tools need to be upgraded

Yes, yes they do. True for a11y, security and performance.

AdaRoseCannon commented Mar 24, 2014

Perhaps a reviewer doesn't have to review every area, just what they feel qualified to. The component can then be ranked according to accessibility and overall score. A precise, quantifiable list of what is required for it to pass certain requirements would allow the developer to make the necessary changes to boost their score. Something along the lines of http://jsmanners.com/
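
For example, combining partial reviews could look something like this (purely illustrative): each reviewer only scores the areas they feel qualified in, and unscored areas simply stay unranked rather than dragging the overall score down.

```js
// Combine partial reviews: average each area across the reviewers who scored it,
// then average the scored areas into an overall score.
function combineReviews(reviews) {
  const byArea = {};
  for (const review of reviews) {
    for (const [area, score] of Object.entries(review)) {
      (byArea[area] = byArea[area] || []).push(score);
    }
  }
  const perArea = {};
  for (const [area, scores] of Object.entries(byArea)) {
    perArea[area] = scores.reduce((a, b) => a + b, 0) / scores.length;
  }
  const scoredAreas = Object.values(perArea);
  const overall = scoredAreas.length
    ? scoredAreas.reduce((a, b) => a + b, 0) / scoredAreas.length
    : null;
  return { perArea, overall };
}

console.log(combineReviews([
  { accessibility: 80 },              // reviewer A only feels qualified on a11y
  { performance: 95, security: 70 }   // reviewer B skips a11y
]));
// -> { perArea: { accessibility: 80, performance: 95, security: 70 }, overall: ~81.7 }
```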

robdodson commented Mar 24, 2014

I quite like this jsmanners.com approach 👍

paullewis commented Mar 24, 2014

> Perhaps a reviewer doesn't have to review every area, just what they feel qualified to

Yeah, that was my hope. Should mean that people are more comfortable reviewing components and should reduce the load on individuals. Also enjoying the jsmanners.com vibe 👍

AdaRoseCannon commented Mar 24, 2014

I agree that ranking each version independently of its predecessors is very important. I think it is probably a good idea to show individual scores for a11y, security, performance, etc.
Perhaps also show a dependency tree?

addyosmani commented Mar 24, 2014

I think that getting the scoring right for each of the criteria buckets is a good target for now, and if we find that we're able to handle that well, we could explore additional views of the data (e.g. a dependency tree) at a later date. Right now I think the most useful things to expose are the accessibility, security and perf scoring.

davidbgk commented Mar 24, 2014

I doubt that a centralized index would work and/or scale. There are different ways to improve the quality of web components, like publishing best practices, basic automated checking, or even a detailed tutorial on developing a high-quality component. And webcomponents.org should definitely be the place for that material.

My vision is that web components will emerge as catalogs maintained by individuals and companies who will set the level of quality of their sets. Some of these catalogs will gain popularity and improve over time. Some will focus on design, others on apps, some will inherit from both. Some will die. That's the story of the Web and it has to be decentralized to work.

I understand your frustration, because it takes more time too. On that topic, I think that education and exemplars are a better way to improve an ecosystem than rating and selecting. It produces more value in the long term.

zenorocha commented Mar 24, 2014

I totally understand those concerns.

When I first started customelements.io, my idea was to display only "useful" components. Every single element was reviewed before getting into the gallery, and I used to check not only the code but also whether the author provided a live demo or documented the API.

The problems with this approach are obvious. You create a "human dependency", and as time passes, the elements I've approved get outdated or break.

Now we fetch not only those manually submitted elements but also all Bower components that contain the web-components keyword, and order them by number of GitHub stars.

By doing this we automate the submission process and hope that the most popular elements are those that follow best practices (which is not true in some cases).

A new platform to control quality can be created, but I wanted to point out some quick solutions that could be implemented on customelements.io:

  1. Instead of a "score" we could add a "star" or any other symbol to identify these "recommended" elements.
  2. Stop fetching all bower components that contain the "web-components" keyword and manually review each one of them.

What do you think?
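
For reference, the automated side of that ingest amounts to roughly the following. This is a sketch, not the actual customelements.io code: `searchBowerByKeyword` is a stand-in for whatever registry query is used, while the GitHub repos endpoint and its `stargazers_count` field are real.

```js
// Sketch of the ingest: find Bower packages tagged "web-components",
// look up their GitHub repos, and order the list by star count.
async function listByStars(searchBowerByKeyword) {
  const packages = await searchBowerByKeyword('web-components'); // [{ name, repo: 'owner/name' }, ...]
  const withStars = await Promise.all(packages.map(async (pkg) => {
    const res = await fetch(`https://api.github.com/repos/${pkg.repo}`);
    const data = await res.json();
    return { name: pkg.name, repo: pkg.repo, stars: data.stargazers_count || 0 };
  }));
  return withStars.sort((a, b) => b.stars - a.stars);
}
```

The unauthenticated GitHub API is heavily rate-limited, so a real ingest would need authentication and caching, but the ordering logic is that simple.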

mairead commented Mar 24, 2014

I'm wondering about a kind of Jury Service reviewer selection, rather than a voted system. If you feel you have time to devote to reviewing you can flag yourself as available in the pot. A selection of people are chosen at random each month say. That way you have rotating attention. Think Ada's suggestion of only reviewing in areas you feel you have expertise is good. Maybe the pot can pick one person at random from each category, so there is an expert in each of the 5 areas every month.
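
Mechanically, the draw itself could be as simple as this (the pool and names are obviously made up):

```js
// Illustrative "jury duty" draw: from a pool of self-nominated reviewers,
// pick one at random per category for this month's rotation.
function drawPanel(poolByCategory) {
  const panel = {};
  for (const [category, volunteers] of Object.entries(poolByCategory)) {
    panel[category] = volunteers[Math.floor(Math.random() * volunteers.length)];
  }
  return panel;
}

console.log(drawPanel({
  accessibility: ['alice', 'dana'],
  security: ['bob'],
  performance: ['carol', 'erin', 'frank']
}));
// e.g. { accessibility: 'dana', security: 'bob', performance: 'carol' }
```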

ryjoyce commented Mar 24, 2014

I suspect random selection of nominees would result in variable quality of review, whilst a fixed peer-group review could ultimately lead to well-meaning bias; roads paved with good intentions, etc.

There is a great deal of subjectiveness too: is a plugin with a radically different internal architecture but the same result less useful than an existing, already-recommended solution? Is that reinventing the wheel or just a different kind of wheel?

Either way, it'll be interesting to see how it goes.

karlcow commented Mar 24, 2014

If Web Components technology, which has the purpose of decentralizing the effort of evolving the Web (outside of the browsers' control), recreates the need for a centralized bottleneck, we somehow defeat its goals. There was no control over who could create a quality Web site, and there were quite crappy ones and some very good ones. I agree with @davidbgk that we should avoid a centralized system of reviews and vetting.

Education, best practices, and occasional editorialized content pointing at very good components found in the wild are a more effective use of time.

AdaRoseCannon commented Mar 25, 2014

@karlcow I feel this project will be a good way of determining best practices before web components gain traction across the web, so developers have a reference of good components on which to base their own. No one is forcing component makers to submit their components for review.

addyosmani commented Mar 25, 2014

I think we need to stress @adaroseedwards's point here, as there seems to be some confusion. We're definitely not forcing component makers to submit their elements for review. This thread is suggesting that we have a set of elements which we've confirmed follow all of the best practices, and that we're happy to display them on the site as a reference.

It's all well and good to have a list of best practices (which we do, and plan on evolving), but I would personally have a hard time trying to figure out how to apply them all without some further reference material in the form of code. I also think there's something particularly useful about having a set of elements which developers can reliably use/reuse/extend knowing that they follow these practices.

+9001 on education still being at the forefront of all of this. That's been the plan with wc.org from the start.

gonzofish commented Apr 4, 2014

@addyosmani so the suggestion is that this be a seal-of-approval type of community? I think the comment @paullewis made about gaming the system should definitely be an important consideration. Maybe some sort of final approval board? Not to say "this is a cool component, it deserves to be on the site" but more to ensure that the components do meet some high-quality standard.

sindresorhus referenced this issue Apr 6, 2014

Closed

`ng-` prefix #3

zenorocha added the feature label Apr 25, 2014

zenorocha commented Jul 7, 2015

Closing this one due to inactivity.

zenorocha closed this Jul 7, 2015
