
Drop "DOM 4" or any copy/paste efforts #145

Open
marcoscaceres opened this issue Jun 13, 2017 · 5 comments

Comments

@marcoscaceres
Member

The copy/paste efforts are harmful to the community and they should be stopped (they've all failed in the past, too). I'd like to see "DOM4" removed from the charter. Same with HTML.

There are plenty of good specs being worked on, so why keep making a mess of things by including HTML and DOM there?

@domenic

domenic commented Jun 13, 2017

Notable copy-and-paste efforts:

  • DOM4
  • HTML
  • Microdata
  • Web Sockets API
  • Web Workers
  • HTML Canvas 2D Context
  • All specs under maintenance except view-mode

@chaals
Collaborator

chaals commented Jun 13, 2017

@domenic wrote:

things that are not facts.

@marcoscaceres

While I agree that there are problems caused by having more than one organisation develop a specification for the same thing, longstanding requests by W3C to engage in a constructive search for resolution to these problems have effectively been rebuffed.

The current approach of the Working Group is not to simply copy-and-paste, but to do necessary work on specifications according to W3C processes, as chartered by W3C members. In particular, W3C produces stable specifications that have undergone both implementation and quality review, and are considered to provide a reasonable reference point for a vast audience.

The Working Group does copy text whose licence apparently allows that - just as W3C's work, including all of the Working Group's, is now generally licensed - since literally rewriting text instead of getting straightforward permission is simply foolish given a goal of enhancing interoperability.

But our experience has been that this leads to continued importing of unreliable content, which of course fails to meet the goals outlined above - and is itself a reason for the work done at W3C.

@marcoscaceres
Member Author

The current approach of the Working Group is not to simply copy-and-paste, but to do necessary work on specifications according to W3C processes, as chartered by W3C members.

But this risks throwing everything off. If the fixes to bugs are accepted, they should be fixed upstream so they actually end up getting implemented.

In particular, W3C produces stable specifications that have undergone both implementation and quality review, and are considered to provide a reasonable reference point for a vast audience.

I don't think the implementation part is true. No browser vendor implements from the copied W3C specs. This should be demonstrable: web platform tests for any fixes you make on the W3C side won't end up passing in browsers (or there will be a mismatch between what is in the WHATWG spec and the WPT tests).

I think a case needs to be made here by the WG that proves that implementers are actually reading and implementing the copy/pasted specs (including any fixes claimed to be made). This can only be shown with implementations passing associated Web Platform tests. If the WG can show that, then neither I nor anyone else really has a case to argue.

However, if the WG can't prove that this effort is having affect on interoperability, then it would be time to stop with the copy/paste efforts. I'd like nothing more than to be proved wrong.

@chaals
Collaborator

chaals commented Jun 15, 2017

@marcoscaceres:

The specs are not mere copies.

While browsers may or may not follow them (and by "no browser" do you mean that, or just "not the people I know"?), others do.

There are mismatches in the specs precisely because we find that what we got from the period when there was a common spec doesn't match interoperable reality. This is the purpose of testing and of the Candidate Recommendation phase of the W3C process, and, speaking personally, a motivation behind most of the changes that I have made.

I'm not sure why you think the only way to demonstrate that the spec reflects reality is through WPT, or through what certain implementors read. While WPT in particular is very valuable, the fact that there isn't a test there for something is not proof that it doesn't happen.

Meanwhile, where there is interoperability, the copy/paste that happens between the two specs should probably be enhanced - it would be nice if the WHATWG spec were more inclined to adopt changes that match observable reality in old parts of the specs.

In any event the specs are different. While we have legacy content from the days when there was an upstream spec which was only ever fantasy or experiment, we are working to remove it, and not to add things that are still speculative.

Likewise, we have a more modular spec. So we can, for example, take the long-dormant microdata note - which appears not to have any traction among browser vendors, but is a technology implemented by major search engines and used by a massive proportion of real developers - and work on a Recommendation that matches reality, with the benefit of clear signalling through the status of the specification, working as necessary with implementors to improve interoperability.

Where we do not do, and do not expect to do, work, we actively drop it from our deliverables - streams, URL, and high-level events are examples - while we have explicitly noted that we're not sure whether certain former deliverables, such as the quota API, are worth pursuing, and will only take them up if convinced they are. That's a major point of this recharter, and of the W3C Process by which we charter Working Groups in the first place.

@marcoscaceres
Member Author

I'm not sure why you think the only way to demonstrate the spec reflects reality is through WPT, or what certain implementors read - while WPT in particular is very valuable, the fact that there isn't a test there for something isn't a proof that it doesn't happen.

I think that's pretty much exactly the definition of "not happening" - because if it's not observable/testable (even manually), then it can't by definition be determined to be interoperable. Thus it can't proceed past CR (without political meddling or other shenanigans).

Look, bottom line is that I think the WG has plenty of good things to do. Where there is a breakdown with the upstream (WHATWG), that can only be resolved by getting implementers to implement stuff. It appears that the W3C Process is being hijacked to push for changes that didn't get consensus - by using "but it's a REC! You have to do it" as some kind of false indication of quality or agreement. That's detrimental to the W3C as a whole, because it discredits all other specs.

Re: microdata, taking things from the WHATWG that are not going to be implemented in browsers is questionable at best (I personally don't think the W3C should be in the business of standardizing non-browser things... but whatever). Ultimately, copy/pasting the upstream specs has to prove to be of value.

And that value remains unproven (to me, at least). So again: can you please show concretely where something in the w3c spec(s) is reflecting reality as a direct result of the WG's effort? I'm totally willing to be convinced and I'm pretty easily impressed (just drop "Chrome does X", "Firefox does Y", etc. and then there will be no argument about value).

If you can't show the above - then please reconsider your position here. We have plenty of other fun things to focus on. It's heartbreaking we are still having discussions about this.
