Stability Policy/Statement #725

Closed
mikeal opened this Issue Feb 5, 2015 · 41 comments

mikeal commented Feb 5, 2015

Something that is coming up over and over again is fear about the stability of io.js between versions.

Because we made such a big jump in functionality from 0.10 to 1.0, people fear that we broke backwards compatibility. The increased pace of releases only feeds into the fear that we are going to continually break people's applications and parts of the ecosystem.

As we build the roadmap it's important that we have a clear policy about what we will and will not break, and what signals we intend to give in those releases.

Here's a starting point, I'm sure it will drum up a bunch of feedback and we will need to continue to iterate on it.

Stability Policy

io.js will not break backwards compatibility in the core JavaScript API.

io.js will continue to adopt new v8 releases.

  • When the v8 C++ API causes breakage that can be handled by nan
    the minor version of io.js will be increased.
  • When the v8 C++ API causes breakage that can NOT be handled by nan
    the major version of io.js will be increased.
  • When new features in the JavaScript language are introduced by v8 the
    minor version number will be increased. TC39 has stated clearly that no
    backwards incompatible changes will be made to the language so it is
    appropriate to increase the minor rather than major.

No new API will be added in patch releases.

Any API addition will correspond to an increase in the minor version.
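Taken together, the rules above map each kind of change onto one component of the version number. A rough sketch of that mapping (the `classifyBump` helper is hypothetical, written only to illustrate the policy):

```javascript
// Hypothetical helper: given two io.js version strings, report which
// kind of change the policy above says the release represents.
function classifyBump(oldVersion, newVersion) {
  var o = oldVersion.replace(/^v/, '').split('.').map(Number);
  var n = newVersion.replace(/^v/, '').split('.').map(Number);
  if (n[0] !== o[0]) return 'major'; // v8 C++ breakage that nan can't absorb
  if (n[1] !== o[1]) return 'minor'; // new API, new JS language features,
                                     // or v8 breakage that nan can absorb
  if (n[2] !== o[2]) return 'patch'; // bug fixes only; no new API
  return 'none';
}

console.log(classifyBump('v1.1.0', 'v1.2.0')); // minor
console.log(classifyBump('v1.2.0', 'v2.0.0')); // major
```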

Long Term Support

iojs intends to support old versions as long as community members are fixing bugs in them. As long as people are committing bug fixes and improvements that don't change or add API, we will push patch releases.

legacy-v8

When the v8 team stops supporting a version that a prior iojs release depends on we will create a branch in iojs/legacy-v8. This branch will be used to continue to land fixes in unsupported lines of v8. These branches will be pulled into future patch releases of iojs.

mikeal commented Feb 5, 2015

Also, we might want to consider vendoring nan and shipping it with io.js.

Fishrock123 commented Feb 5, 2015

io.js will not break backwards compatibility in the core JavaScript API.

I think this is harmful long-term. See the entire promises co-existence discussion.

Saying everything will never change locks you into a position where everything you do is stuck as legacy forever.

mikeal commented Feb 5, 2015

@Fishrock123 the thing is, that statement is already pretty true. We still can't even remove the sys module after years of deprecation. If promises are added to the current API it will have to be done in a compatible way, and will be considered an API addition, not a backwards incompatible change.

If we ever wanted to move to a better promise-centric API we would probably do that along with support for the new modules spec, and all the API changes would only be accessible through new-style modules, which would make that an API addition and not a backwards incompatible change.

We've got an ecosystem of 130K modules, and it'll be 200K before the end of the year. Any backwards incompatible change will break tens of thousands of modules, not to mention all the applications that rely on them. At this point in the project's maturity it just isn't conceivable that we can make real backwards incompatible breaks.

mikeal commented Feb 5, 2015

From the Tracing WG: Do we consider the tracing probe endpoints part of the public API, and as such removals & additions affecting the major/minor bumps? What is our policy on compatibility changes here?

Qard commented Feb 5, 2015

I would say moving the probes from core to userland should be allowed, but only in a major version bump. The primary users are enterprises that care a lot about that functionality, and moving those parts out of core may require changes to their tooling.

chrisdickinson commented Feb 5, 2015

io.js will not break backwards compatibility in the core JavaScript API.
@Fishrock123 the thing is, that statement is already pretty true. We still can't even remove the sys module even after years of deprecation.

I understand this sentiment, but in practice that's not accurate:

  • We don't always know for sure that a change will not break existing users. The potential for unforeseen consequences is great. Committing to something so hard-line would be disingenuous to our users.
  • There are cases where APIs have to break for the health of the platform:
    the addition of proper keep-alive support, the migration from streams2 to
    streams3, and the eventual removal of domains.
  • In other cases – like the removal of sys – it's a matter of "is it worth it to break backwards compatibility for this one change?" In many cases, the answer is no. However, if we do have to bump a major version, the value proposition might change and we can revisit that sort of breakage. If we do that, we should commit to being very upfront about what will break far in advance – adding these sorts of features to a v2 milestone wouldn't be a bad idea, and making sure that we limit the number of breaking changes to an easily definable set.

In short, I think the best we can promise is that we will be very conservative about breaking changes, and message them to the community (along with remediation approaches) far in advance of the change, with specific steps to prepare for the change.

so, we might want to consider vendoring nan and shipping it with io.js.

100% yesplease.

No new API will be added in minor releases.

By this, do you mean no "net new" modules? For instance, if we added a tracing module, that would require a major version bump? If so, I'm not sure I agree – it seems to me like that could still fall under a minor release.


In addition, I think having a stance on the cadence of releases is important as well – node got away with ad-hoc releases without fatiguing the community primarily because the releases were so infrequent, IMO. This ties back into the releases document I worked on, and letting the community know that there will be "LTS" or stable releases that bugfixes will be backported to, and at what rate those releases will be made available. I've taken a bit of a break from working on that doc. I'll revisit it this weekend and see about addressing concerns / simplifying it. Getting to see the release process as it exists now should help inform the document.

mikeal commented Feb 5, 2015

No new API will be added in minor releases.

That was a typo, should have been patch, fixed now.

cadence of releases

I would like to entirely separate the issue of stability from cadence. We should state our policy for stability, what we are willing to do and not to do and what version number changes will correspond to those changes. If we can't comply with those commitments then the cadence will drop and we will have to invest in better tooling and automation in order to pick back up the cadence while still complying with our stability commitments.

We don't always know for sure that a change will not break existing users.

Sure, it's software; we never know anything with 100% certainty. What this says is that if/when we do break something, it's a bug and entirely unintentional, and that we will take every measure available to fix it and ensure it doesn't happen again. If it happens often we'll have to change our release/automation strategy to better detect issues like this, or slow down releases. We do almost nothing now (although still more than we did under node.js); there are 130K modules we could automate the testing of to see if there are changes from one version to the next.

Having a strong goal is important, it's what motivates us and the community to step up and create better tooling like this.

There are cases where API have to break for the health of the platform. The addition of proper keep-alive support, the migration from streams2 to streams3, and the eventual removal of domains.

I don't think keep-alive support was a breaking change. request didn't need any code changes when it landed and it touches pretty much the whole http API.

I think that streams2 is exactly what a document like this is trying to assure people we aren't going to do. streams3, as I understand it, fixes more compatibility problems than it causes.

IMO domains will print a very annoying error for years and even then we probably will still keep it around for fear of breaking some applications. But hey, I'd love to be proven wrong.

stable and unstable channels and such

Again, I think this is something that will need to continue to evolve. The goal of these channels is to increase confidence that what we are saying (this doesn't break in these ways) is actually true.

I have big concerns about allowing intentional breaks in the JS API in major version bumps. We don't know for sure yet how often we'll have to increment the major version based on v8 changes, but let's be really pessimistic and say that every 6-8 weeks we have to do a major version bump. At that point our version number is about as meaningless as Chrome's is at messaging big changes. Slipping a JS API change, which has a far wider reach, into such a release would be difficult if not impossible to message properly.

One idea I had for a selection of big changes (like supporting ES6 Modules but giving them a more ES6-centric API instead of the current one): we would have a branch/channel for it for a long time, something like iojs-NG (Next Generation), and we would get people experimenting with that channel for a long time before we announce an integration date and a version number it would land in. But even this is something we should, and I believe we can, do in a backwards compatible way without breaking any existing JS modules.

I don't actually think that dedicating ourselves to not breaking backwards compatibility in the JS API is that huge of a burden. It's the same burden TC39 has and they've mitigated it by coming up with creative ways to separate the new from the old so that the new can be handled differently, we can do the same.

@mikeal mikeal added the tc-agenda label Feb 6, 2015

bnoordhuis commented Feb 7, 2015

Also, we might want to consider vendoring nan and shipping it with io.js.

I don't think that's going to work, at least not in the near term. Best case, symbol clashes would happen when io.js ships nan 1.5 but an add-on depends on nan 1.4. Worst case, there are no symbol clashes but the add-on quietly starts doing the wrong thing.

From the Tracing WG: Do we consider the tracing probe endpoints part of the public API, and as such removals & additions affecting the major/minor bumps? What is our policy on compatibility changes here?

The Linux kernel counts tracepoints as part of the public API and it occasionally really hampers progress. I would advocate minor version bumps only with no promise of stability for now.
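The clash described above is, at bottom, a version-compatibility problem between the single nan io.js would vendor and the nan each add-on was actually compiled against. A rough sketch of that incompatibility in plain JS terms (the `compatible` helper is purely illustrative, not part of nan's API):

```javascript
// Illustrative only: a single vendored nan can't serve add-ons built
// against different nan lines. With pre-2.0-style versioning, even a
// minor bump (1.4 -> 1.5) can change behavior an add-on relies on.
function compatible(vendoredNan, addonNan) {
  var v = vendoredNan.split('.').map(Number);
  var a = addonNan.split('.').map(Number);
  return v[0] === a[0] && v[1] === a[1]; // require matching major.minor
}

console.log(compatible('1.5.0', '1.5.2')); // true
console.log(compatible('1.5.0', '1.4.0')); // false: the clash described above
```

Letting each add-on keep depending on its own nan, as happens today, sidesteps the problem entirely.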

@mikeal mikeal referenced this issue Feb 7, 2015

Closed

WIP: Roadmap #14

taoeffect commented Feb 7, 2015

Could we please get some sort of official stance via a blog post or something? (I take it that's what this issue is trying to get done?)

I couldn't find information as to what's going on with Streams 3 and the other changes introduced in nodejs 0.12, and how they relate to io.js.

Without an official statement there is the very real danger of FUD spreading. I, for example, am honestly concerned about potential continued divergence of the two projects. It could split the node community in two in a way that is bad for everyone.

Request: reassuring words backed by reassuring actions, plz.

mikeal commented Feb 7, 2015

@taoeffect we only have the ability to talk about what we intend to do. In regard to divergence between the projects, we can't speak to what Joyent will decide to do or what direction they might take things. This policy is an attempt to come to a consensus, then message and implement a policy for io.js. We have no control over what Joyent's policy might be or how closely they might follow it.

taoeffect commented Feb 7, 2015

@mikeal Well then what does io.js intend to do with regards to new nodejs features and API changes?

mikeal commented Feb 7, 2015

@taoeffect everything they just shipped in 0.12 we've been shipping for weeks. In addition, we have a newer version of v8 and libuv (both of which are still supported, unlike the versions in 0.12).

The features in 0.12 were built by people in io.js, some of them more than a year ago.

taoeffect commented Feb 7, 2015

@mikeal That's good to hear, but you seem to be avoiding the question, which keeps me nervous.

The main concern is: what happens when nodejs 0.13, 0.14 and 0.15 are released with features and/or API changes that don't exist in iojs?

Oteng commented Feb 7, 2015

I think the problem is with Joyent's attempt to keep the node community from moving to io.js, and in doing so they are making it impossible for the two projects to be compatible. I think what io.js should be doing is working to make itself a suitable replacement for nodejs.

ghost commented Feb 7, 2015

@Oteng What types of things are they doing to slow the switch?

Oteng commented Feb 7, 2015

@benjaminProgram When they start introducing nodejs features and API changes.

taoeffect commented Feb 7, 2015

when they start introducing nodejs features and API changes

Well we certainly can't have them doing that. Bad Joyent! Stop developing the software you created for great justice!

sigh...

Some sort of reconciliation is needed.

ghost commented Feb 8, 2015

@taoeffect Definitely.

algesten commented Feb 8, 2015

@taoeffect

when nodejs 0.13, 0.14 and 0.15 are released

That presupposes Joyent are driving node development forward, which, despite 0.12, shows no signs of happening. We can speculate this is due to the fact that the majority of the top contributors (5 of 8?) are now working on io.js.

So you're asking mikeal for a statement on what happens if, hypothetically, Joyent finds/hires developers to start pushing things forward and, despite the node advisory board, decides to go in a direction that doesn't make sense? ...not sure it's a reasonable question to ask.

And regarding reconciliation: the driving members behind io.js say they hope io.js will, one day, when Joyent gets their act together, be merged back into node.

taoeffect commented Feb 8, 2015

@algesten I appreciate the rosy optimistic picture you are trying to paint, but let's keep our heads here in reality and be aware of the potential significant harm that can be done to the community by ignoring this question.

Note that the question has still been ignored. It has not been answered by your post either (which dismissed the question).

Note also that if we took your line of thinking as the reality of how things are, then Joyent shouldn't have had any reason to release 0.12, because supposedly, according to your narrative (paraphrased), "iojs is the future and should one day be merged back into node".

But they did release it.

If everyone keeps turning a blind eye to this issue, it will harm the community.

Please consider the worst case scenario and have an answer for it.

mikeal commented Feb 8, 2015

@taoeffect we aren't ignoring it, it's just literally impossible for us to tell you what someone else will do :) All the harm you're talking about will be the result of someone else's actions so there isn't much that we can do to alleviate your concerns other than to stop shipping code.

taoeffect commented Feb 8, 2015

@taoeffect we aren't ignoring it, it's just literally impossible for us to tell you what someone else will do :) All the harm you're talking about will be the result of someone else's actions so there isn't much that we can do to alleviate your concerns other than to stop shipping code.

We're now back where we started.

Let's break this loop.

  • Yes, I agree it is impossible for you to tell me what someone else will do.
  • No, the potential harm would not be entirely the result of "someone else's" actions, half of it would lie with iojs, the other half with nodejs.
  • Yes, there is something you can do. For example, work with Joyent to merge all work they do into iojs, and vice versa.
@taoeffect

taoeffect commented Feb 8, 2015

From: http://thechangelog.com/139/

52:02 – “My next task is to jump into the roadmap repo, and figure out more ways of pulling in feedback from the community, and figuring out what people want out of Node next, and that’s the direction I expect it to go in.” – Mikeal Rogers

Consider the above comments some community feedback. 😉

Would you consider, as part of the roadmap work, polling the community to see whether they want reconciliation or further divergence? (Edit: Or similar / related questions?)

@mikeal

Member

mikeal commented Feb 8, 2015

@taoeffect we worked w/ Joyent for 6 months before forking, we are still trying ;) it isn't public at their request.

@ghost

ghost commented Feb 8, 2015

@mikeal What is not public?

@mikeal

Member

mikeal commented Feb 12, 2015

Added notes about long term support and a proposal for how to better handle unsupported versions of v8.

@chrisdickinson

Contributor

chrisdickinson commented Feb 13, 2015

Having a strong goal is important, it's what motivates us and the community to step up and create better tooling like this.

I'm not sure 100% backwards compatibility is the right strong goal for io.js. While we absolutely need tooling to start seeing how changes in core affect the ecosystem at large, I don't think that seeing breakage should preclude those changes. To re-hash that anecdotal example: keep-alive broke tests at npm, IIRC, but the ecosystem is healthier for the change overall.

I may be doomsaying a bit here, but: if we never make breaking changes, it becomes harder to onboard new contributors effectively. They will have an entire project-history's-worth of backwards compatibility to learn before they can navigate the project and make meaningful changes. It becomes harder to document effectively, because old APIs don't die, and efforts have to be made to point newcomers to the "right way" to use the project. It becomes harder to make changes in general, to navigate the maze of backwards compatibility. When I see "no breaking changes in the JavaScript API", I see Python: if we can't fix problems for fear of breaking downstream code, the natural recourse seems to be to spread the backwards-incompatible APIs across a new set of core modules.

If we state how we deprecate APIs and how we delete them, we make a promise of stability to our users that stands less of a chance of stalling the project, and is more realistic overall. We can still state what APIs are off-limits, but on a more granular scale.

We have to strike a balance between our users' investment in io.js versus our users' io.js code. A 100% backwards compatibility policy protects the code to the exclusion of the investment. Iterating and moving io.js forward protects the users' investment. A stated deprecation policy, I think, gives us the best of both worlds.

@bnoordhuis

Member

bnoordhuis commented Feb 13, 2015

io.js will not break backwards compatibility in the core JavaScript API.

@mikeal I'm with @chrisdickinson, I don't think that's a good idea or even feasible. A more constructive approach is to outline a policy for deprecating and removing broken features.

To take the sys module that you mentioned elsewhere as an example: the reason it's still in core is that it's not really broken and has zero maintenance and run-time overhead.

Compare that to domains: domains do have a significant maintenance and run-time cost and are good candidates for deprecation (already happened) and removal.

When the v8 team stops supporting a version that a prior iojs release depends on we will create a branch in iojs/legacy-v8. This branch will be used to continue to land fixes in unsupported lines of v8. These branches will be pulled in to future patch releases of iojs.

What is the rationale for a separate repository? joyent/node lands patches in the in-tree copy of V8 and that seems like a reasonable approach to me. It's less work than maintaining a second repository.

I have mixed feelings about pledging support for versions of V8 that upstream has abandoned. Back-porting fixes quickly becomes infeasible due to the ever-growing delta.

@mikeal

Member

mikeal commented Feb 13, 2015

I don't think we are sharing a vision for what "backwards compatibility" means. More accurately, I don't think we're sharing a definition of what the scope of the public API is that we're dedicated to supporting.

The http improvements mentioned earlier are quite telling in this regard. I've been fixing the tests for request, which touches pretty much all of http. So far no actual behavior has broken and I wouldn't expect any applications to break; however, there are a bunch of broken tests due to:

  • New libuv has slightly improved error message text. This meant that the strict error message comparisons I was doing in tests broke.
  • Testing the raw headers, including capitalizations, broke because I was also testing the default connection header which obviously changed.

Those aren't breaks in backwards compatibility. We aren't going to guarantee the exact text of every error message forever, or refrain from improving keep-alive support, or even from changing the default to use keep-alive, so long as we can do so without altering the public function signatures and expected output.

Coming back to streams, I think that streams1 -> streams2 was a backwards break and is the kind of thing we should dedicate ourselves to avoiding. But, streams2 -> streams3 did not break or alter prior behavior in a breaking way and should not be considered a breaking change.

it becomes harder to onboard new contributors effectively. They will have an entire project-history's-worth of backwards compatibility

Think back to this assert.deepEqual thing last week. The contributor didn't understand the full impact of the change but enough people did that were involved in the review and the TC that we could effectively handle the contribution.

Also, this is the goal and should drive more than just core changes. The roadmap includes mention of better tooling to understand how much of the ecosystem a change might break and how much uses a particular API.

If we state how we deprecate APIs and how we delete them

Why are we still pretending we can do this? Let's be honest, we're not ever going to be able to remove domains or sys, the cost of removing them is just too high. All we can do is add better API and hope that people stop using it. The exception of course is NG, where we can effectively start over without breaking all the old modules and applications.

A 100% backwards compatibility policy

All language is subjective. A hosting company that is "always up" is actually up 99.99% of the time. If we understand how many people use an API and how much we might break by making slight alterations, we can effectively mitigate the impact of small changes without scaring people by messaging them as "breaking changes." If you take the most conservative view here you could say that the new error messages from libuv are a breaking change, but that isn't what we're talking about now and it isn't what anyone expects.

Compare that to domains: domains do have a significant maintenance and run-time cost and are good candidates for deprecation (already happened) and removal.

Statements like this really do scare people who have node in production. While it's true that the API is terrible and has a bad impact, can we really break a significant portion of the module ecosystem and production applications to get rid of it entirely? What I suspect we'll do is continue to reduce the impact of its use. Hell, I could even see us turning all its APIs into no-ops, but actually removing the module entirely, to the point where everyone who was pulling it in sees an exception, seems like a situation where the cost will always be too great.

What is the rationale for a separate repository? joyent/node lands patches in the in-tree copy of V8 and that seems like a reasonable approach to me. It's less work than maintaining a second repository. I have mixed feelings about pledging support for versions of V8 that upstream has abandoned. Back-porting fixes quickly becomes infeasible due to the ever-growing delta.

What I'm trying to do here is separate and parallelize the efforts of LTS and current development.

The work of back-porting fixes is a burden we don't want to put on the majority of contributors, and definitely shouldn't be somewhere that you're spending your time. But, there are a number of contributors from companies with large financial incentives to keep older lines of node functional and secure because they have customers on them that won't switch.

The only reason to break out the repo is to give the LTS people a place to do that work that they more or less own (similar to a working group) and also to make it clear when we're taking responsibility for a particular line of v8 (any versions that aren't in that repository we are effectively saying must be fixed in upstream v8).

There may be a better way to organize this, it's a problem we don't quite have yet since we're so new, the real point here is that we're saying definitively "when v8 ends support we will inherit it and continue to support it as long as there is a community landing patches."

@a0viedo

Member

a0viedo commented Feb 13, 2015

@taoeffect Yes, there is something you can do. For example, work with Joyent to merge all work they do into iojs, and vice versa.

If it were easy or straightforward, the fork wouldn't have made sense in the first place. IMO io.js is not aimed at being a replacement for Node itself, but at what the community was expecting from Node. In the future, Joyent can change whatever they want in Node but that doesn't mean you (or any of us) should follow that religiously.

@mikeal

Member

mikeal commented Feb 13, 2015

@chrisdickinson what do you think about these alternatives as a replacement for "will not break backwards compat in JS API"?

  • Will indefinitely support the JS API from prior releases.
  • Will not alter the public API in a breaking manner or remove support for existing JS APIs.
@mikeal

Member

mikeal commented Feb 13, 2015

Some quick knowledge share so that people can see where I'm coming from.

Perception vs. Reality

The reality of this project is that the code is still being reviewed and maintained by the people who have been building node for many years; we just have a lot more contributors helping now. The perception, however, is that we are a new project with a new v8 and that we have diverged from node.js. The feelings people have about compatibility between node.js versions have not carried over to us at all.

The node.js project has no official written policy about any of this and has made some big mistakes we're still living with (streams2, domains) but the perception is that they are more stable and more dedicated to stability. Whatever we come up with needs not only to alleviate people's concerns about this but make it clear that we won't make the mistakes node.js has made in the past. Two important ones come to mind:

  • streams2: readable-stream should have remained in npm long enough for us to figure out a better compatibility strategy (streams3).
  • domains: committed way too early to an API we no longer want but are stuck with forever.

The node.js project's strategy of marking specific APIs with varying degrees of stability does not live in reality. The cost of changes is directly proportional to the number of modules and applications they will break. The stated "stability" of an API has far less bearing on how much we can change it than the number of modules in the ecosystem which decide to depend on it.

npm napkin math

I understand the attraction to "reserving the right" to deprecate an API but we all know that there is a threshold of modules/applications we are just unwilling to break no matter how bad the API is that they are dependent on.

Your initial impression might be "how many modules can possibly be dependent on this?" I'll have real numbers soon enough based on actual code analysis but let's just do some basic guess math.

The last time I checked (9 months ago) the average dep list per package was a little more than 8 and growing every quarter (not only is the overall package ecosystem growing, the average dep tree is growing too). That means that every package has an average of 8 deps, which in turn have an average of 8 deps, and so on.

So, say we want to get rid of domains and we figure that only about 1% of the packages in npm actually use it (roughly 1,300 packages today). Removal won't just break 1,300 packages. Potentially, removal could break (1,300 * 8 * 8) or 83,000 packages (more than half the registry). This is, of course, not entirely accurate. Core packages that have been around longer are more likely to be depended on, as are npm packages themselves so newer APIs will have a much smaller reach. This also doesn't account for overlap (deep deps depending on the same packages). But, this should give you an idea of how big a role these deep dependency trees play in the cost of deprecations.
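The guess math above can be sketched in a few lines. All the inputs mirror the illustrative assumptions in the comment (1,300 direct users of domains, an average of 8 deps, two levels of dependents); none of them are measured values:

```javascript
// Back-of-the-napkin estimate of how far a core API removal can reach.
const directUsers = 1300; // ~1% of the registry said to use domains
const avgDeps = 8;        // assumed average dependency count per package
const levels = 2;         // dependents-of-dependents, as in 1300 * 8 * 8

// Each level of dependents multiplies the potentially broken set by
// avgDeps. This ignores overlap between dependency trees, so it is an
// upper bound, not a count of unique packages.
const potentiallyBroken = directUsers * Math.pow(avgDeps, levels);

console.log(potentiallyBroken); // prints 83200
```

The point of the sketch is the shape of the growth, not the exact figure: with deep dependency trees, even a rarely used core API can sit underneath a large fraction of the registry.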

@a0viedo

Member

a0viedo commented Feb 13, 2015

@mikeal ^ that should be on medium too to get more visibility

@mikeal

Member

mikeal commented Feb 13, 2015

@a0viedo I'd like to reach a conclusion here about stability before I write something like that up, but it will definitely happen :)

@chrisdickinson

Contributor

chrisdickinson commented Feb 14, 2015

Why are we still pretending we can do this? Let's be honest, we're not ever going to be able to remove domains or sys, the cost of removing them is just too high.

Part of the reason we pretend to be able to deprecate things is that the tooling to see the results of our actions isn't there yet. If we had that in place, we'd be able to make better decisions about what to deprecate/delete, and when to do so. We are, and have been, flying blind in that respect for a very long time, so our decisions in that realm tend to be ultraconservative. The opportunity to judge the results of our options with hard numbers would allow us to be nuanced and data-driven rather than intuition-driven.

npm napkin math

The other side of this is that the npm ecosystem rapidly heals around breakage from core – even faster when we telegraph it. I'm not saying we would, for example, delete sys immediately, but we should message ahead of time through deprecation notices and the blog / release notes that these things will be going away. I don't want to pile up breaking changes and eventually be forced into a Python3 situation, where the only way to clean up the codebase is to make breaking changes in bulk.

The NG solution is interesting, but it's a solution we only get to use once, which means we have to get the entire reworked API right the first time it's released. I don't like those odds: good APIs evolve, they aren't constructed in one fell swoop, except in rare, lucky cases. The idea of splitting the codebase for NG is also worrisome – NG sounds like another v0.12 branch. I'd rather iterate on the NG concepts behind feature flags, and get that functionality out into all io.js users hands quicker (and see the breakage in core faster!) than to develop it on a separate branch.

Whatever we come up with needs not only to alleviate people's concerns about this but make it clear that we won't make the mistakes node.js has made in the past.

I agree that we need to alleviate people's concerns, but it's nearly impossible to tell a good idea from a mistake with certainty until it's already implemented and in people's hands. We should own up to that – by stating that we will, when necessary, own up to API mistakes. We commit to cleaning them up, and giving clear, repeatable instruction on how to migrate away from those APIs when they happen, whether that be through compatibility modes via require.extensions, go fix-style rewriters, or step-by-step guides on what will change and how to fix it before the fact.

what do you think about these alternatives as a replacement for "will not break backwards compat in JS API"

  • Will indefinitely support the JS API from prior releases.
  • Will not alter the public API in a breaking manner or remove support for existing JS APIs.

I'm advocating for something like the following:

"io.js is a conservative, stable project. Changes will be judged based on their value relative to the breakage they could introduce in the ecosystem. Breaking changes will not be introduced without going through a widely-communicated deprecation process, giving downstream code ample time and instruction on how to adapt to the new codebase. Breaking changes will never be introduced in bulk. Frozen APIs will only accept breaking changes for security fixes."

Contributor

chrisdickinson commented Feb 14, 2015

Why are we still pretending we can do this? Let's be honest, we're not ever going to be able to remove domains or sys, the cost of removing them is just too high.

Part of the reason we pretend to be able to deprecate things is that the tooling to see the results of our actions isn't there yet. If we had that in line we'd be able to make better decisions about what to deprecate/delete, and when to do so. We are, and have been, flying blind in that respect for a very long time, so our decisions in that realm tend to be ultraconservative. Given the opportunity to judge the results of our options with hard numbers would allow us to be more nuanced and data-driven than intuition-driven.

npm napkin math

The other side of this is that the npm ecosystem rapidly heals around breakage from core – even faster when we telegraph it. I'm not saying we would, for example, delete sys immediately, but we should message ahead of time through deprecation notices and the blog / release notes that these things will be going away. I don't want to pile up breaking changes and eventually be forced into a Python3 situation, where the only way to clean up the codebase is to make breaking changes in bulk.

The NG solution is interesting, but it's a solution we only get to use once, which means we have to get the entire reworked API right the first time it's released. I don't like those odds: good APIs evolve, they aren't constructed in one fell swoop, except in rare, lucky cases. The idea of splitting the codebase for NG is also worrisome – NG sounds like another v0.12 branch. I'd rather iterate on the NG concepts behind feature flags, and get that functionality out into all io.js users hands quicker (and see the breakage in core faster!) than to develop it on a separate branch.

Whatever we come up with needs not only to alleviate people's concerns about this but make it clear that we won't make the mistakes node.js has made in the past.

I agree that we need to alleviate people's concerns, but it's nearly impossible to tell a good idea from a mistake with certainty until it's already implemented and in people's hands. We should own up to that by stating that we will, when necessary, admit to API mistakes. We commit to cleaning them up and to giving clear, repeatable instructions on how to migrate away from those APIs when they happen, whether that be through compatibility modes via require.extensions, go fix-style rewriters, or step-by-step guides on what will change and how to fix it ahead of time.

What do you think about these alternatives as a replacement for "will not break backwards compat in JS API"?

  • Will indefinitely support JS API from prior releases.
  • Will not alter the public API in a breaking manner or remove support for existing JS APIs.

I'm advocating for something like the following:

"io.js is a conservative, stable project. Changes will be judged based on their value relative to the breakage they could introduce in the ecosystem. Breaking changes will not be introduced without going through a widely-communicated deprecation process, giving downstream code ample time and instruction on how to adapt to the new codebase. Breaking changes will never be introduced in bulk. Frozen APIs will only accept breaking changes for security fixes."

Member

Fishrock123 commented Feb 14, 2015

"io.js is a conservative, stable project. Changes will be judged based on their value relative to the breakage they could introduce in the ecosystem. Breaking changes will not be introduced without going through a widely-communicated deprecation process, giving downstream code ample time and instruction on how to adapt to the new codebase. Breaking changes will never be introduced in bulk. Frozen APIs will only accept breaking changes for security fixes."

+1

Member

mikeal commented Feb 14, 2015

io.js is a conservative, stable project. Changes will be judged based on their value relative to the breakage they could introduce in the ecosystem. Breaking changes will not be introduced without going through a widely-communicated deprecation process, giving downstream code ample time and instruction on how to adapt to the new codebase. Breaking changes will never be introduced in bulk. Frozen APIs will only accept breaking changes for security fixes.

This does not fit on a slide deck :(

Member

mikeal commented Feb 14, 2015

I'm not saying we would, for example, delete sys immediately, but we should message ahead of time through deprecation notices and the blog / release notes that these things will be going away.

We tried to do exactly this years ago and had to back it out, and that was when npm was a quarter the size it is now. What exactly has changed that makes us think this would work now? We did a full cycle with warnings and spent a ton of time letting everyone know it was going away, but it didn't matter: it broke too many people to pull it out, so we didn't. Whatever we might think about the technical merits of keeping or removing an API, the largest consideration in whether we can actually remove it will be how many people we break.

The other side of this is that the npm ecosystem rapidly heals around breakage from core – even faster when we telegraph it.

I don't think this is true. Breaks in compatibility between versions persist for quite a while because of the same deep dep map math I mentioned earlier. No matter how fast we get the maintainers of packages to fix themselves it still takes a ton of time to get all of the people depending on them to bump their required versions.

The NG solution is interesting, but it's a solution we only get to use once, which means we have to get the entire reworked API right the first time it's released.

I don't think this is accurate either. First of all, we have an undefined amount of time to work on the new API in a branch with only nightly releases where we can change and break compat whenever we like.

We also have the option of building the new stdlib the same way readable-stream is built and we can publish most of it to npm for use today if we run it through 6to5.
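As a rough sketch of that readable-stream-style approach (all names and versions here are hypothetical): a piece of the new stdlib lives in its own package, is written in ES6+, and is transpiled with 6to5 before being published to npm, so users on today's runtimes can depend on it directly.

```json
{
  "name": "iojs-streams-ng",
  "version": "0.0.1",
  "description": "Hypothetical: a core stdlib module published standalone to npm",
  "main": "lib/index.js",
  "scripts": {
    "prepublish": "6to5 src --out-dir lib"
  },
  "devDependencies": {
    "6to5": "^3.0.0"
  }
}
```

The appeal of this layout is the same as readable-stream's: core can iterate on the module in-tree while userland pins whatever published version it was tested against.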

Most importantly, this isn't really a "one time thing." This is actually the latest mechanism in a history of mechanisms TC39 has been using to bring the language forward. It's just another way to define a set of future code that falls under new rules like "strict mode." 5 years from now we could find ourselves using a similar method if the standard style of JavaScript changes dramatically enough and we won't have to make such a drastic change unless it does.

Also, if you want to come up with a better way to handle breaking changes in a way that won't break the ecosystem you'll have a new shot at it when we build NG :)

We should own up to that – by stating that we will, when necessary, own up to API mistakes.

Do you have an example where we were actually able to remove an API that we all know was a mistake? I can't actually think of one. We certainly should own up to the fact that these are mistakes, I think we are doing that with domains but there's a gulf between admitting something was a mistake and breaking everyone who had previously depended on it.

io.js is a conservative, stable project. Changes will be judged based on their value relative to the breakage they could introduce in the ecosystem. Breaking changes will not be introduced without going through a widely-communicated deprecation process, giving downstream code ample time and instruction on how to adapt to the new codebase. Breaking changes will never be introduced in bulk. Frozen APIs will only accept breaking changes for security fixes.

This isn't something we can message properly. I understand what you're saying, because I'm in TC meetings and I know all the people involved and so I can trust it, but someone who isn't familiar with this project doesn't know what this actually means. Their question is simple: "are you going to break this application that I just wrote?" And we want the answer to be something that makes them feel comfortable enough to build their app on this platform.

I don't want to lie to people, but we need to find a policy here that we can actually message well. Small changes in error messages and defaulting to keep-alive don't actually break apps or the ecosystem in a significant way and I don't see a future where we will have the ability to do anything larger in scope than that other than something like NG.

Member

mikeal commented Feb 14, 2015

How is this for a statement:

The single most important consideration in any
contribution to io.js is how many applications it
might affect. Changes that could break a non-trivial
number of applications will not be accepted, and
no documented JS API will be removed.

Notice the use of the word removed. Being present doesn't mean an API is still supported; it could literally be a noop, so long as it doesn't break just from being used. This is especially important for domains: while domains might continue to exist as a module in the codebase, you can imagine a future where it traps less and less, and potentially doesn't trap anything at all, but the modules that require it to call .bind() don't throw.

Of course, if we make those kinds of changes we'll need to bump the major version.

Member

sam-github commented Feb 16, 2015

I think io.js should delete APIs that were mistakes. Don't do it every month, but do do it every year. Too often and people hate you. Too infrequently, and people grow to assume DOS3.1-like stability and develop a culture of not accepting any breaking changes (python3, I'm looking at you). It's a delicate balance, maybe, but Ruby and Lua break with big new releases, and the community accepts it, as long as the reasons are clearly articulated and the changes are obviously improvements.

node moved too quickly from rapid exploratory API development to a frozen-in-amber API, IMHO. 0.8, 0.10, 0.12, 1.0... we've had a long run of API frozenness; let's move on before ancient mistakes are frozen in place forever.

Unlike other languages, node/io.js has the option of leaning heavily on npm to deliver features, in particular because multiple versions of a module can be used simultaneously. Another approach would be for io.js to reduce its API to something minimal, but given the rate of feature addition and where it started, io.js is pretty far from minimal... we'd have to delete domains, streams, cluster, and probably http and https at least to even pretend to be a minimal async io core. Perhaps not a popular way to go, since it would make it difficult to write pure io.js code of any kind without pulling in deps from npm, but it would give both core and user-land the freedom to innovate.

Member

mikeal commented Feb 19, 2015

Closing here, continued discussion will need to take place on #886

@mikeal mikeal closed this Feb 19, 2015

@ghost ghost referenced this issue Feb 20, 2015

Closed

Roadmap #886

@rvagg rvagg removed the tc-agenda label Mar 11, 2015
