indutny reviewed Apr 3, 2026
| * James: if we don't say anything, we are not encouraging people to use AI or not. The focus is not to promote AI, whether we like these tools or not. Are our existing code review processes enough to review these? We still have to read the code. Are we going to reject a valid bugfix because it was written by AI? | ||
| * Jakob: AI responses are designed to look legitimate and plausible. It takes an extra level of scrutiny to review this. It tries to ... you, especially if you don't know if it's there. | ||
| * James: ... Everybody is agreeing that we should be made aware that a contribution was AI-gen. Be honest. Why are the existing processes not enough? | ||
| * Fedor: I agree that honesty should be encouraged. It reminds me of the "master/slave" discussion, but at the same time it's not sufficient in other ways. It resulted in Node being more inclusive long term. Historically, measuring only technical merits is insufficient for a large project. OpenJS encourages the use of AI given that statement in the AI policy. |
Suggested change
| * Fedor: I agree that honesty should be encouraged. It reminds me of the "master/slave" discussion, but at the same time it's not sufficient in other ways. It resulted in Node being more inclusive long term. Historically, measuring only technical merits is insufficient for a large project. OpenJS encourages the use of AI given that statement in the AI policy. | |
| * Fedor: I agree that honesty should be encouraged. (The question of whether the existing code review process is sufficient) reminds me of the removal of "master/slave" terminology from the core. There is no technical reason not to use this terminology in the code, but at the same time saying that it is technically valid is not sufficient for our community in other ways. It resulted in Node being more inclusive long term. Historically, measuring only technical merits is insufficient for a large project. OpenJS encourages the use of AI given that statement in the AI policy. |
indutny reviewed Apr 3, 2026
| * Matteo: AI assistance helps folks contribute; the number of contributors is now back to what it was in 2016. Having a global ban on AI would mean that for many first-time contributors, their first interaction with the project would be a block because they are using the wrong tool. Also, we should not incentivize folks to lie. | ||
| * James: nobody has explained why the current set of policies is not enough to cover AI-assisted engineering. | ||
| * Fedor: I am glad that we are seeing an influx of new contributors. AI companies are known to game productivity metrics that do not reflect reality. Students that use AI are learning worse than students that do not. We are lowering the barrier for contributing, but we are raising the barrier for becoming contributors. | ||
| Our policies are inherited from OpenJS. I don't think we can say that our policies are insufficient. I don't see how our code review policies are not enough. But ... (can somebody fill?) |
Suggested change
| Our policies are inherited from OpenJS. I don't think we can say that our policies are insufficient. I don't see how our code review policies are not enough. But ... (can somebody fill?) | |
| Our policies are inherited from OpenJS, so I don't think we can say that our policies are sufficient. If we choose inaction, the OpenJS policies will apply to Node.js too, and since the policy document encourages AI use, Node.js will be encouraging AI use too. |
indutny reviewed Apr 3, 2026
| * Antoine: Wouldn't that incentivize folks to lie or stop contributing? | ||
| * Fedor: This is a guideline. It's ok for people to lie. We need to be strong and aspirational, and encourage people to do what's right. | ||
| * Ruy: I was reading the commentary from the Claude Code source leak about hiding the fact that a contribution was done with AI. | ||
| * Fedor: there are many things out there and we should not be using them, like assault rifles. The Claude Code leak shows that we should have an ethical discussion. |
Suggested change
| * Fedor: there are many things out there and we should not be using them, like assault rifles. The Claude Code leak shows that we should have an ethical discussion. | |
| * Fedor: there are many things out there and we should not be using them, like assault rifles. The Claude Code source code leak that we saw recently shows that we should have a deep discussion on the ethics of its use for writing Node.js code. |
indutny reviewed Apr 3, 2026
| The policy is in the same spirit as the Linux Kernel policy. AI allows for innovation. K8s, React, and PyTorch adopted similar policies to enable these contributions. It was approved unanimously by the OpenJS Board. <https://openjsf.cdn.prismic.io/openjsf/aca4d5GXnQHGZDiZ_OpenJS_AI_Coding_Assistants_Policy.pdf>. | ||
| * Fedor: I'm not in agreement with this policy, as it's unethical. Most companies are | ||
| adopting policies where the contributor is responsible for the contribution. | ||
| When you review a PR it's designed to look correct/plausible, they remove tests, they change tests, and the code does not work as intended. It's just hard to audit it correctly. We are just shifting the responsibility of using AI fully. |
Suggested change
| When you review a PR it's designed to look correct/plausible, they remove tests, they change tests, and the code does not work as intended. It's just hard to audit it correctly. We are just shifting the responsibility of using AI fully. | |
| When you review an AI-generated PR the code is designed to look correct/plausible. AI is known to remove tests or change them, and the code does not work as intended. Unlike a regular Pull Request it is not a review but an audit, and it's just hard to audit it correctly. By saying "you are responsible for the code you write" we are just shifting the responsibility of this problem to the contributor instead of addressing it fully. |
indutny reviewed Apr 3, 2026
| It's the reason for the TSC to exist. Fedor thinks AI is antithetical to Open Source as it is, at the limit of the MIT license. | ||
| A lot of the aspiration we give to people that contribute is that they are given attribution. | ||
| AI is designed to remove "attribution." | ||
| As the governing body of Node.js, we should reject the use of AI completely. Fedor things should be written by humans. |
Suggested change
| As the governing body of Node.js, we should reject the use of AI completely. Fedor things should be written by humans. | |
| As the governing body of Node.js, we should reject the use of AI completely. Fundamental platforms should be written by humans. |
Closes: #1845