
Add minutes for 2026-04-01 meeting #1848

Open
aduh95 wants to merge 1 commit into main from 2026-04-01

Conversation

aduh95 (Contributor) commented Apr 2, 2026

Closes: #1845

* James: if we don't say anything, we are not encouraging people to use AI or not. The focus is not to promote AI, whether we like these tools or not. Are our existing code review processes enough to review these? We still have to read the code. Are we going to reject a valid bugfix because it was written by AI?
* Jakob: AI responses are designed to look legitimate and plausible. It takes an extra level of scrutiny to review this. It tries to ... you, especially if you don't know if it's there.
* James: ... Everybody is agreeing that we should be made aware that a contribution was AI-generated. Be honest. Why are the existing processes not enough?
* Fedor: I agree that honesty should be encouraged. It reminds me of the "master/slave" discussion, but at the same time it's not sufficient in other ways. It resulted in Node becoming more inclusive long term. Historically, measuring only technical merits is insufficient for a large project. OpenJS encourages the use of AI given that statement in the AI policy.
Suggested change:

- * Fedor: I agree that honesty should be encouraged. It remind me of "master/slave" discussion, but at the same time it's not sufficient in other ways. It resulted in Node to be more inclusive long term. Historically measuring only technical merits is insufficient for large project. OpenJS encourages the use of AI given that statement in the AI policy.
+ * Fedor: I agree that honesty should be encouraged. (The question of sufficiency of the existing code review process) reminds me of the removal of "master/slave" terminology from the core. There is no technical reason not to use this terminology in the code, but at the same time saying that it is technically valid is not sufficient for our community in other ways. It resulted in Node becoming more inclusive long term. Historically, measuring only technical merits is insufficient for a large project. OpenJS encourages the use of AI given that statement in the AI policy.

* Matteo: AI assistance helps folks contribute; the number of contributors is now back to the number it was in 2016. Having a global ban on AI would mean that for many first-time contributors, their first interaction with the project would be a block because they are using the wrong tool. Also, we should not incentivize folks to lie.
* James: nobody has made explicit why the current set of policies is not enough to cover AI-assisted engineering.
* Fedor: I am glad that we are seeing an influx of new contributors. AI companies are known to game productivity metrics that do not reflect reality. Students that use AI are learning worse than students that do not. We are lowering the barrier for contributing, but we are raising the barrier for becoming contributors.
Our policies are inherited from OpenJS. I don't think we can say that our policies are insufficient. I don't see how our code review policies are not enough. But ... (can somebody fill?)
Suggested change:

- Our policies are inherited from OpenJS. I don't think we can say that our policies are insufficient. I don't see how our code review policies are not enough. But ... (can somebody fill?)
+ Our policies are inherited from OpenJS, so I don't think we can say that our policies are sufficient. If we choose inaction, the OpenJS policies will apply to Node.js too, and since the policy document encourages AI use, Node.js will be encouraging AI use too.

* Antoine: Wouldn't that incentivize folks to lie or stop contributing?
* Fedor: This is a guideline. It's ok for people to lie. We need to be strong and aspirational, and encourage people to do what's right.
* Ruy: I was reading the commentary from the Claude Code source leak about hiding the fact that a contribution was done with AI.
* Fedor: there are many things out there and we should not be using them, like assault rifles. The Claude Code shows that we should have an ethical discussion.
Suggested change:

- * Fedor: there are many things out there and we should not be using them, like assault rifles. The Claude Code shows that we should have an ethical discussion.
+ * Fedor: there are many things out there and we should not be using them, like assault rifles. The Claude Code source code leak that we saw recently shows that we should have a deep discussion on the ethics of it being used for writing Node.js code.

The policy is in the same spirit as the Linux kernel policy. AI allows for innovation. K8s, React, and PyTorch adopted similar policies to enable these contributions. It was approved unanimously by the OpenJS Board. <https://openjsf.cdn.prismic.io/openjsf/aca4d5GXnQHGZDiZ_OpenJS_AI_Coding_Assistants_Policy.pdf>.
* Fedor: I'm not in agreement with this policy, as it's unethical. Most companies are adopting policies where the contributor is responsible for the contribution. When you review a PR it's designed to look correct/plausible, they remove tests, they change tests, and the code does not work as intended. It's just hard to audit it correctly. We are just shifting the responsibility of using AI fully.
Suggested change:

- When you review a PR it's designed to look correct/plausible, they remove tests, they change tests, and the code does not work as intended. It's just hard to audit it correctly. We are just shifting the responsibility of using AI fully.
+ When you review an AI-generated PR the code is designed to look correct/plausible. AI is known to remove tests or change them, and the code does not work as intended. Unlike a regular pull request it is not a review but an audit, and it's just hard to audit it correctly. By saying "you are responsible for the code you write" we are just shifting the responsibility of this problem to the contributor instead of addressing it fully.

It's the reason for the TSC to exist. Fedor thinks AI is antithetical to Open Source as it is, at the limit of the MIT license.
A lot of the aspiration we give to people that contribute is that they are given attribution.
AI is designed to remove "attribution."
As the governing body of Node.js, we should reject the use of AI completely. Fedor things should be written by humans.
Suggested change:

- As the governing body of Node.js, we should reject the use of AI completely. Fedor things should be written by humans.
+ As the governing body of Node.js, we should reject the use of AI completely. Fundamental platforms should be written by humans.


Merging this pull request may close: Node.js Technical Steering Committee (TSC) Meeting 2026-04-01
