Add an LLM policy for rust-lang/rust#1040

Open
jyn514 wants to merge 3 commits into rust-lang:master from jyn514:llm-policy

Conversation

Member

@jyn514 jyn514 commented Apr 17, 2026

Summary

This document establishes a policy for how LLMs can be used when contributing to rust-lang/rust. Subtrees, submodules, and dependencies from crates.io are not in scope. Other repositories in the rust-lang organization are not in scope.

This policy is intended to live in Forge as a living document, not as a dead RFC. It will be linked from CONTRIBUTING.md in rust-lang/rust as well as from the rustc- and std-dev-guides.

Moderation guidelines

This PR is preceded by an enormous amount of discussion on Zulip. Almost every conceivable angle has been discussed to death; there have been upwards of 3000 messages, not even counting discussion on GitHub. We initially doubted whether we could reach consensus at all.

Therefore, we ask that the scope of this PR be bounded specifically to the policy itself. In particular, we mark several topics as out of scope below. We still consider these topics to be important; we simply do not believe this is the right place to discuss them.

No comment on this PR may mention the following topics:

  • Long-term social or economic impact of LLMs
  • The environmental impact of LLMs
  • Anything to do with the copyright status of LLM output
  • Moral judgements about people who use LLMs

We have asked the moderation team to help us enforce these rules.

Feedback guidelines

We are aware that parts of this policy will make some people very unhappy. As you are reading, we ask you to consider the following.

  • Can you think of a concrete improvement to the policy that addresses your concern? Consider:
    • Whether your change will make the policy harder to moderate
    • Whether your change will make it harder to come to a consensus
  • Does your concern need to be addressed before merging or can it be addressed in a follow-up?
    • Keep in mind the cost of not creating a policy.

If your concern is for yourself or for your team

  • What are the specific parts of your workflow that will be disrupted?
    • In particular we are only interested in workflows involving rust-lang/rust. Other repositories are not affected by this policy and are therefore not in scope.
  • Can you live with the disruption? Is it worth blocking the policy over?

Previous versions of this document were discussed on Zulip, and we have made edits in response to suggestions there.

Motivation

  • Many people find LLM-generated code and writing deeply unpleasant to read or review.
  • Many people find LLMs to be a significant aid to learning and discovery.
  • rust-lang/rust is currently dealing with a deluge of low-effort "slop" PRs primarily authored by LLMs.
    • Having a policy makes these easier to moderate, without having to take every single instance on a case-by-case basis.

This policy is not intended as a debate over whether LLMs are a good or bad idea, nor over the long-term impact of LLMs. It is only intended to set out the future policy of rust-lang/rust itself.

Drawbacks

  • This bans some valid usages of LLMs. We intentionally err on the side of banning too much rather than too little in order to make the policy easy to understand and moderate.
  • This intentionally does not address the moral, social, and environmental impacts of LLMs. These topics have been extensively discussed on Zulip without reaching consensus, but this policy is relevant regardless of the outcome of these discussions.
  • This intentionally does not attempt to set a project-wide policy. We have attempted to come to a consensus for upwards of a month without significant progress. We are cutting our losses so we can have something rather than ad hoc moderation decisions.
  • This intentionally does not apply to subtrees of rust-lang/rust. We don't have the same moderation issues there, so we don't have time pressure to set a policy in the same way.

Rationale and alternatives

  • We could create a project-wide policy, rather than scoping it to rust-lang/rust. This has the advantage that everyone knows what the policy is everywhere, and that it's easy to make things part of the mono-repo at a later date. It has the disadvantage that we think it is nigh-impossible to get everyone to agree. There are also reasons for teams to have different policies; for example, the standard for correctness is much higher within the compiler than within Clippy.
  • We could have different standards for people in the Rust project than for new contributors. That would make moderation much easier, and allow us to experiment with additional LLM use. However, it reinforces existing power structures, creates more of a gap between authors and reviewers, and feels "unfriendly" to new contributors.
  • We could have a more lenient policy that allows "responsible and appropriate" use of LLMs. This raises the question of what "responsible and appropriate" means. The usual suggestion is "self-review, and judging the change by the same standard as any other change"; but this neglects the reputational and social harm of work that "feels" LLM generated. It also makes our moderation policy much harder to understand, and increases the likelihood of re-litigating each moderation decision.
  • We could have a more strict policy that removes the threshold of originality condition. This has the advantage that our policy becomes easier to moderate and understand. It has the disadvantage that it becomes easy for people to intend to follow the policy, but be put in a position where their only choices are to either discard the PR altogether, rewrite it from scratch, or tell "white lies" about whether an LLM was involved.
  • We could have a more strict policy that bans LLMs altogether. It seems unlikely we will be able to agree on this, and we believe attempting it will cause many people to leave the project.

Prior art

This prior art section is taken almost entirely from Jane Lusby's summary of her research, although we have taken the liberty of moving the Rust project's prior art to the top. We thank her for her help.

Rust

  • Moderation team's spam policy
  • Compiler team's "burdensome PRs" policy

Other organizations

These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.

  • full ban
    • postmarketOS - also explicitly bans encouraging others to use AI for solving problems related to postmarketOS - multi-point, ethics-based rationale with citations included
    • zig
      • philosophical, cites Profession (novella)
      • rooted in concerns around the construction and origins of original thought
    • servo
      • more pragmatic, directly lists concerns around AI, fairly concise
    • qemu
      • pragmatic, focuses on copyright and licensing concerns
      • explicitly allows AI for exploring APIs, debugging, and other non-generative assistance; other policies do not explicitly ban this or mention it in any way
  • allowed with supervision, human is ultimately responsible
    • scipy
      • strict attribution policy including name of model
    • llvm
    • blender
    • linux kernel
      • quite concise but otherwise seems the same as many in this category
    • mesa
      • framed as a contribution policy rather than an AI policy; AI is listed as a tool that can be used, but the same requirement applies that the author must understand the code they contribute; seems to leave room for partial understanding from new contributors.

        Understand the code you write at least well enough to be able to explain why your changes are beneficial to the project.

    • forgejo
      • bans AI for review; does not explicitly require contributors to understand code generated by AI. One could interpret the "accountability for contribution lies with contributor even if AI is used" line as implying this requirement, though their version seems poorly worded IMO.
    • firefox
    • ghostty
      • pro-AI but views "bad users" as the source of issues with it and the only reason for what ghostty considers a "strict AI policy"
    • fedora
      • clearly inspired many of the above and is cited by them, but is definitely framed more pro-AI than the derived policies tend to be
  • curl
    • does not explicitly require that humans understand contributions; otherwise the policy is similar to the above policies
  • linux foundation
    • encourages usage; focuses on legal liability; mentions that tooling exists to help automate managing legal liability, but does not mention specific tools
  • In progress

Unresolved questions

See the "Moderation guidelines" and "Drawbacks" sections for a list of topics that are out of scope.

Rendered

@rustbot rustbot added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Apr 17, 2026
Collaborator

rustbot commented Apr 17, 2026

r? @jieyouxu

rustbot has assigned @jieyouxu.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

Why was this reviewer chosen?

The reviewer was selected based on:

  • Fallback group: @Mark-Simulacrum, internal-sites
  • @Mark-Simulacrum, internal-sites expanded to Mark-Simulacrum, Urgau, ehuss, jieyouxu
  • Random selection from Mark-Simulacrum, Urgau, ehuss, jieyouxu

Member Author

jyn514 commented Apr 17, 2026

@rustbot label T-libs T-compiler T-rustdoc T-bootstrap

@rustbot rustbot added T-bootstrap Team: Bootstrap T-compiler Team: Compiler T-libs Team: Library / libs T-rustdoc Team: rustdoc labels Apr 17, 2026
Comment thread src/policies/llm-usage.md Outdated
## Summary
[summary]: #summary

This document establishes a policy for how LLMs can be used when contributing to `rust-lang/rust`.
Subtrees, submodules, and dependencies from crates.io are not in scope.
Other repositories in the `rust-lang` organization are not in scope.

This policy is intended to live in [Forge](https://forge.rust-lang.org/) as a living document, not as a dead RFC.
It will be linked from `CONTRIBUTING.md` in rust-lang/rust as well as from the rustc- and std-dev-guides.

## Moderation guidelines

This PR is preceded by [an enormous amount of discussion on Zulip](https://rust-lang.zulipchat.com/#narrow/channel/588130-project-llm-policy).
Almost every conceivable angle has been discussed to death;
there have been upwards of 3000 messages, not even counting discussion on GitHub.
We initially doubted whether we could reach consensus at all.

Therefore, we ask that the scope of this PR be bounded specifically to the policy itself.
In particular, we mark several topics as out of scope below.
We still consider these topics to be important; we simply do not believe this is the right place to discuss them.

No comment on this PR may mention the following topics:

- Long-term social or economic impact of LLMs
- The environmental impact of LLMs
- Anything to do with the copyright status of LLM output
- Moral judgements about people who use LLMs

We have asked the moderation team to help us enforce these rules.

## Feedback guidelines

We are aware that parts of this policy will make some people very unhappy.
As you are reading, we ask you to consider the following.

- Can you think of a *concrete* improvement to the policy that addresses your concern? Consider:
  - Whether your change will make the policy harder to moderate
  - Whether your change will make it harder to come to a consensus
- Does your concern need to be addressed before merging or can it be addressed in a follow-up?
  - Keep in mind the cost of *not* creating a policy.

### If your concern is for yourself or for your team
- What are the *specific* parts of your workflow that will be disrupted?
  - In particular we are *only* interested in workflows involving `rust-lang/rust`.
    Other repositories are not affected by this policy and are therefore not in scope.
- Can you live with the disruption? Is it worth blocking the policy over?

---

Previous versions of this document were discussed on Zulip, and we have made edits in response to suggestions there.

## Motivation
[motivation]: #motivation

- Many people find LLM-generated code and writing deeply unpleasant to read or review.
- Many people find LLMs to be a significant aid to learning and discovery.
- `rust-lang/rust` is currently dealing with a deluge of low-effort "slop" PRs primarily authored by LLMs.
  - Having *a* policy makes these easier to moderate, without having to take every single instance on a case-by-case basis.

This policy is *not* intended as a debate over whether LLMs are a good or bad idea, nor over the long-term impact of LLMs.
It is only intended to set out the future policy of `rust-lang/rust` itself.

## Drawbacks
[drawbacks]: #drawbacks

- This bans some valid usages of LLMs.
  We intentionally err on the side of banning too much rather than too little in order to make the policy easy to understand and moderate.
- This intentionally does not address the moral, social, and environmental impacts of LLMs.
  These topics have been extensively discussed on Zulip without reaching consensus, but this policy is relevant regardless of the outcome of these discussions.
- This intentionally does not attempt to set a project-wide policy.
  We have attempted to come to a consensus for upwards of a month without significant progress.
  We are cutting our losses so we can have *something* rather than ad hoc moderation decisions.
- This intentionally does not apply to subtrees of rust-lang/rust.
  We don't have the same moderation issues there, so we don't have time pressure to set a policy in the same way.

## Rationale and alternatives
[rationale-and-alternatives]: #rationale-and-alternatives

- We could create a project-wide policy, rather than scoping it to `rust-lang/rust`.
  This has the advantage that everyone knows what the policy is everywhere, and that it's easy to make things part of the mono-repo at a later date.
  It has the disadvantage that we think it is nigh-impossible to get everyone to agree.
  There are also reasons for teams to have different policies; for example, the standard for correctness is much higher within the compiler than within Clippy.
- We could have a more strict policy that removes the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html) condition.
  This has the advantage that our policy becomes easier to moderate and understand.
  It has the disadvantage that it becomes easy for people to intend to
  follow the policy, but be put in a position where their only choices
  are to either discard the PR altogether, rewrite it from scratch, or
  tell "white lies" about whether an LLM was involved.
- We could have a more strict policy that bans LLMs altogether.
  It seems unlikely we will be able to agree on this, and we believe attempting it will cause many people to leave the project.

## Prior art
[prior-art]: #prior-art

This prior art section is taken almost entirely from [Jane Lusby's summary of her research](rust-lang/leadership-council#273 (comment)),
although we have taken the liberty of moving the Rust project's prior art to the top.
We thank her for her help.

### Rust
- [Moderation team's spam policy](https://github.com/rust-lang/moderation-team/blob/main/policies/spam.md/#fully-or-partially-automated-contribs)
- [Compiler team's "burdensome PRs" policy](rust-lang/compiler-team#893)

### Other organizations

These are organized along a spectrum of AI friendliness, where top is least friendly and bottom is most friendly.
- full ban
  - [postmarketOS](https://docs.postmarketos.org/policies-and-processes/development/ai-policy.html)
    - also explicitly bans encouraging others to use AI for solving problems related to postmarketOS
    - multi-point, ethics-based rationale with citations included
  - [zig](https://ziglang.org/code-of-conduct/)
    - philosophical, cites [Profession (novella)](https://en.wikipedia.org/wiki/Profession_(novella))
    - rooted in concerns around the construction and origins of original thought
  - [servo](https://book.servo.org/contributing/getting-started.html#ai-contributions)
    - more pragmatic, directly lists concerns around AI, fairly concise
  - [qemu](https://www.qemu.org/docs/master/devel/code-provenance.html#use-of-ai-content-generators)
    - pragmatic, focuses on copyright and licensing concerns
    - explicitly allows AI for exploring APIs, debugging, and other non-generative assistance; other policies do not explicitly ban this or mention it in any way
- allowed with supervision, human is ultimately responsible
  - [scipy](https://github.com/scipy/scipy/pull/24583/changes)
    - strict attribution policy including name of model
  - [llvm](https://llvm.org/docs/AIToolPolicy.html)
  - [blender](https://devtalk.blender.org/t/ai-contributions-policy/44202)
  - [linux kernel](https://kernel.org/doc/html/next/process/coding-assistants.html)
    - quite concise but otherwise seems the same as many in this category
  - [mesa](https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/docs/submittingpatches.rst)
    - framed as a contribution policy rather than an AI policy; AI is listed as a tool that can be used, but the same requirement applies that the author must understand the code they contribute; seems to leave room for partial understanding from new contributors.
        > Understand the code you write at least well enough to be able to explain why your changes are beneficial to the project.
  - [forgejo](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md)
    - bans AI for review; does not explicitly require contributors to understand code generated by AI.
      One could interpret the "accountability for contribution lies with contributor even if AI is used" line as implying this requirement, though their version seems poorly worded IMO.
  - [firefox](https://firefox-source-docs.mozilla.org/contributing/ai-coding.html)
  - [ghostty](https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.md)
    - pro-AI but views "bad users" as the source of issues with it and the only reason for what ghostty considers a "strict AI policy"
  - [fedora](https://communityblog.fedoraproject.org/council-policy-proposal-policy-on-ai-assisted-contributions/)
    - clearly inspired many of the above and is cited by them, but is definitely framed more pro-AI than the derived policies tend to be
- [curl](https://curl.se/dev/contribute.html#on-ai-use-in-curl)
  - does not explicitly require that humans understand contributions; otherwise the policy is similar to the above policies
- [linux foundation](https://www.linuxfoundation.org/legal/generative-ai)
  - encourages usage; focuses on legal liability; mentions that tooling exists to help automate managing legal liability, but does not mention specific tools
- In progress
  - NixOS
    - NixOS/nixpkgs#410741

## Unresolved questions
[unresolved-questions]: #unresolved-questions

See the "Moderation guidelines" and "Drawbacks" sections for a list of topics that are out of scope.
Member

@jieyouxu jieyouxu left a comment


I really like this version, and thanks a ton for working on it. Specifically:

  • It doesn't try to dump entire walls of text, which is unfortunately a good way to be sure nobody reads it. Instead, it gives you concrete examples and a guiding rule of thumb for uncovered scenarios, and acknowledges upfront that it surely cannot be exhaustive.
  • I also like where it points out the nuance and recognizes the uncertainties.
  • I like that it covers both "producers" and "consumers" (with nuance that reviewers can also technically use LLMs in ways that are frustrating to the PR authors!)

I left a few suggestions / nits, but even without them this is still a very good start IMO.

(Will not leave an explicit approval until we establish wider consensus, which will likely take the form of a 4-team joint FCP.)

Comment thread src/policies/llm-usage.md Outdated
Comment thread src/policies/llm-usage.md
Comment thread src/policies/llm-usage.md
Comment thread src/policies/llm-usage.md
Comment thread src/policies/llm-usage.md Outdated
Comment thread src/policies/llm-usage.md Outdated
@ChayimFriedman2

The links to Zulip are project-private, FWIW.

Member Author

jyn514 commented Apr 17, 2026

The links to Zulip are project-private, FWIW.

I'm aware. This PR is targeted towards Rust project members more so than the broader community.

Member

@davidtwco davidtwco left a comment


I'm happy with this as an initial policy for the rust-lang/rust repository.

Comment thread src/policies/llm-usage.md
- Using machine-translation from your native language without posting your original message.
Doing so can introduce new miscommunications that weren't there originally, and prevents someone who speaks the language from providing a better translation.
- ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- ℹ️ This policy also applies to non-LLM machine translations such as Google Translate.

@clarfonthey clarfonthey Apr 17, 2026


I am pretty sure Google Translate uses LLMs right now, so rather than giving a specific example, I think this would be better reworded to just say that it applies to all machine translation, whether or not you're sure an LLM is being used.

Member Author

I think going into that much detail will just confuse people.


I think that this parenthetical obscures more than it clarifies. I would just cut it, or maybe go with "Using machine translation (including e.g. Google Translate)"

Comment thread src/policies/llm-usage.md

## Appendix

### No witch hunts

@clarfonthey clarfonthey Apr 17, 2026


I don't have a concrete suggestion so won't block on this, but personally, I've only ever used the term "witch hunt" colloquially and don't think it's a very good choice for a policy.

The term is loaded enough that people might try to overzealously avoid classifying what they're doing as a witch hunt because "I'm not being that extreme/rude about it," rather than focusing on how it's both a waste of time and hostile to the project to create an environment where people are constantly questioned.

If I had to suggest an alternative, "Don't be a cop" is maybe a better way to word this, but again, I don't really have a good alternative and would rather not block on this.


@alice-i-cecile alice-i-cecile Apr 17, 2026


Having read this section, I agree. It's not really about witch hunting. To me, the key distinction here is that witch hunts involve an element of "gathering a mob" that is not mentioned at all here.

"It is not your job to play detective" feels like it captures the spirit better here.

Comment thread src/policies/llm-usage.md
All contributions are your responsibility; you cannot place any blame on an LLM.
- ℹ️ This includes when asking people to address review comments originally authored by an LLM. See "review bots" under ⚠️ above.

### "originally authored"

@clarfonthey clarfonthey Apr 17, 2026


Personally, I think this would be better to put near the beginning instead of the end, since it is defining a term for use in the document. I don't think this is a pressing concern, just for flow reasons.

Member Author


I want the beginning of the policy to start with what's banned and allowed, very clearly and plainly. I think "originally authored" is clear enough on its own that it's fine for it to come after the main text.

Comment thread src/how-to-start-contributing.md Outdated
Comment thread src/policies/llm-usage.md
Comment thread src/policies/llm-usage.md

Therefore, the guidelines are roughly as follows:

> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to **create**.

@alice-i-cecile alice-i-cecile Apr 17, 2026


Suggested change
> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to **create**.
> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. Do not use them to **create**.

Wording nit; I think that this is a bit clearer and more direct.

Comment thread src/policies/llm-usage.md
- Writing dev-tools for your own personal use using an LLM, as long as you don't try to merge them into `rust-lang/rust`.
- Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.
Please refer to [our guidelines for fuzzers](https://rustc-dev-guide.rust-lang.org/fuzzing.html#guidelines).
- ℹ️ This also includes reviewers who use LLMs to discover bugs in unmerged code.

@alice-i-cecile alice-i-cecile Apr 17, 2026


"Bugs" can be read narrowly. It might be useful to have a parenthetical like "bugs (or other flaws)".

Some of the most useful things to find as a reviewer are not true bugs: they're "questionable code duplication", "dodgy abstraction" or "easily fixed limitation".

Comment thread src/policies/llm-usage.md
#### ⚠️ Allowed with caveats
The following are decided on a case-by-case basis.
Please avoid them where possible.
In general, existing contributors will be treated more leniently here than new contributors.

@alice-i-cecile alice-i-cecile Apr 17, 2026


This distinction needs a rationale to avoid feeling unfair.

Comment thread src/policies/llm-usage.md
- Using an LLM as a "review bot" for PRs.
- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM. They **must not** post under a personal account.
- ℹ️ Review bots that post without being approved by a maintainer will be banned.
- ℹ️ If a linter already exists for the language you're writing, we strongly suggest using that linter instead of or in addition to the LLM.

@alice-i-cecile alice-i-cecile Apr 17, 2026


Suggested change
- ℹ️ If a linter already exists for the language you're writing, we strongly suggest using that linter instead of or in addition to the LLM.
- ℹ️ LLM reviews should not be used for deterministic checks, such as formatting or linting. If such tools exist for your code base, we strongly suggest using and improving them.

"Deterministic checks" is a bit fuzzy, but that's the core idea.

You should not use the expensive, unreliable (and morally suspect) tool when you can do the job better without it, regardless of your feelings on using LLMs in general.

Comment thread src/policies/llm-usage.md
- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM. They **must not** post under a personal account.
- ℹ️ Review bots that post without being approved by a maintainer will be banned.
- ℹ️ If a linter already exists for the language you're writing, we strongly suggest using that linter instead of or in addition to the LLM.
- ℹ️ Please keep in mind that it's easy for LLM reviews to have false positives or focus on trivialities. We suggest configuring it to the "least chatty" setting you can.

@alice-i-cecile alice-i-cecile Apr 17, 2026


Suggested change
- ℹ️ Please keep in mind that it's easy for LLM reviews to have false positives or focus on trivialities. We suggest configuring it to the "least chatty" setting you can.
- ℹ️ Configure LLM review tools to reduce false positives and excessive focus on trivialities, as these are common, exhausting failure modes. Per-PR human guidance to provide context on the work and guide it towards the areas that need the most attention is generally more effective.



First half is just wording/flow changes. Second half is intended as useful advice, although it may not be particularly feasible with respect to most "review bots" per se.

Comment thread src/policies/llm-usage.md

- ✅ Allowed
- ❌ Banned
- ⚠️ Allowed with caveats. Must disclose that an LLM was used.

@alice-i-cecile alice-i-cecile Apr 17, 2026


Do you want a bit of guidance about "which model was used"? Capabilities differ meaningfully, and this can be a useful signal for readers.

Member

@Urgau Urgau left a comment


I like the way this is written. Feels very approachable.

As for the content of the policy, it seems to me like a good place to start.

Comment thread src/policies/llm-usage.md
If it's clear they've broken the rules, point them to this policy; if it's borderline, report it to the mods and move on.

Conversely, lying about whether you've used an LLM is an instant [code of conduct](https://rust-lang.org/policies/code-of-conduct/) violation.
If you are not sure where you fall in this policy, please talk to us.

@alice-i-cecile alice-i-cecile Apr 17, 2026


Suggested change
If you are not sure where you fall in this policy, please talk to us.
If you are not sure where something you would like to do falls in this policy, please talk to us.

Behavior-language, not person-language.


Comment thread src/policies/llm-usage.md
### No witch hunts
["The optimal amount of fraud is not zero"](https://www.bitsaboutmoney.com/archive/optimal-amount-of-fraud/).
Do not try to be the police for whether someone has used an LLM.
If it's clear they've broken the rules, point them to this policy; if it's borderline, report it to the mods and move on.
@alice-i-cecile Apr 17, 2026

Suggested change
If it's clear they've broken the rules, point them to this policy; if it's borderline, report it to the mods and move on.
If it's clear they've broken the rules, point them to this policy; if it's borderline, report it to the mods and move on.
If there is a problem that needs to be fixed (such as excessive bolding, undue hype or a stiff tone), address that directly, regardless of your private suspicions of its origin.


Member

@alice-i-cecile I don't think calling out the specific examples is useful here, and it would add another point to have to reach consensus on. Also, this would be directly instructing people to address that problem rather than the LLM usage, when it is also a perfectly reasonable option to just not engage, or close the PR, or similar.

Comment thread src/policies/llm-usage.md

### Responsibility

All contributions are your responsibility; you cannot place any blame on an LLM.
@alice-i-cecile Apr 17, 2026

Suggested change
All contributions are your responsibility; you cannot place any blame on an LLM.
Your contributions are your responsibility; you cannot place any blame on LLMs that you have used.

Clarity / wording.


Comment thread src/policies/llm-usage.md
All contributions are your responsibility; you cannot place any blame on an LLM.
- ℹ️ This includes when asking people to address review comments originally authored by an LLM. See "review bots" under ⚠️ above.

### "originally authored"
@alice-i-cecile Apr 17, 2026

Suggested change
### "originally authored"
### The meaning of "originally authored"


Suggested change
### "originally authored"
### On "original authorship"

Comment thread src/policies/llm-usage.md
### "originally authored"

This document uses the phrase "originally authored" to mean "text that was generated by an LLM (and then possibly edited by a human)".
No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.
@alice-i-cecile Apr 17, 2026

Suggested change
No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.
Authorship sets the initial style and direction of a piece of work; later editing to alter that is both inefficient and imperfect.

I think this captures the essence of the argument without making claims that are likely to spark debate. "No amount of editing can..." is a very strong claim (see Ship of Theseus) and a weaker form is still completely sufficient to justify this policy.


Comment thread src/policies/llm-usage.md
- Usages that use LLMs for creation or show LLM output to another human are likely banned ❌

This policy is not set in stone.
We can evolve it as we gain more experience working with LLMs.
@alice-i-cecile Apr 17, 2026

Suggested change
We can evolve it as we gain more experience working with LLMs.
We can and likely will evolve it as we gain more experience working with LLMs.


Comment thread src/policies/llm-usage.md
- ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- ℹ️ This policy also applies to non-LLM machine translations such as Google Translate.
- Using an LLM as a "review bot" for PRs.
- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM. They **must not** post under a personal account.
@alice-i-cecile Apr 17, 2026

Suggested change
- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM. They **must not** post under a personal account.
- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM. You **must not** post (or allow a tool to post) LLM reviews verbatim on your personal account unless clearly quoted with your own personal interpretation of the bot's analysis.

Capturing some of the ideas from Zulip discussions on academic norms around quotation.


Comment thread src/policies/llm-usage.md
please have them on-hand, and be available yourself to answer questions about your process.

- Using an LLM to generate a solution to an issue, learning from its solution, and then rewriting it from scratch in your own style.
- Using machine-translation from your native language without posting your original message.
@alice-i-cecile Apr 17, 2026

I think that this could be cleaned up by moving "without posting your original message" down into the sub-bullet point.

Cut here, then make that point about how posting your original message can be very useful, especially if nuance is important.


If this change is adopted, I think that this belongs in the "allowed" category. Other items there have a similar "here's some helpful advice" framing, and I worry that "using translations is kinda-borderline" will be perceived as hostile to readers who do not have native-level English skills.

Comment thread src/policies/llm-usage.md
Please avoid them where possible.
In general, existing contributors will be treated more leniently here than new contributors.
We may ask you for the original prompts or design documents that went into the LLM's output;
please have them on-hand, and be available yourself to answer questions about your process.
@alice-i-cecile Apr 17, 2026

Suggested change
please have them on-hand, and be available yourself to answer questions about your process.
please have them on-hand, and be available to answer questions about your process.


Comment thread src/policies/llm-usage.md
Comment on lines +17 to +18
> LLMs work best when used as a tool to write *better*, not *faster*.

Member

@joshtriplett Apr 17, 2026

Suggested change
> LLMs work best when used as a tool to write *better*, not *faster*.
> In `rust-lang/rust`, please do not use LLMs as a tool to write *faster*.

Having this as a high-level summary is offering a judgement on LLMs that feels like it isn't necessary for the policy, and makes consensus more difficult to reach. For anti-LLM folks it's saying that they work best when used to write "better", which is a point in dispute. I would also expect (but don't want to put words in people's mouths) that for pro-LLM folks the point that they don't work well when used to work faster may be in dispute.

I've tried to rephrase this in a fashion that, rather than expressing a general statement on when "LLMs work best", is instead expressing what is desired *for rust-lang/rust*, as that's the scope of this policy.


Comment thread src/policies/llm-usage.md
- ℹ️ This also applies to issue bodies and PR descriptions.
- ℹ️ See also "machine-translation" in ⚠️ below.
- Documentation that is originally authored by an LLM.
- ℹ️ This includes non-trivial source comments, such as doc-comments or multiple paragraphs of non-doc-comments.
Member

@joshtriplett Apr 17, 2026

Suggested change
- ℹ️ This includes non-trivial source comments, such as doc-comments or multiple paragraphs of non-doc-comments.
- ℹ️ This includes *any* doc comments, or non-trivial source comments.

Reordering this to make it clear first and foremost that "Documentation" includes any doc comments, moving "non-trivial source comments" second. This also drops the quantitative "multiple paragraphs"; some multi-paragraph comments may be trivial, and some one-sentence comments may not be.


Comment thread src/policies/llm-usage.md
- ℹ️ See also "machine-translation" in ⚠️ below.
- Documentation that is originally authored by an LLM.
- ℹ️ This includes non-trivial source comments, such as doc-comments or multiple paragraphs of non-doc-comments.
- ℹ️ This includes compiler diagnostics.
Member

@joshtriplett Apr 17, 2026

Suggested change
- ℹ️ This includes compiler diagnostics.
- ℹ️ This includes compiler diagnostics or similar user-visible output.


Comment thread src/policies/llm-usage.md
- Code changes that are originally authored by an LLM.
- This does not include "trivial" changes that do not meet the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html), which fall under ⚠️ below.
We understand that while asking an LLM research questions it may, unprompted, suggest small changes where there really isn't another way to write it.
However, you must still type out the changes yourself; you cannot give the LLM write access to your source code.
Member

@joshtriplett Apr 17, 2026

Suggested change
However, you must still type out the changes yourself; you cannot give the LLM write access to your source code.
However, you must still type out the changes yourself; you cannot give the LLM write access to your source code. This is not because we are trying to turn you into a typist, but because it is *very easy* to err in the direction of increasingly non-trivial changes if the LLM gets to directly write the code.

Trying to address potential reactions of "why are you making me re-type this?!".


Comment thread src/policies/llm-usage.md
- This does not include "trivial" changes that do not meet the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html), which fall under ⚠️ below.
We understand that while asking an LLM research questions it may, unprompted, suggest small changes where there really isn't another way to write it.
However, you must still type out the changes yourself; you cannot give the LLM write access to your source code.
- We do not accept PRs made up solely of trivial changes.
Member

@joshtriplett Apr 17, 2026

Suggested change
- We do not accept PRs made up solely of trivial changes.
- We do not in general accept PRs made up *solely* of trivial changes, such as non-user-visible typos.

Trying to make sure people get an idea of what "trivial" means here; for instance, typos in user-visible documentation aren't trivial.

Member

@GuillaumeGomez left a comment

Very good start, thanks for writing it!


Comment thread src/policies/llm-usage.md
- We do not accept PRs made up solely of trivial changes.
See [the compiler team's typo fix policy](https://rustc-dev-guide.rust-lang.org/contributing.html#writing-documentation:~:text=Please%20notice%20that%20we%20don%E2%80%99t%20accept%20typography%2Fspellcheck%20fixes%20to%20internal%20documentation).
- See also "learning from an LLM's solution" in ⚠️ below.
- Treating an LLM review as a sufficient condition to merge a change.
Member

@joshtriplett Apr 17, 2026

Suggested change
- Treating an LLM review as a sufficient condition to merge a change.
- Treating an LLM review as a sufficient condition to merge a change, or to reject a change.

Rationale: as noted on the next line, LLM reviews must be advisory-only. Someone contributing should not be forced to care about the LLM review, unless a human who wants to deal with the LLM output evaluates it and posts a human-written review.


Comment thread src/policies/llm-usage.md
- Treating an LLM review as a sufficient condition to merge a change.
LLM reviews, if enabled by a team, **must** be advisory-only.
Teams can have a policy that code can be merged without review, and they can have a policy that code must be reviewed by at least one person,
but they may not have a policy that an LLM counts as a person.
Member

@joshtriplett Apr 17, 2026

Suggested change
but they may not have a policy that an LLM counts as a person.
but they may not have a policy that an LLM review substitutes for a human review.


Comment thread src/policies/llm-usage.md
- Using machine-translation from your native language without posting your original message.
Doing so can introduce new miscommunications that weren't there originally, and prevents someone who speaks the language from providing a better translation.
- ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- ℹ️ This policy also applies to non-LLM machine translations such as Google Translate.
Member

@joshtriplett Apr 17, 2026

Suggested change
- ℹ️ This policy also applies to non-LLM machine translations such as Google Translate.
- ℹ️ This policy applies to any machine translations, including Google Translate.

(This removes the assumption that no LLMs are involved in Google Translate; it's not clear if that's true today, and it may not be true in the future.)


Comment thread src/policies/llm-usage.md
- ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- ℹ️ This policy also applies to non-LLM machine translations such as Google Translate.
- Using an LLM as a "review bot" for PRs.
- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM. They **must not** post under a personal account.
Member

@joshtriplett Apr 17, 2026

Suggested change
- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM. They **must not** post under a personal account.
- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM. They **must not** post under a personal account.
- ℹ️ Review bot accounts must be blockable by individual users via the standard GitHub user-blocking mechanism. (Some GitHub "app" accounts post comments that look like users but cannot be blocked.)

This one might be a controversial suggestion, but I think it's important. Claude and openhands-agent, for instance, are (currently) well-behaved here, and use an account that can be blocked. Copilot and Codex (OpenAI/ChatGPT) and some others are not. People should have the ability to opt out of interactions with such bots just as they can with users.


Comment thread src/policies/llm-usage.md
### "originally authored"

This document uses the phrase "originally authored" to mean "text that was generated by an LLM (and then possibly edited by a human)".
No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.
Member

@joshtriplett Apr 17, 2026

Suggested change
No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.
In the manner the phrase is used in this policy, no amount of editing changes how something was "originally authored"; authorship sets the initial style and it is very hard to change once it's set.

Taking a different approach here, of narrowing the focus to the phrasing in this policy, rather than trying to get people to agree with the fully general statement.


Comment thread src/policies/llm-usage.md
@@ -0,0 +1,116 @@
## Policy
Member

@joshtriplett Apr 17, 2026

Suggested change
## Policy
## Interim LLM Usage Policy

Adding a title that mentions LLM usage, and flagging this as interim to foreshadow the section at the end noting that policies may evolve.

I am hopeful that this is capturing a sentiment shared both by people who want the policy to be stricter and by people who want the policy to be less strict.



Labels

- `S-waiting-on-review` (Status: Awaiting review from the assignee but also interested parties.)
- `T-bootstrap` (Team: Bootstrap)
- `T-compiler` (Team: Compiler)
- `T-libs` (Team: Library / libs)
- `T-rustdoc` (Team: rustdoc)
