Replies: 78 comments 220 replies
-
such a great initiative
-
I know this is a pretty ambitious idea and not trivial to implement, but it would be really powerful to have an AI-detection mechanism with a configurable threshold at the repository or organization level. That way, teams could decide what percentage of AI-generated code is acceptable in pull requests. Another possible approach would be to define a set of rules or prompts and evaluate pull requests against them. PRs that don’t meet those rules could be automatically flagged or potentially even closed.
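A minimal sketch of how such a threshold policy might work, assuming some upstream detector produces a per-PR AI-likelihood score (the detector itself and the function names here are hypothetical, not an existing GitHub feature):

```python
# Sketch: repo-level policy mapping a configurable AI-likelihood
# threshold to a moderation action. The score is assumed to come
# from some upstream classifier, which is out of scope here.

def evaluate_pr(ai_score: float, flag_threshold: float = 0.5,
                close_threshold: float = 0.9) -> str:
    """Map an AI-likelihood score in [0, 1] to a moderation action."""
    if ai_score >= close_threshold:
        return "close"
    if ai_score >= flag_threshold:
        return "flag"
    return "allow"

print(evaluate_pr(0.95))  # close
print(evaluate_pr(0.60))  # flag
print(evaluate_pr(0.10))  # allow
```

The two thresholds would be the repo- or org-level knobs: a lower one for labeling, a higher one for automatic closing.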
-
As of today, I would say that 1 out of 10 PRs created with AI is legitimate and meets the standards required to open that PR.
On 28 Jan 2026, at 18:41, Camilla Moraes ***@***.***> wrote:
Another possible approach would be to define a set of rules or prompts and evaluate pull requests against them. PRs that don’t meet those rules could be automatically flagged or potentially even closed.
This is definitely something we’re exploring. One idea is to leverage a repository’s CONTRIBUTING.md file as a source of truth for project guidelines and then validate PRs against any defined rules.
Regarding AI-generated code, have you seen cases where the code is AI-generated but still high-quality and genuinely solves the problem? Or is it always just something you want to close out immediately? I'm curious because I'm wondering if an AI-detection mechanism would rule out PRs where AI is used constructively, but that's where we'd want to test this thoroughly and understand what sensible thresholds look like.
This comment was marked as off-topic.
This comment was marked as disruptive content.
-
Hey! I am from Azure Core Upstream, and we have a lot of OSS maintainers who mainly maintain repositories on GitHub. We held an internal session to talk about Copilot, and there was a discussion on this topic: maintainers feel caught between today’s required review rigor (line-by-line understanding for anything shipped) and a future where agentic / AI-generated code makes that model increasingly unsustainable. Below are some of the key maintainers' pain points:
This comment was marked as off-topic.
This comment was marked as spam.
-
An option to limit new contributors to one open PR would be nice. Just today I had to batch-close several AI-generated PRs which were all submitted around the same time. For this protection, defining "new contributor" is probably not possible to do perfectly, but anyone who has no interactions with a project prior to the last 48 hours seems like a good heuristic. The point is to catch such a user at submission time and limit the amount of maintainer attention they can take up.

For a different type of problem, I'd like to be able to close PRs as "abandoned", similar to the issue close statuses. It's a clear UI signal to the contributor that their work isn't being rejected, but that I'm not going to finish it for them. Several of the low-quality contributions I have handled, dating back to before the Slop Era but getting worse, are simply incomplete and need follow-through.
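The 48-hour heuristic above could be sketched roughly like this (pure decision logic only; fetching a user's interaction history from the API is out of scope, and the function names are illustrative):

```python
from datetime import datetime, timedelta, timezone

# Sketch of the proposed heuristic: a user counts as "new" if their
# earliest recorded interaction with the project is within the last
# 48 hours, or if they have no history at all. New contributors are
# then limited to a single open PR at a time.

NEW_CONTRIBUTOR_WINDOW = timedelta(hours=48)

def is_new_contributor(first_interaction, now=None):
    """True if the user has no interaction older than the window."""
    now = now or datetime.now(timezone.utc)
    if first_interaction is None:  # no prior history at all
        return True
    return now - first_interaction < NEW_CONTRIBUTOR_WINDOW

def may_open_pr(first_interaction, open_pr_count, now=None):
    """Gate applied at PR submission time."""
    if is_new_contributor(first_interaction, now):
        return open_pr_count < 1  # one open PR for new contributors
    return True
```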
-
For the long-term horizon: implement a reviewer LLM that first does an initial scoring of the PRs? Critique is far easier than creation of a correct result, so that automated pre-moderation should give maintainers the edge needed to handle the volume. Depending on whether you just use rich prompting or fine-tuning, you can even start building an "oracle vox" for your project, which acts as a reasonably informed, reasonably on-point virtual representative for the project/organization.
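In the rich-prompting variant, the pre-moderation step is mostly prompt shaping. A minimal sketch of assembling a critique prompt from a project's CONTRIBUTING.md and the PR under review (the actual model call and scoring rubric are omitted, and nothing here is an existing tool):

```python
# Sketch: build the input for a reviewer LLM that pre-scores PRs
# against project guidelines. Only the prompt assembly is shown;
# the model invocation itself is left out.

def build_review_prompt(contributing_md: str, pr_title: str, pr_diff: str) -> str:
    return (
        "You are a pre-reviewer for this project. Score the PR from "
        "0-10 for guideline compliance and likely correctness, and "
        "list any violations.\n\n"
        f"Project guidelines:\n{contributing_md}\n\n"
        f"PR title: {pr_title}\n\n"
        f"Diff:\n{pr_diff}"
    )
```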
-
This is a very real problem, and I appreciate that it’s being treated as systemic rather than blaming maintainers or contributors individually. One concern I have with repo-level PR restrictions is that they may disproportionately impact first-time contributors who do want to engage meaningfully but don’t yet have collaborator status.

Personally, I think the most promising direction here is criteria-based PR gating rather than blanket restrictions: things like required checklist completion, passing CI, linked issues, or acknowledgement of contribution guidelines before a PR can be opened. On AI usage specifically, transparency feels more scalable than prohibition. Clear disclosure combined with automated guideline checks could help maintainers focus on high-intent contributions without discouraging responsible AI-assisted workflows.

Looking forward to seeing how these ideas evolve, especially solutions that preserve openness while respecting maintainer time.
-
Thinking along the lines of the discussion-first approach that Ghostty uses, I think one way to create just enough friction would be to have an opt-in where a PR has to be linked to an open issue or discussion topic. So when an unprivileged user (i.e. one without elevated privileges on the repo) tries to create a PR, there's a required field that takes an issue/discussion number. If that's not provided (or the corresponding issue/discussion is closed), then the PR can't be created.

This could be trivially worked around by throwing in any old issue/discussion (or by creating one), but it may cause just enough friction to help. To guard against this, perhaps maintainers could set a "minimum age" for the issue/discussion (e.g. 12 hours) to prevent creating fake issues to support a spammy PR.
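The decision logic of such a gate is small. A sketch, assuming issue metadata has already been fetched into a lookup table (the function and data shapes are illustrative, not a real GitHub API):

```python
import re
from datetime import datetime, timedelta, timezone

# Sketch: a PR from an unprivileged user must reference an OPEN
# issue/discussion that is at least `min_age` old. Fetching issue
# metadata is stubbed out as a plain dict lookup.

ISSUE_REF = re.compile(r"#(\d+)")

def pr_allowed(pr_body, issues, min_age=timedelta(hours=12), now=None):
    """issues maps number -> (state, created_at) for lookups."""
    now = now or datetime.now(timezone.utc)
    for num in map(int, ISSUE_REF.findall(pr_body or "")):
        state, created = issues.get(num, (None, None))
        if state == "open" and now - created >= min_age:
            return True  # at least one valid, aged reference
    return False
```

Issues younger than the minimum age are treated the same as missing ones, which is exactly the guard against spinning up a fake issue to support a spammy PR.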
This comment was marked as off-topic.
-
I started https://llmwelcome.dev/ to flip the topic and be explicit about GH issues where maintainers wouldn't mind LLM input. Tag issues accordingly: if someone's going to spend tokens, at least make it opt-in and explicit. Otherwise I'd recommend automatically closing with a GitHub action like anti-slop (not mine).
-
Open source repo maintainers / owners are now challenged to describe acceptable use of AI in their CONTRIBUTING instructions. Perhaps GitHub could suggest template text? This could maybe also mirror the additional settings described in https://github.blog/open-source/maintainers/welcome-to-the-eternal-september-of-open-source-heres-what-we-plan-to-do-for-maintainers/ Here are a couple of examples:
-
I would like to see regex-based moderation tools added at the organization and repo level. I find that a lot of the low-effort LLM-generated PRs share patterns of text that could be categorized and acted upon in a number of different ways, from adding an appropriate label to automatically closing. Some of this can probably be done with agents, but IMO that is a waste of resources when simpler tools like regex matching will do the trick.
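A minimal sketch of what such a rule table could look like. The patterns below are made-up examples of boilerplate phrasing, not a vetted list, and the action strings are placeholders for whatever the moderation tooling would support:

```python
import re

# Sketch: regex rules mapped to moderation actions. Each rule pairs
# a compiled pattern (case-insensitive) with the action to take when
# it matches a PR description or comment body.

RULES = [
    (re.compile(r"as an ai language model", re.I), "close"),
    (re.compile(r"i hope this (pr|pull request) helps", re.I), "label:needs-review"),
    (re.compile(r"this pr (enhances|revolutionizes) the codebase", re.I), "label:possible-slop"),
]

def moderate(text: str):
    """Return all actions triggered by the given text."""
    return [action for pattern, action in RULES if pattern.search(text)]
```

An org-level version would just let maintainers supply their own pattern/action pairs instead of hard-coding them.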
-
How about charging people to post a PR? This would deter low-effort AI-generated posts and add an income stream for maintainers. Maintainers would have the option to refund the charge.
-
Hey, appreciate this one :) Aside from the actual description in PRs and issues - and as some people already mentioned - it would be useful to have more control over the "AI slop" in comments (in PRs and issues, but increasingly so in Discussions). We've started to see more automated comments popping up that add no value but seem meaningful at a quick glance. People use them for self-promotion, or even for promoting a company/product.
-
Well, maybe agents should respect CONTRIBUTING.md more? At the end of the day, this is not a human issue: this is a prompt engineering issue. When asked to contribute to a repo, Claude Code should run a "contribution subagent" whose sole job is to ensure the change matches the contribution guidelines.

If we want to do it on the repo side, I don't think it should be on the budget of the often monetarily constrained OSS projects. I do understand that essentially the same subagent can be run as a GitHub Action on all PRs as well, but I believe it is better if it comes out of the token budget of the contributor rather than the receiver. Perhaps a kind of signature system (perhaps blockchain-based?) could be used to ensure that such an agentic check has indeed been run with the right level of model.

Very soon, over 90% of code will be AI-assisted. It's time we prepare for those times and not ask in "checkboxes" whether the contribution is from the remaining 10%.
-
yes
On Sat, Feb 21, 2026 at 8:40 PM Waldir Pimenta ***@***.***> wrote:
Great points! I agree that this discussion needs to focus less on restricting the input by contributors and more on how to help communities grow so that there is more capacity to handle the input on the maintainers' side.
In fact, this "walls" metaphor leads to a related point: there should not be a binary distinction between "us" (maintainers) and "them" (contributors), but rather a gradient-like spectrum. Contributors should have a clear path to becoming gradually more trusted within a project, and gradually getting more responsibilities and ability to moderate aspects of the project. Perhaps something similar to StackExchange sites, which can specify their own thresholds of what reputation levels allow which kinds of actions, would be an interesting tool that GitHub could offer communities to configure.
This comment was marked as off-topic.
-
I have just participated in the GitHub community survey for project maintainers. My answer for the only free-text field: to give people ideas of what to ask for.
-
One safeguard I think projects could consider is maintaining a human-verified baseline snapshot of the repository. Tag and preserve a version of the codebase that predates AI-generated contributions (is this simple to do now?), and mark it clearly as a “clean, trusted human baseline.” If future AI submissions introduce licensing risks, security flaws, or reputational issues, maintainers would have a clean restore point to roll back to.

This would provide legal protection by showing that a human-verified state of the project exists, and it guarantees that maintainers always have a safe fallback. It won't replace detection or contribution filters, but it would complement them: even if AI contributions slip through, the project has a resilient safety net.

Would love to hear thoughts on whether GitHub could support this with tooling (e.g., tagging, branch protection, or automated “baseline snapshots”) so projects don’t have to manage it manually. I am not saying all AI submissions are bad, or all human ones are good. But it helps to know we have a version built by humans and approved by humans.
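To the parenthetical question: yes, this is already simple with plain git. A sketch of the tagging step, wrapped in Python for automation (the tag naming scheme is just an example; a signed tag via `git tag -s` would be stronger where maintainers have signing keys set up):

```python
import subprocess
from datetime import date

# Sketch: record a human-verified baseline as an annotated git tag,
# giving the project a trusted restore point. Combined with branch
# or tag protection, the tag cannot be silently moved later.

def tag_baseline(repo_dir: str, note: str = "human-verified baseline") -> str:
    """Create an annotated baseline tag at HEAD and return its name."""
    tag = f"human-baseline-{date.today():%Y%m%d}"
    subprocess.run(
        ["git", "-C", repo_dir, "tag", "-a", tag, "-m", note],
        check=True,
    )
    return tag
```

Rolling back to the baseline is then just `git checkout <tag>` or branching from it.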
This comment was marked as off-topic.
-
I maintain open source projects where low-quality PRs are not just noisy, but sometimes risky. For system-level or desktop tools, reviewing a bad PR costs much more than reading code. It can involve checking deletion paths, permission boundaries, platform-specific behavior, packaging side effects, and whether the contributor actually understands the project constraints. A large part of the current problem is not only spam, but low-context PRs that look acceptable on the surface and still create a high review burden. A few things would help a lot:
The biggest need is reducing maintainer review cost without punishing good contributors. For many projects, contribution quality control now matters as much as contribution volume.
-
The low-quality contribution problem has two layers: people who don't care, and people who are using AI poorly. The second group is solvable. Most bad AI contributions come from unstructured prompts: "Fix the bug" with no context, no constraints, no style guidance. The AI guesses at everything and produces something generic.

What actually works: structuring the prompt before you run it. An explicit constraints block (follow project conventions, test coverage required), a clear objective (fix this specific behavior), a context block (here's how this codebase is structured). When those are defined upfront, the AI produces contributions that actually fit.

I've been building flompt for exactly this: a visual prompt builder that decomposes prompts into 12 semantic blocks and compiles to Claude-optimized XML. The constraints + context blocks are what separate AI contributions that get merged from ones that waste maintainer time. Open-source: github.com/Nyrok/flompt
-
tbh it is really hard nowadays. On a public repo, README update spam and unnecessary comments added to the code are the new trend. The best solution I found so far is to make the repo private, create a Discord community with some serious contributors, and then add only those people as contributors to your repos.
-
Hey everyone,
I wanted to provide an update on a critical issue affecting the open source community: the increasing volume of low-quality contributions that is creating significant operational challenges for maintainers.
We’ve been hearing from you that you’re dedicating substantial time to reviewing contributions that do not meet project quality standards for a number of reasons - they fail to follow project guidelines, are frequently abandoned shortly after submission, and are often AI-generated. As AI continues to reshape software development workflows and the nature of open source collaboration, I want you to know that we are actively investigating this problem and developing both immediate and longer-term strategic solutions.
What we're exploring
We’ve spent time reviewing feedback from community members, working directly with maintainers to explore various solutions, and looking through open source repositories to understand the nature of these contributions. Below is an overview of the solutions we’re currently evaluating.
Short-term solutions:
Long-term direction:
As AI adoption accelerates, we recognize the need to proactively address how it can potentially transform both contributor and maintainer workflows. We are exploring:
Next Steps
These are some starting points, and we’re continuing to explore both immediate improvements and long-term solutions. Please share your feedback, questions, or concerns in this thread. Your input is crucial to making sure we’re building the right things and tackling this challenge effectively. As always, thank you for being part of this conversation. Looking forward to hearing your thoughts and working together to address this problem.