Problem Statement
In recent months, the CloudNativePG project has seen a significant increase in Pull Requests and Issues that appear to be primarily AI-generated. While some are helpful, a growing number are "low-effort" or "shotgun" contributions that:
Lack architectural alignment with the project's roadmap.
Include unvetted code or "hallucinated" features.
Increase the cognitive load on maintainers, who must spend time reviewing, debunking, or explaining why an AI-generated PR is incorrect.
Raise potential long-term copyright and licensing concerns.
Proposed Solution
I propose we adopt a formal AI Policy (to be added as AI_POLICY.md in our core repositories and referenced in CONTRIBUTING.md). This policy shifts the burden of proof back to the contributor, requiring full human accountability for any AI-assisted work.
The goal is not to ban AI entirely, but to ensure that every line of code committed to CNPG is understood and owned by a human being who is prepared to maintain it.
Draft Policy Content
Important
Summary of the proposed policy:
Accountability: The human contributor is 100% responsible. "The AI did it" is not a valid defence for bugs or poor design.
Intentionality: Prohibits "random" refactoring or speculative, AI-generated "scout" PRs without prior maintainer approval.
Legal Safety: Contributors must warrant that AI usage complies with our Apache 2.0 licensing and does not violate third-party IP.
Transparency: Significant AI usage must be disclosed in the PR description.
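To make the transparency requirement concrete, contributors could include a short disclosure section in the PR description. The wording and field names below are purely illustrative, not proposed policy text:

```markdown
## AI Assistance Disclosure

- Tools used: <e.g. GitHub Copilot, Claude, Gemini>
- Scope: <e.g. initial draft of the reconciliation logic; all tests written by hand>
- Human review: I have read, understood, and tested every line of this change,
  and I take full responsibility for maintaining it.
```

A template like this could live in the PR template alongside the existing checklist, so disclosure becomes a routine part of opening a PR rather than an afterthought.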
Inspiration & Strategic Alignment
Community Leadership: This policy is inspired by the Ghostty project's proactive stance on maintainer burnout and AI noise.
Ecosystem Guidance: We have integrated educational guidance from the CNCF and the Linux Foundation regarding the legal and ethical implications of AI-generated code. This ensures CloudNativePG remains a responsible citizen of the CNCF ecosystem.
Transparency: In line with the transparency this policy promotes, please note that an initial draft of this document was produced using Google Gemini. The maintainers have since refined and vetted the text to ensure it meets the specific needs of the CloudNativePG ecosystem.
Action Required
We request the Maintainers/Steering Committee to review the proposed policy and cast their vote.
/vote-governance