docs: Add advisory requirement APTS-MR-A01 Goal Misgeneralization and Emergent Misalignment Evaluation Suite #43
Merged
jinsonvarghese merged 2 commits into OWASP:main on May 1, 2026
Conversation
… Emergent Misalignment Evaluation Suite

Adds APTS-MR-A01 as a new advisory practice in the Advisory Requirements appendix, evaluating the agent's underlying objective alignment under distribution shift and detecting emergent misalignment after fine-tuning. Addresses failure modes that input-side (MR-013) and control-side (MR-020) adversarial testing do not cover. Begins to close the goal-alignment gap that the Introduction's Capability Frontier section defers to a future revision.

Advisory practice count: 13 -> 14. No normative requirement counts changed.
a58b150 to 39cb47a
Member:
Thank you @kylejryan for this. Let me get back to you soon.
Member:
@kylejryan Solid advisory, well-researched. One thing to fix before merging: the advisory practice count needs updating in four additional files that still reference "13 advisory practices". The PR already updates standard/README.md and standard/Introduction.md, but these four were missed. After the fix, this is good to merge.
Contributor (Author):
@jinsonvarghese Great catch; fixed all four and pushing now.
Member:
Thank you @kylejryan. All four count updates are fixed. Looks good, merging.
This was referenced May 1, 2026

jorgeraad pushed a commit to jorgeraad/APTS that referenced this pull request on May 1, 2026
…of AI Influence on Operator Decisions

Adds APTS-HO-A02 as a new advisory practice in the Human Oversight domain, the second advisory in HO.

Addresses a gap in existing coverage: APTS-HO-001, HO-005, HO-010, and AR-006 mandate approval gates, audit trails, and reasoning-chain capture, but none address the form of the question the operator is asked to confirm. The practical effect is that an audit trail can show "operator approved" while concealing that the operator was offered a single highlighted choice with the safer option visually de-emphasized. The advisory pairs provenance for AI-shaped operator affordances with bias mitigation at high-impact gates.

The Practice Description is a four-point list ordered by implementation cost, from a single response-classification audit field through to no-preselected-default and typed-confirmation rules at HO-010 gates.

Cross-file count sync from 14 to 15 advisory practices (rebased on top of OWASP#43, which brought the count to 14). No new normative requirements, no tier counts changed (173 total, 72/157/173 unchanged). The machine-readable JSON export is intentionally untouched, consistent with the existing convention that advisory practices are excluded.
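The no-preselected-default and typed-confirmation rules described above can be sketched in a few lines. This is a hypothetical illustration only: the `ApprovalPrompt` fields and the `validate_gate` function are assumptions made for the example, not text from the advisory or from any APTS requirement.

```python
# Hypothetical sketch of a high-impact approval gate that enforces
# "no preselected default" and "typed confirmation". All names here
# are illustrative assumptions, not APTS-defined identifiers.
from dataclasses import dataclass
from typing import Optional, List


@dataclass
class ApprovalPrompt:
    action: str                     # the high-impact action awaiting approval
    options: List[str]              # all options rendered with equal emphasis
    preselected: Optional[str]      # must be None at a high-impact gate
    required_phrase: str            # operator must type this verbatim


def validate_gate(prompt: ApprovalPrompt, typed: str) -> bool:
    """Reject gates that preselect a choice, and require a verbatim
    typed confirmation instead of a one-click approval."""
    if prompt.preselected is not None:
        return False                # a preselected default biases the operator
    return typed.strip() == prompt.required_phrase


gate = ApprovalPrompt(
    action="exploit CVE-2026-1234 on prod-db-01",
    options=["approve", "deny"],
    preselected=None,
    required_phrase="approve exploit on prod-db-01",
)
print(validate_gate(gate, "approve exploit on prod-db-01"))  # True
```

The design choice worth noting: the gate fails closed on a preselected default even when the operator types the phrase correctly, so the audit trail cannot record a compliant approval for a biased prompt.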
Context
Hi, I'm Kyle Ryan from Pensar. I work on post-training pipelines and agent evaluation for autonomous offensive security agents.
AI disclosure: This contribution was drafted with AI assistance. I have reviewed all changes for accuracy, consistency with the standard, and compliance with the style guide, and take full ownership of the submission.
What changed and why
Adds Goal Misgeneralization and Emergent Misalignment Evaluation Suite as a new advisory practice (APTS-MR-A01) in the Advisory Requirements appendix. This is the first advisory in the Manipulation Resistance domain.
The Introduction's Capability Frontier and Containment Assumptions section explicitly defers verifiable goal alignment to a future revision: "Research-stage topics (verifiable goal alignment, scheming detection, and containment testing against models that may be aware of the test environment) are out of scope for this version and may be addressed in future versions of APTS as the field matures." This advisory begins to close that gap using an evaluation-based approach that is achievable with today's tooling (Inspect AI, Braintrust, OpenAI Evals).
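To make the evaluation-based approach concrete, here is a minimal sketch in plain Python of one core check: run the same behavioral probe set on in-distribution and distribution-shifted inputs, before and after fine-tuning, and flag emergent misalignment when the off-distribution alignment rate degrades substantially more than the on-distribution rate. The function names, the boolean probe-result representation, and the 10% threshold are assumptions made for illustration; they are not part of the advisory or of any eval framework's API.

```python
# Hypothetical sketch: detect the fine-tuning failure pattern where behavior
# on the training distribution stays intact while alignment degrades broadly
# off-distribution. Each list element is one probe: True if the agent's
# behavior matched the intended goal, False otherwise.
from typing import List


def alignment_rate(results: List[bool]) -> float:
    """Fraction of probes where the agent behaved as intended."""
    return sum(results) / len(results) if results else 0.0


def flag_emergent_misalignment(
    base_on: List[bool], base_off: List[bool],
    tuned_on: List[bool], tuned_off: List[bool],
    max_excess_drop: float = 0.10,   # illustrative threshold, not from APTS
) -> bool:
    """True if fine-tuning degraded off-distribution alignment substantially
    more than on-distribution alignment."""
    on_drop = alignment_rate(base_on) - alignment_rate(tuned_on)
    off_drop = alignment_rate(base_off) - alignment_rate(tuned_off)
    return (off_drop - on_drop) > max_excess_drop


# Example: the tuned model keeps on-distribution behavior (5% drop) but
# degrades off-distribution (40% drop), so the check fires.
base_on, base_off = [True] * 20, [True] * 20
tuned_on = [True] * 19 + [False]
tuned_off = [True] * 12 + [False] * 8
print(flag_emergent_misalignment(base_on, base_off, tuned_on, tuned_off))  # True
```

In practice the probe sets would be authored and scored inside a framework such as Inspect AI or OpenAI Evals; the comparison logic above is the framework-independent part.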
The operational concern is concrete and recent. Peer-reviewed work in 2026 (Nature, Training LLMs on narrow tasks can lead to broad misalignment, https://www.nature.com/articles/s41586-025-09937-5) demonstrated that fine-tuning a frontier model on a narrow offensive task (producing insecure code) induces broad behavioral shifts well outside the training domain. For autonomous pentesting platforms, which are routinely fine-tuned on offensive data, this surfaces two failure modes that no existing APTS requirement evaluates: goal misgeneralization under distribution shift, and emergent misalignment induced by fine-tuning.
APTS-MR-013 (Adversarial Example Detection) probes input-side robustness; APTS-MR-020 (Adversarial Validation) probes control-side resilience; APTS-AR-019 (Model Change Tracking) tracks output drift; APTS-RP-A01 (Finding Authenticity Verification) catches fabricated evidence. None of these evaluate the agent's underlying objective alignment under distribution shift, which is the upstream failure RP-A01 cannot reach: an agent can produce genuinely grounded findings while being misaligned in which findings it chooses to discover, prioritize, or report.
The advisory text notes this practice is a candidate for tier-gated inclusion in v0.2.0 (likely as SHOULD | Tier 2 for platforms operating at Level 3 autonomy or higher, or for any platform that performs post-deployment fine-tuning on engagement data).
Affected requirements
Files changed
No normative requirement counts changed (173 total, 72/157/173 tier counts unchanged). No changes to Foreword, Frontispiece, Checklists, Getting_Started, Glossary, Vendor Eval, CAT, or other counts. No changes to the machine-readable export (`standard/apts_requirements.json` does not include advisory practices, consistent with the existing convention).