diff --git a/src/pages/docs/release-notes.mdx b/src/pages/docs/release-notes.mdx
index 0a5dc9fd..4111a00e 100644
--- a/src/pages/docs/release-notes.mdx
+++ b/src/pages/docs/release-notes.mdx
@@ -3,6 +3,54 @@ title: "Future AGI Release Notes: Features, Fixes, and Updates"
description: "Latest Future AGI release notes covering new features, improvements, and bug fixes across datasets, evaluations, simulation, and observability products."
---
+## Week of 2026-05-13
+
+### Features
+
+- **Self-Hosted Install:** Setting up Future AGI on your own machine is now straightforward. Clone the repo, `cd` into the folder, and run `bin/install` on macOS or Linux. You need Docker, Docker Compose, and at least 8 GB of RAM. That's it.
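As a sketch, the flow looks like the following (the repository URL is a placeholder here, not the actual clone URL):

```shell
# Prerequisites: Docker, Docker Compose, and at least 8 GB of RAM.
# The repository URL below is a placeholder; substitute the real Future AGI repo.
git clone https://github.com/future-agi/<repo>.git
cd <repo>
bin/install    # macOS or Linux
```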
+
+- **Expanded Context Injection for Evals:** When configuring an eval, you can now choose exactly which context to inject as separate options: span metadata, trace IDs, session data, or call transcripts and recordings. If you already use variables in your eval, you can map context to them as before. If you do not, you can skip that step entirely. When running evals on sessions, the injected context reaches into the underlying traces and spans, so you can see exactly where gaps occurred. When building an eval, the right context type is pre-selected automatically based on what you are evaluating, so there is less manual setup.
+
+
+### Bugs/Improvements
+
+- **Task Page Filters Apply to Eval Variable Mapping:** Filters you set on the task page now carry through when mapping eval variables. The right traces, spans, and sessions are already scoped for you, so there is no need to search for them manually.
+
+- **Image Evals Now Accept URLs:** Image-based evals now accept public HTTP/HTTPS URLs and signed S3 links as inputs. Pass the URL as a string directly in the input field. No file upload or base64 encoding needed. The platform fetches and processes the image server-side before running the eval.
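As a sketch of what this looks like in practice (the key names below are illustrative, not the platform's exact schema), a URL is passed as a plain string where encoded bytes were previously required:

```python
# Hypothetical eval input payloads; key names are illustrative only.
# A public HTTP/HTTPS URL or a signed S3 link works as a plain string;
# the platform fetches and processes the image server-side.
eval_input = {
    "input": "https://example.com/images/receipt.png",
}
signed_s3_input = {
    "input": "https://my-bucket.s3.amazonaws.com/receipt.png?X-Amz-Signature=abc123",
}

def looks_like_image_url(value: str) -> bool:
    """Optional client-side sanity check before submitting."""
    return value.startswith(("http://", "https://"))

assert looks_like_image_url(eval_input["input"])
```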
+
+- **Code Evals More Reliable:** Built-in code evals now run in a consistent execution environment. Eval descriptions have also been updated to accurately reflect current behavior.
+
+- **Built-In Validators Improved:** Ten built-in validators have been updated for better accuracy. Email, HTML, SQL, URL, and XML validators now handle a wider range of inputs correctly. Scoring metrics including diff, kappa, word-level error rate, and Meteor score all produce more precise results.
+
+- **Eval Scores Are Consistent Regardless of Input Formatting:** Eval scores no longer vary based on incidental whitespace in inputs. All inputs are normalized before scoring, and comparing two identical empty values now returns a perfect match.
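The normalization behavior described above can be sketched as follows (an illustration of the behavior, not the platform's actual implementation):

```python
def normalize(value: str) -> str:
    """Collapse incidental whitespace so formatting does not affect scoring."""
    return " ".join(value.split())

def exact_match_score(expected: str, actual: str) -> float:
    """Score 1.0 on a normalized exact match, 0.0 otherwise.
    Two empty values compare as a perfect match."""
    return 1.0 if normalize(expected) == normalize(actual) else 0.0

# Incidental whitespace no longer changes the score:
assert exact_match_score("hello  world", "hello world\n") == 1.0
# Two empty values are a perfect match:
assert exact_match_score("", "   ") == 1.0
```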
+
+- **Optional Eval Fields Now Have Sensible Defaults:** Code evals with optional numeric configuration fields now run with their default behavior when those fields are left blank. No configuration is needed unless you want to override the defaults.
+
+- **Structured Output Compatibility Improved:** Evals that use an LLM as a judge were returning empty results in two cases: nested schema shapes, and models that do not fully support structured output. Both cases are now handled gracefully.
+
+- **Continuous Evals Now Run Reliably at Scale:** Always-on evals with sampling configured now process incoming data consistently over time, regardless of total volume seen so far.
+
+- **Task Submission Error Handling Improved:** If an eval configuration fails to save inside the Tasks wizard, you now see a clear error message immediately and can fix it before submitting. The wizard keeps your inputs intact.
+
+- **Saved Eval Settings Preserved on Re-edit:** Opening the edit view on a staged eval in Tasks was resetting the model selection and error localizer toggle back to defaults. Both settings are now correctly restored when you reopen an eval for editing.
+
+- **Session List Loads Faster:** The session list now loads more quickly, so you spend less time waiting.
+
+- **Playground Handles URL Inputs Reliably:** Previously, entering a URL as an input could cause the Playground to stop responding until the page was refreshed. URL inputs are now processed correctly without the interface becoming unresponsive.
+
+- **Observe Task Evals Now Validate Recording URLs:** Task evals in Observe now verify that recording URLs from your provider's webhook are reachable before running. If a URL is inaccessible, you get a clear error message so you can fix it quickly and get accurate results.
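A reachability check of this kind can be sketched as below (a minimal illustration using the standard library, not the platform's actual implementation):

```python
from urllib.parse import urlparse
from urllib.request import Request, urlopen

def recording_url_reachable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the recording URL is well-formed and answers a HEAD request."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return False
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status < 400
    except OSError:  # covers URLError, HTTPError, and connection timeouts
        return False
```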
+
+- **Dot-Notation Now Supports All Nesting Patterns:** You can now use any variable notation style in eval prompts, including dot notation and deeply nested references.
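To illustrate what a deeply nested dot-notation reference resolves to (a minimal sketch, not the platform's template engine; the variable names are hypothetical):

```python
from typing import Any

def resolve(path: str, context: dict) -> Any:
    """Resolve a dot-notation reference like 'trace.span.output' against nested data."""
    value: Any = context
    for key in path.split("."):
        value = value[key]
    return value

context = {"trace": {"span": {"output": {"text": "42"}}}}
assert resolve("trace.span.output.text", context) == "42"
```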
+
+- **Only Published Evals Appear in the Eval Drawer:** Draft eval templates created during building or testing no longer show up in the eval selection drawer. Only published evals are visible there.
+
+- **Error Localizer Only Runs When Needed:** The error localizer now skips evals that already passed. It only runs when there is actually something to investigate.
+
+- **Dataset Column Deletion Is Faster:** Deleting columns from a dataset is now more efficient, especially for larger datasets.
+
+
## Week of 2026-05-07