15 changes: 15 additions & 0 deletions AGENTS.md
@@ -1,23 +1,38 @@
# Repository Guidelines

## Project Structure & Module Organization

The cookbook is organized around runnable examples and reference articles for OpenAI APIs. Place notebooks and Python scripts under `examples/<topic>/`, grouping related assets inside topic subfolders (for example, `examples/agents_sdk/`). Narrative guides and long-form docs live in `articles/`, and shared diagrams or screenshots belong in `images/`. Update `registry.yaml` whenever you add content so it appears on cookbook.openai.com, and add new author metadata in `authors.yaml` if you want custom attribution. Keep large datasets outside the repo; instead, document how to fetch them in the notebook.

## Build, Test, and Development Commands

Use a virtual environment to isolate dependencies:

- `python -m venv .venv && source .venv/bin/activate`
- `pip install -r examples/<topic>/requirements.txt` (each sample lists only what it needs)
- `jupyter lab` or `jupyter notebook` to develop interactively
- `python .github/scripts/check_notebooks.py` to validate notebook structure before pushing

## Coding Style & Naming Conventions

Write Python to PEP 8 with four-space indentation, descriptive variable names, and concise docstrings that explain API usage choices. Name new notebooks with lowercase, dash-or-underscore-separated phrases that match their directory—for example `examples/gpt-5/prompt-optimization-cookbook.ipynb`. Keep markdown cells focused and prefer numbered steps for multi-part workflows. Store secrets in environment variables such as `OPENAI_API_KEY`; never hard-code keys inside notebooks.
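
For instance, a notebook cell can read the key from the environment rather than embedding it (a minimal sketch; the client also picks up `OPENAI_API_KEY` automatically if no argument is passed):

```python
import os

from openai import OpenAI

# Read the key from the environment instead of hard-coding it in the notebook.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```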

## Testing Guidelines

Execute notebooks top-to-bottom after installing dependencies and clear lingering execution counts before committing. For Python modules or utilities, include self-check cells or lightweight `pytest` snippets and show how to run them (for example, `pytest examples/object_oriented_agentic_approach/tests`). When contributions depend on external services, mock responses or gate the cells behind clearly labeled opt-in flags.
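
A self-check can be as small as a single assertion; for example (file name and logic are illustrative):

```python
# tests/test_smoke.py — run with `pytest tests/`
def test_prompt_template_fills_all_placeholders():
    template = "Summarize {text} in {n} bullet points."
    rendered = template.format(text="the quarterly report", n=3)
    assert "{" not in rendered  # every placeholder was filled
```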

## Commit & Pull Request Guidelines

Use concise, imperative commit messages that describe the change scope (e.g., "Add agent portfolio collaboration demo"). Every PR should provide a summary, motivation, and self-review, and must tick the registry and authors checklist from `.github/pull_request_template.md`. Link issues when applicable and attach screenshots or output snippets for UI-heavy content. Confirm CI notebook validation passes locally before requesting review.

## Metadata & Publication Workflow

New or relocated content must have an entry in `registry.yaml` with an accurate path, date, and tag set so the static site generator includes it. When collaborating, coordinate author slugs in `authors.yaml` to avoid duplicates, and run a YAML linter such as `yamllint registry.yaml` to catch syntax errors before submitting.
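
An entry might look like the sketch below; the exact field names should mirror the existing entries in `registry.yaml`, and the values here are illustrative:

```yaml
# Illustrative registry.yaml entry; copy the field names from existing entries.
- title: Prompt Optimization Cookbook
  path: examples/gpt-5/prompt-optimization-cookbook.ipynb
  date: 2025-01-15
  authors:
    - your-author-slug
  tags:
    - gpt-5
    - prompting
```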

## Review Guidelines

- Verify file, function, and notebook names follow the repo's naming conventions and clearly describe their purpose.
- Scan prose and markdown for typos, broken links, and inconsistent formatting before approving.
- Check that code identifiers remain descriptive (no leftover placeholder names) and that repeated values are factored into constants when practical.
- Ensure notebooks or scripts document any required environment variables instead of hard-coding secrets or keys.
- Confirm metadata files (`registry.yaml`, `authors.yaml`) stay in sync with new or relocated content.
29 changes: 19 additions & 10 deletions examples/codex/build_code_review_with_codex_sdk.md
@@ -1,20 +1,22 @@
# Build Code Review with the Codex SDK

With [Code Review](https://chatgpt.com/codex/settings/code-review) in Codex Cloud, you can connect your team's cloud hosted Github repository to Codex and receive automated code reviews on every PR. But what if your code is hosted on-prem, or you don't have Github as an SCM?
With [Code Review](https://chatgpt.com/codex/settings/code-review) in Codex Cloud, you can connect your team's cloud-hosted GitHub repository to Codex and receive automated code reviews on every PR. But what if your code is hosted on-prem, or you don't have GitHub as an SCM?

Luckily, we can replicate Codex's cloud hosted review process in our own CI/CD runners. In this guide, we'll build our own Code Review action using the Codex CLI headless mode with both Github actions and Jenkins.
Luckily, we can replicate Codex's cloud-hosted review process in our own CI/CD runners. In this guide, we'll build our own Code Review action using the Codex CLI's headless mode with both GitHub Actions and Jenkins.

To build our own code review, we'll take the following steps:

1. Install the Codex CLI in our CI/CD runner
1. Prompt Codex in headless (exec) mode with the Code Review prompt that ships with the CLI
1. Specify a structured output JSON schema for Codex
1. Parse the JSON result and use it to make API calls to our SCM to create review comments
2. Prompt Codex in headless (exec) mode with the Code Review prompt that ships with the CLI
3. Specify a structured output JSON schema for Codex
4. Parse the JSON result and use it to make API calls to our SCM to create review comments

Once implemented, Codex will be able to leave inline code review comments:
<img src="../../images/codex_code_review.png" alt="Codex Code Review in Github" width="500"/>
<img src="../../images/codex_code_review.png" alt="Codex Code Review in GitHub" width="500"/>

## The Code Review Prompt
GPT-5-Codex has received specific training to improve is code review abilities. You can steer GPT-5-Codex to conduct a code review with the following prompt:

GPT-5-Codex has received specific training to improve its code review abilities. You can steer GPT-5-Codex to conduct a code review with the following prompt:

```
You are acting as a reviewer for a proposed code change made by another engineer.
@@ -25,7 +27,9 @@ Prioritize severe issues and avoid nit-level comments unless they block understanding
After listing findings, produce an overall correctness verdict (\"patch is correct\" or \"patch is incorrect\") with a concise justification and a confidence score between 0 and 1.
Ensure that file citations and line numbers are exactly correct using the tools available; if they are incorrect your comments will be rejected.
```

## Codex Structured Outputs

In order to make comments on code ranges in our pull request, we need to receive Codex's response in a specific format. To do that, we can create a file called `codex-output-schema.json` that conforms to OpenAI's [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) format.
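
For example, a schema capturing the findings and verdict that the review prompt asks for might look like this (an illustrative sketch; the field names are our own, not a fixed contract):

```json
{
  "type": "object",
  "properties": {
    "findings": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "title": { "type": "string" },
          "body": { "type": "string" },
          "priority": { "type": "integer" },
          "code_location": {
            "type": "object",
            "properties": {
              "absolute_file_path": { "type": "string" },
              "line_range": {
                "type": "object",
                "properties": {
                  "start": { "type": "integer" },
                  "end": { "type": "integer" }
                },
                "required": ["start", "end"],
                "additionalProperties": false
              }
            },
            "required": ["absolute_file_path", "line_range"],
            "additionalProperties": false
          }
        },
        "required": ["title", "body", "priority", "code_location"],
        "additionalProperties": false
      }
    },
    "overall_correctness": {
      "type": "string",
      "enum": ["patch is correct", "patch is incorrect"]
    },
    "overall_explanation": { "type": "string" },
    "overall_confidence_score": { "type": "number" }
  },
  "required": ["findings", "overall_correctness", "overall_explanation", "overall_confidence_score"],
  "additionalProperties": false
}
```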

To use this file in our workflow YAML, we can call Codex with the `output-schema-file` argument like this:
@@ -49,8 +53,10 @@ You can also pass a similar argument to `codex exec` for example:
codex exec "Review my pull request!" --output-schema codex-output-schema.json
```
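
Before wiring everything into CI, it helps to see step 4 on its own: parse Codex's JSON result and turn each finding into an inline review comment via the SCM's API. Here is a minimal Python sketch against the GitHub REST API, assuming the illustrative schema above and hypothetical file and environment-variable names:

```python
import json
import os

import requests  # assumes the `requests` package is available in the runner

# Load the structured output written by `codex exec --output-schema ...`.
with open("codex-review.json") as f:
    review = json.load(f)

repo = os.environ["GITHUB_REPOSITORY"]    # e.g. "my-org/my-repo"
pr_number = os.environ["PR_NUMBER"]       # exposed by the workflow
commit_sha = os.environ["HEAD_SHA"]       # head commit of the PR branch
workspace = os.environ.get("GITHUB_WORKSPACE", "")

headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

for finding in review["findings"]:
    loc = finding["code_location"]
    # GitHub expects repo-relative paths; strip the runner's workspace prefix.
    path = loc["absolute_file_path"].removeprefix(workspace + "/")
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/pulls/{pr_number}/comments",
        headers=headers,
        json={
            "body": f"**{finding['title']}**\n\n{finding['body']}",
            "commit_id": commit_sha,
            "path": path,
            "line": loc["line_range"]["end"],
            "side": "RIGHT",
        },
        timeout=30,
    )
    resp.raise_for_status()
```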

## Github Actions Example
Let's put it all together. If you're using Github actions in an on-prem environment, you can tailor this example to your specific workflow. Inline comments highlight the key steps.
## GitHub Actions Example

Let's put it all together. If you're using GitHub Actions in an on-prem environment, you can tailor this example to your specific workflow. Inline comments highlight the key steps.

```yaml
name: Codex Code Review

@@ -331,6 +337,7 @@ jobs:
```

## Jenkins Example

We can use the same approach to scripting a job with Jenkins. Once again, comments highlight key stages of the workflow:

```groovy
@@ -650,5 +657,7 @@ pipeline {
}
}
```

## Wrap Up
With the Codex SDK, you can build your own Github Code Review in on-prem environments. However, the pattern of triggering Codex with a prompt, receiving a structured output, and then acting on that output with an API call extends far beyond Code Review. For example, we could use this pattern to trigger a root-cause analysis when an incident is created and post a structured report into a slack channel. Or we could create a code quality report on each PR and post results into a dashboard.

With the Codex SDK, you can build your own GitHub Code Review in on-prem environments. However, the pattern of triggering Codex with a prompt, receiving a structured output, and then acting on that output with an API call extends far beyond Code Review. For example, we could use this pattern to trigger a root-cause analysis when an incident is created and post a structured report into a Slack channel. Or we could create a code quality report on each PR and post results into a dashboard.