49 changes: 49 additions & 0 deletions ai_assist_exercises/README.md
@@ -0,0 +1,49 @@
# AI Assistant Practice Exercises

Build deliberate habits for collaborating with GUI-based AI coding assistants (such as Cursor) by working through three focused
exercises. Each scenario contrasts a "with best practice" path against a control project so you can observe how instructions shape
the assistant's behavior.

## What's Included

Each exercise folder contains:

- A scenario README with background, prompting instructions, and success criteria.
- Paired project folders ("with" and "without" the highlighted practice) that share identical starting code.
- Reflection prompts plus space for you to note observations.

You will also find an [`observation_log_template.md`](./observation_log_template.md) in this directory. Copy it into your own notes
to track prompt transcripts, agent behavior, and takeaways across runs.

## Recommended Flow

1. **Skim the exercise README** to understand the practice being showcased and the contrast you should expect between scenarios.
2. **Duplicate the observation log template** into a personal notes file. Add a new entry for the exercise and list the prompts you
plan to issue.
3. **Open the "without" scenario first** inside Cursor (or your assistant of choice). Issue the suggested prompts and capture the
transcript plus outcomes in your log.
4. **Repeat inside the "with" scenario**. Pay attention to what changed: did the assistant create different files? Did it ask fewer
clarifying questions? Capture concrete differences.
5. **Review the reflection prompts** and summarize what the practice unlocked for you. Note any follow-up experiments you want to try.

> 💡 _Tip: If you have time, record a screen capture of each run. Replaying them side-by-side makes the contrast even clearer._

## Exercise Overview

| Exercise | Practice Highlighted | Main Question to Explore |
| --- | --- | --- |
| [Exercise 1](./exercise01-global-instructions/README.md) | Project-wide instructions via `AGENTS.md` | How much guidance does the assistant infer without any persistent instructions? |
| [Exercise 2](./exercise02-local-vs-nested/README.md) | Nested scopes for front-end components | How do local rules change the assistant's TypeScript and documentation output? |
| [Exercise 3](./exercise03-planning-discipline/README.md) | Plan-first workflows for feature work | Does enforcing planning produce cleaner diffs and helpers? |

## Suggested Prompts

While each exercise provides a default prompt, feel free to customize wording once you understand the expected change. A few
variants you can try:

- "Explain what assumptions you made when applying the instructions."
- "Show me the diff you plan to create before touching any files."
- "What would you do differently if the instructions were missing?"

Document the answers in your observation log so you can build an internal sense for how strongly each practice influences the
assistant.
35 changes: 35 additions & 0 deletions ai_assist_exercises/exercise01-global-instructions/README.md
@@ -0,0 +1,35 @@
# Exercise 1: Establishing Global Instructions

This exercise demonstrates how providing a global `AGENTS.md` file in a project can guide Cursor (or any GUI-based AI assistant)
toward consistent output.

## Scenario Files

| Folder | Purpose |
| --- | --- |
| `scenario_without_agent/` | Baseline project without any assistant guidance. |
| `scenario_with_agent/` | Identical project plus a repository-level `AGENTS.md` describing expectations. |

Each folder contains the same `hello_app.py` starting point so that the only difference between runs is the presence of global
instructions.

## Suggested Prompt Script

1. Ask the assistant: _"Add logging to the greeter and show an example in the docstring."_
2. If the assistant starts editing immediately in the guided scenario, request a short summary of the changes it plans to make
before it writes them.
3. After the diff is generated, ask the assistant to point out how it followed (or deviated from) any instructions it read.

Record the transcript and resulting diff in your observation log. Repeat the prompt flow in both folders to see how behavior differs.
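
For calibration, a guided run that follows the `AGENTS.md` expectations might land somewhere near the sketch below. Treat it as a
hypothetical shape only, not the expected answer: your assistant's logging configuration, docstring style, and wording will differ.

```python
"""Small CLI greeter used for AI-assist practice.

Example:
    >>> greet("Cursor")  # doctest: +SKIP
    '[2024-01-01 12:00] Hello, Cursor!'
"""

import logging
from datetime import datetime, timezone

# Logging configured at module import time, as the global instructions request.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def greet(name: str) -> str:
    """Return a timestamped greeting for ``name`` and log the request."""
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    logger.info("Greeting %s", name)
    return f"[{timestamp}] Hello, {name}!"


if __name__ == "__main__":
    print(greet("Cursor"))
```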

## What to Observe

- How much nudging is required before logging and docstrings appear in the unguided project?
- Does the assistant proactively configure the `logging` module when the global instructions are present?
- Are docstrings updated with new usage examples without being asked?

## Reflection Prompts

- Which follow-up prompts did you have to give in the unguided scenario but not in the guided one?
- Did the assistant reference the `AGENTS.md` expectations in its explanations?
- How might you tailor a future `AGENTS.md` for your own codebase based on what you saw here?
6 changes: 6 additions & 0 deletions ai_assist_exercises/exercise01-global-instructions/scenario_with_agent/AGENTS.md
@@ -0,0 +1,6 @@
# Global instructions for Cursor practice

- Always add module and function docstrings when editing Python files.
- Prefer explicit type hints on new function parameters and return values.
- Write log statements using the `logging` module configured at module import time.
- Update usage examples in the module docstring if behavior changes.
12 changes: 12 additions & 0 deletions ai_assist_exercises/exercise01-global-instructions/scenario_with_agent/hello_app.py
@@ -0,0 +1,12 @@
"""Small CLI greeter used for AI-assist practice."""

from datetime import datetime, timezone


def greet(name: str) -> str:
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    return f"[{timestamp}] Hello, {name}!"


if __name__ == "__main__":
    print(greet("Cursor"))
12 changes: 12 additions & 0 deletions ai_assist_exercises/exercise01-global-instructions/scenario_without_agent/hello_app.py
@@ -0,0 +1,12 @@
"""Small CLI greeter used for AI-assist practice."""

from datetime import datetime, timezone


def greet(name: str) -> str:
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    return f"[{timestamp}] Hello, {name}!"


if __name__ == "__main__":
    print(greet("Cursor"))
41 changes: 41 additions & 0 deletions ai_assist_exercises/exercise02-local-vs-nested/README.md
@@ -0,0 +1,41 @@
# Exercise 2: Local vs. Nested Instructions

This exercise highlights how nested `AGENTS.md` files let you fine-tune Cursor's behavior for different parts of a codebase.

## Scenario Files

```
exercise02-local-vs-nested/
└── generic_widget/
    ├── AGENTS.md (global widget guidance)
    └── components/
        ├── AGENTS.md (stricter component rules)
        └── InfoCard.tsx
```

The `AGENTS.md` at the project root nudges the assistant toward semantic markup and documentation. The nested file inside
`components/` adds React- and TypeScript-specific expectations such as named exports and story documentation.

## Suggested Prompt Script

1. Open the `generic_widget/` folder in Cursor and ask: _"Add a subtitle field to the card component and update any docs."_
2. Before accepting the changes, ask the assistant which instructions it read and how it plans to satisfy them.
3. Repeat the same prompt flow inside the `generic_widget/components/` folder so the nested instructions are in scope.

## Optional Variations

- Ask the assistant to generate usage examples or tests in addition to the subtitle field. Note whether it chooses React Testing
Library, Storybook docs, or simple markdown depending on the scope.
- Request a refactor (e.g., extracting a helper component) to see if the assistant maintains named exports without extra reminders.

## What to Observe

- Does the assistant keep the file within 80 columns when only the global instructions are applied?
- When the nested instructions are active, does it create a `story.md` and avoid default exports?
- How much styling polish appears in each run?

## Reflection Prompts

- Which parts of the nested instructions would you adapt for your own component library?
- Did the assistant cite the instructions unprompted when they were more specific?
- What additional nested scopes might help larger projects stay consistent?
5 changes: 5 additions & 0 deletions ai_assist_exercises/exercise02-local-vs-nested/generic_widget/AGENTS.md
@@ -0,0 +1,5 @@
# Widget project guidance

- Use semantic HTML and Tailwind-like utility classes for quick styling.
- Keep files within 80 columns when possible so they render well in Cursor's split view.
- Include a short "Implementation Notes" section at the bottom of README files when you make structural changes.
5 changes: 5 additions & 0 deletions ai_assist_exercises/exercise02-local-vs-nested/generic_widget/components/AGENTS.md
@@ -0,0 +1,5 @@
# Component authoring rules

- Always create React functional components with explicit `Props` types when using TypeScript.
- Add a `story.md` file that explains how to demo the component manually.
- Export components as named exports only (no default exports).
15 changes: 15 additions & 0 deletions ai_assist_exercises/exercise02-local-vs-nested/generic_widget/components/InfoCard.tsx
@@ -0,0 +1,15 @@
import React from "react";

type InfoCardProps = {
  title: string;
  message: string;
};

export function InfoCard({ title, message }: InfoCardProps) {
  return (
    <section className="rounded border border-slate-300 bg-white p-4 shadow-sm">
      <h2 className="text-lg font-semibold text-slate-800">{title}</h2>
      <p className="mt-1 text-sm text-slate-600">{message}</p>
    </section>
  );
}
43 changes: 43 additions & 0 deletions ai_assist_exercises/exercise03-planning-discipline/README.md
@@ -0,0 +1,43 @@
# Exercise 3: Planning Discipline

Use this exercise to compare how Cursor behaves when you request a plan-first workflow versus diving directly into edits.

## Scenario Files

```
exercise03-planning-discipline/
├── project_without_plan/
│   └── todo_manager.py
└── project_with_plan/
    ├── AGENTS.md
    └── todo_manager.py
```

Both projects start with the same to-do manager implementation. Only the `project_with_plan/` folder introduces mandatory
planning instructions.

## Suggested Prompt Script

1. Open `project_without_plan/` and ask: _"Add support for tagging todo items and listing them by tag."_
2. Capture whether the assistant starts editing files immediately or discusses an approach first.
3. Repeat the request in `project_with_plan/`. If the assistant does not produce a plan, remind it of the instructions and ask it
to outline steps in `PLAN.md` before coding.
4. After both runs, request a summary of the implementation decisions and any helper methods introduced (one possible shape for those helpers is sketched below).
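
For a concrete point of comparison, a run that follows the plan-first guidance and its "focused helper methods" rule could end up
with additions roughly like the sketch below. The specific names (`tags`, `add_tag`, `find_by_tag`) are illustrative assumptions,
not required output.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TodoItem:
    title: str
    category: str = "inbox"
    notes: List[str] = field(default_factory=list)
    tags: List[str] = field(default_factory=list)  # hypothetical new field


class TodoManager:
    def __init__(self) -> None:
        self.items: List[TodoItem] = []

    def add(self, title: str, category: str = "inbox") -> TodoItem:
        item = TodoItem(title=title, category=category)
        self.items.append(item)
        return item

    # find_by_category() from the starting file is unchanged and omitted here.
    # New, focused helpers layered on top of the existing API instead of
    # rewriting it, which is the behavior the planning instructions encourage.
    def add_tag(self, item: TodoItem, tag: str) -> None:
        if tag not in item.tags:
            item.tags.append(tag)

    def find_by_tag(self, tag: str) -> List[TodoItem]:
        return [item for item in self.items if tag in item.tags]
```

Comparing both of your runs against a sketch like this makes it easier to spot whether planning changed the shape of the helpers or
only the order of the work.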

## Optional Variations

- Ask the assistant to add lightweight tests or a usage demo after completing the plan to see if planning improves verification.
- Challenge the assistant to revise its plan mid-way (e.g., "what if tags must be case-insensitive?") and observe how it updates
`PLAN.md` and the final diff.

## What to Observe

- Does the planned workflow encourage smaller, more focused commits?
- How do helper methods or data structures differ between the two runs?
- Is it easier to review the changes when a `PLAN.md` and "Result" section are present?

## Reflection Prompts

- What parts of the plan-first workflow felt most valuable?
- Which plan maintenance habits would you adopt for your own projects?
- Did enforcing planning change the assistant's tone or level of initiative?
8 changes: 8 additions & 0 deletions ai_assist_exercises/exercise03-planning-discipline/project_with_plan/AGENTS.md
@@ -0,0 +1,8 @@
# Planning-first workflow

When responding to change requests in this folder:

1. Outline a step-by-step plan and wait for confirmation before editing files.
2. Document the plan in `PLAN.md` and keep it updated if the plan changes.
3. After implementing the plan, append a short "Result" section to `PLAN.md` summarizing the modifications.
4. Prefer adding focused helper methods over mutating existing ones when new behavior is required.
33 changes: 33 additions & 0 deletions ai_assist_exercises/exercise03-planning-discipline/project_with_plan/todo_manager.py
@@ -0,0 +1,33 @@
"""A toy to-do list manager used for planning exercises."""

from dataclasses import dataclass, field
from typing import List


default_categories = ["inbox", "in-progress", "done"]


@dataclass
class TodoItem:
    title: str
    category: str = "inbox"
    notes: List[str] = field(default_factory=list)


class TodoManager:
    def __init__(self) -> None:
        self.items: List[TodoItem] = []

    def add(self, title: str, category: str = "inbox") -> TodoItem:
        item = TodoItem(title=title, category=category)
        self.items.append(item)
        return item

    def find_by_category(self, category: str) -> List[TodoItem]:
        return [item for item in self.items if item.category == category]


if __name__ == "__main__":
    manager = TodoManager()
    manager.add("Prototype Cursor exercise")
    print(len(manager.items))
33 changes: 33 additions & 0 deletions ai_assist_exercises/exercise03-planning-discipline/project_without_plan/todo_manager.py
@@ -0,0 +1,33 @@
"""A toy to-do list manager used for planning exercises."""

from dataclasses import dataclass, field
from typing import List


default_categories = ["inbox", "in-progress", "done"]


@dataclass
class TodoItem:
    title: str
    category: str = "inbox"
    notes: List[str] = field(default_factory=list)


class TodoManager:
    def __init__(self) -> None:
        self.items: List[TodoItem] = []

    def add(self, title: str, category: str = "inbox") -> TodoItem:
        item = TodoItem(title=title, category=category)
        self.items.append(item)
        return item

    def find_by_category(self, category: str) -> List[TodoItem]:
        return [item for item in self.items if item.category == category]


if __name__ == "__main__":
    manager = TodoManager()
    manager.add("Prototype Cursor exercise")
    print(len(manager.items))
33 changes: 33 additions & 0 deletions ai_assist_exercises/observation_log_template.md
@@ -0,0 +1,33 @@
# Observation Log Template

Use this template to document each experiment run. Copy the sections below into a personal notes file, then duplicate the block
between the horizontal rules for every scenario you try.

---

## Exercise Overview
- **Exercise:** (e.g., Exercise 1 — Global Instructions)
- **Scenario:** (e.g., Without `AGENTS.md`)
- **Date:**
- **Assistant / Model Version:**

## Prompt Plan
- Primary request you will make:
- Follow-up questions you expect to ask:
- Metrics or artifacts you intend to capture:

## Transcript Highlights
- Key assistant responses or diffs:
- Clarifying questions the assistant asked and how you answered them:

## Outcomes
- Files touched by the assistant:
- Quality notes (style, tests, documentation, etc.):
- Issues encountered:

## Reflection
- What worked well?
- What would you change about your prompt next time?
- Which behaviors seem driven by the presence (or absence) of the instructions?

---