diff --git a/01-course/module-01-foundations/README.md b/01-course/module-01-foundations/README.md
new file mode 100644
index 0000000..193d68a
--- /dev/null
+++ b/01-course/module-01-foundations/README.md
@@ -0,0 +1,40 @@
+# Module 1: Foundations
+
+## Course Introduction & Environment Setup
+
+This foundational module introduces you to prompt engineering concepts and gets your development environment configured for hands-on learning.
+
+### Learning Objectives
+By completing this module, you will be able to:
+- ✅ Set up a working development environment with AI assistant access
+- ✅ Identify and apply the four core elements of effective prompts
+- ✅ Write basic prompts for reviewing code
+- ✅ Iterate and refine prompts based on output quality
+
+### Getting Started
+
+**First time here?**
+- If you haven't set up your development environment yet, follow the [Quick Setup guide](../../README.md#-quick-setup) in the main README first
+- **New to Jupyter notebooks?** Read [About Jupyter Notebooks](../../README.md#-about-jupyter-notebooks) to understand how notebooks work and where code executes
+
+**Ready to start?**
+1. **Open the tutorial notebook**: Click on [module1.ipynb](./module1.ipynb) to start the interactive tutorial
+2. **Install dependencies**: Run the "Install Required Dependencies" cell in the notebook
+3. **Follow the notebook**: Work through each cell sequentially - the notebook will guide you through setup and exercises
+4. **Complete exercises**: Practice the hands-on activities as you go
+
+### Module Contents
+- **[module1.ipynb](./module1.ipynb)** - Complete module 1 tutorial notebook
+
+### Time Required
+Approximately 20 minutes
+
+### Prerequisites
+- Python 3.8+ installed
+- IDE with notebook support (VS Code or Cursor recommended)
+- API access to GitHub Copilot, CircuIT, or OpenAI
+
+### Next Steps
+After completing this module:
+1. Review and refine your solutions to the exercises in this module
+2. Continue to [Module 2: Core Prompting Techniques](../module-02-fundamentals/)
diff --git a/01-course/module-01-foundations/module1.ipynb b/01-course/module-01-foundations/module1.ipynb
new file mode 100644
index 0000000..f52de9b
--- /dev/null
+++ b/01-course/module-01-foundations/module1.ipynb
@@ -0,0 +1,991 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Module 1: Foundation\n",
+ "\n",
+ "| **Aspect** | **Details** |\n",
+ "|-------------|-------------|\n",
+ "| **Goal** | Set up your development environment and learn the 4 core elements of effective prompts |\n",
+ "| **Time** | ~20 minutes |\n",
+ "| **Prerequisites** | Python 3.8+, IDE with notebook support, API access (GitHub Copilot, CircuIT, or OpenAI) |\n",
+ "| **Setup Required** | Clone the repository and follow [Quick Setup](../README.md) before running this notebook |\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## π€ Why Prompt Engineering for Software Engineers?\n",
+ "\n",
+ "### What is Prompt Engineering?\n",
+ "\n",
+ "**Prompt Engineering** is the fastest way to harness the power of large language models. By interacting with an LLM through a series of questions, statements, or instructions, you can adjust LLM output behavior based on the specific context of the output you want to achieve.\n",
+ "\n",
+ "**Effective prompt techniques can help your business accomplish the following benefits:**\n",
+ "\n",
+ "- **Boost a model's abilities and improve safety**\n",
+ "- **Augment the model with domain knowledge and external tools** without changing model parameters or fine-tuning\n",
+ "- **Interact with language models to grasp their full capabilities**\n",
+ "- **Achieve better quality outputs through better quality inputs**\n",
+ "\n",
+ "### Two Ways to Influence LLM Behavior\n",
+ "\n",
+ "**1. Fine-tuning (Traditional Approach)**\n",
+ "- Adjust the model's weights/parameters using training data to optimize a cost function\n",
+ "- **Expensive process** - requires significant computation time and cost\n",
+ "- **Limited flexibility** - model is locked into specific behavior patterns\n",
+ "- **Problem:** Still produces vague, inconsistent results without proper context\n",
+ "\n",
+ "**2. Prompt Engineering vs. Context Engineering**\n",
+ "\n",
+ "According to [Anthropic's engineering team](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents), there's an important distinction:\n",
+ "\n",
+ "- **Prompt Engineering** refers to methods for writing and organizing LLM instructions for optimal outcomes\n",
+ "- **Context Engineering** refers to the set of strategies for curating and maintaining the optimal set of tokens (information) during LLM inference, including all the other information that may land there outside of the prompts\n",
+ "\n",
+ "**Key Difference:** Prompt engineering focuses on writing effective prompts, while context engineering manages the entire context state (system instructions, tools, external data, message history, etc.) as a finite resource.\n",
+ "\n",
+ "### The Evolution: From Prompting to Context Engineering\n",
+ "\n",
+ "**Traditional Prompting** is asking AI questions without providing sufficient context, leading to generic, unhelpful responses. It's like asking a doctor \"fix me\" without describing your symptoms.\n",
+ "\n",
+ "**Context Engineering** treats context as a finite resource that must be carefully curated. As [Anthropic explains](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents), \"context is a critical but finite resource for AI agents\" that requires thoughtful management.\n",
+ "\n",
+ "**Prompt Engineering** focuses on writing effective instructions, while **Context Engineering** manages the entire information ecosystem that feeds into the model.\n",
+ "\n",
+ "| **Traditional Prompting** | **Context Engineering** | **Prompt Engineering** |\n",
+ "|---------------------------|-------------------------|-------------------------|\n",
+ "| β \"Fix this code\" | β οΈ \"Fix this code. Context: Python e-commerce function. Tools: [code_analyzer, refactor_tool]. History: [previous attempts]\" | β \"You are a senior Python developer. Refactor this e-commerce function following SOLID principles, add type hints, handle edge cases, and maintain backward compatibility. Format your response as: 1) Analysis, 2) Issues found, 3) Refactored code.\" |\n",
+ "| β \"Make it better\" | β οΈ \"Improve this security function. Context: Critical system. Available tools: [security_scanner, vulnerability_checker]. Previous findings: [XSS vulnerability found]\" | β \"Act as a security expert. Analyze this code for vulnerabilities, performance issues, and maintainability problems. Provide specific fixes with code examples. Use this format: [Security Issues], [Performance Issues], [Code Quality], [Solutions].\" |\n",
+ "| β \"Help me debug\" | β οΈ \"Debug this error. Context: Production system. Tools: [log_analyzer, system_monitor]. Recent changes: [deployment at 2pm]\" | β \"You are a debugging specialist. Debug this error: [specific error message]. Context: [system details]. Expected behavior: [description]. Use step-by-step troubleshooting approach: 1) Reproduce, 2) Isolate, 3) Fix, 4) Test.\" |\n",
+ "\n",
+ "**Without Context (Traditional):**\n",
+ "```\n",
+ "User: \"Fix this code\"\n",
+ "AI: \"I'd be happy to help! Could you please share the code you'd like me to fix?\"\n",
+ "```\n",
+ "\n",
+ "**With Context (Prompt Engineering):**\n",
+ "```\n",
+ "User: \"Fix this code: def calculate_total(items): return sum(items)\n",
+ "Context: This is a Python function for an e-commerce checkout. \n",
+ "Requirements: Handle empty lists, add type hints, include error handling.\n",
+ "AI: Here's the improved function with proper error handling and type hints...\"\n",
+ "```\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## π Elements of a Prompt\n",
+ "\n",
+ "A prompt's form depends on the task you are giving to a model. As you explore prompt engineering examples, you will review prompts containing some or all of the following elements:\n",
+ "\n",
+ "### **1. Instructions**\n",
+ "This is a task for the large language model to do. It provides a task description or instruction for how the model should perform.\n",
+ "\n",
+ "**Example:** \"You are a senior software engineer conducting a code review. Analyze the provided code and identify potential issues.\"\n",
+ "\n",
+ "### **2. Context**\n",
+ "This is external information to guide the model.\n",
+ "\n",
+ "**Example:** \"Code context: This is a utility function for user registration in a web application.\"\n",
+ "\n",
+ "### **3. Input Data**\n",
+ "This is the input for which you want a response.\n",
+ "\n",
+ "**Example:** \"Code to review: `def register_user(email, password): ...`\"\n",
+ "\n",
+ "### **4. Output Indicator**\n",
+ "This is the output type or format.\n",
+ "\n",
+ "**Example:** \"Please provide your response in this format: 1) Security Issues, 2) Code Quality Issues, 3) Recommended Improvements, 4) Overall Assessment\"\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## π Evaluate and Iterate\n",
+ "\n",
+ "**Review model responses** to ensure prompts elicit appropriate quality, type, and range of responses. Make changes as needed.\n",
+ "\n",
+ "**Pro tip:** Ask one copy of the model to improve or check output from another copy.\n",
+ "\n",
+ "**Remember:** Prompt engineering is an iterative skill that improves with practice. Experimentation builds intuition for crafting optimal prompts.\n",
+ "\n",
+ "### π― Key Benefits of Effective Prompting\n",
+ "\n",
+ "Effective prompt techniques can help you accomplish the following benefits:\n",
+ "\n",
+ "- **π Boost a model's abilities and improve safety** \n",
+ " Well-crafted prompts guide models toward more accurate and appropriate responses\n",
+ "\n",
+ "- **π§ Augment the model with domain knowledge and external tools** \n",
+ " Without changing model parameters or fine-tuning\n",
+ "\n",
+ "- **π‘ Interact with language models to grasp their full capabilities** \n",
+ " Unlock advanced reasoning and problem-solving abilities\n",
+ "\n",
+ "- **π Achieve better quality outputs through better quality inputs** \n",
+ " The precision of your prompts directly impacts the quality of results\n",
+ "\n",
+ "**Real Impact:** Transform AI from a \"helpful chatbot\" into a reliable development partner that understands your specific coding context and delivers consistent, actionable results.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Getting Started: Setup and Practice\n",
+ "\n",
+ "Now that you understand why prompt engineering matters and what makes it effective, let's set up your development environment and start building! You'll create your first AI-powered code review assistant that demonstrates all the concepts we've covered.\n",
+ "\n",
+ "---"
+ ]
+ },
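+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Before any API setup, the 4 elements are just message structure. The illustrative sketch below (no model call; the strings are placeholder examples) assembles them into the chat message format used throughout this notebook:\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Assemble the 4 core elements into the chat message format (no API call yet)\n",
+    "instructions = \"You are a senior software engineer conducting a code review.\"  # 1. Instructions\n",
+    "context = \"Code context: utility function for user registration in a web app.\"  # 2. Context\n",
+    "input_data = \"Code to review: def register_user(email, password): ...\"  # 3. Input data\n",
+    "output_indicator = \"Format: 1) Security Issues, 2) Code Quality, 3) Improvements\"  # 4. Output indicator\n",
+    "\n",
+    "messages = [\n",
+    "    {\"role\": \"system\", \"content\": instructions},\n",
+    "    {\"role\": \"user\", \"content\": f\"{context}\\n\\n{input_data}\\n\\n{output_indicator}\"}\n",
+    "]\n",
+    "print(messages[1][\"content\"])\n"
+   ]
+  },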
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### π How This Notebook Works\n",
+ "\n",
+ "
\n",
+ "β οΈ Important:
\n",
+ "This notebook cannot be executed directly from GitHub. You must clone the repository and run it locally in your IDE. \n",
+ "
\n",
+ "\n",
+ "
\n",
+ "π First time using Jupyter notebooks?
\n",
+ "See the About Jupyter Notebooks section in the main README for a complete guide on how notebooks work, where code executes, and how to get started.\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**Quick start:**\n",
+ "- Press `Shift + Enter` to run each cell\n",
+ "- Run cells sequentially from top to bottom\n",
+ "- Output appears below each cell"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Step 1: Install Required Dependencies\n",
+ "Let's start by installing the packages we need for this tutorial.\n",
+ "\n",
+ "Run the cell below. You should see a success message when installation completes:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Install required packages\n",
+ "import subprocess\n",
+ "import sys\n",
+ "\n",
+ "def install_requirements():\n",
+ " try:\n",
+ " # Install from requirements.txt\n",
+ " subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"-q\", \"-r\", \"requirements.txt\"])\n",
+ " print(\"β SUCCESS! All dependencies installed successfully.\")\n",
+ " print(\"π¦ Installed: openai, anthropic, python-dotenv, requests\")\n",
+ " except subprocess.CalledProcessError as e:\n",
+ " print(f\"β Installation failed: {e}\")\n",
+ " print(\"π‘ Try running: pip install openai anthropic python-dotenv requests\")\n",
+ "\n",
+ "install_requirements()\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "β **Success!** Dependencies installed on your local machine. Now let's connect to an AI model.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Step 2: Connect to AI Model\n",
+ "\n",
+ "
\n",
+ "π‘ Note:
\n",
+ "The code below runs on your local machine and connects to AI services over the internet.\n",
+ "
\n",
+ "\n",
+ "Choose your preferred option:\n",
+ "\n",
+ "- **Option A: GitHub Copilot API (local proxy)**: Recommended if you don't have OpenAI or CircuIT API access.\n",
+ " - Supports both **Claude** and **OpenAI** models\n",
+ " - No API keys needed - uses your GitHub Copilot subscription\n",
+ " - Follow [GitHub-Copilot-2-API/README.md](../../GitHub-Copilot-2-API/README.md) to authenticate and start the local server\n",
+ " - Run the setup cell below and **edit your preferred provider** (`\"openai\"` or `\"claude\"`) by setting the `PROVIDER` variable\n",
+ " - Available models:\n",
+ " - **OpenAI**: gpt-4o, gpt-4, gpt-3.5-turbo, o3-mini, o4-mini\n",
+ " - **Claude**: claude-3.5-sonnet, claude-3.7-sonnet, claude-sonnet-4\n",
+ "\n",
+ "- **Option B: OpenAI API**: If you have OpenAI API access, you can use the `OpenAI` connection cells provided later in this notebook.\n",
+ "\n",
+ "- **Option C: CircuIT APIs (Azure OpenAI)**: If you have CircuIT API access, you can use the `CircuIT` connection cells provided later in this notebook."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Option A: GitHub Copilot (Recommended)\n",
+ "\n",
+ "If you have GitHub Copilot, this is the easiest option:\n",
+ "
\n",
+ "π‘ Note:
\n",
+ "The GitHub Copilot API repository (copilot-api) used in this course is a fork of the original repository from:
\n",
+ "\n",
+ "- Follow the setup steps in [https://github.com/snehangshu-splunk/copilot-api/blob/main/.github/README.md](https://github.com/snehangshu-splunk/copilot-api/blob/main/.github/README.md) to:\n",
+ " - Authenticate (`auth`) with your GitHub account that has Copilot access\n",
+ " - Start the local server (default: `http://localhost:7711`)\n",
+ "- Then run the \"GitHub Copilot API setup (local proxy)\" cells below.\n",
+ "\n",
+ "Quick reference (see [README](../../GitHub-Copilot-2-API/README.md) for details):\n",
+ "1. Download and install dependencies\n",
+ " ```bash\n",
+ " # Clone the repository\n",
+ " git clone git@github.com:snehangshu-splunk/copilot-api.git\n",
+ " cd copilot-api\n",
+ "\n",
+ " # Install dependencies\n",
+ " uv sync\n",
+ " ```\n",
+ "2. Before starting the server, you need to authenticate with GitHub:\n",
+ " ```bash\n",
+ " # For business account\n",
+ " uv run copilot2api auth --business\n",
+ " ```\n",
+ " When authenticating for the first time, you will see the following information:\n",
+ " ```\n",
+ " Press Ctrl+C to stop the server\n",
+ " Starting Copilot API server...\n",
+ " Starting GitHub device authorization flow...\n",
+ "\n",
+ " Please enter the code '14B4-5D82' at:\n",
+ " https://github.com/login/device\n",
+ "\n",
+ " Waiting for authorization...\n",
+ " ```\n",
+ " You need to copy `https://github.com/login/device` to your browser, then log in to your GitHub account through the browser. This GitHub account should have GitHub Copilot functionality. After authentication is complete, copy '14B4-5D82' in the browser prompt box. This string of numbers is system-generated and may be different each time.\n",
+ "\n",
+ " > **Don't copy the code here.** If you copy this, it will only cause your authorization to fail.\n",
+ "\n",
+ " After successful device authorization:\n",
+ " - macOS or Linux:\n",
+ " - In the `$HOME/.config/copilot2api/` directory, you will see the github-token file.\n",
+ " - Windows system:\n",
+ " - You will find the github-token file in the `C:\\Users\\\\AppData\\Roaming\\copilot2api\\` directory.\n",
+ "\n",
+ " 3. Start the Server\n",
+ " ```bash\n",
+ " # Start API server (default port 7711)\n",
+ " uv run copilot2api start\n",
+ " ```\n",
+ " Now use the OpenAI libraries to connect to the LLM, by executing the below cell. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Option A: GitHub Copilot API setup (Recommended)\n",
+ "import openai\n",
+ "import anthropic\n",
+ "import os\n",
+ "\n",
+ "# ============================================\n",
+ "# π― CHOOSE YOUR AI MODEL PROVIDER\n",
+ "# ============================================\n",
+ "# Set your preference: \"openai\" or \"claude\"\n",
+ "PROVIDER = \"claude\" # Change to \"claude\" to use Claude models\n",
+ "\n",
+ "# ============================================\n",
+ "# π Available Models by Provider\n",
+ "# ============================================\n",
+ "# OpenAI Models (via GitHub Copilot):\n",
+ "# - gpt-4o (recommended, supports vision)\n",
+ "# - gpt-4\n",
+ "# - gpt-3.5-turbo\n",
+ "# - o3-mini, o4-mini\n",
+ "#\n",
+ "# Claude Models (via GitHub Copilot):\n",
+ "# - claude-3.5-sonnet (recommended, supports vision)\n",
+ "# - claude-3.7-sonnet (supports vision)\n",
+ "# - claude-sonnet-4 (supports vision)\n",
+ "# ============================================\n",
+ "\n",
+ "# Configure clients for both providers\n",
+ "openai_client = openai.OpenAI(\n",
+ " base_url=\"http://localhost:7711/v1\",\n",
+ " api_key=\"dummy-key\"\n",
+ ")\n",
+ "\n",
+ "claude_client = anthropic.Anthropic(\n",
+ " api_key=\"dummy-key\",\n",
+ " base_url=\"http://localhost:7711\"\n",
+ ")\n",
+ "\n",
+ "# Set default models for each provider\n",
+ "OPENAI_DEFAULT_MODEL = \"gpt-4o\"\n",
+ "CLAUDE_DEFAULT_MODEL = \"claude-3.5-sonnet\"\n",
+ "\n",
+ "\n",
+ "def _extract_text_from_blocks(blocks):\n",
+ " \"\"\"Extract text content from response blocks returned by the API.\"\"\"\n",
+ " parts = []\n",
+ " for block in blocks:\n",
+ " text_val = getattr(block, \"text\", None)\n",
+ " if isinstance(text_val, str):\n",
+ " parts.append(text_val)\n",
+ " elif isinstance(block, dict):\n",
+ " t = block.get(\"text\")\n",
+ " if isinstance(t, str):\n",
+ " parts.append(t)\n",
+ " return \"\\n\".join(parts)\n",
+ "\n",
+ "\n",
+ "def get_openai_completion(messages, model=None, temperature=0.0):\n",
+ " \"\"\"Get completion from OpenAI models via GitHub Copilot.\"\"\"\n",
+ " if model is None:\n",
+ " model = OPENAI_DEFAULT_MODEL\n",
+ " try:\n",
+ " response = openai_client.chat.completions.create(\n",
+ " model=model,\n",
+ " messages=messages,\n",
+ " temperature=temperature\n",
+ " )\n",
+ " return response.choices[0].message.content\n",
+ " except Exception as e:\n",
+ " return f\"β Error: {e}\\nπ‘ Make sure GitHub Copilot proxy is running on port 7711\"\n",
+ "\n",
+ "\n",
+ "def get_claude_completion(messages, model=None, temperature=0.0):\n",
+ " \"\"\"Get completion from Claude models via GitHub Copilot.\"\"\"\n",
+ " if model is None:\n",
+ " model = CLAUDE_DEFAULT_MODEL\n",
+ " try:\n",
+ " response = claude_client.messages.create(\n",
+ " model=model,\n",
+ " max_tokens=8192,\n",
+ " messages=messages,\n",
+ " temperature=temperature\n",
+ " )\n",
+ " return _extract_text_from_blocks(getattr(response, \"content\", []))\n",
+ " except Exception as e:\n",
+ " return f\"β Error: {e}\\nπ‘ Make sure GitHub Copilot proxy is running on port 7711\"\n",
+ "\n",
+ "\n",
+ "def get_chat_completion(messages, model=None, temperature=0.0):\n",
+ " \"\"\"\n",
+ " Generic function to get chat completion from any provider.\n",
+ " Routes to the appropriate provider-specific function based on PROVIDER setting.\n",
+ " \"\"\"\n",
+ " if PROVIDER.lower() == \"claude\":\n",
+ " return get_claude_completion(messages, model, temperature)\n",
+ " else: # Default to OpenAI\n",
+ " return get_openai_completion(messages, model, temperature)\n",
+ "\n",
+ "\n",
+ "def get_default_model():\n",
+ " \"\"\"Get the default model for the current provider.\"\"\"\n",
+ " if PROVIDER.lower() == \"claude\":\n",
+ " return CLAUDE_DEFAULT_MODEL\n",
+ " else:\n",
+ " return OPENAI_DEFAULT_MODEL\n",
+ "\n",
+ "\n",
+ "# ============================================\n",
+ "# π§ͺ TEST CONNECTION\n",
+ "# ============================================\n",
+ "print(\"π Testing connection to GitHub Copilot proxy...\")\n",
+ "test_result = get_chat_completion([\n",
+ " {\"role\": \"user\", \"content\": \"test\"}\n",
+ "])\n",
+ "\n",
+ "if test_result and \"Error\" in test_result:\n",
+ " print(\"\\n\" + \"=\"*60)\n",
+ " print(\"β CONNECTION FAILED!\")\n",
+ " print(\"=\"*60)\n",
+ " print(f\"Provider: {PROVIDER.upper()}\")\n",
+ " print(f\"Expected endpoint: http://localhost:7711\")\n",
+ " print(\"\\nβ οΈ The GitHub Copilot proxy is NOT running!\")\n",
+ " print(\"\\nπ To fix this:\")\n",
+ " print(\" 1. Open a new terminal\")\n",
+ " print(\" 2. Navigate to your copilot-api directory\")\n",
+ " print(\" 3. Run: uv run copilot2api start\")\n",
+ " print(\" 4. Wait for the server to start (you should see 'Server initialized')\")\n",
+ " print(\" 5. Come back and rerun this cell\")\n",
+ " print(\"\\nπ‘ Need setup help? See: GitHub-Copilot-2-API/README.md\")\n",
+ " print(\"=\"*70)\n",
+ "else:\n",
+ " print(\"\\n\" + \"=\"*60)\n",
+ " print(\"β CONNECTION SUCCESSFUL!\")\n",
+ " print(\"=\"*60)\n",
+ " print(f\"π€ Provider: {PROVIDER.upper()}\")\n",
+ " print(f\"π¦ Default Model: {get_default_model()}\")\n",
+ " print(f\"π Endpoint: http://localhost:7711\")\n",
+ " print(f\"\\nπ‘ To switch providers, change PROVIDER to '{'claude' if PROVIDER.lower() == 'openai' else 'openai'}' and rerun this cell\")\n",
+ " print(\"=\"*70)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Option B: OpenAI API\n",
+ "\n",
+ "**Setup:** Add your API key to `.env` file, then uncomment and run:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# # Direct OpenAI API setup\n",
+ "# import openai\n",
+ "# import os\n",
+ "# from dotenv import load_dotenv\n",
+ "\n",
+ "# load_dotenv()\n",
+ "\n",
+ "# client = openai.OpenAI(\n",
+ "# api_key=os.getenv(\"OPENAI_API_KEY\") # Set this in your .env file\n",
+ "# )\n",
+ "\n",
+ "# def get_chat_completion(messages, model=\"gpt-4\", temperature=0.7):\n",
+ "# try:\n",
+ "# response = client.chat.completions.create(\n",
+ "# model=model,\n",
+ "# messages=messages,\n",
+ "# temperature=temperature\n",
+ "# )\n",
+ "# return response.choices[0].message.content\n",
+ "# except Exception as e:\n",
+ "# return f\"β Error: {e}\"\n",
+ "\n",
+ "# print(\"β OpenAI API configured successfully!\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Option C: CircuIT APIs\n",
+ "\n",
+ "**Setup:** Configure environment variables (`CISCO_CLIENT_ID`, `CISCO_CLIENT_SECRET`, `CISCO_OPENAI_APP_KEY`) in `.env` file.\n",
+ "\n",
+ "Get values from: https://ai-chat.cisco.com/bridgeit-platform/api/home\n",
+ "\n",
+ "Then uncomment and run:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# import openai\n",
+ "# import traceback\n",
+ "# import requests\n",
+ "# import base64\n",
+ "# import os\n",
+ "# from dotenv import load_dotenv\n",
+ "# from openai import AzureOpenAI\n",
+ "\n",
+ "# # Load environment variables\n",
+ "# load_dotenv()\n",
+ "\n",
+ "# # Open AI version to use\n",
+ "# openai.api_type = \"azure\"\n",
+ "# openai.api_version = \"2024-12-01-preview\"\n",
+ "\n",
+ "# # Get API_KEY wrapped in token - using environment variables\n",
+ "# client_id = os.getenv(\"CISCO_CLIENT_ID\")\n",
+ "# client_secret = os.getenv(\"CISCO_CLIENT_SECRET\")\n",
+ "\n",
+ "# url = \"https://id.cisco.com/oauth2/default/v1/token\"\n",
+ "\n",
+ "# payload = \"grant_type=client_credentials\"\n",
+ "# value = base64.b64encode(f\"{client_id}:{client_secret}\".encode(\"utf-8\")).decode(\"utf-8\")\n",
+ "# headers = {\n",
+ "# \"Accept\": \"*/*\",\n",
+ "# \"Content-Type\": \"application/x-www-form-urlencoded\",\n",
+ "# \"Authorization\": f\"Basic {value}\",\n",
+ "# }\n",
+ "\n",
+ "# token_response = requests.request(\"POST\", url, headers=headers, data=payload)\n",
+ "# print(token_response.text)\n",
+ "# token_data = token_response.json()\n",
+ "\n",
+ "# client = AzureOpenAI(\n",
+ "# azure_endpoint=\"https://chat-ai.cisco.com\",\n",
+ "# api_key=token_data.get(\"access_token\"),\n",
+ "# api_version=\"2024-12-01-preview\",\n",
+ "# )\n",
+ "\n",
+ "# app_key = os.getenv(\"CISCO_OPENAI_APP_KEY\")\n",
+ "\n",
+ "# def get_chat_completion(messages, model=\"gpt-4o\", temperature=0.0):\n",
+ "# try:\n",
+ "# response = client.chat.completions.create(\n",
+ "# model=model,\n",
+ "# messages=messages,\n",
+ "# temperature=temperature,\n",
+ "# user=f'{\"appkey\": \"{app_key}\"}',\n",
+ "# )\n",
+ "# return response.choices[0].message.content\n",
+ "# except Exception as e:\n",
+ "# return f\"β Error: {e}\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Step 3: Test Connection\n",
+ "\n",
+ "Run your first prompt to verify everything works:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Test the connection with a simple prompt\n",
+ "test_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a helpful coding assistant. Respond with exactly: 'Connection successful! Ready for prompt engineering.'\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"Test the connection\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "response = get_chat_completion(test_messages)\n",
+ "print(\"π§ͺ Test Response:\")\n",
+ "print(response)\n",
+ "\n",
+ "if response and \"Connection successful\" in response:\n",
+ " print(\"\\nπ Perfect! Your AI connection is working!\")\n",
+ "else:\n",
+ " print(\"\\nβ οΈ Connection test complete, but response format may vary.\")\n",
+ " print(\"This is normal - let's continue with the tutorial!\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "β **Connection verified!** You're ready to learn prompt engineering.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Step 4: Craft Your First AI-Powered Code Review\n",
+ "\n",
+ "Time to put theory into practice! You'll engineer a prompt that transforms a generic AI into a specialized code review expert.\n",
+ "\n",
+ "Let's see the 4 core elements in action with a software engineering example:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Example: Code review prompt with all 4 elements\n",
+ "messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": (\n",
+ " # 1. INSTRUCTIONS\n",
+ " \"You are a senior software engineer conducting a code review. \"\n",
+ " \"Analyze the provided code and identify potential issues.\"\n",
+ " )\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"\n",
+ "# 2. CONTEXT\n",
+ "Code context: This is a utility function for user registration in a web application.\n",
+ "\n",
+ "# 3. INPUT DATA\n",
+ "Code to review:\n",
+ "```python\n",
+ "def register_user(email, password):\n",
+ " if email and password:\n",
+ " user = {{\"email\": email, \"password\": password}}\n",
+ " return user\n",
+ " return None\n",
+ "```\n",
+ "\n",
+ "# 4. OUTPUT FORMAT\n",
+ "Please provide your response in this format:\n",
+ "1. Security Issues (if any)\n",
+ "2. Code Quality Issues (if any) \n",
+ "3. Recommended Improvements\n",
+ "4. Overall Assessment\n",
+ "\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "response = get_chat_completion(messages)\n",
+ "print(\"π CODE REVIEW RESULT:\")\n",
+ "print(response)\n"
+ ]
+ },
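+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Remember the pro tip from earlier: ask one copy of the model to check output from another. The optional sketch below (it assumes the connection cell above ran, so `get_chat_completion` and `response` exist) sends the review back for a critique:\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Ask the model to critique the review it just produced (evaluate-and-iterate pattern)\n",
+    "critique_messages = [\n",
+    "    {\n",
+    "        \"role\": \"system\",\n",
+    "        \"content\": \"You are a prompt engineering coach. Assess whether a code review is specific, actionable, and complete.\"\n",
+    "    },\n",
+    "    {\n",
+    "        \"role\": \"user\",\n",
+    "        \"content\": f\"Code review to assess:\\n{response}\\n\\nList anything vague or missing, then suggest one improvement to the original prompt.\"\n",
+    "    }\n",
+    "]\n",
+    "\n",
+    "print(\"🔍 REVIEW CRITIQUE:\")\n",
+    "print(get_chat_completion(critique_messages))\n"
+   ]
+  },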
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## πββοΈ Hands-On Practice\n",
+ "\n",
+ "Now let's practice what you've learned! These exercises will help you master the 4 core elements of effective prompts.\n",
+ "\n",
+ "### Activity 1.1: Analyze Prompts and Identify Missing Elements\n",
+ "\n",
+ "Let's examine some incomplete prompts and identify what's missing:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# HINT: For each prompt, decide if it includes:\n",
+ "# - Instructions/persona\n",
+ "# - Context\n",
+ "# - Input data\n",
+ "# - Output indicator/format\n",
+ "# YOUR TASK: Write your notes below or in markdown.\n",
+ "\n",
+ "# Prompt 1 - Missing some elements\n",
+ "prompt_1 = \"\"\"\n",
+ "Fix this code:\n",
+ "def calculate(x, y):\n",
+ " return x + y\n",
+ "\"\"\"\n",
+ "\n",
+ "# Prompt 2 - Missing some elements \n",
+ "prompt_2 = \"\"\"\n",
+ "You are a Python developer.\n",
+ "Make this function better.\n",
+ "\"\"\"\n",
+ "\n",
+ "# Prompt 3 - Missing some elements\n",
+ "prompt_3 = \"\"\"\n",
+ "Review the following function and provide feedback.\n",
+ "Return your response as a list of improvements.\n",
+ "\"\"\"\n",
+ "\n",
+ "# YOUR NOTES:\n",
+ "# - Prompt 1 missing: ...\n",
+ "# - Prompt 2 missing: ...\n",
+ "# - Prompt 3 missing: ...\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Activity 1.2: Create a Complete Prompt with All 4 Elements\n",
+ "\n",
+ "Now let's build a complete prompt for code documentation. Use the function below and create both system and user messages:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# HINT: Include all 4 elements:\n",
+ "# - Instructions/persona (system)\n",
+ "# - Context (user)\n",
+ "# - Input data (user)\n",
+ "# - Output indicator/format (user)\n",
+ "# YOUR TASK: Build system_message and user_message using the function below, then call get_chat_completion.\n",
+ "\n",
+ "function_to_document = \"\"\"\n",
+ "def process_transaction(user_id, amount, transaction_type):\n",
+ " if transaction_type not in ['deposit', 'withdrawal']:\n",
+ " raise ValueError(\"Invalid transaction type\")\n",
+ " \n",
+ " if amount <= 0:\n",
+ " raise ValueError(\"Amount must be positive\")\n",
+ " \n",
+ " balance = get_user_balance(user_id)\n",
+ " \n",
+ " if transaction_type == 'withdrawal' and balance < amount:\n",
+ " raise InsufficientFundsError(\"Insufficient funds\")\n",
+ " \n",
+ " new_balance = balance + amount if transaction_type == 'deposit' else balance - amount\n",
+ " update_user_balance(user_id, new_balance)\n",
+ " log_transaction(user_id, amount, transaction_type)\n",
+ " \n",
+ " return new_balance\n",
+ "\"\"\"\n",
+ "\n",
+ "# system_message = ...\n",
+ "# user_message = ...\n",
+ "# messages = [\n",
+ "# {\"role\": \"system\", \"content\": system_message},\n",
+ "# {\"role\": \"user\", \"content\": user_message}\n",
+ "# ]\n",
+ "# response = get_chat_completion(messages)\n",
+ "# print(response)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### π― Exercise Solutions & Discussion\n",
+ "\n",
+ "> π‘ **Try the exercises above first!** Complete Activities 1.1 and 1.2 before checking the solutions below."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "**π Solutions and Discussion**\n",
+ "\n",
+ "**Activity 1.1 Analysis:**\n",
+ "- **Prompt 1** missing: Instructions (role), Context, Output format\n",
+ "- **Prompt 2** missing: Context, Input data, Output format \n",
+ "- **Prompt 3** missing: Instructions (role), Context, Input data\n",
+ "\n",
+ "**Activity 1.2 Solution Example:**\n",
+ "```python\n",
+ "system_message = \"You are a senior software engineer creating technical documentation. Write clear, comprehensive documentation for the provided function.\"\n",
+ "\n",
+ "user_message = f\"\"\"\n",
+ "Context: This is a financial transaction processing function for a banking application.\n",
+ "\n",
+ "Function to document:\n",
+ "\n",
+ "{function_to_document}\n",
+ "\n",
+ "Please provide documentation in this format:\n",
+ "1. Function Purpose\n",
+ "2. Parameters\n",
+ "3. Return Value\n",
+ "4. Error Conditions\n",
+ "5. Usage Example\n",
+ "\"\"\"\n",
+ "```\n",
+ "\n",
+ "**Key Takeaway:** Notice how each element serves a specific purpose:\n",
+ "- **Instructions** define the AI's role and task\n",
+ "- **Context** provides domain knowledge\n",
+ "- **Input Data** gives the specific content to work with\n",
+ "- **Output Format** ensures consistent, structured results"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "π **Excellent!** You've just executed a structured prompt with all 4 core elements and practiced identifying them in exercises.\n",
+ "\n",
+ "π‘ **What makes this work?**\n",
+ "- **Clear role definition** (\"senior software engineer conducting code review\")\n",
+ "- **Specific context** about the code's purpose\n",
+ "- **Concrete input** to analyze\n",
+ "- **Structured output format** for consistent results\n",
+ "\n",
+ "**You've now completed:**\n",
+ "- β Analyzed incomplete prompts to identify missing elements\n",
+ "- β Created complete prompts with all 4 core elements\n",
+ "- β Applied prompt engineering to real coding scenarios\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## π Tracking Your Progress\n",
+ "\n",
+ "> **π‘ New to Skills Checklists?** See [Tracking Your Progress](../../README.md#-tracking-your-progress) in the main README for details on how the Skills Checklist works and when to check off skills.\n",
+ "\n",
+ "### Self-Assessment Questions\n",
+ "\n",
+ "After completing Module 1, ask yourself:\n",
+ "1. Can I explain why structured prompts work better than vague ones?\n",
+ "2. Can I apply the 4 core elements to my daily coding tasks?\n",
+ "3. Can I teach a colleague how to write effective prompts?\n",
+ "4. Can I create variations of prompts for different scenarios?\n",
+ "\n",
+ "### Progress Overview\n",
+ "\n",
+ "> π‘ **Note:** The status indicators below (β /β¬) are visual guides only and cannot be clicked. Scroll down to \"Check Off Your Skills\" for the interactive checkboxes where you'll track your actual progress!\n",
+ "\n",
+ "### Check Off Your Skills\n",
+ "\n",
+ "Mark each skill as you master it:\n",
+ "\n",
+ "**Foundation Skills:**\n",
+ "- [ ] I can identify the 4 core prompt elements in any example\n",
+ "- [ ] I can convert vague requests into structured prompts\n",
+ "- [ ] I can write clear instructions for AI assistants\n",
+ "- [ ] I can provide appropriate context for coding tasks\n",
+ "\n",
+ "**Application Skills:**\n",
+ "- [ ] I can use prompts for code review and analysis\n",
+ "- [ ] I can adapt prompts for different programming languages\n",
+ "- [ ] I can troubleshoot when prompts don't work as expected\n",
+ "- [ ] I can explain prompt engineering benefits to my team\n",
+ "\n",
+ "> π‘ **Remember:** The goal is not just to complete activities, but to build lasting skills that transform your development workflow!\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Module 1 Complete! π\n",
+ "\n",
+ "**What You've Accomplished:**\n",
+ "- β Set up Python environment with AI model access\n",
+ "- β Executed your first structured prompt\n",
+ "- β Learned the 4 core elements of effective prompts\n",
+ "- β Conducted your first AI-powered code review\n",
+ "- β Analyzed incomplete prompts to identify missing elements\n",
+ "- β Created complete prompts with all 4 core elements\n",
+ "- β Applied prompt engineering to real coding scenarios\n",
+ "\n",
+ "**Next:** Continue to [**Module 2: Fundamentals**](../module-02-fundamentals/README.md)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Troubleshooting\n",
+ "\n",
+ "**Common Issues:**\n",
+ "- **Installation failed:** Try `pip install openai anthropic python-dotenv requests`\n",
+ "- **Connection failed:** Ensure GitHub Copilot proxy is running on port 7711\n",
+ "- **Authentication errors:** Check your API keys and permissions\n",
+ "\n",
+ "π **Congratulations!** You've completed Module 1 and are ready to become a prompt engineering expert!\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": ".venv",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.13.2"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/01-tutorials/module-01-foundations/requirements.txt b/01-course/module-01-foundations/requirements.txt
similarity index 100%
rename from 01-tutorials/module-01-foundations/requirements.txt
rename to 01-course/module-01-foundations/requirements.txt
diff --git a/01-course/module-02-fundamentals/README.md b/01-course/module-02-fundamentals/README.md
new file mode 100644
index 0000000..3af440c
--- /dev/null
+++ b/01-course/module-02-fundamentals/README.md
@@ -0,0 +1,53 @@
+# Module 2: Fundamentals
+
+## Core Prompt Engineering Techniques
+
+This module covers the essential prompt engineering techniques that form the foundation of effective AI assistant interaction for software development.
+
+### Learning Objectives
+By completing this module, you will be able to:
+
+- β Apply eight core prompt engineering techniques to real coding scenarios
+- β Write clear instructions with specific constraints and requirements
+- β Use role prompting to transform AI into specialized domain experts
+- β Organize complex inputs using XML delimiters and structured formatting
+- β Teach AI your preferred styles using few-shot examples
+- β Implement chain-of-thought reasoning for systematic problem-solving
+- β Ground AI responses in reference texts with proper citations
+- β Break complex tasks into sequential workflows using prompt chaining
+- β Create evaluation rubrics and self-critique loops with LLM-as-Judge
+- β Separate reasoning from clean final outputs using inner monologue
+
+### Getting Started
+
+**First time here?** If you haven't set up your development environment yet, follow the [Quick Setup guide](../../README.md#-quick-setup) in the main README first.
+
+**Ready to start?**
+1. **Open the tutorial notebook**: Click on [module2.ipynb](./module2.ipynb) to start the interactive tutorial
+2. **Install dependencies**: Run the "Install Required Dependencies" cell in the notebook
+3. **Follow the notebook**: Work through each cell sequentially - the notebook will guide you through setup and exercises
+4. **Complete exercises**: Practice the hands-on activities as you go
+
+### Module Contents
+- **[module2.ipynb](./module2.ipynb)** - Complete module 2 tutorial notebook
+
+### Time Required
+Approximately 90-120 minutes (1.5-2 hours)
+
+**Time Breakdown:**
+- Setup and introduction: ~10 minutes
+- 8 core tactics with examples: ~70 minutes
+- Hands-on practice activities: ~20-30 minutes
+- Progress tracking: ~5 minutes
+
+π‘ **Tip:** You can complete this module in one session or break it into multiple shorter sessions. Each tactic is self-contained, making it easy to pause and resume.
+
+### Prerequisites
+- Python 3.8+ installed
+- IDE with notebook support (VS Code or Cursor recommended)
+- API access to GitHub Copilot, CircuIT, or OpenAI
+
+### Next Steps
+After completing this module:
+1. Review and refine your solutions to the exercises in this module
+2. Continue to [Module 3: Application in Software Engineering](../module-03-applications/)
diff --git a/01-course/module-02-fundamentals/module2.ipynb b/01-course/module-02-fundamentals/module2.ipynb
new file mode 100644
index 0000000..224738e
--- /dev/null
+++ b/01-course/module-02-fundamentals/module2.ipynb
@@ -0,0 +1,3134 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Module 2 - Core Prompting Techniques\n",
+ "\n",
+ "| **Aspect** | **Details** |\n",
+ "|-------------|-------------|\n",
+ "| **Goal** | Master 8 core prompt engineering tactics: role prompting, structured inputs, few-shot examples, chain-of-thought reasoning, reference citations, prompt chaining, LLM-as-judge, and inner monologue to build professional-grade AI workflows |\n",
+ "| **Time** | ~90-120 minutes (1.5-2 hours) |\n",
+ "| **Prerequisites** | Python 3.8+, IDE with notebook support, API access (GitHub Copilot, CircuIT, or OpenAI) |\n",
+ "| **Setup Required** | Clone the repository and follow [Quick Setup](../../README.md#-quick-setup) before running this notebook |\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## π Ready to Start?\n",
+ "\n",
+ "> β οΈ **Important:** This module requires fresh setup. Even if you completed Module 1, run the setup cells below to ensure everything works correctly. The code below runs on your local machine and connects to AI services over the internet.\n",
+ "\n",
+ "Choose your preferred option:\n",
+ "\n",
+ "- **Option A: GitHub Copilot API (local proxy)** β **Recommended**: \n",
+ " - Supports both **Claude** and **OpenAI** models\n",
+ " - No API keys needed - uses your GitHub Copilot subscription\n",
+ " - Follow [GitHub-Copilot-2-API/README.md](../../GitHub-Copilot-2-API/README.md) to authenticate and start the local server\n",
+ " - Run the setup cell below and **edit your preferred provider** (`\"openai\"` or `\"claude\"`) by setting the `PROVIDER` variable\n",
+ " - Available models:\n",
+ " - **OpenAI**: gpt-4o, gpt-4, gpt-3.5-turbo, o3-mini, o4-mini\n",
+ " - **Claude**: claude-3.5-sonnet, claude-3.7-sonnet, claude-sonnet-4\n",
+ "\n",
+ "- **Option B: OpenAI API**: If you have OpenAI API access, uncomment and run the **Option B** cell below.\n",
+ "\n",
+ "- **Option C: CircuIT APIs (Azure OpenAI)**: If you have CircuIT API access, uncomment and run the **Option C** cell below.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Option A: GitHub Copilot API setup (Recommended)\n",
+ "import openai\n",
+ "import anthropic\n",
+ "import os\n",
+ "\n",
+ "# ============================================\n",
+ "# π― CHOOSE YOUR AI MODEL PROVIDER\n",
+ "# ============================================\n",
+ "# Set your preference: \"openai\" or \"claude\"\n",
+ "PROVIDER = \"claude\" # Change to \"openai\" to use OpenAI models\n",
+ "\n",
+ "# ============================================\n",
+ "# π Available Models by Provider\n",
+ "# ============================================\n",
+ "# OpenAI Models (via GitHub Copilot):\n",
+ "# - gpt-4o (recommended, supports vision)\n",
+ "# - gpt-4\n",
+ "# - gpt-3.5-turbo\n",
+ "# - o3-mini, o4-mini\n",
+ "#\n",
+ "# Claude Models (via GitHub Copilot):\n",
+ "# - claude-3.5-sonnet (recommended, supports vision)\n",
+ "# - claude-3.7-sonnet (supports vision)\n",
+ "# - claude-sonnet-4 (supports vision)\n",
+ "# ============================================\n",
+ "\n",
+ "# Configure clients for both providers\n",
+ "openai_client = openai.OpenAI(\n",
+ " base_url=\"http://localhost:7711/v1\",\n",
+ " api_key=\"dummy-key\"\n",
+ ")\n",
+ "\n",
+ "claude_client = anthropic.Anthropic(\n",
+ " api_key=\"dummy-key\",\n",
+ " base_url=\"http://localhost:7711\"\n",
+ ")\n",
+ "\n",
+ "# Set default models for each provider\n",
+ "OPENAI_DEFAULT_MODEL = \"gpt-4o\"\n",
+ "CLAUDE_DEFAULT_MODEL = \"claude-3.5-sonnet\"\n",
+ "\n",
+ "\n",
+ "def _extract_text_from_blocks(blocks):\n",
+ " \"\"\"Extract text content from response blocks returned by the API.\"\"\"\n",
+ " parts = []\n",
+ " for block in blocks:\n",
+ " text_val = getattr(block, \"text\", None)\n",
+ " if isinstance(text_val, str):\n",
+ " parts.append(text_val)\n",
+ " elif isinstance(block, dict):\n",
+ " t = block.get(\"text\")\n",
+ " if isinstance(t, str):\n",
+ " parts.append(t)\n",
+ " return \"\\n\".join(parts)\n",
+ "\n",
+ "\n",
+ "def get_openai_completion(messages, model=None, temperature=0.0):\n",
+ " \"\"\"Get completion from OpenAI models via GitHub Copilot.\"\"\"\n",
+ " if model is None:\n",
+ " model = OPENAI_DEFAULT_MODEL\n",
+ " try:\n",
+ " response = openai_client.chat.completions.create(\n",
+ " model=model,\n",
+ " messages=messages,\n",
+ " temperature=temperature\n",
+ " )\n",
+ " return response.choices[0].message.content\n",
+ " except Exception as e:\n",
+ " return f\"β Error: {e}\\nπ‘ Make sure GitHub Copilot proxy is running on port 7711\"\n",
+ "\n",
+ "\n",
+ "def get_claude_completion(messages, model=None, temperature=0.0):\n",
+ " \"\"\"Get completion from Claude models via GitHub Copilot.\"\"\"\n",
+ " if model is None:\n",
+ " model = CLAUDE_DEFAULT_MODEL\n",
+ " try:\n",
+ " response = claude_client.messages.create(\n",
+ " model=model,\n",
+ " max_tokens=8192,\n",
+ " messages=messages,\n",
+ " temperature=temperature\n",
+ " )\n",
+ " return _extract_text_from_blocks(getattr(response, \"content\", []))\n",
+ " except Exception as e:\n",
+ " return f\"β Error: {e}\\nπ‘ Make sure GitHub Copilot proxy is running on port 7711\"\n",
+ "\n",
+ "\n",
+ "def get_chat_completion(messages, model=None, temperature=0.7):\n",
+ " \"\"\"\n",
+ " Generic function to get chat completion from any provider.\n",
+ " Routes to the appropriate provider-specific function based on PROVIDER setting.\n",
+ " \"\"\"\n",
+ " if PROVIDER.lower() == \"claude\":\n",
+ " return get_claude_completion(messages, model, temperature)\n",
+ " else: # Default to OpenAI\n",
+ " return get_openai_completion(messages, model, temperature)\n",
+ "\n",
+ "\n",
+ "def get_default_model():\n",
+ " \"\"\"Get the default model for the current provider.\"\"\"\n",
+ " if PROVIDER.lower() == \"claude\":\n",
+ " return CLAUDE_DEFAULT_MODEL\n",
+ " else:\n",
+ " return OPENAI_DEFAULT_MODEL\n",
+ "\n",
+ "\n",
+ "# ============================================\n",
+ "# π§ͺ TEST CONNECTION\n",
+ "# ============================================\n",
+ "print(\"π Testing connection to GitHub Copilot proxy...\")\n",
+ "test_result = get_chat_completion([\n",
+ " {\"role\": \"user\", \"content\": \"test\"}\n",
+ "])\n",
+ "\n",
+ "if test_result and \"Error\" in test_result:\n",
+ " print(\"\\n\" + \"=\"*60)\n",
+ " print(\"β CONNECTION FAILED!\")\n",
+ " print(\"=\"*60)\n",
+ " print(f\"Provider: {PROVIDER.upper()}\")\n",
+ " print(f\"Expected endpoint: http://localhost:7711\")\n",
+ " print(\"\\nβ οΈ The GitHub Copilot proxy is NOT running!\")\n",
+ " print(\"\\nπ To fix this:\")\n",
+ " print(\" 1. Open a new terminal\")\n",
+ " print(\" 2. Navigate to your copilot-api directory\")\n",
+ " print(\" 3. Run: uv run copilot2api start\")\n",
+ " print(\" 4. Wait for the server to start (you should see 'Server initialized')\")\n",
+ " print(\" 5. Come back and rerun this cell\")\n",
+ " print(\"\\nπ‘ Need setup help? See: GitHub-Copilot-2-API/README.md\")\n",
+ "    print(\"=\"*60)\n",
+ "else:\n",
+ " print(\"\\n\" + \"=\"*60)\n",
+ " print(\"β CONNECTION SUCCESSFUL!\")\n",
+ " print(\"=\"*60)\n",
+ " print(f\"π€ Provider: {PROVIDER.upper()}\")\n",
+ " print(f\"π¦ Default Model: {get_default_model()}\")\n",
+ " print(f\"π Endpoint: http://localhost:7711\")\n",
+ " print(f\"\\nπ‘ To switch providers, change PROVIDER to '{'claude' if PROVIDER.lower() == 'openai' else 'openai'}' and rerun this cell\")\n",
+ "    print(\"=\"*60)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Option B: Direct OpenAI API\n",
+ "\n",
+ "**Setup:** Add your API key to `.env` file, then uncomment and run:\n",
+ "\n",
+ "> π‘ **Note:** This option requires a paid OpenAI API account. If you're using GitHub Copilot, stick with Option A above.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# # Option B: Direct OpenAI API setup\n",
+ "# import openai\n",
+ "# import os\n",
+ "# from dotenv import load_dotenv\n",
+ "\n",
+ "# load_dotenv()\n",
+ "\n",
+ "# client = openai.OpenAI(\n",
+ "# api_key=os.getenv(\"OPENAI_API_KEY\") # Set this in your .env file\n",
+ "# )\n",
+ "\n",
+ "# def get_chat_completion(messages, model=\"gpt-4o\", temperature=0.7):\n",
+ "# \"\"\"Get a chat completion from OpenAI.\"\"\"\n",
+ "# try:\n",
+ "# response = client.chat.completions.create(\n",
+ "# model=model,\n",
+ "# messages=messages,\n",
+ "# temperature=temperature\n",
+ "# )\n",
+ "# return response.choices[0].message.content\n",
+ "# except Exception as e:\n",
+ "# return f\"β Error: {e}\"\n",
+ "\n",
+ "# print(\"β OpenAI API configured successfully!\")\n",
+ "# print(\"π€ Using OpenAI's official API\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Option C: CircuIT APIs (Azure OpenAI)\n",
+ "\n",
+ "**Setup:** Configure environment variables (`CISCO_CLIENT_ID`, `CISCO_CLIENT_SECRET`, `CISCO_OPENAI_APP_KEY`) in `.env` file.\n",
+ "\n",
+ "Get values from: https://ai-chat.cisco.com/bridgeit-platform/api/home\n",
+ "\n",
+ "Then uncomment and run:\n",
+ "\n",
+ "> π‘ **Note:** This option is for Cisco employees with CircuIT API access.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# # Option C: CircuIT APIs (Azure OpenAI) setup\n",
+ "# import openai\n",
+ "# import traceback\n",
+ "# import requests\n",
+ "# import base64\n",
+ "# import os\n",
+ "# from dotenv import load_dotenv\n",
+ "# from openai import AzureOpenAI\n",
+ "\n",
+ "# # Load environment variables\n",
+ "# load_dotenv()\n",
+ "\n",
+ "# # Open AI version to use\n",
+ "# openai.api_type = \"azure\"\n",
+ "# openai.api_version = \"2024-12-01-preview\"\n",
+ "\n",
+ "# # Get API_KEY wrapped in token - using environment variables\n",
+ "# client_id = os.getenv(\"CISCO_CLIENT_ID\")\n",
+ "# client_secret = os.getenv(\"CISCO_CLIENT_SECRET\")\n",
+ "\n",
+ "# url = \"https://id.cisco.com/oauth2/default/v1/token\"\n",
+ "\n",
+ "# payload = \"grant_type=client_credentials\"\n",
+ "# value = base64.b64encode(f\"{client_id}:{client_secret}\".encode(\"utf-8\")).decode(\"utf-8\")\n",
+ "# headers = {\n",
+ "# \"Accept\": \"*/*\",\n",
+ "# \"Content-Type\": \"application/x-www-form-urlencoded\",\n",
+ "# \"Authorization\": f\"Basic {value}\",\n",
+ "# }\n",
+ "\n",
+ "# token_response = requests.request(\"POST\", url, headers=headers, data=payload)\n",
+ "# print(token_response.text)\n",
+ "# token_data = token_response.json()\n",
+ "\n",
+ "# client = AzureOpenAI(\n",
+ "# azure_endpoint=\"https://chat-ai.cisco.com\",\n",
+ "# api_key=token_data.get(\"access_token\"),\n",
+ "# api_version=\"2024-12-01-preview\",\n",
+ "# )\n",
+ "\n",
+ "# app_key = os.getenv(\"CISCO_OPENAI_APP_KEY\")\n",
+ "\n",
+ "# def get_chat_completion(messages, model=\"gpt-4o\", temperature=0.7):\n",
+ "# \"\"\"Get a chat completion from CircuIT APIs.\"\"\"\n",
+ "# try:\n",
+ "# response = client.chat.completions.create(\n",
+ "# model=model,\n",
+ "# messages=messages,\n",
+ "# temperature=temperature,\n",
+ "# user=f'{{\"appkey\": \"{app_key}\"}}',\n",
+ "# )\n",
+ "# return response.choices[0].message.content\n",
+ "# except Exception as e:\n",
+ "# return f\"β Error: {e}\"\n",
+ "\n",
+ "# print(\"β CircuIT APIs configured successfully!\")\n",
+ "# print(\"π€ Using Azure OpenAI via CircuIT\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Test Connection\n",
+ "\n",
+ "Let's test that everything is working before we begin:\n",
+ "\n",
+ "> π‘ **Tip:** If you see long AI responses and the output shows \"Output is truncated. View as a scrollable element\" - click that link to see the full response in a scrollable view!\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Quick setup verification\n",
+ "test_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a prompt engineering instructor. Respond with: 'Module 2 setup verified! Ready to learn core techniques.'\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"Test Module 2 setup\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "response = get_chat_completion(test_messages)\n",
+ "print(\"π§ͺ Setup Test:\")\n",
+ "print(response)\n",
+ "\n",
+ "if response and (\"verified\" in response.lower() or \"ready\" in response.lower()):\n",
+ " print(\"\\nπ Perfect! Module 2 environment is ready!\")\n",
+ "else:\n",
+ " print(\"\\nβ οΈ Setup test complete. Let's continue with the tutorial!\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "\n",
+ "## π― Core Prompt Engineering Techniques\n",
+ "\n",
+ "### Introduction: The Art of Prompt Engineering\n",
+ "\n",
+ "#### π Ready to Transform Your AI Interactions?\n",
+ "\n",
+ "You've successfully set up your environment and tested the connection. Now comes the exciting part - **learning the tactical secrets** that separate amateur prompt writers from AI power users.\n",
+ "\n",
+ "Think of what you've accomplished so far as **laying the foundation** of a house. Now we're about to build the **architectural masterpiece** that will revolutionize how you work with AI assistants.\n",
+ "\n",
+ "\n",
+ "#### π¨βπ« What You're About to Master\n",
+ "\n",
+ "In the next sections, you'll discover **eight core tactics** that professional developers use to get consistently excellent results from AI:\n",
+ "\n",
+ "- **π Role Prompting** - Transform AI into specialized experts\n",
+ "- **Structured Inputs** - Organize complex inputs with XML delimiters\n",
+ "- **Few-Shot Examples** - Teach AI your preferred styles with examples\n",
+ "- **Chain-of-Thought** - Guide the AI through systematic, step-by-step reasoning\n",
+ "- **Reference Citations** - Ground responses in reference texts\n",
+ "- **Prompt Chaining** - Break complex tasks into sequential workflows\n",
+ "- **βοΈ LLM-as-Judge** - Use AI to evaluate and improve outputs\n",
+ "- **π€« Inner Monologue** - Hide reasoning, show only final results\n",
+ "\n",
+ "> π‘ **Pro Tip:** This module covers 8 powerful tactics over 90-120 minutes. Take short breaks between tactics to reflect on how you can apply each technique to your day-to-day work. Make notes as you progress - jot down specific use cases from your projects where each tactic could be valuable. This active reflection will help you retain the techniques and integrate them into your workflow faster!\n",
+ "\n",
+ "#### π‘ Taking Breaks? We've Got You Covered!\n",
+ "\n",
+ "This module is designed for 90-120 minutes of focused learning. To help you manage your time effectively, we've added **4 strategic break points** throughout:\n",
+ "\n",
+ "| Break Point | Location | Time Elapsed | Bookmark Text |\n",
+ "|---|---|---|---|\n",
+ "| β Break #1 | After Tactic 2 | ~30 min | \"Tactic 3: Few-Shot Examples\" |\n",
+ "| π΅ Break #2 | After Tactic 4 | ~60 min | \"Tactic 5: Reference Citations\" |\n",
+ "| π§ Break #3 | After Tactic 6 | ~90 min | \"Tactic 7: LLM-as-Judge\" |\n",
+ "| π― Break #4 | Before Practice | ~100 min | \"Hands-On Practice - Activity 2.1\" |\n",
+ "\n",
+ "**How to Resume Your Session:**\n",
+ "1. Scroll down to find the colorful break point card you last saw\n",
+ "2. Look for the **\"π BOOKMARK TO RESUME\"** section\n",
+ "3. Use `Ctrl+F` (or `Cmd+F` on Mac) to search for the bookmark text\n",
+ "4. You'll jump right to where you left off!\n",
+ "\n",
+ "**Pro Tip:** Each break point card shows:\n",
+ "- β What you've completed\n",
+ "- βοΈ What's coming next\n",
+ "- β±οΈ Estimated time for the next section\n",
+ "\n",
+ "Feel free to work at your own pace; these are suggestions, not requirements! π\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### π¬ Tactic 0: Write Clear Instructions\n",
+ "\n",
+ "**Foundation Principle** - Before diving into advanced tactics, master the art of clear, specific instructions."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**Core Principle:** When interacting with AI models, think of them as brilliant but very new employees who need explicit instructions. The more precisely you explain what you wantβincluding context, specific requirements, and sequential stepsβthe better the AI's response will be.\n",
+ "\n",
+ "**The Golden Rule:** Show your prompt to a colleague with minimal context on the task. If they're confused, the AI will likely be too.\n",
+ "\n",
+ "**Software Engineering Application:** This tactic becomes crucial when asking for code refactoring, where you need to specify coding standards, performance requirements, and constraints to get production-ready results.\n",
+ "\n",
+ "*Reference: [Claude Documentation - Be Clear and Direct](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/be-clear-and-direct)*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Example: Vague vs. Specific Instructions\n",
+ "\n",
+ "**Why This Works:** Specific instructions eliminate ambiguity and guide the model toward your exact requirements.\n",
+ "\n",
+ "Let's compare a generic approach with a specific one:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Vague request - typical beginner mistake\n",
+ "messages = [\n",
+ " {\"role\": \"user\", \"content\": \"Help me choose a programming language for my project\"}\n",
+ "]\n",
+ "\n",
+ "response = get_chat_completion(messages)\n",
+ "\n",
+ "print(\"VAGUE REQUEST RESULT:\")\n",
+ "print(response)\n",
+ "print(\"\\n\" + \"=\"*50 + \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Specific request - much better results\n",
+ "messages = [\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"I need to choose a programming language for building a real-time chat application that will handle 10,000 concurrent users, needs to integrate with a PostgreSQL database, and must be deployable on AWS. The team has 3 years of experience with web development. Provide the top 3 language recommendations with pros and cons for each.\",\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "response = get_chat_completion(messages)\n",
+ "\n",
+ "print(\"SPECIFIC REQUEST RESULT:\")\n",
+ "print(response)\n",
+ "print(\"\\n\" + \"=\"*50 + \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Another way to achieve specificity is through the `system prompt`. This is particularly useful when you want to keep the user request clean while providing detailed instructions about response format and constraints."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a senior technical architect. Provide concise, actionable recommendations in bullet format. Focus only on the most critical factors for the decision. No lengthy explanations.\",\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"Help me choose between microservices and monolithic architecture for a startup with 5 developers building a fintech application\",\n",
+ " },\n",
+ "]\n",
+ "\n",
+ "response = get_chat_completion(messages)\n",
+ "\n",
+ "print(\"SYSTEM PROMPT RESULT:\")\n",
+ "print(response)\n",
+ "print(\"\\n\" + \"=\"*50 + \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### π Tactic 1: Role Prompting\n",
+ "\n",
+ "**Transform AI into specialized domain experts**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**Why This Works:** Role prompting using the `system` parameter is the most powerful way to transform any LLM from a general assistant into your virtual domain expert. The right role enhances accuracy in complex scenarios, tailors the communication tone, and improves focus by keeping the LLM within the bounds of your task's specific requirements.\n",
+ "\n",
+ "*Reference: [Claude Documentation - System Prompts](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/system-prompts)*\n",
+ "\n",
+ "**Generic Example:**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Instead of asking for a generic response, adopt a specific persona\n",
+ "messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a code reviewer. Analyze the provided code and give exactly 3 specific feedback points: 1 about code structure, 1 about naming conventions, and 1 about potential improvements. Format each point as a bullet with the category in brackets.\",\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"def calc(x, y): return x + y if x > 0 and y > 0 else 0\",\n",
+ " },\n",
+ "]\n",
+ "response = get_chat_completion(messages)\n",
+ "\n",
+ "print(\"CODE REVIEWER PERSONA RESULT:\")\n",
+ "print(response)\n",
+ "print(\"\\n\" + \"=\"*50 + \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Example: Software Engineering Personas\n",
+ "\n",
+ "In coding scenarios, this tactic transforms into:\n",
+ "\n",
+ "- **Specific refactoring requirements** (e.g., \"Extract this into separate classes following SOLID principles\")\n",
+ "- **Detailed code review criteria** (e.g., \"Focus on security vulnerabilities and performance bottlenecks\")\n",
+ "- **Precise testing specifications** (e.g., \"Generate unit tests with 90% coverage including edge cases\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The cells below show how different engineering personas provide specialized expertise for code reviews."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Security Engineer Persona\n",
+ "security_messages = [\n",
+ " {\n",
+ " \"role\": \"system\", \n",
+ " \"content\": \"You are a security engineer. Review code for security vulnerabilities and provide specific recommendations.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"\"\"Review this login function:\n",
+ " \n",
+ "def login(username, password):\n",
+ " query = f\"SELECT * FROM users WHERE username = '{username}' AND password = '{password}'\"\n",
+ " result = database.execute(query)\n",
+ " return result\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "security_response = get_chat_completion(security_messages)\n",
+ "print(\"π SECURITY ENGINEER ANALYSIS:\")\n",
+ "print(security_response)\n",
+ "print(\"\\n\" + \"=\"*50 + \"\\n\")\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Performance Engineer Persona\n",
+ "performance_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a performance engineer. Analyze code for efficiency issues and optimization opportunities.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\", \n",
+ " \"content\": \"\"\"Analyze this data processing function:\n",
+ "\n",
+ "def process_data(items):\n",
+ " result = []\n",
+ " for item in items:\n",
+ " if len(item) > 3:\n",
+ " result.append(item.upper())\n",
+ " return result\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "performance_response = get_chat_completion(performance_messages)\n",
+ "print(\"⚡ PERFORMANCE ENGINEER ANALYSIS:\")\n",
+ "print(performance_response)\n",
+ "print(\"\\n\" + \"=\"*50 + \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Checkpoint: Compare the Responses\n",
+ "\n",
+ "Notice how each engineering persona focused on their area of expertise:\n",
+ "\n",
+ "- **Security Engineer**: Identified SQL injection vulnerabilities and authentication issues\n",
+ "- **Performance Engineer**: Suggested list comprehensions and optimization techniques\n",
+ "\n",
+ "✅ **Success!** You've seen how role prompting provides specialized, expert-level analysis.\n",
+ "\n",
+ "#### Practice - Create Your Own Persona\n",
+ "\n",
+ "Now it's your turn! Create a \"QA Engineer\" persona to analyze test coverage by editing the system message below:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# TODO: Fill in the system message to create a QA Engineer role\n",
+ "# Hint: Focus on test cases, edge cases, and error scenarios\n",
+ "qa_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"\"\"Analyze test coverage needed for this function:\n",
+ "\n",
+ "def calculate_discount(price, discount_percent):\n",
+ " if discount_percent > 100:\n",
+ " raise ValueError(\"Discount cannot exceed 100%\")\n",
+ " if price < 0:\n",
+ " raise ValueError(\"Price cannot be negative\")\n",
+ " return price * (1 - discount_percent / 100)\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "qa_response = get_chat_completion(qa_messages)\n",
+ "print(\"🧪 QA ENGINEER ANALYSIS:\")\n",
+ "print(qa_response)\n",
+ "print(\"\\n\" + \"=\"*50 + \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 📋 Tactic 2: Structured Inputs\n",
+ "\n",
+ "**Organize complex scenarios with XML delimiters**\n",
+ "\n",
+ "**Core Principle:** When your prompts involve multiple components like context, instructions, and examples, delimiters (especially XML tags) can be a game-changer. They help AI models parse your prompts more accurately, leading to higher-quality outputs.\n",
+ "\n",
+ "**Why This Works:**\n",
+ "- **Clarity:** Clearly separate different parts of your prompt and ensure your prompt is well structured\n",
+ "- **Accuracy:** Reduce errors caused by AI models misinterpreting parts of your prompt \n",
+ "- **Flexibility:** Easily find, add, remove, or modify parts of your prompt without rewriting everything\n",
+ "- **Parseability:** Having the AI use delimiters in its output makes it easier to extract specific parts of its response\n",
+ "\n",
+ "**Software Engineering Application Preview:** Essential for multi-file refactoring, separating code from requirements, and organizing complex code review scenarios."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's start with a simple example that uses `###` markers to separate the sections of your prompt:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Using delimiters to refactor code\n",
+ "function_code = \"def process_data(items): return [x.upper() for x in items if len(x) > 3]\"\n",
+ "requirements = \"Follow PEP 8 style guide, add type hints, improve readability\"\n",
+ "\n",
+ "delimiter_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a Python code reviewer. Provide only the refactored code without explanations.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"Refactor this function based on the requirements:\n",
+ "\n",
+ "### CODE ###\n",
+ "{function_code}\n",
+ "###\n",
+ "\n",
+ "### REQUIREMENTS ###\n",
+ "{requirements}\n",
+ "###\n",
+ "\n",
+ "Return only the improved function code.\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "delimiter_response = get_chat_completion(delimiter_messages)\n",
+ "print(\"🔧 REFACTORED CODE:\")\n",
+ "print(delimiter_response)\n",
+ "print(\"\\n\" + \"=\"*70 + \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Multi-File Scenarios with XML Delimiters\n",
+ "\n",
+ "One of the most powerful techniques for complex software development tasks is using XML tags and delimiters to structure your prompts. This approach dramatically improves AI accuracy and reduces misinterpretation.\n",
+ "\n",
+ "**Key Benefits:**\n",
+ "- **Clarity**: Clearly separate different parts of your prompt (instructions, context, examples)\n",
+ "- **Accuracy**: Reduce errors caused by AI misinterpreting parts of your prompt\n",
+ "- **Flexibility**: Easily modify specific sections without rewriting everything\n",
+ "- **Parseability**: Structure AI outputs for easier post-processing\n",
+ "\n",
+ "**Best Practices:**\n",
+ "- Use tags like `<instructions>`, `<context>`, and `<example>` to clearly separate different parts\n",
+ "- Be consistent with tag names throughout your prompts\n",
+ "- Nest tags hierarchically: `<outer><inner></inner></outer>` for structured content\n",
+ "- Choose meaningful tag names that describe their content\n",
+ "\n",
+ "**Reference**: Learn more about XML tagging best practices in the [Claude Documentation on XML Tags](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/use-xml-tags)."
+ ]
+ },
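The parseability benefit listed above can be demonstrated outside the model as well: if you ask for output wrapped in known tags, plain string handling can pull each section out for post-processing. A minimal sketch, assuming the tag names and the sample response are illustrative rather than part of any API:

```python
import re

def extract_tagged(text, tag):
    """Return the contents of every <tag>...</tag> block in a response."""
    return [m.strip() for m in re.findall(rf"<{tag}>(.*?)</{tag}>", text, flags=re.DOTALL)]

# A hypothetical model response that follows a requested tag structure
sample_response = """
<quotes>
query = f"SELECT * FROM users WHERE username = '{username}'"
</quotes>
<analysis>
Interpolating user input into SQL strings allows SQL injection.
</analysis>
"""

print(extract_tagged(sample_response, "quotes")[0])
print(extract_tagged(sample_response, "analysis")[0])
```

Because the sections come back as plain strings, downstream tooling (a report generator, a CI comment bot) never has to parse free-form prose.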
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In coding scenarios, delimiters become essential for:\n",
+ "\n",
+ "- **Multi-file refactoring** - Separate different files being modified: `<file name='models.py'>`, `<file name='controllers.py'>`\n",
+ "- **Code vs. requirements** - Distinguish between `<code>` and `<requirements>`\n",
+ "- **Test scenarios** - Organize `<setup>`, `<test_cases>`, `<expected_results>`\n",
+ "- **Pull request reviews** - Structure `<description>`, `<diff>`, `<review_comments>`\n",
+ "\n",
+ "The cell below demonstrates multi-file refactoring using XML delimiters to organize complex codebases."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Multi-file analysis with XML delimiters\n",
+ "multifile_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a software architect. Analyze the provided files and identify architectural concerns.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"\"\"\n",
+ "\n",
+ "class User:\n",
+ " def __init__(self, email, password):\n",
+ " self.email = email\n",
+ " self.password = password\n",
+ " \n",
+ " def save(self):\n",
+ " # Save to database\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "\n",
+ "from flask import Flask, request\n",
+ "app = Flask(__name__)\n",
+ "\n",
+ "@app.route('/register', methods=['POST'])\n",
+ "def register():\n",
+ " email = request.form['email']\n",
+ " password = request.form['password']\n",
+ " user = User(email, password)\n",
+ " user.save()\n",
+ " return \"User registered\"\n",
+ "\n",
+ "\n",
+ "\n",
+ "- Follow separation of concerns\n",
+ "- Add input validation\n",
+ "- Implement proper error handling\n",
+ "- Use dependency injection\n",
+ "\n",
+ "\n",
+ "Provide architectural recommendations for improving this code structure.\n",
+ "\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "multifile_response = get_chat_completion(multifile_messages)\n",
+ "print(\"🏗️ ARCHITECTURAL ANALYSIS:\")\n",
+ "print(multifile_response)\n",
+ "print(\"\\n\" + \"=\"*70 + \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "\n",
+ "> ### ☕ Suggested Break Point #1\n",
+ ">\n",
+ "> *~30 minutes elapsed*\n",
+ ">\n",
+ "> **✅ Completed:**\n",
+ "> - Tactic 0: Write Clear Instructions\n",
+ "> - Tactic 1: Role Prompting (Transform AI into specialized experts)\n",
+ "> - Tactic 2: Structured Inputs (Organize with XML delimiters)\n",
+ ">\n",
+ "> 💡 This is a natural stopping point. Feel free to take a break and return later!\n",
+ "\n",
+ "---\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 📚 Tactic 3: Few-Shot Examples\n",
+ "\n",
+ "**Teach AI your preferred styles and standards**\n",
+ "\n",
+ "**Core Principle:** Examples are your secret weapon for getting AI models to generate exactly what you need. By providing a few well-crafted examples in your prompt, you can dramatically improve the accuracy, consistency, and quality of outputs. This technique, known as few-shot or multishot prompting, is particularly effective for tasks that require structured outputs or adherence to specific formats.\n",
+ "\n",
+ "**Why This Works:**\n",
+ "- **Accuracy:** Examples reduce misinterpretation of instructions\n",
+ "- **Consistency:** Examples enforce uniform structure and style across outputs\n",
+ "- **Performance:** Well-chosen examples boost AI's ability to handle complex tasks\n",
+ "\n",
+ "**Crafting Effective Examples:**\n",
+ "- **Relevant:** Your examples should mirror your actual use case\n",
+ "- **Diverse:** Cover edge cases and vary enough to avoid unintended patterns\n",
+ "- **Clear:** Wrap examples in `<example>` tags (if multiple, nest within `<examples>` tags)\n",
+ "- **Quantity:** Include 3-5 diverse examples for best results (more examples = better performance)\n",
+ "\n",
+ "**Software Engineering Application Preview:** Essential for establishing coding styles, documentation formats, test case patterns, and consistent API response structures across your development workflow.\n",
+ "\n",
+ "*Reference: [Claude Documentation - Multishot Prompting](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/multishot-prompting)*"
+ ]
+ },
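When you assemble prompts in code, the example wrapping described above can be built programmatically. A minimal sketch, assuming a helper name (`format_few_shot`) that is purely illustrative:

```python
def format_few_shot(pairs):
    """Wrap (input, output) pairs in <example> tags nested inside <examples>."""
    blocks = [f"<example>\nInput: {inp}\nOutput: {out}\n</example>" for inp, out in pairs]
    return "<examples>\n" + "\n".join(blocks) + "\n</examples>"

prompt_examples = format_few_shot([
    ("Explain Big O notation for O(1).",
     "O(1) means constant time - the algorithm takes the same amount of time regardless of input size."),
    ("Explain Big O notation for O(n).",
     "O(n) means linear time - the algorithm's runtime grows proportionally with the input size."),
])
print(prompt_examples)
```

The returned string can then be concatenated into a user or system message, keeping the examples consistent across every prompt that uses them.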
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's teach the AI to explain technical concepts in a specific, consistent style:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Few-shot examples for consistent explanations\n",
+ "few_shot_messages = [\n",
+ " {\"role\": \"system\", \"content\": \"Answer in a consistent style using the examples provided.\"},\n",
+ " \n",
+ " # Example 1\n",
+ " {\"role\": \"user\", \"content\": \"Explain Big O notation for O(1).\"},\n",
+ " {\"role\": \"assistant\", \"content\": \"O(1) means constant time - the algorithm takes the same amount of time regardless of input size.\"},\n",
+ " \n",
+ " # Example 2 \n",
+ " {\"role\": \"user\", \"content\": \"Explain Big O notation for O(n).\"},\n",
+ " {\"role\": \"assistant\", \"content\": \"O(n) means linear time - the algorithm's runtime grows proportionally with the input size.\"},\n",
+ " \n",
+ " # New question following the established pattern\n",
+ " {\"role\": \"user\", \"content\": \"Explain Big O notation for O(log n).\"}\n",
+ "]\n",
+ "\n",
+ "few_shot_response = get_chat_completion(few_shot_messages)\n",
+ "print(\"📚 CONSISTENT STYLE RESPONSE:\")\n",
+ "print(few_shot_response)\n",
+ "print(\"\\n\" + \"=\"*70 + \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "🎯 **Perfect!** Notice how the AI learned the exact format and style from the examples and applied it consistently.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### ⛓️ Tactic 4: Chain-of-Thought Reasoning\n",
+ "\n",
+ "**Guide systematic step-by-step reasoning**\n",
+ "\n",
+ "**Core Principle:** When faced with complex tasks like research, analysis, or problem-solving, giving AI models space to think can dramatically improve performance. This technique, known as chain of thought (CoT) prompting, encourages the AI to break down problems step-by-step, leading to more accurate and nuanced outputs.\n",
+ "\n",
+ "**Why This Works:**\n",
+ "- **Accuracy:** Stepping through problems reduces errors, especially in math, logic, analysis, or generally complex tasks\n",
+ "- **Coherence:** Structured thinking leads to more cohesive, well-organized responses\n",
+ "- **Debugging:** Seeing the AI's thought process helps you pinpoint where prompts may be unclear\n",
+ "\n",
+ "**When to Use CoT:**\n",
+ "- Use for tasks that a human would need to think through\n",
+ "- Examples: complex math, multi-step analysis, writing complex documents, decisions with many factors\n",
+ "- **Note:** Increased output length may impact latency, so use judiciously\n",
+ "\n",
+ "**How to Implement CoT (from least to most complex):**\n",
+ "\n",
+ "1. **Basic prompt:** Include \"Think step-by-step\" in your prompt\n",
+ "2. **Guided prompt:** Outline specific steps for the AI to follow in its thinking process\n",
+ "3. **Structured prompt:** Use XML tags like `<thinking>` and `<answer>` to separate reasoning from the final answer\n",
+ "\n",
+ "**Important:** Always have the AI output its thinking. Without outputting its thought process, no thinking occurs!\n",
+ "\n",
+ "**Software Engineering Application Preview:** Critical for test generation, code reviews, debugging workflows, architecture decisions, and security analysis where methodical analysis prevents missed issues.\n",
+ "\n",
+ "*Reference: [Claude Documentation - Chain of Thought](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/chain-of-thought)*\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Tactic: Give Models Time to Work Before Judging\n",
+ "\n",
+ "**Critical Tactic:** When asking AI to evaluate solutions, code, or designs, instruct it to solve the problem independently *before* judging the provided solution. This prevents premature agreement and ensures thorough analysis.\n",
+ "\n",
+ "**Why This Matters:** AI models can sometimes be too agreeable or overlook subtle issues when they jump straight to evaluation. By forcing them to work through the problem first, they develop genuine understanding and can provide more accurate assessments.\n",
+ "\n",
+ "**The Principle:** *\"Don't decide if the solution is correct until you have worked through the problem yourself.\"*\n",
+ "\n",
+ "Let's see this with a code review scenario:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Example: Forcing AI to think before judging\n",
+ "problem = \"\"\"\n",
+ "Write a function that checks if a string is a palindrome.\n",
+ "The function should ignore spaces, punctuation, and case.\n",
+ "\"\"\"\n",
+ "\n",
+ "student_solution = \"\"\"\n",
+ "def is_palindrome(s):\n",
+ " cleaned = ''.join(c.lower() for c in s if c.isalnum())\n",
+ " return cleaned == cleaned[::-1]\n",
+ "\"\"\"\n",
+ "\n",
+ "# BAD: Asking AI to judge immediately (may agree too quickly)\n",
+ "print(\"=\" * 70)\n",
+ "print(\"BAD APPROACH: Immediate Judgment\")\n",
+ "print(\"=\" * 70)\n",
+ "\n",
+ "bad_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a code reviewer.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"Problem: {problem}\n",
+ "\n",
+ "Student's solution:\n",
+ "{student_solution}\n",
+ "\n",
+ "Is this solution correct?\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "bad_response = get_chat_completion(bad_messages)\n",
+ "print(bad_response)\n",
+ "\n",
+ "# GOOD: Force AI to solve it first, then compare\n",
+ "print(\"=\" * 70)\n",
+ "print(\"GOOD APPROACH: Work Through It First\")\n",
+ "print(\"=\" * 70)\n",
+ "\n",
+ "good_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a code reviewer with a methodical approach.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"Problem: {problem}\n",
+ "\n",
+ "Student's solution:\n",
+ "{student_solution}\n",
+ "\n",
+ "Before evaluating the student's solution, follow these steps:\n",
+ "1. In <my_solution> tags, write your own implementation of the palindrome checker\n",
+ "2. In <test_cases> tags, create comprehensive test cases including edge cases\n",
+ "3. In <comparison> tags, compare the student's solution to yours and test both\n",
+ "4. In <verdict> tags, provide your final judgment with specific reasoning\n",
+ "\n",
+ "Important: Don't judge the student's solution until you've solved the problem yourself.\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "good_response = get_chat_completion(good_messages)\n",
+ "print(good_response)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**🔑 Key Takeaway: Give Models Time to Think**\n",
+ "\n",
+ "Notice the difference:\n",
+ "- **Bad approach:** The AI might agree with the student too quickly without thorough analysis\n",
+ "- **Good approach:** By forcing the AI to solve the problem first, it:\n",
+ " - Develops its own understanding of the requirements\n",
+ " - Creates comprehensive test cases independently\n",
+ " - Can objectively compare two solutions\n",
+ " - Catches subtle bugs or edge cases it might have missed\n",
+ "\n",
+ "**Real-World Applications:**\n",
+ "- **Code Review:** Make AI implement a solution before reviewing pull requests\n",
+ "- **Bug Analysis:** Have AI reproduce the bug before suggesting fixes\n",
+ "- **Architecture Review:** Force AI to design its own solution before critiquing proposals\n",
+ "- **Test Review:** Make AI write tests before evaluating test coverage\n",
+ "\n",
+ "**The Golden Rule:** *\"Don't let the AI judge until it has worked through the problem itself.\"*\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Systematic Code Analysis using Chain of Thoughts\n",
+ "\n",
+ "Now let's implement step-by-step reasoning for complex code analysis tasks:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Chain-of-thought for systematic code analysis\n",
+ "system_message = \"\"\"Use the following step-by-step instructions to analyze code:\n",
+ "\n",
+ "Step 1 - Count the number of functions in the code snippet with a prefix that says 'Function Count: '\n",
+ "Step 2 - List each function name with its line number with a prefix that says 'Function List: '\n",
+ "Step 3 - Identify any functions that are longer than 10 lines with a prefix that says 'Long Functions: '\n",
+ "Step 4 - Provide an overall assessment with a prefix that says 'Assessment: '\"\"\"\n",
+ "\n",
+ "user_message = \"\"\"\n",
+ "def calculate_tax(income, deductions):\n",
+ " taxable_income = income - deductions\n",
+ " if taxable_income <= 0:\n",
+ " return 0\n",
+ " elif taxable_income <= 50000:\n",
+ " return taxable_income * 0.1\n",
+ " else:\n",
+ " return 50000 * 0.1 + (taxable_income - 50000) * 0.2\n",
+ "\n",
+ "def format_currency(amount):\n",
+ " return f\"${amount:,.2f}\"\n",
+ "\n",
+ "def generate_report(name, income, deductions):\n",
+ " tax = calculate_tax(income, deductions)\n",
+ " net_income = income - tax\n",
+ " \n",
+ " print(f\"Tax Report for {name}\")\n",
+ " print(f\"Gross Income: {format_currency(income)}\")\n",
+ " print(f\"Deductions: {format_currency(deductions)}\")\n",
+ " print(f\"Tax Owed: {format_currency(tax)}\")\n",
+ " print(f\"Net Income: {format_currency(net_income)}\")\n",
+ "\"\"\"\n",
+ "\n",
+ "chain_messages = [\n",
+ " {\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": user_message}\n",
+ "]\n",
+ "\n",
+ "chain_response = get_chat_completion(chain_messages)\n",
+ "print(\"🔗 CHAIN-OF-THOUGHT ANALYSIS:\")\n",
+ "print(chain_response)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "🎉 **Excellent!** The AI followed each step methodically, providing structured, comprehensive analysis.\n",
+ "\n",
+ "#### Practice Exercise: Combine All Techniques\n",
+ "\n",
+ "Now let's put everything together in a real-world scenario that combines role prompting, delimiters, and chain-of-thought:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Comprehensive example combining all techniques\n",
+ "comprehensive_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"\"\"You are a senior software engineer conducting a comprehensive code review.\n",
+ "\n",
+ "Follow this systematic process:\n",
+ "Step 1 - Security Analysis: Identify potential security vulnerabilities\n",
+ "Step 2 - Performance Review: Analyze efficiency and optimization opportunities \n",
+ "Step 3 - Code Quality: Evaluate readability, maintainability, and best practices\n",
+ "Step 4 - Recommendations: Provide specific, prioritized improvement suggestions\n",
+ "\n",
+ "Format each step clearly with the step name as a header.\"\"\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"\"\"\n",
+ "\n",
+ "from flask import Flask, request, jsonify\n",
+ "import sqlite3\n",
+ "\n",
+ "app = Flask(__name__)\n",
+ "\n",
+ "@app.route('/user/')\n",
+ "def get_user(user_id):\n",
+ " conn = sqlite3.connect('users.db')\n",
+ " cursor = conn.cursor()\n",
+ " cursor.execute(f\"SELECT * FROM users WHERE id = {user_id}\")\n",
+ " user = cursor.fetchone()\n",
+ " conn.close()\n",
+ " \n",
+ " if user:\n",
+ " return jsonify({\n",
+ " \"id\": user[0],\n",
+ " \"name\": user[1], \n",
+ " \"email\": user[2]\n",
+ " })\n",
+ " else:\n",
+ " return jsonify({\"error\": \"User not found\"}), 404\n",
+ "\n",
+ "\n",
+ "\n",
+ "This is a user lookup endpoint for a web application that serves user profiles.\n",
+ "The application handles 1000+ requests per minute during peak hours.\n",
+ "\n",
+ "\n",
+ "Perform a comprehensive code review following the systematic process.\n",
+ "\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "comprehensive_response = get_chat_completion(comprehensive_messages)\n",
+ "print(\"🔍 COMPREHENSIVE CODE REVIEW:\")\n",
+ "print(comprehensive_response)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "\n",
+ "> ### 🍵 Suggested Break Point #2\n",
+ ">\n",
+ "> *~60 minutes elapsed • Halfway through!*\n",
+ ">\n",
+ "> **✅ Completed (Tactics 0-4):**\n",
+ "> - Clear Instructions & Role Prompting\n",
+ "> - Structured Inputs with XML tags\n",
+ "> - Few-Shot Examples for consistent styles\n",
+ "> - Chain-of-Thought for systematic reasoning\n",
+ ">\n",
+ "> 🎯 You've mastered 5 out of 8 tactics!\n",
+ ">\n",
+ "> **⏭️ Coming Next:**\n",
+ "> - Tactic 5: Reference Citations (Ground responses in docs)\n",
+ "> - Tactic 6: Prompt Chaining (Break complex tasks into steps)\n",
+ ">\n",
+ "> ⏱️ Next section: ~30 minutes\n",
+ ">\n",
+ "> 🔖 **Bookmark to resume:** \"Tactic 5: Reference Citations\"\n",
+ ">\n",
+ "> 💡 Great progress! Consider taking a break before continuing with the final tactics.\n",
+ "\n",
+ "---\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 📑 Tactic 5: Reference Citations\n",
+ "\n",
+ "**Ground responses in actual documentation to reduce hallucinations**\n",
+ "\n",
+ "**Core Principle:** When working with long documents or multiple reference materials, asking AI models to quote relevant parts of the documents first before carrying out tasks helps them cut through the \"noise\" and focus on pertinent information. This technique is especially powerful when working with extended context windows.\n",
+ "\n",
+ "**Why This Works:**\n",
+ "- The AI identifies and focuses on relevant information before generating responses\n",
+ "- Citations make outputs verifiable and trustworthy\n",
+ "- Reduces hallucination by grounding responses in actual source material\n",
+ "- Makes it easy to trace conclusions back to specific code or documentation sections\n",
+ "\n",
+ "**Best Practices for Long Context:**\n",
+ "- **Put longform data at the top:** Place long documents (~20K+ tokens) near the top of your prompt, above queries and instructions (can improve response quality by up to 30%)\n",
+ "- **Structure with XML tags:** Use `<document>`, `<document_contents>`, and `<source>` tags to organize multiple documents\n",
+ "- **Request quotes first:** Ask the AI to extract relevant quotes in `<quotes>` tags before generating the final response\n",
+ "\n",
+ "**Software Engineering Application Preview:** Critical for code review with large codebases, documentation generation from source files, security audit reports, and analyzing API documentation.\n",
+ "\n",
+ "*Reference: [Claude Documentation - Long Context Tips](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/long-context-tips)*\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Example 1: Code Review with Multiple Files\n",
+ "\n",
+ "Let's demonstrate how to structure multiple code files and ask the AI to extract relevant quotes before providing analysis:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Example: Multi-file code review with quote extraction\n",
+ "auth_service = \"\"\"\n",
+ "class AuthService:\n",
+ " def __init__(self, db_connection):\n",
+ " self.db = db_connection\n",
+ " \n",
+ " def authenticate_user(self, username, password):\n",
+ " # TODO: Add password hashing\n",
+ " query = f\"SELECT * FROM users WHERE username='{username}' AND password='{password}'\"\n",
+ " result = self.db.execute(query)\n",
+ " return result.fetchone() is not None\n",
+ " \n",
+ " def create_session(self, user_id):\n",
+ " session_id = str(uuid.uuid4())\n",
+ " # Session expires in 24 hours\n",
+ " expiry = datetime.now() + timedelta(hours=24)\n",
+ " self.db.execute(f\"INSERT INTO sessions VALUES ('{session_id}', {user_id}, '{expiry}')\")\n",
+ " return session_id\n",
+ "\"\"\"\n",
+ "\n",
+ "user_controller = \"\"\"\n",
+ "from flask import Flask, request, jsonify\n",
+ "from auth_service import AuthService\n",
+ "\n",
+ "app = Flask(__name__)\n",
+ "auth = AuthService(db_connection)\n",
+ "\n",
+ "@app.route('/login', methods=['POST'])\n",
+ "def login():\n",
+ " username = request.json.get('username')\n",
+ " password = request.json.get('password')\n",
+ " \n",
+ " if auth.authenticate_user(username, password):\n",
+ " user_id = get_user_id(username)\n",
+ " session_id = auth.create_session(user_id)\n",
+ " return jsonify({'session_id': session_id, 'status': 'success'})\n",
+ " else:\n",
+ " return jsonify({'status': 'failed'}), 401\n",
+ "\"\"\"\n",
+ "\n",
+ "# Structure the prompt with documents at the top, query at the bottom\n",
+ "messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a senior security engineer reviewing code for vulnerabilities.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"\n",
+ "\n",
+ "auth_service.py\n",
+ "\n",
+ "{auth_service}\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "user_controller.py\n",
+ "\n",
+ "{user_controller}\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "Review the authentication code above for security vulnerabilities. \n",
+ "\n",
+ "First, extract relevant code quotes that demonstrate security issues and place them in tags with the source file indicated.\n",
+ "\n",
+ "Then, provide your security analysis in tags, explaining each vulnerability and its severity.\n",
+ "\n",
+ "Finally, provide specific remediation recommendations in tags.\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "response = get_chat_completion(messages)\n",
+ "print(\"🔍 SECURITY REVIEW WITH CITATIONS:\")\n",
+ "print(response)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Example 2: API Documentation Analysis\n",
+ "\n",
+ "Now let's analyze API documentation to extract specific information with citations:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Example: Analyzing API documentation with quote grounding\n",
+ "api_docs = \"\"\"\n",
+ "# Payment API Documentation\n",
+ "\n",
+ "## Authentication\n",
+ "All API requests require an API key passed in the `X-API-Key` header.\n",
+ "Rate limit: 1000 requests per hour per API key.\n",
+ "\n",
+ "## Create Payment\n",
+ "POST /api/v2/payments\n",
+ "\n",
+ "Creates a new payment transaction.\n",
+ "\n",
+ "**Request Body:**\n",
+ "- amount (required, decimal): Payment amount in USD\n",
+ "- currency (optional, string): Currency code, defaults to \"USD\"\n",
+ "- customer_id (required, string): Customer identifier\n",
+ "- payment_method (required, string): One of: \"card\", \"bank\", \"wallet\"\n",
+ "- metadata (optional, object): Additional key-value pairs\n",
+ "\n",
+ "**Rate Limit:** 100 requests per minute\n",
+ "\n",
+ "**Response:**\n",
+ "{\n",
+ " \"payment_id\": \"pay_abc123\",\n",
+ " \"status\": \"pending\",\n",
+ " \"amount\": 99.99,\n",
+ " \"created_at\": \"2024-01-15T10:30:00Z\"\n",
+ "}\n",
+ "\n",
+ "## Retrieve Payment\n",
+ "GET /api/v2/payments/{payment_id}\n",
+ "\n",
+ "Retrieves details of a specific payment.\n",
+ "\n",
+ "**Security Note:** Only returns payments belonging to the authenticated API key's account.\n",
+ "\n",
+ "**Response Codes:**\n",
+ "- 200: Success\n",
+ "- 404: Payment not found\n",
+ "- 401: Invalid API key\n",
+ "\"\"\"\n",
+ "\n",
+ "integration_question = \"\"\"\n",
+ "I need to integrate payment processing into my e-commerce checkout flow.\n",
+ "The checkout needs to:\n",
+ "1. Create a payment when user clicks \"Pay Now\"\n",
+ "2. Handle USD and EUR currencies\n",
+ "3. Store order metadata with the payment\n",
+ "4. Check payment status after creation\n",
+ "\n",
+ "What do I need to know from the API documentation?\n",
+ "\"\"\"\n",
+ "\n",
+ "messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a technical integration specialist helping developers implement APIs.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"\n",
+ "\n",
+ "payment_api_docs.md\n",
+ "\n",
+ "{api_docs}\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "{integration_question}\n",
+ "\n",
+ "\n",
+ "First, find and quote the relevant sections from the API documentation that address the integration requirements. Place these quotes in tags with the section name indicated.\n",
+ "\n",
+ "Then, provide a step-by-step integration guide in tags that references the quoted documentation.\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "response = get_chat_completion(messages)\n",
+ "print(\"📖 API INTEGRATION GUIDE WITH CITATIONS:\")\n",
+ "print(response)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Key Takeaways: Reference Citations\n",
+ "\n",
+ "**Best Practices Demonstrated:**\n",
+ "1. **Document Structure:** Used `<documents>` and `<document>` tags with `<source>` and `<document_contents>` metadata\n",
+ "2. **Documents First:** Placed all reference materials at the top of the prompt, before the query\n",
+ "3. **Quote Extraction:** Asked AI to extract relevant quotes first, then perform analysis\n",
+ "4. **Structured Output:** Used XML tags like `<quotes>`, `<analysis>`, and `<recommendations>` to organize responses\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 🔗 Tactic 6: Prompt Chaining\n",
+ "\n",
+ "**Break complex tasks into sequential workflows**\n",
+ "\n",
+ "**Core Principle:** When working with complex tasks, AI models can sometimes drop the ball if you try to handle everything in a single prompt. Prompt chaining breaks down complex tasks into smaller, manageable subtasks, where each subtask gets the AI's full attention.\n",
+ "\n",
+ "**Why Chain Prompts:**\n",
+ "- **Accuracy:** Each subtask gets full attention, reducing errors\n",
+ "- **Clarity:** Simpler subtasks mean clearer instructions and outputs\n",
+ "- **Traceability:** Easily pinpoint and fix issues in your prompt chain\n",
+ "- **Focus:** Each link in the chain gets the AI's complete concentration\n",
+ "\n",
+ "**When to Chain Prompts:**\n",
+ "Use prompt chaining for multi-step tasks like:\n",
+ "- Research synthesis and document analysis\n",
+ "- Iterative content creation\n",
+ "- Multiple transformations or citations\n",
+ "- Code generation → Review → Refactoring workflows\n",
+ "\n",
+ "**How to Chain Prompts:**\n",
+ "1. **Identify subtasks:** Break your task into distinct, sequential steps\n",
+ "2. **Structure with XML:** Use XML tags to pass outputs between prompts\n",
+ "3. **Single-task goal:** Each subtask should have one clear objective\n",
+ "4. **Iterate:** Refine subtasks based on performance\n",
+ "\n",
+ "**Common Software Development Workflows:**\n",
+ "- **Code Review Pipeline:** Extract code → Analyze issues → Propose fixes → Generate tests\n",
+ "- **Documentation Generation:** Analyze code → Extract docstrings → Format → Review\n",
+ "- **Refactoring Workflow:** Identify patterns → Suggest improvements → Generate refactored code → Validate\n",
+ "- **Testing Pipeline:** Analyze function → Generate test cases → Create assertions → Review coverage\n",
+ "- **Debugging Chain:** Reproduce issue → Analyze root cause → Suggest fixes → Verify solution\n",
+ "\n",
+ "**Debugging Tip:** If the AI misses a step or performs poorly, isolate that step in its own prompt. This lets you fine-tune problematic steps without redoing the entire task.\n",
+ "\n",
+ "**Software Engineering Application Preview:** Essential for complex code reviews, multi-stage refactoring, comprehensive test generation, and architectural analysis where breaking down the task ensures nothing is missed.\n",
+ "\n",
+ "*Reference: [Claude Documentation - Chain Complex Prompts](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/chain-prompts)*\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Example 1: Code Review with Prompt Chaining\n",
+ "\n",
+ "Let's demonstrate a 3-step prompt chain for comprehensive code review:\n",
+ "1. **Step 1:** Analyze code for issues\n",
+ "2. **Step 2:** Review the analysis for completeness\n",
+ "3. **Step 3:** Generate final recommendations with fixes\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Prompt Chain Example: Code Review Pipeline\n",
+ "code_to_review = \"\"\"\n",
+ "def process_user_data(user_input):\n",
+ " # Process user registration data\n",
+ " data = eval(user_input) # Parse input\n",
+ " \n",
+ " username = data['username']\n",
+ " email = data['email']\n",
+ " password = data['password']\n",
+ " \n",
+ " # Save to database\n",
+ " query = f\"INSERT INTO users (username, email, password) VALUES ('{username}', '{email}', '{password}')\"\n",
+ " db.execute(query)\n",
+ " \n",
+ " # Send welcome email\n",
+ " send_email(email, f\"Welcome {username}!\")\n",
+ " \n",
+ " return {\"status\": \"success\", \"user\": username}\n",
+ "\"\"\"\n",
+ "\n",
+ "# STEP 1: Analyze code for issues\n",
+ "print(\"=\" * 60)\n",
+ "print(\"STEP 1: Initial Code Analysis\")\n",
+ "print(\"=\" * 60)\n",
+ "\n",
+ "step1_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a senior code reviewer specializing in security and best practices.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"Analyze this Python function for issues:\n",
+ "\n",
+ "<code>\n",
+ "{code_to_review}\n",
+ "</code>\n",
+ "\n",
+ "Identify all security vulnerabilities, code quality issues, and potential bugs.\n",
+ "Provide your analysis in <analysis> tags with specific line references.\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "analysis = get_chat_completion(step1_messages)\n",
+ "print(analysis)\n",
+ "print(\"\\n\")\n",
+ "\n",
+ "# STEP 2: Review the analysis for completeness\n",
+ "print(\"=\" * 60)\n",
+ "print(\"STEP 2: Review Analysis for Completeness\")\n",
+ "print(\"=\" * 60)\n",
+ "\n",
+ "step2_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a principal engineer reviewing a code analysis. Check for completeness and accuracy.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"Here is a code analysis from a code reviewer:\n",
+ "\n",
+ "<code>\n",
+ "{code_to_review}\n",
+ "</code>\n",
+ "\n",
+ "<analysis>\n",
+ "{analysis}\n",
+ "</analysis>\n",
+ "\n",
+ "Review this analysis and:\n",
+ "1. Verify all issues are correctly identified\n",
+ "2. Check if any critical issues were missed\n",
+ "3. Rate the severity of each issue (Critical/High/Medium/Low)\n",
+ "\n",
+ "Provide feedback in <review> tags.\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "review = get_chat_completion(step2_messages)\n",
+ "print(review)\n",
+ "print(\"\\n\")\n",
+ "\n",
+ "# STEP 3: Generate final recommendations with code fixes\n",
+ "print(\"=\" * 60)\n",
+ "print(\"STEP 3: Final Recommendations and Code Fixes\")\n",
+ "print(\"=\" * 60)\n",
+ "\n",
+ "step3_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a senior developer providing actionable solutions.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"Based on the code analysis and review, provide final recommendations:\n",
+ "\n",
+ "<code>\n",
+ "{code_to_review}\n",
+ "</code>\n",
+ "\n",
+ "<analysis>\n",
+ "{analysis}\n",
+ "</analysis>\n",
+ "\n",
+ "<review>\n",
+ "{review}\n",
+ "</review>\n",
+ "\n",
+ "Provide:\n",
+ "1. A prioritized list of fixes in <priorities> tags\n",
+ "2. The complete refactored code in <refactored_code> tags\n",
+ "3. Brief explanation of key changes in <explanation> tags\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "final_recommendations = get_chat_completion(step3_messages)\n",
+ "print(final_recommendations)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Example 2: Test Generation with Prompt Chaining\n",
+ "\n",
+ "Now let's create a chain for comprehensive test generation:\n",
+ "1. **Step 1:** Analyze function to identify test scenarios\n",
+ "2. **Step 2:** Generate test cases based on scenarios \n",
+ "3. **Step 3:** Review and enhance test coverage\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Prompt Chain Example: Test Generation Pipeline\n",
+ "function_to_test = \"\"\"\n",
+ "def calculate_discount(price, discount_percent, customer_tier='standard'):\n",
+ " \\\"\\\"\\\"\n",
+ " Calculate final price after applying discount.\n",
+ " \n",
+ " Args:\n",
+ " price: Original price (must be positive)\n",
+ " discount_percent: Discount percentage (0-100)\n",
+ " customer_tier: Customer tier ('standard', 'premium', 'vip')\n",
+ " \n",
+ " Returns:\n",
+ " Final price after discount and tier bonus\n",
+ " \\\"\\\"\\\"\n",
+ " if price < 0:\n",
+ " raise ValueError(\"Price cannot be negative\")\n",
+ " \n",
+ " if discount_percent < 0 or discount_percent > 100:\n",
+ " raise ValueError(\"Discount must be between 0 and 100\")\n",
+ " \n",
+ " # Apply base discount\n",
+ " discounted_price = price * (1 - discount_percent / 100)\n",
+ " \n",
+ " # Apply tier bonus\n",
+ " tier_bonuses = {'standard': 0, 'premium': 5, 'vip': 10}\n",
+ " if customer_tier not in tier_bonuses:\n",
+ " raise ValueError(f\"Invalid tier: {customer_tier}\")\n",
+ " \n",
+ " tier_bonus = tier_bonuses[customer_tier]\n",
+ " final_price = discounted_price * (1 - tier_bonus / 100)\n",
+ " \n",
+ " return round(final_price, 2)\n",
+ "\"\"\"\n",
+ "\n",
+ "# STEP 1: Analyze function and identify test scenarios\n",
+ "print(\"=\" * 60)\n",
+ "print(\"STEP 1: Analyze Function and Identify Test Scenarios\")\n",
+ "print(\"=\" * 60)\n",
+ "\n",
+ "step1_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a QA engineer analyzing code for test coverage.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"Analyze this function and identify all test scenarios needed:\n",
+ "\n",
+ "<function>\n",
+ "{function_to_test}\n",
+ "</function>\n",
+ "\n",
+ "Identify and categorize test scenarios:\n",
+ "1. Happy path scenarios\n",
+ "2. Edge cases\n",
+ "3. Error cases\n",
+ "4. Boundary conditions\n",
+ "\n",
+ "Provide your analysis in <test_scenarios> tags.\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "test_scenarios = get_chat_completion(step1_messages)\n",
+ "print(test_scenarios)\n",
+ "print(\"\\n\")\n",
+ "\n",
+ "# STEP 2: Generate test cases based on scenarios\n",
+ "print(\"=\" * 60)\n",
+ "print(\"STEP 2: Generate Test Cases\")\n",
+ "print(\"=\" * 60)\n",
+ "\n",
+ "step2_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a test automation engineer. Write pytest test cases.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"Based on these test scenarios, generate pytest test cases:\n",
+ "\n",
+ "<function>\n",
+ "{function_to_test}\n",
+ "</function>\n",
+ "\n",
+ "<test_scenarios>\n",
+ "{test_scenarios}\n",
+ "</test_scenarios>\n",
+ "\n",
+ "Generate complete, executable pytest test cases in <test_code> tags.\n",
+ "Include assertions, test data, and descriptive test names.\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "test_code = get_chat_completion(step2_messages)\n",
+ "print(test_code)\n",
+ "print(\"\\n\")\n",
+ "\n",
+ "# STEP 3: Review and enhance test coverage\n",
+ "print(\"=\" * 60)\n",
+ "print(\"STEP 3: Review Test Coverage and Suggest Enhancements\")\n",
+ "print(\"=\" * 60)\n",
+ "\n",
+ "step3_messages = [\n",
+ " {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": \"You are a principal QA engineer reviewing test coverage.\"\n",
+ " },\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": f\"\"\"Review this test suite for completeness:\n",
+ "\n",
+ "<function>\n",
+ "{function_to_test}\n",
+ "</function>\n",
+ "\n",
+ "<test_scenarios>\n",
+ "{test_scenarios}\n",
+ "</test_scenarios>\n",
+ "\n",
+ "<test_code>\n",
+ "{test_code}\n",
+ "</test_code>\n",
+ "\n",
+ "Evaluate:\n",
+ "1. Are all scenarios covered?\n",
+ "2. Are there any missing edge cases?\n",
+ "3. Is the test data comprehensive?\n",
+ "4. Estimate coverage percentage\n",
+ "\n",
+ "Provide:\n",
+ "- Coverage assessment in <assessment> tags\n",
+ "- Any additional test cases needed in <additional_tests> tags\"\"\"\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "coverage_review = get_chat_completion(step3_messages)\n",
+ "print(coverage_review)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Key Takeaways: Prompt Chaining\n",
+ "\n",
+ "**What We Demonstrated:**\n",
+ "\n",
+ "**Example 1: Code Review Chain**\n",
+ "- **Step 1:** Initial analysis identifies security vulnerabilities and code quality issues\n",
+ "- **Step 2:** Principal engineer validates the analysis and adds severity ratings\n",
+ "- **Step 3:** Generates actionable fixes and refactored code\n",
+ "\n",
+ "**Example 2: Test Generation Chain**\n",
+ "- **Step 1:** Analyzes function to identify all necessary test scenarios\n",
+ "- **Step 2:** Generates complete pytest test cases with proper structure\n",
+ "- **Step 3:** Reviews coverage and suggests additional tests for completeness\n",
+ "\n",
+ "**Why Chaining Works Better Than Single Prompts:**\n",
+ "- **Focused attention:** Each step handles one specific task without distraction\n",
+ "- **Quality control:** Later steps can review and enhance earlier outputs\n",
+ "- **Iterative refinement:** Each link improves the overall result\n",
+ "- **Easier debugging:** Problems can be isolated to specific steps\n",
+ "\n",
+ "**Best Practices Demonstrated:**\n",
+ "1. **Pass context forward:** Each step receives relevant outputs from previous steps\n",
+ "2. **Use XML tags:** Structured tags (`<code>`, `<analysis>`, `<review>`) organize data flow\n",
+ "3. **Clear objectives:** Each step has one specific, measurable goal\n",
+ "4. **Role specialization:** Different expert personas for different steps\n",
+ "\n",
+ "**Real-World Applications:**\n",
+ "- **Multi-stage refactoring:** Analyze → Plan → Refactor → Validate → Document\n",
+ "- **Comprehensive security audits:** Scan → Analyze → Prioritize → Generate fixes → Verify\n",
+ "- **API development:** Design schema → Generate code → Create tests → Write docs → Review\n",
+ "- **Database migrations:** Analyze schema → Generate migration → Create rollback → Test → Deploy\n",
+ "- **CI/CD pipeline generation:** Analyze project → Design workflow → Generate config → Add tests → Optimize\n",
+ "\n",
+ "**Pro Tip:** You can also create **self-correction chains** where the AI reviews its own work! Just pass the output back with a review prompt to catch errors and refine results.\n"
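A minimal sketch of such a self-correction chain, assuming the notebook's `get_chat_completion` helper (stubbed below so the sketch runs standalone; swap in the real helper inside the notebook):

```python
# Self-correction chain sketch. `get_chat_completion` is assumed to be the
# notebook's chat helper; this stub stands in for it so the example runs
# without API access -- replace it with the real helper in the notebook.
def get_chat_completion(messages):
    # Stub: echoes the start of the last user message instead of calling a model
    return f"[model reply to: {messages[-1]['content'][:40]}...]"

# Pass 1: produce a first draft
draft = get_chat_completion([
    {"role": "user", "content": "Write a docstring for calculate_discount."}
])

# Pass 2: feed the draft back so the model reviews its own work
improved = get_chat_completion([
    {"role": "user",
     "content": f"Review this draft for errors and output an improved version:\n\n<draft>\n{draft}\n</draft>"}
])
print(improved)
```

The second call receives the first call's output wrapped in XML tags, exactly like the multi-step chains above, just with the AI acting as its own reviewer.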
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "\n",
+ "### ☕ Suggested Break Point #3\n",
+ "\n",
+ "*~90 minutes elapsed • Almost there!*\n",
+ "\n",
+ "**✅ Completed (Tactics 0-6):**\n",
+ "\n",
+ "Clear Instructions, Role Prompting & Structured Inputs