diff --git a/01-course/module-01-foundations/README.md b/01-course/module-01-foundations/README.md new file mode 100644 index 0000000..193d68a --- /dev/null +++ b/01-course/module-01-foundations/README.md @@ -0,0 +1,40 @@ +# Module 1: Foundations + +## Course Introduction & Environment Setup + +This foundational module introduces you to prompt engineering concepts and gets your development environment configured for hands-on learning. + +### Learning Objectives +By completing this module, you will be able to: +- βœ… Set up a working development environment with AI assistant access +- βœ… Identify and apply the four core elements of effective prompts +- βœ… Write basic prompts for reviewing code +- βœ… Iterate and refine prompts based on output quality + +### Getting Started + +**First time here?** +- If you haven't set up your development environment yet, follow the [Quick Setup guide](../../README.md#-quick-setup) in the main README first +- **New to Jupyter notebooks?** Read [About Jupyter Notebooks](../../README.md#-about-jupyter-notebooks) to understand how notebooks work and where code executes + +**Ready to start?** +1. **Open the tutorial notebook**: Click on [module1.ipynb](./module1.ipynb) to start the interactive tutorial +2. **Install dependencies**: Run the "Install Required Dependencies" cell in the notebook +3. **Follow the notebook**: Work through each cell sequentially - the notebook will guide you through setup and exercises +4. **Complete exercises**: Practice the hands-on activities as you go + +### Module Contents +- **[module1.ipynb](./module1.ipynb)** - Complete module 1 tutorial notebook + +### Time Required +Approximately 20 minutes + +### Prerequisites +- Python 3.8+ installed +- IDE with notebook support (VS Code or Cursor recommended) +- API access to GitHub Copilot, CircuIT, or OpenAI + +### Next Steps +After completing this module: +1. Review and refine your solutions to the exercises in this module +2. Continue to [Module 2: Core Prompting Techniques](../module-02-fundamentals/) diff --git a/01-course/module-01-foundations/module1.ipynb b/01-course/module-01-foundations/module1.ipynb new file mode 100644 index 0000000..f52de9b --- /dev/null +++ b/01-course/module-01-foundations/module1.ipynb @@ -0,0 +1,991 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Module 1: Foundation\n", + "\n", + "| **Aspect** | **Details** |\n", + "|-------------|-------------|\n", + "| **Goal** | Set up your development environment and learn the 4 core elements of effective prompts |\n", + "| **Time** | ~20 minutes |\n", + "| **Prerequisites** | Python 3.8+, IDE with notebook support, API access (GitHub Copilot, CircuIT, or OpenAI) |\n", + "| **Setup Required** | Clone the repository and follow [Quick Setup](../README.md) before running this notebook |\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## πŸ€” Why Prompt Engineering for Software Engineers?\n", + "\n", + "### What is Prompt Engineering?\n", + "\n", + "**Prompt Engineering** is the fastest way to harness the power of large language models. 
By interacting with an LLM through a series of questions, statements, or instructions, you can adjust LLM output behavior based on the specific context of the output you want to achieve.\n", + "\n", + "**Effective prompt techniques can help your business accomplish the following benefits:**\n", + "\n", + "- **Boost a model's abilities and improve safety**\n", + "- **Augment the model with domain knowledge and external tools** without changing model parameters or fine-tuning\n", + "- **Interact with language models to grasp their full capabilities**\n", + "- **Achieve better quality outputs through better quality inputs**\n", + "\n", + "### Two Ways to Influence LLM Behavior\n", + "\n", + "**1. Fine-tuning (Traditional Approach)**\n", + "- Adjust the model's weights/parameters using training data to optimize a cost function\n", + "- **Expensive process** - requires significant computation time and cost\n", + "- **Limited flexibility** - model is locked into specific behavior patterns\n", + "- **Problem:** Still produces vague, inconsistent results without proper context\n", + "\n", + "**2. Prompt Engineering vs. Context Engineering**\n", + "\n", + "According to [Anthropic's engineering team](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents), there's an important distinction:\n", + "\n", + "- **Prompt Engineering** refers to methods for writing and organizing LLM instructions for optimal outcomes\n", + "- **Context Engineering** refers to the set of strategies for curating and maintaining the optimal set of tokens (information) during LLM inference, including all the other information that may land there outside of the prompts\n", + "\n", + "**Key Difference:** Prompt engineering focuses on writing effective prompts, while context engineering manages the entire context state (system instructions, tools, external data, message history, etc.) as a finite resource.\n", + "\n", + "### The Evolution: From Prompting to Context Engineering\n", + "\n", + "**Traditional Prompting** is asking AI questions without providing sufficient context, leading to generic, unhelpful responses. It's like asking a doctor \"fix me\" without describing your symptoms.\n", + "\n", + "**Context Engineering** treats context as a finite resource that must be carefully curated. As [Anthropic explains](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents), \"context is a critical but finite resource for AI agents\" that requires thoughtful management.\n", + "\n", + "**Prompt Engineering** focuses on writing effective instructions, while **Context Engineering** manages the entire information ecosystem that feeds into the model.\n", + "\n", + "| **Traditional Prompting** | **Context Engineering** | **Prompt Engineering** |\n", + "|---------------------------|-------------------------|-------------------------|\n", + "| ❌ \"Fix this code\" | ⚠️ \"Fix this code. Context: Python e-commerce function. Tools: [code_analyzer, refactor_tool]. History: [previous attempts]\" | βœ… \"You are a senior Python developer. Refactor this e-commerce function following SOLID principles, add type hints, handle edge cases, and maintain backward compatibility. Format your response as: 1) Analysis, 2) Issues found, 3) Refactored code.\" |\n", + "| ❌ \"Make it better\" | ⚠️ \"Improve this security function. Context: Critical system. Available tools: [security_scanner, vulnerability_checker]. Previous findings: [XSS vulnerability found]\" | βœ… \"Act as a security expert. 
Analyze this code for vulnerabilities, performance issues, and maintainability problems. Provide specific fixes with code examples. Use this format: [Security Issues], [Performance Issues], [Code Quality], [Solutions].\" |\n", + "| ❌ \"Help me debug\" | ⚠️ \"Debug this error. Context: Production system. Tools: [log_analyzer, system_monitor]. Recent changes: [deployment at 2pm]\" | βœ… \"You are a debugging specialist. Debug this error: [specific error message]. Context: [system details]. Expected behavior: [description]. Use step-by-step troubleshooting approach: 1) Reproduce, 2) Isolate, 3) Fix, 4) Test.\" |\n", + "\n", + "**Without Context (Traditional):**\n", + "```\n", + "User: \"Fix this code\"\n", + "AI: \"I'd be happy to help! Could you please share the code you'd like me to fix?\"\n", + "```\n", + "\n", + "**With Context (Prompt Engineering):**\n", + "```\n", + "User: \"Fix this code: def calculate_total(items): return sum(items)\n", + "Context: This is a Python function for an e-commerce checkout. \n", + "Requirements: Handle empty lists, add type hints, include error handling.\n", + "AI: Here's the improved function with proper error handling and type hints...\"\n", + "```\n", + "\n", + "---\n", + "\n", + "## πŸ“‹ Elements of a Prompt\n", + "\n", + "A prompt's form depends on the task you are giving to a model. As you explore prompt engineering examples, you will review prompts containing some or all of the following elements:\n", + "\n", + "### **1. Instructions**\n", + "This is a task for the large language model to do. It provides a task description or instruction for how the model should perform.\n", + "\n", + "**Example:** \"You are a senior software engineer conducting a code review. Analyze the provided code and identify potential issues.\"\n", + "\n", + "### **2. Context**\n", + "This is external information to guide the model.\n", + "\n", + "**Example:** \"Code context: This is a utility function for user registration in a web application.\"\n", + "\n", + "### **3. Input Data**\n", + "This is the input for which you want a response.\n", + "\n", + "**Example:** \"Code to review: `def register_user(email, password): ...`\"\n", + "\n", + "### **4. Output Indicator**\n", + "This is the output type or format.\n", + "\n", + "**Example:** \"Please provide your response in this format: 1) Security Issues, 2) Code Quality Issues, 3) Recommended Improvements, 4) Overall Assessment\"\n", + "\n", + "---\n", + "\n", + "## πŸ”„ Evaluate and Iterate\n", + "\n", + "**Review model responses** to ensure prompts elicit appropriate quality, type, and range of responses. Make changes as needed.\n", + "\n", + "**Pro tip:** Ask one copy of the model to improve or check output from another copy.\n", + "\n", + "**Remember:** Prompt engineering is an iterative skill that improves with practice. 
Experimentation builds intuition for crafting optimal prompts.\n", + "\n", + "### 🎯 Key Benefits of Effective Prompting\n", + "\n", + "Effective prompt techniques can help you accomplish the following benefits:\n", + "\n", + "- **πŸš€ Boost a model's abilities and improve safety** \n", + " Well-crafted prompts guide models toward more accurate and appropriate responses\n", + "\n", + "- **🧠 Augment the model with domain knowledge and external tools** \n", + " Without changing model parameters or fine-tuning\n", + "\n", + "- **πŸ’‘ Interact with language models to grasp their full capabilities** \n", + " Unlock advanced reasoning and problem-solving abilities\n", + "\n", + "- **πŸ“ˆ Achieve better quality outputs through better quality inputs** \n", + " The precision of your prompts directly impacts the quality of results\n", + "\n", + "**Real Impact:** Transform AI from a \"helpful chatbot\" into a reliable development partner that understands your specific coding context and delivers consistent, actionable results.\n", + "\n", + "---\n", + "\n", + "## Getting Started: Setup and Practice\n", + "\n", + "Now that you understand why prompt engineering matters and what makes it effective, let's set up your development environment and start building! You'll create your first AI-powered code review assistant that demonstrates all the concepts we've covered.\n", + "\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### πŸ“š How This Notebook Works\n", + "\n", + "
\n", + "⚠️ Important:

\n", + "This notebook cannot be executed directly from GitHub. You must clone the repository and run it locally in your IDE.
\n", + "
\n", + "\n", + "
\n", + "πŸ†• First time using Jupyter notebooks?

\n", + "See the About Jupyter Notebooks section in the main README for a complete guide on how notebooks work, where code executes, and how to get started.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Quick start:**\n", + "- Press `Shift + Enter` to run each cell\n", + "- Run cells sequentially from top to bottom\n", + "- Output appears below each cell" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Step 1: Install Required Dependencies\n", + "Let's start by installing the packages we need for this tutorial.\n", + "\n", + "Run the cell below. You should see a success message when installation completes:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Install required packages\n", + "import subprocess\n", + "import sys\n", + "\n", + "def install_requirements():\n", + " try:\n", + " # Install from requirements.txt\n", + " subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"-q\", \"-r\", \"requirements.txt\"])\n", + " print(\"βœ… SUCCESS! All dependencies installed successfully.\")\n", + " print(\"πŸ“¦ Installed: openai, anthropic, python-dotenv, requests\")\n", + " except subprocess.CalledProcessError as e:\n", + " print(f\"❌ Installation failed: {e}\")\n", + " print(\"πŸ’‘ Try running: pip install openai anthropic python-dotenv requests\")\n", + "\n", + "install_requirements()\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "βœ… **Success!** Dependencies installed on your local machine. Now let's connect to an AI model.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Step 2: Connect to AI Model\n", + "\n", + "
\n", + "πŸ’‘ Note:

\n", + "The code below runs on your local machine and connects to AI services over the internet.\n", + "
\n", + "\n", + "Choose your preferred option:\n", + "\n", + "- **Option A: GitHub Copilot API (local proxy)**: Recommended if you don't have OpenAI or CircuIT API access.\n", + " - Supports both **Claude** and **OpenAI** models\n", + " - No API keys needed - uses your GitHub Copilot subscription\n", + " - Follow [GitHub-Copilot-2-API/README.md](../../GitHub-Copilot-2-API/README.md) to authenticate and start the local server\n", + " - Run the setup cell below and **edit your preferred provider** (`\"openai\"` or `\"claude\"`) by setting the `PROVIDER` variable\n", + " - Available models:\n", + " - **OpenAI**: gpt-4o, gpt-4, gpt-3.5-turbo, o3-mini, o4-mini\n", + " - **Claude**: claude-3.5-sonnet, claude-3.7-sonnet, claude-sonnet-4\n", + "\n", + "- **Option B: OpenAI API**: If you have OpenAI API access, you can use the `OpenAI` connection cells provided later in this notebook.\n", + "\n", + "- **Option C: CircuIT APIs (Azure OpenAI)**: If you have CircuIT API access, you can use the `CircuIT` connection cells provided later in this notebook." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Option A: GitHub Copilot (Recommended)\n", + "\n", + "If you have GitHub Copilot, this is the easiest option:\n", + "
\n", + "πŸ’‘ Note:

\n", + "The GitHub Copilot API repository (copilot-api) used in this course is a fork of the original repository from:

https://cto-github.cisco.com/xinyu3/copilot2api\n", + "
\n", + "\n", + "- Follow the setup steps in [https://github.com/snehangshu-splunk/copilot-api/blob/main/.github/README.md](https://github.com/snehangshu-splunk/copilot-api/blob/main/.github/README.md) to:\n", + " - Authenticate (`auth`) with your GitHub account that has Copilot access\n", + " - Start the local server (default: `http://localhost:7711`)\n", + "- Then run the \"GitHub Copilot API setup (local proxy)\" cells below.\n", + "\n", + "Quick reference (see [README](../../GitHub-Copilot-2-API/README.md) for details):\n", + "1. Download and install dependencies\n", + " ```bash\n", + " # Clone the repository\n", + " git clone git@github.com:snehangshu-splunk/copilot-api.git\n", + " cd copilot-api\n", + "\n", + " # Install dependencies\n", + " uv sync\n", + " ```\n", + "2. Before starting the server, you need to authenticate with GitHub:\n", + " ```bash\n", + " # For business account\n", + " uv run copilot2api auth --business\n", + " ```\n", + " When authenticating for the first time, you will see the following information:\n", + " ```\n", + " Press Ctrl+C to stop the server\n", + " Starting Copilot API server...\n", + " Starting GitHub device authorization flow...\n", + "\n", + " Please enter the code '14B4-5D82' at:\n", + " https://github.com/login/device\n", + "\n", + " Waiting for authorization...\n", + " ```\n", + " You need to copy `https://github.com/login/device` to your browser, then log in to your GitHub account through the browser. This GitHub account should have GitHub Copilot functionality. After authentication is complete, copy '14B4-5D82' in the browser prompt box. This string of numbers is system-generated and may be different each time.\n", + "\n", + " > **Don't copy the code here.** If you copy this, it will only cause your authorization to fail.\n", + "\n", + " After successful device authorization:\n", + " - macOS or Linux:\n", + " - In the `$HOME/.config/copilot2api/` directory, you will see the github-token file.\n", + " - Windows system:\n", + " - You will find the github-token file in the `C:\\Users\\\\AppData\\Roaming\\copilot2api\\` directory.\n", + "\n", + " 3. Start the Server\n", + " ```bash\n", + " # Start API server (default port 7711)\n", + " uv run copilot2api start\n", + " ```\n", + " Now use the OpenAI libraries to connect to the LLM, by executing the below cell. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Option A: GitHub Copilot API setup (Recommended)\n", + "import openai\n", + "import anthropic\n", + "import os\n", + "\n", + "# ============================================\n", + "# 🎯 CHOOSE YOUR AI MODEL PROVIDER\n", + "# ============================================\n", + "# Set your preference: \"openai\" or \"claude\"\n", + "PROVIDER = \"claude\" # Change to \"claude\" to use Claude models\n", + "\n", + "# ============================================\n", + "# πŸ“‹ Available Models by Provider\n", + "# ============================================\n", + "# OpenAI Models (via GitHub Copilot):\n", + "# - gpt-4o (recommended, supports vision)\n", + "# - gpt-4\n", + "# - gpt-3.5-turbo\n", + "# - o3-mini, o4-mini\n", + "#\n", + "# Claude Models (via GitHub Copilot):\n", + "# - claude-3.5-sonnet (recommended, supports vision)\n", + "# - claude-3.7-sonnet (supports vision)\n", + "# - claude-sonnet-4 (supports vision)\n", + "# ============================================\n", + "\n", + "# Configure clients for both providers\n", + "openai_client = openai.OpenAI(\n", + " base_url=\"http://localhost:7711/v1\",\n", + " api_key=\"dummy-key\"\n", + ")\n", + "\n", + "claude_client = anthropic.Anthropic(\n", + " api_key=\"dummy-key\",\n", + " base_url=\"http://localhost:7711\"\n", + ")\n", + "\n", + "# Set default models for each provider\n", + "OPENAI_DEFAULT_MODEL = \"gpt-4o\"\n", + "CLAUDE_DEFAULT_MODEL = \"claude-3.5-sonnet\"\n", + "\n", + "\n", + "def _extract_text_from_blocks(blocks):\n", + " \"\"\"Extract text content from response blocks returned by the API.\"\"\"\n", + " parts = []\n", + " for block in blocks:\n", + " text_val = getattr(block, \"text\", None)\n", + " if isinstance(text_val, str):\n", + " parts.append(text_val)\n", + " elif isinstance(block, dict):\n", + " t = block.get(\"text\")\n", + " if isinstance(t, str):\n", + " parts.append(t)\n", + " return \"\\n\".join(parts)\n", + "\n", + "\n", + "def get_openai_completion(messages, model=None, temperature=0.0):\n", + " \"\"\"Get completion from OpenAI models via GitHub Copilot.\"\"\"\n", + " if model is None:\n", + " model = OPENAI_DEFAULT_MODEL\n", + " try:\n", + " response = openai_client.chat.completions.create(\n", + " model=model,\n", + " messages=messages,\n", + " temperature=temperature\n", + " )\n", + " return response.choices[0].message.content\n", + " except Exception as e:\n", + " return f\"❌ Error: {e}\\nπŸ’‘ Make sure GitHub Copilot proxy is running on port 7711\"\n", + "\n", + "\n", + "def get_claude_completion(messages, model=None, temperature=0.0):\n", + " \"\"\"Get completion from Claude models via GitHub Copilot.\"\"\"\n", + " if model is None:\n", + " model = CLAUDE_DEFAULT_MODEL\n", + " try:\n", + " response = claude_client.messages.create(\n", + " model=model,\n", + " max_tokens=8192,\n", + " messages=messages,\n", + " temperature=temperature\n", + " )\n", + " return _extract_text_from_blocks(getattr(response, \"content\", []))\n", + " except Exception as e:\n", + " return f\"❌ Error: {e}\\nπŸ’‘ Make sure GitHub Copilot proxy is running on port 7711\"\n", + "\n", + "\n", + "def get_chat_completion(messages, model=None, temperature=0.0):\n", + " \"\"\"\n", + " Generic function to get chat completion from any provider.\n", + " Routes to the appropriate provider-specific function based on PROVIDER setting.\n", + " \"\"\"\n", + " if PROVIDER.lower() == \"claude\":\n", + " return 
get_claude_completion(messages, model, temperature)\n", + " else: # Default to OpenAI\n", + " return get_openai_completion(messages, model, temperature)\n", + "\n", + "\n", + "def get_default_model():\n", + " \"\"\"Get the default model for the current provider.\"\"\"\n", + " if PROVIDER.lower() == \"claude\":\n", + " return CLAUDE_DEFAULT_MODEL\n", + " else:\n", + " return OPENAI_DEFAULT_MODEL\n", + "\n", + "\n", + "# ============================================\n", + "# πŸ§ͺ TEST CONNECTION\n", + "# ============================================\n", + "print(\"πŸ”„ Testing connection to GitHub Copilot proxy...\")\n", + "test_result = get_chat_completion([\n", + " {\"role\": \"user\", \"content\": \"test\"}\n", + "])\n", + "\n", + "if test_result and \"Error\" in test_result:\n", + " print(\"\\n\" + \"=\"*60)\n", + " print(\"❌ CONNECTION FAILED!\")\n", + " print(\"=\"*60)\n", + " print(f\"Provider: {PROVIDER.upper()}\")\n", + " print(f\"Expected endpoint: http://localhost:7711\")\n", + " print(\"\\n⚠️ The GitHub Copilot proxy is NOT running!\")\n", + " print(\"\\nπŸ“‹ To fix this:\")\n", + " print(\" 1. Open a new terminal\")\n", + " print(\" 2. Navigate to your copilot-api directory\")\n", + " print(\" 3. Run: uv run copilot2api start\")\n", + " print(\" 4. Wait for the server to start (you should see 'Server initialized')\")\n", + " print(\" 5. Come back and rerun this cell\")\n", + " print(\"\\nπŸ’‘ Need setup help? See: GitHub-Copilot-2-API/README.md\")\n", + " print(\"=\"*70)\n", + "else:\n", + " print(\"\\n\" + \"=\"*60)\n", + " print(\"βœ… CONNECTION SUCCESSFUL!\")\n", + " print(\"=\"*60)\n", + " print(f\"πŸ€– Provider: {PROVIDER.upper()}\")\n", + " print(f\"πŸ“¦ Default Model: {get_default_model()}\")\n", + " print(f\"πŸ”— Endpoint: http://localhost:7711\")\n", + " print(f\"\\nπŸ’‘ To switch providers, change PROVIDER to '{'claude' if PROVIDER.lower() == 'openai' else 'openai'}' and rerun this cell\")\n", + " print(\"=\"*70)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Option B: OpenAI API\n", + "\n", + "**Setup:** Add your API key to `.env` file, then uncomment and run:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# # Direct OpenAI API setup\n", + "# import openai\n", + "# import os\n", + "# from dotenv import load_dotenv\n", + "\n", + "# load_dotenv()\n", + "\n", + "# client = openai.OpenAI(\n", + "# api_key=os.getenv(\"OPENAI_API_KEY\") # Set this in your .env file\n", + "# )\n", + "\n", + "# def get_chat_completion(messages, model=\"gpt-4\", temperature=0.7):\n", + "# try:\n", + "# response = client.chat.completions.create(\n", + "# model=model,\n", + "# messages=messages,\n", + "# temperature=temperature\n", + "# )\n", + "# return response.choices[0].message.content\n", + "# except Exception as e:\n", + "# return f\"❌ Error: {e}\"\n", + "\n", + "# print(\"βœ… OpenAI API configured successfully!\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Option C: CircuIT APIs\n", + "\n", + "**Setup:** Configure environment variables (`CISCO_CLIENT_ID`, `CISCO_CLIENT_SECRET`, `CISCO_OPENAI_APP_KEY`) in `.env` file.\n", + "\n", + "Get values from: https://ai-chat.cisco.com/bridgeit-platform/api/home\n", + "\n", + "Then uncomment and run:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# import openai\n", + "# import traceback\n", + "# import requests\n", + "# import base64\n", + 
"# import os\n", + "# from dotenv import load_dotenv\n", + "# from openai import AzureOpenAI\n", + "\n", + "# # Load environment variables\n", + "# load_dotenv()\n", + "\n", + "# # Open AI version to use\n", + "# openai.api_type = \"azure\"\n", + "# openai.api_version = \"2024-12-01-preview\"\n", + "\n", + "# # Get API_KEY wrapped in token - using environment variables\n", + "# client_id = os.getenv(\"CISCO_CLIENT_ID\")\n", + "# client_secret = os.getenv(\"CISCO_CLIENT_SECRET\")\n", + "\n", + "# url = \"https://id.cisco.com/oauth2/default/v1/token\"\n", + "\n", + "# payload = \"grant_type=client_credentials\"\n", + "# value = base64.b64encode(f\"{client_id}:{client_secret}\".encode(\"utf-8\")).decode(\"utf-8\")\n", + "# headers = {\n", + "# \"Accept\": \"*/*\",\n", + "# \"Content-Type\": \"application/x-www-form-urlencoded\",\n", + "# \"Authorization\": f\"Basic {value}\",\n", + "# }\n", + "\n", + "# token_response = requests.request(\"POST\", url, headers=headers, data=payload)\n", + "# print(token_response.text)\n", + "# token_data = token_response.json()\n", + "\n", + "# client = AzureOpenAI(\n", + "# azure_endpoint=\"https://chat-ai.cisco.com\",\n", + "# api_key=token_data.get(\"access_token\"),\n", + "# api_version=\"2024-12-01-preview\",\n", + "# )\n", + "\n", + "# app_key = os.getenv(\"CISCO_OPENAI_APP_KEY\")\n", + "\n", + "# def get_chat_completion(messages, model=\"gpt-4o\", temperature=0.0):\n", + "# try:\n", + "# response = client.chat.completions.create(\n", + "# model=model,\n", + "# messages=messages,\n", + "# temperature=temperature,\n", + "# user=f'{\"appkey\": \"{app_key}\"}',\n", + "# )\n", + "# return response.choices[0].message.content\n", + "# except Exception as e:\n", + "# return f\"❌ Error: {e}\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Step 3: Test Connection\n", + "\n", + "Run your first prompt to verify everything works:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Test the connection with a simple prompt\n", + "test_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a helpful coding assistant. Respond with exactly: 'Connection successful! Ready for prompt engineering.'\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"Test the connection\"\n", + " }\n", + "]\n", + "\n", + "response = get_chat_completion(test_messages)\n", + "print(\"πŸ§ͺ Test Response:\")\n", + "print(response)\n", + "\n", + "if response and \"Connection successful\" in response:\n", + " print(\"\\nπŸŽ‰ Perfect! Your AI connection is working!\")\n", + "else:\n", + " print(\"\\n⚠️ Connection test complete, but response format may vary.\")\n", + " print(\"This is normal - let's continue with the tutorial!\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "βœ… **Connection verified!** You're ready to learn prompt engineering.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Step 4: Craft Your First AI-Powered Code Review\n", + "\n", + "Time to put theory into practice! 
You'll engineer a prompt that transforms a generic AI into a specialized code review expert.\n", + "\n", + "Let's see the 4 core elements in action with a software engineering example:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Example: Code review prompt with all 4 elements\n", + "messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": (\n", + " # 1. INSTRUCTIONS\n", + " \"You are a senior software engineer conducting a code review. \"\n", + " \"Analyze the provided code and identify potential issues.\"\n", + " )\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"\n", + "# 2. CONTEXT\n", + "Code context: This is a utility function for user registration in a web application.\n", + "\n", + "# 3. INPUT DATA\n", + "Code to review:\n", + "```python\n", + "def register_user(email, password):\n", + " if email and password:\n", + " user = {{\"email\": email, \"password\": password}}\n", + " return user\n", + " return None\n", + "```\n", + "\n", + "# 4. OUTPUT FORMAT\n", + "Please provide your response in this format:\n", + "1. Security Issues (if any)\n", + "2. Code Quality Issues (if any) \n", + "3. Recommended Improvements\n", + "4. Overall Assessment\n", + "\"\"\"\n", + " }\n", + "]\n", + "\n", + "response = get_chat_completion(messages)\n", + "print(\"πŸ” CODE REVIEW RESULT:\")\n", + "print(response)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## πŸƒβ€β™€οΈ Hands-On Practice\n", + "\n", + "Now let's practice what you've learned! These exercises will help you master the 4 core elements of effective prompts.\n", + "\n", + "### Activity 1.1: Analyze Prompts and Identify Missing Elements\n", + "\n", + "Let's examine some incomplete prompts and identify what's missing:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# HINT: For each prompt, decide if it includes:\n", + "# - Instructions/persona\n", + "# - Context\n", + "# - Input data\n", + "# - Output indicator/format\n", + "# YOUR TASK: Write your notes below or in markdown.\n", + "\n", + "# Prompt 1 - Missing some elements\n", + "prompt_1 = \"\"\"\n", + "Fix this code:\n", + "def calculate(x, y):\n", + " return x + y\n", + "\"\"\"\n", + "\n", + "# Prompt 2 - Missing some elements \n", + "prompt_2 = \"\"\"\n", + "You are a Python developer.\n", + "Make this function better.\n", + "\"\"\"\n", + "\n", + "# Prompt 3 - Missing some elements\n", + "prompt_3 = \"\"\"\n", + "Review the following function and provide feedback.\n", + "Return your response as a list of improvements.\n", + "\"\"\"\n", + "\n", + "# YOUR NOTES:\n", + "# - Prompt 1 missing: ...\n", + "# - Prompt 2 missing: ...\n", + "# - Prompt 3 missing: ...\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Activity 1.2: Create a Complete Prompt with All 4 Elements\n", + "\n", + "Now let's build a complete prompt for code documentation. 
Use the function below and create both system and user messages:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# HINT: Include all 4 elements:\n", + "# - Instructions/persona (system)\n", + "# - Context (user)\n", + "# - Input data (user)\n", + "# - Output indicator/format (user)\n", + "# YOUR TASK: Build system_message and user_message using the function below, then call get_chat_completion.\n", + "\n", + "function_to_document = \"\"\"\n", + "def process_transaction(user_id, amount, transaction_type):\n", + " if transaction_type not in ['deposit', 'withdrawal']:\n", + " raise ValueError(\"Invalid transaction type\")\n", + " \n", + " if amount <= 0:\n", + " raise ValueError(\"Amount must be positive\")\n", + " \n", + " balance = get_user_balance(user_id)\n", + " \n", + " if transaction_type == 'withdrawal' and balance < amount:\n", + " raise InsufficientFundsError(\"Insufficient funds\")\n", + " \n", + " new_balance = balance + amount if transaction_type == 'deposit' else balance - amount\n", + " update_user_balance(user_id, new_balance)\n", + " log_transaction(user_id, amount, transaction_type)\n", + " \n", + " return new_balance\n", + "\"\"\"\n", + "\n", + "# system_message = ...\n", + "# user_message = ...\n", + "# messages = [\n", + "# {\"role\": \"system\", \"content\": system_message},\n", + "# {\"role\": \"user\", \"content\": user_message}\n", + "# ]\n", + "# response = get_chat_completion(messages)\n", + "# print(response)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 🎯 Exercise Solutions & Discussion\n", + "\n", + "
\n", + "πŸ’‘ Try the exercises above first!

\n", + "Complete Activities 1.1 and 1.2 before checking the solutions below.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "πŸ“‹ Click to reveal solutions and discussion\n", + "\n", + "**Activity 1.1 Analysis:**\n", + "- **Prompt 1** missing: Instructions (role), Context, Output format\n", + "- **Prompt 2** missing: Context, Input data, Output format \n", + "- **Prompt 3** missing: Instructions (role), Context, Input data\n", + "\n", + "**Activity 1.2 Solution Example:**\n", + "```python\n", + "system_message = \"You are a senior software engineer creating technical documentation. Write clear, comprehensive documentation for the provided function.\"\n", + "\n", + "user_message = f\"\"\"\n", + "Context: This is a financial transaction processing function for a banking application.\n", + "```\n", + "```python\n", + "Function to document:\n", + "\n", + "{function_to_document}\n", + "\n", + "Please provide documentation in this format:\n", + "1. Function Purpose\n", + "2. Parameters\n", + "3. Return Value\n", + "4. Error Conditions\n", + "5. Usage Example\n", + "\"\"\"\n", + "```\n", + "\n", + "**Key Takeaway:** Notice how each element serves a specific purpose:\n", + "- **Instructions** define the AI's role and task\n", + "- **Context** provides domain knowledge\n", + "- **Input Data** gives the specific content to work with\n", + "- **Output Format** ensures consistent, structured results" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "πŸŽ‰ **Excellent!** You've just executed a structured prompt with all 4 core elements and practiced identifying them in exercises.\n", + "\n", + "πŸ’‘ **What makes this work?**\n", + "- **Clear role definition** (\"senior software engineer conducting code review\")\n", + "- **Specific context** about the code's purpose\n", + "- **Concrete input** to analyze\n", + "- **Structured output format** for consistent results\n", + "\n", + "**You've now completed:**\n", + "- βœ… Analyzed incomplete prompts to identify missing elements\n", + "- βœ… Created complete prompts with all 4 core elements\n", + "- βœ… Applied prompt engineering to real coding scenarios\n", + "\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## πŸ“ˆ Tracking Your Progress\n", + "\n", + "> **πŸ’‘ New to Skills Checklists?** See [Tracking Your Progress](../../README.md#-tracking-your-progress) in the main README for details on how the Skills Checklist works and when to check off skills.\n", + "\n", + "### Self-Assessment Questions\n", + "\n", + "After completing Module 1, ask yourself:\n", + "1. Can I explain why structured prompts work better than vague ones?\n", + "2. Can I apply the 4 core elements to my daily coding tasks?\n", + "3. Can I teach a colleague how to write effective prompts?\n", + "4. Can I create variations of prompts for different scenarios?\n", + "\n", + "### Progress Overview\n", + "\n", + "
\n", + "πŸ’‘ Note: The status indicators below (βœ…/⬜) are visual guides only and cannot be clicked. Scroll down to \"Check Off Your Skills\" for the interactive checkboxes where you'll track your actual progress!\n", + "
\n", + "\n", + "
\n", + "\n", + "**Module 1 Skills Checklist:** \n", + "
Track your progress by checking off skills below. When you master all 8 skills, you'll have achieved 100% completion!
\n", + "\n", + "**Current Status:**\n", + "- βœ… Environment Setup (Tutorial Completed)\n", + "- βœ… Basic Understanding (Tutorial Completed) \n", + "- ⬜ Skills Mastery (Use Skills Checklist below)\n", + "\n", + "**Progress Guide:**\n", + "- 0-2 skills checked: Beginner (50-63%)\n", + "- 3-5 skills checked: Intermediate (69-81%)\n", + "- 6-7 skills checked: Advanced (88-94%)\n", + "- 8 skills checked: Expert (100%) πŸŽ‰\n", + "\n", + "**Module 2:** Coming Next (8 Core Tactics)\n", + "- ⬜ Role Prompting & Structured Inputs\n", + "- ⬜ Few-Shot Examples & Chain-of-Thought\n", + "- ⬜ Reference Citations & Prompt Chaining\n", + "- ⬜ LLM-as-Judge & Inner Monologue\n", + "\n", + "
\n", + "\n", + "### Check Off Your Skills\n", + "\n", + "
\n", + "\n", + "Mark each skill as you master it:\n", + "\n", + "**Foundation Skills:**\n", + "
\n", + "- I can identify the 4 core prompt elements in any example\n", + "
\n", + "
\n", + "- I can convert vague requests into structured prompts\n", + "
\n", + "
\n", + "- I can write clear instructions for AI assistants\n", + "
\n", + "
\n", + "- I can provide appropriate context for coding tasks\n", + "
\n", + "\n", + "**Application Skills:**\n", + "
\n", + "- I can use prompts for code review and analysis\n", + "
\n", + "
\n", + "- I can adapt prompts for different programming languages\n", + "
\n", + "
\n", + "- I can troubleshoot when prompts don't work as expected\n", + "
\n", + "
\n", + "- I can explain prompt engineering benefits to my team\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "πŸ’‘ Remember:

\n", + "The goal is not just to complete activities, but to build lasting skills that transform your development workflow!\n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Module 1 Complete! πŸŽ‰\n", + "\n", + "**What You've Accomplished:**\n", + "- βœ… Set up Python environment with AI model access\n", + "- βœ… Executed your first structured prompt\n", + "- βœ… Learned the 4 core elements of effective prompts\n", + "- βœ… Conducted your first AI-powered code review\n", + "- βœ… Analyzed incomplete prompts to identify missing elements\n", + "- βœ… Created complete prompts with all 4 core elements\n", + "- βœ… Applied prompt engineering to real coding scenarios\n", + "\n", + "**Next:** Continue to [**Module 2: Fundamentals**](../module-02-fundamentals/README.md)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Troubleshooting\n", + "\n", + "**Common Issues:**\n", + "- **Installation failed:** Try `pip install openai anthropic python-dotenv requests`\n", + "- **Connection failed:** Ensure GitHub Copilot proxy is running on port 7711\n", + "- **Authentication errors:** Check your API keys and permissions\n", + "\n", + "🎊 **Congratulations!** You've completed Module 1 and are ready to become a prompt engineering expert!\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.13.2" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/01-tutorials/module-01-foundations/requirements.txt b/01-course/module-01-foundations/requirements.txt similarity index 100% rename from 01-tutorials/module-01-foundations/requirements.txt rename to 01-course/module-01-foundations/requirements.txt diff --git a/01-course/module-02-fundamentals/README.md b/01-course/module-02-fundamentals/README.md new file mode 100644 index 0000000..3af440c --- /dev/null +++ b/01-course/module-02-fundamentals/README.md @@ -0,0 +1,53 @@ +# Module 2: Fundamentals + +## Core Prompt Engineering Techniques + +This module covers the essential prompt engineering techniques that form the foundation of effective AI assistant interaction for software development. + +### Learning Objectives +By completing this module, you will be able to: + +- βœ… Apply eight core prompt engineering techniques to real coding scenarios +- βœ… Write clear instructions with specific constraints and requirements +- βœ… Use role prompting to transform AI into specialized domain experts +- βœ… Organize complex inputs using XML delimiters and structured formatting +- βœ… Teach AI your preferred styles using few-shot examples +- βœ… Implement chain-of-thought reasoning for systematic problem-solving +- βœ… Ground AI responses in reference texts with proper citations +- βœ… Break complex tasks into sequential workflows using prompt chaining +- βœ… Create evaluation rubrics and self-critique loops with LLM-as-Judge +- βœ… Separate reasoning from clean final outputs using inner monologue + +### Getting Started + +**First time here?** If you haven't set up your development environment yet, follow the [Quick Setup guide](../../README.md#-quick-setup) in the main README first. + +**Ready to start?** +1. **Open the tutorial notebook**: Click on [module2.ipynb](./module2.ipynb) to start the interactive tutorial +2. **Install dependencies**: Run the "Install Required Dependencies" cell in the notebook +3. 
**Follow the notebook**: Work through each cell sequentially - the notebook will guide you through setup and exercises +4. **Complete exercises**: Practice the hands-on activities as you go + +### Module Contents +- **[module2.ipynb](./module2.ipynb)** - Complete module 2 tutorial notebook + +### Time Required +Approximately 90-120 minutes (1.5-2 hours) + +**Time Breakdown:** +- Setup and introduction: ~10 minutes +- 8 core tactics with examples: ~70 minutes +- Hands-on practice activities: ~20-30 minutes +- Progress tracking: ~5 minutes + +πŸ’‘ **Tip:** You can complete this module in one session or break it into multiple shorter sessions. Each tactic is self-contained, making it easy to pause and resume. + +### Prerequisites +- Python 3.8+ installed +- IDE with notebook support (VS Code or Cursor recommended) +- API access to GitHub Copilot, CircuIT, or OpenAI + +### Next Steps +After completing this module: +1. Review and refine your solutions to the exercises in this module +2. Continue to [Module 3: Application in Software Engineering](../module-03-applications/) diff --git a/01-course/module-02-fundamentals/module2.ipynb b/01-course/module-02-fundamentals/module2.ipynb new file mode 100644 index 0000000..224738e --- /dev/null +++ b/01-course/module-02-fundamentals/module2.ipynb @@ -0,0 +1,3134 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Module 2 - Core Prompting Techniques\n", + "\n", + "| **Aspect** | **Details** |\n", + "|-------------|-------------|\n", + "| **Goal** | Master 8 core prompt engineering tactics: role prompting, structured inputs, few-shot examples, chain-of-thought reasoning, reference citations, prompt chaining, LLM-as-judge, and inner monologue to build professional-grade AI workflows |\n", + "| **Time** | ~90-120 minutes (1.5-2 hours) |\n", + "| **Prerequisites** | Python 3.8+, IDE with notebook support, API access (GitHub Copilot, CircuIT, or OpenAI) |\n", + "| **Setup Required** | Clone the repository and follow [Quick Setup](../README.md) before running this notebook |\n", + "\n", + "---\n", + "\n", + "## πŸš€ Ready to Start?\n", + "\n", + "
\n", + "⚠️ Important:

\n", + "This module requires fresh setup. Even if you completed Module 1, run the setup cells below to ensure everything works correctly.
\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## πŸ”§ Setup: Environment Configuration\n", + "\n", + "### Step 1: Install Required Dependencies\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "Let's start by installing the packages we need for this tutorial.\n", + "\n", + "Run the cell below. You should see a success message when installation completes:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Install required packages for Module 2\n", + "import subprocess\n", + "import sys\n", + "\n", + "def install_requirements():\n", + " try:\n", + " # Install from requirements.txt\n", + " subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"-q\", \"-r\", \"requirements.txt\"])\n", + " print(\"βœ… SUCCESS! Module 2 dependencies installed successfully.\")\n", + " print(\"πŸ“¦ Ready for: openai, anthropic, python-dotenv, requests\")\n", + " except subprocess.CalledProcessError as e:\n", + " print(f\"❌ Installation failed: {e}\")\n", + " print(\"πŸ’‘ Try running: pip install openai anthropic python-dotenv requests\")\n", + "\n", + "install_requirements()\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Step 2: Connect to AI Model\n", + "\n", + "
\n", + "πŸ’‘ Note:

\n", + "The code below runs on your local machine and connects to AI services over the internet.\n", + "
\n", + "\n", + "Choose your preferred option:\n", + "\n", + "- **Option A: GitHub Copilot API (local proxy)** ⭐ **Recommended**: \n", + " - Supports both **Claude** and **OpenAI** models\n", + " - No API keys needed - uses your GitHub Copilot subscription\n", + " - Follow [GitHub-Copilot-2-API/README.md](../../GitHub-Copilot-2-API/README.md) to authenticate and start the local server\n", + " - Run the setup cell below and **edit your preferred provider** (`\"openai\"` or `\"claude\"`) by setting the `PROVIDER` variable\n", + " - Available models:\n", + " - **OpenAI**: gpt-4o, gpt-4, gpt-3.5-turbo, o3-mini, o4-mini\n", + " - **Claude**: claude-3.5-sonnet, claude-3.7-sonnet, claude-sonnet-4\n", + "\n", + "- **Option B: OpenAI API**: If you have OpenAI API access, uncomment and run the **Option B** cell below.\n", + "\n", + "- **Option C: CircuIT APIs (Azure OpenAI)**: If you have CircuIT API access, uncomment and run the **Option C** cell below.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Option A: GitHub Copilot API setup (Recommended)\n", + "import openai\n", + "import anthropic\n", + "import os\n", + "\n", + "# ============================================\n", + "# 🎯 CHOOSE YOUR AI MODEL PROVIDER\n", + "# ============================================\n", + "# Set your preference: \"openai\" or \"claude\"\n", + "PROVIDER = \"claude\" # Change to \"claude\" to use Claude models\n", + "\n", + "# ============================================\n", + "# πŸ“‹ Available Models by Provider\n", + "# ============================================\n", + "# OpenAI Models (via GitHub Copilot):\n", + "# - gpt-4o (recommended, supports vision)\n", + "# - gpt-4\n", + "# - gpt-3.5-turbo\n", + "# - o3-mini, o4-mini\n", + "#\n", + "# Claude Models (via GitHub Copilot):\n", + "# - claude-3.5-sonnet (recommended, supports vision)\n", + "# - claude-3.7-sonnet (supports vision)\n", + "# - claude-sonnet-4 (supports vision)\n", + "# ============================================\n", + "\n", + "# Configure clients for both providers\n", + "openai_client = openai.OpenAI(\n", + " base_url=\"http://localhost:7711/v1\",\n", + " api_key=\"dummy-key\"\n", + ")\n", + "\n", + "claude_client = anthropic.Anthropic(\n", + " api_key=\"dummy-key\",\n", + " base_url=\"http://localhost:7711\"\n", + ")\n", + "\n", + "# Set default models for each provider\n", + "OPENAI_DEFAULT_MODEL = \"gpt-4o\"\n", + "CLAUDE_DEFAULT_MODEL = \"claude-3.5-sonnet\"\n", + "\n", + "\n", + "def _extract_text_from_blocks(blocks):\n", + " \"\"\"Extract text content from response blocks returned by the API.\"\"\"\n", + " parts = []\n", + " for block in blocks:\n", + " text_val = getattr(block, \"text\", None)\n", + " if isinstance(text_val, str):\n", + " parts.append(text_val)\n", + " elif isinstance(block, dict):\n", + " t = block.get(\"text\")\n", + " if isinstance(t, str):\n", + " parts.append(t)\n", + " return \"\\n\".join(parts)\n", + "\n", + "\n", + "def get_openai_completion(messages, model=None, temperature=0.0):\n", + " \"\"\"Get completion from OpenAI models via GitHub Copilot.\"\"\"\n", + " if model is None:\n", + " model = OPENAI_DEFAULT_MODEL\n", + " try:\n", + " response = openai_client.chat.completions.create(\n", + " model=model,\n", + " messages=messages,\n", + " temperature=temperature\n", + " )\n", + " return response.choices[0].message.content\n", + " except Exception as e:\n", + " return f\"❌ Error: {e}\\nπŸ’‘ Make sure GitHub Copilot proxy is running on port 
7711\"\n", + "\n", + "\n", + "def get_claude_completion(messages, model=None, temperature=0.0):\n", + " \"\"\"Get completion from Claude models via GitHub Copilot.\"\"\"\n", + " if model is None:\n", + " model = CLAUDE_DEFAULT_MODEL\n", + " try:\n", + " response = claude_client.messages.create(\n", + " model=model,\n", + " max_tokens=8192,\n", + " messages=messages,\n", + " temperature=temperature\n", + " )\n", + " return _extract_text_from_blocks(getattr(response, \"content\", []))\n", + " except Exception as e:\n", + " return f\"❌ Error: {e}\\nπŸ’‘ Make sure GitHub Copilot proxy is running on port 7711\"\n", + "\n", + "\n", + "def get_chat_completion(messages, model=None, temperature=0.7):\n", + " \"\"\"\n", + " Generic function to get chat completion from any provider.\n", + " Routes to the appropriate provider-specific function based on PROVIDER setting.\n", + " \"\"\"\n", + " if PROVIDER.lower() == \"claude\":\n", + " return get_claude_completion(messages, model, temperature)\n", + " else: # Default to OpenAI\n", + " return get_openai_completion(messages, model, temperature)\n", + "\n", + "\n", + "def get_default_model():\n", + " \"\"\"Get the default model for the current provider.\"\"\"\n", + " if PROVIDER.lower() == \"claude\":\n", + " return CLAUDE_DEFAULT_MODEL\n", + " else:\n", + " return OPENAI_DEFAULT_MODEL\n", + "\n", + "\n", + "# ============================================\n", + "# πŸ§ͺ TEST CONNECTION\n", + "# ============================================\n", + "print(\"πŸ”„ Testing connection to GitHub Copilot proxy...\")\n", + "test_result = get_chat_completion([\n", + " {\"role\": \"user\", \"content\": \"test\"}\n", + "])\n", + "\n", + "if test_result and \"Error\" in test_result:\n", + " print(\"\\n\" + \"=\"*60)\n", + " print(\"❌ CONNECTION FAILED!\")\n", + " print(\"=\"*60)\n", + " print(f\"Provider: {PROVIDER.upper()}\")\n", + " print(f\"Expected endpoint: http://localhost:7711\")\n", + " print(\"\\n⚠️ The GitHub Copilot proxy is NOT running!\")\n", + " print(\"\\nπŸ“‹ To fix this:\")\n", + " print(\" 1. Open a new terminal\")\n", + " print(\" 2. Navigate to your copilot-api directory\")\n", + " print(\" 3. Run: uv run copilot2api start\")\n", + " print(\" 4. Wait for the server to start (you should see 'Server initialized')\")\n", + " print(\" 5. Come back and rerun this cell\")\n", + " print(\"\\nπŸ’‘ Need setup help? See: GitHub-Copilot-2-API/README.md\")\n", + " print(\"=\"*70)\n", + "else:\n", + " print(\"\\n\" + \"=\"*60)\n", + " print(\"βœ… CONNECTION SUCCESSFUL!\")\n", + " print(\"=\"*60)\n", + " print(f\"πŸ€– Provider: {PROVIDER.upper()}\")\n", + " print(f\"πŸ“¦ Default Model: {get_default_model()}\")\n", + " print(f\"πŸ”— Endpoint: http://localhost:7711\")\n", + " print(f\"\\nπŸ’‘ To switch providers, change PROVIDER to '{'claude' if PROVIDER.lower() == 'openai' else 'openai'}' and rerun this cell\")\n", + " print(\"=\"*70)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Option B: Direct OpenAI API\n", + "\n", + "**Setup:** Add your API key to `.env` file, then uncomment and run:\n", + "\n", + "> πŸ’‘ **Note:** This option requires a paid OpenAI API account. 
If you're using GitHub Copilot, stick with Option A above.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# # Option B: Direct OpenAI API setup\n", + "# import openai\n", + "# import os\n", + "# from dotenv import load_dotenv\n", + "\n", + "# load_dotenv()\n", + "\n", + "# client = openai.OpenAI(\n", + "# api_key=os.getenv(\"OPENAI_API_KEY\") # Set this in your .env file\n", + "# )\n", + "\n", + "# def get_chat_completion(messages, model=\"gpt-4o\", temperature=0.7):\n", + "# \"\"\"Get a chat completion from OpenAI.\"\"\"\n", + "# try:\n", + "# response = client.chat.completions.create(\n", + "# model=model,\n", + "# messages=messages,\n", + "# temperature=temperature\n", + "# )\n", + "# return response.choices[0].message.content\n", + "# except Exception as e:\n", + "# return f\"❌ Error: {e}\"\n", + "\n", + "# print(\"βœ… OpenAI API configured successfully!\")\n", + "# print(\"πŸ€– Using OpenAI's official API\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Option C: CircuIT APIs (Azure OpenAI)\n", + "\n", + "**Setup:** Configure environment variables (`CISCO_CLIENT_ID`, `CISCO_CLIENT_SECRET`, `CISCO_OPENAI_APP_KEY`) in `.env` file.\n", + "\n", + "Get values from: https://ai-chat.cisco.com/bridgeit-platform/api/home\n", + "\n", + "Then uncomment and run:\n", + "\n", + "> πŸ’‘ **Note:** This option is for Cisco employees with CircuIT API access.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# # Option C: CircuIT APIs (Azure OpenAI) setup\n", + "# import openai\n", + "# import traceback\n", + "# import requests\n", + "# import base64\n", + "# import os\n", + "# from dotenv import load_dotenv\n", + "# from openai import AzureOpenAI\n", + "\n", + "# # Load environment variables\n", + "# load_dotenv()\n", + "\n", + "# # Open AI version to use\n", + "# openai.api_type = \"azure\"\n", + "# openai.api_version = \"2024-12-01-preview\"\n", + "\n", + "# # Get API_KEY wrapped in token - using environment variables\n", + "# client_id = os.getenv(\"CISCO_CLIENT_ID\")\n", + "# client_secret = os.getenv(\"CISCO_CLIENT_SECRET\")\n", + "\n", + "# url = \"https://id.cisco.com/oauth2/default/v1/token\"\n", + "\n", + "# payload = \"grant_type=client_credentials\"\n", + "# value = base64.b64encode(f\"{client_id}:{client_secret}\".encode(\"utf-8\")).decode(\"utf-8\")\n", + "# headers = {\n", + "# \"Accept\": \"*/*\",\n", + "# \"Content-Type\": \"application/x-www-form-urlencoded\",\n", + "# \"Authorization\": f\"Basic {value}\",\n", + "# }\n", + "\n", + "# token_response = requests.request(\"POST\", url, headers=headers, data=payload)\n", + "# print(token_response.text)\n", + "# token_data = token_response.json()\n", + "\n", + "# client = AzureOpenAI(\n", + "# azure_endpoint=\"https://chat-ai.cisco.com\",\n", + "# api_key=token_data.get(\"access_token\"),\n", + "# api_version=\"2024-12-01-preview\",\n", + "# )\n", + "\n", + "# app_key = os.getenv(\"CISCO_OPENAI_APP_KEY\")\n", + "\n", + "# def get_chat_completion(messages, model=\"gpt-4o\", temperature=0.7):\n", + "# \"\"\"Get a chat completion from CircuIT APIs.\"\"\"\n", + "# try:\n", + "# response = client.chat.completions.create(\n", + "# model=model,\n", + "# messages=messages,\n", + "# temperature=temperature,\n", + "# user=f'{{\"appkey\": \"{app_key}\"}}',\n", + "# )\n", + "# return response.choices[0].message.content\n", + "# except Exception as e:\n", + "# return f\"❌ Error: 
{e}\"\n", + "\n", + "# print(\"βœ… CircuIT APIs configured successfully!\")\n", + "# print(\"πŸ€– Using Azure OpenAI via CircuIT\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Step 3: Test Connection\n", + "\n", + "Let's test that everything is working before we begin:\n", + "\n", + "
\n", + "πŸ’‘ Tip: If you see long AI responses and the output shows \"Output is truncated. View as a scrollable element\" - click that link to see the full response in a scrollable view!\n", + "
\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Quick setup verification\n", + "test_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a prompt engineering instructor. Respond with: 'Module 2 setup verified! Ready to learn core techniques.'\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"Test Module 2 setup\"\n", + " }\n", + "]\n", + "\n", + "response = get_chat_completion(test_messages)\n", + "print(\"πŸ§ͺ Setup Test:\")\n", + "print(response)\n", + "\n", + "if response and (\"verified\" in response.lower() or \"ready\" in response.lower()):\n", + " print(\"\\nπŸŽ‰ Perfect! Module 2 environment is ready!\")\n", + "else:\n", + " print(\"\\n⚠️ Setup test complete. Let's continue with the tutorial!\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## 🎯 Core Prompt Engineering Techniques\n", + "\n", + "### Introduction: The Art of Prompt Engineering\n", + "\n", + "#### πŸš€ Ready to Transform Your AI Interactions?\n", + "\n", + "You've successfully set up your environment and tested the connection. Now comes the exciting part - **learning the tactical secrets** that separate amateur prompt writers from AI power users.\n", + "\n", + "Think of what you've accomplished so far as **laying the foundation** of a house. Now we're about to build the **architectural masterpiece** that will revolutionize how you work with AI assistants.\n", + "\n", + "\n", + "#### πŸ‘¨β€πŸ« What You're About to Master\n", + "\n", + "In the next sections, you'll discover **eight core tactics** that professional developers use to get consistently excellent results from AI:\n", + "\n", + "
\n", + "\n", + "
\n", + "🎭 Role Prompting
\n", + "Transform AI into specialized experts\n", + "
\n", + "\n", + "
\n", + "πŸ“‹ Structured Inputs
\n", + "Organize complex scenarios with precision\n", + "
\n", + "\n", + "
\n", + "πŸ“š Few-Shot Examples
\n", + "Teach AI your preferred style\n", + "
\n", + "\n", + "
\n", + "⛓️‍πŸ’₯ Chain-of-Thought
\n", + "Guide AI through systematic reasoning\n", + "
\n", + "\n", + "
\n", + "πŸ“– Reference Citations
\n", + "Answer with citations from reference text\n", + "
\n", + "\n", + "
\n", + "πŸ”— Prompt Chaining
\n", + "Break complex tasks into sequential steps\n", + "
\n", + "\n", + "
\n", + "βš–οΈ LLM-as-Judge
\n", + "Use AI to evaluate and improve outputs\n", + "
\n", + "\n", + "
\n", + "🀫 Inner Monologue
\n", + "Hide reasoning, show only final results\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "πŸ’‘ Pro Tip:

\n", + "This module covers 8 powerful tactics over 90-120 minutes. Take short breaks between tactics to reflect on how you can apply each technique to your day-to-day work. Make notes as you progressβ€”jot down specific use cases from your projects where each tactic could be valuable. This active reflection will help you retain the techniques and integrate them into your workflow faster!\n", + "
\n", + "\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### πŸ“ How to Use Break Points\n", + "\n", + "
\n", + "πŸ’‘ Taking Breaks? We've Got You Covered!

\n", + "\n", + "This module is designed for 90-120 minutes of focused learning. To help you manage your time effectively, we've added **4 strategic break points** throughout:\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
Break PointLocationTime ElapsedBookmark Text
β˜• Break #1After Tactic 2~30 min\"Tactic 3: Few-Shot Examples\"
🍡 Break #2After Tactic 4~60 min\"Tactic 5: Reference Citations\"
πŸ§ƒ Break #3After Tactic 6~90 min\"Tactic 7: LLM-as-Judge\"
🎯 Break #4Before Practice~100 min\"Hands-On Practice - Activity 2.1\"
\n", + "\n", + "**How to Resume Your Session:**\n", + "1. Scroll down to find the colorful break point card you last saw\n", + "2. Look for the **\"πŸ“Œ BOOKMARK TO RESUME\"** section\n", + "3. Use `Ctrl+F` (or `Cmd+F` on Mac) to search for the bookmark text\n", + "4. You'll jump right to where you left off!\n", + "\n", + "**Pro Tip:** Each break point card shows:\n", + "- βœ… What you've completed\n", + "- ⏭️ What's coming next\n", + "- ⏱️ Estimated time for the next section\n", + "\n", + "Feel free to work at your own paceβ€”these are suggestions, not requirements! πŸš€\n", + "
\n", + "\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 🎬 Tactic 0: Write Clear Instructions\n", + "\n", + "**Foundation Principle** - Before diving into advanced tactics, master the art of clear, specific instructions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Core Principle:** When interacting with AI models, think of them as brilliant but very new employees who need explicit instructions. The more precisely you explain what you wantβ€”including context, specific requirements, and sequential stepsβ€”the better the AI's response will be.\n", + "\n", + "**The Golden Rule:** Show your prompt to a colleague with minimal context on the task. If they're confused, the AI will likely be too.\n", + "\n", + "**Software Engineering Application:** This tactic becomes crucial when asking for code refactoring, where you need to specify coding standards, performance requirements, and constraints to get production-ready results.\n", + "\n", + "*Reference: [Claude Documentation - Be Clear and Direct](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/be-clear-and-direct)*" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example: Vague vs. Specific Instructions\n", + "\n", + "**Why This Works:** Specific instructions eliminate ambiguity and guide the model toward your exact requirements.\n", + "\n", + "Let's compare a generic approach with a specific one:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Vague request - typical beginner mistake\n", + "messages = [\n", + " {\"role\": \"user\", \"content\": \"Help me choose a programming language for my project\"}\n", + "]\n", + "\n", + "response = get_chat_completion(messages)\n", + "\n", + "print(\"VAGUE REQUEST RESULT:\")\n", + "print(response)\n", + "print(\"\\n\" + \"=\"*50 + \"\\n\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Specific request - much better results\n", + "messages = [\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"I need to choose a programming language for building a real-time chat application that will handle 10,000 concurrent users, needs to integrate with a PostgreSQL database, and must be deployable on AWS. The team has 3 years of experience with web development. Provide the top 3 language recommendations with pros and cons for each.\",\n", + " }\n", + "]\n", + "\n", + "response = get_chat_completion(messages)\n", + "\n", + "print(\"SPECIFIC REQUEST RESULT:\")\n", + "print(response)\n", + "print(\"\\n\" + \"=\"*50 + \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Another way to achieve specificity using the `system prompt`. This is particularly useful when you want to keep the user request clean while providing detailed instructions about response format and constraints." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a senior technical architect. Provide concise, actionable recommendations in bullet format. Focus only on the most critical factors for the decision. 
No lengthy explanations.\",\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"Help me choose between microservices and monolithic architecture for a startup with 5 developers building a fintech application\",\n", + " },\n", + "]\n", + "\n", + "response = get_chat_completion(messages)\n", + "\n", + "print(\"SYSTEM PROMPT RESULT:\")\n", + "print(response)\n", + "print(\"\\n\" + \"=\"*50 + \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 🎭 Tactic 1: Role Prompting\n", + "\n", + "**Transform AI into specialized domain experts**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Why This Works:** Role prompting using the `system` parameter is the most powerful way to transform any LLM from a general assistant into your virtual domain expert. The right role enhances accuracy in complex scenarios, tailors the communication tone, and improves focus by keeping LLM within the bounds of your task's specific requirements.\n", + "\n", + "*Reference: [Claude Documentation - System Prompts](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/system-prompts)*\n", + "\n", + "**Generic Example:**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Instead of asking for a generic response, adopt a specific persona\n", + "messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a code reviewer. Analyze the provided code and give exactly 3 specific feedback points: 1 about code structure, 1 about naming conventions, and 1 about potential improvements. Format each point as a bullet with the category in brackets.\",\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"def calc(x, y): return x + y if x > 0 and y > 0 else 0\",\n", + " },\n", + "]\n", + "response = get_chat_completion(messages)\n", + "\n", + "print(\"CODE REVIEWER PERSONA RESULT:\")\n", + "print(response)\n", + "print(\"\\n\" + \"=\"*50 + \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example: Software Engineering Personas\n", + "\n", + "In coding scenarios, this tactic transforms into:\n", + "\n", + "- **Specific refactoring requirements** (e.g., \"Extract this into separate classes following SOLID principles\")\n", + "- **Detailed code review criteria** (e.g., \"Focus on security vulnerabilities and performance bottlenecks\")\n", + "- **Precise testing specifications** (e.g., \"Generate unit tests with 90% coverage including edge cases\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Below cells show how different engineering personas provide specialized expertise for code reviews." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Security Engineer Persona\n", + "security_messages = [\n", + " {\n", + " \"role\": \"system\", \n", + " \"content\": \"You are a security engineer. 
Review code for security vulnerabilities and provide specific recommendations.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"\"\"Review this login function:\n", + " \n", + "def login(username, password):\n", + " query = f\"SELECT * FROM users WHERE username = '{username}' AND password = '{password}'\"\n", + " result = database.execute(query)\n", + " return result\"\"\"\n", + " }\n", + "]\n", + "\n", + "security_response = get_chat_completion(security_messages)\n", + "print(\"πŸ”’ SECURITY ENGINEER ANALYSIS:\")\n", + "print(security_response)\n", + "print(\"\\n\" + \"=\"*50 + \"\\n\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Performance Engineer Persona\n", + "performance_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a performance engineer. Analyze code for efficiency issues and optimization opportunities.\"\n", + " },\n", + " {\n", + " \"role\": \"user\", \n", + " \"content\": \"\"\"Analyze this data processing function:\n", + "\n", + "def process_data(items):\n", + " result = []\n", + " for item in items:\n", + " if len(item) > 3:\n", + " result.append(item.upper())\n", + " return result\"\"\"\n", + " }\n", + "]\n", + "\n", + "performance_response = get_chat_completion(performance_messages)\n", + "print(\"⚑ PERFORMANCE ENGINEER ANALYSIS:\")\n", + "print(performance_response)\n", + "print(\"\\n\" + \"=\"*50 + \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Checkpoint: Compare the Responses\n", + "\n", + "Notice how each engineering persona focused on their area of expertise:\n", + "\n", + "- **Security Engineer**: Identified SQL injection vulnerabilities and authentication issues\n", + "- **Performance Engineer**: Suggested list comprehensions and optimization techniques\n", + "\n", + "βœ… **Success!** You've seen how role prompting provides specialized, expert-level analysis.\n", + "\n", + "#### Practice - Create Your Own Persona\n", + "\n", + "Now it's your turn! Create a \"QA Engineer\" persona to analyze test coverage edit the `system prompt`:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# TODO: Fill in the system message to create a QA Engineer role\n", + "# Hint: Focus on test cases, edge cases, and error scenarios\n", + "qa_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"\"\"Analyze test coverage needed for this function:\n", + "\n", + "def calculate_discount(price, discount_percent):\n", + " if discount_percent > 100:\n", + " raise ValueError(\"Discount cannot exceed 100%\")\n", + " if price < 0:\n", + " raise ValueError(\"Price cannot be negative\")\n", + " return price * (1 - discount_percent / 100)\"\"\"\n", + " }\n", + "]\n", + "\n", + "qa_response = get_chat_completion(qa_messages)\n", + "print(\"πŸ§ͺ QA ENGINEER ANALYSIS:\")\n", + "print(qa_response)\n", + "print(\"\\n\" + \"=\"*50 + \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### πŸ“‹ Tactic 2: Structured Inputs\n", + "\n", + "**Organize complex scenarios with XML delimiters**\n", + "\n", + "**Core Principle:** When your prompts involve multiple components like context, instructions, and examples, delimiters (especially XML tags) can be a game-changer. 
They help AI models parse your prompts more accurately, leading to higher-quality outputs.\n", + "\n", + "**Why This Works:**\n", + "- **Clarity:** Clearly separate different parts of your prompt and ensure your prompt is well structured\n", + "- **Accuracy:** Reduce errors caused by AI models misinterpreting parts of your prompt \n", + "- **Flexibility:** Easily find, add, remove, or modify parts of your prompt without rewriting everything\n", + "- **Parseability:** Having the AI use delimiters in its output makes it easier to extract specific parts of its response\n", + "\n", + "**Software Engineering Application Preview:** Essential for multi-file refactoring, separating code from requirements, and organizing complex code review scenarios." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's start with a simple example showing how delimiters clarify different sections of your prompt by using `###` as delimiters:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Using delimiters to refactor code\n", + "function_code = \"def process_data(items): return [x.upper() for x in items if len(x) > 3]\"\n", + "requirements = \"Follow PEP 8 style guide, add type hints, improve readability\"\n", + "\n", + "delimiter_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a Python code reviewer. Provide only the refactored code without explanations.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Refactor this function based on the requirements:\n", + "\n", + "### CODE ###\n", + "{function_code}\n", + "###\n", + "\n", + "### REQUIREMENTS ###\n", + "{requirements}\n", + "###\n", + "\n", + "Return only the improved function code.\"\"\"\n", + " }\n", + "]\n", + "\n", + "delimiter_response = get_chat_completion(delimiter_messages)\n", + "print(\"πŸ”§ REFACTORED CODE:\")\n", + "print(delimiter_response)\n", + "print(\"\\n\" + \"=\"*70 + \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Multi-File Scenarios with XML Delimiters\n", + "\n", + "One of the most powerful techniques for complex software development tasks is using XML tags and delimiters to structure your prompts. This approach dramatically improves AI accuracy and reduces misinterpretation.\n", + "\n", + "**Key Benefits:**\n", + "- **Clarity**: Clearly separate different parts of your prompt (instructions, context, examples)\n", + "- **Accuracy**: Reduce errors caused by AI misinterpreting parts of your prompt\n", + "- **Flexibility**: Easily modify specific sections without rewriting everything\n", + "- **Parseability**: Structure AI outputs for easier post-processing\n", + "\n", + "**Best Practices:**\n", + "- Use tags like ``, ``, and `` to clearly separate different parts\n", + "- Be consistent with tag names throughout your prompts\n", + "- Nest tags hierarchically: `` for structured content\n", + "- Choose meaningful tag names that describe their content\n", + "\n", + "**Reference**: Learn more about XML tagging best practices in the [Claude Documentation on XML Tags](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/use-xml-tags)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In coding scenarios, delimiters become essential for:\n", + "\n", + "- **Multi-file refactoring** - Separate different files being modified: ``, ``\n", + "- **Code vs. 
requirements** - Distinguish between `` and ``\n", + "- **Test scenarios** - Organize ``, ``, ``\n", + "- **Pull request reviews** - Structure ``, ``, ``\n", + "\n", + "The below cell demonstrates multi-file refactoring using XML delimiters to organize complex codebases." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Multi-file analysis with XML delimiters\n", + "multifile_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a software architect. Analyze the provided files and identify architectural concerns.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"\"\"\n", + "\n", + "class User:\n", + " def __init__(self, email, password):\n", + " self.email = email\n", + " self.password = password\n", + " \n", + " def save(self):\n", + " # Save to database\n", + " pass\n", + "\n", + "\n", + "\n", + "from flask import Flask, request\n", + "app = Flask(__name__)\n", + "\n", + "@app.route('/register', methods=['POST'])\n", + "def register():\n", + " email = request.form['email']\n", + " password = request.form['password']\n", + " user = User(email, password)\n", + " user.save()\n", + " return \"User registered\"\n", + "\n", + "\n", + "\n", + "- Follow separation of concerns\n", + "- Add input validation\n", + "- Implement proper error handling\n", + "- Use dependency injection\n", + "\n", + "\n", + "Provide architectural recommendations for improving this code structure.\n", + "\"\"\"\n", + " }\n", + "]\n", + "\n", + "multifile_response = get_chat_completion(multifile_messages)\n", + "print(\"πŸ—οΈ ARCHITECTURAL ANALYSIS:\")\n", + "print(multifile_response)\n", + "print(\"\\n\" + \"=\"*70 + \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "
\n", + "
\n", + "

β˜• Suggested Break Point #1

\n", + "

~30 minutes elapsed

\n", + "
\n", + " \n", + "
\n", + "

βœ… Completed:

\n", + "
    \n", + "
  • Tactic 0: Write Clear Instructions
  • \n", + "
  • Tactic 1: Role Prompting (Transform AI into specialized experts)
  • \n", + "
  • Tactic 2: Structured Inputs (Organize with XML delimiters)
  • \n", + "
\n", + "
\n", + " \n", + "
\n", + "

⏭️ Coming Next:

\n", + "
    \n", + "
  • Tactic 3: Few-Shot Examples (Teach AI your style)
  • \n", + "
  • Tactic 4: Chain-of-Thought Reasoning (Step-by-step analysis)
  • \n", + "
\n", + "

⏱️ Next section: ~25-30 minutes

\n", + "
\n", + " \n", + "
\n", + "

πŸ“Œ BOOKMARK TO RESUME:

\n", + "

\"Tactic 3: Few-Shot Examples\"

\n", + "
\n", + " \n", + "

\n", + " πŸ’‘ This is a natural stopping point. Feel free to take a break and return later!\n", + "

\n", + "
\n", + "\n", + "---\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### πŸ“š Tactic 3: Few-Shot Examples\n", + "\n", + "**Teach AI your preferred styles and standards**\n", + "\n", + "**Core Principle:** Examples are your secret weapon for getting AI models to generate exactly what you need. By providing a few well-crafted examples in your prompt, you can dramatically improve the accuracy, consistency, and quality of outputs. This technique, known as few-shot or multishot prompting, is particularly effective for tasks that require structured outputs or adherence to specific formats.\n", + "\n", + "**Why This Works:**\n", + "- **Accuracy:** Examples reduce misinterpretation of instructions\n", + "- **Consistency:** Examples enforce uniform structure and style across outputs\n", + "- **Performance:** Well-chosen examples boost AI's ability to handle complex tasks\n", + "\n", + "**Crafting Effective Examples:**\n", + "- **Relevant:** Your examples should mirror your actual use case\n", + "- **Diverse:** Cover edge cases and vary enough to avoid unintended patterns\n", + "- **Clear:** Wrap examples in `` tags (if multiple, nest within `` tags)\n", + "- **Quantity:** Include 3-5 diverse examples for best results (more examples = better performance)\n", + "\n", + "**Software Engineering Application Preview:** Essential for establishing coding styles, documentation formats, test case patterns, and consistent API response structures across your development workflow.\n", + "\n", + "*Reference: [Claude Documentation - Multishot Prompting](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/multishot-prompting)*" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's teach the AI to explain technical concepts in a specific, consistent style:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Few-shot examples for consistent explanations\n", + "few_shot_messages = [\n", + " {\"role\": \"system\", \"content\": \"Answer in a consistent style using the examples provided.\"},\n", + " \n", + " # Example 1\n", + " {\"role\": \"user\", \"content\": \"Explain Big O notation for O(1).\"},\n", + " {\"role\": \"assistant\", \"content\": \"O(1) means constant time - the algorithm takes the same amount of time regardless of input size.\"},\n", + " \n", + " # Example 2 \n", + " {\"role\": \"user\", \"content\": \"Explain Big O notation for O(n).\"},\n", + " {\"role\": \"assistant\", \"content\": \"O(n) means linear time - the algorithm's runtime grows proportionally with the input size.\"},\n", + " \n", + " # New question following the established pattern\n", + " {\"role\": \"user\", \"content\": \"Explain Big O notation for O(log n).\"}\n", + "]\n", + "\n", + "few_shot_response = get_chat_completion(few_shot_messages)\n", + "print(\"πŸ“š CONSISTENT STYLE RESPONSE:\")\n", + "print(few_shot_response)\n", + "print(\"\\n\" + \"=\"*70 + \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "🎯 **Perfect!** Notice how the AI learned the exact format and style from the examples and applied it consistently.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ⛓️‍πŸ’₯ Tactic 4: Chain-of-Thought Reasoning\n", + "\n", + "**Guide systematic step-by-step reasoning**\n", + "\n", + "**Core Principle:** When faced with complex tasks like research, analysis, or problem-solving, giving AI models space to think can dramatically improve 
performance. This technique, known as chain of thought (CoT) prompting, encourages the AI to break down problems step-by-step, leading to more accurate and nuanced outputs.\n", + "\n", + "**Why This Works:**\n", + "- **Accuracy:** Stepping through problems reduces errors, especially in math, logic, analysis, or generally complex tasks\n", + "- **Coherence:** Structured thinking leads to more cohesive, well-organized responses\n", + "- **Debugging:** Seeing the AI's thought process helps you pinpoint where prompts may be unclear\n", + "\n", + "**When to Use CoT:**\n", + "- Use for tasks that a human would need to think through\n", + "- Examples: complex math, multi-step analysis, writing complex documents, decisions with many factors\n", + "- **Note:** Increased output length may impact latency, so use judiciously\n", + "\n", + "**How to Implement CoT (from least to most complex):**\n", + "\n", + "1. **Basic prompt:** Include \"Think step-by-step\" in your prompt\n", + "2. **Guided prompt:** Outline specific steps for the AI to follow in its thinking process\n", + "3. **Structured prompt:** Use XML tags like `` and `` to separate reasoning from the final answer\n", + "\n", + "**Important:** Always have the AI output its thinking. Without outputting its thought process, no thinking occurs!\n", + "\n", + "**Software Engineering Application Preview:** Critical for test generation, code reviews, debugging workflows, architecture decisions, and security analysis where methodical analysis prevents missed issues.\n", + "\n", + "*Reference: [Claude Documentation - Chain of Thought](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/chain-of-thought)*\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Tactic: Give Models Time to Work Before Judging\n", + "\n", + "**Critical Tactic:** When asking AI to evaluate solutions, code, or designs, instruct it to solve the problem independently *before* judging the provided solution. This prevents premature agreement and ensures thorough analysis.\n", + "\n", + "**Why This Matters:** AI models can sometimes be too agreeable or overlook subtle issues when they jump straight to evaluation. 
By forcing them to work through the problem first, they develop genuine understanding and can provide more accurate assessments.\n", + "\n", + "**The Principle:** *\"Don't decide if the solution is correct until you have worked through the problem yourself.\"*\n", + "\n", + "Let's see this with a code review scenario:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Example: Forcing AI to think before judging\n", + "problem = \"\"\"\n", + "Write a function that checks if a string is a palindrome.\n", + "The function should ignore spaces, punctuation, and case.\n", + "\"\"\"\n", + "\n", + "student_solution = \"\"\"\n", + "def is_palindrome(s):\n", + " cleaned = ''.join(c.lower() for c in s if c.isalnum())\n", + " return cleaned == cleaned[::-1]\n", + "\"\"\"\n", + "\n", + "# BAD: Asking AI to judge immediately (may agree too quickly)\n", + "print(\"=\" * 70)\n", + "print(\"BAD APPROACH: Immediate Judgment\")\n", + "print(\"=\" * 70)\n", + "\n", + "bad_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a code reviewer.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Problem: {problem}\n", + "\n", + "Student's solution:\n", + "{student_solution}\n", + "\n", + "Is this solution correct?\"\"\"\n", + " }\n", + "]\n", + "\n", + "bad_response = get_chat_completion(bad_messages)\n", + "print(bad_response)\n", + "\n", + "# GOOD: Force AI to solve it first, then compare\n", + "print(\"=\" * 70)\n", + "print(\"GOOD APPROACH: Work Through It First\")\n", + "print(\"=\" * 70)\n", + "\n", + "good_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a code reviewer with a methodical approach.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Problem: {problem}\n", + "\n", + "Student's solution:\n", + "{student_solution}\n", + "\n", + "Before evaluating the student's solution, follow these steps:\n", + "1. In tags, write your own implementation of the palindrome checker\n", + "2. In tags, create comprehensive test cases including edge cases\n", + "3. In tags, compare the student's solution to yours and test both\n", + "4. 
In tags, provide your final judgment with specific reasoning\n", + "\n", + "Important: Don't judge the student's solution until you've solved the problem yourself.\"\"\"\n", + " }\n", + "]\n", + "\n", + "good_response = get_chat_completion(good_messages)\n", + "print(good_response)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**πŸ“Œ Key Takeaway: Give Models Time to Think**\n", + "\n", + "Notice the difference:\n", + "- **Bad approach:** The AI might agree with the student too quickly without thorough analysis\n", + "- **Good approach:** By forcing the AI to solve the problem first, it:\n", + " - Develops its own understanding of the requirements\n", + " - Creates comprehensive test cases independently\n", + " - Can objectively compare two solutions\n", + " - Catches subtle bugs or edge cases it might have missed\n", + "\n", + "**Real-World Applications:**\n", + "- **Code Review:** Make AI implement a solution before reviewing pull requests\n", + "- **Bug Analysis:** Have AI reproduce the bug before suggesting fixes\n", + "- **Architecture Review:** Force AI to design its own solution before critiquing proposals\n", + "- **Test Review:** Make AI write tests before evaluating test coverage\n", + "\n", + "**The Golden Rule:** *\"Don't let the AI judge until it has worked through the problem itself.\"*\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Systematic Code Analysis using Chain of Thoughts\n", + "\n", + "Now let's implement step-by-step reasoning for complex code analysis tasks:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Chain-of-thought for systematic code analysis\n", + "system_message = \"\"\"Use the following step-by-step instructions to analyze code:\n", + "\n", + "Step 1 - Count the number of functions in the code snippet with a prefix that says 'Function Count: '\n", + "Step 2 - List each function name with its line number with a prefix that says 'Function List: '\n", + "Step 3 - Identify any functions that are longer than 10 lines with a prefix that says 'Long Functions: '\n", + "Step 4 - Provide an overall assessment with a prefix that says 'Assessment: '\"\"\"\n", + "\n", + "user_message = \"\"\"\n", + "def calculate_tax(income, deductions):\n", + " taxable_income = income - deductions\n", + " if taxable_income <= 0:\n", + " return 0\n", + " elif taxable_income <= 50000:\n", + " return taxable_income * 0.1\n", + " else:\n", + " return 50000 * 0.1 + (taxable_income - 50000) * 0.2\n", + "\n", + "def format_currency(amount):\n", + " return f\"${amount:,.2f}\"\n", + "\n", + "def generate_report(name, income, deductions):\n", + " tax = calculate_tax(income, deductions)\n", + " net_income = income - tax\n", + " \n", + " print(f\"Tax Report for {name}\")\n", + " print(f\"Gross Income: {format_currency(income)}\")\n", + " print(f\"Deductions: {format_currency(deductions)}\")\n", + " print(f\"Tax Owed: {format_currency(tax)}\")\n", + " print(f\"Net Income: {format_currency(net_income)}\")\n", + "\"\"\"\n", + "\n", + "chain_messages = [\n", + " {\"role\": \"system\", \"content\": system_message},\n", + " {\"role\": \"user\", \"content\": user_message}\n", + "]\n", + "\n", + "chain_response = get_chat_completion(chain_messages)\n", + "print(\"πŸ”— CHAIN-OF-THOUGHT ANALYSIS:\")\n", + "print(chain_response)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "πŸš€ **Excellent!** The AI followed each step methodically, 
providing structured, comprehensive analysis.\n", + "\n", + "#### Practice Exercise: Combine All Techniques\n", + "\n", + "Now let's put everything together in a real-world scenario that combines role prompting, delimiters, and chain-of-thought:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Comprehensive example combining all techniques\n", + "comprehensive_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"\"\"You are a senior software engineer conducting a comprehensive code review.\n", + "\n", + "Follow this systematic process:\n", + "Step 1 - Security Analysis: Identify potential security vulnerabilities\n", + "Step 2 - Performance Review: Analyze efficiency and optimization opportunities \n", + "Step 3 - Code Quality: Evaluate readability, maintainability, and best practices\n", + "Step 4 - Recommendations: Provide specific, prioritized improvement suggestions\n", + "\n", + "Format each step clearly with the step name as a header.\"\"\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"\"\"\n", + "\n", + "from flask import Flask, request, jsonify\n", + "import sqlite3\n", + "\n", + "app = Flask(__name__)\n", + "\n", + "@app.route('/user/')\n", + "def get_user(user_id):\n", + " conn = sqlite3.connect('users.db')\n", + " cursor = conn.cursor()\n", + " cursor.execute(f\"SELECT * FROM users WHERE id = {user_id}\")\n", + " user = cursor.fetchone()\n", + " conn.close()\n", + " \n", + " if user:\n", + " return jsonify({\n", + " \"id\": user[0],\n", + " \"name\": user[1], \n", + " \"email\": user[2]\n", + " })\n", + " else:\n", + " return jsonify({\"error\": \"User not found\"}), 404\n", + "\n", + "\n", + "\n", + "This is a user lookup endpoint for a web application that serves user profiles.\n", + "The application handles 1000+ requests per minute during peak hours.\n", + "\n", + "\n", + "Perform a comprehensive code review following the systematic process.\n", + "\"\"\"\n", + " }\n", + "]\n", + "\n", + "comprehensive_response = get_chat_completion(comprehensive_messages)\n", + "print(\"πŸ” COMPREHENSIVE CODE REVIEW:\")\n", + "print(comprehensive_response)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "
\n", + "
\n", + "

🍡 Suggested Break Point #2

\n", + "

~60 minutes elapsed β€’ Halfway through!

\n", + "
\n", + " \n", + "
\n", + "

βœ… Completed (Tactics 0-4):

\n", + "
    \n", + "
  • Clear Instructions & Role Prompting
  • \n", + "
  • Structured Inputs with XML tags
  • \n", + "
  • Few-Shot Examples for consistent styles
  • \n", + "
  • Chain-of-Thought for systematic reasoning
  • \n", + "
\n", + "

🎯 You've mastered 5 out of 8 tactics!

\n", + "
\n", + " \n", + "
\n", + "

⏭️ Coming Next:

\n", + "
    \n", + "
  • Tactic 5: Reference Citations (Ground responses in docs)
  • \n", + "
  • Tactic 6: Prompt Chaining (Break complex tasks into steps)
  • \n", + "
\n", + "

⏱️ Next section: ~30 minutes

\n", + "
\n", + " \n", + "
\n", + "

πŸ“Œ BOOKMARK TO RESUME:

\n", + "

\"Tactic 5: Reference Citations\"

\n", + "
\n", + " \n", + "

\n", + " πŸ’‘ Great progress! Consider taking a break before continuing with the final tactics.\n", + "

\n", + "
\n", + "\n", + "---\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### πŸ“– Tactic 5: Reference Citations\n", + "\n", + "**Ground responses in actual documentation to reduce hallucinations**\n", + "\n", + "**Core Principle:** When working with long documents or multiple reference materials, asking AI models to quote relevant parts of the documents first before carrying out tasks helps them cut through the \"noise\" and focus on pertinent information. This technique is especially powerful when working with extended context windows.\n", + "\n", + "**Why This Works:**\n", + "- The AI identifies and focuses on relevant information before generating responses\n", + "- Citations make outputs verifiable and trustworthy\n", + "- Reduces hallucination by grounding responses in actual source material\n", + "- Makes it easy to trace conclusions back to specific code or documentation sections\n", + "\n", + "**Best Practices for Long Context:**\n", + "- **Put longform data at the top:** Place long documents (~20K+ tokens) near the top of your prompt, above queries and instructions (can improve response quality by up to 30%)\n", + "- **Structure with XML tags:** Use ``, ``, and `` tags to organize multiple documents\n", + "- **Request quotes first:** Ask the AI to extract relevant quotes in `` tags before generating the final response\n", + "\n", + "**Software Engineering Application Preview:** Critical for code review with large codebases, documentation generation from source files, security audit reports, and analyzing API documentation.\n", + "\n", + "*Reference: [Claude Documentation - Long Context Tips](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/long-context-tips)*\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example 1: Code Review with Multiple Files\n", + "\n", + "Let's demonstrate how to structure multiple code files and ask the AI to extract relevant quotes before providing analysis:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Example: Multi-file code review with quote extraction\n", + "auth_service = \"\"\"\n", + "class AuthService:\n", + " def __init__(self, db_connection):\n", + " self.db = db_connection\n", + " \n", + " def authenticate_user(self, username, password):\n", + " # TODO: Add password hashing\n", + " query = f\"SELECT * FROM users WHERE username='{username}' AND password='{password}'\"\n", + " result = self.db.execute(query)\n", + " return result.fetchone() is not None\n", + " \n", + " def create_session(self, user_id):\n", + " session_id = str(uuid.uuid4())\n", + " # Session expires in 24 hours\n", + " expiry = datetime.now() + timedelta(hours=24)\n", + " self.db.execute(f\"INSERT INTO sessions VALUES ('{session_id}', {user_id}, '{expiry}')\")\n", + " return session_id\n", + "\"\"\"\n", + "\n", + "user_controller = \"\"\"\n", + "from flask import Flask, request, jsonify\n", + "from auth_service import AuthService\n", + "\n", + "app = Flask(__name__)\n", + "auth = AuthService(db_connection)\n", + "\n", + "@app.route('/login', methods=['POST'])\n", + "def login():\n", + " username = request.json.get('username')\n", + " password = request.json.get('password')\n", + " \n", + " if auth.authenticate_user(username, password):\n", + " user_id = get_user_id(username)\n", + " session_id = auth.create_session(user_id)\n", + " return jsonify({'session_id': session_id, 'status': 'success'})\n", + " else:\n", + " return 
jsonify({'status': 'failed'}), 401\n", + "\"\"\"\n", + "\n", + "# Structure the prompt with documents at the top, query at the bottom\n", + "messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a senior security engineer reviewing code for vulnerabilities.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"\n", + "\n", + "auth_service.py\n", + "\n", + "{auth_service}\n", + "\n", + "\n", + "\n", + "\n", + "user_controller.py\n", + "\n", + "{user_controller}\n", + "\n", + "\n", + "\n", + "\n", + "Review the authentication code above for security vulnerabilities. \n", + "\n", + "First, extract relevant code quotes that demonstrate security issues and place them in tags with the source file indicated.\n", + "\n", + "Then, provide your security analysis in tags, explaining each vulnerability and its severity.\n", + "\n", + "Finally, provide specific remediation recommendations in tags.\"\"\"\n", + " }\n", + "]\n", + "\n", + "response = get_chat_completion(messages)\n", + "print(\"πŸ”’ SECURITY REVIEW WITH CITATIONS:\")\n", + "print(response)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example 2: API Documentation Analysis\n", + "\n", + "Now let's analyze API documentation to extract specific information with citations:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Example: Analyzing API documentation with quote grounding\n", + "api_docs = \"\"\"\n", + "# Payment API Documentation\n", + "\n", + "## Authentication\n", + "All API requests require an API key passed in the `X-API-Key` header.\n", + "Rate limit: 1000 requests per hour per API key.\n", + "\n", + "## Create Payment\n", + "POST /api/v2/payments\n", + "\n", + "Creates a new payment transaction.\n", + "\n", + "**Request Body:**\n", + "- amount (required, decimal): Payment amount in USD\n", + "- currency (optional, string): Currency code, defaults to \"USD\"\n", + "- customer_id (required, string): Customer identifier\n", + "- payment_method (required, string): One of: \"card\", \"bank\", \"wallet\"\n", + "- metadata (optional, object): Additional key-value pairs\n", + "\n", + "**Rate Limit:** 100 requests per minute\n", + "\n", + "**Response:**\n", + "{\n", + " \"payment_id\": \"pay_abc123\",\n", + " \"status\": \"pending\",\n", + " \"amount\": 99.99,\n", + " \"created_at\": \"2024-01-15T10:30:00Z\"\n", + "}\n", + "\n", + "## Retrieve Payment\n", + "GET /api/v2/payments/{payment_id}\n", + "\n", + "Retrieves details of a specific payment.\n", + "\n", + "**Security Note:** Only returns payments belonging to the authenticated API key's account.\n", + "\n", + "**Response Codes:**\n", + "- 200: Success\n", + "- 404: Payment not found\n", + "- 401: Invalid API key\n", + "\"\"\"\n", + "\n", + "integration_question = \"\"\"\n", + "I need to integrate payment processing into my e-commerce checkout flow.\n", + "The checkout needs to:\n", + "1. Create a payment when user clicks \"Pay Now\"\n", + "2. Handle USD and EUR currencies\n", + "3. Store order metadata with the payment\n", + "4. 
Check payment status after creation\n", + "\n", + "What do I need to know from the API documentation?\n", + "\"\"\"\n", + "\n", + "messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a technical integration specialist helping developers implement APIs.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"\n", + "\n", + "payment_api_docs.md\n", + "\n", + "{api_docs}\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "{integration_question}\n", + "\n", + "\n", + "First, find and quote the relevant sections from the API documentation that address the integration requirements. Place these quotes in tags with the section name indicated.\n", + "\n", + "Then, provide a step-by-step integration guide in tags that references the quoted documentation.\"\"\"\n", + " }\n", + "]\n", + "\n", + "response = get_chat_completion(messages)\n", + "print(\"πŸ“š API INTEGRATION GUIDE WITH CITATIONS:\")\n", + "print(response)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Key Takeaways: Reference Citations\n", + "\n", + "**Best Practices Demonstrated:**\n", + "1. **Document Structure:** Used `` and `` tags with `` and `` metadata\n", + "2. **Documents First:** Placed all reference materials at the top of the prompt, before the query\n", + "3. **Quote Extraction:** Asked AI to extract relevant quotes first, then perform analysis\n", + "4. **Structured Output:** Used XML tags like ``, ``, and `` to organize responses\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### πŸ”— Tactic 6: Prompt Chaining\n", + "\n", + "**Break complex tasks into sequential workflows**\n", + "\n", + "**Core Principle:** When working with complex tasks, AI models can sometimes drop the ball if you try to handle everything in a single prompt. Prompt chaining breaks down complex tasks into smaller, manageable subtasks, where each subtask gets the AI's full attention.\n", + "\n", + "**Why Chain Prompts:**\n", + "- **Accuracy:** Each subtask gets full attention, reducing errors\n", + "- **Clarity:** Simpler subtasks mean clearer instructions and outputs\n", + "- **Traceability:** Easily pinpoint and fix issues in your prompt chain\n", + "- **Focus:** Each link in the chain gets the AI's complete concentration\n", + "\n", + "**When to Chain Prompts:**\n", + "Use prompt chaining for multi-step tasks like:\n", + "- Research synthesis and document analysis\n", + "- Iterative content creation\n", + "- Multiple transformations or citations\n", + "- Code generation β†’ Review β†’ Refactoring workflows\n", + "\n", + "**How to Chain Prompts:**\n", + "1. **Identify subtasks:** Break your task into distinct, sequential steps\n", + "2. **Structure with XML:** Use XML tags to pass outputs between prompts\n", + "3. **Single-task goal:** Each subtask should have one clear objective\n", + "4. 
**Iterate:** Refine subtasks based on performance\n", + "\n", + "**Common Software Development Workflows:**\n", + "- **Code Review Pipeline:** Extract code β†’ Analyze issues β†’ Propose fixes β†’ Generate tests\n", + "- **Documentation Generation:** Analyze code β†’ Extract docstrings β†’ Format β†’ Review\n", + "- **Refactoring Workflow:** Identify patterns β†’ Suggest improvements β†’ Generate refactored code β†’ Validate\n", + "- **Testing Pipeline:** Analyze function β†’ Generate test cases β†’ Create assertions β†’ Review coverage\n", + "- **Debugging Chain:** Reproduce issue β†’ Analyze root cause β†’ Suggest fixes β†’ Verify solution\n", + "\n", + "**Debugging Tip:** If the AI misses a step or performs poorly, isolate that step in its own prompt. This lets you fine-tune problematic steps without redoing the entire task.\n", + "\n", + "**Software Engineering Application Preview:** Essential for complex code reviews, multi-stage refactoring, comprehensive test generation, and architectural analysis where breaking down the task ensures nothing is missed.\n", + "\n", + "*Reference: [Claude Documentation - Chain Complex Prompts](https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/chain-prompts)*\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example 1: Code Review with Prompt Chaining\n", + "\n", + "Let's demonstrate a 3-step prompt chain for comprehensive code review:\n", + "1. **Step 1:** Analyze code for issues\n", + "2. **Step 2:** Review the analysis for completeness\n", + "3. **Step 3:** Generate final recommendations with fixes\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Prompt Chain Example: Code Review Pipeline\n", + "code_to_review = \"\"\"\n", + "def process_user_data(user_input):\n", + " # Process user registration data\n", + " data = eval(user_input) # Parse input\n", + " \n", + " username = data['username']\n", + " email = data['email']\n", + " password = data['password']\n", + " \n", + " # Save to database\n", + " query = f\"INSERT INTO users (username, email, password) VALUES ('{username}', '{email}', '{password}')\"\n", + " db.execute(query)\n", + " \n", + " # Send welcome email\n", + " send_email(email, f\"Welcome {username}!\")\n", + " \n", + " return {\"status\": \"success\", \"user\": username}\n", + "\"\"\"\n", + "\n", + "# STEP 1: Analyze code for issues\n", + "print(\"=\" * 60)\n", + "print(\"STEP 1: Initial Code Analysis\")\n", + "print(\"=\" * 60)\n", + "\n", + "step1_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a senior code reviewer specializing in security and best practices.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Analyze this Python function for issues:\n", + "\n", + "\n", + "{code_to_review}\n", + "\n", + "\n", + "Identify all security vulnerabilities, code quality issues, and potential bugs.\n", + "Provide your analysis in tags with specific line references.\"\"\"\n", + " }\n", + "]\n", + "\n", + "analysis = get_chat_completion(step1_messages)\n", + "print(analysis)\n", + "print(\"\\n\")\n", + "\n", + "# STEP 2: Review the analysis for completeness\n", + "print(\"=\" * 60)\n", + "print(\"STEP 2: Review Analysis for Completeness\")\n", + "print(\"=\" * 60)\n", + "\n", + "step2_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a principal engineer reviewing a code analysis. 
Check for completeness and accuracy.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Here is a code analysis from a code reviewer:\n", + "\n", + "\n", + "{code_to_review}\n", + "\n", + "\n", + "\n", + "{analysis}\n", + "\n", + "\n", + "Review this analysis and:\n", + "1. Verify all issues are correctly identified\n", + "2. Check if any critical issues were missed\n", + "3. Rate the severity of each issue (Critical/High/Medium/Low)\n", + "\n", + "Provide feedback in tags.\"\"\"\n", + " }\n", + "]\n", + "\n", + "review = get_chat_completion(step2_messages)\n", + "print(review)\n", + "print(\"\\n\")\n", + "\n", + "# STEP 3: Generate final recommendations with code fixes\n", + "print(\"=\" * 60)\n", + "print(\"STEP 3: Final Recommendations and Code Fixes\")\n", + "print(\"=\" * 60)\n", + "\n", + "step3_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a senior developer providing actionable solutions.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Based on the code analysis and review, provide final recommendations:\n", + "\n", + "\n", + "{code_to_review}\n", + "\n", + "\n", + "\n", + "{analysis}\n", + "\n", + "\n", + "\n", + "{review}\n", + "\n", + "\n", + "Provide:\n", + "1. A prioritized list of fixes in tags\n", + "2. The complete refactored code in tags\n", + "3. Brief explanation of key changes in tags\"\"\"\n", + " }\n", + "]\n", + "\n", + "final_recommendations = get_chat_completion(step3_messages)\n", + "print(final_recommendations)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example 2: Test Generation with Prompt Chaining\n", + "\n", + "Now let's create a chain for comprehensive test generation:\n", + "1. **Step 1:** Analyze function to identify test scenarios\n", + "2. **Step 2:** Generate test cases based on scenarios \n", + "3. 
**Step 3:** Review and enhance test coverage\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Prompt Chain Example: Test Generation Pipeline\n", + "function_to_test = \"\"\"\n", + "def calculate_discount(price, discount_percent, customer_tier='standard'):\n", + " \\\"\\\"\\\"\n", + " Calculate final price after applying discount.\n", + " \n", + " Args:\n", + " price: Original price (must be positive)\n", + " discount_percent: Discount percentage (0-100)\n", + " customer_tier: Customer tier ('standard', 'premium', 'vip')\n", + " \n", + " Returns:\n", + " Final price after discount and tier bonus\n", + " \\\"\\\"\\\"\n", + " if price < 0:\n", + " raise ValueError(\"Price cannot be negative\")\n", + " \n", + " if discount_percent < 0 or discount_percent > 100:\n", + " raise ValueError(\"Discount must be between 0 and 100\")\n", + " \n", + " # Apply base discount\n", + " discounted_price = price * (1 - discount_percent / 100)\n", + " \n", + " # Apply tier bonus\n", + " tier_bonuses = {'standard': 0, 'premium': 5, 'vip': 10}\n", + " if customer_tier not in tier_bonuses:\n", + " raise ValueError(f\"Invalid tier: {customer_tier}\")\n", + " \n", + " tier_bonus = tier_bonuses[customer_tier]\n", + " final_price = discounted_price * (1 - tier_bonus / 100)\n", + " \n", + " return round(final_price, 2)\n", + "\"\"\"\n", + "\n", + "# STEP 1: Analyze function and identify test scenarios\n", + "print(\"=\" * 60)\n", + "print(\"STEP 1: Analyze Function and Identify Test Scenarios\")\n", + "print(\"=\" * 60)\n", + "\n", + "step1_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a QA engineer analyzing code for test coverage.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Analyze this function and identify all test scenarios needed:\n", + "\n", + "\n", + "{function_to_test}\n", + "\n", + "\n", + "Identify and categorize test scenarios:\n", + "1. Happy path scenarios\n", + "2. Edge cases\n", + "3. Error cases\n", + "4. Boundary conditions\n", + "\n", + "Provide your analysis in tags.\"\"\"\n", + " }\n", + "]\n", + "\n", + "test_scenarios = get_chat_completion(step1_messages)\n", + "print(test_scenarios)\n", + "print(\"\\n\")\n", + "\n", + "# STEP 2: Generate test cases based on scenarios\n", + "print(\"=\" * 60)\n", + "print(\"STEP 2: Generate Test Cases\")\n", + "print(\"=\" * 60)\n", + "\n", + "step2_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a test automation engineer. 
Write pytest test cases.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Based on these test scenarios, generate pytest test cases:\n", + "\n", + "\n", + "{function_to_test}\n", + "\n", + "\n", + "\n", + "{test_scenarios}\n", + "\n", + "\n", + "Generate complete, executable pytest test cases in tags.\n", + "Include assertions, test data, and descriptive test names.\"\"\"\n", + " }\n", + "]\n", + "\n", + "test_code = get_chat_completion(step2_messages)\n", + "print(test_code)\n", + "print(\"\\n\")\n", + "\n", + "# STEP 3: Review and enhance test coverage\n", + "print(\"=\" * 60)\n", + "print(\"STEP 3: Review Test Coverage and Suggest Enhancements\")\n", + "print(\"=\" * 60)\n", + "\n", + "step3_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a principal QA engineer reviewing test coverage.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Review this test suite for completeness:\n", + "\n", + "\n", + "{function_to_test}\n", + "\n", + "\n", + "\n", + "{test_scenarios}\n", + "\n", + "\n", + "\n", + "{test_code}\n", + "\n", + "\n", + "Evaluate:\n", + "1. Are all scenarios covered?\n", + "2. Are there any missing edge cases?\n", + "3. Is the test data comprehensive?\n", + "4. Estimate coverage percentage\n", + "\n", + "Provide:\n", + "- Coverage assessment in tags\n", + "- Any additional test cases needed in tags\"\"\"\n", + " }\n", + "]\n", + "\n", + "coverage_review = get_chat_completion(step3_messages)\n", + "print(coverage_review)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Key Takeaways: Prompt Chaining\n", + "\n", + "**What We Demonstrated:**\n", + "\n", + "**Example 1: Code Review Chain**\n", + "- **Step 1:** Initial analysis identifies security vulnerabilities and code quality issues\n", + "- **Step 2:** Principal engineer validates the analysis and adds severity ratings\n", + "- **Step 3:** Generates actionable fixes and refactored code\n", + "\n", + "**Example 2: Test Generation Chain**\n", + "- **Step 1:** Analyzes function to identify all necessary test scenarios\n", + "- **Step 2:** Generates complete pytest test cases with proper structure\n", + "- **Step 3:** Reviews coverage and suggests additional tests for completeness\n", + "\n", + "**Why Chaining Works Better Than Single Prompts:**\n", + "- **Focused attention:** Each step handles one specific task without distraction\n", + "- **Quality control:** Later steps can review and enhance earlier outputs\n", + "- **Iterative refinement:** Each link improves the overall result\n", + "- **Easier debugging:** Problems can be isolated to specific steps\n", + "\n", + "**Best Practices Demonstrated:**\n", + "1. **Pass context forward:** Each step receives relevant outputs from previous steps\n", + "2. **Use XML tags:** Structured tags (``, ``, ``) organize data flow\n", + "3. **Clear objectives:** Each step has one specific, measurable goal\n", + "4. 
**Role specialization:** Different expert personas for different steps\n", + "\n", + "**Real-World Applications:**\n", + "- **Multi-stage refactoring:** Analyze β†’ Plan β†’ Refactor β†’ Validate β†’ Document\n", + "- **Comprehensive security audits:** Scan β†’ Analyze β†’ Prioritize β†’ Generate fixes β†’ Verify\n", + "- **API development:** Design schema β†’ Generate code β†’ Create tests β†’ Write docs β†’ Review\n", + "- **Database migrations:** Analyze schema β†’ Generate migration β†’ Create rollback β†’ Test β†’ Deploy\n", + "- **CI/CD pipeline generation:** Analyze project β†’ Design workflow β†’ Generate config β†’ Add tests β†’ Optimize\n", + "\n", + "**Pro Tip:** You can also create **self-correction chains** where the AI reviews its own work! Just pass the output back with a review prompt to catch errors and refine results.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "
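\n",
+ "Before you take the break below, here is a minimal sketch of the self-correction chain mentioned in the pro tip above: the model drafts code, reviews its own draft, and then revises it. It reuses the `get_chat_completion` helper from the setup step; the task prompt and variable names are only illustrative.\n",
+ "\n",
+ "```python\n",
+ "# Self-correction chain (sketch): draft -> self-review -> revision.\n",
+ "draft = get_chat_completion([\n",
+ "    {\"role\": \"system\", \"content\": \"You are a Python developer.\"},\n",
+ "    {\"role\": \"user\", \"content\": \"Write a function that merges two sorted lists into one sorted list.\"},\n",
+ "])\n",
+ "\n",
+ "review = get_chat_completion([\n",
+ "    {\"role\": \"system\", \"content\": \"You are a strict code reviewer.\"},\n",
+ "    {\"role\": \"user\", \"content\": f\"List any bugs or missed edge cases in this code:\\n\\n{draft}\"},\n",
+ "])\n",
+ "\n",
+ "revised = get_chat_completion([\n",
+ "    {\"role\": \"system\", \"content\": \"You are a Python developer who applies reviewer feedback.\"},\n",
+ "    {\"role\": \"user\", \"content\": f\"Code:\\n{draft}\\n\\nFeedback:\\n{review}\\n\\nReturn an improved version that addresses the feedback.\"},\n",
+ "])\n",
+ "\n",
+ "print(revised)\n",
+ "```\n",
+ "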
\n", + "
\n", + "

πŸ§ƒ Suggested Break Point #3

\n", + "

~90 minutes elapsed β€’ Almost there!

\n", + "
\n", + " \n", + "
\n", + "

βœ… Completed (Tactics 0-6):

\n", + "
    \n", + "
  • Clear Instructions, Role Prompting & Structured Inputs
  • \n", + "
  • Few-Shot Examples & Chain-of-Thought
  • \n", + "
  • Reference Citations for grounded responses
  • \n", + "
  • Prompt Chaining for complex workflows
  • \n", + "
\n", + "

🎯 You've mastered 7 out of 8 tactics!

\n", + "
\n", + " \n", + "
\n", + "

⏭️ Final Sprint:

\n", + "
    \n", + "
  • Tactic 7: LLM-as-Judge (Create evaluation rubrics)
  • \n", + "
  • Tactic 8: Inner Monologue (Clean outputs)
  • \n", + "
  • Hands-On Practice Activities (Apply what you learned)
  • \n", + "
\n", + "

⏱️ Remaining time: ~30-40 minutes

\n", + "
\n", + " \n", + "
\n", + "

πŸ“Œ BOOKMARK TO RESUME:

\n", + "

\"Tactic 7: LLM-as-Judge\"

\n", + "
\n", + " \n", + "

\n", + " πŸ’‘ You're in the home stretch! Take a quick break before the final tactics.\n", + "

\n", + "
\n", + "\n", + "---\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### βš–οΈ Tactic 7: LLM-as-Judge\n", + "\n", + "**Create evaluation rubrics and self-critique loops**\n", + "\n", + "**Core Principle:** One of the most powerful patterns in prompt engineering is using an AI model as a judge or critic to evaluate and improve outputs. This creates a self-improvement loop where the AI reviews, critiques, and refines workβ€”either its own outputs or those from other sources.\n", + "\n", + "**Why Use LLM-as-Judge:**\n", + "- **Quality assurance:** Catch errors, inconsistencies, and areas for improvement\n", + "- **Objective evaluation:** Get unbiased assessment based on specific criteria\n", + "- **Iterative refinement:** Continuously improve outputs through multiple review cycles\n", + "- **Scalable review:** Automate code reviews, documentation checks, and quality audits\n", + "\n", + "**When to Use LLM-as-Judge:**\n", + "- Code review and quality assessment\n", + "- Evaluating multiple solution approaches\n", + "- Grading or scoring responses against rubrics\n", + "- Providing constructive feedback on technical writing\n", + "- Testing and validation of AI-generated content\n", + "- Comparing different implementations\n", + "\n", + "**How to Implement:**\n", + "1. **Define clear criteria:** Specify what makes a good/bad output\n", + "2. **Provide rubrics:** Give the judge specific evaluation dimensions\n", + "3. **Request structured feedback:** Ask for scores, ratings, or categorized feedback\n", + "4. **Include examples:** Show what excellent vs. poor outputs look like\n", + "5. **Iterate:** Use feedback to improve and re-evaluate\n", + "\n", + "**Software Engineering Application Preview:** Essential for automated code reviews, architecture decision validation, test coverage assessment, documentation quality checks, and comparing multiple implementation approaches.\n", + "\n", + "*Reference: This technique combines elements from evaluation frameworks and self-critique patterns used in production AI systems.*\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example 1: Code Quality Judge\n", + "\n", + "Let's use AI as a judge to evaluate and compare two different implementations:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Example: LLM as Judge - Comparing Two Implementations\n", + "implementation_a = \"\"\"\n", + "def find_duplicates(items):\n", + " duplicates = []\n", + " for i in range(len(items)):\n", + " for j in range(i + 1, len(items)):\n", + " if items[i] == items[j] and items[i] not in duplicates:\n", + " duplicates.append(items[i])\n", + " return duplicates\n", + "\"\"\"\n", + "\n", + "implementation_b = \"\"\"\n", + "def find_duplicates(items):\n", + " from collections import Counter\n", + " counts = Counter(items)\n", + " return [item for item, count in counts.items() if count > 1]\n", + "\"\"\"\n", + "\n", + "print(\"=\" * 70)\n", + "print(\"LLM AS JUDGE: Comparing Implementations\")\n", + "print(\"=\" * 70)\n", + "\n", + "judge_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"\"\"You are a senior software engineer acting as an impartial code judge.\n", + " \n", + "Evaluate code based on these criteria:\n", + "1. Time Complexity (weight: 30%)\n", + "2. Space Complexity (weight: 20%)\n", + "3. Readability (weight: 25%)\n", + "4. Maintainability (weight: 15%)\n", + "5. 
Edge Case Handling (weight: 10%)\n", + "\n", + "Provide:\n", + "- Scores (0-10) for each criterion\n", + "- Overall weighted score\n", + "- Pros and cons for each implementation\n", + "- Final recommendation\"\"\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Compare these two implementations of a function that finds duplicate items in a list:\n", + "\n", + "\n", + "{implementation_a}\n", + "\n", + "\n", + "\n", + "{implementation_b}\n", + "\n", + "\n", + "Evaluate both implementations using the criteria provided. Structure your response with:\n", + "1. tags for Implementation A analysis\n", + "2. tags for Implementation B analysis\n", + "3. tags for side-by-side comparison\n", + "4. tags for final verdict\"\"\"\n", + " }\n", + "]\n", + "\n", + "judge_response = get_chat_completion(judge_messages)\n", + "print(judge_response)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example 2: Self-Critique and Improvement Loop\n", + "\n", + "Now let's create an improvement loop where AI generates code, critiques it, and then improves it:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Example: Self-Critique and Improvement Loop\n", + "requirement = \"Create a function that validates and sanitizes user input for a SQL query\"\n", + "\n", + "# STEP 1: Generate initial solution\n", + "print(\"=\" * 70)\n", + "print(\"STEP 1: Generate Initial Solution\")\n", + "print(\"=\" * 70)\n", + "\n", + "generate_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a Python developer. Generate code solutions.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"{requirement}\n", + "\n", + "Provide your implementation in tags.\"\"\"\n", + " }\n", + "]\n", + "\n", + "initial_code = get_chat_completion(generate_messages)\n", + "print(initial_code)\n", + "print(\"\\n\")\n", + "\n", + "# STEP 2: Critique the solution\n", + "print(\"=\" * 70)\n", + "print(\"STEP 2: Critique the Solution\")\n", + "print(\"=\" * 70)\n", + "\n", + "critique_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"\"\"You are a security-focused code reviewer. \n", + " \n", + "Evaluate code for:\n", + "- Security vulnerabilities\n", + "- Best practices\n", + "- Error handling\n", + "- Edge cases\n", + "- Code quality\n", + "\n", + "Provide brutally honest feedback with specific issues and severity levels.\"\"\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Requirement: {requirement}\n", + "\n", + "Initial implementation:\n", + "{initial_code}\n", + "\n", + "Critique this implementation. 
Identify all issues, rate severity (Critical/High/Medium/Low), and suggest specific improvements.\n", + "\n", + "Structure your response:\n", + "Your detailed critique\n", + "List of specific issues with severity\n", + "Actionable improvement suggestions\"\"\"\n", + " }\n", + "]\n", + "\n", + "critique = get_chat_completion(critique_messages)\n", + "print(critique)\n", + "print(\"\\n\")\n", + "\n", + "# STEP 3: Improve based on critique\n", + "print(\"=\" * 70)\n", + "print(\"STEP 3: Improved Implementation\")\n", + "print(\"=\" * 70)\n", + "\n", + "improve_messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a senior Python developer who learns from feedback.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Requirement: {requirement}\n", + "\n", + "Original implementation:\n", + "{initial_code}\n", + "\n", + "Critique received:\n", + "{critique}\n", + "\n", + "Create an improved implementation that addresses ALL the issues raised in the critique.\n", + "Provide the improved code in tags and explain key changes in tags.\"\"\"\n", + " }\n", + "]\n", + "\n", + "improved_code = get_chat_completion(improve_messages)\n", + "print(improved_code)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Key Takeaways: LLM-as-Judge\n", + "\n", + "**What We Demonstrated:**\n", + "\n", + "**Example 1: Code Quality Judge**\n", + "- Defined clear evaluation criteria with weights\n", + "- Provided structured rubrics for assessment\n", + "- Got objective comparison of two implementations\n", + "- Received scored evaluation with pros/cons and recommendation\n", + "\n", + "**Example 2: Self-Critique and Improvement Loop**\n", + "- **Step 1:** Generated initial code solution\n", + "- **Step 2:** Used AI as brutal critic to identify issues\n", + "- **Step 3:** Improved code based on critique feedback\n", + "- Created a self-improvement cycle\n", + "\n", + "**Benefits of LLM-as-Judge:**\n", + "\n", + "1. **Objective Evaluation:**\n", + " - Unbiased assessment based on defined criteria\n", + " - Consistent scoring across multiple evaluations\n", + " - Reduces human bias in code reviews\n", + "\n", + "2. **Continuous Improvement:**\n", + " - Iterative refinement through critique loops\n", + " - Learn from mistakes and feedback\n", + " - Progressive quality enhancement\n", + "\n", + "3. **Scalable Reviews:**\n", + " - Automate repetitive evaluation tasks\n", + " - Handle multiple implementations simultaneously\n", + " - Save senior engineers' time for complex decisions\n", + "\n", + "4. 
**Structured Feedback:**\n", + " - Clear, actionable improvement suggestions\n", + " - Severity ratings for prioritization\n", + " - Specific examples and recommendations\n", + "\n", + "**Real-World Applications:**\n", + "\n", + "- **Automated Code Reviews:** Evaluate PRs against coding standards before human review\n", + "- **Architecture Decisions:** Compare multiple design approaches objectively\n", + "- **Test Quality Assessment:** Evaluate test coverage and edge case handling\n", + "- **Documentation Quality:** Grade documentation completeness and clarity\n", + "- **API Design Review:** Compare REST vs GraphQL implementations\n", + "- **Performance Optimization:** Evaluate before/after optimization attempts\n", + "- **Security Audits:** Systematic vulnerability assessment with severity ratings\n", + "\n", + "**Implementation Patterns:**\n", + "\n", + "```python\n", + "# Pattern 1: Single evaluation\n", + "judge_prompt = \"\"\"\n", + "Evaluate [OUTPUT] based on:\n", + "1. Criterion A (weight: X%)\n", + "2. Criterion B (weight: Y%)\n", + "\n", + "Provide scores and recommendation.\n", + "\"\"\"\n", + "\n", + "# Pattern 2: Comparative evaluation\n", + "judge_prompt = \"\"\"\n", + "Compare [OPTION_A] and [OPTION_B] against:\n", + "- Criteria 1\n", + "- Criteria 2\n", + "- Criteria 3\n", + "\n", + "Recommend the better option with justification.\n", + "\"\"\"\n", + "\n", + "# Pattern 3: Self-improvement loop\n", + "1. Generate solution\n", + "2. Critique solution (AI as judge)\n", + "3. Improve based on critique\n", + "4. (Optional) Re-evaluate improvement\n", + "```\n", + "\n", + "**Pro Tips:**\n", + "- **Define clear rubrics:** Specific criteria produce better judgments\n", + "- **Use weighted scoring:** Prioritize what matters most\n", + "- **Request examples:** Ask for specific code snippets in feedback\n", + "- **Iterate multiple times:** Don't stop at first critique\n", + "- **Combine with other tactics:** Use with prompt chaining for multi-stage reviews\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 🀫 Tactic 8: Inner Monologue\n", + "\n", + "**Separate reasoning from clean final outputs**\n", + "\n", + "**Core Principle:** The Inner Monologue technique guides AI models to articulate their thought process internally before delivering a final response, effectively \"hiding\" the reasoning steps from the end user. This is particularly useful when you want the benefits of chain-of-thought reasoning without exposing the intermediate thinking to users.\n", + "\n", + "**Why Use Inner Monologue:**\n", + "- **Cleaner output:** Users see only the final answer, not the reasoning steps\n", + "- **Better reasoning:** The AI still benefits from step-by-step thinking internally\n", + "- **Professional presentation:** Provides concise, polished responses without verbose explanations\n", + "- **Flexible control:** You decide what to show and what to keep internal\n", + "\n", + "**When to Use Inner Monologue:**\n", + "- Customer-facing applications where clean responses are important\n", + "- API responses that need to be concise\n", + "- Documentation generation where only conclusions matter\n", + "- Code generation where you want the code, not the thought process\n", + "- Production systems where token efficiency is critical\n", + "\n", + "**How to Implement:**\n", + "1. **Instruct internal thinking:** Tell the AI to think through the problem internally\n", + "2. 
**Separate reasoning from output:** Use tags like `` for internal reasoning and `` for final results\n", + "3. **Extract final result:** Parse only the `` section for user-facing display\n", + "4. **Optional logging:** Store the `` section for debugging or quality assurance\n", + "\n", + "**Software Engineering Application Preview:** Critical for code generation tools, automated PR reviews, documentation generators, and customer-facing chatbots where you want intelligent responses without exposing the AI's reasoning process.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example 1: Code Generation with Hidden Reasoning\n", + "\n", + "Let's compare code generation with and without inner monologue:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Example 1: WITHOUT Inner Monologue (verbose response)\n", + "print(\"=\" * 70)\n", + "print(\"WITHOUT INNER MONOLOGUE (Verbose)\")\n", + "print(\"=\" * 70)\n", + "\n", + "without_monologue = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a Python developer helping with code generation.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"\"\"Create a function that validates email addresses using regex. \n", + "It should check for proper format and common email providers.\"\"\"\n", + " }\n", + "]\n", + "\n", + "response_verbose = get_chat_completion(without_monologue)\n", + "print(response_verbose)\n", + "print(\"\\n\")\n", + "\n", + "# Example 1: WITH Inner Monologue (clean output)\n", + "print(\"=\" * 70)\n", + "print(\"WITH INNER MONOLOGUE (Clean Output Only)\")\n", + "print(\"=\" * 70)\n", + "\n", + "with_monologue = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"\"\"You are a Python developer. When solving problems:\n", + "1. Think through the requirements internally in tags\n", + "2. Provide only the final code in tags\n", + "3. Keep the output clean and production-ready\"\"\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"\"\"Create a function that validates email addresses using regex. 
\n", + "It should check for proper format and common email providers.\n", + "\n", + "Think through the requirements internally, then provide only the final code.\"\"\"\n", + " }\n", + "]\n", + "\n", + "response_clean = get_chat_completion(with_monologue)\n", + "print(response_clean)\n", + "\n", + "# Extract only the output section (simulating production use)\n", + "import re\n", + "output_match = re.search(r'(.*?)', response_clean, re.DOTALL)\n", + "if output_match:\n", + " print(\"\\n\" + \"=\" * 70)\n", + " print(\"EXTRACTED FOR USER (Production Output)\")\n", + " print(\"=\" * 70)\n", + " print(output_match.group(1).strip())\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Key Takeaways: Inner Monologue\n", + "\n", + "**What We Demonstrated:**\n", + "\n", + "**Example 1: Code Generation**\n", + "- **Without inner monologue:** AI provides verbose explanations mixed with code\n", + "- **With inner monologue:** AI thinks internally in `` tags, outputs clean code in `` tags\n", + "- **Production use:** Extract only the `` section for user-facing applications\n", + "\n", + "**Example 2: Bug Analysis**\n", + "- AI analyzes the bug internally (division by zero for empty list)\n", + "- Provides concise, actionable fix without lengthy explanation\n", + "- Perfect for automated bug-fixing tools or PR comments\n", + "\n", + "**Benefits of Inner Monologue:**\n", + "\n", + "1. **Best of Both Worlds:**\n", + " - AI still benefits from step-by-step reasoning\n", + " - Users get clean, concise results\n", + "\n", + "2. **Production Ready:**\n", + " - Responses are polished and professional\n", + " - No verbose explanations cluttering the output\n", + " - Token-efficient for cost-sensitive applications\n", + "\n", + "3. **Flexible Control:**\n", + " - Keep `` for debugging and logging\n", + " - Show `` to end users\n", + " - Audit AI reasoning when needed\n", + "\n", + "4. **User Experience:**\n", + " - Faster to read and understand\n", + " - More professional appearance\n", + " - Reduces cognitive load on users\n", + "\n", + "**Real-World Applications:**\n", + "\n", + "- **Code Generation Tools:** IDE extensions that generate clean code without explanations\n", + "- **Automated PR Reviews:** Concise comments on pull requests with reasoning logged separately\n", + "- **Documentation Generators:** Clean docs without showing the analysis process\n", + "- **Customer Support Bots:** Helpful answers without exposing decision trees\n", + "- **API Code Examples:** Clean, copy-paste ready code snippets\n", + "- **Debugging Assistants:** Direct fixes without lengthy troubleshooting narratives\n", + "\n", + "**Implementation Pattern:**\n", + "\n", + "```python\n", + "system_prompt = \"\"\"\n", + "Process:\n", + "1. In tags: Analyze, plan, consider edge cases\n", + "2. In tags: Provide only the final result\n", + "\n", + "Never show to users - it's for your internal process only.\n", + "\"\"\"\n", + "\n", + "# Then extract: \n", + "output = extract_tag(response, 'output') # Show to user\n", + "thinking = extract_tag(response, 'thinking') # Log for debugging\n", + "```\n", + "\n", + "**Pro Tip:** You can combine inner monologue with other tactics! Use it with prompt chaining for multi-step workflows where each step produces clean output, or with role prompting for specialized expert responses without verbose explanations.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "
\n", + "
\n", + "

🎯 Ready for Hands-On Practice?

\n", + "

All 8 tactics learned! β€’ ~20-30 minutes for practice

\n", + "
\n", + " \n", + "
\n", + "

πŸŽ‰ Congratulations! You've Learned:

\n", + "
    \n", + "
  • Tactic 0: Write Clear Instructions
  • \n", + "
  • Tactic 1: Role Prompting
  • \n", + "
  • Tactic 2: Structured Inputs
  • \n", + "
  • Tactic 3: Few-Shot Examples
  • \n", + "
  • Tactic 4: Chain-of-Thought
  • \n", + "
  • Tactic 5: Reference Citations
  • \n", + "
  • Tactic 6: Prompt Chaining
  • \n", + "
  • Tactic 7: LLM-as-Judge
  • \n", + "
  • Tactic 8: Inner Monologue
  • \n", + "
\n", + "
\n", + " \n", + "
\n", + "

⏭️ What's Next:

\n", + "
    \n", + "
  • Activity 2.1: Role Prompting + Structured Inputs
  • \n", + "
  • Activity 2.2: Few-Shot Examples + Chain-of-Thought
  • \n", + "
  • Activity 2.3: Reference Citations + Prompt Chaining
  • \n", + "
  • Activity 2.4: LLM-as-Judge + Inner Monologue
  • \n", + "
\n", + "

πŸ’ͺ Time to apply what you've learned!

\n", + "
\n", + " \n", + "
\n", + "

πŸ“Œ BOOKMARK TO RESUME:

\n", + "

\"Hands-On Practice - Activity 2.1\"

\n", + "
\n", + " \n", + "

\n", + " πŸ’‘ The practice activities reinforce learning. Take a break if needed before diving in!\n", + "

\n", + "
\n", + "\n", + "---\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## πŸƒβ€β™€οΈ Hands-On Practice\n", + "\n", + "Now let's practice what you've learned! These exercises will help you master the 8 core tactics.\n", + "\n", + "### Activity 2.1: Role Prompting & Structured Inputs\n", + "\n", + "**Goal:** Combine role prompting with XML delimiters to organize multi-file code analysis.\n", + "\n", + "**Your Task:** Create a prompt that uses a QA Engineer persona to analyze test coverage for multiple files." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# HINT: Combine Tactic 1 (Role Prompting) + Tactic 2 (Structured Inputs)\n", + "# - Use system message to define QA Engineer role\n", + "# - Use XML tags: , , \n", + "# - Ask for structured output with coverage analysis\n", + "\n", + "test_file = \"\"\"\n", + "def calculate_total(items, tax_rate=0.1):\n", + " subtotal = sum(item['price'] * item['quantity'] for item in items)\n", + " return subtotal * (1 + tax_rate)\n", + "\"\"\"\n", + "\n", + "source_code = \"\"\"\n", + "class ShoppingCart:\n", + " def __init__(self):\n", + " self.items = []\n", + " \n", + " def add_item(self, name, price, quantity=1):\n", + " self.items.append({'name': name, 'price': price, 'quantity': quantity})\n", + " \n", + " def get_total(self, tax_rate=0.1):\n", + " return calculate_total(self.items, tax_rate)\n", + "\"\"\"\n", + "\n", + "# YOUR TASK: Create messages using role prompting + XML structure\n", + "# messages = [\n", + "# {\"role\": \"system\", \"content\": \"...\"}, \n", + "# {\"role\": \"user\", \"content\": \"...\"}\n", + "# ]\n", + "# response = get_chat_completion(messages)\n", + "# print(response)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Activity 2.2: Few-Shot Examples & Chain-of-Thought\n", + "\n", + "**Goal:** Use examples to teach AI your coding style, then apply chain-of-thought for analysis.\n", + "\n", + "**Your Task:** Provide 3 examples of your preferred error message format, then ask AI to generate error messages for a new function using step-by-step reasoning.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# HINT: Combine Tactic 3 (Few-Shot) + Tactic 4 (Chain-of-Thought)\n", + "# - Provide 3 examples of error messages in your preferred style\n", + "# - Use tags for each\n", + "# - Add \"Think step-by-step\" instruction\n", + "# - Ask AI to analyze a new function and generate error messages\n", + "\n", + "new_function = \"\"\"\n", + "def transfer_funds(from_account, to_account, amount, currency='USD'):\n", + " if amount <= 0:\n", + " raise ValueError(\"Amount must be positive\")\n", + " if from_account == to_account:\n", + " raise ValueError(\"Cannot transfer to same account\")\n", + " # Transfer logic here...\n", + "\"\"\"\n", + "\n", + "# YOUR TASK: Create messages with few-shot examples + CoT reasoning\n", + "# Example format you want:\n", + "# - \"ERROR [CODE]: Human-readable message. 
Suggestion: ...\"\n", + "# \n", + "# messages = [...]\n", + "# response = get_chat_completion(messages)\n", + "# print(response)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Activity 2.3: Reference Citations & Prompt Chaining\n", + "\n", + "**Goal:** Build a 2-step prompt chain that analyzes documentation and generates code.\n", + "\n", + "**Your Task:** \n", + "- **Step 1:** Extract relevant quotes from API docs about authentication\n", + "- **Step 2:** Use those quotes to generate secure authentication code\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# HINT: Combine Tactic 5 (Reference Citations) + Tactic 6 (Prompt Chaining)\n", + "# - STEP 1: Extract quotes about auth requirements\n", + "# - STEP 2: Use extracted quotes to generate implementation\n", + "\n", + "api_documentation = \"\"\"\n", + "# Authentication API v2\n", + "\n", + "## Security Requirements\n", + "All API requests must include:\n", + "- API key in X-API-Key header\n", + "- Request signature using HMAC-SHA256\n", + "- Timestamp within 5 minutes of server time\n", + "- Rate limiting: 100 requests per minute per key\n", + "\n", + "## Key Management\n", + "- Store keys in environment variables, never in code\n", + "- Rotate keys every 90 days\n", + "- Use separate keys for dev/staging/production\n", + "\n", + "## Error Handling\n", + "- 401: Invalid or missing API key\n", + "- 403: Valid key but insufficient permissions\n", + "- 429: Rate limit exceeded\n", + "\"\"\"\n", + "\n", + "# YOUR TASK: Create a 2-step chain\n", + "# STEP 1: Extract relevant quotes in tags\n", + "# step1_messages = [...]\n", + "# quotes = get_chat_completion(step1_messages)\n", + "# print(\"STEP 1 - Extracted Quotes:\")\n", + "# print(quotes)\n", + "# print(\"\\n\")\n", + "\n", + "# STEP 2: Generate code using the quotes\n", + "# step2_messages = [...] # Pass quotes from step 1\n", + "# code = get_chat_completion(step2_messages)\n", + "# print(\"STEP 2 - Generated Code:\")\n", + "# print(code)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Activity 2.4: LLM-as-Judge & Inner Monologue\n", + "\n", + "**Goal:** Create a self-critique loop with clean final output.\n", + "\n", + "**Your Task:** \n", + "- Generate a function with potential issues\n", + "- Use AI as judge to critique it with weighted criteria\n", + "- Get improved version with inner monologue (only show final code)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# HINT: Combine Tactic 7 (LLM-as-Judge) + Tactic 8 (Inner Monologue)\n", + "# Create a 3-step process:\n", + "# - STEP 1: Generate initial code\n", + "# - STEP 2: Judge it with weighted criteria (Security 40%, Performance 30%, Readability 30%)\n", + "# - STEP 3: Improve using and tags\n", + "\n", + "requirement = \"Create a function to validate and sanitize user email input\"\n", + "\n", + "# YOUR TASK: Build the 3-step self-improvement loop\n", + "# STEP 1: Generate initial implementation\n", + "# step1_messages = [...]\n", + "# initial_code = get_chat_completion(step1_messages)\n", + "# print(\"STEP 1 - Initial Code:\")\n", + "# print(initial_code)\n", + "# print(\"\\n\")\n", + "\n", + "# STEP 2: Critique with weighted rubric\n", + "# step2_messages = [...] 
# Define criteria with weights\n", + "# critique = get_chat_completion(step2_messages)\n", + "# print(\"STEP 2 - Critique:\")\n", + "# print(critique)\n", + "# print(\"\\n\")\n", + "\n", + "# STEP 3: Improve with inner monologue\n", + "# step3_messages = [...] # Use and tags\n", + "# improved = get_chat_completion(step3_messages)\n", + "# \n", + "# # Extract only the section\n", + "# import re\n", + "# output_match = re.search(r'(.*?)', improved, re.DOTALL)\n", + "# if output_match:\n", + "# print(\"STEP 3 - Improved Code (Clean Output):\")\n", + "# print(output_match.group(1).strip())\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 🎯 Exercise Solutions & Discussion\n", + "\n", + "
\n", + "πŸ’‘ Try the exercises above first!

\n", + "Complete Activities 2.1-2.4 before checking the solutions below.\n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "πŸ“‹ Click to reveal solutions\n", + "\n", + "**Activity 2.1 Solution:**\n", + "```python\n", + "messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": \"You are a QA engineer analyzing test coverage. Provide detailed coverage recommendations.\"\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"\n", + "\n", + "{test_file}\n", + "\n", + "\n", + "\n", + "{source_code}\n", + "\n", + "\n", + "\n", + "- Test happy path scenarios\n", + "- Test edge cases (empty lists, zero quantities)\n", + "- Test error conditions\n", + "- Verify tax calculations\n", + "\n", + "\n", + "Analyze test coverage and identify missing test cases. Format response as:\n", + "1. Current Coverage Assessment\n", + "2. Missing Test Scenarios\n", + "3. Recommended Test Cases\n", + "\"\"\"\n", + " }\n", + "]\n", + "```\n", + "\n", + "**Activity 2.2 Solution:**\n", + "```python\n", + "messages = [\n", + " {\"role\": \"system\", \"content\": \"Generate error messages following the provided examples.\"},\n", + " {\"role\": \"user\", \"content\": \"Generate error message for invalid email\"},\n", + " {\"role\": \"assistant\", \"content\": \"ERROR [E001]: Invalid email format. Suggestion: Use format 'user@domain.com'\"},\n", + " \n", + " {\"role\": \"user\", \"content\": \"Generate error message for empty field\"},\n", + " {\"role\": \"assistant\", \"content\": \"ERROR [E002]: Required field is empty. Suggestion: Provide a valid value\"},\n", + " \n", + " {\"role\": \"user\", \"content\": \"Generate error message for duplicate entry\"},\n", + " {\"role\": \"assistant\", \"content\": \"ERROR [E003]: Duplicate entry detected. Suggestion: Use a unique identifier\"},\n", + " \n", + " {\"role\": \"user\", \"content\": f\"Analyze this function step-by-step and generate appropriate error messages:\\n{new_function}\"}\n", + "]\n", + "```\n", + "\n", + "**Activity 2.3 Solution:**\n", + "```python\n", + "# STEP 1\n", + "step1_messages = [{\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"{api_documentation}\n", + "\n", + "Extract key quotes about authentication requirements in tags.\"\"\"\n", + "}]\n", + "\n", + "# STEP 2 (pass quotes from step 1)\n", + "step2_messages = [{\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Based on these requirements:\n", + "{quotes}\n", + "\n", + "Generate Python code implementing secure authentication. Use for analysis and for code.\"\"\"\n", + "}]\n", + "```\n", + "\n", + "**Activity 2.4 Solution:**\n", + "```python\n", + "# STEP 2 - Judge\n", + "step2_messages = [{\n", + " \"role\": \"system\",\n", + " \"content\": \"\"\"You are a code quality judge. Evaluate based on:\n", + "- Security (40%)\n", + "- Performance (30%) \n", + "- Readability (30%)\n", + "\n", + "Provide scores and specific issues.\"\"\"\n", + "}]\n", + "\n", + "# STEP 3 - Improve with inner monologue\n", + "step3_messages = [{\n", + " \"role\": \"user\",\n", + " \"content\": f\"\"\"Improve this code addressing the critique:\n", + "\n", + "{initial_code}\n", + "\n", + "Critique: {critique}\n", + "\n", + "Process:\n", + "1. In tags: Analyze issues and plan improvements\n", + "2. In tags: Provide only the final improved code\"\"\"\n", + "}]\n", + "```\n", + "\n", + "**Key Takeaways:**\n", + "- Tactics work better combined than alone\n", + "- XML tags organize complex information\n", + "- Chains enable multi-step reasoning\n", + "- Inner monologue keeps output clean\n", + "\n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "πŸŽ‰ **Excellent work!** You've practiced combining multiple tactics to solve real-world coding challenges.\n", + "\n", + "**What you've demonstrated:**\n", + "- βœ… Combined role prompting with structured inputs (Activity 2.1)\n", + "- βœ… Used few-shot examples with chain-of-thought (Activity 2.2)\n", + "- βœ… Built prompt chains with reference citations (Activity 2.3)\n", + "- βœ… Created self-improvement loops with clean output (Activity 2.4)\n", + "\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## πŸ“ˆ Track Your Progress\n", + "\n", + "> **πŸ’‘ New to Skills Checklists?** See [Tracking Your Progress](../../README.md#-tracking-your-progress) in the main README for details on how the Skills Checklist works and when to check off skills.\n", + "\n", + "### Self-Assessment Questions\n", + "\n", + "After completing Module 2, ask yourself:\n", + "1. Can I explain how role prompting improves AI responses?\n", + "2. Can I use delimiters (XML tags) effectively to organize complex inputs?\n", + "3. Can I create few-shot examples to establish consistent styles?\n", + "4. Can I implement chain-of-thought reasoning for systematic analysis?\n", + "5. Can I ground AI responses in reference texts with proper citations?\n", + "6. Can I break complex tasks into sequential prompt chains?\n", + "7. Can I use LLM-as-Judge to evaluate and improve code quality?\n", + "8. Can I implement inner monologue to separate reasoning from final output?\n", + "\n", + "### Progress Overview\n", + "\n", + "
\n", + "πŸ’‘ Note: The status indicators below (βœ…/⬜) are visual guides only and cannot be clicked. Scroll down to \"Check Off Your Skills\" for the interactive checkboxes where you'll track your actual progress!\n", + "
\n", + "\n", + "
\n", + "\n", + "**Module 2 Skills Checklist:** \n", + "
Track your progress by checking off skills below. When you master all 16 skills (2 per tactic), you'll have achieved 100% completion!
\n", + "\n", + "**Current Status:**\n", + "- βœ… Environment Setup (Tutorial Completed)\n", + "- βœ… 8 Core Techniques Learned (Tutorial Completed) \n", + "- ⬜ Skills Mastery (Use Skills Checklist below)\n", + "\n", + "**Progress Guide:**\n", + "- 0-4 skills checked: Beginner (25-38%)\n", + "- 5-8 skills checked: Developing (44-56%)\n", + "- 9-12 skills checked: Intermediate (63-75%)\n", + "- 13-15 skills checked: Advanced (81-94%)\n", + "- 16 skills checked: Expert (100%) πŸŽ‰\n", + "\n", + "**Module 3:** Coming Next\n", + "- ⬜ Advanced Applications\n", + "- ⬜ Complex Refactoring Scenarios\n", + "- ⬜ Testing and QA Workflows\n", + "- ⬜ Production Debugging Prompts\n", + "\n", + "
\n", + "\n", + "### Check Off Your Skills\n", + "\n", + "
\n", + "\n", + "Mark each skill as you master it (2 skills per tactic = 16 total):\n", + "\n", + "**1. Role Prompting:**\n", + "
\n", + "- I can create effective software engineering personas (security, performance, QA)\n", + "
\n", + "
\n", + "- I can assign specific expertise roles to get specialized analysis\n", + "
\n", + "\n", + "**2. Structured Inputs:**\n", + "
\n", + "- I can use delimiters (### or XML) to organize complex inputs\n", + "
\n", + "
\n", + "- I can handle multi-file scenarios with clear structure\n", + "
\n", + "\n", + "**3. Few-Shot Examples:**\n", + "
\n", + "- I can create few-shot examples to establish consistent response styles\n", + "
\n", + "
\n", + "- I can use examples to teach AI my coding standards and documentation formats\n", + "
\n", + "\n", + "**4. Chain-of-Thought:**\n", + "
\n", + "- I can implement step-by-step reasoning for systematic analysis\n", + "
\n", + "
\n", + "- I can force AI to work through problems before judging solutions\n", + "
\n", + "\n", + "**5. Reference Citations:**\n", + "
\n", + "- I can structure multi-document prompts with proper XML tags\n", + "
\n", + "
\n", + "- I can request quote extraction before analysis to reduce hallucinations\n", + "
\n", + "\n", + "**6. Prompt Chaining:**\n", + "
\n", + "- I can break complex tasks into sequential prompt chains\n", + "
\n", + "
\n", + "- I can pass context between chain steps using structured tags\n", + "
\n", + "\n", + "**7. LLM-as-Judge:**\n", + "
\n", + "- I can create evaluation rubrics with weighted criteria for code assessment\n", + "
\n", + "
\n", + "- I can implement self-critique loops for iterative improvement\n", + "
\n", + "\n", + "**8. Inner Monologue:**\n", + "
\n", + "- I can separate thinking from output using <thinking> and <output> tags\n", + "
\n", + "
\n", + "- I can extract clean outputs while logging reasoning for debugging\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "πŸ’‘ Remember: The goal is not just to complete activities, but to build lasting skills that transform your development workflow!\n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "πŸŽ‰ **Outstanding!** You've completed Module 2 and learned **eight powerful prompt engineering tactics**:\n", + "\n", + "1. 🎭 **Role Prompting** - Transform AI into specialized domain experts\n", + "2. πŸ“‹ **Structured Inputs** - Organize complex scenarios with XML delimiters\n", + "3. πŸ“š **Few-Shot Examples** - Teach AI your preferred styles and standards\n", + "4. ⛓️‍πŸ’₯ **Chain-of-Thought** - Guide systematic step-by-step reasoning\n", + "5. πŸ“– **Reference Citations** - Ground responses in actual documentation to reduce hallucinations\n", + "6. πŸ”— **Prompt Chaining** - Break complex tasks into sequential workflows\n", + "7. βš–οΈ **LLM-as-Judge** - Create evaluation rubrics and self-critique loops\n", + "8. 🀫 **Inner Monologue** - Separate reasoning from clean final outputs\n", + "\n", + "This is exactly how professional developers use AI assistants for real-world software engineering tasks!\n", + "\n", + "## 🎊 Module 2 Complete!\n", + "\n", + "### What You've Accomplished\n", + "\n", + "- βœ… **Practiced about role prompting** and saw how personas provide specialized expertise\n", + "- βœ… **Used structured delimiters** to organize complex, multi-part inputs with XML tags\n", + "- βœ… **Applied few-shot examples** to establish consistent response styles\n", + "- βœ… **Implemented chain-of-thought reasoning** for systematic analysis\n", + "- βœ… **Grounded AI responses** in reference texts with proper citations\n", + "- βœ… **Built prompt chains** to break complex tasks into sequential steps\n", + "- βœ… **Used LLM-as-Judge** for evaluating and improving code quality\n", + "- βœ… **Implemented inner monologue** to separate reasoning from final output\n", + "\n", + "### Next Steps\n", + "\n", + "Continue to **Module 3: Advanced Software Engineering Applications** where you'll learn:\n", + "- Building prompts for complex refactoring scenarios\n", + "- Creating systematic testing and QA workflows\n", + "- Designing effective debugging and performance optimization prompts\n", + "- Developing API integration and documentation helpers\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.13.2" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/01-tutorials/module-02-fundamentals/requirements.txt b/01-course/module-02-fundamentals/requirements.txt similarity index 100% rename from 01-tutorials/module-02-fundamentals/requirements.txt rename to 01-course/module-02-fundamentals/requirements.txt diff --git a/01-tutorials/prompt-engineering-for-developers.ipynb b/01-course/prompt-engineering-for-developers.ipynb similarity index 100% rename from 01-tutorials/prompt-engineering-for-developers.ipynb rename to 01-course/prompt-engineering-for-developers.ipynb diff --git a/01-tutorials/README.md b/01-tutorials/README.md deleted file mode 100644 index 688ff12..0000000 --- a/01-tutorials/README.md +++ /dev/null @@ -1,48 +0,0 @@ -# 01-tutorials: Prompt Engineering Fundamentals - -This directory contains comprehensive tutorials that teach the fundamentals of prompt engineering for developers through hands-on examples and progressive learning modules. 
- -## Structure - -### Core Modules - -- **[module-01-foundations/](./module-01-foundations/)** - Course introduction, environment setup, and prompt anatomy -- **[module-02-fundamentals/](./module-02-fundamentals/)** - Core prompting techniques: clear instructions, personas, delimiters, reasoning -- **[module-03-applications/](./module-03-applications/)** - Software engineering applications: code quality, testing, debugging, API integration -- **[module-04-integration/](./module-04-integration/)** - Custom command integration for AI code assistants - -## Learning Path - -1. **Start Here**: Begin with any individual module notebook or the comprehensive course notebook -2. **Prerequisites**: Python 3.8+, IDE with notebook support, API access (GitHub Copilot, CircuIT, or OpenAI) -3. **Time Required**: ~90 minutes total, can be completed in 30-minute sessions -4. **Practice**: Complete hands-on exercises in [02-exercises/](../02-exercises/) - -## Module Overview - -### Module 1: Foundations (20 min) -- Environment setup and API configuration -- Understanding prompt anatomy and structure -- First prompt engineering workflow - -### Module 2: Fundamentals (30 min) -- Clear instructions and specification techniques -- Role prompting and persona adoption -- Delimiters and structured inputs -- Step-by-step reasoning and few-shot examples - -### Module 3: Applications (30 min) -- Code quality and refactoring patterns -- Testing and quality assurance workflows -- Code review and debugging techniques -- API integration and error handling - -### Module 4: Integration (10 min) -- Custom command creation for AI assistants -- Platform-specific integration patterns - -## Next Steps - -After completing the tutorials, continue to: -- **[02-exercises/](../02-exercises/)** - Hands-on practice activities and solutions -- **[03-examples/](../03-examples/)** - Real-world use cases and implementation patterns diff --git a/01-tutorials/module-01-foundations/README.md b/01-tutorials/module-01-foundations/README.md deleted file mode 100644 index b435d66..0000000 --- a/01-tutorials/module-01-foundations/README.md +++ /dev/null @@ -1,34 +0,0 @@ -# Module 1: Foundations - -## Course Introduction & Environment Setup - -This foundational module introduces you to prompt engineering concepts and gets your development environment configured for hands-on learning. - -### What You'll Learn -- Understanding the anatomy of effective prompts -- Setting up local development environment for AI assistant integration -- Configuring API access (GitHub Copilot, CircuIT, or OpenAI) -- Writing your first structured prompts for development tasks - -### Module Contents -- **[module1.ipynb](./module1.ipynb)** - Complete module 1 tutorial notebook - -### Learning Objectives -By completing this module, you will: -- βœ… Have a working development environment with AI assistant access -- βœ… Understand the four core elements of effective prompts -- βœ… Be able to write basic prompts for code improvement and documentation -- βœ… Know how to iterate and refine prompts based on output quality - -### Time Required -Approximately 20 minutes - -### Prerequisites -- Python 3.8+ installed -- IDE with notebook support (VS Code or Cursor recommended) -- API access to GitHub Copilot, CircuIT, or OpenAI - -### Next Steps -After completing this module: -1. Practice with [Module 1 exercises](../../02-exercises/hands-on/) -2. 
Continue to [Module 2: Core Prompting Techniques](../module-02-fundamentals/) diff --git a/01-tutorials/module-01-foundations/module1.ipynb b/01-tutorials/module-01-foundations/module1.ipynb deleted file mode 100644 index 69f34e0..0000000 --- a/01-tutorials/module-01-foundations/module1.ipynb +++ /dev/null @@ -1,779 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Tutorial: Prompt Engineering for Developers\n", - "\n", - "## What You'll Learn\n", - "\n", - "By the end of this hands-on tutorial, you'll master the essential prompt engineering skills that transform AI assistants into powerful development tools. You'll learn to write precise prompts that consistently deliver production-ready code, thorough code reviews, and effective debugging assistance.\n", - "\n", - "**What you'll accomplish:**\n", - "- βœ… Set up a local development environment for prompt engineering\n", - "- βœ… Master foundational prompting techniques (zero-shot, few-shot, chain-of-thought)\n", - "- βœ… Build advanced prompt structures that eliminate hallucinations\n", - "- βœ… Create custom commands for code review, debugging, and API integration\n", - "- βœ… Deploy a working prompt engineering toolkit for your daily development workflow\n", - "\n", - "## Prerequisites\n", - "\n", - "### Required Knowledge\n", - "- Basic familiarity with Python (variables, functions, basic syntax)\n", - "- Experience with any IDE (VS Code, Cursor, or similar)\n", - "- Understanding of basic software development concepts\n", - "\n", - "### Required Setup\n", - "- [ ] Python 3.8+ installed on your system\n", - "- [ ] IDE with notebook support (VS Code or Cursor) or Google Collab \n", - "- [ ] API access to either:\n", - " - GitHub Copilot (preferred for this tutorial)\n", - " - CircuIT APIs, or\n", - " - OpenAI API key\n", - "\n", - "### Time Required\n", - "- Approximately 90 minutes total\n", - "- Can be completed in 3 sessions of 30 minutes each\n", - "\n", - "## Tutorial Structure\n", - "\n", - "### Module 1: Foundation Setup (20 min)\n", - "- Set up your development environment\n", - "- Connect to AI models via API\n", - "- Verify everything works with your first prompt\n", - "- Understand the 4 core elements of effective prompts\n", - "\n", - "### Module 2: Core Prompting Techniques (30 min)\n", - "- Master role prompting and personas\n", - "- Use delimiters and structured inputs\n", - "- Apply few-shot examples and chain-of-thought reasoning\n", - "- Practice with real software engineering scenarios\n", - "\n", - "### Module 3: Advanced Software Engineering Applications (30 min)\n", - "- Build prompts for code quality and refactoring\n", - "- Create systematic testing and QA workflows\n", - "- Design effective code review and debugging prompts\n", - "- Develop API integration and documentation helpers\n", - "\n", - "### Module 4: Custom Command Integration (10 min)\n", - "- Create reusable prompt templates\n", - "- Set up custom commands for your AI assistant\n", - "- Build a personal prompt engineering toolkit\n", - "- Plan your next steps for continued learning\n", - "\n", - "---\n", - "\n", - "## πŸš€ Ready to Start?\n", - "\n", - "Let's begin by setting up your development environment and running your first successful prompt!\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "---\n", - "\n", - "# Module 1: Foundation Setup\n", - "\n", - "In this section, we'll get your development environment ready and help you understand what makes prompts effective.\n", - 
"\n", - "## Learning Outcomes for Module 1\n", - "\n", - "By the end of this section, you will:\n", - "- [ ] Have a working Python environment with AI model access\n", - "- [ ] Successfully execute your first structured prompt\n", - "- [ ] Understand the 4 core elements that make prompts effective\n", - "- [ ] Feel confident to move to advanced techniques\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 1.1: Install Required Dependencies\n", - "\n", - "Let's start by installing the packages we need for this tutorial.\n", - "\n", - "Run the cell below. You should see a success message when installation completes:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Install required packages\n", - "import subprocess\n", - "import sys\n", - "\n", - "def install_requirements():\n", - " try:\n", - " # Install from requirements.txt\n", - " subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"-r\", \"requirements.txt\"])\n", - " print(\"βœ… SUCCESS! All dependencies installed successfully.\")\n", - " print(\"πŸ“¦ Installed: openai, anthropic, python-dotenv, requests\")\n", - " except subprocess.CalledProcessError as e:\n", - " print(f\"❌ Installation failed: {e}\")\n", - " print(\"πŸ’‘ Try running: pip install openai anthropic python-dotenv requests\")\n", - "\n", - "install_requirements()\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "βœ… **Success!** You've installed the necessary Python packages.\n", - "\n", - "πŸ’‘ **What just happened?** We installed libraries that let us communicate with AI models programmatically.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 1.2: Set Up API Connection\n", - "\n", - "Now let's connect to an AI model. We'll use GitHub Copilot through a local proxy (recommended) or you can use other options.\n", - "\n", - "Run these Python cells to set up authentication to the foundational LLM models and a helper for chat completions. There are two ways to setup authentication to access foundational LLM models for this course:\n", - "\n", - "- **Oprion A: GitHub Copilot API (local proxy)**: Recommended if you don't have OpenAI or CircuIT API access. Follow `GitHub-Copilot-2-API/README.md` to authenticate and start the local server, then run the \"GitHub Copilot (local proxy)\" setup cells below.\n", - "\n", - "- **Option B: OpenAI API**: If you have OpenAI API access, you can use the OpenAI connection cells provided later in this notebook.\n", - "\n", - "- **Option C: CircuIT APIs (Azure OpenAI)**: If you have CircuIT API access, you can use the CircuIT connection cells provided later in this notebook." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Option A: GitHub Copilot (Recommended)\n", - "\n", - "If you have GitHub Copilot, this is the easiest option:\n", - "
\n", - "πŸ’‘ Note:

\n", - "The GitHub Copilot API repository (copilot-api) used in this course is a fork of the original repository from https://cto-github.cisco.com/xinyu3/copilot2api.\n", - "
\n", - "\n", - "- Follow the setup steps in [https://github.com/snehangshu-splunk/copilot-api/blob/main/.github/README.md](https://github.com/snehangshu-splunk/copilot-api/blob/main/.github/README.md) to:\n", - " - Authenticate (`auth`) with your GitHub account that has Copilot access\n", - " - Start the local server (default: `http://localhost:7711`)\n", - "- Then run the \"GitHub Copilot API setup (local proxy)\" cells below.\n", - "\n", - "Quick reference (see `README` for details):\n", - "1. Download and install dependencies\n", - " ```bash\n", - " # Clone the repository\n", - " git clone git@github.com:snehangshu-splunk/copilot-api.git\n", - " cd copilot-api\n", - "\n", - " # Install dependencies\n", - " uv sync\n", - " ```\n", - "2. Before starting the server, you need to authenticate with GitHub:\n", - " ```bash\n", - " # For business account\n", - " uv run copilot2api auth --business\n", - " ```\n", - " When authenticating for the first time, you will see the following information:\n", - " ```\n", - " Press Ctrl+C to stop the server\n", - " Starting Copilot API server...\n", - " Starting GitHub device authorization flow...\n", - "\n", - " Please enter the code '14B4-5D82' at:\n", - " https://github.com/login/device\n", - "\n", - " Waiting for authorization...\n", - " ```\n", - " You need to copy `https://github.com/login/device` to your browser, then log in to your GitHub account through the browser. This GitHub account should have GitHub Copilot functionality. After authentication is complete, copy '14B4-5D82' in the browser prompt box. This string of numbers is system-generated and may be different each time.\n", - "\n", - " > **Don't copy the code here.** If you copy this, it will only cause your authorization to fail.\n", - "\n", - " After successful device authorization:\n", - " - macOS or Linux:\n", - " - In the `$HOME/.config/copilot2api/` directory, you will see the github-token file.\n", - " - Windows system:\n", - " - You will find the github-token file in the `C:\\Users\\\\AppData\\Roaming\\copilot2api\\` directory.\n", - "\n", - " 3. Start the Server\n", - " ```bash\n", - " # Start API server (default port 7711)\n", - " uv run copilot2api start\n", - " ```\n", - " Now use the OpenAI libraries to connect to the LLM, by executing the below cell. 
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# GitHub Copilot API setup (local proxy)\n", - "import openai\n", - "import os\n", - "\n", - "# Configure for local GitHub Copilot proxy\n", - "client = openai.OpenAI(\n", - " base_url=\"http://localhost:7711/v1\",\n", - " api_key=\"dummy-key\" # The local proxy doesn't need a real key\n", - ")\n", - "\n", - "def get_chat_completion(messages, model=\"gpt-4\", temperature=0.7):\n", - " \"\"\"\n", - " Get a chat completion from the AI model.\n", - " \n", - " Args:\n", - " messages: List of message dictionaries with 'role' and 'content'\n", - " model: Model name (default: gpt-4)\n", - " temperature: Creativity level 0-1 (default: 0.7)\n", - " \n", - " Returns:\n", - " String response from the AI model\n", - " \"\"\"\n", - " try:\n", - " response = client.chat.completions.create(\n", - " model=model,\n", - " messages=messages,\n", - " temperature=temperature\n", - " )\n", - " return response.choices[0].message.content\n", - " except Exception as e:\n", - " return f\"❌ Error: {e}\\\\n\\\\nπŸ’‘ Make sure the GitHub Copilot local proxy is running on port 7711\"\n", - "\n", - "print(\"βœ… GitHub Copilot API configured successfully!\")\n", - "print(\"πŸ”— Connected to: http://localhost:7711\")\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Option B: Direct OpenAI API\n", - "\n", - "If you prefer to use OpenAI directly, uncomment and run this cell instead:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# # Direct OpenAI API setup\n", - "# import openai\n", - "# import os\n", - "# from dotenv import load_dotenv\n", - "\n", - "# load_dotenv()\n", - "\n", - "# client = openai.OpenAI(\n", - "# api_key=os.getenv(\"OPENAI_API_KEY\") # Set this in your .env file\n", - "# )\n", - "\n", - "# def get_chat_completion(messages, model=\"gpt-4\", temperature=0.7):\n", - "# try:\n", - "# response = client.chat.completions.create(\n", - "# model=model,\n", - "# messages=messages,\n", - "# temperature=temperature\n", - "# )\n", - "# return response.choices[0].message.content\n", - "# except Exception as e:\n", - "# return f\"❌ Error: {e}\"\n", - "\n", - "# print(\"βœ… OpenAI API configured successfully!\")\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Option C: CircuIT APIs (Azure OpenAI)\n", - "\n", - "If you have CircuIT API access, you can use the Azure OpenAI-backed APIs instead of the Copilot proxy.\n", - "\n", - "- Ensure your environment variables are configured (`CISCO_CLIENT_ID`, `CISCO_CLIENT_SECRET`, `CISCO_OPENAI_APP_KEY`) in the `.env` file.\n", - "\n", - "
\n", - "πŸ’‘ Remember:

\n", - "The values for these enviroment variables can be found at https://ai-chat.cisco.com/bridgeit-platform/api/home by clicking the View button found against the App Key.\n", - "
\n", - "\n", - "If you prefer to use CircuIT APIs, uncomment and run this cell instead:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# import openai\n", - "# import traceback\n", - "# import requests\n", - "# import base64\n", - "# import os\n", - "# from dotenv import load_dotenv\n", - "# from openai import AzureOpenAI\n", - "\n", - "# # Load environment variables\n", - "# load_dotenv()\n", - "\n", - "# # Open AI version to use\n", - "# openai.api_type = \"azure\"\n", - "# openai.api_version = \"2024-12-01-preview\"\n", - "\n", - "# # Get API_KEY wrapped in token - using environment variables\n", - "# client_id = os.getenv(\"CISCO_CLIENT_ID\")\n", - "# client_secret = os.getenv(\"CISCO_CLIENT_SECRET\")\n", - "\n", - "# url = \"https://id.cisco.com/oauth2/default/v1/token\"\n", - "\n", - "# payload = \"grant_type=client_credentials\"\n", - "# value = base64.b64encode(f\"{client_id}:{client_secret}\".encode(\"utf-8\")).decode(\"utf-8\")\n", - "# headers = {\n", - "# \"Accept\": \"*/*\",\n", - "# \"Content-Type\": \"application/x-www-form-urlencoded\",\n", - "# \"Authorization\": f\"Basic {value}\",\n", - "# }\n", - "\n", - "# token_response = requests.request(\"POST\", url, headers=headers, data=payload)\n", - "# print(token_response.text)\n", - "# token_data = token_response.json()\n", - "\n", - "# client = AzureOpenAI(\n", - "# azure_endpoint=\"https://chat-ai.cisco.com\",\n", - "# api_key=token_data.get(\"access_token\"),\n", - "# api_version=\"2024-12-01-preview\",\n", - "# )\n", - "\n", - "# app_key = os.getenv(\"CISCO_OPENAI_APP_KEY\")\n", - "\n", - "# def get_chat_completion(messages, model=\"gpt-4o\", temperature=0.0):\n", - "# try:\n", - "# response = client.chat.completions.create(\n", - "# model=model,\n", - "# messages=messages,\n", - "# temperature=temperature,\n", - "# user=f'{\"appkey\": \"{app_key}\"}',\n", - "# )\n", - "# return response.choices[0].message.content\n", - "# except Exception as e:\n", - "# return f\"❌ Error: {e}\"" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 1.3: Test Your Connection\n", - "\n", - "Let's verify everything is working by running your first structured prompt:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Test the connection with a simple prompt\n", - "test_messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": \"You are a helpful coding assistant. Respond with exactly: 'Connection successful! Ready for prompt engineering.'\"\n", - " },\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": \"Test the connection\"\n", - " }\n", - "]\n", - "\n", - "response = get_chat_completion(test_messages)\n", - "print(\"πŸ§ͺ Test Response:\")\n", - "print(response)\n", - "\n", - "if response and \"Connection successful\" in response:\n", - " print(\"\\\\nπŸŽ‰ Perfect! Your AI connection is working!\")\n", - "else:\n", - " print(\"\\\\n⚠️ Connection test complete, but response format may vary.\")\n", - " print(\"This is normal - let's continue with the tutorial!\")\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Checkpoint: Verify Your Progress\n", - "\n", - "Before continuing, let's make sure everything is working:\n", - "\n", - "1. βœ… Check that you saw \"SUCCESS!\" from the dependency installation\n", - "2. βœ… Verify you saw \"configured successfully!\" from the API setup\n", - "3. 
βœ… Confirm you received a response from the test prompt\n", - "\n", - "If any of these checks fail, see the Troubleshooting section below.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 1.4: Why Prompt Engineering for Software Engineers?\n", - "\n", - "Prompt engineering is the fastest way to harness the power of large language models. By interacting with an LLM through a series of questions, statements, or instructions, you can adjust LLM output behavior based on the specific context of the output you want to achieve.\n", - "\n", - "### πŸ” Traditional Approach vs. Prompt Engineering\n", - "\n", - "| **Traditional Approach** | **Prompt Engineering Approach** |\n", - "| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |\n", - "| ❌ Generic queries: \"Fix this code\" | βœ… Specific requirements: \"Refactor this code following SOLID principles, add type hints, handle edge cases, and maintain backward compatibility\" |\n", - "| ❌ Vague requests: \"Make it better\" | βœ… Systematic analysis: Step-by-step code reviews covering security, performance, and maintainability |\n", - "| ❌ Inconsistent results and quality | βœ… Consistent, production-ready outputs |\n", - "\n", - "### 🎯 Key Benefits\n", - "\n", - "Effective prompt techniques can help you accomplish the following benefits:\n", - "\n", - "- **πŸš€ Boost a model's abilities and improve safety** \n", - " Well-crafted prompts guide models toward more accurate and appropriate responses\n", - "\n", - "- **🧠 Augment the model with domain knowledge and external tools** \n", - " Without changing model parameters or fine-tuning\n", - "\n", - "- **πŸ’‘ Interact with language models to grasp their full capabilities** \n", - " Unlock advanced reasoning and problem-solving abilities\n", - "\n", - "- **πŸ“ˆ Achieve better quality outputs through better quality inputs** \n", - " The precision of your prompts directly impacts the quality of results" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 1.5: Understanding Prompt Structure\n", - "\n", - "A prompt's form depends on the task you are giving to a model. Now that your environment is ready, let's understand what makes prompts effective by examining the **4 Core Elements**. Let's explore a basic example that demonstrates the key prompt elements in action.\n", - "\n", - "### πŸ“– Example Prompt\n", - "\n", - "Review the following prompt example to understand the structure of a prompt.\n", - "\n", - "```text\n", - "You are a helpful writing assistant. Analyze the following email draft and provide suggestions for improvement.\n", - "\n", - "Email context: This is a follow-up email to a client after a project meeting.\n", - "\n", - "Email draft:\n", - "\"Hi there,\n", - "Thanks for the meeting. We discussed some stuff and I think we're on the right track. Let me know if you have questions.\n", - "Best,\n", - "John\"\n", - "\n", - "Please provide your response in the following format:\n", - "1. Tone assessment\n", - "2. Three specific improvement suggestions\n", - "3. Revised email draft\n", - "```\n", - "---\n", - "\n", - "**The Output:**\n", - "\n", - "```text\n", - "1. Tone assessment:\n", - "The current tone is overly casual and vague for professional client communication. It lacks specificity and doesn't reinforce professionalism or next steps.\n", - "\n", - "2. 
Three specific improvement suggestions:\n", - "- Replace \"Hi there\" with the client's name for personalization\n", - "- Replace \"some stuff\" with specific meeting topics discussed\n", - "- Add clear next steps and timeline expectations\n", - "\n", - "3. Revised email draft:\n", - "\"Dear [Client Name],\n", - "\n", - "Thank you for taking the time to meet with us today. We had a productive discussion about the project timeline, budget considerations, and deliverable specifications. Based on our conversation, I believe we're aligned on the project direction and ready to move forward.\n", - "\n", - "I'll send you the detailed project proposal by Friday, and we can schedule a follow-up call next week to address any questions you might have.\n", - "\n", - "Best regards,\n", - "John\"\n", - "```\n", - "---\n", - "\n", - "### The 4 Core Elements of Effective Prompts\n", - "\n", - "Every effective prompt contains these elements:\n", - "\n", - "1. **πŸ“ Instructions** - What you want the AI to do\n", - "2. **🌐 Context** - Background information that helps the AI understand the situation\n", - "3. **πŸ“Š Input Data** - The specific content to work with\n", - "4. **🎨 Output Format** - How you want the response structured\n", - "\n", - "Let's see this in action with a software engineering example:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Example: Code review prompt with all 4 elements\n", - "messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": (\n", - " # 1. INSTRUCTIONS\n", - " \"You are a senior software engineer conducting a code review. \"\n", - " \"Analyze the provided code and identify potential issues.\"\n", - " )\n", - " },\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": f\"\"\"\n", - "# 2. CONTEXT\n", - "Code context: This is a utility function for user registration in a web application.\n", - "\n", - "# 3. INPUT DATA\n", - "Code to review:\n", - "```python\n", - "def register_user(email, password):\n", - " if email and password:\n", - " user = {{\"email\": email, \"password\": password}}\n", - " return user\n", - " return None\n", - "```\n", - "\n", - "# 4. OUTPUT FORMAT\n", - "Please provide your response in this format:\n", - "1. Security Issues (if any)\n", - "2. Code Quality Issues (if any) \n", - "3. Recommended Improvements\n", - "4. Overall Assessment\n", - "\"\"\"\n", - " }\n", - "]\n", - "\n", - "response = get_chat_completion(messages)\n", - "print(\"πŸ” CODE REVIEW RESULT:\")\n", - "print(response)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "
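πŸ’‘ **Going deeper (optional):** Below is a minimal, illustrative sketch of how the 4 elements can be wrapped in a small reusable helper. The `build_review_messages` name and its argument layout are assumptions for illustration, not part of the course code.\n",
- "\n",
- "```python\n",
- "def build_review_messages(instructions, context, input_data, output_format):\n",
- "    \"\"\"Assemble the 4 core prompt elements into a chat message list.\"\"\"\n",
- "    user_content = \"\\n\\n\".join([\n",
- "        \"Context: \" + context,                # 2. Context\n",
- "        \"Input data:\\n\" + input_data,         # 3. Input Data\n",
- "        \"Output format:\\n\" + output_format,   # 4. Output Format\n",
- "    ])\n",
- "    return [\n",
- "        {\"role\": \"system\", \"content\": instructions},  # 1. Instructions\n",
- "        {\"role\": \"user\", \"content\": user_content},\n",
- "    ]\n",
- "```\n",
- "\n",
- "You can pass the returned list straight to `get_chat_completion`, reusing the same structure for any review task.\n",
- "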
\n", - "\n", - "
πŸƒβ€β™€οΈ Practice Exercises
\n", - "\n", - "
Overview:Ready to test your skills? The prompt-engineering-exercises.ipynb notebook contains hands-on activities that reinforce the concepts you've learned in this module.
\n", - "\n", - "
Module 1 Activities:β€’ Activity 1.1: Analyze these prompts and identify missing elements\n", - "β€’ Activity 1.2: Create a complete prompt with all 4 elements for code\n", - "
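\n",
- "\n",
- "**Optional warm-up:** Before opening the exercises notebook, you can try a quick self-check here. The `audit_prompt` helper below is an illustrative sketch (its name and wording are assumptions, not part of the exercises); it simply asks the model which of the 4 elements a prompt contains.\n",
- "\n",
- "```python\n",
- "def audit_prompt(prompt_text):\n",
- "    \"\"\"Ask the model which of the 4 core elements a prompt contains.\"\"\"\n",
- "    messages = [\n",
- "        {\"role\": \"system\",\n",
- "         \"content\": \"You are a prompt engineering reviewer. State whether the prompt \"\n",
- "                    \"you are given contains Instructions, Context, Input Data, and \"\n",
- "                    \"Output Format, then list anything that is missing.\"},\n",
- "        {\"role\": \"user\", \"content\": prompt_text},\n",
- "    ]\n",
- "    return get_chat_completion(messages)\n",
- "\n",
- "print(audit_prompt(\"Fix this code: def calculate(x, y): return x + y\"))\n",
- "```\n",
- "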
" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "πŸŽ‰ **Excellent!** You've just executed a structured prompt with all 4 core elements.\n", - "\n", - "πŸ’‘ **What makes this work?**\n", - "- **Clear role definition** (\"senior software engineer conducting code review\")\n", - "- **Specific context** about the code's purpose\n", - "- **Concrete input** to analyze\n", - "- **Structured output format** for consistent results" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## πŸ“ˆ Tracking Your Progress\n", - "\n", - "### Self-Assessment Questions\n", - "\n", - "After completing Module 1, ask yourself:\n", - "1. Can I explain why structured prompts work better than vague ones?\n", - "2. Can I apply the 4 core elements to my daily coding tasks?\n", - "3. Can I teach a colleague how to write effective prompts?\n", - "4. Can I create variations of prompts for different scenarios?\n", - "\n", - "### Progress Tracking\n", - "\n", - "
\n", - "\n", - "**Module 1 Skills Mastery:** \n", - "
Track your progress by checking off skills below. When you master all 8 skills, you'll have achieved 100% completion!
\n", - "\n", - "**Current Status:**\n", - "- βœ… Environment Setup (Tutorial Completed)\n", - "- βœ… Basic Understanding (Tutorial Completed) \n", - "- ⬜ Skills Mastery (Use Skills Checklist below)\n", - "\n", - "**Progress Guide:**\n", - "- 0-2 skills checked: Beginner (50-63%)\n", - "- 3-5 skills checked: Intermediate (69-81%)\n", - "- 6-7 skills checked: Advanced (88-94%)\n", - "- 8 skills checked: Expert (100%) πŸŽ‰\n", - "\n", - "**Module 2:** Coming Next\n", - "- ⬜ Role Prompting Mastered\n", - "- ⬜ Delimiters & Structure\n", - "- ⬜ Few-Shot Examples\n", - "- ⬜ Chain-of-Thought Reasoning\n", - "\n", - "
\n", - "\n", - "### Skills Checklist\n", - "\n", - "
\n", - "\n", - "Mark each skill as you master it:\n", - "\n", - "**Foundation Skills:**\n", - "
\n", - "- I can identify the 4 core prompt elements in any example\n", - "
\n", - "
\n", - "- I can convert vague requests into structured prompts\n", - "
\n", - "
\n", - "- I can write clear instructions for AI assistants\n", - "
\n", - "
\n", - "- I can provide appropriate context for coding tasks\n", - "
\n", - "\n", - "**Application Skills:**\n", - "
\n", - "- I can use prompts for code review and analysis\n", - "
\n", - "
\n", - "- I can adapt prompts for different programming languages\n", - "
\n", - "
\n", - "- I can troubleshoot when prompts don't work as expected\n", - "
\n", - "
\n", - "- I can explain prompt engineering benefits to my team\n", - "
\n", - "\n", - "
\n", - "\n", - "
\n", - "πŸ’‘ Remember:

\n", - "The goal is not just to complete activities, but to build lasting skills that transform your development workflow!\n", - "
\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "---\n", - "\n", - "## Module 1 Complete! πŸŽ‰\n", - "\n", - "### What You've Accomplished\n", - "- βœ… Set up a working Python environment with AI model access\n", - "- βœ… Successfully executed your first structured prompt\n", - "- βœ… Learned the 4 core elements of effective prompts\n", - "- βœ… Conducted your first AI-powered code review\n", - "\n", - "### Next Steps\n", - "Ready to learn advanced prompting techniques? \n", - "Continue to **Module 2: Core Prompting Techniques** where you'll master:\n", - "- Role prompting and personas for specialized expertise\n", - "- Using delimiters and structured inputs\n", - "- Few-shot examples and chain-of-thought reasoning" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "---\n", - "\n", - "## Troubleshooting\n", - "\n", - "### Common Issues\n", - "\n", - "#### Issue: \"pip install failed\" or \"ModuleNotFoundError\"\n", - "**Solution**: \n", - "1. Make sure you're in the correct directory with `requirements.txt`\n", - "2. Try installing packages individually: `pip install openai anthropic python-dotenv requests`\n", - "3. If using a virtual environment, make sure it's activated\n", - "\n", - "#### Issue: \"Connection failed\" or \"Error connecting to localhost:7711\"\n", - "**Solution**: \n", - "1. Make sure the GitHub Copilot local proxy is running\n", - "2. Follow the setup instructions in `GitHub-Copilot-2-API/README.md`\n", - "3. Alternative: Use Option B (Direct OpenAI API) instead\n", - "\n", - "#### Issue: \"Invalid API key\" or authentication errors\n", - "**Solution**: \n", - "1. For GitHub Copilot: Ensure you've authenticated with `uv run copilot2api auth --business`\n", - "2. For OpenAI: Check that your API key is correctly set in the `.env` file\n", - "3. 
Verify your account has the necessary permissions\n", - "\n", - "## Complete Code Reference\n", - "\n", - "Here's the complete setup code for reference:\n", - "\n", - "```python\n", - "# Install requirements\n", - "import subprocess\n", - "import sys\n", - "subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"-r\", \"../requirements.txt\"])\n", - "\n", - "# Configure API client\n", - "import openai\n", - "client = openai.OpenAI(base_url=\"http://localhost:7711/v1\", api_key=\"dummy-key\")\n", - "\n", - "# Helper function\n", - "def get_chat_completion(messages, model=\"gpt-4\", temperature=0.7):\n", - " response = client.chat.completions.create(\n", - " model=model, messages=messages, temperature=temperature\n", - " )\n", - " return response.choices[0].message.content\n", - "```\n", - "\n", - "---\n", - "\n", - "🎊 **Congratulations!** You've completed Module 1 and are ready to become a prompt engineering expert!\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": ".venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.13.2" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/01-tutorials/module-02-fundamentals/README.md b/01-tutorials/module-02-fundamentals/README.md deleted file mode 100644 index cb63447..0000000 --- a/01-tutorials/module-02-fundamentals/README.md +++ /dev/null @@ -1,62 +0,0 @@ -# Module 2: Fundamentals - -## Core Prompt Engineering Techniques - -This module covers the essential prompt engineering techniques that form the foundation of effective AI assistant interaction for software development. - -### What You'll Learn -- Clear instruction writing and specification techniques -- Role prompting and persona adoption for specialized expertise -- Using delimiters and structured inputs for complex tasks -- Step-by-step reasoning and few-shot learning patterns -- Providing reference text to reduce hallucinations - -### Module Contents -- **[module2.ipynb](./module2.ipynb)** - Complete module 2 tutorial notebook - -### Core Techniques Covered - -#### 1. Clear Instructions & Specifications -- Writing precise, unambiguous prompts -- Specifying constraints, formats, and requirements -- Handling edge cases and error conditions - -#### 2. Role Prompting & Personas -- Adopting specialized engineering roles (security, performance, QA) -- Leveraging domain expertise through persona prompting -- Combining multiple perspectives for comprehensive analysis - -#### 3. Delimiters & Structured Inputs -- Organizing complex multi-file inputs using headers and XML-like tags -- Separating requirements, context, and code cleanly -- Structuring outputs for consistency and parsability - -#### 4. Step-by-Step Reasoning -- Guiding systematic analysis through explicit steps -- Building chains of reasoning for complex problems -- Creating reproducible analytical workflows - -#### 5. 
Few-Shot Learning & Examples -- Providing high-quality examples to establish patterns -- Teaching consistent formatting and style -- Demonstrating edge case handling - -### Learning Objectives -By completing this module, you will: -- βœ… Master the six core prompt engineering techniques -- βœ… Be able to transform vague requests into specific, actionable prompts -- βœ… Know how to structure complex multi-file refactoring tasks -- βœ… Understand how to guide AI assistants through systematic analysis -- βœ… Have practical experience with each technique applied to code - -### Time Required -Approximately 30 minutes - -### Prerequisites -- Completion of [Module 1: Foundations](../module-01-foundations/) -- Working development environment with AI assistant access - -### Next Steps -After completing this module: -1. Practice with [Module 2 exercises](../../02-exercises/hands-on/) -2. Continue to [Module 3: Applications](../module-03-applications/) diff --git a/01-tutorials/module-02-fundamentals/module2.ipynb b/01-tutorials/module-02-fundamentals/module2.ipynb deleted file mode 100644 index b047b6d..0000000 --- a/01-tutorials/module-02-fundamentals/module2.ipynb +++ /dev/null @@ -1,972 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Module 2 - Core Prompting Techniques\n", - "\n", - "## What You'll Learn\n", - "\n", - "In this hands-on module, you'll master the fundamental prompting techniques that professional developers use daily. You'll learn to craft prompts that leverage role-playing, structured inputs, examples, and step-by-step reasoning to get consistently excellent results from AI assistants.\n", - "\n", - "**What you'll accomplish:**\n", - "- βœ… Master role prompting and personas for specialized expertise\n", - "- βœ… Use delimiters and structured inputs for complex scenarios\n", - "- βœ… Apply few-shot examples to establish consistent output styles\n", - "- βœ… Implement chain-of-thought reasoning for complex problems\n", - "- βœ… Build advanced prompts that reference external documentation\n", - "- βœ… Create production-ready prompts for software engineering tasks\n", - "\n", - "## Prerequisites\n", - "\n", - "### Required Knowledge\n", - "- Completion of Module 1 (Foundation Setup) or equivalent experience\n", - "- Basic understanding of prompt structure (instructions, context, input, output format)\n", - "- Familiarity with Python and software development concepts\n", - "\n", - "### Required Setup\n", - "- [ ] Python 3.8+ installed on your system\n", - "- [ ] IDE with notebook support (VS Code, Cursor, or Jupyter)\n", - "- [ ] API access to either:\n", - " - GitHub Copilot (preferred for this tutorial)\n", - " - CircuIT APIs, or\n", - " - OpenAI API key\n", - "\n", - "### Time Required\n", - "- Approximately 45 minutes total\n", - "- Can be completed in 2 sessions of 20-25 minutes each\n", - "\n", - "## Tutorial Structure\n", - "\n", - "### Part 1: Role Prompting and Personas (15 min)\n", - "- Learn to assign specific expertise roles to AI assistants\n", - "- Practice with software engineering personas\n", - "- See immediate improvements in response quality\n", - "\n", - "### Part 2: Structured Inputs and Delimiters (15 min)\n", - "- Master the use of delimiters for complex inputs\n", - "- Organize multi-file code scenarios\n", - "- Handle mixed content types effectively\n", - "\n", - "### Part 3: Examples and Chain-of-Thought (15 min)\n", - "- Use few-shot examples to establish consistent styles\n", - "- Implement step-by-step reasoning 
for complex tasks\n", - "- Build systematic approaches to code analysis\n", - "\n", - "---\n", - "\n", - "## πŸš€ Ready to Start?\n", - "\n", - "**Important:** This module requires fresh setup. Even if you completed Module 1, please run the setup cells below to ensure everything works correctly.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "---\n", - "\n", - "# Fresh Environment Setup\n", - "\n", - "Even if you completed Module 1, please run these setup cells to ensure your environment is ready for Module 2.\n", - "\n", - "## Step 0.1: Install Dependencies\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "❌ Installation failed: Command '['/Users/snekarma/Development/SplunkDev/prompteng-devs/.venv/bin/python', '-m', 'pip', 'install', '-r', './requirements.txt']' returned non-zero exit status 1.\n", - "πŸ’‘ Try running: pip install openai anthropic python-dotenv requests\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "/Users/snekarma/Development/SplunkDev/prompteng-devs/.venv/bin/python: No module named pip\n" - ] - } - ], - "source": [ - "# Install required packages for Module 2\n", - "import subprocess\n", - "import sys\n", - "\n", - "def install_requirements():\n", - " try:\n", - " # Install from requirements.txt\n", - " subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"-r\", \"requirements.txt\"])\n", - " print(\"βœ… SUCCESS! Module 2 dependencies installed successfully.\")\n", - " print(\"πŸ“¦ Ready for: openai, anthropic, python-dotenv, requests\")\n", - " except subprocess.CalledProcessError as e:\n", - " print(f\"❌ Installation failed: {e}\")\n", - " print(\"πŸ’‘ Try running: pip install openai anthropic python-dotenv requests\")\n", - "\n", - "install_requirements()\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 0.2: Configure API Connection\n", - "\n", - "Choose your preferred API option and run the corresponding cell:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Option A: GitHub Copilot API setup (Recommended)\n", - "import openai\n", - "import os\n", - "\n", - "# Configure for local GitHub Copilot proxy\n", - "client = openai.OpenAI(\n", - " base_url=\"http://localhost:7711/v1\",\n", - " api_key=\"dummy-key\"\n", - ")\n", - "\n", - "def get_chat_completion(messages, model=\"gpt-4\", temperature=0.7):\n", - " \"\"\"Get a chat completion from the AI model.\"\"\"\n", - " try:\n", - " response = client.chat.completions.create(\n", - " model=model,\n", - " messages=messages,\n", - " temperature=temperature\n", - " )\n", - " return response.choices[0].message.content\n", - " except Exception as e:\n", - " return f\"❌ Error: {e}\\\\nπŸ’‘ Make sure GitHub Copilot proxy is running on port 7711\"\n", - "\n", - "print(\"βœ… GitHub Copilot API configured for Module 2!\")\n", - "print(\"πŸ”— Connected to: http://localhost:7711\")\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 0.3: Verify Setup\n", - "\n", - "Let's test that everything is working before we begin:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Quick setup verification\n", - "test_messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": \"You are a prompt engineering instructor. 
Respond with: 'Module 2 setup verified! Ready to learn core techniques.'\"\n", - " },\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": \"Test Module 2 setup\"\n", - " }\n", - "]\n", - "\n", - "response = get_chat_completion(test_messages)\n", - "print(\"πŸ§ͺ Setup Test:\")\n", - "print(response)\n", - "\n", - "if response and (\"verified\" in response.lower() or \"ready\" in response.lower()):\n", - " print(\"\\\\nπŸŽ‰ Perfect! Module 2 environment is ready!\")\n", - "else:\n", - " print(\"\\\\n⚠️ Setup test complete. Let's continue with the tutorial!\")\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "---\n", - "\n", - "# Part 1: Role Prompting and Personas\n", - "\n", - "In this section, you'll learn to assign specific roles and expertise to AI assistants, dramatically improving the quality and relevance of their responses.\n", - "\n", - "## Learning Outcomes for Part 1\n", - "\n", - "By the end of this section, you will:\n", - "- [ ] Understand how personas improve AI responses\n", - "- [ ] Write effective role prompts for software engineering tasks\n", - "- [ ] See immediate improvements in code review quality\n", - "- [ ] Know when and how to use different engineering personas\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 1.1: Your First Role Prompt\n", - "\n", - "Let's start with a simple example to see the power of role prompting. We'll compare a generic request with a role-specific one.\n", - "\n", - "**First, let's try a generic approach:**\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Generic approach - no specific role\n", - "generic_messages = [\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": \"Look at this function and tell me what you think: def calc(x, y): return x + y if x > 0 and y > 0 else 0\"\n", - " }\n", - "]\n", - "\n", - "generic_response = get_chat_completion(generic_messages)\n", - "print(\"πŸ” GENERIC RESPONSE:\")\n", - "print(generic_response)\n", - "print(\"\\\\n\" + \"=\"*50 + \"\\\\n\")\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "**Now, let's try the same request with a specific role:**\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Role-specific approach - code reviewer persona\n", - "role_messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": \"\"\"You are a senior code reviewer.\n", - "\n", - " Analyze the provided code and give exactly 3 specific feedback points: \n", - " 1. about code structure\n", - " 2. about naming conventions\n", - " 3. 
about potential improvements\n", - " \n", - " Format each point as a bullet with the category in brackets.\"\"\"\n", - " },\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": \"def calc(x, y): return x + y if x > 0 and y > 0 else 0\"\n", - " }\n", - "]\n", - "\n", - "role_response = get_chat_completion(role_messages)\n", - "print(\"🎯 ROLE-SPECIFIC RESPONSE:\")\n", - "print(role_response)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "πŸŽ‰ **Amazing difference!** Notice how the role-specific response is more structured, actionable, and focused.\n", - "\n", - "πŸ’‘ **What made the difference?**\n", - "- **Specific expertise role** (\"senior code reviewer\")\n", - "- **Clear output requirements** (exactly 3 points with specific categories)\n", - "- **Structured format** (bullets with category labels)\n", - "\n", - "## Step 1.2: Software Engineering Personas\n", - "\n", - "Let's practice with different software engineering roles to see how each provides specialized expertise:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Security Engineer Persona\n", - "security_messages = [\n", - " {\n", - " \"role\": \"system\", \n", - " \"content\": \"You are a security engineer. Review code for security vulnerabilities and provide specific recommendations.\"\n", - " },\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": \"\"\"Review this login function:\n", - " \n", - "def login(username, password):\n", - " query = f\"SELECT * FROM users WHERE username = '{username}' AND password = '{password}'\"\n", - " result = database.execute(query)\n", - " return result\"\"\"\n", - " }\n", - "]\n", - "\n", - "security_response = get_chat_completion(security_messages)\n", - "print(\"πŸ”’ SECURITY ENGINEER ANALYSIS:\")\n", - "print(security_response)\n", - "print(\"\\\\n\" + \"=\"*50 + \"\\\\n\")\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Performance Engineer Persona\n", - "performance_messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": \"You are a performance engineer. Analyze code for efficiency issues and optimization opportunities.\"\n", - " },\n", - " {\n", - " \"role\": \"user\", \n", - " \"content\": \"\"\"Analyze this data processing function:\n", - "\n", - "def process_data(items):\n", - " result = []\n", - " for item in items:\n", - " if len(item) > 3:\n", - " result.append(item.upper())\n", - " return result\"\"\"\n", - " }\n", - "]\n", - "\n", - "performance_response = get_chat_completion(performance_messages)\n", - "print(\"⚑ PERFORMANCE ENGINEER ANALYSIS:\")\n", - "print(performance_response)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Checkpoint: Compare the Responses\n", - "\n", - "Notice how each engineering persona focused on their area of expertise:\n", - "\n", - "- **Security Engineer**: Identified SQL injection vulnerabilities and authentication issues\n", - "- **Performance Engineer**: Suggested list comprehensions and optimization techniques\n", - "\n", - "βœ… **Success!** You've seen how role prompting provides specialized, expert-level analysis.\n", - "\n", - "## Step 1.3: Practice - Create Your Own Persona\n", - "\n", - "Now it's your turn! 
Create a \"QA Engineer\" persona to analyze test coverage:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Your turn: Create a QA Engineer persona\n", - "# Fill in the system message to create a QA Engineer role\n", - "\n", - "qa_messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": \"You are a QA engineer. Analyze the provided function and identify test cases needed, including edge cases and error scenarios. Provide specific test recommendations.\"\n", - " },\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": \"\"\"Analyze test coverage needed for this function:\n", - "\n", - "def calculate_discount(price, discount_percent):\n", - " if discount_percent > 100:\n", - " raise ValueError(\"Discount cannot exceed 100%\")\n", - " if price < 0:\n", - " raise ValueError(\"Price cannot be negative\")\n", - " return price * (1 - discount_percent / 100)\"\"\"\n", - " }\n", - "]\n", - "\n", - "qa_response = get_chat_completion(qa_messages)\n", - "print(\"πŸ§ͺ QA ENGINEER ANALYSIS:\")\n", - "print(qa_response)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "πŸŽ‰ **Excellent!** You've created your own engineering persona and seen how it provides specialized test analysis.\n", - "\n", - "---\n", - "\n", - "# Part 2: Structured Inputs and Delimiters\n", - "\n", - "Now you'll learn to organize complex inputs using delimiters, making your prompts crystal clear even with multiple files, requirements, and data types.\n", - "\n", - "## Learning Outcomes for Part 2\n", - "\n", - "By the end of this section, you will:\n", - "- [ ] Use delimiters to organize complex, multi-part inputs\n", - "- [ ] Handle multi-file code scenarios effectively\n", - "- [ ] Separate different types of content (code, requirements, documentation)\n", - "- [ ] Build prompts that scale to real-world complexity\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 2.1: Basic Delimiters\n", - "\n", - "Let's start with a simple example showing how delimiters clarify different sections of your prompt:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Using delimiters to refactor code\n", - "function_code = \"def process_data(items): return [x.upper() for x in items if len(x) > 3]\"\n", - "requirements = \"Follow PEP 8 style guide, add type hints, improve readability\"\n", - "\n", - "delimiter_messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": \"You are a Python code reviewer. 
Provide only the refactored code without explanations.\"\n", - " },\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": f\"\"\"Refactor this function based on the requirements:\n", - "\n", - "### CODE ###\n", - "{function_code}\n", - "###\n", - "\n", - "### REQUIREMENTS ###\n", - "{requirements}\n", - "###\n", - "\n", - "Return only the improved function code.\"\"\"\n", - " }\n", - "]\n", - "\n", - "delimiter_response = get_chat_completion(delimiter_messages)\n", - "print(\"πŸ”§ REFACTORED CODE:\")\n", - "print(delimiter_response)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 2.2: Multi-File Scenarios with XML Delimiters\n", - "\n", - "For complex projects with multiple files, XML-style delimiters work even better:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Multi-file analysis with XML delimiters\n", - "multifile_messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": \"You are a software architect. Analyze the provided files and identify architectural concerns.\"\n", - " },\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": \"\"\"\n", - "\n", - "class User:\n", - " def __init__(self, email, password):\n", - " self.email = email\n", - " self.password = password\n", - " \n", - " def save(self):\n", - " # Save to database\n", - " pass\n", - "\n", - "\n", - "\n", - "from flask import Flask, request\n", - "app = Flask(__name__)\n", - "\n", - "@app.route('/register', methods=['POST'])\n", - "def register():\n", - " email = request.form['email']\n", - " password = request.form['password']\n", - " user = User(email, password)\n", - " user.save()\n", - " return \"User registered\"\n", - "\n", - "\n", - "\n", - "- Follow separation of concerns\n", - "- Add input validation\n", - "- Implement proper error handling\n", - "- Use dependency injection\n", - "\n", - "\n", - "Provide architectural recommendations for improving this code structure.\n", - "\"\"\"\n", - " }\n", - "]\n", - "\n", - "multifile_response = get_chat_completion(multifile_messages)\n", - "print(\"πŸ—οΈ ARCHITECTURAL ANALYSIS:\")\n", - "print(multifile_response)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "---\n", - "\n", - "# Part 3: Examples and Chain-of-Thought\n", - "\n", - "In this final section, you'll master two powerful techniques: few-shot examples to establish consistent styles, and chain-of-thought reasoning for complex problem solving.\n", - "\n", - "## Learning Outcomes for Part 3\n", - "\n", - "By the end of this section, you will:\n", - "- [ ] Use few-shot examples to teach AI your preferred response style\n", - "- [ ] Implement step-by-step reasoning for complex tasks\n", - "- [ ] Build systematic approaches to code analysis\n", - "- [ ] Create production-ready prompts that scale\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Step 3.1: Few-Shot Examples for Consistent Style\n", - "\n", - "Let's teach the AI to explain technical concepts in a specific, consistent style:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Few-shot examples for consistent explanations\n", - "few_shot_messages = [\n", - " {\"role\": \"system\", \"content\": \"Answer in a consistent style using the examples provided.\"},\n", - " \n", - " # Example 1\n", - " {\"role\": \"user\", \"content\": \"Explain Big O notation for O(1).\"},\n", - " {\"role\": 
\"assistant\", \"content\": \"O(1) means constant time - the algorithm takes the same amount of time regardless of input size.\"},\n", - " \n", - " # Example 2 \n", - " {\"role\": \"user\", \"content\": \"Explain Big O notation for O(n).\"},\n", - " {\"role\": \"assistant\", \"content\": \"O(n) means linear time - the algorithm's runtime grows proportionally with the input size.\"},\n", - " \n", - " # New question following the established pattern\n", - " {\"role\": \"user\", \"content\": \"Explain Big O notation for O(log n).\"}\n", - "]\n", - "\n", - "few_shot_response = get_chat_completion(few_shot_messages)\n", - "print(\"πŸ“š CONSISTENT STYLE RESPONSE:\")\n", - "print(few_shot_response)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "🎯 **Perfect!** Notice how the AI learned the exact format and style from the examples and applied it consistently.\n", - "\n", - "## Step 3.2: Chain-of-Thought Reasoning\n", - "\n", - "Now let's implement step-by-step reasoning for complex code analysis tasks:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Chain-of-thought for systematic code analysis\n", - "system_message = \"\"\"Use the following step-by-step instructions to analyze code:\n", - "\n", - "Step 1 - Count the number of functions in the code snippet with a prefix that says 'Function Count: '\n", - "Step 2 - List each function name with its line number with a prefix that says 'Function List: '\n", - "Step 3 - Identify any functions that are longer than 10 lines with a prefix that says 'Long Functions: '\n", - "Step 4 - Provide an overall assessment with a prefix that says 'Assessment: '\"\"\"\n", - "\n", - "user_message = \"\"\"\n", - "def calculate_tax(income, deductions):\n", - " taxable_income = income - deductions\n", - " if taxable_income <= 0:\n", - " return 0\n", - " elif taxable_income <= 50000:\n", - " return taxable_income * 0.1\n", - " else:\n", - " return 50000 * 0.1 + (taxable_income - 50000) * 0.2\n", - "\n", - "def format_currency(amount):\n", - " return f\"${amount:,.2f}\"\n", - "\n", - "def generate_report(name, income, deductions):\n", - " tax = calculate_tax(income, deductions)\n", - " net_income = income - tax\n", - " \n", - " print(f\"Tax Report for {name}\")\n", - " print(f\"Gross Income: {format_currency(income)}\")\n", - " print(f\"Deductions: {format_currency(deductions)}\")\n", - " print(f\"Tax Owed: {format_currency(tax)}\")\n", - " print(f\"Net Income: {format_currency(net_income)}\")\n", - "\"\"\"\n", - "\n", - "chain_messages = [\n", - " {\"role\": \"system\", \"content\": system_message},\n", - " {\"role\": \"user\", \"content\": user_message}\n", - "]\n", - "\n", - "chain_response = get_chat_completion(chain_messages)\n", - "print(\"πŸ”— CHAIN-OF-THOUGHT ANALYSIS:\")\n", - "print(chain_response)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "πŸš€ **Excellent!** The AI followed each step methodically, providing structured, comprehensive analysis.\n", - "\n", - "## Step 3.3: Practice - Combine All Techniques\n", - "\n", - "Now let's put everything together in a real-world scenario that combines role prompting, delimiters, and chain-of-thought:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Comprehensive example combining all techniques\n", - "comprehensive_messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": \"\"\"You are a senior software 
engineer conducting a comprehensive code review.\n", - "\n", - "Follow this systematic process:\n", - "Step 1 - Security Analysis: Identify potential security vulnerabilities\n", - "Step 2 - Performance Review: Analyze efficiency and optimization opportunities \n", - "Step 3 - Code Quality: Evaluate readability, maintainability, and best practices\n", - "Step 4 - Recommendations: Provide specific, prioritized improvement suggestions\n", - "\n", - "Format each step clearly with the step name as a header.\"\"\"\n", - " },\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": \"\"\"\n", - "\n", - "from flask import Flask, request, jsonify\n", - "import sqlite3\n", - "\n", - "app = Flask(__name__)\n", - "\n", - "@app.route('/user/')\n", - "def get_user(user_id):\n", - " conn = sqlite3.connect('users.db')\n", - " cursor = conn.cursor()\n", - " cursor.execute(f\"SELECT * FROM users WHERE id = {user_id}\")\n", - " user = cursor.fetchone()\n", - " conn.close()\n", - " \n", - " if user:\n", - " return jsonify({\n", - " \"id\": user[0],\n", - " \"name\": user[1], \n", - " \"email\": user[2]\n", - " })\n", - " else:\n", - " return jsonify({\"error\": \"User not found\"}), 404\n", - "\n", - "\n", - "\n", - "This is a user lookup endpoint for a web application that serves user profiles.\n", - "The application handles 1000+ requests per minute during peak hours.\n", - "\n", - "\n", - "Perform a comprehensive code review following the systematic process.\n", - "\"\"\"\n", - " }\n", - "]\n", - "\n", - "comprehensive_response = get_chat_completion(comprehensive_messages)\n", - "print(\"πŸ” COMPREHENSIVE CODE REVIEW:\")\n", - "print(comprehensive_response)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## πŸ“ˆ Tracking Your Progress\n", - "\n", - "### Self-Assessment Questions\n", - "\n", - "After completing Module 2, ask yourself:\n", - "1. Can I explain how role prompting improves AI responses?\n", - "2. Can I use delimiters effectively to organize complex inputs?\n", - "3. Can I create few-shot examples to establish consistent styles?\n", - "4. Can I implement chain-of-thought reasoning for systematic analysis?\n", - "\n", - "### Progress Tracking Template\n", - "\n", - "
\n", - "\n", - "**Module 2 Skills Mastery:** \n", - "
Track your progress by checking off skills below. When you master all 8 skills, you'll have achieved 100% completion!
\n", - "\n", - "**Current Status:**\n", - "- βœ… Environment Setup (Tutorial Completed)\n", - "- βœ… Core Techniques Learned (Tutorial Completed) \n", - "- ⬜ Skills Mastery (Use Skills Checklist below)\n", - "\n", - "**Progress Guide:**\n", - "- 0-2 skills checked: Beginner (50-63%)\n", - "- 3-5 skills checked: Intermediate (69-81%)\n", - "- 6-7 skills checked: Advanced (88-94%)\n", - "- 8 skills checked: Expert (100%) πŸŽ‰\n", - "\n", - "**Module 3:** Coming Next\n", - "- ⬜ Advanced Applications\n", - "- ⬜ Complex Refactoring Scenarios\n", - "- ⬜ Testing and QA Workflows\n", - "- ⬜ Production Debugging Prompts\n", - "\n", - "
\n", - "\n", - "### Skills Checklist\n", - "\n", - "
\n", - "\n", - "Mark each skill as you master it:\n", - "\n", - "**Role Prompting & Personas:**\n", - "
\n", - "- I can create effective software engineering personas (security, performance, QA)\n", - "
\n", - "
\n", - "- I can assign specific expertise roles to get specialized analysis\n", - "
\n", - "\n", - "**Structured Inputs & Delimiters:**\n", - "
\n", - "- I can use delimiters (### or XML) to organize complex inputs\n", - "
\n", - "
\n", - "- I can handle multi-file scenarios with clear structure\n", - "
\n", - "\n", - "**Examples & Chain-of-Thought:**\n", - "
\n", - "- I can create few-shot examples to establish consistent response styles\n", - "
\n", - "
\n", - "- I can implement step-by-step reasoning for systematic analysis\n", - "
\n", - "\n", - "**Advanced Applications:**\n", - "
\n", - "- I can combine all techniques in production-ready prompts\n", - "
\n", - "
\n", - "- I can create comprehensive code review prompts with multiple perspectives\n", - "
\n", - "\n", - "
\n", - "\n", - "
\n", - "πŸ’‘ Remember: The goal is not just to complete activities, but to build lasting skills that transform your development workflow!\n", - "
\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "
\n", - "\n", - "
πŸƒβ€β™€οΈ Practice Exercises
\n", - "\n", - "
Overview:Ready to test your skills? The prompt-engineering-exercises.ipynb notebook contains hands-on activities that reinforce the concepts you've learned in this module.
\n", - "\n", - "
Module 2 Activities:β€’ Activity 2.1: Convert vague prompts to specific ones using real code examples\n", - "β€’ Activity 2.2: Persona Adoption Workshop - Compare insights from Security, Performance, and QA engineers \n", - "β€’ Activity 2.3: Delimiter Mastery Exercise - Organize multi-file refactoring scenarios\n", - "β€’ Activity 2.4: Step-by-Step Reasoning Lab - Systematic code review with explicit steps
\n", - "\n", - "
How to Access:1. Open: notebooks/activities/prompt-engineering-exercises.ipynb\n", - "2. Complete the setup section to configure your API access\n", - "3. Work through Module 2 activities - they build on concepts from this tutorial\n", - "4. Track your progress using the competency checklist included
\n", - "\n", - "
🎯 Complete the practice exercises to solidify your understanding and build confidence with real scenarios!
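\n",
- "\n",
- "**Optional warm-up:** The illustrative loop below (the persona wording is an assumption, not taken from the exercises) runs the same tiny function past three personas so you can compare what each one focuses on before attempting Activity 2.2.\n",
- "\n",
- "```python\n",
- "personas = {\n",
- "    \"Security\": \"You are a security engineer. Focus on vulnerabilities and unsafe input handling.\",\n",
- "    \"Performance\": \"You are a performance engineer. Focus on efficiency and scalability.\",\n",
- "    \"QA\": \"You are a QA engineer. Focus on edge cases, error handling, and testability.\",\n",
- "}\n",
- "snippet = \"def calc(x, y): return x + y if x > 0 and y > 0 else 0\"\n",
- "\n",
- "for name, persona in personas.items():\n",
- "    messages = [\n",
- "        {\"role\": \"system\", \"content\": persona},\n",
- "        {\"role\": \"user\", \"content\": f\"Review this function:\\n{snippet}\"},\n",
- "    ]\n",
- "    print(f\"=== {name} ===\")\n",
- "    print(get_chat_completion(messages))\n",
- "```\n",
- "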
\n", - "\n", - "
" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "πŸŽ‰ **Outstanding!** You've just executed a production-quality prompt that combines:\n", - "- **Role prompting** (senior software engineer)\n", - "- **Structured delimiters** (`` and ``)\n", - "- **Chain-of-thought reasoning** (4-step systematic process)\n", - "\n", - "This is exactly how professional developers use AI assistants for real-world code reviews!\n", - "\n", - "---\n", - "\n", - "## Module 2 Complete! 🎊\n", - "\n", - "### What You've Accomplished\n", - "\n", - "- βœ… **Mastered role prompting** and saw how personas provide specialized expertise\n", - "- βœ… **Used delimiters effectively** to organize complex, multi-part inputs\n", - "- βœ… **Applied few-shot examples** to establish consistent response styles\n", - "- βœ… **Implemented chain-of-thought reasoning** for systematic analysis\n", - "- βœ… **Combined all techniques** in a production-ready code review prompt\n", - "\n", - "### Key Takeaways\n", - "\n", - "1. **Role Prompting**: Assign specific expertise roles (security engineer, QA engineer, etc.) for specialized analysis\n", - "2. **Delimiters**: Use `###` or `` tags to organize complex inputs with multiple files/requirements\n", - "3. **Few-Shot Examples**: Provide 2-3 examples to teach the AI your preferred response style\n", - "4. **Chain-of-Thought**: Break complex tasks into numbered steps for systematic processing\n", - "\n", - "### Real-World Applications\n", - "\n", - "You can now create prompts for:\n", - "- **Code Reviews**: Multi-step analysis covering security, performance, and quality\n", - "- **Refactoring**: Structured input with original code, requirements, and context\n", - "- **Documentation**: Consistent style across your team using few-shot examples\n", - "- **Debugging**: Step-by-step problem analysis and solution development\n", - "\n", - "### Next Steps\n", - "\n", - "Continue to **Module 3: Advanced Software Engineering Applications** where you'll learn:\n", - "- Building prompts for complex refactoring scenarios\n", - "- Creating systematic testing and QA workflows\n", - "- Designing effective debugging and performance optimization prompts\n", - "- Developing API integration and documentation helpers\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "---\n", - "\n", - "## Troubleshooting\n", - "\n", - "### Common Issues\n", - "\n", - "#### Issue: \"Persona responses are too generic\"\n", - "**Solution**: \n", - "1. Be more specific about the role (e.g., \"senior Python security engineer\" vs \"engineer\")\n", - "2. Add specific output requirements (\"provide exactly 3 recommendations\")\n", - "3. Include context about the expertise level needed\n", - "\n", - "#### Issue: \"Delimiters aren't working properly\"\n", - "**Solution**: \n", - "1. Make sure delimiters are unique and consistent (use `###` or `` consistently)\n", - "2. Always close XML-style delimiters (`...`)\n", - "3. Test with simpler examples first before adding complexity\n", - "\n", - "#### Issue: \"AI isn't following the step-by-step process\"\n", - "**Solution**: \n", - "1. Number your steps clearly (Step 1, Step 2, etc.)\n", - "2. Specify output format for each step (\"with a prefix that says...\")\n", - "3. Keep steps focused and specific rather than too broad\n", - "\n", - "#### Issue: \"Few-shot examples aren't being followed\"\n", - "**Solution**: \n", - "1. Provide 2-3 consistent examples, not just one\n", - "2. 
Make sure examples clearly demonstrate the pattern you want\n", - "3. Use the system message to explicitly state \"follow the pattern shown\"\n", - "\n", - "### Quick Reference\n", - "\n", - "```python\n", - "# Template for combining all techniques\n", - "messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": \"\"\"You are a [SPECIFIC ROLE].\n", - "\n", - "Follow this process:\n", - "Step 1 - [First analysis step]\n", - "Step 2 - [Second analysis step] \n", - "Step 3 - [Final recommendations]\n", - "\n", - "Format: [Specify output format]\"\"\"\n", - " },\n", - " {\n", - " \"role\": \"user\", \n", - " \"content\": \"\"\"\n", - "\n", - "[Your content here]\n", - "\n", - "\n", - "\n", - "[Your requirements here]\n", - "\n", - "\n", - "[Your specific request]\"\"\"\n", - " }\n", - "]\n", - "```\n", - "\n", - "🎯 **You're now ready to create professional-grade prompts for any software engineering task!**\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": ".venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.13.2" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/02-exercises/README.md b/02-exercises/README.md deleted file mode 100644 index 90c8f57..0000000 --- a/02-exercises/README.md +++ /dev/null @@ -1,83 +0,0 @@ -# 02-exercises: Hands-On Practice Activities - -This directory contains practical exercises and assessments to reinforce prompt engineering concepts through hands-on coding and implementation tasks. - -## Structure - -### Exercise Materials -- **[hands-on/](./hands-on/)** - Interactive exercises and practice activities - - `prompt-engineering-exercises.ipynb` - Main exercise notebook with guided activities - - `circuit_setup.py` - CircuIT API setup helper - - `github_copilot_setup.py` - GitHub Copilot local proxy setup - -### Solutions & References -- **[solutions/](./solutions/)** - Complete solutions and reference implementations - - `prompt-engineering-solutions.ipynb` - Detailed solutions with explanations - -## Exercise Categories - -### Module 1 Exercises: Environment & Basics -- **Environment Setup** - Email improvement workflow to verify API connectivity -- **Code Comment Enhancement** - Practice technical documentation improvements -- **Prompt Structure Analysis** - Identify and apply the four core prompt elements -- **Complete Prompt Construction** - Build comprehensive prompts with all elements - -### Module 2 Exercises: Core Techniques -- **Vague to Specific** - Transform unclear requests into precise, testable prompts -- **Persona Workshop** - Apply multiple engineering perspectives to code analysis -- **Delimiter Mastery** - Organize complex multi-file refactoring inputs -- **Step-by-Step Reasoning** - Guide systematic code review processes - -### Module 3 Exercises: Real Applications -- **Code Refactoring Project** - Modernize legacy code with comprehensive requirements -- **Production Debugging** - Investigate and resolve critical incidents using prompt chains -- **API Integration Workshop** - Build production-ready clients from documentation - -### Module 4 Exercises: Custom Integration -- **Command Creation Challenge** - Author reusable custom commands for common tasks -- **Team Implementation Plan** - Design adoption strategy with training and metrics -- **Advanced 
Command Patterns** - Build complex, chained workflows with conditional logic - -## Getting Started - -### Prerequisites -- Complete [01-tutorials/](../01-tutorials/) modules 1-2 minimum -- Python 3.8+ with virtual environment set up -- API access configured (GitHub Copilot preferred) -- IDE with notebook support (VS Code or Cursor recommended) - -### Setup Instructions -1. **Environment**: Ensure your `.venv` is activated and dependencies installed -2. **API Configuration**: Run the appropriate setup script: - - For GitHub Copilot: `python github_copilot_setup.py` - - For CircuIT: `python circuit_setup.py` -3. **Open Exercises**: Launch `hands-on/prompt-engineering-exercises.ipynb` - -### Assessment Approach -- **Self-Guided Learning**: Work through exercises at your own pace -- **Compare Solutions**: Check your work against reference solutions -- **Iterate and Improve**: Refine prompts based on output quality and consistency -- **Apply to Real Work**: Adapt patterns to your actual development tasks - -## Exercise Flow - -1. **Read Objective** - Understand what you're practicing and why -2. **Follow Instructions** - Complete the guided tasks step-by-step -3. **Execute and Evaluate** - Run your prompts and assess the results -4. **Compare Solutions** - Review reference implementations and explanations -5. **Reflect and Adapt** - Consider how to apply these patterns in your work - -## Assessment Criteria - -Each exercise includes specific success criteria: -- **Functional Requirements** - Does the output meet the specified requirements? -- **Quality Standards** - Is the response accurate, relevant, and well-structured? -- **Consistency** - Do repeated executions produce reliable results? -- **Practical Value** - Can this pattern be applied to real development tasks? 
- -## Next Steps - -After completing exercises, explore: -- **[03-examples/](../03-examples/)** - Real-world implementation patterns and use cases -- **Production Integration** - Apply learned patterns to your actual development workflow -- **Team Adoption** - Share effective patterns with your development team diff --git a/02-exercises/hands-on/circuit_setup.py b/02-exercises/hands-on/circuit_setup.py deleted file mode 100644 index e7e966a..0000000 --- a/02-exercises/hands-on/circuit_setup.py +++ /dev/null @@ -1,57 +0,0 @@ -# Option 2: CircuIT APIs (Azure OpenAI) setup -# Run this cell if you have CircuIT API access - -import warnings -warnings.filterwarnings('ignore') - -import openai -import traceback -import requests -import base64 -import os -from dotenv import load_dotenv - -# Load environment variables -load_dotenv() - -# Open AI version to use -openai.api_type = "azure" -openai.api_version = "2024-12-01-preview" - -# Get API_KEY wrapped in token - using environment variables -client_id = os.getenv('CISCO_CLIENT_ID') -client_secret = os.getenv('CISCO_CLIENT_SECRET') - -url = "https://id.cisco.com/oauth2/default/v1/token" - -payload = "grant_type=client_credentials" -value = base64.b64encode(f"{client_id}:{client_secret}".encode("utf-8")).decode("utf-8") -headers = { - "Accept": "*/*", - "Content-Type": "application/x-www-form-urlencoded", - "Authorization": f"Basic {value}", -} - -token_response = requests.request("POST", url, headers=headers, data=payload) -token_data = token_response.json() - -from openai import AzureOpenAI - -client = AzureOpenAI( - azure_endpoint="https://chat-ai.cisco.com", - api_key=token_data.get('access_token'), - api_version="2024-12-01-preview" -) - -app_key = os.getenv("CISCO_OPENAI_APP_KEY") - -def get_chat_completion(messages, model="gpt-4o", temperature=0.0): - response = client.chat.completions.create( - model=model, - messages=messages, - temperature=temperature, - user=f'{{"appkey": "{app_key}"}}' - ) - return response.choices[0].message.content - -print("βœ… CircuIT API setup complete! 
Ready for activities.") \ No newline at end of file diff --git a/02-exercises/hands-on/github_copilot_setup.py b/02-exercises/hands-on/github_copilot_setup.py deleted file mode 100644 index a5aad27..0000000 --- a/02-exercises/hands-on/github_copilot_setup.py +++ /dev/null @@ -1,39 +0,0 @@ -# Option 1: GitHub Copilot (local API proxy) setup -# Run this cell if you don't have CircuIT access -# Make sure you have followed the setup steps in GitHub-Copilot-2-API/README.md first - -import warnings -warnings.filterwarnings('ignore') - -import anthropic -from typing import Any, List - -def _extract_text_from_blocks(blocks: List[Any]) -> str: - """Extract text content from response blocks returned by the API.""" - parts: List[str] = [] - for block in blocks: - text_val = getattr(block, "text", None) - if isinstance(text_val, str): - parts.append(text_val) - elif isinstance(block, dict): - t = block.get("text") - if isinstance(t, str): - parts.append(t) - return "\n".join(parts) - -# This function handles communication with the GitHub Copilot API proxy -# Select the appropriate model from those available through the proxy server -def get_chat_completion(messages, model="claude-sonnet-4", temperature=0.0): - client = anthropic.Anthropic( - api_key="dummy-key", # not used by local proxy - base_url="http://localhost:7711" - ) - response = client.messages.create( - model=model, - max_tokens=1000, - messages=messages, - temperature=temperature - ) - return _extract_text_from_blocks(getattr(response, "content", [])) - -print("βœ… GitHub Copilot API setup complete! Ready for activities.") \ No newline at end of file diff --git a/02-exercises/hands-on/prompt-engineering-exercises.ipynb b/02-exercises/hands-on/prompt-engineering-exercises.ipynb deleted file mode 100644 index e082b1a..0000000 --- a/02-exercises/hands-on/prompt-engineering-exercises.ipynb +++ /dev/null @@ -1,611 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# πŸ“‹ Prompt Engineering Course Activities & Competency Checklist\n", - "\n", - "## Overview\n", - "\n", - "Activities throughout the course will contribute to a competency checklist, indicating successful understanding of prompt engineering techniques. Each module includes hands-on activities designed to reinforce concepts and build practical skills.\n", - "\n", - "---" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Setup the Environment\n", - "\n", - "You have two options for API access. Please choose one of the following methods:\n", - "\n", - "1. **GitHub Copilot API (local proxy)** - Recommended if you don't have CircuIT access\n", - "2. **CircuIT APIs (Azure OpenAI)** - If you have CircuIT API access\n", - "\n", - "Run the appropriate setup cell below." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Option 1: GitHub Copilot (local API proxy) setup\n", - "# Run this cell if you don't have CircuIT access\n", - "# Make sure you have followed the setup steps in GitHub-Copilot-2-API/README.md first\n", - "\n", - "# Load the GitHub Copilot setup code\n", - "%run github_copilot_setup.py" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "
OR
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Option 2: CircuIT APIs (Azure OpenAI) setup\n", - "# Run this cell if you have CircuIT API access\n", - "\n", - "# Load the CircuIT setup code\n", - "%run circuit_setup.py" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## πŸƒβ€β™€οΈ Module 1 Hands-On Activities\n", - "\n", - "Now let's practice the concepts with executable code examples." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Activity 1.1: Analyze these prompts and identify missing elements\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# HINT: For each prompt, decide if it includes:\n", - "# - Instructions/persona\n", - "# - Context\n", - "# - Input data\n", - "# - Output indicator/format\n", - "# YOUR TASK: Write your notes below or in markdown.\n", - "\n", - "# Prompt 1 - Missing some elements\n", - "prompt_1 = \"\"\"\n", - "Fix this code:\n", - "def calculate(x, y):\n", - " return x + y\n", - "\"\"\"\n", - "\n", - "# Prompt 2 - Missing some elements \n", - "prompt_2 = \"\"\"\n", - "You are a Python developer.\n", - "Make this function better.\n", - "\"\"\"\n", - "\n", - "# Prompt 3 - Missing some elements\n", - "prompt_3 = \"\"\"\n", - "Review the following function and provide feedback.\n", - "Return your response as a list of improvements.\n", - "\"\"\"\n", - "\n", - "# YOUR NOTES:\n", - "# - Prompt 1 missing: ...\n", - "# - Prompt 2 missing: ...\n", - "# - Prompt 3 missing: ..." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Activity 1.2: Create a complete prompt with all 4 elements for code documentation\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# HINT: Include all 4 elements:\n", - "# - Instructions/persona (system)\n", - "# - Context (user)\n", - "# - Input data (user)\n", - "# - Output indicator/format (user)\n", - "# YOUR TASK: Build system_message and user_message using the function below, then call get_chat_completion.\n", - "\n", - "function_to_document = \"\"\"\n", - "def process_transaction(user_id, amount, transaction_type):\n", - " if transaction_type not in ['deposit', 'withdrawal']:\n", - " raise ValueError(\"Invalid transaction type\")\n", - " \n", - " if amount <= 0:\n", - " raise ValueError(\"Amount must be positive\")\n", - " \n", - " balance = get_user_balance(user_id)\n", - " \n", - " if transaction_type == 'withdrawal' and balance < amount:\n", - " raise InsufficientFundsError(\"Insufficient funds\")\n", - " \n", - " new_balance = balance + amount if transaction_type == 'deposit' else balance - amount\n", - " update_user_balance(user_id, new_balance)\n", - " log_transaction(user_id, amount, transaction_type)\n", - " \n", - " return new_balance\n", - "\"\"\"\n", - "\n", - "# system_message = ...\n", - "# user_message = ...\n", - "# messages = [\n", - "# {\"role\": \"system\", \"content\": system_message},\n", - "# {\"role\": \"user\", \"content\": user_message}\n", - "# ]\n", - "# response = get_chat_completion(messages)\n", - "# print(response)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## πŸƒβ€β™€οΈ Module 2 Hands-On Activities\n", - "\n", - "Now let's practice the prompting fundamentals with hands-on activities that reinforce each tactic." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Activity 2.1 - Beginner Level: Convert vague to specific\n", - "vague_prompt = \"Fix this function\"\n", - "function_with_issues = \"\"\"\n", - "def calc_price(items, tax, discount):\n", - " total = 0\n", - " for i in items:\n", - " total = total + i\n", - " return total + tax - discount\n", - "\"\"\"\n", - "\n", - "# HINT: Rewrite the request to be specific. Include:\n", - "# - What to fix (validation, types, naming, edge cases)\n", - "# - Constraints (performance, correctness)\n", - "# - Desired output format (e.g., refactored code + explanation)\n", - "# YOUR TASK: Create a 'specific_prompt' string.\n", - "\n", - "# specific_prompt = \"\"\"...\"\"\"\n", - "\n", - "# OPTIONAL: Compare results by sending both prompts\n", - "# messages = [\n", - "# {\"role\": \"user\", \"content\": f\"{vague_prompt}\\n\\n```python\\n{function_with_issues}\\n```\"}\n", - "# ]\n", - "# print(get_chat_completion(messages))\n", - "# messages = [\n", - "# {\"role\": \"user\", \"content\": f\"{specific_prompt}\\n\\n```python\\n{function_with_issues}\\n```\"}\n", - "# ]\n", - "# print(get_chat_completion(messages))" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Activity 2.2: Persona Adoption Workshop - Multiple Engineering Perspectives\n", - "# HINT: Try the same code with different personas and compare the insights.\n", - "# YOUR TASK: Create three message sets (Security, Performance, QA) and run them.\n", - "\n", - "code_to_review = \"\"\"\n", - "def user_login(username, password):\n", - " users = get_all_users() # Loads entire user database\n", - " for user in users:\n", - " if user['username'] == username and user['password'] == password:\n", - " session_id = generate_random_string(10)\n", - " save_session(session_id, user['id'])\n", - " return {\"success\": True, \"session_id\": session_id}\n", - " return {\"success\": False, \"message\": \"Invalid credentials\"}\n", - "\"\"\"\n", - "\n", - "# security_messages = [\n", - "# {\"role\": \"system\", \"content\": \"You are a Security Engineer reviewing code for security vulnerabilities. Focus on authentication weaknesses, data exposure, and secure coding practices.\"},\n", - "# {\"role\": \"user\", \"content\": f\"Review this login function:\\n\\n```python\\n{code_to_review}\\n```\"}\n", - "# ]\n", - "# performance_messages = [\n", - "# {\"role\": \"system\", \"content\": \"You are a Performance Engineer reviewing code for efficiency and scalability issues. Focus on bottlenecks, resource usage, and optimization opportunities.\"},\n", - "# {\"role\": \"user\", \"content\": f\"Review this login function:\\n\\n```python\\n{code_to_review}\\n```\"}\n", - "# ]\n", - "# qa_messages = [\n", - "# {\"role\": \"system\", \"content\": \"You are a QA Engineer reviewing code for testing and quality assurance. Focus on edge cases, error handling, and testability.\"},\n", - "# {\"role\": \"user\", \"content\": f\"Review this login function:\\n\\n```python\\n{code_to_review}\\n```\"}\n", - "# ]\n", - "\n", - "# print(get_chat_completion(security_messages))\n", - "# print(get_chat_completion(performance_messages))\n", - "# print(get_chat_completion(qa_messages))\n", - "\n", - "# SUMMARY (write your comparison below):\n", - "# - Security: ...\n", - "# - Performance: ...\n", - "# - QA: ..." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Activity 2.3: Delimiter Mastery Exercise - Multi-File Refactoring\n", - "# HINT: Use headers and XML-like tags to organize complex inputs.\n", - "# YOUR TASK: Write system_message and user_message assembling the sections below, then call get_chat_completion.\n", - "\n", - "requirements = \"\"\"\n", - "### REFACTORING REQUIREMENTS ###\n", - "- Extract shared logic into utility functions\n", - "- Improve error handling across all files\n", - "- Add proper logging and monitoring\n", - "- Follow SOLID principles\n", - "###\n", - "\"\"\"\n", - "\n", - "original_code = \"\"\"\n", - "### ORIGINAL CODE ###\n", - "\n", - "class User:\n", - " def __init__(self, name, email):\n", - " self.name = name\n", - " self.email = email\n", - " \n", - " def save(self):\n", - " # Direct database access - not ideal\n", - " db.execute(\"INSERT INTO users (name, email) VALUES (?, ?)\", (self.name, self.email))\n", - "\n", - "\n", - "\n", - "def create_user(request):\n", - " name = request.get('name')\n", - " email = request.get('email')\n", - " \n", - " # No validation\n", - " user = User(name, email)\n", - " user.save()\n", - " return {\"success\": True}\n", - "\n", - "def get_user(user_id):\n", - " # Direct query - no error handling\n", - " result = db.execute(\"SELECT * FROM users WHERE id = ?\", (user_id,))\n", - " return result.fetchone()\n", - "\n", - "###\n", - "\"\"\"\n", - "\n", - "target_architecture = \"\"\"\n", - "### TARGET ARCHITECTURE ###\n", - "- Repository pattern for data access\n", - "- Service layer for business logic\n", - "- Proper dependency injection\n", - "- Comprehensive error handling\n", - "###\n", - "\"\"\"\n", - "\n", - "# system_message = ...\n", - "# user_message = requirements + \"\\n\\n\" + original_code + \"\\n\\n\" + target_architecture\n", - "# messages = [\n", - "# {\"role\": \"system\", \"content\": system_message},\n", - "# {\"role\": \"user\", \"content\": user_message}\n", - "# ]\n", - "# print(get_chat_completion(messages))" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Activity 2.4: Step-by-Step Reasoning Lab - Systematic Code Review\n", - "# HINT: Guide the model through explicit numbered steps and ask for prioritized fixes.\n", - "# YOUR TASK: Draft a system_message with steps and a user_message embedding the code.\n", - "\n", - "steps = \"\"\"\n", - "Step 1 - Analyze code structure and identify main components\n", - "Step 2 - Check for potential bugs and logic errors \n", - "Step 3 - Evaluate performance and efficiency concerns\n", - "Step 4 - Assess code maintainability and readability\n", - "Step 5 - Provide prioritized recommendations with specific fixes\n", - "\"\"\"\n", - "\n", - "code_to_review = \"\"\"\n", - "def process_orders(orders):\n", - " processed = []\n", - " total_revenue = 0\n", - " \n", - " for order in orders:\n", - " if order['status'] == 'pending':\n", - " # Calculate order total\n", - " item_total = 0\n", - " for item in order['items']:\n", - " item_total += item['price'] * item['quantity']\n", - " \n", - " # Apply discount\n", - " if order['customer_type'] == 'premium':\n", - " item_total = item_total * 0.9\n", - " elif order['customer_type'] == 'regular':\n", - " if item_total > 100:\n", - " item_total = item_total * 0.95\n", - " \n", - " # Process payment\n", - " if item_total > 0:\n", - " payment_result = charge_customer(order['customer_id'], item_total)\n", 
- " if payment_result:\n", - " order['status'] = 'completed'\n", - " order['total'] = item_total\n", - " processed.append(order)\n", - " total_revenue += item_total\n", - " else:\n", - " order['status'] = 'failed'\n", - " \n", - " return processed, total_revenue\n", - "\"\"\"\n", - "\n", - "# system_message = f\"\"\"\n", - "# Review the following code using these systematic steps:\n", - "# \n", - "# {steps}\n", - "# \n", - "# Follow each step methodically and show your reasoning.\n", - "# \"\"\"\n", - "# user_message = f\"\"\"\n", - "# Please review this order processing function:\n", - "# \n", - "# ```python\n", - "# {code_to_review}\n", - "# ```\n", - "# \"\"\"\n", - "# messages = [\n", - "# {\"role\": \"system\", \"content\": system_message},\n", - "# {\"role\": \"user\", \"content\": user_message}\n", - "# ]\n", - "# print(get_chat_completion(messages))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## πŸƒβ€β™€οΈ Module 4 Hands-On Activities β€” Custom Commands\n", - "\n", - "Turn your best prompts into reusable commands for AI code assistants. You will scaffold command files for Claude Code and GitHub Copilot, then customize them for your team’s workflow.\n", - "\n", - "Inspiration: See advanced command patterns in AWS’s Anthropic on AWS samples (advanced Claude Code patterns) β€” `https://github.com/aws-samples/anthropic-on-aws/tree/main/advanced-claude-code-patterns/commands`.\n", - "\n", - "### What you’ll build\n", - "- Reusable command files under `.claude/commands/` (Claude Code)\n", - "- Reusable prompt files under `.github/prompts/` (GitHub Copilot)\n", - "- Two commands to start: code review and production debugging\n", - "- Optional: extend with your own commands\n", - "\n", - "Tip: Keep commands concise, tool-aware, and focused on specific outcomes (review scope, severity, etc.).\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Tasks\n", - "\n", - "1) Create cross-platform command folders (manual)\n", - "- `.claude/commands/code/` and `.claude/commands/debug/`\n", - "- `.github/prompts/`\n", - "- Hint (optional):\n", - " - macOS/Linux: `mkdir -p .claude/commands/{code,debug} && mkdir -p .github/prompts`\n", - " - Windows (PowerShell): `New-Item -ItemType Directory .claude/commands/code,.claude/commands/debug,.github/prompts`\n", - "\n", - "2) Create your own starter command files (manual)\n", - "- Claude Code (examples): `.claude/commands/code/review.md`, `.claude/commands/debug/production.md`\n", - "- GitHub Copilot (examples): `.github/prompts/code-review.md`, `.github/prompts/debug-production.md`\n", - "- Draw inspiration from: `https://github.com/aws-samples/anthropic-on-aws/tree/main/advanced-claude-code-patterns/commands`\n", - "- Start minimal; then iterate based on results.\n", - "\n", - "3) Customize arguments, allowed tools, and output structure\n", - "- Tighten focus (e.g., security-only review) and define stable output sections\n", - "- Add/remove allowed tools per platform as needed\n", - "\n", - "4) Try and iterate until satisfied\n", - "- Claude Code: `/review [focus] [language]`, `/production [severity] [component]`\n", - "- Copilot: `/code-review`, `/debug-production`\n", - "- After each run, refine prompts to improve signal and consistency\n", - "\n", - "5) Extension (optional)\n", - "- Add a PR triage command or repository-wide dead-code finder\n", - "- Add organization-specific standards and links\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Activity 
4.1: Command Creation Challenge\n", - "\n", - "- **Objective**: Author first custom commands for recurring engineering tasks.\n", - "- **Learner Tasks**:\n", - " - Identify 3 frequent tasks you perform (e.g., code review focus, PR triage, production debugging).\n", - " - Create commands with variables and clear usage examples for Claude Code and/or GitHub Copilot.\n", - " - Use the AWS sample only as inspiration; author your own minimal template and iterate until satisfied.\n", - " - Test in your editor and refine for clarity, scope, and outputs.\n", - "- **Deliverables**:\n", - " - Three reusable command files with documentation and example invocations.\n", - "- **Assessment Criteria**:\n", - " - Commands are structured, tool-aware, and produce consistent, actionable outputs.\n", - " - Arguments are clear; outputs have predictable headings/sections.\n", - "\n", - "Inspiration (don’t copy verbatim): `https://github.com/aws-samples/anthropic-on-aws/tree/main/advanced-claude-code-patterns/commands`\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Activity 4.1 Helper: Generate three starter command templates (edit content after creation)\n", - "from pathlib import Path\n", - "\n", - "# Compute project root deterministically\n", - "repo_root = Path.cwd().resolve()\n", - "for p in [repo_root] + list(repo_root.parents):\n", - " if (p / \"requirements.txt\").exists() and (p / \"README.md\").exists():\n", - " repo_root = p\n", - " break\n", - "\n", - "claude_dir = repo_root / \".claude\" / \"commands\"\n", - "copilot_dir = repo_root / \".github\" / \"prompts\"\n", - "(claude_dir / \"custom\").mkdir(parents=True, exist_ok=True)\n", - "copilot_dir.mkdir(parents=True, exist_ok=True)\n", - "\n", - "# Define your three commands here (filenames only; customize after generation)\n", - "command_names = [\n", - " \"your-command-1\",\n", - " \"your-command-2\",\n", - " \"your-command-3\",\n", - "]\n", - "\n", - "claude_template = \"\"\"---\n", - "allowed-tools: Read, Grep\n", - "argument-hint: [arg1] [arg2]\n", - "description: Replace with a clear, outcome-oriented description\n", - "---\n", - "\n", - "You are a senior engineer. Task: $1. 
Context: $2.\n", - "\n", - "Provide a concise, actionable output with:\n", - "- Summary\n", - "- Steps / Findings\n", - "- Next actions\n", - "\"\"\"\n", - "\n", - "copilot_template = \"\"\"---\n", - "mode: agent\n", - "tools: ['terminal', 'codeSearch']\n", - "description: Replace with a clear, outcome-oriented description\n", - "---\n", - "\n", - "You are a senior engineer.\n", - "\n", - "Task: {arg1}\n", - "Context: {arg2}\n", - "\n", - "Provide a concise, actionable output with:\n", - "- Summary\n", - "- Steps / Findings\n", - "- Next actions\n", - "\"\"\"\n", - "\n", - "for name in command_names:\n", - " # Claude Code command\n", - " c_path = claude_dir / \"custom\" / f\"{name}.md\"\n", - " if not c_path.exists():\n", - " c_path.write_text(claude_template, encoding=\"utf-8\")\n", - " print(\"WROTE:\", c_path)\n", - " else:\n", - " print(\"SKIP (exists):\", c_path)\n", - " # Copilot prompt\n", - " g_path = copilot_dir / f\"{name}.md\"\n", - " if not g_path.exists():\n", - " g_path.write_text(copilot_template, encoding=\"utf-8\")\n", - " print(\"WROTE:\", g_path)\n", - " else:\n", - " print(\"SKIP (exists):\", g_path)\n", - "\n", - "print(\"\\nNext: open and customize each file's frontmatter, arguments, and output sections.\")\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Activity 4.2: Advanced Command Patterns\n", - "\n", - "- **Objective**: Design complex, chained command workflows and document interoperation.\n", - "- **Learner Tasks**:\n", - " - Implement a multi-step flow (e.g., feature dev: spec β†’ scaffold β†’ review β†’ tests).\n", - " - Use arguments to pass state across commands; define consistent output anchors.\n", - " - Document how commands interoperate; seed a small knowledge base.\n", - "- **Deliverables**:\n", - " - One workflow command and notes on how it composes with others.\n", - "- **Assessment Criteria**:\n", - " - Handles branching/conditional paths; outputs are consistent and composable.\n", - "\n", - "See command design inspirations: `https://github.com/aws-samples/anthropic-on-aws/tree/main/advanced-claude-code-patterns/commands`.\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### How to run and customize\n", - "\n", - "- **Claude Code**\n", - " - Place files in `.claude/commands/**.md`. In the editor, type `/review [focus] [language]`, `/production [severity] [component]`, or your new `/your-command-1 ...`.\n", - " - Adjust `allowed-tools` in the frontmatter to enable the right capabilities.\n", - "\n", - "- **GitHub Copilot**\n", - " - Place files in `.github/prompts/*.md`. In Copilot Chat, run `/code-review`, `/debug-production`, or `/your-command-1`.\n", - " - Edit `tools` and descriptions to match your workflow.\n", - "\n", - "- **Tips**\n", - " - Keep outputs structured (Summary, Findings, Actions). 
Stable structure improves reusability.\n", - " - Start narrow; expand scope only after you validate consistency.\n", - "\n", - "Reference: `https://github.com/aws-samples/anthropic-on-aws/tree/main/advanced-claude-code-patterns/commands`\n", - "\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": ".venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.13.2" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -} diff --git a/02-exercises/solutions/prompt-engineering-solutions.ipynb b/02-exercises/solutions/prompt-engineering-solutions.ipynb deleted file mode 100644 index 68c1c02..0000000 --- a/02-exercises/solutions/prompt-engineering-solutions.ipynb +++ /dev/null @@ -1,834 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# πŸ“˜ Prompt Engineering Course Activities β€” Solutions\n", - "\n", - "This notebook contains reference solutions for the activities in `course-activities.ipynb`.\n", - "\n", - "- Run the setup cell first.\n", - "- Use these solutions after attempting the hint-only activities.\n", - "- For learning, compare your prompts with these references and note differences.\n" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "βœ… Environment setup complete! Ready for solutions.\n" - ] - } - ], - "source": [ - "# Setup code (same as activities)\n", - "import warnings\n", - "warnings.filterwarnings('ignore')\n", - "\n", - "import openai\n", - "import traceback\n", - "import requests\n", - "import base64\n", - "import os\n", - "from dotenv import load_dotenv\n", - "\n", - "# Load environment variables\n", - "load_dotenv()\n", - "\n", - "# Open AI version to use\n", - "openai.api_type = \"azure\"\n", - "openai.api_version = \"2024-12-01-preview\"\n", - "\n", - "# Get API_KEY wrapped in token - using environment variables\n", - "client_id = os.getenv('CISCO_CLIENT_ID')\n", - "client_secret = os.getenv('CISCO_CLIENT_SECRET')\n", - "\n", - "url = \"https://id.cisco.com/oauth2/default/v1/token\"\n", - "\n", - "payload = \"grant_type=client_credentials\"\n", - "value = base64.b64encode(f\"{client_id}:{client_secret}\".encode(\"utf-8\")).decode(\"utf-8\")\n", - "headers = {\n", - " \"Accept\": \"*/*\",\n", - " \"Content-Type\": \"application/x-www-form-urlencoded\",\n", - " \"Authorization\": f\"Basic {value}\",\n", - "}\n", - "\n", - "token_response = requests.request(\"POST\", url, headers=headers, data=payload)\n", - "token_data = token_response.json()\n", - "\n", - "from openai import AzureOpenAI\n", - "\n", - "client = AzureOpenAI(\n", - " azure_endpoint=\"https://chat-ai.cisco.com\",\n", - " api_key=token_data.get('access_token'),\n", - " api_version=\"2024-12-01-preview\"\n", - ")\n", - "\n", - "app_key = os.getenv(\"CISCO_OPENAI_APP_KEY\")\n", - "\n", - "def get_chat_completion(messages, model=\"gpt-4o\", temperature=0.0):\n", - " response = client.chat.completions.create(\n", - " model=model,\n", - " messages=messages,\n", - " temperature=temperature,\n", - " user=f'{{\"appkey\": \"{app_key}\"}}'\n", - " )\n", - " return response.choices[0].message.content\n", - "\n", - "print(\"βœ… Environment setup complete! 
Ready for solutions.\")\n" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "πŸ“ SOLUTION 1.2 - MISSING ELEMENTS:\n", - "\n", - "Prompt 1 has:\n", - "βœ“ Input Data (the code)\n", - "βœ— Missing: Clear Instructions, Context, Output Format\n", - "\n", - "Prompt 2 has:\n", - "βœ“ Instructions/Persona\n", - "βœ— Missing: Input Data, Context, Output Format\n", - "\n", - "Prompt 3 has:\n", - "βœ“ Instructions, Output Format\n", - "βœ— Missing: Input Data, Context\n", - "\n", - "================================================================================\n" - ] - } - ], - "source": [ - "# Solution: Activity 1.2 Task 1 - Identify missing elements\n", - "print(\"πŸ“ SOLUTION 1.2 - MISSING ELEMENTS:\")\n", - "print(\"\\nPrompt 1 has:\")\n", - "print(\"βœ“ Input Data (the code)\")\n", - "print(\"βœ— Missing: Clear Instructions, Context, Output Format\")\n", - "\n", - "print(\"\\nPrompt 2 has:\")\n", - "print(\"βœ“ Instructions/Persona\")\n", - "print(\"βœ— Missing: Input Data, Context, Output Format\")\n", - "\n", - "print(\"\\nPrompt 3 has:\")\n", - "print(\"βœ“ Instructions, Output Format\")\n", - "print(\"βœ— Missing: Input Data, Context\")\n", - "print(\"\\n\" + \"=\"*80)\n" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "βœ… SOLUTION 1.2 - COMPLETE PROMPT WITH ALL 4 ELEMENTS:\n", - "```python\n", - "def process_transaction(user_id: int, amount: float, transaction_type: str) -> float:\n", - " \"\"\"Processes a financial transaction for a user.\n", - "\n", - " This function handles user transactions by either depositing or withdrawing\n", - " a specified amount from the user's account. It ensures that the transaction\n", - " type is valid and that the user has sufficient funds for withdrawals.\n", - "\n", - " Args:\n", - " user_id (int): The unique identifier of the user.\n", - " amount (float): The amount of money to be processed. 
Must be positive.\n", - " transaction_type (str): The type of transaction, either 'deposit' or 'withdrawal'.\n", - "\n", - " Returns:\n", - " float: The new balance of the user's account after the transaction.\n", - "\n", - " Raises:\n", - " ValueError: If the transaction type is invalid or if the amount is not positive.\n", - " InsufficientFundsError: If a withdrawal is attempted with insufficient funds.\n", - "\n", - " Example:\n", - " >>> process_transaction(12345, 100.0, 'deposit')\n", - " 1100.0\n", - " >>> process_transaction(12345, 50.0, 'withdrawal')\n", - " 1050.0\n", - " \"\"\"\n", - " # Validate transaction type\n", - " if transaction_type not in ['deposit', 'withdrawal']:\n", - " raise ValueError(\"Invalid transaction type\")\n", - "\n", - " # Ensure the amount is positive\n", - " if amount <= 0:\n", - " raise ValueError(\"Amount must be positive\")\n", - "\n", - " # Retrieve the current balance of the user\n", - " balance = get_user_balance(user_id)\n", - "\n", - " # Check for sufficient funds if the transaction is a withdrawal\n", - " if transaction_type == 'withdrawal' and balance < amount:\n", - " raise InsufficientFundsError(\"Insufficient funds\")\n", - "\n", - " # Calculate the new balance based on the transaction type\n", - " new_balance = balance + amount if transaction_type == 'deposit' else balance - amount\n", - "\n", - " # Update the user's balance in the system\n", - " update_user_balance(user_id, new_balance)\n", - "\n", - " # Log the transaction for auditing purposes\n", - " log_transaction(user_id, amount, transaction_type)\n", - "\n", - " return new_balance\n", - "```\n", - "\n", - "### Inline Comments Explanation:\n", - "- **Validate transaction type**: Ensures that the transaction type is either 'deposit' or 'withdrawal'.\n", - "- **Ensure the amount is positive**: Checks that the amount to be processed is greater than zero.\n", - "- **Retrieve the current balance of the user**: Fetches the user's current balance from the system.\n", - "- **Check for sufficient funds if the transaction is a withdrawal**: Ensures that the user has enough funds to cover the withdrawal.\n", - "- **Calculate the new balance based on the transaction type**: Computes the new balance by adding or subtracting the amount based on the transaction type.\n", - "- **Update the user's balance in the system**: Updates the user's balance in the database or system.\n", - "- **Log the transaction for auditing purposes**: Records the transaction details for future reference and auditing.\n", - "\n", - "================================================================================\n" - ] - } - ], - "source": [ - "# Solution: Activity 1.2 Task 2 - Complete 4-element prompt\n", - "system_message = \"\"\"You are an expert technical writer specializing in Python documentation.\n", - "Your task is to create comprehensive docstrings following Google style guide.\"\"\"\n", - "\n", - "user_message = \"\"\"Context: This function is part of a financial application that processes user transactions.\n", - "It needs clear documentation for both developers and auditors.\n", - "\n", - "Function to document:\n", - "def process_transaction(user_id, amount, transaction_type):\n", - " if transaction_type not in ['deposit', 'withdrawal']:\n", - " raise ValueError(\"Invalid transaction type\")\n", - " \n", - " if amount <= 0:\n", - " raise ValueError(\"Amount must be positive\")\n", - " \n", - " balance = get_user_balance(user_id)\n", - " \n", - " if transaction_type == 'withdrawal' and balance < amount:\n", - " 
raise InsufficientFundsError(\"Insufficient funds\")\n", - " \n", - " new_balance = balance + amount if transaction_type == 'deposit' else balance - amount\n", - " update_user_balance(user_id, new_balance)\n", - " log_transaction(user_id, amount, transaction_type)\n", - " \n", - " return new_balance\n", - "\n", - "Please provide:\n", - "1. A comprehensive docstring including:\n", - " - Brief description\n", - " - Args section with parameter types and descriptions\n", - " - Returns section\n", - " - Raises section for all possible exceptions\n", - " - Example usage\n", - "2. Inline comments for complex logic\n", - "3. Type hints for the function signature\"\"\"\n", - "\n", - "messages = [\n", - " {\"role\": \"system\", \"content\": system_message},\n", - " {\"role\": \"user\", \"content\": user_message}\n", - "]\n", - "\n", - "response = get_chat_completion(messages)\n", - "print(\"βœ… SOLUTION 1.2 - COMPLETE PROMPT WITH ALL 4 ELEMENTS:\")\n", - "print(response)\n", - "print(\"\\n\" + \"=\"*80)\n" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "πŸ”΄ VAGUE PROMPT RESULT:\n", - "The function you provided calculates the total price of a list of items, adds tax, and subtracts a discount. However, there are a few improvements and potential issues to consider:\n", - "\n", - "1. **Tax Calculation**: The function currently adds the tax directly to the total. Typically, tax is a percentage of the total price, not a fixed amount. If `tax` is meant to be a percentage, you should calculate it based on the total price.\n", - "\n", - "2. **Discount Calculation**: Similar to tax, if the discount is a percentage, it should be calculated based on the total price.\n", - "\n", - "3. 
**Variable Naming**: Using more descriptive variable names can improve readability.\n", - "\n", - "Here's a revised version of the function, assuming `tax` and `discount` are percentages:\n", - "\n", - "```python\n", - "def calc_price(items, tax_rate, discount_rate):\n", - " subtotal = sum(items)\n", - " tax_amount = subtotal * (tax_rate / 100)\n", - " discount_amount = subtotal * (discount_rate / 100)\n", - " total_price = subtotal + tax_amount - discount_amount\n", - " return total_price\n", - "```\n", - "\n", - "### Key Changes:\n", - "- **`subtotal`**: Calculated using `sum(items)` for simplicity.\n", - "- **`tax_rate` and `discount_rate`**: Assumed to be percentages, so they are divided by 100 to convert them to decimal form for calculations.\n", - "- **`tax_amount` and `discount_amount`**: Calculated based on the `subtotal`.\n", - "- **`total_price`**: The final amount after adding tax and subtracting the discount.\n", - "\n", - "If `tax` and `discount` are meant to be fixed amounts rather than percentages, you can keep the original logic but improve readability:\n", - "\n", - "```python\n", - "def calc_price(items, tax, discount):\n", - " subtotal = sum(items)\n", - " total_price = subtotal + tax - discount\n", - " return total_price\n", - "```\n", - "\n", - "In this version, `tax` and `discount` are directly added and subtracted from the `subtotal`.\n", - "\n", - "================================================================================\n", - "\n", - "🟒 SPECIFIC PROMPT RESULT:\n", - "Here is the refactored version of the `calc_price` function, incorporating all the specified requirements:\n", - "\n", - "```python\n", - "from typing import List, Optional\n", - "from decimal import Decimal, ROUND_HALF_UP\n", - "\n", - "def calculate_total_price(\n", - " item_prices: Optional[List[float]], \n", - " tax_rate: Optional[float], \n", - " discount: Optional[float]\n", - ") -> Decimal:\n", - " \"\"\"\n", - " Calculate the total price of items, including tax and discount.\n", - "\n", - " Args:\n", - " item_prices (Optional[List[float]]): A list of item prices. Each price must be a non-negative number.\n", - " tax_rate (Optional[float]): The tax rate as a percentage (e.g., 10 for 10%). Must be a non-negative number.\n", - " discount (Optional[float]): The discount amount to subtract from the total. 
Must be a non-negative number.\n", - "\n", - " Returns:\n", - " Decimal: The total price after applying tax and discount, rounded to two decimal places.\n", - "\n", - " Raises:\n", - " ValueError: If any input is invalid (e.g., negative numbers, None values, or empty item list).\n", - " \"\"\"\n", - " # Input validation\n", - " if item_prices is None or not isinstance(item_prices, list) or len(item_prices) == 0:\n", - " raise ValueError(\"item_prices must be a non-empty list of non-negative numbers.\")\n", - " if any(price < 0 for price in item_prices):\n", - " raise ValueError(\"All item prices must be non-negative numbers.\")\n", - " if tax_rate is None or tax_rate < 0:\n", - " raise ValueError(\"tax_rate must be a non-negative number.\")\n", - " if discount is None or discount < 0:\n", - " raise ValueError(\"discount must be a non-negative number.\")\n", - "\n", - " # Convert inputs to Decimal for precise currency calculations\n", - " item_prices = [Decimal(price) for price in item_prices]\n", - " tax_rate = Decimal(tax_rate)\n", - " discount = Decimal(discount)\n", - "\n", - " # Calculate subtotal\n", - " subtotal = sum(item_prices)\n", - "\n", - " # Calculate tax amount (percentage-based)\n", - " tax_amount = (subtotal * tax_rate / Decimal(100)).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)\n", - "\n", - " # Calculate total price\n", - " total_price = (subtotal + tax_amount - discount).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)\n", - "\n", - " # Ensure total price is not negative\n", - " if total_price < 0:\n", - " total_price = Decimal('0.00')\n", - "\n", - " return total_price\n", - "```\n", - "\n", - "### Key Improvements:\n", - "1. **Input Validation**: Added checks to ensure all inputs are valid (e.g., non-negative numbers, non-empty lists).\n", - "2. **Type Hints**: Used Python's `typing` module to specify the types of parameters and return value.\n", - "3. **Edge Case Handling**: Handled cases like empty lists, `None` values, and negative numbers.\n", - "4. **Improved Variable Names**: Renamed variables for better clarity (`items` β†’ `item_prices`, `tax` β†’ `tax_rate`, etc.).\n", - "5. **Docstring**: Added a detailed docstring explaining the function's purpose, arguments, return value, and potential exceptions.\n", - "6. **Decimal Precision**: Used Python's `decimal.Decimal` for precise currency calculations and rounded to two decimal places.\n", - "7. **Percentage-Based Tax**: Changed the tax calculation to be percentage-based instead of a fixed amount.\n", - "\n", - "### Example Usage:\n", - "```python\n", - "# Example usage of the function\n", - "item_prices = [19.99, 5.49, 3.50]\n", - "tax_rate = 10.0 # 10%\n", - "discount = 5.00\n", - "\n", - "total_price = calculate_total_price(item_prices, tax_rate, discount)\n", - "print(total_price) # Output: 26.48\n", - "```\n" - ] - } - ], - "source": [ - "# Solution: Activity 2.1 - Convert vague to specific\n", - "vague_prompt = \"Fix this function\"\n", - "function_with_issues = \"\"\"\n", - "def calc_price(items, tax, discount):\n", - " total = 0\n", - " for i in items:\n", - " total = total + i\n", - " return total + tax - discount\n", - "\"\"\"\n", - "\n", - "specific_prompt = \"\"\"\n", - "Refactor this pricing calculation function with the following requirements:\n", - "1. Add input validation for all parameters\n", - "2. Add type hints following Python typing standards\n", - "3. Handle edge cases (empty lists, None values, negative numbers)\n", - "4. Improve variable names for clarity\n", - "5. 
Add a docstring explaining the function's purpose\n", - "6. Ensure the function handles decimal precision correctly for currency\n", - "7. Make the tax calculation percentage-based rather than a fixed amount\n", - "\"\"\"\n", - "\n", - "print(\"πŸ”΄ VAGUE PROMPT RESULT:\")\n", - "messages = [\n", - " {\"role\": \"user\", \"content\": f\"{vague_prompt}:\\n\\n```python\\n{function_with_issues}\\n```\"}\n", - "]\n", - "vague_result = get_chat_completion(messages)\n", - "print(vague_result)\n", - "print(\"\\n\" + \"=\"*80 + \"\\n\")\n", - "\n", - "print(\"🟒 SPECIFIC PROMPT RESULT:\")\n", - "messages = [\n", - " {\"role\": \"user\", \"content\": f\"{specific_prompt}\\n\\n```python\\n{function_with_issues}\\n```\"}\n", - "]\n", - "specific_result = get_chat_completion(messages)\n", - "print(specific_result)\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Solution: Activity 2.2 - Personas\n", - "code_to_review = \"\"\"\n", - "def user_login(username, password):\n", - " users = get_all_users() # Loads entire user database\n", - " for user in users:\n", - " if user['username'] == username and user['password'] == password:\n", - " session_id = generate_random_string(10)\n", - " save_session(session_id, user['id'])\n", - " return {\"success\": True, \"session_id\": session_id}\n", - " return {\"success\": False, \"message\": \"Invalid credentials\"}\n", - "\"\"\"\n", - "\n", - "security_messages = [\n", - " {\"role\": \"system\", \"content\": \"You are a Security Engineer reviewing code for security vulnerabilities. Focus on authentication weaknesses, data exposure, and secure coding practices.\"},\n", - " {\"role\": \"user\", \"content\": f\"Review this login function:\\n\\n```python\\n{code_to_review}\\n```\"}\n", - "]\n", - "performance_messages = [\n", - " {\"role\": \"system\", \"content\": \"You are a Performance Engineer reviewing code for efficiency and scalability issues. Focus on bottlenecks, resource usage, and optimization opportunities.\"},\n", - " {\"role\": \"user\", \"content\": f\"Review this login function:\\n\\n```python\\n{code_to_review}\\n```\"}\n", - "]\n", - "qa_messages = [\n", - " {\"role\": \"system\", \"content\": \"You are a QA Engineer reviewing code for testing and quality assurance. Focus on edge cases, error handling, and testability.\"},\n", - " {\"role\": \"user\", \"content\": f\"Review this login function:\\n\\n```python\\n{code_to_review}\\n```\"}\n", - "]\n", - "\n", - "security_review = get_chat_completion(security_messages)\n", - "performance_review = get_chat_completion(performance_messages)\n", - "qa_review = get_chat_completion(qa_messages)\n", - "\n", - "print(\"πŸ”’ SECURITY ENGINEER PERSPECTIVE:\")\n", - "print(security_review)\n", - "print(\"\\n\" + \"=\"*80 + \"\\n\")\n", - "\n", - "print(\"⚑ PERFORMANCE ENGINEER PERSPECTIVE:\")\n", - "print(performance_review)\n", - "print(\"\\n\" + \"=\"*80 + \"\\n\")\n", - "\n", - "print(\"πŸ§ͺ QA ENGINEER PERSPECTIVE:\")\n", - "print(qa_review)\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Solution: Activity 2.3 - Delimiter mastery\n", - "system_message = \"You are a software architect. 
Refactor the provided multi-file code using the organized input sections.\"\n", - "\n", - "user_message = \"\"\"\n", - "### REFACTORING REQUIREMENTS ###\n", - "- Extract shared logic into utility functions\n", - "- Improve error handling across all files\n", - "- Add proper logging and monitoring\n", - "- Follow SOLID principles\n", - "### \n", - "\n", - "### ORIGINAL CODE ###\n", - "\n", - "class User:\n", - " def __init__(self, name, email):\n", - " self.name = name\n", - " self.email = email\n", - " \n", - " def save(self):\n", - " # Direct database access - not ideal\n", - " db.execute(\"INSERT INTO users (name, email) VALUES (?, ?)\", (self.name, self.email))\n", - "\n", - "\n", - "\n", - "def create_user(request):\n", - " name = request.get('name')\n", - " email = request.get('email')\n", - " \n", - " # No validation\n", - " user = User(name, email)\n", - " user.save()\n", - " return {\"success\": True}\n", - "\n", - "def get_user(user_id):\n", - " # Direct query - no error handling\n", - " result = db.execute(\"SELECT * FROM users WHERE id = ?\", (user_id,))\n", - " return result.fetchone()\n", - "\n", - "###\n", - "\n", - "### TARGET ARCHITECTURE ###\n", - "- Repository pattern for data access\n", - "- Service layer for business logic\n", - "- Proper dependency injection\n", - "- Comprehensive error handling\n", - "###\n", - "\n", - "Provide refactored code with clear separation of concerns.\n", - "\"\"\"\n", - "\n", - "messages = [\n", - " {\"role\": \"system\", \"content\": system_message},\n", - " {\"role\": \"user\", \"content\": user_message}\n", - "]\n", - "\n", - "response = get_chat_completion(messages)\n", - "print(\"πŸ—οΈ SOLUTION - MULTI-FILE REFACTORING:\")\n", - "print(response)\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Solution: Activity 2.4 - Step-by-step reasoning\n", - "system_message = \"\"\"\n", - "Review the following code using these systematic steps:\n", - "\n", - "Step 1 - Analyze code structure and identify main components\n", - "Step 2 - Check for potential bugs and logic errors \n", - "Step 3 - Evaluate performance and efficiency concerns\n", - "Step 4 - Assess code maintainability and readability\n", - "Step 5 - Provide prioritized recommendations with specific fixes\n", - "\n", - "Follow each step methodically and show your reasoning.\n", - "\"\"\"\n", - "\n", - "code_to_review = \"\"\"\n", - "def process_orders(orders):\n", - " processed = []\n", - " total_revenue = 0\n", - " \n", - " for order in orders:\n", - " if order['status'] == 'pending':\n", - " # Calculate order total\n", - " item_total = 0\n", - " for item in order['items']:\n", - " item_total += item['price'] * item['quantity']\n", - " \n", - " # Apply discount\n", - " if order['customer_type'] == 'premium':\n", - " item_total = item_total * 0.9\n", - " elif order['customer_type'] == 'regular':\n", - " if item_total > 100:\n", - " item_total = item_total * 0.95\n", - " \n", - " # Process payment\n", - " if item_total > 0:\n", - " payment_result = charge_customer(order['customer_id'], item_total)\n", - " if payment_result:\n", - " order['status'] = 'completed'\n", - " order['total'] = item_total\n", - " processed.append(order)\n", - " total_revenue += item_total\n", - " else:\n", - " order['status'] = 'failed'\n", - " \n", - " return processed, total_revenue\n", - "\"\"\"\n", - "\n", - "user_message = f\"\"\"\n", - "Please review this order processing function:\n", - "\n", - "```python\n", - "{code_to_review}\n", - 
"```\n", - "\"\"\"\n", - "\n", - "messages = [\n", - " {\"role\": \"system\", \"content\": system_message},\n", - " {\"role\": \"user\", \"content\": user_message}\n", - "]\n", - "\n", - "response = get_chat_completion(messages)\n", - "print(\"πŸ” SOLUTION - STEP-BY-STEP REVIEW:\")\n", - "print(response)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Solution: Activity 4.1 β€” Command Creation Challenge\n", - "\n", - "Below are reference templates you can use after attempting your own. Treat these as examples to compare against your work and iterate.\n", - "\n", - "### Claude Code β€” `.claude/commands/code/review.md`\n", - "```markdown\n", - "---\n", - "allowed-tools: Read, Grep, Bash(git diff:*), Bash(git blame:*)\n", - "argument-hint: [focus_area] [language]\n", - "description: Comprehensive code review with prioritized actions\n", - "---\n", - "\n", - "You are a senior software engineer performing a code review for $2 code.\n", - "\n", - "Focus Area: $1\n", - "\n", - "Perform systematic analysis:\n", - "1) Overall Assessment β€” architecture, structure, complexity\n", - "2) Quality β€” standards, readability, potential bugs, error handling\n", - "3) Security β€” input validation, secrets, injection, authz/authn\n", - "4) Performance β€” hot paths, allocations, I/O patterns\n", - "5) Maintainability β€” naming, cohesion, testability, documentation\n", - "\n", - "Output format:\n", - "- Summary (3–5 bullets)\n", - "- Findings by category (Critical/High/Medium/Low)\n", - "- Actionable Next Steps (prioritized)\n", - "```\n", - "\n", - "### Claude Code β€” `.claude/commands/debug/production.md`\n", - "```markdown\n", - "---\n", - "allowed-tools: Read, Bash(log:*), Bash(grep:*), Bash(ps:*)\n", - "argument-hint: [severity] [component]\n", - "description: Debug production issue and produce executive summary + technical notes\n", - "---\n", - "\n", - "You are analyzing a $1 production incident in the $2 component.\n", - "\n", - "Steps:\n", - "1) Root Cause Analysis β€” symptom timeline, triggers, failure chain\n", - "2) Impact Assessment β€” scope, user impact, SLO/SLA\n", - "3) Solution Options β€” quick fix vs long-term fix, trade-offs\n", - "4) Risk & Rollout β€” deployment plan, fallback, observability\n", - "\n", - "Output format:\n", - "- Executive Summary (non-technical, 4–6 lines)\n", - "- Technical Notes (root cause, evidence, logs/metrics)\n", - "- Mitigation Plan (immediate + long-term)\n", - "- Verification & Monitoring\n", - "```\n", - "\n", - "### GitHub Copilot β€” `.github/prompts/code-review.md`\n", - "```markdown\n", - "---\n", - "mode: agent\n", - "tools: ['githubRepo', 'terminal', 'codeSearch']\n", - "description: Perform a comprehensive code review with prioritized actions\n", - "---\n", - "\n", - "You are a senior software engineer conducting a code review.\n", - "\n", - "Provide:\n", - "1. Overall assessment\n", - "2. Detailed findings by: Quality, Security, Performance, Maintainability\n", - "3. Prioritized action list (Critical/High/Medium/Low)\n", - "```\n", - "\n", - "### GitHub Copilot β€” `.github/prompts/debug-production.md`\n", - "```markdown\n", - "---\n", - "mode: agent\n", - "tools: ['terminal', 'fileSearch', 'codeAnalysis']\n", - "description: Debug a production issue; produce executive summary + technical notes\n", - "---\n", - "\n", - "You are debugging a production incident. 
Provide:\n", - "- Executive summary (business-focused)\n", - "- Root cause analysis (technical)\n", - "- Mitigation plan (immediate/long-term)\n", - "- Verification steps & monitoring\n", - "```\n", - "\n", - "### Iteration guidance\n", - "- Start minimal; run the command; adjust focus, allowed tools, and output sections.\n", - "- Tune for brevity and actionability; reduce noise, keep headings stable across runs.\n", - "- Validate that Critical/High findings always appear first.\n", - "\n", - "Inspiration (don’t copy verbatim): https://github.com/aws-samples/anthropic-on-aws/tree/main/advanced-claude-code-patterns/commands\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Solution: Activity 4.2 β€” Advanced Command Patterns (Chained Workflows)\n", - "\n", - "Use this reference after attempting your own workflow design. The goal is to chain specialized commands with clear handoffs, consistent outputs, and optional conditional logic.\n", - "\n", - "### Design principles\n", - "- Keep each command focused (single responsibility)\n", - "- Define a stable output schema for handoffs (JSON in a fenced block)\n", - "- Prefer short, action-oriented sections (Summary, Findings, Next Steps)\n", - "- Document which command consumes which handoff fields\n", - "\n", - "### Claude Code β€” Workflow Orchestrator (feature development)\n", - "Example file: `.claude/commands/workflows/feature-dev.md`\n", - "```markdown\n", - "---\n", - "allowed-tools: Read, Grep, Bash(git diff:*)\n", - "argument-hint: [feature_name] [component]\n", - "description: Orchestrate feature dev flow: spec β†’ scaffold β†’ review β†’ tests\n", - "---\n", - "\n", - "You are orchestrating a feature workflow: \"$1\" in \"$2\".\n", - "\n", - "Steps:\n", - "1) Specification β€” derive concise requirements and constraints\n", - "2) Scaffold Plan β€” list minimal file changes and stubs\n", - "3) Review Focus β€” define critical review criteria for this change\n", - "4) Test Plan β€” enumerate essential test cases (unit/integration)\n", - "\n", - "Output:\n", - "- Executive Summary (3–5 bullets)\n", - "- Spec (requirements/constraints/acceptance)\n", - "- Scaffold Plan (files, functions, TODOs)\n", - "- Review Focus (areas, risks)\n", - "- Test Plan (cases by priority)\n", - "\n", - "Handoff (JSON):\n", - "```json\n", - "{\n", - " \"feature\": \"$1\",\n", - " \"component\": \"$2\",\n", - " \"files\": [],\n", - " \"functions\": [],\n", - " \"acceptance\": [],\n", - " \"review_focus\": [\"security\", \"performance\"],\n", - " \"test_plan\": []\n", - "}\n", - "```\n", - "```\n", - "\n", - "### Claude Code β€” Consumer: Scaffold from Handoff\n", - "Example file: `.claude/commands/code/scaffold.md`\n", - "```markdown\n", - "---\n", - "allowed-tools: Read, Write, Bash(git:*)\n", - "argument-hint: [language]\n", - "description: Generate minimal scaffolding from previous workflow handoff\n", - "---\n", - "\n", - "You are generating code scaffolding in $1.\n", - "\n", - "Input contains a prior handoff JSON in a fenced block. 
Parse it strictly and:\n", - "- Create only the listed files/functions with minimal stubs\n", - "- Insert TODO markers where implementation is needed\n", - "- Include top-of-file docstrings linking to acceptance criteria\n", - "\n", - "Output:\n", - "- Summary (created/modified)\n", - "- File list with brief descriptions\n", - "- Next steps for implementers\n", - "\n", - "Expect an input block like:\n", - "```json\n", - "{\"feature\":\"\", \"component\":\"\", \"files\":[], \"functions\":[], \"acceptance\":[]}\n", - "```\n", - "```\n", - "\n", - "### GitHub Copilot β€” Workflow Template\n", - "Example file: `.github/prompts/feature-workflow.md`\n", - "```markdown\n", - "---\n", - "mode: agent\n", - "tools: ['githubRepo', 'codeSearch']\n", - "description: Orchestrate feature dev workflow; emit JSON handoff for next command\n", - "---\n", - "\n", - "You are orchestrating a feature workflow. Provide:\n", - "- Spec (requirements β†’ acceptance)\n", - "- Scaffold Plan (files/functions)\n", - "- Review Focus\n", - "- Test Plan\n", - "\n", - "Emit a JSON handoff in a fenced block at the end with keys:\n", - "feature, component, files, functions, acceptance, review_focus, test_plan\n", - "```\n", - "\n", - "### Example: Handoff β†’ Consumer chaining\n", - "- Run `/feature-dev SignIn user-auth` (Claude Code), copy JSON handoff block\n", - "- Run `/scaffold python`, paste the JSON under your request\n", - "- Implement stubs, then invoke your code-review command with the same handoff\n", - "\n", - "### Conditional logic ideas\n", - "- If the component touches auth or payments, auto-include security review focus\n", - "- If scaffold adds migrations, include rollback/backup steps in the Test Plan\n", - "\n", - "### Iteration guidance\n", - "- Validate outputs across multiple features; stabilize section headers and JSON keys\n", - "- Keep handoffs minimal and documented; avoid brittle implicit fields\n", - "\n", - "Inspiration (don’t copy verbatim): https://github.com/aws-samples/anthropic-on-aws/tree/main/advanced-claude-code-patterns/commands\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": ".venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.13.2" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/03-examples/README.md b/02-implementation-examples/README.md similarity index 96% rename from 03-examples/README.md rename to 02-implementation-examples/README.md index 883a3c8..1e8af2b 100644 --- a/03-examples/README.md +++ b/02-implementation-examples/README.md @@ -1,4 +1,4 @@ -# 03-examples: Real-World Use Cases & Implementation Patterns +# 02-implementation-examples: Real-World Use Cases & Implementation Patterns This directory provides practical implementation examples demonstrating how to apply prompt engineering capabilities to solve real business problems and common software development challenges. @@ -49,8 +49,8 @@ Each use case provides: ## Getting Started ### Prerequisites -- Completion of [01-tutorials/](../01-tutorials/) modules 1-3 -- Practical experience with [02-exercises/](../02-exercises/) activities +- Completion of [01-course/](../01-course/) modules 1-3 +- Practical experience with integrated exercises in the course modules - Understanding of your target AI assistant platform (GitHub Copilot, Claude Code, etc.) 
### Selection Guide diff --git a/README.md b/README.md index 9f4a205..e848c20 100644 --- a/README.md +++ b/README.md @@ -1,145 +1,191 @@ # Prompt Engineering for Developers -A comprehensive learning resource for mastering prompt engineering techniques specifically designed for software developers. This course provides structured tutorials, hands-on exercises, and real-world implementation examples to help you integrate AI assistants effectively into your development workflow. +Master prompting techniques for software development with structured tutorials, hands-on exercises, and real-world examples. -## Course Structure +## πŸš€ Get Started -Following the proven structure of AWS educational resources, this course is organized into three main sections: +**1. Clone the repository:** +```bash +git clone git@github.com:splunk/prompteng-devs.git +cd prompteng-devs +``` -### πŸ“š [01-tutorials/](./01-tutorials/) - Fundamentals & Learning -Complete tutorials teaching prompt engineering from foundations to advanced integration: -- **Module 1**: Course introduction, environment setup, and prompt anatomy -- **Module 2**: Core techniques - clear instructions, personas, delimiters, reasoning -- **Module 3**: Software engineering applications - code quality, testing, debugging, APIs -- **Module 4**: Custom command integration for AI code assistants +**2. Begin learning:** +- **[Start Module 1: Foundations](./01-course/module-01-foundations/)** ← Read README.md, then open the `.ipynb` notebook +- **[View All Modules](./01-course/)** ← Browse the complete course +- **[Implementation Examples](./02-implementation-examples/)** ← Production patterns -### πŸ› οΈ [02-exercises/](./02-exercises/) - Hands-On Practice -Interactive exercises and assessments to reinforce learning: -- **hands-on/**: Guided practice activities for each module -- **solutions/**: Complete reference implementations with detailed explanations +--- -### 🎯 [03-examples/](./03-examples/) - Real-World Use Cases -Production-ready patterns and implementation examples: -- **code-quality/**: Refactoring, modernization, and quality improvement workflows -- **debugging/**: Incident investigation, root cause analysis, and resolution patterns -- **api-integration/**: Client generation, error handling, and robust integration patterns -- **custom-commands/**: Reusable command templates and team adoption strategies +## 🎯 Recommended Learning Workflow -## Quick Start Guide +> **πŸ“š For Each Module:** -### Learning Path -1. **🎯 Start Here**: [01-tutorials/module-01-foundations/](./01-tutorials/module-01-foundations/) for environment setup -2. **πŸ“– Learn**: Progress through tutorials in order (modules 1-4) -3. **πŸ› οΈ Practice**: Complete exercises in [02-exercises/hands-on/](./02-exercises/hands-on/) -4. 
**🎯 Apply**: Implement patterns from [03-examples/](./03-examples/) in real projects +#### **Step 1: πŸ“– Read the Module** +- Open the module's `README.md` file to understand learning objectives and prerequisites -### Prerequisites -- **Python 3.8+** and package manager (uv recommended) -- **IDE** with notebook support (VS Code or Cursor) -- **API Access** to one of: - - GitHub Copilot (recommended) - - CircuIT APIs - - OpenAI API key +#### **Step 2: πŸš€ Launch the Notebook** +- Open the `.ipynb` notebook file to begin the interactive tutorial -### Environment Setup +#### **Step 3: πŸ’» Complete All Cells** +- Run through each cell sequentially from top to bottom -Use uv to manage dependencies: +#### **Step 4: πŸƒβ€β™€οΈ Practice Exercises** +- Complete the hands-on exercises to reinforce learning -#### Using uv (Required) +#### **Step 5: πŸ“Š Self-Assess** +- Use the Skills Checklist in the notebook to track your progress -[uv](https://github.com/astral-sh/uv) is a fast Python package installer and resolver. +#### **Step 6: ➑️ Next Module** +- Move to the next module and repeat the process -> Note: To use the Splunk hosted PyPi repository, use the following command: -> ```bash -> brew upgrade okta-artifactory-login -> okta-artifactory-login -t pypi -> ``` +**πŸ“ˆ Track Progress**: Use the Skills Checklist in each notebook to mark skills as you master them +**πŸš€ Apply Skills**: Use real-world examples after completing all modules -```bash -# Install uv -curl -LsSf https://astral.sh/uv/install.sh | sh +πŸ’‘ **Tip**: Each module directory contains a `README.md` file explaining what you'll learn and how to get started. -# Alternative: Install using pip -pip install uv -``` +--- +## ⚑ Quick Setup + +**Prerequisites**: Python 3.8+, IDE with notebook support, API access (GitHub Copilot/CircuIT/OpenAI) ```bash -# Setup and install dependencies +# 1. Clone the repository +git clone git@github.com:splunk/prompteng-devs.git cd prompteng-devs + +# 2. Install dependencies +curl -LsSf https://astral.sh/uv/install.sh | sh uv venv .venv --seed source .venv/bin/activate uv pip install ipykernel -``` - -**Configure environment variables**: -```bash +# 3. Configure environment cp .env-example .env -$EDITOR .env +# Edit .env with your API keys ``` -Rename `.env-example` to `.env` and edit the values to match your environment (e.g., API keys or tokens required by your workflow). Ensure `.env` is present before running notebooks that depend on environment variables. +**Splunk users**: Run `okta-artifactory-login -t pypi` before installing dependencies. +--- -You can also open the folder directly in VS Code or Cursor and use their built-in notebook support. -When prompted for a kernel, select the interpreter from `.venv`. +## πŸ““ About Jupyter Notebooks -## Navigation & Usage +> **πŸ†• First time using Jupyter notebooks?** Read this section before starting the modules. 
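Before starting Module 1, a quick sanity check can confirm that the notebook kernel points at `.venv` and that your `.env` values are visible. This is only a sketch; the `CISCO_*` names mirror the CircuIT setup cells and apply only if that is your backend, and it assumes `python-dotenv` is installed by the module's dependency cell:

```python
# Quick environment sanity check -- run in a notebook cell after selecting the .venv kernel.
import os
import sys

from dotenv import load_dotenv  # provided by python-dotenv

load_dotenv()  # reads the .env file you created from .env-example
print("Python executable:", sys.executable)  # should point inside your .venv
print("CISCO_CLIENT_ID set:", bool(os.getenv("CISCO_CLIENT_ID")))          # CircuIT only
print("CISCO_CLIENT_SECRET set:", bool(os.getenv("CISCO_CLIENT_SECRET")))  # CircuIT only
```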
-### Directory Structure Overview -``` -prompteng-devs/ -β”œβ”€β”€ 01-tutorials/ # Complete learning modules -β”‚ β”œβ”€β”€ module-01-foundations/ -β”‚ β”œβ”€β”€ module-02-fundamentals/ -β”‚ β”œβ”€β”€ module-03-applications/ -β”‚ β”œβ”€β”€ module-04-integration/ -β”‚ └── prompt-engineering-for-developers.ipynb # Complete course -β”œβ”€β”€ 02-exercises/ # Hands-on practice -β”‚ β”œβ”€β”€ hands-on/ # Exercise notebooks -β”‚ └── solutions/ # Reference solutions -β”œβ”€β”€ 03-examples/ # Real-world patterns -β”‚ β”œβ”€β”€ code-quality/ -β”‚ β”œβ”€β”€ debugging/ -β”‚ β”œβ”€β”€ api-integration/ -β”‚ └── custom-commands/ -└── GitHub-Copilot-2-API/ # GitHub Copilot proxy setup -``` +All course modules use **Jupyter notebooks** (`.ipynb` files) - interactive documents that let you run code directly in your IDE. + +### ⚠️ Important Requirements + +
+ +**You must clone this repository and run notebooks locally.** They cannot be executed directly from GitHub. + +
+ +### πŸ’‘ How Notebooks Work + +- **Code cells** contain Python code that runs on your local machine +- **Click the ▢️ button** (or press `Shift + Enter`) to execute a cell +- **Output appears** below each cell after you run it +- **To edit cells**: Double-click to edit, make changes (like uncommenting code), then press `Shift + Enter` to run +- **Installation commands** run locally and install packages to your Python environment +- **You don't copy/paste** - just click the run button in each cell +- **Long outputs are truncated**: If you see "Output is truncated. View as a scrollable element" - click that link to see the full response + +### πŸ”’ Where Code Executes + +All code runs on your local machine. When you: +- Install packages β†’ They're installed to your Python environment +- Connect to AI services β†’ Your computer sends requests over the internet to those services +- Run API calls β†’ They execute from your machine using your credentials -### Using the Notebooks -- **Kernel**: Select the `.venv` Python interpreter as the notebook kernel -- **Execution**: Run cells top-to-bottom initially, then iterate as needed -- **Experimentation**: Create new cells for testing; preserve original examples -- **IDE Integration**: VS Code/Cursor built-in notebook support recommended +### πŸš€ Getting Started with Notebooks -### Course Timing -- **Total Duration**: ~90 minutes -- **Session Options**: - - Single 90-minute session, or - - Three 30-minute focused sessions, or - - Self-paced over multiple days +1. **Open the `.ipynb` file** in your IDE (VS Code or Cursor recommended) +2. **Select the Python kernel**: Choose your `.venv` interpreter when prompted +3. **Run cells sequentially** from top to bottom +4. **Complete exercises** as you go through the modules +5. **Experiment**: Add new cells to try your own code -## Target Audience +--- -This course is designed for: -- **Software Engineers** looking to integrate AI assistants into their workflow -- **Technical Leads** wanting to establish team prompt engineering standards -- **DevOps Engineers** seeking to automate development workflows with AI -- **Engineering Managers** planning AI-assisted development adoption +## πŸ“Š Tracking Your Progress -## What You'll Build +Each module includes a **Skills Checklist** to help you track your mastery of prompt engineering techniques. + +### How It Works + +Each module notebook has **two sections** for tracking progress: + +#### 1️⃣ **Progress Overview** (Visual Status Only - Not Interactive) +- Shows automatic status: Tutorial completion and overall progress +- These checkmarks (βœ…/⬜) are **visual indicators only** - you cannot click them +- Automatically shows βœ… for "Tutorial Completed" after you finish all cells +- The ⬜ for "Skills Mastery" reminds you to use the Skills Checklist below + +#### 2️⃣ **Check Off Your Skills** (Interactive Checkboxes - This is Where You Track!) 
+- Contains **clickable checkboxes** for each individual skill +- **This is where you actively track your mastery** as you learn +- Check off each skill as you achieve it (see criteria below) +- Your progress percentage updates automatically based on checked skills + +### When to Check Off a Skill + +βœ… You can confidently apply the technique without referring back to examples +βœ… You understand why and when to use the technique +βœ… You can explain the technique to a colleague +βœ… You've successfully used it in your own coding tasks + +πŸ’‘ **Important**: The interactive checkboxes are in the "**Check Off Your Skills**" section. Don't worry if you can't click the status indicators in "Progress Overview" - those are just visual guides! + +πŸ’‘ **Tip**: Don't rush to check off skills. The goal is genuine mastery, not completion speed. Come back and practice skills until you feel confident. + +--- + +## πŸ“š Learning Path + +### 1. **Interactive Course** - Learn the fundamentals +- **[Module 1: Foundations](./01-course/module-01-foundations/)** - Interactive notebook (`.ipynb`) with environment setup & prompt anatomy (20 min) +- **[Module 2: Core Techniques](./01-course/module-02-fundamentals/)** - Interactive notebook (`.ipynb`) with role prompting, structured inputs, few-shot examples, chain-of-thought reasoning, reference citations, prompt chaining, and evaluation techniques (90-120 min) +- **[Module 3: Applications](./01-course/module-03-applications/)** - Interactive notebook (`.ipynb`) with code quality, testing, debugging (30 min) +- **[Module 4: Integration](./01-course/module-04-integration/)** - Interactive notebook (`.ipynb`) with custom commands & AI assistants (10 min) + +### 2. **Practice** - Reinforce learning +- **Hands-on Exercises** - Integrated into each module to reinforce concepts +- **Self-Assessment** - Use the Skills Checklist in each module to track your progress + +### 3. **Apply** - Real-world patterns +- **[Code Quality](./02-implementation-examples/code-quality/)** - Refactoring & modernization +- **[Debugging](./02-implementation-examples/debugging/)** - Incident investigation & resolution +- **[API Integration](./02-implementation-examples/api-integration/)** - Client generation & error handling +- **[Custom Commands](./02-implementation-examples/custom-commands/)** - Reusable templates + + +## 🎯 What You'll Build -By course completion, you'll have: - βœ… **Working Development Environment** with AI assistant integration - βœ… **Prompt Engineering Toolkit** with reusable patterns and commands - βœ… **Production-Ready Workflows** for code quality, debugging, and API integration -## Contributing +**Total Time**: ~90 minutes (can be split into 3Γ—30min sessions) + +--- + +## πŸ“ Project Structure + +``` +prompteng-devs/ +β”œβ”€β”€ 01-course/ # Learning modules +β”œβ”€β”€ 02-implementation-examples/ # Real-world patterns +└── GitHub-Copilot-2-API/ # Copilot setup +``` + +**New to notebooks?** See [About Jupyter Notebooks](#-about-jupyter-notebooks) section above. -Issues and pull requests welcome! Please ensure: -- Examples are minimal, reproducible, and well-documented -- New patterns include both implementation and usage guidance -- Educational content follows the established progression structure +--- +## 🀝 Contributing +Issues and pull requests welcome! Ensure examples are minimal, reproducible, and well-documented. \ No newline at end of file