4 changes: 2 additions & 2 deletions 01-course/module-01-foundations/README.md
@@ -5,10 +5,10 @@
This foundational module introduces you to prompt engineering concepts and gets your development environment configured for hands-on learning.

### Learning Objectives
By completing this module, you will:
By completing this module, you will be able to:
- ✅ Set up a working development environment with AI assistant access
- ✅ Identify and apply the four core elements of effective prompts
- ✅ Write basic prompts for code improvement and documentation
- ✅ Write basic prompts for reviewing code
- ✅ Iterate and refine prompts based on output quality
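The "four core elements" objective above can be sketched in code. A minimal illustration, assuming the elements are instruction, context, input data, and output format (common terminology, not necessarily the module's own):

```python
def build_prompt(instruction, context, input_data, output_format):
    """Assemble a prompt from four commonly cited elements.

    The element names here are an assumption for illustration, not the
    module's official terminology.
    """
    return "\n\n".join([
        f"Instruction: {instruction}",
        f"Context: {context}",
        f"Input:\n{input_data}",
        f"Output format: {output_format}",
    ])

prompt = build_prompt(
    "Review this function for bugs.",
    "Part of a payment-processing service.",
    "def total(items): return sum(i.price for i in items)",
    "A bulleted list of issues, most severe first.",
)
print(prompt)
```

Iterating on any one element independently (say, tightening the output format) is one way to practice the refinement objective above.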

### Getting Started
160 changes: 136 additions & 24 deletions 01-course/module-01-foundations/module1.ipynb
@@ -166,6 +166,7 @@
"<li><strong>Installation commands</strong> run locally and install packages to your Python environment</li>\n",
"<li><strong>You don't copy/paste</strong> - just click the run button in each cell</li>\n",
"<li><strong>Output appears</strong> below each cell after you run it</li>\n",
"<li><strong>Long outputs are truncated:</strong> If you see \"Output is truncated. View as a scrollable element\" - <strong>click that link</strong> to see the full response in a scrollable view</li>\n",
"</ul>\n",
"</div>\n"
]
@@ -205,7 +206,7 @@
"def install_requirements():\n",
" try:\n",
" # Install from requirements.txt\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"-r\", \"requirements.txt\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"-q\", \"-r\", \"requirements.txt\"])\n",
" print(\"✅ SUCCESS! All dependencies installed successfully.\")\n",
" print(\"📦 Installed: openai, anthropic, python-dotenv, requests\")\n",
" except subprocess.CalledProcessError as e:\n",
@@ -235,7 +236,14 @@
"\n",
"Choose your preferred option:\n",
"\n",
"- **Option A: GitHub Copilot API (local proxy)**: Recommended if you don't have OpenAI or CircuIT API access. Follow [GitHub-Copilot-2-API/README.md](../../GitHub-Copilot-2-API/README.md) to authenticate and start the local server, then run the `GitHub Copilot (local proxy)` setup cells below.\n",
"- **Option A: GitHub Copilot API (local proxy)**: Recommended if you don't have OpenAI or CircuIT API access.\n",
" - Supports both **Claude** and **OpenAI** models\n",
" - No API keys needed - uses your GitHub Copilot subscription\n",
" - Follow [GitHub-Copilot-2-API/README.md](../../GitHub-Copilot-2-API/README.md) to authenticate and start the local server\n",
"  - Run the setup cell below and **select your preferred provider** by setting the `PROVIDER` variable to `\"openai\"` or `\"claude\"`\n",
" - Available models:\n",
" - **OpenAI**: gpt-4o, gpt-4, gpt-3.5-turbo, o3-mini, o4-mini\n",
" - **Claude**: claude-3.5-sonnet, claude-3.7-sonnet, claude-sonnet-4\n",
"\n",
"- **Option B: OpenAI API**: If you have OpenAI API access, you can use the `OpenAI` connection cells provided later in this notebook.\n",
"\n",
@@ -309,40 +317,144 @@
"metadata": {},
"outputs": [],
"source": [
"# GitHub Copilot API setup (local proxy)\n",
"# Option A: GitHub Copilot API setup (Recommended)\n",
"import openai\n",
"import anthropic\n",
"import os\n",
"\n",
"# Configure for local GitHub Copilot proxy\n",
"client = openai.OpenAI(\n",
"# ============================================\n",
"# 🎯 CHOOSE YOUR AI MODEL PROVIDER\n",
"# ============================================\n",
"# Set your preference: \"openai\" or \"claude\"\n",
"PROVIDER = \"claude\"  # Change to \"openai\" to use OpenAI models\n",
"\n",
"# ============================================\n",
"# 📋 Available Models by Provider\n",
"# ============================================\n",
"# OpenAI Models (via GitHub Copilot):\n",
"# - gpt-4o (recommended, supports vision)\n",
"# - gpt-4\n",
"# - gpt-3.5-turbo\n",
"# - o3-mini, o4-mini\n",
"#\n",
"# Claude Models (via GitHub Copilot):\n",
"# - claude-3.5-sonnet (recommended, supports vision)\n",
"# - claude-3.7-sonnet (supports vision)\n",
"# - claude-sonnet-4 (supports vision)\n",
"# ============================================\n",
"\n",
"# Configure clients for both providers\n",
"openai_client = openai.OpenAI(\n",
" base_url=\"http://localhost:7711/v1\",\n",
" api_key=\"dummy-key\" # The local proxy doesn't need a real key\n",
" api_key=\"dummy-key\"\n",
")\n",
"\n",
"def get_chat_completion(messages, model=\"gpt-4\", temperature=0.7):\n",
" \"\"\"\n",
" Get a chat completion from the AI model.\n",
" \n",
" Args:\n",
" messages: List of message dictionaries with 'role' and 'content'\n",
" model: Model name (default: gpt-4)\n",
" temperature: Creativity level 0-1 (default: 0.7)\n",
" \n",
" Returns:\n",
" String response from the AI model\n",
" \"\"\"\n",
"claude_client = anthropic.Anthropic(\n",
" api_key=\"dummy-key\",\n",
" base_url=\"http://localhost:7711\"\n",
")\n",
"\n",
"# Set default models for each provider\n",
"OPENAI_DEFAULT_MODEL = \"gpt-4o\"\n",
"CLAUDE_DEFAULT_MODEL = \"claude-3.5-sonnet\"\n",
"\n",
"\n",
"def _extract_text_from_blocks(blocks):\n",
" \"\"\"Extract text content from response blocks returned by the API.\"\"\"\n",
" parts = []\n",
" for block in blocks:\n",
" text_val = getattr(block, \"text\", None)\n",
" if isinstance(text_val, str):\n",
" parts.append(text_val)\n",
" elif isinstance(block, dict):\n",
" t = block.get(\"text\")\n",
" if isinstance(t, str):\n",
" parts.append(t)\n",
" return \"\\n\".join(parts)\n",
"\n",
"\n",
"def get_openai_completion(messages, model=None, temperature=0.0):\n",
" \"\"\"Get completion from OpenAI models via GitHub Copilot.\"\"\"\n",
" if model is None:\n",
" model = OPENAI_DEFAULT_MODEL\n",
" try:\n",
" response = client.chat.completions.create(\n",
" response = openai_client.chat.completions.create(\n",
" model=model,\n",
" messages=messages,\n",
" temperature=temperature\n",
" )\n",
" return response.choices[0].message.content\n",
" except Exception as e:\n",
" return f\"❌ Error: {e}\\\\n\\\\n💡 Make sure the GitHub Copilot local proxy is running on port 7711\"\n",
" return f\"❌ Error: {e}\\n💡 Make sure GitHub Copilot proxy is running on port 7711\"\n",
"\n",
"\n",
"def get_claude_completion(messages, model=None, temperature=0.0):\n",
" \"\"\"Get completion from Claude models via GitHub Copilot.\"\"\"\n",
" if model is None:\n",
" model = CLAUDE_DEFAULT_MODEL\n",
" try:\n",
" response = claude_client.messages.create(\n",
" model=model,\n",
" max_tokens=8192,\n",
" messages=messages,\n",
" temperature=temperature\n",
" )\n",
" return _extract_text_from_blocks(getattr(response, \"content\", []))\n",
" except Exception as e:\n",
" return f\"❌ Error: {e}\\n💡 Make sure GitHub Copilot proxy is running on port 7711\"\n",
"\n",
"print(\"✅ GitHub Copilot API configured successfully!\")\n",
"print(\"🔗 Connected to: http://localhost:7711\")\n"
"\n",
"def get_chat_completion(messages, model=None, temperature=0.7):\n",
" \"\"\"\n",
" Generic function to get chat completion from any provider.\n",
" Routes to the appropriate provider-specific function based on PROVIDER setting.\n",
" \"\"\"\n",
" if PROVIDER.lower() == \"claude\":\n",
" return get_claude_completion(messages, model, temperature)\n",
" else: # Default to OpenAI\n",
" return get_openai_completion(messages, model, temperature)\n",
"\n",
"\n",
"def get_default_model():\n",
" \"\"\"Get the default model for the current provider.\"\"\"\n",
" if PROVIDER.lower() == \"claude\":\n",
" return CLAUDE_DEFAULT_MODEL\n",
" else:\n",
" return OPENAI_DEFAULT_MODEL\n",
"\n",
"\n",
"# ============================================\n",
"# 🧪 TEST CONNECTION\n",
"# ============================================\n",
"print(\"🔄 Testing connection to GitHub Copilot proxy...\")\n",
"test_result = get_chat_completion([\n",
" {\"role\": \"user\", \"content\": \"test\"}\n",
"])\n",
"\n",
"if test_result and \"Error\" in test_result:\n",
" print(\"\\n\" + \"=\"*60)\n",
" print(\"❌ CONNECTION FAILED!\")\n",
" print(\"=\"*60)\n",
" print(f\"Provider: {PROVIDER.upper()}\")\n",
" print(f\"Expected endpoint: http://localhost:7711\")\n",
" print(\"\\n⚠️ The GitHub Copilot proxy is NOT running!\")\n",
" print(\"\\n📋 To fix this:\")\n",
" print(\" 1. Open a new terminal\")\n",
" print(\" 2. Navigate to your copilot-api directory\")\n",
" print(\" 3. Run: uv run copilot2api start\")\n",
" print(\" 4. Wait for the server to start (you should see 'Server initialized')\")\n",
" print(\" 5. Come back and rerun this cell\")\n",
" print(\"\\n💡 Need setup help? See: GitHub-Copilot-2-API/README.md\")\n",
"    print(\"=\"*60)\n",
"else:\n",
" print(\"\\n\" + \"=\"*60)\n",
" print(\"✅ CONNECTION SUCCESSFUL!\")\n",
" print(\"=\"*60)\n",
" print(f\"🤖 Provider: {PROVIDER.upper()}\")\n",
" print(f\"📦 Default Model: {get_default_model()}\")\n",
" print(f\"🔗 Endpoint: http://localhost:7711\")\n",
" print(f\"\\n💡 To switch providers, change PROVIDER to '{'claude' if PROVIDER.lower() == 'openai' else 'openai'}' and rerun this cell\")\n",
"    print(\"=\"*60)\n"
]
},
{
@@ -490,9 +602,9 @@
"print(response)\n",
"\n",
"if response and \"Connection successful\" in response:\n",
" print(\"\\\\n🎉 Perfect! Your AI connection is working!\")\n",
" print(\"\\n🎉 Perfect! Your AI connection is working!\")\n",
"else:\n",
" print(\"\\\\n⚠️ Connection test complete, but response format may vary.\")\n",
" print(\"\\n⚠️ Connection test complete, but response format may vary.\")\n",
" print(\"This is normal - let's continue with the tutorial!\")\n"
]
},
77 changes: 34 additions & 43 deletions 01-course/module-02-fundamentals/README.md
@@ -4,59 +4,50 @@

This module covers the essential prompt engineering techniques that form the foundation of effective AI assistant interaction for software development.

### What You'll Learn
- Clear instruction writing and specification techniques
- Role prompting and persona adoption for specialized expertise
- Using delimiters and structured inputs for complex tasks
- Step-by-step reasoning and few-shot learning patterns
- Providing reference text to reduce hallucinations

### Module Contents
- **[module2.ipynb](./module2.ipynb)** - Complete module 2 tutorial notebook
### Learning Objectives
By completing this module, you will be able to:

### Core Techniques Covered
- ✅ Apply eight core prompt engineering techniques to real coding scenarios
- ✅ Write clear instructions with specific constraints and requirements
- ✅ Use role prompting to transform AI into specialized domain experts
- ✅ Organize complex inputs using XML delimiters and structured formatting
- ✅ Teach AI your preferred styles using few-shot examples
- ✅ Implement chain-of-thought reasoning for systematic problem-solving
- ✅ Ground AI responses in reference texts with proper citations
- ✅ Break complex tasks into sequential workflows using prompt chaining
- ✅ Create evaluation rubrics and self-critique loops with LLM-as-Judge
- ✅ Separate reasoning from clean final outputs using inner monologue
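Two of the tactics above, role prompting and XML-style delimiters, combine naturally in a single message list. A minimal sketch (the security-reviewer persona and the tag names are this sketch's illustrative choices, not prescribed by the module):

```python
# Illustrative only: the persona and tag names are assumptions of this sketch.
code_snippet = "def div(a, b): return a / b"

messages = [
    # Role prompting: cast the model as a specialized domain expert.
    {"role": "system",
     "content": "You are a senior security engineer reviewing Python code."},
    # Delimiters: XML-style tags keep requirements and code cleanly separated.
    {"role": "user",
     "content": (
         "<requirements>\nFlag any unhandled error conditions.\n</requirements>\n"
         f"<code>\n{code_snippet}\n</code>"
     )},
]
print(messages[1]["content"])
```

In the notebook, a list like this would be passed to the `get_chat_completion` helper set up in Module 1.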

#### 1. Clear Instructions & Specifications
- Writing precise, unambiguous prompts
- Specifying constraints, formats, and requirements
- Handling edge cases and error conditions
### Getting Started

#### 2. Role Prompting & Personas
- Adopting specialized engineering roles (security, performance, QA)
- Leveraging domain expertise through persona prompting
- Combining multiple perspectives for comprehensive analysis
**First time here?** If you haven't set up your development environment yet, follow the [Quick Setup guide](../../README.md#-quick-setup) in the main README first.

#### 3. Delimiters & Structured Inputs
- Organizing complex multi-file inputs using headers and XML-like tags
- Separating requirements, context, and code cleanly
- Structuring outputs for consistency and parsability
**Ready to start?**
1. **Open the tutorial notebook**: Click on [module2.ipynb](./module2.ipynb) to start the interactive tutorial
2. **Install dependencies**: Run the "Install Required Dependencies" cell in the notebook
3. **Follow the notebook**: Work through each cell sequentially - the notebook will guide you through setup and exercises
4. **Complete exercises**: Practice the hands-on activities as you go

#### 4. Step-by-Step Reasoning
- Guiding systematic analysis through explicit steps
- Building chains of reasoning for complex problems
- Creating reproducible analytical workflows
### Module Contents
- **[module2.ipynb](./module2.ipynb)** - Complete module 2 tutorial notebook

#### 5. Few-Shot Learning & Examples
- Providing high-quality examples to establish patterns
- Teaching consistent formatting and style
- Demonstrating edge case handling
### Time Required
Approximately 90-120 minutes

### Learning Objectives
By completing this module, you will:
- ✅ Master the six core prompt engineering techniques
- ✅ Be able to transform vague requests into specific, actionable prompts
- ✅ Know how to structure complex multi-file refactoring tasks
- ✅ Understand how to guide AI assistants through systematic analysis
- ✅ Have practical experience with each technique applied to code
**Time Breakdown:**
- Setup and introduction: ~10 minutes
- 8 core tactics with examples: ~70 minutes
- Hands-on practice activities: ~20-30 minutes
- Progress tracking: ~5 minutes

### Time Required
Approximately 30 minutes
💡 **Tip:** You can complete this module in one session or break it into multiple shorter sessions. Each tactic is self-contained, making it easy to pause and resume.

### Prerequisites
- Completion of [Module 1: Foundations](../module-01-foundations/)
- Working development environment with AI assistant access
- Python 3.8+ installed
- IDE with notebook support (VS Code or Cursor recommended)
- API access to GitHub Copilot, CircuIT, or OpenAI

### Next Steps
After completing this module:
1. Practice with the integrated exercises in this module
2. Continue to [Module 3: Applications](../module-03-applications/)
1. Practice with the integrated exercises in this module
2. Continue to [Module 3: Application in Software Engineering](../module-03-applications/)