diff --git a/content/modernizr/00-preparation/index.en.md b/content/modernizr/00-preparation/index.en.md
new file mode 100644
index 0000000..57d63c3
--- /dev/null
+++ b/content/modernizr/00-preparation/index.en.md
@@ -0,0 +1,101 @@
---
title: "Environment Setup"
date: 2025-09-01T09:53:04-05:00
weight: 30
chapter: true
---

## Development Environment Setup

Before beginning the modernization process, you need to configure your development environment. You'll be working with Visual Studio Code Web running on a pre-configured EC2 instance that includes all necessary AWS permissions and tooling for this workshop. This cloud-based development environment provides immediate access to AWS services and the specialized MCP servers required for the modernization workflow, eliminating the need for local environment configuration.

## Step 1: Accessing Your Development Environment

Navigate to the workshop studio page and access the "Outputs" tab to retrieve your environment credentials:

![Workshop studio tab](/static/images/modernizr/0/setup-01.png)

Locate these two important values:
- **VSCodeServerPassword** - Authentication credential for your VS Code instance
- **VSCodeServerURLModernizr** - Direct endpoint to your cloud-based IDE

Your values are unique to you and will differ from the example above.

Click the `VSCodeServerURLModernizr` URL to launch your development environment. Enter the password when prompted and allow approximately 60 seconds for the environment to initialize. Any startup notifications can be safely dismissed.

## Step 2: Configuring the Cline AI Agent

Your workspace includes **Cline**, an autonomous AI coding agent that will assist with the modernization process. Cline provides intelligent code analysis, generation, and refactoring capabilities throughout this workshop.

Access the [Cline](https://cline.bot/) extension through the VS Code interface:

![Cline plugin](/static/images/modernizr/0/setup-02.png)

## Step 3: Establishing Bedrock Integration

Configure Cline to use Amazon Bedrock as its foundation model provider. Select "Use your own API Key" and choose "Amazon Bedrock" from the dropdown:

![Cline plugin API config](/static/images/modernizr/0/setup-03.png)

Apply the following configuration parameters:
- **Authentication**: AWS Profile
- **Profile**: empty (default)
- **Region**: us-west-2
- **Cross region inference**: enabled

Confirm the settings and initialize the connection:

![Cline plugin config](/static/images/modernizr/0/setup-04.png)

::alert[You are working in a lab environment where the foundation models have rate limiting configured. It is highly likely that you will be throttled during the workshop and will have to retry operations. You have 4 requests per minute using Claude 4 and 6 requests per minute using Claude 3.7.]{type="info"}

## Step 4: Validating the Integration

Verify the Bedrock connection is functioning correctly by sending the test prompt below to Cline. If you are throttled, press "proceed anyway" to retry the request.

```terminal
Hello and Welcome to this modernization project, can you confirm you can read and list all the files in the workspace?
```

![Cline plugin testing](/static/images/modernizr/0/setup-05.png)

This workshop provides access to two foundation models: `claude-sonnet-4` (latest generation with enhanced reasoning capabilities) and `claude-sonnet-3.7` (previous generation with proven reliability).
+ +![Cline plugin response](/static/images/modernizr/0/setup-06.png) ![Cline plugin result](/static/images/modernizr/0/setup-07.png) + +## Step 5: Configuring Auto-Approval Settings + +Optimize Cline's efficiency by enabling automatic permissions for routine operations. Access the auto-approve settings via the `^` icon in the bottom-right corner of the Cline interface. + +Enable the following capabilities: +- **Read all files** - Full workspace file access +- **Use MCP servers** - Integration with specialized AI tools +- **Edit project files** - Code modification permissions + +Increase the **Max requests** parameter from 20 to 40 to accommodate complex multi-step operations without interruption. + +![Cline plugin auto-approve settings](/static/images/modernizr/0/setup-08.png) + +## Step 6: Verifying MCP Server Status + +Access the MCP server management interface (stacked server icon) to confirm all specialized AI tools are operational. You should observe 3 active servers with green status indicators, confirming proper connectivity. + +![Cline plugin auto-approve settings](/static/images/modernizr/0/setup-09.png) + +## Step 7: MCP Integration Verification + +Test the MCP server functionality by querying existing DynamoDB resources: + +```shell +Please show me the DynamoDB tables that I have in this region +``` + +![Task MCP Server](/static/images/modernizr/0/setup-10.png) + +## Environment Ready + +Your development environment is now fully configured with AI-assisted tooling. The integration of VS Code Web, Cline, Bedrock foundation models, and specialized MCP servers provides a comprehensive platform for the database modernization workflow. + +Proceed to the next module to begin the systematic analysis of your legacy application. diff --git a/content/modernizr/01-modernization/index.en.md b/content/modernizr/01-modernization/index.en.md new file mode 100644 index 0000000..0af8025 --- /dev/null +++ b/content/modernizr/01-modernization/index.en.md @@ -0,0 +1,116 @@ +--- +title: "Database Modernizr Workflow" +date: 2025-09-01T10:41:04-05:00 +weight: 30 +chapter: true +--- + +## Application Analysis and Baseline Establishment + +Before beginning the modernization process, you must thoroughly understand the existing system architecture. This involves analyzing the current MySQL-based e-commerce application to identify all data access patterns, performance characteristics, and system dependencies. + +Begin by reviewing the application documentation in the `README.md` file, which provides an overview of the system architecture and functionality. + +::alert[You don't have to follow the `README.md` instructions as your environment is already pre-configured for you. ]{type="info"} + +![README](/static/images/modernizr/1/workflow-01.png) + +## Starting the Application Services + +The e-commerce application follows a standard three-tier architecture: +1. **Backend Service** - Express.js API server handling business logic and database interactions +2. **Frontend Application** - React-based user interface for customer interactions +3. **MySQL Database** - Relational database storing all application data + +While your application environment is already preconfigured, you'll still need to start both the backend and frontend services to establish a baseline for analysis. + +### Initializing the Backend Service + +The backend API server manages all client requests and database transactions. Navigate to the `/backend` directory and open an integrated terminal (right-click > "Open in Integrated Terminal"). 
Grant any requested permissions when prompted. + +![menu](/static/images/modernizr/1/workflow-02.png) + +Build and start the backend service using these commands: + +```shell +npm run build +npm run start +``` + +Monitor the startup sequence output. A successful initialization will display output similar to: + +```console +👤 DatabaseFactory.createUserRepository called +🔧 DynamoDBClientManager.getClient() called +👤 Creating UserDualWriteWrapper +🔐 AuthService repositories created +🛍️ ShoppingCartService constructor called +🛍️ ShoppingCartService repositories created +🛍️ ShoppingCartService constructor called +🛍️ ShoppingCartService repositories created +Starting server with enhanced error handling... +✅ Environment variables validated successfully +✅ Database abstraction layer initialized with mysql configuration +Registering routes... +All routes registered. +Server setup complete with comprehensive error handling +📝 Using MySQL-only mode (Phase 1) +🚀 Server is running on port 8100 +📊 Health check: http://localhost:8100/api/health +🔧 Performance: http://localhost:8100/api/performance +📈 Metrics: http://localhost:8100/api/metrics +``` + +**Important:** Keep this terminal window open! If you close it, the backend service will stop working, and our application won't function properly. + +### Launching the Frontend Application + +Initialize the React frontend application by opening a terminal in the `/frontend` directory and executing: + +```shell +npm run serve:prod +``` + +Allow several minutes for the build process to complete. Dismiss any popup notifications that may appear during startup. + +## Application Verification and Analysis + +With both services running, you can now interact with the complete application stack to understand its current behavior and performance characteristics. + +### Backend Health Check + +Verify the API server is responding correctly by accessing the health endpoint: + +1. Copy your VS Code environment URL +2. Open a new browser tab +3. Navigate to `[your-url]/api/health` + +This endpoint should return status information confirming the backend is operational: + +![Backend](/static/images/modernizr/1/workflow-03.png) + +Monitor your backend terminal for request logging, which demonstrates the API request flow. + +### Frontend Application Access + +Access the e-commerce frontend by navigating to `[your-url]/store/`: + +![Store](/static/images/modernizr/1/workflow-04.png) + +## System Interaction Analysis + +Explore the application functionality to understand the data access patterns that will need to be modernized: + +**User Registration:** Create a new user account to observe authentication workflows + +**Admin Access:** Use the administrative account (username: `admin`, password: `0137183966133de0ace3d7e65e025d12`) to access extended functionality + +**E-commerce Operations:** Navigate through product browsing, cart management, and checkout processes. We have two simulated payment methods, paypal and credit card, both will auto-approve any order! + +Pay close attention to the backend terminal output as you interact with the application. Each user action generates specific database queries that represent the access patterns you'll need to replicate in your DynamoDB implementation. + +## Baseline Understanding + +This exploration phase establishes your baseline understanding of the current system architecture. 
Each user interaction demonstrates how the Express.js backend translates HTTP requests into MySQL queries, processes the results, and returns formatted responses to the React frontend.

This request-response cycle represents the core functionality that must be preserved during the database modernization process, ensuring zero functional regression while achieving improved performance and scalability through DynamoDB.
diff --git a/content/modernizr/01-modernization/workflow-00.en.md b/content/modernizr/01-modernization/workflow-00.en.md
new file mode 100644
index 0000000..7236a5c
--- /dev/null
+++ b/content/modernizr/01-modernization/workflow-00.en.md
@@ -0,0 +1,61 @@
---
title: "1.1 The workflow"
menuTitle: "The workflow"
date: 2025-09-01T10:42:04-05:00
weight: 31
chapter: false
---

## Your 7-Step Journey to Database Modernization

Think of database modernization like renovating a house while people are still living in it. You can't just tear everything down and start over — you need a careful, step-by-step plan that keeps everything working while you upgrade piece by piece. That's exactly what our modernization workflow does!

Our process consists of seven carefully designed stages, each one building on the previous step. It's like following a recipe — each ingredient needs to be added at the right time and in the right order to get the perfect result.

![Modernization workflow](/static/images/modernizr/1/workflow-base-01.png)

## Stage 1: Understanding What You Have — Database Detective Work

The first stage is like being a detective investigating the current system. We need to understand everything about how the existing MySQL database works before we can improve it. This involves connecting to the database, examining which predicates are used in queries and what data is returned, measuring how fast different operations run, looking at the structure of all the tables, and studying the application code to understand exactly how data flows through the system.

Think of it like doing a walkthrough with a designer before your remodel. They need to understand your taste and how you use your space to fully capture the requirements for the project. We use a specialized AI tool (the MySQL MCP server) to help us gather all this information systematically.

## Stage 2: Designing the New Blueprint — Creating Your DynamoDB Model

This is where the real design work happens! Using all the information we gathered in Stage 1, we create a completely new data model designed specifically for DynamoDB. This stage is highly interactive — you'll work closely with the AI to make important decisions about how to structure your data.

It's like working with an architect to design your dream house renovation. The AI provides technical expertise and suggestions, but you need to guide the process and make the final decisions about what works best for your specific needs. This collaboration ensures the new design fits your application's requirements.

## Stage 3: Building the Bridge — Creating a Database Abstraction Layer

Now we create a special "bridge" layer in your application code that can talk to both the old MySQL database and the new DynamoDB system at the same time. This follows AWS best practices and ensures you can switch between systems safely without breaking anything.

Think of this like renovating the guest room before you renovate the primary bedroom — you'll always have somewhere to sleep. Similarly, with a database abstraction layer in place,
everything continues to work normally while you prepare for the upgrade. + +## Stage 4: Testing the Connection — Validating DynamoDB Integration + +In this stage, we set up a local version of DynamoDB and test our bridge layer to make sure everything works correctly. It's like the city inspector making sure your renovated house systems are up to code before you complete the project. + +We validate that all the connections work properly and that data flows correctly through both systems. This gives us confidence that everything is ready for the next phase. + +## Stage 5: Running Both Systems — Application Refactoring and Dual Writes + +This is the most complex stage, where your application learns to write data to both databases simultaneously. We use a method called "test-driven development," which means we write tests first to ensure everything works correctly, then modify the code to pass those tests. + +During this stage, we also create a special admin control panel that lets you monitor and control the modernization process. You can watch both the old and new systems running side by side and manage the transition safely. + +## Stage 6: Moving to the Cloud — Deploying the cloud Infrastructure + +Once everything is tested and working locally, we deploy your new DynamoDB tables to the actual AWS cloud environment. You've finally got your Certificate of Occupancy! + +The deployment process ensures your cloud infrastructure is set up correctly and ready to handle real traffic. + +## Stage 7: The Great Migration — Moving Your Data + +The final stage is where we actually move all your existing data from MySQL to DynamoDB. This is carefully controlled and monitored — you decide when you're ready to start using dual writes, and then we gradually migrate all your historical data. + +We use specialized data processing tools (like the AWS Glue MCP Server) to handle this migration safely and efficiently. It's like having professional movers who ensure all your belongings get to the new house safely and end up in exactly the right places. + +## Why This Approach Works + +This seven-stage approach ensures that at no point during the modernization process is your application broken or unavailable. Users continue shopping, orders continue processing, and everything works smoothly while we upgrade the foundation underneath. It's a proven method that minimizes risk while maximizing the benefits of modern database technology. diff --git a/content/modernizr/01-modernization/workflow-01.en.md b/content/modernizr/01-modernization/workflow-01.en.md new file mode 100644 index 0000000..511f526 --- /dev/null +++ b/content/modernizr/01-modernization/workflow-01.en.md @@ -0,0 +1,35 @@ +--- +title: "1.2 The Anatomy of a prompt" +menuTitle: "Anatomy of a prompt" +date: 2025-09-01T10:43:04-05:00 +weight: 32 +chapter: false +--- + +## 📚 Prompt Engineering Architecture + +The `/prompts` directory implements a structured approach to AI-driven development through organized subfolders that correspond to each stage of the modernization workflow. Each subfolder contains three components: Requirements, Design, and Tasks documents. This architectural pattern creates a comprehensive framework that systematically guides the entire modernization process from initial analysis through final implementation. 
+ +![Prompts](/static/images/modernizr/1/workflow-prompt-01.png) + +### Requirements Documentation — Defining Objectives and Success Criteria + +The Requirements Document establishes the foundational context by articulating the business objectives and technical constraints that drive the modernization initiative. This document defines explicit acceptance criteria and success metrics, creating what software engineers refer to as the "definition of done." By providing comprehensive context about the problem domain, the requirements document enables the LLM to understand not just what needs to be built, but why it needs to be built and how to validate that the implementation meets the specified goals. + +### Design Documentation — Technical Architecture and Implementation Strategy + +The Design Document serves as the technical specification that translates high-level requirements into concrete architectural decisions and implementation strategies. This document defines the specific methodologies, data structures, and system workflows that will be employed throughout the modernization process. It includes detailed implementation guidelines, architectural patterns, and design rationale that provide the LLM with a comprehensive technical blueprint for executing the modernization according to established software engineering principles. + +### Task Documentation — Executable Implementation Steps + +The Tasks Document functions as the bridge between abstract architectural design and concrete implementation by decomposing design specifications into discrete, executable development tasks. This document provides sequenced instructions that reference specific files, tools, and expected deliverables, ensuring the LLM receives actionable directives rather than abstract concepts. The task breakdown transforms architectural decisions into manageable development units that can be systematically executed and validated. + +### The Importance of Structured Prompt Engineering + +This three-tier documentation approach addresses a fundamental challenge in AI-assisted development: the complexity management problem. When an LLM encounters a sophisticated design document without accompanying task structure, it faces the same challenges that human developers experience when given high-level specifications without clear implementation guidance. The system may attempt to implement multiple components simultaneously, leading to incomplete or inconsistent results, or it may struggle to determine the appropriate sequencing of development activities. + +The structured approach provides several advantages. The LLM receives explicit guidance on which specialized tools and MCP servers to utilize for each development phase. Dependencies between implementation tasks become clearly defined, preventing issues that arise from incorrect execution ordering. The system understands exactly what deliverables should be produced at each stage, enabling proper validation and quality assurance throughout the process. + +To conceptualize this framework using a navigation analogy: the Requirements document defines the destination and the reasons for traveling there, the Design document provides the comprehensive route map showing all available paths and optimal strategies, and the Tasks document functions as turn-by-turn GPS navigation that guides execution through each specific step of the journey. 
This structured approach transforms complex modernization challenges into systematic, manageable processes that can be executed reliably through AI-assisted development workflows.

![Tasks](/static/images/modernizr/1/workflow-prompt-02.png)
diff --git a/content/modernizr/01-modernization/workflow-02.en.md b/content/modernizr/01-modernization/workflow-02.en.md
new file mode 100644
index 0000000..b24b3c5
--- /dev/null
+++ b/content/modernizr/01-modernization/workflow-02.en.md
@@ -0,0 +1,26 @@
---
title: "1.3 Stage Artifacts"
menuTitle: "Stage Artifacts"
date: 2025-09-02T10:43:04-05:00
weight: 33
chapter: false
---

Each stage generates "Artifacts" — deliverables that will be used in future stages across the solution. This project works sequentially, using output artifacts from earlier stages as input for the next. Every stage will create a new folder `stage-xx` where all the artifacts will be stored. In addition, another file, `xx-working_log.md`, will be generated. This file is used by the LLM to keep track of the work that has been done so far; consider it a notepad or scratch pad.

::alert[If you execute this workshop from scratch (as available in the `/clean-start` folder) it will take ~11.5 hours to complete, with most of that time spent on application refactoring (stages 3, 4, and 5). For simplicity and to streamline the duration of this workshop, these steps have already been completed for you.]{type="info"}

# Stage-01 artifacts

Let's start exploring the artifacts available for the first stage, `stage-01`. This stage is focused on capturing the data that is available from the source database and application backend logic. We use the MySQL MCP server to understand table structure, constraints, and data. Next we use the MySQL query logs to identify which access patterns are exercised at runtime and how often. Finally, we explore the application logic to capture all the access patterns that we will need to modernize.

![Artifacts](/static/images/modernizr/1/workflow-artifacts-01.png)

DynamoDB is all about the application access patterns that we need to support. The secret to DynamoDB data modeling is to store data exactly in the format your application will consume it, structuring your data in a way that can be read as efficiently as possible with the smallest number of queries.

Open the files available in the `stage-01` folder, familiarize yourself with them, and understand in detail the current application access patterns. This is the application logic that you will need to support in the modernized application.

- `01_1_API_access_patterns.md` — This file should be your primary source of information. It contains an analysis of the backend code. When the LLM finishes creating it, it should contain a numbered list of 48 different application access patterns! If you want to better understand these access patterns, you can learn more about this project by reading the README available in the frontend folder, `frontend/README.md`. It contains a description of which patterns require authentication and a quick explanation of how to execute the API calls.
- `01_2_mysql_log_analysis.md` — This file is the MySQL log analysis, containing a description of how many times different access patterns were detected. We ran a small load test to simulate traffic and capture data on the results. Please note that the load test does not capture 100% of the application access patterns in the logs.
If you plan to use a similar approach for your modernization process, you should use logs that capture live traffic for a period of time sufficient to capture all required patterns. Keep in mind, though, that there still might be some application access patterns that were not captured if they weren't exercised during the logging window.
- `01_3_table_structure_analysis.md` — Uses the MySQL MCP server to explore table structure and identify table constraints, relationships, and data format.

diff --git a/content/modernizr/01-mysql-mcp/index.en.md b/content/modernizr/01-mysql-mcp/index.en.md
new file mode 100644
index 0000000..11131c2
--- /dev/null
+++ b/content/modernizr/01-mysql-mcp/index.en.md
@@ -0,0 +1,55 @@
---
title: "Stage 1: Source Database Analysis"
date: 2025-09-02T15:41:04-05:00
weight: 30
chapter: true
---

::alert[In this workshop this stage has already been completed for you. Please review the artifacts available in `/artifacts/stage-01`.]{type="info"}

## Comprehensive MySQL Database Analysis

Stage 1 establishes the foundation for your modernization project by conducting a systematic analysis of your existing MySQL database. This phase involves using specialized AI tools to automatically discover and document your current data architecture, relationships, and usage patterns.

The analysis process leverages the MySQL MCP Server — a specialized AI assistant designed specifically for relational database analysis. This tool connects directly to your running MySQL instance to extract comprehensive metadata about your database schema.

## Key Analysis Components

### Schema Discovery and Documentation

The MySQL MCP Server performs automated schema discovery by querying the MySQL information schema to extract detailed metadata about:

- **Table Structures**: Column definitions, data types, constraints, and nullable fields
- **Primary and Foreign Keys**: Relationship mappings between tables and referential integrity constraints
- **Indexes**: Performance optimization structures including primary, unique, and composite indexes
- **Views and Stored Procedures**: Complex query logic and business rules embedded in the database

### Data Relationship Mapping

Understanding how your data entities relate to each other is crucial for designing an effective NoSQL structure.
The analysis identifies: + +- **One-to-Many Relationships**: Parent-child relationships that may benefit from modeling as DynamoDB item collections +- **Many-to-Many Relationships**: Associations that require designs such as adjacency lists to model in NoSQL +- **Hierarchical Data Patterns**: Nested structures that can modeled with composite sort keys + +### Access Pattern Analysis + +The system analyzes your application's database interaction patterns to understand: + +- **Query Frequency Patterns**: Which tables and queries are accessed most frequently +- **Join Operations**: Complex relational queries that need restructuring for NoSQL +- **Transaction Boundaries**: ACID transaction requirements that must be preserved during migration + +## Automated Documentation Generation + +The MySQL MCP Server generates comprehensive documentation artifacts that serve as reference materials throughout the modernization process: + +- **Entity Relationship Diagrams**: Visual representations of your current data model +- **Schema Documentation**: Detailed specifications of all database objects +- **Access Pattern Catalog**: Documentation of how your application interacts with data + +## Setting Up the Analysis Environment + +Before beginning the analysis, ensure your MySQL database is accessible and the MCP Server has appropriate permissions to read schema metadata. The analysis process is read-only and does not modify your production data. + +Information gathered in this step of database analysis provides the detailed understanding necessary to design an optimal DynamoDB architecture that preserves all existing functionality while improving performance and scalability. diff --git a/content/modernizr/02-data-modeling/data-modeling-01.en.md b/content/modernizr/02-data-modeling/data-modeling-01.en.md new file mode 100644 index 0000000..ce97b0f --- /dev/null +++ b/content/modernizr/02-data-modeling/data-modeling-01.en.md @@ -0,0 +1,75 @@ +--- +title: "2.1 Data modeling" +menuTitle: "Data modeling" +date: 2025-09-02T16:43:04-05:00 +weight: 32 +chapter: false +--- + +## Design Process Continuation and Context Management + +During the data modeling phase, you may encounter context window limitations as the system accumulates extensive analysis data. When Cline prompts for a new task initiation, this indicates the conversation has reached capacity limits. This is standard behavior in AI-assisted development workflows. Your progress remains preserved through the working log and task files, ensuring continuity across sessions. + +## Proceeding to Validation Phase (Task 3.1) + +If the system doesn't automatically advance to task 3.1, manually initiate the next phase: + +```shell +Great please continue with task 3.1 available here prompts/02-dynamodb-data-modeling/tasks.md +``` + +## Access Pattern Validations + +Task 3.1 implements a validation step that verifies your DynamoDB design supports all identified access patterns from the requirements analysis. This validation process serves as a quality assurance checkpoint, detecting potential AI-generated artifacts that don't correspond to actual system requirements — a common issue in AI-assisted development where models may extrapolate beyond provided specifications. + +Following validation, you'll receive a table-by-table analysis of your data model. Before proceeding, review the generated design document at `artifacts/stage-02/dynamodb_data_model.md` to understand the proposed architecture. + +::alert[Pro-TIP! 
For easier visualization, right-click on the file and select "Open Preview".]{type="info"}
::alert[Given the non-deterministic nature of GenAI, the data model might be different for everyone running this workshop. Some attendees might have gotten a single-table design and others something closer to normalization (basically SQL in NoSQL, which we don't want).]{type="warning"}

## Data Relationship Analysis

Rather than following prescriptive instructions, focus on understanding the core data modeling concepts underlying your e-commerce application. Analyzing entity relationships and data dependencies is fundamental to creating an effective NoSQL design that supports all required access patterns.

This picture is an Entity Relationship Mapping that will help you understand in detail what needs to be modeled.

![Start conversation](/static/images/modernizr/2/stage02-10.png)

## Entity Relationship Mapping

The e-commerce application presents several entity relationships that inform the DynamoDB design strategy:

### Primary Relationship Patterns:

- **Users → Products**: One-to-many relationship where seller accounts can manage multiple product listings
- **Users → Orders**: One-to-many relationship supporting customer order history tracking
- **Users → Cart Items**: One-to-many relationship enabling persistent shopping cart functionality
- **Categories → Products**: One-to-many relationship for product categorization and navigation
- **Products → Order Items**: Many-to-many relationship through the order items junction entity
- **Products → Cart Items**: Many-to-many relationship through the cart items junction entity
- **Orders → Order Items**: One-to-many relationship for multi-item order support

## DynamoDB Entity Aggregation Strategy

These relationships reveal natural aggregation opportunities where related entities can be co-located within the same table partition. For instance, user-centric data including profile information, order history, and active cart contents share logical cohesion and similar access patterns. Generally, a good mantra for data modeling in DynamoDB is that "data accessed together should be stored together".

## Partition Key Design Principles

DynamoDB lets you use real business attributes as partition keys instead of generated IDs. For example, using `user_email` as the key ensures each record is unique while also making it easier to organize and query data in ways that match how the application actually uses it.

For additional query flexibility during the migration phase, Global Secondary Indexes (GSIs) can provide alternate access paths based on `userID` or `username` attributes without impacting primary table performance.

## Single Table Design Patterns

DynamoDB's strength lies in its ability to store heterogeneous entity types within a single table structure, provided they share logical relationships and similar access patterns. This approach, known as single table design, optimizes performance by co-locating related data and minimizing cross-table queries.

Example entity co-location for the Users table:
- **User Profile Entity**: Core user metadata and authentication information
- **User Order History**: Historical order records associated with the user
- **Active Cart Items**: Current shopping session state data

This co-location strategy enables efficient retrieval of all user-related information through single query operations, reducing read cost and improving performance.
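To make the "stored together" idea concrete, the sketch below shows how a single DynamoDB `Query` could retrieve a user's profile, cart items, and orders from one item collection. This is a minimal illustration only: the table name `Users`, the generic `PK`/`SK` attributes, and the `#META`, `CART#`, and `ORDER#` prefixes follow the model discussed in this workshop, but your generated data model may differ.

```typescript
// Minimal sketch: fetch everything stored under one user's partition.
// Assumes a table named "Users" with generic PK/SK attributes (hypothetical names).
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function getUserItemCollection(userEmail: string) {
  // One query returns the #META profile item, CART# items, and ORDER# items
  // because they all share the same partition key (the user's email).
  const result = await client.send(
    new QueryCommand({
      TableName: "Users", // hypothetical table name
      KeyConditionExpression: "PK = :pk",
      ExpressionAttributeValues: { ":pk": userEmail },
    })
  );

  const items = result.Items ?? [];
  return {
    profile: items.find((i) => i.SK === "#META"),
    cartItems: items.filter((i) => typeof i.SK === "string" && i.SK.startsWith("CART#")),
    orders: items.filter((i) => typeof i.SK === "string" && i.SK.startsWith("ORDER#")),
  };
}
```

If you only need one entity type, a key condition such as `PK = :pk AND begins_with(SK, :prefix)` narrows the query to just the cart or just the orders without reading the rest of the partition.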
## Implementation Considerations

The effectiveness of this design approach depends on carefully analyzing query patterns and ensuring that entity groupings align with actual application access requirements rather than traditional relational modeling conventions.
diff --git a/content/modernizr/02-data-modeling/data-modeling-02.en.md b/content/modernizr/02-data-modeling/data-modeling-02.en.md
new file mode 100644
index 0000000..6dd4df0
--- /dev/null
+++ b/content/modernizr/02-data-modeling/data-modeling-02.en.md
@@ -0,0 +1,175 @@
---
title: "2.2 Entities Design - Users"
menuTitle: "Entities Design - Users"
date: 2025-09-02T16:43:04-05:00
weight: 33
chapter: false
---

Think of designing database entities like organizing your closet. Instead of throwing everything in randomly, you want to group related items together so you can find what you need quickly. This is exactly what we're doing with our data - we're taking information that belongs together and storing it in the same place.

In NoSQL databases like DynamoDB, we can use "denormalization." This means we can copy some information into multiple places to make our queries faster. It's like writing your friend's phone number in multiple notebooks so you can always find it quickly, no matter which notebook you're using.

## The User Entity - Our Digital ID Card

Let's start with the most important piece: **the user**. Think of this as a digital ID card that contains everything we need to know about a person using our application.

We'll use their email address as the main identifier (called the PK, or partition key) because everyone has a unique email. Then we'll add a special tag called `#META` (that's our SK or Sort Key) to indicate this contains all the user's basic information.

| What We Store | Type of Data | Why We Need It |
|---------------|--------------|----------------|
| PK | Text | The person's email address |
| SK | Text | "#META" (like a label saying "this is user info") |
| username | Text | Their chosen display name |
| email | Text | Their email address (same as PK) |
| password_hash | Text | Their password (hashed for security) |
| profile_data | Collection | Things like birthday, interests, etc. |
| is_seller | True/False | Can they sell things on our platform? |
| seller_profile | Collection | Extra info if they're a seller |
| created_at | Text | When they joined |
| updated_at | Text | When they last changed something |
| status | Text | Are they active or inactive? |

**Why do we need extra search methods (GSIs)?**

During our transition from the old system to the new one, we need to be able to find users in multiple ways. It's like having multiple indexes in a library - you might want to find a book by title, author, or subject. We create these extra search methods (called GSIs - Global Secondary Indexes) so we can still find users by their old ID numbers or usernames while we're switching everything over.

## The Shopping Cart Entity - Your Digital Shopping Basket

Imagine you're walking through a store with a shopping basket. You'll select various quantities of different items to add to your cart. That's exactly what our shopping cart entity does digitally, except instead of a physical item, it's a small note representing what is being purchased and in what quantity.

We link each cart item to the person's email (PK) and give each item a special label that starts with `CART#` followed by the product ID (SK).
+ +| What We Store | Type of Data | Why We Need It | +|---------------|--------------|----------------| +| PK | Text | The shopper's email | +| SK | Text | "CART#" + the product's ID number | +| product_id | Text | Which product this is | +| quantity | Number | How many they want | +| price | Number | How much it cost when they added it | +| product_name | Text | The name of the product (copied for quick access) | +| seller_id | Text | Who's selling it (copied for quick access) | +| created_at | Text | When they added it to cart | +| updated_at | Text | When they last changed the quantity | + +The beauty of this design is that we only care about each person's own shopping cart - we don't need to search through everyone else's carts. It's like your personal storage space - you only need access to your own stuff, not everyone else's. + +## The Orders Entity - Your Purchase History + +When someone actually buys something, we create an order record. Think of this like a receipt that keeps track of everything about the purchase. We organize these by the customer's email and create a special identifier that includes the date and a unique order number. + +| What We Store | Type of Data | Why We Need It | +|---------------|--------------|----------------| +| PK | Text | The customer's email | +| SK | Text | "ORDER#" + date + order number | +| user_id | Text | Customer identifier | +| order_status | Text | Is it pending, shipped, delivered, etc.? | +| total_amount | Number | How much they paid in total | +| order_items | List | All the products they bought | +| shipping_address | Collection | Where to send it | +| payment_info | Collection | How they paid (kept secure) | +| seller_orders | Collection | Orders organized by which seller | +| created_at | Text | When they placed the order | +| updated_at | Text | When something changed | +| order_id | Text | Unique order number | + +**Our Search Helpers (GSIs Explained Simply)** + +Sometimes we need to find information in different ways, like when a customer calls and says "I have order number 12345, where is my package?" We need to be able to find that order quickly just from the order number. + +**GSI-1 & GSI-2: Temporary Helpers for Migration** +- These help us find users by their old ID numbers and usernames while we're switching systems +- Think of them as temporary bridges while we move from the old way to the new way + +**GSI-3: Order Finder** +- This lets us quickly find any order just by its order number +- Like having a master list of all orders organized by order number + +We could have been more efficient and combined some of these search helpers, but for this workshop, we're keeping them separate to make the code easier to understand and work with. + +Now it's time to put our design into action! We need to provide all this information to our AI assistant (Cline) so it can start building our new database structure. Since we've been working on this for a while, we need to give it the complete context of what we're trying to accomplish. + +## Setting Up the Data Model - What to Tell Your AI Assistant + +When you're ready to create this database structure, you'll need to provide your AI assistant (like Cline) with all the details we've discussed. Here's exactly what information to share: + +```shell +Products table updates + +For simplicity let's have a table with PK = product_id and SK = #META this allow us future expansion if we need to start denormalizing the attributes. (MAKE SURE THE ATTRIBUTE NAME MATCHES WHAT IT SAYS HERE!!) 
| Attribute | Type | Purpose |
|-----------|------|---------|
| PK | String | product_id |
| SK | String | #META |
| product_id | String | product_id |
| seller_id | String | Seller identifier |
| category_id | String | Category identifier |
| category_path | String | Full category hierarchy |
| product_name | String | Product title |
| description | String | Product description |
| price | Number | Current price |
| inventory_quantity | Number | Available quantity |
| image_url | String | Image URL |
| search_terms | String | Searchable text |
| created_at | String | ISO timestamp |
| updated_at | String | ISO timestamp |
| status | String | active/inactive |

{
  "PK": "6",
  "SK": "#META",
  "category_id": "11",
  "category_path": "Tools",
  "created_at": "2025-08-14 21:16:07",
  "description": "20V MAX cordless drill with 2 batteries, charger, and carrying case. 1/2-inch chuck, LED light, and 15 clutch settings.",
  "GSI1PK": "11",
  "GSI1SK": "Cordless Drill Kit",
  "GSI2PK": "1",
  "GSI2SK": "Cordless Drill Kit",
  "id": 6,
  "image_url": "https://images.unsplash.com/photo-1504148455328-c376907d081c?w=600&h=600&fit=crop&auto=format",
  "inventory_quantity": 19,
  "price": 129.99,
  "product_name": "Cordless Drill Kit",
  "seller_id": "1",
  "updated_at": "2025-08-17T15:05:47.840Z"
}

We will add two indexes that are possible future hot partitions, but with the numbers that we have discussed so far, this will be fine for this design.

**GSI-1: Category Products (Potential Hot Partition)**
- PK: GSI1PK = category_id, SK: GSI1SK = category_id
- Projection: ALL
- Purpose: Category-based browsing
- **Warning:** Monitor for hot partitions with popular categories

**GSI-2: Seller Products (Potential Hot Partition)**
- PK: GSI2PK = seller_id, SK: GSI2SK = seller_id
- Projection: ALL
- Purpose: Seller product management
- **Warning:** Monitor for hot partitions with high-volume sellers
```

![User entity modification](/static/images/modernizr/2/stage02-11.png)

Make sure all the modifications are stored in the `working_log.md` file! Sometimes `Cline` ignores what you say.

![Working Log](/static/images/modernizr/2/stage02-12.png)

In my specific scenario, `Cline` was having a hard time understanding that it needed to create the GSIs and it wasn't including them in the data model. It is your responsibility to make sure every access pattern is properly identified and recorded; remember, this is the base of data modeling and all the future steps depend on the decisions you make at this point.

![Working Log](/static/images/modernizr/2/stage02-13.png)

At some points it even suggested I use a different table for my shopping cart entity.

![Working Log](/static/images/modernizr/2/stage02-14.png)

But with a little patience, and by making sure all the data was there, I was able to guide `Cline` to the outcome and table structure that we originally intended.

![Working Log](/static/images/modernizr/2/stage02-15.png)

Before completing this workshop section, make sure the content you approve is valid: we are trying to use one table with 3 different entities (one for User metadata, another for User cart items, and finally one for User orders)! If you are still getting suggestions to create another table for orders or shopping cart items, make sure you tell `Cline` we don't need it anymore.
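As a quick illustration of how these three entities land in one table, the sketch below writes a single cart item using the key shapes described above (the shopper's email as the partition key and `CART#` plus the product ID as the sort key). The table name and exact attribute list are assumptions for illustration; the final model you approve with Cline may differ.

```typescript
// Minimal sketch: writing one cart item into the single Users table,
// following the key shapes described in this section. Table and attribute
// names are assumptions, not the workshop's generated schema.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function addCartItem(userEmail: string, productId: string, quantity: number, price: number) {
  const now = new Date().toISOString();
  await client.send(
    new PutCommand({
      TableName: "Users", // hypothetical table name
      Item: {
        PK: userEmail,             // partition key: the shopper's email
        SK: `CART#${productId}`,   // sort key: CART# + product ID
        product_id: productId,
        quantity,
        price,
        created_at: now,
        updated_at: now,
      },
    })
  );
}
```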
+ + + diff --git a/content/modernizr/02-data-modeling/data-modeling-03.en.md b/content/modernizr/02-data-modeling/data-modeling-03.en.md new file mode 100644 index 0000000..df129c6 --- /dev/null +++ b/content/modernizr/02-data-modeling/data-modeling-03.en.md @@ -0,0 +1,140 @@ +--- +title: "2.3 Entities Design - Products" +menuTitle: "Entities Design - Products" +date: 2025-09-02T16:43:04-05:00 +weight: 34 +chapter: false +--- + +## The Products Table - Our Digital Catalog + +Now that we've designed how to store user information and their activities, we need a place to store all the products available in our marketplace. Think of this as creating a digital catalog, like those product catalogs you might get in the mail, but much smarter and more organized. + +When your AI assistant (the LLM) first looked at this problem, it came up with a design that was pretty close to what we wanted. However, we're going to make a few small but important changes to make our system more flexible for the future. + +## Why Are We Using PK and SK Instead of Just Product ID? + +You might wonder why we're not just using `product_id` as our main identifier. The reason is like building a house with room to expand later. By using PK (Primary Key) and SK (Sort Key), we're creating space to add related information about each product in the future without having to rebuild our entire system. As a side benefit, shorter key names save storage too! + +For now, our PK will be the product ID, and our SK will be `#META` (which means "this contains the main product information"). This setup is like having a filing cabinet where each product gets its own folder, and we can add different types of documents to that folder later. + +## What Information Do We Store About Each Product? + +Here's everything we need to track for each product in our marketplace: + +| What We Store | Type of Data | Why We Need It | +|---------------|--------------|----------------| +| PK | Text | The unique product ID | +| SK | Text | "#META" (label for main product info) | +| product_id | Text | The same as PK (for easy reference) | +| seller_id | Text | Who is selling this product | +| category_id | Text | What category it belongs to (like "Electronics") | +| category_path | Text | The full category tree (like "Electronics > Laptops > Gaming") | +| product_name | Text | The product's title | +| description | Text | Details about what the product does | +| price | Number | How much it costs | +| inventory_quantity | Number | How many are available | +| image_url | Text | Link to the product's picture | +| search_terms | Text | Words people might use to find this product | +| created_at | Text | When it was first listed | +| updated_at | Text | When it was last changed | +| status | Text | Is it available or discontinued? | + +## Making Products Easy to Find + +Just like with users and orders, we need ways to find products quickly. We'll create two main search methods (GSIs): + +### GSI-1: Finding Products by Category +This is like organizing products by department in a store. When someone wants to browse all laptops or all books, this index makes it super fast to find them. + +**Important Note:** Popular categories (like "Electronics") might get very busy, which could slow things down. It's like having everyone crowd into the most popular store section - we'll need to keep an eye on this. + +### GSI-2: Finding Products by Seller +This lets sellers quickly see all their products, and lets customers browse everything a particular seller offers. 
It's like having a directory of all the shops in a mall. + +**Important Note:** Sellers with lots of products might create the same crowding issue as popular categories, so we'll monitor this too. + +## What Makes This Design Flexible? + +The beauty of our setup is that it's designed to grow. Right now, each product just has one record (the #META record with all its basic information). But later, we could add: + +- Customer reviews (using the same product ID but different SK values) +- Price history tracking +- Inventory updates from different warehouses +- Product variants (different sizes, colors, etc.) + +It's like starting with a basic photo album but designing it so you can easily add more sections later without having to reorganize everything. + +## Setting Up the Products Table - Instructions for Your AI Assistant + +When you're ready to create the products table, make sure your AI assistant understands these exact specifications. The attribute names must match exactly what's listed above - this is crucial for everything to work together properly. + +**Key Points to Emphasize:** +1. Use PK = product_id and SK = #META for the main product record +2. Include both GSI-1 (for categories) and GSI-2 (for sellers) +3. Make sure all attribute names match the table above exactly +4. Set up monitoring for the GSIs since they could become busy with popular categories or high-volume sellers + +The changes we're making are small adjustments to what the AI originally suggested, but these details matter a lot for making sure everything works smoothly together. + +```shell +Products table updates + +For simplicity let's have a table with PK = product_id and SK = #META this allow us future expansion if we need to start denormalizing the attributes. (MAKE SURE THE ATTRIBUTE NAME MATCHES WHAT IT SAYS HERE!!) + +| Attribute | Type | Purpose | +|-----------|------|---------| +| PK | String | product_id | +| SK | String | #META | +| product_id | String | product_id | +| seller_id | String | Seller identifier | +| category_id | String | Category identifier | +| category_path | String | Full category hierarchy | +| product_name | String | Product title | +| description | String | Product description | +| price | Number | Current price | +| inventory_quantity | Number | Available quantity | +| image_url | String | Image URL | +| search_terms | String | Searchable text | +| created_at | String | ISO timestamp | +| updated_at | String | ISO timestamp | +| status | String | active/inactive | + +{ + "PK": "6", + "SK": "#META", + "category_id": "11", + "category_path": "Tools", + "created_at": "2025-08-14 21:16:07", + "description": "20V MAX cordless drill with 2 batteries, charger, and carrying case. 1/2-inch chuck, LED light, and 15 clutch settings.", + "GSI1PK": "11", + "GSI1SK": "Cordless Drill Kit", + "GSI2PK": "1", + "GSI2SK": "Cordless Drill Kit", + "id": 6, + "image_url": "https://images.unsplash.com/photo-1504148455328-c376907d081c?w=600&h=600&fit=crop&auto=format", + "inventory_quantity": 19, + "price": 129.99, + "product_name": "Cordless Drill Kit", + "seller_id": "1", + "updated_at": "2025-08-17T15:05:47.840Z" +} + +We will add two indexes, that are possible future hot partitions, but with the numbers that we have discussed so far, this will be fine for this design. 
+ +**GSI-1: Category Products (Potential Hot Partition)** +- PK: GSI1PK = category_id, SK: GSI1SK = category_id +- Projection: ALL +- Purpose: Category-based browsing +- **Warning:** Monitor for hot partitions with popular categories + +**GSI-2: Seller Products (Potential Hot Partition)** +- PK: GSI2PK = seller_id, SK: GSI2SK = seller_id +- Projection: ALL +- Purpose: Seller product management +- **Warning:** Monitor for hot partitions with high-volume sellers +``` + +![Working Log](/static/images/modernizr/2/stage02-16.png) + +In my execution, I was getting throttled, please remember to be patient and just retry! but also review the changes made by `Cline`. \ No newline at end of file diff --git a/content/modernizr/02-data-modeling/data-modeling-04.en.md b/content/modernizr/02-data-modeling/data-modeling-04.en.md new file mode 100644 index 0000000..9ad5c08 --- /dev/null +++ b/content/modernizr/02-data-modeling/data-modeling-04.en.md @@ -0,0 +1,181 @@ +--- +title: "2.4 Entities Design - Categories" +menuTitle: "Entities Design - Categories" +date: 2025-09-02T16:43:04-05:00 +weight: 35 +chapter: false +--- + +## The Categories Table - Our Digital Department Store + +Imagine walking into a huge department store. At the entrance, you see big signs pointing to major sections like "Electronics," "Clothing," and "Home & Garden." When you go to Electronics, you find smaller signs for "Computers," "Gaming," and "Mobile Phones." And if you go to Computers, you see even more specific sections like "Laptops," "Desktops," and "Accessories." + +This is exactly what we're building with our categories table - a digital version of those department store signs that help people find what they're looking for. But instead of physical signs, we're creating a smart system that knows how all these categories connect to each other. + +## Understanding the Family Tree Structure + +Categories work like a family tree, but upside down. At the top, you have the "grandparent" categories (like Electronics), then "parent" categories (like Computers), and finally "child" categories (like Gaming Laptops). This is called a hierarchical structure - a fancy way of saying "organized levels." + +In our system, we store two types of category information: + +1. **Root Categories** - The big department signs (Electronics, Clothing, etc.) +2. **Child Categories** - All the smaller, more specific categories that belong under the big ones + +## How We Organize This Information + +Just like our other tables, we use the same smart organizing system with PK (Primary Key) and SK (Sort Key): + +**For Root Categories:** +- PK = "ROOT" (a special label meaning "this is a top-level category") +- SK = category name (like "Electronics") + +**For Child Categories:** +- PK = parent category name (like "Electronics") +- SK = category name (like "Computers") + +## What Information Do We Store About Each Category? 
+ +Here's everything we track for each category in our digital department store: + +| What We Store | Type of Data | Why We Need It | +|---------------|--------------|----------------| +| PK | Text | Parent category name (or "ROOT" for main departments) | +| SK | Text | This category's name | +| parent_name | Text | What category this belongs under | +| category_name | Text | The display name people see | +| category_path | Text | The full path (like "Electronics > Computers > Laptops") | +| level | Number | How deep in the tree (0 = top level) | +| children_count | Number | How many subcategories this has | +| product_count | Number | How many products are in this category | +| created_at | Text | When this category was created | + +## Making Categories Easy to Navigate - Our Search Helper + +We need one main search method (GSI) to help people navigate through our category tree: + +### GSI-1: Category Family Tree Navigator +This search helper lets us quickly find: +- All the main department categories (the "ROOT" categories) +- All subcategories under any parent category +- Any specific category by its ID (for migration from the old system) + +Think of it like having a smart directory at the mall entrance that can instantly tell you: +- "Show me all the main departments" +- "Show me everything under Electronics" +- "Find the Gaming category for me" + +## The Tricky Part - Handling "No Parent" Categories + +Here's where things get a bit technical, but stick with me. In the old database system (MySQL), when a category didn't have a parent (meaning it was a top-level category), it was marked as "NULL" - basically meaning "empty" or "no parent." + +With DynamoDB instead of leaving it empty, we use the word "ROOT" to mean "this is a top-level category." + +**The Translation:** +- **Old System:** "This category has no parent" = NULL +- **New System:** "This category has no parent" = "ROOT" + +This means when we move data from the old system to the new one, our code needs to automatically change any "no parent" categories to use "ROOT" instead. + +## Why This Design Works So Well + +This structure gives us amazing flexibility: +- Customers can browse by department, then drill down to exactly what they want +- Store managers can easily see how many products are in each category +- We can add new subcategories anywhere in the tree without breaking anything +- The system automatically keeps track of the full path to any category + +It's like having a smart filing system that automatically organizes itself and can tell you exactly where everything belongs in the grand scheme of things. + +## Setting Up the Categories Table - Instructions for Your AI Assistant + +When creating this table, make sure your AI assistant understands: + +**Key Design Points:** +1. Use "ROOT" for all top-level categories (PK = "ROOT") +2. Use the parent category name as PK for all child categories +3. Include the GSI for hierarchical navigation +4. Remember to translate NULL parent values to "ROOT" during migration +5. Keep track of children_count and product_count for performance + +This categories system is the backbone of how customers will find products, so getting the structure right is crucial for a smooth shopping experience. + + +This is the prompt you should send. + +```shell +Categories table updates +This table contains two entities, it uses single table design as well. 
+ +Parent category entity: **ROOT Categories (PK = ROOT, SK = category_name)** +Child category entity: **CHILD Categories (PK = parent_category_name, SK = category_name)** + +| Attribute | Type | Purpose | +|-----------|------|---------| +| PK | String | parent_category_name (ROOT for root categories) | +| SK | String | category_name | +| parent_name | String | Parent category name (null for roots) | +| category_name | String | Category display name | +| category_path | String | Full hierarchy path | +| level | Number | Hierarchy level (0 = root) | +| children_count | Number | Number of child categories | +| product_count | Number | Number of products | +| created_at | String | ISO timestamp | + +{ + "PK": "ROOT", + "SK": "Electronics", + "category_id": "1", + "category_name": "Electronics", + "category_path": "Electronics", + "children_count": 2, + "created_at": "2023-01-01T00:00:00.000Z", + "GSI1PK": "ROOT", + "GSI1SK": "1", + "level": 0, + "parent_id": null, + "parent_name": null, + "product_count": 0 +} + +{ + "PK": "Electronics", + "SK": "Laptops", + "category_id": "7", + "category_name": "Laptops", + "category_path": "Electronics/Laptops", + "children_count": 0, + "created_at": "2025-08-14 21:16:06", + "GSI1PK": "1", + "GSI1SK": "7", + "level": 1, + "parent_id": "1", + "parent_name": "Electronics", + "product_count": 0 +} + +GSI1: Category Hierarchy Lookup (Migration Support) + +* Purpose: Enable hierarchical category queries by parent ID and direct category ID lookups for migration compatibility +* Partition Key: GSI1PK = parent_id (or "ROOT" for root categories) - Groups child categories under parent +* Sort Key: GSI1SK = category_id - Individual category identifier +* Projection: ALL - Complete category data for hierarchical access +* Sparse: No - All categories have parent relationships (ROOT for roots) +* Access Patterns Served: + * Find child categories by parent ID (AP22: GSI1PK = parent_id) + * Direct category lookup by ID (AP24: GSI1SK = category_id) + * Root categories lookup (AP21: GSI1PK = "ROOT") +* Capacity Planning: 210 RPS reads for hierarchical and ID-based category access +* Migration Critical: Supports both hierarchical navigation and legacy ID-based lookups +``` + +Always remember to check if the input was added in the working log and the instructions were followed. + +![Working Log](/static/images/modernizr/2/stage02-17.png) + +Make sure that everything that is not needed, such as other tables that might have been suggested earlier, is no longer there; we don't want to send noise when we re-generate the data model. + +![Working Log](/static/images/modernizr/2/stage02-18.png) + +At this point `Cline` should give you a summary of the work and confirm that it has effectively created a three-table architecture. + +![Working Log](/static/images/modernizr/2/stage02-19.png) diff --git a/content/modernizr/02-data-modeling/data-modeling-05.en.md b/content/modernizr/02-data-modeling/data-modeling-05.en.md new file mode 100644 index 0000000..03e5164 --- /dev/null +++ b/content/modernizr/02-data-modeling/data-modeling-05.en.md @@ -0,0 +1,54 @@ +--- +title: "2.5 Data model redesign" +menuTitle: "Data model redesign" +date: 2025-09-02T16:43:04-05:00 +weight: 36 +chapter: false +--- + +## Initiating the Design Refinement Process + +At this stage, you'll execute the most critical phase of the data modeling process: iterative design refinement. This process transforms your initial conceptual model into a DynamoDB schema that efficiently supports all identified access patterns.
+ +```shell +Great! Now it is time to mark as completed task 3.2 and proceed with the next one as listed here `prompts/02-dynamodb-data-modeling/tasks.md` +``` + +This task involves comprehensive analysis and redesign of your initial data model based on the validation results from the previous step. The DynamoDB MCP Server will perform detailed optimization analysis, potentially recommending significant structural changes to improve performance and cost efficiency. + +![Data model](/static/images/modernizr/2/stage02-20.png) + +## Critical Review and Validation Phase + +Once the redesign is complete, thoroughly examine the updated `dynamodb_data_model.md` file. Compare the new design against all previously identified requirements, access patterns, and performance criteria. This review process mirrors real-world database design workflows where architects spend considerable time validating and iterating on designs before implementation. + +The design refinement process often reveals opportunities for optimization that weren't apparent in the initial modeling phase. Pay particular attention to: + +- **Entity consolidation opportunities** where related data can be co-located for improved query performance +- **GSI optimization** to minimize the number of secondary indexes while maintaining query flexibility +- **Partition key distribution** to ensure even data distribution and avoid hot partition issues +- **Sort key design** that enables efficient range queries and supports multiple access patterns + +This iterative approach reflects industry best practices where database designs undergo multiple refinement cycles before reaching production readiness. + +![Data model](/static/images/modernizr/2/stage02-21.png) + +## Design Validation Checkpoints + +As the system progresses through task 3.3, verify that the final design adheres to established naming conventions and structural standards. Specifically, ensure no entity prefixes are applied to primary key values, such as `PROD_` or `USER_`. + +The validation process serves as quality assurance, catching potential issues before they propagate to subsequent implementation phases. Investing time in thorough validation at this stage prevents costly rework during later development phases, as the quality of each stage's output depends on the quality of its input. + +## Final Schema Verification + +Confirm that the final data model maintains the standardized naming convention for tables and indexes: + +- **Table 1**: Users Table with three Global Secondary Indexes named GSI1, GSI2, GSI3 +- **Table 2**: Products Table with two Global Secondary Indexes named GSI1, GSI2 +- **Table 3**: Orders Table with one Global Secondary Index named GSI1 + +This generic naming convention provides flexibility for future schema evolution while maintaining consistency across the application architecture. + +![Data model](/static/images/modernizr/2/stage02-22.png) + +The standardized naming approach also simplifies application code by providing predictable patterns for GSI references, reducing the likelihood of configuration errors during deployment.
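+
+To see why generic names matter in practice, here is a minimal sketch (not part of the workshop artifacts) of how application code might query one of these generically named indexes with the AWS SDK for JavaScript v3. The table name `Products` and the `GSI1`/`GSI1PK` names follow the conventions above; the function name, region, and category value are illustrative assumptions.
+
+```javascript
+// Minimal sketch: querying the generically named GSI1 on the Products table.
+const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
+const { DynamoDBDocumentClient, QueryCommand } = require("@aws-sdk/lib-dynamodb");
+
+const client = DynamoDBDocumentClient.from(new DynamoDBClient({ region: "us-west-2" }));
+
+// Because every table follows the same GSI1/GSI2/... convention, index names can
+// live in configuration instead of being hardcoded per entity type.
+async function getProductsByCategory(categoryId) {
+  const result = await client.send(new QueryCommand({
+    TableName: "Products",
+    IndexName: "GSI1", // generic index name from the data model
+    KeyConditionExpression: "GSI1PK = :category",
+    ExpressionAttributeValues: { ":category": categoryId },
+  }));
+  return result.Items ?? [];
+}
+```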
diff --git a/content/modernizr/02-data-modeling/data-modeling-06.en.md b/content/modernizr/02-data-modeling/data-modeling-06.en.md new file mode 100644 index 0000000..dc45b36 --- /dev/null +++ b/content/modernizr/02-data-modeling/data-modeling-06.en.md @@ -0,0 +1,77 @@ +--- +title: "2.6 Migration Contract" +menuTitle: "Migration Contract" +date: 2025-09-02T16:43:04-05:00 +weight: 37 +chapter: false +--- + +## The Migration Contract: Your Modernization Blueprint + +The Migration Contract represents the culmination of all previous analysis and design work. This JSON artifact serves as the definitive mapping specification that translates your existing MySQL schema into the optimized DynamoDB structure. Think of it as a detailed transformation blueprint that precisely defines how each piece of data moves from the relational model to the NoSQL implementation. + +The contract encapsulates important information including source table mappings, data transformation logic, attribute conversions, and relationship restructuring. Every subsequent stage of the modernization workflow depends on this contract, making its accuracy paramount to project success. + +## Initiating Contract Generation + +Execute the contract generation process with this command: + +```shell +Thanks. Please mark task 3 as completed, and continue with task 4.1 `prompts/02-dynamodb-data-modeling/tasks.md` +``` + +## Understanding Contract Architecture + +The Migration Contract implements a transformation engine that supports multiple data conversion patterns. The system includes built-in validation guardrails that ensure the generated JSON adheres to strict specification requirements, preventing arbitrary or invalid transformations. + +Explore the `contracts` folder to understand the supported transformation patterns, validation rules, and architectural constraints. This reference documentation provides comprehensive details on how the contract functions as both a specification document and an execution engine for the data migration process. + +![Data model](/static/images/modernizr/2/stage02-23.png) + +## Contract Quality Assurance + +The contract generation process may require iterative refinement depending on the complexity of your data model and the AI system's interpretation of requirements. Success depends on meticulous validation of the generated contract against your established specifications. + +## Common Validation Issues and Solutions + +During contract generation, monitor for these frequent issues that require manual correction: + +::alert[DO NOT blindly copy and paste the following prompts! They are examples from previous executions and you might not need them! Please pay attention to the information in your `migrationContract.json` ]{type="error"} + +### Naming Convention Violations + +AI systems sometimes deviate from specified naming conventions, creating custom table or attribute names instead of using the standardized schema: + +```bash +You will need to validate the creation of the migration contract. I see you have defined your own table names and didn't use the table names I have provided; the same goes for the GSIs. I specifically asked for generic names for the GSIs and for PK and SK to avoid issues or hardcoded values.
To give you an example: in the migration contract artifacts/stage-02/migrationContract.json the first table `UserOrdersCart` should be called `Users`, the partition key should be PK and the sort key SK. Please re-read the data_model and update my migration contract +``` + +### Invalid Transformation Methods + +The system may generate non-existent transformation functions instead of using the supported contract specifications: + +```bash +I noticed you have a made-up transformation called `json-parse`; it should be `json-construction`. The format of that attribute is a map, so you need to use JSON construction. Can you please update that attribute name and validate that you have not created other made-up methods? You need to follow the specifications as directed in the `contracts` folder +``` + +## Contract Validation Protocol + +Thoroughly review the generated `migrationContract.json` for: + +- **Schema Consistency**: Verify all table names, attribute names, and GSI references match your data model specifications +- **Transformation Validity**: Ensure all transformation methods exist in the contracts specification +- **Data Type Mapping**: Confirm MySQL data types are correctly mapped to DynamoDB equivalents +- **Relationship Preservation**: Validate that entity relationships are properly represented in the NoSQL structure + +## Workshop Consistency for Next Steps + +Given the critical nature of this artifact, this workshop provides a verified migration contract as a fallback option. This ensures project continuity even if the AI-generated contract contains errors. + +To continue with this workshop, please: + +1. Rename your current `artifacts/stage-02` folder to `artifacts/stage-02-backup` +2. Copy the `core-outputs/stage-02` folder to the `artifacts` directory +3. Compare the provided artifacts with your generated versions to identify differences +4. Use the comparison analysis to improve future contract generation sessions + +This backup strategy ensures you can proceed with the workshop while learning from any discrepancies between AI-generated and verified outputs. diff --git a/content/modernizr/02-data-modeling/index.en.md b/content/modernizr/02-data-modeling/index.en.md new file mode 100644 index 0000000..835588b --- /dev/null +++ b/content/modernizr/02-data-modeling/index.en.md @@ -0,0 +1,100 @@ +--- +title: "Stage 2: Data modeling" +date: 2025-09-02T15:41:04-05:00 +weight: 30 +chapter: true +--- + +## DynamoDB Data Modeling Workshop + +Having completed the application analysis in Stage 1, you now understand the existing MySQL schema, identified access patterns, and established performance requirements. Stage 2 focuses on translating this relational data model into an optimized DynamoDB design that supports all identified access patterns while leveraging the advantages of NoSQL. + +This stage requires active collaboration between you as a human architect and the AI tools assisting with technical implementation. You'll guide the design process while the AI makes recommendations for DynamoDB data modeling practices. + +## Interactive Design Process + +The task list for this stage begins empty by design. This workshop emphasizes learning through hands-on collaboration with AI rather than following predetermined steps. You'll make critical architectural decisions while the AI provides technical expertise and validates your choices against DynamoDB best practices.
+ +This human-in-the-loop approach ensures you understand the reasoning behind each design decision while building practical experience with NoSQL data modeling principles. + +## Starting Your Design Session + +Let's begin by telling Cline to start working on data modeling. Use this command: + +```shell +Please open this file `prompts/02-dynamodb-data-modeling/tasks.md` and start working on the first available task. +``` + +![Start conversation](/static/images/modernizr/2/stage02-01.png) + +## Watching the Magic Happen + +You'll know the workshop is working because you'll see Cline actively creating new files in real-time! It uses a visual diff system (like track changes in a document) to show you exactly what's being added or modified. This helps you understand each step of the process. + +![Gitdiff](/static/images/modernizr/2/stage02-02.png) + +**Take your time** with each file that gets generated. Don't rush through this process! Read everything, understand what Cline is doing at each step and why, and don't hesitate to ask questions. This is interactive learning - if you get confused at any point, just ask Cline to explain what's happening. + +## Understanding the Variables + +Generative AI is by nature non-deterministic. Some "temperature," or variability, in an LLM's answers allows it to be more creative in problem solving. Because of this, every design session is unique, so what you see might be slightly different from someone else's experience. That's normal and expected! Instead of giving you exact screenshots to match, we'll provide guidance on the important concepts and decisions you'll encounter. + +One crucial thing to remember: **we need to support all 48 different access patterns** that were identified in Stage 1. Make sure this gets communicated clearly to Cline throughout the process. + +## Providing Traffic Information + +Cline will likely ask you about expected traffic patterns — basically, "How many people will be using this system?" Here's the information you should provide when asked: + +```shell +Our best marketing estimates put us at 1400 queries per second. + +Authentication and user management represent about 20% of the traffic. We are expecting 250 login operations per second (this includes our projections for growth), registration is about 10 per second, and profile management another 10. + +Shopping cart operations are expected to be 25% of the traffic, where we have about 350 QPS for cart modifications and 250 QPS for people who are just viewing their shopping cart. + +Product browsing and search is the core of our traffic, 60% of it or 840 QPS, where we will have product search with 420 QPS, product listing with 420 and product details with 200 QPS. This is complemented by category operations that represent 15% of the traffic, where we expect to have 180 QPS for category navigation and 30 QPS for category searches. + +Order processing is the critical part of our business! This is what generates the money. We are expecting 220 QPS, representing about 15% of the traffic: order creation 200 QPS, checkout validation 200 QPS and order management 120 QPS. + +Finally, a very small set of operations is allocated to sellers, less than 1 QPS in total, representing 0.1% of the traffic: maybe 0.05 QPS on product management and 0.25 QPS on seller profiles, and there is very low traffic on category management as categories barely change.
+``` + +![Questions](/static/images/modernizr/2/stage02-03.png) + +## The Collaborative Design Process + +As you work through this stage, you'll encounter several "checkpoint" moments where Cline stops and asks for your input. These aren't interruptions - they're important collaboration points designed to ensure the design stays on track and meets your specific needs. + +![Start conversation](/static/images/modernizr/2/stage02-05.png) + +## Being Patient with AI + +During this intensive design stage, you might occasionally see messages about rate limiting or throttling. This is normal! Simply wait a few seconds and try again. The system can handle about 4 requests per minute, which should be enough for our workshop. If your task fails to execute please re-try, and just ignore the message that says "Try breaking down the task into smaller steps" as this is not our case, we know it is rate limiting! + +![Start conversation](/static/images/modernizr/2/stage02-06.png) + +## Using Specialized AI Tools + +At some point, Cline will ask for permission to use the DynamoDB MCP Server — this is like accessing a specialized AI consultant who's an expert specifically in DynamoDB design. When asked, give permission for this. This expert AI will help analyze all the data we've collected and create a proper database design. + +![Start conversation](/static/images/modernizr/2/stage02-07.png) + +## Quality Control Checkpoint + +Cline will first create a summary file called `dynamodb_requirements.md`. This is like having an architect show you the summary of everything you've discussed before they start drawing the blueprints. + +::alert[ **Important:** Read this file carefully! Sometimes AI can accidentally add requirements that were never discussed, or miss important details. This is your chance to catch any errors before they become part of the final design.]{type="info"} + +![Start conversation](/static/images/modernizr/2/stage02-08.png) + +## Your First Database Design + +Once you approve the requirements summary, Cline will create your actual DynamoDB data model. This is exciting — you're seeing your new database structure come to life! Cline has generated the new data model file `artifacts/stage-02/dynamodb_data_model.md` please open it and read it carefully. + +![Start conversation](/static/images/modernizr/2/stage02-09.png) + +## What Comes Next + +After getting your initial design, the next step is validation — making sure this design is actually good and not just something that sounds impressive but won't work in practice. We'll examine the design carefully to ensure it's based on real requirements rather than AI imagination. + +Remember, this is a collaborative process where your input and decisions shape the final outcome. You're learning to be a database architect while the AI handles the technical implementation details! diff --git a/content/modernizr/03-dual-layer/index.en.md b/content/modernizr/03-dual-layer/index.en.md new file mode 100644 index 0000000..01c0939 --- /dev/null +++ b/content/modernizr/03-dual-layer/index.en.md @@ -0,0 +1,86 @@ +--- +title: "Stage 3: Dual Database Abstraction Layer" +date: 2025-09-02T15:41:04-05:00 +weight: 30 +chapter: true +--- +::alert[In this workshop this stage has already been completed for you. 
Please review the artifacts available on `/artifacts/stage-03` ]{type="info"} + +## Implementing Database Abstraction for Zero-Downtime Migration + +Stage 3 introduces a database abstraction layer that enables your application to simultaneously interact with both MySQL and DynamoDB databases. This dual-database approach is essential for achieving zero-downtime migrations by allowing gradual transition from the legacy relational system to the modern NoSQL architecture. + +The abstraction layer functions as an intelligent data access router which directs queries to appropriate databases based on configurable migration phases. This architecture pattern enables controlled, incremental migration while maintaining full application functionality throughout the transition process and the ability to roll-back if any issues are encountered. + +## Abstraction Layer Architecture + +The dual database system implements the [Repository Pattern](https://martinfowler.com/eaaCatalog/repository.html), a well-established software design pattern that encapsulates data access logic behind a consistent interface. This approach provides: + +- **Database Independence**: Application logic remains unaware of underlying database implementations +- **Simple Switching**: Runtime configuration determines which database handles specific operations +- **Consistent API**: Single interface supports both MySQL and DynamoDB operations +- **Transaction Management**: Coordinated transaction handling across both database systems + +### Configuration-Driven Routing + +The abstraction layer uses configuration parameters to determine data routing behavior: + +```json +{ + "databaseMode": "mysql-only", + "enableDualWrites": false, + "primaryReadSource": "mysql", + "fallbackEnabled": true +} +``` + +This configuration approach enables dynamic behavior modification without code changes, essential for production migration scenarios. 
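+
+To make the routing idea concrete, here is a minimal sketch of a configuration-driven repository wrapper. The class and method names are illustrative assumptions and do not reflect the exact implementation in `/artifacts/stage-03`; they only show how the configuration above could drive read and write routing.
+
+```javascript
+// Minimal sketch (illustrative names): a repository that routes operations based on configuration.
+class UserRepository {
+  constructor(mysqlRepo, dynamoRepo, config) {
+    this.mysqlRepo = mysqlRepo;   // existing MySQL data access implementation
+    this.dynamoRepo = dynamoRepo; // new DynamoDB data access implementation
+    this.config = config;         // e.g. { primaryReadSource: "mysql", enableDualWrites: false, fallbackEnabled: true }
+  }
+
+  async findById(userId) {
+    const primary = this.config.primaryReadSource === "dynamodb" ? this.dynamoRepo : this.mysqlRepo;
+    const fallback = primary === this.dynamoRepo ? this.mysqlRepo : this.dynamoRepo;
+    try {
+      return await primary.findById(userId);
+    } catch (err) {
+      if (this.config.fallbackEnabled) return fallback.findById(userId); // graceful degradation
+      throw err;
+    }
+  }
+
+  async save(user) {
+    const result = await this.mysqlRepo.save(user); // MySQL stays authoritative in early phases
+    if (this.config.enableDualWrites) {
+      // Shadow write: failures are logged, never surfaced to the caller.
+      await this.dynamoRepo.save(user).catch((err) => console.error("DynamoDB shadow write failed", err));
+    }
+    return result;
+  }
+}
+```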
+ +## Migration Phase Support + +The abstraction layer supports multiple migration phases through configurable operation routing: + +### Phase 1: MySQL Only +- **Read Operations**: Routed exclusively to MySQL +- **Write Operations**: Directed to MySQL only +- **Use Case**: Baseline functionality before migration begins + +### Phase 2: Dual Write, MySQL Read +- **Read Operations**: MySQL remains the primary source +- **Write Operations**: Simultaneously executed on both databases +- **Use Case**: Shadow mode testing and data synchronization establishment + +### Phase 3: Dual Operations with MySQL Primary +- **Read Operations**: Query both databases with MySQL precedence +- **Write Operations**: Continue dual database writes +- **Use Case**: Validation phase ensuring DynamoDB consistency + +### Phase 4: Dual Operations with DynamoDB Primary +- **Read Operations**: DynamoDB becomes primary with MySQL fallback +- **Write Operations**: Maintain dual writes for rollback capability +- **Use Case**: Performance validation and final transition preparation + +### Phase 5: DynamoDB Only +- **Read Operations**: Exclusively from DynamoDB +- **Write Operations**: DynamoDB only +- **Use Case**: Complete migration with legacy system decommissioning + +## Error Handling and Resilience + +The abstraction layer implements comprehensive error handling strategies: + +- **Circuit Breaker Pattern**: Prevents cascading failures during database outages +- **Retry Logic**: Handles transient connection issues and temporary unavailability +- **Graceful Degradation**: Falls back to available database when one system is unavailable +- **Monitoring Integration**: Provides detailed metrics for migration progress tracking + +## Implementation Benefits + +This dual database approach provides several critical advantages: + +- **Risk Mitigation**: Immediate rollback capability at any migration phase +- **Data Validation**: Continuous comparison between old and new systems +- **Performance Monitoring**: Real-time performance comparison during migration +- **Business Continuity**: Zero application downtime during database transition + +The abstraction layer serves as the foundation that enables confident, controlled migration from legacy relational systems to modern NoSQL architectures while maintaining complete operational reliability. diff --git a/content/modernizr/04-ddb-connection/index.en.md b/content/modernizr/04-ddb-connection/index.en.md new file mode 100644 index 0000000..fc6ee7f --- /dev/null +++ b/content/modernizr/04-ddb-connection/index.en.md @@ -0,0 +1,37 @@ +--- +title: "Stage 4: DynamoDB Integration" +date: 2025-09-02T15:41:04-05:00 +weight: 30 +chapter: true +--- +::alert[In this workshop this stage has already been completed for you. Please review the artifacts available on `/artifacts/stage-04` ]{type="info"} + +## Setting Up DynamoDB Infrastructure + +Stage 4 is where you actually implement the DynamoDB side of your dual database system. Think of this as installing the wiring and connections for your second database engine - you're building all the infrastructure needed to make DynamoDB work with your application. + +This stage focuses on creating the actual DynamoDB tables, setting up the AWS SDK configuration, and implementing the data transformation logic that converts your MySQL data into a structure optimized for DynamoDB. 
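+
+As a rough illustration of the SDK setup this stage describes, the sketch below configures a DynamoDB Document client with the AWS SDK for JavaScript v3. The region and retry values are assumptions for a workshop-style environment, not the exact settings used in `/artifacts/stage-04`.
+
+```javascript
+// Minimal sketch: a DynamoDB Document client with an explicit region and retry behavior.
+const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
+const { DynamoDBDocumentClient } = require("@aws-sdk/lib-dynamodb");
+
+const baseClient = new DynamoDBClient({
+  region: process.env.AWS_REGION || "us-west-2", // assumed workshop region
+  maxAttempts: 5,                                // retry transient errors such as throttling
+});
+
+// The Document client marshals plain JavaScript objects to DynamoDB attribute types,
+// keeping the repository code free of low-level type handling.
+const dynamoClient = DynamoDBDocumentClient.from(baseClient, {
+  marshallOptions: { removeUndefinedValues: true },
+});
+
+module.exports = { dynamoClient };
+```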
+ +## What You'll Build + +Using your migration contract as the blueprint, you'll create the actual DynamoDB tables with the exact specifications from your data model: + +- **Users Table**: With partition key (PK) and sort key (SK), plus the three GSIs (GSI1, GSI2, GSI3) you designed +- **Products Table**: With its partition/sort keys and two GSIs (GSI1, GSI2) +- **Orders Table**: With its keys and single GSI (GSI1) + +Each table needs to be configured with the right capacity settings, security policies, and monitoring to handle your application's traffic. + +You'll set up the connection between your application and DynamoDB using the AWS SDK: configuring the DynamoDB client with proper authentication, connecting to the correct AWS region, and making sure the SDK settings for retries are configured correctly. + +At this stage you will implement the transformation logic that converts MySQL data into DynamoDB items. You will need to re-write your APIs to work with the DynamoDB SDK instead of SQL, handle the differences between data types, and flatten the relational joins into single DynamoDB items. This step requires a lot of testing and validation to make sure you don't lose any data and you always provide the same API response regardless of the backend: if you retrieve the items in the shopping cart, you should return the same list whether MySQL or DynamoDB is storing them. + +## Test-Driven Development + +All the refactoring stages use Test-Driven Development to ensure backward compatibility and avoid changing working/existing logic. You will first create a failing unit test, then iteratively modify the code until the unit test passes. To ensure full compatibility we provide instructions to the LLMs to re-run the Unit, Integration, and End-to-end testing every time new functionality is added. (This is why it takes so long to complete!) + +## Next Steps + +By building robust DynamoDB infrastructure in Stage 4, you create a reliable foundation for the migration control system you'll build in Stage 5. If the DynamoDB implementation is shaky, the entire migration process becomes risky. The transformation engine you build here ensures that data moving from MySQL to DynamoDB maintains its accuracy and completeness. This is critical for maintaining user trust during the migration. + +This stage transforms your migration contract from a paper plan into working code that can actually store and retrieve data from DynamoDB. Once Stage 4 is complete, you have a fully functional NoSQL backend ready to be integrated into your migration workflow. diff --git a/content/modernizr/05-app-refactoring/index.en.md b/content/modernizr/05-app-refactoring/index.en.md new file mode 100644 index 0000000..313571a --- /dev/null +++ b/content/modernizr/05-app-refactoring/index.en.md @@ -0,0 +1,128 @@ +--- +title: "Stage 5: Application Refactoring" +date: 2025-09-02T15:41:04-05:00 +weight: 30 +chapter: true +--- +::alert[In this workshop this stage has already been completed for you. Please review the artifacts available on `/artifacts/stage-05` ]{type="info"} + +## Building the Control System for Your Migration + +Stage 5 is where you build the actual control system that lets you safely switch between MySQL and DynamoDB. This provides the precise controls needed to navigate the transition from your old database system to the new one without issues.
+ +This stage takes the dual database abstraction layer from Stage 3 and the DynamoDB connectivity from Stage 4, then adds the **feature flag** API system that gives you complete control over your migration process. Feature Flags allow you to easily enable or disable features without the need to modify source code or deploy new software. + +## What You'll Build + +You'll create a backend REST API system that controls how your application uses the databases through simple API calls. The system is designed so your frontend never needs to change - all the database switching happens behind the scenes through configuration. + +The core of this system is a Feature Flag Controller that manages four boolean switches: + +- **READ_MYSQL**: Should the app read from MySQL? +- **READ_DDB**: Should the app read from DynamoDB? +- **WRITE_MYSQL**: Should the app write to MySQL? +- **WRITE_DDB**: Should the app write to DynamoDB? + +By turning these switches on and off in different combinations, you create the 5 migration phases that let you safely transition from MySQL to DynamoDB. + +### Migration Control API Endpoints + +You'll build REST API endpoints that give you remote control over your database migration: + +- `GET /api/migration/status` - Check what phase you're currently in and see all flag settings +- `POST /api/migration/phase` - Jump to a specific migration phase (1 through 5) +- `PUT /api/migration/flags` - Manually control individual feature flags for custom configurations +- `GET /api/migration/validation` - Monitor how well the dual-read validation is working + +### Phase Configuration System + +The system automatically manages the 5 migration phases by setting the right combination of feature flags: + +**Phase 1 - MySQL Only** (Your starting point) +- READ_MYSQL: ON, READ_DDB: OFF +- WRITE_MYSQL: ON, WRITE_DDB: OFF + +**Phase 2 - Dual Write + MySQL Read** (Safety mode) +- READ_MYSQL: ON, READ_DDB: OFF +- WRITE_MYSQL: ON, WRITE_DDB: ON + +**Phase 3 - Dual Write + Dual Read** (Validation mode) +- READ_MYSQL: ON, READ_DDB: ON +- WRITE_MYSQL: ON, WRITE_DDB: ON + +**Phase 4 - Dual Write + DynamoDB Read** (Transition mode) +- READ_MYSQL: OFF, READ_DDB: ON +- WRITE_MYSQL: ON, WRITE_DDB: ON + +**Phase 5 - DynamoDB Only** (Your end goal) +- READ_MYSQL: OFF, READ_DDB: ON +- WRITE_MYSQL: OFF, WRITE_DDB: ON + +### Validation and Monitoring System + +During phases 3 and 4 (when you're reading from both databases), the system automatically compares the results from MySQL and DynamoDB. This validation system: + +- **Logs Differences**: Records any inconsistencies between the two databases +- **Tracks Success Rates**: Shows you how often the databases return matching results +- **Provides Error Details**: Gives you specific information about what didn't match +- **Monitors Performance**: Compares response times between MySQL and DynamoDB + +### Frontend Transparency + +One of the coolest parts of this system is that your frontend application never knows about the migration. 
All your existing API endpoints continue to work exactly the same way: + +```javascript +// This code never changes during migration +const createUser = async (userData) => { + const response = await fetch('/api/users', { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(userData) + }); + return response.json(); +}; +``` + +Behind the scenes, the backend automatically routes this request to MySQL, DynamoDB, or both databases based on your current migration phase, but the frontend never needs to know about it. + +### Super Admin Control Panel + +You'll also build a hidden web interface that lets authorized administrators control the migration through a browser instead of direct API calls. This admin panel: + +- **Shows Current Status**: Displays which phase you're in and all flag settings +- **Provides Phase Buttons**: One-click switching between migration phases +- **Offers Manual Controls**: Individual checkboxes for fine-grained flag control +- **Displays Validation Data**: Shows success rates and error information +- **Requires Special Access**: Only users with super admin privileges can access it + +## Controlling the Modernization + +By controlling database operations through feature flags, you can move forward or backward through the migration phases instantly. If something goes wrong in Phase 4, you can immediately switch back to Phase 2 and be running safely on MySQL again. + +Phases 3 and 4 let you test DynamoDB with real user traffic while still having MySQL as a safety net. You can see exactly how DynamoDB performs with your actual data and usage patterns before fully committing to it. + +Since all the database switching happens through configuration changes and your frontend never changes, users never experience any downtime or disruption. They continue using your application normally while you're migrating the database underneath. + +The API system gives you precise control over exactly how your application uses the databases. You can move through the phases step by step, or even create custom configurations for testing specific scenarios. + +During dual-read phases, the system automatically validates that both databases are returning the same results, giving you confidence that your migration contract and data transformations are working correctly. + +## Technical Implementation Details + +The system extends your existing dual database abstraction layer to include the feature flag controller. Your business logic continues to use the same repository interfaces, but now the repository implementation checks the feature flags to decide which database(s) to use for each operation. + +The system includes smart error handling - if DynamoDB operations fail during dual-write phases, the application continues using MySQL results so users never see errors. All DynamoDB failures are logged for investigation, but they don't break your application. + +### Authentication and Authorization + +The migration control APIs are protected by authentication middleware that requires super admin privileges. The system also includes user management functions to promote and demote users to/from super admin status. + +You'll extend your existing user table with a `super_admin` boolean field to control access to the migration interface. The system includes migration scripts to safely add this field to your existing database. 
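+
+To tie the implementation details above together, here is a minimal sketch of how a flag-aware write path could behave during the dual-write phases. The four flag names match the switches described in this stage; the repository objects and the function name are illustrative assumptions rather than the actual Stage 5 code.
+
+```javascript
+// Minimal sketch (illustrative names): a write path that honors the feature flags.
+async function saveCartItem(flags, mysqlRepo, dynamoRepo, item) {
+  let result = null;
+
+  if (flags.WRITE_MYSQL) {
+    result = await mysqlRepo.saveCartItem(item); // MySQL failures still surface to the caller
+  }
+
+  if (flags.WRITE_DDB) {
+    try {
+      const ddbResult = await dynamoRepo.saveCartItem(item);
+      if (!flags.WRITE_MYSQL) result = ddbResult; // Phase 5: DynamoDB is the only writer
+    } catch (err) {
+      // During dual-write phases a DynamoDB failure must never break the user request.
+      console.error("DynamoDB write failed, continuing with MySQL result", err);
+      if (!flags.WRITE_MYSQL) throw err; // in DynamoDB-only mode there is no fallback
+    }
+  }
+
+  return result;
+}
+```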
+ +## Building Your Migration Control System + +This stage transforms your modernization project from a planning exercise into a live, controllable system. You'll have the tools to safely navigate the transition from MySQL to DynamoDB with complete confidence that you can handle any issues that arise. + +The feature flag system gives you the power to migrate your database while keeping your application running smoothly for users, and the validation system ensures you catch any problems before they affect your production environment. + +By the end of this stage, you'll have a system that can safely migrate your database while providing you with all the monitoring, control, and safety features you need for a successful modernization project. diff --git a/content/modernizr/06-data-movement/etl-01.en.md b/content/modernizr/06-data-movement/etl-01.en.md new file mode 100644 index 0000000..be0c3fa --- /dev/null +++ b/content/modernizr/06-data-movement/etl-01.en.md @@ -0,0 +1,65 @@ +--- +title: "6.1 Testing Connectivity" +menuTitle: "Testing Connectivity" +date: 2025-09-02T16:43:04-05:00 +weight: 31 +chapter: false +--- + +## Setting Up the Data Pipeline + +This workshop combines stages 6 and 7 to give you hands-on experience with both infrastructure deployment and the actual data movement process. Think of this as setting up both the roads and the trucks that will move your data from MySQL to DynamoDB. + +The environment has been pre-configured with all the security permissions and network connectivity you need for the ETL (Extract, Transform, Load) process. However, you still need to provide the database password so the system can actually connect to your MySQL database. + +## Understanding AWS Glue + +AWS Glue is Amazon's managed ETL service - it's like having a smart data processing robot that can: +- **Extract** data from your MySQL database +- **Transform** it using your migration contract rules +- **Load** it into DynamoDB in the correct format + +The Glue scripts will use a pre-configured connection called `mysql-modernizr-connection` to securely connect to your database. This connection acts like a bridge between Glue and your MySQL server. + +## Configuring the Database Connection + +### Step 1: Access AWS Glue Console + +Navigate to the AWS console and open the Glue service. In the left sidebar, click on **Data Connections** to see all available database connections. + +![Migration control panel](/static/images/modernizr/6/stage06-02.png) + +### Step 2: Edit the MySQL Connection + +Find the connection named `mysql-modernizr-connection`, select it, and click **Edit**. This connection has most of the settings already configured - you just need to add the password. + +![Migration control panel](/static/images/modernizr/6/stage06-03.png) + +### Step 3: Add the Database Password + +Retrieve the database password from the file `tools/config.json` (it should be around line 16). Enter this password in the connection configuration and click **Save**. + +This password allows Glue to authenticate with your MySQL database. Without it, the ETL process can't access your data. + +### Step 4: Verify Connectivity + +Once the connection status shows "Ready", select the connection and click **Actions**, then **Test Connection**. + +![Migration control panel](/static/images/modernizr/6/stage06-04.png) + +This starts a connectivity validation process that typically takes about 1 minute. 
+ +![Migration control panel](/static/images/modernizr/6/stage06-05.png) + +The test creates a temporary network connection to your database to verify that: +- The network path is open +- Authentication works with the provided credentials +- Glue can successfully query the database schema + +### Step 5: Confirm Success + +You should receive a success message confirming the connection works properly. This green light means Glue can now access your MySQL database for the ETL process. + +![Migration control panel](/static/images/modernizr/6/stage06-06.png) + +This successful connection test means your data pipeline is ready to begin extracting data from MySQL and transforming it according to your migration contract. You're ready to move on to the next step: generating the database views that will feed your ETL process. diff --git a/content/modernizr/06-data-movement/etl-02.en.md b/content/modernizr/06-data-movement/etl-02.en.md new file mode 100644 index 0000000..a0ac89c --- /dev/null +++ b/content/modernizr/06-data-movement/etl-02.en.md @@ -0,0 +1,77 @@ +--- +title: "6.2 Creating MySQL views" +menuTitle: "Creating MySQL views" +date: 2025-09-02T16:43:04-05:00 +weight: 32 +chapter: false +--- + +## Preparing Your Data for Migration + +Now that the connectivity is working, you need to prepare your MySQL data for the migration process. Think of this step like organizing your belongings before moving to a new house - you want to see how everything will look in the new format before actually moving it. + +You'll create MySQL views that show your relational data in a denormalized format - exactly how it will appear in DynamoDB. This preview helps you catch any issues with your migration contract before running the actual data transfer. + +## What Are MySQL Views? + +MySQL views are like virtual tables that show data from your real tables in a different format. For this migration, the views will: + +- **Combine Related Data**: Join information from multiple tables into single records +- **Apply Transformations**: Convert MySQL data types to DynamoDB-compatible formats +- **Test Your Migration Contract**: Verify that your transformation rules actually work with real data +- **Preview the Results**: Let you see exactly how your data will look in DynamoDB + +The views act like a "dress rehearsal" for your migration - you can see the final result without actually changing anything in your production database. + +### Step 1: Running the View Generation Process + +Run the following command to start the automated view creation process: + +```shell +Please execute all the tasks listed in this file `prompts/07-data-migration-execution/tasks.md` +``` + +![Migration control panel](/static/images/modernizr/6/stage06-07.png) + +This command triggers Cline (the AI assistant) to read your migration contract from Stage 2 and automatically generate the MySQL views based on your data model specifications. + +### Step 2: Watch the Automation Process + +The system will start by creating an artifacts folder, then reading the `migrationContract.json` file from Stage 2. This process involves multiple rapid file operations that might seem fast, but remember that Cline has rate limits (4 executions per minute), so you may need to be patient or retry if something times out. + +![Migration control panel](/static/images/modernizr/6/stage06-08.png) + +Each step of the process is automated, but Cline will ask for your approval before executing each task. 
This safety mechanism ensures you can review what's happening before it makes changes to your database. + +### Step 3: Approve Each View Creation + +As the process runs, Cline will ask for permission to create each MySQL view. You'll see detailed explanations of what each view will do and how it transforms your data. + +![Migration control panel](/static/images/modernizr/6/stage06-09.png) + +The system creates one view at a time, allowing you to: +- **Review the SQL**: See exactly how each view combines and transforms your data +- **Understand the Logic**: Learn how relational data becomes NoSQL items +- **Catch Issues Early**: Spot problems with the transformation logic before migration + +### Step 4: Test the Generated Views + +After each view is created, you should manually query it to verify the results. This step is crucial - you need to confirm that: + +- **Data Exists**: The views actually return records (not empty results) +- **Format is Correct**: The data matches your expected DynamoDB item structure +- **Transformations Work**: MySQL data types are properly converted +- **Relationships are Preserved**: Related data is correctly combined + +![Migration control panel](/static/images/modernizr/6/stage06-10.png) + +Run queries like: +```sql +SELECT * FROM user_view LIMIT 10; +SELECT * FROM product_view LIMIT 10; +SELECT * FROM order_view LIMIT 10; +``` + +Each query should return data that looks like it's ready to be inserted into DynamoDB, with all the transformations from your migration contract applied. + +With successful view creation and validation, you've proven that your migration contract works with real data. The next step will be to configure the AWS Glue ETL jobs that will use these views to actually migrate your data to DynamoDB. diff --git a/content/modernizr/06-data-movement/etl-03.en.md b/content/modernizr/06-data-movement/etl-03.en.md new file mode 100644 index 0000000..1d811d2 --- /dev/null +++ b/content/modernizr/06-data-movement/etl-03.en.md @@ -0,0 +1,72 @@ +--- +title: "6.3 Creating DynamoDB Tables" +menuTitle: "Creating DynamoDB Tables" +date: 2025-09-02T16:43:04-05:00 +weight: 33 +chapter: false +--- + +## Building Your NoSQL Database Structure + +Now that you have MySQL views showing how your data will look in DynamoDB format, it's time to create the actual DynamoDB tables that will store your migrated data. Think of this as building the containers that will hold your transformed data. + +The system will use the DynamoDB MCP server to automatically create tables based on your migration contract specifications. This ensures the tables have exactly the right structure to receive the data from your ETL process. + +## Understanding DynamoDB Table Creation + +The automation process will create three main tables based on your migration contract: + +- **Users Table**: With partition key (PK), sort key (SK), and the GSIs (Global Secondary Indexes) you designed in Stage 2 +- **Products Table**: With its specific key structure and GSIs for product searches +- **Categories Table**: With its keys and any necessary indexes for category operations + +## Running the Table Creation Process + +### Step 1: Automatic Execution + +This table creation process should happen automatically as part of the initial migration command you ran. The system reads your migration contract and creates each table according to your specifications. + +If the process stops or gets interrupted, you can simply ask Cline to "continue where you left off" or "restart the table creation process." 
+ +### Step 2: Approve Each Table + +For each table creation, Cline will ask for your approval and show you exactly what's being created. Pay attention to: + +- **Table Names**: Ensure they match your expected naming (Users, Products, Categories) +- **Key Schema**: Verify the partition key (PK) and sort key (SK) are correct +- **GSI Configuration**: Check that all your planned indexes are included +- **Capacity Settings**: Confirm the read/write capacity makes sense for your data size + +![Migration control panel](/static/images/modernizr/6/stage06-11.png) + +This approval step is important because once a DynamoDB table is created with a specific key schema, you can't change it without deleting and recreating the table. + +### Step 3: Monitor Creation Status + +After you approve each table, the MCP server will send the creation request to AWS. You'll see a response showing the table in "CREATING" status. + +![Migration control panel](/static/images/modernizr/6/stage06-12.png) + +The "CREATING" status means AWS is: +- **Allocating Resources**: Setting up the infrastructure to support your table +- **Configuring Keys**: Implementing your partition key and sort key structure +- **Building Indexes**: Creating your GSIs in the background +- **Setting Permissions**: Configuring access controls for your application + +This process usually takes 1-3 minutes per table, depending on the complexity of your GSI configuration. + +### Step 4: Verifying Your Tables + +Once all tables show as created, verify them in the AWS DynamoDB console. You should see your three tables: `Users`, `Products`, and `Categories`. + +![Migration control panel](/static/images/modernizr/6/stage06-13.png) + +For each table, check: +- **Status**: Should be "ACTIVE" +- **Key Schema**: Matches your migration contract specifications +- **GSIs**: All indexes are listed and active +- **Item Count**: Should be 0 (empty tables ready for data) + +The tables are intentionally empty at this stage - they're like empty warehouses ready to receive inventory. The actual data will come from the ETL process in the next steps. + +With your DynamoDB tables successfully created and active, you're ready to move to the next phase: configuring and running the ETL jobs that will actually move your data from MySQL to DynamoDB. diff --git a/content/modernizr/06-data-movement/etl-04.en.md b/content/modernizr/06-data-movement/etl-04.en.md new file mode 100644 index 0000000..8796708 --- /dev/null +++ b/content/modernizr/06-data-movement/etl-04.en.md @@ -0,0 +1,138 @@ +--- +title: "6.4 Creating Glue Jobs" +menuTitle: "Creating Glue Jobs" +date: 2025-09-02T16:43:04-05:00 +weight: 34 +chapter: false +--- + +## Building the Data Movement Engine + +Now comes the exciting part - creating the actual machines that will move your data from MySQL to DynamoDB. These "machines" are called AWS Glue jobs, and they're like smart robots that can read data from one place, transform it according to your rules, and write it to another place. + +Think of this step as building and deploying the moving trucks that will transport your belongings from your old house (MySQL) to your new house (DynamoDB), making sure everything gets organized properly during the move. + +## What Are AWS Glue Jobs? + +AWS Glue jobs are automated scripts that run in the cloud to process your data. 
For this migration, each job will: + +- **Read from MySQL Views**: Use the views you created to get properly formatted data +- **Apply Transformations**: Convert the data according to your migration contract +- **Write to DynamoDB**: Insert the transformed data into your new tables +- **Handle Errors**: Deal with any issues that come up during the transfer +- **Report Progress**: Let you know how the migration is going + +Each job is written in Python using PySpark (a framework for processing large amounts of data quickly). Don't worry - the system generates all this code automatically based on your migration contract. + +## The Job Creation Process + +### Step 1: Generate Python Scripts + +The automation process will create specialized Python scripts for each of your tables. These scripts contain all the logic needed to: + +- Connect to your MySQL database using the views you created +- Read data in batches to handle large datasets efficiently +- Transform the data format from MySQL to DynamoDB +- Write the data to the correct DynamoDB tables +- Handle any errors or data quality issues + +![Migration control panel](/static/images/modernizr/6/stage06-14.png) + +You'll see Cline creating files with names like: +- `users_migration.py` - Handles migrating user data +- `product_migration.py` - Handles migrating product data +- `categories_migration.py` - Handles migrating category data + +### Step 2: Upload Supporting Files + +Once the Python scripts are created, they need to be stored in Amazon S3 (cloud storage) where Glue can access them. This is like putting your moving instructions in a place where all the moving trucks can read them. + +![Migration control panel](/static/images/modernizr/6/stage06-15.png) + +The system will upload each script file to S3, making sure they're in the right location with the correct permissions for Glue to use them. The process also uploads any supporting files the jobs might need, such as configuration files or utility libraries. This ensures the jobs have everything they need to run successfully. + +![Migration control panel](/static/images/modernizr/6/stage06-16.png) + +Please notice the workflow needs to run this process 3 times, once for each table. + +### Step 3: Create the Glue Jobs + +Finally, the system creates the actual Glue jobs in AWS. Each job is configured with: + +- **Script Location**: Where to find the Python code in S3 +- **Database Connection**: How to connect to your MySQL database +- **DynamoDB Permissions**: Authorization to write to your tables +- **Resource Allocation**: How much computing power to use +- **Error Handling**: What to do if something goes wrong + +![Migration control panel](/static/images/modernizr/6/stage06-17.png) + +Remember to approve each job creation when Cline asks - this gives you control over the process and lets you review what's being created. + +### Step 4: Verifying Job Creation + +After all jobs are created, you can see them in the AWS Glue console. They should appear as "Ready" and waiting to be executed. + +![Migration control panel](/static/images/modernizr/6/stage06-18.png) + +You'll see three jobs corresponding to your three tables: +- **Users ETL Job**: Ready to migrate user data +- **Products ETL Job**: Ready to migrate product data +- **Categories ETL Job**: Ready to migrate category data + +Each job is like a loaded moving truck, ready to start the migration when you give the signal. + +### Step 5: Start the Data Migration + +You have complete control over when to start the actual data migration. 
For this workshop, we'll start the migration immediately while your application is still using MySQL (Phase 1 of your feature flags from Stage 5). + +This approach is safe because: +- Your application continues running normally on MySQL +- The migration happens in the background +- You can validate the DynamoDB data before switching over +- If anything goes wrong, your application is unaffected + +![Migration control panel](/static/images/modernizr/6/stage06-19.png) + +When you approve the job executions, all three jobs will start running simultaneously, each handling its portion of the data migration. + +### Step 6: Monitor Progress + +Once the jobs start, you can watch their progress in the AWS Glue console. You'll see real-time information about: + +- **Job Status**: Running, completed, or failed +- **Records Processed**: How many items have been migrated +- **Processing Speed**: How fast the migration is progressing +- **Error Count**: Any issues that need attention + +![Migration control panel](/static/images/modernizr/6/stage06-20.png) + +The jobs will process your data in parallel, which means the migration completes faster than if you moved each table one at a time. + +### Step 7: Completion Confirmation + +When all jobs finish successfully, the system will notify you that the migration is complete. + +![Migration control panel](/static/images/modernizr/6/stage06-21.png) + +This completion message means: +- All your MySQL data has been successfully transformed +- The data has been written to the correct DynamoDB tables +- The format matches your migration contract specifications +- No critical errors occurred during the process + +## Validating the Migration + +### Step 8: Verify Your Data + +The final step is to check that your data actually made it to DynamoDB. Go to the AWS DynamoDB console and look at your tables - they should now contain data. + +![Migration control panel](/static/images/modernizr/6/stage06-22.png) + +For each table, verify: +- **Item Count**: Should match (approximately) the number of records in your MySQL tables +- **Data Format**: Items should have the structure defined in your migration contract +- **Key Values**: Partition keys and sort keys should be populated correctly +- **Attributes**: All the expected data fields should be present + +With your data successfully migrated to DynamoDB, you're ready to start testing your dual-database system and begin the controlled transition from MySQL to DynamoDB using the feature flag system you built in Stage 5. diff --git a/content/modernizr/06-data-movement/etl-05.en.md b/content/modernizr/06-data-movement/etl-05.en.md new file mode 100644 index 0000000..0f02dcc --- /dev/null +++ b/content/modernizr/06-data-movement/etl-05.en.md @@ -0,0 +1,94 @@ +--- +title: "6.5 Admin Portal" +menuTitle: "Admin Portal" +date: 2025-09-02T16:43:04-05:00 +weight: 35 +chapter: false +--- + +Now that your data has been successfully migrated to DynamoDB, it's time to access the admin portal that controls your feature flag system from Stage 5. You need special administrator access to safely control the migration between databases. + +The admin portal is where you'll manage the transition from MySQL-only operation to DynamoDB-only operation through the 5 migration phases you learned about in Stage 5. + +### What Is Super Admin Access? + +The migration control system is too powerful to let just anyone access it. Imagine if any employee could decide to switch your entire company's database system on a whim! 
That's why the system requires "super admin" privileges. + +In your user database, there's a special field called `super_admin` that acts like a VIP pass. When this field is set to `1` (true) for a user, it means they have permission to view the hidden migration control panel, switch between migration phases, control individual feature flags, monitor the dual-database validation system, and make critical decisions about your database migration. + +There's a small issue we need to fix first. During the data modeling process in Stage 2, the `super_admin` field wasn't included in the migration contract. This means the MySQL views don't include this field and the DynamoDB tables don't have this attribute. In other words, no users currently have admin access to the migration panel, so we need to manually add this field to both databases before you can access the control panel. + +## Setting Up Admin Access + +### Step 1: Update MySQL Database + +First, we'll give the `admin` user super admin privileges in the MySQL database, using the MySQL MCP Server: + +```shell +Could you please use the MySQL MCP server to update the `admin` user from the Users table and set the attribute `super_admin` to 1? +``` + +![Migration control panel](/static/images/modernizr/6/stage06-23.png) + +The MCP server will first retrieve the user information, then apply the update. + +### Step 2: Update DynamoDB Database + +Next, we need to make the same change in DynamoDB so both databases have consistent information: + +```shell +Could you please use the DynamoDB MCP server to update the `admin` user from the `Users` table and set its attribute `super_admin` to 1? +``` + +![Migration control panel](/static/images/modernizr/6/stage06-24.png) + +This ensures that: +- Both databases have the same user permissions +- The dual-database validation system won't report inconsistencies +- The admin user will have access regardless of which database the application is reading from + +### Why Both Databases? + +You might wonder why we need to update both databases. Remember, your application can be reading from MySQL, DynamoDB, or both depending on your current migration phase. If the `super_admin` field is missing from either database, the admin user might lose access when you switch phases. + +## Accessing the Migration Control Panel + +### Step 3: Navigate to the Admin Portal + +Once both database updates are complete, you can access the hidden admin portal at `[your-url]/store/admin/migration-control` + +This URL isn't linked anywhere in your normal application - it's a hidden page that only super administrators know about. + +### Step 4: Administrator Authentication + +The first time you access the admin portal, you'll need to log in with administrator credentials. The system will: + +- Verify that you're logged in as a valid user +- Check that your user account has `super_admin = 1` in the database +- Grant access to the migration control interface +- Remember your admin session for future visits + +If you don't have super admin privileges, you'll see an "Access Denied" message instead of the control panel.
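+
+As a rough sketch of how this check could be enforced on the backend, the Express-style middleware below rejects requests from users whose `super_admin` field is not set to 1. The function name, the `req.user` shape, and the route wiring are illustrative assumptions, not the exact Stage 5 implementation.
+
+```javascript
+// Minimal sketch (illustrative names): guarding the migration APIs behind a super admin check.
+// It assumes an earlier authentication step has attached the current user to req.user.
+function requireSuperAdmin(req, res, next) {
+  if (!req.user) {
+    return res.status(401).json({ error: "Authentication required" });
+  }
+  // super_admin is the field added to both databases in the steps above.
+  if (Number(req.user.super_admin) !== 1) {
+    return res.status(403).json({ error: "Access denied: super admin privileges required" });
+  }
+  next();
+}
+
+// Hypothetical wiring: every migration control endpoint sits behind the guard.
+// app.use('/api/migration', requireSuperAdmin, migrationRouter);
+```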
+ +### Step 5: Begin Controlled Migration + +Once you're in the admin portal, you'll see the migration control interface with options to: + +- **View Current Status**: See which migration phase you're currently in +- **Switch Phases**: Move between the 5 migration phases safely +- **Monitor Validation**: Check how well the dual-database system is working +- **Control Individual Flags**: Fine-tune the feature flags for testing + +## Your First Modernization Step + +Now that you have access to the control panel, you can take your first step in the controlled migration process. Click the button to enable "Phase 2: Dual Write + MySQL Read". + +![Migration control panel](/static/images/modernizr/6/stage06-25.png) + +This phase is the first safe step in your migration because it: +- **Keeps Reading from MySQL**: Your application continues to serve data from the reliable MySQL database +- **Starts Writing to Both**: Every new data change goes to both MySQL and DynamoDB +- **Maintains Performance**: Users don't notice any difference in application speed +- **Builds Confidence**: You can monitor that DynamoDB writes are working correctly + +With the admin portal working and Phase 2 enabled, you've taken the first step in your controlled migration from MySQL to DynamoDB. Your application is now writing to both databases while still serving data from the reliable MySQL system, giving you the foundation to safely continue the migration process. diff --git a/content/modernizr/06-data-movement/etl-06.en.md b/content/modernizr/06-data-movement/etl-06.en.md new file mode 100644 index 0000000..4092e87 --- /dev/null +++ b/content/modernizr/06-data-movement/etl-06.en.md @@ -0,0 +1,111 @@ +--- +title: "6.6 Data Movement" +menuTitle: "Data Movement" +date: 2025-09-02T16:43:04-05:00 +weight: 36 +chapter: false +--- + +Now comes the most exciting part - actually testing your migration system with real user actions! You'll see firsthand how the feature flag system controls where data goes, and you'll prove that your migration from MySQL to DynamoDB actually works. + +This is your test flight - you want to see all the systems working together before you declare the mission successful. + +### Monitoring the Migration Process + +When you use the admin portal to toggle feature flags, you're actually calling REST API endpoints that control your application's database behavior. The system logs every phase change so you can track what's happening. + +![Migration control panel](/static/images/modernizr/6/stage06-26.png) + +These log messages are crucial for understanding: +- **When** changes happen (timestamps for troubleshooting) +- **What** changed (which flags were modified) +- **Who** made the change (accountability for production systems) +- **Current State** (confirmation of the new configuration) + +This logging gives you a complete audit trail of your migration process. + +## Testing Phase 2: Dual Write + MySQL Read + +Keep two browser tabs open: +- **Tab 1**: The migration control admin portal +- **Tab 2**: The online shopping store application + +This setup lets you control the migration and immediately test the results. + +While in Phase 2 (dual write + MySQL read), go to the shopping store and add an item to your cart. 
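+
+Behind that single click, the abstraction layer fans the write out according to the current feature flags. Here is a minimal sketch of the idea, assuming hypothetical repository modules and a `flags` lookup; the flag names are the ones you will see in the control panel, but everything else is illustrative rather than the workshop's generated code.
+
+```typescript
+// Illustrative dual-write wrapper for Phase 2 (dual write + MySQL read).
+import { flags } from "./featureFlags";               // e.g. { READ_MYSQL: true, READ_DDB: false, WRITE_MYSQL: true, WRITE_DDB: true }
+import * as mysqlRepo from "./mysql/cartRepository";  // hypothetical MySQL repository
+import * as ddbRepo from "./dynamodb/cartRepository"; // hypothetical DynamoDB repository
+
+export async function addCartItem(userId: string, productId: string, quantity: number) {
+  if (flags.WRITE_MYSQL) {
+    // MySQL is still the source of truth in Phase 2, so this write must succeed.
+    await mysqlRepo.addCartItem(userId, productId, quantity);
+  }
+  if (flags.WRITE_DDB) {
+    try {
+      // Shadow write: store the item in the single-table shape from the migration contract.
+      await ddbRepo.addCartItem(userId, productId, quantity);
+    } catch (err) {
+      // A DynamoDB failure is logged for investigation, never surfaced to the shopper.
+      console.error("DynamoDB shadow write failed", { userId, productId, err });
+    }
+  }
+}
+
+export async function getCart(userId: string) {
+  // In Phase 2 reads come only from MySQL; later phases flip this to DynamoDB.
+  return flags.READ_DDB ? ddbRepo.getCart(userId) : mysqlRepo.getCart(userId);
+}
+```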
+
+![Migration control panel](/static/images/modernizr/6/stage06-27.png)
+
+What happens behind the scenes:
+- **Write Operation**: The system writes the cart item to BOTH MySQL and DynamoDB
+- **Read Operation**: When the page loads your cart, it reads ONLY from MySQL
+- **User Experience**: You see the item in your cart normally
+- **Data Flow**: Data flows to both databases, but the app still trusts MySQL
+
+Check the MySQL database first. Look at the user views you created earlier - you should see the new cart item for the admin user.
+
+![Migration control panel](/static/images/modernizr/6/stage06-28.png)
+
+This confirms that the MySQL write operation worked correctly: your existing database is still receiving updates, and the application can still function normally if you need to roll back.
+
+Now check the AWS DynamoDB console. Open the `Users` table and look for the admin user's record - you should see the same cart item there too.
+
+![Migration control panel](/static/images/modernizr/6/stage06-29.png)
+
+This proves that the dual-write system is working correctly: the transformation from MySQL to DynamoDB format is working, both databases now hold the same information, and you're ready to start reading from DynamoDB.
+
+## Testing Phase 5: DynamoDB Only
+
+Now for the big test! Go back to the migration control panel and switch directly to Phase 5 (DynamoDB Only). This simulates completing the entire migration process instantly.
+
+In Phase 5:
+- **READ_MYSQL**: OFF (no reading from MySQL)
+- **READ_DDB**: ON (all reads come from DynamoDB)
+- **WRITE_MYSQL**: OFF (no writing to MySQL)
+- **WRITE_DDB**: ON (all writes go to DynamoDB)
+
+Go back to your shopping cart and complete the checkout process for the item you added earlier.
+
+![Migration control panel](/static/images/modernizr/6/stage06-30.png)
+
+This transaction is critical because it:
+- **Reads** your cart data from DynamoDB (not MySQL)
+- **Processes** the checkout using DynamoDB data
+- **Writes** the order information only to DynamoDB
+- **Deletes** the cart item from DynamoDB only
+
+Refresh the DynamoDB Users table and look for changes in the admin user's record. You should see:
+
+- **Cart Item Gone**: The `CART#1` item has disappeared
+- **New Order Created**: A new `ORDER#` record has appeared
+- **Complete Transaction**: The checkout process worked entirely with DynamoDB
+
+![Migration control panel](/static/images/modernizr/6/stage06-31.png)
+
+This proves that your application can:
+- Read complex data from DynamoDB correctly
+- Process business logic with NoSQL data
+- Write new records in the proper DynamoDB format
+- Delete records when needed
+
+Finally, check your MySQL user views. You should NOT see the completed order there, because in Phase 5, the application doesn't write to MySQL anymore.
+
+![Migration control panel](/static/images/modernizr/6/stage06-32.png)
+
+This confirms that the Phase 5 configuration is working correctly: your application is truly running on DynamoDB, MySQL is no longer part of your active system, and the migration is functionally complete.
+
+## What You've Accomplished
+
+Congratulations! You have successfully migrated your application from MySQL to DynamoDB. Your test proves that:
+
+- **Data Migration Works**: Your historical data was correctly transferred from MySQL to DynamoDB using the ETL process.
+- **Application Logic Works**: Your business logic (shopping cart, checkout process) works correctly with NoSQL data.
+- **Performance Is Acceptable**: The application responds normally when reading and writing DynamoDB data.
+- **Feature Flags Work**: You can control database usage precisely through the admin portal.
+- **Rollback Is Possible**: Until Phase 5, you could have rolled back to MySQL at any time.
+
+### Technical Achievement
+
+You've built and operated a dual-database system that safely migrates data without downtime, provides complete control over the migration process, validates data consistency between systems, and works with MySQL and DynamoDB at the same time.
+
+Your application is now successfully running on DynamoDB, with all the benefits of NoSQL scalability, performance, and cost-effectiveness. The modernization project is complete!
diff --git a/content/modernizr/06-data-movement/index.en.md b/content/modernizr/06-data-movement/index.en.md
new file mode 100644
index 0000000..f4b696a
--- /dev/null
+++ b/content/modernizr/06-data-movement/index.en.md
@@ -0,0 +1,119 @@
+---
+title: "Stages 6 and 7: Infra and Data Movement"
+date: 2025-09-02T15:41:04-05:00
+weight: 30
+chapter: true
+---
+
+## Moving the data from MySQL to DynamoDB
+
+At this point, we have created the dual database layer that allows our application to connect to DynamoDB and MySQL, we have configured the connectivity to DynamoDB, and we have refactored the entire application to implement the application access patterns we need to support. Now we need to gradually move the data from one database to the other. There are several approaches you can follow in this regard; however, we have chosen a dual-write approach, where we will gradually send data to the DynamoDB tables. This process consists of 5 different phases.
+
+## Phase 1: MySQL Only (Baseline State)
+
+This phase represents your application's current operational state, serving as the foundation for the migration process. The dual abstraction layer is active, but all database operations route exclusively to MySQL. This configuration validates that your abstraction layer functions correctly without introducing any changes to existing data flows.
+
+**Key Characteristics:**
+- **Data Flow**: All read and write operations target MySQL exclusively
+- **Performance Baseline**: Establish current system performance metrics for comparison
+- **Risk Level**: Minimal - no changes to existing data operations
+- **Validation Focus**: Confirm abstraction layer doesn't introduce performance overhead
+
+During this phase, implement comprehensive monitoring to establish baseline performance metrics. These measurements become reference points for evaluating the success of subsequent migration phases. Monitor response times, throughput rates, error frequencies, and resource utilization patterns to create a complete performance profile of your current system.
+
+The abstraction layer validation ensures that the additional software layer doesn't negatively impact application performance or introduce unexpected behavior changes. This phase provides confidence that your migration infrastructure is solid before introducing DynamoDB operations.
+
+## Phase 2: Dual Writes + MySQL Reads (Shadow Mode)
+
+This phase introduces DynamoDB writes while maintaining MySQL as the exclusive read source. This "shadow mode" operation allows comprehensive testing of your DynamoDB implementation without affecting user-facing functionality, providing a safe environment to validate data transformations and identify integration issues.
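+
+In terms of the feature flags you built in Stage 5, the difference from Phase 1 is a single switch: writes now also go to DynamoDB. The snippet below is an illustration only; the flag names match the ones toggled in the admin portal, but the object shape is an assumption, not the portal's actual storage format.
+
+```typescript
+// Illustrative flag settings for the first two phases.
+const phase1 = { READ_MYSQL: true, READ_DDB: false, WRITE_MYSQL: true, WRITE_DDB: false }; // MySQL only (baseline)
+const phase2 = { READ_MYSQL: true, READ_DDB: false, WRITE_MYSQL: true, WRITE_DDB: true  }; // dual write, MySQL read (shadow mode)
+```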
+
+**Shadow Mode Benefits:**
+- **Risk Mitigation**: User experience remains unchanged as all reads come from MySQL
+- **Data Validation**: Compare write operations between databases to ensure transformation accuracy
+- **Performance Testing**: Evaluate DynamoDB write performance under real traffic patterns
+- **Error Detection**: Identify and resolve integration issues without user impact
+
+Configure your dual database abstraction layer to write to both MySQL and DynamoDB simultaneously while routing all read operations to MySQL. Implement comprehensive logging to track any discrepancies between successful MySQL writes and failed DynamoDB operations. This logging provides debugging information for resolving integration issues.
+
+**Parallel Data Migration Process:**
+While shadow mode operates, execute an ETL (Extract, Transform, Load) process to migrate existing MySQL data to DynamoDB. This bulk migration uses your migration contract specifications to transform historical data according to your new NoSQL data model. The ETL process should validate data integrity and provide detailed reports on migration success rates and any encountered issues.
+
+Monitor dual write success rates carefully, aiming for near-perfect parity between MySQL and DynamoDB write operations. Any persistent discrepancies indicate issues with your migration contract implementation or DynamoDB integration that must be resolved before proceeding.
+
+## Phase 3: Dual Operations with MySQL Primary (Validation Phase)
+
+This phase expands functionality to include reads from both databases while maintaining MySQL as the authoritative source. This configuration enables comprehensive validation of DynamoDB data accuracy and query performance while preserving the ability to fall back to MySQL for any inconsistencies.
+
+**Dual Read Implementation:**
+- **Primary Response**: MySQL provides the official response to user requests
+- **Validation Query**: DynamoDB executes parallel queries for comparison
+- **Consistency Checking**: System compares responses and logs discrepancies
+- **Fallback Logic**: Any DynamoDB errors automatically fall back to MySQL results
+
+This phase provides the first opportunity to compare query performance between MySQL and DynamoDB under real traffic conditions. Monitor response times, throughput capabilities, and resource utilization for both systems. These metrics inform capacity planning decisions and help identify optimization opportunities.
+
+**Data Consistency Validation:**
+Implement automated comparison logic that validates data consistency between MySQL and DynamoDB responses. Log any discrepancies for analysis, paying particular attention to:
+- **Data Transformation Accuracy**: Ensure migration contract transformations preserve data integrity
+- **Timing Issues**: Identify any eventual consistency issues in DynamoDB reads
+- **Query Logic Equivalence**: Verify that DynamoDB queries return functionally equivalent results
+
+This phase serves as a critical confidence-building period for development teams. Seeing consistent, accurate results from DynamoDB builds trust in the new system and provides validation that the migration effort is proceeding successfully.
+
+## Phase 4: DynamoDB Primary with MySQL Fallback (Performance Validation)
+
+This phase represents a significant milestone: DynamoDB becomes the primary data source while maintaining MySQL as a safety net.
Users receive DynamoDB responses by default, with automatic fallback to MySQL if DynamoDB operations fail or return inconsistent results.
+
+**Primary Source Transition:**
+- **Default Response Source**: DynamoDB provides primary responses to user requests
+- **Performance Monitoring**: Track DynamoDB response times and accuracy under full load
+- **Automatic Fallback**: MySQL serves as backup for any DynamoDB failures
+- **Confidence Building**: Validate that DynamoDB can handle production traffic reliably
+
+Maintain dual write operations during this phase to ensure MySQL data remains current for potential rollback scenarios. This configuration provides maximum safety while allowing real-world validation of DynamoDB performance and reliability.
+
+Focus monitoring efforts on:
+- **Response Time Comparison**: Measure DynamoDB performance against MySQL baseline
+- **Error Rate Analysis**: Track any increase in application errors or timeouts
+- **User Experience Impact**: Monitor user-facing performance metrics and satisfaction indicators
+- **Capacity Utilization**: Ensure DynamoDB capacity settings handle production traffic appropriately
+
+Use real traffic patterns to identify optimization opportunities in your DynamoDB configuration. Adjust capacity settings, optimize query patterns, or refine GSI usage based on actual performance data rather than theoretical projections.
+
+## Phase 5: DynamoDB Only (Complete Migration)
+
+The final phase represents complete migration success: DynamoDB handles all database operations independently. MySQL writes cease, and the application operates entirely on the modern NoSQL infrastructure.
+
+**Migration Completion Criteria:**
+- **Performance Validation**: DynamoDB consistently meets or exceeds MySQL performance benchmarks
+- **Reliability Confirmation**: Error rates remain within acceptable thresholds over extended periods
+- **Feature Parity**: All application functionality works correctly with DynamoDB
+- **Operational Readiness**: Support teams are trained and confident with DynamoDB operations
+
+Plan careful decommissioning of MySQL infrastructure:
+- **Data Archival**: Preserve historical MySQL data according to compliance requirements
+- **Monitoring Cleanup**: Remove MySQL-specific monitoring and alerting
+- **Code Cleanup**: Remove dual database abstraction layer complexity
+- **Documentation Updates**: Update operational procedures and system documentation
+
+**Success Metrics:**
+- **Performance Improvement**: Document quantifiable improvements in response times, throughput, or scalability
+- **Operational Efficiency**: Measure improvements in operational overhead and maintenance requirements
+- **Cost Optimization**: Calculate cost savings from improved resource utilization
+- **Development Velocity**: Assess improvements in development team productivity with modern data access patterns
+
+Maintain enhanced monitoring during the initial post-migration period to quickly identify and resolve any issues that emerge in the fully migrated system. This vigilance ensures stable operation and builds confidence in the migration's long-term success.
+
+## Migration Control Panel
+
+To simplify this process, we created an admin portal that allows you to control the migration phases using feature flags.
+
+![Migration control panel](/static/images/modernizr/6/stage06-01.png)
+
+This process consists of three actions:
+
+1. Using the `migrationContract.json`, generate the MySQL views and use the MySQL MCP server to create them.
+2. Using the DynamoDB MCP server, create the DynamoDB tables specified in the `migrationContract.json`.
+3. Create the Glue ETL scripts and run them to move bulk data from MySQL to DynamoDB using the Data processing MCP server (Glue MCP Server).
+
+
diff --git a/content/modernizr/07-conclusion/index.en.md b/content/modernizr/07-conclusion/index.en.md
new file mode 100644
index 0000000..7d97d42
--- /dev/null
+++ b/content/modernizr/07-conclusion/index.en.md
@@ -0,0 +1,105 @@
+---
+title: "Conclusion and next steps"
+date: 2025-09-02T15:41:04-05:00
+weight: 30
+chapter: true
+---
+
+## Congratulations on Your Database Modernization Journey!
+
+You've just completed something that many experienced developers find challenging - a complete database modernization from MySQL to DynamoDB. Think of what you've accomplished as successfully changing a car's engine from gas to electric while keeping both engines running at the same time!
+
+You've learned a systematic approach to modernizing any database system that you can apply throughout your career as a software developer.
+
+## What You've Actually Built
+
+Over the course of this workshop, you've constructed a database modernization system that includes:
+
+- **Stage 1**: MySQL analysis and understanding your existing data relationships
+- **Stage 2**: NoSQL data modeling with a formal migration contract
+- **Stage 3**: Dual database abstraction layer for seamless switching
+- **Stage 4**: DynamoDB infrastructure with tables, indexes, and connections
+- **Stage 5**: Feature flag control system for safe, controlled migration
+- **Stage 6**: Automated ETL processes for data movement
+- **Stage 7**: Real-world testing and validation
+
+You can use this approach for your own applications: the goal is that users don't experience service interruptions, while you keep multiple points where you could safely return to MySQL. In addition, you can always compare MySQL and DynamoDB results and use an admin portal to control the migration process.
+
+Through this process, you've developed skills that are directly applicable to professional software development:
+
+- **Database Design**: Understanding both relational and NoSQL data modeling principles
+- **System Architecture**: Building abstraction layers that hide complexity from business logic
+- **DevOps Practices**: Using infrastructure as code and automated deployment processes
+- **Quality Assurance**: Implementing validation and testing systems for critical data operations
+- **Risk Management**: Using feature flags and phased rollouts to minimize operational risk
+
+## Understanding the Time Investment
+
+### Why 11 Hours Matters
+
+This workshop compressed about 11 hours of work into a shorter experience by providing some pre-built components.
In a real-world scenario, a complete database modernization project typically takes: + +- **Planning Phase**: 3-4 weeks of analysis and design +- **Development Phase**: 8-12 weeks of implementation and testing +- **Migration Phase**: 2-3 weeks of careful, controlled rollout +- **Validation Phase**: 1-2 weeks of monitoring and optimization + +The 11-hour timeline represents the core technical work, but doesn't include: + +- **Business Requirements Analysis**: Understanding what your application actually needs +- **Stakeholder Coordination**: Getting approval from management, users, and operations teams +- **Production Testing**: Extensive testing with real user loads and edge cases +- **Documentation**: Creating operational runbooks and training materials + +### The Importance of Taking It Seriously + +Database migrations are among the most critical operations in software development because: + +- **Data Is Irreplaceable**: Unlike code, which you can always rewrite, data represents your business's history and value. Losing customer data or order history can destroy a business. +- **Downtime Is Expensive**: Every minute your application is unavailable costs money and user trust. A poorly planned migration can cause hours or days of outages. +- **Complexity Is Hidden**: Databases often have subtle dependencies and edge cases that only appear under specific conditions. What works in testing might fail in production. +- **Rollback Is Difficult**: Once you've started writing to a new database format, rolling back becomes increasingly complicated and risky. + +## Continuing Your Database Education + +While AI tools like `Cline` can generate a lot of code and configurations automatically, you still need to understand the fundamentals to: + +- **Validate AI Output**: AI tools can often be a little too "creative" and may make mistakes or choose suboptimal approaches. You need to understand enough to spot problems and ask better questions. +- **Debug Issues**: When something goes wrong in production (and it will), you need to understand the underlying systems to diagnose and fix problems quickly. +- **Make Design Decisions**: AI can implement solutions, but you need to decide what problems to solve and what trade-offs to make. +- **Optimize Performance**: Fine-tuning database performance requires deep understanding of how data flows through your system. + +### Key Areas for Deeper Study + +- Learn about DynamoDB Streams for real-time data processing +- Understand Global Tables for multi-region applications +- Master advanced querying techniques and performance optimization +- Study different NoSQL data patterns. +- Learn when to choose NoSQL vs. relational databases. +- Understand eventual consistency and how it affects application design + +### Practical Next Steps + +- **Complete the Full Workshop**: If you want the complete experience, use the `clean-start` folder to run all 11 hours of the workshop from scratch. This will give you hands-on experience with every step of the process. +- **Practice Data Modeling**: Take existing relational schemas and practice converting them to different NoSQL formats. The more you practice, the more intuitive it becomes. + +## Final Thoughts + +Database modernization is both an art and a science. The technical skills you've learned here are just the beginning - the real expertise comes from applying these techniques to real-world problems with real consequences. 
+ +As you continue your career in DynamoDB Data modeling, remember that every database migration teaches you something new. Each system has its own quirks, each business has its own constraints, and each team has its own culture. The systematic approach you've learned provides the framework, but experience will teach you the judgment. + +Most importantly, always remember that behind every database is a business that depends on that data. Your job isn't just to make the technology work - it's to help businesses succeed while protecting the data they trust you with. + +**Keep learning, keep building, and keep asking "why" when AI tools suggest solutions. The future of software development belongs to developers who understand both the power of AI assistance and the fundamentals that make technology reliable.** + +## Share Your Experience + +We'd love to hear about your experience with this workshop! Please share: + +- **What worked well** for your learning style +- **What was challenging** and could be explained better +- **How you plan to apply** these skills in your projects +- **What database modernization scenarios** you're facing in your work + +Please share your feedback as it helps us improve this workshop for future developers beginning their database modernization journey. diff --git a/content/modernizr/index.en.md b/content/modernizr/index.en.md new file mode 100644 index 0000000..4687e18 --- /dev/null +++ b/content/modernizr/index.en.md @@ -0,0 +1,29 @@ +--- +title: "LGAM: GenerativeAI Application Modernization" +date: 2025-09-01T09:46:04-05:00 +weight: 50 +--- + +## Database Modernization with AI-Powered Workflows + +In this module, you'll modernize an existing e-commerce application from a traditional MySQL-based architecture to a modern DynamoDB implementation. This represents a real-world scenario where legacy systems need to evolve to meet modern scalability and performance requirements. + +The application you'll be working with is an online shopping platform built with an Express.js backend and React frontend, currently running on MySQL. Your goal is to transform this into a cloud-native, serverless architecture while maintaining full functionality and improving performance. + +## The Database Modernizr Solution + +This workshop introduces the **Database Modernizr** - an AI-driven workflow that accelerates database modernization projects from traditional 12-18 month timelines to days. Rather than manual code rewriting, you'll leverage generative AI to systematically analyze existing systems and guide the modernization process. + +You'll work with multiple AWS MCP (Model Context Protocol) servers that provide specialized capabilities for different aspects of the modernization process. Model Context Protocol is an open standard for endpoints that an AI agent can call to expose functionality from databases, filesystems, or external APIs in a consistent way. These include MySQL analysis, DynamoDB modeling, and data processing tools. Additionally, you'll learn advanced prompt engineering techniques optimized for database design and application refactoring, minimizing the risk of AI hallucinations during the execution of this workflow. 
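+
+To make the idea of an MCP server less abstract, here is a rough sketch of what a tool call looks like on the wire. MCP is built on JSON-RPC 2.0, and agents invoke server capabilities through methods such as `tools/list` and `tools/call`; the tool name and arguments below are invented for illustration and will not match the workshop's servers exactly.
+
+```typescript
+// Illustrative shape of an MCP "tools/call" request (JSON-RPC 2.0).
+// "run_query" and its arguments are hypothetical, not the actual API of the
+// MySQL MCP server used in this workshop.
+const request = {
+  jsonrpc: "2.0",
+  id: 1,
+  method: "tools/call",
+  params: {
+    name: "run_query",                  // a tool the server advertises via tools/list
+    arguments: { sql: "SHOW TABLES" },  // tool-specific input, validated against the tool's schema
+  },
+};
+// The server returns a result (or an error) that the agent folds back into its
+// reasoning; this is how Cline "uses" the MySQL, DynamoDB, and Glue servers.
+```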
+ +## Learning Objectives + +By completing this workshop, you'll gain hands-on experience with: +- Legacy system analysis and access pattern identification +- NoSQL data modeling principles and DynamoDB design patterns +- AI-assisted code refactoring and database abstraction layers +- Dual-write implementations for zero-downtime migrations +- Infrastructure as Code deployment strategies +- ETL processes for data migration between database systems + +This workshop demonstrates how AI can serve as a force multiplier in complex technical projects while maintaining human oversight for important architectural decisions. diff --git a/static/images/modernizr/0/setup-01.png b/static/images/modernizr/0/setup-01.png new file mode 100644 index 0000000..6c5a34a Binary files /dev/null and b/static/images/modernizr/0/setup-01.png differ diff --git a/static/images/modernizr/0/setup-02.png b/static/images/modernizr/0/setup-02.png new file mode 100644 index 0000000..bf40269 Binary files /dev/null and b/static/images/modernizr/0/setup-02.png differ diff --git a/static/images/modernizr/0/setup-03.png b/static/images/modernizr/0/setup-03.png new file mode 100644 index 0000000..4d6e82d Binary files /dev/null and b/static/images/modernizr/0/setup-03.png differ diff --git a/static/images/modernizr/0/setup-04.png b/static/images/modernizr/0/setup-04.png new file mode 100644 index 0000000..aa94862 Binary files /dev/null and b/static/images/modernizr/0/setup-04.png differ diff --git a/static/images/modernizr/0/setup-05.png b/static/images/modernizr/0/setup-05.png new file mode 100644 index 0000000..c9b11f0 Binary files /dev/null and b/static/images/modernizr/0/setup-05.png differ diff --git a/static/images/modernizr/0/setup-06.png b/static/images/modernizr/0/setup-06.png new file mode 100644 index 0000000..d7addff Binary files /dev/null and b/static/images/modernizr/0/setup-06.png differ diff --git a/static/images/modernizr/0/setup-07.png b/static/images/modernizr/0/setup-07.png new file mode 100644 index 0000000..70d182b Binary files /dev/null and b/static/images/modernizr/0/setup-07.png differ diff --git a/static/images/modernizr/0/setup-08.png b/static/images/modernizr/0/setup-08.png new file mode 100644 index 0000000..47bd810 Binary files /dev/null and b/static/images/modernizr/0/setup-08.png differ diff --git a/static/images/modernizr/0/setup-09.png b/static/images/modernizr/0/setup-09.png new file mode 100644 index 0000000..5e5806f Binary files /dev/null and b/static/images/modernizr/0/setup-09.png differ diff --git a/static/images/modernizr/0/setup-10.png b/static/images/modernizr/0/setup-10.png new file mode 100644 index 0000000..07d6a65 Binary files /dev/null and b/static/images/modernizr/0/setup-10.png differ diff --git a/static/images/modernizr/1/workflow-01.png b/static/images/modernizr/1/workflow-01.png new file mode 100644 index 0000000..4f0116b Binary files /dev/null and b/static/images/modernizr/1/workflow-01.png differ diff --git a/static/images/modernizr/1/workflow-02.png b/static/images/modernizr/1/workflow-02.png new file mode 100644 index 0000000..6252d5d Binary files /dev/null and b/static/images/modernizr/1/workflow-02.png differ diff --git a/static/images/modernizr/1/workflow-03.png b/static/images/modernizr/1/workflow-03.png new file mode 100644 index 0000000..6ddce6a Binary files /dev/null and b/static/images/modernizr/1/workflow-03.png differ diff --git a/static/images/modernizr/1/workflow-04.png b/static/images/modernizr/1/workflow-04.png new file mode 100644 index 0000000..f3020b2 
Binary files /dev/null and b/static/images/modernizr/1/workflow-04.png differ diff --git a/static/images/modernizr/1/workflow-artifacts-01.png b/static/images/modernizr/1/workflow-artifacts-01.png new file mode 100644 index 0000000..5e8432a Binary files /dev/null and b/static/images/modernizr/1/workflow-artifacts-01.png differ diff --git a/static/images/modernizr/1/workflow-base-01.png b/static/images/modernizr/1/workflow-base-01.png new file mode 100644 index 0000000..9d624be Binary files /dev/null and b/static/images/modernizr/1/workflow-base-01.png differ diff --git a/static/images/modernizr/1/workflow-prompt-01.png b/static/images/modernizr/1/workflow-prompt-01.png new file mode 100644 index 0000000..c98f5d7 Binary files /dev/null and b/static/images/modernizr/1/workflow-prompt-01.png differ diff --git a/static/images/modernizr/1/workflow-prompt-02.png b/static/images/modernizr/1/workflow-prompt-02.png new file mode 100644 index 0000000..e4f5889 Binary files /dev/null and b/static/images/modernizr/1/workflow-prompt-02.png differ diff --git a/static/images/modernizr/2/stage02-01.png b/static/images/modernizr/2/stage02-01.png new file mode 100644 index 0000000..d065758 Binary files /dev/null and b/static/images/modernizr/2/stage02-01.png differ diff --git a/static/images/modernizr/2/stage02-02.png b/static/images/modernizr/2/stage02-02.png new file mode 100644 index 0000000..e064d7d Binary files /dev/null and b/static/images/modernizr/2/stage02-02.png differ diff --git a/static/images/modernizr/2/stage02-03.png b/static/images/modernizr/2/stage02-03.png new file mode 100644 index 0000000..37268a9 Binary files /dev/null and b/static/images/modernizr/2/stage02-03.png differ diff --git a/static/images/modernizr/2/stage02-04.png b/static/images/modernizr/2/stage02-04.png new file mode 100644 index 0000000..a3f76a9 Binary files /dev/null and b/static/images/modernizr/2/stage02-04.png differ diff --git a/static/images/modernizr/2/stage02-05.png b/static/images/modernizr/2/stage02-05.png new file mode 100644 index 0000000..3c263d5 Binary files /dev/null and b/static/images/modernizr/2/stage02-05.png differ diff --git a/static/images/modernizr/2/stage02-06.png b/static/images/modernizr/2/stage02-06.png new file mode 100644 index 0000000..09b957a Binary files /dev/null and b/static/images/modernizr/2/stage02-06.png differ diff --git a/static/images/modernizr/2/stage02-07.png b/static/images/modernizr/2/stage02-07.png new file mode 100644 index 0000000..00208a2 Binary files /dev/null and b/static/images/modernizr/2/stage02-07.png differ diff --git a/static/images/modernizr/2/stage02-08.png b/static/images/modernizr/2/stage02-08.png new file mode 100644 index 0000000..1beef0d Binary files /dev/null and b/static/images/modernizr/2/stage02-08.png differ diff --git a/static/images/modernizr/2/stage02-09.png b/static/images/modernizr/2/stage02-09.png new file mode 100644 index 0000000..357b6b4 Binary files /dev/null and b/static/images/modernizr/2/stage02-09.png differ diff --git a/static/images/modernizr/2/stage02-10.png b/static/images/modernizr/2/stage02-10.png new file mode 100644 index 0000000..ae77273 Binary files /dev/null and b/static/images/modernizr/2/stage02-10.png differ diff --git a/static/images/modernizr/2/stage02-11.png b/static/images/modernizr/2/stage02-11.png new file mode 100644 index 0000000..1d41cf1 Binary files /dev/null and b/static/images/modernizr/2/stage02-11.png differ diff --git a/static/images/modernizr/2/stage02-12.png b/static/images/modernizr/2/stage02-12.png new file 
mode 100644 index 0000000..7161626 Binary files /dev/null and b/static/images/modernizr/2/stage02-12.png differ diff --git a/static/images/modernizr/2/stage02-13.png b/static/images/modernizr/2/stage02-13.png new file mode 100644 index 0000000..4208edf Binary files /dev/null and b/static/images/modernizr/2/stage02-13.png differ diff --git a/static/images/modernizr/2/stage02-14.png b/static/images/modernizr/2/stage02-14.png new file mode 100644 index 0000000..51acf40 Binary files /dev/null and b/static/images/modernizr/2/stage02-14.png differ diff --git a/static/images/modernizr/2/stage02-15.png b/static/images/modernizr/2/stage02-15.png new file mode 100644 index 0000000..c3bfc5f Binary files /dev/null and b/static/images/modernizr/2/stage02-15.png differ diff --git a/static/images/modernizr/2/stage02-16.png b/static/images/modernizr/2/stage02-16.png new file mode 100644 index 0000000..5f05659 Binary files /dev/null and b/static/images/modernizr/2/stage02-16.png differ diff --git a/static/images/modernizr/2/stage02-17.png b/static/images/modernizr/2/stage02-17.png new file mode 100644 index 0000000..329bf7e Binary files /dev/null and b/static/images/modernizr/2/stage02-17.png differ diff --git a/static/images/modernizr/2/stage02-18.png b/static/images/modernizr/2/stage02-18.png new file mode 100644 index 0000000..a7fc65e Binary files /dev/null and b/static/images/modernizr/2/stage02-18.png differ diff --git a/static/images/modernizr/2/stage02-19.png b/static/images/modernizr/2/stage02-19.png new file mode 100644 index 0000000..b8d7ba0 Binary files /dev/null and b/static/images/modernizr/2/stage02-19.png differ diff --git a/static/images/modernizr/2/stage02-20.png b/static/images/modernizr/2/stage02-20.png new file mode 100644 index 0000000..f238547 Binary files /dev/null and b/static/images/modernizr/2/stage02-20.png differ diff --git a/static/images/modernizr/2/stage02-21.png b/static/images/modernizr/2/stage02-21.png new file mode 100644 index 0000000..29e3166 Binary files /dev/null and b/static/images/modernizr/2/stage02-21.png differ diff --git a/static/images/modernizr/2/stage02-22.png b/static/images/modernizr/2/stage02-22.png new file mode 100644 index 0000000..4669fb8 Binary files /dev/null and b/static/images/modernizr/2/stage02-22.png differ diff --git a/static/images/modernizr/2/stage02-23.png b/static/images/modernizr/2/stage02-23.png new file mode 100644 index 0000000..5896763 Binary files /dev/null and b/static/images/modernizr/2/stage02-23.png differ diff --git a/static/images/modernizr/6/stage06-01.png b/static/images/modernizr/6/stage06-01.png new file mode 100644 index 0000000..f864a9c Binary files /dev/null and b/static/images/modernizr/6/stage06-01.png differ diff --git a/static/images/modernizr/6/stage06-02.png b/static/images/modernizr/6/stage06-02.png new file mode 100644 index 0000000..d858193 Binary files /dev/null and b/static/images/modernizr/6/stage06-02.png differ diff --git a/static/images/modernizr/6/stage06-03.png b/static/images/modernizr/6/stage06-03.png new file mode 100644 index 0000000..d27409a Binary files /dev/null and b/static/images/modernizr/6/stage06-03.png differ diff --git a/static/images/modernizr/6/stage06-04.png b/static/images/modernizr/6/stage06-04.png new file mode 100644 index 0000000..900eea5 Binary files /dev/null and b/static/images/modernizr/6/stage06-04.png differ diff --git a/static/images/modernizr/6/stage06-05.png b/static/images/modernizr/6/stage06-05.png new file mode 100644 index 0000000..0f74df0 Binary files /dev/null and 
b/static/images/modernizr/6/stage06-05.png differ diff --git a/static/images/modernizr/6/stage06-06.png b/static/images/modernizr/6/stage06-06.png new file mode 100644 index 0000000..0ba934e Binary files /dev/null and b/static/images/modernizr/6/stage06-06.png differ diff --git a/static/images/modernizr/6/stage06-07.png b/static/images/modernizr/6/stage06-07.png new file mode 100644 index 0000000..1e76b2c Binary files /dev/null and b/static/images/modernizr/6/stage06-07.png differ diff --git a/static/images/modernizr/6/stage06-08.png b/static/images/modernizr/6/stage06-08.png new file mode 100644 index 0000000..c42d4c6 Binary files /dev/null and b/static/images/modernizr/6/stage06-08.png differ diff --git a/static/images/modernizr/6/stage06-09.png b/static/images/modernizr/6/stage06-09.png new file mode 100644 index 0000000..8fa2666 Binary files /dev/null and b/static/images/modernizr/6/stage06-09.png differ diff --git a/static/images/modernizr/6/stage06-10.png b/static/images/modernizr/6/stage06-10.png new file mode 100644 index 0000000..b6105a6 Binary files /dev/null and b/static/images/modernizr/6/stage06-10.png differ diff --git a/static/images/modernizr/6/stage06-11.png b/static/images/modernizr/6/stage06-11.png new file mode 100644 index 0000000..5756c5a Binary files /dev/null and b/static/images/modernizr/6/stage06-11.png differ diff --git a/static/images/modernizr/6/stage06-12.png b/static/images/modernizr/6/stage06-12.png new file mode 100644 index 0000000..8e2755c Binary files /dev/null and b/static/images/modernizr/6/stage06-12.png differ diff --git a/static/images/modernizr/6/stage06-13.png b/static/images/modernizr/6/stage06-13.png new file mode 100644 index 0000000..53bcea8 Binary files /dev/null and b/static/images/modernizr/6/stage06-13.png differ diff --git a/static/images/modernizr/6/stage06-14.png b/static/images/modernizr/6/stage06-14.png new file mode 100644 index 0000000..c558983 Binary files /dev/null and b/static/images/modernizr/6/stage06-14.png differ diff --git a/static/images/modernizr/6/stage06-15.png b/static/images/modernizr/6/stage06-15.png new file mode 100644 index 0000000..d33330c Binary files /dev/null and b/static/images/modernizr/6/stage06-15.png differ diff --git a/static/images/modernizr/6/stage06-16.png b/static/images/modernizr/6/stage06-16.png new file mode 100644 index 0000000..67932ba Binary files /dev/null and b/static/images/modernizr/6/stage06-16.png differ diff --git a/static/images/modernizr/6/stage06-17.png b/static/images/modernizr/6/stage06-17.png new file mode 100644 index 0000000..dbfacab Binary files /dev/null and b/static/images/modernizr/6/stage06-17.png differ diff --git a/static/images/modernizr/6/stage06-18.png b/static/images/modernizr/6/stage06-18.png new file mode 100644 index 0000000..0135713 Binary files /dev/null and b/static/images/modernizr/6/stage06-18.png differ diff --git a/static/images/modernizr/6/stage06-19.png b/static/images/modernizr/6/stage06-19.png new file mode 100644 index 0000000..86c52a3 Binary files /dev/null and b/static/images/modernizr/6/stage06-19.png differ diff --git a/static/images/modernizr/6/stage06-20.png b/static/images/modernizr/6/stage06-20.png new file mode 100644 index 0000000..fe6d917 Binary files /dev/null and b/static/images/modernizr/6/stage06-20.png differ diff --git a/static/images/modernizr/6/stage06-21.png b/static/images/modernizr/6/stage06-21.png new file mode 100644 index 0000000..59fe429 Binary files /dev/null and b/static/images/modernizr/6/stage06-21.png differ diff --git 
a/static/images/modernizr/6/stage06-22.png b/static/images/modernizr/6/stage06-22.png new file mode 100644 index 0000000..c182799 Binary files /dev/null and b/static/images/modernizr/6/stage06-22.png differ diff --git a/static/images/modernizr/6/stage06-23.png b/static/images/modernizr/6/stage06-23.png new file mode 100644 index 0000000..407c241 Binary files /dev/null and b/static/images/modernizr/6/stage06-23.png differ diff --git a/static/images/modernizr/6/stage06-24.png b/static/images/modernizr/6/stage06-24.png new file mode 100644 index 0000000..6fdc680 Binary files /dev/null and b/static/images/modernizr/6/stage06-24.png differ diff --git a/static/images/modernizr/6/stage06-25.png b/static/images/modernizr/6/stage06-25.png new file mode 100644 index 0000000..d980caa Binary files /dev/null and b/static/images/modernizr/6/stage06-25.png differ diff --git a/static/images/modernizr/6/stage06-26.png b/static/images/modernizr/6/stage06-26.png new file mode 100644 index 0000000..1a1641a Binary files /dev/null and b/static/images/modernizr/6/stage06-26.png differ diff --git a/static/images/modernizr/6/stage06-27.png b/static/images/modernizr/6/stage06-27.png new file mode 100644 index 0000000..e37d9bb Binary files /dev/null and b/static/images/modernizr/6/stage06-27.png differ diff --git a/static/images/modernizr/6/stage06-28.png b/static/images/modernizr/6/stage06-28.png new file mode 100644 index 0000000..bce21cf Binary files /dev/null and b/static/images/modernizr/6/stage06-28.png differ diff --git a/static/images/modernizr/6/stage06-29.png b/static/images/modernizr/6/stage06-29.png new file mode 100644 index 0000000..50a7ff3 Binary files /dev/null and b/static/images/modernizr/6/stage06-29.png differ diff --git a/static/images/modernizr/6/stage06-30.png b/static/images/modernizr/6/stage06-30.png new file mode 100644 index 0000000..126b166 Binary files /dev/null and b/static/images/modernizr/6/stage06-30.png differ diff --git a/static/images/modernizr/6/stage06-31.png b/static/images/modernizr/6/stage06-31.png new file mode 100644 index 0000000..6f48fad Binary files /dev/null and b/static/images/modernizr/6/stage06-31.png differ diff --git a/static/images/modernizr/6/stage06-32.png b/static/images/modernizr/6/stage06-32.png new file mode 100644 index 0000000..d9ad7cb Binary files /dev/null and b/static/images/modernizr/6/stage06-32.png differ