CP Mastery Roadmap: Detailed Step-by-Step Guide

PHASE 1: Master C++ Fundamentals & the STL (Weeks 1-4)

WEEK 1: STL Basics & Setup

What You Need to Learn

  • Understanding containers and iterators in C++
  • Basic STL containers: vector, string, pair
  • How to optimize I/O for competitive programming
  • Setting up a competitive programming template

Detailed To-Do List

Day 1-2: STL Introduction

  1. Watch: GeeksforGeeks "STL in C++" introduction video (20 mins)
  2. Read: cppreference.com on containers overview
  3. Understand the difference between:
    • Sequence containers (vector, deque, list)
    • Associative containers (set, map)
    • Unordered containers (unordered_set, unordered_map)
  4. Write a simple program that creates a vector, adds elements, and prints them

Day 3-4: Vector & Strings Mastery

  1. Deep dive into vector:
    • Operations: push_back(), pop_back(), at(), [], size(), clear(), insert(), erase()
    • Iterators: how to iterate, reverse iterate
    • Common patterns: 2D vectors, vectors of pairs
  2. Learn string:
    • String operations: substr(), find(), rfind(), concatenation
    • Conversion: string to int, int to string
  3. Write programs:
    • Read N integers into a vector and print in reverse
    • Check if a string is a palindrome
    • Find all occurrences of a substring in a string
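
To make the Day 3-4 exercises concrete, here is a minimal sketch (plain cin/cout; add the fast-I/O lines from Day 6 in real submissions) that reads N integers, prints them in reverse, checks a palindrome, and shows the stoi / to_string conversions:

#include<bits/stdc++.h>
using namespace std;

int main(){
    int n;
    cin >> n;
    vector<int> a(n);
    for(int &x : a) cin >> x;
    for(int i = n - 1; i >= 0; i--) cout << a[i] << " ";   // print in reverse
    cout << "\n";

    string s;
    cin >> s;
    string rev(s.rbegin(), s.rend());                      // reversed copy
    cout << (s == rev ? "palindrome" : "not a palindrome") << "\n";

    cout << stoi("42") + 1 << " " << to_string(7) + "!" << "\n"; // string <-> int conversions
    return 0;
}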

Day 5: Pair & Tuple

  1. Understand pair<T1, T2>:
    • Creating pairs
    • Accessing .first and .second
    • Vector of pairs (very common in CP)
  2. Basic usage of tuple (less common but useful)
  3. Practice:
    • Store coordinates as pairs and sort them
    • Create a vector of pair<int, string> for storing (ID, name)
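
A small sketch of the Day 5 practice (the sample data is made up): pairs sort by .first and break ties by .second, so a plain sort already orders coordinates and (ID, name) records:

#include<bits/stdc++.h>
using namespace std;

int main(){
    vector<pair<int,int>> pts = {{3, 1}, {1, 5}, {3, 0}};
    sort(pts.begin(), pts.end());                 // sorts by x, ties broken by y
    for(auto [x, y] : pts) cout << x << " " << y << "\n";

    vector<pair<int,string>> people = {{2, "bob"}, {1, "alice"}};
    sort(people.begin(), people.end());           // sorts by ID
    for(auto &[id, name] : people) cout << id << " " << name << "\n";
    return 0;
}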

Day 6: I/O Optimization & Basic Template

  1. Learn why I/O optimization matters in CP:
    • ios_base::sync_with_stdio(false) — stops syncing C and C++ I/O
    • cin.tie(NULL) — unties cin from cout
    • This can make I/O 10x faster
  2. Create your first CP template:
#include<bits/stdc++.h>
using namespace std;

int main(){
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    
    // Your code here
    
    return 0;
}
  3. Practice: Solve 5 basic Codeforces problems using this template

Day 7: Review & First Contest

  1. Revise all concepts from the week
  2. Solve 10 Codeforces Div2 A problems (rating 800-1000)
  3. Time yourself: each should take 5-10 minutes
  4. Focus on correctness, not speed

WEEK 2: Set, Map, & Pair Mastery

What You Need to Learn

  • set: ordered unique elements
  • map: key-value pairs, ordered by key
  • unordered_set and unordered_map: faster but unordered
  • When to use which container
  • Iteration and searching in these containers

Detailed To-Do List

Day 1-2: Set Container Deep Dive

  1. Understand set<T>:
    • Ordered storage, no duplicates
    • Operations: insert(), erase(), find(), count(), lower_bound(), upper_bound()
    • Time complexity: O(log n) for all operations
  2. Learn the "binary search" functions:
    • lower_bound(x): iterator to first element >= x
    • upper_bound(x): iterator to first element > x
    • These are very important in CP!
  3. Practice problems:
    • Remove duplicates from an array using set
    • Find if a number exists in a large dataset
    • Find elements in range [a, b]
    • Count occurrences using multiset
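
As an example of the range idea in item 2, the sketch below counts how many set elements fall in [a, b] with the member lower_bound/upper_bound (the member versions are O(log n); the free-standing std::lower_bound is linear on a set):

#include<bits/stdc++.h>
using namespace std;

int main(){
    set<int> s = {1, 3, 5, 7, 9, 11};
    int a = 4, b = 9;
    auto lo = s.lower_bound(a);   // first element >= 4  -> 5
    auto hi = s.upper_bound(b);   // first element > 9   -> 11
    cout << distance(lo, hi) << " elements in [" << a << ", " << b << "]\n"; // 3 (note: distance is O(k) on set iterators)
    for(auto it = lo; it != hi; ++it) cout << *it << " ";  // 5 7 9
    cout << "\n";
    return 0;
}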

Day 3-4: Map Container Deep Dive

  1. Understand map<K, V>:
    • Ordered by key, one value per key
    • Operations: insert(), [], at(), erase(), find(), count()
    • Accessing a key with [] auto-creates it (value-initialized) if it doesn't exist
  2. Practice problems:
    • Frequency counting (word frequency, element frequency)
    • Student records: map<string, int> for (name, rollno)
    • Inverse mapping problems
    • Iterating through maps in order
  3. Important gotcha: map[key]++ creates the key with value 0 if it doesn't exist, then increments it to 1
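
A minimal word-frequency sketch illustrating the [] behavior described above (the sample words are arbitrary):

#include<bits/stdc++.h>
using namespace std;

int main(){
    vector<string> words = {"a", "b", "a", "c", "a", "b"};
    map<string,int> freq;
    for(auto &w : words) freq[w]++;          // missing keys start at 0
    for(auto &[word, cnt] : freq)            // iterates in sorted key order
        cout << word << ": " << cnt << "\n"; // a: 3, b: 2, c: 1
    return 0;
}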

Day 5: Unordered Set & Map

  1. Understand hash-based containers:
    • unordered_set and unordered_map — O(1) average, O(n) worst
    • Use when order doesn't matter and speed is critical
    • Hash collisions can degrade them to O(n); on Codeforces, adversarial tests can even force this, so fall back to set/map (or use a custom hash) when in doubt
  2. When to use:
    • Ordered needed? → Use set/map
    • Only checking existence quickly? → Use unordered_set
    • Frequency counting where order doesn't matter? → Use unordered_map
  3. Practice: Solve the same problems as Day 3-4 using unordered versions

Day 6: Multiset & Multimap (Optional but Useful)

  1. multiset<T>: set that allows duplicates
    • Same operations as set
    • count(x) returns frequency
    • equal_range(x) returns iterator range of all x's
  2. multimap<K, V>: map that allows duplicate keys
    • Useful for one-to-many relationships
  3. Practice: Problems with frequency counting, duplicates
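
One multiset detail worth a quick sketch: erase(value) removes every copy, while erase(find(value)) removes just one:

#include<bits/stdc++.h>
using namespace std;

int main(){
    multiset<int> ms = {5, 5, 5, 7};
    cout << ms.count(5) << "\n";     // 3
    ms.erase(ms.find(5));            // removes ONE copy of 5
    cout << ms.count(5) << "\n";     // 2
    ms.erase(5);                     // removes ALL remaining copies of 5
    cout << ms.count(5) << "\n";     // 0
    return 0;
}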

Day 7: Integration & Practice

  1. Solve 20 Codeforces problems that use set/map
    • Mix of A, B, and some C problems
  2. Problems should involve:
    • Frequency counting
    • Duplicate removal
    • Range queries using lower_bound/upper_bound
    • Ordering/sorting requirements

WEEK 3: Priority Queue, Deque, Stack, Queue

What You Need to Learn

  • priority_queue<T>: heap-based container
  • deque<T>: double-ended queue
  • stack<T> and queue<T>: LIFO and FIFO
  • When and how to use each
  • Custom comparators

Detailed To-Do List

Day 1-2: Priority Queue Mastery

  1. Understand priority_queue<T>:
    • By default, max-heap (largest element on top)
    • Operations: push(), pop(), top(), empty()
    • Time complexity: O(log n)
  2. Create min-heap:
    • priority_queue<int, vector<int>, greater<int>> minHeap;
    • This is confusing syntax but important!
  3. Custom comparators for priority queue:
    • Store pairs or custom structures with custom ordering
    • Example: store (priority, value) and order by priority
  4. Practice problems:
    • Kth largest element
    • Sliding window maximum (use deque, not pq)
    • Merge K sorted arrays
    • Hospital queue (priority-based)
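
A short sketch of items 2-3 above: a min-heap of (priority, value) pairs, where greater<> flips the default max-heap ordering and pairs compare by .first first:

#include<bits/stdc++.h>
using namespace std;

int main(){
    // min-heap of (priority, value); pairs compare by .first, then .second
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>> pq;
    pq.push({3, 30});
    pq.push({1, 10});
    pq.push({2, 20});
    while(!pq.empty()){
        auto [p, v] = pq.top();
        pq.pop();
        cout << p << " -> " << v << "\n";   // printed in increasing priority: 1, 2, 3
    }
    return 0;
}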

Day 3: Deque Container

  1. Understand deque<T>:
    • Fast insertion/deletion at both ends: O(1)
    • Random access like vector: O(1)
    • Operations: push_back(), push_front(), pop_back(), pop_front(), [], at()
  2. When to use deque over vector:
    • When you need front operations
    • Sliding window problems
    • Implementation of actual queues/deques
  3. Practice:
    • Implement a sliding window using deque
    • Store front N elements and remove old ones

Day 4: Stack & Queue (LIFO & FIFO)

  1. stack<T>:
    • Operations: push(), pop(), top(), empty(), size()
    • Used in: DFS, expression evaluation, undo mechanisms
  2. queue<T>:
    • Operations: push(), pop(), front(), back(), empty(), size()
    • Used in: BFS, level-order traversal, simulations
  3. Practice problems:
    • Balanced parentheses checker (stack)
    • Queue simulation problems
    • Next greater element (using stack)
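
A minimal balanced-brackets checker for the first practice problem above (the helper name balanced is just for illustration):

#include<bits/stdc++.h>
using namespace std;

bool balanced(const string& s){
    stack<char> st;
    map<char,char> match = {{')','('}, {']','['}, {'}','{'}};
    for(char c : s){
        if(c == '(' || c == '[' || c == '{') st.push(c);
        else if(match.count(c)){
            if(st.empty() || st.top() != match[c]) return false;
            st.pop();
        }
    }
    return st.empty();
}

int main(){
    cout << balanced("([]{})") << " " << balanced("([)]") << "\n"; // 1 0
    return 0;
}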

Day 5-6: Advanced Priority Queue Problems

  1. Solve 15 problems involving priority queues
  2. Include problems like:
    • Dijkstra's algorithm (preview)
    • Heap sort
    • Meeting rooms scheduling
    • Task scheduling by priority
  3. Understand how priority_queue optimizes certain problems

Day 7: Integration & Review

  1. Create comparison problems that test your container knowledge:
    • When to use set vs priority_queue?
    • When to use deque vs vector?
  2. Solve 15 mixed problems from Codeforces
  3. Review all container operations and time complexities
  4. Create a cheat sheet with all containers and their operations

WEEK 4: Algorithms & Templates + Comprehensive Practice

What You Need to Learn

  • Common competitive programming patterns
  • Creating a personal template library
  • Debugging techniques in CP
  • Time complexity analysis and estimation
  • Competitive programming macros

Detailed To-Do List

Day 1: Competitive Programming Macros & Templates

  1. Common macros to use:
#define ll long long
#define vi vector<int>
#define vll vector<ll>
#define pii pair<int,int>
#define all(x) x.begin(), x.end()
#define rep(i, n) for(int i = 0; i < n; i++)
#define repn(i, n) for(int i = 1; i <= n; i++)
#define pb push_back
#define mp make_pair
#define F first
#define S second
  2. Create your template file:
    • Headers
    • Macros
    • Common functions
    • Global variables space
    • Main function structure
  3. Practice using macros in 5 problems

Day 2: Time Complexity Analysis

  1. Big-O notation deep dive:
    • O(1) - constant
    • O(log n) - logarithmic (binary search, set operations)
    • O(n) - linear (single loop)
    • O(n log n) - linearithmic (sorting, balanced BST)
    • O(n²) - quadratic (nested loops, bubble sort)
    • O(2ⁿ) - exponential
  2. Calculate rough operations:
    • For N = 10⁵, which complexities work?
    • For N = 10⁶, which complexities work?
    • For N = 10⁹, which complexities work?
  3. Rule of thumb: ~10⁸ operations per second in modern judges
  4. Practice: Estimate complexity of 10 algorithm snippets

Day 3: Debugging Techniques for CP

  1. Common bugs in CP:
    • Integer overflow (use long long)
    • Off-by-one errors in loops
    • Wrong initialization
    • Not reading entire input
    • Forgetting edge cases (N=1, empty input, negative numbers)
  2. Debugging strategies:
    • Use small test cases to trace through manually
    • Print intermediate values
    • Use assertions for assumptions: assert(n >= 1 && n <= 1000)
    • Test edge cases deliberately
  3. Counterproductive habits to avoid:
    • Submitting without local testing
    • Writing code too fast without understanding
    • Not reading problem statement carefully

Day 4-5: Comprehensive STL Practice

  1. Solve 30 Codeforces Div2 A-B level problems
  2. Each problem should reinforce:
    • Container selection
    • Algorithm choice
    • I/O handling
    • Time complexity
  3. Track your solving time - aim for <10 minutes per problem

Day 6: Binary Search Basics (Teaser for Phase 2)

  1. Linear search vs binary search concept
  2. Standard binary search template:
int left = 0, right = n - 1;
while(left <= right) {
    int mid = (left + right) / 2;
    if(arr[mid] == target) return mid;
    else if(arr[mid] < target) left = mid + 1;
    else right = mid - 1;
}
return -1; // not found
  3. Practice: 5 binary search problems

Day 7: Phase 1 Assessment

  1. Solve a Codeforces contest (Div2 A, B, possibly C)
  2. Review what went wrong, if anything
  3. Solve 10 problems from different sources
  4. Target current Codeforces rating: 900-1100

Phase 1 Resources

Learning Resources:

Practice Problems Sources:

  • Codeforces (filter by rating 800-1200)
  • AtCoder Beginner Contest (ABC)
  • CodeChef Easy problems

What You Should Build:

  • Personal template file (save in your editor)
  • List of important macros
  • Cheat sheet of all container operations with time complexities
  • Error log (mistakes you made)


PHASE 2: Core Algorithms Foundation (Weeks 5-12)

WEEK 5-6: Binary Search Deep Dive

What You Need to Learn

  • Binary search on arrays
  • Binary search on answer (predicate-based)
  • Lower bound and upper bound
  • Complex binary search patterns

Detailed To-Do List

Day 1-2: Standard Binary Search

  1. Prerequisites: array is sorted!
  2. Iterative binary search template:
int binarySearch(vector<int>& arr, int target) {
    int left = 0, right = arr.size() - 1;
    while(left <= right) {
        int mid = left + (right - left) / 2; // avoid overflow
        if(arr[mid] == target) return mid;
        else if(arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1; // not found
}
  3. Why mid = left + (right - left) / 2? Because (left + right) / 2 can overflow.
  4. Solve 10 problems on basic binary search:
    • Find element in sorted array
    • Find first/last occurrence
    • Count occurrences

Day 3-4: Lower Bound & Upper Bound

  1. lower_bound(arr.begin(), arr.end(), x) returns iterator to first element >= x
  2. upper_bound(arr.begin(), arr.end(), x) returns iterator to first element > x
  3. These are built-in binary search functions (use them!)
  4. Example use cases:
    • Find first element >= 5: lower_bound()
    • Find first element > 5: upper_bound()
    • Count occurrences of 5: upper_bound() - lower_bound()
  5. Solve 10 problems using lower_bound/upper_bound
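
For example, the "count occurrences" use case from item 4 becomes a two-line computation on a sorted vector:

#include<bits/stdc++.h>
using namespace std;

int main(){
    vector<int> a = {1, 2, 2, 2, 5, 7};
    int x = 2;
    int first = lower_bound(a.begin(), a.end(), x) - a.begin(); // index of first element >= 2 -> 1
    int last  = upper_bound(a.begin(), a.end(), x) - a.begin(); // index of first element > 2  -> 4
    cout << "count of " << x << " = " << last - first << "\n";  // 3
    return 0;
}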

Day 5-6: Binary Search on Answer (Crucial Pattern)

  1. Use this when you can't compute the answer directly, but you can efficiently check whether a candidate answer is achievable:
  2. Pattern:
bool canAchieve(int x) {
    // Check if we can achieve answer x
    // Return true/false
}

int left = minPossible, right = maxPossible;
int answer = left;
while(left <= right) {
    int mid = left + (right - left) / 2;
    if(canAchieve(mid)) {
        answer = mid;
        left = mid + 1; // keep looking for a larger feasible answer (maximization); for minimization, swap these two branch updates
    } else {
        right = mid - 1; // mid is not achievable
    }
}
  3. Common problems:
    • Minimum time to finish jobs
    • Minimum largest element after splitting
    • Maximum minimum distance
  4. Solve 15 problems on binary search on answer

Day 7: Binary Search Integration

  1. Solve 20 mixed binary search problems
  2. Include problems where you need to combine with other concepts
  3. Target: Codeforces 1000-1200 rating

WEEK 7-8: Greedy Algorithms

What You Need to Learn

  • Greedy strategy identification
  • Proof of greedy correctness
  • When greedy fails
  • Common greedy patterns

Detailed To-Do List

Day 1: Greedy Fundamentals

  1. What is greedy?
    • At each step, make the locally optimal choice
    • Hope it leads to global optimum
    • Not always correct!
  2. When greedy works:
    • Optimal substructure: optimal solution contains optimal solutions to subproblems
    • Greedy choice property: local optimum leads to global optimum
    • Need to PROVE this for each problem
  3. Example: Coin change (minimize coins)
    • Greedy: always take largest coin that fits
    • Works for standard denominations (1, 5, 10, 25) but NOT always!
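
A sketch of the coin-change greedy from item 3, plus a denomination set where it fails: with coins {1, 3, 4} and amount 6, greedy takes 4+1+1 = 3 coins while the optimum is 3+3 = 2 (the helper name greedyCoins is illustrative):

#include<bits/stdc++.h>
using namespace std;

int greedyCoins(vector<int> coins, int amount){
    sort(coins.rbegin(), coins.rend());                 // try the largest coin first
    int used = 0;
    for(int c : coins){
        used += amount / c;
        amount %= c;
    }
    return amount == 0 ? used : -1;                     // -1 if greedy can't even reach the amount
}

int main(){
    cout << greedyCoins({25, 10, 5, 1}, 63) << "\n";    // 6, which is optimal for these denominations
    cout << greedyCoins({4, 3, 1}, 6) << "\n";          // 3, but the optimum is 2 (3 + 3): greedy fails here
    return 0;
}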

Day 2-3: Classic Greedy Problems

  1. Activity selection (interval scheduling):
    • Select max number of non-overlapping activities
    • Greedy: always choose activity that ends earliest
    • Prove: why this is optimal
  2. Huffman coding: assign shortest codes to frequent characters
  3. Fractional knapsack: greedily take items by value-to-weight ratio
  4. Solve 10 problems on classic greedy patterns
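
A minimal activity-selection sketch for item 1: sort by end time, then take every interval that starts after the last chosen one ends (the sample intervals are made up):

#include<bits/stdc++.h>
using namespace std;

int main(){
    // {start, end} intervals
    vector<pair<int,int>> acts = {{1, 4}, {3, 5}, {0, 6}, {5, 7}, {8, 9}, {5, 9}};
    sort(acts.begin(), acts.end(), [](auto &a, auto &b){ return a.second < b.second; }); // by end time
    int taken = 0, lastEnd = INT_MIN;
    for(auto [s, e] : acts){
        if(s >= lastEnd){      // doesn't overlap the previously chosen activity
            taken++;
            lastEnd = e;
        }
    }
    cout << taken << "\n";     // 3 -> {1,4}, {5,7}, {8,9}
    return 0;
}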

Day 4-5: Advanced Greedy Pattern Recognition

  1. Practice problems where greedy is NOT obvious:
    • Sort by one criterion to enable greedy on another
    • Example: sort meetings by end time, then greedily select
  2. Common sorts for greedy:
    • Sort by deadline, profit, ratio, start time, end time
    • Decide which sort enables greedy
  3. Solve 15 medium-difficulty greedy problems

Day 6: Proving Greedy Correctness

  1. For each problem, explain:
    • Why is this greedy choice safe?
    • What's the invariant we maintain?
    • Why doesn't a counter-example break it?
  2. This trains you to verify greedy works before coding
  3. Solve 10 problems where you MUST explain why greedy works

Day 7: Greedy Integration

  1. Solve 20 mixed greedy problems
  2. Include problems where greedy combines with sorting, binary search, or data structures
  3. Target: Codeforces 1100-1300 rating

WEEK 9-12: Dynamic Programming (The Most Important Topic!)

What You Need to Learn

  • DP fundamentals: states, transitions, base cases
  • Memoization (top-down DP)
  • Tabulation (bottom-up DP)
  • 1D DP patterns
  • 2D DP patterns
  • Space optimization

Detailed To-Do List

Day 1: DP Fundamentals

  1. What is DP?
    • Solving overlapping subproblems by storing results
    • Breaking problem into stages with decisions
    • Optimal substructure: optimal solution uses optimal subproblem solutions
  2. Three components of DP:
    • State: what information do we need to track?
    • Transition: how do we compute current state from previous states?
    • Base case: what are the initial states?
  3. Two DP approaches:
    • Memoization (top-down): start with full problem, recursively break down, cache results
    • Tabulation (bottom-up): start with base cases, build up to full problem
  4. Example: Fibonacci
// Memoization
map<int, long long> memo;
long long fib(int n) {
    if(n <= 1) return n;
    if(memo.count(n)) return memo[n];
    return memo[n] = fib(n-1) + fib(n-2);
}

// Tabulation
long long fib(int n) {
    vector<long long> dp(n+1);
    dp[0] = 0, dp[1] = 1;
    for(int i = 2; i <= n; i++) {
        dp[i] = dp[i-1] + dp[i-2];
    }
    return dp[n];
}
  5. Understand why DP is faster than recursion (exponential to polynomial)

Day 2-3: 1D DP Mastery

  1. Coin Change (minimize coins for amount):
    • State: dp[i] = minimum coins needed for amount i
    • Transition: dp[i] = min(dp[i-coin] + 1) for all coins
    • Base case: dp[0] = 0
  2. Longest Increasing Subsequence (LIS):
    • State: dp[i] = length of LIS ending at index i
    • Transition: for each j < i where arr[j] < arr[i], dp[i] = max(dp[i], dp[j] + 1)
    • Base case: dp[i] = 1
    • Note: naive is O(n²), can optimize to O(n log n) with binary search
  3. Climbing Stairs:
    • State: dp[i] = ways to reach stair i
    • Transition: dp[i] = dp[i-1] + dp[i-2]
  4. Rod Cutting:
    • State: dp[i] = maximum profit from rod of length i
    • Transition: try cutting at each position
  5. Solve 20 problems on 1D DP
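
Since item 2 notes the O(n log n) LIS optimization, here is a hedged sketch of it: tails[k] holds the smallest possible tail of an increasing subsequence of length k+1, and each element is placed with lower_bound (the function name lisLength is illustrative):

#include<bits/stdc++.h>
using namespace std;

int lisLength(const vector<int>& a){
    vector<int> tails;                                        // tails[k] = smallest tail of an LIS of length k+1
    for(int x : a){
        auto it = lower_bound(tails.begin(), tails.end(), x); // first tail >= x (strictly increasing LIS)
        if(it == tails.end()) tails.push_back(x);             // x extends the longest subsequence so far
        else *it = x;                                         // x is a better (smaller) tail for that length
    }
    return tails.size();
}

int main(){
    cout << lisLength({10, 9, 2, 5, 3, 7, 101, 18}) << "\n";  // 4 -> e.g. 2, 3, 7, 18
    return 0;
}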

Day 4-5: 2D DP Mastery

  1. 0/1 Knapsack:
    • State: dp[i][w] = maximum value with first i items and capacity w
    • Transition: dp[i][w] = max(dp[i-1][w], dp[i-1][w-weight[i]] + value[i])
    • Base case: dp[0][w] = 0
    • Space optimization: can reduce to 1D
  2. Grid Paths:
    • State: dp[i][j] = number of ways to reach cell (i,j)
    • Transition: dp[i][j] = dp[i-1][j] + dp[i][j-1]
    • Include obstacles: if obstacle, dp[i][j] = 0
  3. Longest Common Subsequence (LCS):
    • State: dp[i][j] = LCS length of first i chars of string1 and first j chars of string2
    • Transition: if s1[i-1] == s2[j-1] then dp[i][j] = dp[i-1][j-1] + 1 else max(dp[i-1][j], dp[i][j-1])
  4. Matrix Chain Multiplication:
    • State: dp[i][j] = minimum multiplications to multiply matrices i to j
    • Transition: try all split points
  5. Solve 20 problems on 2D DP
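
A compact LCS sketch that follows the state and transition from item 3 directly:

#include<bits/stdc++.h>
using namespace std;

int lcs(const string& s1, const string& s2){
    int n = s1.size(), m = s2.size();
    vector<vector<int>> dp(n + 1, vector<int>(m + 1, 0)); // dp[i][j] = LCS of s1[0..i), s2[0..j)
    for(int i = 1; i <= n; i++){
        for(int j = 1; j <= m; j++){
            if(s1[i-1] == s2[j-1]) dp[i][j] = dp[i-1][j-1] + 1;
            else dp[i][j] = max(dp[i-1][j], dp[i][j-1]);
        }
    }
    return dp[n][m];
}

int main(){
    cout << lcs("abcde", "ace") << "\n"; // 3
    return 0;
}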

Day 6: Space Optimization & Pattern Recognition

  1. When you can optimize 2D DP to 1D:
    • If current row depends only on previous row
    • Iterate from right to left to avoid overwriting needed values
  2. When NOT to optimize:
    • When you need random access to previous states
  3. Recognize DP patterns in problem statements:
    • "count ways", "maximum/minimum", "is possible" → likely DP
    • "overlapping subproblems" mentioned in problem → DP
    • Exponential brute force solution → DP
  4. Practice identifying 10 problems as DP before coding
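
As a concrete instance of item 1, here is 0/1 knapsack reduced to one row; the capacity loop runs right to left so dp[w - weight[i]] still refers to the previous item's row (sample weights/values are made up):

#include<bits/stdc++.h>
using namespace std;

int main(){
    vector<int> weight = {3, 2, 4}, value = {30, 20, 50};
    int W = 5;
    vector<int> dp(W + 1, 0);                        // dp[w] = best value with capacity w
    for(int i = 0; i < (int)weight.size(); i++){
        for(int w = W; w >= weight[i]; w--){         // right to left: each item is used at most once
            dp[w] = max(dp[w], dp[w - weight[i]] + value[i]);
        }
    }
    cout << dp[W] << "\n";                           // 50 (weights 3+2 for value 50, or weight 4 alone)
    return 0;
}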

Day 7: Complex DP Problems

  1. DP combined with optimization:
    • DP with convex hull trick
    • DP with segment tree
    • (Don't focus too much now, come back later)
  2. Solve 20 hard 1D and 2D DP problems

Throughout Weeks 9-12:

  1. Total DP problems to solve: 100-120 problems
  2. Problem progression:
    • Week 9: 25 easy-medium 1D DP
    • Week 10: 25 easy-medium 2D DP
    • Week 11: 30 medium-hard DP (mixed)
    • Week 12: 20-25 hard DP, optimize solutions
  3. Weekly assessment: solve 5 problems in contest setting (1 problem every day)
  4. Target: Codeforces 1300-1500 rating after Phase 2

Phase 2 Resources

DP Learning:

  • YouTube: Abdul Bari DP playlist (essential!)
  • YouTube: Errichto DP discussions
  • GeeksforGeeks DP tutorials
  • Book: "Competitive Programming 3" chapters on DP

Practice Problems:

  • Codeforces DP problems (filter by tag)
  • AtCoder DP contest (great for learning)
  • LeetCode DP problems (premium)

What You Should Build:

  • DP template for memoization and tabulation
  • List of classic DP problems with templates
  • Recognition guide: "How to identify if a problem is DP"


PHASE 3: Data Structures Mastery (Weeks 13-20)

WEEK 13-14: Linear Data Structures - Stacks & Queues Advanced

What You Need to Learn

  • Monotonic stacks (very important!)
  • Monotonic queues (very important!)
  • Applications in real problems
  • Deque-based solutions

Detailed To-Do List

Day 1-2: Monotonic Stack Pattern

  1. What is a monotonic stack?
    • A stack that maintains elements in increasing or decreasing order
    • When adding a new element, pop elements that violate the order
    • We store indices, not values (to remember positions)
  2. Common problems:
    • Next Greater Element: for each element, find the next element to the right that is greater
    vector<int> nextGreater(vector<int>& arr) {
        vector<int> result(arr.size(), -1);
        stack<int> st; // store indices
        for(int i = 0; i < arr.size(); i++) {
            while(!st.empty() && arr[st.top()] < arr[i]) {
                result[st.top()] = arr[i];
                st.pop();
            }
            st.push(i);
        }
        return result;
    }
    • Previous Greater Element: similar but to the left
    • Next Smaller Element: similar but greater changes to smaller
    • Previous Smaller Element: similar
  3. Solve 10 problems on monotonic stack variants:
    • Stock span problem
    • Largest rectangle in histogram (classic!)
    • Trapping rain water (classic!)
  4. Time complexity: O(n) because each element is pushed and popped at most once

Day 3-4: Monotonic Queue Pattern

  1. What is a monotonic queue?
    • Deque that maintains elements in increasing or decreasing order
    • When adding new element, pop from back until order is maintained
    • When answer is needed, pop from front if out of window
  2. Sliding Window Maximum (most common):
    • For each window of size k, find the maximum
    vector<int> maxSlidingWindow(vector<int>& arr, int k) {
        deque<int> dq; // store indices
        vector<int> result;
        for(int i = 0; i < arr.size(); i++) {
            if(!dq.empty() && dq.front() < i - k + 1) dq.pop_front();
            while(!dq.empty() && arr[dq.back()] <= arr[i]) dq.pop_back();
            dq.push_back(i);
            if(i >= k - 1) result.push_back(arr[dq.front()]);
        }
        return result;
    }
  3. Sliding Window Minimum: similar but for minimums
  4. Optimize DP with Monotonic Queue:
    • Some DP problems use deque to optimize from O(n²) to O(n)
  5. Solve 15 problems on monotonic queue:
    • Sliding window max/min
    • Jump game optimizations
    • DP optimization with queue

Day 5: Stack Application - Expression & Bracket Problems

  1. Balanced Parentheses:
    • Check if brackets are balanced
    • Use stack to match pairs
  2. Postfix Evaluation:
    • Evaluate expressions in postfix notation
    • Push numbers, pop for operations
  3. Infix to Postfix Conversion:
    • Using stack with operator precedence
  4. Solve 10 problems on these applications

Day 6: Integration & Mixed Problems

  1. Solve 20 problems combining monotonic stack, monotonic queue, and stacks/queues
  2. Include problems from different online judges

Day 7: Assessment

  1. Solve 3 medium problems on linear data structures in contest setting
  2. Target: Codeforces 1400-1550 rating

WEEK 15-17: Trees - Mastery

What You Need to Learn

  • Tree basics and representations
  • DFS and BFS traversals
  • Binary Search Trees
  • Tree DP
  • Path problems on trees
  • LCA (Lowest Common Ancestor)

Detailed To-Do List

Day 1-2: Tree Basics & Traversals

  1. Tree representation:
    • Adjacency list: vector<int> adj[n]
    • Why not adjacency matrix? Takes O(n²) space, usually not needed for trees
  2. Tree properties:
    • n nodes, n-1 edges
    • No cycles
    • Connected
  3. DFS Traversal:
    • In-order, pre-order, post-order for binary trees
    • Generic DFS for any tree
    void dfs(int node, int parent) {
        // process node
        for(int child : adj[node]) {
            if(child != parent) {
                dfs(child, node);
            }
        }
    }
  4. BFS Traversal:
    • Level-order traversal
    • Use queue, good for finding distances
  5. Problems:
    • Find height of tree
    • Count nodes
    • Find diameter (longest path)
    • Find sum of all paths
  6. Solve 15 problems on tree traversals

Day 3: Binary Search Trees

  1. BST property: left < node < right
  2. Operations: insert, delete, search (O(log n) average, O(n) worst)
  3. In-order traversal gives sorted order
  4. Self-balancing BSTs: AVL, Red-Black (don't implement, just know they exist)
  5. In CP, usually use set or map for BST functionality
  6. Solve 10 problems on BST concepts (using set/map)
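
In practice (item 5), a set already provides the BST queries you need; for instance, successor and predecessor of a value look like this:

#include<bits/stdc++.h>
using namespace std;

int main(){
    set<int> bst = {2, 4, 8, 16};
    int x = 7;
    auto it = bst.lower_bound(x);                    // successor-or-equal: first element >= 7 -> 8
    cout << "successor of " << x << ": " << *it << "\n";
    if(it != bst.begin())
        cout << "predecessor of " << x << ": " << *prev(it) << "\n"; // largest element < 7 -> 4
    return 0;
}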

Day 4: Tree DP

  1. Tree DP pattern:
    • Process tree recursively
    • State: usually dp[node] = some property of subtree rooted at node
    • Transition: combine children's values
  2. Common problems:
    • Maximum path sum: find path (not necessarily root to leaf) with maximum sum
    • Tree diameter: longest path between any two nodes
    • Maximum subtree sum: find subtree with maximum total node values
    • Count paths with sum K: count paths from any node to any descendant with given sum
  3. Template:
    pair<int, int> dfs(int node, int parent) {
        // returns {best path sum anywhere in this subtree, best downward path sum starting at node}
        int bestInSubtree = value[node];
        int maxPathStartingHere = value[node];
        for(int child : adj[node]) {
            if(child == parent) continue; // don't walk back up the tree
            auto [childBest, childStart] = dfs(child, node);
            // join the best downward path seen so far with this child's downward path through node
            bestInSubtree = max({bestInSubtree, childBest, maxPathStartingHere + max(childStart, 0)});
            maxPathStartingHere = max(maxPathStartingHere, value[node] + max(childStart, 0));
        }
        return {bestInSubtree, maxPathStartingHere};
    }
  4. Solve 20 problems on tree DP

Day 5: LCA (Lowest Common Ancestor)

  1. Naive approach:
    • Find path from both nodes to root
    • LCA is where paths meet
    • Time: O(n) per query
  2. Binary Lifting (Sparse Table):
    • Preprocess: dp[node][i] = 2^i-th ancestor of node
    • Query: bring both nodes to same level, then jump together
    const int LOG = 20; // enough for n up to ~10^6 (20 > log2(n))
    int dp[MAXN][LOG], depth[MAXN];
    
    void dfs(int u, int p) {
        dp[u][0] = p;
        for(int i = 1; i < LOG; i++) {
            dp[u][i] = dp[dp[u][i-1]][i-1];
        }
        for(int v : adj[u]) {
            if(v != p) {
                depth[v] = depth[u] + 1; // depth[] is what lca() below uses to level the nodes
                dfs(v, u);
            }
        }
    }
    
    int lca(int u, int v) {
        if(depth[u] < depth[v]) swap(u, v);
        int diff = depth[u] - depth[v];
        for(int i = 0; i < LOG; i++) {
            if((diff >> i) & 1) u = dp[u][i];
        }
        if(u == v) return u;
        for(int i = LOG-1; i >= 0; i--) {
            if(dp[u][i] != dp[v][i]) {
                u = dp[u][i];
                v = dp[v][i];
            }
        }
        return dp[u][0];
    }
  3. Time: O(n log n) preprocessing, O(log n) per query
  4. Solve 10 problems on LCA

Day 6: Advanced Tree Problems

  1. Path queries on trees:
    • Distance between two nodes using LCA
    • Sum of node values on path
    • Count nodes on path satisfying condition
  2. Subtree queries:
    • Sum of all values in subtree
    • Count nodes in subtree
    • Find node with maximum value in subtree
  3. Solve 15 mixed tree problems
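
The distance query from item 1 is a one-liner on top of the Week 15-17 depth[] array and lca() routine; a sketch, assuming both are already built:

// distance between u and v on a tree, given depth[] and the lca() from Day 5
int treeDistance(int u, int v) {
    int a = lca(u, v);
    return depth[u] + depth[v] - 2 * depth[a];
}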

Day 7: Assessment

  1. Solve 3 tree problems in contest setting
  2. Include one DP tree problem, one LCA problem, one traversal problem
  3. Target: Codeforces 1500-1650 rating

WEEK 18-20: Graphs - Part 1 (Basic & BFS/DFS)

What You Need to Learn

  • Graph representations
  • BFS and DFS on graphs
  • Connected components
  • Cycle detection
  • Bipartite checking
  • Topological sorting

Detailed To-Do List

Day 1: Graph Representations

  1. Adjacency List (most common):
    vector<int> adj[n];
    adj[u].push_back(v); // add edge u -> v
    • Space: O(V + E)
    • Good for sparse graphs
  2. Adjacency Matrix:
    int adj[n][n]; // 1 if edge exists, 0 otherwise
    • Space: O(V²)
    • Good for dense graphs
  3. Edge List:
    vector<pair<int,int>> edges; // store all edges
    • Space: O(E)
    • Good for some algorithms (Kruskal's, Bellman-Ford)
  4. Choose representation based on problem:
    • Sparse graphs: adjacency list
    • Dense graphs: adjacency matrix
    • Special algorithms: edge list
  5. Directed vs Undirected graphs:
    • Directed: add edge only one way
    • Undirected: add edge both ways
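
One representational detail that matters later (Dijkstra, Prim): weighted graphs usually store {neighbor, weight} pairs in the adjacency list. A small sketch:

#include<bits/stdc++.h>
using namespace std;

int main(){
    int n = 4;
    vector<vector<pair<int,int>>> adj(n);        // adj[u] = list of {neighbor, weight}
    auto addEdge = [&](int u, int v, int w){     // undirected weighted edge
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    };
    addEdge(0, 1, 5);
    addEdge(1, 2, 3);
    addEdge(0, 3, 7);
    for(int u = 0; u < n; u++){
        cout << u << ":";
        for(auto [v, w] : adj[u]) cout << " (" << v << ", w=" << w << ")";
        cout << "\n";
    }
    return 0;
}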

Day 2: BFS - Breadth First Search

  1. Use queue to explore level by level
  2. Template:
    void bfs(int start) {
        vector<int> dist(n, -1);
        queue<int> q;
        q.push(start);
        dist[start] = 0;
        while(!q.empty()) {
            int u = q.front();
            q.pop();
            for(int v : adj[u]) {
                if(dist[v] == -1) {
                    dist[v] = dist[u] + 1;
                    q.push(v);
                }
            }
        }
    }
  3. Properties:
    • Finds shortest path in unweighted graphs
    • O(V + E) time complexity
    • Explores by distance
  4. Applications:
    • Shortest path in unweighted graphs
    • Finding connected components
    • Level-order traversal
  5. Solve 15 BFS problems

Day 3: DFS - Depth First Search

  1. Use recursion or stack to explore deeply
  2. Template:
    void dfs(int u, vector<bool>& visited) {
        visited[u] = true;
        for(int v : adj[u]) {
            if(!visited[v]) {
                dfs(v, visited);
            }
        }
    }
  3. Properties:
    • Explores one path completely before backtracking
    • O(V + E) time complexity
    • Can detect cycles, find components
  4. Applications:
    • All applications of BFS work with DFS too
    • Topological sort (DFS-based)
    • Cycle detection
  5. Solve 15 DFS problems

Day 4: Connected Components & Cycle Detection

  1. Connected Components:
    • Run BFS/DFS from each unvisited node
    • Count how many times we start
    int countComponents() {
        vector<bool> visited(n, false);
        int components = 0;
        for(int i = 0; i < n; i++) {
            if(!visited[i]) {
                components++;
                dfs(i, visited); // any traversal that marks visited[] works (the dfs from Day 3, for example)
            }
        }
        return components;
    }
  2. Cycle Detection in Undirected Graphs:
    • If we reach a visited node that's not the parent, there's a cycle
    bool hasCycle(int u, int parent) {
        visited[u] = true;
        for(int v : adj[u]) {
            if(!visited[v]) {
                if(hasCycle(v, u)) return true;
            } else if(v != parent) {
                return true; // cycle found
            }
        }
        return false;
    }
  3. Cycle Detection in Directed Graphs (using colors):
    • White (0): unvisited
    • Gray (1): visiting
    • Black (2): finished
    bool hasCycle(int u) {
        color[u] = 1; // gray
        for(int v : adj[u]) {
            if(color[v] == 1) return true; // back edge = cycle
            if(color[v] == 0 && hasCycle(v)) return true;
        }
        color[u] = 2; // black
        return false;
    }
  4. Solve 15 problems on components and cycles

Day 5: Bipartite Graphs

  1. A graph is bipartite if nodes can be colored with 2 colors such that no adjacent nodes have same color
  2. Bipartite = no odd-length cycles
  3. Check using BFS/DFS coloring:
    bool isBipartite() {
        vector<int> color(n, -1);
        for(int i = 0; i < n; i++) {
            if(color[i] == -1) {
                queue<int> q;
                q.push(i);
                color[i] = 0;
                while(!q.empty()) {
                    int u = q.front();
                    q.pop();
                    for(int v : adj[u]) {
                        if(color[v] == -1) {
                            color[v] = 1 - color[u];
                            q.push(v);
                        } else if(color[v] == color[u]) {
                            return false;
                        }
                    }
                }
            }
        }
        return true;
    }
  4. Applications:
    • Graph coloring problems
    • Bipartite matching (later)
    • Job assignment problems
  5. Solve 10 bipartite problems

Day 6: Topological Sorting

  1. Linear ordering of vertices such that for every directed edge u→v, u comes before v
  2. Only possible for DAGs (Directed Acyclic Graphs)
  3. Kahn's Algorithm (BFS-based):
    vector<int> topologicalSort() {
        vector<int> indegree(n, 0);
        for(int u = 0; u < n; u++) {
            for(int v : adj[u]) {
                indegree[v]++;
            }
        }
        queue<int> q;
        for(int i = 0; i < n; i++) {
            if(indegree[i] == 0) q.push(i);
        }
        vector<int> result;
        while(!q.empty()) {
            int u = q.front();
            q.pop();
            result.push_back(u);
            for(int v : adj[u]) {
                indegree[v]--;
                if(indegree[v] == 0) q.push(v);
            }
        }
        return result;
    }
  4. DFS-based approach:
    • Do DFS and add nodes to stack in post-order
    • Reverse the stack
  5. Applications:
    • Task scheduling with dependencies
    • Course prerequisites
    • Compilation order
  6. Solve 10 topological sort problems
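
Item 4 only describes the DFS-based approach in words, so here is a minimal sketch of it (assumes the input is a DAG; the names dfsTopo/topoSortDFS are illustrative):

#include<bits/stdc++.h>
using namespace std;

int n;                                  // number of nodes
vector<vector<int>> adj;                // adjacency list of a DAG
vector<bool> visited;
vector<int> order_;                     // filled in reverse topological order

void dfsTopo(int u){
    visited[u] = true;
    for(int v : adj[u])
        if(!visited[v]) dfsTopo(v);
    order_.push_back(u);                // post-order: u goes after everything it points to
}

vector<int> topoSortDFS(){
    visited.assign(n, false);
    order_.clear();
    for(int i = 0; i < n; i++)
        if(!visited[i]) dfsTopo(i);
    reverse(order_.begin(), order_.end()); // reversed post-order is a valid topological order
    return order_;
}

int main(){
    n = 4;
    adj.assign(n, {});
    adj[0] = {1, 2};   // 0 -> 1, 0 -> 2
    adj[1] = {3};      // 1 -> 3
    adj[2] = {3};      // 2 -> 3
    for(int u : topoSortDFS()) cout << u << " "; // prints 0 2 1 3 here; any valid order is acceptable
    cout << "\n";
    return 0;
}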

Day 7: Assessment

  1. Solve 4 graph problems in contest setting
  2. Include one BFS, one DFS, one bipartite, one topological sort
  3. Target: Codeforces 1600-1750 rating

Phase 3 Resources

Data Structure Learning:

  • YouTube: William Fiset (best for graph and tree videos)
  • GeeksforGeeks: Tree and Graph tutorials
  • CP-Algorithms: Tree and Graph sections

Practice Problems:

  • Codeforces (graph and tree tags)
  • AtCoder (tree problems are excellent)
  • LeetCode (good for binary trees)

What You Should Build:

  • Template for DFS and BFS
  • Tree traversal templates
  • Graph representation code
  • Segment tree implementation (coming next phase)


PHASE 4: Advanced Algorithms (Weeks 21-32)

WEEK 21-24: Graphs - Advanced (Shortest Paths, MST, Union-Find)

What You Need to Learn

  • Dijkstra's algorithm
  • Bellman-Ford algorithm
  • Floyd-Warshall algorithm
  • Minimum Spanning Tree (Kruskal & Prim)
  • Union-Find (Disjoint Set Union)

Detailed To-Do List

Day 1-2: Dijkstra's Algorithm

  1. Finds shortest path from source to all nodes in non-negative edge weights
  2. Greedy algorithm using priority queue
  3. Template:
    vector<long long> dijkstra(int start) {
        vector<long long> dist(n, 1e18);
        priority_queue<pair<long long, int>, 
                       vector<pair<long long, int>>, 
                       greater<pair<long long, int>>> pq;
        dist[start] = 0;
        pq.push({0, start});
        while(!pq.empty()) {
            auto [d, u] = pq.top();
            pq.pop();
            if(d > dist[u]) continue; // already processed
            for(auto [v, w] : adj[u]) { // adj stores {neighbor, weight}
                if(dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    pq.push({dist[v], v});
                }
            }
        }
        return dist;
    }
  4. Time: O((V + E) log V)
  5. Why doesn't it work with negative weights? The greedy step assumes a popped node's distance is final, but a later negative edge could still shorten it
  6. Problems:
    • Shortest path from A to B
    • Shortest path from A to all nodes
    • K-th shortest path
    • Constrained shortest path (avoid certain edges/nodes)
  7. Solve 15 Dijkstra problems

Day 3: Bellman-Ford Algorithm

  1. Finds shortest path with negative edge weights (but no negative cycles)
  2. Can detect negative cycles
  3. Template:
    vector<long long> bellmanFord(int start) {
        vector<long long> dist(n, 1e18);
        dist[start] = 0;
        for(int i = 0; i < n - 1; i++) {
            for(auto [u, v, w] : edges) { // edge list: {from, to, weight}
                if(dist[u] != (long long)1e18 && dist[u] + w < dist[v]) { // skip nodes not yet reached
                    dist[v] = dist[u] + w;
                }
            }
        }
        // Check for negative cycles
        for(auto [u, v, w] : edges) {
            if(dist[u] != (long long)1e18 && dist[u] + w < dist[v]) {
                return {}; // negative cycle detected
            }
        }
        return dist;
    }
  4. Time: O(VE)
  5. Much slower than Dijkstra but works with negative weights
  6. Applications:
    • Currency arbitrage detection
    • Graph with negative weights
  7. Solve 8 Bellman-Ford problems

Day 4: Floyd-Warshall Algorithm

  1. Finds shortest path between all pairs of nodes
  2. Works with negative weights (but no negative cycles)
  3. Template:
    vector<vector<long long>> floydWarshall() {
        vector<vector<long long>> dist(n, vector<long long>(n, 1e18));
        for(int i = 0; i < n; i++) dist[i][i] = 0;
        for(auto [u, v, w] : edges) {
            dist[u][v] = min(dist[u][v], (long long)w);
        }
        for(int k = 0; k < n; k++) {
            for(int i = 0; i < n; i++) {
                for(int j = 0; j < n; j++) {
                    if(dist[i][k] < (long long)1e18 && dist[k][j] < (long long)1e18) // skip pairs routed through an unreached k
                        dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]);
                }
            }
        }
        return dist;
    }
  4. Time: O(n³)
  5. Only use when n is small (typically n ≤ 500)
  6. Applications:
    • All-pairs shortest path
    • Transitive closure
    • Finding graph diameter
  7. Solve 8 Floyd-Warshall problems

Day 5: Minimum Spanning Tree - Kruskal's Algorithm

  1. Find subset of edges that connects all vertices with minimum total weight (for undirected graphs)
  2. Uses Union-Find to detect cycles
  3. Template:
    struct Edge {
        int u, v, w;
        bool operator<(const Edge& other) const {
            return w < other.w;
        }
    };
    
    long long kruskal() {
        sort(edges.begin(), edges.end());
        long long mst = 0;
        int edgeCount = 0;
        for(auto e : edges) {
            if(find(e.u) != find(e.v)) { // different components
                unite(e.u, e.v);
                mst += e.w;
                edgeCount++;
                if(edgeCount == n - 1) break;
            }
        }
        return mst;
    }
  4. Time: O(E log E)
  5. Greedy: always pick smallest edge that doesn't create cycle
  6. Solve 10 Kruskal problems

Day 6: Minimum Spanning Tree - Prim's Algorithm & Union-Find Details

  1. Prim's Algorithm:
    • Start from any node
    • Greedily add smallest edge connecting new node
    • Similar to Dijkstra but for MST
    long long prim() {
        vector<bool> inMST(n, false);
        priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>> pq;
        pq.push({0, 0}); // {weight, node}
        long long mst = 0;
        while(!pq.empty()) {
            auto [w, u] = pq.top();
            pq.pop();
            if(inMST[u]) continue;
            inMST[u] = true;
            mst += w;
            for(auto [v, weight] : adj[u]) {
                if(!inMST[v]) {
                    pq.push({weight, v});
                }
            }
        }
        return mst;
    }
  2. Time: O((V + E) log V)
  3. Choose based on graph density: Kruskal for sparse, Prim for dense

Day 7: Union-Find (Disjoint Set Union) Deep Dive

  1. Data structure to manage and merge disjoint sets efficiently
  2. Operations: find(x) and union(x, y)
  3. With Path Compression and Union by Rank:
    struct DSU {
        vector<int> parent, rank;
        DSU(int n) : parent(n), rank(n, 0) {
            iota(parent.begin(), parent.end(), 0);
        }
        int find(int x) {
            if(parent[x] != x) parent[x] = find(parent[x]); // path compression
            return parent[x];
        }
        bool unite(int x, int y) {
            x = find(x);
            y = find(y);
            if(x == y) return false;
            if(rank[x] < rank[y]) swap(x, y);
            parent[y] = x;
            if(rank[x] == rank[y]) rank[x]++;
            return true;
        }
    };
  4. Time: nearly O(1) amortized per operation
  5. Applications:
    • Kruskal's algorithm
    • Finding connected components
    • Detecting cycles
    • Offline LCA (Tarjan's algorithm)
  6. Solve 15 Union-Find problems

WEEK 25-27: Advanced DP Topics

What You Need to Learn

  • DP with bitmasks
  • DP on graphs
  • DP on trees (rerooting)
  • Digit DP
  • Breaking complex problems into DP

Detailed To-Do List

Day 1-2: DP with Bitmasks

  1. Use bitmask to represent subset of elements
  2. Traveling Salesman Problem (TSP):
    • Visit all n cities exactly once with minimum cost
    • State: dp[mask][i] = minimum cost to visit cities in mask, ending at city i
    • Transition: for each unvisited city j, dp[mask | (1<<j)][j] = min(dp[mask | (1<<j)][j], dp[mask][i] + dist[i][j])
    vector<vector<long long>> dp(1<<n, vector<long long>(n, 1e18));
    dp[1][0] = 0; // start from city 0
    for(int mask = 1; mask < (1<<n); mask++) {
        for(int i = 0; i < n; i++) {
            if(!(mask & (1<<i))) continue; // i not in mask
            for(int j = 0; j < n; j++) {
                if(mask & (1<<j)) continue; // j already in mask
                dp[mask | (1<<j)][j] = min(dp[mask | (1<<j)][j], dp[mask][i] + dist[i][j]);
            }
        }
    }
  3. Time: O(2ⁿ * n²)
  4. Only works for small n (n ≤ 20)
  5. Other problems:
    • Subset sum with specific properties
    • Counting valid subsets
    • Assignment problems with constraints
  6. Solve 10 bitmask DP problems

Day 3: DP on Graphs

  1. Longest Path in DAG:
    • Topologically sort, then do DP
    • dp[i] = longest path ending at node i
    • Transition: dp[v] = max(dp[v], dp[u] + 1) for edge u→v
  2. Counting Paths with Constraints:
    • dp[i] = number of paths ending at i with certain property
  3. State Compression on Graphs:
    • Process nodes in order, DP with states
  4. Solve 10 DP on graphs problems

Day 4-5: DP on Trees (Rerooting & Subtree DP)

  1. Subtree DP (already covered in Phase 3):
    • dp[u] = property of subtree rooted at u
    • Most tree DP problems use this
  2. Rerooting DP (more advanced):
    • Compute answer for each node as root
    • Usually 2 DFS passes:
      • First pass: bottom-up, compute subtree values
      • Second pass: top-down, propagate from parent
  3. Example: Sum of distances from each node to all others
    // First DFS: compute subtree size and sum for each node
    void dfs1(int u, int p) {
        subtree_size[u] = 1;
        subtree_sum[u] = 0;
        for(int v : adj[u]) {
            if(v != p) {
                dfs1(v, u);
                subtree_size[u] += subtree_size[v];
                subtree_sum[u] += subtree_sum[v] + subtree_size[v];
            }
        }
    }
    // Second DFS: propagate answer (initialize ans[root] = subtree_sum[root] before calling dfs2 on the root)
    void dfs2(int u, int p) {
        for(int v : adj[u]) {
            if(v != p) {
                // When moving root from u to v:
                // - Nodes in v's subtree get 1 closer
                // - Nodes outside get 1 farther
                ans[v] = ans[u] - subtree_size[v] + (n - subtree_size[v]);
                dfs2(v, u);
            }
        }
    }
  4. Solve 10 rerooting DP problems

Day 6: Digit DP

  1. Problems where you iterate through digits
  2. Count numbers <= N with certain property:
    • State: dp[pos][tight][...] = count of numbers
    • pos: current digit position
    • tight: whether we're still bounded by N
  3. Example: Count numbers without digit 3
    string s;
    int dp[20][2]; // -1 = not computed, else = answer; fill with memset(dp, -1, sizeof dp) before calling solve(0, 1)
    
    int solve(int pos, int tight) {
        if(pos == s.length()) return 1;
        if(dp[pos][tight] != -1) return dp[pos][tight];
        
        int limit = tight ? (s[pos] - '0') : 9;
        int res = 0;
        for(int digit = 0; digit <= limit; digit++) {
            if(digit == 3) continue; // skip 3
            res += solve(pos + 1, tight && (digit == limit));
        }
        return dp[pos][tight] = res;
    }
  4. Solve 10 digit DP problems

Day 7: Assessment & Integration

  1. Solve 5 hard DP problems combining multiple DP techniques
  2. Include one bitmask, one graph DP, one tree DP, one digit DP, one mixed
  3. Target: Codeforces 1700-1850 rating

WEEK 28-30: String Algorithms

What You Need to Learn

  • String hashing
  • KMP (Knuth-Morris-Pratt)
  • Z-algorithm
  • Rabin-Karp
  • Trie data structure

Detailed To-Do List

Day 1-2: String Hashing

  1. Convert string to number for fast comparison
  2. Polynomial rolling hash:
    const long long MOD = 1e9 + 7;
    const long long BASE = 31;
    vector<long long> pow_base; // powers of BASE, filled by computeHash and reused by getHash
    
    vector<long long> computeHash(string s) {
        vector<long long> hash(s.length() + 1, 0);
        pow_base.assign(s.length() + 1, 1);
        for(int i = 0; i < s.length(); i++) {
            hash[i+1] = (hash[i] * BASE + (s[i] - 'a' + 1)) % MOD;
            pow_base[i+1] = (pow_base[i] * BASE) % MOD;
        }
        return hash;
    }
    
    long long getHash(int l, int r, vector<long long>& hash) { // hash of s[l..r], inclusive
        return (hash[r+1] - hash[l] * pow_base[r - l + 1] % MOD + MOD) % MOD;
    }
  3. Applications:
    • Check if two substrings are equal in O(1)
    • Find duplicate substrings
    • Palindrome checking
  4. Solve 8 string hashing problems

Day 3: KMP (Knuth-Morris-Pratt)

  1. Find all occurrences of pattern in text in O(n + m)
  2. Build failure function (LPS array):
    vector<int> buildLPS(string pattern) {
        int m = pattern.length();
        vector<int> lps(m, 0);
        int len = 0;
        for(int i = 1; i < m; i++) {
            while(len > 0 && pattern[i] != pattern[len]) {
                len = lps[len - 1];
            }
            if(pattern[i] == pattern[len]) len++;
            lps[i] = len;
        }
        return lps;
    }
  3. Use LPS to search:
    vector<int> findOccurrences(string text, string pattern) {
        vector<int> lps = buildLPS(pattern);
        vector<int> result;
        int j = 0;
        for(int i = 0; i < text.length(); i++) {
            while(j > 0 && text[i] != pattern[j]) {
                j = lps[j - 1];
            }
            if(text[i] == pattern[j]) j++;
            if(j == pattern.length()) {
                result.push_back(i - j + 1);
                j = lps[j - 1];
            }
        }
        return result;
    }
  4. Applications:
    • Pattern matching
    • Finding periods in strings
    • Lexicographically smallest rotation
  5. Solve 10 KMP problems

Day 4: Z-Algorithm

  1. Similar to KMP, finds all occurrences of pattern
  2. Build Z-array: z[i] = length of longest substring starting from i which matches prefix
  3. Template:
    vector<int> buildZArray(string s) {
        int n = s.length();
        vector<int> z(n);
        z[0] = n;
        int l = 0, r = 0;
        for(int i = 1; i < n; i++) {
            if(i > r) {
                l = r = i;
                while(r < n && s[r] == s[r - l]) r++;
                z[i] = r - l;
                r--;
            } else {
                int k = i - l;
                if(z[k] < r - i + 1) {
                    z[i] = z[k];
                } else {
                    l = i;
                    while(r < n && s[r] == s[r - l]) r++;
                    z[i] = r - l;
                    r--;
                }
            }
        }
        return z;
    }
  4. Use to find pattern:
    • Concatenate: pattern + "#" + text
    • Elements in Z-array equal to pattern.length() are matches
  5. Time: O(n)
  6. Solve 8 Z-algorithm problems

Day 5: Trie Data Structure

  1. Tree structure for storing strings, efficient prefix search
  2. Template:
    struct TrieNode {
        map<char, TrieNode*> children;
        bool isEnd = false;
    };
    
    class Trie {
        TrieNode* root = new TrieNode();
    public:
        void insert(string word) {
            TrieNode* node = root;
            for(char c : word) {
                if(!node->children[c]) node->children[c] = new TrieNode();
                node = node->children[c];
            }
            node->isEnd = true;
        }
        bool search(string word) {
            TrieNode* node = root;
            for(char c : word) {
                if(!node->children[c]) return false;
                node = node->children[c];
            }
            return node->isEnd;
        }
        bool startsWith(string prefix) {
            TrieNode* node = root;
            for(char c : prefix) {
                if(!node->children[c]) return false;
                node = node->children[c];
            }
            return true;
        }
    };
  3. Applications:
    • Dictionary with prefix search
    • Autocomplete
    • IP routing
    • XOR maximization (with binary trie)
  4. Solve 10 Trie problems

Day 6: Advanced String Problems

  1. Rabin-Karp rolling hash (alternative to KMP)
  2. Multiple pattern matching
  3. Palindrome problems with strings
  4. Solve 10 mixed string problems

Day 7: Assessment

  1. Solve 3 string problems in contest setting
  2. Include one KMP/Z-algorithm, one Trie, one hashing-based
  3. Target: Codeforces 1750-1900 rating

WEEK 31-32: Math & Number Theory

What You Need to Learn

  • Modular arithmetic
  • GCD, LCM, Extended Euclidean
  • Fermat's Little Theorem, Euler's Theorem
  • Prime factorization, Sieve
  • Basic combinatorics

Detailed To-Do List

Day 1: Modular Arithmetic

  1. Why modular arithmetic?
    • Prevent integer overflow
    • Results don't depend on large numbers
    • Most problems ask for the answer modulo 10⁹ + 7
  2. Basic operations:
    const long long MOD = 1e9 + 7;
    long long add(long long a, long long b) { return ((a % MOD) + (b % MOD)) % MOD; }
    long long sub(long long a, long long b) { return ((a % MOD) - (b % MOD) + MOD) % MOD; }
    long long mul(long long a, long long b) { return ((a % MOD) * (b % MOD)) % MOD; }
  3. Division (modular inverse):
    • (a / b) mod p = (a * b^(-1)) mod p
    • Use Fermat's Little Theorem for prime modulus
  4. Solve 8 modular arithmetic problems

Day 2: GCD, LCM, Extended Euclidean

  1. GCD (Greatest Common Divisor):
    long long gcd(long long a, long long b) {
        return b == 0 ? a : gcd(b, a % b);
    }
    • Time: O(log min(a,b))
  2. LCM (Least Common Multiple):
    long long lcm(long long a, long long b) {
        return a / gcd(a, b) * b; // divide first to avoid overflow
    }
  3. Extended Euclidean Algorithm:
    • Finds x, y such that ax + by = gcd(a, b)
    long long extgcd(long long a, long long b, long long& x, long long& y) {
        if(b == 0) {
            x = 1; y = 0;
            return a;
        }
        long long x1, y1;
        long long g = extgcd(b, a % b, x1, y1);
        x = y1;
        y = x1 - (a / b) * y1;
        return g;
    }
  4. Applications:
    • Solving linear Diophantine equations
    • Modular inverse: if ax ≡ 1 (mod m), then x = inverse
    • Chinese Remainder Theorem
  5. Solve 10 GCD/LCM/EGCD problems

Day 3: Fermat's Little Theorem & Modular Inverse

  1. Fermat's Little Theorem: if p is prime and gcd(a,p)=1, then a^(p-1) ≡ 1 (mod p)
  2. Modular Inverse using Fermat:
    long long power(long long a, long long b, long long mod) {
        long long res = 1;
        a %= mod;
        while(b > 0) {
            if(b & 1) res = (res * a) % mod;
            a = (a * a) % mod;
            b >>= 1;
        }
        return res;
    }
    long long modInverse(long long a, long long p) {
        return power(a, p - 2, p); // a^(p-2) mod p; valid when p is prime and p does not divide a
    }
  3. Euler's Theorem: a^φ(n) ≡ 1 (mod n) if gcd(a,n)=1
    • More general than Fermat's Little Theorem
  4. Applications:
    • Computing (a/b) mod p
    • Combinatorics with division
  5. Solve 8 Fermat/Euler problems

Day 4: Prime Factorization & Sieve of Eratosthenes

  1. Prime Factorization:
    vector<pair<long long, int>> primeFactorize(long long n) {
        vector<pair<long long, int>> factors;
        for(long long i = 2; i * i <= n; i++) {
            if(n % i == 0) {
                int cnt = 0;
                while(n % i == 0) {
                    cnt++;
                    n /= i;
                }
                factors.push_back({i, cnt});
            }
        }
        if(n > 1) factors.push_back({n, 1});
        return factors;
    }
    • Time: O(√n)
  2. Sieve of Eratosthenes (find all primes up to N):
    vector<bool> sieve(int n) {
        vector<bool> is_prime(n + 1, true);
        is_prime[0] = is_prime[1] = false;
        for(int i = 2; i * i <= n; i++) {
            if(is_prime[i]) {
                for(int j = i * i; j <= n; j += i) {
                    is_prime[j] = false;
                }
            }
        }
        return is_prime;
    }
    • Time: O(n log log n)
  3. Optimized Sieve with Factor Array:
    vector<int> spf(int n) { // smallest prime factor
        vector<int> spf(n + 1);
        iota(spf.begin(), spf.end(), 0);
        for(int i = 2; i * i <= n; i++) {
            if(spf[i] == i) { // i is prime
                for(int j = i * i; j <= n; j += i) {
                    if(spf[j] == j) spf[j] = i;
                }
            }
        }
        return spf;
    }
  4. Applications:
    • Factorize any number in O(log n) using SPF
    • Find number of divisors
    • Find sum of divisors
  5. Solve 10 prime/sieve problems

Day 5: Combinatorics (nCr, nPr)

  1. Computing nCr:
    • Small values: C(n,r) = n! / (r! * (n-r)!)
    • Precompute factorials and use modular inverse
    const int MAXN = 1e5 + 5;
    const long long MOD = 1e9 + 7;
    long long fact[MAXN], inv_fact[MAXN];
    
    long long power(long long a, long long b) {
        long long res = 1;
        while(b > 0) {
            if(b & 1) res = (res * a) % MOD;
            a = (a * a) % MOD;
            b >>= 1;
        }
        return res;
    }
    
    void precompute() {
        fact[0] = 1;
        for(int i = 1; i < MAXN; i++) fact[i] = (fact[i-1] * i) % MOD;
        inv_fact[MAXN-1] = power(fact[MAXN-1], MOD - 2);
        for(int i = MAXN - 2; i >= 0; i--) {
            inv_fact[i] = (inv_fact[i+1] * (i+1)) % MOD;
        }
    }
    
    long long nCr(int n, int r) {
        if(r > n || r < 0) return 0;
        return (fact[n] * inv_fact[r] % MOD) * inv_fact[n-r] % MOD;
    }
  2. Pascal's Triangle (for small n):
    • C(n,r) = C(n-1,r-1) + C(n-1,r)
    • DP approach
  3. Applications:
    • Counting paths, arrangements
    • Inclusion-exclusion principle
  4. Solve 10 combinatorics problems

Day 6-7: Math Integration & Assessment

  1. Mixed problems involving multiple math topics
  2. Solve 20 math problems total from Codeforces
  3. Include problems on:
    • Modular arithmetic with DP
    • Number theory with combinatorics
    • GCD/LCM with optimization
  4. Target: Codeforces 1800-1950 rating


PHASE 5: Specialized Topics (Weeks 33-40)

Choose ONE OR MORE based on interests:

OPTION A: Segment Trees & Range Queries (HIGHLY RECOMMENDED)

WEEK 33-35: Segment Trees

Day 1-2: Segment Tree Basics

  1. Binary tree structure for range queries
  2. Build, Query, Update operations
  3. Template:
    class SegmentTree {
        vector<long long> tree;
        int n;
        void build(vector<int>& arr, int node, int start, int end) {
            if(start == end) tree[node] = arr[start];
            else {
                int mid = (start + end) / 2;
                build(arr, 2*node, start, mid);
                build(arr, 2*node+1, mid+1, end);
                tree[node] = tree[2*node] + tree[2*node+1];
            }
        }
        void update(int node, int start, int end, int idx, int val) {
            if(start == end) tree[node] = val;
            else {
                int mid = (start + end) / 2;
                if(idx <= mid) update(2*node, start, mid, idx, val);
                else update(2*node+1, mid+1, end, idx, val);
                tree[node] = tree[2*node] + tree[2*node+1];
            }
        }
        long long query(int node, int start, int end, int l, int r) {
            if(r < start || end < l) return 0;
            if(l <= start && end <= r) return tree[node];
            int mid = (start + end) / 2;
            return query(2*node, start, mid, l, r) + query(2*node+1, mid+1, end, l, r);
        }
    public:
        SegmentTree(vector<int>& arr) {
            n = arr.size();
            tree.assign(4*n, 0);
            build(arr, 1, 0, n-1);
        }
        void update(int idx, int val) { update(1, 0, n-1, idx, val); }
        long long query(int l, int r) { return query(1, 0, n-1, l, r); }
    };
  4. Time: O(log n) per operation, O(n) build
  5. Solve 15 basic segment tree problems

Day 3-4: Lazy Propagation

  1. Optimize range updates
  2. Don't update all elements, mark for lazy update
  3. Template for range update + point query:
    class LazySegmentTree {
        vector<long long> tree, lazy;
        int n;
        void push(int node, int start, int end) {
            if(lazy[node] != 0) {
                tree[node] += (end - start + 1) * lazy[node];
                if(start != end) {
                    lazy[2*node] += lazy[node];
                    lazy[2*node+1] += lazy[node];
                }
                lazy[node] = 0;
            }
        }
        void updateRange(int node, int start, int end, int l, int r, int val) {
            push(node, start, end);
            if(start > r || end < l) return;
            if(l <= start && end <= r) {
                tree[node] += 1LL * (end - start + 1) * val;  // 1LL cast guards against int overflow on large ranges
                if(start != end) {
                    lazy[2*node] += val;
                    lazy[2*node+1] += val;
                }
                return;
            }
            int mid = (start + end) / 2;
            updateRange(2*node, start, mid, l, r, val);
            updateRange(2*node+1, mid+1, end, l, r, val);
            push(2*node, start, mid);
            push(2*node+1, mid+1, end);
            tree[node] = tree[2*node] + tree[2*node+1];
        }
        long long query(int node, int start, int end, int idx) {
            push(node, start, end);
            if(start == end) return tree[node];
            int mid = (start + end) / 2;
            if(idx <= mid) return query(2*node, start, mid, idx);
            else return query(2*node+1, mid+1, end, idx);
        }
    public:
        LazySegmentTree(int size) {
            n = size;
            tree.assign(4*n, 0);
            lazy.assign(4*n, 0);
        }
        void updateRange(int l, int r, int val) { updateRange(1, 0, n-1, l, r, val); }
        long long query(int idx) { return query(1, 0, n-1, idx); }
    };
  4. Solve 10 lazy propagation problems
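
A short sanity check for the lazy template above (same class and method names): two overlapping range additions followed by point queries.

    int main() {
        LazySegmentTree st(5);        // 5 elements, all initially 0
        st.updateRange(0, 2, 3);      // add 3 to indices 0..2
        st.updateRange(2, 4, 4);      // add 4 to indices 2..4
        cout << st.query(2) << "\n";  // index 2 received both updates: 3 + 4 = 7
        cout << st.query(4) << "\n";  // index 4 received only the second: 4
        return 0;
    }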

Day 5-6: Advanced Segment Tree

  1. 2D Segment trees for 2D range queries
  2. Segment tree with custom merge functions, e.g. max, min, or GCD instead of sum (see the sketch after this list)
  3. Persistent segment tree (advanced, optional)
  4. Solve 10 advanced segment tree problems
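
Switching the merge function only touches two things: how children are combined and the neutral value returned for a disjoint range. A minimal range-minimum sketch, mirroring the node layout of the Day 1-2 template (the class name here is just an illustrative choice):

    #include <bits/stdc++.h>
    using namespace std;

    struct MinSegmentTree {
        vector<long long> tree;
        int n;
        MinSegmentTree(vector<int>& arr) {
            n = arr.size();
            tree.assign(4 * n, LLONG_MAX);
            build(arr, 1, 0, n - 1);
        }
        void build(vector<int>& arr, int node, int start, int end) {
            if (start == end) { tree[node] = arr[start]; return; }
            int mid = (start + end) / 2;
            build(arr, 2 * node, start, mid);
            build(arr, 2 * node + 1, mid + 1, end);
            tree[node] = min(tree[2 * node], tree[2 * node + 1]);  // combine changed from + to min
        }
        long long query(int node, int start, int end, int l, int r) {
            if (r < start || end < l) return LLONG_MAX;  // neutral value for min instead of 0
            if (l <= start && end <= r) return tree[node];
            int mid = (start + end) / 2;
            return min(query(2 * node, start, mid, l, r),
                       query(2 * node + 1, mid + 1, end, l, r));
        }
        long long query(int l, int r) { return query(1, 0, n - 1, l, r); }
    };

    int main() {
        vector<int> arr = {5, 2, 8, 1, 9};
        MinSegmentTree st(arr);
        cout << st.query(1, 3) << "\n";  // min of {2, 8, 1} = 1
        return 0;
    }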

Day 7: Assessment

  1. Solve 3 segment tree problems in contest setting
  2. Include one basic, one with lazy propagation, one complex
  3. Target: Codeforces 1850-2000 rating

OPTION B: Game Theory (Alternative)

WEEK 33-35: Game Theory Fundamentals

Day 1-2: Nim & Grundy Numbers

  1. Nim game basics: several piles; each turn, remove any positive number of stones from one pile; whoever takes the last stone wins
  2. Optimal strategy: XOR of all pile sizes equals 0 → the position is losing for the player to move
  3. Grundy numbers (nimbers); a worked example follows this list:
    // Minimum excludant: smallest non-negative integer not in s
    int mex(const set<int>& s) {
        int m = 0;
        while(s.count(m)) m++;
        return m;
    }
    // n = largest game state of interest
    vector<int> grundy(n+1);
    for(int i = 0; i <= n; i++) {
        set<int> reachable;
        // for each move from position i, insert the grundy number of the resulting position
        grundy[i] = mex(reachable);
    }
  4. Applications: determine winning/losing positions
  5. Solve 10 game theory problems
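
A worked instance of the skeleton in item 3, using a simple subtraction game as an illustrative choice (one pile; a move removes 1, 2, or 3 stones; the player taking the last stone wins):

    #include <bits/stdc++.h>
    using namespace std;

    // Minimum excludant: smallest non-negative integer not in s
    int mex(const set<int>& s) {
        int m = 0;
        while (s.count(m)) m++;
        return m;
    }

    int main() {
        // grundy[i] == 0  <=>  a pile of i stones is losing for the player to move
        int n = 10;
        vector<int> grundy(n + 1, 0);
        for (int i = 1; i <= n; i++) {
            set<int> reachable;
            for (int take = 1; take <= 3 && take <= i; take++)
                reachable.insert(grundy[i - take]);
            grundy[i] = mex(reachable);
        }
        for (int i = 0; i <= n; i++)
            cout << i << ": " << grundy[i] << "\n";  // pattern 0, 1, 2, 3, 0, 1, 2, 3, ...
        return 0;
    }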

Day 3-7: Continue with the Sprague-Grundy theorem, composite games, and an assessment


OPTION C: Geometry (Alternative)

WEEK 33-35: Computational Geometry

Day 1-2: Basic Geometry

  1. Point, line, vector operations
  2. Distance, angle calculations
  3. Line intersection, segment intersection (see the sketch after this list)
  4. Solve 10 basic geometry problems
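
A starter sketch for items 1-3: cross products for orientation plus a standard segment-intersection test (the point type and function names are just illustrative choices):

    #include <bits/stdc++.h>
    using namespace std;

    struct Pt { long long x, y; };

    // Cross product of (b - a) and (c - a): > 0 left turn, < 0 right turn, 0 collinear
    long long cross(Pt a, Pt b, Pt c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    int sgn(long long v) { return (v > 0) - (v < 0); }

    // Is p inside the bounding box of segment [a, b]? (used only when the points are collinear)
    bool onSegment(Pt a, Pt b, Pt p) {
        return min(a.x, b.x) <= p.x && p.x <= max(a.x, b.x) &&
               min(a.y, b.y) <= p.y && p.y <= max(a.y, b.y);
    }

    // Do segments [a, b] and [c, d] intersect (touching endpoints counts)?
    bool segmentsIntersect(Pt a, Pt b, Pt c, Pt d) {
        long long d1 = cross(a, b, c), d2 = cross(a, b, d);
        long long d3 = cross(c, d, a), d4 = cross(c, d, b);
        if (sgn(d1) * sgn(d2) < 0 && sgn(d3) * sgn(d4) < 0) return true;  // proper crossing
        if (d1 == 0 && onSegment(a, b, c)) return true;  // collinear / touching cases
        if (d2 == 0 && onSegment(a, b, d)) return true;
        if (d3 == 0 && onSegment(c, d, a)) return true;
        if (d4 == 0 && onSegment(c, d, b)) return true;
        return false;
    }

    int main() {
        Pt a{0, 0}, b{4, 4}, c{0, 4}, d{4, 0};
        cout << (segmentsIntersect(a, b, c, d) ? "yes" : "no") << "\n";  // yes, they cross at (2, 2)

        double dist = hypot(double(b.x - a.x), double(b.y - a.y));       // Euclidean distance
        cout << dist << "\n";                                            // ~5.65685
        return 0;
    }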

Day 3-7: Convex hull, closest pair of points, area calculations, assessment


WEEK 36-40: Choose & Practice

After completing your chosen specialized topic:

Week 36-37: Deep Practice

  1. Solve 25-30 problems specifically on your chosen topic
  2. Mix easy, medium, and hard problems
  3. Understand patterns and approaches

Week 38-39: Integration

  1. Solve 30 mixed problems that combine:
    • Your specialized topic with earlier topics
    • Example: Segment tree + DP, Game theory + graphs
  2. Include Codeforces contests

Week 40: Assessment & Planning

  1. Solve 5 hard problems in contest setting
  2. Evaluate your current rating (target 1900-2100+)
  3. Decide on next specialization if interested
  4. Plan your post-roadmap training


PHASE 6: Competitive Excellence (Weeks 41+)

Beyond the Roadmap - Mastery & Specialization

Weekly Schedule (Highly Recommended)

Monday-Thursday: Practice & Learning

  • 1-2 Codeforces contests per week (participate live at the scheduled time)
  • Solve 3-4 new hard problems daily
  • Study solutions from strong competitors
  • Review weak areas

Friday-Sunday: Upsolving & Optimization

  • Upsolve: go back and solve the problems you couldn't finish during contests
  • Read editorials for problems you struggled with
  • Implement alternative solutions
  • Analyze your mistakes

Rating Progression Strategy

Rating 1600-1800:

  • Focus on pattern recognition in Div2 C-D problems
  • Practice speed (can you solve it under pressure?)
  • Study editorial problems in depth
  • Join virtual contests

Rating 1800-2000:

  • Participate in Div1 contests or higher Div2
  • Read multiple solutions per problem (different approaches)
  • Study older Codeforces problems by rating
  • Join online communities (Discord, forums)

Rating 2000+:

  • Participate in official contests
  • Practice AtCoder and ICPC problems
  • Deep study of advanced techniques
  • Consider mentorship

Error Tracking System

Create a spreadsheet to track:

  1. Problem name & link
  2. Your approach vs correct approach
  3. Why you failed (time limit, wrong logic, edge case, etc.)
  4. How to avoid this in future
  5. Similar problems you can solve now

Review this weekly!

Advanced Techniques Not Yet Covered (Optional)

  • Heavy-Light Decomposition: advanced tree problems
  • Square Root Decomposition: alternative to segment trees
  • Centroid Decomposition: advanced tree DP
  • Flow Algorithms: bipartite matching, max flow, min cost flow
  • Suffix Arrays & Automata: advanced string processing
  • Convex Hull Trick: optimization in DP
  • CHT + Segment Tree Merge: very advanced

Study these when you reach 2000+ rating.

Competitions to Participate In

  1. Codeforces: main platform, contests every week
  2. AtCoder: excellent problem quality, contests every weekend
  3. TopCoder: different round formats and problem style
  4. Google Code Jam: no longer held, but past rounds remain excellent practice
  5. ICPC: team-based, more strategic
  6. IOI Problems: educational, great for learning
  7. USACO: progression-based, ideal for learning

Community Involvement

  • Read Codeforces blogs and editorials
  • Comment on interesting solutions
  • Help others solve problems (teaching reinforces learning)
  • Join CP Discord servers
  • Find a study group

Mental Aspects of Competitive Programming

  1. Consistency beats intensity: 1 hour daily > 10 hours weekly
  2. Embrace plateaus: they happen before every level-up
  3. Celebrate small wins: solved a hard problem? That's progress!
  4. Don't compare: your journey is unique
  5. Take breaks: burnout is real, rest is productive
  6. Enjoy the process: if you're not having fun, ask why


Comprehensive Summary

Expected Timeline

  • End of Week 8 (Phase 1): Codeforces 1000-1200, comfortable with STLs
  • End of Week 20 (Phase 2): Codeforces 1200-1400, understand major algorithms
  • End of Week 32 (Phase 3-4): Codeforces 1500-1700, strong fundamentals
  • End of Week 40 (Phase 5): Codeforces 1800-2000+, specialization in one area
  • One year later: Codeforces 1900-2200+, consistent expert-level performance

Most Important Concepts to Master First

  1. STLs & Templates (Phase 1)
  2. Dynamic Programming (Phase 2) - can't skip this!
  3. Trees & Graphs (Phase 3)
  4. Binary Search (Phase 2)

These four form the foundation of the large majority of CP problems.

Critical Success Factors

  1. Solve problems, don't just watch tutorials
    • Aim for roughly 80% hands-on practice, 20% reading and watching
  2. Always understand your mistakes
    • Moving on without knowing why a solution failed reinforces bad habits
  3. Code regularly
    • Programming is a skill, needs consistent practice
  4. Read multiple solutions
    • Different approaches teach different insights
  5. Don't skip topics
    • Each topic builds on previous ones

Daily Checklist

  • Solved at least 1 problem
  • Understood all mistakes in problems
  • Read and understood at least one editorial
  • Reviewed error log for patterns
  • Coded solution from scratch (not copy-paste)
  • Tested with edge cases

Resources to Bookmark

  1. Learning: CP-Algorithms, GeeksforGeeks, YouTube (William Fiset, Errichto)
  2. Practice: Codeforces, AtCoder, LeetCode
  3. Reference: cppreference.com, OEIS (for integer sequences)
  4. Inspiration: Codeforces blogs, GitHub solutions


Your First Month Action Plan (Weeks 1-4)

This Month, You Will:

Week 1:

  • Set up competitive programming template
  • Learn STLs: vector, string, pair
  • Solve 10 Codeforces Div2 A problems (rating 800-900)
  • Build confidence with easy problems

Week 2:

  • Master set, map, unordered containers
  • Solve 10 problems using these containers
  • Learn I/O optimization
  • Solve 10 more Codeforces problems (rating 900-1000)

Week 3:

  • Learn priority_queue, deque, stack, queue
  • Solve 10 problems with these containers
  • Binary search introduction
  • Solve 5 binary search problems

Week 4:

  • Comprehensive STL practice
  • Attempt your first Codeforces contest (Div2)
  • Review mistakes and weak areas
  • Target: Codeforces rating 900-1100

By End of Month 1:

  • ✅ Comfortable with C++ STLs
  • ✅ Can write code quickly without syntax errors
  • ✅ Understand basic problem-solving
  • ✅ Codeforces rating 900-1100
  • ✅ Created personal template and cheat sheets

You've got a 10-month roadmap to reach expert level in competitive programming. Start Week 1 today, and in a year, you'll be solving problems many can't even understand. Good luck! 🚀
