ARRAY & HASHING: there's a trick to each question, maybe sorting, maybe using some hashmap/hashset trick with the array
0217. Contains Duplicate: Hashset to determine if a value appears more than once
0242. Valid Anagram: Hashmap to determine if str1 has the same count of each char as str2
- Two Sum: Hashmap to find index
- Group Anagrams: count = [0] * 26 to build an ID for each anagram (increment count[ord(c) - ord('a')]), then use tuple(count) as the key since a list cannot be used as a key in a hashmap
- Top K Frequent Elements: Bucket sort into an array (index is the number of times a value appears), using a hashmap of counts to fill the array. Build the buckets with [[] for i in range(len(nums) + 1)] so each bucket is its own list --> [[]] * len(nums) would repeat the same list reference, so every bucket would alias the same list.
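Rough sketch of the bucket-sort approach above (a minimal version; variable names are my own, not from any official solution):

    from collections import Counter

    def topKFrequent(nums, k):
        count = Counter(nums)                         # number -> frequency
        buckets = [[] for _ in range(len(nums) + 1)]  # index = frequency; list comp so each bucket is independent
        for n, freq in count.items():
            buckets[freq].append(n)
        res = []
        for freq in range(len(buckets) - 1, 0, -1):   # walk from the highest frequency down
            for n in buckets[freq]:
                res.append(n)
                if len(res) == k:
                    return res
        return res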
- Product of Array Except Self: Iterate through twice to avoid O(n^2) runtime. Use prefix and postfix products in order to avoid multiplying in the value at the index itself --> clever two-pass trick, the prefix multiplies everything before the index, the postfix multiplies everything after it.
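Quick sketch of the two-pass prefix/postfix idea (names are mine):

    def productExceptSelf(nums):
        res = [1] * len(nums)
        prefix = 1
        for i in range(len(nums)):              # res[i] = product of everything before i
            res[i] = prefix
            prefix *= nums[i]
        postfix = 1
        for i in range(len(nums) - 1, -1, -1):  # multiply in the product of everything after i
            res[i] *= postfix
            postfix *= nums[i]
        return res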
- Valid Sudoku: Check rows/cols/squares with defaultdict(set) in order to check each individual row/col/square --> the square key for each square should be (r // 3, c // 3) as // is integer division, returning 0 1 2 x 0 1 2 for each of the 9 squares, whereas / is floating point division.
- Encode and Decode Strings: Length of string + delimiter in order to make each encoding unambiguous --> the length is s[i:j] with j being the first # after the digits, then res.append(s[j+1 : j+1+length]) and i = j + 1 + length --> to jump to the next length field.
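Sketch of that length + delimiter scheme, assuming '#' as the delimiter and standalone encode/decode functions:

    def encode(strs):
        # prefix every string with its length and a '#' so decode can split unambiguously
        return "".join(f"{len(s)}#{s}" for s in strs)

    def decode(s):
        res, i = [], 0
        while i < len(s):
            j = i
            while s[j] != "#":              # j stops at the first '#' after the length digits
                j += 1
            length = int(s[i:j])
            res.append(s[j + 1 : j + 1 + length])
            i = j + 1 + length              # jump to the start of the next length field
        return res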
- Longest Consecutive Sequence: Check if number is the start of a sequence (if n-1 is not in set) --> while (n+length) is in set, length += 1
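Sketch of the sequence-start check with a set:

    def longestConsecutive(nums):
        numSet = set(nums)
        longest = 0
        for n in numSet:
            if n - 1 not in numSet:          # n is the start of a sequence
                length = 1
                while n + length in numSet:
                    length += 1
                longest = max(longest, length)
        return longest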
- Flatten Nested List Iterator: Use the given .getInteger() .isInteger() .getList() functions!! basically flatten recursively: def convert(thing): if thing.isInteger(): self.res.append(thing.getInteger()) -> else (it is a list): for item in thing.getList(): convert(item) --> then for item in nestedList: convert(item)
- Most Visited Sector in a Circular Track: Basically if the first sector is less than or equal to the last sector, then we did not cross the start, and we don't need to consider anything else because the sectors from first to last are visited the most, thus return [i for i in range(a, b + 1)]. Else (the last sector is smaller than the first sector), it means that we did cross the start, and the sectors 1..b plus a..n are the most visited, meaning we must return [i for i in range(1, b + 1)] + [i for i in range(a, n + 1)]
STACK: maybe use more than 1 stack? maybe use recursion
0020. Valid Parentheses: Hashmap that maps closing parenthesis to opening parenthesis --> iterate thru the string; if you stumble upon a closing parenthesis, then if the stack is not empty and stack[-1] (last item in stack) is the matching opening parenthesis, stack.pop(). Else return False as the closing and opening do not match. If you stumble upon an opening parenthesis, simply stack.append() it. Return True only if the stack is empty at the end, else return False.
0155. Min Stack: Implement a regular stack for the actual values, and a min stack for the min value at each index. Pop from both. On push, val = min(val, minStack[-1]) if minStack else val --> then minStack.append(val) --> basically compare against the minStack's most recent value if it is not empty and push the current minimum. Of course also push the actual given value onto the first stack. This is O(1) as it simply uses two stacks, no iteration.
0150. Evaluate Reverse Polish Notation: Have a stack, simply do the operation on the two most recent stack values. Division is not integer division, it is truncated toward zero (not always rounded down), so use int(a / b) rather than a // b.
0022. Generate Parentheses: Base case: when open == close == n, res.append("".join(stack)). Case 1: if open < n, stack.append("("), backtrack(open + 1, close), stack.pop(). Case 2: if close < open, stack.append(")"), backtrack(open, close + 1), stack.pop(). The stack.pop() backtracks after the recursion returns so the next decision starts from a clean state.
0739. Daily Temperatures: for i, t in enumerate(temperatures) gets the index and the value! Use a res list and a stack. Keep values in the stack until a bigger value comes along, then update each index/value pair that was skipped over (skipped over means not yet in the res list but added to the stack). Basically use a stack and a result list.
0853. Car Fleet: Can pair two lists together by doing pair = [[p, s] for p, s in zip(position, speed)]. WOW! Can also iterate through the sorted list in reverse order by doing for p, s in sorted(pair)[::-1] or sorted(pair, reverse=True) --> stack.append((target - p) / s) to append the time, then compare the times whenever there are 2 or more in the stack. Cars behind will never go ahead of the front car of the fleet, so pop the ones that are part of the same fleet but not at the front: if len(stack) >= 2 and stack[-1] <= stack[-2] then stack.pop() --> this is because stack[-1] is behind stack[-2] on the line, so we must check if stack[-1] (the most recent car we are checking) reaches the target at least as fast (it will catch up and be part of the same fleet as stack[-2], the most recent fleet). Return len(stack).
0084. Largest Rectangle in Histogram (sketch below): Keep track of a maxArea. Have the stack hold a pair (start index of rectangle, height of rectangle) for each bar. For each i, h in the histogram, start = i initializes the start to the index where the bar was discovered (unless changed later). While the stack is not empty and the height of the newly discovered bar is < stack[-1][1] (height of the most recent stack rectangle), index, height = stack.pop() --> pop from the stack and keep the values in order to update maxArea. Update maxArea = max(maxArea, height * (i - index)) --> i is the current bar we found (we stopped the rectangle's growth here) and index is where that rectangle started (initialized to its own start, then updated as necessary!). Then start = index --> update the start value to extend backwards, as the current bar can extend back to the bar that was popped from the stack (since the popped bar was taller). Keep repeating this while loop until the bar we discovered cannot be extended further back. Then simply stack.append((start, h)) to add it to the stack. For all the values that still remain in the stack after iterating through the histogram, check maxArea = max(maxArea, h * (len(heights) - i)) to compare their area against maxArea, as they extend to the end of the histogram since no later bar was shorter than them.
0844. Backspace String Compare: use two stacks and compare with pointers
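Sketch of the Largest Rectangle in Histogram stack loop described above (pairs of (start index, height); names are mine):

    def largestRectangleArea(heights):
        maxArea = 0
        stack = []                      # pairs: (start index, height)
        for i, h in enumerate(heights):
            start = i
            while stack and stack[-1][1] > h:
                index, height = stack.pop()
                maxArea = max(maxArea, height * (i - index))
                start = index           # current bar extends back to where the popped bar started
            stack.append((start, h))
        for i, h in stack:              # leftovers extend to the end of the histogram
            maxArea = max(maxArea, h * (len(heights) - i))
        return maxArea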
TWO-POINTERS: maybe sort and keep track of some min/max in order to calculate the value (FIND THE TRICK!)
0125. Valid Palindrome: Left and right pointers, move them until you reach an alphanumeric char --> check this by comparing ASCII values --> ord('A') <= ord(c) <= ord('Z') or ord('a') <= ord(c) <= ord('z') or ord('0') <= ord(c) <= ord('9') --> you can define the helper inside the function, or make it a separate method and call it with self.alphaNum(c) --> the self. prefix is required when calling other methods of the object in Python.
0167. Two Sum II: Basically use the fact that it is sorted. Pointers left and right, increment and decrement based on whether numbers[l] + numbers[r] is > or < the target (CAN do this because it is sorted). If it is > target, then r -= 1. If it is < target, then l += 1 (AS IT IS SORTED IN INCREASING ORDER) --> else return [l + 1, r + 1] --> as they want 1-indexing.
0015. 3Sum: Basically sort the array so we can do the same operation as Two Sum II with a left and right pointer for each fixed a = nums[i]. Make sure if i > 0 and a == nums[i - 1]: continue --> since we cannot use the same a twice with the same subset of numbers. When a triple is found, do res.append([a, nums[l], nums[r]]), then l += 1 and r -= 1, then a while loop: while nums[l] == nums[l - 1] and l < r: l += 1 (could equivalently skip duplicates on r instead) --> the reason for this is that we need to find the other valid answers for this a that are not duplicates.
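Sketch of that 3Sum flow with the duplicate skipping:

    def threeSum(nums):
        nums.sort()
        res = []
        for i, a in enumerate(nums):
            if i > 0 and a == nums[i - 1]:      # skip duplicate 'a' values
                continue
            l, r = i + 1, len(nums) - 1
            while l < r:
                total = a + nums[l] + nums[r]
                if total > 0:
                    r -= 1
                elif total < 0:
                    l += 1
                else:
                    res.append([a, nums[l], nums[r]])
                    l += 1
                    r -= 1
                    while l < r and nums[l] == nums[l - 1]:   # skip duplicate left values
                        l += 1
        return res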
0011. Container With Most Water: Basically have a left and right pointer, increment/decrement according to whichever is the shorter bar, maxArea = max(maxArea, min(height[l], height[r]) * (r - l)) -> because the min height bar determines the max area.
0042. Trapping Rain Water: Basically we want to keep track of a maxLeft and a maxRight. The reason for that is because we know that the smaller one will be the bottleneck. The furthest to the left and furthest to the right bars will hold min(furthestleft, furthestright) * whatever space there is available of water. Thus we know that we can increment the pointers accordingly. if maxLeft < maxRight: then l += 1 (increment the left pointer) maxLeft = max(maxLeft, height[l]) --> this will make sure we update the next left bottleneck. Then we can simply do result += (maxLeft - height[l]) -> this will make sure that we hold water if our current bar is lower than the bottleneck. Else if it is the same as the bottleneck, then we will simply not hold any water at all (res += 0)
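Sketch of the two-pointer Trapping Rain Water idea with maxLeft/maxRight as the bottlenecks:

    def trap(height):
        if not height:
            return 0
        l, r = 0, len(height) - 1
        maxLeft, maxRight = height[l], height[r]
        res = 0
        while l < r:
            if maxLeft < maxRight:              # left side is the bottleneck
                l += 1
                maxLeft = max(maxLeft, height[l])
                res += maxLeft - height[l]      # 0 if the current bar is the new max
            else:
                r -= 1
                maxRight = max(maxRight, height[r])
                res += maxRight - height[r]
        return res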
0557. Reverse Words in a String III: PYTHON Trick --> ' '.join(word[::-1] for word in s.split())
BINARY SEARCH: Maybe perform binary search more than once
0704. Binary Search: Always use self.function() when calling a method inside of a class --> also everything in the binary search loop should be guarded by right >= left so that it cannot go out of bounds.
0074. Search a 2D Matrix: Find the row that the target could be contained in first. Then binary search for the number within that row. Return False if no valid row is found at all.
0875. Koko Eating Bananas (sketch below): Basic idea is that any k value greater than max(piles) should be ignored as it does not improve the time. Do binary search for a value from l = 1 to r = max(piles), as h could be very large, allowing even k = 1 to work. Simple binary search while l <= r. Need a for loop for p in piles -> to see if k actually works, i.e. whether the total hours is smaller than or equal to h. Do this with hours += math.ceil(p / k) --> math.ceil rounds up. Then check if hours <= h: res = min(res, k), r = k - 1, else: l = k + 1.
0153. Find Minimum in Rotated Sorted Array: O(log n) time complexity -> use binary search: the trick is nums[mid] > nums[r] --> the start of the sequence is in the right subarray as it was rotated past mid; nums[mid] < nums[r] --> the start of the sequence is in the left subarray (including mid).
0033. Search in Rotated Sorted Array: Determine which side is sorted -> if nums[mid] >= nums[l] -> the left side is sorted, else -> the right side is sorted. Within each case, check whether the target falls outside the sorted side (go to the other side) or inside it. Left sorted -> if target > nums[mid] or target < nums[l]: l = mid + 1, else r = mid - 1. Right sorted -> if target < nums[mid] or target > nums[r]: r = mid - 1, else l = mid + 1.
0981. Time Based Key-Value Store: Each key must hold [value, timestamp] pairs in a defaultdict(list) -> so that non-initialized keys have default empty lists. Set can just simply append by doing self.storage[key].append([value, timestamp]). Get basically initializes res = "" in order to simply return the empty string if nothing is found. values = self.storage.get(key, []) -> this gets the list of [value, timestamp] pairs for that specific key (of course, res would not be updated if the key did not exist, which is fine). Do binary search over the timestamps of the list, and keep updating res whenever a valid timestamp comes up (less than or equal to, but as close as possible to, the given input timestamp): if values[mid][1] <= timestamp: res = values[mid][0], l = mid + 1 -> to check the right side because everything up to mid is still valid. Else: r = mid - 1 to search the left side, since values[mid]'s timestamp is larger. Simply return res. This will be empty if nothing was found, and otherwise it is the value at the closest timestamp <= the given timestamp (possibly exactly at it).
0004. Median of Two Sorted Arrays: Given two arrays, we need to find a partition of the combined array at which we can read off the median. We can do this by binary searching on the smaller array A for the size of its contribution to the left half (because A's left partition + B's left partition should be half of the combined array). If Amid > Bmid+1 (the last element of A's left partition is bigger than the first element of B's right partition) -> A contributes too many values -> binary search with r = mid - 1. If Bmid > Amid+1 -> A contributes too few values -> binary search with l = mid + 1 --> we do this because for the median to work, the whole left half must be <= the whole right half, as the combined array must be in non-decreasing order. If Amid <= Bmid+1 and Bmid <= Amid+1, then we have found the correct partition, since the max values of the left half are <= the min values of the right half -> then check whether the length of the combined array is odd or even --> if odd, the median is the middle value min(Amid+1, Bmid+1); if even, it is the two middle values divided by 2 (float division as we want a decimal) -> (max(Amid, Bmid) + min(Amid+1, Bmid+1)) / 2, and we return one of those two depending on the parity of the length of the combined array!
0034. Find First and Last Position of Element in Sorted Array: Basically run binary search for the left bound and binary search for the right bound in separate inner def functions. def findLeft: if nums[mid] < target ... elif nums[mid] >= target ...; def findRight: if nums[mid] <= target ... elif nums[mid] > target ... --> these differing if statements find the leftmost target index and the rightmost target index.
1095. Find in Mountain Array: Basically find the peak of the mountain by doing binary search, then do binary search on the ascending part first and return if the target is found (we want the smallest index, so we check the ascending part of the mountain first). Then do binary search on the descending part, and at the end simply return -1 since it was never caught by any return statement in the binary searches.
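Sketch of the Koko Eating Bananas binary search on k, assuming the l = 1, r = max(piles) bounds from above:

    import math

    def minEatingSpeed(piles, h):
        l, r = 1, max(piles)
        res = max(piles)                         # eating at max(piles) always works
        while l <= r:
            k = (l + r) // 2
            hours = sum(math.ceil(p / k) for p in piles)
            if hours <= h:                       # k is fast enough, try something slower
                res = min(res, k)
                r = k - 1
            else:                                # too slow, need a bigger k
                l = k + 1
        return res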
SLIDING WINDOW: WHILE loop to check something with l and r pointers!
0121. Best Time to Buy and Sell Stock: Basic idea is to have an l and r pointer; if prices[l] < prices[r] then there is a profit, so update maxProfit accordingly, but else: update l = r as r is the smallest value found so far. Then in both cases update r += 1.
0003. Longest Substring Without Repeating Characters: Use a two pointer window and a set; shrink from the left until the duplicate value is gone. for r in range(len(s)): while s[r] in charSet: charSet.remove(s[l]), l += 1 -> this starts a new window from the next value but keeps the characters that are not duplicates. Then charSet.add(s[r]), res = max(res, r - l + 1).
0424. Longest Repeating Character Replacement: Use a sliding window, set l = 0 and for r in range(len(s)) -> update the hashmap that keeps track of character counts: count[s[r]] += 1 -> also update the max frequency: maxfreq = max(maxfreq, count[s[r]]), since the only count that increased is the one for that key. while (r - l + 1) - maxfreq > k: do count[s[l]] -= 1, l += 1 --> basically sliding the window to be smaller as that substring does not work. r - l + 1 is the length of the substring, and that minus maxfreq is the number of characters that need to be flipped to make it uniform, because it would not be optimal to flip the most commonly occurring character. You do not need to decrement or recompute maxfreq at the shrinking step (l += 1) because maxfreq increasing is the only situation in which the resulting max length substring can increase: length of substring - maxfreq needs to be <= k, so maxfreq needs to grow as close to the window length as possible, and a stale, too-large maxfreq can never produce a longer answer than one already recorded. -> res = max(res, r - l + 1), and this works because the while loop makes sure that only valid windows reach that line for each r pointer. maxfreq only matters when the window actually grows, which is when there is a new r; when the window shrinks, it only makes our potential res smaller.
0567. Permutation in String: Keep a count of how many of each character appears in s1 and in the current window of s2 by doing s1Count, s2Count = [0] * 26, [0] * 26 -> then check how many matching character slots there are (should reach 26, since characters that don't appear in either will both be 0, and ones that do appear must have equal counts). Use a sliding window the size of s1 (the smaller string) and iterate through s2 with r += 1 and l += 1. Make sure to update the matchCount when the window moves; there is no need to re-iterate through all 26 values, just check whether the two updated slots changed from matching to not matching (or vice versa) and adjust matchCount accordingly.
0076. Minimum Window Substring: Keep track of a need and a have integer in order to check if need == have. First add all characters of t into a hashmap countT, then need = len(countT) as we need the unique characters of t (duplicates only count once as a requirement key, though the counts themselves must be met). Keep resLen = float("infinity") and res = [-1, -1] so that the first valid window always wins and so l and r get recorded: whenever r - l + 1 < resLen, res = [l, r] and resLen = r - l + 1. have += 1 only when countS[c] == countT[c] after adding a character; when shrinking, only do have -= 1 when countS[s[l]] drops below countT[s[l]], because we do not want to lose a requirement while we still have at least enough of that character in the current window. While need == have, keep shifting the window with l += 1 because we want the minimum window that is still valid.
0239. Sliding Window Maximum (sketch below): Here we can use a queue (q = collections.deque()) to keep the line-up of candidate maximum values. l = r = 0, while r < len(nums): step 1: update the queue for the new right value: while q and nums[q[-1]] < nums[r]: q.pop(), and after that while has finished, q.append(r) -> we append the index, not the value, because we can access the value anytime via nums[r], but we need the index to compare against the left pointer (this makes it very convenient). Step 2: update the queue for the left pointer: if l > q[0]: q.popleft() (since q holds indexes, l > q[0] means the old front value has fallen out of the window now that it has shifted right, so we no longer consider that old maximum). Step 3: update the output: if (r + 1) >= k: output.append(nums[q[0]]) and l += 1 --> because l and r both start at 0, the window needs to reach size k before recording anything, so it only goes through steps 1 and 2 until the window is the correct size, then step 3 runs for every window.
1052. Grumpy Bookstore Owner: Simply make a list of each minute's unhappy customers if no power was used: unhappy_list = list(map(mul, customers, grumpy)) --> mul (from operator) multiplies each index of customers and grumpy together. With that, calculate how many happy customers there are if we use the power at minute 0: ans = score = sum(customers) - sum(unhappy_list[minutes:]) (total customers minus the unhappy people after the powered window, since we used the power at the start). Then simply iterate through and see what happens if we use the power at other minutes: for i in range(len(customers) - minutes): (start at 0, end at len(customers) - minutes so we don't go out of bounds) score -= unhappy_list[i], score += unhappy_list[i + minutes] (this is literally just shifting the sliding window --> we could also do this with a while loop and l, r pointers, but we are simply dropping the left minute and adding the next right minute). Then check the number of happy customers if the power is used at this shifted window: ans = max(ans, score) -> simply return ans.
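Sketch of the Sliding Window Maximum deque steps described above (indices in the deque, values decreasing from front to back):

    from collections import deque

    def maxSlidingWindow(nums, k):
        q = deque()          # indices; nums values at those indices are decreasing
        res = []
        l = 0
        for r in range(len(nums)):
            while q and nums[q[-1]] < nums[r]:   # step 1: pop anything smaller than the new value
                q.pop()
            q.append(r)
            if l > q[0]:                         # step 2: front index fell out of the window
                q.popleft()
            if r + 1 >= k:                       # step 3: window has reached size k
                res.append(nums[q[0]])
                l += 1
        return res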
LINKED LIST: Iterative, can reverse list, use pointers as iterators, but also make a dummy node and return dummy.next || OR find a cycle.
0206. Reverse Linked List: Can solve iteratively and recursively. Iterative is very easy, just have prev and curr pointers plus a temp for curr.next and use basic logic to bump them around while curr:. Recursive: keep track of the last value in the linked list as newHead = self.reverseList(head.next) and then flip the pointer by doing head.next.next = head, all under if head.next:. After that if, make sure head.next points to nothing in order to remove the old pointer arrow, then return newHead (for 1,2,3,4,5 the newHead stays at 5 the whole way back up!).
0021. Merge Two Sorted Lists: Basically have a dummy = ListNode() because we can just return dummy.next -> it is either going to be the start of the new linked list or it will be empty because both l1 and l2 are empty. Also initialize tail = dummy as a pointer so we can move the pointer as we go instead of touching dummy itself. While l1 and l2: if l1.val <= l2.val: tail.next = l1, l1 = l1.next, else: tail.next = l2, l2 = l2.next, and make sure to advance tail = tail.next inside the while loop. At the end, if l1: tail.next = l1, elif l2: tail.next = l2 --> one of the lists may not be empty after the while loop, so we append the remaining part of that list to the end. Then simply return dummy.next.
0143. Reorder List: Find the middle of the list by doing slow = head, fast = head.next and then while fast and fast.next: fast = fast.next.next, slow = slow.next, in order to split the list into two halves. We do this because we want to reverse the second half so we can then iterate it backwards. To reverse the second half: second = slow.next, slow.next = None to separate the two lists, then while second: tmp = second.next, second.next = prev, prev = second, second = tmp. To merge the two lists, set second = prev so it points to the start of the reversed half, then while first and second: tmp1, tmp2 = first.next, second.next (to keep the next pointers so we can advance in both lists), first.next = second, second.next = tmp1, then advance first = tmp1, second = tmp2. Nothing is returned because the list is modified in place.
0019. Remove Nth Node From End of List: Create a dummy node at the start of the list (return dummy.next) -> set l at dummy and r at head, advance r until there are n nodes between l and r -> while n > 0 and r: r = r.next, n -= 1. Then advance both until r reaches null -> while r: l = l.next, r = r.next. Then remove l.next by doing l.next = l.next.next -> because l started at the dummy node, the removed node is l.next! Return dummy.next.
0138. Copy List with Random Pointer: Basically two passes. First pass, create the deep copies: curr = head, while curr: copy = Node(curr.val), oldToNew[curr] = copy, curr = curr.next (use a hashmap to map old nodes to their new deep copies; initialize oldToNew = { None : None } so that None maps to None and no copy is attempted for it). Second pass, set the pointers: curr = head, while curr: copy = oldToNew[curr], copy.next = oldToNew[curr.next], copy.random = oldToNew[curr.random], curr = curr.next. We need two passes because all the copy nodes must exist before we can wire them to each other. At the end, return the deep copy head: return oldToNew[head].
0002. Add Two Numbers: dummy = ListNode() so we can return dummy.next, and curr = dummy as the pointer for the iteration. while l1 or l2 or carry: we make v1 or v2 zero depending on whether l1 or l2 is null, but we also continue for carry because e.g. 8 + 7 leaves a carry that still needs its own node. So v1 = l1.val if l1 else 0, v2 = l2.val if l2 else 0, val = v1 + v2 + carry, then check if val goes over 10 (is there a carry?) -> carry = val // 10 (// is integer division, / is float division), then val %= 10 (the remainder) --> compute the carry before taking the remainder, or do both at once with divmod. Then curr.next = ListNode(val) and update the pointers for the next iteration: curr = curr.next, l1 = l1.next if l1 else None, l2 = l2.next if l2 else None -> so a finished list simply contributes 0 on the next iteration, while a leftover carry keeps the loop going. At the end, return dummy.next to return the start of the summed linked list.
0141. Linked List Cycle: Floyd's hare and tortoise algorithm -> have a slow and fast pointer -> slow = slow.next, fast = fast.next.next, done while fast and fast.next: -> because if fast or fast.next is null, an "end" has been found, whereas if a cycle exists, fast never reaches null and eventually slow == fast.
0287. Find the Duplicate Number: First find the cycle. The values in the array are all from 1 to n while there are n + 1 slots, so treating each value as the next index to jump to always produces a cycle. The second step is to find the duplicate number: have two slow pointers (one starting at index 0, the other starting where slow and fast met) -> this works because the distance from index 0 to the start of the cycle and the distance from the meeting point to the start of the cycle are the same -> this comes from the mathematical argument around 2*slow = fast. Since both must reach the start of the cycle, they meet exactly there; return slow (or slow2, both point at the same value, the duplicate).
0146. LRU Cache: Use a hashmap for O(1) access, and to track the "least recently used" order, use a doubly linked list with a dummy node on each end (to bypass the annoying edge case of an empty list) -> left is the LRU side and right is the MRU side (least recently used, most recently used). Create a list node class with def __init__(self, key, value): self.key, self.val = key, value -> we need both a key and a val so that when we evict the LRU node from the linked list we can also delete its key from the hashmap. Make def insert(self, node) and def remove(self, node) helpers inside the LRU class. Insert adds a node right before the right dummy: prev, nxt = self.right.prev, self.right, prev.next = node, nxt.prev = node, node.next = nxt, node.prev = prev. Remove unlinks any node: prev, nxt = node.prev, node.next, prev.next = nxt, nxt.prev = prev; when over capacity, evict the LRU, which is self.left.next (right after the left dummy), and delete its key from the hashmap too. For each put and get, update the order because that key was just accessed: just remove the node and re-insert it with the helper functions instead of trying to shift it around in place --> shifting it inside the linked list is a pain, removing and re-adding is much cleaner.
0023. Merge k Sorted Lists: Have a helper function that merges two linked lists. In the main function, if not lists or len(lists) == 0: return None for that edge case. While len(lists) > 1: mergedLists = [] (a new list so we can rebuild lists each round), for i in range(0, len(lists), 2): (two at a time because of the pairwise merging idea) -> l1 = lists[i], l2 = lists[i + 1] if (i + 1) < len(lists) else None, then mergedLists.append(self.mergeList(l1, l2)), and at the end of the round set lists = mergedLists. Return lists[0].
0025. Reverse Nodes in k-Group: Have a helper function to find the kth node from the current node -> while curr and k > 0: curr = curr.next, k -= 1, then return curr. In the main function, keep track of groupPrev and groupNext = kth.next because we must fix the surrounding pointers after reversing each group (or the linked list will no longer be connected). To reverse, do prev, curr = kth.next, groupPrev.next and then while curr != groupNext: (not while curr, because we are reversing a group, not the entire linked list) temp = curr.next, curr.next = prev, prev = curr, curr = temp (basic reversal). After the reversal, to fix the pointers around the group: temp = groupPrev.next (save the old group head before it gets overwritten, since it is now the group's tail), groupPrev.next = kth (so the node before the group points at the new head of the group), groupPrev = temp (temp still points to the old head of the group, which is now its last node, so it becomes groupPrev for the next kth-group iteration).
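Sketch of Reverse Nodes in k-Group with the groupPrev/kth/groupNext pointer juggling (ListNode defined here just so the snippet stands alone):

    class ListNode:
        def __init__(self, val=0, next=None):
            self.val = val
            self.next = next

    def reverseKGroup(head, k):
        dummy = ListNode(0, head)
        groupPrev = dummy

        def getKth(curr, k):
            # walk k nodes forward; returns None if the group is incomplete
            while curr and k > 0:
                curr = curr.next
                k -= 1
            return curr

        while True:
            kth = getKth(groupPrev, k)
            if not kth:
                break
            groupNext = kth.next
            prev, curr = kth.next, groupPrev.next
            while curr != groupNext:             # reverse just this group
                tmp = curr.next
                curr.next = prev
                prev = curr
                curr = tmp
            tmp = groupPrev.next                 # old head of the group, now its tail
            groupPrev.next = kth                 # node before the group points at the new head
            groupPrev = tmp
        return dummy.next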
TREES: Sometimes must also keep track of more than 1 value in the recursion
0226. Invert Binary Tree: Can use pre-order or post-order for this, but I like to use post-order as it makes more sense to me. -> basically check the case if root is None because if it is, we can simply return None as the binary tree is empty. self.invertTree(root.left), self.invertTree(root.right), and then simply do root.left, root.right = root.right, root.left and return root -> this does the work on the way back up the recursion and it is very intuitive (post-order traversal)
0104. Maximum Depth of Binary Tree: Just simply have a base case for when root is nullptr: return 0, but in general return 1 + max(self.maxDepth(root.left), self.maxDepth(root.right))
0543. Diameter of Binary Tree: The diameter does not count the current node as +1; it is just the sum of the left height and right height. Thus we keep track of a diameter outside the dfs and have the dfs return the height. Since in Python we cannot directly pass a variable to another function by reference, define the dfs function inside the main function; declare diameter = 0 there, and inside the dfs do nonlocal diameter -> which tells the dfs that diameter refers to the variable in the enclosing function's scope. Then we can do left = dfs(root.left), right = dfs(root.right), diameter = max(diameter, left + right), return 1 + max(left, right) -> returning the height while also updating the diameter. Make sure the dfs function is defined before it is called inside the main solution function, then simply call dfs(root) and return diameter.
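Sketch of that diameter dfs with nonlocal (assuming the usual TreeNode with .left/.right):

    def diameterOfBinaryTree(root):
        diameter = 0

        def dfs(node):
            nonlocal diameter
            if not node:
                return 0
            left = dfs(node.left)
            right = dfs(node.right)
            diameter = max(diameter, left + right)   # path through this node
            return 1 + max(left, right)              # height of this subtree

        dfs(root)
        return diameter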
0110. Balanced Binary Tree: Basically just use a dfs inside function and keep track of a bool to check balance and the height. -> this can be done by making the return val a list of [height, balanced] -> make sure to check left[1] and right[1] and abs(left[0] - right[0]) <= 1 for balanced. return [1 + max(left[0], right[0]), balanced] in order to return the height and the balanced bool
0100. Same Tree: Basically just use pre-order, i.e. compare the current nodes first and then recurse down both trees together. First check if they are empty -> if not p and not q: return True, if p and q and p.val == q.val: return self.isSameTree(p.left, q.left) and self.isSameTree(p.right, q.right), else: return False.
0572. Subtree of Another Tree: Basically have a helperfunction that checks if the current passed in root is the start of the same subtree. In the main function, basically check if that is true by passing in root and subroot. else if it doesn't return true, we can check root.left and subroot and also root.right and subroot (we don't change subroot because we are only checking to see if the same subtree is within the main tree). Before that, we need some edge cases in which we can check if not subroot: always return True because regardless of whether root is null or not, if subroot is empty, it will always be valid. also check if not root: return False. This is because if the first statement didn't catch (meaning subroot is not empty), that means root is empty while subroot is not empty! This is not valid for obvious reasons.
0235. Lowest Common Ancestor of a Binary Search Tree: Basically we can check curr = root and whenever the split happens (where p goes to right and q goes to left subtree or vice versa, is where the common ancestor is! for obvious reasons. there can't be lower common ancestor because they went down different subtrees.) so basically check if curr.val > p.val and curr.val > q.val: make curr = curr.left, elif curr.val < p.val and curr.val < q.val: make curr = curr.right, else return curr -> basically check when the split occurs!
0102. Binary Tree Level Order Traversal: Basically do iterative BFS -> have a res, use a q = collections.deque() and q.append(root), then while q: level = [], for i in range(len(q)): node = q.popleft() and if node: level.append(node.val), q.append(node.left), q.append(node.right). After the for loop (still inside the while loop) do if level: res.append(level) -> the if level check is just an edge case guard so empty levels (all Nones) are not appended. This is super easy, just simple BFS.
0199. Binary Tree Right Side View: BFS level-based iteration! Basically popleft() once per node at each level -> this is because we need the rightmost node at each level! -> while q: rightSide = None, for i in range(len(q)): node = q.popleft(), if node: rightSide = node, q.append(node.left), q.append(node.right). Outside of the for loop -> if rightSide: res.append(rightSide.val). Then outside of the while, simply return res. -> this is level-based BFS making sure that we record the rightmost non-null node at each level.
1448. Count Good Nodes in Binary Tree: We can do DFS! -> DFS goes down one path at a time, which lets us carry the maximum value seen along the path so far. Have a nonlocal res variable, and have dfs(node, currMax) do: if node.val >= currMax: res += 1, then dfs(node.left, max(currMax, node.val)) and dfs(node.right, max(currMax, node.val)). Kick it off with dfs(root, root.val) in the main function and return res.
0098. Validate Binary Search Tree: Basically I need to keep track of an upper and lower bound for each value -> run dfs: if not node: return True, if not (lowerBound < node.val < upperBound): return False, return dfs(node.left, lowerBound, node.val) and dfs(node.right, node.val, upperBound) -> outside of the dfs function, in the main function: return dfs(root, float("-inf"), float("inf")) -> this is because ALL values in the left subtree must be smaller than the node and ALL values in the right subtree must be larger than the node.
0230. Kth Smallest Element in a BST: Solution 1. Recursively do an in-order traversal to add all values into an array (already sorted because it is in-order) and return array[k-1] (since k is 1-indexed). Solution 2. Iteratively with a stack (sketch below): while curr or stack: while curr: stack.append(curr), curr = curr.left -> outside of that inner while: curr = stack.pop(), k -= 1, if k == 0: return curr.val -> curr = curr.right -> so basically go left until it is not possible, pop from the stack each time we have to back up, and go right when going left is not possible; each pop is the next node of the iterative in-order traversal.
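Sketch of Solution 2, the iterative in-order stack version (assuming the usual TreeNode with .left/.right/.val):

    def kthSmallest(root, k):
        stack = []
        curr = root
        while curr or stack:
            while curr:                  # go as far left as possible
                stack.append(curr)
                curr = curr.left
            curr = stack.pop()           # each pop is the next value in sorted order
            k -= 1
            if k == 0:
                return curr.val
            curr = curr.right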
0105. Construct Binary Tree from Preorder and Inorder Traversal: Recursively create the tree with a base case of if not preorder or not inorder: return None -> then create the "root" for each recursive instance, root = TreeNode(preorder[0]), as the first value in preorder is always the root of the current subtree (or of the whole tree in the first call). Find the position of that root value in the in-order array, because everything to its left in in-order belongs to the left subtree and everything to its right belongs to the right subtree: mid = inorder.index(preorder[0]). Both arrays contain the same set of nodes and the left subtree occupies exactly mid elements in each, so we can recurse with root.left = self.buildTree(preorder[1:mid + 1], inorder[:mid]) and root.right = self.buildTree(preorder[mid + 1:], inorder[mid + 1:]) -> preorder[1:mid + 1] goes to root.left because the left subtree's mid nodes come right after the root in preorder.
0124. Binary Tree Maximum Path Sum: Basically a path from POINT A TO POINT B, NOT JUST SIMPLY STARTING AT THE MAIN ROOT OF THE TREE! -> we need to basically calculate in the dfs(root): leftMax = max(dfs(root.left), 0), rightMax = max(dfs(root.right), 0) (basically because it may be negative -> then simply do 0 instead), then basically update res = max(res, root.val + leftMax + rightMax) -> as the max path will always go from left to right since point a -> point b does not need to start from the root of the tree itself! then basically return root.val + max(leftMax, rightMax) -> because we can only take one path.
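Sketch of that max path sum dfs (res seeded with root.val; assumes the usual TreeNode):

    def maxPathSum(root):
        res = root.val

        def dfs(node):
            nonlocal res
            if not node:
                return 0
            leftMax = max(dfs(node.left), 0)    # drop negative contributions
            rightMax = max(dfs(node.right), 0)
            res = max(res, node.val + leftMax + rightMax)  # path may bend through this node
            return node.val + max(leftMax, rightMax)       # only one side can extend upward

        dfs(root)
        return res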
0297. Serialize and Deserialize Binary Tree: Serialize: have a res list and append the node values with a recursive dfs -> def dfs(root): if not root: res.append("N"), return; res.append(str(root.val)), dfs(root.left), dfs(root.right) -> simply call dfs(root) and return ",".join(res) in order to join everything into one string with , in between each value. In deserialize: split the string back into a list by doing vals = data.split(","). Have a member variable self.i = 0 -> to keep track of which "node" we are looking at within our list; it must live outside the recursion or else it would reset back to 0. def dfs(): if vals[self.i] == "N": self.i += 1, return None. node = TreeNode(int(vals[self.i])), self.i += 1 (to move to the next value), node.left = dfs(), node.right = dfs(), then return node. -> outside of dfs, call root = dfs(), return root. Basic summary -> turn the tree into a list of values, ",".join(res) to make it a string, split it back into a list with vals = data.split(",") and recursively rebuild the tree with the shared index self.i!
0515. Find Largest Value in Each Tree Row: use BFS --> queue = deque([(root, 0)]), while queue: node, level = queue.popleft(), if level == len(res): (new level) res.append(node.val) -> else: res[level] = max(res[level], node.val) -> if node.left: queue.append((node.left, level + 1)) -> if node.right: queue.append((node.right, level + 1)) -> -> return res
0449. Serialize and Deserialize BST: Simply add the values to a list (adding "null" markers into the list as well), then return ",".join(res). For deserialize: values = data.split(",") to get the list back, then root = TreeNode(int(values[0])) and q = deque([root]), while q and i < len(values): 1. attach the left child, then i += 1, 2. check if still in bounds (else break), 3. attach the right child, i += 1 -> -> return root.
BACKTRACKING -> ALWAYS CLEAN UP! -> RECURSIVE LEAP OF FAITH (WHAT DO I NEED NOW FOR THE CURRENT RECURSIVE ITERATION)
0078. Subsets: We can use recursive backtracking for this. Have a res = [] and a subset = [] as variables in the main function itself (shared by the nested dfs). Have a def dfs(i) function where i indicates the index we are currently at (nums[i]): if i >= len(nums): res.append(subset.copy()), return -> because we have made a decision for every index and now we are done (append a copy, not the list itself, since subset keeps being mutated)! Decision to add nums[i]: subset.append(nums[i]), dfs(i + 1) -> decision to NOT add nums[i]: subset.pop(), dfs(i + 1) (subset.pop() because we appended; just THINK about OUR CURRENT RECURSIVE STEP, NOT OTHER STEPS -> we must remove the one that we just added). Then outside the function simply call dfs(0) and return res.
0039. Combination Sum: Have a def dfs(i, currSubset, currSum). Base cases: if currSum == target: res.append(currSubset.copy()), return -> if it is an invalid subtree: if i >= len(candidates) or currSum > target: return. Our two recursive steps go down separate recursive trees: 1. add the ith value: currSubset.append(candidates[i]), dfs(i, currSubset, currSum + candidates[i]) -> 2. do NOT add the ith value: currSubset.pop() (to clean up the previous recursive branch at the point where it split), dfs(i + 1, currSubset, currSum). Outside the dfs function, call dfs(0, [], 0) (our i starts at index 0, currSubset is the empty list [], and currSum is 0 because nothing is in consideration yet). Then simply return res (a list defined before the def dfs function: res = []).
0046. Permutations: Have a res = [] list and a def backtrack(i): if i >= len(nums): res.append(nums.copy()) --> when i goes out of bounds, the current ordering of nums is one permutation, so append a copy. for j in range(i, len(nums)): nums[i], nums[j] = nums[j], nums[i] (swap them), backtrack(i + 1), nums[i], nums[j] = nums[j], nums[i] (swap back) -> simply call backtrack(0), return res. The swap puts each remaining value into position i in turn (when j == i the swap does nothing!), and the second swap just undoes what we did so the next j starts from the original arrangement.
0090. Subsets II: Same as Subsets I, but since we are looking for the power set without duplicate subsets ([1,2] and [2,1] are duplicates!), we must not go down the same recursive tree twice. To prevent this, do nums.sort() at the start of the main function so that we can skip duplicate numbers (going down duplicate numbers would repeat the same recursive tree!). Within def backtrack(i, subset): if i == len(nums): res.append(subset.copy()), return -> 1. decision to add nums[i]: subset.append(nums[i]), backtrack(i + 1, subset), subset.pop() (to clean up that recursive branch) 2. decision to NOT add nums[i]: while i + 1 < len(nums) and nums[i] == nums[i + 1]: i += 1 -> backtrack(i + 1, subset). Outside of the function, in the main subsets function: backtrack(0, []), return res -> since the array is sorted, the duplicate numbers are next to each other -> skip until a new number is found and go down that recursive tree!
0040. Combination Sum II: Same as Combination Sum, but since we cannot have duplicate combinations, sort and skip the duplicates just as in Subsets II! Either include it: subset.append(candidates[i]), backtrack(i + 1, subset, total), subset.pop(), then go down the branch in which we do not add it: while i + 1 < len(candidates) and candidates[i] == candidates[i + 1]: i += 1 -> then do backtrack(i + 1, subset, total) -> backtrack(0, [], 0), return res -> all in order to skip duplicate numbers (we must do candidates.sort() in the main function at the start!).
0079. Word Search: Have a hashset path to check if the current position on the board has already been visited. First base case: if i == len(word): return True (the word is finished!). Second base case: if r < 0 or c < 0 or r >= rows or c >= cols or word[i] != board[r][c] or (r, c) in path: return False (check the bounds before indexing the board, and word[i] must be the character we are looking for). Then do path.add((r, c)), res = backtrack(i + 1, r + 1, c) or backtrack(i + 1, r - 1, c) or backtrack(i + 1, r, c + 1) or backtrack(i + 1, r, c - 1), and clean up with path.remove((r, c)) ((r, c) must be wrapped in another () to make it a tuple for the set). Outside of the backtrack function: for r in range(rows): for c in range(cols): if backtrack(0, r, c): return True -> -> return False. This problem is basic recursion logic, nothing too tricky about it inherently.
0131. Palindrome Partitioning: Have a res = [] and a currPartitions = [] and a def dfs(i): if i >= len(s): res.append(currPartitions.copy()) (copy so it is not mutated later), return -> for j in range(i, len(s)): if self.isPalindrome(s, i, j): currPartitions.append(s[i:j+1]) (we must do j+1 because slicing is exclusive and we want j to be part of the partition), dfs(j + 1) (recursively partition the rest of the string after j), currPartitions.pop() (to clean up that recursive branch!) -> dfs(0), return res. def isPalindrome(self, s, l, r): while l < r: if s[l] != s[r]: return False -> l += 1, r -= 1 -> return True.
0017. Letter Combinations of a Phone Number: Map the phone keypad into a hashmap manually -> digitToChar = {"2" : "abc"} etc. -> then def backtrack(i, currStr): if len(currStr) == len(digits): res.append(currStr), return -> for c in digitToChar[digits[i]]: backtrack(i + 1, currStr + c) -> -> if digits: backtrack(0, "") -> return res. The main idea: for every character a digit maps to, recursively build the rest for each one.
0051. N-Queens (sketch below): Recursively iterate through the rows from 0 to n, while keeping cols = set(), posDiag = set(), negDiag = set() to check whether a spot is an invalid queen location. board = [["."] * n for i in range(n)] -> this creates an n x n board. def backtrack(r): if r == n: copy = ["".join(row) for row in board] (join each row into a single string), res.append(copy), return. for c in range(n): if c in cols or (r + c) in posDiag or (r - c) in negDiag: continue (because we want to skip these invalid slots) -> cols.add(c), posDiag.add(r + c), negDiag.add(r - c), board[r][c] = "Q", backtrack(r + 1), then clean up for the next recursive branch (what if we placed this row's queen in a different column?): cols.remove(c), posDiag.remove(r + c), negDiag.remove(r - c), board[r][c] = "." -> -> backtrack(0), return res
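Sketch of the N-Queens backtracking with the three sets:

    def solveNQueens(n):
        cols = set()
        posDiag = set()    # r + c is constant along a positive diagonal
        negDiag = set()    # r - c is constant along a negative diagonal
        res = []
        board = [["."] * n for _ in range(n)]

        def backtrack(r):
            if r == n:
                res.append(["".join(row) for row in board])
                return
            for c in range(n):
                if c in cols or (r + c) in posDiag or (r - c) in negDiag:
                    continue
                cols.add(c); posDiag.add(r + c); negDiag.add(r - c)
                board[r][c] = "Q"
                backtrack(r + 1)
                cols.remove(c); posDiag.remove(r + c); negDiag.remove(r - c)   # clean up
                board[r][c] = "."

        backtrack(0)
        return res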
1D DYNAMIC PROGRAMMING: Go bottom-up most of the time, have a base case -> it's basically the same thing as recursion --> the index of dp can be an index or the actual number from nums as well (kind of like a bucket sort kind of thing)
0070. Climbing Stairs: dp[n-1] can take 1 step to get to n so = 1, dp[n-2] can take 1 step to n-1 or 2 steps to n so = 2 (2 possible ways to get to n) -> for i in range(n - 3, -1, -1): dp[i] = dp[i + 1] + dp[i + 2] -> return dp[0] <---> or keep two rolling variables one, two = 1, 1 and repeat temp = one, one = one + two, two = temp for n - 1 iterations, returning one at the end.
0746. Min Cost Climbing Stairs: We need bottom-up DP because we need to know the end of the route to know which path is cheaper in the first place. Leave the last two steps as-is and go from for i in range(len(cost) - 3, -1, -1): cost[i] += min(cost[i + 1], cost[i + 2]) -> return min(cost[0], cost[1]).
0198. House Robber: Have two variables rob1, rob2 = 0, 0 (start outside the range of the list), and iterate for n in nums: temp = max(rob1 + n, rob2), rob1 = rob2, rob2 = temp -> return rob2.
0213. House Robber II: Run House Robber twice, on nums[1:] and nums[:-1] (excluding the first value, and excluding the last value), because we can include either the first house or the last house but never both, since the houses are in a circle. So basically: return max(nums[0], self.helper(nums[1:]), self.helper(nums[:-1])) -> def helper(self, nums): rob1, rob2 = 0, 0; for n in nums: temp = max(rob1 + n, rob2), rob1 = rob2, rob2 = temp -> return rob2.
0005. Longest Palindromic Substring: Start at each position in the string as the "middle pivot" and expand outwards to check for a palindrome. for i in range(len(s)): l, r = i, i, while l >= 0 and r < len(s) and s[l] == s[r]: if (r - l + 1) > resLen: resLen = r - l + 1, res = s[l:r + 1], then l -= 1, r += 1 -> (and again for even-length palindromes:) l, r = i, i + 1, while l >= 0 and r < len(s) and s[l] == s[r]: if (r - l + 1) > resLen: resLen = r - l + 1, res = s[l:r + 1], then l -= 1, r += 1 -> -> return res.
0647. Palindromic Substrings: Run through all possible start points in the string s: res += self.countPalindromes(s, i, i) (for odd cases where an exact middle exists, l = r = middle), res += self.countPalindromes(s, i, i + 1) (for even cases where an exact middle does not exist -> l = i, r = i + 1), return res. In the helper: numCount = 0, while l >= 0 and r < len(s) and s[l] == s[r]: numCount += 1, l -= 1, r += 1 -> return numCount (the expand-from-the-middle-outwards idea). The main function does the two res += calls because it needs to account for even and odd cases; no extra checks are needed because the helper only counts while the expansion actually is a palindrome.
0091. Decode Ways: Can do this with dp = { len(s) : 1 } -> setting the out-of-bounds position to 1. for i in range(len(s) - 1, -1, -1): if s[i] == "0": dp[i] = 0, else: dp[i] = dp[i + 1] (decoding a single digit does not create a new branching way on its own, it just carries forward the number of ways of the rest of the string). Then if (i + 1 < len(s)) and (s[i] == "1" or (s[i] == "2" and s[i + 1] in "0123456")): dp[i] += dp[i + 2] (dp[i] already holds dp[i + 1] from the single-digit case; taking two digits adds dp[i + 2] more ways, memoized so the old paths are not recalculated) -> -> return dp[0].
0322. Coin Change (sketch below): Have dp = [amount + 1] * (amount + 1) because we want each value in the dp array to start as an impossible maximum, and each index of the dp array represents an amount we want to reach from 0 to amount, hence (amount + 1) entries. dp[0] = 0 because to reach 0 we don't need any coins from the coin array. for a in range(1, amount + 1): for c in coins: if a - c >= 0: dp[a] = min(dp[a], 1 + dp[a - c]) -> -> -> return dp[amount] if dp[amount] != amount + 1 else -1. We build the solution up from 0 to each a, and the + 1 on dp[a - c] is the one extra coin being added to that smaller minimum solution.
0152. Maximum Product Subarray: res = max(nums) so it starts as the largest single element (it can't be hard-coded to 0 or 1 because the answer could be negative). Keep track of currMax, currMin = 1, 1 (we keep a currMin because consecutive negative numbers can also become a max: -5 * -10 = 50!). Then for n in nums: if n == 0: currMin, currMax = 1, 1, continue (a 0 does nothing for our res, it only hard-sets it to 0; also, since we need a contiguous SUBARRAY, hitting a 0 means starting a new subarray) -> temp = currMax, currMax = max(n * currMax, n * currMin, n), currMin = min(n * temp, n * currMin, n) (stash currMax in a temp variable because it gets updated first!), res = max(res, currMax) -> return res.
0139. Word Break: Have a dp cache = [False] * (len(s) + 1) -> the extra slot is the out-of-bounds base case so dp[i + len(w)] can be checked later. Set dp[len(s)] = True. for i in range(len(s) - 1, -1, -1): for w in wordDict: if (i + len(w)) <= len(s) and s[i : i + len(w)] == w: (check that the substring under consideration is within bounds and actually equals w) dp[i] = dp[i + len(w)] (the base case at len(s) is True, so True propagates backwards along chains of words, while positions no word chain reaches stay False), and if dp[i]: break -> -> return dp[0].
0300. Longest Increasing Subsequence: Use a bottom-up approach to compute the best subsequence starting at each index value. LIS = [1] * len(nums), for i in range(len(nums) - 1, -1, -1): (to check each subsequence we must do another for loop) for j in range(i + 1, len(nums)): if nums[i] < nums[j]: LIS[i] = max(LIS[i], 1 + LIS[j]) -> -> -> return max(LIS) -> the longest increasing subsequence in the entire array.
0416. Partition Equal Subset Sum: Use a dp set! if sum(nums) % 2: (the sum is odd) return False, because an odd sum can't be divided equally into two subsets. dp = set(), dp.add(0) (this covers the case in which we have added no values so far in the current iteration; it makes more sense after looking at the loops!). target = sum(nums) // 2 (for two subsets to have the same sum, each must sum to sum(nums) // 2). for i in range(len(nums) - 1, -1, -1): newDP = dp.copy(), for t in dp: newDP.add(t + nums[i]) -> dp = newDP (we need a newDP because we can't edit dp while we iterate through it, simple computer science principles). Since 0 was added to dp at the start, t + nums[i] with t = 0 means nums[i] by itself is also added to dp (without replacing 0), so a subset of just nums[i] is possible if it equals the target. Return True if target in dp else False.
0119. Pascal's Triangle II: if rowIndex == 0: return [1] -> prev = [1], then for i in range(1, rowIndex + 1) (because we want to go until rowIndex): curr = [1], for j in range(1, len(prev)): curr.append(prev[j - 1] + prev[j]) -> curr.append(1), prev = curr -> return prev. Each row starts and ends with 1, and the middle values are filled from j = 1 to len(prev) - 1 with prev[j - 1] + prev[j].
0823. Binary Trees With Factors: Use a set for constant-time membership checks and sort arr first. Use dp = { x : 1 for x in arr }, and simply have for i in arr: for j in arr: if j > i ** 0.5: break (j can't be larger than sqrt(i) if it is the smaller factor of i) -> if i % j == 0 and i // j in my_set: (j is a factor of i and the matching factor is also in the set, so j and i // j can be children of i) if i // j == j: dp[i] += dp[j] * dp[j] (multiply the ways to build each child together), else: dp[i] += dp[j] * dp[i // j] * 2 (times two since the children can be ordered as j, i//j or i//j, j) -> -> dp[i] %= MOD -> return sum(dp.values()) % MOD -> we mod at each iteration and at the end to avoid overflow --> the answer is the total number of ways to make a tree rooted at any value.
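Sketch of the Coin Change bottom-up table described above:

    def coinChange(coins, amount):
        dp = [amount + 1] * (amount + 1)   # amount + 1 acts as "impossible / infinity"
        dp[0] = 0                          # zero coins needed to make 0
        for a in range(1, amount + 1):
            for c in coins:
                if a - c >= 0:
                    dp[a] = min(dp[a], 1 + dp[a - c])
        return dp[amount] if dp[amount] != amount + 1 else -1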
2D DYNAMIC PROGRAMMING: 1269. Number of Ways to Stay in the Same Place After Some Steps: build a table dp[step][index] with base case dp[0][0] = 1 --> for step in range(1, steps + 1): for index in range(maxLen): (maxLen = min(arrLen, steps // 2 + 1)) (because we can never get back to index 0 if we wander more than steps // 2 away) --> dp[step][index] = dp[step - 1][index] (stayed), if index > 0: dp[step][index] += dp[step - 1][index - 1] (came from the left) -> if index < maxLen - 1: dp[step][index] += dp[step - 1][index + 1] (came from the right) -> modulo at each step to make sure there is no overflow: dp[step][index] %= mod -> -> return dp[steps][0]
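A minimal sketch of that dp[step][index] table, written as a plain function with my own names (the LeetCode signature is numWays(steps, arrLen)):

```python
def num_ways(steps: int, arr_len: int) -> int:
    MOD = 10**9 + 7
    # indices further than steps // 2 away can never make it back to 0
    max_len = min(arr_len, steps // 2 + 1)
    # dp[step][index] = number of ways to stand at `index` after `step` moves
    dp = [[0] * max_len for _ in range(steps + 1)]
    dp[0][0] = 1
    for step in range(1, steps + 1):
        for index in range(max_len):
            ways = dp[step - 1][index]               # stayed in place
            if index > 0:
                ways += dp[step - 1][index - 1]      # came from the left
            if index < max_len - 1:
                ways += dp[step - 1][index + 1]      # came from the right
            dp[step][index] = ways % MOD
    return dp[steps][0]
```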
GRAPHS: BFS, DFS! -> recursive leap of faith 0200. Number of Islands: Basically have a visited = set() and iteratively run through each [row][col] by doing for r in range(rows): for c in range(cols): if (r, c) not in visited and grid[r][c] == "1": bfs(r, c), numIslands += 1 -> -> return numIslands (basically run BFS on any 1 that has not been explored. This merges a whole group of connected 1s together so they don't each trigger numIslands += 1; the counter only runs the first time a new group of 1s is discovered.) (a cleaned-up sketch of this appears right after this paragraph) Quick rundown of bfs(r, c): queue = collections.deque(), visited.add((r, c)), queue.append((r, c)), while queue: row, col = queue.popleft(), neighbours = [[1, 0], [-1, 0], [0, 1], [0, -1]], for dr, dc in neighbours: if (row + dr) in range(rows) and (col + dc) in range(cols) and (row + dr, col + dc) not in visited and grid[row + dr][col + dc] == "1": visited.add((row + dr, col + dc)), queue.append((row + dr, col + dc)) 0133. Clone Graph: Basically have a hashmap cache each old node to a deep-copied new node -> oldToNew = {}, def dfs(node): if not node: return -> if node in oldToNew: return oldToNew[node] (basically if oldToNew already has a deep-copied node, we must reuse that one to connect it to the neighbours, rather than creating a whole new node, which would really mess up the graph). -> copy = Node(node.val), oldToNew[node] = copy, for neigh in node.neighbors: copy.neighbors.append(dfs(neigh)) (this is the recursive leap of faith! -> we must call dfs on each neigh to create its copy, which is why we append(dfs(neigh)) and not simply append(neigh)). -> return copy -> newNode = dfs(node), return newNode 0695. Max Area of Island: rows, cols = len(grid), len(grid[0]), discovered = set(), maxArea, currArea = 0, 0 (keep currArea as a shared (nonlocal) counter rather than passing it as an argument: if an island branches both up and down, a passed-in copy would stay at its old value and the branches would not accumulate into the same total; alternatively have dfs return the area it counted.) def dfs(r, c): if r not in range(rows) or c not in range(cols) or (r, c) in discovered or grid[r][c] == 0: return -> discovered.add((r, c)), currArea += 1, maxArea = max(currArea, maxArea), directions = [[1, 0], [-1, 0], [0, 1], [0, -1]] for dr, dc in directions: dfs(r + dr, c + dc) -> -> for r in range(rows): for c in range(cols): if grid[r][c] == 1 and (r, c) not in discovered: currArea = 0, dfs(r, c) -> -> return maxArea 0417. Pacific Atlantic Water Flow: Basically the idea of this problem is to start at the borders of the pacific and atlantic oceans and recursively add (r, c) to the pac set if it can reach the pacific ocean, and to the atl set if it can reach the atlantic ocean. pac, atl = set(), set(), def dfs(r, c, visitSet, prevHeight): if (r, c) in visitSet or r < 0 or c < 0 or r >= rows or c >= cols or heights[r][c] < prevHeight: return -> visitSet.add((r, c)), dfs(r +/- 1, c +/- 1, visitSet, heights[r][c]) x 4 times, once for each neighbour: (r + 1, c) (r - 1, c) (r, c + 1) (r, c - 1). Then basically in the main function, start the loop from the four borders! for c in range(cols): dfs(0, c, pac, heights[0][c]), dfs(rows - 1, c, atl, heights[rows - 1][c]) -> for r in range(rows): dfs(r, 0, pac, heights[r][0]), dfs(r, cols - 1, atl, heights[r][cols - 1]). Last loop, check if each (r, c) is in both sets for it to be added to the res: for r in range(rows): for c in range(cols): if (r, c) in pac and (r, c) in atl: res.append([r, c]) -> return res
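Cleaned-up sketch of the 0200 Number of Islands BFS above (assuming the usual grid of "1"/"0" strings; helper names are mine):

```python
from collections import deque
from typing import List

def num_islands(grid: List[List[str]]) -> int:
    rows, cols = len(grid), len(grid[0])
    visited = set()
    islands = 0

    def bfs(r: int, c: int) -> None:
        # flood-fill one island so its cells don't get counted again
        q = deque([(r, c)])
        visited.add((r, c))
        while q:
            row, col = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = row + dr, col + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in visited and grid[nr][nc] == "1"):
                    visited.add((nr, nc))
                    q.append((nr, nc))

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "1" and (r, c) not in visited:
                bfs(r, c)          # merges the whole group of connected 1s
                islands += 1       # counted once per newly discovered group
    return islands
```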
0130. Surrounded Regions: Same concept as 0417 Pacific Atlantic Water Flow: perform BFS from the border positions on the board and mark every O connected to a border O as a "T" (a temp marker, so we can freely flip the remaining Os later) via bfs(r + 1, c), bfs(r - 1, c), bfs(r, c + 1), bfs(r, c - 1) -> then do a brute-force pass over the entire board: if board[r][c] == "O": board[r][c] = "X" (it was surrounded) -> elif board[r][c] == "T" (it was marked, meaning it is one of the non-capturable tiles): board[r][c] = "O" 0994. Rotting Oranges: Basically use a queue and process it level by level (multi-source BFS), because each batch of items popped from the q is one time unit. First count the number of fresh oranges in the grid, since if some fresh orange is not connected to the rotten oranges at all we will have to return -1, and add every rotten orange to the queue: q.append([r, c]). while q and fresh: for i in range(len(q)): r, c = q.popleft(), for dr, dc in directions: if r + dr in range(rows) and c + dc in range(cols) and grid[r + dr][c + dc] == 1 (fresh): q.append([r + dr, c + dc]), fresh -= 1, grid[r + dr][c + dc] = 2 (rotten) -> -> time += 1 (outside the inner loop over the current queue level) -> return time if fresh == 0 else -1 (if there are still fresh oranges unreachable by the rotten ones, we must return -1 as it is not possible) 0663. Walls and Gates: Run BFS from each gate, update with min(curr, newDistance) for each valid plot in the graph 0207. Course Schedule: Basically have a preMap = { i:[] for i in range(numCourses) } (for each course, map it to its list of prereqs). for crs, pre in prerequisites: preMap[crs].append(pre). -> visitSet = set() (this set detects cycles: a cycle means a prerequisite paradox, that group of courses can never be taken because their prereqs loop back on themselves.) def dfs(crs): if crs in visitSet: return False -> if preMap[crs] == []: return True -> visitSet.add(crs), for pre in preMap[crs]: if not dfs(pre): return False ->-> (this is because we run dfs on each prereq of the current course, testing to see if there is a loop.) visitSet.remove(crs) (remove it from the visitSet since we didn't hit False in that loop, meaning the current course is actually takable at some point, no paradox. We still have to run dfs for the other courses, and visitSet is only meant to catch cycles along the current path, which is why we remove it.) preMap[crs] = [], return True (we set preMap[crs] = [] so it resolves to True instantly the next time dfs reaches it as a prereq.) -> for crs in range(numCourses): if not dfs(crs): return False -> return True. (this is because we want to run dfs on each crs, and the dfs function runs dfs on each crs's prereqs.) Basic gist is, we have a graph: can we do dfs from every single node without ever running into a cycle? 0210. Course Schedule II: Same as Course Schedule, but instead of returning true or false in the main function (you can still return true or false in the dfs), maintain an output = [], visited = set(), and a cycle = set() -> visited.add(crs) only once all of its prereqs have been fully visited recursively, cycle.add(crs) as soon as it is entered on the current dfs path.
then simply run dfs on the entirety of the prereqs of that crs: for pre in prereq[crs]: if not dfs(pre): return False ->-> cycle.remove(crs), visited.add(crs), output.append(crs), return True -> for c in range(numCourses): if not dfs(c): return [] -> -> return output 0684. Redundant Connection: Union-Find algorithm -> connect nodes to be each other's parents based on the number of nodes they carry (rank). def find(n): p = parent[n], while p != parent[p]: p = parent[p] -> return p. def union(n1, n2): p1, p2 = find(n1), find(n2), if p1 == p2: return False (found the redundant edge) -> if rank[p1] > rank[p2]: parent[p2] = p1, rank[p1] += rank[p2] -> else: parent[p1] = p2, rank[p2] += rank[p1] -> return True -> for n1, n2 in edges: if not union(n1, n2): return [n1, n2] (a runnable sketch of this appears right after this paragraph) 0323. Number of Connected Components In An Undirected Graph: Essentially run union find as before, but this time, if p1 == p2: return 0, and after all the rank cases, return 1. In the main function: res = n, for n1, n2 in edges: res -= union(n1, n2) -> return res. The main idea is that we initialize res = n to treat every node as its own independent connected component, then decrement each time we find a new connection (two components merge into one). 0178. Graph Valid Tree: Basically use an adjacencyList = { i:[] for i in range(n) } --> because we need to iterate through nodes 0 -> n-1 and also check their connected nodes. Have a visited = set() in order to check for cycles (as a tree cannot have cycles). Fill the adj. list by doing for n1, n2 in edges: adjList[n1].append(n2), adjList[n2].append(n1) (this is to make sure that each node has every connected node in its adj. list). def dfs(i, prev): if i in visited: return False -> visited.add(i), for j in adjList[i]: if j == prev: continue (BECAUSE WE CHECKED IT ALREADY, IT IS NOT A CYCLE IF IT IS JUST PREV) -> if not dfs(j, i): return False -> return True -> return dfs(0, -1) and len(visited) == n (the number of visited nodes must be n as we must visit ALL nodes, and dfs(0, -1) ensures that we have no cycles. In other words: NO CYCLE, BUT VISIT EVERY NODE) 0127. Word Ladder: Basically create a neighbor dictionary --> for each "pattern": hot -> *ot, h*t, ho* --> words that are 1 letter apart share a pattern key. Then perform BFS from beginWord (checking the possible patterns it could go down!) --> if endWord not in wordList: return 0 -> nei = defaultdict(list), wordList.append(beginWord), for word in wordList: for j in range(len(word)): pattern = word[:j] + "*" + word[j + 1:], nei[pattern].append(word) -> -> q = collections.deque([beginWord]), visit = set([beginWord]), res = 1 (beginWord counts as the first word in the sequence), while q: for i in range(len(q)): (for each word in the current level) word = q.popleft(), if word == endWord: return res -> for j in range(len(word)): pattern = word[:j] + "*" + word[j + 1:], for neiWord in nei[pattern]: if neiWord not in visit: visit.add(neiWord), q.append(neiWord) -> -> -> res += 1 -> return 0 1361. Validate Binary Tree Nodes: 1. Create adj_list 2. Check all nodes have exactly 1 parent except the root (no parent) 3. All nodes must be connected (run dfs).
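Runnable sketch of the 0684 union-find above (union by size, with a bit of path compression thrown into find; names are mine, not the exact layout in the notes):

```python
from typing import List

def find_redundant_connection(edges: List[List[int]]) -> List[int]:
    n = len(edges)
    parent = list(range(n + 1))   # nodes are 1-indexed
    rank = [1] * (n + 1)          # size of the tree rooted at each node

    def find(x: int) -> int:
        # walk up until we hit a root (a node that is its own parent)
        while x != parent[x]:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    def union(a: int, b: int) -> bool:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False          # already connected: this edge is redundant
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra           # attach the smaller tree under the larger one
        rank[ra] += rank[rb]
        return True

    for a, b in edges:
        if not union(a, b):
            return [a, b]
    return []                     # problem guarantees a redundant edge exists
```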
2050. Parallel Courses III: use an adjacency list to create the topological order (with a queue), then walk it (queue/Kahn style, or DFS with memo) to find the max(total_time) taken. Events can be taken concurrently, but we must take the max because that is the moment everything actually finishes; the topological order only tells us the ORDER we must respect. Also for each nei, take max(total_time[nei], total_time[crs] + time of nei), because some other prerequisite of nei may take longer than crs.
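The note above mixes a queue-built topological order with DFS; here is a sketch of just the queue (Kahn's) variant, which carries the same max-over-prerequisites idea (function name is mine; courses are 1-indexed as in the problem):

```python
from collections import deque
from typing import List

def minimum_time(n: int, relations: List[List[int]], time: List[int]) -> int:
    # adjacency list and indegrees for Kahn's topological order
    adj = [[] for _ in range(n + 1)]
    indegree = [0] * (n + 1)
    for prev, nxt in relations:
        adj[prev].append(nxt)
        indegree[nxt] += 1

    # total_time[c] = earliest moment course c can be finished
    total_time = [0] * (n + 1)
    q = deque()
    for c in range(1, n + 1):
        if indegree[c] == 0:
            q.append(c)
            total_time[c] = time[c - 1]

    while q:
        crs = q.popleft()
        for nei in adj[crs]:
            # nei can only start once ALL prereqs are done -> max over them
            total_time[nei] = max(total_time[nei], total_time[crs] + time[nei - 1])
            indegree[nei] -= 1
            if indegree[nei] == 0:
                q.append(nei)

    return max(total_time)
```

The queue order guarantees every prerequisite's finish time is final before its dependents read it, which is why the max-propagation is safe.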
HEAP/PRIORITY QUEUE: 0703. Kth Largest Element in a Stream: maintain a minheap of size k, as that keeps the kth largest element at the top of the heap. 1046. Last Stone Weight: Basically maintain a max heap --> to do this in python, simply use a min-heap with all the original values negated: stones = [-s for s in stones], heapq.heapify(stones), while len(stones) > 1: first = -heapq.heappop(stones), second = -heapq.heappop(stones) (first is the larger value, thus it must be the one popped first in order to get the difference between first and second), if first != second: heapq.heappush(stones, -(first - second)) (must negate it again before the push in order to maintain the max-heap property) -> -> return abs(stones[0]) if stones else 0 (we must return abs(stones[0]) because all the values have been negated; that's how a max-heap is simulated in python ^^ read the entire explanation) 0973. K Closest Points to Origin: simply use a minheap and then pop k times into a res list to get the k closest points, minHeap.append((distance, x, y)) (because the first value in each tuple is the comparator, heapify orders by the first value! crazy python stuff). heapq.heapify(minHeap). then to get the k closest points, simply pop from the heap by doing: distance, x, y = heapq.heappop(minHeap), res.append((x, y)) -> also crazy python stuff that we can unpack the popped tuple straight into individual variables (tuple unpacking), then simply return res. 0215. Kth Largest Element in an Array: Basically use a maxHeap --> maxHeap = [-n for n in nums], heapq.heapify(maxHeap), then pop k times in a while loop; the last value popped (negated back) is the kth largest. 0621. Task Scheduler: Simply use count = Counter(tasks), this basically creates a hashmap of [character, number of times it appears in tasks], then use maxHeap = [-cnt for cnt in count.values()]; since python has no built-in maxheap, we must negate all the values within a minheap in order to make it a maxheap (inverted minheap). Then use a q = deque() in order to move items from the maxheap into the q --> [cnt, time when it can re-enter the maxheap]. When both q and maxheap are empty, all tasks have been completed. heapq.heapify(maxHeap), while maxHeap or q: time += 1, if maxHeap: cnt = 1 + heapq.heappop(maxHeap) (we have to + 1 instead of - 1 to decrement the number of remaining tasks of that character because all the values have been negated (all values are negative, so + 1 decrements the count)). if cnt: (if cnt is not 0, because if it is 0 we simply disregard it as all tasks of that specific character are now done.) q.append([cnt, time + n]) (time + n because we need to check later when it can be added back to the maxheap to be considered available for the CPU.) -> if q and q[0][1] == time: (the earliest value in the q has to match the current time to be added back; we only need to check the earliest value since the idle time is the same for every task, it does not vary per character.) heapq.heappush(maxHeap, q.popleft()[0]) (we take q.popleft()[0] as we don't need the time, only the cnt to be pushed back onto the maxheap). -> return time
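Runnable sketch of the 0621 Task Scheduler max-heap + cooldown-queue idea above (plain function, names are mine):

```python
import heapq
from collections import Counter, deque
from typing import List

def least_interval(tasks: List[str], n: int) -> int:
    count = Counter(tasks)
    # Python's heapq is a min-heap, so negate counts to simulate a max-heap
    max_heap = [-cnt for cnt in count.values()]
    heapq.heapify(max_heap)

    time = 0
    q = deque()  # holds [remaining_count (negative), time when it may run again]
    while max_heap or q:
        time += 1
        if max_heap:
            cnt = 1 + heapq.heappop(max_heap)  # +1 decrements a negated count
            if cnt:                            # still has occurrences left
                q.append([cnt, time + n])      # cooldown until time + n
        if q and q[0][1] == time:
            heapq.heappush(max_heap, q.popleft()[0])
    return time
```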
0355. Design Twitter: basically have a self.count = 0, self.user_map = defaultdict(set) (userId : set(followeeId)), self.tweet_map = defaultdict(list) (userId : list of [self.count, tweetId]) --> the reason we have a count is to determine which tweet was made at which point in time; it must count downwards from 0, since we need the most recent tweets: with negative values a minheap can be used, the most negative value sits at the front of the minheap and is also the most recent tweet. For getNewsFeed: simply have a res and a minHeap, for each followeeId within self.user_map[userId]: index = len(self.tweet_map[followeeId]) - 1 (index of the latest tweet of each individual followee), count, tweetId = self.tweet_map[followeeId][index] (to get that tweet), heapq.heappush(minHeap, (count, followeeId, tweetId, index - 1)) (index - 1 so we can step backwards through that followee's tweets later in the while loop) (basically we seed the heap with the last tweet of every followee so the heap can tell us which tweets are the most recent). while minHeap and len(res) < 10: count, followeeId, tweetId, index = heapq.heappop(minHeap), res.append(tweetId), if index >= 0: count, tweetId = self.tweet_map[followeeId][index], heapq.heappush(minHeap, (count, followeeId, tweetId, index - 1)) -> -> return res.
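A compact runnable sketch of that design, assuming the standard LeetCode Twitter class interface (a user is also treated as following themselves inside getNewsFeed):

```python
import heapq
from collections import defaultdict
from typing import List

class Twitter:
    def __init__(self):
        self.count = 0                       # decreases, so newer tweets are "smaller"
        self.tweet_map = defaultdict(list)   # userId -> list of [count, tweetId]
        self.user_map = defaultdict(set)     # userId -> set of followeeIds

    def postTweet(self, userId: int, tweetId: int) -> None:
        self.tweet_map[userId].append([self.count, tweetId])
        self.count -= 1

    def getNewsFeed(self, userId: int) -> List[int]:
        res, min_heap = [], []
        self.user_map[userId].add(userId)    # a user sees their own tweets too
        for followeeId in self.user_map[userId]:
            if self.tweet_map[followeeId]:
                index = len(self.tweet_map[followeeId]) - 1
                count, tweetId = self.tweet_map[followeeId][index]
                heapq.heappush(min_heap, (count, followeeId, tweetId, index - 1))
        while min_heap and len(res) < 10:
            count, followeeId, tweetId, index = heapq.heappop(min_heap)
            res.append(tweetId)
            if index >= 0:                   # that followee still has older tweets
                count, tweetId = self.tweet_map[followeeId][index]
                heapq.heappush(min_heap, (count, followeeId, tweetId, index - 1))
        return res

    def follow(self, followerId: int, followeeId: int) -> None:
        self.user_map[followerId].add(followeeId)

    def unfollow(self, followerId: int, followeeId: int) -> None:
        self.user_map[followerId].discard(followeeId)
```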
BIT MANIPULATION: (datalab basically) 0136. Single Number: Basically just have res = 0 and xor it with all the numbers; the numbers that appear twice will cancel out. 0191. Number of 1 Bits: Simply have a counter for the number of bits that are 1 by iterating through for i in range(32): shift each position --> if (n >> i) & 1: (mask with 1 to see if the shifted position is 1!) cnt += 1 ->-> return cnt (basically iterate through each bit-shifted-then-masked LSB position, then cnt++ if it is a 1) 0342. Power of Four: Basically we need to check 1. is it positive 2. is it a power of 2 3. is that single bit in an even position. --> we check if it is a power of 2 by doing n & (n - 1) == 0, because we need exactly one bit set to 1 --> a power of 4 has exactly one bit set to 1, and only in an even slot (0001, 0100, 0001 0000, 0100 0000); powers of 2 are exactly the numbers with a single set bit, and every power of 4 is also a power of 2. Then simply check that n & mask == n with mask = 0x55555555 --> this checks that the single set bit sits in an even position --> 5 = 0101, so the mask has 1s only in the even bit positions, and masking leaves n unchanged exactly when its set bit is in one of those positions. So basically check that there's only 1 bit set to 1 (power of 2), then check that it is set in an even position (power of 4). 0779. K-th Symbol in Grammar: Basically find the parent by recursing on the row above --> parent = self.getKth(n - 1, (k + 1) // 2), then simply return parent if k % 2 == 1 else 1 - parent (the recursion works because every two values in a row come from one parent in the row above, which is why we go up with n - 1 and (k + 1) // 2. For the return statement: 0 expands to 01 and 1 expands to 10, so the left child (k odd, since we are 1-indexed) always equals the parent, and the right child (k even) is the flip, 1 - parent.)
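Tiny runnable sketch of that 0779 recursion (plain function instead of the class method; base case is row 1 = "0"):

```python
def kth_grammar(n: int, k: int) -> int:
    if n == 1:
        return 0                                 # row 1 is just "0"
    parent = kth_grammar(n - 1, (k + 1) // 2)    # the symbol this one expanded from
    # 0 -> "01", 1 -> "10": the left child (k odd) copies the parent,
    # the right child (k even) is the flip
    return parent if k % 2 == 1 else 1 - parent
```

For example, kth_grammar(4, 5) gives 1, matching row 4 = 01101001.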
MATH & GEOMETRY: SOMETIMES PREFIX SUM!!! --> UTILIZE PREFIX SUMS AND CANCELLATIONS IN ORDER TO OPTIMIZE super nice 0343. Integer Break: Basically use the pattern of what gets multiplied together for each number. 3 = 1 * 2, 4 = 2 * 2, 5 = 3 * 2, 6 = 3 * 3, 7 = 4 * 3, 8 = 3 * 3 * 2 --> the pattern is: multiply as many threes as you can; if the remainder is 0, that's just the answer; if the remainder is 1, combine it with one of the threes and multiply by 4 instead (3 + 1 = 4 = 2 * 2); if the remainder is 2, simply multiply the answer by 2. 0066. Plus One: Basically run backwards, if digits[i] == 9: digits[i] = 0, if i == 0: digits.insert(0, 1) (prepend a 1 to digits for the carry) -> -> else: digits[i] += 1, break (NEED THIS BREAK or it will keep adding 1.) 0202. Happy Number: Basically cycle detection, use a set() to make sure the same number doesn't pop up again; if it does --> return False. Have a recursive function def findHappy(numStr): if int(numStr) == 1: return True --> if int(numStr) in visitedSet: return False --> res = 0, visitedSet.add(int(numStr)), for i in range(len(numStr)): res += int(numStr[i]) ** 2 --> return findHappy(str(res)) --> kick it off with return findHappy(str(n)) --> basically use the string form of the number to iterate over its digits.
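Runnable sketch of that 0202 cycle-detection recursion (plain function, names are mine; it loops over the digits of the string form as described):

```python
def is_happy(n: int) -> bool:
    visited = set()  # digit-square sums we have already seen -> cycle detection

    def find_happy(num_str: str) -> bool:
        num = int(num_str)
        if num == 1:
            return True
        if num in visited:
            return False                          # stuck in a cycle, never hits 1
        visited.add(num)
        res = sum(int(d) ** 2 for d in num_str)   # sum of squared digits
        return find_happy(str(res))

    return find_happy(str(n))
```

For example, is_happy(19) is True (19 -> 82 -> 68 -> 100 -> 1).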
GREEDY: 1793. Maximum Score of a Good Subarray: Simply be greedy: start with i = j = k and res = mini = nums[k], then while i > 0 or j < len(nums) - 1: if i == 0: j += 1 -> elif j == len(nums) - 1: i -= 1 -> elif nums[i - 1] < nums[j + 1]: j += 1 (expand toward whichever neighbour is larger first, here the right one) -> else: i -= 1 -> mini = min(mini, nums[i], nums[j]), res = max(res, mini * (j - i + 1)) -> return res
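Runnable sketch of that 1793 two-pointer greedy (every good subarray must contain index k, so grow outward from it; function name is mine):

```python
from typing import List

def maximum_score(nums: List[int], k: int) -> int:
    i = j = k
    res = mini = nums[k]              # the subarray [k, k] is always valid
    while i > 0 or j < len(nums) - 1:
        if i == 0:
            j += 1
        elif j == len(nums) - 1:
            i -= 1
        elif nums[i - 1] < nums[j + 1]:
            j += 1                    # expand toward the larger neighbour first
        else:
            i -= 1
        mini = min(mini, nums[i], nums[j])
        res = max(res, mini * (j - i + 1))
    return res
```

For example, maximum_score([1, 4, 3, 7, 4, 5], 3) gives 15 (subarray [7, 4, 5] with min 3... expanded further to indices 1..5, min 3, length 5).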
RECURSION: 0038. Count and Say: Basically recurse: the nth term is the run-length "reading" of the (n - 1)th term, so get the previous term recursively (or loop for i in range(n - 1)), then use a while loop over it to count each run of equal digits and build the answer with ret += str(count), ret += the digit itself.
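A minimal recursive sketch of that 0038 run-length idea (name is mine):

```python
def count_and_say(n: int) -> str:
    if n == 1:
        return "1"
    prev = count_and_say(n - 1)   # recursive leap of faith: get the previous term
    ret, i = "", 0
    while i < len(prev):
        count = 1
        # count how many times prev[i] repeats consecutively
        while i + 1 < len(prev) and prev[i] == prev[i + 1]:
            count += 1
            i += 1
        ret += str(count) + prev[i]
        i += 1
    return ret
```

For example, count_and_say(4) gives "1211" ("1" -> "11" -> "21" -> "1211").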