DSA & Patterns Flashcards

(89 cards)

1
Q

What is Big-O notation and why does it matter in interviews?

A

Big-O describes how an algorithm’s time or space grows as input size n increases. Ignores constants and lower-order terms. Interviewers ALWAYS ask: what is the time and space complexity of your solution? Order from best to worst: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!). Always state complexity unprompted at the end of your solution — it shows engineering maturity

2
Q

What is the Big-O of all common Array operations?

A

Access by index: O(1). Search (unsorted): O(n). Search (sorted + binary search): O(log n). Insert at end (amortized): O(1). Insert at middle/beginning: O(n) (shift elements). Delete at end: O(1). Delete at middle/beginning: O(n). Space: O(n). Key insight: arrays are great for index access but expensive for insertions/deletions in the middle

3
Q

What is the Big-O of all common LinkedList operations?

A

Access by index: O(n). Search: O(n). Insert at head: O(1). Insert at tail: O(1) if tail pointer held — else O(n). Insert at middle (given node): O(1). Delete head: O(1). Delete tail: O(n) for singly linked — O(1) doubly linked. Delete middle (given node): O(1). Space: O(n). Key insight: LinkedList is great for O(1) head inserts/deletes but O(n) for random access

4
Q

What is the Big-O of all common HashMap/HashSet operations?

A

Insert: O(1) average — O(n) worst case (all keys hash to same bucket). Delete: O(1) average. Search/Get: O(1) average. Space: O(n). Key insight: HashMap is the most powerful tool in coding challenges — use it to reduce O(n²) brute force to O(n) by trading space for time. Always consider: can I precompute a frequency map or prefix sum?

5
Q

What is the Big-O of all common Stack and Queue operations?

A

Stack (push/pop/peek): O(1) for all. Queue (enqueue/dequeue/peek): O(1) for all. Both use O(n) space. Stack implemented with: array or LinkedList. Queue implemented with: LinkedList or circular array (deque). Java: Stack class (legacy — use Deque) + ArrayDeque (fast stack+queue) + LinkedList (queue). Key insight: stack = LIFO (last in first out). Queue = FIFO (first in first out)

6
Q

What is the Big-O of all common Tree operations?

A

BST (balanced): Search O(log n) — Insert O(log n) — Delete O(log n). BST (unbalanced/worst): O(n) for all. Binary Tree traversal (DFS/BFS): O(n) — must visit all nodes. Height of balanced tree: O(log n). Height of skewed tree: O(n). Space for DFS: O(h) where h=height — O(log n) balanced — O(n) skewed. Space for BFS: O(w) where w=max width — O(n) worst case

7
Q

What is the Big-O of all common Sorting algorithms?

A

Bubble Sort: O(n²) time — O(1) space — stable. Selection Sort: O(n²) — O(1) — not stable. Insertion Sort: O(n²) worst — O(n) nearly sorted — O(1) — stable — best for small/nearly sorted. Merge Sort: O(n log n) always — O(n) space — stable. Quick Sort: O(n log n) average — O(n²) worst — O(log n) space — not stable. Heap Sort: O(n log n) — O(1) — not stable. Counting/Radix: O(n+k) — for integers. Java Arrays.sort(): dual-pivot quicksort for primitives — Timsort for objects — O(n log n)

8
Q

What is the Big-O of all common Heap/Priority Queue operations?

A

Insert: O(log n). Extract min/max: O(log n). Peek min/max: O(1). Build heap from array: O(n) (not O(n log n) — important!). Search: O(n). Space: O(n). Java: PriorityQueue (min-heap by default — use Collections.reverseOrder() for max-heap). Key insight: use heap when you need the k-th largest/smallest or need to repeatedly extract min/max. Top-K problems: O(n log k) with a heap of size k

9
Q

What is the Big-O of Graph traversal algorithms?

A

BFS: O(V+E) time — O(V) space (queue + visited set) — V=vertices E=edges. DFS: O(V+E) time — O(V) space (stack/recursion + visited). Dijkstra: O((V+E) log V) with min-heap. Bellman-Ford: O(VE) — handles negative weights. Floyd-Warshall (all pairs): O(V³). Topological Sort (Kahn’s or DFS): O(V+E). Union-Find (path compression + rank): O(α(n)) ≈ O(1) amortized

10
Q

What is the Big-O of Binary Search?

A

Time: O(log n). Space: O(1) iterative — O(log n) recursive (call stack). Requires: sorted array. Each step eliminates half the search space. Template: lo=0 hi=n-1. while(lo<=hi): mid=lo+(hi-lo)/2. If arr[mid]==target return mid. If arr[mid]<target lo=mid+1. Else hi=mid-1. Return -1. Key: use lo+(hi-lo)/2 not (lo+hi)/2 to avoid integer overflow. Applies to: sorted arrays + answer-space binary search (bisect on a value range)
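The template above as a runnable Python sketch (a minimal version; the deck's pseudocode is already Python-flavored):

```python
def binary_search(arr, target):
    """Exact-match binary search on a sorted list; returns index or -1."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = lo + (hi - lo) // 2  # avoids overflow in fixed-width languages
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1           # target is in the right half
        else:
            hi = mid - 1           # target is in the left half
    return -1
```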

11
Q

MEMORY BOOSTER: Big-O quick reference

A

O(1): HashMap get/put - array index access - stack push/pop. O(log n): binary search - balanced BST ops - heap insert/extract. O(n): linear scan - LinkedList traversal - tree traversal - BFS/DFS. O(n log n): merge sort - quicksort avg - heap sort - sorting. O(n²): bubble/selection/insertion sort - nested loops over array. O(2ⁿ): subsets - Fibonacci naive - exponential backtracking. O(n!): permutations. Space trick: O(n) HashMap to reduce time from O(n²) -> O(n)

12
Q

What are the 14 essential coding interview patterns?

A

1) Sliding Window. 2) Two Pointers. 3) Fast and Slow Pointers. 4) Merge Intervals. 5) Cyclic Sort. 6) In-place LinkedList Reversal. 7) Tree BFS. 8) Tree DFS. 9) Two Heaps. 10) Subsets/Combinations. 11) Modified Binary Search. 12) Bitwise XOR. 13) Top-K Elements. 14) K-way Merge. Bonus: Dynamic Programming + Backtracking + Graph (BFS/DFS/Union-Find) + Monotonic Stack. Recognizing the pattern is 80% of solving the problem

13
Q

What is the Sliding Window pattern and when do you use it?

A

Use when: contiguous subarray/substring problem + find max/min/average/longest/shortest window satisfying a condition. Avoids O(n²) nested loops. Types: Fixed window (size k) + Variable window (expand/shrink based on condition). Template: left=0 - for right in range(n): add arr[right] to window. While window invalid: remove arr[left] - left++. Update answer with current window. Examples: Maximum sum subarray of size k + Longest substring without repeating chars + Minimum window substring
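A runnable sketch of the fixed-window variant — maximum sum subarray of size k, one of the examples named above (assumes len(nums) >= k):

```python
def max_sum_subarray(nums, k):
    """Fixed-size sliding window: max sum over all contiguous windows of size k."""
    window = sum(nums[:k])        # first window
    best = window
    for right in range(k, len(nums)):
        window += nums[right] - nums[right - k]  # slide: add new, drop old
        best = max(best, window)
    return best
```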

14
Q

What is the Two Pointers pattern and when do you use it?

A

Use when: sorted array + find pair/triplet + remove duplicates + palindrome check + comparing two arrays. Two variants: Opposite ends (left=0 right=n-1 move toward each other — for sorted array pair sum) + Same direction (both start at 0 move at different speeds — for remove duplicates/merge). Examples: Two Sum II (sorted) + 3Sum + Container With Most Water + Remove Duplicates + Merge Sorted Array. Key: usually requires sorted input for opposite-ends variant

15
Q

What is the Fast and Slow Pointers pattern?

A

Also called Floyd’s Tortoise and Hare. Use when: linked list cycle detection + finding middle of linked list + finding cycle start. Fast pointer moves 2 steps — slow moves 1 step. If cycle: fast and slow meet inside cycle. Middle of list: when fast reaches end — slow is at middle. Template: slow=head fast=head. While fast and fast.next: slow=slow.next - fast=fast.next.next. If slow==fast: cycle found. Examples: Linked List Cycle + Happy Number + Find Cycle Start
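The cycle-detection template above, as a self-contained Python sketch (the minimal `ListNode` class is illustrative):

```python
class ListNode:
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    """Floyd's tortoise and hare: True if the linked list contains a cycle."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next           # 1 step
        fast = fast.next.next      # 2 steps
        if slow is fast:           # pointers meet inside the cycle
            return True
    return False
```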

16
Q

What is the Merge Intervals pattern?

A

Use when: overlapping intervals — insert interval — merge intervals — find free slots. Sort by start time first. Template: sort intervals by start. result=[intervals[0]]. For each interval: if interval.start <= result.last.end: merge (update end = max(end, interval.end)). Else: add to result. Time: O(n log n) for sort. Examples: Merge Intervals + Insert Interval + Non-overlapping Intervals + Meeting Rooms II (use min-heap for count of simultaneous meetings)
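The sort-then-fold template, as a short Python sketch:

```python
def merge_intervals(intervals):
    """Sort by start, then fold each interval into the last merged one."""
    if not intervals:
        return []
    intervals.sort(key=lambda iv: iv[0])
    merged = [list(intervals[0])]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:                   # overlaps last interval
            merged[-1][1] = max(merged[-1][1], end)  # extend its end
        else:
            merged.append([start, end])
    return merged
```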

17
Q

What is the Modified Binary Search pattern?

A

Use when: sorted or rotated-sorted array + finding boundary + answer-space search. Not just arr[mid]==target — apply binary search logic to ANY monotonic condition. Templates: find leftmost (when arr[mid]>=target: hi=mid). Find rightmost (when arr[mid]<=target: lo=mid). Rotated array: check which half is sorted then decide direction. Answer-space: binary search on the answer value (not array index) — works when: feasible(x) is monotonic. Examples: Search Rotated Array + Find Min in Rotated + Koko Eating Bananas + Capacity to Ship

18
Q

What is the Tree BFS pattern?

A

Use when: level-by-level traversal + shortest path in unweighted tree/graph + minimum depth. Template: queue=deque([root]). While queue: level_size=len(queue). For i in range(level_size): node=queue.popleft(). Process node. Add node.left and node.right if they exist. Result collected per level. Time O(n) Space O(n) (max width). Examples: Binary Tree Level Order + Zigzag Level Order + Minimum Depth + Connect Level Order Siblings + BFS shortest path in grid
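The level-order template above, runnable (the minimal `TreeNode` class is illustrative):

```python
from collections import deque

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):
    """BFS level-order traversal: one sublist of values per tree level."""
    if not root:
        return []
    result, queue = [], deque([root])
    while queue:
        level = []
        for _ in range(len(queue)):   # drain exactly one level
            node = queue.popleft()
            level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        result.append(level)
    return result
```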

19
Q

What is the Tree DFS pattern?

A

Use when: path sum + path existence + tree structure problems + preorder/inorder/postorder. Recursive template: if not node: return base_case. left=dfs(node.left, …). right=dfs(node.right, …). return combine(node.val, left, right). Iterative: use explicit stack. Preorder: root-left-right. Inorder: left-root-right (gives sorted BST). Postorder: left-right-root. Examples: Path Sum + Max Path Sum + Diameter + Lowest Common Ancestor + Validate BST + Serialize/Deserialize

20
Q

What is the Two Heaps pattern?

A

Use when: find median from data stream + sliding window median + schedule tasks to minimize cost. Use a max-heap for lower half and min-heap for upper half. Keep them balanced (sizes differ by at most 1). Median: if equal size = average of tops. If unequal = top of larger heap. Template: add to max_heap -> rebalance by moving max_heap.top() to min_heap if top > min_heap.top() -> balance sizes. Examples: Find Median from Data Stream + Sliding Window Median
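A sketch of the two-heaps median structure with Python's `heapq` (a min-heap only, so the lower half is stored negated to simulate a max-heap):

```python
import heapq

class MedianFinder:
    """Max-heap (negated values) holds the lower half, min-heap the upper."""
    def __init__(self):
        self.lo = []  # max-heap via negation; may hold one extra element
        self.hi = []  # min-heap

    def add(self, num):
        heapq.heappush(self.lo, -num)
        # restore the invariant: max(lo) <= min(hi)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        if len(self.hi) > len(self.lo):   # rebalance sizes
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):   # odd count: top of larger heap
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```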

21
Q

What is the Top-K Elements pattern?

A

Use when: find k largest/smallest/most frequent elements. Use heap of size k (not sort entire array). Min-heap of size k for k-largest: add element — if heap size > k: pop min — remaining k elements are k-largest. O(n log k) vs O(n log n) for full sort. Max-heap for k-smallest. Examples: Kth Largest Element + Top K Frequent Elements + K Closest Points to Origin + Find K Pairs with Smallest Sums. Java: PriorityQueue. Python: heapq.nlargest() or maintain heap manually
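The size-k min-heap idea, runnable (assumes len(nums) >= k):

```python
import heapq

def k_largest(nums, k):
    """Min-heap of size k: O(n log k) to find the k largest elements."""
    heap = []
    for num in nums:
        heapq.heappush(heap, num)
        if len(heap) > k:
            heapq.heappop(heap)   # evict the smallest of the k+1 candidates
    return sorted(heap, reverse=True)
```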

22
Q

What is the Subsets / Combinations / Permutations pattern?

A

Use when: generate all subsets + all combinations + all permutations + string permutations. Backtracking template: result=[]. def backtrack(start, current): result.append(current[:]). For i in range(start, n): current.append(nums[i]). backtrack(i+1, current). current.pop(). Subsets: O(2ⁿ). Permutations: O(n!). Combinations k of n: O(C(n,k)). Key: add result copy at different points: entry=subsets - leaf=permutations. Pruning: sort + skip duplicates
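The subsets variant of the template (copy at entry), runnable:

```python
def subsets(nums):
    """Backtracking: record the partial subset at every node of the tree."""
    result = []

    def backtrack(start, current):
        result.append(current[:])            # copy at entry -> all subsets
        for i in range(start, len(nums)):
            current.append(nums[i])          # choose
            backtrack(i + 1, current)        # explore
            current.pop()                    # undo the choice

    backtrack(0, [])
    return result
```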

23
Q

What is the Backtracking pattern?

A

Backtracking explores all possibilities by making choices and undoing them. Template: def backtrack(state): if goal: add to results + return. For each choice: if valid(choice): make choice (modify state). backtrack(state). undo choice (restore state). Use when: find all solutions (not just one) + constraint satisfaction + generate combinations/permutations/subsets. Key optimizations: pruning (skip invalid branches early) + sort input (enables deduplication + early termination). Examples: N-Queens + Sudoku Solver + Word Search + Combination Sum

24
Q

What is the Dynamic Programming pattern?

A

DP solves problems with optimal substructure and overlapping subproblems. Recognize DP when: find min/max/count/true-false + choices at each step + optimal depends on subproblems. Two approaches: Top-down (memoization: recursion + cache). Bottom-up (tabulation: fill dp array iteratively — usually faster). State definition is everything: dp[i] = answer for first i elements. Transition: dp[i] = f(dp[i-1], dp[i-2], …). Base cases: dp[0] dp[1]. Examples: Fibonacci + Climbing Stairs + Coin Change + Knapsack + LCS + LIS
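Bottom-up tabulation in practice — Coin Change, one of the examples above:

```python
def coin_change(coins, amount):
    """Bottom-up DP: dp[a] = fewest coins summing to a (-1 if impossible)."""
    INF = float('inf')
    dp = [0] + [INF] * amount                       # base case: dp[0] = 0
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                dp[a] = min(dp[a], dp[a - c] + 1)   # transition: take coin c
    return dp[amount] if dp[amount] != INF else -1
```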

25
What is the Monotonic Stack pattern?
Monotonic Stack maintains a stack in increasing or decreasing order by popping elements that violate the order. Use when: find next greater element + previous smaller element + largest rectangle + trap rainwater. Increasing stack (pop while top > current): each popped element gets current as its next smaller — the remaining top is current's previous smaller. Decreasing stack (pop while top < current): each popped element gets current as its next greater — the remaining top is current's previous greater. Template: stack=[]. For i in range(n): while stack and condition(stack[-1], nums[i]): process(stack.pop()). stack.append(i). Examples: Daily Temperatures + Largest Rectangle + Trapping Rain Water + Next Greater Element
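The decreasing-stack variant in action — Daily Temperatures, one of the examples above:

```python
def daily_temperatures(temps):
    """Decreasing stack of indices: answer[i] = days until a warmer temp (0 if none)."""
    answer = [0] * len(temps)
    stack = []                                # indices with decreasing temps
    for i, t in enumerate(temps):
        while stack and temps[stack[-1]] < t:
            j = stack.pop()                   # t is j's next greater element
            answer[j] = i - j
        stack.append(i)
    return answer
```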
26
What is the Graph BFS/DFS pattern for coding interviews?
Use BFS when: shortest path (unweighted) + level-by-level exploration + connected components. Use DFS when: cycle detection + topological sort + path existence + strongly connected components. Grid as graph: 4 directions (up/down/left/right) + visited set/array to prevent revisit. Template BFS: queue=deque([start]) - visited={start}. While queue: node=queue.popleft(). For neighbor in get_neighbors(node): if not visited: visited.add - queue.append. Examples: Number of Islands + Clone Graph + Word Ladder + Pacific Atlantic + Course Schedule (topological sort) + Alien Dictionary
27
What is the Union-Find (Disjoint Set Union) pattern?
Union-Find tracks which elements belong to the same connected component. Operations: find(x) (find root of x's component — with path compression: O(α(n))≈O(1)) + union(x, y) (merge components — with union by rank: O(α(n))). Template: parent=[i for i in range(n)]. rank=[0]*n. def find(x): if parent[x]!=x: parent[x]=find(parent[x]). Return parent[x]. def union(x, y): px, py=find(x), find(y). If rank[px]<rank[py]: parent[px]=py. Elif rank[px]>rank[py]: parent[py]=px. Else: parent[py]=px - rank[px]+=1. Examples: Number of Connected Components + Redundant Connection + Accounts Merge + Number of Islands II
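Path compression plus union by rank, as a compact runnable class:

```python
class UnionFind:
    """Disjoint set with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        px, py = self.find(x), self.find(y)
        if px == py:
            return False                      # already in the same component
        if self.rank[px] < self.rank[py]:
            px, py = py, px                   # ensure px has the higher rank
        self.parent[py] = px                  # attach lower-rank tree under px
        if self.rank[px] == self.rank[py]:
            self.rank[px] += 1
        return True
```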
28
What is the Bit Manipulation / XOR pattern?
XOR properties: a XOR a = 0 (same = 0) + a XOR 0 = a + XOR is commutative and associative. Use when: find single number in pairs + find missing number + flip bits. Common tricks: x&(x-1) removes lowest set bit (count set bits — check power of 2). x&(-x) isolates lowest set bit. x^x=0 cancels pairs. Left shift: x<<1 = x*2. Right shift: x>>1 = x/2. Check bit i: (x>>i)&1. Set bit i: x|(1<<i)
29
What is the Prefix Sum / Difference Array pattern?
Prefix sum: prefix[i] = sum of arr[0..i]. Sum of range [l, r] = prefix[r] - prefix[l-1] in O(1) after O(n) build. Use when: multiple range sum queries + subarray sum equals k (use HashMap: count of prefix sums seen). Difference array: for range update problems (add val to all elements in [l, r]). 2D prefix sum: for rectangle sum queries. Template for subarray sum = k: count=0 - prefix=0 - seen={0:1}. For num in nums: prefix+=num. count+=seen.get(prefix-k, 0). seen[prefix]=seen.get(prefix, 0)+1. Examples: Subarray Sum Equals K + Range Sum Query + Product of Array Except Self
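The subarray-sum-equals-k template, runnable:

```python
def subarray_sum(nums, k):
    """Count subarrays summing to k using prefix sums + a HashMap of counts."""
    count = prefix = 0
    seen = {0: 1}                              # empty prefix occurs once
    for num in nums:
        prefix += num
        count += seen.get(prefix - k, 0)       # earlier prefixes that complete k
        seen[prefix] = seen.get(prefix, 0) + 1
    return count
```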
30
MEMORY BOOSTER: The 14 patterns
Window/Pointer: Sliding Window (subarray condition) + Two Pointers (sorted array pairs) + Fast/Slow (cycle detection). Intervals: Merge Intervals (sort by start). Search: Modified Binary Search (sorted/rotated/answer-space). Tree: BFS (levels/shortest) + DFS (paths/structure). Data structures: Two Heaps (median) + Top-K (heap of size k) + Monotonic Stack (next greater/smaller). Generation: Subsets+Backtracking (all possibilities). Graph: BFS/DFS + Union-Find (connected components). Math: XOR (cancel pairs) + Prefix Sum (range queries in O(1))
31
When do you use an Array vs LinkedList in coding interviews?
Array: random access by index needed + fixed or mostly append-only + cache-friendly (contiguous memory). LinkedList: frequent insertions/deletions at head or middle + don't need random access + implementing stack/queue. In practice: most interview problems use arrays. LinkedList problems are their own category (reverse - cycle - merge - nth from end). Java: ArrayList for dynamic array. ArrayDeque for double-ended queue (faster than LinkedList for stack/queue)
32
When do you use a HashMap in coding interviews?
Use HashMap when: need O(1) lookup + frequency counting + checking existence + caching intermediate results + Two Sum pattern (complement lookup) + grouping elements. Patterns: frequency map (count occurrences) + index map (store last seen index) + group anagrams (sorted string as key) + memoization (recursive call args as key). Default strategy: if brute force is O(n²) — think 'can I use a HashMap to reduce to O(n)?'
33
When do you use a Stack in coding interviews?
Use Stack when: matching brackets/parentheses + next greater/smaller element + DFS iteratively + function call simulation + expression evaluation + undo operations. Key insight: stack is perfect for problems where you need to remember what you saw PREVIOUSLY and reverse the order. Template for balanced brackets: for ch in s: if ch is open: stack.push(ch). Elif stack and matches(stack.top(), ch): stack.pop(). Else: return false. Return stack.isEmpty()
34
When do you use a Queue/Deque in coding interviews?
Queue: BFS traversal + level-by-level processing + sliding window maximum (deque). Deque (double-ended queue): sliding window max/min (maintain decreasing/increasing deque of indices — front=max — remove indices out of window from front — remove smaller elements from back). Java: ArrayDeque for both stack and queue (faster than Stack and LinkedList). Queue.add/poll. Deque.addFirst/addLast/peekFirst/pollFirst
35
When do you use a Heap/Priority Queue in coding interviews?
Use Heap when: repeatedly need min or max + Top-K problems + merge K sorted lists + find median dynamically + Dijkstra shortest path + scheduling problems. Min-heap: always gives smallest element. Max-heap: reverse comparator. Java: PriorityQueue (min-heap default). PriorityQueue(Collections.reverseOrder()) for max-heap. Custom comparator: PriorityQueue((a, b) -> a[0]-b[0]). Key: heap gives O(log n) insert and O(log n) extract-min — NOT O(1) search
36
When do you use a Trie in coding interviews?
Trie (prefix tree) stores strings where each node is a character. Use when: prefix search + autocomplete + word search in dictionary + spell checker. Operations: insert O(m) + search O(m) + startsWith O(m) — m=string length. Implementation: TrieNode has children[26] array or HashMap + boolean isEnd. Common problems: Implement Trie + Word Search II + Replace Words + Design Search Autocomplete. Space: O(alphabet_size * key_length * num_keys)
37
When do you use a Graph in coding interviews?
Use graph when: entities have relationships + network connectivity + paths between nodes + dependencies. Represent as: adjacency list (HashMap<Node, List<Node>> — most common in interviews) + adjacency matrix (for dense graphs + O(1) edge lookup). BFS for: shortest path (unweighted) + level exploration. DFS for: connected components + cycle detection + topological sort. Undirected vs directed. Weighted vs unweighted. Recognize grid problems as graphs (each cell = node - 4/8 neighbors = edges)
38
When do you use a Binary Search Tree in coding interviews?
BST maintains sorted order: left < root < right. Use when: need sorted dynamic data + floor/ceiling queries + range queries + in-order traversal gives sorted sequence. BST operations: O(log n) average for balanced — O(n) worst case (use self-balancing: AVL - Red-Black - Java TreeMap). TreeMap/TreeSet in Java: provides sorted map + floor/ceiling/higher/lower methods in O(log n). Problems: Validate BST + Kth Smallest in BST + BST Iterator + Range Sum in BST
39
MEMORY BOOSTER: When to use which data structure
HashMap: O(1) lookup - frequency count - Two Sum - grouping. Stack: brackets - next greater/smaller - DFS - expression eval. Queue: BFS - level order. Deque: sliding window max/min. Heap: top-K - repeated min/max access - Dijkstra. Trie: prefix search - autocomplete - word dictionary. Graph (adj list): connectivity - BFS shortest path - DFS cycle detection - topological sort. TreeMap: sorted dynamic data - floor/ceiling queries. Array: random access. LinkedList: O(1) head insert/delete
40
What is the Binary Search template for coding interviews?
Iterative (preferred): lo=0 hi=n-1. While lo<=hi: mid=lo+(hi-lo)/2. If arr[mid]==target: return mid. If arr[mid]<target: lo=mid+1. Else: hi=mid-1. Return -1. Key: lo+(hi-lo)/2 prevents overflow. lo<=hi for exact match. lo<hi (with hi=mid / lo=mid+1) when searching for a boundary (leftmost/rightmost position)
41
What is the DFS template for Trees?
Recursive: def dfs(node): if not node: return base. left=dfs(node.left). right=dfs(node.right). return combine(node.val, left, right). Iterative (preorder): stack=[root]. While stack: node=stack.pop(). process(node). If node.right: stack.push(node.right). If node.left: stack.push(node.left). Inorder iterative: stack=[]. curr=root. While curr or stack: while curr: stack.push(curr). curr=curr.left. curr=stack.pop(). process(curr). curr=curr.right
42
What is the BFS template for Trees and Graphs?
Tree level-order: from collections import deque. queue=deque([root]). While queue: size=len(queue). level=[]. For _ in range(size): node=queue.popleft(). level.append(node.val). if node.left: queue.append(node.left). if node.right: queue.append(node.right). result.append(level). Graph BFS: visited=set([start]). queue=deque([start]). While queue: node=queue.popleft(). for neighbor in graph[node]: if neighbor not in visited: visited.add(neighbor). queue.append(neighbor)
43
What is the DFS template for Graphs?
Recursive: visited=set(). def dfs(node): visited.add(node). for neighbor in graph[node]: if neighbor not in visited: dfs(neighbor). Iterative: visited=set([start]). stack=[start]. While stack: node=stack.pop(). for neighbor in graph[node]: if neighbor not in visited: visited.add(neighbor). stack.append(neighbor). Grid DFS: directions=[(0,1),(0,-1),(1,0),(-1,0)]. def dfs(r, c): if out of bounds or visited or blocked: return. visited.add((r, c)). for dr, dc in directions: dfs(r+dr, c+dc)
44
What is the Backtracking template?
def solve(result, current, start, ...): if base_case: result.append(current[:]). return. for i in range(start, n): if not valid(i): continue. current.append(nums[i]). solve(result, current, i+1, ...). current.pop(). Key decisions: 1) What is in current (partial solution)? 2) What is the base case? 3) What choices are available at each step? 4) How to skip duplicates (sort + if i>start and nums[i]==nums[i-1]: continue)? 5) Add result at entry (subsets) or at leaf (permutations)
45
What is the Dynamic Programming template?
Top-down (memoization): memo={}. def dp(state): if state in memo: return memo[state]. if base_case: return base_value. result = f(dp(subproblem1), dp(subproblem2)). memo[state]=result. return result. Bottom-up (tabulation): dp=[0]*(n+1). dp[0]=base0. dp[1]=base1. For i in range(2, n+1): dp[i]=f(dp[i-1], dp[i-2]). return dp[n]. Space optimization: often only need last 1-2 values -> use variables instead of array. Always define: state meaning + transition + base cases
46
What is the Sliding Window template?
Variable window: left=0. window_data={}. result=0. For right in range(len(s)): add s[right] to window. While window is invalid: remove s[left] from window. left+=1. result=max(result, right-left+1). Fixed window of size k: For right in range(len(arr)): add arr[right]. If right>=k: remove arr[right-k]. If right>=k-1: update answer. Key: track what makes window valid/invalid. Common window data: character frequency dict + element sum + distinct count
47
What is the Two Pointers template?
Opposite ends (sorted array): left=0. right=n-1. While left<right: s=arr[left]+arr[right]. If s==target: found. Elif s<target: left+=1. Else: right-=1. Same direction (remove duplicates): slow=0. For fast in range(1, n): if arr[fast]!=arr[slow]: slow+=1. arr[slow]=arr[fast]. New length = slow+1
48
What is the Topological Sort template?
Kahn's Algorithm (BFS-based): Build adjacency list + in-degree count. queue=[nodes with in-degree 0]. result=[]. While queue: node=queue.popleft(). result.append(node). For neighbor of node: in_degree[neighbor]-=1. If in_degree[neighbor]==0: queue.append(neighbor). If len(result)!=total_nodes: cycle detected (return []). Use for: course prerequisites + build order + task scheduling. Examples: Course Schedule + Alien Dictionary + Minimum Height Trees
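Kahn's algorithm as a runnable sketch (nodes labeled 0..n-1; `edges` holds (u, v) pairs meaning u must come before v):

```python
from collections import deque

def topo_sort(n, edges):
    """Kahn's algorithm: returns a topological order, or [] if a cycle exists."""
    graph = [[] for _ in range(n)]
    in_degree = [0] * n
    for u, v in edges:                         # edge u -> v
        graph[u].append(v)
        in_degree[v] += 1
    queue = deque(i for i in range(n) if in_degree[i] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:            # all prerequisites satisfied
                queue.append(nxt)
    return order if len(order) == n else []    # leftover nodes => cycle
```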
49
What is the Dijkstra shortest path template?
heap=[(0, start)]. dist={start:0}. While heap: d, node=heappop(heap). If d>dist[node]: continue. For neighbor, weight in graph[node]: new_dist=d+weight. If new_dist<dist.get(neighbor, inf): dist[neighbor]=new_dist. heappush(heap, (new_dist, neighbor)). Time: O((V+E) log V) with min-heap. Key: lazy deletion — stale heap entries are skipped by the check (if d>dist[node]: continue)
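The same template as a runnable function (graph as a dict: node -> list of (neighbor, weight) pairs):

```python
import heapq

def dijkstra(graph, start):
    """Min-heap Dijkstra with lazy deletion; returns shortest distances."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue                           # stale entry, skip it
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float('inf')):
                dist[neighbor] = nd            # found a shorter path
                heapq.heappush(heap, (nd, neighbor))
    return dist
```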
50
What is the Union-Find template?
parent=list(range(n)). rank=[0]*n. def find(x): if parent[x]!=x: parent[x]=find(parent[x]). Return parent[x]. def union(x, y): px, py=find(x), find(y). If px==py: return False (already connected). If rank[px]<rank[py]: parent[px]=py. Elif rank[px]>rank[py]: parent[py]=px. Else: parent[py]=px. rank[px]+=1. Return True. count=n. Call union() -> if returns True: count-=1. count = number of connected components at end
51
MEMORY BOOSTER: Algorithm templates
Binary Search: lo+(hi-lo)/2 to avoid overflow. lo<=hi for exact. lo<hi for boundary. Backtracking: make choice -> recurse -> undo choice. DP: define state + transition + base cases. Sliding window: expand right + shrink left while invalid. Two pointers: opposite ends (sorted) or same direction (fast/slow). Topological sort: in-degree + queue(0-degree nodes). Dijkstra: min-heap + skip stale entries
52
How do you recognize which pattern to use for Array problems?
Sorted array + find pair/triplet: Two Pointers. Subarray/substring satisfying condition: Sliding Window. Subarray sum: Prefix Sum + HashMap. Find element in sorted array: Binary Search. Rearrange in-place without extra space: Cyclic Sort. Range queries: Prefix Sum. Max in sliding window: Deque (Monotonic). Next greater element: Monotonic Stack. K-th largest: Heap. All subarrays/subsets: Backtracking or Subset pattern
53
How do you recognize which pattern to use for String problems?
Anagram/permutation in string: Sliding Window + frequency array. Longest without repeating: Sliding Window. Palindrome: Two Pointers from center or expand around center. Minimum window substring: Sliding Window + two freq maps. Decode/encode string: Stack. Parenthesis matching: Stack. Prefix matching: Trie. All permutations: Backtracking. String to number: Handle sign + overflow. Reverse words: Split or Two Pointers
54
How do you recognize which pattern to use for LinkedList problems?
Find middle: Fast/Slow Pointers. Detect cycle: Fast/Slow Pointers. Reverse entire list: Iterative (prev-curr-next). Reverse sublist: In-place reversal. Merge two sorted: Two Pointers on both lists. Find nth from end: Fast ahead by n then move both. Palindrome check: Fast/Slow + reverse second half. Intersect two lists: Length equalization or two-pointer trick. Sort list: Merge Sort on linked list
55
How do you recognize which pattern to use for Tree problems?
Level-by-level + shortest path: BFS. Path sum + structure + height: DFS. Serialize/deserialize: BFS or DFS with null markers. Lowest Common Ancestor: DFS recursion. Validate BST: DFS with min/max bounds. K-th smallest in BST: Inorder DFS. Diameter: DFS returning height. Max path sum: DFS returning max single path. Convert BST to sorted array: Inorder. Check if balanced: DFS returning height
56
How do you recognize which pattern to use for Graph problems?
Connected components: BFS or DFS or Union-Find. Shortest path (unweighted): BFS. Shortest path (weighted): Dijkstra. Cycle detection (directed): DFS with recursion stack. Cycle detection (undirected): Union-Find or DFS. Topological sort: Kahn's (BFS in-degree) or DFS post-order. Number of islands (grid): DFS/BFS flood fill. Clone graph: DFS + HashMap (original->copy). Bipartite check: BFS two-coloring
57
How do you recognize when to use Dynamic Programming?
DP signals: find minimum/maximum count or total + true/false existence question + choices at each step + optimal substructure (optimal solution contains optimal solutions to subproblems) + overlapping subproblems (same subproblem computed multiple times in recursion). Common DP categories: 1D (Fibonacci - Climbing Stairs - House Robber). 2D (Unique Paths - Edit Distance - LCS). Knapsack (0/1 Knapsack - Coin Change). Interval DP (Burst Balloons - Matrix Chain). String DP (Longest Palindromic Subsequence)
58
What are the most important Dynamic Programming problems to know?
1D DP: Fibonacci + Climbing Stairs + House Robber + Maximum Subarray (Kadane) + Jump Game. Unbounded Knapsack: Coin Change + Combination Sum IV. 0/1 Knapsack: Partition Equal Subset Sum + Target Sum. LCS family: Longest Common Subsequence + Edit Distance + Longest Common Substring. LIS: Longest Increasing Subsequence. String: Palindromic Substrings + Longest Palindromic Subsequence. Matrix: Unique Paths + Minimum Path Sum. Interval: Burst Balloons
59
What is Kadane's Algorithm and when do you use it?
Kadane's Algorithm finds the maximum sum subarray in O(n) time O(1) space. Template: max_sum=nums[0]. curr=nums[0]. For num in nums[1:]: curr=max(num, curr+num). max_sum=max(max_sum, curr). Key insight: at each position decide: start fresh (num alone) or extend previous subarray (curr+num). Extend if curr+num > num i.e. curr>0. Use for: Maximum Subarray + Maximum Product Subarray (track both max and min) + Circular subarray max
60
How do you approach a problem you have never seen before in an interview?
REACTO framework: 1) Restate problem in your own words (confirm understanding). 2) Examples (walk through 2-3 examples including edge cases). 3) Approach (explain brute force first then optimize - state time/space for each). 4) Code (write clean readable code - name variables well). 5) Test (trace through examples + edge cases: empty input - single element - all same). 6) Optimize (discuss further improvements). Talk continuously. Never sit in silence. Partial solution > no solution
61
What are the most common edge cases to check in coding interviews?
Arrays/Strings: empty input + single element + all same elements + already sorted + reverse sorted + duplicates + negative numbers + overflow (int vs long). Linked Lists: null head + single node + cycle. Trees: null root + single node + skewed (all left or all right) + balanced. Numbers: zero + negative + max/min integer (Integer.MAX_VALUE overflow). Graphs: disconnected + self-loop + no edges. Matrix: 1x1 + single row + single column. Always ask: what are the constraints? Can input be null/empty?
62
What are the most important LeetCode patterns by company (FAANG)?
All companies: Arrays + HashMap + Two Pointers + Binary Search + BFS/DFS + DP. Google: emphasis on graphs + algorithms + complex DP + design. Amazon: leadership principles + arrays + trees + OOP design. Meta/Facebook: trees + graphs + dynamic programming + string manipulation. Microsoft: arrays + trees + system design. Apple: arrays + OOP + bit manipulation. Start with Blind 75 or NeetCode 150 which cover all critical patterns systematically
63
MEMORY BOOSTER: Pattern recognition
Sorted array -> Two Pointers or Binary Search. Subarray condition -> Sliding Window. Subarray sum -> Prefix Sum + HashMap. Next greater/smaller -> Monotonic Stack. Tree paths/structure -> DFS. Tree levels/shortest -> BFS. Detect cycle (list) -> Fast/Slow pointers. Top-K -> Heap. All possibilities -> Backtracking. Find median dynamically -> Two Heaps. Graph connectivity -> Union-Find or BFS/DFS. Course prereqs -> Topological Sort. Minimize/maximize with choices -> DP. Range sum O(1) -> Prefix Sum
64
How do you solve Two Sum?
Problem: find two indices in array that sum to target. Brute force: O(n²) nested loops. Optimal O(n): HashMap stores complement. For i, num in enumerate(nums): complement=target-num. If complement in seen: return [seen[complement], i]. seen[num]=i. Key insight: for each number - check if its complement was seen before. This is the canonical HashMap pattern. Variant Two Sum II (sorted array): Two Pointers from both ends.
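A runnable Python version of the HashMap template above (function name is illustrative):

```python
def two_sum(nums, target):
    # Map each value seen so far to its index.
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []  # no valid pair
```

One pass, O(n) time, O(n) space: each lookup and insert is O(1) average.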
65
How do you solve Longest Substring Without Repeating Characters?
Sliding Window + HashSet. left=0. seen=set(). max_len=0. For right in range(len(s)): while s[right] in seen: seen.remove(s[left]). left+=1. seen.add(s[right]). max_len=max(max_len, right-left+1). Return max_len. Optimization: use HashMap storing last index of each char to jump left pointer directly: left=max(left, last_seen[char]+1). O(n) time O(min(m, n)) space where m=charset size
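A runnable sketch of the optimized HashMap variant, where the left pointer jumps directly past the previous occurrence:

```python
def length_of_longest_substring(s):
    # last_seen maps char -> its most recent index in s.
    last_seen = {}
    left = max_len = 0
    for right, ch in enumerate(s):
        if ch in last_seen:
            # Jump left past the previous occurrence (never move it backward).
            left = max(left, last_seen[ch] + 1)
        last_seen[ch] = right
        max_len = max(max_len, right - left + 1)
    return max_len
```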
66
How do you solve Valid Parentheses?
Stack problem. stack=[]. mapping={')':'(', '}':'{', ']':'['}. For char in s: if char in mapping: if not stack or stack[-1]!=mapping[char]: return False. stack.pop(). Else: stack.append(char). Return not stack. Key: push opening brackets + on closing bracket check if top matches. Return true only if stack is empty at end. O(n) time O(n) space
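The same stack logic as runnable Python:

```python
def is_valid(s):
    mapping = {')': '(', '}': '{', ']': '['}
    stack = []
    for ch in s:
        if ch in mapping:
            # Closing bracket: top of stack must be the matching opener.
            if not stack or stack[-1] != mapping[ch]:
                return False
            stack.pop()
        else:
            stack.append(ch)
    return not stack  # leftover openers mean the string is invalid
```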
67
How do you solve Maximum Subarray (Kadane)?
Kadane's Algorithm. max_sum=curr=nums[0]. For num in nums[1:]: curr=max(num, curr+num). max_sum=max(max_sum, curr). Return max_sum. Meaning: curr = max subarray ending at current position. Reset to num alone if previous sum was negative (curr<0 means extending hurts). O(n) time O(1) space. Variant: if all negative -> max is max single element (handled correctly by this algorithm)
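The template above as runnable Python:

```python
def max_subarray(nums):
    # curr = best subarray sum ending at the current position.
    curr = max_sum = nums[0]
    for num in nums[1:]:
        curr = max(num, curr + num)   # start fresh vs. extend
        max_sum = max(max_sum, curr)
    return max_sum
```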
68
How do you solve Merge Intervals?
Sort by start time. result=[intervals[0]]. For start, end in intervals[1:]: if start<=result[-1][1]: result[-1][1]=max(result[-1][1], end). Else: result.append([start, end]). Return result. Key: after sorting - if current interval's start <= previous end -> overlap -> merge by taking max of ends. O(n log n) for sort + O(n) for merge pass = O(n log n) total
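A runnable sketch (copies the first interval so the input's sublists aren't mutated):

```python
def merge_intervals(intervals):
    intervals.sort(key=lambda iv: iv[0])
    result = [intervals[0][:]]
    for start, end in intervals[1:]:
        if start <= result[-1][1]:
            # Overlap with the last merged interval: extend its end.
            result[-1][1] = max(result[-1][1], end)
        else:
            result.append([start, end])
    return result
```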
69
How do you solve Course Schedule (cycle detection in directed graph)?
Topological sort (Kahn's). Build adjacency list + in-degree count. queue=[all nodes with in-degree 0]. count=0. While queue: node=queue.popleft(). count+=1. For neighbor of node: in_degree[neighbor]-=1. If in_degree[neighbor]==0: queue.append(neighbor). Return count==numCourses. If count < numCourses: cycle exists (some courses can never be reached). Alternatively: DFS with three states (0=unvisited, 1=visiting, 2=visited). Cycle if DFS reaches a node in state 1 (visiting)
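The Kahn's algorithm variant as runnable Python (prerequisites given as [course, prereq] pairs, LeetCode-style):

```python
from collections import deque

def can_finish(num_courses, prerequisites):
    adj = [[] for _ in range(num_courses)]
    in_degree = [0] * num_courses
    for course, prereq in prerequisites:
        adj[prereq].append(course)
        in_degree[course] += 1
    # Seed with all courses that have no prerequisites.
    queue = deque(n for n in range(num_courses) if in_degree[n] == 0)
    count = 0
    while queue:
        node = queue.popleft()
        count += 1
        for neighbor in adj[node]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)
    return count == num_courses  # fewer processed => cycle
```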
70
How do you solve Number of Islands?
DFS flood fill. For each cell in grid: if grid[r][c]=='1': count+=1. dfs(r, c). Return count. def dfs(r, c): if r<0 or r>=rows or c<0 or c>=cols or grid[r][c]!='1': return. grid[r][c]='0' (mark visited by overwriting). dfs(r+1, c). dfs(r-1, c). dfs(r, c+1). dfs(r, c-1). Key: modify grid in-place to mark visited (saves space) or use visited set. BFS alternative: same logic with queue. O(m*n) time and space
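The flood-fill sketch as runnable Python (mutates the grid in place to mark visited cells):

```python
def num_islands(grid):
    rows, cols = len(grid), len(grid[0])

    def dfs(r, c):
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != '1':
            return
        grid[r][c] = '0'  # sink the cell so it is never counted again
        dfs(r + 1, c)
        dfs(r - 1, c)
        dfs(r, c + 1)
        dfs(r, c - 1)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                count += 1
                dfs(r, c)
    return count
```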
71
How do you solve Word Break?
DP. dp=[False]*(n+1). dp[0]=True (empty string). For i in range(1, n+1): For j in range(i): if dp[j] and s[j:i] in word_set: dp[i]=True. Break. Return dp[n]. Key: dp[i]=True means s[0:i] can be segmented. Check all possible last words ending at i. word_set=set(wordDict) for O(1) lookup. O(n²*m) time where m=avg word length for slicing. Optimization: only check words of lengths that exist in dict
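The DP above as runnable Python:

```python
def word_break(s, word_dict):
    word_set = set(word_dict)  # O(1) membership checks
    n = len(s)
    dp = [False] * (n + 1)
    dp[0] = True  # empty prefix is always segmentable
    for i in range(1, n + 1):
        for j in range(i):
            # dp[j] means s[0:j] segments; s[j:i] would be the last word.
            if dp[j] and s[j:i] in word_set:
                dp[i] = True
                break
    return dp[n]
```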
72
How do you detect a cycle in a Linked List?
Floyd's Algorithm. slow=head. fast=head. While fast and fast.next: slow=slow.next. fast=fast.next.next. If slow==fast: return True. Return False. O(n) time O(1) space. To find cycle START: after detection keep slow at meeting point - reset fast to head. Move both one step at a time. They meet at cycle start. Explanation: distance from head to cycle start = distance from meeting point to cycle start
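Floyd's fast/slow pointer detection as runnable Python (with a minimal node class for illustration):

```python
class ListNode:
    def __init__(self, val=0):
        self.val = val
        self.next = None

def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next           # 1 step
        fast = fast.next.next      # 2 steps
        if slow is fast:           # pointers meet inside a cycle
            return True
    return False                   # fast fell off the end: no cycle
```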
73
How do you find the Kth largest element in an array?
Min-Heap of size k: heap=[]. For num in nums: heappush(heap, num). If len(heap)>k: heappop(heap). Return heap[0]. O(n log k). Quickselect O(n) average: partition like quicksort - recurse only on side containing the k-th element. Java: PriorityQueue minHeap = new PriorityQueue<>(). Add elements. If size>k: poll(). Return minHeap.peek(). Alternative: sort in O(n log n) and return nums[n-k] (simplest but slower)
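The min-heap approach as runnable Python:

```python
import heapq

def find_kth_largest(nums, k):
    # Keep a min-heap of the k largest elements seen so far;
    # its root is then the k-th largest overall.
    heap = []
    for num in nums:
        heapq.heappush(heap, num)
        if len(heap) > k:
            heapq.heappop(heap)  # evict the smallest of the k+1
    return heap[0]
```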
74
How do you solve Longest Common Subsequence (LCS)?
2D DP. dp=[[0]*(n+1) for _ in range(m+1)]. For i in range(1, m+1): For j in range(1, n+1): if text1[i-1]==text2[j-1]: dp[i][j]=dp[i-1][j-1]+1. Else: dp[i][j]=max(dp[i-1][j], dp[i][j-1]). Return dp[m][n]. Key: dp[i][j]=LCS of first i chars of text1 and first j chars of text2. Transition: if chars match -> extend diagonal. Else -> take max of skipping one char from either. O(m*n) time and space. Related: Edit Distance uses same structure
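The 2D DP as runnable Python:

```python
def longest_common_subsequence(text1, text2):
    m, n = len(text1), len(text2)
    # dp[i][j] = LCS length of text1[:i] and text2[:j].
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i - 1] == text2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1           # extend diagonal
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # skip one char
    return dp[m][n]
```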
75
How do you solve Coin Change (Minimum coins)?
Unbounded Knapsack DP. dp=[float('inf')]*(amount+1). dp[0]=0. For coin in coins: For amt in range(coin, amount+1): dp[amt]=min(dp[amt], dp[amt-coin]+1). Return dp[amount] if dp[amount]!=inf else -1. Key: dp[amt]=min coins to make amount amt. For each coin: update dp[amt] using dp[amt-coin] (use same coin again - unbounded). Bottom-up is cleaner. O(amount*coins) time O(amount) space
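The bottom-up DP as runnable Python:

```python
def coin_change(coins, amount):
    INF = float('inf')
    # dp[amt] = minimum coins needed to make amount amt.
    dp = [INF] * (amount + 1)
    dp[0] = 0
    for coin in coins:
        for amt in range(coin, amount + 1):
            # Unbounded: dp[amt - coin] may already use this same coin.
            dp[amt] = min(dp[amt], dp[amt - coin] + 1)
    return dp[amount] if dp[amount] != INF else -1
```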
76
MEMORY BOOSTER: Classic problem solutions
Two Sum: HashMap complement lookup O(n). Longest No-Repeat: Sliding Window + set/map O(n). Valid Parentheses: Stack match/mismatch O(n). Kadane (Max Subarray): curr=max(num, curr+num) O(n). Merge Intervals: sort by start + merge overlapping O(n log n). Course Schedule: Topological sort Kahn's (in-degree=0 queue) or DFS 3-states. Number of Islands: DFS flood fill mark '0'. Word Break: DP dp[i]=can segment first i chars. LCS: dp[i][j]=match?diagonal+1:max(up, left). Coin Change: dp[amt]=min(dp[amt], dp[amt-coin]+1)
77
What is the framework for answering a System Design interview question?
RADIO framework: 1) Requirements (functional: what system does + non-functional: scale-latency-availability). 2) Architecture (high-level component diagram - don't code yet). 3) Data model (entities - DB choice - schema). 4) Interface (API design - key endpoints). 5) Optimizations (deep dive on bottlenecks - caching - sharding - load balancing). Spend: 5min requirements + 10min architecture + 10min data + 5min API + 20min deep dive. Drive the conversation - don't wait to be asked
78
What numbers should you know for System Design interviews?
Latency: L1 cache 1ns + RAM 100ns + SSD 100µs + Network 1ms (same DC) + HDD 10ms + Cross-continent 150ms. Throughput: single server handles ~1K-10K req/s. Storage: 1 char=1 byte + 1 photo~1MB + 1 video minute~10MB + 1 tweet~300 bytes. Scale: 1M users * 1 action/day = 12 requests/second. 100M DAU * 10 actions = 11K req/s -> need multiple servers. Availability: 99.9% = 8.7h downtime/year. 99.99% = 52min. 99.999% = 5min
79
What are the key components of a scalable system design?
DNS (domain resolution) + CDN (static content delivery) + Load Balancer (distribute traffic + health checks) + Web Servers (stateless app servers) + Cache (Redis/Memcached - reduce DB load) + Database (primary + read replicas + sharding for massive scale) + Message Queue (Kafka/SQS - async processing + decouple) + Object Storage (S3 - files/images/videos) + Search (Elasticsearch - full text) + Monitoring (metrics + alerts + distributed tracing)
80
How do you scale a database in system design?
Vertical scaling (bigger machine - limited). Read replicas (scale reads - async replication - eventual consistency). Caching (Redis in front of DB - cache frequent reads - cache-aside pattern). Sharding (horizontal partitioning - split data across multiple DB instances by user_id or geographic region - enables write scaling). CQRS (separate read DB from write DB). Denormalization (reduce joins at query time). Choose NoSQL (Cassandra/DynamoDB) for massive write-heavy workloads
81
What are the differences between SQL and NoSQL in system design?
SQL (PostgreSQL/MySQL): ACID + strong consistency + complex queries/joins + normalized schema + vertical scaling. Use for: financial transactions + complex relationships + reporting. NoSQL types: Document (MongoDB - flexible schema - JSON) + Key-Value (Redis/DynamoDB - O(1) get/put) + Column (Cassandra - high write throughput - time-series) + Graph (Neo4j - relationship queries). Use NoSQL for: massive scale + simple access patterns + flexible schema + horizontal scaling
82
What is the CAP Theorem in system design?
CAP: Consistency (all nodes see same data simultaneously) + Availability (every request gets response) + Partition Tolerance (system works despite network partitions). Theorem: can only guarantee 2 of 3. In practice: network partitions WILL happen so choose CP or AP. CP (sacrifice availability): HBase + MongoDB + Zookeeper - used for: financial systems + inventory. AP (sacrifice strong consistency): DynamoDB + Cassandra + CouchDB - used for: social media + recommendations. Most systems: AP with eventual consistency
83
What is consistent hashing and when is it used?
Consistent hashing distributes keys across nodes so adding/removing a node only remaps a small fraction of keys (vs naive hash % n which remaps almost all). Hash ring: nodes placed at positions on a circle. Key maps to nearest clockwise node. Add node: only keys between new node and predecessor remapped. Remove node: only that node's keys reassigned. Used for: distributed caches (Redis Cluster) + load balancers + distributed databases + Dynamo-style systems. Virtual nodes improve balance
84
What is the difference between horizontal and vertical scaling in system design?
Vertical scaling (Scale Up): add more resources to existing server (more CPU - RAM - SSD). Simple - no code changes - but has limits + single point of failure + expensive at high end. Horizontal scaling (Scale Out): add more servers to pool. No theoretical limit + fault tolerant + cost-effective with commodity hardware + requires: stateless services + load balancer + distributed data. Cloud makes horizontal scaling easy (Auto Scaling Groups). Microservices are designed for horizontal scaling
85
What is a Cache and how do you use it in system design?
Cache stores frequently-read data in fast storage (RAM). Strategies: Cache-Aside (app checks cache -> miss -> load from DB -> store in cache). Write-Through (write to cache + DB simultaneously). Write-Back (write to cache -> async to DB). Read-Through (cache loads from DB on miss). Cache eviction: LRU (least recently used) + LFU (least frequently used). Cache invalidation: TTL + event-based. Tools: Redis (rich data types + persistence) + Memcached (simpler + multi-threaded). Avoid: caching mutable data without invalidation strategy
86
How do you design a Rate Limiter?
Algorithms: Token Bucket (tokens refill at fixed rate - allows bursts - most common) + Fixed Window Counter (count requests per window - simple - boundary problem) + Sliding Window Log (accurate - memory intensive) + Sliding Window Counter (sliding log + fixed window hybrid). Storage: Redis (atomic operations - INCR + EXPIRE). Architecture: rate limiter as middleware + rules stored in cache + client gets 429 Too Many Requests with Retry-After header. Distributed: use Redis with Lua scripts for atomicity across servers
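A minimal in-memory Token Bucket sketch to show the refill-and-spend logic (a real deployment would keep this state in Redis, as the card notes; class and method names are illustrative):

```python
import time

class TokenBucket:
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity          # max burst size
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller would respond 429 Too Many Requests
```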
87
What is the design of a URL Shortener (TinyURL)?
Requirements: create short URL + redirect + analytics. Data model: id + long_url + short_code + created_at + user_id. Short code generation: MD5/SHA256 of long URL take first 7 chars (handle collisions) or auto-increment ID + Base62 encode (a-z A-Z 0-9 = 62 chars - 7 chars = 62^7 ≈ 3.5 trillion URLs). Storage: KV store (DynamoDB/Redis) is perfect (key=short_code value=long_url). Redirection: HTTP 301 (permanent - cached by browser) or 302 (temporary - trackable). Scale: cache popular URLs in Redis
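A runnable sketch of the Base62 encoding step for an auto-increment ID (the alphabet ordering is a free choice; here digits come first):

```python
import string

# 10 digits + 26 lowercase + 26 uppercase = 62 characters.
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def base62_encode(num):
    if num == 0:
        return ALPHABET[0]
    out = []
    while num:
        num, rem = divmod(num, 62)
        out.append(ALPHABET[rem])
    return ''.join(reversed(out))
```

With 7 characters this covers 62^7 ≈ 3.5 trillion distinct codes, matching the capacity estimate above.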
88
What is the design of a Rate-Limited API (common interview question)?
Components: API Gateway (entry point) + Rate Limiter Service (Redis-backed) + Application Servers + Database. Flow: request arrives -> API Gateway checks rate limiter -> if allowed: forward to app server -> if exceeded: return 429. Rate limiter: Redis INCR counter with EXPIRE. Sliding window: use Redis sorted set (score=timestamp - remove old entries - count remaining). Headers: X-RateLimit-Limit + X-RateLimit-Remaining + X-RateLimit-Reset. Distributed: all API Gateway nodes share same Redis cluster
89
MEMORY BOOSTER: System design fundamentals
Framework: Requirements -> Architecture -> Data model -> API -> Optimizations. Scale numbers: 100M DAU * 10 actions = ~11K req/s. Components: DNS + CDN + LB + Stateless App Servers + Cache + DB + Queue + Object Storage. DB scaling: read replicas + caching + sharding. CAP: choose CP (consistency) or AP (availability) - partition tolerance always required. Consistent hashing: add/remove node remaps only neighbors. Cache strategies: Cache-Aside (most common). Rate Limiter: Token Bucket + Redis. URL shortener: Base62 encoding of auto-increment ID