Lecture 6 • 2017/01/26
  • Disjoint Sets
  • Connected components – set of nodes connected by a path
    • Every node in the set can be reached by every other node (path itself is irrelevant)
  • Partition – set of objects split into disjoint subsets
    • The union of all sets will produce the original set
    • No two sets share a common node unless those sets are the same set; every set is disjoint from all the other sets
  • Map vs Relation
    • Maps lay out a unidirectional property from elements in one set to the other
    • Relation defines a bidirectional connection (ie boolean matrix)
  • Equivalence – i is equivalent to j if they belong to the same set (are connected)
    • Reflexivity ∀ a ∈ S, (a, a) ∈ R
      • For all u ∈ V, there is a path of length 0 from u to u
    • Symmetry ∀ a, b ∈ S, (a, b) ∈ R ⇒ (b, a) ∈ R
      • For all u, v ∈ V, there is a path from u to v iff there is a path from v to u
    • Transitivity ∀ a, b, c ∈ S, (a, b) ∈ R ∧ (b, c) ∈ R ⇒ (a, c) ∈ R
      • For all u, v, w ∈ V, if there is a path from u to v and a path from v to w, there is a path from u to w
  • ADT (abstract data type)
    • find(i) returns representative of set that contains i
    • sameset(i, j) returns find(i) == find(j)
    • union(i, j) merges sets containing i and j
      • Does nothing if they are already in the same set
  • When merging trees, smaller tree should be merged below root of larger tree to minimize height; height will therefore only increase when the trees initially have the same height
    • Rank – upper bound on height of nodes
  • Path Compression – make all nodes in find path direct children of root
     * Pseudocode for Disjoint Set Optimization
    public class Disjoint {
         * Finds root node of the subset containing i
         * Also undergoes path compression, where the nodes on the
         * find path become direct children of the root
         * Worst case is O(logn) (with union by rank, the height is at most logn)
         * @param i node contained in subset
         * @return root node
        Node find(i) {
            if (i.parent == i) return i //i is the root
            return i.parent = find(i.parent) //set parent of i to root & return the root
        }
    }
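The find above pairs with union by rank. As a concrete, runnable sketch (an assumption of how the ADT could be laid out, using an int array instead of the lecture's Node objects):

```java
// Minimal array-based disjoint-set sketch with union by rank and
// path compression; the class name and array layout are assumptions.
public class DisjointSet {
    private final int[] parent;
    private final int[] rank; // upper bound on a node's height

    public DisjointSet(int n) {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i; // each node starts as its own set
    }

    // find with path compression: nodes on the path become children of the root
    public int find(int i) {
        if (parent[i] != i) parent[i] = find(parent[i]);
        return parent[i];
    }

    public boolean sameSet(int i, int j) {
        return find(i) == find(j);
    }

    // union by rank: attach the shorter tree below the taller one's root;
    // rank only grows when both trees have equal rank
    public void union(int i, int j) {
        int ri = find(i), rj = find(j);
        if (ri == rj) return; // already in the same set
        if (rank[ri] < rank[rj]) {
            parent[ri] = rj;
        } else if (rank[ri] > rank[rj]) {
            parent[rj] = ri;
        } else {
            parent[rj] = ri;
            rank[ri]++;
        }
    }
}
```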
Lecture 7 • 2017/01/31
  • Greedy Strategy – when offered a choice, pick the one that seems best at the moment in hopes of optimizing the overall solution
    • Prove that when given a choice, one of the optimal choices is the greedy choice; it is therefore always safe to make the greedy choice
    • Show all but one of the sub-problems resulting from the greedy choice are empty
  • Activity Selection Problem
    • Given a set S of n activities, a1, a2, …, an
      • With si = start time of activity i
      • And fi = finish time of activity i
      • What is the maximum number of “compatible activities”?
        • 2 activities are compatible if their intervals do not overlap
        • We wish to return the biggest valid subset (there may be multiple solutions, but we’ll find one of them)
    • Optimal Substructure
      • Let Sij = subset of activities in S that start after ai finishes and finish before aj starts
        • Sij = { ak ∈ S : fi ≤ sk < fk ≤ sj }
      • Aij = optimal solution = Aik ∪ {ak} ∪ Akj
    • Greedy Choice
      • Let Sij ≠ ∅
      • Let am be an activity in Sij where fm = min{ fk: ak ∈ Sij }
      • We know that am is used in one of the optimal solutions
        • Take any optimal solution without am and replace the first activity with it; it is still a valid solution since fm ≤ all other finish times
      • Sim = ∅, so choosing am leaves Smj as the only nonempty subproblem
    • Greedy Solution
      • Take the activity with the earliest finishing time and add it to the set. Continue with the remaining time frame (after the current finishing time) and repeat until there are no other valid activities.
  • Pseudocode
     * Algorithm to select a subset containing the greatest number
     * of compatible activities, where two activities are compatible
     * when there are no time conflicts
    public class MaxCountActivitySelector {
         * Recursively get a valid solution
         * Called through recursiveSelector(S, 0), where S[0] is a
         * sentinel activity with finish time 0
         * @param S set of all activities, in order of finish time
         * @param i index of latest added activity
         * @return set containing the activities in our solution
        Set recursiveSelector(S, i) {
            m = i + 1 //get index of next activity
            while (m < S.size && S[m].start < S[i].finish)
                m++ //find first activity compatible with S[i]
            if (m < S.size) return S[m] + recursiveSelector(S, m) //got first element; find rest
            return emptySet //no more valid activities, close off set
        }
         * Iteratively get a valid solution
         * @param S set of all activities, in order of finish time
         * @return set containing the activities in our solution
        Set iterativeSelector(S) {
            result = emptySet
            currentFinish = -1 //at first we accept the very first activity with the first finish
            for (i = 0; i < S.size; i++) {
                if (S[i].start >= currentFinish) { //valid activity, add to set
                    result += S[i]
                    currentFinish = S[i].finish //set new finish
                }
                //otherwise, activity starts before last one in set ends
                //cannot be added to the set, so continue searching
            }
            return result
        }
    }
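The iterative selector above can also be written as runnable Java; the {start, finish} array layout and the class name are illustrative assumptions:

```java
// Runnable sketch of the iterative greedy activity selector.
// Activities are {start, finish} pairs already sorted by finish time;
// returns how many compatible activities are chosen.
public class ActivitySelection {
    static int maxCompatible(int[][] activities) {
        int count = 0;
        int currentFinish = Integer.MIN_VALUE; // accept the first activity
        for (int[] a : activities) {
            if (a[0] >= currentFinish) { // starts after the last chosen one ends
                count++;
                currentFinish = a[1];    // greedy: track earliest finishing time
            }
        }
        return count;
    }
}
```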
  • Typical Steps
    • Cast optimization problem as one in which we make a choice resulting in a subproblem
    • Prove that there is always an optimal solution that makes the greedy choice
    • Show that greedy choice & optimal solution to subproblem ⇒ optimal solution
    • Make greedy choice & solve top-down
    • May preprocess input (eg sorting activities by finish time)
  • Text Compression
    • Given a string X, efficiently encode X into a smaller string Y
      • Saves memory and/or bandwidth
    • Huffman encoding
      • Compute frequency f(c) for each character c
      • Encode high-frequency chars with short code words
      • No code word is a prefix for another code
      • Use optimal encoding tree to determine code words
    • Terms
      • Code – mapping of char to binary
      • Prefix code – binary code such that no code-word is prefix of another code-word
      • Encoding tree – tree representing prefix code
        • Each leaf stores a character (other nodes do not have chars)
        • Code word given by path from root to external node
          • Left child = 0, right child = 1
  • Pseudocode
     * Huffman's Algorithm for encoding Strings
     * Runs in O(n + d logd)
     * Where
     *  n   size of X
     *  d   # of distinct characters in X
     * Uses heap-based priority queue
    public class Huffman {
         * Generate trie representing encoding
         * Basic procedure:
         * Get the two chars with the smallest frequencies
         * Make a node with those two chars as children
         * & with its value being the summation of the two frequencies
         * Repeat with the remaining chars and the new node(s)
         * Once there is one node with all the chars mapped out,
         * you have found your trie
         * @param X string input to encode
         * @return generated trie
        Trie generateEncodingHeap(X) {
            Q = new Heap //empty min heap
            freq = distinctCharactersWithFrequencies(X)
            //maps each distinct char in X to its frequency in X
            for (CharFreq c : freq) { //for every unique char
                T = new Node(c.char) //make node storing the char
                Q.insert(c.frequency, T) //insert node keyed by its frequency
            }
            while (Q.size > 1) {
                f1 = Q.minKey() //get smallest frequency
                t1 = Q.removeMin() //get node with that frequency & remove it
                f2 = Q.minKey() //get next smallest frequency
                t2 = Q.removeMin() //get node with that frequency & remove it
                T = join(t1, t2) //combine two into one node
                Q.insert(f1 + f2, T) //add back to heap keyed by combined frequency
            }
            return Q.removeMin() //return the resulting trie
        }
    }
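A runnable sketch of the same procedure, using java.util.PriorityQueue in place of the heap above; class, field, and method names here are assumptions, not the lecture's code:

```java
// Huffman tree construction sketch: repeatedly join the two
// lowest-frequency nodes, then read code words off the tree
// (left edge = 0, right edge = 1).
import java.util.*;

public class HuffmanSketch {
    static class Node {
        char ch; long freq; Node left, right;
        Node(char ch, long freq) { this.ch = ch; this.freq = freq; }
        Node(Node l, Node r) { this.freq = l.freq + r.freq; left = l; right = r; }
        boolean isLeaf() { return left == null && right == null; }
    }

    // Build the encoding tree from character frequencies of x
    static Node buildTree(String x) {
        Map<Character, Long> freq = new HashMap<>();
        for (char c : x.toCharArray()) freq.merge(c, 1L, Long::sum);
        PriorityQueue<Node> q = new PriorityQueue<>(Comparator.comparingLong((Node n) -> n.freq));
        for (Map.Entry<Character, Long> e : freq.entrySet())
            q.add(new Node(e.getKey(), e.getValue()));
        while (q.size() > 1) q.add(new Node(q.poll(), q.poll())); // join two smallest
        return q.poll();
    }

    // Collect the prefix code: path from root to leaf gives the code word
    static void codes(Node n, String prefix, Map<Character, String> out) {
        if (n.isLeaf()) { out.put(n.ch, prefix.isEmpty() ? "0" : prefix); return; }
        codes(n.left, prefix + "0", out);
        codes(n.right, prefix + "1", out);
    }

    static Map<Character, String> codesFor(String x) {
        Map<Character, String> out = new HashMap<>();
        codes(buildTree(x), "", out);
        return out;
    }
}
```

High-frequency characters end up closer to the root, so they receive shorter code words.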
Lecture 8 • 2017/02/02
  • Graph G = (V, E)
    • V – set of vertices
    • E – set of edges ⊆ (V x V)
  • Types
    • Undirected – edge (u, v) = (v, u) & there are no self loops
    • Directed – (u, v) is edge from u to v, or u → v; self loops allowed
    • Weighted – each edge has associated weight, given as a function w: E → R
    • Dense – |E| ≈ |V|²
    • Sparse – |E| << |V|²
  • |E| = O(|V|²)
  • Properties
    • If (u, v) ∈ E, then vertex v is adjacent to vertex u
      • Symmetric (reverse applies) if G is undirected
      • Not necessarily true if G is directed
    • If G is connected
      • There is a path between every pair of vertices
      • |E| ≥ |V| – 1
      • If |E| = |V| – 1, G is a tree
  • Vocabulary
    • Ingoing edges of u: { (v, u) ∈ E } edges pointing directly to u
    • Outgoing edges of u: { (u, v) ∈ E } edges pointing directly out from u
    • In-degree(u): |in(u)|
    • Out-degree(u): |out(u)|
  • Representations
    • Adjacency Lists
      • Array Adj of |V| lists
      • Every vertex has a list of adjacent vertices
      • If weighted, store weights within the adjacency lists
      • Space efficient when graph is sparse
      • Determining whether an edge (u, v) is in E is not efficient
        • Needs to search in u’s adjacency list. Θ(degree(u)) time
        • Θ(V) in worst case
    • Adjacency Matrix
      • |V| x |V| matrix A
      • A[i, j] = aij = (i, j) ∈ E ? 1 : 0
      • Can store weights instead of bits for weighted graphs
      • A = AT for undirected graphs
      • Space is Θ(V²) – not efficient for sparse graphs
      • Time to list all vertices adjacent to u: Θ(V)
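The two representations can be sketched as follows; the edge-array input format and class name are illustrative assumptions:

```java
// Sketch of the two graph representations for a directed graph.
import java.util.*;

public class GraphRepresentations {
    // Adjacency list: one list of adjacent vertices per vertex. Space Θ(V + E).
    static List<List<Integer>> adjacencyList(int v, int[][] edges) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < v; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) adj.get(e[0]).add(e[1]);
        return adj;
    }

    // Adjacency matrix: a[i][j] == true iff (i, j) ∈ E. Space Θ(V²),
    // but edge membership checks are O(1).
    static boolean[][] adjacencyMatrix(int v, int[][] edges) {
        boolean[][] a = new boolean[v][v];
        for (int[] e : edges) a[e[0]][e[1]] = true;
        return a;
    }
}
```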
  • BFS DFS Pseudocode (Comp 250)
  • BFS – breadth-first search
    • Find all vertices on level n before proceeding to n + 1
    • Vertex is discovered the first time it is encountered during search
    • Vertex is finished if all vertices adjacent to it have been discovered
    • Colours
      • White – undiscovered
      • Gray – discovered & not finished
      • Black – finished
    • Result (given the graph G = (V, E) and source vertex s ∈ V)
      • d[v] = smallest # of edges from s to v for all v ∈ V
        • ∞ if v is not reachable from s
      • π[v] = u such that (u, v) is last edge on shortest path from s to v
        • u is v’s predecessor
      • Breadth-first tree with root s containing all reachable vertices
    • Time Complexity
      • Total runtime: O(V + E)
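The BFS result d[v] can be sketched in runnable form, using -1 in place of ∞ for unreachable vertices (names and layout are assumptions):

```java
// BFS sketch on an adjacency-list graph: computes d[v], the smallest
// number of edges from source s to v; -1 marks unreachable (∞).
import java.util.*;

public class BFS {
    static int[] distances(List<List<Integer>> adj, int s) {
        int[] d = new int[adj.size()];
        Arrays.fill(d, -1);           // -1 = white / unreached
        Deque<Integer> queue = new ArrayDeque<>();
        d[s] = 0;                     // source discovered at level 0
        queue.add(s);
        while (!queue.isEmpty()) {
            int u = queue.poll();     // u finishes once its neighbours are scanned
            for (int v : adj.get(u)) {
                if (d[v] == -1) {     // white vertex: discover it one level deeper
                    d[v] = d[u] + 1;
                    queue.add(v);
                }
            }
        }
        return d;
    }
}
```

Every vertex enters the queue at most once and every edge is scanned once, giving the O(V + E) total runtime.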
  • DFS – depth-first search
    • Explore all edges out of most recent vertex v before backtracking and exploring other vertices
    • Continue until all reachable vertices from original source are discovered
      • If any undiscovered vertices remain, pick one as a new source and repeat DFS
    • Result (given the graph G = (V, E) and source vertex s ∈ V)
      • 2 timestamps on each vertex, with integer values b/t 1 & 2|V|
      • d[v] = discovery time (v turns from white to gray)
      • f[v] = finishing time (v turns from gray to black)
      • π[v] = predecessor of v = u, such that v was discovered during scan of u’s adjacency list
    • Time Complexity
      • Total runtime: Θ(V + E)
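A runnable sketch of DFS timestamping (names and the returned array layout are assumptions):

```java
// DFS sketch recording discovery and finishing timestamps d[v], f[v],
// with values between 1 and 2|V|. Restarts from any undiscovered
// vertex, so every vertex gets both timestamps.
import java.util.*;

public class DFS {
    static int time = 0;

    static void dfs(List<List<Integer>> adj, int u, int[] d, int[] f) {
        d[u] = ++time;                            // u turns from white to gray
        for (int v : adj.get(u))
            if (d[v] == 0) dfs(adj, v, d, f);     // white neighbour: recurse
        f[u] = ++time;                            // u turns from gray to black
    }

    // Returns {d, f}: discovery and finishing times for each vertex
    static int[][] timestamps(List<List<Integer>> adj) {
        int n = adj.size();
        time = 0;
        int[] d = new int[n], f = new int[n];
        for (int u = 0; u < n; u++)
            if (d[u] == 0) dfs(adj, u, d, f);     // pick a new source if needed
        return new int[][]{d, f};
    }
}
```

On the chain 0 → 1 → 2, vertex 1's interval [d[1], f[1]] nests strictly inside vertex 0's, illustrating the parenthesis theorem below.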
  • Parenthesis Theorem
    • Theorem 1 – for all u, v, exactly one of the following holds
      • d[u] < f[u] < d[v] < f[v] or d[v] < f[v] < d[u] < f[u]
        • And neither u nor v is a descendant of the other
      • d[u] < d[v] < f[v] < f[u]
        • And v is a descendant of u
      • d[v] < d[u] < f[u] < f[v]
        • And u is a descendant of v
    • d[u] < d[v] < f[u] < f[v] cannot happen
Lecture 9 • 2017/02/07
  • White-path Theorem
    • Theorem 2 – v is a descendant of u iff at time d[u], there is a path u → v consisting of only white vertices (except for u, which was just colored gray)
    • Classification of Edges
      • Tree Edge – in depth-first forest (paths taken in DFS); found by exploring (u, v)
      • Back Edge – (u, v), where u is descendant of v (in depth-first tree); forms cycles w/ tree edges; self loops are back edges
      • Forward Edge – (u, v), where v is descendant of u, but not a tree edge
      • Cross Edge – any other edge; can go between vertices in same or different depth-first tree(s)
    • Theorem 3 – in DFS of undirected graph, we get only tree & back edges; no forward or cross edges
  • DAG – Directed Acyclic Graph
    • Graph without cycles
    • Good for modeling processes & structures with partial order
      • a > b & b > c ⇒ a > c
      • Not all orders are in graph
    • Can always make a total order (valid comparison for all unique item pairs) from a partial order (the total order may not be unique, but one always exists)
    • Lemma 1 – directed graph G is acyclic iff a DFS of G yields no back edges
  • DAG to list – finding valid total order
    • Use DFS with timestamps (starting from any node)
    • Add item to the front of the list when it is finished
    • You will notice that the finishing time is strictly decreasing
    • Correctness proof – if (u, v) ∈ E, then f[v] < f[u]
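The DAG-to-list procedure above can be sketched as follows (assuming the input is a DAG; names are illustrative):

```java
// Topological sort sketch: run DFS and prepend each vertex to the
// list when it finishes, so finishing times are strictly decreasing
// along the resulting list.
import java.util.*;

public class TopoSort {
    static List<Integer> sort(List<List<Integer>> adj) {
        int n = adj.size();
        boolean[] visited = new boolean[n];
        LinkedList<Integer> order = new LinkedList<>();
        for (int u = 0; u < n; u++)          // start from any node
            if (!visited[u]) dfs(adj, u, visited, order);
        return order;
    }

    static void dfs(List<List<Integer>> adj, int u, boolean[] visited, LinkedList<Integer> order) {
        visited[u] = true;
        for (int v : adj.get(u))
            if (!visited[v]) dfs(adj, v, visited, order);
        order.addFirst(u); // finished: add to the front of the list
    }
}
```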
  • SCC – Strongly Connected Components
    • G is strongly connected if, for every pair of vertices u, v ∈ V, each is reachable from the other
    • An SCC is a maximal set of vertices C ⊆ V such that ∀ u, v ∈ C, paths u ↝ v & v ↝ u exist
  • Component Graph
    • GSCC = (VSCC, ESCC)
    • VSCC has one vertex for each SCC in G
    • ESCC has an edge if there is an edge between the corresponding SCC’s in G
  • GSCC is a DAG
  • Lemma 2 – for two distinct SCC’s in G, if there is a path connecting SCC1 to SCC2, there cannot be a path connecting SCC2 to SCC1
    • Otherwise, they would not be separate SCC’s
  • Transpose of Directed Graph
    • GT = transpose of directed G
      • GT = (V, ET), ET = {(u, v): (v, u) ∈ E}
      • GT is G with all edges reversed
    • Creation time of Θ(V + E) using adjacency lists
    • G & GT have same SCC’s
  • SCC Algorithm
    • SCC(G) – Θ(V + E)
    • Call DFS(G) to find f[u] for all u
    • Compute GT
    • Call DFS(GT) using decreasing f[u] order (found in first DFS)
      • Start with some x ∈ C such that f(C) is maximum
    • Output vertices in each tree of the depth-first forest formed in second DFS as separate SCC
    • Works because we are visiting vertices of component graph in topologically sorted order
      • Running DFS on GT means we will not visit any v from u where v & u are in different components
      • Can only reach vertices in its SCC and vertices in SCC's already visited in second DFS
    • Lemma 3 – let C & C’ be distinct SCC’s in G = (V, E); if (u, v) ∈ E, u ∈ C, and v ∈ C’, then f(C) > f(C’)
      • Corollary – if (u, v) ∈ ET, f(C) < f(C’)
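The three steps of the SCC algorithm can be sketched in runnable form (a Kosaraju-style implementation; names are assumptions):

```java
// SCC sketch: DFS on G for finishing order, transpose to get Gᵀ,
// then DFS on Gᵀ in decreasing f[u] order; each depth-first tree
// of the second DFS is one strongly connected component.
import java.util.*;

public class SCC {
    static List<List<Integer>> components(List<List<Integer>> adj) {
        int n = adj.size();
        // First DFS: record vertices in order of finishing time
        boolean[] seen = new boolean[n];
        Deque<Integer> byFinish = new ArrayDeque<>();
        for (int u = 0; u < n; u++)
            if (!seen[u]) dfs(adj, u, seen, byFinish);
        // Transpose G in Θ(V + E): reverse every edge
        List<List<Integer>> rev = new ArrayList<>();
        for (int i = 0; i < n; i++) rev.add(new ArrayList<>());
        for (int u = 0; u < n; u++)
            for (int v : adj.get(u)) rev.get(v).add(u);
        // Second DFS on Gᵀ, taking roots in decreasing finishing-time order
        Arrays.fill(seen, false);
        List<List<Integer>> sccs = new ArrayList<>();
        while (!byFinish.isEmpty()) {
            int u = byFinish.pop();
            if (!seen[u]) {
                Deque<Integer> tree = new ArrayDeque<>();
                dfs(rev, u, seen, tree); // one depth-first tree = one SCC
                sccs.add(new ArrayList<>(tree));
            }
        }
        return sccs;
    }

    static void dfs(List<List<Integer>> adj, int u, boolean[] seen, Deque<Integer> out) {
        seen[u] = true;
        for (int v : adj.get(u))
            if (!seen[v]) dfs(adj, v, seen, out);
        out.push(u); // record u at its finishing time
    }
}
```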
Lecture 10 • 2017/02/09
  • MST – Minimum Spanning Tree
    • Has |V| – 1 edges
    • Has no cycles
    • Might not be unique
  • Generic algorithm
    • Start with empty set A
    • While A is not a spanning tree, find a safe edge (u, v) and add it to A
    • Invariant: A is always a subset of some MST; adding a safe edge preserves this
  • Definitions
    • A cut partitions vertices into disjoint sets, S & V – S
    • An edge crosses a cut if one endpoint is in S & the other is in V – S
    • A light edge is the cross edge with minimal weight (may not be unique)
    • A cut respects A iff no edge in A crosses cut
  • Theorem 1 – safe edge – let (S, V – S) be any cut that respects A; the light edge (u, v) crossing the cut is safe for A
  • Kruskal’s Algorithm
    • Starts with each vertex in its own component
    • Repeatedly merge two components into one by connecting them through a light edge
    • Scans set of edges in monotonically increasing order by weight
    • Uses disjoint-set data structure to determine whether an edge connects vertices in different components
    • Time Complexity
      • Initialize A: O(1)
      • First for loop: |V| MAKE-SETs
      • Sort E: O(E logE)
      • Second for loop: O(E) FIND-SETs and UNIONs
      • Total: O(E logV)
      • Notice that |E| ≤ |V|² ⇒ O(logE) = O(2 logV) = O(logV)
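Kruskal's algorithm can be sketched in runnable form, with a minimal disjoint set inlined (the {u, v, weight} edge layout and names are assumptions):

```java
// Kruskal sketch: sort edges by weight, then merge components
// through light edges using a small disjoint-set structure.
import java.util.*;

public class Kruskal {
    static int[] parent;

    static int find(int i) {
        if (parent[i] != i) parent[i] = find(parent[i]); // path compression
        return parent[i];
    }

    // Edges given as {u, v, weight}; returns the total weight of the MST
    static int mstWeight(int n, int[][] edges) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;                    // |V| MAKE-SETs
        Arrays.sort(edges, Comparator.comparingInt((int[] e) -> e[2])); // O(E logE)
        int total = 0;
        for (int[] e : edges) {                       // O(E) FIND-SETs and UNIONs
            if (find(e[0]) != find(e[1])) {           // light edge between two components
                parent[find(e[0])] = find(e[1]);      // merge the components
                total += e[2];
            }
        }
        return total;
    }
}
```

On a triangle with weights 1, 2, 3, the two lightest edges are taken and the heaviest is skipped because its endpoints are already connected.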
  • Prim’s Algorithm
    • Builds one tree, so A is always a tree
    • Start from arbitrary “root” r
    • For each step, add light edge crossing cut (VA, V – VA) to A
      • VA = vertices A is incident on
  • Finding light edge
    • Use priority queue Q (which supports the following in O(log n))
      • Insert(Q, u, key) – insert u with key value key in Q
      • u = extractMin(Q) – extract item with minimum key value in Q
      • decreaseKey(Q, u, newKey) – decrease u’s key value to newKey
    • Each object in Q is vertex in V – VA
    • Key of v has minimum weight of any edge (u, v) where u ∈ VA
    • Vertex returned is v where (u, v) is light edge crossing (VA, V – VA) and u ∈ VA
    • If no such edge exists, v’s key is infinity
    • Pseudocode
       * Prim's Algorithm for finding MST (minimum spanning tree)
       * Complexity
       * Using binary heaps       O(E logV)
       * Initialization           O(V)
       * Building initial queue   O(V)
       * V extractMin             O(V logV)
       * E decreaseKey            O(E logV)
       * Fibonacci heaps          O(E + V logV)
      public class Prim {
          MST findMST(graph, root) {
              MST mst = new MST()
              Q = new PriorityQueue() //empty min-priority queue
              for (Vertex u : graph.vertices) { //insert every vertex in graph
                  u.key = INFINITY //key of vertex
                  u.pi = NIL //no predecessor yet
                  insert(Q, u) //add to queue
              }
              decreaseKey(Q, root, 0) //start with arbitrary root; weight is 0
              while (!Q.isEmpty()) { //loop until Q is empty
                  u = extractMin(Q)
                  for (Vertex v : u.adjacentVertices) {
                      if (Q.contains(v) && weight(u, v) < v.key) { //update adjacent nodes
                          v.pi = u //set predecessor of v
                          decreaseKey(Q, v, weight(u, v)) //set new weight for v
                      }
                  }
                  mst.add(u) //add extracted vertex & its edge (pi[u], u) to mst
              }
              return mst
          }
      }