
Lecture 11 • 2017/02/14
  • Shortest path from u to v – the path from u to v whose total edge weight is smallest among all paths from u to v
  • Variants
    • Single-source – shortest path from given source vertex to every other vertex ∈ V
    • Single-destination – shortest path from every vertex to a given destination vertex
    • Single-pair – shortest path from u to v
    • All-pairs – find shortest path from u to v for all u, v ∈ V
  • Negative weight edges
    • Can be problematic
    • If there is a negative-weight cycle, we can keep going around it, resulting in δ(s, v) = -∞
    • Only problematic if reachable from source
    • Some algorithms work only if there are no negative-weight edges; must specify
  • Cycles
    • Shortest paths cannot contain cycles
    • A shortest path cannot contain a negative-weight cycle (δ(s, v) would be -∞)
    • Zero-weight cycles can be removed without changing the weight, so assume shortest paths contain none
    • Cycles with positive weight add to the path weight, so the path can be shortened by omitting the cycle
  • Lemma – any subpath of a shortest path is a shortest path
  • Algorithm output (for each vertex v ∈ V)
    • d[v] – estimate of δ(s, v)
      • initially ∞
      • reduces as algorithm progresses, but is always ≥ δ(s, v)
      • known as a shortest-path estimate
    • π[v] = predecessor of v on shortest path from s
      • Nil if no predecessor
      • Induces a tree → shortest-path tree
  • Triangle inequality
    • For all vertices u, x, v ∈ V, δ(u, v) ≤ δ(u, x) + δ(x, v)
    • The weight of the shortest path u ↝ v is ≤ the weight of any other path u ↝ v
  • Upper bound property
    • d[v] always ≥ δ(s, v) for all v
    • Once d[v] = δ(s, v), it never changes
  • No-path property
    • If δ(s, v) = ∞, d[v] is always ∞
  • Convergence property
    • If s ↝ u → v is a shortest path and d[u] = δ(s, u) before relaxing edge (u, v)
    • Then after relax(u, v, w), d[v] = δ(s, v), and it stays that way
  • Path-relaxation property
    • If the edges of a shortest path from s to v are relaxed in order (even if interleaved with other relaxations), then d[v] = δ(s, v)
  • Dijkstra’s algorithm
    • No negative-weight edges
    • Weighted version of BFS
      • Priority queue instead of FIFO (first in, first out) queue
      • Keys are shortest-path weights (d[v])
    • Two sets of vertices
      • S – vertices whose final shortest-path weights are determined
      • Q – priority queue – (V – S)
    • Greedy – at each step, extract the vertex in Q with the smallest shortest-path estimate d[u]
    • Pseudocode
      /*
       * Pseudocode for finding shortest path
       */
      public class ShortestPath {
      
          /**
           * Setup default vars
           *
           * @param V graph
           * @param s source vertex
           */
          void initSingleSource(V, s) {
              for (Vertex v : V) {
                  d[v] = INFINITY
                  pi[v] = NIL
              }
              d[s] = 0 //source weight is always 0
          }
      
          /**
           * Relaxing an edge (setting minimum weight)
           *
           * @param u start vertex
           * @param v end vertex
           * @param w new weight from u to v
           */
          void relax(u, v, w) {
               if (d[v] > d[u] + w(u, v)) { //better weighting found
                   d[v] = d[u] + w(u, v)
                  pi[v] = u //set predecessor
              }
              //otherwise, keep original weighting
          }
      
          /**
           * Dijkstra's algorithm to find shortest path
           * Must have non negative weightings
           *
            * If binary heap, each operation takes O(log V) -> O(E log V)
            * With Fibonacci heaps, O(V log V + E)
           *
           * @param V all the vertices
           * @param E all the edges
           * @param w all the weightings
           * @param s source vertex
           * @return set of all vertices with their deltas and predecessors
           */
          Set dijkstra(V, E, w, s) {
              initSingleSource(V, s);
              S = new Set() //create empty set
              Q = V //move all vertices to Q
              while (!Q.isEmpty()) {
                  u = extractMin(Q)
                   S.add(u) //u's shortest-path weight is now final
                  for (Vertex v : u.adjacentVertices) {
                      relax(u, v, w) //lower weighting if possible
                  }
              }
              return S
          }
      }
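    • Runnable sketch (not from the lecture) – the same algorithm in plain Java, assuming an adjacency-list graph with int weights; java.util.PriorityQueue has no decrease-key, so stale queue entries are simply skipped, which still gives the O(E log V) binary-heap bound
      import java.util.*;

      public class DijkstraSketch {

          /** One outgoing edge: target vertex index and non-negative weight. */
          record Edge(int to, int w) {}

          /** d[v] = weight of a shortest path from s to v; Integer.MAX_VALUE if unreachable. */
          static int[] dijkstra(List<List<Edge>> adj, int s) {
              int[] d = new int[adj.size()];
              Arrays.fill(d, Integer.MAX_VALUE);
              d[s] = 0;
              //min-priority queue of {vertex, key}, keyed on the shortest-path estimate
              PriorityQueue<int[]> q = new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[1]));
              q.add(new int[]{s, 0});
              while (!q.isEmpty()) {
                  int[] top = q.poll();
                  int u = top[0];
                  if (top[1] > d[u]) continue; //stale entry, u is already finalized
                  for (Edge e : adj.get(u)) {  //relax every edge leaving u
                      if (d[u] + e.w() < d[e.to()]) {
                          d[e.to()] = d[u] + e.w();
                          q.add(new int[]{e.to(), d[e.to()]});
                      }
                  }
              }
              return d;
          }

          public static void main(String[] args) {
              //0 -> 1 (1), 0 -> 2 (4), 1 -> 2 (2), 2 -> 3 (1)
              List<List<Edge>> adj = List.of(
                      List.of(new Edge(1, 1), new Edge(2, 4)),
                      List.of(new Edge(2, 2)),
                      List.of(new Edge(3, 1)),
                      List.of());
              System.out.println(Arrays.toString(dijkstra(adj, 0))); //[0, 1, 3, 4]
          }
      }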
      
Lecture 12 • 2017/02/16
  • Bipartite graph – graph whose vertices can be partitioned into 2 sets (A & B) such that every edge has one endpoint in each set (no edge joins two vertices in the same set)
    • If explored as a DFS tree, the vertices can be coloured with 2 colours so that every edge goes between the two colours (see the sketch below)
    • A graph is bipartite iff it contains no odd-length cycle
    • From Math 240
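    • Sketch (not from the lecture) – a standard bipartiteness check by 2-colouring with a BFS; the DFS-tree colouring above works the same way
      import java.util.*;

      public class BipartiteCheck {

          /** Returns true iff the undirected graph (given as adjacency lists) has a valid 2-colouring. */
          static boolean isBipartite(List<List<Integer>> adj) {
              int[] colour = new int[adj.size()]; //0 = uncoloured; 1 and 2 are the two colours
              for (int start = 0; start < adj.size(); start++) {
                  if (colour[start] != 0) continue; //already coloured in an earlier component
                  colour[start] = 1;
                  Deque<Integer> queue = new ArrayDeque<>();
                  queue.add(start);
                  while (!queue.isEmpty()) {
                      int u = queue.poll();
                      for (int v : adj.get(u)) {
                          if (colour[v] == 0) {       //give the neighbour the opposite colour
                              colour[v] = 3 - colour[u];
                              queue.add(v);
                          } else if (colour[v] == colour[u]) {
                              return false;           //edge inside one colour class -> odd cycle
                          }
                      }
                  }
              }
              return true;
          }

          public static void main(String[] args) {
              //4-cycle 0-1-2-3-0: even cycle, so bipartite with A = {0, 2}, B = {1, 3}
              List<List<Integer>> square = List.of(List.of(1, 3), List.of(0, 2), List.of(1, 3), List.of(2, 0));
              System.out.println(isBipartite(square)); //true
          }
      }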
  • Matching – subset of edges such that no two edges share a vertex
  • Perfect matching – every vertex in subset A is matched to a vertex in subset B and vice versa
  • Complete bipartite graph – every vertex in A is connected to every vertex in B and vice versa
  • Stable marriage problem
    • Goal is to find perfect matching
    • A pair α-β that is not in the matching is unstable if α prefers β to its current match and β prefers α to its current match
    • A matching is stable if there are no unstable pairs
    • Pseudocode
      /*
       * Gale-Shapley Algorithm for finding stable marriage solution
       */
      public class GaleShapley {
      
          /**
           * find stable matches
           * sets contain items that have an ordered preference
           * for the items in the other set
           *
           * O(n^2)
           * Best case Omega(n)
           *
           * @param A set A
           * @param B set B
           * @return matching pairs
           */
          Matching findMatches(A, B) {
              M = new Matching()
              while (A.hasUnmatched()) {
                  a = A.getFirstUnmatched() //get an unmatched item
                  b = a.orderedPrefs.removeFirst() //get first preference of that item
                  if (!b.isMatched()) {
                      M.add(a, b) //unmatched, add the matches
                  } else {
                      c = b.match //get b's current match
                      if (b.prefers(a, c)) {
                          //a is preferred, remove old matching
                          M.remove(c, b).add(a, b)
                      }
                  }
              }
              return M
          }
      
      }
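    • Worked example (sketch, not from the lecture) – A = {a1, a2}, B = {b1, b2}; a1 and a2 both prefer b1 over b2, b1 prefers a2 over a1, b2 prefers a1 over a2. a1 is matched to b1 first; a2 then proposes to b1, which prefers a2, so (a1, b1) is replaced by (a2, b1); a1 proposes to its next preference b2 and is matched. Result {(a2, b1), (a1, b2)}, which is stable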
      
Lecture 13 • 2017/02/21
  • Flow Network
  • G = (V, E) directed
  • Each edge (u, v) has capacity c(u, v) ≥ 0
  • Edges are also known as arcs
  • If (u, v) ∉ E, c(u, v) = 0
  • Source vertex s, sink vertex t; every vertex v ∈ V lies on some path s ↝ v ↝ t
  • Positive flow p(u, v)
    • When annotated, an edge is given the values p(u, v) / c(u, v) (eg 1/2)
  • Capacity constraint – 0 ≤ p(u, v) ≤ c(u, v)
  • Skew symmetry – f(u, v) = -f(v, u)
  • Flow conservation – positive flow into vertex = positive flow out of vertex
  • Cancellation with positive flows
    • Without loss of generality, we can say positive flow goes either from u to v or from v to u, but not both
    • Eg if p(u, v) = 5 & p(v, u) = 3, it is equivalent to a flow of 2 from u to v
    • We denote this as f(u, v) = p(u, v) – p(v, u) whenever p(u, v) ≥ p(v, u)
  • Total flow of graph (|f|) is sum of all flows from source or all flows to sink
    • All vertices in between satisfy flow conservation
  • Naïve algorithm for maximal flow
    • Find a path and add the maximum flow for that path
    • Repeat until no path can have additional flow
    • Not correct in general – the result depends on which paths are filled first, so it can get stuck below the maximum flow
  • Residual graphs – used to find maximal flow
    • Given G = (V, E) with edge capacities c & flow f, we define residual graph Gf as
      • Having the same vertices as G
      • Having edges Ef with residual capacities cf where we can add or subtract flow from edges e ∈ E
    • For each edge e = (u, v) ∈ E
      • If f(e) < c(e), add forward edge (u, v) in Ef with residual capacity cf(e) = c(e) – f(e)
      • If f(e) > 0, add backward edge (v, u) in Ef with residual capacity cf(e) = f(e)
      • Example (note that sometimes one edge can result in both a forward and a backward edge)
          f(e) | c(e) | forward cf(e) | backward cf(e)
            0  |   1  |       1       |       0
            1  |   1  |       0       |       1
            2  |   3  |       1       |       2
  • Augmenting path – path from source s to sink t in residual graph Gf that allows us to increase flow
  • To use residual graphs to find maximal flow
    • Compute residual graph Gf
    • Find a path P
    • Augment flow f along path P
    • Let β be the bottleneck (smallest residual capacity on P); update f(e) by β on each edge of P
      • Add β if forward edge, subtract β if backward edge
  • Ford-Fulkerson algorithm
    • While there is still an s-t path P in Gf, augment f along P (see above)
    • Update Gf and continue
    • Algorithm terminates (with integer capacities) because the bottleneck β is strictly positive at each iteration and the flow is bounded above
    • Time Complexity
      • Let C = Σ c(e) over all edges e ∈ E leaving s
      • Finding s-t path takes O(|E|) (eg BFS or DFS)
      • Flow increases by at least 1 at each iteration
      • Algorithm runs in O(C * |E|)
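    • Pseudocode sketch (not from the lecture) – Ford-Fulkerson on an integer capacity matrix, finding each augmenting path with a BFS over residual capacities
      import java.util.*;

      public class FordFulkersonSketch {

          /**
           * Max flow from s to t. cap[u][v] is the integer capacity of edge (u, v), 0 if absent.
           * cap is overwritten in place with the final residual capacities cf.
           */
          static int maxFlow(int[][] cap, int s, int t) {
              int n = cap.length, flow = 0;
              while (true) {
                  //BFS for an augmenting s-t path in the residual graph
                  int[] parent = new int[n];
                  Arrays.fill(parent, -1);
                  parent[s] = s;
                  Deque<Integer> queue = new ArrayDeque<>();
                  queue.add(s);
                  while (!queue.isEmpty() && parent[t] == -1) {
                      int u = queue.poll();
                      for (int v = 0; v < n; v++) {
                          if (cap[u][v] > 0 && parent[v] == -1) { //positive residual capacity left
                              parent[v] = u;
                              queue.add(v);
                          }
                      }
                  }
                  if (parent[t] == -1) return flow; //no augmenting path left -> flow is maximal
                  //bottleneck β = smallest residual capacity along the path
                  int beta = Integer.MAX_VALUE;
                  for (int v = t; v != s; v = parent[v]) beta = Math.min(beta, cap[parent[v]][v]);
                  //augment: use up β on forward edges, give it back on the backward edges
                  for (int v = t; v != s; v = parent[v]) {
                      cap[parent[v]][v] -= beta;
                      cap[v][parent[v]] += beta;
                  }
                  flow += beta;
              }
          }

          public static void main(String[] args) {
              //s = 0, t = 3; edges 0->1 (2), 0->2 (2), 1->2 (1), 1->3 (2), 2->3 (2)
              int[][] cap = {{0, 2, 2, 0}, {0, 0, 1, 2}, {0, 0, 0, 2}, {0, 0, 0, 0}};
              System.out.println(maxFlow(cap, 0, 3)); //4
          }
      }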
Lecture 14 • 2017/02/23
  • s-t cut of a flow network is a partition cut(A, B) of V such that s ∈ A and t ∈ B
    • capacity is Σ c(e) over all edges e going from A to B
    • flow is |f| = fout(A) – fin(A)
      • |f| is bounded by Σc(e) for all e ∈ cut(A, B)
  • Observations
    • Every flow must be ≤ capacity of every s-t cut
    • Value of the maximum flow is at most the capacity of the minimum cut
  • Flow in Ford-Fulkerson
    • Terminates when no augmenting path in residual graph Gf
    • Let A be the set of vertices reachable from s in Gf and B = V – A; then |f| = Σ c(e) over all edges e crossing cut(A, B)
      • In particular, fout(A) = Σ c(e) and fin(A) = 0
      • Since no vertex of B is reachable from s in Gf, no crossing edge can have a forward or backward residual edge
    • For any e = (u, v) ∈ cut(A, B), f(e) = c(e)
  • Computing min cut
    • Find Gf with Ford-Fulkerson
    • Run BFS or DFS from s
    • Reachable vertices define set A for the cut
    • Recall that the min cut is the s-t cut of minimum total capacity (the cheapest set of edges whose removal stops all flow from s to t)
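    • Sketch (not from the lecture) – reads set A of a min cut off the residual capacities left behind by the Ford-Fulkerson sketch above (call maxFlow(cap, s, t) first, then pass the overwritten cap matrix here)
      import java.util.*;

      public class MinCutSketch {

          /** Set A of an s-t min cut: every vertex reachable from s via positive residual capacities rf. */
          static Set<Integer> minCutSourceSide(int[][] rf, int s) {
              Set<Integer> a = new HashSet<>();
              a.add(s);
              Deque<Integer> queue = new ArrayDeque<>();
              queue.add(s);
              while (!queue.isEmpty()) {
                  int u = queue.poll();
                  for (int v = 0; v < rf.length; v++) {
                      if (rf[u][v] > 0 && a.add(v)) { //unvisited and still has residual capacity
                          queue.add(v);
                      }
                  }
              }
              return a; //every original edge from A to V – A is saturated; their capacities sum to |f|
          }
      }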
  • Bipartite matching with network flows
    • Given a bipartite graph with vertex sets A and B
    • Connect s to every vertex in A (capacity 1)
    • Direct every original edge from A to B (capacity 1)
    • Connect every vertex in B to t (capacity 1)
    • Get max flow with Ford-Fulkerson ⇒ max matching
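    • Sketch (not from the lecture) – builds the unit-capacity network for a bipartite graph; running the Ford-Fulkerson sketch from Lecture 13 on it gives the size of a maximum matching
      public class BipartiteMatchingSketch {

          /**
           * Capacity matrix for the reduction. A-vertices are 0..nA-1, B-vertices are nA..nA+nB-1,
           * s = nA + nB, t = nA + nB + 1; edges[i] lists the B-vertices adjacent to A-vertex i.
           * Every capacity is 1, so max flow = size of a maximum matching.
           */
          static int[][] buildNetwork(int nA, int nB, int[][] edges) {
              int s = nA + nB, t = nA + nB + 1;
              int[][] cap = new int[nA + nB + 2][nA + nB + 2];
              for (int a = 0; a < nA; a++) {
                  cap[s][a] = 1;                             //s -> every vertex of A
                  for (int b : edges[a]) cap[a][nA + b] = 1; //original edges, directed A -> B
              }
              for (int b = 0; b < nB; b++) cap[nA + b][t] = 1; //every vertex of B -> t
              return cap;
          }

          public static void main(String[] args) {
              //A = {0, 1}, B = {0, 1}; edges A0-B0, A0-B1, A1-B0
              int[][] cap = buildNetwork(2, 2, new int[][]{{0, 1}, {0}});
              System.out.println(FordFulkersonSketch.maxFlow(cap, 4, 5)); //2, a perfect matching
          }
      }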
  • Running time
    • General complexity is O(C · |E|), where C = Σ c(s, u) over edges leaving s
    • If |A| = |B| = n, C = |A| = n, |E’| = |E| + 2n = m + 2n
    • Given m > n, O(n * (m + 2n)) = O(nm)
Midterm Review • 2017/03/07
  • Focus on direct application of concepts
  • 1 proof in the midterm
  • Nothing on probability
  • Be comfortable with running time
  • Proofs
    • Contradiction – assume the opposite and derive a contradiction
    • Cut & paste – used with graphs & greedy algorithms
      • Assume sub-problem is not optimal, and replace with optimal solution to show contradiction
    • Loop invariants – prove that loop structure is doing what it is intended to do
      • Must specify loop invariant property, initialization, maintenance (conserving property), termination (loop stops)
  • Hashing
    • Different types
    • Collision resolution
    • Open addressing
    • Linear & quadratic probing
  • BST
    • Rotation
    • Self-balanced trees (AVL, RBT)
      • How to do operations (insertion)
  • Greedy algorithms
    • Activity-selection problem
  • Graph algorithms
    • Topological sort – get total order from partial order
      • Can be found from DAG
    • BFS, DFS
    • MST, Kruskal, Ford-Fulkerson, Dijkstra
    • Cut (respect, light, cross)
    • Safe edge
  • Bipartite graphs
    • Stable pairs
  • Network flow
    • Positive flow, capacity constraint, flow conservation