### Comp 251 Practice

###### Prof. Jérome Waldispühl


Hash the following values using linear probing into a table of size 5:

3, 11, 14, 23, 8


**23, 11, 8, 3, 14**

In linear probing, an element is moved into the next available space if its hashed location is already taken.
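The insertion above can be sketched in Python, assuming the hash function is h(k) = k mod 5:

```python
def insert_all(keys, size):
    """Insert keys into an open-addressing table using h(k) = k mod size."""
    table = [None] * size
    for k in keys:
        i = k % size
        while table[i] is not None:   # slot taken: probe linearly, wrapping around
            i = (i + 1) % size
        table[i] = k
    return table

print(insert_all([3, 11, 14, 23, 8], 5))  # → [23, 11, 8, 3, 14]
```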

Execute MaxHeapify on the following array:

[24, 21, 23, 22, 36, 29]


At each stage, the bold value at index i is compared with the values at indices 2i and 2i + 1 (underlined). If either is bigger, it swaps with the larger of the two, and the check continues from the value's new position.

[24, 21, **23**, 22, 36, __29__]

[24, **21**, 29, __22__, __36__, 23]

[**24**, __36__, __29__, 22, 21, 23]

[36, **24**, 29, __22__, __21__, 23]

[36, 24, 29, 22, 21, 23]
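The trace above can be reproduced with a standard 0-indexed build-max-heap sketch (note the explanation above uses 1-indexing, so the children become 2i + 1 and 2i + 2 here):

```python
def max_heapify(A, i, n):
    """Sift A[i] down until both children are no larger than it."""
    largest = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < n and A[l] > A[largest]:
        largest = l
    if r < n and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, n)   # keep checking the value at its new position

def build_max_heap(A):
    # heapify every internal node, bottom-up
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))
    return A

print(build_max_heap([24, 21, 23, 22, 36, 29]))  # → [36, 24, 29, 22, 21, 23]
```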

Create an AVL tree by inserting the following keys in the given order:

3, 5, 7, 1, 5, 6


root ← 3, 3.right ← 5, 5.right ← 7, 3.rotateLeft, 3.left ← 1, 7.left ← 5, 5.right ← 6, 5.rotateLeft, 7.rotateRight

Create a red black tree by inserting the following keys in the given order:

3, 5, 7, 1, 5, 6


root ← 3.black(), 3.right ← 5.red(), 5.right ← 7.red(), 3.rotateLeft (3.red(), 5.black()), 3.left ← 1.red(), 3.black(), 7.left ← 5.red(), 5.right ← 6.red(), 5.rotateLeft, 7.rotateRight (6.black())

Using the greedy algorithm from class, find the largest set of non-overlapping activities from the figure below.

**a₁, a₄, a₇**

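The greedy rule can be sketched in Python. The original figure is not reproduced here, so the (start, finish) intervals below are made up for illustration:

```python
def select_activities(intervals):
    """Greedy: sort by finish time, then repeatedly take the first
    activity that starts no earlier than the last chosen finish."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# hypothetical activities
acts = [(1, 4), (3, 5), (0, 6), (5, 7), (8, 9), (5, 9)]
print(select_activities(acts))  # → [(1, 4), (5, 7), (8, 9)]
```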
Find the Huffman encoding for "abracadabra".

Frequencies:

| a | b | c | d | r |
|---|---|---|---|---|
| 5 | 2 | 1 | 1 | 2 |

Encodings:

| a | b | c | d | r |
|---|---|---|---|---|
| 0 | 110 | 100 | 101 | 111 |
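A minimal Huffman sketch using a min-heap. The exact bit patterns depend on how ties are broken when merging, so they may differ from the table above, but the code lengths (and the 23-bit total for "abracadabra") are the same:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Merge the two lowest-frequency nodes until one tree remains."""
    freq = Counter(text)
    # (frequency, unique id, node); the id breaks ties so nodes are never compared
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, uid, (left, right)))
        uid += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, str):          # leaf: a single character
            codes[node] = prefix or "0"
            return
        walk(node[0], prefix + "0")        # left edge = 0
        walk(node[1], prefix + "1")        # right edge = 1
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
print(sum(len(codes[c]) for c in "abracadabra"))  # → 23
```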

Given the graph below, find the depths of each vertex with respect to vertex s.

With the same graph, compute the DFS start/finish times; assume each edge is bidirectional.

There is more than one possible answer, depending on which child you choose when you start
off.

How is topological sort performed?

Start with an empty list that will represent the order. Pick any starting point in the graph and
compute DFS at that point. Once a node is finished, insert it in the front of the list. Once the
DFS cannot reach any more nodes, pick any of the remaining nodes as a starting point and repeat.
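The steps above can be sketched as a DFS that prepends each node as it finishes; the DAG below is a made-up example:

```python
def topo_sort(graph):
    """DFS from every unvisited node; insert each node at the front when it finishes."""
    order, visited = [], set()
    def dfs(u):
        visited.add(u)
        for v in graph.get(u, []):
            if v not in visited:
                dfs(v)
        order.insert(0, u)   # node finished: it goes before everything it reaches
    for u in graph:
        if u not in visited:
            dfs(u)
    return order

# hypothetical DAG of dependencies
g = {"shirt": ["tie"], "tie": ["jacket"], "pants": ["shoes", "jacket"], "jacket": []}
order = topo_sort(g)
```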

Find the strongly connected components of the following graph:

**{a, b, e}, {f, g}, {c, d}, {h}**

SCCs are found as follows: call DFS(G) to compute the finish times f[u] for all u. Then call DFS(Gᵀ), considering the vertices in order of decreasing f[u] (from DFS(G)). Output the vertices of each tree in the DFS(Gᵀ) forest as a separate SCC.
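This is Kosaraju's algorithm. The original graph is not reproduced here, so the sketch below uses a hypothetical digraph chosen to have the same component structure as the answer:

```python
def sccs(graph):
    """Kosaraju: DFS for finish order, then DFS on the transpose
    in decreasing finish order; each second-pass tree is one SCC."""
    order, visited = [], set()
    def dfs1(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                dfs1(v)
        order.append(u)                  # appended at finish time
    for u in graph:
        if u not in visited:
            dfs1(u)
    gt = {u: [] for u in graph}          # transpose graph
    for u in graph:
        for v in graph[u]:
            gt[v].append(u)
    comps, visited = [], set()
    def dfs2(u, comp):
        visited.add(u)
        comp.add(u)
        for v in gt[u]:
            if v not in visited:
                dfs2(v, comp)
    for u in reversed(order):            # decreasing finish time
        if u not in visited:
            comp = set()
            dfs2(u, comp)
            comps.append(comp)
    return comps

# hypothetical digraph with SCCs {a,b,e}, {f,g}, {c,d}, {h}
g = {"a": ["b"], "b": ["e", "f"], "e": ["a"], "f": ["g"], "g": ["f"],
     "c": ["d", "g"], "d": ["c"], "h": []}
comps = sccs(g)
```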

Use Kruskal's algorithm to find the MST of the following graph:

Pick the lightest edge that doesn't connect two already connected nodes until all nodes are
connected.
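A sketch of Kruskal with a simple union-find; the original figure is not reproduced, so the weighted graph below is hypothetical:

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v) on vertices 0..n-1; union-find
    rejects any edge whose endpoints are already connected."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):           # lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # endpoints not yet connected
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# hypothetical weighted graph on vertices 0..3
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]
mst = kruskal(4, edges)
```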

What is the triangle inequality?

For all vertices u, x, v,

**δ(u, v) ≤ δ(u, x) + δ(x, v)**

Concatenating a shortest path u ↝ x with a shortest path x ↝ v gives some path from u to v, and the shortest-path weight δ(u, v) is by definition no larger than the weight of any particular u ↝ v path. If going through x gave something shorter than δ(u, v), then δ(u, v) would not be the shortest-path weight.

What are some similarities and differences in Prim's algorithm & Dijkstra's algorithm?

Both algorithms involve adding an edge with the smallest weight to a vertex V, and start off
with a source vertex S of weight 0 and all other vertices with weight ∞. At each stage of
Prim, we choose the lightest adjacent edge joining a vertex that isn't
already connected to our tree. The end result is a MST.

At each stage of Dijkstra, we update weights for vertices adjacent to the one we're working with by summing the weight of the current vertex with the weight of the edge connecting a given adjacent vertex. If the existing weight for that vertex is already smaller, we keep the original weighting; otherwise we save our new summed weighting. After all new weightings are calculated, we connect the vertex with the smallest weight to our graph. The end result is a shortest-path tree, meaning that the path from source vertex S to any vertex in our graph is the shortest path connecting those two vertices.

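Dijkstra's stage described above can be sketched with a min-priority queue; the graph here is a made-up example, since the question's figures are not reproduced:

```python
import heapq

def dijkstra(graph, s):
    """graph: {u: [(v, w), ...]}; returns shortest-path distances from s."""
    dist = {u: float("inf") for u in graph}   # all vertices start at infinity
    dist[s] = 0                               # source starts at 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)              # connect the smallest-weight vertex
        if d > dist[u]:
            continue                          # stale queue entry, already settled
        for v, w in graph[u]:
            if d + w < dist[v]:               # new summed weight beats the old one
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# hypothetical graph
g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
dist = dijkstra(g, "s")
```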

Use Gale-Shapley to match the following two groups. The first column of the two tables denote
the members in their respective groups, and the following three letters denote their preferences
in that order.

| | 1st | 2nd | 3rd |
|---|---|---|---|
| K | B | C | A |
| L | A | C | B |
| M | A | B | C |

| | 1st | 2nd | 3rd |
|---|---|---|---|
| A | K | L | M |
| B | L | M | K |
| C | M | L | K |

**KC, LA, MB**

For each free proposer α, go down α's preference list: if the candidate is unmatched, or prefers α to their current partner, pair them (freeing the old partner, if any); otherwise move on to α's next preference.

Step 1: KB, LA, MA; result: KB, LA, M?

Step 2: MB; result: K?, LA, MB

Step 3: KC; result: KC, LA, MB; done
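The steps above can be sketched directly; Gale-Shapley's result is the same regardless of the order in which free proposers are processed:

```python
def gale_shapley(proposers, receivers):
    """proposers/receivers: {name: [preference list]}; proposers propose."""
    # rank[r][p] = position of proposer p in receiver r's list (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receivers.items()}
    match = {}                         # receiver -> current proposer
    nxt = {p: 0 for p in proposers}    # next preference index for each proposer
    free = list(proposers)
    while free:
        p = free.pop()
        r = proposers[p][nxt[p]]
        nxt[p] += 1
        if r not in match:
            match[r] = p               # r was unmatched: accept
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])      # r trades up; old partner is free again
            match[r] = p
        else:
            free.append(p)             # rejected; p proposes again later
    return {p: r for r, p in match.items()}

proposers = {"K": ["B", "C", "A"], "L": ["A", "C", "B"], "M": ["A", "B", "C"]}
receivers = {"A": ["K", "L", "M"], "B": ["L", "M", "K"], "C": ["M", "L", "K"]}
print(gale_shapley(proposers, receivers))  # matches: KC, LA, MB
```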

Find the max flow of the graph below using Ford-Fulkerson.

**19**

Find an augmenting path, compute its bottleneck capacity, augment the flow along it, and repeat until no augmenting path remains. Examples of possible sequences of augmenting paths:

SADT (8), SCDT (2), SCDABT (4), SADBT (2), SCDBT (3), done (19)

SCDT (9), SABT (4), SADBT (6), done (19)
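A sketch of the method using BFS to find each augmenting path (the Edmonds-Karp variant). The original figure is not reproduced, so the capacities below are a made-up network that also happens to have max flow 19:

```python
from collections import deque

def max_flow(cap, s, t):
    """cap: {u: {v: capacity}}. BFS an augmenting path in the residual
    graph, push its bottleneck, repeat until no s-t path remains."""
    res = {u: dict(vs) for u, vs in cap.items()}   # residual capacities
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0) # reverse edges start at 0
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                            # no augmenting path left
        b, v = float("inf"), t                     # bottleneck along the path
        while parent[v] is not None:
            b = min(b, res[parent[v]][v])
            v = parent[v]
        v = t                                      # augment along the path
        while parent[v] is not None:
            u = parent[v]
            res[u][v] -= b
            res[v][u] += b
            v = u
        flow += b

# hypothetical network
cap = {"s": {"a": 10, "c": 10}, "a": {"b": 4, "d": 8},
       "c": {"d": 9}, "d": {"b": 6, "t": 10}, "b": {"t": 10}, "t": {}}
```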

Use Needleman-Wunsch to compute the pairing and minimum distance between the following two
sequences:

ACTATA CAAAT

It is given that delta is 0 if there is a match and 1 otherwise.

**ACTATA–**

**–C–AAAT**

**Distance = 4**

Dynamic programming; compute min delta for every valid pairing; note that two chars are only compared if coming from diagonal; otherwise it is insertion or deletion.

| | – | A | C | T | A | T | A |
|---|---|---|---|---|---|---|---|
| – | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
| C | 1 | 1 | 1 | 2 | 3 | 4 | 5 |
| A | 2 | 1 | 2 | 2 | 2 | 3 | 4 |
| A | 3 | 2 | 2 | 3 | 2 | 3 | 3 |
| A | 4 | 3 | 3 | 3 | 3 | 3 | 3 |
| T | 5 | 4 | 4 | 3 | 4 | 3 | 4 |

Backtrack by starting at the bottom right and finding the smallest value on the left, top, or diagonal.

Left = deletion, diagonal = match/substitution, up = insertion.
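The fill phase of the table above can be sketched directly; each cell takes the minimum of the three moves described:

```python
def edit_distance(s, t):
    """Needleman-Wunsch with delta = 0 on a match, 1 otherwise."""
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i                   # first column: i deletions
    for j in range(n + 1):
        D[0][j] = j                   # first row: j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            delta = 0 if s[i - 1] == t[j - 1] else 1
            D[i][j] = min(D[i - 1][j - 1] + delta,  # diagonal: match/substitution
                          D[i - 1][j] + 1,          # deletion
                          D[i][j - 1] + 1)          # insertion
    return D[m][n]

print(edit_distance("ACTATA", "CAAAT"))  # → 4
```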

How is the Bellman-Ford algorithm done and how does it differ from Dijkstra?

The two differ in that Bellman-Ford allows negative-weight edges, and it detects a negative cycle that is reachable from the source.
Both operate by repeatedly relaxing edges, but Bellman-Ford relaxes every edge of the graph on each pass rather than greedily settling one vertex at a time. If distances still improve after |V| − 1 passes, the graph contains a negative-weight cycle and no shortest-path tree exists.
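A minimal sketch of the passes described above, on a made-up graph with a negative edge but no negative cycle:

```python
def bellman_ford(vertices, edges, s):
    """Relax every edge |V| - 1 times; any further improvement
    afterwards means a reachable negative-weight cycle."""
    dist = {v: float("inf") for v in vertices}
    dist[s] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:      # relax edge (u, v)
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # one extra pass: cycle check
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from source")
    return dist

# hypothetical graph
V = ["s", "a", "b"]
E = [("s", "a", 4), ("s", "b", 5), ("b", "a", -3)]
print(bellman_ford(V, E, "s"))  # → {'s': 0, 'a': 2, 'b': 5}
```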

How is Karatsuba multiplication done compared to the naïve multiplication?

For x · y, both of size n, we may split each number into two halves:

m = ⌈n/2⌉

a = ⌊x/2^m⌋, b = x mod 2^m

c = ⌊y/2^m⌋, d = y mod 2^m

The resulting product would be

(2^m·a + b)(2^m·c + d) = 2^(2m)·ac + 2^m·(ad + bc) + bd

The naïve algorithm calculates the formula above directly, with four recursive multiplications and three additions to combine them.

Karatsuba's trick rewrites the middle term using ad + bc = ac + bd − (a − b)(c − d), giving

2^(2m)·ac + 2^m·(ac + bd − (a − b)(c − d)) + bd

which requires only three distinct recursive multiplications (ac, bd, (a − b)(c − d)), at the cost of a few extra additions and subtractions.

As a result, it becomes faster than the grade-school algorithm at around 320–640 bits.
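The three-multiplication recursion can be sketched with bit shifts (the base-case cutoff of 16 is an arbitrary choice for illustration):

```python
def karatsuba(x, y):
    """Three recursive multiplications instead of four."""
    if x < 16 or y < 16:                  # small (or negative) operand: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    a, b = x >> m, x & ((1 << m) - 1)     # x = 2^m * a + b
    c, d = y >> m, y & ((1 << m) - 1)     # y = 2^m * c + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = ac + bd - karatsuba(a - b, c - d)   # = ad + bc
    return (ac << 2 * m) + (mid << m) + bd
```

For example, `karatsuba(1234567, 7654321)` agrees with `1234567 * 7654321`.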

Use the Master Theorem to solve the following recurrences if possible:

- T(n) = 3T(n/2) + n²
- T(n) = 2ⁿ·T(n/2) + nⁿ
- T(n) = 2T(n/2) + n·log n
- T(n) = 4T(n/2) + cn

- Θ(n²) (Case 3)
- N/A (a = 2ⁿ is not a constant, so the Master Theorem does not apply)
- Θ(n·log²n) (Case 2)
- Θ(n²) (Case 1)

What is the aggregate method and the accounting method?

Aggregate analysis computes the total worst-case running time of a whole sequence of n operations and splits it evenly among the operations, whereas the accounting method imposes extra charges on inexpensive operations and saves the credit to pay for expensive operations later on.
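The classic example of aggregate analysis is the k-bit binary counter (sizes below are made up for illustration): a single increment can flip many bits, but n increments flip fewer than 2n bits in total, so the amortized cost per increment is below 2:

```python
def increment(bits):
    """Flip trailing 1s to 0, then the first 0 to 1; cost = number of flips."""
    i, flips = 0, 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0                 # the occasionally expensive part
        i += 1
        flips += 1
    if i < len(bits):
        bits[i] = 1                 # one cheap flip finishes the increment
        flips += 1
    return flips

bits = [0] * 16
total = sum(increment(bits) for _ in range(1000))
# aggregate analysis: total < 2 * 1000, so amortized cost < 2 flips per increment
```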