ALGORITMY A DATOVÉ STRUKTURY II
Ivana Černá
Spring 2019
Fakulta informatiky MU

George Pólya, How to Solve It?: "What is the difference between method and device? A method is a device which you used twice."

Andrew Hamilton, rector of Oxford University, Respekt 7/2015: "We must keep in mind that we are not preparing students for a specific profession; we are training their minds so that they are as well prepared as possible for the changes that inevitably await us, and which will most likely be quite dramatic."

CONTENTS
1. Computational Complexity
1.1 Complexity of Problems and Algorithms
1.2 Amortized Complexity
2. Data Structures
2.1 Heaps
2.2 Disjoint-set Data Structures
3. Algorithm Design Techniques
3.1 Divide and Conquer
3.2 Dynamic Programming
3.3 Greedy Algorithms
4. Network Flow
5. String Matching

ADMINISTRATIVE STUFF
Interactive syllabus: https://is.muni.cz/auth/el/1433/jaro2019/IV003/index.qwarp
Follow the discussion groups in IS.

LITERATURE
Recommended textbooks:
• J. Kleinberg, E. Tardos: Algorithm Design. Addison-Wesley, 2006.
• T. Cormen, Ch. Leiserson, R. Rivest, C. Stein: Introduction to Algorithms. 3rd Edition. MIT Press, 2009.
• S. Dasgupta, Ch. Papadimitriou, U. Vazirani: Algorithms. McGraw Hill, 2007.
Slides and demos: http://www.cs.princeton.edu/~wayne/kleinberg-tardos/

Part I: Algorithmic Complexity

OVERVIEW
Complexity of Problems and Algorithms
Algorithm Complexity Analysis: recursive algorithms, iterative algorithms
Amortized Complexity: amortized analysis

COMPLEXITY OF PROBLEMS AND ALGORITHMS

A strikingly modern thought. "As soon as an Analytic Engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will arise—By what course of calculation can these results be arrived at by the machine in the shortest time?" — Charles Babbage (1864)

Models of computation: Turing machines
Deterministic Turing machine. Simple and idealistic model.
[figure: state diagram of a single-tape Turing machine (states NO, YES, halt) over a tape # 1 1 1 0 1 1 0 0 1 1 1 #]
Running time. Number of steps.
Memory. Number of tape cells utilized.
Caveat. No random access of memory.
• A single-tape TM requires ≥ n² steps to detect n-bit palindromes.
• Easy to detect palindromes in ≤ cn steps on a real computer.

Models of computation: word RAM
Word RAM. (assume w ≥ log₂ n)
• Each memory location and input/output cell stores a w-bit integer.
• Primitive operations: arithmetic/logic operations, read/write memory, array indexing, following a pointer, conditional branch, ... (constant-time C-style operations, w = 64)
[figure: word RAM with input cells, program, memory a[i], output cells]
Running time. Number of primitive operations.
Memory. Number of memory cells utilized.
Caveat. At times, need a more refined model (e.g., multiplying n-bit integers).

Brute force
Brute force. For many nontrivial problems, there is a natural brute-force search algorithm that checks every possible solution.
• Typically takes 2ⁿ steps (or worse) for inputs of size n.
• Unacceptable in practice.
Ex. Stable matching problem: test all n! perfect matchings for stability.

Polynomial running time
Desirable scaling property. When the input size doubles, the algorithm should slow down by at most some constant factor C.
Def. An algorithm is poly-time if the above scaling property holds: there exist constants c > 0 and d > 0 such that, for every input of size n, the running time of the algorithm is bounded above by c·n^d primitive computational steps. (choose C = 2^d)

Polynomial running time
We say that an algorithm is efficient if it has a polynomial running time.
Theory.
The definition is (relatively) insensitive to the model of computation.
Practice. It really works!
• The poly-time algorithms that people develop have both small constants and small exponents.
• Breaking through the exponential barrier of brute force typically exposes some crucial structure of the problem.
Exceptions. Some poly-time algorithms in the wild have galactic constants and/or huge exponents.
Q. Which would you prefer: 20·n^120 or n^(1 + 0.02 ln n)?

Ex. "Map graphs in polynomial time", Mikkel Thorup, Department of Computer Science, University of Copenhagen, Universitetsparken 1, DK-2100 Copenhagen East, Denmark, mthorup@diku.dk. Abstract: "Chen, Grigni, and Papadimitriou (WADS'97 and STOC'98) have introduced a modified notion of planarity, where two faces are considered adjacent if they share at least one point. The corresponding abstract graphs are called map graphs. Chen et al. raised the question of whether map graphs can be recognized in polynomial time. They showed that the decision problem is in NP and presented a polynomial time algorithm for the special case where we allow at most 4 faces to intersect in any point — if only 3 are allowed to intersect in a point, we get the usual planar graphs. Chen et al. conjectured that map graphs can be recognized in polynomial time, and in this paper, their conjecture is settled affirmatively."

Worst-case analysis
Worst case. Running time guarantee for any input of size n.
• Generally captures efficiency in practice.
• Draconian view, but hard to find an effective alternative.
Exceptions. Some exponential-time algorithms are used widely in practice because the worst-case instances don't arise.

Other types of analyses
Probabilistic. Expected running time of a randomized algorithm.
Ex. The expected number of compares to quicksort n elements is ~ 2n ln n.
Amortized. Worst-case running time for any sequence of n operations.
Ex. Starting from an empty stack, any sequence of n push and pop operations takes O(n) primitive computational steps using a resizing array.
Also. Average-case analysis, smoothed analysis, competitive analysis, ...

PROBLEM COMPLEXITY
algorithm complexity versus problem complexity
• lower bound of the problem complexity
  • proof techniques
• upper bound of the problem complexity
  • complexity of a particular algorithm solving the problem
• problem complexity
  • given by a lower and an upper bound
  • tight bounds

LOWER BOUND PROOF TECHNIQUES
• information-theoretic arguments
• decision tree
• problem reduction
• adversary arguments

INFORMATION-THEORETIC ARGUMENTS
Based on counting the number of items in the problem's input that must be processed and the number of output items that need to be produced.

list all permutations of an n-element sequence
the number of permutations is n!; the lower bound is Ω(n!); the problem complexity is Θ(n!)

evaluate a polynomial of degree n at a given point x
the lower bound is Ω(n); the problem complexity is Θ(n) — the matching O(n) upper bound is achieved, e.g., by Horner's rule (see the sketch below)
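A minimal sketch of Horner's rule, as a witness for the Θ(n) upper bound just mentioned; the function name and coefficient order are our choice, not from the slides:

    def horner(coeffs, x):
        """Evaluate a polynomial at x, coefficients given highest degree first.

        Computes (...((a_n * x + a_{n-1}) * x + a_{n-2}) ...) * x + a_0,
        i.e., n multiplications and n additions for degree n, matching the
        Omega(n) information-theoretic lower bound.
        """
        result = 0
        for a in coeffs:
            result = result * x + a
        return result

    # 3x^2 + 2x + 1 evaluated at x = 2 gives 17
    assert horner([3, 2, 1], 2) == 17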
product of two n-by-n matrices
the lower bound is Ω(n²) (size of the result); the upper bound is O(n^(log₂ 7)) (Strassen); the problem complexity is open

Traveling Salesperson
the lower bound is Ω(n²) (number of graph edges); the upper bound is exponential; the problem complexity is open

DECISION TREES
• many algorithms work by comparing items of their inputs
• we study the performance of such algorithms with a device called the decision tree
• a decision tree represents computations on inputs of length n
• each internal node represents a key comparison; a node's degree corresponds to the number of possible answers, and the node's subtrees contain information about subsequent comparisons
• each leaf represents a possible outcome; the number of leaves must be at least as large as the number of possible outcomes
• the algorithm's work on a particular input of size n can be traced by a path from the root to a leaf

DECISION TREES AND LOWER BOUNDS
• a tree with a given number of leaves has to be tall enough
• a tree with degree k and l leaves has depth at least ⌈log_k l⌉
• Ω(log f(n)) is the lower bound for the problem complexity, where f(n) is the number of possible outputs for inputs of length n

DECISION TREES FOR SEARCHING A SORTED ARRAY
Input: sorted sequence of numbers (x₁, ..., xₙ), number x
Output: index i such that xᵢ = x, or NONE
• key comparisons
• n + 1 possible outcomes
• lower bound Ω(log n)
• the lower bound is tight

DECISION TREES FOR SORTING ALGORITHMS
Input: sequence of pairwise different numbers (x₁, ..., xₙ)
Output: permutation Π such that x_Π(1) < x_Π(2) < ... < x_Π(n)
• key comparisons: xᵢ < xⱼ
• n! possible outcomes (number of permutations)
• lower bound Ω(log n!)
• log n! ∈ Θ(n log n) (see Stirling's formula)
• the lower bound is tight

PROBLEM REDUCTION
If we know a lower bound for problem Q, and problem Q reduces to problem P, then the lower bound for Q is also a lower bound for P.

Q: element uniqueness problem — are the numbers (x₁, ..., xₙ) pairwise distinct?
P: Euclidean minimum spanning tree problem
The lower bound for the element uniqueness problem is Ω(n log n).
Reduction: build a graph with vertices (x₁, 0), ..., (xₙ, 0); checking whether the minimum spanning tree contains a zero-length edge answers the question about uniqueness of the given numbers.
Claim: the lower bound for the Euclidean minimum spanning tree problem is Ω(n log n).

ADVERSARY ARGUMENTS
The idea is that an all-powerful malicious adversary pretends to choose an input for the algorithm. When the algorithm asks a question about the input, the adversary answers in whatever way will make the algorithm do the most work. If the algorithm does not ask enough queries before terminating, then there will be several different inputs, each consistent with the adversary's answers, that should result in different outputs. In this case, whatever the algorithm outputs, the adversary can 'reveal' an input that is consistent with its answers but contradicts the algorithm's output, and then claim that this was the input he was using all along.

ADVERSARY FOR THE MAXIMUM PROBLEM
The adversary originally pretends that xᵢ = i for all i, and answers all comparison queries accordingly. Whenever the adversary reveals that xᵢ < xⱼ, he marks xᵢ as an item that the algorithm knows (or should know) is not the maximum element. At most one element xᵢ is marked after each comparison. Note that xₙ is never marked. If the algorithm does fewer than n − 1 comparisons before it terminates, the adversary must have at least one other unmarked element x_k ≠ xₙ.
In this case, the adversary can change the value of x_k from k to n + 1, making x_k the largest element, without being inconsistent with any of the comparisons that the algorithm has performed. However, xₙ is the maximum element in the original input and x_k is the largest element in the modified input, so the algorithm cannot possibly give the correct answer for both cases.

Any comparison-based algorithm solving the maximum element problem must perform at least n − 1 comparisons. (The decision tree model only gives the lower bound log₂ n.)

ADVERSARY FOR THE MAXIMUM AND MINIMUM PROBLEM
Similar arguments as for the maximum problem. Whenever the adversary reveals that xᵢ < xⱼ, he marks xᵢ as an item that the algorithm knows is not the maximum element, and he marks xⱼ as an item that the algorithm knows is not the minimum element. Whenever two already marked elements are compared, at most one new mark can be added. If the algorithm does fewer than ⌈n/2⌉ + n − 2 comparisons before it terminates, the adversary must have at least two elements that can both be the maximum, or both the minimum, so the algorithm cannot possibly give the correct answer.

Any comparison-based algorithm solving the maximum and minimum element problem must perform at least ⌈n/2⌉ + n − 2 comparisons.

ALGORITHM COMPLEXITY ANALYSIS: RECURSIVE ALGORITHMS

5. Divide and Conquer
► mergesort
► counting inversions
► randomized quicksort
► median and selection
► closest pair of points
(Section 9.3)

Median and selection problems
Selection. Given n elements from a totally ordered universe, find the kth smallest.
• Minimum: k = 1; maximum: k = n.
• Median: k = ⌊(n + 1)/2⌋.
• O(n) compares for min or max.
• O(n log n) compares by sorting.
• O(n log k) compares with a binary heap. (max heap with the k smallest)
Applications. Order statistics; find the "top k"; bottleneck paths, ...
Q. Can we do it with O(n) compares?
A. Yes! Selection is easier than sorting.

Randomized quickselect
• Pick a random pivot element p ∈ A.
• 3-way partition the array into L, M, and R.
• Recur in one subarray—the one containing the kth smallest element.

Quick-Select(A, k)
  Pick pivot p ∈ A uniformly at random.
  (L, M, R) ← Partition-3-Way(A, p).                                [O(n)]
  If (k ≤ |L|) Return Quick-Select(L, k).                           [T(i)]
  Else if (k > |L| + |M|) Return Quick-Select(R, k − |L| − |M|).    [T(n − i − 1)]
  Else Return p.

(A runnable sketch follows the analysis below.)

Randomized quickselect analysis
Intuition. Split a candy bar uniformly ⇒ the expected size of the larger piece is ¾.
T(n) ≤ T(3n/4) + n ⇒ T(n) ≤ 4n
(not rigorous: can't assume E[T(i)] ≤ T(E[i]))

Def. T(n, k) = expected # compares to select the kth smallest in an array of length ≤ n.
Def. T(n) = max_k T(n, k).
Proposition. T(n) ≤ 4n.
Pf. [by strong induction on n]
• Assume true for 1, 2, ..., n − 1.
• T(n) satisfies the following recurrence (we can assume we always recur on the larger of the two subarrays, since T(n) is monotone non-decreasing):
  T(n) ≤ n + 1/n [ 2T(n/2) + ... + 2T(n − 3) + 2T(n − 2) + 2T(n − 1) ]
       ≤ n + 1/n [ 8(n/2) + ... + 8(n − 3) + 8(n − 2) + 8(n − 1) ]   (inductive hypothesis)
       ≤ n + 1/n (3n²)
       = 4n. ∎
(tiny cheat: the sum should start at T(⌊n/2⌋))
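A minimal runnable sketch of randomized quickselect as described above. The 3-way partition is done with list comprehensions for clarity rather than in place; an in-place partition is what the O(n) bound in the slides assumes:

    import random

    def quick_select(a, k):
        """Return the k-th smallest element of a (1-indexed), expected O(n) time."""
        p = random.choice(a)                 # pivot chosen uniformly at random
        L = [x for x in a if x < p]          # 3-way partition around p
        M = [x for x in a if x == p]
        R = [x for x in a if x > p]
        if k <= len(L):
            return quick_select(L, k)        # kth smallest lies in L
        elif k > len(L) + len(M):
            return quick_select(R, k - len(L) - len(M))
        else:
            return p                         # kth smallest equals the pivot

    assert quick_select([5, 1, 4, 2, 3], 3) == 3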
Selection in worst-case linear time
Goal. Find a pivot element p that divides the list of n elements into two pieces so that each piece is guaranteed to have ≤ 7/10 n elements.
Q. How to find an approximate median in linear time?
A. Recursively compute the median of a sample of ≤ 2/10 n elements.

T(n) = O(1) if n = 1; T(7/10 n) + T(2/10 n) + O(n) otherwise
(two subproblems of different sizes!)
⇒ T(n) = O(n) — we'll need to show this.

Choosing the pivot element
• Divide the n elements into ⌊n/5⌋ groups of 5 elements each (plus extras).
• Find the median of each group (except the extras).
• Find the median of the ⌊n/5⌋ medians recursively.
• Use the median-of-medians as the pivot element.
[figure: n = 54 elements arranged in groups of 5; the group medians and the median of medians are highlighted]

Median-of-medians selection algorithm

Mom-Select(A, k)
  n ← |A|.
  If (n < 50)
     Return the kth smallest element of A via mergesort.
  Group A into ⌊n/5⌋ groups of 5 elements each (ignore leftovers).
  B ← median of each group of 5.
  p ← Mom-Select(B, ⌊n/10⌋).                 [median of medians]
  (L, M, R) ← Partition-3-Way(A, p).
  If (k ≤ |L|) Return Mom-Select(L, k).
  Else if (k > |L| + |M|) Return Mom-Select(R, k − |L| − |M|).
  Else Return p.

(A runnable sketch follows.)
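A compact runnable sketch of Mom-Select. The cutoff of 50, group size 5, and the ⌊n/10⌋ rank for the median of medians follow the pseudocode above; using Python's sorted() for the small base case and the 5-element groups stands in for mergesort and is our shortcut:

    def mom_select(a, k):
        """Return the k-th smallest element of a (1-indexed) in worst-case O(n)
        time, using the median-of-medians pivot rule."""
        n = len(a)
        if n < 50:
            return sorted(a)[k - 1]          # base case: sort directly
        # medians of the ⌊n/5⌋ full groups of 5 (leftovers ignored)
        medians = [sorted(a[i:i + 5])[2] for i in range(0, n - n % 5, 5)]
        p = mom_select(medians, n // 10)     # median of medians
        L = [x for x in a if x < p]
        M = [x for x in a if x == p]
        if k <= len(L):
            return mom_select(L, k)
        elif k > len(L) + len(M):
            R = [x for x in a if x > p]
            return mom_select(R, k - len(L) - len(M))
        return p

    assert mom_select(list(range(100, 0, -1)), 10) == 10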

Analysis of median-of-medians selection algorithm
• At least half of the 5-element medians ≤ p.
• Hence at least ⌊⌊n/5⌋ / 2⌋ = ⌊n/10⌋ medians ≤ p, and so at least 3⌊n/10⌋ elements ≤ p.
• Symmetrically, at least half of the 5-element medians ≥ p, so at least ⌊n/10⌋ medians ≥ p and at least 3⌊n/10⌋ elements ≥ p.
[figure: groups of 5 with the medians ≤ p and ≥ p highlighted]

Median-of-medians selection algorithm recurrence
• Select is called recursively with ⌊n/5⌋ elements to compute the MOM p.
• At least 3⌊n/10⌋ elements ≤ p.
• At least 3⌊n/10⌋ elements ≥ p.
• Select is called recursively with at most n − 3⌊n/10⌋ elements.

Def. C(n) = max # compares on any array of n elements.
C(n) ≤ C(⌊n/5⌋) + C(n − 3⌊n/10⌋) + 11/5 n
where the first term is the recursive median-of-medians computation, the second is the recursive select, and 11/5 n covers computing the medians of 5 (≤ 6 compares per group) plus partitioning (≤ n compares).

Intuition.
• C(n) is going to be at least linear in n ⇒ C(n) is super-additive.
• Ignoring floors, this implies that C(n) ≤ C(n/5 + n − 3n/10) + 11/5 n = C(9n/10) + 11/5 n ⇒ C(n) ≤ 22n.

Now, let's solve the given recurrence.
• Assume n is both a power of 5 and a power of 10?
• Prove that C(n) is monotone non-decreasing.

Divide-and-conquer: quiz 4
Consider the following recurrence:
C(n) = 0 if n ≤ 1; C(⌊n/5⌋) + C(n − 3⌊n/10⌋) + 11/5 n if n > 1.
Is C(n) monotone non-decreasing?
A. Yes, obviously.
B. Yes, but the proof is tedious.
C. Yes, but the proof is hard.
D. No.

Median-of-medians selection algorithm recurrence
Analysis of the selection algorithm recurrence.
• T(n) = max # compares on any array of ≤ n elements.
• T(n) is monotone non-decreasing, but C(n) is not!

T(n) ≤ 6n if n < 50;
T(n) ≤ max{ T(n − 1), T(⌊n/5⌋) + T(n − 3⌊n/10⌋) + 11/5 n } if n ≥ 50.

Claim. T(n) ≤ 44n.
Pf. [by strong induction]
• Base case: T(n) ≤ 6n for n < 50 (mergesort).
• Inductive hypothesis: assume true for 1, 2, ..., n − 1.
• Induction step: for n ≥ 50, we have either T(n) ≤ T(n − 1) ≤ 44n or
  T(n) ≤ T(⌊n/5⌋) + T(n − 3⌊n/10⌋) + 11/5 n
       ≤ 44⌊n/5⌋ + 44(n − 3⌊n/10⌋) + 11/5 n     (inductive hypothesis)
       ≤ 44(n/5) + 44n − 44(n/4) + 11/5 n       (for n ≥ 50, 3⌊n/10⌋ ≥ n/4)
       = 44n. ∎

Divide-and-conquer: quiz 5
Suppose that we divide n elements into ⌊n/r⌋ groups of r elements each, and use the median-of-medians of these ⌊n/r⌋ groups as the pivot. For which r is the worst-case running time of select O(n)?
A. r = 3
B. r = 7
C. Both A and B.
D. Neither A nor B.

Linear-time selection retrospective
Proposition. [Blum-Floyd-Pratt-Rivest-Tarjan 1973] There exists a compare-based selection algorithm whose worst-case running time is O(n).

"Time Bounds for Selection", Manuel Blum, Robert W. Floyd, Vaughan Pratt, Ronald L. Rivest, and Robert E. Tarjan, Department of Computer Science, Stanford University, Stanford, California 94305. Received November 14, 1972. "The number of comparisons required to select the i-th smallest of n numbers is shown to be at most a linear function of n by analysis of a new selection algorithm—PICK.
Specifically, no more than 5.4305 n comparisons are ever required. This bound is improved for extreme values of i, and a new lower bound on the requisite number of comparisons is also proved."

Theory.
• Optimized version of BFPRT: ≤ 5.4305 n compares.
• Upper bound: [Dor-Zwick 1995] ≤ 2.95 n compares.
• Lower bound: [Dor-Zwick 1999] ≥ (2 + 2⁻⁸⁰) n compares.
Practice. Constants too large to be useful.

ALGORITHM COMPLEXITY ANALYSIS: ITERATIVE ALGORITHMS

1. Stable Matching
► stable matching problem
► Gale-Shapley algorithm
► hospital optimality
► context
(Kleinberg-Tardos, Section 1.1; lecture slides by Kevin Wayne, copyright © 2005 Pearson-Addison Wesley, http://www.cs.princeton.edu/~wayne/kleinberg-tardos)

Matching med-school students to hospitals
Goal. Given a set of preferences among hospitals and med-school students, design a self-reinforcing admissions process.
Unstable pair. Hospital h and student s form an unstable pair if both:
• h prefers s to one of its admitted students.
• s prefers h to its assigned hospital.
Stable assignment. An assignment with no unstable pairs.
• Natural and desirable condition.
• Individual self-interest prevents any hospital-student side deal.

Stable matching problem: input
Input. A set of n hospitals H and a set of n students S.
• Each hospital h ∈ H ranks students. (one student per hospital, for now)
• Each student s ∈ S ranks hospitals.

hospitals' preference lists (favorite first):
Atlanta: Xavier, Yolanda, Zeus
Boston: Yolanda, Xavier, Zeus
Chicago: Xavier, Yolanda, Zeus

students' preference lists (favorite first):
Xavier: Boston, Atlanta, Chicago
Yolanda: Atlanta, Boston, Chicago
Zeus: Atlanta, Boston, Chicago

Perfect matching
Def. A matching M is a set of ordered pairs h-s with h ∈ H and s ∈ S such that:
• Each hospital h ∈ H appears in at most one pair of M.
• Each student s ∈ S appears in at most one pair of M.
Def. A matching M is perfect if |M| = |H| = |S| = n.
Ex. With the preference lists above, M = { A-Z, B-Y, C-X } is a perfect matching.

Unstable pair
Def. Given a perfect matching M, hospital h and student s form an unstable pair if both:
• h prefers s to its matched student.
• s prefers h to its matched hospital.
Key point. An unstable pair h-s could each improve by joint action.
Ex. A-Y is an unstable pair for the matching M = { A-Z, B-Y, C-X }.

Stable matching: quiz 1
Which pair is unstable in the matching { A-X, B-Z, C-Y } (with the preference lists above)?
A. A-Y.
B. B-X.
C. B-Z.
D. None of the above.

Stable matching problem
Def. A stable matching is a perfect matching with no unstable pairs.
Stable matching problem. Given the preference lists of n hospitals and n students, find a stable matching (if one exists).
Ex. With the preference lists above, M = { A-X, B-Y, C-Z } is a stable matching.

Stable roommate problem
Q. Do stable matchings always exist?
A. Not obvious a priori.
Stable roommate problem.
• 2n people; each person ranks the others from 1 to 2n − 1.
• Assign roommate pairs so that there are no unstable pairs.

preference lists (favorite first):
A: B, C, D
B: C, A, D
C: A, B, D
D: A, B, C

No perfect matching is stable:
A-B, C-D ⇒ B-C unstable
A-C, B-D ⇒ A-B unstable
A-D, B-C ⇒ A-C unstable

Observation. Stable matchings need not exist.

1. Stable Matching ► stable matching problem ► Gale-Shapley algorithm ► hospital optimality ► context (Section 1.1)

Gale-Shapley deferred acceptance algorithm
An intuitive method that guarantees to find a stable matching.

Gale-Shapley(preference lists for hospitals and students)
  Initialize M to the empty matching.
  While (some hospital h is unmatched and hasn't proposed to every student)
     s ← first student on h's list to whom h has not yet proposed.
     If (s is unmatched)
        Add h-s to matching M.
     Else if (s prefers h to current partner h')
        Replace h'-s with h-s in matching M.
     Else
        s rejects h.
  Return stable matching M.

(A runnable sketch of the algorithm appears after the correctness proofs below.)

Proof of correctness: termination
Observation 1. Hospitals propose to students in decreasing order of preference.
Observation 2. Once a student is matched, the student never becomes unmatched; she only "trades up."
Claim. The algorithm terminates after at most n² iterations of the while loop.
Pf. Each time through the while loop, a hospital proposes to a new student. Thus, there are at most n² possible proposals. ∎
[example: an n = 5 instance whose preference lists force n(n − 1) + 1 proposals]

Proof of correctness: perfect matching
Claim. Gale-Shapley outputs a matching.
Pf.
• A hospital proposes only if unmatched ⇒ matched to ≤ 1 student.
• A student keeps only the best hospital ⇒ matched to ≤ 1 hospital.
Claim. In the Gale-Shapley matching, all hospitals get matched.
Pf. [by contradiction]
• Suppose, for sake of contradiction, that some hospital h ∈ H is unmatched upon termination of the Gale-Shapley algorithm.
• Then some student, say s ∈ S, is unmatched upon termination.
• By Observation 2, s was never proposed to.
• But h proposes to every student, since h ends up unmatched: a contradiction.
Claim. In the Gale-Shapley matching, all students get matched.
Pf. [by counting]
• By the previous claim, all n hospitals get matched.
• Thus, all n students get matched. ∎

Proof of correctness: stability
Claim. In the Gale-Shapley matching M*, there are no unstable pairs.
Pf. Consider any pair h-s that is not in M*.
• Case 1: h never proposed to s.
  ⇒ h prefers its Gale-Shapley partner s' to s. (hospitals propose in decreasing order of preference)
  ⇒ h-s is not unstable.
• Case 2: h proposed to s.
  ⇒ s rejected h (either right away or later).
  ⇒ s prefers her Gale-Shapley partner h' to h. (students only trade up)
  ⇒ h-s is not unstable.
• In either case, the pair h-s is not unstable. ∎

Summary
Stable matching problem. Given n hospitals and n students, and their preference lists, find a stable matching if one exists.
Theorem. [Gale-Shapley 1962] The Gale-Shapley algorithm guarantees to find a stable matching for any problem instance.

"College Admissions and the Stability of Marriage", D. Gale and L. S. Shapley, Brown University and the RAND Corporation. 1. Introduction.
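A minimal runnable sketch of the deferred-acceptance algorithm above. Preference lists are dicts mapping each agent to an ordered list (our data layout), and the example data is the three-hospital instance from these slides:

    def gale_shapley(hospital_prefs, student_prefs):
        """Hospital-proposing deferred acceptance. Returns dict hospital -> student."""
        # rank[s][h] = position of hospital h on student s's list (lower = preferred)
        rank = {s: {h: i for i, h in enumerate(prefs)}
                for s, prefs in student_prefs.items()}
        next_proposal = {h: 0 for h in hospital_prefs}   # next student to try
        match = {}                                       # student -> hospital
        unmatched = list(hospital_prefs)
        while unmatched:
            h = unmatched.pop()
            s = hospital_prefs[h][next_proposal[h]]      # best student not yet proposed to
            next_proposal[h] += 1
            if s not in match:
                match[s] = h                             # s was free: tentatively accept
            elif rank[s][h] < rank[s][match[s]]:
                unmatched.append(match[s])               # s trades up; old partner rejoins
                match[s] = h
            else:
                unmatched.append(h)                      # s rejects h; h proposes again later
        return {h: s for s, h in match.items()}

    hospitals = {'A': ['X', 'Y', 'Z'], 'B': ['Y', 'X', 'Z'], 'C': ['X', 'Y', 'Z']}
    students  = {'X': ['B', 'A', 'C'], 'Y': ['A', 'B', 'C'], 'Z': ['A', 'B', 'C']}
    print(gale_shapley(hospitals, students))   # {'A': 'X', 'B': 'Y', 'C': 'Z'}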
From the introduction: "The problem with which we shall be concerned relates to the following typical situation: A college is considering a set of n applicants of which it can admit a quota of only q. Having evaluated their qualifications, the admissions office must decide which ones to admit. The procedure of offering admission only to the q best-qualified applicants will not generally be satisfactory, for it cannot be assumed that all who are offered admission will accept. Accordingly, in order for a college to receive q acceptances, it will generally have to offer to admit more than q applicants. The problem of determining how many and which ones to admit requires some rather involved guesswork. It may not be known (a) whether a given applicant has also applied elsewhere; if this is known it may not be known (b) how he ranks the colleges to which he has applied; even if this is known it will not be known (c) which of the other colleges will offer to admit him. A result of all this uncertainty is that colleges can expect only that the entering class will come reasonably close in numbers to the desired quota, and be reasonably close to the attainable optimum in quality."

Stable matching: quiz 2
Do all executions of Gale-Shapley lead to the same stable matching?
A. No, because the algorithm is nondeterministic.
B. No, because an instance can have several stable matchings.
C. Yes, because each instance has a unique stable matching.
D. Yes, even though an instance can have several stable matchings and the algorithm is nondeterministic.

1. Stable Matching ► stable matching problem ► Gale-Shapley algorithm ► hospital optimality ► context (Section 1.1)

Understanding the solution
For a given problem instance, there may be several stable matchings.
Ex. An instance with two stable matchings, S = { A-X, B-Y, C-Z } and S' = { A-Y, B-X, C-Z }:
hospitals' preference lists: A: X, Y, Z; B: Y, X, Z; C: X, Y, Z
students' preference lists: X: B, A, C; Y: A, B, C; Z: A, B, C

Understanding the solution
Def. Student s is a valid partner for hospital h if there exists any stable matching in which h and s are matched.
Ex. (for the instance above)
• Both X and Y are valid partners for A.
• Both X and Y are valid partners for B.
• Z is the only valid partner for C.

Stable matching: quiz 3
Who is the best valid partner for W in the following instance?
A. A
B. B
C. C
D. D
(the instance has 6 stable matchings: { A-W, B-X, C-Y, D-Z }, { A-X, B-W, C-Y, D-Z }, { A-X, B-Y, C-W, D-Z }, { A-Z, B-W, C-Y, D-X }, { A-Z, B-Y, C-W, D-X }, { A-Y, B-Z, C-W, D-X })
hospitals' preference lists: A: Y, Z, X, W; B: Z, Y, W, X; C: W, Y, X, Z; D: X, Z, W, Y
students' preference lists: W: D, A, B, C; X: C, B, A, D; Y: C, B, A, D; Z: D, A, B, C

Understanding the solution
Def. Student s is a valid partner for hospital h if there exists any stable matching in which h and s are matched.
Hospital-optimal assignment. Each hospital receives its best valid partner.
• Is it a perfect matching?
• Is it stable?
Claim. All executions of Gale-Shapley yield the hospital-optimal assignment.
Corollary. The hospital-optimal assignment is a stable matching!
(Checking stability of a given matching directly is also easy; see the sketch below.)
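Checking whether a given perfect matching is stable is a useful sanity check on these definitions. A small sketch, with the data layout of the Gale-Shapley sketch earlier (the function name is ours):

    def unstable_pairs(matching, hospital_prefs, student_prefs):
        """Return all unstable pairs (h, s) of a perfect matching.

        matching: dict hospital -> student. A pair h-s not in M is unstable iff
        h prefers s to its matched student AND s prefers h to her matched hospital.
        """
        partner_of_student = {s: h for h, s in matching.items()}
        bad = []
        for h, prefs in hospital_prefs.items():
            matched = matching[h]
            for s in prefs:
                if s == matched:
                    break                      # remaining students rank below h's match
                # h prefers s to its match; is the preference mutual?
                s_prefs = student_prefs[s]
                if s_prefs.index(h) < s_prefs.index(partner_of_student[s]):
                    bad.append((h, s))
        return bad

    hospitals = {'A': ['X', 'Y', 'Z'], 'B': ['Y', 'X', 'Z'], 'C': ['X', 'Y', 'Z']}
    students  = {'X': ['B', 'A', 'C'], 'Y': ['A', 'B', 'C'], 'Z': ['A', 'B', 'C']}
    assert unstable_pairs({'A': 'X', 'B': 'Y', 'C': 'Z'}, hospitals, students) == []
    assert ('A', 'Y') in unstable_pairs({'A': 'Z', 'B': 'Y', 'C': 'X'}, hospitals, students)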
Hospital optimality
Claim. The Gale-Shapley matching M* is hospital-optimal.
Pf. [by contradiction]
• Suppose some hospital is matched with a student other than its best valid partner.
• Hospitals propose in decreasing order of preference ⇒ some hospital is rejected by a valid partner during Gale-Shapley.
• Let h be the first such hospital, and let s be the first valid partner that rejects h.
• Let M be a stable matching where h and s are matched.
• When s rejects h in Gale-Shapley, s forms (or re-affirms) a commitment to a hospital, say h'. ⇒ s prefers h' to h.
• Let s' be the partner of h' in M.
• h' had not been rejected by any valid partner (including s') at the point when h is rejected by s, because this is the first rejection by a valid partner.
• Thus, h' had not yet proposed to s' when h' proposed to s. ⇒ h' prefers s to s'.
• Thus, h'-s is unstable in M, a contradiction. ∎

Student pessimality
Q. Does hospital-optimality come at the expense of the students?
A. Yes.
Student-pessimal assignment. Each student receives her worst valid partner.
Claim. Gale-Shapley finds the student-pessimal stable matching M*.
Pf. [by contradiction]
• Suppose h-s are matched in M*, but h is not the worst valid partner for s.
• There exists a stable matching M in which s is paired with a hospital, say h', whom s prefers less than h. ⇒ s prefers h to h'.
• Let s' be the partner of h in M. By hospital-optimality, s is the best valid partner for h. ⇒ h prefers s to s'.
• Thus, h-s is an unstable pair in M, a contradiction. ∎

Stable matching: quiz 4
Suppose each agent knows the preference lists of every other agent before the hospital propose-and-reject algorithm is executed. Which is true?
A. No hospital can improve by falsifying its preference list.
B. No student can improve by falsifying their preference list.
C. Both A and B.
D. Neither A nor B.

1. Stable Matching ► stable matching problem ► Gale-Shapley algorithm ► hospital optimality ► context (Section 1.1)

Extensions
Extension 1. Some agents declare others as unacceptable. (e.g., a med-school student unwilling to work in Cleveland)
Extension 2. Some hospitals have more than one position.
Extension 3. Unequal number of positions and students. (> 43K med-school students; only 31K positions)

Def. Matching M is unstable if there is a hospital h and a student s such that:
• h and s are acceptable to each other; and
• either s is unmatched, or s prefers h to her assigned hospital; and
• either h does not have all its places filled, or h prefers s to at least one of its assigned students.
Theorem. There exists a stable matching.
Pf. Straightforward generalization of the Gale-Shapley algorithm.

Historical context
National Resident Matching Program (NRMP).
• Centralized clearinghouse to match med-school students to hospitals.
• Began in 1952 to fix the unraveling of offer dates. (hospitals began making offers earlier and earlier, up to 2 years in advance)
• Originally used the "Boston Pool" algorithm.
• Algorithm overhauled in 1998:
  - med-school student optimal
  - deals with various side constraints (e.g., allows couples to match together); a stable matching is then no longer guaranteed to exist

"The Redesign of the Matching Market for American Physicians: Some Engineering Aspects of Economic Design", by Alvin E. Roth and Elliott Peranson: "We report on the design of the new clearinghouse adopted by the National Resident Matching Program, which annually fills approximately 20,000 jobs for new physicians. Because the market has complementarities between applicants and between positions, the theory of simple matching markets does not apply directly. However, computational experiments show the theory provides good approximations. Furthermore, the set of stable matchings, and the opportunities for strategic manipulation, are surprisingly small. A new kind of 'core convergence' result explains this; that each applicant interviews only a small fraction of available positions is important.
We also describe engineering aspects of the design process." (JEL C78, B41, J44)

2012 Nobel Prize in Economics
Lloyd Shapley. Stable matching theory and the Gale-Shapley algorithm. (Original applications: college admissions and opposite-sex marriage.)
Alvin Roth. Applied Gale-Shapley to matching med-school students with hospitals, students with schools, and organ donors with patients.

New York City high school match
8th grader. Ranks top-5 high schools.
High school. Ranks students (and limit).
Goal. Match 90K students to 500 high school programs.
"How Game Theory Helped Improve New York City's High School Application Process", by Tracy Tullis, Dec. 5, 2014: "Tuesday was the deadline for eighth graders in New York City to submit applications to secure a spot at one of 426 public high schools. After months of school tours and tests, auditions and interviews, 75,000 students have entrusted their choices to a computer program that will arrange their school assignments for the coming year. The weeks of research and deliberation will be reduced to a fraction of a second of mathematical calculation: in just a couple of hours, all the sorting for the Class of 2019 will be finished."

Questbridge national college match
Low-income student. Ranks colleges.
College. Ranks students willing to admit (and limit).
Goal. Match students to colleges.

A modern application
Content delivery networks. Distribute much of the world's content on the web. (Akamai)
User. Preferences based on latency and packet loss.
Web server. Preferences based on costs of bandwidth and co-location.
Goal. Assign billions of users to servers, every 10 seconds.
"Algorithmic Nuggets in Content Delivery", Bruce M. Maggs (Duke and Akamai, bmm@cs.duke.edu) and Ramesh K. Sitaraman (UMass, Amherst and Akamai, ramesh@cs.umass.edu). This article is an editorial note submitted to CCR; it has not been peer reviewed. Abstract: "This paper 'peeks under the covers' at the subsystems that provide the basic functionality of a leading content delivery network. Based on our experiences in building one of the largest distributed systems in the world, we illustrate how sophisticated algorithmic research has been adapted to balance the load between and within server clusters, manage the caches on servers, select paths through an overlay routing network, and elect leaders in various contexts. In each instance, we first explain the theory underlying the algorithms, then introduce practical considerations not captured by the theoretical models, and finally describe what is implemented in practice. Through these examples, we highlight the role of algorithmic research in the design of complex networked systems.
The paper also illustrates the close synergy that exists between research and industry, where research ideas cross over into products and product requirements drive future research."
[figure: CDN architecture with content servers, an authoritative name server, and DNS for global and local load balancing]

AMORTIZED COMPLEXITY: AMORTIZED ANALYSIS

Data structures I, II, III, and IV
I. Amortized Analysis
II. Binary and Binomial Heaps
III. Fibonacci Heaps
IV. Union-Find
(Lecture slides by Kevin Wayne, http://www.cs.princeton.edu/~wayne/kleinberg-tardos)

Data structures
Static problems. Given an input, produce an output.
Ex. Sorting, FFT, edit distance, shortest paths, MST, max-flow, ...
Dynamic problems. Given a sequence of operations (given one at a time), produce a sequence of outputs.
Ex. Stack, queue, priority queue, symbol table, union-find, ...
Algorithm. Step-by-step procedure to solve a problem.
Data structure. Way to store and organize data.
Ex. Array, linked list, binary heap, binary search tree, hash table, ...

Appetizer
Goal. Design a data structure to support all operations in O(1) time.
• Init(n): create and return an initialized array (all zero) of length n.
• Read(A, i): return element i in the array.
• Write(A, i, value): set element i in the array to value.
Assumptions. (true in C or C++, but not Java)
• Can malloc an uninitialized array of length n in O(1) time.
• Given an array, can read or write element i in O(1) time.
Remark. A plain array does Init in Θ(n) time, and Read and Write in O(1) time.

Appetizer
Data structure. Three arrays A[1..n], B[1..n], and C[1..n], and an integer k.
• A[i] stores the current value for Read (if initialized).
• k = number of initialized entries.
• C[j] = index of the jth initialized element, for j = 1, ..., k.
• If C[j] = i, then B[i] = j, for j = 1, ..., k.
Theorem. A[i] is initialized iff both 1 ≤ B[i] ≤ k and C[B[i]] = i.
Pf. Ahead.

Ex. (k = 4; A[4] = 99, A[6] = 33, A[2] = 22, and A[3] = 55 initialized in that order; "?" marks uninitialized garbage)
index: 1  2  3  4  5  6  7  8
A[]:   ?  22 55 99 ?  33 ?  ?
B[]:   ?  3  4  1  ?  2  ?  ?
C[]:   4  6  2  3  ?  ?  ?  ?

Init(A, n)
  k ← 0.
  A ← malloc(n).
  B ← malloc(n).
  C ← malloc(n).

Read(A, i)
  If (Is-Initialized(A[i])) Return A[i].
  Else Return 0.

Write(A, i, value)
  If (Is-Initialized(A[i]))
     A[i] ← value.
  Else
     k ← k + 1.
     A[i] ← value.
     B[i] ← k.
     C[k] ← i.

Is-Initialized(A, i)
  If (1 ≤ B[i] ≤ k) and (C[B[i]] = i)
     Return true.
  Else
     Return false.

Theorem. A[i] is initialized iff both 1 ≤ B[i] ≤ k and C[B[i]] = i.
Pf.
⇒ If A[i] is the jth entry to be initialized, then B[i] = j ≤ k and C[j] = i.
⇐ Suppose A[i] is uninitialized.
• If B[i] < 1 or B[i] > k, then A[i] is clearly detected as uninitialized.
• If 1 ≤ B[i] ≤ k by coincidence, then we still can't have C[B[i]] = i, because none of the entries C[1..k] can equal i. ∎
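A runnable sketch of the appetizer structure. Python has no malloc of uninitialized memory, so the [None] * n allocations below (which are themselves Θ(n)) merely stand in for C's O(1) malloc of garbage cells; the point is the B/C certificate logic, which never pre-initializes anything. Class and method names are ours:

    class LazyArray:
        """Array with O(1)-style Init/Read/Write: A holds values, C lists the
        indices initialized so far (in order), and B[i] points back into C."""
        def __init__(self, n):                 # Init(n): no per-cell initialization logic
            self.k = 0                         # number of initialized entries
            self.A = [None] * n                # stand-ins for malloc'd (garbage) cells
            self.B = [None] * n
            self.C = [None] * n

        def _initialized(self, i):
            j = self.B[i]
            return isinstance(j, int) and 1 <= j <= self.k and self.C[j - 1] == i

        def read(self, i):
            return self.A[i] if self._initialized(i) else 0

        def write(self, i, value):
            if not self._initialized(i):
                self.k += 1                    # record i as the k-th initialized index
                self.B[i] = self.k
                self.C[self.k - 1] = i
            self.A[i] = value

    a = LazyArray(8)
    assert a.read(5) == 0                      # never written: reads as zero
    a.write(5, 99)
    assert a.read(5) == 99 and a.read(4) == 0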
Amortized Analysis
► binary counter
► multi-pop stack
► dynamic table
(Lecture slides by Kevin Wayne, http://www.cs.princeton.edu/~wayne/kleinberg-tardos)

Amortized analysis
Worst-case analysis. Determine the worst-case running time of a data-structure operation as a function of the input size n. (Can be too pessimistic if the only way to encounter an expensive operation is when there were lots of previous cheap operations.)
Amortized analysis. Determine the worst-case running time of a sequence of n data-structure operations.
Ex. Starting from an empty stack implemented with a dynamic table, any sequence of n push and pop operations takes O(n) time in the worst case.

Amortized analysis: applications
• Splay trees.
• Dynamic table.
• Fibonacci heaps.
• Garbage collection.
• Move-to-front list updating.
• Push-relabel algorithm for max flow.
• Path compression for disjoint-set union.
• Structural modifications to red-black trees.
• Security, databases, distributed computing, ...

"Amortized Computational Complexity", Robert Endre Tarjan, SIAM J. Alg. Disc. Meth., Vol. 6, No. 2, April 1985. Abstract: "A powerful technique in the complexity analysis of data structures is amortization, or averaging over time. Amortized running time is a realistic but robust complexity measure for which we can obtain surprisingly tight upper and lower bounds on a variety of algorithms. By following the principle of designing algorithms whose amortized complexity is low, we obtain 'self-adjusting' data structures that are simple, flexible and efficient. This paper surveys recent work by several researchers on amortized complexity." AMS(MOS) subject classifications: 68C25, 68E05.

Amortized Analysis ► binary counter ► multi-pop stack ► dynamic table (CLRS Chapter 17)

Binary counter
Goal. Increment a k-bit binary counter (mod 2^k).
Representation. a_j = jth least significant bit of the counter.
[table: counter values 0 through 16 written in binary, one increment per row]
Cost model. Number of bits flipped.

Theorem. Starting from the zero counter, a sequence of n Increment operations flips O(nk) bits. (overly pessimistic upper bound)
Pf. At most k bits are flipped per increment. ∎

Aggregate method (brute force)
Aggregate method. Analyze the cost of a sequence of operations.
In n Increment operations, bit 0 flips every time, bit 1 every other time, and in general bit j flips ⌊n / 2^j⌋ times, so the total number of bit flips is Σ_{j ≥ 0} ⌊n / 2^j⌋ < 2n.
Theorem. Starting from the zero counter, a sequence of n Increment operations flips O(n) bits: at most 2n in total, i.e., at most 2 per Increment amortized.
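A tiny experiment confirming the aggregate bound; the counter is kept as a list of bits, least significant first, and total_flips counts the work over n increments (names are ours):

    def increment(bits):
        """Increment a binary counter (list of bits, LSB first); return # bits flipped."""
        flips, j = 0, 0
        while j < len(bits) and bits[j] == 1:
            bits[j] = 0                  # carry: flip a 1 to 0
            flips, j = flips + 1, j + 1
        if j < len(bits):
            bits[j] = 1                  # flip the rightmost 0 to 1
            flips += 1
        return flips                     # overflow wraps around silently (mod 2^k)

    k, n = 8, 200
    bits = [0] * k
    total_flips = sum(increment(bits) for _ in range(n))
    assert total_flips < 2 * n           # aggregate bound: fewer than 2n flips total
    print(total_flips / n)               # amortized cost per increment, just under 2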
Accounting method (banker's method)
Assign (potentially) different charges to each operation. (the charge can be more or less than the actual cost)
• Dᵢ = data structure after the ith operation.
• cᵢ = actual cost of the ith operation.
• ĉᵢ = amortized cost of the ith operation = amount we charge operation i.
• When ĉᵢ > cᵢ, we store credits in data structure Dᵢ to pay for future ops; when ĉᵢ < cᵢ, we consume credits in data structure Dᵢ.
• The initial data structure D₀ starts with 0 credits.
Credit invariant. The total number of credits in the data structure is ≥ 0:
  Σᵢ ĉᵢ − Σᵢ cᵢ ≥ 0.
(Our job is to choose suitable amortized costs so that this invariant holds.)

Theorem. Starting from the initial data structure D₀, the total actual cost of any sequence of n operations is at most the sum of the amortized costs.
Pf. The total actual cost of the sequence of n operations is Σᵢ₌₁ⁿ cᵢ ≤ Σᵢ₌₁ⁿ ĉᵢ, by the credit invariant. ∎
Intuition. Measure running time in terms of credits (time = money).

Binary counter: accounting method
Credits. One credit pays for a bit flip.
Invariant. Each 1 bit has one credit; each 0 bit has zero credits.
Accounting.
• Flip bit j from 0 to 1: charge 2 credits (use one to pay for the flip and save one in bit j).
• Flip bit j from 1 to 0: pay for it with the 1 credit saved in bit j.

Theorem. Starting from the zero counter, a sequence of n Increment operations flips O(n) bits.
Pf.
• Each Increment operation flips at most one 0 bit to a 1 bit (the rightmost 0 bit, unless the counter overflows), so the amortized cost per Increment is ≤ 2.
• Invariant ⇒ the number of credits in the data structure is ≥ 0.
• Total actual cost of n operations ≤ sum of amortized costs ≤ 2n. ∎ (accounting method theorem)

Potential method (physicist's method)
Potential function. Φ(Dᵢ) maps each data structure Dᵢ to a real number such that:
• Φ(D₀) = 0.
• Φ(Dᵢ) ≥ 0 for each data structure Dᵢ.
Actual and amortized costs.
• cᵢ = actual cost of the ith operation.
• ĉᵢ = cᵢ + Φ(Dᵢ) − Φ(Dᵢ₋₁) = amortized cost of the ith operation.

Theorem. Starting from the initial data structure D₀, the total actual cost of any sequence of n operations is at most the sum of the amortized costs.
Pf. The amortized cost of the sequence of operations is:
  Σᵢ₌₁ⁿ ĉᵢ = Σᵢ₌₁ⁿ (cᵢ + Φ(Dᵢ) − Φ(Dᵢ₋₁)) = Σᵢ₌₁ⁿ cᵢ + Φ(Dₙ) − Φ(D₀) ≥ Σᵢ₌₁ⁿ cᵢ. ∎

Binary counter: potential method
Potential function. Let Φ(D) = number of 1 bits in the binary counter D.
• Φ(D₀) = 0.
• Φ(Dᵢ) ≥ 0 for each Dᵢ.

Theorem. Starting from the zero counter, a sequence of n Increment operations flips O(n) bits.
Pf.
• Suppose that the ith Increment operation flips u bits from 1 to 0.
• The actual cost cᵢ ≤ u + 1.
  (the operation flips at most one bit from 0 to 1; no bits are flipped to 1 when the counter overflows)
• The amortized cost ĉᵢ = cᵢ + Φ(Dᵢ) − Φ(Dᵢ₋₁) ≤ cᵢ + 1 − u ≤ 2.
  (the potential decreases by 1 for each of the u bits flipped from 1 to 0, and increases by 1 for the bit flipped from 0 to 1)
• Total actual cost of n operations ≤ sum of amortized costs ≤ 2n. ∎ (potential method theorem)

Famous potential functions
Fibonacci heaps. Φ(H) = trees(H) + 2·marks(H).
Splay trees. Φ(T) = Σₓ ⌊log₂ size(x)⌋.
Move-to-front. Φ(L) = 2·inversions(L, L*).
Preflow-push. Φ(f) = Σ_{v : excess(v) > 0} height(v).
Red-black trees. Φ(T) = Σₓ w(x), where w(x) = 0 if x is red; 1 if x is black with no red children; 0 if x is black with one red child; 2 if x is black with two red children.

Amortized Analysis ► binary counter ► multi-pop stack ► dynamic table (CLRS Section 17.4)

Multipop stack
Goal. Support operations on a set of elements:
• Push(S, x): add element x to stack S.
• Pop(S): remove and return the most-recently added element.
• Multi-Pop(S, k): remove the most-recently added k elements.

Multi-Pop(S, k)
  For i = 1 to k
     Pop(S).

Exceptions. We assume Pop throws an exception if the stack is empty.

Theorem. Starting from an empty stack, any intermixed sequence of n Push, Pop, and Multi-Pop operations takes O(n²) time. (overly pessimistic upper bound)
Pf.
• Use a singly linked list.
• Pop and Push take O(1) time each.
• Multi-Pop takes O(n) time. ∎

Multipop stack: aggregate method
Theorem. Starting from an empty stack, any intermixed sequence of n Push, Pop, and Multi-Pop operations takes O(n) time.
Pf.
• An element is popped at most once for each time that it is pushed.
• There are ≤ n Push operations.
• Thus, there are ≤ n Pop operations (including those made within Multi-Pop). ∎

Multipop stack: accounting method
Credits. One credit pays for either a Push or a Pop.
Invariant. Every element on the stack has 1 credit.
Accounting.
• Push(S, x): charge 2 credits; use 1 credit to pay for pushing x now, and store 1 credit to pay for popping x at some point in the future.
• Pop(S): charge 0 credits.
Theorem. Starting from an empty stack, any intermixed sequence of n Push, Pop, and Multi-Pop operations takes O(n) time.
Pf.
• Invariant ⇒ the number of credits in the data structure is ≥ 0.
• Amortized cost per operation ≤ 2.
• Total actual cost of n operations ≤ sum of amortized costs ≤ 2n. ∎ (accounting method theorem)

(A runnable sketch follows.)
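A minimal sketch of the multipop stack on a Python list, with an operation counter to make the aggregate bound observable; the counter is our instrumentation, not part of the structure:

    class MultipopStack:
        """Stack with push, pop, and multi_pop; any n operations cost O(n) total."""
        def __init__(self):
            self.items = []
            self.work = 0                    # counts elementary push/pop steps

        def push(self, x):
            self.work += 1
            self.items.append(x)

        def pop(self):
            if not self.items:
                raise IndexError("pop from empty stack")
            self.work += 1
            return self.items.pop()

        def multi_pop(self, k):
            for _ in range(k):               # k successive pops
                self.pop()

    s = MultipopStack()
    for i in range(10):
        s.push(i)
    s.multi_pop(7)                           # one operation, 7 units of work
    assert s.items == [0, 1, 2]
    assert s.work <= 2 * 11                  # total work ≤ 2 × (number of operations)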
Multipop stack: potential method
Potential function. Let Φ(D) = number of elements currently on the stack.
• Φ(D₀) = 0.
• Φ(Dᵢ) ≥ 0 for each Dᵢ.
Theorem. Starting from an empty stack, any intermixed sequence of n Push, Pop, and Multi-Pop operations takes O(n) time.
Pf. [Case 1: push]
• Suppose that the ith operation is a Push.
• The actual cost cᵢ = 1.
• The amortized cost ĉᵢ = cᵢ + Φ(Dᵢ) − Φ(Dᵢ₋₁) = 1 + 1 = 2.
Pf. [Case 2: pop]
• Suppose that the ith operation is a Pop.
• The actual cost cᵢ = 1.
• The amortized cost ĉᵢ = cᵢ + Φ(Dᵢ) − Φ(Dᵢ₋₁) = 1 − 1 = 0.
Pf. [Case 3: multi-pop]
• Suppose that the ith operation is a Multi-Pop of k objects.
• The actual cost cᵢ = k.
• The amortized cost ĉᵢ = cᵢ + Φ(Dᵢ) − Φ(Dᵢ₋₁) = k − k = 0. ∎
Pf. [putting everything together]
• Amortized cost ĉᵢ ≤ 2. (2 for push; 0 for pop and multi-pop)
• Sum of amortized costs of the n operations ≤ 2n.
• Total actual cost ≤ sum of amortized costs ≤ 2n. ∎ (potential method theorem)

Part II: Data Structures

OVERVIEW
Heaps
Disjoint-sets Data Structures

HEAPS

Fibonacci Heaps
► preliminaries
► insert
► extract the minimum
► decrease key
► bounding the rank
► meld and delete
(Lecture slides by Kevin Wayne, http://www.cs.princeton.edu/~wayne/kleinberg-tardos)

Priority queues performance cost summary

operation    | linked list | binary heap | binomial heap | Fibonacci heap †
Make-Heap    | O(1)        | O(1)        | O(1)          | O(1)
Is-Empty     | O(1)        | O(1)        | O(1)          | O(1)
Insert       | O(1)        | O(log n)    | O(log n)      | O(1)
Extract-Min  | O(n)        | O(log n)    | O(log n)      | O(log n)
Decrease-Key | O(1)        | O(log n)    | O(log n)      | O(1)
Delete       | O(1)        | O(log n)    | O(log n)      | O(log n)
Meld         | O(1)        | O(n)        | O(log n)      | O(1)
Find-Min     | O(n)        | O(1)        | O(log n)      | O(1)
† amortized

Ahead. O(1) Insert and Decrease-Key, O(log n) Extract-Min.

Fibonacci heaps
Theorem. [Fredman-Tarjan 1986] Starting from an empty Fibonacci heap, any sequence of m Insert, Extract-Min, and Decrease-Key operations involving n Insert operations takes O(m + n log n) time.
History.
• Ingenious data structure and application of amortized analysis.
• Original motivation: improve Dijkstra's shortest path algorithm from O(m log n) to O(m + n log n).
• Also improved best-known bounds for all-pairs shortest paths, the assignment problem, and minimum spanning trees.

Fibonacci heap: structure
• Set of heap-ordered trees. (each child no smaller than its parent)
• Set of marked nodes. (used to keep trees bushy; stay tuned)
[figure: a heap H consisting of several heap-ordered trees, some nodes marked]

Fibonacci heap: structure
Heap representation.
• Store a pointer to the minimum node.
• Maintain the tree roots in a circular, doubly-linked list.

Fibonacci heap: representation
Node representation. Each node stores:
• A pointer to its parent.
• A pointer to any of its children.
• A pointer to its left and right siblings.
• Its rank = number of children.
• Whether it is marked.

Fibonacci heap: representation
Operations we can do in constant time:
• Determine the rank of a node.
• Find the minimum element.
• Merge two root lists together.
• Add or remove a node from the root list.
• Remove a subtree and merge it into the root list.
• Link the root of one tree to the root of another tree.
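A minimal node class matching the representation above; the circular doubly-linked sibling list is what makes the constant-time root-list operations possible. This is a sketch of the pointer bookkeeping only (field and function names are ours), not a full Fibonacci heap:

    class Node:
        """Fibonacci-heap node: parent/child pointers, circular sibling links,
        rank (# of children), and a mark bit."""
        def __init__(self, key):
            self.key = key
            self.parent = None
            self.child = None                  # pointer to any one child
            self.left = self.right = self      # circular doubly-linked sibling list
            self.rank = 0
            self.marked = False

    def splice(a, b):
        """Merge two circular lists in O(1), given one node of each."""
        a.right.left, b.right.left = b, a
        a.right, b.right = b.right, a.right

    def link(smaller, larger):
        """Make `larger` a child of `smaller` (two roots of equal rank), in O(1)."""
        larger.left.right, larger.right.left = larger.right, larger.left  # detach
        larger.left = larger.right = larger
        larger.parent = smaller
        if smaller.child is None:
            smaller.child = larger
        else:
            splice(smaller.child, larger)
        smaller.rank += 1

    a, b = Node(3), Node(7)
    splice(a, b)               # a and b now share one root list
    link(a, b)                 # b becomes a child of a
    assert a.rank == 1 and b.parent is a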
Fibonacci heap: notation
notation | meaning
n        | number of nodes
rank(x)  | number of children of node x
rank(H)  | max rank of any node in heap H
trees(H) | number of trees in heap H
marks(H) | number of marked nodes in heap H
[example heap: n = 14, rank(H) = 3, trees(H) = 5, marks(H) = 3]

Fibonacci heap: potential function
Potential function. Φ(H) = trees(H) + 2·marks(H).

Fibonacci Heaps ► preliminaries ► insert ► extract the minimum ► decrease key ► bounding the rank ► meld and delete (CLRS Section 19.2)

Fibonacci heap: insert
• Create a new singleton tree.
• Add it to the root list; update the min pointer (if necessary).
[example: insert 21; the new singleton joins the root list]

Fibonacci heap: insert analysis
Actual cost. cᵢ = O(1).
Change in potential. ΔΦ = Φ(Hᵢ) − Φ(Hᵢ₋₁) = +1. (one more tree; no change in marks)
Amortized cost. ĉᵢ = cᵢ + ΔΦ = O(1).
(recall Φ(H) = trees(H) + 2·marks(H))

Fibonacci Heaps ► preliminaries ► insert ► extract the minimum ► decrease key ► bounding the rank ► meld and delete (CLRS Section 19.2)

Linking operation
Useful primitive. Combine two trees T₁ and T₂ of rank k:
• Make the larger root be a child of the smaller root.
• The resulting tree T' has rank k + 1.

Fibonacci heap: extract the minimum
• Delete min; meld its children into the root list; update min.
• Consolidate trees so that no two roots have the same rank: traverse the root list keeping an array indexed by rank, and whenever the current root has the same rank as a previously seen root, link the two; stop when no two trees have the same rank.
[worked example: after deleting the min, roots of ranks 0 through 3 are repeatedly linked (23 to 17, then 17 to 7, ...) until all roots have distinct ranks]

Fibonacci heap: extract the minimum analysis
Actual cost. cᵢ = O(rank(H)) + O(trees(H)).
• O(rank(H)) to meld min's children into the root list. (≤ rank(H) children)
• O(rank(H)) + O(trees(H)) to update min. (≤ rank(H) + trees(H) − 1 root nodes)
• O(rank(H)) + O(trees(H)) to consolidate trees. (the number of roots decreases by 1 after each linking operation)
Change in potential. ΔΦ ≤ rank(H') + 1 − trees(H).
• No new nodes become marked.
• trees(H') ≤ rank(H') + 1. (no two trees have the same rank after consolidation)
Amortized cost. O(log n).
• ĉᵢ = cᵢ + ΔΦ = O(rank(H)) + O(rank(H')).
• The rank of a Fibonacci heap with n elements is O(log n). (Fibonacci lemma; stay tuned)
(recall Φ(H) = trees(H) + 2·marks(H))

Fibonacci heap vs. binomial heaps
Observation. If we use only Insert and Extract-Min operations, then all trees are binomial trees B₀, B₁, B₂, ... (we link only trees of equal rank)
Binomial heap property. This implies rank(H) ≤ log₂ n.
Fibonacci heap property. Our Decrease-Key implementation will not preserve this property, but we will implement it in such a way that rank(H) ≤ log_φ n.

Fibonacci Heaps ► preliminaries ► insert ► extract the minimum ► decrease key ► bounding the rank ► meld and delete (CLRS Section 19.3)

Fibonacci heap: decrease key
Intuition for decreasing the key of node x.
• If heap order is not violated, decrease the key of x.
• Otherwise, cut the tree rooted at x and meld it into the root list.
[worked examples: decrease-key of x from 30 to 7; decrease-key of x from 23 to 5]
• Problem: the number of nodes is not exponential in the rank. (after repeated cuts, e.g. decrease-key of 22 to 4, 48 to 3, 31 to 2, 17 to 1, a high-rank node can be left with only few descendants)
• Solution: as soon as a node has its second child cut, cut it off as well and meld it into the root list (and unmark it).

Fibonacci heap: decrease key
Case 1. [heap order not violated]
• Decrease the key of x.
• Change the heap min pointer (if necessary).
[example: decrease-key of x from 46 to 29]

Case 2a. [heap order violated; parent of x not yet marked]
• Decrease the key of x.
• Cut the tree rooted at x, meld it into the root list, and unmark x.
• If the parent p of x is unmarked (it hasn't yet lost a child), mark it; otherwise, cut p, meld it into the root list, and unmark it (and do so recursively for all ancestors that lose a second child).
[example: decrease-key of x from 29 to 15; x is cut and its parent p becomes marked]

Fibonacci heap: decrease key
Case 2. [heap order violated]
• Decrease key of x.
• Cut the tree rooted at x, meld it into the root list, and unmark x.
• If the parent p of x is unmarked (it hasn't yet lost a child), mark it;
  otherwise, cut p, meld it into the root list, and unmark it
  (and do so recursively for all ancestors that lose a second child).

Case 2a example: decrease-key of x from 29 to 15 — the unmarked parent p of x gets marked.
Case 2b example: decrease-key of x from 35 to 5 — the parent p of x is already marked, so cutting x triggers a cascade of cuts up the tree.
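A hedged sketch of decrease-key with marks and cascading cuts, continuing the FibHeapSketch above. It redefines Node with parent/mark fields; for the two sketches to compose, the consolidation step would also have to set u.parent = t when linking. Names are illustrative.

class Node:
    def __init__(self, key):
        self.key = key
        self.children = []
        self.parent = None
        self.marked = False

def decrease_key(heap, x, new_key):
    # heap.roots is the root list, as in the sketch above
    assert new_key <= x.key
    x.key = new_key
    p = x.parent
    if p is not None and x.key < p.key:   # heap order violated
        _cut(heap, x)
        while p is not None:              # cascading cuts
            if not p.marked:
                if p.parent is not None:  # roots are never marked
                    p.marked = True
                break
            nxt = p.parent
            _cut(heap, p)                 # p loses its second child: cut it too
            p = nxt

def _cut(heap, x):
    # cut tree rooted at x, meld into root list, unmark
    p = x.parent
    p.children.remove(x)
    x.parent = None
    x.marked = False
    heap.roots.append(x)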
(The amortized cost of decrease-key is O(1): each cascading cut adds one tree to the root list (+1 in potential) but removes a mark (−2 in potential), so the stored potential pays for the cuts.)

Bounding the rank

Lemma 1. Fix a point in time. Let x be a node of rank k, and let y1, …, yk denote its current children in the order in which they were linked to x. Then:

  rank(yi) ≥ 0        if i = 1
  rank(yi) ≥ i − 2    if i ≥ 2

Pf.
• When yi was linked to x, x already had children y1, …, y(i−1), so rank(x) ≥ i − 1.
• Since only trees whose roots have equal rank are linked, rank(yi) = rank(x) ≥ i − 1 at that moment.
• Since then, yi has lost at most one child (or yi would have been cut).
• Thus, right now rank(yi) ≥ i − 2. ■

Def. Let Tk be the smallest possible tree of rank k satisfying the property.
(Their sizes are Fibonacci numbers: |T0| = F2 = 1, |T1| = F3 = 2, |T2| = F4 = 3, |T3| = F5 = 5, |T4| = F6 = 8, |T5| = F7 = 13, |T6| = F8 = F6 + F7 = 8 + 13 = 21, …)

Lemma 2. Let sk be the minimum number of elements in any Fibonacci heap of rank k. Then sk ≥ F(k+2), where Fk is the kth Fibonacci number.

Pf. [by strong induction on k]
• Base cases: s0 = 1 and s1 = 2.
• Inductive hypothesis: assume si ≥ F(i+2) for i = 0, …, k − 1.
• As in Lemma 1, let y1, …, yk denote the children of a rank-k root x in the order in which they were linked to x. Counting x, y1, and the minimum sizes of the subtrees rooted at y2, …, yk:

  sk ≥ 1 + 1 + (s0 + s1 + … + s(k−2))     (Lemma 1)
     ≥ (1 + F1) + F2 + F3 + … + Fk        (inductive hypothesis)
     = F(k+2). ■                          (Fibonacci fact 1)

Fibonacci lemma. Let H be a Fibonacci heap with n elements. Then rank(H) ≤ log_φ n, where φ is the golden ratio, φ = (1 + √5)/2 ≈ 1.618.

Pf.
• Let H be a Fibonacci heap with n elements and rank k.
• Then n ≥ F(k+2) ≥ φ^k. (Lemma 2; Fibonacci fact 2)
• Taking logs, we obtain rank(H) = k ≤ log_φ n. ■

Fibonacci fact 1

Def. The Fibonacci sequence is 0, 1, 1, 2, 3, 5, 8, 13, 21, …:

  Fk = 0                   if k = 0
     = 1                   if k = 1
     = F(k−1) + F(k−2)     if k ≥ 2

Fibonacci fact 1. For all integers k ≥ 0, F(k+2) = 1 + F0 + F1 + … + Fk.

Pf. [by induction on k]
• Base case: F2 = 1 = 1 + F0.
• Inductive hypothesis: assume F(k+1) = 1 + F0 + F1 + … + F(k−1).

  F(k+2) = Fk + F(k+1)                           (definition)
         = Fk + (1 + F0 + F1 + … + F(k−1))       (inductive hypothesis)
         = 1 + F0 + F1 + … + F(k−1) + Fk. ■      (algebra)

Fibonacci fact 2. F(k+2) ≥ φ^k, where φ = (1 + √5)/2 ≈ 1.618.

Pf. [by induction on k]
• Base cases: F2 = 1 ≥ 1, F3 = 2 ≥ φ.
• Inductive hypothesis: assume Fk ≥ φ^(k−2) and F(k+1) ≥ φ^(k−1).

  F(k+2) = Fk + F(k+1)           (definition)
         ≥ φ^(k−2) + φ^(k−1)     (inductive hypothesis)
         = φ^(k−2) (1 + φ)       (algebra)
         = φ^(k−2) · φ²          (φ² = φ + 1)
         = φ^k. ■                (algebra)
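A tiny numeric sanity check of the two Fibonacci facts and the resulting rank bound (illustrative only):

from math import sqrt, log

phi = (1 + sqrt(5)) / 2

fib = [0, 1]
for _ in range(30):
    fib.append(fib[-1] + fib[-2])

for k in range(20):
    assert fib[k + 2] == 1 + sum(fib[: k + 1])   # Fibonacci fact 1
    assert fib[k + 2] >= phi ** k                # Fibonacci fact 2

# Fibonacci lemma: a heap with n elements has rank <= log_phi(n) ~ 1.44 lg n
n = 10**6
print(log(n, phi))   # ~ 28.8, so rank(H) <= 28 for n = 10^6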
Fibonacci Heaps ► meld and delete [CLRS, Sections 19.2, 19.3]

Fibonacci heap: meld

Meld. Combine two Fibonacci heaps (destroying the old heaps).
Recall. Root lists are circular, doubly-linked lists, so two of them can be concatenated in constant time.

Fibonacci heap: meld analysis

Actual cost. c = O(1).
Change in potential. ΔΦ = 0.
Amortized cost. ĉ = c + ΔΦ = O(1).

Fibonacci heap: delete

Delete. Given a handle to an element x, delete it from heap H.
• Decrease-Key(H, x, −∞).
• Extract-Min(H).

Amortized cost. ĉ = O(rank(H)).
• O(1) amortized for Decrease-Key.
• O(rank(H)) amortized for Extract-Min.

Priority queues: performance cost summary

operation      linked list   binary heap   binomial heap   Fibonacci heap †
Make-Heap      O(1)          O(1)          O(1)            O(1)
Is-Empty       O(1)          O(1)          O(1)            O(1)
Insert         O(1)          O(log n)      O(log n)        O(1)
Extract-Min    O(n)          O(log n)      O(log n)        O(log n)
Decrease-Key   O(1)          O(log n)      O(log n)        O(1)
Delete         O(1)          O(log n)      O(log n)        O(log n)
Meld           O(1)          O(n)          O(log n)        O(1)
Find-Min       O(n)          O(1)          O(log n)        O(1)

† amortized bounds

Accomplished. O(1) Insert and Decrease-Key, O(log n) Extract-Min.

Heaps of heaps. B-heaps. Fat heaps. 2-3 heaps. Leaf heaps. Thin heaps. Skew heaps. Splay heaps. Weak heaps. Leftist heaps. Quake heaps. Pairing heaps. Violation heaps. Run-relaxed heaps. Rank-pairing heaps. Skew-pairing heaps. Rank-relaxed heaps. Lazy Fibonacci heaps.

DISJOINT-SETS DATA STRUCTURES

Union-Find ► naive linking ► link-by-size ► link-by-rank ► path compression ► link-by-rank with path compression ► context
(Lecture slides by Kevin Wayne for Kleinberg-Tardos, Pearson Addison-Wesley; http://www.cs.princeton.edu/~wayne/kleinberg-tardos)

Disjoint-sets data type

Goal. Support three operations on a collection of disjoint sets.
• Make-Set(x): create a new set containing only element x.
• Find(x): return a canonical element in the set containing x.
• Union(x, y): replace the sets containing x and y with their union.

Performance parameters.
• m = number of calls to Make-Set, Find, and Union.
• n = number of elements = number of calls to Make-Set.

Dynamic connectivity. Given an initially empty graph G, support three operations (disjoint sets = connected components):
• Add-Node(u): add node u. (1 Make-Set operation)
• Add-Edge(u, v): add an edge between nodes u and v. (1 Union operation)
• Is-Connected(u, v): is there a path between u and v? (2 Find operations)

Disjoint-sets data type: applications

Original motivation. Compiling Equivalence, Dimension, and Common statements in Fortran.

From "An Improved Equivalence Algorithm" (Bernard A. Galler and Michael J. Fischer, University of Michigan, Ann Arbor): An algorithm for assigning storage on the basis of EQUIVALENCE, DIMENSION and COMMON declarations is presented. The algorithm is based on a tree structure, and has reduced computation time by 40 percent over a previously published algorithm by identifying all equivalence classes with one scan of the EQUIVALENCE declarations. The method is applicable in any problem in which it is necessary to identify equivalence classes, given the element pairs defining the equivalence relation.
Note. This 1964 paper also introduced the key data structure for the problem.

Applications.
• Percolation.
• Kruskal's algorithm.
• Connected components.
• Computing LCAs in trees.
• Computing dominators in digraphs.
• Equivalence of finite state automata.
• Checking flow graphs for reducibility.
• Hoshen-Kopelman algorithm in physics.
• Hindley-Milner polymorphic type inference.
• Morphological attribute openings and closings.
• Matlab's bwlabel function for image processing.
• Compiling Equivalence, Dimension and Common statements in Fortran.

Union-Find ► naive linking

Disjoint-sets data structure

Parent-link representation. Represent each set as a tree of elements.
• Each element has an explicit parent pointer in the tree.
• The root serves as the canonical element (and points to itself).
• Find(x): find the root of the tree containing x.
• Union(x, y): merge the trees containing x and y. (Example: Union(3, 5).)

Array representation. Represent each set as a tree of elements.
• Allocate an array parent[] of length n. (must know the number of elements n a priori)
• parent[i] = j means that the parent of element i is element j.

  i          0  1  2  3  4  5  6  7  8  9
  parent[i]  7  5  7  8  8  7  5  7  8  8

(For example, the parent of 3 is 8, and 8 is a root.)

Note. For brevity, we suppress arrows and self-loops in figures.

Naive linking

Naive linking. Link the root of the first tree to the root of the second tree. (Example: Union(5, 3).)

Make-Set(x)
  parent[x] ← x.

Find(x)
  While (x ≠ parent[x])
    x ← parent[x].
  Return x.

Union(x, y)
  r ← Find(x).
  s ← Find(y).
  parent[r] ← s.

Naive linking: analysis

Theorem. Using naive linking, a Union or Find operation can take Θ(n) time in the worst case, where n is the number of elements.

Pf.
• In the worst case, Find takes time proportional to the height of the tree (the max number of links on any path from the root to a leaf).
• The height of the tree is n − 1 after the sequence of union operations Union(1, 2), Union(2, 3), …, Union(n − 1, n). ■

Union-Find ► link-by-size

Link-by-size

Link-by-size. Maintain a tree size (number of nodes) for each root node. Link the root of the smaller tree to the root of the larger tree (breaking ties arbitrarily). (Example: Union(5, 3).)

Make-Set(x)
  parent[x] ← x.
  size[x] ← 1.

Find(x)
  While (x ≠ parent[x])
    x ← parent[x].
  Return x.

Union(x, y)
  r ← Find(x).
  s ← Find(y).
  If (r = s) Return.
  Else if (size[r] > size[s])
    parent[s] ← r.
    size[r] ← size[r] + size[s].
  Else
    parent[r] ← s.
    size[s] ← size[r] + size[s].
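The link-by-size pseudocode above translates directly to Python; naive linking is the same code with the size bookkeeping and the size test removed. A sketch over elements 0 … n−1:

class UnionFindBySize:
    def __init__(self, n):
        self.parent = list(range(n))   # Make-Set for each of 0..n-1
        self.size = [1] * n

    def find(self, x):
        while x != self.parent[x]:
            x = self.parent[x]
        return x

    def union(self, x, y):
        r, s = self.find(x), self.find(y)
        if r == s:
            return
        if self.size[r] < self.size[s]:
            r, s = s, r                # make r the larger root (ties arbitrary)
        self.parent[s] = r             # link smaller tree below larger root
        self.size[r] += self.size[s]

Usage: uf = UnionFindBySize(10); uf.union(3, 5); then uf.find(3) == uf.find(5) is True.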
Link-by-size: analysis

Property. Using link-by-size, for every root node r: size[r] ≥ 2^height(r).

Pf. [by induction on the number of links]
• Base case: a singleton tree has size 1 and height 0.
• Inductive hypothesis: assume the property holds after the first i links.
• The tree rooted at r changes only when a smaller (or equal-size) tree rooted at s is linked into r.
• Case 1. [height(r) > height(s)]
    size′[r] ≥ size[r]
            ≥ 2^height(r)         (inductive hypothesis)
            = 2^height′(r).
• Case 2. [height(r) ≤ height(s)]
    size′[r] = size[r] + size[s]
            ≥ 2 · size[s]         (link-by-size)
            ≥ 2 · 2^height(s)     (inductive hypothesis)
            = 2^(height(s) + 1) = 2^height′(r). ■

Theorem. Using link-by-size, any Union or Find operation takes O(log n) time in the worst case, where n is the number of elements. (Throughout, lg n = log2 n.)

Pf.
• The running time of each operation is bounded by the tree height.
• By the previous property, the height is ≤ ⌊lg n⌋. ■

Note. The Union operation takes O(1) time except for its two calls to Find.

A tight upper bound

Theorem. Using link-by-size, a tree with n nodes can have height = lg n.

Pf.
• Arrange 2^k − 1 calls to Union to form a binomial tree of order k.
• An order-k binomial tree has 2^k nodes and height k. ■

Union-Find ► link-by-rank [Section 5.1.4]

Link-by-rank

Link-by-rank. Maintain an integer rank for each node, initially 0. Link the root of smaller rank to the root of larger rank; in case of a tie, increase the rank of the new root by 1. (Example: Union(7, 3).)

Note. For now, rank = height.

Make-Set(x)
  parent[x] ← x.
  rank[x] ← 0.

Find(x)
  While (x ≠ parent[x])
    x ← parent[x].
  Return x.

Union(x, y)
  r ← Find(x).
  s ← Find(y).
  If (r = s) Return.
  Else if (rank[r] > rank[s])
    parent[s] ← r.
  Else if (rank[r] < rank[s])
    parent[r] ← s.
  Else
    parent[r] ← s.
    rank[s] ← rank[s] + 1.

Link-by-rank: properties

Property 1. If x is not a root node, then rank[x] < rank[parent[x]].
Pf. A node of rank k is created only by linking two roots of rank k − 1. ■

Property 2. If x is not a root node, then rank[x] will never change again.
Pf. Rank changes only for roots; a nonroot never becomes a root. ■

Property 3. If parent[x] changes, then rank[parent[x]] strictly increases.
Pf. The parent can change only for a root, so before linking parent[x] = x. After x is linked-by-rank to a new root r, we have rank[r] > rank[x]. ■

Property 4. Any root node of rank k has ≥ 2^k nodes in its tree.
Pf. [by induction on k]
• Base case: true for k = 0.
• Inductive hypothesis: assume true for k − 1.
• A node of rank k is created only by linking two roots of rank k − 1.
• By the inductive hypothesis, each subtree has ≥ 2^(k−1) nodes, so the resulting tree has ≥ 2^k nodes. ■

Property 5. The highest rank of a node is ≤ ⌊lg n⌋.
Pf. Immediate from Property 1 and Property 4. ■
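Link-by-rank in the same style; the only changes from the link-by-size sketch are what the roots store and the tie rule:

class UnionFindByRank:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n            # every rank starts at 0

    def find(self, x):
        while x != self.parent[x]:
            x = self.parent[x]
        return x

    def union(self, x, y):
        r, s = self.find(x), self.find(y)
        if r == s:
            return
        if self.rank[r] > self.rank[s]:
            self.parent[s] = r
        elif self.rank[r] < self.rank[s]:
            self.parent[r] = s
        else:                          # tie: link, and bump the new root's rank
            self.parent[r] = s
            self.rank[s] += 1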
Property 6. For any integer k ≥ 0, there are ≤ n/2^k nodes with rank k.
Pf.
• Any root node of rank k has ≥ 2^k descendants. [Property 4]
• Any nonroot node of rank k has ≥ 2^k descendants because:
  - it had this property just before it became a nonroot, [Property 4]
  - its rank doesn't change once it becomes a nonroot, [Property 2]
  - its set of descendants doesn't change once it becomes a nonroot.
• Different nodes of rank k can't have common descendants. [Property 1] ■

Link-by-rank: analysis

Theorem. Using link-by-rank, any Union or Find operation takes O(log n) time in the worst case, where n is the number of elements.
Pf.
• The running time of Union and Find is bounded by the tree height.
• By Property 5, the height is ≤ ⌊lg n⌋. ■

Union-Find ► path compression [Section 5.1.4]

Path compression

Path compression. When finding the root r of the tree containing x, change the parent pointer of all nodes along the path to point directly to r.

Find(x)
  If (x ≠ parent[x])
    parent[x] ← Find(parent[x]).    ← this Find implementation changes the tree structure (!)
  Return parent[x].

Note. Path compression does not change the rank of a node, so height(x) ≤ rank[x], but they are not necessarily equal.

Fact. Path compression with naive linking can require Ω(n) time to perform a single Union or Find operation, where n is the number of elements.
Pf. The height of the tree is n − 1 after the sequence of union operations Union(1, 2), Union(2, 3), …, Union(n − 1, n). ■
(Naive linking: link the root of the first tree to the root of the second tree.)

Theorem. [Tarjan-van Leeuwen 1984] Starting from an empty data structure, path compression with naive linking performs any intermixed sequence of m ≥ n Make-Set, Union, and Find operations on a set of n elements in O(m log n) time.
Pf. Nontrivial (but omitted).

Union-Find ► link-by-rank with path compression [Section 5.1.4]

Link-by-rank with path compression: properties

Property. The tree roots, node ranks, and elements within a tree are the same with or without path compression.
Pf. Path compression does not create new roots, change ranks, or move elements from one tree to another. ■
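The recursive one-line Find above compresses paths as a side effect. An equivalent formulation — shown here because deep trees can exhaust Python's recursion limit — does the same compression in two iterative passes:

def find(parent, x):
    # first pass: locate the root
    root = x
    while root != parent[root]:
        root = parent[root]
    # second pass: point every node on the path directly at the root
    while x != root:
        parent[x], x = root, parent[x]
    return root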
Corollary. Properties 2 and 4-6 hold for link-by-rank with path compression.

Property 1. If x is not a root node, then rank[x] < rank[parent[x]].
Property 2. If x is not a root node, then rank[x] will never change again.
Property 3. If parent[x] changes, then rank[parent[x]] strictly increases.
Property 4. Any root node of rank k has ≥ 2^k nodes in its tree.
Property 5. The highest rank of a node is ≤ ⌊lg n⌋.
Property 6. For any integer k ≥ 0, there are ≤ n/2^k nodes with rank k.

Bottom line. Properties 1-6 hold for link-by-rank with path compression (but we need to recheck Property 1 and Property 3).

Property 3 (recheck). If parent[x] changes, then rank[parent[x]] strictly increases.
Pf. Path compression can make x point only to an ancestor of parent[x]. ■

Property 1 (recheck). If x is not a root node, then rank[x] < rank[parent[x]].
Pf. Path compression doesn't change any ranks, but it can change parents. If parent[x] doesn't change during a path compression, the inequality continues to hold; if parent[x] changes, then rank[parent[x]] strictly increases. ■

Iterated logarithm function

Def. The iterated logarithm function is defined by:

  lg* n = 0              if n ≤ 1
        = 1 + lg*(lg n)  otherwise

  n                   lg* n
  1                   0
  2                   1
  [3, 4]              2
  [5, 16]             3
  [17, 65536]         4
  [65537, 2^65536]    5

Note. We have lg* n ≤ 5 unless n exceeds the number of atoms in the universe.

Analysis

Divide the nonzero ranks into the following groups:
• { 1 }
• { 2 }
• { 3, 4 }
• { 5, 6, …, 16 }
• { 17, 18, …, 2^16 }
• { 65537, 65538, …, 2^65536 }
• …

Property 7. Every nonzero rank falls within one of the first lg* n groups.
Pf. The rank is between 0 and ⌊lg n⌋. [Property 5]

Creative accounting

Credits. A node receives credits as soon as it ceases to be a root. If its rank is in the interval { k + 1, k + 2, …, 2^k } (group k), we give it 2^k credits.

Proposition. The number of credits disbursed to all nodes is ≤ n lg* n.
Pf.
• By Property 6, the number of nodes with rank ≥ k + 1 is at most
    n/2^(k+1) + n/2^(k+2) + … ≤ n/2^k.
• Thus, the nodes in group k need at most n credits in total (≤ n/2^k nodes, 2^k credits each).
• There are ≤ lg* n groups. [Property 7] ■

Running time of Find

Running time of Find. Bounded by the number of parent pointers followed.
• Recall: the rank strictly increases as you go up a tree. [Property 1]
• Case 0: parent[x] is a root — happens for only one link per Find.
• Case 1: rank[parent[x]] is in a higher group than rank[x].
• Case 2: rank[parent[x]] is in the same group as rank[x].

Case 1. At most lg* n nodes on the path can be in a higher group. [Property 7]
Case 2. These nodes are charged 1 credit to follow the parent pointer.
• Each time x pays 1 credit, rank[parent[x]] strictly increases. [Property 1]
• Therefore, if rank[x] is in the group { k + 1, …, 2^k }, the rank of its parent will be in a higher group before x pays 2^k credits.
• Once rank[parent[x]] is in a higher group than rank[x], it remains so because:
  - rank[x] does not change once it ceases to be a root, [Property 2]
  - rank[parent[x]] does not decrease, [Property 3]
  - thus, x has enough credits to pay until it becomes a Case 1 node. ■

Link-by-rank with path compression

Theorem. Starting from an empty data structure, link-by-rank with path compression performs any intermixed sequence of m ≥ n Make-Set, Union, and Find operations on a set of n elements in O(m lg* n) time.
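lg* transcribed from the definition above; note how slowly it grows:

from math import log2

def lg_star(n):
    if n <= 1:
        return 0
    return 1 + lg_star(log2(n))

print(lg_star(16))        # 3
print(lg_star(2**16))     # 4
print(lg_star(2**65536))  # 5 -- already beyond any physically meaningful n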
Union-Find ► context

Link-by-size with path compression

Theorem. [Fischer 1972] Starting from an empty data structure, link-by-size with path compression performs any intermixed sequence of m ≥ n Make-Set, Union, and Find operations on a set of n elements in O(m log log n) time.

From "Efficiency of Equivalence Algorithms" (Michael J. Fischer, MIT AI Laboratory, AI Memo No. 256, April 1972): The equivalence problem is to determine the finest partition on a set that is consistent with a sequence of assertions of the form "x ≡ y". A strategy for doing this on a computer processes the assertions serially, maintaining always in storage a representation of the partition defined by the assertions so far encountered. To process the command "x ≡ y", the equivalence classes of x and y are determined. If they are the same, nothing further is done; otherwise the two classes are merged together.

Theorem. [Hopcroft-Ullman 1973] Starting from an empty data structure, link-by-size with path compression performs any intermixed sequence of m ≥ n Make-Set, Union, and Find operations on a set of n elements in O(m lg* n) time.

From "Set Merging Algorithms" (J. E. Hopcroft and J. D. Ullman, SIAM J. Comput., Vol. 2, No. 4, December 1973): This paper considers the problem of merging sets formed from a total of n items in such a way that at any time, the name of a set containing a given item can be ascertained. Two algorithms using different data structures are discussed. The execution times of both algorithms are bounded by a constant times nG(n), where G(n) is a function whose asymptotic growth rate is less than that of any finite number of logarithms of n.

Theorem. [Tarjan 1975] Starting from an empty data structure, link-by-size with path compression performs any intermixed sequence of m ≥ n Make-Set, Union, and Find operations on a set of n elements in O(m α(m, n)) time, where α(m, n) is a functional inverse of the Ackermann function.

From "Efficiency of a Good But Not Linear Set Union Algorithm" (Robert Endre Tarjan, University of California, Berkeley): Two types of instructions for manipulating a family of disjoint sets which partition a universe of n elements are considered. FIND(x) computes the name of the (unique) set containing element x. UNION(A, B, C) combines sets A and B into a new set named C. A known algorithm for implementing sequences of these instructions is examined. It is shown that, if t(m, n) is the maximum time required by a sequence of m ≥ n FINDs and n − 1 intermixed UNIONs, then k1 m α(m, n) ≤ t(m, n) ≤ k2 m α(m, n) for some positive constants k1 and k2, where α(m, n) is related to a functional inverse of Ackermann's function and is very slow-growing.

Ackermann function

Ackermann function. [Ackermann 1928] A computable function that is not primitive recursive.

  A(m, n) = n + 1                    if m = 0
          = A(m − 1, 1)              if m > 0 and n = 0
          = A(m − 1, A(m, n − 1))    if m > 0 and n > 0

(The slide reproduces the opening of Ackermann's 1928 paper "Zum Hilbertschen Aufbau der reellen Zahlen" — "On Hilbert's construction of the real numbers" — which builds up number-theoretic functions by type: functions of type 1 map integers to integers, functions of type 2 map type-1 functions to integers, and so on.)

Note. There are many inequivalent definitions.
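The Ackermann recurrence transcribed directly; only tiny arguments are feasible, since the values (and the recursion) explode almost immediately:

import sys
sys.setrecursionlimit(100_000)   # the naive recursion nests deeply

def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print(A(2, 3))   # 9
print(A(3, 3))   # 61 -- and A(4, 2) already has 19729 decimal digits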
Inverse Ackermann function. α(m, n) = min { i ≥ 1 : A(i, ⌊m/n⌋) ≥ log2 n }.

"I am not smart enough to understand this easily." — Raymond Seidel

Inverse Ackermann function

Def.

  α1(n) = ⌈n/2⌉
  αk(n) = 0                      if n = 1 and k ≥ 2
        = 1 + αk(α(k−1)(n))      otherwise

Ex.
• α1(n) = ⌈n/2⌉.
• α2(n) = ⌈lg n⌉ = number of times we divide n by 2 until we reach 1.
• α3(n) = lg* n = number of times we apply the lg function to n until we reach 1.
• α4(n) = number of times we apply the iterated lg function to n until we reach 1.

  n      1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 | 2^16   2^65536   2↑65536
  α1(n)  1  1  2  2  3  3  4  4  5  5  6  6  7  7  8  8 | 2^15   2^65535   huge
  α2(n)  0  1  2  2  3  3  3  3  4  4  4  4  4  4  4  4 | 16     65536     2↑65535
  α3(n)  0  1  2  2  3  3  3  3  3  3  3  3  3  3  3  3 | 4      5         65536
  α4(n)  0  1  2  2  3  3  3  3  3  3  3  3  3  3  3  3 | 3      3         4

(Here 2↑65536 denotes a tower of 65536 twos.)

Property. For every n ≥ 5, the sequence α1(n), α2(n), α3(n), … converges to 3.
Ex. [n = 9876!] α1(n) > 10^35163, α2(n) = 116812, α3(n) = 6, α4(n) = 4, α5(n) = 3.

One-parameter inverse Ackermann. α(n) = min { k : αk(n) ≤ 3 }.
Ex. α(9876!) = 5.

Two-parameter inverse Ackermann. α(m, n) = min { k : αk(n) ≤ 3 + m/n }.

A tight lower bound

Theorem. [Fredman-Saks 1989] In the worst case, any cell-probe(log n) algorithm requires Ω(m α(m, n)) time to perform an intermixed sequence of m Make-Set, Union, and Find operations on a set of n elements.

Cell-probe model. [Yao 1981] Count only the number of words of memory accessed; all other operations are free.

From "The Cell Probe Complexity of Dynamic Data Structures" (Michael L. Fredman and Michael E. Saks): Dynamic data structure problems involve the representation of data in memory in such a way as to permit certain types of modifications of the data (updates) and certain types of questions about the data (queries). This paradigm encompasses many fundamental problems in computer science. The purpose of this paper is to prove new lower and upper bounds on the time per operation to implement solutions to some familiar dynamic data structure problems, including list representation, subset ranking, partial sums, and the set union problem. Increasing the register size from log n to polylog(n) only reduces the time complexity by a constant factor; on the other hand, decreasing the register size from log n to 1 increases the time complexity by a log n factor for one of the problems we consider and only a log log n factor for some other problems.
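The αk hierarchy transcribed from the definition above; the printed values match the table:

from math import ceil

def alpha(k, n):
    if k == 1:
        return ceil(n / 2)
    if n <= 1:
        return 0
    return 1 + alpha(k, alpha(k - 1, n))

print([alpha(2, n) for n in range(1, 17)])
# [0, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4] -- i.e., ceil(lg n)
print(alpha(3, 65536))   # 4 == lg* 65536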
Path compaction variants

Path splitting. Make every node on the path point to its grandparent.
Path halving. Make every other node on the path point to its grandparent.

Linking variants

Link-by-size. By the number of nodes in the tree.
Link-by-rank. By the rank of the tree.
Link-by-random. Label each element with a random real number between 0.0 and 1.0; link the root with the smaller label into the root with the larger label.

Disjoint-sets data structures

Theorem. [Tarjan-van Leeuwen 1984] Starting from an empty data structure, link-by-{size, rank} combined with {path compression, path splitting, path halving} performs any intermixed sequence of m ≥ n Make-Set, Union, and Find operations on a set of n elements in O(m α(m, n)) time.

From "Worst-Case Analysis of Set Union Algorithms" (Robert E. Tarjan, AT&T Bell Laboratories, and Jan van Leeuwen, University of Utrecht): This paper analyzes the asymptotic worst-case running time of a number of variants of the well-known method of path compression for maintaining a collection of disjoint sets under union. We show that two one-pass methods proposed by van Leeuwen and van der Weide are asymptotically optimal, whereas several other methods, including one proposed by Rem and advocated by Dijkstra, are slower than the best methods.

Part III: Algorithm Design Techniques

Algorithmic paradigms

Greed. Build up a solution incrementally, myopically optimizing some local criterion.

Divide-and-conquer. Break up a problem into independent subproblems; solve each subproblem; combine the solutions to the subproblems to form a solution to the original problem.

Dynamic programming. Break up a problem into a series of overlapping subproblems; combine the solutions to smaller subproblems to form a solution to a larger subproblem. (A fancy name for caching intermediate results in a table for later reuse.)

Topics.
• Divide and Conquer: Peak Finding; Closest Pair of Points.
• Dynamic Programming: Interval Scheduling; Parenthesization Problem; Knapsack Problem; Sequence Alignment; Bellman-Ford Algorithm.
• Greedy Algorithms: Coin Changing; Interval Scheduling; Interval Partitioning; Scheduling to Minimize Lateness; Optimal Caching; Dijkstra's Algorithm; Minimum Spanning Trees.

DIVIDE AND CONQUER

"Nothing is particularly hard if you divide it into small jobs." — Henry Ford

Divide-and-conquer paradigm

Divide-and-conquer. ("Divide and rule" — attributed to Julius Caesar)
• Divide the problem into several subproblems (of the same kind).
• Solve (conquer) each subproblem recursively.
• Combine the solutions to the subproblems into an overall solution.

Most common usage.
• Divide a problem of size n into two subproblems of size n/2. (O(n) time)
• Solve (conquer) the two subproblems recursively.
• Combine the two solutions into an overall solution. (O(n) time)

Consequence.
• Brute force: Θ(n²).
• Divide-and-conquer: Θ(n log n).

DIVIDE AND CONQUER: MAXIMAL AND MINIMAL ELEMENTS

NAIVE ALGORITHM (complexity measure: number of comparisons)

Algorithm: Iterative MaxMin
Input: sequence S[1…n]
Output: maximal and minimal element
1  max ← S[1]; min ← S[1]
2  for i ← 2 to n do
3    if S[i] > max then max ← S[i]
4    if S[i] < min then min ← S[i]
5  return max, min

2(n − 1) comparisons
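The iterative algorithm as Python, with an explicit counter confirming the 2(n − 1) comparison bound (test data is hypothetical):

def iterative_maxmin(S):
    comparisons = 0
    mx = mn = S[0]
    for x in S[1:]:
        comparisons += 2      # one test against max, one against min
        if x > mx:
            mx = x
        if x < mn:
            mn = x
    return mx, mn, comparisons

print(iterative_maxmin([3, 1, 4, 1, 5, 9, 2, 6]))  # (9, 1, 14); 2*(8-1) = 14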
DIVIDE AND CONQUER ALGORITHM

divide: split the sequence into two equally sized subsequences
solve: find the maximal and minimal elements of both subsequences
combine: the greater of the two maximal elements is the maximal element of the whole sequence (and likewise for the minimal element)

Algorithm: MaxMin
Input: sequence S[1…n], indices x, y
Output: maximal and minimal element of S[x…y]
1  if y = x then return (S[x], S[x])
2  if y = x + 1 then return (max(S[x], S[y]), min(S[x], S[y]))
3  if y > x + 1 then
4    (l1, l2) ← MaxMin(S, x, ⌊(x + y)/2⌋)
5    (r1, r2) ← MaxMin(S, ⌊(x + y)/2⌋ + 1, y)
6    return (max(l1, r1), min(l2, r2))

correctness: by induction w.r.t. the length of the sequence

complexity:

  T(n) = 1                            for n = 2
  T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 2      for n > 2

By induction w.r.t. n we can check that T(n) ≤ (3/2)n − 2:
• n = 2: T(2) = 1 and 1 ≤ (3/2)·2 − 2 = 1.
• n > 2: assume the inequality holds for all i with 2 ≤ i < n; then

  T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 2
       ≤ ((3/2)⌊n/2⌋ − 2) + ((3/2)⌈n/2⌉ − 2) + 2
       = (3/2)n − 2.

DIVIDE AND CONQUER: PEAK FINDING

(MIT 6.006 Introduction to Algorithms, Lecture 2: Peak Finding, Prof. Erik Demaine)

1D Peak Finding

Given an array A[0 … n−1], A[i] is a peak if it is not smaller than its neighbor(s):

  A[i−1] ≤ A[i] ≥ A[i+1],

where we imagine A[−1] = A[n] = −∞.

Goal: find any peak.

"Brute Force" Algorithm

Test all elements for peakyness:

  for i in range(n):
      if A[i-1] <= A[i] >= A[i+1]:
          return i

Algorithm 1½

The global maximum is a local maximum:

  m = 0
  for i in range(1, n):
      if A[i] > A[m]:
          m = i
  return m

Θ(n) time.

Cleverer Idea

Look at any element A[i] and its neighbors A[i−1] and A[i+1].
• If it is a peak: return i.
• Otherwise: the array is locally rising on some side, so there must be a peak in that direction, and we can throw away the rest of the array, leaving A[:i] or A[i+1:].

Where to Sample?

• Want to minimize the worst-case number of remaining elements in the array.
• Balance A[:i] of length i against A[i+1:] of length n − i − 1: setting i = n − i − 1 gives i = (n − 1)/2, the middle element.
• This reduces n to (n − 1)/2.

Algorithm

  peak1d(A, i, j):
      m = ⌊(i + j)/2⌋
      if A[m−1] ≤ A[m] ≥ A[m+1]: return m
      elif A[m−1] > A[m]: return peak1d(A, i, m−1)
      elif A[m] < A[m+1]: return peak1d(A, m+1, j)

Divide & Conquer

General design technique:
1. Divide input into part(s).
2. Conquer each part recursively.
3. Combine result(s) to solve the original problem.

1D peak: 1. one half; 2. recurse; 3. return.

Divide & Conquer Analysis

Recurrence for the time T(n) taken by problem size n, with parts of sizes n1, …, nk:

  T(n) = divide cost + T(n1) + T(n2) + … + T(nk) + combine cost

1D Peak Finding Analysis

• Divide the problem into 1 problem of size ~ n/2.
• Divide cost: O(1). Combine cost: O(1).
• Recurrence: T(n) = T(n/2) + c.

Solving the recurrence:

  T(n) = T(n/2) + c
       = T(n/4) + c + c
       = T(n/8) + c + c + c
       = …
       = T(1) + c lg n
       = O(lg n).
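A runnable version of peak1d; instead of imaginary −∞ sentinels it uses explicit bounds checks (a sketch, with hypothetical test data):

def peak1d(A, i=None, j=None):
    if i is None:
        i, j = 0, len(A) - 1
    m = (i + j) // 2
    left_ok = (m == 0) or (A[m - 1] <= A[m])             # A[-1] = -infinity
    right_ok = (m == len(A) - 1) or (A[m] >= A[m + 1])   # A[n]  = -infinity
    if left_ok and right_ok:
        return m
    if not left_ok:                     # rising to the left: a peak is there
        return peak1d(A, i, m - 1)
    return peak1d(A, m + 1, j)          # rising to the right

A = [1, 3, 4, 3, 5, 1, 3]
print(peak1d(A))   # 2 (A[2] = 4 is a peak; index 4 would also be valid)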
2D Peak Finding

• Given an n × n matrix of numbers.
• Want an entry not smaller than its (up to) 4 neighbors.

Divide & Conquer #0

Looking at the center element doesn't split the problem into pieces...

Divide & Conquer #½

• Consider the max element in each column.
• The 1D algorithm would solve the resulting max array in O(lg n) time.
• But it takes Θ(n²) time to compute the max array.

Divide & Conquer #1

• Look at the center column.
• Find the global max within it.
• If it is a peak: return it.
• Else:
  - it has a larger left or right neighbor,
  - so that neighboring column has a larger max,
  - recurse in the left/right half.
• Base case: 1 column — return the global max within it.

Analysis #1

• O(n) time to find the max in a column.
• O(lg n) iterations (like binary search).
• O(n lg n) time total. Can we do better?

Divide & Conquer #2

• Look at the boundary, center row, and center column (the "window").
• Find the global max within the window.
• If it's a peak: return it.
• Else:
  - find a larger neighbor,
  - it can't be in the window,
  - recurse in its quadrant, including the adjacent window boundary.

Correctness

• Lemma: If you enter a quadrant, it contains a peak of the overall array (climb up).
• Invariant: The maximum element of the window never decreases as we descend in the recursion.
• Theorem: A peak in the visited quadrant is also a peak in the overall array.

Analysis #2

• Reduce an n × n matrix to a ~ n/2 × n/2 submatrix in O(n) time (= |window|).
• Recurrence:

  T(n) = T(n/2) + c·n
       = T(n/4) + c·n/2 + c·n
       = T(n/8) + c·n/4 + c·n/2 + c·n
       = …
       = T(1) + c(1 + 2 + 4 + … + n/4 + n/2 + n)
       = O(n).
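As a runnable illustration, here is scheme #1 (center column, recurse into a half) rather than the O(n) scheme #2, since it is much shorter to state; the matrix is hypothetical test data. Treating the columns outside the current range as −∞ is sound because we only recurse toward a column whose maximum strictly exceeds everything in the column we leave behind:

def peak2d(M, lo=0, hi=None):
    if hi is None:
        hi = len(M[0]) - 1
    mid = (lo + hi) // 2
    r = max(range(len(M)), key=lambda i: M[i][mid])   # global max in column mid
    left = M[r][mid - 1] if mid > lo else float("-inf")
    right = M[r][mid + 1] if mid < hi else float("-inf")
    if M[r][mid] >= left and M[r][mid] >= right:
        return r, mid                                 # column max that is a 2D peak
    if left > M[r][mid]:
        return peak2d(M, lo, mid - 1)                 # larger max to the left
    return peak2d(M, mid + 1, hi)                     # larger max to the right

M = [[3, 5, 7, 2],
     [5, 8, 9, 8],
     [6, 3, 9, 0],
     [6, 8, 9, 8]]
print(peak2d(M))   # (1, 2): M[1][2] = 9 is not smaller than its 4 neighbors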
DIVIDE AND CONQUER: CLOSEST PAIR OF POINTS

Divide and Conquer ► closest pair of points [Section 5.4]

Closest pair of points

Closest pair problem. Given n points in the plane, find a pair of points with the smallest Euclidean distance between them.

Fundamental geometric primitive.
• Graphics, computer vision, geographic information systems, molecular modeling, air traffic control.
• Special case of nearest neighbor, Euclidean MST, Voronoi. (The fast closest-pair algorithm inspired fast algorithms for these problems.)

Brute force. Check all pairs with Θ(n²) distance calculations.
1D version. Easy O(n log n) algorithm if the points are on a line.
Non-degeneracy assumption. No two points have the same x-coordinate.

Closest pair of points: first attempt

Sorting solution.
• Sort by x-coordinate and consider nearby points.
• Sort by y-coordinate and consider nearby points.
(Neither works: the closest pair can be far apart in each sorted order.)

Closest pair of points: second attempt

Divide. Subdivide the region into 4 quadrants.
Obstacle. Impossible to ensure n/4 points in each piece.

Closest pair of points: divide-and-conquer algorithm

• Divide: draw a vertical line L so that n/2 points are on each side.
• Conquer: find the closest pair in each side recursively.
• Combine: find the closest pair with one point in each side. (seems like Θ(n²))
• Return the best of the 3 solutions.

How to find the closest pair with one point in each side?

Find the closest pair with one point in each side, assuming that its distance is < δ, where δ is the smaller of the two recursively found distances (in the running example, δ = min(12, 21) = 12).
• Observation: it suffices to consider only those points within δ of line L.
• Sort the points in the 2δ-strip by their y-coordinate.
• Check the distances of only those points within 7 positions in the sorted list! (why?)

Def. Let s_i be the point in the 2δ-strip with the ith smallest y-coordinate.

Claim. If |j − i| ≥ 7, then the distance between s_i and s_j is at least δ.
Pf.
• Consider the 2δ-by-δ rectangle R in the strip whose min y-coordinate is the y-coordinate of s_i.
• The distance between s_i and any point s_j above R is ≥ δ.
• Subdivide R into 8 squares. Each square has diameter δ/√2 < δ, so it contains at most 1 point.
• Hence at most 7 other points can be in R. ■
(The constant can be improved with a more refined geometric packing argument.)

Closest pair of points: divide-and-conquer algorithm

Closest-Pair(p1, p2, …, pn)
  Compute vertical line L such that half the points          ← O(n)
    are on each side of the line.
  δ1 ← Closest-Pair(points in left half).                    ← T(n/2)
  δ2 ← Closest-Pair(points in right half).                   ← T(n/2)
  δ ← min { δ1, δ2 }.
  Delete all points further than δ from line L.              ← O(n)
  Sort the remaining points by y-coordinate.                 ← O(n log n)
  Scan the points in y-order and compare the distance
    between each point and its next 7 neighbors. If any      ← O(n)
    of these distances is less than δ, update δ.
  Return δ.

Divide-and-conquer: quiz 6

What is the solution to the following recurrence?

  T(n) = Θ(1)                                      if n = 1
  T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + Θ(n log n)          if n > 1

A. T(n) = Θ(n).
B. T(n) = Θ(n log n).
C. T(n) = Θ(n log² n).
D. T(n) = Θ(n²).

Refined version of the closest-pair algorithm

Q. How to improve to O(n log n)?
A. Don't sort the points in the strip from scratch each time.
• Each recursive call returns two lists: all points sorted by x-coordinate, and all points sorted by y-coordinate.
• Sort by merging the two pre-sorted lists.

Theorem. [Shamos 1975] The divide-and-conquer algorithm for finding a closest pair of points in the plane can be implemented in O(n log n) time.

Pf. The recurrence becomes

  T(n) = Θ(1)                                if n = 1
  T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + Θ(n)          if n > 1

whose solution is T(n) = Θ(n log n). ■

Divide-and-conquer: quiz 7

What is the complexity of the 2D closest pair problem?

A. O(n).
B.
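A direct Python rendering of the Closest-Pair pseudocode above — the O(n log² n) variant that re-sorts the strip by y in every call (the refined O(n log n) version would merge pre-sorted lists instead); the sample points are hypothetical:

from math import hypot, inf

def closest_pair(points):
    pts = sorted(points)                     # sort once by x-coordinate
    return _rec(pts)

def _rec(P):
    n = len(P)
    if n <= 3:                               # brute-force small instances
        return min((hypot(p[0] - q[0], p[1] - q[1])
                    for i, p in enumerate(P) for q in P[i + 1:]), default=inf)
    mid = n // 2
    x_mid = P[mid][0]                        # vertical line L
    d = min(_rec(P[:mid]), _rec(P[mid:]))    # conquer both halves
    strip = sorted((p for p in P if abs(p[0] - x_mid) < d),
                   key=lambda p: p[1])       # 2d-strip, sorted by y
    for i, p in enumerate(strip):            # compare to next 7 strip neighbors
        for q in strip[i + 1 : i + 8]:
            d = min(d, hypot(p[0] - q[0], p[1] - q[1]))
    return d

pts = [(0, 0), (3, 4), (1, 1), (7, 2), (2, 3)]
print(closest_pair(pts))                     # 1.4142... for the pair (0,0)-(1,1)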