Prologue

You should spend most of your time thinking about what you should think about most of your time.

RANDOMIZED ALGORITHMS AND PROTOCOLS - 2020

WEB PAGE of the LECTURE

RANDOMIZED ALGORITHMS AND PROTOCOLS - 2020
Prof. Jozef Gruska, DrSc
Wednesday, 10.00-11.40, B410
http://www.fi.muni.cz/usr/gruska/random20

FINAL EXAM: You need to answer four questions out of five given to you.
CREDIT (ZAPOČET): You need to answer three questions out of five given to you.

CONTENTS - preliminary

1. Basic concepts and examples of randomized algorithms
2. Types and basic design methods for randomized algorithms
3. Basics of probability theory
4. Simple methods for design of randomized algorithms
5. Games theory and analysis of randomized algorithms
6. Basic techniques I: moments and deviations
7. Basic techniques II: tail probabilities inequalities
8. Probabilistic method I
9. Markov chains - random walks
10. Algebraic techniques - fingerprinting
11. Fooling the adversary - examples
12. Randomized cryptographic protocols
13. Randomized proofs
14. Probabilistic method II
15. Quantum algorithms

EXERCISES/TUTORIALS

EXERCISES/TUTORIALS: Thursdays 14.00-15.40, C525
TEACHER: RNDr. Matej Pivoluska, PhD
Language: English
NOTE: Exercises/tutorials are not obligatory.

LITERATURE

■ R. Motwani, P. Raghavan: Randomized Algorithms, Cambridge University Press, UK, 1995
■ J. Gruska: Foundations of Computing, International Thompson Computer Press, USA, 715 pages, 1997
■ J. Hromkovič: Design and Analysis of Randomized Algorithms, Springer, 275 pages, 2005
■ N. Alon, J. H. Spencer: The Probabilistic Method, Wiley-Interscience, 2008

Part 1

Games Theory and Analyses of Randomized Algorithms

CLASSICAL GAMES THEORY - BASIC CONCEPTS

CLASSICAL GAMES THEORY BRIEFLY

BASIC CONCEPTS of CLASSICAL GAME THEORY

We will consider games with two players, Alice and Bob. X and Y will be nonempty sets of their game (pure) strategies - X of Alice, Y of Bob.

Mappings p_X : X × Y → R and p_Y : X × Y → R will be called the payoff functions of Alice and Bob. The quadruple (X, Y, p_X, p_Y) will be called a (mathematical) game.

A mixed strategy will be a probability distribution on pure strategies.

An element (x, y) ∈ X × Y is said to be a Nash equilibrium of the game (X, Y, p_X, p_Y) iff

p_X(x', y) ≤ p_X(x, y) for all x' ∈ X, and p_Y(x, y') ≤ p_Y(x, y) for all y' ∈ Y.

Informally, a Nash equilibrium is such a pair of strategies that neither of the players gains by changing his/her strategy.

A game is called a zero-sum game if p_X(x, y) + p_Y(x, y) = 0 for all x ∈ X and y ∈ Y.

ONE of THE BASIC RESULTS

One of the basic results of classical game theory is that not every two-player zero-sum game has a Nash equilibrium in the set of pure strategies, but there is always a Nash equilibrium if players follow mixed strategies.
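To make the equilibrium condition concrete, here is a minimal Python sketch (an added illustration, not part of the original slides; the helper is_nash_equilibrium and the matching-pennies example are chosen just for demonstration). It checks the Nash condition directly and confirms the statement above: this two-player zero-sum game has no Nash equilibrium in pure strategies.

def is_nash_equilibrium(X, Y, p_X, p_Y, x, y):
    # (x, y) is a Nash equilibrium iff no unilateral change of strategy improves a payoff
    no_gain_for_alice = all(p_X(xp, y) <= p_X(x, y) for xp in X)
    no_gain_for_bob = all(p_Y(x, yp) <= p_Y(x, y) for yp in Y)
    return no_gain_for_alice and no_gain_for_bob

# Matching pennies: a zero-sum game in which no pair of pure strategies is an equilibrium.
X = Y = ["H", "T"]
p_X = lambda x, y: 1 if x == y else -1      # Alice gains 1 on a match, loses 1 otherwise
p_Y = lambda x, y: -p_X(x, y)               # zero-sum: p_X(x, y) + p_Y(x, y) = 0
print([(x, y) for x in X for y in Y if is_nash_equilibrium(X, Y, p_X, p_Y, x, y)])   # []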
POWER of QUANTUM PHENOMENA

It has been shown, for several zero-sum games, that if one of the players can use quantum tools, and thereby quantum strategies, then he/she can increase his/her chance to win the game. This way, from a fair game, in which both players have the same chance to win if only classical computation and communication tools are used, an unfair game can arise, or from an unfair game a fair one.

EXAMPLE - PENNY FLIP GAME

Alice and Bob play with a box and a penny as follows:
■ Alice places a penny head up in a box.
■ Bob flips or does not flip the coin.
■ Alice flips or does not flip the coin.
■ Bob flips or does not flip the coin.

After the "game" is over, they open the box and Bob wins if the penny is head up.

It is easy to check that, using pure strategies, the chances to win are 1/2 for each player and there is no (Nash) equilibrium in the case of pure classical strategies. However, there is an equilibrium if Alice chooses each of her two possible strategies with probability 1/2 and Bob chooses each of his four possible strategies with probability 1/4.

VERSION of PRISONERS' DILEMMA from 1992

Two members of a gang are imprisoned, each in a separate cell, without the possibility to communicate. However, the police do not have enough evidence to convict them on the principal charge and therefore intend to put both of them in jail for one year on a lesser charge. Simultaneously, the police offer both of them a so-called Faustian bargain. Each prisoner gets a chance either to betray the other one, by testifying that he committed the crime, or to cooperate with the other one, by remaining silent. Here are the payoffs they are offered:

■ If both betray, they will go to jail for 2 years.
■ If one betrays and the second decides to cooperate, then the first will go free and the second will go to jail for 3 years.
■ If both cooperate, they will go to jail for 1 year.

What is the best way for them to behave?

This game is a model for a variety of real-life situations involving cooperative behaviour. The game was originally framed in 1950 by M. Flood and M. Dresher.

PRISONERS' DILEMMA - I.

Two prisoners, Alice and Bob, can use, independently, any of the following two strategies: to cooperate (C) or to defect (D, not to cooperate). The problem is that the payoff function (p_A, p_B), in millions, is a very special one (the first (second) value is the payoff of Alice (of Bob)):

              Bob: C_B   Bob: D_B
Alice: C_A    (3, 3)     (0, 5)
Alice: D_A    (5, 0)     (1, 1)

What is the best way for Alice and Bob to proceed in order to maximize their payoffs?

PRISONERS' DILEMMA - II.

A strategy s_A is called dominant for Alice if for any other strategy s'_A of Alice and any strategy s_B of Bob it holds

p_A(s_A, s_B) > p_A(s'_A, s_B).

Clearly, defection is the dominant strategy of Alice (and also of Bob) in the case of the Prisoners' Dilemma game. The Prisoners' Dilemma game has therefore the dominant-strategy equilibrium (D_A, D_B), with payoffs (1, 1).
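A minimal Python sketch (added for illustration, not part of the original slides; the dictionary payoff and the helper functions are hypothetical names): it checks by brute force that defection D is a dominant strategy for both players in the payoff table above, so (D_A, D_B) is indeed the dominant-strategy equilibrium.

# payoff[(a, b)] = (Alice's payoff, Bob's payoff) when Alice plays a and Bob plays b
payoff = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def dominant_for_alice(s):
    # s is dominant for Alice if it beats each of her other strategies against any reply of Bob
    return all(payoff[(s, b)][0] > payoff[(other, b)][0]
               for b in strategies for other in strategies if other != s)

def dominant_for_bob(s):
    return all(payoff[(a, s)][1] > payoff[(a, other)][1]
               for a in strategies for other in strategies if other != s)

print([s for s in strategies if dominant_for_alice(s)])   # ['D']
print([s for s in strategies if dominant_for_bob(s)])     # ['D']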
BATTLE of SEXES GAME

Alice and Bob have to decide, independently of each other, where to spend the evening. Alice prefers to go to the opera (O), Bob wants to watch tennis on TV (T). However, at the same time both of them prefer to be together rather than apart. The payoff function is given by the following matrix (rows are Alice's choices, columns are Bob's; the first value is Alice's payoff):

        O         T
O     (α, β)    (γ, γ)
T     (γ, γ)    (β, α)

where α > β > γ. What kind of strategies should they choose?

The two Nash equilibria are (O, O) and (T, T), but the players are faced with a tactical dilemma, because these equilibria bring them different payoffs.

COIN GAME

There are three coins: one fair, with both sides different, and two unfair, one with two heads and one with two tails. The game proceeds as follows:
■ Alice puts the coins into a black box and shakes the box.
■ Bob picks up one coin.
■ Alice wins if the coin is unfair, otherwise Bob wins.

Clearly, in the classical case, the probability that Alice wins is 2/3.

FROM GAMES to LOWER BOUNDS for RANDOMIZED ALGORITHMS

The next goal is to present, using the theory of zero-sum games, a method for proving lower bounds on the average running time of randomized algorithms. This technique can be applied to algorithms that terminate for all inputs and all random choices.

TWO-PERSON ZERO-SUM GAMES - EXAMPLE

A two-player zero-sum game is represented by an n × m payoff matrix M. Payoffs for the n possible strategies of Alice are given in the rows of M; payoffs for the m possible strategies of Bob are given in the columns of M. The entry M_ij is the amount paid by Bob to Alice if Alice chooses strategy i and Bob's choice is strategy j (the game is zero-sum because Bob loses exactly what Alice gains). The goal of Alice (Bob) is to maximize (minimize) her (his) payoff.

Example - stone-scissors-paper game

PAYOFF MATRIX (the table shows how much Bob has to pay to Alice):

                   Bob
Alice       Scissors   Paper   Stone
Scissors        0         1      -1
Paper          -1         0       1
Stone           1        -1       0

Rules: Stone loses to paper and wins against scissors. Paper loses to scissors and wins against stone. Scissors lose to stone and win against paper.

STRATEGIES for ZERO-INFORMATION and ZERO-SUM GAMES

(Games with players having no information about their opponents' strategies.)

Observe that if Alice chooses a strategy i, then she is guaranteed a payoff of min_j M_ij regardless of Bob's strategy. An optimal strategy for Alice is such an i that maximizes min_j M_ij. The quantity

O_A = max_i min_j M_ij

denotes therefore the lower bound on the value of the payoff Alice gains (from Bob) when she uses an optimal strategy.

An optimal strategy for Bob is such a j that minimizes max_i M_ij. Bob's optimal strategy ensures therefore that he pays at most

O_B = min_j max_i M_ij

Theorem
O_A = max_i min_j M_ij ≤ min_j max_i M_ij = O_B

Often O_A < O_B. In our last (scissors-...) example, -1 = O_A < O_B = +1.

If O_A = O_B, we say that the game has a solution - a specific choice of strategies that leads to this solution. Strategies i* and j* are so-called optimal strategies for Alice and Bob if O_A = O_B = M_{i*,j*}.

Example of a game which has a solution (O_A = O_B = 0):

 0   1   2
-1   0   1
-2  -1   0
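A minimal Python sketch (added for illustration, not from the slides; pure_strategy_bounds is a hypothetical helper name) computing O_A = max_i min_j M_ij and O_B = min_j max_i M_ij for the two matrices above: the stone-scissors-paper game, which has no solution, and the example that does.

def pure_strategy_bounds(M):
    O_A = max(min(row) for row in M)                                  # Alice's guaranteed gain
    O_B = min(max(row[j] for row in M) for j in range(len(M[0])))     # Bob's guaranteed maximal loss
    return O_A, O_B

rsp = [[0, 1, -1], [-1, 0, 1], [1, -1, 0]]       # stone-scissors-paper payoff matrix
solved = [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]    # the example with a solution
print(pure_strategy_bounds(rsp))                 # (-1, 1): O_A < O_B, no solution
print(pure_strategy_bounds(solved))              # (0, 0): O_A = O_B, the game has a solution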
What happens if a game has no solution? Then there is no clear-cut strategy for either player. Way out: to use randomized strategies.

Alice chooses strategies according to a probability vector p = (p_1, ..., p_n); p_i is the probability that Alice chooses strategy s_{A,i}. Bob chooses strategies according to a probability vector q = (q_1, ..., q_m); q_j is the probability that Bob chooses strategy s_{B,j}.

The payoff is now a random variable - if p, q are taken as column vectors, then

E[payoff] = p^T M q = Σ_{i=1}^{n} Σ_{j=1}^{m} p_i M_ij q_j

Let O_A (O_B) denote the best possible (optimal) lower (upper) bound on the expected payoff of Alice (Bob). Then it holds:

O_A = max_p min_q p^T M q        O_B = min_q max_p p^T M q

Theorem (von Neumann Minimax theorem)
For any two-person zero-sum game specified by a payoff matrix M it holds

max_p min_q p^T M q = min_q max_p p^T M q

Observe that once p is fixed, p^T M q is a linear function of q, and it is minimized by setting to 1 the q_j with the smallest coefficient in this linear function. This has interesting/important implications: if Bob knows the distribution p used by Alice, then his optimal strategy is a pure strategy. A similar comment applies in the opposite direction. This leads to a simplified version of the minimax theorem, where e_k denotes the unit vector with 1 at the k-th position and 0 elsewhere.

Theorem (Loomis' Theorem)
For any two-person zero-sum game

max_p min_j p^T M e_j = min_q max_i e_i^T M q

YAO'S TECHNIQUE 1/3

Yao's technique provides an application of the game-theoretic results to the establishment of lower bounds for randomized algorithms. For a given algorithmic problem Π, let us consider the following payoff matrix: rows are labelled by inputs, columns by deterministic algorithms A_1, A_2, A_3, ..., and the entries are the resources (i.e. the computation time) used.

Bob - a designer choosing good algorithms.
Alice - an adversary choosing bad inputs.

A pure strategy for Bob corresponds to the choice of a deterministic algorithm. An optimal pure strategy for Bob corresponds to the choice of an optimal deterministic algorithm.

YAO'S TECHNIQUE 2/3

Let V_B be the worst-case running time of any deterministic algorithm of Bob.

Problem: How to interpret mixed strategies?

A mixed strategy for Bob is a probability distribution over (always correct) deterministic algorithms - so it is a Las Vegas randomized algorithm. An optimal mixed strategy for Bob is an optimal Las Vegas algorithm.

The distributional complexity of a problem is the expected running time of the best deterministic algorithm for the worst distribution on the inputs. Loomis' theorem implies that the distributional complexity equals the least possible expected time achievable by any randomized algorithm.

Reformulation of von Neumann's and Loomis' theorems in the language of algorithms:

Corollary
Let Π be a problem with a finite set I of input instances and A a finite set of deterministic algorithms for Π. For any input i ∈ I and any algorithm A ∈ A, let T(i, A) denote the computation time of A on input i. For probability distributions p over I and q over A, let i_p denote a random input chosen according to p and A_q a random algorithm chosen according to q. Then

max_p min_q E[T(i_p, A_q)] = min_q max_p E[T(i_p, A_q)]

max_p min_{A ∈ A} E[T(i_p, A)] = min_q max_{i ∈ I} E[T(i, A_q)]
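As a small computational illustration of the corollary (an added sketch, not part of the original slides; it assumes NumPy and SciPy are available, and the 4 × 3 cost matrix T below is invented for the example): the common value of the two sides is the value of the zero-sum game with payoff matrix T(i, A), the adversary choosing rows (inputs) and the designer choosing columns (algorithms). It can be computed by linear programming, which also yields a hardest input distribution p.

# Value of the "inputs vs. deterministic algorithms" zero-sum game by linear programming.
import numpy as np
from scipy.optimize import linprog

def game_value(M):
    """Return (value, optimal row distribution p) for the zero-sum game with matrix M,
    where the row player maximizes: maximize v subject to (p^T M)_j >= v for every
    column j, sum(p) = 1, p >= 0.  Variables are x = (p_1, ..., p_n, v)."""
    M = np.asarray(M, dtype=float)
    n, m = M.shape
    c = np.zeros(n + 1); c[-1] = -1.0              # minimize -v, i.e. maximize v
    A_ub = np.hstack([-M.T, np.ones((m, 1))])      # v - (M^T p)_j <= 0 for all j
    b_ub = np.zeros(m)
    A_eq = np.ones((1, n + 1)); A_eq[0, -1] = 0.0  # p_1 + ... + p_n = 1
    b_eq = [1.0]
    bounds = [(0, 1)] * n + [(None, None)]         # p_i in [0, 1], v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:n]

T = [[4, 7, 6],    # rows: inputs i_1, ..., i_4; columns: algorithms A_1, A_2, A_3
     [8, 3, 5],
     [6, 5, 4],
     [5, 6, 8]]
value, p = game_value(T)
print(value, p)    # value = distributional complexity; p = a hardest input distribution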
YAO'S TECHNIQUE 3/3

Theorem (Yao's Minimax Principle)
For all distributions p over I and q over A,

min_{A ∈ A} E[T(i_p, A)] ≤ max_{i ∈ I} E[T(i, A_q)]

Interpretation: The expected running time of the optimal deterministic algorithm for an arbitrarily chosen input distribution p for a problem Π is a lower bound on the expected running time of the optimal (Las Vegas) randomized algorithm for Π.

IMPLICATIONS OF YAO'S MINIMAX PRINCIPLE

Consequence: In order to prove a lower bound on the randomized complexity of an algorithmic problem, it suffices to choose any probability distribution p on the inputs and prove a lower bound on the expected running time of deterministic algorithms for that distribution.

In other words, to determine a lower bound on the performance of all randomized algorithms for a problem P, derive instead a lower bound for any deterministic algorithm for P when its inputs are drawn from a specific probability distribution (of your choice).

The power of this technique lies in
■ the flexibility in the choice of p;
■ the reduction of the task of determining lower bounds for randomized algorithms to the task of determining lower bounds for deterministic algorithms.

(It is important to remember that we can expect that the deterministic algorithm "knows" the chosen distribution p.)

The above discussion holds for Las Vegas algorithms only!

GAME TREES REVISITED

A randomized algorithm for the evaluation of a game tree T can be viewed as a probability distribution over deterministic algorithms for T, because the length of the computation and the number of choices at each step are finite.

Instead of AND-OR trees of depth 2k we can consider NOR-trees of depth 2k. Indeed, it holds:

(a ∨ b) ∧ (c ∨ d) = (a NOR b) NOR (c NOR d)

Note: It is important to distinguish between
■ the expected running time of the randomized algorithm with a fixed input (where the probability is taken over all random choices made by the algorithm), and
■ the expected running time of the deterministic algorithm when proving the lower bound (where the average is taken over the random input instances).

LOWER BOUND FOR GAME TREE EVALUATION - I

Assume now that each leaf of a NOR-tree is set to have value 1 with probability p = (3 - √5)/2 (observe that (1 - p)^2 = p for such a p).

Observe that if the inputs of a NOR-gate have value 1 with probability p, then its output also has value 1 with probability (1 - p)(1 - p) = p.
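A minimal Python sketch (added for illustration, not part of the original slides) verifying the two facts just used: the NOR identity holds for all Boolean inputs, and p = (3 - √5)/2 satisfies (1 - p)^2 = p, so this leaf probability is preserved at every NOR gate.

from itertools import product
import math

def nor(x, y):
    return not (x or y)

# (a OR b) AND (c OR d) == (a NOR b) NOR (c NOR d) for all 16 Boolean assignments
assert all(((a or b) and (c or d)) == nor(nor(a, b), nor(c, d))
           for a, b, c, d in product([False, True], repeat=4))

p = (3 - math.sqrt(5)) / 2
print(p, (1 - p) ** 2)    # both are ~0.381966..., i.e. (1 - p)^2 = p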
Consider now only depth-first pruning algorithms for tree evaluation. (These are depth-first algorithms that make use of the knowledge that subtrees which provide no additional useful information can be "pruned away".)

Of importance for the overall analysis is the following technical lemma:

Lemma
Let T be a NOR-tree each leaf of which is set to 1 with a fixed probability, and let W(T) denote the minimum, over all deterministic algorithms, of the expected number of steps to evaluate T. Then there is a depth-first pruning algorithm whose expected number of steps to evaluate T is W(T).

The last lemma tells us that, for the purposes of our lower bound, we may restrict our attention to depth-first pruning algorithms.

LOWER BOUND FOR GAME TREE EVALUATION - II

For a depth-first pruning algorithm evaluating a NOR-tree, let W(h) be the expected number of leaves the algorithm inspects in determining the value of a node at distance h from the leaves. It holds

W(h) = p W(h - 1) + (1 - p) 2 W(h - 1) = (2 - p) W(h - 1),

because with probability 1 - p the first subtree evaluates to 0 and therefore the second subtree also has to be evaluated.

If h = log_2 n, then the above recursion has the solution W(h) ≥ n^0.694.

This implies:

Theorem
The expected running time of any randomized algorithm that always evaluates an instance of T_k correctly is at least n^0.694, where n = 2^(2k) is the number of leaves.

RECENT RESULTS

The upper bound for randomized game tree evaluation algorithms already shown at the beginning of this chapter was n^0.79, which is more than the lower bound n^0.694 just shown. It was therefore natural to ask what the previous theorem really says. For example, is our lower bound technique weak?

No; the above result just says that in order to get a better lower bound, another probability distribution on the inputs may be needed.

Two recent results shed more light on the game tree evaluation problem:
■ It has been shown that for our game tree evaluation problem the upper bound presented at the beginning is the best possible, and therefore that Θ(n^0.79) is indeed the classical (query) complexity of the problem.
■ It has also been shown, by Farhi et al. (2009), that the upper bound for the case where quantum computation tools can be used is O(n^0.5).
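A small numeric check (an added sketch, not part of the original slides) of the recurrence W(h) = (2 - p) W(h - 1) from the lower-bound argument above, assuming W(0) = 1: iterating it for a tree of height h with n = 2^h leaves reproduces n^c with c = log_2(2 - p) ≈ 0.694.

import math

p = (3 - math.sqrt(5)) / 2
c = math.log2(2 - p)                  # the exponent in the lower bound
print(round(c, 3))                    # 0.694

W, h = 1.0, 10                        # W(0) = 1, tree of height h = 10
for _ in range(h):
    W = p * W + (1 - p) * 2 * W       # = (2 - p) * W
n = 2 ** h                            # number of leaves
print(W, n ** c)                      # the two values coincide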