IA169 System Verification and Assurance
Verification of Systems with Probabilities
Vojtěch Řehák

Motivation example: Fail-repair system
[Diagram: a transition system with states idle, working, done, repair, error and actions start, end, bug, service, ok, reset.]
What are the properties of the model?
G(working ⇒ F done)   NO
G(working ⇒ F error)   NO
FG(working ∨ error ∨ repair)   NO

Motivation example: Fail-repair system
[The same diagram, now with probabilities: the end transition has probability 0.95 and the bug transition has probability 0.05.]
What is the probability of reaching “done” from “working” with no visit of “error”?
What is the probability of reaching “done” from “working” with at most one visit of “error”?
What is the probability of reaching “done” from “working”?

Section: Discrete-time Markov Chains (DTMC)

Probabilistic models
Discrete-time Markov Chains (DTMC)
- Standard model for probabilistic systems.
- State-based model with probabilities on branching.
- Based on the current state, the succeeding state is given by a discrete probability distribution.
- Markov property (“memorylessness”): only the current state determines the successors (the past states are irrelevant).
- For each state, the probabilities on its outgoing edges sum to 1. Hence, each state has at least one outgoing edge (“no deadlock”).

DTMC examples: Model of a queue
[Diagram: states 0, 1, 2, 3, 4; edges to the right with probability 1/3, edges to the left with probability 2/3, a self-loop with probability 2/3 at state 0 and a self-loop with probability 1/3 at state 4.]
Queue for at most 4 items. In every time tick, a new item comes with probability 1/3 and an item is consumed with probability 2/3.
What if a new item comes with probability p = 1/2 and an item is consumed with probability q = 2/3?

DTMC examples: Model of the new queue
[Diagram: states 0, 1, 2, 3, 4; state 0 moves right with probability p and stays with probability 1 − p; states 1–3 move right with probability p(1 − q), move left with probability q(1 − p), and stay with probability (1 − p)(1 − q) + pq; state 4 moves left with probability q and stays with probability 1 − q.]

DTMC - formal definition
A Discrete-time Markov Chain is given by
- a set of states S,
- an initial state s0 ∈ S,
- a probability matrix P : S × S → [0, 1], and
- an interpretation of atomic propositions I : S → 2^AP.
[Diagram: a five-state chain with edges 1→2 (1), 2→3 (0.05), 2→5 (0.95), 3→4 (1), 4→2 (1), 5→5 (1), corresponding to the matrix below.]

    P = | 0  1  0     0  0    |
        | 0  0  0.05  0  0.95 |
        | 0  0  0     1  0    |
        | 0  1  0     0  0    |
        | 0  0  0     0  1    |

Back to our questions: Fail-Repair System
[Diagram: idle → working (1), working → done (0.95), working → error (0.05), error → repair (1), repair → working (1), done → done (1).]
What is the probability of reaching “done” from “working” with no visit of “error”?
What is the probability of reaching “done” from “working” with at most one visit of “error”?
What is the probability of reaching “done” from “working”?

Markov chain analysis
Transient analysis:
- distribution after k steps
- reaching/hitting probability
- hitting time
Long run analysis:
- probability of infinite hitting
- stationary (invariant) distribution
- mean inter-visit time
- long run limit distribution
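As a concrete illustration of reaching/hitting probabilities, the questions about the fail-repair system can already be answered here by solving a small system of linear equations (the approach is formalised later on the “Unbounded reachability” slide). The following minimal sketch, which is not part of the original slides, uses Python with numpy; the state indices and the helper reach_prob are my own choices for illustration.

    import numpy as np

    # Fail-repair chain; indices 0..4 stand for idle, working, error, repair, done.
    P = np.array([
        [0.0, 1.0, 0.0,  0.0, 0.0 ],   # idle    -> working
        [0.0, 0.0, 0.05, 0.0, 0.95],   # working -> error (bug) / done (end)
        [0.0, 0.0, 0.0,  1.0, 0.0 ],   # error   -> repair
        [0.0, 1.0, 0.0,  0.0, 0.0 ],   # repair  -> working
        [0.0, 0.0, 0.0,  0.0, 1.0 ],   # done    -> done (absorbing)
    ])

    def reach_prob(P, target, avoid=()):
        # Probability of reaching `target` without visiting any state in `avoid`,
        # obtained by solving x = A x + b over the remaining states.  The direct
        # solve assumes the system is non-singular (true here); in general the
        # minimal non-negative solution of the equations has to be taken.
        n = len(P)
        rest = [s for s in range(n) if s != target and s not in avoid]
        A = P[np.ix_(rest, rest)]
        b = P[rest, target]
        x = np.linalg.solve(np.eye(len(rest)) - A, b)
        prob = np.zeros(n)
        prob[target] = 1.0
        prob[rest] = x
        return prob

    idle, working, error, repair, done = range(5)
    print(reach_prob(P, done, avoid=(error,))[working])   # ≈ 0.95 : done with no visit of error
    print(reach_prob(P, done)[working])                   # ≈ 1.0  : done eventually

The remaining question, reaching “done” with at most one visit of “error”, can be obtained by hand: either the first attempt succeeds, or the bug occurs once, the system is repaired, and the second attempt succeeds without a further error, giving 0.95 + 0.05 · 0.95 = 0.9975.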
Section: Property Specification

Property specification languages
Recall some non-probabilistic specification languages:
LTL formulae: ϕ ::= p | ¬ϕ | ϕ ∨ ϕ | X ϕ | ϕ U ϕ
CTL formulae: ϕ ::= p | ¬ϕ | ϕ ∨ ϕ | EX ϕ | E[ϕ U ϕ] | EG ϕ
Syntax of CTL∗:
state formulae: ϕ ::= p | ¬ϕ | ϕ ∨ ϕ | E ψ
path formulae: ψ ::= ϕ | ¬ψ | ψ ∨ ψ | X ψ | ψ U ψ

Property specification languages
We need to quantify the probability that a certain behaviour will occur.
Probabilistic Computation Tree Logic (PCTL)
Syntax of PCTL:
state formulae: ϕ ::= p | ¬ϕ | ϕ ∨ ϕ | P∼b ψ
path formulae: ψ ::= X ϕ | ϕ U ϕ | ϕ U≤k ϕ
where b ∈ [0, 1] is a probability bound, ∼ ∈ {≤, <, ≥, >}, and k ∈ N is a bound on the number of steps.
A PCTL formula is always a state formula.
α U≤k β is a bounded until saying that α holds until β within k steps. For k = 3 it is equivalent to β ∨ (α ∧ X β) ∨ (α ∧ X (β ∨ α ∧ X β)).
Some tools also support P=? ψ, asking for the probability that the specified behaviour will occur.

PCTL examples
We can also use derived operators like G, F, ∧, ⇒, etc.
[Diagram: the fail-repair chain from above.]
Probabilistic reachability: P≥1(F done) says that the probability of reaching the state done is equal to 1.
Probabilistic bounded reachability: P>0.99(F≤6 done) says that the probability of reaching the state done in at most 6 steps is greater than 0.99.
Probabilistic until: P<0.96((¬error) U done) says that the probability of reaching done with no visit of error is less than 0.96.

Qualitative vs. quantitative properties
Qualitative PCTL properties: P∼b ψ where b is either 0 or 1.
Quantitative PCTL properties: P∼b ψ where b is in (0, 1).
In a DTMC where zero-probability edges are erased, it holds that
- P>0(X ϕ) is equivalent to EX ϕ (there is a next state satisfying ϕ),
- P≥1(X ϕ) is equivalent to AX ϕ (all next states satisfy ϕ),
- P>0(F ϕ) is equivalent to EF ϕ (there exists a finite path to a state satisfying ϕ),
but P≥1(F ϕ) is not equivalent to AF ϕ. There is no CTL formula equivalent to P≥1(F ϕ), and no PCTL formula equivalent to AF ϕ.

How are the transient probabilities computed?
(The chain and the matrix P are as on the formal-definition slide.)
Probability distribution after k steps when starting in state 1:
(1, 0, 0, 0, 0) × P  = (0, 1, 0, 0, 0)
(1, 0, 0, 0, 0) × P² = (0, 0, 0.05, 0, 0.95)
(1, 0, 0, 0, 0) × P³ = (0, 0, 0, 0.05, 0.95)
(1, 0, 0, 0, 0) × P⁴ = (0, 0.05, 0, 0, 0.95)
(1, 0, 0, 0, 0) × P⁵ = (0, 0, 0.0025, 0, 0.9975)

How are the transient probabilities computed?
Probability of being in state 5 or 2 after k steps, for each starting state:
P  × (0, 1, 0, 0, 1)ᵀ = (1, 0.95, 0, 1, 1)ᵀ
P² × (0, 1, 0, 0, 1)ᵀ = (0.95, 0.95, 1, 0.95, 1)ᵀ
P³ × (0, 1, 0, 0, 1)ᵀ = (0.95, 1, 0.95, 0.95, 1)ᵀ
P⁴ × (0, 1, 0, 0, 1)ᵀ = (1, 0.9975, 0.95, 1, 1)ᵀ
P⁵ × (0, 1, 0, 0, 1)ᵀ = (0.9975, 0.9975, 1, 0.9975, 1)ᵀ
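Both computations are plain matrix arithmetic and easy to reproduce. The following sketch, not part of the original slides, repeats the matrix P so that it is self-contained (Python with numpy; the variable names are my own).

    import numpy as np

    # Fail-repair chain; index i corresponds to state i + 1 on the slides.
    P = np.array([
        [0.0, 1.0, 0.0,  0.0, 0.0 ],
        [0.0, 0.0, 0.05, 0.0, 0.95],
        [0.0, 0.0, 0.0,  1.0, 0.0 ],
        [0.0, 1.0, 0.0,  0.0, 0.0 ],
        [0.0, 0.0, 0.0,  0.0, 1.0 ],
    ])

    lam = np.array([1.0, 0, 0, 0, 0])   # initial distribution: start in state 1
    v   = np.array([0.0, 1, 0, 0, 1])   # indicator vector of the set {2, 5}

    for k in range(1, 6):
        Pk = np.linalg.matrix_power(P, k)
        print(k, lam @ Pk)   # forward:  distribution after k steps
        print(k, Pk @ v)     # backward: probability of being in {2, 5} after k steps, per starting state

The forward products reproduce the row vectors of the first list above; the backward products reproduce the column vectors of the second.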
Unbounded reachability (optional slide)
Let p(s, A) be the probability of reaching a state in A from s. It holds that
p(s, A) = 1   for s ∈ A,
p(s, A) = ∑_{s′ ∈ succ(s)} P(s, s′) · p(s′, A)   for s ∉ A,
where succ(s) is the set of successors of s and P(s, s′) is the probability on the edge from s to s′.
Theorem: The minimal non-negative solution of the above equations equals the probability of unbounded reachability.

Section: Long Run Analysis

Long run analysis
[Diagram: the fail-repair chain from above.] Recall that we reach the state 5 (done) with probability 1.
[Diagram: a modified fail-repair chain whose edge probabilities are 1, 0.95, 0.05, 1, 1, 0.5, 0.5.]
What are the states visited infinitely often with probability 1?

Transient and recurrent states
Definitions:
A state of a DTMC is called transient iff there is a positive probability that the system will not return to this state.
A state s of a DTMC is called recurrent iff, starting from s, the system eventually returns to s with probability 1.
Theorem: Every transient state is visited finitely many times with probability 1. Each recurrent state is, with probability 1, either not visited at all or visited infinitely many times. (This holds only in DTMC models with finitely many states.)

Transient vs. recurrent states
Which states are transient? Which states are recurrent?
Decompose the graph representation into strongly connected components.
Theorem: A state is recurrent if and only if it is in a bottom strongly connected component. All other states are transient. (This holds only in DTMC models with finitely many states.)

Irreducible Markov Chain
For the sake of infinite behaviour, we will concentrate on bottom strongly connected components only.
Definition: A Markov chain is said to be irreducible if every state can be reached from every other state in a finite number of steps.
Theorem: A Markov chain is irreducible if and only if its graph representation is a single strongly connected component.
Corollary: All states of a finite irreducible Markov chain are recurrent.

Stationary (Invariant) Distribution
Definition: Let P be the transition matrix of a DTMC and λ be a probability distribution on its states. If λP = λ, then λ is a stationary (or steady-state, or invariant, or equilibrium) distribution of the DTMC.
Question: How many stationary distributions can a Markov chain have? Can there be more than one? Can there be none?

Stationary Distributions
Answer: There can be more than one. For example, in the Drunkard’s walk
[Diagram: states 1, 2, 3, 4; states 1 and 4 have self-loops with probability 1; states 2 and 3 move to each of their neighbours with probability 1/2.]
both (1, 0, 0, 0) and (0, 0, 0, 1) are stationary distributions. But this is not an irreducible Markov chain.

Stationary Distributions
Theorem: In every finite irreducible DTMC there is a unique invariant distribution.
Q: Can there be none?
Theorem: For each finite DTMC, there is an invariant distribution.
Q: How can we compute the invariant distribution of a finite irreducible Markov chain?

Stationary Distribution & Cut-sets
Again, we can construct a set of equations that express the result.
Theorem: Let P be the transition matrix of a finite irreducible DTMC and π = (π_1, π_2, ..., π_n) be a stationary distribution corresponding to P. For any state i of the DTMC, we have
∑_{j≠i} π_j · P_{j,i} = ∑_{j≠i} π_i · P_{i,j}.

Mean Portion of Visited States and Inter-Visit Time
Theorem: Let us have a finite irreducible DTMC with the unique stationary distribution π. It holds that
π_i = lim_{n→∞} E(number of visits of state i during the first n steps) / n.
Theorem: Let us have a finite irreducible DTMC with the unique stationary distribution π. It holds that π_i = 1/m_i, where m_i is the mean inter-visit time of state i.
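The equations πP = π, together with the normalisation ∑_i π_i = 1, determine the unique stationary distribution of a finite irreducible DTMC, so it can be computed by solving one linear system. The following sketch, not part of the original slides, does this for the refined queue from the earlier slide with p = 1/2 and q = 2/3 (Python with numpy; the helper stationary and the explicit matrix encoding of the queue are my own).

    import numpy as np

    def stationary(P):
        # Unique stationary distribution of a finite irreducible DTMC:
        # solve pi P = pi together with sum(pi) = 1.  One balance equation
        # is redundant and is replaced by the normalisation.
        n = len(P)
        A = P.T - np.eye(n)
        A[-1, :] = 1.0
        b = np.zeros(n)
        b[-1] = 1.0
        return np.linalg.solve(A, b)

    # The refined queue (states 0..4 = number of items) with p = 1/2, q = 2/3.
    p, q = 0.5, 2.0 / 3.0
    stay = (1 - p) * (1 - q) + p * q
    P = np.array([
        [1 - p,       p,           0,           0,           0          ],
        [q * (1 - p), stay,        p * (1 - q), 0,           0          ],
        [0,           q * (1 - p), stay,        p * (1 - q), 0          ],
        [0,           0,           q * (1 - p), stay,        p * (1 - q)],
        [0,           0,           0,           q,           1 - q      ],
    ])

    pi = stationary(P)
    print(pi)         # long-run fraction of time spent with 0, 1, ..., 4 items in the queue
    print(1.0 / pi)   # mean inter-visit times m_i = 1 / pi_i

By the two theorems above, π_i is both the long-run fraction of time spent in state i and the reciprocal of the mean inter-visit time of i, which the last two lines print.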
Aperiodic Markov Chains
[Diagrams: two small example chains, one aperiodic and one periodic.]
Definition: A state s is periodic if there exists an integer ∆ > 1 such that the length of every path from s to s is divisible by ∆. A Markov chain is periodic if some state in the chain is periodic. A state or chain that is not periodic is aperiodic.

Aperiodic Markov Chains
Theorem: Let us have a finite aperiodic irreducible DTMC with the unique stationary distribution π. It holds that
π = lim_{n→∞} λPⁿ,
where λ is an arbitrary distribution on states.
Q: What is this good for?

DTMC Extensions - Communication and Nondeterminism
A last remark on some DTMC extensions.
Modules and synchronisation:
- modules
- actions
- rewards
Decision extension:
- Markov Decision Processes (MDP)
- Pmin and Pmax properties
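To close, here is a small sketch, not part of the original slides, illustrating the limit theorem for aperiodic irreducible chains above: iterating the distribution λ ↦ λP converges to the unique stationary distribution no matter which λ we start from, which is one answer to the question "What is this good for?". The two-state chain and the iteration count are my own toy choices.

    import numpy as np

    # A toy aperiodic irreducible chain (not from the slides).
    P = np.array([
        [0.9, 0.1],
        [0.5, 0.5],
    ])

    for lam in (np.array([1.0, 0.0]), np.array([0.25, 0.75])):
        for _ in range(100):   # compute lam * P^100
            lam = lam @ P
        print(lam)             # both starting distributions yield approximately (5/6, 1/6)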