Real-Time Scheduling: Priority-Driven Scheduling — Fixed-Priority

Fixed-Priority Algorithms

Recall that we consider a set of n tasks T = {T1, . . . , Tn}. A fixed-priority algorithm schedules the tasks of T according to fixed (distinct) priorities assigned to the tasks. We write Ti ≻ Tj whenever Ti has a higher priority than Tj.

To simplify our reasoning, assume that all tasks are in phase, i.e. ϕk = 0 for all Tk. We will remove this assumption at the end.

Fixed-Priority Algorithms – Reminder

Recall that fixed-priority algorithms need not be optimal. Consider T = {T1, T2} where T1 = (2, 1) and T2 = (5, 2.5):
- UT = 1, and thus T is schedulable by EDF;
- if T1 ≻ T2, then J2,1 misses its deadline;
- if T2 ≻ T1, then J1,1 misses its deadline.

We consider the following algorithms:
- RM assigns priorities to tasks based on their periods: the priority is inversely proportional to the period pi.
- DM assigns priorities to tasks based on their relative deadlines: the priority is inversely proportional to the relative deadline Di.
(In both cases, ties are broken arbitrarily.)

We consider the following questions:
- Are the algorithms optimal?
- How can we test schedulability efficiently (or even online)?

Maximum Response Time

Which job of a task Ti has the maximum response time? As all tasks are in phase, the first job of Ti is released together with the (first) jobs of all tasks that have higher priority than Ti. This means that Ji,1 is the most preempted of the jobs in Ti. It follows that Ji,1 has the maximum response time.

Note that this relies heavily on the assumption that the tasks are in phase!

Thus, to decide whether T is schedulable, it suffices to test the schedulability of the first jobs of all tasks.

Optimality of RM for Simply Periodic Tasks

Definition 16
A set {T1, . . . , Tn} is simply periodic if for every pair Ti, Tℓ satisfying pi > pℓ we have that pi is an integer multiple of pℓ.

Example 17
The helicopter control system from the first lecture.
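The non-optimality example above (T1 = (2, 1), T2 = (5, 2.5)) can be replayed with a small discrete-time simulation. This is an illustrative sketch, not part of the lecture: the function name is ours, and all task parameters are scaled by 2 so that execution proceeds in integral time slots.

```python
# Discrete-time simulation of preemptive fixed-priority scheduling with all
# tasks in phase and Di = pi. A sketch: parameters of the reminder example
# T1 = (2, 1), T2 = (5, 2.5) are doubled to T1 = (4, 2), T2 = (10, 5).

def misses_deadline(tasks, horizon):
    """tasks: list of (period, execution) in priority order (highest first).
    Returns True iff some job misses its deadline (= next release) in horizon."""
    remaining = [0] * len(tasks)            # unfinished work of the current job
    for t in range(horizon):
        for i, (p, e) in enumerate(tasks):
            if t % p == 0:
                if remaining[i] > 0:        # previous job still unfinished
                    return True             # ... so it has missed its deadline
                remaining[i] = e            # release the next job
        for i in range(len(tasks)):         # run the highest-priority ready job
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return False

print(misses_deadline([(4, 2), (10, 5)], 20))   # True: with T1 ≻ T2, J2,1 misses
print(misses_deadline([(10, 5), (4, 2)], 20))   # True: with T2 ≻ T1, J1,1 misses
print(misses_deadline([(4, 2), (12, 4)], 24))   # False: this set is RM-schedulable
```

The horizon 20 is one hyperperiod of the scaled example, which suffices for in-phase tasks.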
Theorem 18
A set T of n simply periodic, independent, preemptable tasks with Di = pi is schedulable on one processor according to RM iff UT ≤ 1. That is, on simply periodic tasks RM is as good as EDF.

Note: Theorem 18 is true in general; no "in phase" assumption is needed.

Proof of Theorem 18

By Theorem 14, every schedulable set T satisfies UT ≤ 1. We prove that if T is not schedulable according to RM, then UT > 1.

Assume that a job Ji,1 of Ti misses its deadline at Di = pi. W.l.o.g. assume that T1 ≺ · · · ≺ Tn according to RM, so the tasks with higher priority than Ti are Ti+1, . . . , Tn. Let us compute the total execution time E of Ji,1 and of all jobs that preempt it:

    E = ei + Σ_{ℓ=i+1..n} ⌈pi/pℓ⌉ eℓ = Σ_{ℓ=i..n} (pi/pℓ) eℓ = pi Σ_{ℓ=i..n} uℓ ≤ pi Σ_{ℓ=1..n} uℓ = pi UT

(The ceilings disappear because the set is simply periodic: each pℓ with ℓ > i divides pi.) Now E > pi, because otherwise Ji,1 would meet its deadline. Thus pi < E ≤ pi UT, and we obtain UT > 1. □

Optimality of DM (RM) among Fixed-Priority Algorithms

Theorem 19
A set of independent, preemptable periodic tasks with Di ≤ pi that are in phase (i.e., ϕi = 0 for all i = 1, . . . , n) can be feasibly scheduled on one processor according to DM if it can be feasibly scheduled by some fixed-priority algorithm.

Proof.
Assume a feasible fixed-priority schedule with T1 ≻ · · · ≻ Tn. Consider the least i such that the relative deadline Di of Ti is larger than the relative deadline Di+1 of Ti+1. Swap the priorities of Ti and Ti+1; the resulting schedule is still feasible. The DM ordering is obtained by finitely many such swaps. □

Note: If the assumptions of the above theorem hold and all relative deadlines are equal to periods, then RM is optimal among all fixed-priority algorithms.
Note: no "in phase" assumption is needed here.

Fixed-Priority Algorithms: Schedulability

We consider two schedulability tests:
- the schedulable utilization URM of the RM algorithm;
- time-demand analysis based on response times.

Schedulable Utilization for RM

Theorem 20
Let us fix n ∈ N and consider only independent, preemptable periodic tasks with Di = pi.
- If T is a set of n tasks satisfying UT ≤ n(2^{1/n} − 1), then T is schedulable according to the RM algorithm.
- For every U > n(2^{1/n} − 1) there is a set T of n tasks satisfying UT ≤ U that is not schedulable by RM.

Note: Theorem 20 holds in general; no "in phase" assumption is needed.

Schedulable Utilization for RM

It follows that the maximum schedulable utilization URM over independent, preemptable periodic tasks satisfies

    URM = inf_n n(2^{1/n} − 1) = lim_{n→∞} n(2^{1/n} − 1) = ln 2 ≈ 0.693

Note that UT ≤ n(2^{1/n} − 1) is a sufficient but not necessary condition for schedulability of T using the RM algorithm (an example will be given later).

We say that a set of tasks T is RM-schedulable if it is schedulable according to RM. We say that T is RM-infeasible if it is not RM-schedulable.

Proof – Special Case

To simplify, we restrict to two tasks and always assume p2 ≤ 2p1 (the latter condition is w.l.o.g.; proof omitted).

Outline: Given p1, p2, e1, denote by max_e2 the maximum execution time such that T = {(p1, e1), (p2, max_e2)} is RM-schedulable. We define U(p1, p2, e1) to be UT where T = {(p1, e1), (p2, max_e2)}. We say that such a T fully utilizes the processor: any increase in an execution time causes RM-infeasibility.

Now we find the (global) minimum minU of U(p1, p2, e1). Note that this suffices to obtain the desired result:
- Given a set of tasks T = {(p1, e1), (p2, e2)} satisfying UT ≤ minU, we get UT ≤ minU ≤ U(p1, p2, e1), and thus the execution time e2 cannot be larger than max_e2. Thus T is RM-schedulable.
- Given U > minU, there must be p1, p2, e1 satisfying minU ≤ U(p1, p2, e1) < U, where U(p1, p2, e1) = UT for the set of tasks T = {(p1, e1), (p2, max_e2)}. However, now increasing e1 by a sufficiently small ε > 0 makes the set RM-infeasible without making the utilization larger than U.

Proof – Special Case (Cont.)

Consider two cases depending on e1:

1. e1 < p2 − p1: The maximum RM-feasible max_e2 (with p1, p2, e1 fixed) is p2 − 2e1.
This gives the utilization

    U(p1, p2, e1) = e1/p1 + max_e2/p2 = e1/p1 + (p2 − 2e1)/p2 = e1/p1 + 1 − 2e1/p2 = 1 + (e1/p2)(p2/p1 − 2)

As p2/p1 − 2 ≤ 0, the utilization U(p1, p2, e1) is minimized by maximizing e1.

2. e1 ≥ p2 − p1: The maximum RM-feasible max_e2 (with p1, p2, e1 fixed) is p1 − e1. This gives the utilization

    U(p1, p2, e1) = e1/p1 + max_e2/p2 = e1/p1 + (p1 − e1)/p2 = e1/p1 + p1/p2 − e1/p2 = p1/p2 + (e1/p2)(p2/p1 − 1)

As p2/p1 − 1 ≥ 0, the utilization U(p1, p2, e1) is minimized by minimizing e1.

The minimum of U(p1, p2, e1) is therefore attained at e1 = p2 − p1. (Both expressions defining U(p1, p2, e1) give the same value for e1 = p2 − p1.)

Proof – Special Case (Cont.)

Substitute e1 = p2 − p1 into the expression for U(p1, p2, e1):

    U(p1, p2, p2 − p1) = p1/p2 + ((p2 − p1)/p2)(p2/p1 − 1)
                       = p1/p2 + (1 − p1/p2)(p2/p1 − 1)
                       = p1/p2 + (p1/p2)(p2/p1 − 1)(p2/p1 − 1)
                       = (p1/p2)(1 + (p2/p1 − 1)²)

Denoting G = p2/p1 − 1, we obtain

    U(p1, p2, p2 − p1) = (p1/p2)(1 + G²) = (1 + G²)/(p2/p1) = (1 + G²)/(1 + G)

Differentiating with respect to G, we get (G² + 2G − 1)/(1 + G)², which vanishes at G = −1 ± √2. Only G = −1 + √2 > 0 is acceptable, since the other root is negative.

Proof – Special Case (Cont.)

Thus the minimum value of U(p1, p2, e1) is

    (1 + (√2 − 1)²)/(1 + (√2 − 1)) = (4 − 2√2)/√2 = 2(√2 − 1)

It is attained at periods satisfying G = p2/p1 − 1 = √2 − 1, i.e. satisfying p2 = √2 p1. The execution time e1 which at full utilization of the processor (due to max_e2) gives the minimum utilization is e1 = p2 − p1 = (√2 − 1)p1, and the corresponding max_e2 = p1 − e1 = p1 − (p2 − p1) = 2p1 − p2.

Scaling to p1 = 1, we obtain a completely determined example

    p1 = 1    p2 = √2 ≈ 1.41    e1 = √2 − 1 ≈ 0.41    max_e2 = 2 − √2 ≈ 0.59

that fully utilizes the processor (no execution time can be increased) but has the minimum utilization 2(√2 − 1).

Proof Idea of Theorem 20

Fix periods p1 < · · · < pn such that (w.l.o.g.) pn ≤ 2p1.
Then the following set of tasks has the smallest utilization among all task sets that fully utilize the processor (i.e., any increase in any execution time makes the set unschedulable):

    [Figure: timelines of the tasks T1, . . . , Tn over their first periods, showing releases at 0, p1, 2p1, p2, . . . , pn]

    ek = pk+1 − pk  for k = 1, . . . , n − 1
    en = pn − 2 Σ_{k=1..n−1} ek = 2p1 − pn

Time-Demand Analysis

Consider a set of n tasks T = {T1, . . . , Tn}. Recall that we consider only independent, preemptable, in-phase (i.e. ϕi = 0 for all i) tasks without resource contentions. Assume that Di ≤ pi for every i, and consider an arbitrary fixed-priority algorithm. W.l.o.g. assume T1 ≻ · · · ≻ Tn.

Idea: For every task Ti and every time instant t ≥ 0, compute the total execution time wi(t) (the time demand) of the first job Ji,1 and of all higher-priority jobs released up to time t. If wi(t) ≤ t for some time t ≤ Di, then Ji,1 is schedulable, and hence all jobs of Ti are schedulable.

Time-Demand Analysis

- Consider one task Ti at a time, starting with the highest priority and working down to the lowest priority.
- Focus on the first job Ji,1 of Ti. If Ji,1 makes it, all jobs of Ti will make it, due to ϕi = 0.
- At time t ≥ 0, the processor time demand wi(t) for this job and all higher-priority jobs released in [0, t] is bounded by

      wi(t) = ei + Σ_{ℓ=1..i−1} ⌈t/pℓ⌉ eℓ    for 0 < t ≤ pi

  (Note that the smallest t for which wi(t) ≤ t is the response time of Ji,1, and hence the maximum response time of jobs in Ti.)
- If wi(t) ≤ t for some t ≤ Di, the job Ji,1 meets its deadline Di.
- If wi(t) > t for all 0 < t ≤ Di, then the first job of the task cannot complete by its deadline.
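The smallest t with wi(t) ≤ t mentioned above can be computed by the standard fixed-point iteration t ← wi(t), starting from t = ei. The iteration and the helper names below are ours, not from the slides; a minimal sketch, applied to the example set T1 = (3, 1), T2 = (5, 1.5), T3 = (7, 1.25), T4 = (9, 0.5):

```python
from math import ceil

# Fixed-point computation of the smallest t with wi(t) <= t, i.e. the response
# time of Ji,1 -- for in-phase tasks, the maximum response time of any job of Ti.

def response_time(tasks, i):
    """tasks: (period, execution, deadline) triples, highest priority first.
    Returns the response time of Ji,1, or None if Ji,1 misses its deadline."""
    p_i, e_i, d_i = tasks[i]
    t = e_i
    while True:
        demand = e_i + sum(ceil(t / p) * e for p, e, _ in tasks[:i])  # wi(t)
        if demand > d_i:           # the job cannot finish by its deadline
            return None
        if demand == t:            # smallest fixed point: wi(t) <= t holds
            return t
        t = demand                 # wi is monotone, so t only increases

def schedulable(tasks):
    return all(response_time(tasks, i) is not None for i in range(len(tasks)))

tasks = [(3, 1, 3), (5, 1.5, 5), (7, 1.25, 7), (9, 0.5, 9)]
print([response_time(tasks, i) for i in range(4)])     # [1, 2.5, 4.75, 9.0]
print(schedulable(tasks))                              # True
print(response_time([(2, 1, 2), (5, 2.5, 5)], 1))      # None: J2,1 misses
```

The last call replays the reminder example T1 = (2, 1), T2 = (5, 2.5) under RM, confirming that J2,1 misses its deadline.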
Time-Demand Analysis – Example

Example: T1 = (3, 1), T2 = (5, 1.5), T3 = (7, 1.25), T4 = (9, 0.5)

This set is schedulable by RM even though U{T1,...,T4} ≈ 0.87 > 0.757 = URM(4).

Time-Demand Analysis

- The time-demand function wi(t) is a staircase function: steps occur at the multiples of the periods of higher-priority tasks, and the value of wi(t) − t decreases linearly from one step until the next.
- If our interest is the schedulability of a task, it therefore suffices to check whether wi(t) ≤ t at the time instants when a higher-priority job is released, and at Di.
- Our schedulability test becomes: compute wi(t) and check whether wi(t) ≤ t for some t equal either to Di or to j · pk, where k = 1, 2, . . . , i and j = 1, 2, . . . , ⌊Di/pk⌋.

Time-Demand Analysis – Comments

- The time-demand analysis schedulability test is more complex than the schedulable utilization test, but more general:
  - it works for any fixed-priority scheduling algorithm, provided the tasks have short response times (Di ≤ pi), and it can be extended to tasks with arbitrary deadlines;
  - it is still more efficient than exhaustive simulation.
- Assuming that the tasks are in phase, the time-demand analysis is complete.

We have considered the time-demand analysis for tasks in phase. In particular, we used the fact that the first job has the maximum response time. This is not true if the tasks are not in phase; we then need to identify the so-called critical instant, the time instant at which the system is most loaded and a task has its worst response time.

Critical Instant – Formally

Definition 21
A critical instant tcrit of a task Ti is a time instant at which a job Ji,k of Ti is released so that Ji,k either does not meet its deadline, or has the maximum response time of all jobs in Ti.
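The effect of phases on response times can be illustrated by simulation. This is a sketch with made-up parameters (not from the slides): T1 = (4, 2) at phase 0 has higher priority than T2 = (8, 3), whose phase ϕ2 varies; the worst response time of T2 is attained when a T2 job is released together with a T1 job, i.e. at a critical instant.

```python
# Integer-time simulation of two tasks, T1 = (4, 2) with phase 0 having higher
# priority than T2 = (8, 3) with phase phi2. A sketch that assumes no deadline
# misses (each job finishes before the next release of its task).

def t2_response_times(phi2, p1=4, e1=2, p2=8, e2=3, horizon=80):
    """Returns the response times of all T2 jobs completed within the horizon."""
    rem1 = rem2 = 0
    release2 = None
    responses = []
    for t in range(horizon):
        if t % p1 == 0:
            rem1 = e1                     # release of a T1 job
        if t >= phi2 and (t - phi2) % p2 == 0:
            rem2, release2 = e2, t        # release of a T2 job
        if rem1 > 0:                      # T1 preempts T2
            rem1 -= 1
        elif rem2 > 0:
            rem2 -= 1
            if rem2 == 0:                 # job finishes at the end of slot t
                responses.append(t + 1 - release2)
    return responses

# Simultaneous release (phi2 = 0) yields the maximum response time 7;
# no other phase of T2 exceeds it.
print(max(t2_response_times(0)), [max(t2_response_times(phi)) for phi in (1, 2, 3)])
```

The printed values are 7 for ϕ2 = 0 and at most 7 for every other phase, matching the critical-instant intuition.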
Theorem 22
In a fixed-priority system where every job completes before the next job of the same task is released, a critical instant of a task Ti occurs when one of its jobs Ji,k is released at the same time as a job from every higher-priority task.

Note that the situation described in the theorem does not have to occur if the tasks are not in phase!

Critical Instant and Schedulability Tests

We use critical instants to get upper bounds for schedulability as follows:
- Set the phases of all tasks to zero, which gives a new set of tasks T′ = {T′1, . . . , T′n}. By Theorem 22, the response time of the first job J′i,1 of T′i in T′ is at least as large as the response time of every job of Ti in T.
- Decide the schedulability of T′, e.g. using the time-demand analysis.
- If T′ is schedulable, then T is also schedulable.
- If T′ is not schedulable, then T may still be schedulable. This makes the time-demand analysis incomplete in general for tasks that are not in phase.

Dynamic vs Fixed Priority

EDF
- pros:
  - optimal;
  - very simple and complete test for schedulability.
- cons:
  - difficult to predict which job misses its deadline;
  - strictly following EDF in case of overloads assigns higher priority to jobs that have already missed their deadlines;
  - larger scheduling overhead.

DM (RM)
- pros:
  - easier to predict which job misses its deadline (in particular, tasks are not blocked by lower-priority tasks);
  - easy implementation with little scheduling overhead;
  - optimal in some cases that often occur in practice.
- cons:
  - not optimal;
  - incomplete and more involved tests for schedulability.
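The contrast between the two quick schedulability tests in the comparison above can be summarized in code — a sketch (helper names are ours) of EDF's complete utilization test for Di = pi next to RM's sufficient-only bound from Theorem 20, applied to the earlier four-task example:

```python
# Quick utilization-based schedulability tests: EDF's test (U <= 1) is complete
# for Di = pi, while RM's bound n(2^(1/n) - 1) is sufficient only -- sets above
# the bound may still be RM-schedulable, as the example below shows.

def utilization(tasks):
    """tasks: list of (period, execution) pairs."""
    return sum(e / p for p, e in tasks)

def edf_schedulable(tasks):          # complete test (assuming Di = pi)
    return utilization(tasks) <= 1

def rm_bound_test(tasks):            # sufficient, but not necessary
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

tasks = [(3, 1), (5, 1.5), (7, 1.25), (9, 0.5)]   # the time-demand example set
print(round(utilization(tasks), 3), edf_schedulable(tasks), rm_bound_test(tasks))
# -> 0.867 True False, yet time-demand analysis shows the set is RM-schedulable
```

The bound test failing here is exactly the "sufficient but not necessary" situation noted after Theorem 20.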