Real-Time Scheduling: Scheduling of Reactive Systems

[Some parts of this lecture are based on a real-time systems course by Colin Perkins, http://csperkins.org/teaching/rtes/index.html]

Reminder of Basic Notions

Jobs are executed on processors and need resources.

Parameters of jobs:
- temporal: release time ri, execution time ei, absolute deadline di; derived parameters: relative deadline Di, completion time, response time, ...
- functional: laxity type (hard vs soft), preemptability
- interconnection: precedence constraints (or independence)
- resource: which resources the job uses, and when

Tasks = sets of jobs.

Reminder of Basic Notions

A schedule assigns, at every time instant, processors and resources to jobs.
- A valid schedule is one that is correct in the common-sense way.
- A feasible schedule is valid and all hard real-time tasks meet their deadlines.
- A set of jobs is schedulable if there is a feasible schedule for it.
- A scheduling algorithm computes a schedule for a set of jobs.
- A scheduling algorithm is optimal if it always produces a feasible schedule whenever such a schedule exists and, if a cost function is given, minimizes the cost.

So far we have considered scheduling of individual jobs.

Scheduling Reactive Systems

From this point on we concentrate on reactive systems, i.e. systems that run for an unlimited amount of time.

Recall that a task is a set of related jobs that jointly provide some system function. We consider several types of tasks:
- periodic
- aperiodic
- sporadic

These types differ in the execution-time patterns of their jobs; consequently they must be modeled differently, call for different scheduling algorithms, have different impact on system performance, and impose different constraints on scheduling.

Periodic Tasks

A set of jobs that are executed repeatedly at regular time intervals can be modeled as a periodic task.

[Figure: a timeline showing jobs Ji,1, Ji,2, Ji,3, Ji,4, ... of a task Ti released at times ri,1, ri,2, ri,3, ri,4, ...; the phase ϕi marks the first release]

- Each periodic task Ti is a sequence of jobs Ji,1, Ji,2, ..., Ji,n, ...
- The phase ϕi of a task Ti is the release time ri,1 of the first job Ji,1 in Ti; tasks are in phase if their phases are equal.
- The period pi of a task Ti is the minimum length of all time intervals between release times of consecutive jobs in Ti.
- The execution time ei of a task Ti is the maximum execution time of all jobs in Ti.
- The relative deadline Di is the relative deadline of all jobs in Ti.

(The period and execution time of every periodic task in the system are known with reasonable accuracy at all times.)

Periodic Tasks – Notation

The 4-tuple Ti = (ϕi, pi, ei, Di) refers to a periodic task Ti with phase ϕi, period pi, execution time ei, and relative deadline Di.

For example, jobs of T1 = (1, 10, 3, 6) are released at times 1, 11, 21, ..., execute for 3 time units, and have to finish within 6 time units (the first by 7, the second by 17, ...).

The default phase is ϕi = 0 and the default relative deadline is Di = pi:
- T2 = (10, 3, 6) satisfies ϕ2 = 0, p2 = 10, e2 = 3, D2 = 6, i.e. jobs of T2 are released at times 0, 10, 20, ..., execute for 3 time units, and have to finish within 6 time units (the first by 6, the second by 16, ...).
- T3 = (10, 3) satisfies ϕ3 = 0, p3 = 10, e3 = 3, D3 = 10, i.e. jobs of T3 are released at times 0, 10, 20, ..., execute for 3 time units, and have to finish within 10 time units (the first by 10, the second by 20, ...).
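To make the notation concrete, here is a minimal Python sketch (the class and method names are my own, not part of the lecture) representing a periodic task Ti = (ϕi, pi, ei, Di) and computing the release time and absolute deadline of its k-th job:

```python
from dataclasses import dataclass

@dataclass
class PeriodicTask:
    """Periodic task Ti = (phase, period, exec_time, rel_deadline)."""
    phase: float = 0.0         # phi_i, default 0
    period: float = 0.0        # p_i
    exec_time: float = 0.0     # e_i
    rel_deadline: float = None # D_i, defaults to the period

    def __post_init__(self):
        if self.rel_deadline is None:
            self.rel_deadline = self.period  # default D_i = p_i

    def release(self, k):
        """Release time r_{i,k} of the k-th job (k = 1, 2, ...)."""
        return self.phase + (k - 1) * self.period

    def deadline(self, k):
        """Absolute deadline d_{i,k} = r_{i,k} + D_i of the k-th job."""
        return self.release(k) + self.rel_deadline

# T1 = (1, 10, 3, 6): releases 1, 11, 21, ...; deadlines 7, 17, 27, ...
T1 = PeriodicTask(phase=1, period=10, exec_time=3, rel_deadline=6)
print([T1.release(k) for k in (1, 2, 3)])   # [1, 11, 21]
print([T1.deadline(k) for k in (1, 2, 3)])  # [7, 17, 27]
```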
Periodic Tasks – Hyperperiod

The hyperperiod H of a set of periodic tasks is the least common multiple of their periods. If the tasks are in phase, then H is the time instant after which the pattern of job release/execution times starts to repeat. (A one-line computation of H in code is sketched at the end of this part.)

[Figure: job releases over [0, 30]; the pattern repeats after the hyperperiod H]

Aperiodic and Sporadic Tasks

Many real-time systems are required to respond to external events. The tasks resulting from such events are sporadic and aperiodic tasks:
- sporadic tasks – jobs have hard deadlines, e.g. autopilot on/off in an aircraft
- aperiodic tasks – jobs have soft deadlines, e.g. sensitivity adjustment of a radar surveillance system

Inter-arrival times between consecutive jobs are independently and identically distributed according to a probability distribution A(x). Execution times of jobs are independently and identically distributed according to a probability distribution B(x).

- In the case of sporadic tasks, the usual goal is to decide whether a newly released job can be feasibly scheduled with the remaining jobs in the system.
- In the case of aperiodic tasks, the usual goal is to minimize the average response time.

Scheduling – Classification of Algorithms

Static vs dynamic:
- static – decisions are based on fixed parameters assigned to tasks/jobs before their activation
- dynamic – decisions are based on dynamic parameters that may change during the computation

Off-line vs online:
- off-line – the scheduling algorithm is executed on the whole task set before activation
- online – the schedule is updated at runtime every time a new task enters the system

Optimal vs heuristic:
- optimal – the algorithm computes a feasible schedule and minimizes the cost of soft real-time jobs
- heuristic – the algorithm is guided by a heuristic function; it tends towards an optimal schedule but may not produce one

Scheduling – Clock-Driven

Decisions about which jobs execute when are made at specific time instants; these instants are chosen before the system begins execution. They are usually regularly spaced, implemented using a periodic timer interrupt: the scheduler awakes after each interrupt, schedules jobs to execute for the next period, then blocks itself until the next interrupt. E.g. the helicopter example with an interrupt every 1/180th of a second.

Typically in clock-driven systems:
- all parameters of the real-time jobs are fixed and known
- a schedule of the jobs is computed off-line and stored for use at runtime; thus the scheduling overhead at runtime can be minimized

Simple and straightforward, but not flexible.
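As promised on the hyperperiod slide, H is just the least common multiple of the periods. A minimal sketch (assuming integer periods and Python 3.9+ for math.lcm; the function name is mine):

```python
from math import lcm

def hyperperiod(periods):
    """H = least common multiple of all (integer) task periods."""
    return lcm(*periods)

print(hyperperiod([3, 5, 10]))  # 30 -- the task set of the next example
print(hyperperiod([4, 5, 20]))  # 20 -- the clock-driven example below
```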
Scheduling – Priority-Driven

- Assign priorities to jobs, based on some algorithm.
- Make scheduling decisions based on the priorities, when events such as job releases and completions occur; priority scheduling algorithms are event-driven.
- Jobs are placed in one or more queues; at each event, the ready job with the highest priority is executed.

(The assignment of jobs to priority queues, along with rules such as whether preemption is allowed, completely defines a priority-driven algorithm.)

Priority-driven algorithms make locally optimal scheduling decisions; locally optimal scheduling is often not globally optimal. Priority-driven algorithms never intentionally leave processors idle.

Typically in priority-driven systems:
- some parameters do not have to be fixed or known
- the schedule is computed online, which usually results in larger scheduling overhead compared to clock-driven scheduling
- flexible – it is easy to add/remove tasks or modify parameters

Clock-Driven & Priority-Driven Example

     | T1 | T2 | T3
  pi |  3 |  5 | 10
  ei |  1 |  2 |  1

[Figure: two Gantt charts over [0, 30]: a clock-driven schedule, and a priority-driven schedule with task rows T1, T2, T3]

Real-Time Scheduling: Scheduling of Reactive Systems – Clock-Driven Scheduling

Current Assumptions

- A fixed number, n, of periodic tasks T1, ..., Tn.
- Parameters of the periodic tasks are known a priori.
- The execution time ei,k of each job Ji,k in a task Ti is fixed.
- For the jobs Ji,k of a task Ti we have ri,1 = ϕi = 0 (i.e. the tasks are in phase) and ri,k = ri,k−1 + pi.
- We allow aperiodic jobs: assume that the system maintains a single queue for aperiodic jobs; whenever the processor is available for aperiodic jobs, the job at the head of this queue is executed.
- We treat sporadic jobs later.

Static, Clock-Driven Scheduler

Construct a static schedule offline:
- the schedule specifies exactly when each job executes
- the amount of time allocated to every job is equal to its execution time
- the schedule repeats each hyperperiod, i.e. it suffices to compute the schedule up to the hyperperiod

Complex algorithms can be used offline:
- the runtime of the scheduling algorithm is not relevant
- we can compute a schedule that optimizes some characteristic of the system, e.g. a schedule where the idle periods are nearly periodic (useful to accommodate aperiodic jobs)

Example

T1 = (4, 1), T2 = (5, 1.8), T3 = (20, 1), T4 = (20, 2); hyperperiod H = 20.

[Figure: Gantt chart of the static schedule over [0, 24], with slots for T1–T4 and aperiodic time]

Implementation of a Static Scheduler

Store the pre-computed schedule as a table. Each entry (tk, T(tk)) gives:
- a decision time tk
- a scheduling decision T(tk), which is either a task to be executed, or idle (denoted by I)

The system creates all tasks that are to be executed: it allocates memory for the code and data and brings the code into memory. The scheduler sets the hardware timer to interrupt at the first decision time t0 = 0. On receipt of an interrupt at tk:
- the scheduler sets the timer interrupt to tk+1
- if the previous task is overrunning, it handles the failure
- if T(tk) = I and an aperiodic job is waiting, it starts executing that job
- otherwise, it starts executing the next job in T(tk)

Example

T1 = (4, 1), T2 = (5, 1.8), T3 = (20, 1), T4 = (20, 2); hyperperiod H = 20.

  tk    | 0.0 | 1.0 | 2.0 | 3.8 | 4.0 | 5.0 | 6.0 | ...
  T(tk) | T1  | T3  | T2  | I   | T1  | I   | T4  | ...

Implementation of a Static Scheduler

Input: the table (tk, T(tk)) for k = 0, 1, ..., n − 1.

Task SCHEDULER:
  set the number of decisions i := 0 and table entry k := 0;
  set the timer to expire at t0;
  do forever:
    accept timer interrupt;
    if an aperiodic job is executing, preempt it;
    current task T := T(tk);
    increment i by 1;
    compute the next table entry k := i mod n;
    set the timer to expire at ⌊i/n⌋ ∗ H + tk;
    if the current task T = I,
      execute the job at the head of the aperiodic queue;
    else let the task T execute;
    sleep;
  end do.
End SCHEDULER
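To see the pseudocode in action, here is a hedged Python simulation (my own sketch, not the lecture's code) that walks the schedule table and prints the decision times; a real implementation would program a hardware timer instead of looping. Only the table fragment published above is used, so after entry 6.0 the sketch wraps to the next hyperperiod as if the table were complete.

```python
# Fragment of the table for T1=(4,1), T2=(5,1.8), T3=(20,1), T4=(20,2), H=20
H = 20
table = [(0.0, "T1"), (1.0, "T3"), (2.0, "T2"), (3.8, "I"),
         (4.0, "T1"), (5.0, "I"), (6.0, "T4")]  # real table has more entries
n = len(table)

def run(decisions):
    """Simulate the SCHEDULER loop of the pseudocode above."""
    i, k = 0, 0
    while i < decisions:
        t_k, T = table[k]
        now = (i // n) * H + t_k          # timer expiry for this decision
        what = "head of aperiodic queue" if T == "I" else T
        print(f"t = {now:5.1f}: execute {what}")
        i += 1                            # increment i by 1
        k = i % n                         # next table entry k = i mod n

run(9)  # first hyperperiod fragment, then t = 20.0, 21.0, ...
```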
Frame-Based Scheduling

Arbitrary table-driven cyclic schedules are flexible but inefficient: they rely on accurate timer interrupts based on the execution times of tasks, and the scheduling overhead is high. Implementation becomes easier if a structure is imposed:
- make scheduling decisions at periodic intervals (frames) of length f
- execute a fixed list of jobs within each frame; no preemption within frames

How to choose the size of frames? How to compute a schedule? To simplify further development, assume that all parameters are in N and choose frame sizes in N.

Frame-Based Scheduling – Frame Size

0. A necessary condition for avoiding preemption of jobs is f ≥ max_i ei (i.e. we want each job to have a chance to finish within a frame).
1. To minimize the number of entries in the cyclic schedule, the hyperperiod should be an integer multiple of the frame size, i.e. ⌊pi/f⌋ − pi/f = 0 for at least one task Ti.
2. To allow the scheduler to check that jobs complete by their deadlines, at least one full frame should lie between the release time of each job and its deadline, i.e. 2f − gcd(pi, f) ≤ Di for all tasks Ti.

(A small checker for these conditions is sketched at the end of this part.)

Frame-Based Scheduling – Frame Size – Example

Example 1. T1 = (4, 1.0), T2 = (5, 1.8), T3 = (20, 1.0), T4 = (20, 2.0). Then f ∈ N satisfies 0.–2. iff f = 2, and with f = 2 the task set is schedulable:

[Figure: frame-based schedule of T1–T4 with frame size f = 2]

Frame-Based Scheduling – Job Slices

Sometimes a system cannot meet all three frame-size constraints simultaneously (and even if it meets the constraints, no non-preemptive schedule may be feasible). This can be solved by partitioning a job with a large execution time into slices with shorter execution times; in effect, this allows preemption of the large job.

To construct a schedule, we have to make three kinds of design decisions (which cannot be taken independently):
- choose a frame size based on the constraints
- partition jobs into slices
- place slices into frames

Frame-Based Scheduling – Job Slices – Example

Consider T1 = (4, 1), T2 = (5, 2, 7), T3 = (20, 5). The constraints cannot be satisfied: 0. ⇒ f ≥ 5 but 2. ⇒ f ≤ 4. Solve by splitting T3 into T3,1 = (20, 1), T3,2 = (20, 3), and T3,3 = (20, 1) (other splits exist). The result can be scheduled with f = 4.

Frame-Based Scheduling – Job Slices

Assuming that preemption is allowed in arbitrary places, jobs are independent, and there are no resource contentions, there is a (pseudo)polynomial-time algorithm which
- chooses an appropriate frame size f,
- partitions jobs into slices,
- places slices into frames. (whiteboard)

Frame-Based Scheduling – Algorithm

Compute all frame sizes satisfying conditions 1. and 2. (not necessarily 0.). For every such frame size f, construct a network flow graph (where F is the number of frames in one hyperperiod):
- Vertices: a vertex for each job Ji; a vertex for each frame j, where j = 1, ..., F; a source and a sink.
- Edges: from Ji to j of capacity f if Ji can be scheduled in frame j; from the source to Ji of capacity ei; from every frame to the sink of capacity f.

Theorem 2. The maximum flow assigns value ei to every edge from the source to Ji iff the jobs can be partitioned into slices and the slices placed into frames so that the resulting schedule is feasible. The flows assigned to the edges of the form (Ji, j) determine the schedule.

Frame-Based Scheduling – Network Flow Example

The example with T1 = (4, 1), T2 = (5, 2, 7), T3 = (20, 5) and f = 4. Note that T2 has four jobs J2,1, J2,2, J2,3, J2,4 in one hyperperiod. The only job of T3 released in one hyperperiod can be placed into any frame.
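The three frame-size conditions are easy to check mechanically. Below is a minimal Python sketch (helper name is mine; integer frame sizes assumed, as on the slides) that enumerates the candidate frame sizes for a task set given as (period, execution time, relative deadline) triples. For Example 1 above it reports f = 2 as the only valid size:

```python
from math import gcd, lcm

def valid_frame_sizes(tasks):
    """All integer frame sizes f satisfying conditions 0.-2.
    tasks: list of (period, exec_time, rel_deadline) triples."""
    H = lcm(*(p for p, e, D in tasks))
    sizes = []
    for f in range(1, H + 1):
        c0 = all(f >= e for p, e, D in tasks)             # jobs fit in a frame
        c1 = any(p % f == 0 for p, e, D in tasks)         # f divides some period
        c2 = all(2 * f - gcd(p, f) <= D for p, e, D in tasks)  # deadline check
        if c0 and c1 and c2:
            sizes.append(f)
    return sizes

# T1=(4,1.0), T2=(5,1.8), T3=(20,1.0), T4=(20,2.0) with D_i = p_i
print(valid_frame_sizes([(4, 1.0, 4), (5, 1.8, 5),
                         (20, 1.0, 20), (20, 2.0, 20)]))  # [2]
```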
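The flow construction of Theorem 2 can be prototyped with an off-the-shelf max-flow solver. The sketch below uses the networkx library (an assumption; any max-flow implementation works) with my own helper names; "Ji can be scheduled in frame j" is read as "frame j lies wholly between Ji's release time and deadline". It reproduces the example above: the single job of T3 is sliced across frames.

```python
import networkx as nx  # assumed available: pip install networkx

def slice_and_place(jobs, f, H):
    """Feasibility check via the max-flow construction of Theorem 2.
    jobs: list of (name, release, abs_deadline, exec_time)."""
    F = H // f                            # frames per hyperperiod
    G = nx.DiGraph()
    for name, r, d, e in jobs:
        G.add_edge("src", name, capacity=e)        # source -> job, capacity e_i
        for j in range(F):
            # frame j covers [j*f, (j+1)*f); usable iff within [release, deadline]
            if r <= j * f and (j + 1) * f <= d:
                G.add_edge(name, f"frame{j}", capacity=f)
    for j in range(F):
        G.add_edge(f"frame{j}", "sink", capacity=f)    # frame -> sink, capacity f
    value, flow = nx.maximum_flow(G, "src", "sink")
    feasible = value == sum(e for _, _, _, e in jobs)  # all edges src->Ji saturated
    return feasible, flow

# Jobs of T1=(4,1), T2=(5,2,7), T3=(20,5) in one hyperperiod H=20, with f=4
jobs = ([(f"J1,{k+1}", 4 * k, 4 * (k + 1), 1) for k in range(5)] +
        [(f"J2,{k+1}", 5 * k, 5 * k + 7, 2) for k in range(4)] +
        [("J3,1", 0, 20, 5)])
feasible, flow = slice_and_place(jobs, f=4, H=20)
print(feasible)  # True: the flow into the frame vertices determines the slices
```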
Frame-Based Scheduling – Cyclic Executive

Modify the previous table-driven scheduler to be frame-based. The table that drives the scheduler has F entries, where F = H/f; the k-th entry L(k) lists the names of the job slices that are to be scheduled in frame k (L(k) is called a scheduling block). Each job slice is implemented by a procedure.

The cyclic executive, executed by the clock interrupt that signals the start of a frame:
- if an aperiodic job is executing, preempts it; if a periodic job overruns, handles the overrun
- determines the appropriate scheduling block for this frame
- executes the jobs in the scheduling block
- executes jobs from the head of the aperiodic job queue for the remainder of the frame

Less overhead than the pure table-driven cyclic scheduler, since the scheduler is interrupted only on frame boundaries rather than on each job.

Scheduling Aperiodic Jobs

So far, aperiodic jobs have been scheduled in the background, after all jobs with hard deadlines. This may unnecessarily delay aperiodic jobs. Note that there is no advantage in completing periodic jobs early; ideally, periodic jobs finish exactly by their respective deadlines.

Slack stealing:
- the slack time in a frame is the time left in the frame after all (remaining) slices execute
- schedule aperiodic jobs ahead of periodic jobs in the slack time of the periodic jobs
- the cyclic executive keeps track of the slack time left in each frame as the aperiodic jobs execute, and preempts them with periodic jobs when there is no more slack
- as long as there is slack remaining in a frame and the aperiodic job queue is non-empty, the executive executes aperiodic jobs; otherwise it executes periodic jobs

Slack stealing reduces the response time of aperiodic jobs, but requires accurate timers. (A small sketch of the bookkeeping appears below.)

Example

Assume that the aperiodic queue is never empty.

[Figure: two Gantt charts over [0, 24] for T1–T4 and aperiodic jobs: aperiodic jobs at the ends of frames vs slack stealing]

Slack Stealing – cont.

[Figure: releases of aperiodic jobs and executions of periodic jobs under slack stealing (Sl. S.) vs the standard scheme]
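As promised above, here is a hedged Python sketch of the slack-stealing bookkeeping within a single frame (all names are mine; a real executive is interrupt-driven rather than a loop). The idea: the frame's initial slack is f minus the total execution time of its slices; aperiodic work runs first while slack remains, then the periodic slices run.

```python
def run_frame(frame_slices, f, aperiodic_queue, run):
    """One frame of a slack-stealing cyclic executive (simulation sketch).
    frame_slices: list of (name, exec_time) slices scheduled in this frame.
    run(name, t): pretend to execute 'name' for t time units."""
    slack = f - sum(e for _, e in frame_slices)  # initial slack of the frame
    # 1. Steal the slack: serve aperiodic jobs ahead of the periodic slices.
    while slack > 0 and aperiodic_queue:
        name, e = aperiodic_queue[0]
        t = min(e, slack)                  # preempt once the slack is exhausted
        run(name, t)
        slack -= t
        if t == e:
            aperiodic_queue.pop(0)         # aperiodic job finished
        else:
            aperiodic_queue[0] = (name, e - t)
    # 2. Execute the periodic slices of this frame.
    for name, e in frame_slices:
        run(name, e)
    return slack                           # leftover idle time, if any

# Usage: frame of size 4 holding slices worth 3 time units, so 1 unit of slack
log = lambda name, t: print(f"run {name} for {t}")
run_frame([("T1", 1), ("T2", 2)], 4, [("A1", 0.5), ("A2", 2.0)], log)
# run A1 for 0.5 / run A2 for 0.5 / run T1 for 1 / run T2 for 2
```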
Frame-Based Scheduling – Sporadic Jobs

Let us now allow sporadic jobs, i.e. hard real-time jobs whose release and execution times are not known a priori. The scheduler determines whether to accept a sporadic job when it arrives (and its parameters become known):
- Perform an acceptance test to check whether the new sporadic job can be feasibly scheduled with all the jobs (periodic and sporadic) currently in the system. The acceptance check is done at the beginning of the next frame; the scheduler has to keep track of the execution times of the parts of sporadic jobs that have already executed.
- If there is sufficient slack time in the frames before the new job's deadline, the new sporadic job is accepted; otherwise, it is rejected.
- Among themselves, sporadic jobs are scheduled according to EDF; this is optimal for sporadic jobs.

Note: rejection is often better than missing a deadline. E.g. for a robotic arm taking defective parts off a conveyor belt: if the arm cannot meet the deadline, the belt may be slowed down or stopped. (A sketch of the slack-based acceptance test appears at the end of this part.)

Example
- S1(17, 4.5), released at 3 with absolute deadline 17 and execution time 4.5; acceptance test at 4; it must be scheduled in frames 2, 3, 4; the total slack in these frames is 4, so S1 is rejected.
- S2(29, 4), released at 5 with absolute deadline 29 and execution time 4; acceptance test at 8; the total slack in frames 3–7 is 5.5, so S2 is accepted.
- S3(22, 1.5), released at 11 with absolute deadline 22 and execution time 1.5; acceptance test at 12; there are 2 units of slack in frames 4 and 5. As S3 will be executed ahead of the remaining parts of S2 by EDF, we also check whether there will still be enough slack for the remaining parts of S2; S3 is accepted.
- S4(44, 5.0) is rejected (only 4.5 units of slack are left).

Handling Overruns

Overruns may happen due to failures, e.g. unexpectedly large data over which the system operates, hardware failures, etc. Ways to handle overruns:
- Abort the overrunning job at the beginning of the next frame; log the failure; recover later. E.g. the control-law computation of a robust digital controller.
- Preempt the overrunning job and finish it as an aperiodic job. Use this when aborting the job would cause "costly" inconsistencies.
- Let the overrunning job finish; the start of the next frame and the execution of the jobs scheduled for that frame are delayed. This may cause other jobs to be delayed.
Which strategy is appropriate depends on the application.

Clock-Driven Scheduling: Conclusions

Advantages:
- conceptual simplicity
- complex dependencies, communication delays, and resource contention among jobs can be considered when constructing the static schedule
- the entire schedule is stored in a static table
- no concurrency control or synchronization is needed
- easy to validate, test and certify

Disadvantages:
- inflexible: if any parameter changes, the schedule must usually be recomputed; hence best suited for systems which are rarely modified (e.g. controllers)
- parameters of the jobs must be fixed, as opposed to most priority-driven schedulers

Real-Time Scheduling: Scheduling of Reactive Systems – Priority-Driven Scheduling

Current Assumptions

- Single processor.
- A fixed number, n, of independent periodic tasks, i.e. there is no dependency relation among jobs.
- Jobs can be preempted at any time and never suspend themselves.
- No aperiodic and sporadic jobs.
- No resource contentions.

Moreover, unless otherwise stated, we assume that:
- scheduling decisions take place precisely at the release of a job and at the completion of a job (and nowhere else)
- the context-switch overhead is negligibly small, i.e. assumed to be zero
- there is an unlimited number of priority levels

Fixed-Priority vs Dynamic-Priority Algorithms

A priority-driven scheduler is on-line, i.e. it does not precompute a schedule of the tasks. It assigns priorities to jobs after they are released and places the jobs in a ready job queue in priority order, with the highest-priority jobs at the head of the queue. At each scheduling decision time, the scheduler updates the ready job queue and then schedules and executes the job at the head of the queue, i.e. one of the jobs with the highest priority. (A small ready-queue sketch follows below.)

- Fixed-priority = all jobs in a task are assigned the same priority.
- Dynamic-priority = jobs in a task may be assigned different priorities.

Note: in our case, a priority assigned to a job does not change. There are job-level dynamic-priority algorithms that vary the priorities of individual jobs; we won't consider such algorithms.
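As referenced above, a minimal sketch of the slack-based acceptance test (names are mine; the per-frame slack values are hypothetical, chosen only so that their sums match the totals quoted in the example). Note the simplification: a complete test must also re-check the slack of already-accepted sporadic jobs that the new job would overtake under EDF, as in the S3 case.

```python
# Hypothetical remaining slack per frame; only the sums (frames 2-4: 4.0,
# frames 3-7: 5.5) are given in the lecture's example
slack = {1: 0.5, 2: 1.0, 3: 1.5, 4: 1.5, 5: 1.0, 6: 1.0, 7: 0.5}

def accept(slack, first_frame, last_frame, exec_time):
    """Accept a new sporadic job iff the frames before its deadline
    hold enough slack. (Re-checking previously accepted sporadic jobs,
    cf. S3 above, is omitted in this sketch.)"""
    available = sum(slack[j] for j in range(first_frame, last_frame + 1))
    return available >= exec_time

print(accept(slack, 2, 4, 4.5))  # False: S1 rejected (4.0 < 4.5)
print(accept(slack, 3, 7, 4.0))  # True:  S2 accepted (5.5 >= 4.0)
```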
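The ready job queue of a priority-driven scheduler is naturally a priority queue. A minimal sketch (my own naming) using Python's heapq: jobs are pushed with their priority as the key when released, and the scheduler always executes the head at each event.

```python
import heapq

class ReadyQueue:
    """Ready job queue ordered by priority (smaller key = higher priority)."""
    def __init__(self):
        self._heap = []
        self._seq = 0   # tie-breaker: keep insertion order among equal keys

    def release(self, priority, job):
        heapq.heappush(self._heap, (priority, self._seq, job))
        self._seq += 1

    def head(self):
        """The job to execute: one of the highest-priority ready jobs."""
        return self._heap[0][2] if self._heap else None

    def complete(self):
        return heapq.heappop(self._heap)[2]

# E.g. under rate-monotonic scheduling (next slide), the key could be
# the task period: the shortest-period task's job sits at the head.
q = ReadyQueue()
q.release(4, "J1,1"); q.release(5, "J2,1"); q.release(20, "J3,1")
print(q.head())  # J1,1
```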
Fixed-Priority Algorithms – Rate Monotonic

The best-known fixed-priority algorithm is rate monotonic (RM) scheduling, which assigns priorities to tasks based on their periods: the shorter the period, the higher the priority. The rate is the inverse of the period, so jobs of a task with a higher rate have higher priority. RM is very widely studied and used.

Example 3. T1 = (4, 1), T2 = (5, 2), T3 = (20, 5), with rates 1/4, 1/5, 1/20, respectively. The priorities are T1 > T2 > T3.

[Figure: RM schedule of T1, T2, T3 over [0, 20]]

Fixed-Priority Algorithms – Deadline Monotonic

The deadline monotonic (DM) algorithm assigns priorities to tasks based on their relative deadlines: the shorter the deadline, the higher the priority.

Observation: when the relative deadline of every task matches its period, RM and DM give the same results.

Proposition 1. When the relative deadlines are arbitrary, DM can sometimes produce a feasible schedule in cases where RM cannot.

Rate Monotonic vs Deadline Monotonic

T1 = (50, 50, 25, 100), T2 = (0, 62.5, 10, 20), T3 = (0, 125, 25, 50)

DM is optimal here (with priorities T2 > T3 > T1):
[Figure: DM schedule over [0, 250]; all deadlines are met]

RM is not (with priorities T1 > T2 > T3):
[Figure: RM schedule over [0, 250]; T2 misses a deadline]

(A sketch of the two priority assignments in code appears at the end of this part.)

Dynamic-Priority Algorithms

The best known is earliest deadline first (EDF), which assigns priorities based on the current (absolute) deadlines: at the time of a scheduling decision, the job queue is ordered by earliest deadline.

Another one is least slack time (LST): the job queue is ordered by least slack time. Recall that the slack time of a job Ji at time t is equal to di − t − x, where x is the remaining computation time of Ji at time t.

Comments:
- There is also a strict LST, which reassigns priorities to jobs whenever their slacks change relative to each other; we won't consider it.
- Standard "non-real-time" algorithms such as FIFO and LIFO are also dynamic-priority algorithms.
- We focus on EDF here and leave some LST for homework.

EDF – Example

T1 = (2, 1) and T2 = (5, 2.5)

[Figure: EDF schedule of T1 and T2 over [0, 10]]

Note that the processor is 100% "utilized", which is not surprising :-) (An EDF simulation of this example is sketched below.)

Summary of Algorithms

In what follows we consider:
- dynamic-priority algorithms: EDF
- fixed-priority algorithms: RM and DM

We consider the following questions:
- Are the algorithms optimal?
- How can we efficiently (or even online) test for schedulability?
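As referenced in the RM vs DM comparison, the two priority orders are just two different sort keys. A minimal sketch (task tuples as on the slides; names are mine) reproducing the orders used above:

```python
# Tasks as (name, phase, period, exec_time, rel_deadline)
tasks = [("T1", 50, 50, 25, 100),
         ("T2", 0, 62.5, 10, 20),
         ("T3", 0, 125, 25, 50)]

rm = sorted(tasks, key=lambda t: t[2])  # shorter period   = higher priority
dm = sorted(tasks, key=lambda t: t[4])  # shorter deadline = higher priority
print([t[0] for t in rm])  # ['T1', 'T2', 'T3']
print([t[0] for t in dm])  # ['T2', 'T3', 'T1']
```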
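And the promised EDF sketch: a hedged discrete-time simulation (my own code; in-phase tasks with Di = pi assumed, time step 0.5 to handle the execution time 2.5) confirming that T1 = (2, 1) and T2 = (5, 2.5) are feasible under EDF at 100% utilization.

```python
def edf_simulate(tasks, horizon, dt=0.5):
    """Discrete-time EDF simulation (single processor, preemptive).
    tasks: list of (period, exec_time); phase 0 and deadline = period."""
    # All jobs released in the horizon, as mutable [deadline, release, remaining]
    jobs = [[r + p, r, e] for p, e in tasks
            for r in (k * p for k in range(int(horizon / p)))]
    t = 0.0
    while t < horizon:
        if any(j[2] > 1e-9 and j[0] <= t for j in jobs):
            return False                    # a job is unfinished past its deadline
        ready = [j for j in jobs if j[1] <= t and j[2] > 1e-9]
        if ready:
            min(ready)[2] -= dt             # earliest absolute deadline runs a step
        t += dt
    return all(j[2] <= 1e-9 for j in jobs)  # every job completed in time

# Utilization 1/2 + 2.5/5 = 1: still schedulable by EDF over H = 10
print(edf_simulate([(2, 1), (5, 2.5)], horizon=10))  # True
```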