Efficient overloading techniques for primary-backup scheduling in real-time systems
|Article Code||Publication Year||English Article||Persian Translation||Word Count|
|7210||2004||20-page PDF||Available to order||12,900 words|
Publisher: Elsevier - ScienceDirect
Journal: Journal of Parallel and Distributed Computing, Volume 64, Issue 5, May 2004, Pages 629–648
In real-time systems, tasks have deadlines that must be met despite the presence of faults. The Primary-Backup (PB) scheme is one of the most important schemes employed for fault-tolerant scheduling of real-time tasks: each task has two versions, and the versions are scheduled on two different processors with time exclusion. Several techniques have been proposed for improving the schedulability of PB-based scheduling, of which Backup-Backup (BB) overloading is among the most popular. In this technique, two or more backups can share/overlap in time on a processor. In this paper, we propose two new techniques that accommodate more tasks and/or tolerate faults effectively. In the first technique, called dynamic grouping, the processors are dynamically grouped into logical groups in order to achieve efficient overloading of tasks on resources, thereby improving the schedulability and the reliability of the system. In the second technique, called PB overloading, the primary of a task can share/overlap in time with the backup of another task on a processor. The intuition is that, for a primary (or backup), PB-overloading can assign an earlier start time than BB-overloading, thereby increasing the schedulability. We conduct schedulability and reliability analysis of the proposed techniques through simulation and analytical studies. Our studies show that dynamic grouping improves the schedulability more than static grouping, and offers graceful degradation with increasing faults. Also, PB-overloading improves the schedulability more than BB-overloading, and offers reliability comparable to that of BB-overloading. The proposed techniques are generic in that they can be incorporated into many fault-tolerant non-preemptive scheduling algorithms.
In real-time systems, the correctness of the results depends not only on their logical correctness, but also on the time at which the results are produced. Multiprocessor- and multicomputer-based systems have emerged as a powerful computing means for real-time applications such as avionic control and nuclear plant control, because of their capability for high performance and reliability. The problem of scheduling real-time tasks in such systems is to determine when and on which processor a given task is executed. The allocation and scheduling can be performed either statically or dynamically. In static algorithms, the assignment of tasks to processors and the times at which the tasks start execution are determined a priori. Static algorithms are often used to schedule periodic tasks and are not applicable to aperiodic tasks whose arrival times and deadlines are not known a priori. Scheduling such tasks requires a dynamic scheduling algorithm [14,21]. In dynamic scheduling, when a new set of tasks arrives at the system, the scheduler determines the feasibility of scheduling these new tasks without jeopardizing the guarantees that have been provided for the previously scheduled tasks. For predictable execution, schedulability analysis must be done before task execution begins. For feasibility analysis, the tasks' worst-case computation times must be taken into account. A feasible schedule is generated if the timing and other requirements of the tasks can be satisfied, i.e., if the schedulability analysis is successful. Tasks are dispatched according to this feasible schedule. Due to the critical nature of tasks in a hard real-time system, it is essential that every task admitted into the system completes its execution even in the presence of faults. Therefore, fault-tolerance is an important requirement in such systems [5,8]. Scheduling multiple versions of tasks on different processors provides fault-tolerance [6,7,9–13,15,17,18,23,25,26,28].
Overload handling techniques to deal with timing faults are proposed in [16,22]. One of the approaches used for fault-tolerant scheduling of real-time tasks is the primary-backup (PB) model, in which two versions of a task are scheduled on two different processors and an acceptance test is used to check the correctness of the execution result [7,10,15,23]. The backup version is executed only if the output of the primary version fails the acceptance test; otherwise it is deallocated from the schedule. In , processor failures are handled by maintaining backup or contingency schedules. The backup schedules are generated assuming that an (optimal) schedule exists, which is then enhanced with the addition of "ghost" tasks that function as backup tasks. PB-based fault-tolerant scheduling has been enhanced through various techniques such as adaptive scheduling and overloading. In adaptive fault-tolerant scheduling, two types of adaptiveness are reported in the literature: (1) scheduling algorithms that allow primary and backup versions to overlap in execution and control the overlap interval in a continuous manner based on the fault rate observed in the system; (2) scheduling algorithms that exploit the flexibility provided by the tasks in terms of selecting the suitable fault-tolerant scheme, at run-time, based on the current state of the system. The adaptive scheduler reported in  belongs to the latter category, which selects one of the Triple Modular Redundancy (TMR), PB, or Primary-Exception (PE) schemes, at run-time, based on the system state. In the context of fault-tolerant scheduling, the term "overloading" refers to scheduling more than one task (version) on the same/overlapping time interval on a processor. Fault-tolerant scheduling algorithms [6,7,15] have employed overloading techniques as a means to conserve system resources.
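The difference between the two overloading policies can be illustrated with a minimal sketch. This is not the paper's algorithm; the reservation layout, the time horizon, and all numbers are hypothetical. It only shows why allowing a primary to overlap another task's backup (PB-overloading) can yield an earlier start time than allowing only backup/backup sharing (BB-overloading):

```python
# Illustrative sketch (hypothetical, not the paper's scheduler): find the
# earliest start time a new version can get on one processor, given existing
# reservations, under BB-overloading vs. PB-overloading.

def earliest_start(reservations, duration, kind, policy, horizon=100):
    """reservations: list of (start, end, kind) already on this processor.
    kind is "primary" or "backup". A reservation may be overlapped only if
    the policy allows that pair of version kinds to share the interval."""
    def may_overlap(a, b):
        if policy == "BB":              # only backup/backup sharing
            return a == "backup" and b == "backup"
        if policy == "PB":              # primary/backup sharing also allowed
            return "backup" in (a, b)
        return False

    for t in range(horizon):
        conflict = False
        for (s, e, k) in reservations:
            # intervals [t, t+duration) and [s, e) intersect?
            if t < e and s < t + duration and not may_overlap(kind, k):
                conflict = True
                break
        if not conflict:
            return t
    return None  # no feasible start within the horizon

# One backup already reserved at [0, 5); schedule a new primary of length 3.
existing = [(0, 5, "backup")]
print(earliest_start(existing, 3, "primary", "BB"))  # 5: must wait out the backup
print(earliest_start(existing, 3, "primary", "PB"))  # 0: may overlap the backup
```

An earlier start time for the primary leaves more slack before the task's deadline, which is the intuition behind the schedulability gain claimed for PB-overloading.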
Conclusion
In this paper, we have proposed two new techniques to be used in a primary-backup based fault-tolerant dynamic scheduling algorithm for multiprocessor real-time systems. The first technique is called dynamic grouping, in which the processors are dynamically grouped into logical groups in order to achieve efficient overloading of tasks, thereby improving the schedulability and the reliability of the system. The second technique is called primary-backup (PB) overloading, in which the primary of a task can be scheduled onto the same or overlapping time interval as the backup of another task on a processor in order to increase system utilization, thereby improving the schedulability. The proposed techniques can be incorporated into any dynamic non-preemptive scheduling algorithm. For our simulation studies, we have incorporated the proposed techniques into the Spring scheduling algorithm, a well-known dynamic scheduling algorithm. Our simulation and analytical studies show that the proposed dynamic grouping technique offers a significantly better guarantee ratio (15% gain) than static grouping, and offers graceful degradation in performance (guarantee ratio) as the number of faults increases, under all the interesting conditions that we have simulated. The proposed primary-backup overloading offers better schedulability (25% gain) than BB-overloading under all the interesting conditions that we have simulated. We have also shown that the TTSFPB (a reliability metric) of PB-overloading is upper bounded by twice that of BB-overloading, which is a much smaller value than the MTTF of the system. Hence, PB-overloading is a more effective technique than BB-overloading for many practical situations. Future work includes extending the proposed and existing overloading techniques to periodic tasks with preemptive scheduling.