Slack Computation for DVS Algorithms in Fixed-Priority Real-Time Systems Using Fluid Slack Analysis
Article code | Publication year | English article length
---|---|---
7270 | 2011 | 16 pages (PDF)
Publisher: Elsevier - Science Direct
Journal: Journal of Systems Architecture, Volume 57, Issue 9, October 2011, Pages 850–865
English Abstract
This work presents a scheduling algorithm to reduce the energy of hard real-time tasks with fixed priorities assigned under a rate-monotonic policy. The workload considered consists of sets of independent tasks running periodically on a processor with dynamic voltage scaling (DVS). The proposed on-line approach can cooperate with many slack-time analysis methods based on low-power work-demand analysis (lpWDA) without increasing the computational complexity of the DVS algorithms. The approach introduces a novel technique called low-power fluid slack analysis (lpFSA) that extends the analysis interval produced by its cooperative methods and computes the available slack in the extended interval. lpFSA regards the additional slack as fluid and computes its length so that it can be moved to the current job. The proposed approach therefore provides the cooperative methods with additional slack. Experimental results show that the proposed approach combined with lpWDA-based algorithms achieves greater energy reductions than the initial algorithms alone.
English Introduction
Dynamic voltage scaling (DVS) is a standard technique for managing the power consumption of systems [21]. A DVS processor can vary its operating frequency and voltage at runtime to exploit the quadratic relationship between energy consumption and supply voltage in CMOS technology. In recent years, computation and communication have moved steadily toward mobile and portable devices with limited power supplies. Therefore, many major IC producers have developed modern processors with DVS capability, including Intel's XScale® [10], the mobile Athlon® by AMD [1], and Samsung's Cortex® [19].

Many studies have focused on scheduling real-time applications on DVS processors [2], [5], [6], [8], [9], [11], [12], [17], [18] and [20]. The developed approaches differ in many aspects, such as on-line versus off-line scheduling algorithms, handling discrete or continuous voltage levels, assuming the average-case execution time (ACET), best-case execution time (BCET), or worst-case execution time (WCET) of each task, allowing intra-task or inter-task voltage transitions, and assuming fixed or dynamic priority assignment. However, these approaches share a common objective and encounter the same difficulties. Because reducing the supply voltage decreases the maximum achievable clock speed [15], most DVS algorithms for real-time systems dynamically reduce the supply voltage to the lowest possible level while satisfying the soft or hard timing constraints of the system. To satisfy the timing constraints of real-time tasks, a DVS technique can utilize slack time when adjusting voltage levels. Consequently, the energy efficiency of a DVS algorithm depends markedly on how accurately the available slack is computed.

Many previous studies have investigated slack-time analysis [5], [8], [11], [12], [14] and [17] while assuming a feasible schedule. Pillai and Shin [17] proposed a cycle-conserving rate-monotonic (ccRM) scheduling scheme that contains off-line and on-line algorithms. The off-line algorithm computes the worst-case response time of each task and derives the maximum speed needed to meet all task deadlines. The on-line algorithm recomputes the utilization by comparing the actual time used by completed tasks with the WCET schedule; in other words, when a task completes early, the processor cycles actually used are compared against a pre-computed worst-case execution time schedule. This WCET schedule is also called the canonical schedule [2], whose length can be the least common multiple of the task periods. ccRM is a conservative method, as it only considers possible slack time before the next task arrival (NTA) of the current job.

Gruian proposed a DVS method with off-line task stretching and on-line slack distribution [8]. The off-line part of this method consists of two separate techniques. The first focuses on intra-task stochastic voltage scheduling, which employs a probability function of task execution length. The second computes stretching factors using response-time analysis. It is similar to Pillai and Shin's off-line technique, but instead of adopting a single stretching factor for all tasks before NTA, Gruian assigns a different stretching factor to each individual task within the longest task period.
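Both ccRM's off-line step and Gruian's stretching factors rely on fixed-priority response-time analysis to decide how far a schedule can be slowed down. The following is a minimal sketch of that idea, assuming implicit deadlines (deadline equals period), a continuously scalable normalized speed, and a coarse search step; the function names and the search procedure are illustrative and do not reproduce the exact formulations in [17] or [8].

```python
from math import ceil, inf

def response_time(tasks, i, speed):
    """Worst-case response time of tasks[i] when every WCET is stretched by
    1/speed. tasks = [(wcet, period), ...], sorted highest priority first
    (shortest period first under RM). Returns inf on a deadline miss."""
    c_i, t_i = tasks[i]
    r = c_i / speed
    while r <= t_i:
        # interference from every higher-priority task released in [0, r)
        r_next = c_i / speed + sum(ceil(r / t_j) * (c_j / speed)
                                   for c_j, t_j in tasks[:i])
        if r_next == r:  # fixed point reached
            return r
        r = r_next
    return inf

def lowest_uniform_speed(tasks, step=0.05):
    """Coarse linear search for the lowest normalized speed at which every
    task still meets its deadline under RM priorities (illustrative only)."""
    speed = step
    while speed < 1.0:
        if all(response_time(tasks, i, speed) <= period
               for i, (_, period) in enumerate(tasks)):
            return speed
        speed = round(speed + step, 10)
    return 1.0

# Hypothetical task set: (WCET at full speed, period), rate-monotonic order
task_set = [(1.0, 5.0), (1.0, 8.0), (2.0, 20.0)]
print(lowest_uniform_speed(task_set))
```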
Kim et al. [12] proposed a greedy on-line algorithm called low-power work-demand analysis (lpWDA) that derives slack from low-priority tasks, as opposed to the methods in [8] and [17], which gain slack time from high-priority tasks. The algorithm also balances the gap in voltage levels between high-priority and low-priority tasks. Its analysis interval, bounded by the longest task period, is longer than NTA; thus, lpWDA achieves greater energy savings than previous rate-monotonic (RM) DVS schemes that rely on NTA.

Several other slack-time analysis methods take additional assumptions into account [5], [9], [14] and [16]. Kim et al. proposed a preemption-aware DVS algorithm based on lpWDA, composed of accelerated-completion (lpWDA-AC) and delayed-preemption (lpWDA-DP) techniques, to decrease the number of preemptions in DVS schedules [14]. lpWDA-AC attempts to avoid preemption by adjusting the voltage/clock speed so that it is higher than the lowest possible values computed by lpWDA. lpWDA-DP postpones preemption points by delaying an activated high-priority task as long as possible while guaranteeing a feasible task schedule. When context-switching overhead is taken into account, both techniques reduce energy consumption more than the original ccRM and lpWDA techniques.

Mochocki et al. [16] proposed a transition-aware DVS algorithm that decreases the number of voltage/speed adjustments, called low-power limited-demand analysis with transition overhead (lpLDAT), which accounts for both the time and the energy overhead of transitions. The algorithm computes an efficient speed level based on the average-case workload and uses this speed as a limiter: if the limiter is higher than the speed predicted by lpWDA, lpLDAT concludes that lpWDA is being too aggressive and applies the limiter to the present schedule. When transition overhead is taken into account, this slack-time analysis technique also saves considerable energy compared with the previous methods.
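The limiter in lpLDAT can be pictured as a floor on the slack-based speed: a speed estimate derived from the average-case workload prevents an overly aggressive slowdown that would later force costly transitions. The snippet below is a simplified sketch of that interplay; the utilization-style limiter and the function names are assumptions, not Mochocki et al.'s actual computation.

```python
def average_case_limiter(acet_period_pairs):
    """Illustrative limiter: the normalized speed needed, on average, to carry
    the average-case workload (sum of ACET_i / T_i). Not lpLDAT's formula."""
    return min(1.0, sum(acet / period for acet, period in acet_period_pairs))

def limited_speed(lpwda_speed, limiter, s_min=0.1, s_max=1.0):
    """If the slack-based speed drops below the limiter, the limiter wins,
    mirroring the 'lpWDA is being too aggressive' case described above."""
    return min(s_max, max(lpwda_speed, limiter, s_min))

# Hypothetical (ACET, period) pairs; here the limiter overrides an overly
# aggressive slack-based speed of 0.15
acet_pairs = [(0.4, 5.0), (0.5, 8.0), (0.8, 20.0)]
print(limited_speed(lpwda_speed=0.15, limiter=average_case_limiter(acet_pairs)))
```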
He and Jia [9] developed a fixed-priority scheduling with preemption threshold (FPPT) scheme that eliminates unnecessary context switches, thereby saving energy. FPPT assigns each task a pair consisting of a predefined priority and a corresponding preemption threshold. He and Jia applied a novel algorithm to compute static slowdown factors by formulating the problem as a linear optimization problem. In addition, they considered the energy consumption of a task set under different preemption threshold assignments.

Chen and Hsu [5] proposed a tree structure corresponding to a set of pinwheel tasks [15]. They also proposed a DVS algorithm, the low-power job-contiguous real-time scheduling (lpJCRT) algorithm, which manipulates the tree structure to distribute the available slack evenly among the other tasks.

Experimental results reported by Kim et al. [12] indicate that recent DVS algorithms for fixed-priority real-time tasks are less efficient than those for dynamic-priority tasks, leaving considerable room for improvement in fixed-priority DVS methods. The main reason for the energy inefficiency of RM DVS scheduling is that priority-based slack-stealing methods do not work as efficiently in RM schedules as they do in earliest-deadline-first (EDF) scheduling [12]. In EDF schedules, by contrast, high-priority tasks act as efficient slack distributors because their slack can be fully utilized by tasks starting before NTA. Therefore, the energy savings achieved by EDF scheduling algorithms, such as ccEDF [17], DRA, and AGR [2], are close to the theoretical lower bound [13].

This work improves the on-line slack computation capability of RM DVS algorithms. Building on existing RM DVS algorithms, we propose an on-line slack-time computation scheme based on fluid slack analysis, which computes the length of potential slack in an interval longer than the longest task period. The proposed method does not need to compute or simulate stochastic data, which vary across applications. With a slight modification, lpFSA can be applied to many RM DVS scheduling schemes with various assumptions, including transition and preemption criteria. Moreover, the proposed algorithm has a time complexity of O(n), where n is the number of tasks; therefore, it does not increase the computational complexity of the existing on-line DVS algorithms (a conceptual sketch of this cooperation is given at the end of this section). Experimental results indicate that existing RM DVS algorithms combined with the proposed method can reduce energy consumption by 5–25% compared with the initial algorithms, such as lpWDA and lpLDAT.

The remainder of this paper is organized as follows. Section 2 explains the motivation of this work. Section 3 presents the basic idea of fluid slack analysis. Section 4 describes the proposed technique and algorithm. Section 5 provides theorems that prove the schedulability of lpFSA as well as lpWDA. Section 6 presents the performance evaluation. Section 7 gives conclusions and directions for future work.
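As a closing illustration, the cooperation outlined above can be sketched as follows: at each scheduling point the host method (for example, an lpWDA-style analysis) supplies its slack estimate, an additional fluid-slack term for the extended interval is added on top, and the current job is stretched over the combined slack. The interface and the stretching formula below are generic placeholders, not the paper's actual lpFSA algorithm.

```python
def job_speed(wcet_left, host_slack, fluid_slack, s_min=0.1, s_max=1.0):
    """Stretch the current job's remaining worst-case work over that work plus
    all slack granted to it. Both slack terms are in time units at full speed.
    Hypothetical interface; not the paper's actual lpFSA computation."""
    if wcet_left <= 0.0:
        return s_min
    total_slack = max(0.0, host_slack) + max(0.0, fluid_slack)
    return min(s_max, max(s_min, wcet_left / (wcet_left + total_slack)))

# Example: 2 ms of worst-case work left, 1 ms of slack from the host method,
# plus 1 ms of additional "fluid" slack moved in from the extended interval
print(job_speed(wcet_left=2.0, host_slack=1.0, fluid_slack=1.0))  # -> 0.5
```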
English Conclusion
In this paper, we proposed lpFSA, a novel on-line slack estimation algorithm based on the concept of fluid slack analysis. It is the first method of its class for which many existing RM DVS methods can serve as host algorithms. Cooperating with these host algorithms, lpFSA can further decrease energy consumption without increasing their time complexity. Experimental results indicate that lpFSA can reduce overall energy consumption by up to 25% compared with the initial schemes. Several directions are worth pursuing in future work. Although this work focused on RM scheduling, the proposed fluid slack analysis can be applied to other scheduling policies, such as earliest-deadline first (EDF) and EDF* [2]. Additionally, the existence of STOs in lpFSA hampers the transmission of slack within an analysis interval; future work will try to prevent STOs by relaxing job release times, thereby increasing the available slack. Finally, the off-line slack-stretching technique proposed by Gruian [8] benefits the slack prediction of lpFSA; future work will also reduce the computational complexity of that technique and combine it with lpFSA.