We consider the use of lock-free techniques for implementing shared objects in real-time Pfair-scheduled multiprocessor systems. Lock-free objects are more economical than locking techniques when implementing relatively simple objects such as buffers, stacks, queues, and lists. However, the use of such objects on real-time multiprocessors is generally considered impractical due to the need for analytical real-time guarantees. In this paper, we explain how the quantum-based nature of Pfair scheduling enables the effective use of such objects on real-time multiprocessors and present analysis specific to Pfair-scheduled systems. In addition, we show that analytical improvements can be obtained by using such objects in conjunction with group-based scheduling techniques. In this approach, a group of tasks is scheduled as a single entity (called a supertask in the Pfair literature). Such grouping prevents tasks from executing simultaneously, and hence from executing in parallel. Consequently, grouping tasks can improve the worst-case scenario with respect to object contention. Grouping also enables the use of less costly uniprocessor algorithms when all tasks sharing an object reside within the same group. We illustrate these optimizations with a case study that focuses on shared queues. Finally, we present and experimentally evaluate a simple heuristic for grouping tasks in order to reduce object contention. Though the analysis presented herein focuses specifically on Pfair-scheduled systems, the observations and techniques should be applicable to other quantum-scheduled systems as well.
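To make the retry-based mechanism discussed above concrete, the sketch below shows a lock-free update of a simple shared object (a stack) using a compare-and-swap loop. The type and variable names are illustrative assumptions, not code from the paper; the point is only that an interfered operation retries rather than blocking, so its cost is governed by a worst-case retry bound instead of lock-holding times.

```c
#include <stdatomic.h>

/* Illustrative node type for a simple lock-free shared stack
 * (names are assumptions, not taken from the paper). */
struct node {
    struct node *next;
    int value;
};

static _Atomic(struct node *) top;

/* Lock-free push: build the new state optimistically, then attempt to
 * install it with a single compare-and-swap.  If another task changed
 * 'top' in the meantime, the operation is retried; no task ever blocks
 * while holding a lock. */
void push(struct node *n)
{
    struct node *old = atomic_load(&top);
    do {
        n->next = old;
        /* On failure, 'old' is refreshed with the current value of 'top'. */
    } while (!atomic_compare_exchange_weak(&top, &old, n));
}
```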
There has been much recent work on scheduling techniques that ensure fairness, temporal isolation, and timeliness among real-time tasks multiplexed on a set of processors. Real-time tasks differ from conventional processes in that they must satisfy timing constraints in order to ensure correct operation. The most common form of constraint is the use of per-job deadlines. Under task models that use this type of constraint, each task is invoked repeatedly, and each such invocation is called a job of the task. Furthermore, each job must complete execution within a fixed amount of time, called its relative deadline. Real-time systems are often classified by their strictness. Systems in which it is desirable, but not necessary, to meet all constraints are called soft real-time systems. For example, a video decoder usually has soft real-time constraints: timing violations may be acceptable, provided they occur rarely. Techniques for verifying the correctness of such systems vary significantly, often depending on their intended use. Systems in which it is unacceptable to violate any constraints are called hard real-time systems. For instance, control systems used to coordinate the operation of mechanical parts on assembly lines often require strict constraints. For hard real-time systems, constraints are guaranteed by subjecting the systems to worst-case analysis. Worst-case analysis consists of analyzing scenarios that are conservatively worse than any situation that can actually arise under normal operation, i.e., if constraints cannot be violated under a worst-case scenario, then they cannot possibly be violated under any realistic scenario. For instance, one common technique used to construct a worst-case scenario is to assume that each job in the system consumes the maximum amount of processor time possible, called the
job’s worst-case execution time. In the real system, it is highly unlikely, if not impossible, that all jobs will exhibit this property. However, such an assumption guarantees that
the constructed scenario is a conservative approximation of any real scenario. Due to the practical differences between conventional and hard real-time systems, the philosophies underlying the designs of these systems are also very different. In a conventional system, average-case performance is paramount, and the quality of a system is typically determined through measurement. The use of heuristics at runtime is common and designs are often very complex. Hard real-time systems, on the other hand, require predictability most of all. Complex
designs and the online use of heuristics both reduce predictability, which results in overly conservative worst-case scenarios. Such overestimation is undesirable because it leads to chronically underutilized resources. In addition, the quality of a system is determined analytically rather than empirically, i.e., by determining how far the system can be stressed before timing constraints can no longer be guaranteed. Empirical testing is almost always insufficient for this purpose due to the need to focus on worst-case situations.
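As a small illustration of the task model and worst-case reasoning described above, the following sketch assumes periodic tasks whose relative deadlines equal their periods and applies a conservative, utilization-based feasibility check. The type and function names are illustrative, and the particular bound used (total utilization at most the number of processors, the standard Pfair feasibility condition) is an assumption carried over from the Pfair literature rather than something derived in this section.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative periodic task descriptor (names are assumptions, not
 * from the paper): each job requires at most 'wcet' quanta and must
 * complete within 'period' quanta of its release. */
struct task {
    unsigned int wcet;    /* worst-case execution time, in quanta */
    unsigned int period;  /* period = relative deadline, in quanta */
};

/* Conservative worst-case check: every job is assumed to consume its
 * full worst-case execution time.  Under Pfair scheduling, a periodic
 * task set is feasible on 'cpus' processors iff its total utilization
 * does not exceed 'cpus'. */
bool feasible(const struct task *ts, size_t n, unsigned int cpus)
{
    double util = 0.0;
    for (size_t i = 0; i < n; i++)
        util += (double)ts[i].wcet / ts[i].period;
    return util <= (double)cpus;
}
```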
In this paper, we considered the use of lock-free objects as an alternative to traditional semaphore-based locking in Pfair-scheduled real-time multiprocessor systems. Due to
the difficulties involved in predicting the impact of remote interference, the use of lock-free objects has generally been considered impractical in real-time multiprocessor systems. However, we demonstrated that the use of quantum-based scheduling provides a sufficient increase in predictability to enable the use of lock-free objects in such systems. We illustrated
this point by presenting analysis specific to Pfair-scheduled systems. We also demonstrated how lock-free overheads can be reduced by using group-based scheduling techniques, such as supertasking. Finally, we presented a simple heuristic for assigning tasks to supertasks and experimentally evaluated its effectiveness. Although the results presented herein apply only to Pfair scheduling, the ideas underlying these results are easily applied to other quantum-based
multiprocessor systems as well. Related topics, such as supertasking and lock-based synchronization, are discussed in [15], which also provides more detailed coverage of the material presented here.