Mapping workflow applications with types onto heterogeneous specialized platforms
|Article code||Publication year||English article||Persian translation||Word count|
|21887||2011||18-page PDF||Order it||Not computed|
Publisher : Elsevier - Science Direct
Journal : Parallel Computing, Volume 37, Issue 8, August 2011, Pages 410–427
In this paper, we study the problem of optimizing the throughput of coarse-grain workflow applications in which each task of the workflow is of a given type and subject to failures. The goal is to map such an application onto a heterogeneous specialized platform, which consists of a set of processors that can each be specialized to process one type of task. The objective function is to maximize the throughput of the workflow, i.e., the rate at which data sets can enter the system. If there is exactly one task per processor in the mapping, then we prove that the optimal solution can be computed in polynomial time. However, the problem becomes NP-hard if several tasks can be assigned to the same processor. Several polynomial-time heuristics are presented for the most realistic specialized setting, in which tasks of the same type can be mapped onto the same processor, but a processor cannot process two tasks of different types. We also give an integer linear program formulation of this problem, which allows us to find the optimal solution (in exponential time) for small problem instances. Experimental results show that the best heuristics obtain a good throughput, much better than the throughput obtained with a random mapping. Moreover, we obtain a throughput close to the optimal solution in the particular cases in which the optimal throughput can be computed (small problem instances or particular mappings).
Mapping applications onto parallel platforms is a difficult challenge. Several scheduling and load-balancing techniques have been developed for homogeneous architectures (see  for a survey), but the advent of heterogeneous platforms, such as grid platforms or even clouds, has made the mapping problem even more difficult. In this paper we deal with scheduling and mapping strategies for coarse-grain workflow applications in which each task of the workflow is of a given type and subject to failures. In such applications, a series of data sets enters the workflow and progresses from task to task until the final result is computed. The type of a task determines its computation requirements, and the failure rate of a task determines the amount of extra computation to be done when failures occur. After an initialization delay, a new data set is completed and exits the workflow every period. The period is therefore defined as the longest cycle-time to operate a task, and it is the inverse of the achievable throughput. We target coarse-grain applications and platforms in which the cost of communication is negligible compared to the cost of computation. The scheduling problem is to map such an application onto a target platform with the objective of minimizing the period or, equivalently, maximizing the throughput. Each task of the workflow must be mapped onto a processor, and the target platform is heterogeneous. Moreover, each processor can be specialized to process one type of task. We consider several mapping rules: we require the mapping to be one-to-one (a processor is assigned at most one task), or specialized (a processor is assigned a set of tasks of the same type), or fully general.
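The period of a given mapping can be made concrete with a small sketch. The code below is an illustration, not the paper's algorithm: it assumes a simple geometric re-execution model in which a task of work w_i and failure rate f_i requires an expected w_i / (1 - f_i) units of work, and a processor of speed s_u processes its assigned tasks in sequence; the function names and dictionary-based inputs are hypothetical.

```python
def period(mapping, work, fail, speed):
    """Longest cycle-time over all processors.

    mapping: task -> processor; work, fail: per-task dicts;
    speed: per-processor dict.
    Assumed failure model: expected work of a task is
    work[t] / (1 - fail[t]) (geometric re-execution).
    """
    load = {}
    for task, proc in mapping.items():
        expected = work[task] / (1.0 - fail[task])  # includes re-executions
        load[proc] = load.get(proc, 0.0) + expected
    # The period is the longest cycle-time over all used processors.
    return max(load[p] / speed[p] for p in load)


def throughput(mapping, work, fail, speed):
    """Throughput is the inverse of the period."""
    return 1.0 / period(mapping, work, fail, speed)
```

For instance, two tasks of work 2 on a speed-2 processor (one of them failing half the time, hence expected work 4) and one task of work 3 on a speed-1 processor give a period of max((2 + 4)/2, 3/1) = 3, i.e. a throughput of 1/3.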
On the theoretical side, our main contributions are (i) a polynomial-time algorithm to find the one-to-one mapping which minimizes the period, (ii) the proof that the problem becomes NP-hard for specialized and general mappings, and (iii) an integer linear programming formulation of the specialized mapping problem. In practice, the specialized mapping problem has many applications, not only in the field of distributed computing and workflows, but also in production systems. In the distributed computing context, an example of a workflow application would be processing images in batches on a SaaS (Software as a Service) platform. For such an application, the failure rate may be impacted by the complexity of the service . Another field of application is the important problem of sequencing pipelined queries on a set of web services, as presented in . Each service does not provide the same amount of data on the output as on the input, because of the selectivity of the services; this problem is quite similar to our problem of mapping unreliable tasks, where service selectivities correspond to task failure rates. We further consider typed services. On the production side, a relevant use case is a micro-factory  composed of several cells that provide functions such as assembly or machining. Because of the sizes of the pieces, actuators and cells, it is impossible for human operators to directly intervene in the physical system, so highly automated control is needed. The complexity of this control makes it mandatory to develop a distributed system to support it. Furthermore, at this scale the physical constraints are not fully controlled, so faults must be taken into account in the automated control. Typically, the tasks to be performed in the micro-factory are typed. The paper is organized as follows. Section 2 gives an overview of related work. Then, Section 3 details the application, platform and execution models, and the optimization problems.
The complexity results are given in Section 4. Section 5 provides an integer linear program to solve the specialized mapping problem, which turns out to be NP-hard, and proposes several polynomial-time heuristic solutions to this problem. The heuristics are extensively studied through a set of experiments in Section 6, and finally we conclude and give future work directions in Section 7.
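The paper's exact integer linear program is not reproduced on this page; the following is only a sketch of one natural formulation consistent with the problem description. It assumes constants x_i (expected work of task T_i, including re-executions caused by failures), processor speeds s_u, and a type function t(i); the binary variable y_{i,u} indicates that T_i is mapped on M_u, the binary variable z_{u,k} indicates that M_u is specialized for type k, and the continuous variable P is the period to minimize:

```latex
\begin{align*}
\min\ & P \\
\text{s.t.}\ & \textstyle\sum_{u} y_{i,u} = 1 && \forall i && \text{(each task on exactly one machine)}\\
& y_{i,u} \le z_{u,t(i)} && \forall i,u && \text{(a machine only runs tasks of its type)}\\
& \textstyle\sum_{k} z_{u,k} \le 1 && \forall u && \text{(at most one type per machine)}\\
& \frac{1}{s_u}\textstyle\sum_{i} x_i\, y_{i,u} \le P && \forall u && \text{(each cycle-time bounds the period)}
\end{align*}
```

Since the x_i are constants, all constraints are linear; minimizing P then minimizes the longest cycle-time, i.e., maximizes the throughput.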
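The specific heuristics of the paper are not detailed on this page. As a hedged illustration of what a simple baseline for the specialized setting could look like, the sketch below greedily assigns all tasks of one type to one processor, matching the heaviest type with the fastest processor; this respects the specialization rule (one type per processor) but does not split a type across processors, and all names and the failure model are assumptions.

```python
def greedy_specialized(task_type, work, fail, speed):
    """Baseline specialized mapping: one whole type per processor.

    task_type: task -> type; work, fail: per-task dicts;
    speed: processor -> speed.
    Assumed failure model: expected work is work[t] / (1 - fail[t]).
    """
    # Total expected load per type, including re-executions.
    load = {}
    for t, ty in task_type.items():
        load[ty] = load.get(ty, 0.0) + work[t] / (1.0 - fail[t])
    procs = sorted(speed, key=speed.get, reverse=True)   # fastest first
    types = sorted(load, key=load.get, reverse=True)     # heaviest first
    if len(types) > len(procs):
        raise ValueError("more task types than processors")
    assignment = dict(zip(types, procs))  # heaviest type -> fastest proc
    return {t: assignment[ty] for t, ty in task_type.items()}
```

This is only one of many possible greedy strategies; the paper's heuristics may split a type over several processors to break bottlenecks.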
English conclusion
In this paper, we have investigated a throughput optimization problem for coarse-grain workflow applications with task types and task failures. The problem deals with assigning tasks to machines, either performing a one-to-one mapping (one task per machine), a specialized mapping (several tasks of the same type per machine), or a general mapping. On the theoretical side, we proved that the optimal one-to-one mapping can be found in polynomial time, while the problem becomes NP-hard for specialized and general mappings. Since general mappings are not usable in practice because of reconfiguration costs, we focused on specialized mappings and proposed several polynomial-time heuristics to solve the problem. We also gave a linear programming formulation of the problem, which allowed us to compute the optimal solution on small problem instances and thus to evaluate the absolute performance of our heuristics on such instances. Experimental results suggest that some heuristics return mappings with a throughput close to the optimal, and that the sophisticated heuristics return results much better than a random mapping. We have already extended this work to a failure model in which the failure rate is influenced by the machine characteristics (rates f_{i,u} depending on both the task T_i and the machine M_u on which the task is mapped), see . The problem becomes much more complex, since the x_i values can no longer be computed in advance: they depend on the mapping. As future work, it would also be interesting to investigate other mapping rules, for instance the mapping of one task onto several machines. In such a case, different instances of a task would be handled by different machines, which would allow a better throughput to be obtained when a task is time-consuming (a bottleneck). We could also tackle general DAG workflows rather than restricting ourselves to in-trees, which are relevant mainly in the micro-factory context.
Moreover, we could handle communication costs and therefore no longer focus only on coarse-grain applications. Finally, other objective functions could be considered, for instance minimizing the total time required to obtain a given number of data sets, or the average time needed to output one data set.