A parallel and distributed meta-heuristic framework based on partially ordered knowledge sharing
|Article code||Year of publication||English article length||Persian translation|
|8066||2012||15-page PDF||Available to order|
Publisher: Elsevier - Science Direct
Journal : Journal of Parallel and Distributed Computing, Volume 72, Issue 4, April 2012, Pages 564–578
We propose a new distributed and parallel meta-heuristic framework to address the issues of scalability and robustness in optimization problems. The proposed framework, named PADO (Parallel And Distributed Optimization framework), can utilize heterogeneous computing and communication resources to achieve scalable speedup while maintaining high solution quality. Specifically, starting from a mature sequential optimization framework, we implement population-based meta-heuristic algorithms with a loosely coupled distributed Island model for scalable parallelization. The coordination overhead of previous approaches is significantly reduced by using a partially ordered knowledge sharing (POKS) model as the underlying model for distributed computing. The resulting framework can encompass many meta-heuristic algorithms and can solve a wide variety of problems with minimal configuration. We demonstrate the applicability and performance of the framework on a traveling salesman problem (TSP), a multi-objective design space exploration (DSE) problem of an embedded multimedia system, and a drug scheduling problem of cancer chemotherapy.
Meta-heuristics are computational methods that optimize a problem by iteratively improving candidate solution(s) with respect to a given measure of solution quality. Meta-heuristic algorithms make few or no assumptions about the problem being optimized and can search numerous candidate solutions by combining randomized and probabilistic methods. They aim for a near-optimal solution within tractable time for NP-complete combinatorial optimization problems. Meta-heuristic optimization algorithms can be classified into two types: population-based and single-state. In population-based optimization, candidate solutions (i.e., members of a population) keep evolving into better ones via the interaction of individuals within the population. Genetic algorithms (GA), particle swarm optimization (PSO), and differential evolution (DE) are examples of population-based optimization algorithms. In contrast, single-state (also known as local search or solution-based) optimization starts with an initial candidate solution and modifies it to achieve a better solution. Hill climbing (HC), simulated annealing (SA), and Tabu search (TS) are examples of single-state optimization algorithms. Note that population-based optimization is exploration oriented, due to its capability to broadly cover the solution space, whereas single-state optimization is exploitation oriented, because it mainly improves the quality of a local solution. In this paper, we are concerned with population-based optimization for solving multi-objective optimization problems: because a multi-objective problem usually has a set of Pareto-optimal solutions, we need to broadly explore the solution space. The current implementation of the proposed technique is based on the Opt4J framework, which supports various meta-heuristic optimization algorithms applicable to a wide variety of optimization problems.
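The contrast between the two classes can be illustrated with a minimal sketch (purely illustrative, not part of PADO or Opt4J; all names and parameters are hypothetical) on a toy minimization objective:

```python
import random

def sphere(x):
    # Toy objective to minimize: sum of squares, optimum 0 at the origin.
    return sum(v * v for v in x)

def hill_climb(dim=3, iters=2000, step=0.1, seed=1):
    # Single-state search (exploitation): keep one candidate and
    # accept only improving moves in its neighborhood.
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = sphere(x)
    for _ in range(iters):
        cand = [v + rng.gauss(0, step) for v in x]
        fc = sphere(cand)
        if fc < fx:
            x, fx = cand, fc
    return fx

def evolve(dim=3, pop_size=20, gens=200, seed=1):
    # Population-based search (exploration): candidates evolve via
    # recombination of individuals plus mutation noise.
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        a, b = rng.sample(pop, 2)
        child = [(u + v) / 2 + rng.gauss(0, 0.1) for u, v in zip(a, b)]
        # Replace the worst individual if the child improves on it.
        worst = max(range(pop_size), key=lambda i: sphere(pop[i]))
        if sphere(child) < sphere(pop[worst]):
            pop[worst] = child
    return min(sphere(ind) for ind in pop)
```

The hill climber quickly refines one solution, while the evolutionary loop maintains a whole population and so covers the search space more broadly, at higher computational cost per iteration.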
In population-based optimization algorithms, the computation is often very demanding because of a large population size, a complex fitness evaluation, and/or a stringent convergence threshold. Thus, several parallel techniques for meta-heuristic algorithms have been proposed to enhance the convergence speed with acceptable solution quality. A taxonomy of the existing parallel techniques and their basic concepts can be found in the literature. The well-known parallelization techniques for population-based meta-heuristic algorithms are of two types: a master–slave model and an Island model. In the master–slave model, whenever the master needs to assess the fitness of an individual, it assigns the evaluation to one of its slaves. Unlike the master–slave model, the Island model divides the entire population into separate, possibly overlapping sub-populations. Each Island proceeds with the local evolution of its sub-population on potentially heterogeneous computational resources. Occasionally, the Island model asynchronously performs migration steps (i.e., exchanges of individuals) to disseminate new solution candidates and improve the overall solution quality in a cooperative manner. Both models have advantages and limitations. In the master–slave model, a robust and powerful master node can serve as a centralized repository of the population, which simplifies data collection, analysis, and recovery from failures of slave nodes. However, the master node can be both a performance bottleneck and a single point of failure. The Island model offers high evolution speed due to the decreased sub-population size, while compensating for low solution diversity by migration. The distributed nature of the Island model also provides robustness against node failures. Motivated by the need for a scalable and robust solution, we adopt the Island model for the parallelization of meta-heuristic algorithms.
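The Island model can be sketched as follows (an illustrative toy, not the framework described in this paper; it assumes a hypothetical ring topology, a OneMax objective, and synchronous migration for simplicity, whereas PADO's migrations are asynchronous):

```python
import random

def fitness(bits):
    # OneMax: count of 1-bits; higher is better.
    return sum(bits)

def evolve_island(pop, rng):
    # One local generation: mutate a random individual (per-bit flip
    # probability 0.05) and keep the child if it is at least as fit.
    i = rng.randrange(len(pop))
    child = [b ^ (rng.random() < 0.05) for b in pop[i]]
    if fitness(child) >= fitness(pop[i]):
        pop[i] = child

def run_islands(n_islands=4, pop_size=8, length=32, gens=2000,
                migrate_every=50, seed=7):
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(length)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(1, gens + 1):
        for pop in islands:
            evolve_island(pop, rng)
        if g % migrate_every == 0:
            # Ring-topology migration: each island sends its best
            # individual to the next island, replacing its worst.
            bests = [max(pop, key=fitness) for pop in islands]
            for k, pop in enumerate(islands):
                worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
                pop[worst] = list(bests[k - 1])
    return max(fitness(ind) for pop in islands for ind in pop)
```

Each island evolves its small sub-population independently, and the occasional migration step restores diversity by spreading good individuals, which is the trade-off between convergence speed and communication overhead discussed above.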
In this approach, multiple Islands (i.e., sub-populations) independently evolve and asynchronously exchange their individuals. To achieve good solution quality, the likelihood of being trapped in a local optimum needs to be reduced by increasing population diversity. Population diversity can be improved by increasing the migration ratio, which has been the main approach in previous work. Increasing the migration ratio, however, can adversely affect the convergence speed, because it increases the communication and synchronization overhead. We must also account for the Island topology, as it restricts migrations. We propose to adopt the loosely coupled distributed computational model of partially ordered knowledge sharing (POKS) to minimize the communication overhead and improve the scalability of distributed execution. In the POKS model, processors disseminate knowledge, in our context a locally optimal solution, to the other processors. Each processor receives multiple units of knowledge from other processors. We define an ordering on knowledge by means of solution quality. Each processor maintains the knowledge of the highest order among those it receives. Unlike in a synchronized approach, processors in our system may see different versions of the knowledge at a given point in time during the process of evolution. The in-network replacement of inferior knowledge decreases the communication overhead by propagating only winning individuals. By not relying on the existence of other nodes or persistent connectivity, the POKS model offers robustness against node and communication failures.
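The core POKS rule, that a node retains only knowledge not superseded by what it receives, can be sketched as follows (hypothetical class names, not the cyber-framework API; a total order is used here for a single objective, while POKS in general only requires a partial order, under which incomparable units coexist):

```python
class Knowledge:
    # One unit of shared knowledge: a solution plus its quality.
    def __init__(self, solution, quality):
        self.solution = solution
        self.quality = quality  # here: lower cost = "higher order"

    def supersedes(self, other):
        return self.quality < other.quality

class Node:
    # A processor that keeps only the highest-order knowledge it has seen.
    def __init__(self):
        self.best = None

    def receive(self, k):
        # In-network replacement: inferior knowledge is silently dropped,
        # so only winning solutions keep propagating.
        if self.best is None or k.supersedes(self.best):
            self.best = k

    def share(self):
        # What this node would disseminate to its neighbors.
        return self.best
```

Because a node never forwards superseded knowledge, inferior solutions die out in the network instead of consuming bandwidth, which is where the communication savings come from.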
In this paper, we propose a novel methodology to parallelize population-based meta-heuristic algorithms, which is realized in our framework, named PADO (Parallel And Distributed Optimization framework). PADO consists of two parts: the frontend is based on the Opt4J framework, which provides user interfaces for algorithm and problem specification; the backend is the cyber-application framework (in short, cyber-framework), which provides a programming environment for representing, manipulating, and sharing knowledge across the network under minimal assumptions on connectivity, based on the POKS model. The cyber-framework supports shared-memory parallelism as well as distributed modes of operation. PADO bridges Opt4J and the cyber-framework via a parallel-processing parameter configuration file. Both Opt4J and the cyber-framework provide configuration parameters that are easily adjusted depending on the optimization problem. As a result, a sequential version of the algorithm and problem specified in Opt4J is converted and executed as a parallel and distributed Island model on the cyber-framework with minimal configuration effort. To demonstrate the effectiveness of our approach, we address the design space exploration (DSE) problem in embedded system design, one of the numerous application areas of meta-heuristic algorithms, as a real-life case study. In embedded system design, system performance can greatly benefit from the optimization of key design parameters. To find optimal parameters with acceptable overhead, DSE is often performed by using meta-heuristic algorithms. Examples of this DSE problem include optimization of task scheduling; memory architecture; bus architecture; automotive communication architecture; and energy consumption. In many cases, the design-optimization process must deal with multiple objective functions, which requires Pareto optimality in the solution.
In this paper, the DSE problem is to find an optimal mapping of tasks to processors, considering the trade-offs between throughput performance and resource requirements. For this, we must determine the number of processors, the buffer sizes for data communication, and the static mapping of tasks to processors. Additionally, we use the traveling salesman problem (TSP) as well as a drug scheduling problem to show the viability of the proposed framework. The paper is organized as follows: in Section 2, we introduce the DSE problem that is solved in this paper as an illustrative optimization example. In Section 3, we present background on the POKS model, the cyber-framework, and Opt4J. Section 4 presents the detailed architecture of PADO. In Section 5, we demonstrate experimental results from three case studies. The first tackles the TSP, a well-studied NP-complete problem. The second case study solves the DSE problem presented in Section 2, and the last solves a drug scheduling problem of cancer chemotherapy. In Section 6, we review existing parallel meta-heuristic frameworks for comparison. We present our main contributions and conclude the paper in Section 7, along with a description of future work.
Conclusion
We propose PADO, a novel framework for parallel and distributed meta-heuristic optimization based on partially ordered knowledge sharing, to solve optimization problems that are frequently encountered in various application domains. The underlying POKS model provides a way to share a global view of evolution progress through asynchronous communication of shared knowledge. The partial ordering of knowledge significantly reduces the overhead of communication and storage. In the frontend of PADO, users specify problems and algorithms by using the interfaces provided by Opt4J, together with their configuration parameters. In the backend, the cyber-framework disseminates optimal solutions from the local populations based on the POKS model. The key features of the PADO framework that differentiate it from the related work reviewed in Section 6 can be summarized as follows: (1) MOEA support. Real-world applications such as the DSE problem used in our case study often involve a multitude of constraints and optimization objectives. In PADO, MOEA optimization is supported by the combination of Opt4J's MOEA modules and the partial order semantics of the POKS model. In particular, individuals are equipped with a partial ordering that defines the relationship between dominant and dominated solutions. Only the dominant solution set of the current approximation of the Pareto front is kept in the local knowledge base and disseminated to the other Islands. Pareto fronts propagate like waves through the network and can cancel each other out, based on the ordering, whenever they interfere. (2) Performance. PADO aims at speeding up parallel evolutionary methods without sacrificing solution quality. By adopting the shared knowledge model, we can significantly reduce the number of redundant migrations in an Island-based parallelization. This is confirmed by our experimental results in Section 5. (3) Scalability to heterogeneous architectures.
The PADO framework takes advantage of a heterogeneous network because it uses the cyber-framework's capability to perform parallel and distributed processing on various combinations of computing architectures and network infrastructure, ranging from multi-core, multi-threaded architectures to distributed computing resources on the cloud. Heterogeneity in hardware performance can also be taken into account to balance the evolution speed of the Islands, as we indicate in future research. (4) Composition of heterogeneous optimizers. The Island model enables each node to run with a different configuration, possibly pursuing different objectives. In the cyber-framework, a cyber-node can run different cyber-applications. For example, one cyber-node may execute SA to maximize the throughput, while another executes a GA to minimize the buffer size. As long as their genotypes are compatible, individuals can be exchanged among Islands. Thus, by using different meta-heuristics in different cyber-nodes, we may combine the advantages of local search, population-based, and other types of algorithms. We have not yet explored this possibility; it is another topic for future research. (5) Robustness. An evolutionary process should be robust against failures of computation/communication. In PADO, the loosely coupled Island model mitigates the impact of a single (master) node failure. Further, the underlying knowledge dissemination performed by the cyber-framework demands neither a central coordinator for managing the network topology nor a global state of evolution, which makes the PADO approach opportunistically robust against node and link failures. All Islands are self-organizing, and the shared knowledge is the only interface between Islands. In this model, an Island does not have to be concerned about the failure of others.
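The dominance-based partial ordering underlying the MOEA support in feature (1) can be sketched as a non-dominated filter (an illustrative sketch with hypothetical function names, not the framework's actual implementation; all objectives are minimized):

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly
    # better in at least one. Incomparable points are both kept,
    # which is exactly the partial-order semantics POKS relies on.
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def nondominated(points):
    # The current approximation of the Pareto front: the dominant
    # solution set an Island would retain locally and disseminate.
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

Under this ordering, a received Pareto front cancels any locally stored solution it dominates, while mutually incomparable solutions coexist, so competing fronts can "interfere" and only the combined non-dominated set survives.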
In the current implementation, the population size of each Island and the crossover/mutation ratios are set manually and remain constant. Runtime adaptive population balancing is worth pursuing in the future. For example, knowledge containing the progress of an ongoing evolution on each node can serve as a monitor of the status of other Islands, which can be further used to balance the workload among Islands. This feature is of great importance given current trends in distributed computing environments, such as shared Internet-based clusters employed by many users (e.g., cloud computing). Interestingly, this is by itself a runtime optimization problem that could lead to a reflective version of PADO.