Download English ISI article No. 25074
Translated article title

Move limits definition in structural optimization with sequential linear programming. Part I: Optimization algorithm

English title
Move limits definition in structural optimization with sequential linear programming. Part I: Optimization algorithm
Article code | Publication year | Number of pages
25074 | 2003 | 17 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal : Computers & Structures, Volume 81, Issue 4, March 2003, Pages 197–213

Translated keywords
Move limits, Linearization error, Trust region
English keywords
SLP, Move limits, Linearization error, Trust region
Article preview

English abstract

A variety of numerical methods have been proposed in the literature to deal with the complexity and non-linearity of structural optimization problems. In practical design, sequential linear programming (SLP) is very popular because of its inherent simplicity and because linear solvers (e.g. Simplex) are readily available. However, SLP performance is sensitive to the definition of proper move limits for the design variables, a task that often involves considerable heuristics. This research presents a new SLP algorithm (LESLP: linearization error sequential linear programming) that implements an advanced technique for defining the move limits. The LESLP algorithm is formulated so as to overcome the traditional limitations of the SLP method. The new algorithm is successfully tested on weight minimization problems of truss structures with up to hundreds of design variables and thousands of constraints; sizing and configuration problems are considered. Optimization problems of non-truss structures are also presented. The key ideas of LESLP and a discussion of the numerical efficiency of the new algorithm are presented in a two-part paper. The first part concerns the basics of the LESLP formulation and provides potential users with a guide to programming LESLP on computers. In a companion paper, the numerical efficiency, advantages and drawbacks of LESLP are discussed and compared with those of other SLP algorithms recently published or implemented in commercial software packages.

English introduction

An optimization problem is defined by an objective function and a set of inequality constraint functions, and it is formulated as follows:

equation (1)

    minimize    W(x1, x2, …, xN)
    subject to  Gk(x1, x2, …, xN) ≤ 0,   k = 1, …, NC
                xjl ≤ xj ≤ xju,          j = 1, …, N

where (x1, x2, …, xN) are the N design variables; W(x1, x2, …, xN) is the objective function; Gk(x1, x2, …, xN) are the NC inequality constraint functions; xjl and xju are the lower and upper bounds of the jth design variable. Expressions (1) show that optimization problems usually have very complex and highly non-linear implicit formulations. The sequential linear programming (SLP) method is therefore a popular approach for dealing with this complexity and non-linearity. Here, the original non-linear problem is replaced with a linearized problem, built using linear approximations of the objective function and constraints. Each non-linear function of the problem is linearized about a point Poi and replaced with its first-order Taylor series expansion. The original non-linear problem thus changes into the following linearized one:

equation (2)

    minimize    W(x^i) + Σj [∂W/∂xj]_(x = x^i) (xj − xj^i)
    subject to  Gk(x^i) + Σj [∂Gk/∂xj]_(x = x^i) (xj − xj^i) ≤ 0,   k = 1, …, NC
                xjl ≤ xj ≤ xju,   j = 1, …, N

The vector x^i = (x1^i, x2^i, …, xN^i) defines the linearization point Poi in the design space, while the superscript i refers to the current optimization cycle (ith iteration). If the optimization problem is convex, the linearized constraint boundaries lie entirely outside the feasible region; hence the optimum found with the linearized sub-problem will be infeasible. However, convergence to the solution of the original problem can be achieved after a few relinearizations. The SLP method is hence a recursive procedure consisting of the formulation and resolution of a series of linearly approximated sub-problems, where each intermediate solution is chosen as the starting point of the subsequent sub-problem. In practical optimization, the SLP method is very popular because of its simple theoretical foundation and because reliable linear solvers (e.g. Simplex packages) are readily available [1].
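The SLP recursion described above can be sketched in a few lines. The toy problem, the move-limit fraction, its shrinking schedule, and the tolerances below are all my own choices for illustration; `scipy.optimize.linprog` stands in for a dedicated Simplex package.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative non-linear problem (not from the paper):
# minimize W(x) = x1 + 2*x2  subject to  G(x) = 1 - x1*x2 <= 0,
# with side bounds 0.1 <= xj <= 5.
c = np.array([1.0, 2.0])                      # gradient of the (linear) objective

def G(x):
    return 1.0 - x[0] * x[1]

def G_grad(x):
    return np.array([-x[1], -x[0]])

x = np.array([2.0, 2.0])                      # linearization point P_o^i
frac = 0.3                                    # move limit: +/-30% of each variable
for it in range(100):
    g, dg = G(x), G_grad(x)
    # First-order Taylor expansion of the constraint about x:
    # G(x) + dg.(y - x) <= 0   ->   dg.y <= dg.x - G(x)
    A_ub = dg[None, :]
    b_ub = np.array([dg @ x - g])
    # Move limits intersected with the original side bounds
    bounds = [(max(0.1, xi * (1 - frac)), min(5.0, xi * (1 + frac))) for xi in x]
    y = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x
    if np.linalg.norm(y - x) < 1e-9:          # converged between relinearizations
        break
    x = y
    frac = max(0.01, frac * 0.95)             # gradually shrink the move limits

# Should end near the true optimum x = (sqrt(2), 1/sqrt(2))
print(x, c @ x)
```

Without the `bounds` box the very first sub-problem already invites the oscillations discussed below, which is why the move-limit domain is built around each linearization point.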
Moreover, the method performs very well in convex programming problems with nearly linear objective and constraint functions [2]. However, because of their inherent conceptual simplicity, SLP techniques are not globally convergent [3]. Numerical problems such as convergence to local or infeasible optima and objective function oscillations may occur if the method is not well controlled [1]. Finally, for under-constrained problems, where there are fewer active constraints than design variables, the method often performs poorly because the linearized domain is unbounded [4]. In view of these limitations and of its simplicity, SLP is often considered a "poor" method by many theoreticians. For instance, Arora objected that the SLP method should not be used as a black box for solving optimization problems because there is no descent function; moreover, considerable heuristics are involved in the formulation of the approximate sub-problems [5]. However, it is generally acknowledged that "well coded" SLP algorithms may outperform very sophisticated optimization methods. An SLP algorithm is said to be well coded if it includes tools for enhancing numerical efficiency. In particular, the use of proper move limits is seen as the best way to make the SLP method reliable and efficient. Move limits are additional side constraints that define a region of the design space within which the solution of the linearized sub-problem must lie. This region is located in the neighbourhood of the linearization point and is called the move limit domain. It is built so that the linear approximation remains reasonably accurate even when the design variables are perturbed considerably during the optimization process.
Care should be taken in the definition of move limits: they must not be too large, to avoid oscillations in the numerical solution, nor too small, so that a reasonable convergence rate is maintained without the solution becoming trapped in local optima or infeasible designs. In addition, move limit definition should include line search procedures so that the cost function improves in each iteration. Since Pope introduced move limits in the early '70s, researchers have proposed a variety of techniques for defining them. Haftka and Gurdal [1] suggest choosing as move limit 10-30% of the value a design variable has at the beginning of the iteration cycle. The move limits are gradually shrunk as the design approaches the optimum, and they should be reduced if the cost function does not improve or the constraints are more violated than at the previous iteration. Vanderplaats and Kodyalam [6] adopt the following criterion: the move limits are very large in the first design cycles and are then reduced by 50% whenever the design violates the constraints more than at the previous iteration; in any case, the move limits are never reduced below 25%. John et al. [7] solve the linearized sub-problems with the Simplex algorithm. Move limits are reduced by means of the α parameter each time the design does not improve in the current iterate. The optimal step size α is determined with reanalysis techniques based on quadratic approximations of the cost function. Schittowsky et al. [8] implemented an SLP algorithm that adjusts move limits according to trust region models. The technique uses a penalty function strategy to build the descent function. Numerical tests proved that the algorithm is suitable also for large-scale optimization problems. However, the very large number of input parameters (σ, ρ1, ρ2) is a considerable drawback. Yu Chen [9] and [10] defines the move limits using the gradients of the constraint equations.
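The heuristic rules quoted above can be condensed into a short sketch. The function names are my own, and the 25% floor is one possible reading of the Vanderplaats/Kodyalam rule; this is illustrative, not the published algorithms themselves.

```python
def update_move_fraction(frac, more_violated):
    """Halving rule in the spirit of Vanderplaats and Kodyalam: halve the
    move-limit fraction when the current design violates the constraints
    more than the previous one, but never let it fall below 25%
    (one possible reading of the rule; illustrative only)."""
    return max(0.25, 0.5 * frac) if more_violated else frac

def move_limit_bounds(x, frac, lower, upper):
    """Move limits as a fraction of each variable's current value
    (cf. the 10-30% starting values suggested by Haftka and Gurdal),
    intersected with the original side bounds."""
    return [(max(lower, xi * (1.0 - frac)), min(upper, xi * (1.0 + frac)))
            for xi in x]
```

For example, starting from a large fraction of 0.8, two consecutive cycles with increased constraint violation give 0.4 and then 0.25, after which the fraction stays at its floor.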
The move limits are evaluated at each iteration, or only in the first design cycle and then shrunk by means of a user-supplied factor. The basic idea is to approach the constraint boundaries very quickly. Intermediate designs are accepted or rejected according to the improvement in the cost function or the reduction in constraint violation found with an inexact line search based on reanalysis techniques. Yu Chen's techniques are very interesting because the definition of move limits does not involve any heuristic criteria, and hence the user is freed from guesswork. However, particularly for optimization problems with many design variables and constraints, control parameters are needed to preserve numerical efficiency (CPU time and number of design cycles). Numerical results showed that the inexact line search reduces the number of design cycles, but the average time required per design cycle becomes substantially higher. Pourazady and Fu [11] used the SLP method in shape optimization problems. They propose a move limit reduction scheme in which the move limits are reduced exponentially from the initially prescribed value, with the iteration number taken as the exponent. This approach ensures termination of the optimization process, but sub-optimal designs may be reached because of premature termination; also, reduction of the move limits is sometimes enforced when unnecessary. The previous discussion emphasizes that move limit definition is a critical point, because the efficiency of an approximate method such as SLP depends substantially on the quality of the approximation. Authors have proposed a variety of approximate optimization algorithms in which the move limits have greater or lesser importance. In general, move limits are determined based on the difference between the non-linear functions and their linearized counterparts.
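The exponential reduction scheme attributed above to Pourazady and Fu can be sketched generically; the decay base and the names below are my own choices, and the original paper's exact formula may differ.

```python
def exponential_move_limit(frac0, base, iteration):
    """Shrink the move-limit fraction from its initial value frac0 in an
    exponential fashion, with the iteration number as the exponent.
    A base < 1 controls how aggressively the limits close down
    (illustrative parameterization, not the published formula)."""
    return frac0 * base ** iteration
```

With `frac0 = 0.3` and `base = 0.8`, the fraction falls to about 0.032 after ten cycles regardless of how the design is actually progressing, which illustrates both the guaranteed termination and the risk of premature termination noted above.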
The move limits are then shrunk every time the responses from the exact and approximate structural analyses differ by more than a fixed limit. See Refs. [12], [13] and [14] for details. Ideally, an approximate optimization method should include elements from each of the formulations summarised above. More realistically, an efficient and robust approximate method for structural optimization should define the move limits so as to guarantee that the approximate sub-problem portrays the original non-linear problem well. Besides, high accuracy of the approximation makes cost function oscillations unlikely and prevents the numerical solution from becoming trapped in infeasible regions. Wujek and Renaud [15] presented a review of different approximate methods for structural optimization focusing on move limit definition. They concluded that a proper choice of the move limits should ensure that the objective function improves monotonically, that each intermediate solution is feasible, and that the design variable movement is controlled so as to keep the approximation error at a reasonable level. These requirements were fulfilled by the approach to move limit definition proposed by Lamberti and Pappalettere [16]. They suggest calculating the move limits in two steps: first on the basis of the degree of non-linearity, and then adjusting the move limit amplitude with fast line searches so that the cost function is very likely to improve. Furthermore, an exact line search is performed every time the cost function does not improve at the end of a design cycle. The resulting algorithm (LEAML) was superior to other SLP techniques but involved some heuristics. For this reason, the original formulation of Ref. [16] is substantially modified in the present research.
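The general idea of sizing move limits from the linearization error, which underlies the LEAML/LESLP family, can be illustrated in one dimension. The quadratic error estimate and the tolerance are my own generic choices; the actual LESLP rules are developed in the paper itself.

```python
from math import sqrt

def error_bounded_step(second_derivative, tol):
    """For a twice-differentiable 1-D function, the first-order Taylor
    error over a step d is roughly |f''| * d**2 / 2. Keeping that error
    below `tol` bounds the move limit by d <= sqrt(2 * tol / |f''|).
    This is a generic sketch, not the LESLP formula."""
    return sqrt(2.0 * tol / abs(second_derivative))
```

Strong curvature (large |f''|) forces small move limits, while nearly linear behaviour allows large ones: exactly the adaptivity to the "degree of non-linearity" described above.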
The new algorithm (LESLP: linearization error sequential linear programming) is formulated so as to eliminate the uncertainties and guesswork in the process of defining and updating the move limits, providing schemes to accept or reject intermediate designs. Here, the key ideas of LESLP are discussed and a basic guide to programming is provided. The numerical behaviour and efficiency of LESLP are discussed in detail in a companion paper [17].