Comparison of the numerical efficiency of different Sequential Linear Programming based algorithms for structural optimisation problems
Article code | Publication year | English article length |
---|---|---|
25036 | 2000 | 16-page PDF |
Publisher: Elsevier - Science Direct
Journal: Computers & Structures, Volume 76, Issue 6, 30 July 2000, Pages 713–728
English abstract
Amongst the different optimisation methods, Sequential Linear Programming (S.L.P.) is very popular because of its conceptual simplicity and the wide availability of commercial LP packages (e.g. the Simplex algorithm). Unfortunately, the numerical efficiency of the S.L.P. method depends significantly on a proper choice of the move limits adopted for the optimisation variables. In this paper the effect of different move-limit definition criteria on the numerical solution is investigated. Two different approaches (CGML and LEAML) for the definition of the move limits in Sequential Linear Programming are described and compared in terms of numerical efficiency in the solution of six weight-minimisation problems of bar truss structures.
English introduction
As is well known, an optimisation problem is defined by an objective function and a set of constraint equations; a common analytical form is the following:

$$
\begin{aligned}
\text{minimise}\quad & W(x_1, x_2, \ldots, x_N) \\
\text{subject to}\quad & G_k(x_1, x_2, \ldots, x_N) \le 0, \qquad k = 1, \ldots, NC \\
& \alpha_j \le x_j \le \beta_j, \qquad j = 1, \ldots, N
\end{aligned}
\tag{1}
$$

where:

• (x1, x2, …, xN) are the N optimisation variables;
• W(x1, x2, …, xN) is the objective function to be minimised or maximised;
• Gk(x1, x2, …, xN) are the NC constraint equations of the optimisation problem;
• αj and βj are the restrictions put on the jth optimisation variable.

As is well known, the Sequential Linear Programming method (S.L.P.) is a recursive procedure consisting of the formulation and solution of a series of linearly approximated sub-problems, where each intermediate solution is the starting point for the subsequent sub-problem. The procedure is very simple: in a neighbourhood of a point $\mathbf{x}^{(i)}$, each nonlinear function of the problem is replaced by the linear part of its Taylor series expansion. The nonlinear problem is thus turned into the following linearized one:

$$
\begin{aligned}
\text{minimise}\quad & W(\mathbf{x}^{(i)}) + \nabla W(\mathbf{x}^{(i)})^{T} (\mathbf{x} - \mathbf{x}^{(i)}) \\
\text{subject to}\quad & G_k(\mathbf{x}^{(i)}) + \nabla G_k(\mathbf{x}^{(i)})^{T} (\mathbf{x} - \mathbf{x}^{(i)}) \le 0, \qquad k = 1, \ldots, NC \\
& \alpha_j \le x_j \le \beta_j, \qquad j = 1, \ldots, N
\end{aligned}
\tag{2}
$$

where $\mathbf{x}^{(i)}$ is the point of linearization at the ith iteration.

Despite the conceptual simplicity of the method, S.L.P. techniques are not globally convergent [1], and problems such as convergence to a local or infeasible optimum, or oscillations of the objective function, can arise if the method is not well controlled during the successive iterations [2]. Solution quality therefore depends on the control of the search parameters of the successive linearized solutions. A powerful control technique is the move limits procedure, proposed by Pope in 1973 [3]. At each iteration, this technique restricts the domain in which the solution is searched to the intersection between the linearized constraint domain and a neighbourhood of the linearization point. Great care is required in the definition of the move limits: they must not be too large, to avoid oscillations in the numerical solution, and not too small, to avoid the algorithm stalling at a local optimum or converging too slowly. Schittowsky et al. proved that the S.L.P. technique is very efficient from a numerical point of view, compared to other methods, when a proper choice of the move limits is adopted [4].

Different authors have proposed move-limit definition criteria for the Sequential Linear Programming method. Haftka suggests as move limit for a generic variable 10–30% of the initial variable value in the iteration cycle, and reduces the move-limit amplitude if no improvement of the objective function is achieved after the solution of the linearised sub-problem [2]. Vanderplaats adopts the following criterion: the move limits are very large at the beginning of the algorithm and are then reduced by 50% every time the constraints at the current iteration are more violated than at the previous iteration; in any case, the move limits are never reduced below 25% [5]. Ramakrishnan et al. use a search-direction technique to correct the current intermediate solution when it is not better than the previous intermediate solution, and then adjust the move-limit amplitude according to the value of the step α used in the directional search [6]. Schittowsky reports a technique, derived from Fletcher et al., to adapt the move limits, but this technique is based on a penalty-function strategy that depends on a large number of parameters [4].
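To make the recursion above concrete, here is a minimal sketch of an S.L.P. loop with a simple fixed-fraction move-limit box, written in Python with SciPy's `linprog` solving each linearized sub-problem. The function names, the fixed move-limit fraction and the convergence test are illustrative assumptions for the sketch, not the move-limit strategies compared in the paper.

```python
# Minimal S.L.P. sketch: repeatedly linearize W and G around the current point and
# solve the LP (2) inside a move-limit box.  All problem data are user supplied;
# the fixed move-limit fraction is an illustrative assumption, not CGML or LEAML.
import numpy as np
from scipy.optimize import linprog

def slp(W, grad_W, G, grad_G, x0, lower, upper,
        move_frac=0.2, max_iter=50, tol=1e-6):
    """Minimise W(x) subject to G(x) <= 0 and lower <= x <= upper by S.L.P."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        c = np.asarray(grad_W(x))            # gradient of the objective at x^(i)
        A = np.atleast_2d(grad_G(x))         # Jacobian of the constraints at x^(i)
        # Linearized constraints: G(x) + A (x_new - x) <= 0  ->  A x_new <= A x - G(x)
        b_ub = A @ x - np.asarray(G(x))
        # Move limits: a box around the linearization point, intersected with the
        # side constraints alpha_j <= x_j <= beta_j.
        delta = move_frac * np.maximum(np.abs(x), 1.0)
        lo = np.maximum(lower, x - delta)
        hi = np.minimum(upper, x + delta)
        res = linprog(c, A_ub=A, b_ub=b_ub, bounds=list(zip(lo, hi)), method="highs")
        if not res.success:
            break
        if np.linalg.norm(res.x - x) < tol:  # successive intermediate solutions coincide
            return res.x
        x = res.x
    return x
```

In the approaches compared in the paper, the fixed `move_frac` rule above would be replaced by a move-limit update (e.g. of the CGML or LEAML type) at every iteration.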
Yu Chen defines the move limits using the gradients of the constraint equations, either at each iteration or only at the first iteration, subsequently reducing their amplitude by a user-supplied factor; even though Yu Chen's techniques calculate the move limits, many control parameters are needed [7], [8] and [9].

It is worth noting that the definition of proper move limits is an important feature of all approximate optimisation methods, even when the adopted approximation is not merely linear (as it is in the S.L.P. method). An efficient and robust approximate optimisation method should define the move limits so as to ensure that the approximated sub-problem reasonably portrays the original nonlinear problem. High accuracy in the approximated sub-problem reduces the risk that the objective function is not improved after an optimisation cycle or that the numerical solution ends up in an infeasible region. Renaud et al. report and discuss different approximate methods for structural optimisation, focusing on the definition of the move limits. A proper choice of the move limits should ensure that the objective function always decreases, that the intermediate solutions are always feasible, and that the design variable movement is controlled so as to keep the approximation error at a reasonable level [10]. Most of the requirements for a proper choice of the move limits indicated by Renaud et al. are met by a new approach to the definition of the move limits proposed by Lamberti and Pappalettere [11]. They suggest calculating the move-limit amplitude on the basis of the degree of nonlinearity (Linearisation Error Amplitude Control Move Limits, or LEAML) [11].

In this paper Yu Chen's approach (Constraints Gradient based Move Limits, or CGML, methods) is partially modified: in particular, the move limits are recalculated as long as the intermediate solution improves meaningfully, and then they are reduced by a user-supplied factor. In the following, the modified Yu Chen approach is compared with the move-limit definition technique based on linearisation error amplitude control proposed in Ref. [11]. The comparison is made first in terms of how the move limits are defined and how the S.L.P. algorithm is coupled to the move-limit definition in the numerical procedure, and then in terms of the numerical results obtained in the calculations [12].
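As an illustration only, the following sketch shows the kind of reduction step described above for the modified handling of the move limits (keep the current limits while the intermediate solution still improves meaningfully, otherwise shrink them by a user-supplied factor). The function names, arguments, default factor and improvement test are assumptions; the sketch does not reproduce the paper's CGML gradient-based formulas or the LEAML linearisation-error control.

```python
import numpy as np

def improved_meaningfully(W_new, W_old, rel_tol=1e-3):
    """Illustrative test: relative decrease of the objective above a threshold
    (the threshold value is an arbitrary placeholder)."""
    return (W_old - W_new) > rel_tol * max(abs(W_old), 1.0)

def reduce_move_limits(delta, keep, user_factor=0.5):
    """Illustrative move-limit handling: the limits are left unchanged as long as
    the intermediate solution keeps improving meaningfully (keep=True); otherwise
    they are shrunk by a user-supplied factor (the default value is a placeholder)."""
    delta = np.asarray(delta, dtype=float)
    return delta if keep else user_factor * delta
```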
English conclusion
Two different approaches for the definition of the move limits in the Sequential Linear Programming method for structural optimisation, Constraints Gradient based Move Limits (CGML) and Linearisation Error Amplitude Control Move Limits (LEAML), have been described, compared and discussed. The numerical results, obtained for weight-minimisation problems of bar truss structures, show that both approaches are very efficient compared to other approximate methods described in the literature. In addition, they require CPU times comparable to those needed by the techniques available in the literature. The two options are practically equivalent in terms of final structural weight, but LEAML requires less CPU time than CGML to reach the optimal solution.