Numerical optimization for the calculus of variations by gradients on non-Hilbert Sobolev spaces using conjugate gradients and normalized differential equations of steepest descent
| Article code | Publication year | Pages (English article) |
|---|---|---|
| 10439 | 2009 | 7 pages, PDF |
Publisher: Elsevier - Science Direct
Journal : Nonlinear Analysis: Theory, Methods & Applications, Volume 71, Issue 12, 15 December 2009, Pages e665–e671
English Abstract
The purpose of this paper is to illustrate the application of numerical optimization methods to nonquadratic functionals defined on non-Hilbert Sobolev spaces. These methods use a gradient defined on a norm-reflexive, and hence strictly convex, normed linear space. This gradient is the one defined by Michael Golomb and Richard A. Tapia in [M. Golomb, R.A. Tapia, The metric gradient in normed linear spaces, Numer. Math. 20 (1972) 115–124]; it is also the gradient described by Jean-Paul Penot in [J.P. Penot, On the convergence of descent algorithms, Comput. Optim. Appl. 23 (3) (2002) 279–284]. In this paper we restrict our attention to variational problems with zero boundary values. Nonzero boundary value problems can be converted to zero boundary value problems by an appropriate transformation of the dependent variables, although the original functional changes under such a transformation.

The connection to the calculus of variations is the following: the notion of a relative minimum in the Sobolev norm, for p positive and large and involving only function values and first derivatives, is related to the classical weak relative minimum in the calculus of variations. The motivation for minimizing nonquadratic functionals on these non-Hilbert Sobolev spaces is twofold. First, a norm equivalent to this Sobolev norm approaches the norm used for weak relative minima in the calculus of variations as p approaches infinity. Second, the Sobolev norm is both norm-reflexive and strictly convex, so that the gradient on a non-Hilbert Sobolev space consists of a singleton set; hence the gradient exists and is unique in this non-Hilbert normed linear space.

Two gradient minimization methods are presented here: conjugate gradient methods and an approach that uses differential equations of steepest descent. The Hilbert space conjugate gradient method of James Daniel [J. Daniel, The Approximate Minimization of Functionals, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1971] is extended to a conjugate gradient procedure for a non-Hilbert normed linear space; see Ivie Stein Jr. [I. Stein Jr., Conjugate gradient methods in Banach spaces, Nonlinear Anal. 63 (2005) e2621–e2628], where local convergence theorems are given. The approach using a differential equation of steepest descent is motivated and described by James Eells Jr. in [J. Eells Jr., A setting for global analysis, Bull. Amer. Math. Soc. 72 (1966) 751–807]. A normalized differential equation of steepest descent is used as a numerical minimization procedure in connection with starting methods, such as the higher order Runge–Kutta methods described by Baylis Shanks in [E. Baylis Shanks, Solutions of differential equations by evaluations of functions, Math. Comput. 20 (1966) 21–38], and higher order multistep methods, such as the Adams–Bashforth methods described by Fred T. Krogh in [F.T. Krogh, Predictor-corrector methods of high order with improved stability characteristics, J. Assoc. Comput. Mach. 13 (1966) 374–385]. Efficiency in steepest descent is the goal here. By taking a larger step size with a higher order numerical method such as Adams–Bashforth, the differential equation of steepest descent approach turns out to be more efficient and accurate than iterative steepest descent of the type used by Cauchy in 1847, by Haskell B. Curry in [H.B. Curry, The method of steepest descent for non-linear minimization problems, Quart. Appl. Math. 2 (1944) 258–261], and by Richard H. Byrd and Richard A. Tapia in [R.H. Byrd, R.A. Tapia, An extension of Curry's theorem to steepest descent in normed linear spaces, Math. Programming 9 (2) (1975) 247–254]. S.I. Al'ber and Ja. I. Al'ber in [S.I. Al'ber, Ja.I. Al'ber, Application of the method of differential descent to the solution of non-linear systems, Ž. Vyčisl. Mat. i Mat. Fiz. 7 (1967) 14–32 (in Russian)], among others, have also used the differential equation of steepest descent approach. Our numerical methods for solving initial value problems in differential equations are carried out in non-Hilbert function spaces.

Examples are described for minimizing the arc length functional, minimizing surface area functionals in nonparametric form, and solving pendent and sessile drop problems, including boundary conditions that are not rotationally symmetric. The pendent and sessile drop problems are similar to those considered by Henry C. Wente in [H.C. Wente, The symmetry of sessile and pendent drops, Pacific J. Math. 88 (2) (1980) 387–397] and [H.C. Wente, The stability of the axially symmetric pendent drop, Pacific J. Math. 88 (2) (1980) 421–470], and by Robert Finn in [R. Finn, Equilibrium Capillary Surfaces, Springer-Verlag, New York, 1986]. In minimizing locally the sum of surface tension energy and potential energy due to gravity, subject to a fixed volume constraint, one can apply Courant's penalty method, described in the appendix of the lecture notes of Richard Courant [R. Courant, Calculus of Variations and Supplementary Notes and Exercises, 1945–1946, revised and amended by Jurgen Moser, supplementary notes by Martin Kruskal and Hanan Rubin, Mathematics, New York University, New York, 1956–1957]. The numerical minimization is carried out in non-Hilbert function spaces for the penalty or augmented function. The Lagrange multiplier can be computed from Courant's penalty method as described by Magnus R. Hestenes in [M.R. Hestenes, Optimization Theory—The Finite Dimensional Case, John Wiley & Sons, New York, 1975, p. 307]. The more numerically stable method of multipliers of Hestenes and Powell can also be used to convert the constrained problem to an unconstrained one; it is described on pp. 307–308 of the same reference.
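As a concrete illustration of the conjugate gradient approach, the following Python sketch minimizes a discretized arc length functional. It is a finite-dimensional toy that uses the Euclidean inner product and a Polak–Ribière update rather than the paper's non-Hilbert Sobolev metric gradient or Daniel's original scheme; the grid size, Armijo constants, and function names are illustrative assumptions.

```python
import numpy as np

def arc_length(u, h, a=0.0, b=1.0):
    """Discretized arc length of a curve from (0, a) to (1, b) whose
    interior node heights are u (the paper's zero-boundary setting,
    shifted to nonzero endpoint values)."""
    full = np.concatenate(([a], u, [b]))
    du = np.diff(full)
    return np.sum(np.sqrt(h**2 + du**2))

def arc_length_grad(u, h, a=0.0, b=1.0):
    """Euclidean gradient of the discretized functional at the interior nodes."""
    full = np.concatenate(([a], u, [b]))
    du = np.diff(full)
    s = du / np.sqrt(h**2 + du**2)   # derivative of each segment length
    return s[:-1] - s[1:]            # chain rule onto interior nodes

def conjugate_gradient(u0, h, tol=1e-10, max_iter=1000):
    """Polak-Ribiere nonlinear CG with Armijo backtracking line search."""
    u = u0.copy()
    g = arc_length_grad(u, h)
    d = -g
    for _ in range(max_iter):
        slope = g @ d
        if slope >= 0:               # safeguard: restart along steepest descent
            d, slope = -g, -(g @ g)
        t, J0 = 1.0, arc_length(u, h)
        while arc_length(u + t * d, h) > J0 + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5                 # backtrack until sufficient decrease
        u = u + t * d
        g_new = arc_length_grad(u, h)
        if np.linalg.norm(g_new) < tol:
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+
        d = -g_new + beta * d
        g = g_new
    return u

n = 41                               # grid points, including the endpoints
h = 1.0 / (n - 1)
u = conjugate_gradient(np.zeros(n - 2), h)
x = np.linspace(h, 1.0 - h, n - 2)
print(np.max(np.abs(u - x)))         # the minimizer is the straight line u = x
```

Since the shortest curve between two points is the straight line, the final check should print a value near zero.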
English Introduction
The purpose of this paper is to illustrate the application of certain gradient methods in numerical optimization for nonquadratic functionals defined on non-Hilbert Sobolev spaces. The methods and applications described here are for boundary value problems in the calculus of variations having function values of zero on the boundary. These methods use a gradient defined on norm-reflexive, and hence strictly convex, Banach spaces. The gradient is that defined by Golomb and Tapia [1]; it is the same as that defined by Penot [2]. Two gradient methods are presented here: conjugate gradient methods and differential equations of steepest descent. The conjugate gradient methods are described in Stein [3]. The differential equations of steepest descent are described in Eells [4], and we use a normalized differential equation of steepest descent given by

$$\frac{du}{dt} = -\frac{\nabla J(u(t))}{\|\nabla J(u(t))\|}, \qquad u(0) = u_0. \tag{1.1}$$

The initial value problem described in (1.1) is solved numerically in a non-Hilbert Sobolev space by first using a starting method, such as Taylor series or the Runge–Kutta method given by formula (8–10), equation 19, p. 34, in Shanks [19], and then applying a higher order Adams–Bashforth method. An eighth order Adams–Bashforth method is described in Krogh [5]. A sixteenth order method is derived in this paper. Applications are described.
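To make the starter/multistep pattern concrete, here is a minimal Python sketch of the same idea at lower order: a classical fourth order Runge–Kutta starting method followed by a fourth order Adams–Bashforth multistep method applied to the normalized steepest descent equation (1.1). The paper itself uses the higher order formulas of Shanks and Krogh and works in non-Hilbert Sobolev spaces; the toy functional J(x) = sum(cosh(x_i)), the Euclidean gradient, and the function names below are assumptions for illustration.

```python
import numpy as np

def grad_J(x):
    """Euclidean gradient of the toy nonquadratic functional J(x) = sum(cosh(x_i))."""
    return np.sinh(x)

def rhs(x):
    """Right-hand side of the normalized steepest descent equation (1.1)."""
    g = grad_J(x)
    return -g / np.linalg.norm(g)

def rk4_step(x, h):
    """Classical fourth order Runge-Kutta step (the starting method)."""
    k1 = rhs(x)
    k2 = rhs(x + 0.5 * h * k1)
    k3 = rhs(x + 0.5 * h * k2)
    k4 = rhs(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def descend(x0, h=0.01, tol=0.02, max_steps=100000):
    """RK4 starter, then fourth order Adams-Bashforth with a fixed step.
    The fixed step cannot resolve the minimizer below O(h), so the
    stopping tolerance is kept above the step size."""
    x = np.asarray(x0, dtype=float)
    fs = [rhs(x)]
    for _ in range(3):                   # three starting steps supply f-history
        x = rk4_step(x, h)
        fs.append(rhs(x))
    for _ in range(max_steps):
        if np.linalg.norm(grad_J(x)) < tol:
            break
        # AB4: x_{n+1} = x_n + h/24 (55 f_n - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3})
        x = x + (h / 24.0) * (55.0 * fs[-1] - 59.0 * fs[-2]
                              + 37.0 * fs[-3] - 9.0 * fs[-4])
        fs.append(rhs(x))
    return x

print(descend(np.array([1.5, -2.0])))    # approaches the minimizer at the origin
```

Because the normalized field has unit speed everywhere, a larger fixed step can be taken safely with the higher order multistep formula, which is the efficiency argument made in the paper.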
English Conclusion
Several applications are described here. The first is that of minimizing an arc length functional, with some computations given in Stein [3]. Computations for determining geodesics can also be carried out by minimizing the arc length functional subject to an equality constraint and applying Courant's penalty method, described in Courant [13] on p. 272. An extension of the method of multipliers of Hestenes and Powell can also be applied to convert the constrained problem to an unconstrained problem; it has the advantage of being more numerically stable than Courant's penalty method. The method of multipliers of Hestenes and Powell is described on p. 308 of Hestenes [14] for the finite dimensional case. Minimization of certain surface area functionals can be carried out directly, using nonparametric representations of the surface area functional instead of the Dirichlet integral. The computations can be carried out for finding a pendent drop subject to gravity with a fixed volume constraint. One problem of interest is the so-called "critical drop." This provides an example, in nonparametric form, of a singular quadratic functional of the type described by Morse [18], but for multiple integrals, where the strengthened Legendre condition fails to hold on the boundary. For references see Wente [15] and [16] and Finn [17]. For the axially symmetric critical pendent drop, choose suitable units of measure, take V = 2.88279515… to be the volume constraint, and let r = 0.9176221… be the radius of the dropper. Then the drop height is 1.478… and the Lagrange multiplier is λ = 1.089…. The strengthened Legendre condition fails to hold on the circular boundary of the dropper for this nonparametric problem.
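The constrained-to-unconstrained conversion mentioned above can be sketched in a few lines of Python with the Hestenes–Powell method of multipliers on a toy problem: a quadratic objective and a linear constraint stand in for the drop energy and the fixed volume constraint, and SciPy's BFGS performs the inner unconstrained minimizations. The toy f and g, the penalty weight c, and the helper names are illustrative assumptions, not the paper's computation; Courant's pure penalty method corresponds to keeping lam = 0 and letting c grow, with the multiplier recovered as c*g(x) in the manner described by Hestenes on p. 307.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Toy objective, standing in for surface tension plus potential energy."""
    return x[0]**2 + 2.0 * x[1]**2

def g(x):
    """Toy equality constraint, standing in for the fixed volume constraint."""
    return x[0] + x[1] - 1.0

def method_of_multipliers(x0, lam=0.0, c=10.0, outer_iters=25, tol=1e-10):
    """Hestenes-Powell method of multipliers: repeatedly minimize the
    augmented function f + lam*g + (c/2)*g^2, then update lam <- lam + c*g."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        aug = lambda z: f(z) + lam * g(z) + 0.5 * c * g(z)**2
        x = minimize(aug, x, method='BFGS').x   # inner unconstrained solve
        if abs(g(x)) < tol:
            break
        lam += c * g(x)                          # multiplier update
    return x, lam

x_star, lam_star = method_of_multipliers([0.0, 0.0])
print(x_star, lam_star)   # for this toy: x = (2/3, 1/3), lambda = -4/3
```

Unlike the pure penalty method, the multiplier updates let the constraint be satisfied to high accuracy at a moderate fixed c, which is the numerical stability advantage noted above.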