Download ISI English Article No. 110511
Article code: 110511
Publication year: 2018
English article: 17 pages (PDF), 14,377 words
Persian translation: available on order
Purchase: the article can be downloaded immediately after payment.
English Title
Quasi-Newton algorithm for optimal approximate linear regression design: Optimization in matrix space
Source

Publisher: Elsevier - Science Direct

Journal: Journal of Statistical Planning and Inference, available online 27 March 2018


English Abstract

Given a linear regression model and an experimental region for the independent variable, the problem of finding an optimal approximate design calls for minimizing a convex optimality criterion over a convex set of information matrices of feasible approximate designs. For numerical solution, design theorists often use pure gradient methods such as vertex-direction, vertex-exchange, and multiplicative algorithms, or combinations thereof. These methods have two major deficiencies: a slow convergence rate after a quick but rough approximation to the optimum, and often a large support of the obtained nearly optimal design. The latter feature is related to the fact that these methods optimize in the space of design measures, which is usually of high or even infinite dimension, whereas the dimension of the information matrices is often small or moderate. For such situations, a quasi-Newton method originally established by Gaffke & Heiligers (1996) is revisited, and the present paper demonstrates new possibilities for its application. The algorithm optimizes in matrix space. It shows good global and excellent local convergence behavior, resulting in an accurate approximation of the optimum. A crucial subroutine solves convex quadratic minimization over the set of information matrices via repeated linear minimization over that set, thus providing the quasi-Newton step of the algorithm. This subroutine may also be of interest as a tool for computing, from a given information matrix, an approximate design whose support size respects Carathéodory's bound. Illustrations are given for D- and I-optimality in particular multivariate random coefficient regression models and for T-optimal discriminating designs in univariate polynomial models. Moreover, the behavior of the algorithm is tested on cases of larger dimension: D- and I-optimal design for a third-order polynomial model in several variables on a discretized cube.
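To make the design-measure setting concrete, the following is a minimal sketch, not the paper's quasi-Newton method: it computes a nearly D-optimal approximate design on a discretized design region using the classical multiplicative algorithm, one of the pure gradient-type methods named in the abstract. The cubic regression model, the 101-point grid, and the iteration count are illustrative assumptions.

import numpy as np

# Minimal sketch (illustrative, not the paper's quasi-Newton algorithm):
# approximate D-optimal design on a discretized design region via the
# classical multiplicative algorithm, one of the design-measure methods
# named in the abstract.

def info_matrix(F, w):
    # Information matrix M(w) = sum_i w_i f(x_i) f(x_i)^T of the design with weights w.
    return F.T @ (w[:, None] * F)

def multiplicative_d_optimal(F, n_iter=2000):
    # Multiplicative updates w_i <- w_i * d_i(w) / m for the D-criterion,
    # where d_i(w) = f(x_i)^T M(w)^{-1} f(x_i) is the variance function.
    n, m = F.shape
    w = np.full(n, 1.0 / n)                        # uniform starting design
    for _ in range(n_iter):
        M_inv = np.linalg.inv(info_matrix(F, w))
        d = np.einsum('ij,jk,ik->i', F, M_inv, F)  # variance function at each grid point
        w *= d / m                                 # update keeps sum(w) = 1
    return w

# Assumed example: cubic polynomial model f(x) = (1, x, x^2, x^3) on [-1, 1],
# discretized to a 101-point grid.
x = np.linspace(-1.0, 1.0, 101)
F = np.vander(x, 4, increasing=True)
w = multiplicative_d_optimal(F)
print("support:", x[w > 1e-3])                     # support of the nearly optimal design
print("log det M:", np.log(np.linalg.det(info_matrix(F, w))))

On such a grid the multiplicative algorithm typically shows the behavior the abstract describes: a quick rough approach to the optimum followed by slow refinement, with many grid points retaining small positive weights and hence a large support. The matrix-space quasi-Newton approach of the paper targets exactly these deficiencies.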
