|Article code||Publication year||English article length||Persian translation|
|110511||2018||17-page PDF||available on order|
The English version of the article is available for immediate download. The article contains approximately 14,377 words.
Publisher: Elsevier - Science Direct
Journal: Journal of Statistical Planning and Inference, Available online 27 March 2018
Given a linear regression model and an experimental region for the independent variable, the problem of finding an optimal approximate design calls for minimizing a convex optimality criterion over a convex set of information matrices of feasible approximate designs. For numerical solution, design theorists often use pure gradient methods such as vertex-direction, vertex-exchange, and multiplicative algorithms, or combinations thereof. These methods have two major deficiencies: a slow convergence rate after a quick but rough approximation to the optimum, and often a large support of the obtained nearly optimal design. The latter feature is related to the fact that the methods optimize in the space of design measures, which is usually of high or even infinite dimension, whereas the dimension of the information matrices is often small or moderate. For such situations a quasi-Newton method originally established by Gaffke & Heiligers (1996) is revisited, and the present paper demonstrates new possibilities for its application. The algorithm optimizes in matrix space. It shows good global and excellent local convergence behavior, resulting in an accurate approximation of the optimum. A crucial subroutine solves convex quadratic minimization over the set of information matrices via repeated linear minimization over that set, thus providing the quasi-Newton step of the algorithm. This subroutine may also be of interest as a tool for computing, from a given information matrix, an approximate design whose support size keeps Carathéodory's bound. Illustrations are given for D- and I-optimality in particular multivariate random coefficient regression models and for T-optimal discriminating designs in univariate polynomial models. Moreover, the behavior of the algorithm is tested on cases of larger dimension: D- and I-optimal design for a third-order polynomial model in several variables on a discretized cube.
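To make the class of "pure gradient" methods discussed in the abstract concrete, the following is a minimal sketch of the classical multiplicative algorithm for D-optimal approximate design on a finite candidate set (not the paper's quasi-Newton method; the function name, candidate set, and iteration count are illustrative assumptions). Each candidate point's weight is rescaled by its normalized prediction variance, which leaves optimal weights fixed by the equivalence theorem.

```python
import numpy as np

def multiplicative_d_optimal(X, n_iter=2000):
    """Multiplicative (Titterington-type) algorithm for approximate
    D-optimal design on a finite candidate set.

    X      : (n, p) array whose rows are the candidate regression vectors.
    Returns: design weights w (length n, nonnegative, summing to 1).
    """
    n, p = X.shape
    w = np.full(n, 1.0 / n)              # start from the uniform design
    for _ in range(n_iter):
        # information matrix M(w) = sum_i w_i x_i x_i^T
        M = X.T @ (w[:, None] * X)
        # prediction variances d_i = x_i^T M(w)^{-1} x_i
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)
        # multiplicative update; w is a fixed point iff d_i = p on its support
        w *= d / p
        w /= w.sum()                     # guard against rounding drift
    return w

# Simple-line regression x(t) = (1, t) on the grid {-1, 0, 1}: the
# D-optimal design puts weight 1/2 on each endpoint.
X = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])
w = multiplicative_d_optimal(X)
```

The slow tail convergence and the tendency to spread weight over many candidate points visible in such iterations are exactly the deficiencies that motivate the matrix-space quasi-Newton approach studied in the paper.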