Download English ISI Article No. 24677
Persian Article Title

# Algorithms for approximate linear regression design using a first order model with heteroscedasticity

| Article Code | Year | English Article | Persian Translation | Word Count |
| --- | --- | --- | --- | --- |
| 24677 | 2014 | 11-page PDF | available to order | 8,693 words |
Buy This Article
After payment, you can download the article immediately.
English Title
Algorithms for approximate linear regression design with application to a first order model with heteroscedasticity
Source

Publisher: Elsevier - Science Direct

Journal: Computational Statistics & Data Analysis, Volume 71, March 2014, Pages 1113–1123. DOI: 10.1016/j.csda.2013.07.029

Keywords
Information matrix; Optimality criterion; Steepest descent; Quasi-Newton method; Efficiency; Invariant design
Article Preview

#### English Abstract

The basic structure of algorithms for numerical computation of optimal approximate linear regression designs is briefly summarized. First order methods are contrasted with second order methods. A first order method, also called a vertex direction method, uses a local linear approximation of the optimality criterion at the actual point. A second order method is a Newton or quasi-Newton method, employing a local quadratic approximation. Specific application is given to a multiple first order regression model on a cube with heteroscedasticity caused by random coefficients with known dispersion matrix. For a general (positive definite) dispersion matrix the algorithms work for moderate dimension of the cube. If the dispersion matrix is diagonal, a restriction to invariant designs is legitimate by equivariance of the model, and the algorithms also work for large dimension.

#### English Introduction

The concept of approximate linear regression design can be summarized as follows, where for brevity we omit the details of the linear regression model. Let $\mathcal{X}$ be a given design region. For every $x \in \mathcal{X}$ an information matrix $M(x)$ is given, which is a nonnegative definite $p \times p$ matrix. The set $\{M(x) : x \in \mathcal{X}\}$ is assumed to be compact. An approximate design $\xi$ consists of a finite subset of $\mathcal{X}$, called the support of $\xi$, and of corresponding weights,

$$\xi = \begin{pmatrix} x_1 & x_2 & \dots & x_r \\ w_1 & w_2 & \dots & w_r \end{pmatrix}, \quad \text{where } r \in \mathbb{N},\; x_i \in \mathcal{X},\; w_i > 0,\; \sum_{i=1}^{r} w_i = 1.$$

The information matrix of $\xi$ is defined to be

$$M(\xi) = \sum_{i=1}^{r} w_i M(x_i).$$

In particular, the set of all information matrices of approximate designs constitutes a convex and compact set, namely the convex hull of the one-point information matrices:

$$\mathcal{M} = \{M(\xi) : \xi \text{ an approximate design}\} = \operatorname{Conv}\{M(x) : x \in \mathcal{X}\}.$$

Design optimality refers to an optimality criterion $\Phi$. Here we deal only with optimality criteria whose domain is the set $\mathrm{PD}(p)$ of all positive definite $p \times p$ matrices, and moreover we assume $\Phi$ to be convex:

$$\Phi : \mathrm{PD}(p) \longrightarrow \mathbb{R} \quad \text{convex}.$$

An approximate design $\xi^*$ is said to be $\Phi$-optimal if its information matrix $M^* = M(\xi^*)$ is positive definite and minimizes $\Phi(M)$ over all $M \in \mathcal{M} \cap \mathrm{PD}(p)$. In particular, only designs with nonsingular information matrices are considered for optimality. Statistically this means that a design should allow the estimation of all $p$ regression coefficients. Popular optimality criteria are the D-criterion and the L-criteria, given by

$$\Phi_D(M) = \bigl(\det(M)\bigr)^{-1/p}, \qquad \Phi_L(M) = \operatorname{tr}(W M^{-1}) \quad \text{for all } M \in \mathrm{PD}(p),$$

where $\Phi_L$ employs a given “weight matrix” $W \in \mathrm{PD}(p)$. We note that, especially for the D-criterion, other (equivalent) formulations are in use, e.g., $-\log\det(M)$.
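The definitions above can be made concrete with a minimal sketch in plain Python. The model choice is an assumption for illustration only (not from the paper): straight-line regression $f(x) = (1, x)$ on $\mathcal{X} = [-1, 1]$, so that $M(x) = f(x)f(x)^\top$ and $p = 2$.

```python
# Illustration (assumed toy model, not the paper's): straight-line regression
# f(x) = (1, x) on X = [-1, 1], one-point information matrix M(x) = f(x) f(x)^T.

def info_matrix(x):
    """One-point information matrix M(x) = f(x) f(x)^T for f(x) = (1, x)."""
    f = (1.0, x)
    return [[f[i] * f[j] for j in range(2)] for i in range(2)]

def design_info_matrix(support, weights):
    """M(xi) = sum_i w_i M(x_i) for an approximate design xi."""
    M = [[0.0, 0.0], [0.0, 0.0]]
    for x, w in zip(support, weights):
        Mx = info_matrix(x)
        for i in range(2):
            for j in range(2):
                M[i][j] += w * Mx[i][j]
    return M

def phi_D(M):
    """D-criterion Phi_D(M) = det(M)^(-1/p) with p = 2 (to be minimized)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return det ** (-1.0 / 2.0)

# The design placing weight 1/2 on each of -1 and +1 is the classical
# D-optimal design for this model: M(xi) = diag(1, 1), so Phi_D = 1.
M_opt = design_info_matrix([-1.0, 1.0], [0.5, 0.5])
print(phi_D(M_opt))                                        # 1.0
print(phi_D(design_info_matrix([-0.5, 1.0], [0.5, 0.5])))  # larger, i.e. worse
```

Since $\Phi_D$ is to be minimized, any other two-point design on this region yields a strictly larger criterion value, in line with the strict convexity noted below.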
The version we prefer has the advantage of being positive and positively homogeneous of order $-1$, so that a natural notion of efficiency of an information matrix (or of a design) can be used (see Section 2). The problem of computing a $\Phi$-optimal design constitutes a convex minimization problem on the (convex) set $\mathcal{M} \cap \mathrm{PD}(p)$. For practical programming purposes, as well as for gaining more generality, we consider minimization problems in a vector variable $m$ (instead of a matrix variable $M$) of the following more general type:

$$\text{minimize } \phi(m) \text{ over } m \in \mathcal{M} \cap A, \tag{1.1}$$

$$\text{where } \emptyset \neq A \subseteq \mathbb{R}^q,\; A \text{ convex and open},\; \phi : A \longrightarrow \mathbb{R} \text{ convex},$$

$$\text{and } \emptyset \neq \mathcal{M} \subseteq \mathrm{cl}(A),\; \mathcal{M} \text{ convex and compact},$$

where $\mathrm{cl}(A)$ denotes the closure of $A$. A condition which obviously ensures the existence of an optimal solution to (1.1) is that for some $m_0 \in \mathcal{M} \cap A$ the set $\{m \in \mathcal{M} \cap A : \phi(m) \leq \phi(m_0)\}$ is compact. This is usually satisfied for optimal design problems under D- or L-optimality, as well as for many other optimality criteria, when stated as in (1.1). Moreover, strict convexity of the D- and L-criteria implies uniqueness of the optimal solution to (1.1). Typically, in the context of optimal design, the set $\mathcal{M}$ is given as the convex hull of a compact family $m(x) \in \mathbb{R}^q$, $x \in \mathcal{X}$, i.e.,

$$\mathcal{M} = \operatorname{Conv}\{m(x) : x \in \mathcal{X}\}, \quad \text{where } \{m(x) : x \in \mathcal{X}\} \text{ is compact}. \tag{1.2}$$

Then one is interested in an optimal design rather than in an optimal vector $m^*$ (an optimal solution to (1.1)), which poses the additional problem of finding a decomposition of the vector $m^* \in \mathcal{M}$ according to (1.2):

$$\text{Find } r \in \mathbb{N},\; x_1, \dots, x_r \in \mathcal{X}, \text{ and } w_1, \dots, w_r > 0,\; \sum_{i=1}^{r} w_i = 1, \text{ such that } m^* = \sum_{i=1}^{r} w_i m(x_i).$$

A corresponding design $\xi^*$ then has support points $x_i$ and weights $w_i$, $i = 1, \dots, r$.
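The passage from the matrix variable $M$ to the vector variable $m$ can be sketched as follows; the half-vectorization used here is one natural choice (an assumption for illustration, not the paper's specific encoding): stack the distinct entries of the symmetric $p \times p$ matrix into a vector of dimension $q = p(p+1)/2$.

```python
# Sketch of recasting the matrix problem as the vector problem (1.1):
# m = vech(M) has dimension q = p(p+1)/2, and phi(m) = Phi_D(unvech(m)).

def vech(M):
    """Stack the lower triangle of a symmetric matrix into a vector."""
    p = len(M)
    return [M[i][j] for i in range(p) for j in range(i + 1)]

def unvech(m, p):
    """Inverse of vech: rebuild the symmetric p x p matrix."""
    M = [[0.0] * p for _ in range(p)]
    k = 0
    for i in range(p):
        for j in range(i + 1):
            M[i][j] = M[j][i] = m[k]
            k += 1
    return M

def phi(m):
    """phi(m) = Phi_D(M) = det(M)^(-1/p) for p = 2, so q = 3."""
    M = unvech(m, 2)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return det ** (-0.5)

m = vech([[1.0, 0.0], [0.0, 1.0]])  # the identity matrix, q = 3
print(len(m), phi(m))               # 3 1.0
```

Working in $\mathbb{R}^q$ this way is what gives the iteration the fixed finite dimension $q$ mentioned in the next paragraph.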
The problem of finding a decomposition of a given vector $m \in \mathcal{M} = \operatorname{Conv}\{m(x) : x \in \mathcal{X}\}$ will be solved, where the number $r$ will not exceed Carathéodory's bound $q + 1$ (see Section 2). In our algorithms the iteration process employs $q$-dimensional vectors $m$ instead of designs $\xi$, which has the advantage of a finite dimension $q$. In particular, no heuristics are needed in the iteration process for removing support points with small weights or for substituting a single point for a cluster of support points. A design is constructed in a final step after the iteration process. This is a novel feature of our methods. Further viewpoints presented in the paper might be interesting, though not completely new. Classical algorithms for optimal design require at each step the minimization of a linear function over the “vertex set” $\{m(x) : x \in \mathcal{X}\}$. Here a reduction to “c-fractional minimization” is introduced, which proves useful in applications to a particular model in Section 3. A first order method using vertex directions (“c-fractional steepest descent”) is contrasted with a second order method of quasi-Newton type. As the main difference in the performance of the two methods, a greater accuracy of the latter in approaching the optimum is observed. An important ingredient of our quasi-Newton method is a modification of an algorithm of Higgins and Polak (1990) for minimizing a convex quadratic function over $\mathcal{M}$. This may be interesting in itself, since it is one of the very few algorithms from optimization theory dealing with a feasible region defined as a convex hull as in (1.2), which is the typical situation in optimal approximate linear regression design. It also constitutes a tool for computing a decomposition of a point from the convex hull in order to obtain a corresponding design. The paper is arranged as follows. In Section 2 the basic structure of the algorithms for solving (1.1) is outlined.
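The generic vertex-direction idea can be sketched with a classical Fedorov–Wynn style iteration for D-optimality; this illustrates the first order principle only, and is not the paper's c-fractional variant (the toy model, the grid discretization, and the harmonic step rule are all assumptions here).

```python
# Sketch of a classical first order (vertex direction) method, Fedorov-Wynn
# style, for D-optimality in the assumed toy model f(x) = (1, x) on [-1, 1].
# Each step moves the design toward the vertex maximizing the sensitivity
# (variance) function d(x, xi) = f(x)^T M(xi)^{-1} f(x).

def f(x):
    return (1.0, x)

def inv2(M):
    """Inverse of a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def variance(x, Minv):
    """Sensitivity function d(x, xi) = f(x)^T M(xi)^{-1} f(x)."""
    fx = f(x)
    return sum(fx[i] * Minv[i][j] * fx[j] for i in range(2) for j in range(2))

grid = [i / 100.0 - 1.0 for i in range(201)]      # discretized design region
weights = {x: 1.0 / len(grid) for x in grid}      # nonsingular starting design

for k in range(2000):
    M = [[sum(w * f(x)[i] * f(x)[j] for x, w in weights.items())
          for j in range(2)] for i in range(2)]
    Minv = inv2(M)
    x_star = max(grid, key=lambda x: variance(x, Minv))  # best vertex direction
    alpha = 1.0 / (k + 2)                                # harmonic step length
    weights = {x: (1 - alpha) * w for x, w in weights.items()}
    weights[x_star] += alpha

# mass accumulates near the D-optimal support points -1 and +1
heavy = sorted(x for x, w in weights.items() if w > 0.1)
print(heavy)  # expected to be close to [-1.0, 1.0]
```

The slow first order convergence visible here (many small steps toward a vertex) is exactly what motivates the more accurate second order quasi-Newton approach described above; note also that the iterate is a design here, whereas the paper iterates on $q$-dimensional vectors and recovers a design only in a final decomposition step.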
In Section 3 we focus on a particular model: multiple first order regression on the cube $[-1,1]^k$ with heteroscedasticity caused by random coefficients, as studied previously by Graßhoff et al. (2012). We apply the algorithms for moderate dimensions $k \leq 10$. The case of a diagonal dispersion matrix of the random coefficients, considered in Section 4, allows a reduction by invariance which decreases the computational effort considerably. A brief discussion of the results is given in Section 5. For convenience the modified Higgins–Polak algorithm is outlined in Appendix A, and a further Appendix B refers to the particular model from Section 3 and describes how to find the global minimum of a given quadratic function (in general, neither convex nor concave) over the cube $[-1,1]^k$. This is required to carry out c-fractional linear minimization within the algorithms applied to the particular model from Section 3, but it forces a restriction to moderate $k$.
