The Effect of Estimating Weights in Weighted Least Squares
Academic Article

Overview


abstract

In weighted least squares, the weights are typically unknown and must be estimated, yet most packages report standard errors computed as if the weights were known. This is fine for sufficiently large sample sizes, but what about small-to-moderate sample sizes? This article's investigation into the effect of estimating weights proceeds under the assumption, typical in practice, that one has a parametric model for the variance function. In this context, generalized least squares consists of (a) an initial estimate of the regression parameter, (b) a method for estimating the variance function, and (c) the number of iterations in reweighted least squares. By means of expansions for the covariance, it is shown that each of (a)–(c) can matter in problems of small to moderate size. A few conclusions may be of practical interest. First, estimated standard errors that assume the weights are known can be too small in practice; the investigation indicates that a simple bootstrap operation yields corrected standard errors that adjust nonparametrically for the effect of estimating weights. Second, one need not perform many iterative reweightings before the effect of the initial estimate disappears; in the theory given here, three iterations suffice. Third, if one is not going to iterate, it is probably advisable to make one's initial estimate more robust than unweighted least squares; for example, an M-estimate. The theory in this article is fairly general in that the variance can be a parametric function of the mean and/or exogenous variables, the underlying distribution of the data is allowed to be general, and the number of iterative reweightings is allowed to vary. Thus the results apply to quasi-likelihood estimates in generalized linear models, and most methods of variance-function estimation are included. © 1976 Taylor & Francis Group, LLC.
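The procedure described in the abstract can be sketched in NumPy. This is a minimal illustration, not the article's method: the power-of-the-mean variance model Var(y_i) = sigma^2 * mu_i^(2*theta), the log-squared-residual regression used to estimate theta, and the simulated data are all assumptions chosen for concreteness. It shows the three ingredients (a)–(c) with three reweighting iterations, plus a pairs bootstrap that reruns the whole fit so the resulting standard errors reflect the extra variability from estimating the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a parametric variance function (illustrative assumption):
# Var(y_i) = sigma^2 * mu_i^(2*theta), the power-of-the-mean model with theta = 1.
n = 200
x = np.column_stack([np.ones(n), rng.uniform(1.0, 5.0, n)])
beta_true = np.array([1.0, 2.0])
mu = x @ beta_true
y = mu + 0.2 * mu * rng.standard_normal(n)

def wls(x, y, w):
    """Weighted least squares: solve (X'WX) beta = X'W y."""
    xw = x * w[:, None]
    return np.linalg.solve(xw.T @ x, xw.T @ y)

def gls_fit(x, y, iters=3):
    """Generalized least squares with estimated weights:
    (a) unweighted initial estimate, (b) estimate the variance-function
    parameter from residuals, (c) reweight and iterate (three times)."""
    beta = wls(x, y, np.ones(len(y)))           # (a) initial estimate
    for _ in range(iters):                      # (c) iterative reweighting
        mu_hat = x @ beta
        resid2 = (y - mu_hat) ** 2
        ok = (resid2 > 0) & (mu_hat > 0)
        # (b) crude log-regression estimate of theta -- a simple stand-in
        # for the variance-function estimators treated in the article
        theta_hat = np.polyfit(np.log(mu_hat[ok]), np.log(resid2[ok]), 1)[0] / 2.0
        w = np.maximum(mu_hat, 1e-8) ** (-2.0 * theta_hat)
        beta = wls(x, y, w)
    return beta

beta_hat = gls_fit(x, y)

# Pairs bootstrap that reruns the *entire* procedure on each resample, so the
# standard errors account nonparametrically for the estimated weights.
B = 200
boot = np.array([gls_fit(x[idx], y[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(B))])
se_boot = boot.std(axis=0, ddof=1)
```

Note the design choice in the bootstrap: resampling (x, y) pairs and repeating the full fit, rather than bootstrapping residuals from a fixed fit, is what lets the standard errors pick up the weight-estimation effect the abstract warns about.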