Variance components in errors-in-variables models: estimability, stability and bias analysis


Abstract

Although total least squares has been substantially investigated theoretically and widely applied in practice, almost nothing has been done to simultaneously address the estimation of parameters and the errors-in-variables (EIV) stochastic model. We prove that the variance components of the EIV stochastic model are not estimable if the elements of the random coefficient matrix can be classified into two or more groups of data of the same accuracy. This result of inestimability is surprising, as it indicates that we have no way of gaining any knowledge about such an EIV stochastic model. We demonstrate that the linear equations for the estimation of variance components can be ill-conditioned even when the variance components are theoretically estimable. Finally, if the variance components are estimable, we derive the biases of their estimates, which can be significantly amplified by a large condition number.



Acknowledgments

The authors thank the associate editor, Prof. A. Dermanis, for his thoroughly constructive and valuable comments and for his patience with our revisions, which helped clarify some of the subtle points, in particular in Sects. 2 and 3, in the best interest of the reader. They also thank the fourth reviewer for slightly improving the English of the text. This work is partially supported by a Grant-in-Aid for Scientific Research (C25400449) and the National Natural Science Foundation of China (No. 41231174).

Author information

Corresponding author

Correspondence to Peiliang Xu.

Appendix: the matrices required for the MINQUE estimation of variance components

To apply the MINQUE method to estimate the variance components of (2) in association with the linearized EIV model (10), with the design matrix \(\mathbf{A}(\hat{\varvec{\beta}},\hat{\overline{\mathbf{a}}})\) in (17), we must first compute the normal matrix, the projection matrix and the other necessary matrices. We note that all of these matrices, and any quantity computed from them, depend on the estimates \(\hat{\varvec{\beta}}\) and \(\hat{\overline{\mathbf{a}}}\). For notational conciseness, we will not write this dependence out explicitly where no confusion can arise. By definition, the normal matrix, denoted by \(\mathbf{N}\), is given as follows:

$$\begin{aligned} \mathbf{N}&= \left[ \begin{array}{cc} \hat{\overline{\mathbf{A}}}^T & \mathbf{0} \\ \hat{\varvec{\beta}}\otimes\mathbf{I}_n & \mathbf{I}_a \end{array} \right] \left[ \begin{array}{cc} \varvec{\Sigma}_{0y}^{-1} & \mathbf{0} \\ \mathbf{0} & \varvec{\Sigma}_{0a}^{-1} \end{array} \right] \left[ \begin{array}{cc} \hat{\overline{\mathbf{A}}} & \hat{\varvec{\beta}}^T\otimes\mathbf{I}_n \\ \mathbf{0} & \mathbf{I}_a \end{array} \right] \\ &= \left[ \begin{array}{cc} \hat{\overline{\mathbf{A}}}^T\varvec{\Sigma}_{0y}^{-1}\hat{\overline{\mathbf{A}}} & \hat{\varvec{\beta}}^T\otimes\big(\hat{\overline{\mathbf{A}}}^T\varvec{\Sigma}_{0y}^{-1}\big) \\ \hat{\varvec{\beta}}\otimes\big(\varvec{\Sigma}_{0y}^{-1}\hat{\overline{\mathbf{A}}}\big) & \varvec{\Sigma}_{0a}^{-1} + \hat{\varvec{\beta}}\hat{\varvec{\beta}}^T\otimes\varvec{\Sigma}_{0y}^{-1} \end{array} \right] \\ &= \left[ \begin{array}{cc} \mathbf{N}_{\beta} & \mathbf{N}_{\beta\overline{a}} \\ \mathbf{N}_{\overline{a}\beta} & \mathbf{N}_{\overline{a}} \end{array} \right]. \end{aligned}$$
(48a)
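As a purely illustrative aside (ours, not part of the original derivation), the block structure of \(\mathbf{N}\) in (48a) can be verified numerically. The following is a minimal numpy sketch under assumed illustrative dimensions; all names (A_bar, beta_hat, and the diagonal cofactor matrices behind Si_y, Si_a) are our stand-ins, and the stacked design matrix B is built exactly as the right-hand factor of (48a):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3                  # assumed sizes: n observations, m parameters
na = n * m                   # length of vec(A), columns stacked

A_bar = rng.standard_normal((n, m))            # stand-in for the estimated coefficient matrix
beta_hat = rng.standard_normal(m)              # stand-in for the estimated parameters
Si_y = np.linalg.inv(np.diag(rng.uniform(0.5, 2.0, n)))   # Sigma_{0y}^{-1}
Si_a = np.linalg.inv(np.diag(rng.uniform(0.5, 2.0, na)))  # Sigma_{0a}^{-1}

# Right-hand factor of (48a): [A_bar, beta^T (x) I_n; 0, I_a]
K = np.kron(beta_hat[None, :], np.eye(n))      # beta^T (x) I_n maps vec(A) to A*beta
B = np.block([[A_bar, K],
              [np.zeros((na, m)), np.eye(na)]])
Si = np.block([[Si_y, np.zeros((n, na))],
               [np.zeros((na, n)), Si_a]])

N = B.T @ Si @ B                               # the definition in (48a)

# The four blocks obtained by the Kronecker mixed-product rule, second line of (48a)
N_b  = A_bar.T @ Si_y @ A_bar
N_ba = np.kron(beta_hat[None, :], A_bar.T @ Si_y)
N_ab = np.kron(beta_hat[:, None], Si_y @ A_bar)
N_a  = Si_a + np.kron(np.outer(beta_hat, beta_hat), Si_y)
assert np.allclose(N, np.block([[N_b, N_ba], [N_ab, N_a]]))
```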

The inverse of the normal matrix \(\mathbf {N}\) with the four blocks is symbolically represented by the matrix \(\mathbf {Q}\), namely,

$$\begin{aligned} \mathbf{Q} = \mathbf{N}^{-1} = \left[ \begin{array}{cc} \mathbf{Q}_{\beta} & \mathbf{Q}_{\beta\overline{a}} \\ \mathbf{Q}_{\overline{a}\beta} & \mathbf{Q}_{\overline{a}} \end{array} \right], \end{aligned}$$
(48b)

with the size of each of the blocks \(\mathbf {Q}_{\beta }\), \(\mathbf {Q}_{\beta \overline{a}}\), \(\mathbf {Q}_{\overline{a}\beta }\) and \(\mathbf {Q}_{\overline{a}}\) corresponding to that of \(\mathbf {N}_{\beta }\), \(\mathbf {N}_{ \beta \overline{a}}\), \(\mathbf {N}_{\overline{a}\beta }\) and \(\mathbf {N}_{\overline{a}}\), respectively.
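In code, this partition is plain index slicing once \(\mathbf{N}\) has been inverted. A tiny hypothetical numpy illustration (a random positive-definite stand-in for \(\mathbf{N}\), with assumed block sizes m and na):

```python
import numpy as np

rng = np.random.default_rng(0)
m, na = 3, 18                                # assumed block sizes of N_beta and N_abar
X = rng.standard_normal((m + na, m + na))
N = X @ X.T + (m + na) * np.eye(m + na)      # random SPD stand-in for the normal matrix
Q = np.linalg.inv(N)                         # (48b)
Q_beta, Q_ba = Q[:m, :m], Q[:m, m:]
Q_ab,   Q_a  = Q[m:, :m], Q[m:, m:]
assert np.allclose(Q_ab, Q_ba.T)             # Q inherits the symmetry of N
```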

To compute the projection matrix, we first compute the following \(\mathbf {H}\) matrix:

$$\begin{aligned} \mathbf{H}&= \left[ \begin{array}{cc} \hat{\overline{\mathbf{A}}} & \hat{\varvec{\beta}}^T\otimes\mathbf{I}_n \\ \mathbf{0} & \mathbf{I}_a \end{array} \right] \left[ \begin{array}{cc} \mathbf{Q}_{\beta} & \mathbf{Q}_{\beta\overline{a}} \\ \mathbf{Q}_{\overline{a}\beta} & \mathbf{Q}_{\overline{a}} \end{array} \right] \left[ \begin{array}{cc} \hat{\overline{\mathbf{A}}}^T & \mathbf{0} \\ \hat{\varvec{\beta}}\otimes\mathbf{I}_n & \mathbf{I}_a \end{array} \right] \\ &= \left[ \begin{array}{cc} \mathbf{H}_y & \mathbf{H}_{ya} \\ \mathbf{H}_{ay} & \mathbf{H}_a \end{array} \right], \end{aligned}$$
(48c)

or explicitly by

$$\begin{aligned} \mathbf{H}_y&= \hat{\overline{\mathbf{A}}}\mathbf{Q}_{\beta}\hat{\overline{\mathbf{A}}}^T + \hat{\overline{\mathbf{A}}}\mathbf{Q}_{\beta\overline{a}}\big(\hat{\varvec{\beta}}\otimes\mathbf{I}_n\big) + \big(\hat{\varvec{\beta}}^T\otimes\mathbf{I}_n\big)\mathbf{Q}_{\overline{a}\beta}\hat{\overline{\mathbf{A}}}^T \\ &\quad + \big(\hat{\varvec{\beta}}^T\otimes\mathbf{I}_n\big)\mathbf{Q}_{\overline{a}}\big(\hat{\varvec{\beta}}\otimes\mathbf{I}_n\big), \\ \mathbf{H}_{ya}&= \hat{\overline{\mathbf{A}}}\mathbf{Q}_{\beta\overline{a}} + \big(\hat{\varvec{\beta}}^T\otimes\mathbf{I}_n\big)\mathbf{Q}_{\overline{a}}, \\ \mathbf{H}_{ay}&= \mathbf{H}_{ya}^T = \mathbf{Q}_{\overline{a}\beta}\hat{\overline{\mathbf{A}}}^T + \mathbf{Q}_{\overline{a}}\big(\hat{\varvec{\beta}}\otimes\mathbf{I}_n\big), \\ \mathbf{H}_a&= \mathbf{Q}_{\overline{a}}. \end{aligned}$$
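These explicit formulas are nothing more than the blockwise expansion of the triple product in (48c). A hedged numpy sketch (same assumed setup and stand-in names as in the sketch after (48a)) that checks them against the full product:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
na = n * m
A_bar = rng.standard_normal((n, m))
beta_hat = rng.standard_normal(m)
Si_y = np.linalg.inv(np.diag(rng.uniform(0.5, 2.0, n)))
Si_a = np.linalg.inv(np.diag(rng.uniform(0.5, 2.0, na)))
K = np.kron(beta_hat[None, :], np.eye(n))            # beta^T (x) I_n, so K.T = beta (x) I_n
B = np.block([[A_bar, K], [np.zeros((na, m)), np.eye(na)]])
Si = np.block([[Si_y, np.zeros((n, na))], [np.zeros((na, n)), Si_a]])

Q = np.linalg.inv(B.T @ Si @ B)
H = B @ Q @ B.T                                      # (48c)

Qb, Qba = Q[:m, :m], Q[:m, m:]
Qab, Qa = Q[m:, :m], Q[m:, m:]
H_y  = A_bar @ Qb @ A_bar.T + A_bar @ Qba @ K.T + K @ Qab @ A_bar.T + K @ Qa @ K.T
H_ya = A_bar @ Qba + K @ Qa
H_ay = Qab @ A_bar.T + Qa @ K.T
assert np.allclose(H, np.block([[H_y, H_ya], [H_ay, Qa]]))   # H_a = Q_a
```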

With (14) in mind, the projection matrix in a linear model is defined by \(\mathbf{R}=\mathbf{Z}_A^{\bot}=\mathbf{I}_{ya}-\mathbf{Z}_A\), where \(\mathbf{I}_{ya}\) is an identity matrix whose dimension equals the sum of the sizes of \(\mathbf{y}\) and \(\mathbf{A}\). As a result, we can finally obtain the projection matrix as follows:

$$\begin{aligned} \mathbf{R}&= \mathbf{I}_{ya} - \mathbf{H}\varvec{\Sigma}_0^{-1} \\ &= \left[ \begin{array}{cc} \mathbf{I}_n & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_a \end{array} \right] - \mathbf{H}\left[ \begin{array}{cc} \varvec{\Sigma}_{0y}^{-1} & \mathbf{0} \\ \mathbf{0} & \varvec{\Sigma}_{0a}^{-1} \end{array} \right] \\ &= \left[ \begin{array}{cc} \mathbf{I}_n - \mathbf{H}_y\varvec{\Sigma}_{0y}^{-1} & -\mathbf{H}_{ya}\varvec{\Sigma}_{0a}^{-1} \\ -\mathbf{H}_{ay}\varvec{\Sigma}_{0y}^{-1} & \mathbf{I}_a - \mathbf{H}_a\varvec{\Sigma}_{0a}^{-1} \end{array} \right] \\ &= \left[ \begin{array}{cc} \mathbf{R}_y & \mathbf{R}_{ya} \\ \mathbf{R}_{ay} & \mathbf{R}_a \end{array} \right]. \end{aligned}$$
(49)
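Although not stated explicitly in the appendix, \(\mathbf{R}\) is an (oblique) idempotent projector that annihilates the design matrix, since \(\mathbf{H}\varvec{\Sigma}_0^{-1}\mathbf{H}\varvec{\Sigma}_0^{-1}=\mathbf{H}\varvec{\Sigma}_0^{-1}\). A short numpy check under the same assumed illustrative setup as above:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
na = n * m
A_bar = rng.standard_normal((n, m))
beta_hat = rng.standard_normal(m)
Si_y = np.linalg.inv(np.diag(rng.uniform(0.5, 2.0, n)))
Si_a = np.linalg.inv(np.diag(rng.uniform(0.5, 2.0, na)))
B = np.block([[A_bar, np.kron(beta_hat[None, :], np.eye(n))],
              [np.zeros((na, m)), np.eye(na)]])
Si = np.block([[Si_y, np.zeros((n, na))], [np.zeros((na, n)), Si_a]])

H = B @ np.linalg.inv(B.T @ Si @ B) @ B.T
R = np.eye(n + na) - H @ Si                  # (49)
assert np.allclose(R @ R, R)                 # idempotent: R is a projector
assert np.allclose(R @ B, 0.0)               # R annihilates the stacked design matrix
```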

Finally, the matrix \(\mathbf {P}\) in Sect. 3.2 is given as follows:

$$\begin{aligned} \mathbf{P}&= \varvec{\Sigma}_0^{-1}\mathbf{R} \\ &= \left[ \begin{array}{cc} \varvec{\Sigma}_{0y}^{-1} & \mathbf{0} \\ \mathbf{0} & \varvec{\Sigma}_{0a}^{-1} \end{array} \right] \left[ \begin{array}{cc} \mathbf{I}_n - \mathbf{H}_y\varvec{\Sigma}_{0y}^{-1} & -\mathbf{H}_{ya}\varvec{\Sigma}_{0a}^{-1} \\ -\mathbf{H}_{ay}\varvec{\Sigma}_{0y}^{-1} & \mathbf{I}_a - \mathbf{H}_a\varvec{\Sigma}_{0a}^{-1} \end{array} \right] \\ &= \left[ \begin{array}{cc} \varvec{\Sigma}_{0y}^{-1} - \varvec{\Sigma}_{0y}^{-1}\mathbf{H}_y\varvec{\Sigma}_{0y}^{-1} & -\varvec{\Sigma}_{0y}^{-1}\mathbf{H}_{ya}\varvec{\Sigma}_{0a}^{-1} \\ -\varvec{\Sigma}_{0a}^{-1}\mathbf{H}_{ay}\varvec{\Sigma}_{0y}^{-1} & \varvec{\Sigma}_{0a}^{-1} - \varvec{\Sigma}_{0a}^{-1}\mathbf{H}_a\varvec{\Sigma}_{0a}^{-1} \end{array} \right] \\ &= \left[ \begin{array}{cc} \mathbf{P}_y & \mathbf{P}_{ya} \\ \mathbf{P}_{ay} & \mathbf{P}_a \end{array} \right]. \end{aligned}$$
(50)
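From (50), \(\mathbf{P}=\varvec{\Sigma}_0^{-1}-\varvec{\Sigma}_0^{-1}\mathbf{H}\varvec{\Sigma}_0^{-1}\) is symmetric and satisfies \(\mathbf{P}\varvec{\Sigma}_0\mathbf{P}=\mathbf{P}\) and \(\mathbf{P}\mathbf{Z}_A=\mathbf{0}\), properties familiar from MINQUE-type quadratic forms. A final numpy sketch, again with our assumed illustrative setup (the cofactor matrices S_y, S_a are hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
na = n * m
A_bar = rng.standard_normal((n, m))
beta_hat = rng.standard_normal(m)
S_y = np.diag(rng.uniform(0.5, 2.0, n))      # Sigma_{0y}
S_a = np.diag(rng.uniform(0.5, 2.0, na))     # Sigma_{0a}
B = np.block([[A_bar, np.kron(beta_hat[None, :], np.eye(n))],
              [np.zeros((na, m)), np.eye(na)]])
S0 = np.block([[S_y, np.zeros((n, na))], [np.zeros((na, n)), S_a]])
Si = np.linalg.inv(S0)

H = B @ np.linalg.inv(B.T @ Si @ B) @ B.T
P = Si @ (np.eye(n + na) - H @ Si)           # (50): P = Sigma_0^{-1} R
assert np.allclose(P, P.T)                   # symmetric
assert np.allclose(P @ S0 @ P, P)            # reproducing: P Sigma_0 P = P
assert np.allclose(P @ B, 0.0)               # P annihilates the design matrix
```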

Cite this article

Xu, P., Liu, J. Variance components in errors-in-variables models: estimability, stability and bias analysis. J Geod 88, 719–734 (2014). https://doi.org/10.1007/s00190-014-0717-9
