Abstract
We study the asymptotic and exact Fisher information (FI) matrices of Markov switching vector autoregressive moving average (MS VARMA) models. In a related paper (Cavicchioli 2017), we proposed a method to derive a closed-form expression for the asymptotic FI matrix of the underlying model, and used this matrix to obtain the asymptotic covariance matrix of the Gaussian maximum likelihood (ML) estimator of the parameters of the MS VARMA model. In the present paper, the exact FI matrix of a Gaussian MS VARMA process is considered for a time series of length T, in relation to the exact ML estimation method. Furthermore, we prove that the Gaussian exact FI matrix converges in probability to the asymptotic FI matrix as the sample size T goes to infinity.
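The convergence statement can be illustrated numerically in the simplest degenerate case: a single-regime Gaussian AR(1), for which the per-observation asymptotic FI for the autoregressive coefficient is \(1/(1-\phi^2)\). The following is only an illustrative sketch (the AR(1) setting, parameter values, and sample sizes are our own choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma = 0.5, 1.0

def avg_score_sq(T):
    """Average squared conditional score for phi in a Gaussian AR(1).

    The conditional score is d/dphi log f(y_t | y_{t-1}) = eps_t * y_{t-1} / sigma^2,
    so the sample average of its square estimates the per-observation FI,
    which equals 1 / (1 - phi^2) in this model.
    """
    eps = rng.normal(0.0, sigma, size=T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + eps[t]
    score = eps[1:] * y[:-1] / sigma**2
    return np.mean(score**2)

# As T grows, the average approaches the asymptotic value 1/(1 - 0.5^2) = 4/3.
for T in (1_000, 100_000):
    print(T, avg_score_sq(T))
```

As T increases, the sample average settles near 4/3, mirroring (in this toy setting) the convergence of the scaled exact FI matrix to the asymptotic one.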
References
Bao Y, Hua Y (2014) On the Fisher information matrix of a vector ARMA process. Econ Lett 123:14–16
Cavicchioli M (2014) Analysis of the likelihood function for Markov switching VAR(CH) models. J Time Series Anal 35(6):624–639
Cavicchioli M (2017) Asymptotic Fisher information matrix of Markov switching VARMA models. J Multivar Anal 157:124–135
Francq C, Zakoïan JM (2001) Stationarity of multivariate Markov switching ARMA models. J Econom 102:339–364
Hamilton JD (1990) Analysis of time series subject to changes in regime. J Econom 45:39–70
Hamilton JD (1994) Time series analysis. Princeton University Press, Princeton
Klein A, Mélard G (2014) An algorithm for the exact Fisher information matrix of a vector ARMAX time series. Linear Algebra Appl 446:1–24
Klein A, Spreij P (1996) On Fisher’s information matrix of an ARMAX process and Sylvester’s resultant matrices. Linear Algebra Appl 237–238:579–590
Klein A, Spreij P (2006) An explicit expression for the Fisher information matrix of a multiple time series process. Linear Algebra Appl 417(1):140–149
Klein A, Mélard G, Saidi A (2008) The asymptotic and exact Fisher information matrices of a vector ARMA process. Stat Probab Lett 78(12):1430–1433
Krolzig HM (1997) Markov-switching vector autoregressions: modelling, statistical inference and application to business cycle analysis. Springer, Berlin
Magnus JR, Neudecker H (1999) Matrix differential calculus with applications in statistics and econometrics, 2nd edn. Wiley, Chichester
Mélard G, Klein A (1994) On a fast algorithm for the exact information matrix of a Gaussian ARMA time series. IEEE Trans Signal Process 42:2201–2203
Peeters RLM, Hanzon B (1997) Symbolic computation of Fisher information matrices. In: Proceedings of the 1997 European Control Conference (ECC), Brussels
Söderström T, Stoica P (1989) System identification. Prentice-Hall, New York
Verlucchi M (2009) Regime switching: Italian financial markets over a century. Stat Methods Appl 18(1):67–86
Yang M (2000) Some properties of vector autoregressive processes with Markov-switching coefficients. Econ Theory 16:23–43
Zadrozny PA, Mittnik S (1994) Kalman filtering methods for computing information matrices for time-invariant periodic and generally time-varying VARMA models and samples. Comput Math Appl 28(4):107–119
Acknowledgements
This work was financially supported by a FAR (2017) research grant of the University of Modena and Reggio Emilia, Italy. We are grateful to the Editor-in-Chief, Prof. Tommaso Proietti, and to two anonymous referees for their very useful suggestions and remarks, which improved the final version of the paper.
Appendix
Proof of Theorem 2
Each equation below is understood to be evaluated at the ML estimates of the parameters, without explicit mention. To prove the theorem, we use matrix differential calculus and the linear algebra of matrices; see Magnus and Neudecker (1999). From Cavicchioli (2017), Appendix (A.2), it follows that
So it remains to prove the convergence in probability of \( {{\mathcal {J}}}_T ({{\varvec{\alpha }}}_m)\) to \({{\mathcal {F}}}_a ({{\varvec{\alpha }}}_m) \) as T goes to infinity. By Cavicchioli (2017), Appendix (A.2), the first derivatives of \({\text {log}}_e \eta _{m t} = {\text {log}}_e \eta _{m t} ({{\varvec{\theta }}})\) with respect to the components of \({{\varvec{\alpha }}}_m\) are given by
for all \(i = 1, \dots , p\), \(j = 1, \dots , q\) and \(m = 1, \dots , M\). Recall Equations (A.2)–(A.4) from Cavicchioli (2017):
for all \(i = 1, \dots , p\), \(j = 1, \dots , q\) and \(m = 1, \dots , M\). From the first order conditions (FOC) of the ML estimation method, we get
hence
where \({\widehat{{\varvec{u}}}}_t\) denotes the estimated residual. Now we compute the second derivatives of \({\text {log}}_e \eta _{m t}\) with respect to \({{\varvec{\nu }}}_m\) and the components of \({{\varvec{\alpha }}}_m\). By (13) and (16), we have
Thus
hence \( {\text {plim}}_{T \rightarrow \infty } \, {{\mathcal {J}}}_T ({{\varvec{\nu }}}_m) \, = \, \pi _m \, {{\varvec{\varTheta }}}_{m}^{- 1} (1)^\top \, {{\varvec{\varOmega }}}_{m}^{- 1}\, {{\varvec{\varTheta }}}_{m}^{- 1} (1) = {{\mathcal {F}}}_a ({{\varvec{\nu }}}_m) \) by (7). By (13) and (17), we obtain
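As a sanity check on the limit \( {\text {plim}}_{T \rightarrow \infty } \, {{\mathcal {J}}}_T ({{\varvec{\nu }}}_m) = \pi _m \, {{\varvec{\varTheta }}}_{m}^{- 1} (1)^\top \, {{\varvec{\varOmega }}}_{m}^{- 1}\, {{\varvec{\varTheta }}}_{m}^{- 1} (1)\), consider the degenerate case of a single regime (so \(\pi_m = 1\)) with no AR/MA part (so \({{\varvec{\varTheta }}}_m(1) = {{\varvec{I}}}\)): the observations are then i.i.d. \(N({{\varvec{\nu }}}, {{\varvec{\varOmega }}})\) and the limit reduces to \({{\varvec{\varOmega }}}^{-1}\). A minimal numerical sketch (the dimension and all parameter values are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 200_000, 2
nu = np.array([1.0, -0.5])
Omega = np.array([[1.0, 0.3], [0.3, 0.5]])
Omega_inv = np.linalg.inv(Omega)

# i.i.d. draws from N(nu, Omega): the degenerate case M = 1, p = q = 0.
y = rng.multivariate_normal(nu, Omega, size=T)

# Score with respect to nu for one observation: Omega^{-1} (y_t - nu).
scores = (y - nu) @ Omega_inv.T          # shape (T, K); row t is the score at time t

# Sample information matrix: average of outer products of the scores.
J_T = scores.T @ scores / T

print(J_T)
print(Omega_inv)  # J_T should be close to Omega^{-1}
```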
for all \(i = 1, \dots , p\) and \(m = 1, \dots , M\). Thus
Since \({{\mathcal {J}}}_T ({{\varvec{\gamma }}}_m, {{\varvec{\nu }}}_m) = {{\mathcal {J}}}_T ({{\varvec{\nu }}}_m, {{\varvec{\gamma }}}_m)^\top \), we have
by (10). By (13) and (18), we get
Using (19) gives
for all \(j = 1, \dots , q\) and \(m = 1, \dots , M\). Then \( {\text {plim}}_{T \rightarrow \infty } \, {{\mathcal {J}}}_T ({{\varvec{\nu }}}_m, {{\varvec{\delta }}}_m) = {{\varvec{0}}} = {{\mathcal {F}}}_a ({{\varvec{\nu }}}_m, {{\varvec{\delta }}}_m) \) because \(E(\mathbf{z}_{m, \tau } \, \xi _{m t | T}) = {{\varvec{0}}}\).
Finally, we compute the second derivatives of \({\text {log}}_e \eta _{m t}\) with respect to the pairs \(({{\varvec{\gamma }}}_m, {{\varvec{\gamma }}}_m)\), \(({{\varvec{\gamma }}}_m, {{\varvec{\delta }}}_m)\), and \(({{\varvec{\delta }}}_m, {{\varvec{\delta }}}_m)\). By (14) and (17), we have
for all \(i, j = 1, \dots , p\) and \(m = 1, \dots , M\). Thus
which implies \( {\text {plim}}_{T \rightarrow \infty } {{\mathcal {J}}}_T ({{\varvec{\gamma }}}_m) = {{\mathcal {F}}}_a ({{\varvec{\gamma }}}_m) \) by (8). By (14) and (18), we get
for every \( i = 1, \dots , p\), \( j = 1, \dots , q\) and \(m = 1, \dots , M\). Thus
Since \( {{\mathcal {J}}}_T ({{\varvec{\delta }}}_m, {{\varvec{\gamma }}}_m) = {{\mathcal {J}}}_T ({{\varvec{\gamma }}}_m, {{\varvec{\delta }}}_m)^\top \), we have
which is equal to \( {{\mathcal {F}}}_a ({{\varvec{\delta }}}_m, {{\varvec{\gamma }}}_m)\) by (11). By (15) and (18), we get
for every \(i, j = 1,\dots , q\) and \(m = 1, \dots , M\). Hence
where
Since \({\text {plim}}_{T \rightarrow \infty } \mathbf{R}_{m, T} (\ell + n, h + k) = \mathbf{R}_{m} (\ell + n, h + k)\), it follows that \( {\text {plim}}_{T \rightarrow \infty } \, {{\mathcal {J}}}_T ({{\varvec{\delta }}}_m) = {{\mathcal {F}}}_a ({{\varvec{\delta }}}_m) \) by (9). This completes the proof. \(\square \)
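The identification of the plims above with the blocks of \({{\mathcal {F}}}_a\) rests on the equality of the outer-product and Hessian forms of the information, which can itself be checked numerically in the scalar AR(1) special case: minus the average second derivative of the conditional log-density with respect to \(\phi\) and the average squared score both converge to \(1/(1-\phi^2)\). A hedged sketch (the AR(1) setting and all numbers are our own illustration, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)
phi, sigma, T = 0.5, 1.0, 200_000

# Simulate a Gaussian AR(1): y_t = phi * y_{t-1} + eps_t.
eps = rng.normal(0.0, sigma, size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]

ylag, resid = y[:-1], eps[1:]

# Outer-product form: average squared score d/dphi log f = resid * ylag / sigma^2.
outer = np.mean((resid * ylag / sigma**2) ** 2)

# Hessian form: minus the average of d^2/dphi^2 log f = -ylag^2 / sigma^2.
hessian = np.mean(ylag**2 / sigma**2)

print(outer, hessian, 1.0 / (1.0 - phi**2))  # all three should be close
```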
Cavicchioli, M. A note on the asymptotic and exact Fisher information matrices of a Markov switching VARMA process. Stat Methods Appl 29, 129–139 (2020). https://doi.org/10.1007/s10260-019-00472-y