Abstract

Estimation of the parameters of the reduced rank regression model by Bayesian methods requires the solution of two identification problems: global, or strong, identification and local identification. Traditionally, Bayesians, and to a large extent frequentists, have relied on zero-one identifying restrictions, which require the researcher to impose an ordering on the variables to achieve global identification. Examples of this approach include Geweke (1996), Bauwens and Lubrano (1993), Kleibergen (1997), Kleibergen and Paap (1997), and Kleibergen and van Dijk (1994). This ordering relies on a priori knowledge of which variables enter the reduced rank relations. For example, the cointegrating error correction model requires knowledge of which variables are I(0) or cointegrate. An incorrect ordering may result in an estimated space for the cointegrating vectors that does not have the true cointegrating space as a subset, effectively misspecifying the model. In this paper, we present an estimation method that does not require a priori ordering, using restrictions similar to those employed in maximum likelihood estimation by Anderson (1951) for the reduced rank regression model in general, and by Johansen (1988) for the error correction model in particular. As with much of the recent work, we focus on the cointegrating error correction model to illustrate our approach. Local identification is achieved by nesting the reduced rank model within a full rank model with a well-behaved posterior distribution. This approach is due to Kleibergen (1997) and is consistent with the principle of a "data-translated likelihood" suggested by Box and Tiao (1973). In nesting the reduced rank model within the full rank model, we use a transformation from the potentially reduced rank matrix Π to the matrices α, β and λ, where λ = 0 restricts Π to a lower rank. Results from Roy (1952) enable us to derive the Jacobian of this transformation.
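As a point of reference, the following is a minimal sketch of the kind of transformation the abstract describes, assuming it takes the form used by Kleibergen and Paap (1997) for a k-dimensional error correction model with cointegrating rank r; the paper's exact normalization may differ:

\[
\Delta y_t = \Pi\, y_{t-1} + \varepsilon_t, \qquad
\Pi = \beta\alpha + \beta_{\perp}\,\lambda\,\alpha_{\perp},
\]

where \(\beta\) is \(k \times r\), \(\alpha\) is \(r \times k\), \(\beta_{\perp}\) and \(\alpha_{\perp}\) are orthogonal complements of dimensions \(k \times (k-r)\) and \((k-r) \times k\), and \(\lambda\) is \((k-r) \times (k-r)\). Setting \(\lambda = 0\) leaves \(\Pi = \beta\alpha\), which has rank at most \(r\); the Jacobian of the mapping from \(\Pi\) to \((\alpha, \beta, \lambda)\) is what the abstract derives using results from Roy (1952).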
