In 1967 Kakwani showed that any estimator whose sampling error has the form H(ε)ε is unbiased, provided that the estimator has a finite expectation, H is an even function of ε, and ε is symmetrically distributed about zero. Fuller and Battese (1973) used this result to establish unbiasedness of the GLS estimator in a variance components model, and Mehta and Swamy (1976) did the same for a variant of Zellner's two-step estimator under normality. In this paper we prove that Kakwani's result applies to the iterated GLS estimator at each step of the iteration, and we derive conditions for the finiteness of the expectations, which together imply unbiasedness of the iterated GLS estimators under certain stated conditions. Assuming normal disturbances, unbiasedness of the maximum likelihood estimator is proved under assumptions made earlier by Oberhofer and Kmenta (1974) to guarantee convergence of the iteration procedure to the solution of the ML equations.
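The symmetry argument can be illustrated numerically. The sketch below (an assumption-laden toy, not the paper's model: a hypothetical two-group heteroskedastic regression with a two-step FGLS estimator) shows why the sampling error has the form H(ε)ε with H even: the estimated weights depend on ε only through squared OLS residuals, so replacing ε by −ε leaves H unchanged and exactly negates the sampling error, which forces a zero mean whenever ε is symmetric and the expectation exists.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])
# two variance groups, in the spirit of a variance components structure
groups = np.repeat([0, 1], n // 2)

def fgls_error(eps):
    """Sampling error beta_hat - beta of a two-step FGLS estimator.

    The error depends on the data only through eps, in the form
    H(eps) @ eps, where H is even in eps because the group variance
    estimates use squared OLS residuals.
    """
    y = X @ beta + eps
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b_ols                      # resid(-eps) = -resid(eps)
    s2 = np.array([np.mean(resid[groups == g] ** 2) for g in (0, 1)])
    w = 1.0 / s2[groups]                       # even function of eps
    XtW = X.T * w
    b_fgls = np.linalg.solve(XtW @ X, XtW @ y)
    return b_fgls - beta

# symmetric (normal) disturbances with group-wise variances
eps = rng.normal(size=n) * np.where(groups == 0, 1.0, 3.0)
e_plus = fgls_error(eps)
e_minus = fgls_error(-eps)
# antisymmetry: H(-eps)(-eps) = -H(eps) eps
assert np.allclose(e_plus, -e_minus)
```

Because the error at −ε is the exact negative of the error at ε, the sampling errors pair off and average to zero over any symmetric distribution of ε; the remaining question, addressed in the paper, is whether the expectation is finite at all.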

