Abstract

In this paper it is argued that conclusions cannot be sturdy if they are based upon unchecked dogmatic prior information. The vehicle chosen to evaluate models is their out-of-sample prediction performance. If model M predicts systematically better than model N, we should stop using N; but if the difference in predictive quality is mainly caused by a few very influential observations, there is reason for serious doubt. The testing point of view of McAleer et al. (1985) and many others is adopted, but it is demonstrated that some of the tests may be misleading. The author agrees with the conclusion of Leamer (1985) that sensitivity analysis is important, but he prefers different tools of analysis and a different reporting style.
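The comparison described in the abstract can be illustrated with a small, hypothetical sketch: two competing models are scored on their out-of-sample squared prediction errors, and the comparison is then repeated after dropping the hold-out observations that contribute most to the gap, to see whether the apparent advantage of one model rests on a few influential points. The simulated data, variable names, and the choice of ordinary least squares below are illustrative assumptions, not the paper's own setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y depends on x1; model M uses (x1, x2), model N uses x2 only.
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(scale=1.0, size=n)

def ols_predict(X_train, y_train, X_test):
    """Fit OLS on the training sample and predict the hold-out sample."""
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return X_test @ beta

# Hold-out split: fit on the first half, predict the second half.
split = n // 2
X_M = np.column_stack([np.ones(n), x1, x2])  # model M
X_N = np.column_stack([np.ones(n), x2])      # model N
pred_M = ols_predict(X_M[:split], y[:split], X_M[split:])
pred_N = ols_predict(X_N[:split], y[:split], X_N[split:])

err_M = (y[split:] - pred_M) ** 2
err_N = (y[split:] - pred_N) ** 2
print("mean squared prediction error, M vs N:", err_M.mean(), err_N.mean())

# Sensitivity check: is the advantage of M driven by a few observations?
# Drop the hold-out points with the largest error difference and recompare.
diff = err_N - err_M
keep = np.argsort(diff)[:-5]  # discard the 5 points that favour M the most
print("after dropping 5 most influential points:",
      err_M[keep].mean(), err_N[keep].mean())
```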
