This paper examines the practice of correcting the Linear Regression Model (LRM) for serial correlation by modeling the error term. Simple Monte Carlo experiments are used to demonstrate the following points regarding this practice. First, the common factor restrictions implicitly imposed on the temporal structure of y_t and x_t appear to be wholly unrealistic for any real-world application. Second, comparing estimates from the Autocorrelation-Corrected LRM (ACLRM) with estimates from the (unrestricted) Dynamic Linear Regression Model (DLRM) that encompasses it reveals no significant gain in efficiency. Third, as expected, when the common factor restrictions do not hold, the LRM gives poor estimates of the true parameters, and estimating the ACLRM simply produces different misleading results; estimates from the DLRM and the corresponding VAR model, by contrast, are very reliable. Fourth, the power of the usual Durbin-Watson (DW) test of autocorrelation is much higher when the common factor restrictions hold than when they do not, whereas a more general test of autocorrelation performs almost as well as the DW test when the restrictions hold and significantly better when they do not. Fifth, we demonstrate how simple it is to test the common factor restrictions, and we illustrate how powerful this test can be.
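The contrast between the autocorrelation-corrected LRM and the unrestricted DLRM can be sketched in a small simulation. The following is a minimal illustration, not the paper's actual experimental design: all parameter values are hypothetical choices, the data-generating process is a DLRM whose coefficients violate the common factor restriction a2 = -a1*a3, and the "correction" is the textbook iterated Cochrane-Orcutt procedure implemented by hand with NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate from a Dynamic Linear Regression Model (DLRM) whose parameters
# VIOLATE the common factor restriction a2 = -a1*a3 (here a2 + a1*a3 = 0.6).
# These numbers are illustrative choices, not the paper's designs.
a0, a1, a2, a3 = 1.0, 0.5, 0.3, 0.6
T = 20000
x = rng.normal(size=T)
e = rng.normal(scale=0.5, size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a0 + a1 * x[t] + a2 * x[t - 1] + a3 * y[t - 1] + e[t]

# 1) OLS on the unrestricted DLRM: regress y_t on (1, x_t, x_{t-1}, y_{t-1}).
X = np.column_stack([np.ones(T - 1), x[1:], x[:-1], y[:-1]])
beta_dlrm, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# 2) "Autocorrelation-corrected" LRM via iterated Cochrane-Orcutt:
#    y_t = b0 + b1*x_t + u_t with u_t = rho*u_{t-1} + eps_t.
b = np.linalg.lstsq(np.column_stack([np.ones(T), x]), y, rcond=None)[0]
rho = 0.0
for _ in range(100):
    u = y - b[0] - b[1] * x                      # residuals in levels
    rho = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])   # AR(1) coefficient of residuals
    ys = y[1:] - rho * y[:-1]                    # quasi-differenced data
    xs = x[1:] - rho * x[:-1]
    bs = np.linalg.lstsq(np.column_stack([np.ones(T - 1), xs]), ys, rcond=None)[0]
    b = np.array([bs[0] / (1 - rho), bs[1]])     # map intercept back to levels

# The unrestricted DLRM recovers (a0, a1, a2, a3), while the ACLRM slope b[1]
# settles well away from the true a1 because the restriction fails.
# The common factor restriction itself is easy to check from the DLRM fit:
cf = beta_dlrm[2] + beta_dlrm[1] * beta_dlrm[3]  # ~0.6 here, far from 0
print("DLRM:", beta_dlrm.round(3))
print("ACLRM slope:", round(b[1], 3), "rho:", round(rho, 3), "cf:", round(cf, 3))
```

Under these illustrative parameters the DLRM estimates land close to the true values, the Cochrane-Orcutt slope converges to a point well below a1 = 0.5, and the estimated quantity a2 + a1*a3 is clearly nonzero, mirroring the abstract's third and fifth points in miniature.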