Past literature on managed futures funds has found little evidence that top-performing funds can be predicted. However, past studies relied on small datasets and on methods with little power to reject the null hypothesis of no performance persistence. The objective of this research is to determine whether performance persists for managed futures advisors, using large datasets and methods that do have power to reject that null hypothesis. We use data from public funds, private funds, and commodity trading advisors (CTAs). The analysis proceeds in four steps. First, a regression approach is used to determine whether, after adjusting for changes in overall returns and for differences in leverage, all funds have the same mean return. Second, we use Monte Carlo methods to demonstrate that Elton, Gruber, and Rentzler's methods have little power to reject false null hypotheses and reject true null hypotheses too often. Third, we conduct an out-of-sample test of various methods of selecting the top funds. Fourth, since we do find some performance persistence, we seek to explain its sources using regressions of (a) returns against CTA characteristics, (b) return risk against CTA characteristics, (c) returns against lagged returns, and (d) changes in investment against lagged returns. The performance persistence could arise from either differences in cost or differences in manager skill. Our results favor skill as the explanation, since returns were positively correlated with cost. The performance persistence is statistically significant, but it is small relative to the variation in the data (only 2-4% of the total variation); it is, however, large relative to the mean return.
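The power argument at the heart of the Monte Carlo step can be illustrated with a small simulation. The sketch below is not the paper's actual procedure; it is a hypothetical stand-in for a ranking-based persistence test (rank funds by one period's returns, then test whether the top half outperforms the bottom half in the next period). All parameter values (fund count, skill and noise standard deviations) are illustrative assumptions. With no cross-fund skill differences the rejection rate estimates the test's size; with skill differences it estimates power.

```python
# Hypothetical Monte Carlo sketch of the size/power of a simple ranking-based
# performance-persistence test. Not the paper's code; parameters are assumed.
import math
import random

def simulate_rejection_rate(n_funds=20, n_reps=2000, skill_sd=0.0,
                            noise_sd=0.10, seed=0):
    """Fraction of replications in which a two-sample test on period-2 returns
    (top half vs. bottom half, ranked by period-1 returns) rejects at the
    5% level. skill_sd = 0 gives the test's size; skill_sd > 0 its power."""
    rng = random.Random(seed)
    crit = 1.96  # normal approximation to the 5% two-sided critical value
    half = n_funds // 2
    rejections = 0
    for _ in range(n_reps):
        # Each fund has a fixed "skill" plus independent period noise.
        skills = [rng.gauss(0.0, skill_sd) for _ in range(n_funds)]
        r1 = [s + rng.gauss(0.0, noise_sd) for s in skills]  # ranking period
        r2 = [s + rng.gauss(0.0, noise_sd) for s in skills]  # holdout period
        order = sorted(range(n_funds), key=lambda i: r1[i], reverse=True)
        top = [r2[i] for i in order[:half]]
        bot = [r2[i] for i in order[half:]]
        mean_t, mean_b = sum(top) / half, sum(bot) / half
        var_t = sum((x - mean_t) ** 2 for x in top) / (half - 1)
        var_b = sum((x - mean_b) ** 2 for x in bot) / (half - 1)
        se = math.sqrt(var_t / half + var_b / half)
        if se > 0 and abs(mean_t - mean_b) / se > crit:
            rejections += 1
    return rejections / n_reps

size = simulate_rejection_rate(skill_sd=0.0)    # true null: no persistence
power = simulate_rejection_rate(skill_sd=0.08)  # false null: skill differences
```

With only 20 funds and noisy returns, the simulated power stays well below one even when genuine skill differences exist, which is the kind of evidence the Monte Carlo step uses against low-powered tests on small samples.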