Out-of-sample testing is a critical component of designing and evaluating trading systems. Trading systems are often developed and optimized on historical data, which can lead to overfitting: the system becomes excessively tuned to past data and performs poorly on new, unseen data. Out-of-sample testing evaluates the system on data that was not used during development, allowing traders to gauge its performance on new data and assess its robustness to market changes. By testing the system on a separate, distinct dataset, traders can be more confident that its performance is not simply due to chance or overfitting, and that it is more likely to hold up in future market conditions.
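To make the idea concrete, below is a minimal sketch of a simple in-sample/out-of-sample split. The simulated daily returns, the 70/30 split, and the annualization constant are illustrative assumptions, not a fixed standard.

```python
# A minimal out-of-sample evaluation sketch, assuming daily strategy returns
# in a NumPy array. The simulated returns stand in for a real strategy.
import numpy as np

def annualized_sharpe(returns, periods_per_year=252):
    """Annualized Sharpe ratio of a series of periodic returns."""
    return np.sqrt(periods_per_year) * returns.mean() / returns.std(ddof=1)

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0004, 0.01, size=2520)  # placeholder for real strategy returns

split = int(len(daily_returns) * 0.7)  # first 70% in-sample, rest out-of-sample
in_sample, out_of_sample = daily_returns[:split], daily_returns[split:]

print(f"In-sample Sharpe:     {annualized_sharpe(in_sample):.2f}")
print(f"Out-of-sample Sharpe: {annualized_sharpe(out_of_sample):.2f}")
```

A large gap between the two numbers is a warning sign that the in-sample result owes more to fitting noise than to a genuine edge.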
This makes out-of-sample testing a crucial step for making informed, effective decisions in dynamic, ever-changing financial markets. But is it free of well-known biases such as overfitting, data snooping, and look-ahead? Reference [1] investigated this issue. It pointed out:
In this paper, we examine the sources of excessively large Sharpe ratios associated with popular multifactor asset pricing models. Sharpe ratios remain too large to reconcile with leading economic models after applying simple, robust estimates of tangency portfolio weights, as well as under conventional pseudo-out-of-sample research designs that rely only on past data. We argue that the most compelling explanation behind these excessive Sharpe ratios involves a subtle form of look-ahead bias such that factors included in models, or alternatively the characteristics and portfolios from which factors are extracted, are selected based on prior research outcomes linking such characteristics with cross-sectional variation in returns…
Our results have a variety of implications. First, researchers should be cautious in interpreting common out-of-sample research designs as providing assessments of factor models that are free of hindsight bias, because the samples analyzed often overlap heavily with samples previously analyzed in the literature establishing anomalous return patterns. Given the continuous and organic nature of asset pricing research, it is difficult to conduct bias-free validation analyses, but our paper attempts to make progress in this direction. Second, we interpret the much smaller Sharpe ratios associated with popular multifactor models that we obtain using alternative evaluation approaches as good news. This is because real-time investors who ‘factor invest’ using these models after they are proposed do not achieve exorbitant Sharpe ratios.
In short, out-of-sample testing also suffers, albeit subtly, from biases such as overfitting, data-snooping, and look-ahead.
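To illustrate the kind of pseudo-out-of-sample design the paper critiques, here is a hedged sketch: tangency-portfolio weights are re-estimated each month using only past data, yet the factors themselves are taken as given, which is precisely where the subtle look-ahead the authors describe can creep in, since those factors were chosen with knowledge of prior research outcomes. The simulated factor returns, window length, and weight normalization are illustrative assumptions, not the paper's actual methodology.

```python
# A pseudo-out-of-sample tangency-portfolio sketch: at each month t, estimate
# the mean and covariance of factor returns from data up to t only, form
# tangency weights (proportional to inverse-covariance times mean), and earn
# the next month's return. Factor returns here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
T = 600                                          # 600 months of history
factors = rng.multivariate_normal(
    mean=[0.005, 0.003, 0.002],                  # 3 hypothetical factors
    cov=np.diag([0.002, 0.001, 0.001]),
    size=T,
)

min_window = 120                                 # require 10 years before trading
oos_returns = []
for t in range(min_window, T):
    past = factors[:t]                           # information available at time t
    mu, sigma = past.mean(axis=0), np.cov(past, rowvar=False)
    w = np.linalg.solve(sigma, mu)               # tangency weights up to scaling
    w /= np.abs(w).sum()                         # normalize gross exposure
    oos_returns.append(w @ factors[t])           # realized return in month t

oos_returns = np.array(oos_returns)
sharpe = np.sqrt(12) * oos_returns.mean() / oos_returns.std(ddof=1)
print(f"Pseudo-out-of-sample annualized Sharpe: {sharpe:.2f}")
```

Note that nothing in this loop peeks at future data, yet the design is still not hindsight-free: the decision to include these particular factors happened outside the loop.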
We agree with the authors. We further believe that out-of-sample tests such as walk-forward analysis suffer from selection bias, since the rules and parameters that reach the final test were themselves selected for their past performance.
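For concreteness, below is a minimal walk-forward sketch: a toy moving-average rule is re-optimized on each training window and then applied to the following test window. The strategy, parameter grid, and window sizes are hypothetical choices for illustration; the point is that the in-sample selection step survives into every "out-of-sample" segment.

```python
# A minimal walk-forward sketch with a toy moving-average strategy whose
# lookback is re-optimized on each training window. The selection step itself
# can overfit, which walk-forward testing does not eliminate.
import numpy as np

def strategy_returns(prices, lookback):
    """Long when price is above its moving average, flat otherwise."""
    returns = np.diff(prices) / prices[:-1]
    ma = np.convolve(prices, np.ones(lookback) / lookback, mode="valid")
    signal = (prices[lookback - 1:-1] > ma[:-1]).astype(float)
    return signal * returns[lookback - 1:]

rng = np.random.default_rng(2)
prices = 100 * np.cumprod(1 + rng.normal(0.0003, 0.01, size=2000))

train, test, oos = 500, 250, []
for start in range(0, len(prices) - train - test, test):
    train_px = prices[start : start + train]
    test_px = prices[start + train : start + train + test]
    # Pick the lookback with the best in-sample mean return: this very
    # selection step is where bias enters.
    best = max(range(10, 60, 10),
               key=lambda lb: strategy_returns(train_px, lb).mean())
    oos.extend(strategy_returns(test_px, best))

oos = np.array(oos)
print(f"Walk-forward Sharpe: {np.sqrt(252) * oos.mean() / oos.std(ddof=1):.2f}")
```

Even though each test window is untouched during optimization, a researcher who tries many strategy families and only reports the one with the best walk-forward result reintroduces the same selection bias one level up.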
Then how do we minimize these biases?
Let us know what you think in the comments below or in the discussion forum.
References
[1] Easterwood, Sara and Paye, Bradley S. (2023). High on High Sharpe Ratios: Optimistically Biased Factor Model Assessments. https://ssrn.com/abstract=4360788