
Combinatorial PurgedKFold Cross-Validation for Deep Reinforcement Learning

Last Updated on April 4, 2022 by Editorial Team

Author(s): Berend Gort


Image by Author.

This article is written by Berend Gort & Bruce Yang, core team members of the open-source project AI4Finance. The project is an open-source community sharing AI tools for finance and is part of Columbia University in New York. GitHub link:

AI4Finance Foundation

Introduction

Our previous article described the Combinatorial PurgedKFold Cross-Validation method in detail for classifiers (or regressors) with regular predictions. This strategy allows for several backtest paths instead of a single historical one.

What qualifies as a valid empirical finding? A researcher will almost always identify a misleadingly profitable technique, a false positive, after many trials. Random samples contain patterns, and a methodical search across a broad space of methods will eventually discover one that profits from the random data's chance configuration¹. An investment strategy is likely to be fit to such flukes, because its parameters are tuned to maximize the performance of a backtest. "Backtest overfitting" is the term for this phenomenon.

A statistically sound analysis can be performed on multiple backtest paths, reducing the risk of false discovery to a negligible level. Our previous article described what traditional Combinatorial Purged Cross-Validation can do for regular machine learning backtests. The previous article:

The Combinatorial Purged Cross-Validation method

In this article, we write about an idea, not its execution. We are looking for feedback on the ideation process and want to open a discussion.

The novel idea is Combinatorial PurgedKFold Cross-Validation (CPCV) Deep Reinforcement Learning (DRL); CPCV DRL. There are some real concerns about the feasibility of this idea, and the solution might be problematic.

Applying these ideas to DRL is desirable because the method is sample-efficient in terms of backtests. Specifically, many coins in the crypto market have only been listed since roughly 2018, so limited data is available to train and test a model. The rest of the rationale can be found in our previous article.

Short intro: DRL for trading

Successful traders have their own trading systems and can make sound judgments based on fundamental and technical analyses. However, due to the information asymmetry of the crypto market, a lack of trading expertise, and human frailty, these analysis approaches are typically challenging for ordinary investors to put into practice. Reinforcement learning agents can learn effective policies for sequential decision problems and have surpassed humans in various tasks, for example, the 2019 (!) Dota 2 World Championship. Just imagine: it is already 2022.

So, can an RL agent train, learn from trading experiences, and automatically make good trading decisions? If so, what is the trading performance of such an agent?

DRL requires an environment for the crypto trading space that fully represents the actual exchange. This environment is used to train and test the DRL agents. For example, in Atari games, the state is defined as the last X consecutive game screens, which is a complete depiction of the environment. Unlike that deterministic and noise-free environment, crypto prices are known to be volatile, and the trading environment changes with time.

In crypto trading, the state vector could contain all of the features from the dataset plus information about the trading account: for example, how much cash is in the account, the technical indicators, and the amount of crypto the account holds for each tradeable pair. Figure 1 shows how a DRL agent could be profitable on a cryptocurrency exchange. There is an agent (the trader) that can perform an action (buy/sell/hold). The environment, in this case, is the trading market, and the reward is the profit.

Fig 1: DRL structure for cryptocurrency trading: a trader (agent) that can either buy, hold, or sell (action) on a CFD exchange (environment) in order to obtain profits (reward). Image by Author.
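To make this state vector concrete, here is a minimal sketch of how it could be assembled; the function name, the feature choices, and the example numbers are illustrative assumptions, not the actual AI4Finance implementation.

```python
import numpy as np

def build_state(cash, holdings, prices, tech_indicators):
    """Flatten account information and dataset features into one state vector."""
    return np.concatenate([
        [cash],                                            # cash in the trading account
        np.asarray(holdings, dtype=float),                 # units held per tradeable pair
        np.asarray(prices, dtype=float),                   # latest price per tradeable pair
        np.asarray(tech_indicators, dtype=float).ravel(),  # e.g., MACD/RSI per pair
    ])

# Example: 4 pairs and 2 indicators per pair -> a 1 + 4 + 4 + 8 = 17-dim state.
state = build_state(
    cash=1_000_000.0,
    holdings=[0.0, 0.0, 0.0, 0.0],
    prices=[43_000.0, 3_200.0, 410.0, 98.0],
    tech_indicators=np.zeros((4, 2)),
)
print(state.shape)  # (17,)
```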

OK, that was the quick summary. From here, we will assume the reader knows about DRL (trading) algorithms and wants to backtest a strategy. In the next section, we briefly describe how to perform a 'single' backtest.

Traditional DRL Backtesting

Benchmark

The goal of the trading bot is to maximize portfolio value whilst hedging risks like market crashes. Therefore, the bot needs to be benchmarked against other trading strategies for a meaningful comparison.

Equal weight (EQW) is a proportional measurement technique that assigns equal weight to each cryptocurrency in a portfolio, index, or index fund. In straightforward terms: if the trading account has 1 million US dollars and there are 4 cryptocurrencies of interest, distribute 250k to each coin, which is called an EQW-portfolio.

Thus, when measuring the performance of the group as a whole, the smallest coins in the portfolio are assigned the same statistical importance, or weight, as the biggest ones. The initial portfolio value is distributed according to:
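(The original equation image is not reproduced here; based on the description above, the allocation is presumably the following, where V_0 is the initial capital, D the number of tradeable pairs, p_{i,0} the initial price of pair i, and n_i the resulting holding. The notation is ours.)

```latex
n_i = \frac{V_0 / D}{p_{i,0}}, \qquad i = 1, \dots, D
```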

After that, the current value of the portfolio is computed as:
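(Again a reconstruction of the missing equation, with p_{i,t} the price of pair i at time t.)

```latex
V_t = \sum_{i=1}^{D} n_i \, p_{i,t}
```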

The return of this strategy is then the final portfolio value divided by the initial portfolio value. Thus, the equal-weight investment strategy is equivalent to distributing the cash equally to each crypto trading pair and leaving it there.

Sharpe Ratio

The Sharpe ratio is a financial statistic that compares the performance of an investment (e.g., a crypto investment) to that of a risk-free asset after risk is taken into account. It is defined as the difference between the investment's returns and the risk-free rate of return, divided by its standard deviation (i.e., its volatility). It represents the extra return an investor obtains for taking on extra risk. The Sharpe ratio is one of the most widely used methods for calculating risk-adjusted return. The formula for the Sharpe ratio is as follows:
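(The formula image is missing from the extracted text; given the definitions below, it is the standard Sharpe ratio.)

```latex
S_a = \frac{E\left[R_a - R_b\right]}{\sigma_a}
```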

R(a) is the asset return, and R(b) is the risk-free return. E[R(a) − R(b)] is the expected value of the excess of the asset return over the benchmark return, and σ(a) is the standard deviation of the asset's excess return. The DRL agent's goal is to maximize return whilst minimizing risk. Therefore, the Sharpe ratio will be used as a target for hyperparameter optimization.

Model Selection

The historical dataset exists for three different periods:

  • Training data (in-sample, IS)
  • Validation data (in-sample, IS)
  • Trade data (out-of-sample, OOS)

As shown in Figure 2, the IS period consists of the data for the training and validation stages. During hyperparameter optimization, the train and validation sets are used to obtain a well-trained model. After, for example, 100 trials, the model with the highest Sharpe ratio is selected. This selection step is displayed in Figure 2 by the yellow 'eval' checkpoint.

If the Sharpe ratio of the trained model is the highest so far during this particular optimization, the model is saved for the next phase (model simulation).

Fig 2: Data splitting, training, and validating during the model selection phase. The best model from the validation evaluation results is saved to disk for real-time deployment purposes. Image by Author.
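As a rough sketch of this selection loop (a minimal illustration assuming caller-supplied train_fn and eval_fn wrappers around the actual DRL training and walk-forward evaluation; none of these names come from the authors' code):

```python
import random

def select_model(train_fn, eval_fn, train_data, val_data, n_trials=100):
    """Single-split model selection: run n_trials hyperparameter trials and keep
    the agent with the highest validation Sharpe ratio (the 'eval' checkpoint
    in Figure 2). train_fn(data, **params) returns a trained agent and
    eval_fn(agent, data) returns its validation Sharpe ratio."""
    best_sharpe, best_agent = float("-inf"), None
    for _ in range(n_trials):
        # Illustrative hyperparameter search space.
        params = {
            "learning_rate": 10 ** random.uniform(-5, -3),
            "gamma": random.uniform(0.9, 0.999),
        }
        agent = train_fn(train_data, **params)
        sharpe = eval_fn(agent, val_data)
        if sharpe > best_sharpe:
            best_sharpe, best_agent = sharpe, agent   # kept for the simulation phase
    return best_agent, best_sharpe
```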

Model Simulation Example

After that, the out-of-sample trade data is used to backtest the real-time trading performance (Figure 3) of the 'best' agent. Because the DRL agent is neither trained nor validated on the OOS data, the results can be treated as an estimate of real-time trading performance.

Fig 3: After the best model emerges from the model selection phase, the remaining out-of-sample trade data is used to evaluate the real-time performance of the DRL agent. Image by Author.

In this stage, the actual profitability of the agent is computed. There are five main metrics to evaluate the results; a minimal computation sketch follows the list below.

  • Cumulative return: subtract the portfolio's starting value from its final value and divide by the starting value.
  • Annualized return: the geometric mean of the strategy's annual earnings.
  • Annualized volatility: the annualized standard deviation of the portfolio return.
  • Sharpe ratio: subtract the annualized risk-free rate from the annualized return and then divide by the annualized volatility.
  • Max drawdown: the largest peak-to-trough decline of the portfolio value over the time frame.
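Below is one plausible way to compute these five metrics from a series of portfolio values; the function name, the geometric annualization, and the periods_per_year convention (e.g., 365 * 24 / 2 = 4380 for 2h candles) are our assumptions:

```python
import numpy as np

def backtest_metrics(portfolio_values, periods_per_year, risk_free_rate=0.0):
    """Compute the five evaluation metrics from portfolio values sampled at
    every candle; periods_per_year must match the candle width."""
    values = np.asarray(portfolio_values, dtype=float)
    returns = values[1:] / values[:-1] - 1.0                     # per-period simple returns

    cumulative_return = values[-1] / values[0] - 1.0
    n_years = len(returns) / periods_per_year
    annualized_return = (1.0 + cumulative_return) ** (1.0 / n_years) - 1.0
    annualized_volatility = returns.std(ddof=1) * np.sqrt(periods_per_year)
    sharpe_ratio = (annualized_return - risk_free_rate) / annualized_volatility

    running_max = np.maximum.accumulate(values)
    max_drawdown = ((values - running_max) / running_max).min()  # most negative dip

    return {
        "cumulative_return": cumulative_return,
        "annualized_return": annualized_return,
        "annualized_volatility": annualized_volatility,
        "sharpe_ratio": sharpe_ratio,
        "max_drawdown": max_drawdown,
    }
```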

Explanatory Analysis and Results

The results are given in Figure 4. The 1m, 5m, …, 2h, and 4h labels are the candle widths of the crypto data. These results are obtained by running the hyperparameter tuner for 100 trials on each timeframe and then running the model simulation process.

Fig 4: Table showing the metrics of trading on different timeframes. Image by Author.

Figure 4 demonstrates that the DRL agents outperform the equal-weight benchmark from multiple perspectives. As can be seen from Fig 4, every single DRL agent outperforms the benchmark in terms of the Sharpe ratio: the equal-weight strategy achieves 0.02, compared to 0.47 for the 2h agent and 0.55 for the 4h agent. The annualized return of the agents is significantly higher as well; the equal-weight strategy manages 47.2%, which is worse than the agents over this particular timeframe. These findings demonstrate that the proposed DRL agent can effectively develop a trading strategy that outperforms the baseline.

Based on Figure 4, either the 2h or the 4h timeframe seems to be best. The excess returns on the 2h timeframe are significant enough to forgo the lower annualized volatility of the 4h timeframe. Therefore, the 2h model is selected and will be deployed on the OOS trade data. Figure 5 compares the selected model with the benchmark, visually showing that it is less volatile, has a smaller drawdown, carries less risk, and achieves higher returns.

Fig 5: Out-of-sample trading results of the 2h timeframe DRL agent (PPO) compared to the equal-weight strategy. Image by the author.

The Backtest Fluke

  • We repeat:

A researcher will almost always identify a misleadingly profitable technique, a false positive, after many trials. Random samples contain patterns, and a methodical search across a broad space of methods will eventually discover one that profits from the random data's chance configuration¹. An investment strategy is likely to be fit to such flukes, especially when a set of parameters is tuned to maximize the performance of a backtest. "Backtest overfitting" is the term for this phenomenon.

  • Concerning the previous section:

We suspect there is an issue in the Model Selection part of the proposed backtest strategy. As stated, we have performed 100 hyperparameter trials on the validation results and selected the model that delivers the highest Sharpe ratio for that timeframe. The more trials we do, the higher the probability that we just got 'lucky' on the validation set.

Let us move on to the novel idea, which reduces the risk that the hyperparameter optimizer finds a 'lucky' result. We do not want to get lucky; we want to be calculated.

The novel idea: CPCV DRL

If you have not read our previous article yet, we advise you to read it first. In that article, we describe the Combinatorial PurgedKFold Cross-Validation method for traditional machine learning. Article:

The Combinatorial Purged Cross-Validation method

We present the main result from the CombPurgedKFoldCV method (Figure 6).

Figure 6: Combinatorial PurgedKFoldCV. The data can be split in 15 possible ways (S1-S15) with N=6 groups, of which k=2 are validation groups. Image by Author.

For each data split, it is the same story as with traditional cross-validation: we train on the blue data and test on the red data. However, in the next section, we present how Combinatorial PurgedKFold Cross-Validation could be applied to DRL.

Fig 7: Example of a single split and the actions predicted during validation. Image by Author.

See Figure 7; we save all the actions during the validation stage instead of looking only at the Sharpe ratio for one split. Thus, we train and validate as usual, except we look at the best-predicted actions of the DRL agent, which represent the basic trading strategy.

Note that we have now highlighted the validation groups (Figure 2, Figure 6).

  • Split 1 || train: (G3, S1), (G4, S1), (G5, S1), (G6, S1) || validation: (G1, S1), (G2, S1) || → DRL agent 1 → actions y(1,1), y(2,1) on (G1, S1), (G2, S1)
  • Split 2 || train: (G2, S2), (G4, S2), (G5, S2), (G6, S2) || validation: (G1, S2), (G3, S2) || → DRL agent 2 → actions y(1,2), y(3,2) on (G1, S2), (G3, S2)
  • …
  • Split 15 || train: (G1, S15), (G2, S15), (G3, S15), (G4, S15) || validation: (G5, S15), (G6, S15) || → DRL agent 15 → actions y(5,15), y(6,15) on (G5, S15), (G6, S15)

DRL itself is a walk-forward algorithm; it is a sequential algorithm. So for each split, the DRL agent will do a walk-forward path through the validation data to get actions. We save these actions in a matrix corresponding to the groups and splits presented in Figure 6.

See Figure 8 for the new Model Selection process. First, we choose which part to use for the in-sample part (train and validation data) and which data we want to do the final backtest on (trade data). After that, we train the DRL agent on the train data of each split (Figure 6) and evaluate each agent in a walk-forward manner on the validation data, producing actions at each data point.

These actions at each data point are stored according to Figure 6. Therefore, we can see the actions produced by the DRL agent in the validation stage as 'predictions' (like in the previous article). These 'actions/predictions' represent the basic trading strategy produced for that time period. Finally, just like in the previous article, we can form 5 unique paths on which we can backtest our strategy.
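The following is a minimal sketch of how the 15 splits of Figure 6 can be enumerated and how the stored validation actions are stitched into the 5 backtest paths; purging/embargoing and the actual action matrices are omitted, and all names are ours:

```python
from itertools import combinations

N_GROUPS, K_VAL = 6, 2                                   # groups G1..G6, two validation groups per split

# All C(6, 2) = 15 ways of choosing the validation groups (splits S1..S15).
splits = list(combinations(range(N_GROUPS), K_VAL))

# Each group is a validation group in exactly 5 splits, so the stored actions
# can be stitched into phi = k * C(N, k) / N = 5 full backtest paths.
n_paths = K_VAL * len(splits) // N_GROUPS                # = 5
path_of = [[None] * n_paths for _ in range(N_GROUPS)]    # path_of[g][j] = split filling group g on path j
seen = [0] * N_GROUPS
for s, val_groups in enumerate(splits):
    for g in val_groups:
        path_of[g][seen[g]] = s
        seen[g] += 1

for j in range(n_paths):
    print(f"path {j + 1}:", [f"G{g + 1}<-S{path_of[g][j] + 1}" for g in range(N_GROUPS)])
```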

In our hyperparameter optimizer, we use the average Sharpe ratio over all 5 backtest paths as the final evaluator of the optimization loop. We have significantly reduced the risk of overfitting a single validation path, even after many trials!

Fig 8: Novel idea of Combinatorial PurgedKFold during the DRL Model Selection phase. Image by Author.
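A hypothetical sketch of that objective, computing a simple annualized Sharpe ratio per path and averaging (the names and the arithmetic annualization are ours, not the authors'):

```python
import numpy as np

def cpcv_objective(path_returns, periods_per_year=4380):
    """Average the Sharpe ratios of all phi backtest paths; this mean (rather
    than a single validation split) is what the hyperparameter optimizer maximizes."""
    sharpes = []
    for r in path_returns:
        r = np.asarray(r, dtype=float)                    # per-period returns of one path
        ann_ret = r.mean() * periods_per_year
        ann_vol = r.std(ddof=1) * np.sqrt(periods_per_year)
        sharpes.append(ann_ret / ann_vol)
    return float(np.mean(sharpes))
```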

Because researchers will choose the backtest with the highest estimated Sharpe ratio, even if the true Sharpe ratio is zero, the high variance of walk-forward (WF) backtesting leads to faulty findings. That is why, when performing WF backtesting, it is critical to account for the number of trials.

Due to CPCV DRL, it is now possible to calculate the Family-Wise Error Rate (FWER), the False Discovery Rate (FDR), or the Probability of Backtest Overfitting (PBO)². In contrast to WF, CPCV derives the distribution of Sharpe ratios from a large number of paths, j = 1, …, φ, with mean E[{y(i,j)} (j=1,…,φ)] = μ and variance σ²[{y(i,j)} (j=1,…,φ)] = σ²(i). The sample mean of the CPCV paths has a variance of:
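(The equation image is missing from the extracted text; given the definitions in the next sentence, this is presumably the standard variance of the mean of φ equally correlated variables.)

```latex
\sigma^2\!\left[\frac{1}{\varphi}\sum_{j=1}^{\varphi} y_{i,j}\right]
  = \frac{\sigma^2(i)}{\varphi}\,\bigl[\,1 + (\varphi - 1)\,\alpha(i)\,\bigr]
```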

Here σ²(i) is the variance of the Sharpe ratios across paths for strategy i, and α(i) is the average off-diagonal correlation among {y(i,j)} (j=1,…,φ). CPCV leads to fewer false discoveries than CV and WF because α(i) < 1 implies that the variance of the sample mean is lower than the variance of the sample. To put matters into perspective, imagine we have (almost) no correlation between the backtest paths. If no correlation is assumed, then α(i) ≪ 1, and the equation above reduces to:
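(Again our reconstruction of the missing equation.)

```latex
\sigma^2\!\left[\frac{1}{\varphi}\sum_{j=1}^{\varphi} y_{i,j}\right]
  \approx \frac{\sigma^2(i)}{\varphi}
```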

As a result of CPCV's lower variance, the statistically valid Sharpe ratio E[y(i)] will be reported with zero variance in the limit φ → ∞ (the number of paths going to infinity). Because the strategy chosen from i = 1, …, I is the one with the highest statistically valid Sharpe ratio, there is no selection bias.

Finally, the more paths, the smaller the variance becomes and the closer we get to the statistically valid Sharpe ratio. However, the number of paths is upper-bounded by the amount of available data. Still, for a large enough number of paths φ, CPCV could make the variance of the backtest so small as to make the probability of a false discovery negligible.

Conclusion

In the previous article, the Combinatorial Purged Cross-Validation method was explored. In this article, a suggestion is made to extend this method to the DRL domain. The issues with traditional WF methods are summarized, an introduction to DRL for trading is given, and an overview of a complete traditional DRL backtesting analysis is provided. Finally, the novel idea of performing Combinatorial Purged Cross-Validation with DRL is presented in a possibly feasible manner. The method could bring the Model Selection process closer to the statistically valid Sharpe ratio.

Thanks for getting to the bottom of "Combinatorial PurgedKFold Cross-Validation for Deep Reinforcement Learning"!

~ Berend & Bruce

References

  1. Bailey, D. and Lopez de Prado, M., 2014. The Deflated Sharpe Ratio: Correcting for Selection Bias, Backtest Overfitting and Non-Normality. SSRN Electronic Journal.
  2. Lopez de Prado, M., 2018. Advances in Financial Machine Learning. Wiley.

