Adaptive trend following trading strategy based on Renko

by Sergey Malchevskiy, November 19th, 2018

Today I'm going to show how to create an algorithmic trading strategy in Python. This strategy uses my original research from a previous article. The article consists of the following parts:

  1. Concept
  2. Algorithm description
  3. Trading strategy development
  4. Backtesting and analyzing the result
  5. Further problems discussion
  6. Conclusions

Concept

Financial time series contain a high level of noise, so it would be useful to have a way to reduce it. In this article, I propose using Renko brick size optimization. The key idea of the approach is to quantify the quality of a Renko chart and find an optimal brick size to use in trading. If you are not familiar with Renko charts, it is better to read that article first by following the link.

Optimizing this quality measure over time is what I call "adaptivity" in this context.

Algorithm description

This trading strategy is a typical trend-following strategy, but the trend is defined by the direction of the last Renko brick rather than by a moving average of the price. The basic steps are:

  1. If the Renko chart is empty, build the chart using brick size optimization. Adaptivity implies that the volatility level matters for this optimization. In this example, the optimal brick size is searched for inside the IQR of absolute price changes (e.g. daily) over the last N days; a minimal sketch of this search follows the steps below. You can also choose any other range for the optimization (based on the ATR indicator, a fixed value, or a percentage of the last price).

IQR explanation

2. If the Renko chart is not empty, get the last market price and add it to the chart. If no new brick is built, pass to the next iteration. Otherwise, if the new brick follows the current direction, a part of the current position should be covered (0 – 100%). If the new brick is built in the opposite direction, the current position should be closed, which means the trend has changed, and the Renko chart should be emptied.

3. Repeat these steps as long as price data keeps arriving.
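As referenced in step 1, here is a minimal sketch of such an IQR-bounded brick size search. It relies on the pyrenko module introduced in the next section; the method names (set_brick_size, build_history, evaluate) and the helper name estimate_brick_size are assumptions based on the description above, not the author's exact code.

```python
import numpy as np
import scipy.optimize as opt
import pyrenko  # the author's Renko module from GitHub (assumed import name)

def estimate_brick_size(prices, last_n=15 * 24):
    """Pick a Renko brick size from the IQR of absolute price changes.

    `prices` is a 1-D array of closes; `last_n` is the look-back window
    (here 15 days of hourly bars). The pyrenko method names below are
    assumptions and may differ from the actual module.
    """
    changes = np.abs(np.diff(prices[-last_n:]))
    q25, q75 = np.percentile(changes, [25, 75])   # bounds of the IQR

    def neg_score(brick_size):
        chart = pyrenko.renko()
        chart.set_brick_size(auto=False, brick_size=brick_size)
        chart.build_history(prices=prices[-last_n:])
        # evaluate() is assumed to return a quality score to be maximized
        return -chart.evaluate()['score']

    # search for the best brick size inside the IQR (fminbound minimizes)
    return opt.fminbound(neg_score, q25, q75)
```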

Trading strategy development

I will use the Catalyst framework to develop the trading strategy. Installation instructions and a few examples can be found on the framework's website.

Catalyst is an algorithmic trading library for crypto-assets written in Python. It allows trading strategies to be easily expressed and backtested against historical data (with daily and minute resolution), providing analytics and insights regarding a particular strategy’s performance.

Basically, a Catalyst script consists of a few parts: initialize, handle_data, analyze, and the run_algorithm call. Let's code the algorithm.

First of all, the required libraries should be imported; the pyrenko module can be found on GitHub.

[Embedded code gist: required imports]
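The embedded gist is not reproduced here, but a plausible set of imports for such a script might look like the following. The Catalyst API names follow the framework's documentation, while pyrenko is the author's module (import name assumed).

```python
import numpy as np
import pandas as pd
import scipy.optimize as opt
import matplotlib.pyplot as plt

import pyrenko  # the author's Renko module from GitHub (assumed import name)

from catalyst import run_algorithm
from catalyst.api import symbol, record, order_target_percent
```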

Some information from the tutorial:

Every catalyst algorithm consists of at least two functions you have to define:

initialize(context)

handle_data(context, data)

Before the start of the algorithm, catalyst calls the initialize() function and passes in a context variable. context is a persistent namespace for you to store variables you need to access from one algorithm iteration to the next.

After the algorithm has been initialized, catalyst calls the handle_data() function on each iteration, that is once per day (daily) or once every minute (minute), depending on the frequency we choose to run our simulation. On every iteration, handle_data() passes the same context variable and an event-frame called data containing the current trading bar with open, high, low, and close (OHLC) prices as well as volume for each crypto asset in your universe.

Our initialize function looks like this:

[Embedded code gist: the initialize function]

We work with the ETH/BTC crypto pair. The basic timeframe is hourly (60T). The Renko chart uses 15 days of data (15 * 24 hourly bars). We cover 16.6% of the position amount after each new brick in the current direction. The commission is similar to the commission on the Bitfinex exchange. We also set a slippage value, which makes the backtest closer to how the strategy would behave in live trading.
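A minimal sketch of an initialize function matching this description might look like the following. The symbol name 'eth_btc', the pyrenko constructor, and the commission/slippage call signatures follow common Catalyst examples but should be treated as assumptions.

```python
def initialize(context):
    context.asset = symbol('eth_btc')       # ETH/BTC pair (assumed symbol name)

    context.timeframe = '60T'               # base timeframe: hourly bars
    context.n_history = 15 * 24             # 15 days of hourly data for the Renko chart
    context.part_cover_ratio = 1.0 / 6.0    # cover ~16.6% per confirming brick
    context.position_target = 0.0           # current target exposure (-1..1)

    context.model = pyrenko.renko()         # empty Renko model (assumed API)

    # Fees and slippage roughly matching Bitfinex; the exact signatures may
    # differ between Catalyst versions and are an assumption here.
    context.set_commission(maker=0.001, taker=0.002)
    context.set_slippage(slippage=0.0005)
```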

The general logic of the algorithm is in the handle_data function:

[Embedded code gist: the handle_data function]

First, we check whether the model is empty; if it is, we get the data, calculate the IQR, optimize the brick size, build the Renko chart, and open an order. Otherwise, we get the last price and feed it to the Renko chart, then check how many new bricks were built and what the direction of the last brick is. Each block of the code contains a comment, which helps you match the code to the algorithm.
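A sketch of the handle_data logic along these lines is shown below. Again, the pyrenko method names (get_renko_prices, get_renko_directions, do_next) are assumptions, the order sizing is simplified, and the estimate_brick_size helper is the one sketched earlier; see the author's GitHub repository for the actual implementation.

```python
def handle_data(context, data):
    current_price = data.current(context.asset, 'price')

    if len(context.model.get_renko_prices()) == 0:
        # Empty model: load history, optimize the brick size, rebuild the chart
        prices = data.history(context.asset, 'price',
                              bar_count=context.n_history,
                              frequency=context.timeframe)
        context.model = pyrenko.renko()
        context.model.set_brick_size(
            auto=False, brick_size=estimate_brick_size(prices.values))
        context.model.build_history(prices=prices.values)

        # Open a position in the direction of the last brick (+1 long, -1 short)
        context.position_target = float(context.model.get_renko_directions()[-1])
        order_target_percent(context.asset, context.position_target)
    else:
        # Feed the last price and check whether any new bricks were built
        if context.model.do_next(current_price) > 0:
            if context.model.get_renko_directions()[-1] == np.sign(context.position_target):
                # Same direction: cover a part of the current position
                context.position_target *= (1.0 - context.part_cover_ratio)
                order_target_percent(context.asset, context.position_target)
            else:
                # Opposite direction: the trend has changed, close and reset
                context.position_target = 0.0
                order_target_percent(context.asset, 0.0)
                context.model = pyrenko.renko()

    # Pass values to the analyze function
    record(price=current_price,
           num_created_bars=len(context.model.get_renko_prices()))
```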

Additional information is passed using the record function. This information is used in the analyze function that runs after the algorithm execution. In this function, we can draw some graphs, calculate the performance of the algorithm, and so on. The perf variable contains the basic performance information, as well as the values we added using the record function.

[Embedded code gist: the analyze function]
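A simple analyze function consistent with this description could print a few metrics, plot the recorded series, and save perf for later analysis. The column names price and num_created_bars refer to the values recorded in the handle_data sketch above; everything else is plain pandas and matplotlib, not the author's exact code.

```python
def analyze(context, perf):
    # perf is the DataFrame returned by Catalyst; values passed through
    # record() appear as extra columns next to portfolio_value, returns, etc.
    total_return = perf.portfolio_value.iloc[-1] / perf.portfolio_value.iloc[0] - 1.0
    print('Total return: {:.2%}'.format(total_return))

    fig, axes = plt.subplots(3, 1, sharex=True, figsize=(12, 9))
    perf.portfolio_value.plot(ax=axes[0], title='Portfolio value')
    perf.price.plot(ax=axes[1], title='ETH/BTC price')
    perf.num_created_bars.plot(ax=axes[2], title='Number of Renko bricks')
    plt.show()

    # Keep the full performance frame for the pyfolio analysis below
    perf.to_csv('perf.csv')
```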

The last part of the script is the run_algorithm call, which contains the backtesting period, the capital, the cryptocurrency, and the names of the functions described above.

[Embedded code gist: the run_algorithm call]

In this example, we work with daily data feeding (data_frequency parameter).
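A sketch of that call might look like this. The parameter names follow Catalyst's documented examples (older versions use base_currency instead of quote_currency); the capital and the backtest dates below are placeholders, not the author's exact values.

```python
if __name__ == '__main__':
    run_algorithm(
        capital_base=10,                    # starting capital in BTC (placeholder)
        data_frequency='daily',             # daily data feeding
        initialize=initialize,
        handle_data=handle_data,
        analyze=analyze,
        exchange_name='bitfinex',
        quote_currency='btc',               # base_currency in older Catalyst versions
        algo_namespace='renko_trend_following',
        start=pd.to_datetime('2017-11-01', utc=True),   # placeholder dates
        end=pd.to_datetime('2018-11-01', utc=True),
    )
```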

Backtesting and analyzing the result

Let's run our script in the Catalyst environment with the following command:

[Embedded code gist: the command for launching the script]

We get something like this:

Terminal window after launching the script

The basic metrics can be found in the terminal output; these are the metrics we print in the analyze function. The total return of the algorithm is 252.55% with a -18.74% maximum drawdown. This is not bad for almost one year. You can use the Sortino ratio to compare different algorithms; I considered this metric in this article. Beta is very close to 0 and Alpha is positive, which means our algorithm is market-neutral and beats the benchmark. If you are not familiar with these metrics, I recommend this article.
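These metrics can also be recomputed directly from the perf frame; a small sketch using the empyrical library (a dependency of the pyfolio/Catalyst stack) is shown below. The price column is the benchmark series recorded in the handle_data sketch, so this is an illustration rather than the author's exact calculation.

```python
import empyrical as ep

returns = perf.portfolio_value.pct_change().dropna()
benchmark_rets = perf.price.pct_change().dropna()    # ETH/BTC as the benchmark

print('Sortino ratio: {:.2f}'.format(ep.sortino_ratio(returns)))
print('Max drawdown:  {:.2%}'.format(ep.max_drawdown(returns)))

alpha, beta = ep.alpha_beta(returns, benchmark_rets)
print('Alpha: {:.2f}, Beta: {:.2f}'.format(alpha, beta))
```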

Result graphs in Catalyst

The blue line on the first graph is the equity of the algorithm, and the red line is the equity of the benchmark (the ETH/BTC asset). The second graph contains the price of ETH/BTC (grey) and the Renko price (yellow). The brick size is shown on the third graph (blue); the vertical red lines mark the moments when the Renko chart was rebuilt. The number of created Renko bricks is shown on the fourth graph, and the position amount on the fifth. The last graph contains the drawdown.

Let's get some additional information based on our result. The further analysis uses the perf variable saved in csv format. I use the pyfolio library for this purpose.

It is a Python library for performance and risk analysis of financial portfolios developed by Quantopian Inc. It works well with the Zipline open source backtesting library.
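A minimal sketch of handing the saved perf frame to pyfolio might look like this. The perf.csv file name and the price column come from the earlier sketches, so treat it as an illustration of the workflow rather than the author's exact notebook.

```python
import pandas as pd
import pyfolio as pf

perf = pd.read_csv('perf.csv', index_col=0, parse_dates=True)

returns = perf.portfolio_value.pct_change().dropna()
benchmark_rets = perf.price.pct_change().dropna()    # ETH/BTC as the benchmark

# Note: some pyfolio versions expect a UTC-localized DatetimeIndex on returns.
# Cumulative returns vs. benchmark, distributions, rolling metrics, etc.
pf.create_returns_tear_sheet(returns, benchmark_rets=benchmark_rets)
```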

First of all, let's draw the returns of the algorithm and compare its equity with the benchmark:

Equities

Algorithm returns

Summary stats

The summary statistics contain basic metrics, some of which we already got in the output of the analyze function. This variant is more extensive; metrics such as daily value at risk or annual volatility can be very useful when evaluating a strategy.
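This table is essentially what pyfolio's perf_stats helper produces; a short sketch is shown below (the function lives in pyfolio.timeseries in the versions I am aware of).

```python
# Summary statistics: annual return and volatility, Sharpe, Calmar,
# daily value at risk, skew, kurtosis, etc.
stats = pf.timeseries.perf_stats(returns, factor_returns=benchmark_rets)
print(stats)
```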

The next graphs describe the drawdown of our strategy:

Drawdown

Top-5 drawdown periods

Drawdown is one of the key metrics for estimating the reliability of a strategy; reaching a critical drawdown level could also be a trigger to re-optimize the strategy.

Top-5 drawdown periods table
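The drawdown plots and the top-drawdowns table above can be reproduced with pyfolio's plotting helpers; the calls below follow the pyfolio.plotting and pyfolio.timeseries modules as I know them and should be treated as a sketch.

```python
import matplotlib.pyplot as plt

# Underwater (drawdown) plot and the five deepest drawdown periods
pf.plotting.plot_drawdown_underwater(returns)
plt.show()

pf.plotting.plot_drawdown_periods(returns, top=5)
plt.show()

# Table with peak, valley, and recovery dates plus durations
print(pf.timeseries.gen_drawdown_table(returns, top=5))
```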

These graphs describe our returns from different angles: the distribution of monthly returns and box plots of returns for different timeframes (daily, weekly, monthly):

Distribution of monthly returns

Box-plots of returns

Let's look at the volatility of the algorithm as a monthly moving average:

Moving average of volatility

A decreasing Sharpe ratio (e.g. going negative) could also be a trigger for the re-optimization process in the strategy's lifetime pipeline:

Moving average of Sharpe ratio
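A rolling Sharpe ratio like the one in the plot can be computed with plain pandas; the 30-day window and the 365-day annualization below are assumptions for a crypto market that trades every day.

```python
import numpy as np

window = 30  # rolling window in days (assumption)
rolling_sharpe = (returns.rolling(window).mean()
                  / returns.rolling(window).std()) * np.sqrt(365)

# A persistently negative rolling Sharpe could trigger a re-optimization
needs_reoptimization = (rolling_sharpe.tail(window) < 0).all()
```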

Further problems discussion

Creating a reliable algorithmic trading strategy is a difficult process that includes several steps. A good trading idea is a necessary, but not sufficient, condition. I suggest thinking about the following problems to get a stable and reliable strategy:

  1. Try using minute data resolution to take intraday data into account. Currently the algorithm uses daily resolution only, which means we lose data and price movements.
  2. Change market orders to limit orders. This will reduce commissions, because the taker commission is higher than the maker commission. Some exchanges even pay you for limit orders; this is called a rebate and is a kind of reward.
  3. Carry out a lot of experiments with different assets to create a reliable portfolio of assets and tune the money management between them.
  4. Develop and follow a re-optimization / walk-forward rule to determine when some parameters of the model should be changed (length of history, cover ratio, timeframe, etc.). This rule includes the frequency of optimization, the time periods for the optimization and forwarding (or walk-forwarding) processes, and the minimal metric requirements for accepting the algorithm as working.
  5. Develop or choose an execution framework to run the algorithm in production mode. Even if you have a reliable trading strategy confirmed by tons of backtests, you can still fail in live trading because of errors or imperfections in the infrastructure (inside or outside of your ecosystem). For example, you can use Catalyst in backtesting mode, but you can't use it in production for this algorithm, because Catalyst doesn't currently support trading on a margin account.

Conclusions

  1. We created an algorithmic trading strategy based on theoretical research. The algorithm tries to adapt to the volatility level, reduce noise, and follow the trend.
  2. The algorithm shows a positive result. We demonstrated different performance metrics and graphs.
  3. We suggested advice on how to improve this research.
  4. The source code is available on GitHub (the Catalyst script and an IPython notebook for the advanced analytics).