<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Drawdown &#8211; The Financial Hacker</title>
	<atom:link href="https://financial-hacker.com/tag/drawdown/feed/" rel="self" type="application/rss+xml" />
	<link>https://financial-hacker.com</link>
	<description>A new view on algorithmic trading</description>
	<lastBuildDate>Sun, 12 Jan 2020 13:08:08 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://financial-hacker.com/wp-content/uploads/2017/07/cropped-mask-32x32.jpg</url>
	<title>Drawdown &#8211; The Financial Hacker</title>
	<link>https://financial-hacker.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Build Better Strategies! Part 3: The Development Process</title>
		<link>https://financial-hacker.com/build-better-strategies-part-3-the-development-process/</link>
					<comments>https://financial-hacker.com/build-better-strategies-part-3-the-development-process/#comments</comments>
		
		<dc:creator><![CDATA[jcl]]></dc:creator>
		<pubDate>Mon, 22 Feb 2016 16:46:32 +0000</pubDate>
				<category><![CDATA[3 Most Useful]]></category>
		<category><![CDATA[System Development]]></category>
		<category><![CDATA[Cycles]]></category>
		<category><![CDATA[Detrending]]></category>
		<category><![CDATA[Drawdown]]></category>
		<category><![CDATA[Medallion fund]]></category>
		<category><![CDATA[Money management]]></category>
		<category><![CDATA[Spectral filter]]></category>
		<category><![CDATA[Walk forward analysis]]></category>
		<guid isPermaLink="false">http://www.financial-hacker.com/?p=1191</guid>

					<description><![CDATA[This is the third part of the Build Better Strategies series. In the previous part we&#8217;ve discussed the 10 most-exploited market inefficiencies and gave some examples of their trading strategies. In this part we&#8217;ll analyze the general process of developing a model-based trading system. As with almost anything, you can develop trading strategies in (at least) &#8230; <a href="https://financial-hacker.com/build-better-strategies-part-3-the-development-process/" class="more-link">Continue reading<span class="screen-reader-text"> "Build Better Strategies! Part 3: The Development Process"</span></a>]]></description>
										<content:encoded><![CDATA[<p>This is the third part of the <a href="http://www.financial-hacker.com/build-better-strategies/">Build Better Strategies</a> series. In the previous part we&#8217;ve discussed the 10 most-exploited market inefficiencies and gave some examples of their trading strategies. In this part we&#8217;ll analyze the general process of developing a model-based trading system. As with almost anything, you can develop trading strategies in (at least) two different ways: There&#8217;s the <strong>ideal way</strong>, and there&#8217;s the <strong>real way</strong>. We begin with the <strong>ideal development process</strong>, broken down into 10 steps.<span id="more-1191"></span></p>
<h3>The ideal model-based strategy development<br />
Step 1: The model</h3>
<p>Select one of the known market inefficiencies listed in the <a href="http://www.financial-hacker.com/build-better-strategies-part-2-model-based-systems/">previous part</a>, or discover a new one. You could eyeball through price curves and look for something suspicious that can be explained by a certain market behavior. Or, the other way around, theorize about a behavior pattern and check if you can find it reflected in the prices. If you discover something new, feel invited to post it here! But be careful: Models of non-existing inefficiencies (such as <a href="http://www.financial-hacker.com/seventeen-popular-trade-strategies-that-i-dont-really-understand/">Elliott Waves</a>) already outnumber real inefficiencies by a large margin. It is not likely that a real inefficiency remains unknown to this day.</p>
<p>Once you&#8217;ve decided on a model, determine which <strong>price curve anomaly</strong> it would produce, and describe it with a quantitative formula, or at least a qualitative criterion. You&#8217;ll need that for the next step. As an example we&#8217;re using the <strong>Cycles Model</strong> from the previous part:</p>
<p>[latex display="true"]y_t ~=~ \hat{y} + \sum_{i}{a_i sin(2 \pi t/C_i+D_i)} + \epsilon[/latex]</p>
<p>(Cycles are not to be underestimated. One of the most successful funds in history &#8211; Jim Simons&#8217; <strong>Renaissance Medallion</strong> fund &#8211; is rumored to exploit cycles in price curves by analyzing their lengths (<em><strong>C<sub>i</sub></strong></em>), phases (<em><strong>D<sub>i</sub></strong></em>) and amplitudes (<em><strong>a<sub>i</sub></strong></em>) with a Hidden Markov Model. Don&#8217;t worry, we&#8217;ll use a somewhat simpler approach in our example.)</p>
<h3>Step 2: Research</h3>
<p>Find out if the hypothetical anomaly really appears in the price curves of the assets that you want to trade. For this you first need enough historical data of the traded assets &#8211; D1, M1, or tick data, depending on the time frame of the anomaly. How far back? As far as possible, since you want to find out the lifetime of your anomaly and the market conditions under which it appears. Write a script to detect and display the anomaly in price data. For our Cycles Model, this would be the frequency spectrum:</p>
<figure id="attachment_1160" aria-describedby="caption-attachment-1160" style="width: 1599px" class="wp-caption alignnone"><a href="http://www.financial-hacker.com/wp-content/uploads/2015/11/spectrum.png"><img fetchpriority="high" decoding="async" class="wp-image-1160 size-full" src="http://www.financial-hacker.com/wp-content/uploads/2015/11/spectrum.png" alt="" width="1599" height="321" srcset="https://financial-hacker.com/wp-content/uploads/2015/11/spectrum.png 1599w, https://financial-hacker.com/wp-content/uploads/2015/11/spectrum-300x60.png 300w, https://financial-hacker.com/wp-content/uploads/2015/11/spectrum-1024x206.png 1024w, https://financial-hacker.com/wp-content/uploads/2015/11/spectrum-1200x241.png 1200w" sizes="(max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a><figcaption id="caption-attachment-1160" class="wp-caption-text">EUR/USD frequency spectrum, cycle amplitude vs. cycle length in bars</figcaption></figure>
<p>Check out how the spectrum changes over the months and years. Compare with the spectrum of random data (with <a href="http://www.financial-hacker.com/hackers-tools-zorro-and-r/" target="_blank" rel="noopener noreferrer">Zorro</a> you can use the <a href="http://zorro-project.com/manual/en/detrend.htm" target="_blank" rel="noopener noreferrer">Detrend</a> function for randomizing price curves). If you find no clear signs of the anomaly, or no significant difference to random data, improve your detection method. And if you then still don&#8217;t succeed, go back to step 1.</p>
<h3>Step 3: The algorithm</h3>
<p>Write an algorithm that generates the <strong>trade signals</strong> for buying in the direction of the anomaly. A market inefficiency normally has only a <strong>very weak effect</strong> on the price curve. So your algorithm must be really good at distinguishing it from random noise. At the same time it should be as simple as possible, and rely on as few free parameters as possible. In our example with the Cycles Model, the script reverses the position at every valley and peak of a sine curve that runs ahead of the dominant cycle:</p>
<pre class="prettyprint">function run()
{
  vars Price = series(price());
  var Phase = DominantPhase(Price,10);
  vars Signal = series(sin(Phase+PI/4));

  if(valley(Signal))
    reverseLong(1); 
  else if(peak(Signal))
    reverseShort(1);
}</pre>
<p>This is the core of the system. Now it&#8217;s time for a first backtest. The precise performance does not matter much at this point &#8211; just determine whether the algorithm has an edge or not. Can it produce a series of profitable trades, at least in certain market periods or situations? If not, improve the algorithm or write another one that exploits the same anomaly with a different method. But do not yet use any stops, trailing, or other bells and whistles. They would only distort the result, and give you the illusion of profit where there is none. Your algorithm must be able to produce positive returns either with pure reversal, or at least with a timed exit.</p>
<p>In this step you must also decide about the <strong>backtest data</strong>. You normally need M1 or tick data for a realistic test. Daily data won&#8217;t do. The data amount depends on the lifetime (determined in step 2) and the nature of the price anomaly. Naturally, the longer the period, the better the test &#8211; but more is not always better. Normally it makes no sense to go further back than 10 years, at least not when your system exploits some real market behavior. Markets change dramatically within a decade. Outdated historical price data can produce very misleading results. Most systems that had an edge 15 years ago will fail miserably on today&#8217;s markets. But they can deceive you with a seemingly profitable backtest.</p>
<h3>Step 4: The filter</h3>
<p>No market inefficiency exists all the time. Any market goes through periods of random behavior. It is essential for any system to have a filter mechanism that detects whether the inefficiency is present or not. The filter is at least as important as the trade signal, if not more so &#8211; but it&#8217;s often forgotten in trade systems. This is our example script with a filter:</p>
<pre class="prettyprint">function run()
{
  vars Price = series(price());
  var Phase = DominantPhase(Price,10);
  vars Signal = series(sin(Phase+PI/4));
  vars Dominant = series(BandPass(Price,rDominantPeriod,1));
  var Threshold = 1*PIP;
  ExitTime = 10*rDominantPeriod;
	
  if(Amplitude(Dominant,100) &gt; Threshold) {
    if(valley(Signal))
      reverseLong(1); 
    else if(peak(Signal))
      reverseShort(1);
  }
}</pre>
<p>We apply a bandpass filter centered at the dominant cycle period to the price curve and measure its amplitude. If the amplitude is above a threshold, we conclude that the inefficiency is there, and we trade. The trade duration is now also restricted to a maximum of 10 cycles since we found in step 2 that dominant cycles appear and disappear in relatively short time.</p>
<p>What can go wrong in this step is succumbing to the temptation to add a filter just because it improves the test result. Any filter must have a rational reason in the market behavior or in the used signal algorithm. If your algorithm only works by adding irrational filters: back to step 3.</p>
<h3>Step 5: Optimizing (but not too much!)</h3>
<p>All parameters of a system affect the result, but only a few directly determine entry and exit points of trades dependent on the price curve. These &#8216;adaptable&#8217; parameters should be identified and optimized. In the above example, trade entry is determined by the phase of the forerunning sine curve and by the filter threshold, and trade exit is determined by the exit time. Other parameters &#8211; such as the filter constants of the <strong>DominantPhase</strong> and the <strong>BandPass</strong> functions &#8211; need not be adapted since their values do not depend on the market situation.</p>
<p>Adaptation is an optimizing procedure, and a big opportunity to fail without even noticing it. Often, genetic or brute force methods are applied for finding the &#8220;best&#8221; parameter combination at a profit peak in the parameter space. Many platforms even have &#8220;optimizers&#8221; for this purpose. Although this method indeed produces the best backtest result, it won&#8217;t help at all for the live performance of the system. In fact, a recent study (<a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2745220">Wiecki et al. 2016</a>) showed that the better you optimize your parameters, the worse your system will fare in live trading! The reason for this paradoxical effect is that optimizing to maximum profit fits your system mostly to the noise in the historical price curve, since noise affects result peaks much more than market inefficiencies do.</p>
<p>Rather than generating top backtest results, correct optimizing has other purposes:</p>
<ul style="list-style-type: square;">
<li>It can determine the <strong>susceptibility</strong> of your system to its parameters. If the system is great with a certain parameter combination, but loses its edge when their values change a tiny bit: back to step 3.</li>
<li>It can identify the parameter&#8217;s <strong>sweet spots</strong>. The sweet spot is the area of highest parameter robustness, i.e. where small parameter changes have little effect on the return. They are not the peaks, but the centers of broad hills in the parameter space.</li>
<li>It can adapt the system to different assets, and enable it to trade a <strong>portfolio</strong> of assets with slightly different parameters.&nbsp;It can also extend the <strong>lifetime</strong> of the system by adapting it to the current market situation in regular time intervals, parallel to live trading.</li>
</ul>
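<p>The sweet-spot idea can be sketched in a few lines of plain C: rather than taking the raw profit peak, smooth each parameter&#8217;s profit curve with its neighbors and take the best smoothed value, so the center of a broad hill wins over a narrow spike. (A hypothetical stand-in for this purpose, not Zorro&#8217;s actual selection algorithm.)</p>

```c
/* Return the index of the parameter sweet spot: the best value of the
   profit curve after smoothing with its direct neighbors. A narrow spike
   loses against the center of a broad hill. Requires n >= 3. */
int sweet_spot(const double *profit, int n)
{
  int i, best = 1;
  double bestval = (profit[0] + profit[1] + profit[2]) / 3.0;
  for (i = 2; i < n-1; i++) {
    double smoothed = (profit[i-1] + profit[i] + profit[i+1]) / 3.0;
    if (smoothed > bestval) { bestval = smoothed; best = i; }
  }
  return best;
}
```

<p>With the profit curve <strong>{0, 9, 0, 0, 4, 5, 4, 0, 0}</strong> the raw peak sits at the isolated spike (index 1), but the sweet spot is index 5, the middle of the broad hill.</p>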
<p>This is our example script with entry parameter optimization:</p>
<pre class="prettyprint">function run()
{
  vars Price = series(price());
  var Phase = DominantPhase(Price,10);
  vars Signal = series(sin(Phase+optimize(1,0.7,2)*PI/4));
  vars Dominant = series(BandPass(Price,rDominantPeriod,1));
  ExitTime = 10*rDominantPeriod;
  var Threshold = optimize(1,0.7,2)*PIP;
	
  if(Amplitude(Dominant,100) &gt; Threshold) {
    if(valley(Signal))
      reverseLong(1); 
    else if(peak(Signal))
      reverseShort(1);
  }
}</pre>
<p>The two <a href="http://manual.zorro-project.com/optimize.htm" target="_blank" rel="noopener noreferrer">optimize</a> calls use a start value (<strong>1.0</strong> in both cases) and a range (<strong>0.7..2.0</strong>) for determining the sweet spots of the two essential parameters of the system. You can identify the spots in the profit factor curves (red bars) of the two parameters that are generated by the optimization process:</p>
<figure id="attachment_1377" aria-describedby="caption-attachment-1377" style="width: 391px" class="wp-caption alignnone"><a href="http://www.financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p1.png"><img decoding="async" class="wp-image-1377 size-full" src="http://www.financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p1.png" alt="" width="391" height="321" srcset="https://financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p1.png 391w, https://financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p1-300x246.png 300w" sizes="(max-width: 391px) 85vw, 391px" /></a><figcaption id="caption-attachment-1377" class="wp-caption-text">Sine phase in pi/4 units</figcaption></figure>
<figure id="attachment_1376" aria-describedby="caption-attachment-1376" style="width: 391px" class="wp-caption alignnone"><a href="http://www.financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p2.png"><img decoding="async" class="wp-image-1376 size-full" src="http://www.financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p2.png" alt="" width="391" height="321" srcset="https://financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p2.png 391w, https://financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p2-300x246.png 300w" sizes="(max-width: 391px) 85vw, 391px" /></a><figcaption id="caption-attachment-1376" class="wp-caption-text">Amplitude threshold in pips</figcaption></figure>
<p>In this case the optimizer would select a parameter value of about 1.3 for the sine phase and about 1.0 (not the peak at 0.9) for the amplitude threshold for the current asset (EUR/USD). The exit time is not optimized in this step, as we&#8217;ll do that later together with the other exit parameters when risk management is implemented.</p>
<h3>Step 6: Out-of-sample analysis</h3>
<p>Of course the parameter optimization improved the backtest performance of the strategy, since the system was now better adapted to the price curve. So the test result so far is worthless. For getting an idea of the real performance, we first need to split the data into in-sample and out-of-sample periods. The in-sample periods are used for training, the out-of-sample periods for testing. The best method for this is <a href="http://manual.zorro-project.com/numwfocycles.htm" target="_blank" rel="noopener noreferrer">Walk Forward Analysis</a>. It uses a rolling window into the historical data for separating test and training periods.</p>
<p>Unfortunately, WFA adds two more parameters to the system: the training time and the test time of a WFA cycle. The test time should be long enough for trades to properly open and close, and short enough for the parameters to stay valid. The training time is more critical. Too short a training period will not provide enough price data for effective optimization; too long a training period will also produce bad results, since the market can already undergo changes during it. So the training time itself is a parameter that has to be optimized.</p>
<p>A five-cycle walk forward analysis (add &#8220;<strong>NumWFOCycles = 5;</strong>&#8221; to the above script) reduces the backtest performance from 100% annual return to a more realistic 60%. To prevent WFA from still producing too optimistic results just by a lucky selection of test and training periods, it also makes sense to perform WFA several times with slightly different starting points of the simulation. If the system has an edge, the results should not be too different. If they vary wildly: back to step 3.</p>
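<p>The window arithmetic of a rolling walk forward analysis is simple enough to write down directly. In this hypothetical plain-C sketch, the training window slides forward by one test period per cycle, and every test period lies strictly after the data it was trained on:</p>

```c
/* Number of complete WFA cycles that fit into the available bars */
int wfa_num_cycles(int TotalBars, int TrainBars, int TestBars)
{
  return (TotalBars - TrainBars) / TestBars;
}

/* First bar of the training window of the given cycle (0-based) */
int wfa_train_start(int Cycle, int TestBars)
{
  return Cycle * TestBars;
}

/* First bar of the out-of-sample test window - always directly after
   the training window */
int wfa_test_start(int Cycle, int TrainBars, int TestBars)
{
  return Cycle * TestBars + TrainBars;
}
```

<p>With 1000 bars of history, 600 bars training time and 100 bars test time, you get 4 cycles; cycle 0 trains on bars 0&#8211;599 and tests on bars 600&#8211;699, while cycle 3 tests on bars 900&#8211;999.</p>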
<h3>Step 7: Reality Check</h3>
<p>Even though the test is now out-of-sample, the mere development process &#8211; selecting algorithms, assets, test periods and other ingredients by their performance &#8211; has added a lot of <strong>selection bias</strong> to the results. Are they caused by a real edge of the system, or just by biased development? Determining this with some certainty is the hardest part of strategy development.</p>
<p>The best way to find out is <a href="http://www.financial-hacker.com/whites-reality-check/">White&#8217;s Reality Check</a>. But it&#8217;s also the least practical because it requires strong discipline in parameter and algorithm selection. Other methods are not as good, but easier to apply:</p>
<ul>
<li><strong>Montecarlo</strong>. Randomize the price curve by shuffling without replacement, then train and test again. Repeat this many times. Plot a distribution of the results (an example of this method can be found in chapter 6 of the <a href="http://www.amazon.de/Das-B%C3%B6rsenhackerbuch-Finanziell-algorithmische-Handelssysteme/dp/1530310784" target="_blank" rel="noopener noreferrer">Börsenhackerbuch</a>). Randomizing removes all price anomalies, so you hope for significantly worse performance. But if the result from the real price curve lies not far east of the random distribution peak, it is probably also caused by randomness. That would mean: back to step 3.</li>
<li><strong>Variants.</strong> It&#8217;s the opposite of the Montecarlo method: Apply the trained system on variants of the price curve and hope for positive results. Variants that maintain most anomalies are <a href="http://www.financial-hacker.com/better-tests-with-oversampling/">oversampling</a>, detrending, or inverting the price curve. If the system stays profitable with those variants, but not with randomized prices, you might really have found a solid system.</li>
<li><strong>Really-out-of-sample (ROOS) Test</strong>. While developing the system, ignore the last year (2015) completely. Even delete all 2015 price history from your PC. Only when the system is completely finished, download the data and run a 2015 test. Since the 2015 data can be used only once this way and is then tainted, you cannot modify the system anymore if it fails in 2015. Just abandon it. Assemble all your mental strength and go back to step 1.</li>
</ul>
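<p>The evaluation of the Montecarlo method boils down to an empirical p-value: after training and testing the system many times on shuffled prices, count how often the random results beat the real one. A sketch with an invented function name, in plain C:</p>

```c
/* Fraction of results from randomized price curves that are at least as
   good as the result from the real curve. A large value means the "edge"
   is probably just randomness: back to step 3. */
double reality_pvalue(double RealResult, const double *RandomResults, int m)
{
  int i, beaten = 0;
  for (i = 0; i < m; i++)
    if (RandomResults[i] >= RealResult) beaten++;
  return (double)beaten / m;
}
```

<p>If the real system returned 5 and five shuffled runs returned {1, 2, 3, 4, 6}, the p-value is 0.2 &#8211; one out of five random curves did better, which is far too many for comfort.</p>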
<h3>Step 8: Risk management</h3>
<p>Your system has so far survived all tests. Now you can concentrate on reducing its risk and improving its performance. Do not touch the entry algorithm and its parameters anymore. You&#8217;re now optimizing the exit. Instead of the simple timed and reversal exits that we&#8217;ve used during the development phase, we can now apply various trailing stop mechanisms. For instance:</p>
<ul style="list-style-type: square;">
<li>Instead of exiting after a certain time, raise the stop loss by a certain amount per hour. This has the same effect, but will close unprofitable trades sooner and profitable trades later.</li>
<li>When a trade has won a certain amount, place the stop loss at a distance above the break even point. Even if locking in a profit percentage does not improve the total performance, it&#8217;s good for your health. Seeing profitable trades wander back into the losing zone can cause serious ulcers.</li>
</ul>
<p>This is our example script with the initial timed exit replaced by a stop loss limit that rises at every bar:</p>
<pre class="prettyprint">function run()
{
  vars Price = series(price());
  var Phase = DominantPhase(Price,10);
  vars Signal = series(sin(Phase+optimize(1,0.7,2)*PI/4));
  vars Dominant = series(BandPass(Price,rDominantPeriod,1));
  var Threshold = optimize(1,0.7,2)*PIP;

  Stop = ATR(100);
  for(open_trades)
    TradeStopLimit -= TradeStopDiff/(10*rDominantPeriod);
	
  if(Amplitude(Dominant,100) &gt; Threshold) {
    if(valley(Signal))
      reverseLong(1); 
    else if(peak(Signal))
      reverseShort(1);
  }
}</pre>
<p>The&nbsp;<strong>for(open_trades)</strong> loop increases the stop level of all open trades by a fraction of the initial stop loss distance at the end of every bar.</p>
<p>Of course you now have to optimize and run a walk forward analysis again with the exit parameters. If the performance didn&#8217;t improve, think about better exit methods.</p>
<h3>Step 9: Money management</h3>
<p>Money management serves three purposes. First, reinvesting your profits. Second, distributing your capital among portfolio components. And third, quickly finding out if a trading book is useless. Open the &#8220;Money Management&#8221; chapter and read the author&#8217;s investment advice. If it&#8217;s &#8220;invest 1% of your capital per trade&#8221;, you know why he&#8217;s writing trading books. He probably has not yet earned money with real trading.</p>
<p>Suppose your trade volume at a given time <em><strong>t</strong></em> is&nbsp;<em><strong>V(t)</strong></em>. If your system is profitable, on average your capital <em><strong>C</strong></em> will rise proportionally to <b><i>V</i></b> with a growth factor <em><strong>c</strong></em>:</p>
<p>[latex display="true"]\frac{dC}{dt} = c V(t)~~\rightarrow~~ C(t) = C_0 + c \int_{0}^{t}{V(t) dt}[/latex]</p>
<p>When you follow trading book advice and always invest a fixed percentage <em><strong>p</strong></em> of your capital, so that <em><strong>V</strong><strong><em>(t)</em> = p C(t)</strong></em>, your capital will grow exponentially with exponent <em><strong>p c</strong></em>:</p>
<p>[latex display="true"]\frac{dC}{dt} ~=~ c p C(t) ~~\rightarrow~~ C(t) ~=~ C_0 e^{p c t}[/latex]</p>
<p>Unfortunately your capital will also undergo random fluctuations, named <strong>Drawdowns</strong>. Drawdowns are proportional to the trade volume <em><strong>V(t)</strong></em>. On leveraged accounts with no limit to drawdowns, it can be shown from statistical considerations that the maximum drawdown depth <em><strong>D<sub>max</sub></strong></em> grows proportionally to the square root of time <em><strong>t</strong></em>:</p>
<p>[latex display="true"]{D_{max}}(t) ~=~ q V(t) \sqrt{t}[/latex]</p>
<p>So, with the fixed percentage investment:</p>
<p>[latex display="true"]{D_{max}}(t) ~=~ q p C(t) \sqrt{t}[/latex]</p>
<p>and at the time <em><strong>T = 1/(q p)<sup>2</sup></strong></em>:</p>
<p>[latex display="true"]{D_{max}}(T) ~=~ q p C(T) \frac{1}{q p} ~=~ C(T)[/latex]</p>
<p>You can see that around the time <em><strong>T = 1/(q p)<sup>2</sup></strong></em> a drawdown will eat up all your capital <em><strong>C(T)</strong></em>, no matter how profitable your strategy is and how you&#8217;ve chosen <em><strong>p</strong></em>! That&#8217;s why the 1% rule is bad advice. And why I advise clients not to raise the trade volume proportionally to their accumulated profit, but to its square root &#8211; at least on leveraged accounts. Then, as long as the strategy does not deteriorate, they keep a safe distance from a margin call.</p>
<p>Dependent on whether you trade a single asset and algorithm or a portfolio of both, you can calculate the optimal investment with several methods. There&#8217;s the OptimalF formula by <strong>Ralph Vince</strong>, the Kelly formula by <strong>Ed Thorp</strong>, or <a href="http://www.financial-hacker.com/get-rich-slowly/">mean/variance optimization</a> by <strong>Harry Markowitz</strong>. Usually you won&#8217;t hard-code reinvesting in your strategy, but calculate the investment volume externally, since you might want to withdraw or deposit money from time to time. This requires the overall volume to be set up manually, not by an automated process. A formula for proper reinvesting and withdrawing can be found in the Black Book.</p>
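<p>For orientation, the classic binary-outcome form of the Kelly formula: with win probability <em><strong>p</strong></em> and a win/loss payoff ratio <em><strong>b</strong></em>, the optimal capital fraction per trade is <em><strong>f* = p - (1-p)/b</strong></em>. (This is the textbook formula, not the reinvesting formula from the Black Book.)</p>

```c
/* Textbook Kelly fraction for a binary bet: win probability p,
   win/loss payoff ratio b. Returns the optimal fraction of capital
   to risk per trade; a negative result means: don't trade this system. */
double kelly_fraction(double p, double b)
{
  return p - (1.0 - p) / b;
}
```

<p>A system winning 60% of its trades with equal-sized wins and losses gets <em><strong>f*</strong></em> = 0.2; in practice traders stake only a fraction of Kelly, since the drawdowns at full Kelly are brutal.</p>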
<h3>Step 10: Preparation for live trading</h3>
<p>You can now define the <strong>user interface</strong> of your trading system. Determine which parameters you want to change in real time, and which ones only at the start of the system. Provide a method to control the trade volume, and a &#8216;Panic Button&#8217; for locking profit or cashing out in case of bad news. Display all trading-relevant parameters in real time. Add buttons for re-training the system, and provide a method for comparing live results with backtest results, such as the <a href="http://www.financial-hacker.com/the-cold-blood-index/">Cold Blood Index</a>. Make sure that you can supervise the system from wherever you are, for instance through an online status page. Don&#8217;t be tempted to look at it every five minutes. But you can make a mighty impression when you pull out your mobile phone on the summit of Mt. Ararat and explain to your fellow climbers: &#8220;Just checking my trades.&#8221;</p>
<h3>The real strategy development</h3>
<p>So far the theory. All fine and dandy, but how do you really develop a trading system? Everyone knows that there&#8217;s a huge gap between theory and practice. This is the real development process, as attested by many seasoned algo traders:</p>
<p><strong>Step 1.</strong>&nbsp;Visit trader forums and find the thread about the new indicator with the fabulous returns.</p>
<p><strong>Step 2.</strong> Get the indicator working with a test system after a long coding session. Ugh, the backtest result does not look that good. You must have made some coding mistake. Debug. Debug some more.</p>
<p><strong>Step 3.</strong> Still no good result, but you have more tricks up your sleeve. Add a trailing stop. The results already look better. Run a weekday analysis. Tuesday is a particularly bad day for this strategy? Add a filter that prevents trading on Tuesday. Add more filters that prevent trades between 10 and 12 am, and when the price is below $14.50, and at full moon except on Fridays. Wait a long time for the simulation to finish. Wow, finally the backtest is in the green!</p>
<p><strong>Step 4.</strong> Of course you&#8217;re not fooled by in-sample results. After optimizing all 23 parameters, run a walk forward analysis. Wait a long time for the simulation to finish. Ugh, the result does not look that good. Try different WFA cycles. Try different bar periods. Wait a long time for the simulation to finish. Finally, with a 19-minute bar period and 31 cycles, you get a sensational backtest result! And this completely out of sample!</p>
<p><strong>Step 5.</strong> Trade the system live.</p>
<p><strong>Step 6.</strong> Ugh, the result does not look that good.</p>
<p><strong>Step 7.</strong> Wait a long time for your bank account to recover. In between, write a trading book.</p>
<hr>
<p>I&#8217;ve added the example script to the 2016 script repository. In the next part of this series we&#8217;ll look into the data mining approach with machine learning systems. We will examine price pattern detection, regression, neural networks, deep learning, decision trees, and support vector machines.</p>
<p style="text-align: right;"><strong>⇒&nbsp;<a href="http://www.financial-hacker.com/build-better-strategies-part-4-machine-learning/">Build Better Strategies &#8211; Part 4</a></strong></p>
]]></content:encoded>
					
					<wfw:commentRss>https://financial-hacker.com/build-better-strategies-part-3-the-development-process/feed/</wfw:commentRss>
			<slash:comments>61</slash:comments>
		
		
			</item>
		<item>
		<title>The Cold Blood Index</title>
		<link>https://financial-hacker.com/the-cold-blood-index/</link>
					<comments>https://financial-hacker.com/the-cold-blood-index/#comments</comments>
		
		<dc:creator><![CDATA[jcl]]></dc:creator>
		<pubDate>Mon, 26 Oct 2015 12:50:51 +0000</pubDate>
				<category><![CDATA[3 Most Useful]]></category>
		<category><![CDATA[Indicators]]></category>
		<category><![CDATA[System Evaluation]]></category>
		<category><![CDATA[Cold Blood Index]]></category>
		<category><![CDATA[Data mining bias]]></category>
		<category><![CDATA[Drawdown]]></category>
		<category><![CDATA[Grid trading]]></category>
		<category><![CDATA[Zorro]]></category>
		<guid isPermaLink="false">http://www.financial-hacker.com/?p=83</guid>

					<description><![CDATA[You&#8217;ve developed a new trading system. All tests produced impressive results. So you started it live. And are down by $2000 after 2 months. Or you have a strategy that worked for 2 years, but recently went into a seemingly endless drawdown. These situations are all too familiar to any algo trader. What now? Carry on in cold blood, &#8230; <a href="https://financial-hacker.com/the-cold-blood-index/" class="more-link">Continue reading<span class="screen-reader-text"> "The Cold Blood Index"</span></a>]]></description>
										<content:encoded><![CDATA[<p>You&#8217;ve developed a new trading system. All tests produced impressive results. So you started it live. And are down by $2000 after 2 months. Or you have a strategy that worked for 2 years, but recently went into a seemingly endless drawdown. These situations are all too familiar to any algo trader. What now? <strong>Carry on in cold blood, or pull the brakes in panic?</strong> <br />     Several reasons can cause a strategy to lose money right from the start. It can already be <strong>expired</strong>, since the market inefficiency has disappeared. Or the system is worthless and the test was falsified by some <strong>bias</strong> that survived all reality checks. Or it&#8217;s a <strong>normal drawdown</strong> that you just have to sit out. In this article I propose an algorithm for deciding very early whether or not to abandon a system in such a situation.<span id="more-83"></span></p>
<p>When you start a trading strategy, you&#8217;re almost always under water for some time. This is a normal consequence of <strong>equity curve volatility</strong>. It is the very reason why you need initial capital at all for trading (aside from covering margins and transaction costs). Here you can see the typical bumpy start of a trading system:</p>
<figure id="attachment_252" aria-describedby="caption-attachment-252" style="width: 735px" class="wp-caption alignnone"><img loading="lazy" decoding="async" class="wp-image-252 size-full" src="http://www.financial-hacker.com/wp-content/uploads/2015/09/z5zulu3.png" alt="z5zulu3" width="735" height="323" srcset="https://financial-hacker.com/wp-content/uploads/2015/09/z5zulu3.png 735w, https://financial-hacker.com/wp-content/uploads/2015/09/z5zulu3-300x132.png 300w" sizes="(max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 984px) 61vw, (max-width: 1362px) 45vw, 600px" /><figcaption id="caption-attachment-252" class="wp-caption-text">CHF grid trader, initial live equity curve</figcaption></figure>
<p>You can estimate from the live equity curve that this system was rather profitable (it was a grid trader exploiting the CHF price cap). It started in July 2013 and had earned about 750 pips in January 2014, 7 months later. Max drawdown was ~400 pips from September until November. So the raw return of that system was about 750/400 ~= 190%. Normally an excellent value for a trading system. But you can also see from the curve that you were down 200 pips about six weeks into trading, and thus had lost almost half of your minimum initial capital. And if you had started the system in September, you would even have stayed under water for more than 3 months! This is a psychologically difficult situation. Many traders panic, pull out, and thus <strong>lose money even with highly profitable systems</strong>. Algo trading unaffected by emotions? Not true.</p>
<h3>Not so out of sample</h3>
<p>The basic problem: you can never fully trust your test results. No matter how out-of-sample you test it, a strategy still suffers from a certain amount of <strong>Data-Snooping Bias</strong>. The standard method of measuring bias &#8211; <strong><a href="http://www.financial-hacker.com/whites-reality-check/">White&#8217;s Reality Check</a></strong> &#8211; works well for simple mechanically generated systems, as in the <strong><a href="http://www.financial-hacker.com/trend-and-exploiting-it/">Trend Experiment</a></strong>. But all human decisions about algorithms, asset selection, filters, training targets, stop/takeprofit mechanisms, WFO windows, money management and so on add new bias, since they are normally affected by testing. The out-of-sample data is then not so out-of-sample anymore. While the bias from training or optimization can be measured and even eliminated with walk forward methods, the <strong>bias introduced by the mere development process is unknown</strong>. The strategy might still be profitable, or not profitable anymore, or never have been profitable at all. You can only find out by permanently comparing live results with test results.</p>
<p>You could do that with no risk by trading on a demo account. But if the system is really profitable, demo time is sacrificed profit and thus expensive. Often very expensive, as you must demo trade for a long time to get statistically significant results, and many strategies have a limited lifetime anyway. So you normally demo trade a system only for a few weeks to make sure that the script is bug-free, then you go live with real money.</p>
<h3>Pull-out conditions</h3>
<p>The simplest method of comparing live results is based on the <strong>maximum drawdown</strong> in the test. This is the pull-out inequality:</p>
<p style="text-align: center;"><em><strong>[pmath size=18]E ~&lt;~ C + G t/y &#8211; D[/pmath]</strong></em></p>
<p><em><strong>E</strong></em> = Current account equity<br /> <em><strong>C</strong></em> = Initial account capital<br /> <em><strong>G</strong></em> = Test profit<br /> <em><strong>t</strong></em> = Live trading period<br /> <em><strong>y</strong></em> = Test period<br /> <em><strong>D</strong></em> = Test maximum drawdown</p>
<p>This formula means simply that you should pull out when the live trading drawdown exceeds the maximum drawdown from the test. Traders often check their live results this way, but there are many problems involved with this method:</p>
<ul style="list-style-type: square;">
<li>The maximum backtest drawdown is more or less random.</li>
<li>Drawdowns grow with the test period, thus longer test periods produce worse maximum drawdowns and later pull-out signals.</li>
<li>The drawdown time is not considered.</li>
<li>The method does not work when profits are reinvested by some money management algorithm.</li>
<li>The method does not consider how unlikely it is that the maximum drawdown occurs right at the start of live trading.</li>
</ul>
<p>For those reasons, the above pull-out inequality is often modified to take the drawdown length and growth into account. The maximum drawdown is then assumed to <strong>grow with the square root of time</strong>, leading to this modified formula:</p>
<p style="text-align: center;"><strong><em>[pmath size=18]E ~&lt;~ C + G t/y &#8211; D sqrt{{t+l}/y}[/pmath]</em></strong></p>
<p><em><strong>E</strong></em> = Current account equity<br /> <em><strong>C</strong></em> = Initial account capital<br /> <em><strong>G</strong></em> = Test profit<br /> <em><strong>t</strong></em> = Live trading period<br /> <b><i>y</i></b> = Test period<br /> <em><strong>D</strong></em> = Maximum drawdown depth<br /> <b>l</b> = Maximum drawdown length</p>
<p>This was in fact the algorithm that I often suggested to clients for supervising their live results. It puts the drawdown in relation to the test period and also considers the drawdown length, as the probability of being inside the worst drawdown right at live trading start is <em><strong>l/y</strong></em>. Still, the method does not work with a profit reinvesting system. And it is dependent on the rather random test drawdown. You could address the latter issue by taking the drawdown from a Monte Carlo shuffled equity curve, but this produces new problems since trading results often show serial correlation.</p>
<p>After this lengthy introduction for motivation, here&#8217;s the proposed algorithm that overcomes the issues mentioned above.</p>
<h3>Keeping cold blood</h3>
<p>For finding out if we really must immediately stop a strategy, we calculate the deviation of the current live trading situation from the strategy behavior in the test. For this we do not use the maximum drawdown, but the backtest equity or balance curve:</p>
<ol>
<li>Determine a time window of length <em><strong>l </strong></em>(in days) that you want to check. It&#8217;s normally the length of the current drawdown; if your system is not in a drawdown, you&#8217;re probably in cold blood anyway. Determine the drawdown depth <em><strong>D</strong></em>,  i.e. the net loss during that time.</li>
<li>Place a time window of the same size <em><strong>l </strong></em>at the start of the test balance curve.</li>
<li>Determine the balance difference <em><strong>G</strong></em> from end to start of the window. Increase a counter <em><strong>N</strong></em> when <em><strong>G &lt;= D</strong></em>. </li>
<li>Move the window forward by 1 day.</li>
<li>Repeat steps 3 and 4 until the window arrives at the end of the balance curve. Count the steps with a counter <em><strong>M</strong></em>.</li>
</ol>
<p>Any window movement takes a sample out of the curve. We have <em><strong>N</strong></em> samples that are similar or worse, and <em><strong>M-N</strong></em> samples that are better than the current trading situation. The probability to <strong>not</strong> encounter such a drawdown in <em><strong>T</strong></em> out of <em><strong>M</strong></em> samples is a simple combinatorial equation:</p>
<p style="text-align: center;"><em><strong>[pmath size=18]1-P ~=~ {(M-N)!(M-T)! }/ {M!(M-N-T)!}[/pmath]</strong></em></p>
<p><em><strong>N</strong></em> = Number of  <em><strong>G &lt;= D</strong></em> occurrences<br /> <em><strong>M</strong></em> = Total samples = <em><strong>y-l+1</strong></em><br /> <em><strong>l </strong></em>= Window length in days<em><strong><br /> </strong></em><em><strong>y</strong></em> = Test time in days<br /> <em><strong>T</strong></em> = Samples taken = <em><strong>t-l+1<br /> </strong><strong>t</strong></em> = Live trading time in days</p>
<p><em><strong>P</strong></em> is the <strong>cold blood index</strong> &#8211; the similarity of the live situation with the backtest. As long as <em><strong>P</strong></em> stays above 0.1 or 0.2, probably all is still fine. But if <em><strong>P</strong></em> is very low or zero, either the backtest was strongly biased or the market has significantly changed. The system can still be profitable, just less profitable than in the test. But when the current loss <em><strong>D</strong></em> is large in comparison to the gains so far, we should stop.</p>
<p>Often we want to calculate <strong>P</strong> soon after the start of live trading. The window size <strong><em>l</em> </strong>is then identical to our trading time <em><strong>t</strong></em>,<em><strong> </strong></em>hence <em><strong>T == 1</strong></em>. This simplifies the equation to: </p>
<p style="text-align: center;"><em><strong>[pmath size=18]P ~=~ N/M[/pmath]</strong></em></p>
<p>In such a situation I&#8217;d give up and pull out of a painful drawdown as soon as <em><strong>P</strong></em> drops below 5%.</p>
<p>The slight disadvantage of this method is that you must perform a backtest with the same capital allocation, and store its balance or equity curve in a file for later evaluation during live trading. However, this should only take a few lines of code in a strategy script. </p>
<p>Here&#8217;s a small example script for Zorro that calculates <em><strong>P</strong></em> (in percent) from a stored balance curve when a trading time <strong>t</strong> and drawdown of length <em><strong>l</strong></em> and depth <em><strong>D</strong></em> is given:</p>
<pre>int TradeDays = 40;    <em>// t, Days since live start</em>
int DrawDownDays = 30; <em>// l, Days since you're in drawdown</em>
var DrawDown = 100;    <em>// D, Current drawdown depth in $</em>

string BalanceFile = "Log\\BalanceDaily.dbl"; <em>// stored double array</em>

var logsum(int n)
{
  if(n &lt;= 1) return 0;
  return log(n)+logsum(n-1);
}

void main()
{
  int CurveLength = file_length(BalanceFile)/sizeof(var);
  var *Balances = file_content(BalanceFile);

  int M = CurveLength - DrawDownDays + 1;
  int T = TradeDays - DrawDownDays + 1;
 
  if(T &lt; 1 || M &lt;= T) {
    printf("Not enough samples!");
    return;
  }
 
  var GMin=0., N=0.;
  int i=0;
  for(; i &lt; M-1; i++)
  {
    var G = Balances[i+DrawDownDays] - Balances[i];
    if(G &lt;= -DrawDown) N += 1.;
    if(G &lt; GMin) GMin = G;
  } 

  var P;
  if(TradeDays &gt; DrawDownDays)
    P = 1. - exp(logsum(M-N)+logsum(M-T)-logsum(M)-logsum(M-N-T));
  else
    P = N/M;

  printf("\nTest period: %i days",CurveLength);
  printf("\nWorst test drawdown: %.f",-GMin);
  printf("\nM: %i N: %i T: %i",M,(int)N,T);
  printf("\nCold Blood Index: %.1f%%",100*P);
}</pre>
<p>Since my computer is unfortunately not good enough for calculating the factorials of some thousand samples, I&#8217;ve summed up the logarithms instead &#8211; therefore the strange <strong>logsum</strong> function in the script.</p>
<h3>Conclusion</h3>
<ul style="list-style-type: square;">
<li>Finding out early whether a live trading drawdown is &#8216;normal&#8217; or not can be essential for your wallet.</li>
<li>The backtest drawdown is a late and inaccurate criterion.</li>
<li>The Cold Blood Index calculates the precise probability of such a drawdown based on the backtest balance curve.</li>
</ul>
<p>I&#8217;ve added the script above to the 2015 scripts collection. I&#8217;ve also suggested to the Zorro developers to implement this method for automatically analyzing drawdowns while live trading, and to issue warnings when <em><strong>P</strong></em> gets dangerously low. This can also be done separately for components in a portfolio system. This feature will probably appear in a future Zorro version. </p>
]]></content:encoded>
					
					<wfw:commentRss>https://financial-hacker.com/the-cold-blood-index/feed/</wfw:commentRss>
			<slash:comments>27</slash:comments>
		
		
			</item>
	</channel>
</rss>
