<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Detrending &#8211; The Financial Hacker</title>
	<atom:link href="https://financial-hacker.com/tag/detrending/feed/" rel="self" type="application/rss+xml" />
	<link>https://financial-hacker.com</link>
	<description>A new view on algorithmic trading</description>
	<lastBuildDate>Sat, 26 Mar 2022 13:08:54 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://financial-hacker.com/wp-content/uploads/2017/07/cropped-mask-32x32.jpg</url>
	<title>Detrending &#8211; The Financial Hacker</title>
	<link>https://financial-hacker.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Petra on Programming: A New Zero-Lag Indicator</title>
		<link>https://financial-hacker.com/petra-on-programming-a-new-zero-lag-indicator/</link>
					<comments>https://financial-hacker.com/petra-on-programming-a-new-zero-lag-indicator/#comments</comments>
		
		<dc:creator><![CDATA[Petra Volkova]]></dc:creator>
		<pubDate>Wed, 15 Jan 2020 15:04:55 +0000</pubDate>
				<category><![CDATA[Indicators]]></category>
		<category><![CDATA[Programming]]></category>
		<category><![CDATA[Detrending]]></category>
		<category><![CDATA[Ehlers]]></category>
		<guid isPermaLink="false">https://financial-hacker.com/?p=3178</guid>

					<description><![CDATA[I was recently hired to code a series of indicators based on monthly articles in the Stocks &#38; Commodities magazine, and to write here about the details of indicator programming. Looking through the magazine, I found many articles useful, some a bit weird, some a bit on the esoteric side. So I hope I won&#8217;t &#8230; <a href="https://financial-hacker.com/petra-on-programming-a-new-zero-lag-indicator/" class="more-link">Continue reading<span class="screen-reader-text"> "Petra on Programming: A New Zero-Lag Indicator"</span></a>]]></description>
										<content:encoded><![CDATA[
<p><em>I was recently hired to code a series of indicators based on monthly articles in the <strong>Stocks &amp; Commodities</strong> magazine, and to write here about the details of indicator programming. Looking through the magazine, I found many articles useful, some a bit weird, some a bit on the esoteric side. So I hope I won&#8217;t have to code Elliott waves or harmonic figures one day. But this first one is a very rational indicator invented by a famous algo trader.</em></p>
<p><span id="more-3178"></span></p>
<p>You know the problem with indicators: the more accurate they are, the more they trail behind the price. In his books and seminars, <strong>John Ehlers</strong> has repeatedly tackled this problem, and proposed many indicators with little, if not zero, lag. His new indicators from the article &#8220;Reflex: A New Zero-Lag Indicator&#8221; in Stocks &amp; Commodities 2/2020 are supposed to remove trend from the price curve with almost no lag. They come in two flavours, <strong>ReFlex</strong> and <strong>TrendFlex</strong>.</p>
<p>The algorithm of the <strong>ReFlex</strong> indicator:</p>
<ul>
<li>Smooth the price curve with a lowpass filter.</li>
<li>Draw a line from the price <strong>N</strong> bars back to the current price.</li>
<li>Take the average vertical distance from line to price.</li>
<li>Divide the result by its standard deviation.</li>
</ul>
<p>The algorithm of the <strong>TrendFlex</strong> indicator:</p>
<ul>
<li>Smooth the price curve with a lowpass filter.</li>
<li>Take the average difference of the last <strong>N</strong>  prices to the current price.</li>
<li>Divide the result by its standard deviation.</li>
</ul>
<p>So the indicators are a filtered and normalized moving average, but of price differences. Ehlers provided TradeStation code that is very similar to Zorro&#8217;s lite-C, so coding the indicators was a matter of minutes. Here are the C versions:</p>
<pre class="prettyprint">var ReFlex(vars Data,int Period)<br />{<br />	var a1,b1,c1,c2,c3,Slope,DSum;<br />	vars Filt = series(Data[0]), MS = series(0);<br />	int i;<br />	<br />//Gently smooth the data in a SuperSmoother<br />	a1 = exp(-1.414*2*PI/Period);<br />	b1 = 2*a1*cos(1.414*PI/Period);<br />	c2 = b1;<br />	c3 = -a1*a1;<br />	c1 = 1-c2-c3;<br />	Filt[0] = c1*(Data[0]+Data[1])/2 + c2*Filt[1] + c3*Filt[2];<br /><br />//Period is assumed cycle period<br />	Slope = (Filt[Period]-Filt[0]) / Period;<br /><br />//Sum the differences<br />	for(i=1,DSum=0; i&lt;=Period; i++)<br />		DSum += (Filt[0] + i*Slope) - Filt[i];<br />	DSum /= Period;<br />	<br />//Normalize in terms of Standard Deviations<br />	MS[0] = .04*DSum*DSum + .96*MS[1];<br />	if(MS[0] &gt; 0.) return DSum/sqrt(MS[0]);<br />	else return 0.;<br />}</pre>
<pre class="prettyprint">var TrendFlex(vars Data,int Period)<br />{<br />	var a1,b1,c1,c2,c3,Slope,DSum;<br />	vars Filt = series(Data[0]), MS = series(0);<br />	int i;<br />	<br />//Gently smooth the data in a SuperSmoother<br />	a1 = exp(-1.414*2*PI/Period);<br />	b1 = 2*a1*cos(1.414*PI/Period);<br />	c2 = b1;<br />	c3 = -a1*a1;<br />	c1 = 1-c2-c3;<br />	Filt[0] = c1*(Data[0]+Data[1])/2 + c2*Filt[1] + c3*Filt[2];<br /><br />//Sum the differences<br />	for(i=1,DSum=0; i&lt;=Period; i++)<br />		DSum += Filt[0] - Filt[i];<br />	DSum /= Period;<br />	<br />//Normalize in terms of Standard Deviations<br />	MS[0] = .04*DSum*DSum + .96*MS[1];<br />	if(MS[0] &gt; 0.) return DSum/sqrt(MS[0]);<br />	else return 0.;<br />}</pre>
<p><strong>Data</strong> is the price curve and <strong>Period</strong> is the <strong>N</strong> from the above algorithm description. If you are familiar with C programming, the only unusual line that you encounter is this one:</p>
<p><strong>vars Filt = series(Data[0]), MS = series(0);</strong></p>
<p>This line allocates time series to the variables <strong>Filt</strong> and <strong>MS</strong>. So far so good. But when you allocate a time series for an indicator, it is very important to fill it with the correct initial value. Here it is <strong>Data[0]</strong>, the most recent price, for the <strong>Filt</strong> series. What would happen if we filled it with 0 instead? <strong>Filt</strong> is the lowpass filtered price, from Ehlers&#8217; &#8220;SuperSmoother&#8221; algorithm. If a lowpass filter starts at 0, it will creep veeerrry slooowly from 0 up to the filtered price, making the initial values of the filter useless. So this is my first important tip for indicator programming: always initialize a time series with a value in the range that it is supposed to generate.</p>
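<p>To see the effect in isolation, here is a minimal plain-C sketch (not lite-C, and not Zorro&#8217;s series mechanism) of a one-pole lowpass filter fed with a constant price of 100, once seeded with the first price and once with 0:</p>

```c
#include <assert.h>
#include <math.h>

/* One-pole lowpass over a constant price, returning the filter output
   after the given number of bars. 'seed' is the initial filter state. */
double lowpass_final(double seed, double price, double alpha, int bars)
{
    double filt = seed;
    int i;
    for (i = 0; i < bars; i++)
        filt = alpha*price + (1.-alpha)*filt;  /* EMA-style smoothing */
    return filt;
}
```

<p>Seeded with the price, the filter is correct from the first bar on; seeded with 0, it is still far below the price many bars later.</p>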
<p>Why are we using time series at all? We could have defined static arrays instead, and I can see that some of my colleagues who coded the same indicators for other platforms did just that. That would be fine if the platform only supported strategies with a single asset and a single time frame. But when the indicators are called with different time frames and different price curves in the same strategy, static arrays get all mixed up and the indicators produce wrong values. If in doubt, use series. That was my second tip.</p>
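<p>A small plain-C illustration of the problem (the function names are made up, not Zorro API): a smoother with a hidden static state returns wrong values as soon as two assets call it alternately, while a version that carries its state per caller &#8211; as a series does &#8211; stays correct:</p>

```c
#include <assert.h>

/* A smoother with one hidden static state: the second asset
   inherits the first asset's filter value. */
double smooth_static(double price)
{
    static double filt = 0;
    filt = 0.5*price + 0.5*filt;
    return filt;
}

/* The same smoother with the state carried per caller,
   the way a per-asset series would do it. */
double smooth(double *filt, double price)
{
    *filt = 0.5*price + 0.5*(*filt);
    return *filt;
}
```
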
<p>A small C script for plotting the indicators for the 2019 SPY curve:</p>
<pre class="prettyprint">function run()<br />{<br />   BarPeriod = 1440;<br />   StartDate = 20181220;<br />   EndDate = 20191231;<br />   asset("SPY");<br />   vars Prices = series(priceClose());<br />   plot("ReFlex",ReFlex(Prices,20),NEW|LINE,RED);<br />   plot("TrendFlex",TrendFlex(Prices,20),LINE,BLUE);<br />}</pre>
<p>The result:</p>
<p><a href="https://financial-hacker.com/wp-content/uploads/2020/01/Reflex_SPY.png"><img fetchpriority="high" decoding="async" class="alignnone  wp-image-3187" src="https://financial-hacker.com/wp-content/uploads/2020/01/Reflex_SPY-300x197.png" alt="" width="582" height="382" srcset="https://financial-hacker.com/wp-content/uploads/2020/01/Reflex_SPY-300x197.png 300w, https://financial-hacker.com/wp-content/uploads/2020/01/Reflex_SPY-1024x674.png 1024w, https://financial-hacker.com/wp-content/uploads/2020/01/Reflex_SPY-768x506.png 768w, https://financial-hacker.com/wp-content/uploads/2020/01/Reflex_SPY.png 1197w" sizes="(max-width: 582px) 85vw, 582px" /></a></p>
<p>You can see that the indicators have indeed almost no lag. The question remains how and for what you can use them. In the article, Ehlers is quite silent about the purpose of his indicators. My first idea was using them for trade signals: since they follow the peaks and valleys of the price curve with no lag, I would enter a long position at a valley and reverse to short at a peak. So I added these lines to the plotting script:</p>
<pre class="prettyprint">vars Signals = series(ReFlex(Prices,20));<br />if(valley(Signals)) enterLong();<br />if(peak(Signals)) enterShort();</pre>
<p>But alas, the result is no good. And it does not get much better with other variants for generating the trade signals, like crossovers or zones, or with a longer test period. Sure enough, some variants generate positive results, but they would not pass a reality check.</p>
<p>However, the indicators stay in the +/-2 range and really have almost no lag. This makes them a good choice as trendless inputs to a machine learning system, such as a deep learning ANN. Maybe someone finds another good use for them!</p>
<p>The indicators in the C language for the Zorro platform are available in the scripts 2020 archive. They can be added to any trading system with an #include statement.</p>
<p>Reference: John Ehlers, Reflex: A New Zero-Lag Indicator,  <a href="http://technical.traders.com/">Stocks&amp;Commodities</a> 02/2020 </p>
]]></content:encoded>
					
					<wfw:commentRss>https://financial-hacker.com/petra-on-programming-a-new-zero-lag-indicator/feed/</wfw:commentRss>
			<slash:comments>10</slash:comments>
		
		
			</item>
		<item>
		<title>Build Better Strategies! Part 3: The Development Process</title>
		<link>https://financial-hacker.com/build-better-strategies-part-3-the-development-process/</link>
					<comments>https://financial-hacker.com/build-better-strategies-part-3-the-development-process/#comments</comments>
		
		<dc:creator><![CDATA[jcl]]></dc:creator>
		<pubDate>Mon, 22 Feb 2016 16:46:32 +0000</pubDate>
				<category><![CDATA[3 Most Useful]]></category>
		<category><![CDATA[System Development]]></category>
		<category><![CDATA[Cycles]]></category>
		<category><![CDATA[Detrending]]></category>
		<category><![CDATA[Drawdown]]></category>
		<category><![CDATA[Medallion fund]]></category>
		<category><![CDATA[Money management]]></category>
		<category><![CDATA[Spectral filter]]></category>
		<category><![CDATA[Walk forward analysis]]></category>
		<guid isPermaLink="false">http://www.financial-hacker.com/?p=1191</guid>

					<description><![CDATA[This is the third part of the Build Better Strategies series. In the previous part we&#8217;ve discussed the 10 most-exploited market inefficiencies and gave some examples of their trading strategies. In this part we&#8217;ll analyze the general process of developing a model-based trading system. As almost anything, you can do trading strategies in (at least) &#8230; <a href="https://financial-hacker.com/build-better-strategies-part-3-the-development-process/" class="more-link">Continue reading<span class="screen-reader-text"> "Build Better Strategies! Part 3: The Development Process"</span></a>]]></description>
										<content:encoded><![CDATA[<p>This is the third part of the <a href="http://www.financial-hacker.com/build-better-strategies/">Build Better Strategies</a> series. In the previous part we&#8217;ve discussed the 10 most-exploited market inefficiencies and gave some examples of their trading strategies. In this part we&#8217;ll analyze the general process of developing a model-based trading system. As almost anything, you can do trading strategies in (at least) two different ways: There&#8217;s the <strong>ideal way</strong>, and there&#8217;s the <strong>real way</strong>. We begin with the <strong>ideal development process</strong>, broken down to 10 steps.<span id="more-1191"></span></p>
<h3>The ideal model-based strategy development<br />
Step 1: The model</h3>
<p>Select one of the known market inefficiencies listed in the <a href="http://www.financial-hacker.com/build-better-strategies-part-2-model-based-systems/">previous part</a>, or discover a new one. You could eyeball through price curves and look for something suspicious that can be explained by a certain market behavior. Or the other way around: theorize about a behavior pattern and check if you can find it reflected in the prices. If you discover something new, feel invited to post it here! But be careful: Models of non-existing inefficiencies (such as <a href="http://www.financial-hacker.com/seventeen-popular-trade-strategies-that-i-dont-really-understand/">Elliott Waves</a>) already outnumber real inefficiencies by a large amount. It is not likely that a real inefficiency remains unknown to this day.</p>
<p>Once you&#8217;ve decided on a model, determine which <strong>price curve anomaly</strong> it would produce, and describe it with a quantitative formula, or at least a qualitative criterion. You&#8217;ll need that for the next step. As an example we&#8217;re using the <strong>Cycles Model</strong> from the previous part:</p>
<p>[latex display=&#8221;true&#8221;]y_t ~=~ \hat{y} + \sum_{i}{a_i sin(2 \pi t/C_i+D_i)} + \epsilon[/latex]</p>
<p>(Cycles are not to be underestimated. One of the most successful funds in history &#8211; Jim Simons&#8217; <strong>Renaissance Medallion</strong> fund &#8211; is rumored to exploit cycles in price curves by analyzing their lengths (<em><strong>C<sub>i</sub></strong></em>), phases (<em><strong>D<sub>i</sub></strong></em>) and amplitudes (<em><strong>a<sub>i</sub></strong></em>) with a Hidden Markov Model. Don&#8217;t worry, we&#8217;ll use a somewhat simpler approach in our example.)</p>
<h3>Step 2: Research</h3>
<p>Find out if the hypothetical anomaly really appears in the price curves of the assets that you want to trade. For this you first need enough historical data of the traded assets &#8211; D1, M1, or tick data, depending on the time frame of the anomaly. How far back? As far as possible, since you want to find out the lifetime of your anomaly and the market conditions under which it appears. Write a script to detect and display the anomaly in price data. For our Cycles Model, this would be the frequency spectrum:</p>
<p><figure id="attachment_1160" aria-describedby="caption-attachment-1160" style="width: 1599px" class="wp-caption alignnone"><a href="http://www.financial-hacker.com/wp-content/uploads/2015/11/spectrum.png"><img decoding="async" class="wp-image-1160 size-full" src="http://www.financial-hacker.com/wp-content/uploads/2015/11/spectrum.png" alt="" width="1599" height="321" srcset="https://financial-hacker.com/wp-content/uploads/2015/11/spectrum.png 1599w, https://financial-hacker.com/wp-content/uploads/2015/11/spectrum-300x60.png 300w, https://financial-hacker.com/wp-content/uploads/2015/11/spectrum-1024x206.png 1024w, https://financial-hacker.com/wp-content/uploads/2015/11/spectrum-1200x241.png 1200w" sizes="(max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a><figcaption id="caption-attachment-1160" class="wp-caption-text">EUR/USD frequency spectrum, cycle amplitude vs. cycle length in bars</figcaption></figure></p>
<p>Check out how the spectrum changes over the months and years. Compare with the spectrum of random data (with <a href="http://www.financial-hacker.com/hackers-tools-zorro-and-r/" target="_blank" rel="noopener noreferrer">Zorro</a> you can use the <a href="http://zorro-project.com/manual/en/detrend.htm" target="_blank" rel="noopener noreferrer">Detrend</a> function for randomizing price curves). If you find no clear signs of the anomaly, or no significant difference to random data, improve your detection method. And if you then still don&#8217;t succeed, go back to step 1.</p>
<h3>Step 3: The algorithm</h3>
<p>Write an algorithm that generates the <strong>trade signals</strong> for buying in the direction of the anomaly. A market inefficiency normally has only a <strong>very weak effect</strong> on the price curve, so your algorithm must be really good at distinguishing it from random noise. At the same time it should be as simple as possible, and rely on as few free parameters as possible. In our example with the Cycles Model, the script reverses the position at every valley and peak of a sine curve that runs ahead of the dominant cycle:</p>
<pre class="prettyprint">function run()
{
  vars Price = series(price());
  var Phase = DominantPhase(Price,10);
  vars Signal = series(sin(Phase+PI/4));

  if(valley(Signal))
    reverseLong(1); 
  else if(peak(Signal))
    reverseShort(1);
}</pre>
<p>This is the core of the system. Now it&#8217;s time for a first backtest. The precise performance does not matter much at this point &#8211; just determine whether the algorithm has an edge or not. Can it produce a series of profitable trades, at least in certain market periods or situations? If not, improve the algorithm or write another one that exploits the same anomaly with a different method. But do not yet use any stops, trailing, or other bells and whistles. They would only distort the result and give you the illusion of profit where there is none. Your algorithm must be able to produce positive returns either with pure reversal, or at least with a timed exit.</p>
<p>In this step you must also decide about the <strong>backtest data</strong>. You normally need M1 or tick data for a realistic test; daily data won&#8217;t do. The data amount depends on the lifetime (determined in step 2) and the nature of the price anomaly. Naturally, the longer the period, the better the test &#8211; but more is not always better. Normally it makes no sense to go further back than 10 years, at least not when your system exploits some real market behavior. Markets change dramatically in a decade. Outdated historical price data can produce very misleading results: most systems that had an edge 15 years ago will fail miserably on today&#8217;s markets, but can deceive you with a seemingly profitable backtest.</p>
<h3>Step 4: The filter</h3>
<p>No market inefficiency exists all the time. Any market goes through periods of random behavior. It is essential for any system to have a filter mechanism that detects whether the inefficiency is present or not. The filter is at least as important as the trade signal, if not more so &#8211; but it&#8217;s often forgotten in trade systems. This is our example script with a filter:</p>
<pre class="prettyprint">function run()
{
  vars Price = series(price());
  var Phase = DominantPhase(Price,10);
  vars Signal = series(sin(Phase+PI/4));
  vars Dominant = series(BandPass(Price,rDominantPeriod,1));
  var Threshold = 1*PIP;
  ExitTime = 10*rDominantPeriod;
	
  if(Amplitude(Dominant,100) &gt; Threshold) {
    if(valley(Signal))
      reverseLong(1); 
    else if(peak(Signal))
      reverseShort(1);
  }
}</pre>
<p>We apply a bandpass filter centered at the dominant cycle period to the price curve and measure its amplitude. If the amplitude is above a threshold, we conclude that the inefficiency is there, and we trade. The trade duration is now also restricted to a maximum of 10 cycles, since we found in step 2 that dominant cycles appear and disappear in a relatively short time.</p>
<p>What can go wrong in this step is yielding to the temptation to add a filter just because it improves the test result. Any filter must have a rational reason in the market behavior or in the used signal algorithm. If your algorithm only works with irrational filters added: back to step 3.</p>
<h3>Step 5: Optimizing (but not too much!)</h3>
<p>All parameters of a system affect the result, but only a few directly determine entry and exit points of trades dependent on the price curve. These &#8216;adaptable&#8217; parameters should be identified and optimized. In the above example, trade entry is determined by the phase of the forerunning sine curve and by the filter threshold, and trade exit is determined by the exit time. Other parameters &#8211; such as the filter constants of the <strong>DominantPhase</strong> and the <strong>BandPass</strong> functions &#8211; need not be adapted since their values do not depend on the market situation.</p>
<p>Adaption is an optimizing procedure, and a big opportunity to fail without even noticing it. Often, genetic or brute force methods are applied for finding the &#8220;best&#8221; parameter combination at a profit peak in the parameter space. Many platforms even have &#8220;optimizers&#8221; for this purpose. Although this method indeed produces the best backtest result, it won&#8217;t help at all for the live performance of the system. In fact, a recent study (<a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2745220">Wiecki et al. 2016</a>) showed that the better you optimize your parameters, the worse your system will fare in live trading! The reason for this paradoxical effect is that optimizing to maximum profit fits your system mostly to the noise in the historical price curve, since noise affects result peaks much more than market inefficiencies do.</p>
<p>Rather than generating top backtest results, correct optimizing has other purposes:</p>
<ul style="list-style-type: square;">
<li>It can determine the <strong>susceptibility</strong> of your system to its parameters. If the system is great with a certain parameter combination, but loses its edge when their values change a tiny bit: back to step 3.</li>
<li>It can identify the parameter&#8217;s <strong>sweet spots</strong>. The sweet spot is the area of highest parameter robustness, i.e. where small parameter changes have little effect on the return. They are not the peaks, but the centers of broad hills in the parameter space.</li>
<li>It can adapt the system to different assets, and enable it to trade a <strong>portfolio</strong> of assets with slightly different parameters.&nbsp;It can also extend the <strong>lifetime</strong> of the system by adapting it to the current market situation in regular time intervals, parallel to live trading.</li>
</ul>
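<p>Selecting a sweet spot rather than a peak can be sketched like this in plain C: given the profit factors of an optimize run, pick the parameter step whose neighborhood average is best &#8211; the center of the broadest hill. This is an illustration of the idea only, not Zorro&#8217;s actual selection method:</p>

```c
#include <assert.h>

/* Given the profit factors of an optimize run (one per parameter step),
   return the index with the best 3-step neighborhood average - the
   center of a broad hill, not a narrow spike. */
int sweet_spot(const double *pf, int n)
{
    int i, best = 1;
    double best_avg = -1.;
    for (i = 1; i < n-1; i++) {   /* skip the edges */
        double avg = (pf[i-1] + pf[i] + pf[i+1])/3.;
        if (avg > best_avg) { best_avg = avg; best = i; }
    }
    return best;
}
```
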
<p>This is our example script with entry parameter optimization:</p>
<pre class="prettyprint">function run()
{
  vars Price = series(price());
  var Phase = DominantPhase(Price,10);
  vars Signal = series(sin(Phase+optimize(1,0.7,2)*PI/4));
  vars Dominant = series(BandPass(Price,rDominantPeriod,1));
  ExitTime = 10*rDominantPeriod;
  var Threshold = optimize(1,0.7,2)*PIP;
	
  if(Amplitude(Dominant,100) &gt; Threshold) {
    if(valley(Signal))
      reverseLong(1); 
    else if(peak(Signal))
      reverseShort(1);
  }
}</pre>
<p>The two <a href="http://manual.zorro-project.com/optimize.htm" target="_blank" rel="noopener noreferrer">optimize</a> calls use a start value (<strong>1.0</strong> in both cases) and a range (<strong>0.7..2.0</strong>) for determining the sweet spots of the two essential parameters of the system. You can identify the spots in the profit factor curves (red bars) of the two parameters that are generated by the optimization process:</p>
<p><figure id="attachment_1377" aria-describedby="caption-attachment-1377" style="width: 391px" class="wp-caption alignnone"><a href="http://www.financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p1.png"><img decoding="async" class="wp-image-1377 size-full" src="http://www.financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p1.png" alt="" width="391" height="321" srcset="https://financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p1.png 391w, https://financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p1-300x246.png 300w" sizes="(max-width: 391px) 85vw, 391px" /></a><figcaption id="caption-attachment-1377" class="wp-caption-text">Sine phase in pi/4 units</figcaption></figure></p>
<p><figure id="attachment_1376" aria-describedby="caption-attachment-1376" style="width: 391px" class="wp-caption alignnone"><a href="http://www.financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p2.png"><img loading="lazy" decoding="async" class="wp-image-1376 size-full" src="http://www.financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p2.png" alt="" width="391" height="321" srcset="https://financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p2.png 391w, https://financial-hacker.com/wp-content/uploads/2016/02/CyclesDev_EURUSD_p2-300x246.png 300w" sizes="(max-width: 391px) 85vw, 391px" /></a><figcaption id="caption-attachment-1376" class="wp-caption-text">Amplitude threshold in pips</figcaption></figure></p>
<p>In this case the optimizer would select a parameter value of about 1.3 for the sine phase and about 1.0 (not the peak at 0.9) for the amplitude threshold for the current asset (EUR/USD). The exit time is not optimized in this step, as we&#8217;ll do that later together with the other exit parameters when risk management is implemented.</p>
<h3>Step 6: Out-of-sample analysis</h3>
<p>Of course the parameter optimization improved the backtest performance of the strategy, since the system was now better adapted to the price curve. So the test result so far is worthless. For getting an idea of the real performance, we first need to split the data into in-sample and out-of-sample periods. The in-sample periods are used for training, the out-of-sample periods for testing. The best method for this is <a href="http://manual.zorro-project.com/numwfocycles.htm" target="_blank" rel="noopener noreferrer">Walk Forward Analysis</a>. It uses a rolling window into the historical data for separating test and training periods.</p>
<p>Unfortunately, WFA adds two more parameters to the system: the training time and the test time of a WFA cycle. The test time should be long enough for trades to properly open and close, and short enough for the parameters to stay valid. The training time is more critical: too short a training period will not provide enough price data for effective optimization, while too long a training period will also produce bad results, since the market can already undergo changes during the training period. So the training time itself is a parameter that has to be optimized.</p>
<p>A five-cycle walk forward analysis (add &#8220;<strong>NumWFOCycles = 5;</strong>&#8221; to the above script) reduces the backtest performance from 100% annual return to a more realistic 60%. To prevent WFA from still producing too optimistic results just by a lucky selection of test and training periods, it also makes sense to perform WFA several times with slightly different starting points of the simulation. If the system has an edge, the results should not be too different. If they vary wildly: back to step 3.</p>
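<p>The rolling window can be sketched in plain C; this is only an illustration of the slicing principle, not Zorro&#8217;s internal WFA implementation:</p>

```c
#include <assert.h>

/* Bar ranges of one walk-forward cycle: the training window slides
   forward by one test period per cycle, and the test period follows
   directly after the training period. End indices are exclusive. */
typedef struct { int train_start, train_end, test_start, test_end; } WfoCycle;

WfoCycle wfo_cycle(int k, int train_bars, int test_bars)
{
    WfoCycle c;
    c.train_start = k*test_bars;
    c.train_end   = c.train_start + train_bars;
    c.test_start  = c.train_end;
    c.test_end    = c.test_start + test_bars;
    return c;
}
```

<p>Consecutive test periods are seamless, so concatenating them yields one continuous out-of-sample equity curve.</p>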
<h3>Step 7: Reality Check</h3>
<p>Even though the test is now out-of-sample, the mere development process &#8211; selecting algorithms, assets, test periods and other ingredients by their performance &#8211; has added a lot of <strong>selection bias</strong> to the results. Are they caused by a real edge of the system, or just by biased development? Determining this with some certainty is the hardest part of strategy development.</p>
<p>The best way to find out is <a href="http://www.financial-hacker.com/whites-reality-check/">White&#8217;s Reality Check</a>. But it&#8217;s also the least practical because it requires strong discipline in parameter and algorithm selection. Other methods are not as good, but easier to apply:</p>
<ul>
<li><strong>Montecarlo</strong>. Randomize the price curve by shuffling without replacement, then train and test again. Repeat this many times. Plot a distribution of the results (an example of this method can be found in chapter 6 of the <a href="http://www.amazon.de/Das-B%C3%B6rsenhackerbuch-Finanziell-algorithmische-Handelssysteme/dp/1530310784" target="_blank" rel="noopener noreferrer">Börsenhackerbuch</a>). Randomizing removes all price anomalies, so you hope for significantly worse performance. But if the result from the real price curve lies not far east of the random distribution peak, it is probably also caused by randomness. That would mean: back to step 3.</li>
<li><strong>Variants.</strong> It&#8217;s the opposite of the Montecarlo method: Apply the trained system on variants of the price curve and hope for positive results. Variants that maintain most anomalies are <a href="http://www.financial-hacker.com/better-tests-with-oversampling/">oversampling</a>, detrending, or inverting the price curve. If the system stays profitable with those variants, but not with randomized prices, you might really have found a solid system.</li>
<li><strong>Really-out-of-sample (ROOS) Test</strong>. While developing the system, ignore the last year (2015) completely. Even delete all 2015 price history from your PC. Only when the system is completely finished, download the data and run a 2015 test. Since the 2015 data can be used only once this way and is then tainted, you cannot modify the system anymore if it fails in 2015. Just abandon it. Assemble all your mental strength and go back to step 1.</li>
</ul>
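<p>The Montecarlo randomization can be sketched in plain C: shuffle the bar-to-bar differences without replacement with a Fisher-Yates shuffle and rebuild a price curve from them. The shuffled curve keeps the return distribution, but all anomalies are destroyed:</p>

```c
#include <assert.h>
#include <math.h>
#include <stdlib.h>

/* Shuffle the bar-to-bar differences of a price curve without
   replacement (Fisher-Yates) and rebuild a curve from them. */
void shuffle_curve(const double *price, double *out, int n, unsigned seed)
{
    double diff[1024];  /* enough for this sketch */
    int i;
    for (i = 0; i < n-1; i++)
        diff[i] = price[i+1] - price[i];
    srand(seed);
    for (i = n-2; i > 0; i--) {   /* Fisher-Yates */
        int j = rand() % (i+1);
        double tmp = diff[i]; diff[i] = diff[j]; diff[j] = tmp;
    }
    out[0] = price[0];
    for (i = 0; i < n-1; i++)
        out[i+1] = out[i] + diff[i];
}
```

<p>Train and test the system on many such curves, then compare the real-curve result with the resulting distribution.</p>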
<h3>Step 8: Risk management</h3>
<p>Your system has so far survived all tests. Now you can concentrate on reducing its risk and improving its performance. Do not touch the entry algorithm and its parameters anymore; you&#8217;re now optimizing the exit. Instead of the simple timed and reversal exits that we&#8217;ve used during the development phase, we can now apply various trailing stop mechanisms. For instance:</p>
<ul style="list-style-type: square;">
<li>Instead of exiting after a certain time, raise the stop loss by a certain amount per hour. This has the same effect, but will close unprofitable trades sooner and profitable trades later.</li>
<li>When a trade has won a certain amount, place the stop loss at a distance above the break even point. Even when locking a profit percentage does not improve the total performance, it&#8217;s good for your health. Seeing profitable trades wander back into the losing zone can cause serious ulcers.</li>
</ul>
<p>This is our example script with the initial timed exit replaced by a stop loss limit that rises at every bar:</p>
<pre class="prettyprint">function run()
{
  vars Price = series(price());
  var Phase = DominantPhase(Price,10);
  vars Signal = series(sin(Phase+optimize(1,0.7,2)*PI/4));
  vars Dominant = series(BandPass(Price,rDominantPeriod,1));
  var Threshold = optimize(1,0.7,2)*PIP;

  Stop = ATR(100);
  for(open_trades)
    TradeStopLimit -= TradeStopDiff/(10*rDominantPeriod);
	
  if(Amplitude(Dominant,100) &gt; Threshold) {
    if(valley(Signal))
      reverseLong(1); 
    else if(peak(Signal))
      reverseShort(1);
  }
}</pre>
<p>The&nbsp;<strong>for(open_trades)</strong> loop increases the stop level of all open trades by a fraction of the initial stop loss distance at the end of every bar.</p>
<p>Of course you now have to optimize and run a walk forward analysis again with the exit parameters. If the performance didn&#8217;t improve, think about better exit methods.</p>
<h3>Step 9: Money management</h3>
<p>Money management serves three purposes. First, reinvesting your profits. Second, distributing your capital among portfolio components. And third, quickly finding out if a trading book is useless. Open the &#8220;Money Management&#8221; chapter and read the author&#8217;s investment advice. If it&#8217;s &#8220;invest 1% of your capital per trade&#8221;, you know why he&#8217;s writing trading books. He probably has not yet earned money with real trading.</p>
<p>Suppose your trade volume at a given time <em><strong>t</strong></em> is&nbsp;<em><strong>V(t)</strong></em>. If your system is profitable, on average your capital <em><strong>C</strong></em> will rise proportionally to <b><i>V</i></b> with a growth factor <em><strong>c</strong></em>:</p>
<p>[latex display="true"]\frac{dC}{dt} = c V(t)~~\rightarrow~~ C(t) = C_0 + c \int_{0}^{t}{V(t) dt}[/latex]</p>
<p>When you follow trading book advice and always invest a fixed percentage <em><strong>p</strong></em>&nbsp;of your capital, so that <em><strong>V(t) = p C(t)</strong></em>, your capital will grow exponentially with exponent&nbsp;<em><strong>p c</strong></em>:</p>
<p>[latex display="true"]\frac{dC}{dt} ~=~ c p C(t) ~~\rightarrow~~ C(t) ~=~ C_0 e^{p c t}[/latex]</p>
<p>Unfortunately your capital will also undergo random fluctuations, named <strong>Drawdowns</strong>. Drawdowns are proportional to the trade volume <em><strong>V(t)</strong></em>. On leveraged accounts with no limit to drawdowns, it can be shown from statistical considerations that the maximum drawdown depth <em><strong>D<sub>max</sub></strong></em> grows proportionally to the square root of time <em><strong>t</strong></em>:</p>
<p>[latex display="true"]{D_{max}}(t) ~=~ q V(t) \sqrt{t}[/latex]</p>
<p>So, with the fixed percentage investment:</p>
<p>[latex display="true"]{D_{max}}(t) ~=~ q p C(t) \sqrt{t}[/latex]</p>
<p>and at the time <em><strong>T = 1/(q p)<sup>2</sup></strong></em>:</p>
<p>[latex display="true"]{D_{max}}(T) ~=~ q p C(T) \frac{1}{q p} ~=~ C(T)[/latex]</p>
<p>You can see that around the time <em><strong>T&nbsp;= 1/(q p)<sup>2</sup></strong></em> a drawdown will eat up all your capital <em><strong>C(T)</strong></em>, no matter how profitable your strategy is and how you&#8217;ve chosen&nbsp;<em><strong>p</strong></em>! That&#8217;s why the 1% rule is bad advice. And why I advise clients not to raise the trade volume proportionally to their accumulated profit, but to its square root &#8211; at least on leveraged accounts. Then, as long as the strategy does not deteriorate, they keep a safe distance from a margin call.</p>
<p>Depending on whether you trade a single asset and algorithm or a portfolio of both, you can calculate the optimal investment with several methods. There&#8217;s the OptimalF formula by <strong>Ralph Vince</strong>, the Kelly formula by <strong>Ed Thorp</strong>, or <a href="http://www.financial-hacker.com/get-rich-slowly/">mean/variance optimization</a> by <strong>Harry Markowitz</strong>. Usually you won&#8217;t hard-code reinvesting in your strategy, but calculate the investment volume externally, since you might want to withdraw or deposit money from time to time. This requires the overall volume to be set up manually, not by an automated process. A formula for proper reinvesting and withdrawing can be found in the Black Book.</p>
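<p>As an illustration of the Kelly approach mentioned above &#8211; a simplified continuous-return variant, a sketch rather than a definitive implementation &#8211; the optimal investment fraction is approximately the mean return divided by the return variance:</p>

```c
/* Simplified continuous Kelly fraction: f* = mean / variance of the
   per-period returns. Function name and inputs are illustrative only. */
double kelly_fraction(const double *returns, int n)
{
    int i;
    double mean = 0, var = 0;
    for (i = 0; i < n; i++) mean += returns[i];
    mean /= n;
    for (i = 0; i < n; i++) var += (returns[i]-mean)*(returns[i]-mean);
    var /= n;
    return var > 0 ? mean / var : 0;
}
```

<p>A zero-edge return series yields a zero fraction; a strong edge can yield a fraction above 1, i.e. leverage &#8211; one reason why full Kelly is usually considered too aggressive in practice.</p>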
<h3>Step 10: Preparation for live trading</h3>
<p>You can now define the <strong>user interface</strong> of your trading system. Determine which parameters you want to change in real time, and which ones only at the start of the system. Provide a method to control the trade volume, and a &#8216;Panic Button&#8217; for locking profit or cashing out in case of bad news. Display all trading relevant parameters in real time. Add buttons for re-training the system, and provide a method for comparing live results with backtest results, such as the <a href="http://www.financial-hacker.com/the-cold-blood-index/">Cold Blood Index</a>. Make sure that you can supervise the system from wherever you are, for instance through an online status page. Don&#8217;t be tempted to look at it every five minutes. But you can make a mighty impression when you pull out your mobile phone on the summit of Mt. Ararat and explain to your fellow climbers: &#8220;Just checking my trades.&#8221;</p>
<h3>The real strategy development</h3>
<p>So far the theory. All fine and dandy, but how do you really develop a trading system? Everyone knows that there&#8217;s a huge gap between theory and practice. This is the real development process as testified by many seasoned algo traders:</p>
<p><strong>Step 1.</strong>&nbsp;Visit trader forums and find the thread about the new indicator with the fabulous returns.</p>
<p><strong>Step 2.</strong>&nbsp;Get the indicator working with a test system after a long coding session. Ugh, the backtest result does not look this good. You must have made some coding mistake. Debug. Debug some more.</p>
<p><strong>Step 3.</strong>&nbsp;Still no good result, but you have more tricks up your sleeve. Add a trailing stop. The results already look better. Run a weekday analysis. Tuesday is a particularly bad day for this strategy? Add a filter that prevents trading on Tuesday. Add more filters that prevent trades between 10 and 12 am, and when the price is below $14.50, and at full moon except on Fridays. Wait a long time for the simulation to finish. Wow, finally the backtest is in the green!</p>
<p><strong>Step 4.</strong> Of course you&#8217;re not fooled by in-sample results. After optimizing all 23 parameters, run a walk forward analysis. Wait a long time for the simulation to finish. Ugh, the result does not look this good. Try different WFA cycles. Try different bar periods. Wait a long time for the simulation to finish. Finally, with a 19-minute bar period and 31 cycles, you get a sensational backtest result! And this completely out of sample!</p>
<p><strong>Step 5.</strong> Trade the system live.</p>
<p><strong>Step 6.</strong> Ugh, the result does not look this good.</p>
<p><strong>Step 7.</strong> Wait a long time for your bank account to recover. In between, write a trading book.</p>
<hr>
<p>I&#8217;ve added the example script to the 2016 script repository. In the next part of this series we&#8217;ll look into the data mining approach with machine learning systems. We will examine price pattern detection, regression, neural networks, deep learning, decision trees, and support vector machines.</p>
<p style="text-align: right;"><strong>⇒&nbsp;<a href="http://www.financial-hacker.com/build-better-strategies-part-4-machine-learning/">Build Better Strategies &#8211; Part 4</a></strong></p>
]]></content:encoded>
					
					<wfw:commentRss>https://financial-hacker.com/build-better-strategies-part-3-the-development-process/feed/</wfw:commentRss>
			<slash:comments>61</slash:comments>
		
		
			</item>
		<item>
		<title>White&#8217;s Reality Check</title>
		<link>https://financial-hacker.com/whites-reality-check/</link>
					<comments>https://financial-hacker.com/whites-reality-check/#comments</comments>
		
		<dc:creator><![CDATA[jcl]]></dc:creator>
		<pubDate>Mon, 14 Sep 2015 08:48:17 +0000</pubDate>
				<category><![CDATA[Research]]></category>
		<category><![CDATA[System Evaluation]]></category>
		<category><![CDATA[Aronson]]></category>
		<category><![CDATA[Data mining bias]]></category>
		<category><![CDATA[Detrending]]></category>
		<category><![CDATA[Momentum]]></category>
		<category><![CDATA[White's reality check]]></category>
		<guid isPermaLink="false">http://www.financial-hacker.com/?p=188</guid>

					<description><![CDATA[This is the third part of the Trend Experiment article series. We now want to evaluate if the positive results from the 900 tested trend following strategies are for real, or just caused by Data Mining Bias. But what is Data Mining Bias, after all? And what is this ominous White&#8217;s Reality Check? Suppose you &#8230; <a href="https://financial-hacker.com/whites-reality-check/" class="more-link">Continue reading<span class="screen-reader-text"> "White&#8217;s Reality Check"</span></a>]]></description>
										<content:encoded><![CDATA[<p>This is the third part of the <a href="http://www.financial-hacker.com/trend-delusion-or-reality/" target="_blank" rel="noopener noreferrer">Trend Experiment</a> article series. We now want to evaluate if the positive results from the 900 tested trend following strategies are for real, or just caused by <strong>Data Mining Bias</strong>. But what is Data Mining Bias, after all? And what is this ominous <strong>White&#8217;s Reality Check</strong>?<span id="more-188"></span></p>
<p>Suppose you want to trade by moon phases. But you&#8217;re not sure if you shall buy at full moon and sell at new moon, or the other way around. So you do a series of moon phase backtests and find out that the best system, which opens positions at any first quarter moon, achieves 30% annual return. Is this finally the proof that astrology works?</p>
<p>A trade system based on a nonexistent effect normally has a profit expectancy of zero (minus trade costs). But you won&#8217;t get zero when you backtest variants of such a system. Due to statistical fluctuations, some of them will produce a positive and some a negative return. When you now pick the best performer, such as the first quarter moon trading system, you might get a high return and an impressive equity curve in the backtest. Sadly, its test result is not necessarily caused by clever trading. It might just be the result of cleverly selecting the random best performer from a pool of useless systems.</p>
<p>For finding out if the 30% return by quarter moon trading is for real or just the fool&#8217;s gold of Data Mining Bias, <strong>Halbert White</strong> (1950-2012) invented a test method in 2000. <strong>White&#8217;s Reality Check </strong>(aka <strong>Bootstrap Reality Check</strong>) is explained in detail in the book &#8216;Evidence-Based Technical Analysis&#8217; by <strong>David Aronson</strong>. It works this way:</p>
<ol>
<li>Develop a strategy. During the development process, keep a record of all strategy variants that were tested and discarded because of their test results, including all abandoned algorithms, ideas, methods, and parameters. It does not matter if they were discarded by human decision or by a computer search or optimizing process.</li>
<li>Produce balance curves of all strategy variants, using detrended trade results and no trade costs. Note down the profit <strong>P</strong> of the best strategy.</li>
<li>Detrend all balance curves by subtracting the mean return per bar (not to be confused with detrending the trade results!). This way you get a series of curves with the same characteristics of the tested systems, but zero profit.</li>
<li>Randomize all curves by bootstrap with replacement. This produces new curves from the random returns of the old curves. Because the same bars can be selected multiple times, most new curves now produce losses or profits different from zero.</li>
<li>Select the best performer from the randomized curves, and note down its profit.</li>
<li>Repeat steps 4 and 5 a couple thousand times.</li>
<li>You now have a list of several thousand best profits. The median <strong>M</strong> of that list is the Data Mining Bias of your strategy development process.</li>
<li>Check where the original best profit <strong>P</strong> appears in the list. The percentage of best bootstrap profits greater than <strong>P</strong> is the so-called <strong>p-Value</strong> of the best strategy. You want the p-Value to be as low as possible. If <strong>P</strong> is better than 95% of the best bootstrap profits, the best strategy has a real edge with 95% probability.</li>
<li><strong>P</strong> minus <strong>M</strong> minus trade costs is the result to be expected when trading the best strategy live.</li>
</ol>
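<p>The steps above can be condensed into a toy sketch in plain C (illustrative names, and simplified to total profits instead of full balance curves): detrend each return series, bootstrap it with replacement, pick the best bootstrap profit per round, and count how often it beats the original best profit.</p>

```c
#include <stdlib.h>

/* Toy White's Reality Check on k return series of length n each,
   stored back to back in 'series'. Returns the p-value: the fraction
   of bootstrap rounds whose best total profit beats the original
   best profit. Names are invented for this sketch. */
double wrc_p_value(double *series, int k, int n, double best_profit, int rounds)
{
    int r, s, i, better = 0;
    for (r = 0; r < rounds; r++) {
        double best = -1e308;
        for (s = 0; s < k; s++) {
            double *ret = series + s*n, mean = 0, sum = 0;
            for (i = 0; i < n; i++) mean += ret[i];
            mean /= n;                       /* step 3: detrend */
            for (i = 0; i < n; i++)          /* step 4: bootstrap */
                sum += ret[rand() % n] - mean;
            if (sum > best) best = sum;      /* step 5: best performer */
        }
        if (best > best_profit) better++;    /* step 8: count beats */
    }
    return (double)better / rounds;          /* the p-value */
}
```

<p>A low p-value means the original best profit is rarely matched by randomized useless systems, i.e. the strategy likely has a real edge.</p>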
<p>The method is not really intuitive, but mathematically sound. However, it suffers from a couple of problems that make WRC difficult to use in real strategy development:</p>
<ul style="list-style-type: square;">
<li>You can see the worst problem already in step 1. During strategy development you&#8217;re permanently testing ideas, adding or removing parameters, or checking out different assets and time frames. Putting aside all discarded variants and producing balance curves of all combinations of them is a cumbersome process. It gets even more difficult with machine learning algorithms that optimize weight factors and usually do not produce discarded variants. However, the good news is that you can easily apply the WRC when your strategy variants are produced by a transparent mechanical process with no human decisions involved. That&#8217;s fortunately the case for our trend experiment.</li>
<li>WRC is prone to type II errors. That means it can reject strategies although they have an edge. When more irrelevant variants &#8211; systems with random trading and zero profit expectancy &#8211; are added to the pool, more positive results can be produced in steps 4 and 5, which reduces the probability that your selected strategy survives the test. WRC can determine rather well that a system is profitable, but not as well that it is worthless.</li>
<li>It gets worse when variants have a negative expectancy. WRC can then overestimate Data Mining Bias (see paper 2 at the end of the article). This could theoretically also happen with our trend systems, as some variants may suffer from a phase reversal due to the delay by the smoothing indicators, and thus in fact trade against the trend instead of with it.</li>
</ul>
<h3>The Experiment</h3>
<p>First you need to collect daily return curves from all tested strategies. This requires adding a few lines to the <strong>Trend.c</strong> script from the <a href="http://www.financial-hacker.com/trend-and-exploiting-it/">previous article</a>:</p>
<pre class="prettyprint"><span style="color: #0000ff;">// some global variables</span>
int Period;
var Daily[3000];
...
<span style="color: #0000ff;">// in the run function, set all trading costs to zero</span>
 Spread = Commission = RollLong = RollShort = Slippage = 0;<span style="color: #0000ff;">
...
// store daily results in an equity curve</span>
  Daily[Day] = Equity;
}
...
<span style="color: #0000ff;">// in the objective function, save the curves in a file for later evaluation</span>
string FileName = "Log\\TrendDaily.bin";
string Name = strf("%s_%s_%s_%i",Script,Asset,Algo,Period);
int Size = Day*sizeof(var); 
file_append(FileName,Name,strlen(Name)+1);
file_append(FileName,&amp;Size,sizeof(int));
file_append(FileName,Daily,Size);</pre>
<p>The second part of the above code stores the equity at the end of every day in the <strong>Daily</strong> array. The third part stores a string with the name of the strategy, the length of the curve, and the equity values themselves in a file named <strong>TrendDaily.bin</strong> in the <strong>Log</strong> folder. After running the 10 trend scripts, all 900 resulting curves are collected together in the file.</p>
<p>The next part of our experiment is the <strong>Bootstrap.c</strong> script that applies White&#8217;s Reality Check. I&#8217;ll write it in two parts. The first part just reads the 900 curves from the <strong>TrendDaily.bin</strong> file, stores them for later evaluation, finds the best one, and displays a histogram of the profit factors. Once we have that, 80% of the work for the Reality Check is already done. This is the code:</p>
<pre class="prettyprint">void _plotHistogram(string Name,var Value,var Step,int Color)
{
  var Bucket = floor(Value/Step);
  plotBar(Name,Bucket,Step*Bucket,1,SUM+BARS+LBL2,Color);
}

typedef struct curve
{
  string Name;
  int Length;
  var *Values;
} curve;

curve Curve[900];
var Daily[3000];

void main()
{
  byte *Content = file_content("Log\\TrendDaily.bin");
  int i,j,N = 0;
  int MaxN = 0;
  var MaxPerf = 0.0;
	
  while(N&lt;900 &amp;&amp; *Content)
  {
<span style="color: #0000ff;">// extract the next curve from the file</span>
    string Name = Content;
    Content += strlen(Name)+1;
    int Size = *((int*)Content);
    int Length = Size/sizeof(var); // number of values
    Content += 4;
    var *Values = Content;
    Content += Size;

<span style="color: #0000ff;">// store and plot the curve</span>		
    Curve[N].Name = Name;
    Curve[N].Length = Length;
    Curve[N].Values = Values;
    var Performance = 1.0/ProfitFactor(Values,Length);
    printf("\n%s: %.2f",Name,Performance);
    _plotHistogram("Profit",Performance,0.005,RED);

<span style="color: #0000ff;">// find the best curve</span>		
    if(MaxPerf &lt; Performance) {
      MaxN = N; MaxPerf = Performance;
    }
    N++;
  }
  printf("\n\nBenchmark: %s, %.2f",Curve[MaxN].Name,MaxPerf); 
}</pre>
<p>Most of the code is just for reading and storing all the equity curves. The indicator <strong>ProfitFactor </strong>calculates the profit factor of the curve, the sum of all daily wins divided by the sum of all daily losses. However, here we need to consider the array order. Like many platforms, Zorro stores time series in reverse chronological order, with the most recent data at the beginning. But we stored the daily equity curve in straight chronological order. So the losses are actually wins and the wins actually losses, which is why we need to invert the profit factor. The curve with the best profit factor will be our benchmark for the test.</p>
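<p>For clarity, the profit factor of a chronological equity curve can be sketched in plain C (an illustration of the calculation, not Zorro&#8217;s built-in <strong>ProfitFactor</strong>): sum the positive and the negative daily equity changes, then divide.</p>

```c
/* Profit factor of an equity curve in chronological order:
   sum of daily gains divided by sum of daily losses. Zorro's
   ProfitFactor expects reverse chronological order, hence the 1/x
   inversion in the script above; this sketch takes chronological
   data directly, so no inversion is needed. */
double profit_factor(const double *equity, int n)
{
    int i;
    double wins = 0, losses = 0;
    for (i = 1; i < n; i++) {
        double d = equity[i] - equity[i-1];
        if (d > 0) wins += d; else losses -= d;
    }
    return losses > 0 ? wins / losses : 0;
}
```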
<p>This is the resulting histogram, the profit factors of all 900 (or rather, 705 due to the trade number minimum) equity curves:</p>
<p><figure id="attachment_244" aria-describedby="caption-attachment-244" style="width: 833px" class="wp-caption alignnone"><a href="http://www.financial-hacker.com/wp-content/uploads/2015/09/trend_s.png"><img loading="lazy" decoding="async" class="wp-image-244 size-full" src="http://www.financial-hacker.com/wp-content/uploads/2015/09/trend_s.png" alt="" width="833" height="201" srcset="https://financial-hacker.com/wp-content/uploads/2015/09/trend_s.png 833w, https://financial-hacker.com/wp-content/uploads/2015/09/trend_s-300x72.png 300w" sizes="(max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 984px) 61vw, (max-width: 1362px) 45vw, 600px" /></a><figcaption id="caption-attachment-244" class="wp-caption-text">Profit factor distribution (without trade costs)</figcaption></figure></p>
<p>Note that the profit factors are slightly different from the parameter charts of the previous article because they were now calculated from daily returns, not from trade results. We removed trading costs, so the histogram is centered at a profit factor of 1.0, aka zero profit. Only a few systems achieved a profit factor in the 1.2 range; the two best made about 1.3. Now we&#8217;ll see what White has to say about that. This is the rest of the <strong>main</strong> function in <strong>Bootstrap.c</strong> that finally applies his Reality Check:</p>
<pre>plotBar("Benchmark",MaxPerf/0.005,MaxPerf,50,BARS+LBL2,BLUE);	
printf("\nBootstrap - please wait");
int Worse = 0, Better = 0;
for(i=0; i&lt;1000; i++) {
  var MaxBootstrapPerf = 0;
  for(j=0; j&lt;N; j++) {
    randomize(BOOTSTRAP|DETREND,Daily,Curve[j].Values,Curve[j].Length);
    var Performance = 1.0/ProfitFactor(Daily,Curve[j].Length);
    MaxBootstrapPerf = max(MaxBootstrapPerf,Performance);
  }
  if(MaxPerf &gt; MaxBootstrapPerf)
    Better++;
  else
    Worse++;
  _plotHistogram("Profit",MaxBootstrapPerf,0.005,RED);
  progress(100*i/SAMPLES,0);
}
printf("\nBenchmark beats %.0f%% of samples!",
  (var)Better*100./(Better+Worse));
</pre>
<p>This code needs about 3 minutes to run; we&#8217;re sampling the 705 curves 1000 times. The <strong>randomize</strong> function shuffles the daily returns by bootstrap with replacement; the <strong>DETREND</strong> flag tells it to subtract the mean return from all returns beforehand. The number of random curves that are better and worse than the benchmark is stored for printing the percentage at the end. The <strong>progress</strong> function moves the progress bar while Zorro grinds through the 705,000 curves. And this is the result:</p>
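<p>What the <strong>randomize(BOOTSTRAP|DETREND,...)</strong> call does can be approximated in plain C &#8211; a sketch of the concept, not Zorro&#8217;s actual implementation: subtract the mean return from the source series, then fill the target with randomly drawn, possibly repeated, detrended returns.</p>

```c
#include <stdlib.h>

/* Sketch of bootstrap-with-replacement on detrended daily returns:
   compute the mean of the source returns, then fill the target with
   randomly drawn (possibly repeated) returns minus that mean. */
void bootstrap_detrend(double *dst, const double *src, int n)
{
    int i;
    double mean = 0;
    for (i = 0; i < n; i++) mean += src[i];
    mean /= n;
    for (i = 0; i < n; i++)
        dst[i] = src[rand() % n] - mean;
}
```

<p>Because draws repeat, the resampled curve generally has a nonzero total return even though the detrended source sums to zero &#8211; exactly the fluctuation the Reality Check measures.</p>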
<p><figure id="attachment_245" aria-describedby="caption-attachment-245" style="width: 703px" class="wp-caption alignnone"><a href="http://www.financial-hacker.com/wp-content/uploads/2015/09/bootstrap_s.png"><img loading="lazy" decoding="async" class="wp-image-245 size-full" src="http://www.financial-hacker.com/wp-content/uploads/2015/09/bootstrap_s.png" alt="" width="703" height="201" srcset="https://financial-hacker.com/wp-content/uploads/2015/09/bootstrap_s.png 703w, https://financial-hacker.com/wp-content/uploads/2015/09/bootstrap_s-300x86.png 300w" sizes="(max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 984px) 61vw, (max-width: 1362px) 45vw, 600px" /></a><figcaption id="caption-attachment-245" class="wp-caption-text">Bootstrap results (red), benchmark system (black)</figcaption></figure></p>
<p>Hmm. We can see that the best system &#8211; the black bar &#8211; is at the right side of the histogram, indicating that it might be significant. But only with about 80% probability (the script gives a slightly different result every time due to the randomization). 20% of the random curves achieve better profit factors than the best system from the experiment. The median of the randomized samples is about 1.26. Only the two best systems from the original distribution (first image) have profit factors above 1.26 &#8211; all the rest are at or below the bootstrap median.</p>
<p>So we have to conclude that this simple way of trend trading does not really work. Interestingly, one of those 900 tested systems is a system that I have used myself since 2012, although with additional filters and conditions. This system has produced good live returns and a positive result at the end of every year so far. And there&#8217;s still the fact that EUR/USD and silver in all variants produced better statistics than S&amp;P500. This hints that some trend effect exists in their price curves, but the profit factors of the simple algorithms are not high enough to pass White&#8217;s Reality Check. We need a better approach for trend exploitation &#8211; for instance, a filter that detects if a trend is there or not. This will be the topic of the <a href="http://www.financial-hacker.com/the-market-meanness-index/" target="_blank" rel="noopener noreferrer">next article</a> of this series. We will see that a filter can have a surprising effect on reality checks. Since we now have the <strong>Bootstrap </strong>script for applying White&#8217;s Reality Check, we can quickly do further experiments.</p>
<p>The <strong>Bootstrap.c</strong> script has been added to the 2015 script collection downloadable on the sidebar.</p>
<h3>Conclusion</h3>
<ul style="list-style-type: square;">
<li>None of the 10 tested low-lag indicators, and none of the 3 tested markets shows significant positive expectancy with trend trading.</li>
<li>There is evidence of a trend effect in currencies and commodities, but it is too weak or too infrequent to be effectively exploited with simple trade signals from a filtered price curve.</li>
<li>We now have a useful code framework for comparing indicators and assets, and for further experiments with trade strategies.</li>
</ul>
<h3>Papers</h3>
<ol>
<li>Original paper by Dr. H. White:  <a href="http://www.ssc.wisc.edu/~bhansen/718/White2000.pdf" target="_blank" rel="noopener noreferrer">White2000</a></li>
<li>WRC modification by P. Hansen: <a href="http://www-siepr.stanford.edu/workp/swp05003.pdf" target="_blank" rel="noopener noreferrer">Hansen2005</a></li>
<li>Stepwise WRC Testing by J. Romano, M. Wolf: <a href="http://www.ssc.wisc.edu/~bhansen/718/RomanoWolf2005.pdf" target="_blank" rel="noopener noreferrer">RomanoWolf2005</a></li>
<li>Technical Analysis examined with WRC, by P. Hsu, C. Kuan: <a href="http://front.cc.nctu.edu.tw/Richfiles/15844-930305.pdf" target="_blank" rel="noopener noreferrer">HsuKuan2006</a></li>
<li>WRC and its Extensions by V. Corradi, N. Swanson: <a href="http://econweb.rutgers.edu/nswanson/papers/corradi_swanson_whitefest_1108_2011_09_06.pdf" target="_blank" rel="noopener noreferrer">CorradiSwanson2011</a></li>
</ol>
]]></content:encoded>
					
					<wfw:commentRss>https://financial-hacker.com/whites-reality-check/feed/</wfw:commentRss>
			<slash:comments>32</slash:comments>
		
		
			</item>
	</channel>
</rss>
