Neural networks programming with ProRealTime
This topic has 126 replies, 8 voices, and was last updated 1 year ago by MobiusGrey.
Tagged: data mining, machine learning
08/28/2018 at 3:20 AM #79210
It would be great if some of you could help me with this project.
I only wish I could, Leo, and I feel sad that only Nicolas is helping you, but you also need folks with more time than Nicolas can spare.
Your work is groundbreaking as far as most of us are concerned. I did read the reference you posted and broadly understood it, but it’s the coding I struggle with.
Tomorrow I will see if I can contact @Maz as he is a capable member who sticks in my mind as one who similarly tried to move forward with complex collaborative projects on here.
Thanks a lot! The little issue I have is that my daughter is 2 weeks old 🙂
08/28/2018 at 9:19 AM #79227
> thanks a lot, the little issue i have is my daughter is 2 weeks old
Oh Wow! You are a hero managing to achieve anything other than ‘survive’ … with a new precious little bundle to care for ! 🙂 Awww
I’ve just emailed Maz as promised, his profile mentions … developing neural networks and machine-learning … so I feel sure any input from Maz will benefit this Thread.
Cheers
08/28/2018 at 9:33 AM #79231
The truth is, now with a baby I feel more eager to succeed in algorithmic trading rather than in my paid 8-hours job 🙂
Thanks for emailing Maz; it will be great to get an inside opinion on this project.
08/28/2018 at 9:36 AM #79232
The condition to BUY at line 5 was only a dummy test, because of the BUY instruction required by ProBacktest. Anyway, does “GRAPH a” show a value of 10k at the first candlestick of the backtest?
Yeah, the first value shown by GRAPH is at barindex 10001.
The condition of buying at 10000 doesn’t trigger, but at 10001 it does; I imagine this is because the program reads the code after the first bar, so the entry happens on the next bar.
08/31/2018 at 10:58 PM #79496
Hi all,
I have almost completed the main body of the neural network.
I made a modification: instead of a quadratic cost function I use a cross-entropy cost function, and this is a very good thing because we do not have to worry about the initial values of the weights!
Maybe someone can already complete the rest of the code and create an indicator out of it.
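The cross-entropy point can be illustrated outside ProRealTime. Here is a small Python sketch (not ProBuilder code) comparing the output-layer gradients of the two costs for one sigmoid neuron; the starting activation is an invented worst case:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# For one sigmoid output a = sigmoid(z) with target y:
#   quadratic cost  C = (a - y)^2 / 2              -> dC/dz = (a - y) * a * (1 - a)
#   cross-entropy   C = -(y*ln(a) + (1-y)*ln(1-a)) -> dC/dz = (a - y)
def quadratic_grad(a, y):
    return (a - y) * a * (1 - a)

def cross_entropy_grad(a, y):
    return a - y

# A neuron whose initial weights leave it badly wrong (saturated near 0, target 1):
a = sigmoid(-5)  # about 0.0067
print("quadratic:", quadratic_grad(a, 1))          # tiny; learning stalls
print("cross-entropy:", cross_entropy_grad(a, 1))  # near -1; learning stays fast
```

With the quadratic cost the factor a*(1-a) collapses the gradient whenever the neuron saturates, which is why bad initial weights hurt; with cross-entropy that factor cancels, so the update is simply proportional to the error.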
// Hyperparameters to be optimized
// ETA=1 //known as the learning rate

///////////////
// CLASSIFIER
///////////////
myATR=average[20](range)+std[20](range)
ExtraStopLoss=MyATR
//ExtraStopLoss=3*spread*pipsize

//for long trades
classifierlong=0
FOR scanL=1 to candlesback DO
IF classifierlong[scanL]=1 then
BREAK
ENDIF
LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
IF close[scanL]+LongTradeLength < high-spread*pipsize then
IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
classifierlong=1
candleentrylong=barindex-scanL
BREAK
ENDIF
ENDIF
NEXT

//for short trades
classifiershort=0
FOR scanS=1 to candlesback DO
IF classifiershort[scanS]=1 then
BREAK
ENDIF
ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
IF close[scanS]-ShortTradeLength > low+spread*pipsize then
IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
classifiershort=1
candleentryshort=barindex-scanS
BREAK
ENDIF
ENDIF
NEXT

//////////////////////
// NEURONAL NETWORK
//////////////////////
//variable1= // to be defined
//variable2= // to be defined
//variable3= // to be defined
//variable4= // to be defined

// >>> LEARNING PROCESS <<<
IF classifierlong=1 or classifiershort=1 THEN
candleentry=max(candleentrylong,candleentryshort)
Y1=classifierlong
Y2=classifiershort

// >>> INPUT NEURONS <<<
input1=variable1[barindex-candleentry]
input2=variable2[barindex-candleentry]
input3=variable3[barindex-candleentry]
input4=variable4[barindex-candleentry]

FOR i=1 to 10 DO // THIS HAS TO BE IMPROVED
ETAi=ETA/i

// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))

// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))

// >>> PARTIAL DERIVATIVES OF COST FUNCTION (OUTPUT LAYER) <<<
// ...CROSS-ENTROPY AS COST FUNCTION...
// COST = -( Y1*LOG(output1)+(1-Y1)*LOG(1-output1) ) - ( Y2*LOG(output2)+(1-Y2)*LOG(1-output2) )
DerObias1 = (output1-Y1) * 1
DerObias2 = (output2-Y2) * 1
Derb11 = (output1-Y1) * F1
Derb12 = (output1-Y1) * F2
Derb13 = (output1-Y1) * F3
Derb14 = (output1-Y1) * F4
Derb15 = (output1-Y1) * F5
Derb16 = (output1-Y1) * F6
Derb21 = (output2-Y2) * F1
Derb22 = (output2-Y2) * F2
Derb23 = (output2-Y2) * F3
Derb24 = (output2-Y2) * F4
Derb25 = (output2-Y2) * F5
Derb26 = (output2-Y2) * F6

//Implementing BackPropagation
Obias1=Obias1-ETAi*DerObias1
Obias2=Obias2-ETAi*DerObias2
b11=b11-ETAi*Derb11
b12=b12-ETAi*Derb12
b13=b13-ETAi*Derb13
b14=b14-ETAi*Derb14
b15=b15-ETAi*Derb15
b16=b16-ETAi*Derb16
b21=b21-ETAi*Derb21
b22=b22-ETAi*Derb22
b23=b23-ETAi*Derb23
b24=b24-ETAi*Derb24
b25=b25-ETAi*Derb25
b26=b26-ETAi*Derb26

// >>> PARTIAL DERIVATIVES OF COST FUNCTION (HIDDEN LAYER) <<<
DerFbias1 = (output1-Y1) * b11 * F1*(1-F1) * 1 + (output2-Y2) * b21 * F1*(1-F1) * 1
DerFbias2 = (output1-Y1) * b12 * F2*(1-F2) * 1 + (output2-Y2) * b22 * F2*(1-F2) * 1
DerFbias3 = (output1-Y1) * b13 * F3*(1-F3) * 1 + (output2-Y2) * b23 * F3*(1-F3) * 1
DerFbias4 = (output1-Y1) * b14 * F4*(1-F4) * 1 + (output2-Y2) * b24 * F4*(1-F4) * 1
DerFbias5 = (output1-Y1) * b15 * F5*(1-F5) * 1 + (output2-Y2) * b25 * F5*(1-F5) * 1
DerFbias6 = (output1-Y1) * b16 * F6*(1-F6) * 1 + (output2-Y2) * b26 * F6*(1-F6) * 1
Dera11 = (output1-Y1) * b11 * F1*(1-F1) * input1 + (output2-Y2) * b21 * F1*(1-F1) * input1
Dera12 = (output1-Y1) * b11 * F1*(1-F1) * input2 + (output2-Y2) * b21 * F1*(1-F1) * input2
Dera13 = (output1-Y1) * b11 * F1*(1-F1) * input3 + (output2-Y2) * b21 * F1*(1-F1) * input3
Dera14 = (output1-Y1) * b11 * F1*(1-F1) * input4 + (output2-Y2) * b21 * F1*(1-F1) * input4
Dera21 = (output1-Y1) * b12 * F2*(1-F2) * input1 + (output2-Y2) * b22 * F2*(1-F2) * input1
Dera22 = (output1-Y1) * b12 * F2*(1-F2) * input2 + (output2-Y2) * b22 * F2*(1-F2) * input2
Dera23 = (output1-Y1) * b12 * F2*(1-F2) * input3 + (output2-Y2) * b22 * F2*(1-F2) * input3
Dera24 = (output1-Y1) * b12 * F2*(1-F2) * input4 + (output2-Y2) * b22 * F2*(1-F2) * input4
Dera31 = (output1-Y1) * b13 * F3*(1-F3) * input1 + (output2-Y2) * b23 * F3*(1-F3) * input1
Dera32 = (output1-Y1) * b13 * F3*(1-F3) * input2 + (output2-Y2) * b23 * F3*(1-F3) * input2
Dera33 = (output1-Y1) * b13 * F3*(1-F3) * input3 + (output2-Y2) * b23 * F3*(1-F3) * input3
Dera34 = (output1-Y1) * b13 * F3*(1-F3) * input4 + (output2-Y2) * b23 * F3*(1-F3) * input4
Dera41 = (output1-Y1) * b14 * F4*(1-F4) * input1 + (output2-Y2) * b24 * F4*(1-F4) * input1
Dera42 = (output1-Y1) * b14 * F4*(1-F4) * input2 + (output2-Y2) * b24 * F4*(1-F4) * input2
Dera43 = (output1-Y1) * b14 * F4*(1-F4) * input3 + (output2-Y2) * b24 * F4*(1-F4) * input3
Dera44 = (output1-Y1) * b14 * F4*(1-F4) * input4 + (output2-Y2) * b24 * F4*(1-F4) * input4
Dera51 = (output1-Y1) * b15 * F5*(1-F5) * input1 + (output2-Y2) * b25 * F5*(1-F5) * input1
Dera52 = (output1-Y1) * b15 * F5*(1-F5) * input2 + (output2-Y2) * b25 * F5*(1-F5) * input2
Dera53 = (output1-Y1) * b15 * F5*(1-F5) * input3 + (output2-Y2) * b25 * F5*(1-F5) * input3
Dera54 = (output1-Y1) * b15 * F5*(1-F5) * input4 + (output2-Y2) * b25 * F5*(1-F5) * input4
Dera61 = (output1-Y1) * b16 * F6*(1-F6) * input1 + (output2-Y2) * b26 * F6*(1-F6) * input1
Dera62 = (output1-Y1) * b16 * F6*(1-F6) * input2 + (output2-Y2) * b26 * F6*(1-F6) * input2
Dera63 = (output1-Y1) * b16 * F6*(1-F6) * input3 + (output2-Y2) * b26 * F6*(1-F6) * input3
Dera64 = (output1-Y1) * b16 * F6*(1-F6) * input4 + (output2-Y2) * b26 * F6*(1-F6) * input4

//Implementing BackPropagation
Fbias1=Fbias1-ETAi*DerFbias1
Fbias2=Fbias2-ETAi*DerFbias2
Fbias3=Fbias3-ETAi*DerFbias3
Fbias4=Fbias4-ETAi*DerFbias4
Fbias5=Fbias5-ETAi*DerFbias5
Fbias6=Fbias6-ETAi*DerFbias6
a11=a11-ETAi*Dera11
a12=a12-ETAi*Dera12
a13=a13-ETAi*Dera13
a14=a14-ETAi*Dera14
a21=a21-ETAi*Dera21
a22=a22-ETAi*Dera22
a23=a23-ETAi*Dera23
a24=a24-ETAi*Dera24
a31=a31-ETAi*Dera31
a32=a32-ETAi*Dera32
a33=a33-ETAi*Dera33
a34=a34-ETAi*Dera34
a41=a41-ETAi*Dera41
a42=a42-ETAi*Dera42
a43=a43-ETAi*Dera43
a44=a44-ETAi*Dera44
a51=a51-ETAi*Dera51
a52=a52-ETAi*Dera52
a53=a53-ETAi*Dera53
a54=a54-ETAi*Dera54
a61=a61-ETAi*Dera61
a62=a62-ETAi*Dera62
a63=a63-ETAi*Dera63
a64=a64-ETAi*Dera64

//GradientNorm = SQRT(DerObias1*DerObias1+DerObias2*DerObias2 + Derb11*Derb11+Derb12*Derb12+Derb13*Derb13+Derb14*Derb14+Derb15*Derb15+Derb16*Derb16 + Derb21*Derb21+Derb22*Derb22+Derb23*Derb23+Derb24*Derb24+Derb25*Derb25+Derb26*Derb26 + DerFbias1*DerFbias1+DerFbias2*DerFbias2+DerFbias3*DerFbias3+DerFbias4*DerFbias4+DerFbias5*DerFbias5+DerFbias6*DerFbias6 + Dera11*Dera11+Dera12*Dera12+Dera13*Dera13+Dera14*Dera14 + Dera21*Dera21+Dera22*Dera22+Dera23*Dera23+Dera24*Dera24 + Dera31*Dera31+Dera32*Dera32+Dera33*Dera33+Dera34*Dera34 + Dera41*Dera41+Dera42*Dera42+Dera43*Dera43+Dera44*Dera44 + Dera51*Dera51+Dera52*Dera52+Dera53*Dera53+Dera54*Dera54 + Dera61*Dera61+Dera62*Dera62+Dera63*Dera63+Dera64*Dera64)
NEXT
ENDIF

////////////////////
// NEW PREDICTION
////////////////////
// >>> INPUT NEURONS <<<
input1=variable1
input2=variable2
input3=variable3
input4=variable4

// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))

// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))

09/01/2018 at 11:57 AM #79502
I think it might be difficult for anyone to understand the whole concept of your project and the code. Could you please take some time to explain it to us?
Defining what variables to inject into the neural network is also something we should work on 😉 Any ideas?
09/01/2018 at 12:50 PM #79509
Hi Nicolas,
I should be honest: this is my first neural network. While I read about the topic, I am trying to work out how to define one for trading with ProRealTime.
Basically:
We give 4 inputs and get 2 outputs; the outputs are values from 0 to 1.
If output1 is near 1 and output2 is near 0, we go long, and the opposite for a short. (What happens if both are near 1? Maybe open a pending position.) If both outputs are near 0, we don’t trade.
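This decision rule can be sketched in Python (illustrative only; the 0.6 threshold is an example value for the decision threshold, which is a hyperparameter):

```python
def trade_signal(output1, output2, threshold=0.6):
    """Map the two sigmoid outputs (each in 0..1) to a trade decision."""
    go_long = output1 >= threshold and output2 < threshold
    go_short = output2 >= threshold and output1 < threshold
    if go_long:
        return "long"
    if go_short:
        return "short"
    # both high (ambiguous) or both low (no edge): stay flat
    return "none"

print(trade_signal(0.8, 0.1))  # long
print(trade_signal(0.2, 0.9))  # short
print(trade_signal(0.9, 0.9))  # none: both outputs fire, so the signal is ambiguous
```

Requiring the opposite output to stay below the threshold is one way to handle the “both near 1” case; opening a pending position instead, as suggested above, is another.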
As inputs we define 4 values. My ideas are below; in fact, anything can be used as an input.
Possible inputs:
SMA20=average[20](close)
SMA200=average[200](close)
SMA2400=average[2400](close) //in 5 min time frame this is the value of SMA 200 periods in hourly
variable1=RSI[14](close) // or to be defined
variable2=(close-SMA20)/SMA20*100 //or to be defined
variable3=(SMA20-SMA200)/SMA200*100 //or to be defined
variable4=(SMA200-SMA2400)/SMA2400*100 // to be defined
But first, the neural network must learn:
We look in the past for possible winning trades (with the classifier) and take the values of the inputs at that moment. With an algorithm called “gradient descent”, the neural network starts to modify all its values: the more data, the more it learns, and if the data change, the neural network changes. The way it modifies the values a11…a16, a21…a26, etc. is by trying to find the minimum of a cost function that measures the error, using the partial derivatives with respect to these values (Dera11…Dera16, Dera21…Dera26, etc.).
It is like optimising the variables while the code is running! The market changes, and our neural network changes with it!
Another part of the code runs the neural network again to predict, and hopefully we get a winning trade.
I will be glad if the code predicts correctly more than 50% of the time. If we want better predictions we need to increase the number of neurons and inputs, or even go further into deep learning, which involves more layers of neurons.
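For anyone who wants to play with the idea outside ProRealTime, the same 4-input / 6-hidden / 2-output sigmoid network with cross-entropy cost and gradient descent can be sketched in a few lines of numpy. This is an illustrative translation, not the ProBuilder code itself; the input values, initial weights and learning rate are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4)) * 0.5   # hidden weights (a11..a64 in the thread's code)
Fb = np.zeros(6)                        # hidden biases (Fbias1..Fbias6)
B = rng.standard_normal((2, 6)) * 0.5   # output weights (b11..b26)
Ob = np.zeros(2)                        # output biases (Obias1, Obias2)

def train_step(x, y, eta):
    """One gradient-descent step on the cross-entropy cost, for one example."""
    global A, Fb, B, Ob
    F = sigmoid(A @ x + Fb)          # hidden activations F1..F6
    out = sigmoid(B @ F + Ob)        # output1, output2
    delta_out = out - y              # cross-entropy: dC/dz = a - y
    delta_hid = (B.T @ delta_out) * F * (1 - F)
    B -= eta * np.outer(delta_out, F)    # like b11=b11-ETAi*Derb11, ...
    Ob -= eta * delta_out
    A -= eta * np.outer(delta_hid, x)    # like a11=a11-ETAi*Dera11, ...
    Fb -= eta * delta_hid
    return out

# One bar the classifier marked as a winning long trade: targets Y1=1, Y2=0.
# The feature values are invented, scaled to comparable magnitudes.
x = np.array([0.55, 0.4, -0.2, 1.1])
y = np.array([1.0, 0.0])
for _ in range(10):
    out = train_step(x, y, eta=0.1)
```

Repeating `train_step` drives output1 toward 1 and output2 toward 0 for this example, which is exactly what the FOR i=1 to 10 loop in the ProBuilder code does on each classified bar.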
Here I add again the video which inspired me, the document that I found the easiest to follow, and the concept (attached) of the neural network which I am building:
http://neuralnetworksanddeeplearning.com/
https://www.youtube.com/watch?v=ILsA4nyG7I0
09/01/2018 at 3:28 PM #79519
Hi all,
My first neural network is working.
I think I am hooked!
Please test it as much as possible; seeing it in action in a strategy would be cool!
Here are my code and a photo with some predictions.
// Hyperparameters to be optimized
// ETA=1 //known as the learning rate
// candlesback=7 // for the classifier
//ProfitRiskRatio=2 // for the classifier
//spread=0.9 // for the classifier

///////////////
// CLASSIFIER
///////////////
myATR=average[20](range)+std[20](range)
ExtraStopLoss=MyATR
//ExtraStopLoss=3*spread*pipsize

//for long trades
classifierlong=0
FOR scanL=1 to candlesback DO
IF classifierlong[scanL]=1 then
BREAK
ENDIF
LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
IF close[scanL]+LongTradeLength < high-spread*pipsize then
IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
classifierlong=1
candleentrylong=barindex-scanL
BREAK
ENDIF
ENDIF
NEXT

//for short trades
classifiershort=0
FOR scanS=1 to candlesback DO
IF classifiershort[scanS]=1 then
BREAK
ENDIF
ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
IF close[scanS]-ShortTradeLength > low+spread*pipsize then
IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
classifiershort=1
candleentryshort=barindex-scanS
BREAK
ENDIF
ENDIF
NEXT

//////////////////////
// NEURONAL NETWORK
//////////////////////
// ...INITIAL VALUES...
once a11=1
once a12=-1
once a13=1
once a14=-1
once a21=1
once a22=1
once a23=-1
once a24=1
once a31=-1
once a32=1
once a33=-1
once a34=1
once a41=1
once a42=-1
once a43=1
once a44=-1
once a51=-1
once a52=1
once a53=-1
once a54=1
once a61=1
once a62=-1
once a63=1
once a64=-1
once Fbias1=0
once Fbias2=0
once Fbias3=0
once Fbias4=0
once Fbias5=0
once Fbias6=0
once b11=1
once b12=-1
once b13=1
once b14=-1
once b15=1
once b16=-1
once b21=-1
once b22=1
once b23=-1
once b24=1
once b25=-1
once b26=1
once Obias1=0
once Obias2=0

// ...DEFINITION OF INPUTS...
SMA20=average[min(20,barindex)](close)
SMA200=average[min(200,barindex)](close)
SMA2400=average[min(2400,barindex)](close) //in 5 min time frame this is the value of SMA 200 periods in hourly
variable1=RSI[14](close) // or to be defined
variable2=(close-SMA20)/SMA20*100 //or to be defined
variable3=(SMA20-SMA200)/SMA200*100 //or to be defined
variable4=(SMA200-SMA2400)/SMA2400*100 // to be defined

// >>> LEARNING PROCESS <<<
// If the classifier has detected a winning trade in the past
//IF hour > 7 and hour < 21 then
IF BARINDEX > 2500 THEN
IF classifierlong=1 or classifiershort=1 THEN
IF hour > 7 and hour < 21 then
candleentry=max(candleentrylong,candleentryshort)
Y1=classifierlong
Y2=classifiershort

// >>> INPUT FOR NEURONS <<<
input1=variable1[barindex-candleentry]
input2=variable2[barindex-candleentry]
input3=variable3[barindex-candleentry]
input4=variable4[barindex-candleentry]

FOR i=1 to 10 DO // THIS HAS TO BE IMPROVED
ETAi=ETA - ETA/10*(i-1) //Learning Rate

// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))

// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))

// >>> PARTIAL DERIVATIVES OF COST FUNCTION (OUTPUT LAYER) <<<
// ...CROSS-ENTROPY AS COST FUNCTION...
// COST = -( Y1*LOG(output1)+(1-Y1)*LOG(1-output1) ) - ( Y2*LOG(output2)+(1-Y2)*LOG(1-output2) )
DerObias1 = (output1-Y1) * 1
DerObias2 = (output2-Y2) * 1
Derb11 = (output1-Y1) * F1
Derb12 = (output1-Y1) * F2
Derb13 = (output1-Y1) * F3
Derb14 = (output1-Y1) * F4
Derb15 = (output1-Y1) * F5
Derb16 = (output1-Y1) * F6
Derb21 = (output2-Y2) * F1
Derb22 = (output2-Y2) * F2
Derb23 = (output2-Y2) * F3
Derb24 = (output2-Y2) * F4
Derb25 = (output2-Y2) * F5
Derb26 = (output2-Y2) * F6

//Implementing BackPropagation
Obias1=Obias1-ETAi*DerObias1
Obias2=Obias2-ETAi*DerObias2
b11=b11-ETAi*Derb11
b12=b12-ETAi*Derb12
b13=b13-ETAi*Derb13
b14=b14-ETAi*Derb14
b15=b15-ETAi*Derb15
b16=b16-ETAi*Derb16
b21=b21-ETAi*Derb21
b22=b22-ETAi*Derb22
b23=b23-ETAi*Derb23
b24=b24-ETAi*Derb24
b25=b25-ETAi*Derb25
b26=b26-ETAi*Derb26

// >>> PARTIAL DERIVATIVES OF COST FUNCTION (HIDDEN LAYER) <<<
DerFbias1 = (output1-Y1) * b11 * F1*(1-F1) * 1 + (output2-Y2) * b21 * F1*(1-F1) * 1
DerFbias2 = (output1-Y1) * b12 * F2*(1-F2) * 1 + (output2-Y2) * b22 * F2*(1-F2) * 1
DerFbias3 = (output1-Y1) * b13 * F3*(1-F3) * 1 + (output2-Y2) * b23 * F3*(1-F3) * 1
DerFbias4 = (output1-Y1) * b14 * F4*(1-F4) * 1 + (output2-Y2) * b24 * F4*(1-F4) * 1
DerFbias5 = (output1-Y1) * b15 * F5*(1-F5) * 1 + (output2-Y2) * b25 * F5*(1-F5) * 1
DerFbias6 = (output1-Y1) * b16 * F6*(1-F6) * 1 + (output2-Y2) * b26 * F6*(1-F6) * 1
Dera11 = (output1-Y1) * b11 * F1*(1-F1) * input1 + (output2-Y2) * b21 * F1*(1-F1) * input1
Dera12 = (output1-Y1) * b11 * F1*(1-F1) * input2 + (output2-Y2) * b21 * F1*(1-F1) * input2
Dera13 = (output1-Y1) * b11 * F1*(1-F1) * input3 + (output2-Y2) * b21 * F1*(1-F1) * input3
Dera14 = (output1-Y1) * b11 * F1*(1-F1) * input4 + (output2-Y2) * b21 * F1*(1-F1) * input4
Dera21 = (output1-Y1) * b12 * F2*(1-F2) * input1 + (output2-Y2) * b22 * F2*(1-F2) * input1
Dera22 = (output1-Y1) * b12 * F2*(1-F2) * input2 + (output2-Y2) * b22 * F2*(1-F2) * input2
Dera23 = (output1-Y1) * b12 * F2*(1-F2) * input3 + (output2-Y2) * b22 * F2*(1-F2) * input3
Dera24 = (output1-Y1) * b12 * F2*(1-F2) * input4 + (output2-Y2) * b22 * F2*(1-F2) * input4
Dera31 = (output1-Y1) * b13 * F3*(1-F3) * input1 + (output2-Y2) * b23 * F3*(1-F3) * input1
Dera32 = (output1-Y1) * b13 * F3*(1-F3) * input2 + (output2-Y2) * b23 * F3*(1-F3) * input2
Dera33 = (output1-Y1) * b13 * F3*(1-F3) * input3 + (output2-Y2) * b23 * F3*(1-F3) * input3
Dera34 = (output1-Y1) * b13 * F3*(1-F3) * input4 + (output2-Y2) * b23 * F3*(1-F3) * input4
Dera41 = (output1-Y1) * b14 * F4*(1-F4) * input1 + (output2-Y2) * b24 * F4*(1-F4) * input1
Dera42 = (output1-Y1) * b14 * F4*(1-F4) * input2 + (output2-Y2) * b24 * F4*(1-F4) * input2
Dera43 = (output1-Y1) * b14 * F4*(1-F4) * input3 + (output2-Y2) * b24 * F4*(1-F4) * input3
Dera44 = (output1-Y1) * b14 * F4*(1-F4) * input4 + (output2-Y2) * b24 * F4*(1-F4) * input4
Dera51 = (output1-Y1) * b15 * F5*(1-F5) * input1 + (output2-Y2) * b25 * F5*(1-F5) * input1
Dera52 = (output1-Y1) * b15 * F5*(1-F5) * input2 + (output2-Y2) * b25 * F5*(1-F5) * input2
Dera53 = (output1-Y1) * b15 * F5*(1-F5) * input3 + (output2-Y2) * b25 * F5*(1-F5) * input3
Dera54 = (output1-Y1) * b15 * F5*(1-F5) * input4 + (output2-Y2) * b25 * F5*(1-F5) * input4
Dera61 = (output1-Y1) * b16 * F6*(1-F6) * input1 + (output2-Y2) * b26 * F6*(1-F6) * input1
Dera62 = (output1-Y1) * b16 * F6*(1-F6) * input2 + (output2-Y2) * b26 * F6*(1-F6) * input2
Dera63 = (output1-Y1) * b16 * F6*(1-F6) * input3 + (output2-Y2) * b26 * F6*(1-F6) * input3
Dera64 = (output1-Y1) * b16 * F6*(1-F6) * input4 + (output2-Y2) * b26 * F6*(1-F6) * input4

//Implementing BackPropagation
Fbias1=Fbias1-ETAi*DerFbias1
Fbias2=Fbias2-ETAi*DerFbias2
Fbias3=Fbias3-ETAi*DerFbias3
Fbias4=Fbias4-ETAi*DerFbias4
Fbias5=Fbias5-ETAi*DerFbias5
Fbias6=Fbias6-ETAi*DerFbias6
a11=a11-ETAi*Dera11
a12=a12-ETAi*Dera12
a13=a13-ETAi*Dera13
a14=a14-ETAi*Dera14
a21=a21-ETAi*Dera21
a22=a22-ETAi*Dera22
a23=a23-ETAi*Dera23
a24=a24-ETAi*Dera24
a31=a31-ETAi*Dera31
a32=a32-ETAi*Dera32
a33=a33-ETAi*Dera33
a34=a34-ETAi*Dera34
a41=a41-ETAi*Dera41
a42=a42-ETAi*Dera42
a43=a43-ETAi*Dera43
a44=a44-ETAi*Dera44
a51=a51-ETAi*Dera51
a52=a52-ETAi*Dera52
a53=a53-ETAi*Dera53
a54=a54-ETAi*Dera54
a61=a61-ETAi*Dera61
a62=a62-ETAi*Dera62
a63=a63-ETAi*Dera63
a64=a64-ETAi*Dera64

//GradientNorm = SQRT(DerObias1*DerObias1+DerObias2*DerObias2 + Derb11*Derb11+Derb12*Derb12+Derb13*Derb13+Derb14*Derb14+Derb15*Derb15+Derb16*Derb16 + Derb21*Derb21+Derb22*Derb22+Derb23*Derb23+Derb24*Derb24+Derb25*Derb25+Derb26*Derb26 + DerFbias1*DerFbias1+DerFbias2*DerFbias2+DerFbias3*DerFbias3+DerFbias4*DerFbias4+DerFbias5*DerFbias5+DerFbias6*DerFbias6 + Dera11*Dera11+Dera12*Dera12+Dera13*Dera13+Dera14*Dera14 + Dera21*Dera21+Dera22*Dera22+Dera23*Dera23+Dera24*Dera24 + Dera31*Dera31+Dera32*Dera32+Dera33*Dera33+Dera34*Dera34 + Dera41*Dera41+Dera42*Dera42+Dera43*Dera43+Dera44*Dera44 + Dera51*Dera51+Dera52*Dera52+Dera53*Dera53+Dera54*Dera54 + Dera61*Dera61+Dera62*Dera62+Dera63*Dera63+Dera64*Dera64)
NEXT
ENDIF
ENDIF
//ENDIF

////////////////////
// NEW PREDICTION
////////////////////
// >>> INPUT NEURONS <<<
input1=variable1
input2=variable2
input3=variable3
input4=variable4

// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))

// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))
ENDIF

return output1 as "prediction long", output2 as "prediction short", threshold as "threshold"
09/01/2018 at 10:36 PM #79538
Leo,
I found the theory and your work on it an eye-opener…
As mentioned by Nicolas, the choice of variables is a crucial element. Presently all your variables are correlated with each other, as they are all derived from the price action. Why not incorporate a variable independent from the price action, such as Volume?
Also, I would try the DPO (Detrended Price Oscillator) indicator, as it gives a better sense of a cycle’s typical high/low range as well as its duration.
With these new variables you would have two momentum variables and two cyclic variables. The intermediate functions could therefore be reduced from 4 to 2… This would accelerate the processing time.
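The correlation concern is easy to check numerically. Here is a Python sketch (illustrative, using a synthetic random-walk price rather than real data) that computes variable2–variable4 from the thread and prints their correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(42)
close = 1000 + np.cumsum(rng.standard_normal(5000))  # synthetic random-walk "price"

def sma(x, n):
    """Simple moving average; element i is the mean of x[i:i+n]."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

# Align close and the three SMAs on the bars where the 2400-period SMA exists
n = 2400
c = close[n - 1:]
s20 = sma(close, 20)[n - 20:]
s200 = sma(close, 200)[n - 200:]
s2400 = sma(close, 2400)

v2 = (c - s20) / s20 * 100          # variable2 in Leo's code
v3 = (s20 - s200) / s200 * 100      # variable3
v4 = (s200 - s2400) / s2400 * 100   # variable4

# Off-diagonal entries show how much the price-derived inputs overlap
print(np.corrcoef([v2, v3, v4]).round(2))
```

Because the three features share overlapping windows of the same price series, their off-diagonal correlations are generally non-zero, which is the redundancy being pointed out here.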
I cannot simulate it (trial version only..)
Anyway, GREAT JOB!
09/02/2018 at 12:01 AM #79540
Hi didi059,
Thanks for your motivating words.
As inputs you can choose whatever you like, even (only in a strategy) values from other timeframes.
The initial values of the parameters don’t matter anymore. In fact, that’s the beauty of the neural network: the algorithm finds these values and optimises them continuously.
Now that we have implemented artificial intelligence in ProRealTime, the options are almost infinite. Therefore, it is not possible for me to test your set of inputs.
09/02/2018 at 9:07 AM #79552
Good input by @didi059 about Volume as an input (but there are still no volumes on Forex pairs and CFDs). About DPO: it uses future data in its default setting, so don’t count on it for trading systems. Anyway, thank you Leo for your last code; I’ll try to take time this week to get a better comprehension of it. I must admit that I’m still worried about the way you are using back propagation and its accuracy, because of curve fitting.
Did you try to load the indicator only up to a given date, to simulate a forward test of what it has learned? With an encapsulation of the code like this:
if date<20180701 then
// all the indicator's learning functions code
endif
// other code that plots signals
EDIT: I know it doesn’t work like that, because it learns continuously and each new signal is in itself a unique forward test, but it could be a good proof of concept.
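This learn-until-a-cutoff idea can be simulated in Python with a toy stand-in for the indicator (everything here is synthetic and illustrative: an online logistic model plays the role of the network, and the cutoff index plays the role of `date < 20180701`):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)  # toy stand-in for the network's weights

def predict(x):
    return sigmoid(w @ x)

def learn(x, y, eta=0.1):
    global w
    w += eta * (y - predict(x)) * x  # online logistic-regression update

# Synthetic bar-by-bar stream with a stable feature/label relationship
true_w = np.array([1.5, -2.0, 0.5])
X = rng.standard_normal((3000, 3))
Y = (sigmoid(X @ true_w) > rng.random(3000)).astype(float)

cutoff = 2000  # plays the role of the date condition
hits = 0
for t in range(3000):
    if t < cutoff:
        learn(X[t], Y[t])      # learning period: weights keep adapting
    else:                      # forward period: weights frozen, predict only
        hits += int((predict(X[t]) > 0.5) == (Y[t] > 0.5))
print("out-of-sample hit rate:", hits / 1000)
```

If the frozen model still scores well after the cutoff, what it learned generalises; if the hit rate collapses back to ~50%, it was curve-fitted to the learning period.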
09/02/2018 at 2:51 PM #79568
Hi Nicolas,
Curve fitting? 43 parameters curve-fitted! I don’t have any doubt that it is curve fitting… The thing is, the code is curve-fitting continuously and adapting continuously, and the more neurons the system has, the more “curve-fitting packs” can be “stored” in the neural network. That’s why I see a great future for this way of working.
But I am not naive; I know that the market will do whatever it wants, and a prediction based on recent data may not stay valid.
I have not implemented the code in a strategy yet. In a strategy we could even set as input variables values from different timeframes, for example RSI[14] on 5 min, RSI[14] on 15 min, RSI[14] on 1 hour and RSI[14] on 4 hours.
About your opinion of the back propagation algorithm, I am not comfortable either (in line 131 I wrote “this has to be improved”). I am working on a new one.
Let me finish the new code and then we can review it.
By the way, I am not comfortable with the classifier either.
09/02/2018 at 6:35 PM #79577
I have nearly got a System ready to post, but an odd thing happened.
I was optimising 4 variables over 100k bars (2 x TP and 2 x SL, one each for Long and Short) and the profit/result was over 6K. I had one eye on the TV and entered 1 x TP as 100, but then realised it should have been 150.
So just to make sure, I re-optimised on that 1 x TP only, but with the value as 150 the profit was near 3K.
Is this how the self-learning neural network is supposed to operate? Optimising over 7000 combinations can work out more profitable than optimising over 14 combinations… even though the same values are shown as optimum?
The only explanation can be that Leo’s neurals were self-learning and changing the variables within the neural network during the 7000 optimising combinations? (echoes of Skynet / Terminator here… scary but exciting?? 🙂 🙂 )
Hope above makes sense, just say or ask away if not?
09/02/2018 at 8:47 PM #79591
Hard to say, GraHal.
I would optimise the hyperparameters, i.e. the parameters that control the learning algorithm, like ETA… more than optimising, it is about finding the correct one. For the data I chose as input, ETA is around 0.1.
Another parameter to be optimised is the decision threshold, i.e. how high an output must be to count as a prediction, like 0.6 or 0.7 (the outputs are always values from 0 to 1).
The use of pending orders is highly recommended.
You can also test different kinds of inputs, like Volume.
The values for the classifier are irrelevant; just set one combination and run.
Testing over 100k bars must take ages, no?