Neural networks programming with prorealtime
Forums › ProRealTime English forum › ProBuilder support › Neural networks programming with prorealtime
- This topic has 126 replies, 8 voices, and was last updated 1 year ago by MobiusGrey.
Tagged: data mining, machine learning
09/02/2018 at 9:21 PM #79593
Hi all, here is my latest version.
It stores the last 5 input samples and trains the algorithm on that information; we give more weight to the most recent data.
// Hyperparameters to be optimized
// ETA=1 //known as the learning rate
// candlesback=7 // for the classifier
//ProfitRiskRatio=2 // for the classifier
//spread=0.9 // for the classifier

///////////////
// CLASSIFIER
///////////////
myATR=average[20](range)+std[20](range)
ExtraStopLoss=MyATR
//ExtraStopLoss=3*spread*pipsize

//for long trades
classifierlong=0
FOR scanL=1 to candlesback DO
IF classifierlong[scanL]=1 then
BREAK
ENDIF
LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
IF close[scanL]+LongTradeLength < high-spread*pipsize then
IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
classifierlong=1
candleentrylong=barindex-scanL
BREAK
ENDIF
ENDIF
NEXT

//for short trades
classifiershort=0
FOR scanS=1 to candlesback DO
IF classifiershort[scanS]=1 then
BREAK
ENDIF
ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
IF close[scanS]-ShortTradeLength > low+spread*pipsize then
IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
classifiershort=1
candleentryshort=barindex-scanS
BREAK
ENDIF
ENDIF
NEXT

///////////////////////
// NEURAL NETWORK
///////////////////////
// ...INITIAL VALUES...
once a11=1
once a12=1
once a13=1
once a14=1
once a21=1
once a22=1
once a23=1
once a24=1
once a31=1
once a32=1
once a33=1
once a34=1
once a41=1
once a42=1
once a43=1
once a44=1
once a51=1
once a52=1
once a53=1
once a54=1
once a61=1
once a62=1
once a63=1
once a64=1
once Fbias1=0
once Fbias2=0
once Fbias3=0
once Fbias4=0
once Fbias5=0
once Fbias6=0
once b11=1
once b12=1
once b13=1
once b14=1
once b15=1
once b16=1
once b21=1
once b22=1
once b23=1
once b24=1
once b25=1
once b26=1
once Obias1=0
once Obias2=0

// ...DEFINITION OF INPUTS...
SMA20=average[min(20,barindex)](close)
SMA200=average[min(200,barindex)](close)
SMA2400=average[min(2400,barindex)](close) //in 5 min time frame this is the value of SMA 200 periods in hourly
variable1= RSI[14](close) // or to be defined
variable2= (close-SMA20)/SMA20 *100 //or to be defined
variable3= (SMA20-SMA200)/SMA200 *100 //or to be defined
variable4= (SMA200-SMA2400)/SMA2400 *100 // to be defined

// >>> LEARNING PROCESS <<<
// If the classifier has detected a winning trade in the past
//IF hour > 7 and hour < 21 then

//STORING THE LEARNING DATA
IF classifierlong=1 or classifiershort=1 THEN
BBBBBcandleentry=BBBBcandleentry
BBBBBY1=BBBBY1
BBBBBY2=BBBBY2
BBBBcandleentry=BBBcandleentry
BBBBY1=BBBY1
BBBBY2=BBBY2
BBBcandleentry=BBcandleentry
BBBY1=BBY1
BBBY2=BBY2
BBcandleentry=Bcandleentry
BBY1=BY1
BBY2=BY2
Bcandleentry=max(candleentrylong,candleentryshort)
BY1=classifierlong
BY2=classifiershort
ENDIF

IF BARINDEX > 2500 THEN
IF classifierlong=1 or classifiershort=1 THEN
IF hour > 8 and hour < 21 then
FOR i=1 to 5 DO // THIS HAS TO BE IMPROVED
IF i = 1 THEN
candleentry=BBBBBcandleentry
Y1=BBBBBY1
Y2=BBBBBY2
ENDIF
IF i = 2 THEN
candleentry=BBBBcandleentry
Y1=BBBBY1
Y2=BBBBY2
ENDIF
IF i = 3 THEN
candleentry=BBBcandleentry
Y1=BBBY1
Y2=BBBY2
ENDIF
IF i = 4 THEN
candleentry=BBcandleentry
Y1=BBY1
Y2=BBY2
ENDIF
IF i = 5 THEN
candleentry=Bcandleentry
Y1=BY1
Y2=BY2
ENDIF

// >>> INPUT FOR NEURONS <<<
input1=variable1[barindex-candleentry]
input2=variable2[barindex-candleentry]
input3=variable3[barindex-candleentry]
input4=variable4[barindex-candleentry]
ETAi=(ETA/5)*i //Learning Rate

// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))

// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))

// >>> PARTIAL DERIVATIVES OF COST FUNCTION <<<
// ... CROSS-ENTROPY AS COST FUNCTION ...
// COST = - ( Y1*LOG(output1)+(1-Y1)*LOG(1-output1) ) - ( Y2*LOG(output2)+(1-Y2)*LOG(1-output2) )
DerObias1 = (output1-Y1) * 1
DerObias2 = (output2-Y2) * 1
Derb11 = (output1-Y1) * F1
Derb12 = (output1-Y1) * F2
Derb13 = (output1-Y1) * F3
Derb14 = (output1-Y1) * F4
Derb15 = (output1-Y1) * F5
Derb16 = (output1-Y1) * F6
Derb21 = (output2-Y2) * F1
Derb22 = (output2-Y2) * F2
Derb23 = (output2-Y2) * F3
Derb24 = (output2-Y2) * F4
Derb25 = (output2-Y2) * F5
Derb26 = (output2-Y2) * F6

//Implementing BackPropagation
//(fixed: the original updated b13..b26 from b11/b12 by a copy-paste mistake)
Obias1=Obias1-ETAi*DerObias1
Obias2=Obias2-ETAi*DerObias2
b11=b11-ETAi*Derb11
b12=b12-ETAi*Derb12
b13=b13-ETAi*Derb13
b14=b14-ETAi*Derb14
b15=b15-ETAi*Derb15
b16=b16-ETAi*Derb16
b21=b21-ETAi*Derb21
b22=b22-ETAi*Derb22
b23=b23-ETAi*Derb23
b24=b24-ETAi*Derb24
b25=b25-ETAi*Derb25
b26=b26-ETAi*Derb26

// >>> PARTIAL DERIVATIVES OF COST FUNCTION (LAYER) <<<
DerFbias1 = (output1-Y1) * b11 * F1*(1-F1) * 1 + (output2-Y2) * b21 * F1*(1-F1) * 1
DerFbias2 = (output1-Y1) * b12 * F2*(1-F2) * 1 + (output2-Y2) * b22 * F2*(1-F2) * 1
DerFbias3 = (output1-Y1) * b13 * F3*(1-F3) * 1 + (output2-Y2) * b23 * F3*(1-F3) * 1
DerFbias4 = (output1-Y1) * b14 * F4*(1-F4) * 1 + (output2-Y2) * b24 * F4*(1-F4) * 1
DerFbias5 = (output1-Y1) * b15 * F5*(1-F5) * 1 + (output2-Y2) * b25 * F5*(1-F5) * 1
DerFbias6 = (output1-Y1) * b16 * F6*(1-F6) * 1 + (output2-Y2) * b26 * F6*(1-F6) * 1
Dera11 = (output1-Y1) * b11 * F1*(1-F1) * input1 + (output2-Y2) * b21 * F1*(1-F1) * input1
Dera12 = (output1-Y1) * b11 * F1*(1-F1) * input2 + (output2-Y2) * b21 * F1*(1-F1) * input2
Dera13 = (output1-Y1) * b11 * F1*(1-F1) * input3 + (output2-Y2) * b21 * F1*(1-F1) * input3
Dera14 = (output1-Y1) * b11 * F1*(1-F1) * input4 + (output2-Y2) * b21 * F1*(1-F1) * input4
Dera21 = (output1-Y1) * b12 * F2*(1-F2) * input1 + (output2-Y2) * b22 * F2*(1-F2) * input1
Dera22 = (output1-Y1) * b12 * F2*(1-F2) * input2 + (output2-Y2) * b22 * F2*(1-F2) * input2
Dera23 = (output1-Y1) * b12 * F2*(1-F2) * input3 + (output2-Y2) * b22 * F2*(1-F2) * input3
Dera24 = (output1-Y1) * b12 * F2*(1-F2) * input4 + (output2-Y2) * b22 * F2*(1-F2) * input4
Dera31 = (output1-Y1) * b13 * F3*(1-F3) * input1 + (output2-Y2) * b23 * F3*(1-F3) * input1
Dera32 = (output1-Y1) * b13 * F3*(1-F3) * input2 + (output2-Y2) * b23 * F3*(1-F3) * input2
Dera33 = (output1-Y1) * b13 * F3*(1-F3) * input3 + (output2-Y2) * b23 * F3*(1-F3) * input3
Dera34 = (output1-Y1) * b13 * F3*(1-F3) * input4 + (output2-Y2) * b23 * F3*(1-F3) * input4
Dera41 = (output1-Y1) * b14 * F4*(1-F4) * input1 + (output2-Y2) * b24 * F4*(1-F4) * input1
Dera42 = (output1-Y1) * b14 * F4*(1-F4) * input2 + (output2-Y2) * b24 * F4*(1-F4) * input2
Dera43 = (output1-Y1) * b14 * F4*(1-F4) * input3 + (output2-Y2) * b24 * F4*(1-F4) * input3
Dera44 = (output1-Y1) * b14 * F4*(1-F4) * input4 + (output2-Y2) * b24 * F4*(1-F4) * input4
Dera51 = (output1-Y1) * b15 * F5*(1-F5) * input1 + (output2-Y2) * b25 * F5*(1-F5) * input1
Dera52 = (output1-Y1) * b15 * F5*(1-F5) * input2 + (output2-Y2) * b25 * F5*(1-F5) * input2
Dera53 = (output1-Y1) * b15 * F5*(1-F5) * input3 + (output2-Y2) * b25 * F5*(1-F5) * input3
Dera54 = (output1-Y1) * b15 * F5*(1-F5) * input4 + (output2-Y2) * b25 * F5*(1-F5) * input4
Dera61 = (output1-Y1) * b16 * F6*(1-F6) * input1 + (output2-Y2) * b26 * F6*(1-F6) * input1
Dera62 = (output1-Y1) * b16 * F6*(1-F6) * input2 + (output2-Y2) * b26 * F6*(1-F6) * input2
Dera63 = (output1-Y1) * b16 * F6*(1-F6) * input3 + (output2-Y2) * b26 * F6*(1-F6) * input3
Dera64 = (output1-Y1) * b16 * F6*(1-F6) * input4 + (output2-Y2) * b26 * F6*(1-F6) * input4

//Implementing BackPropagation
Fbias1=Fbias1-ETAi*DerFbias1
Fbias2=Fbias2-ETAi*DerFbias2
Fbias3=Fbias3-ETAi*DerFbias3
Fbias4=Fbias4-ETAi*DerFbias4
Fbias5=Fbias5-ETAi*DerFbias5
Fbias6=Fbias6-ETAi*DerFbias6
a11=a11-ETAi*Dera11
a12=a12-ETAi*Dera12
a13=a13-ETAi*Dera13
a14=a14-ETAi*Dera14
a21=a21-ETAi*Dera21
a22=a22-ETAi*Dera22
a23=a23-ETAi*Dera23
a24=a24-ETAi*Dera24
a31=a31-ETAi*Dera31
a32=a32-ETAi*Dera32
a33=a33-ETAi*Dera33
a34=a34-ETAi*Dera34
a41=a41-ETAi*Dera41
a42=a42-ETAi*Dera42
a43=a43-ETAi*Dera43
a44=a44-ETAi*Dera44
a51=a51-ETAi*Dera51
a52=a52-ETAi*Dera52
a53=a53-ETAi*Dera53
a54=a54-ETAi*Dera54
a61=a61-ETAi*Dera61
a62=a62-ETAi*Dera62
a63=a63-ETAi*Dera63
a64=a64-ETAi*Dera64

//(fixed inside this comment: the DerFbias3 and DerFbias5 squared terms were mistyped)
//GradientNorm = SQRT(DerObias1*DerObias1 + DerObias2*DerObias2 + Derb11*Derb11+Derb12*Derb12+Derb13*Derb13+Derb14*Derb14+Derb15*Derb15+Derb16*Derb16 + Derb21*Derb21+Derb22*Derb22+Derb23*Derb23+Derb24*Derb24+Derb25*Derb25+Derb26*Derb26 + DerFbias1*DerFbias1+DerFbias2*DerFbias2+DerFbias3*DerFbias3+DerFbias4*DerFbias4+DerFbias5*DerFbias5+DerFbias6*DerFbias6 + Dera11*Dera11+Dera12*Dera12+Dera13*Dera13+Dera14*Dera14 + Dera21*Dera21+Dera22*Dera22+Dera23*Dera23+Dera24*Dera24 + Dera31*Dera31+Dera32*Dera32+Dera33*Dera33+Dera34*Dera34 + Dera41*Dera41+Dera42*Dera42+Dera43*Dera43+Dera44*Dera44 + Dera51*Dera51+Dera52*Dera52+Dera53*Dera53+Dera54*Dera54 + Dera61*Dera61+Dera62*Dera62+Dera63*Dera63+Dera64*Dera64)
NEXT
ENDIF
ENDIF
//ENDIF

/////////////////
// NEW PREDICTION
/////////////////
// >>> INPUT NEURONS <<<
input1=variable1
input2=variable2
input3=variable3
input4=variable4

// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))

// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))
ENDIF

return output1 coloured(0,150,0) style(line,2) as "prediction long", output2 coloured(200,0,0) style(line,2) as "prediction short", 0.5 coloured(0,0,200) as "0.5", 0.6 coloured(0,0,200) as "0.6", 0.7 coloured(0,0,200) as "0.7", 0.8 coloured(0,0,200) as "0.8"

1 user thanked author for this post.
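The backprop math in the listing can be sanity-checked outside ProBuilder. The following Python sketch (my own illustration, not part of Leo's code; all weights and inputs are made-up toy values) builds the same 4-input / 6-hidden / 2-output sigmoid network and confirms numerically that, with cross-entropy loss on sigmoid outputs, the output-bias gradient is exactly output - Y, which is the formula the script uses for DerObias1/DerObias2:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, a, fbias, b, obias):
    """Forward pass of the 4-input, 6-hidden, 2-output sigmoid network."""
    F = [sigmoid(sum(a[j][k] * inputs[k] for k in range(4)) + fbias[j])
         for j in range(6)]
    out = [sigmoid(sum(b[o][j] * F[j] for j in range(6)) + obias[o])
           for o in range(2)]
    return F, out

def cost(out, Y):
    """Cross-entropy summed over the two independent outputs."""
    return -sum(y * math.log(o) + (1 - y) * math.log(1 - o)
                for o, y in zip(out, Y))

# toy weights (arbitrary values; the script initialises everything to 1/0 with 'once')
a = [[0.1 * (j + k + 1) for k in range(4)] for j in range(6)]
fbias = [0.0] * 6
b = [[0.1 * (o + j + 1) for j in range(6)] for o in range(2)]
obias = [0.0, 0.0]
x, Y = [0.5, -0.2, 0.1, 0.3], [1.0, 0.0]

F, out = forward(x, a, fbias, b, obias)
# analytic gradient used in the script: dCOST/dObias1 = output1 - Y1
grad_obias0 = out[0] - Y[0]
# numerical check by finite differences
eps = 1e-6
_, out_plus = forward(x, a, fbias, b, [obias[0] + eps, obias[1]])
num = (cost(out_plus, Y) - cost(out, Y)) / eps
assert abs(grad_obias0 - num) < 1e-4
```

The same finite-difference check can be pointed at any of the Derb/Dera formulas; it is a cheap way to catch copy-paste slips like the b13=b11 ones in the update block.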
09/02/2018 at 9:27 PM #79595
A test over 100K bars should be taking ages, no?
Yes it was taking ages, but I was doing other tasks so – for once – I didn’t mind!
I wasn’t optimising any variables / hyperparameters in your code … just my TP and SL. During the run of 7000 combinations over 100k bars it was as if the neural network was self-learning and auto-changing values within your code, giving higher overall profit than if I had simply input the optimised values of TP and SL and then pressed Probacktest my System for a single-combination result.
Maybe I will try the same exercise again tomorrow and post results to prove I am not deluded!? 🙂
09/03/2018 at 9:11 AM #79615
I just realised why I might have been confusing you Leo! 🙂
My two most recent posts above were while I was optimising my Strategy / System (not your Indicator).
I should have posted on my Systems Topic (I will in future to save confusion).
If a Mod / Nicolas wants to move – mine #79577, Leo’s answer #79591 and mine #79595 – to my Systems Topic below then feel free? And then delete this post?
09/09/2018 at 10:31 PM #80054
Hi all,
Here is another version of the neural network; I improved the back-propagation loop a bit.
I also changed the inputs (they can be whatever you want, as long as ETA is calibrated).
// Hyperparameters to be optimized
// ETA=0.05 //known as the learning rate
//candlesback=7 // for the classifier
//ProfitRiskRatio=2 // for the classifier
//spread=0.9 // for the classifier
//P1=20 //FOR CURVE AS INPUT
//P2=200 //FOR CURVE AS INPUT (fixed: originally written as P1 twice, but the code uses P2)

///////////////
// CLASSIFIER
///////////////
myATR=average[20](range)+std[20](range)
ExtraStopLoss=MyATR
//ExtraStopLoss=3*spread*pipsize

//for long trades
classifierlong=0
FOR scanL=1 to candlesback DO
IF classifierlong[scanL]=1 then
BREAK
ENDIF
LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
IF close[scanL]+LongTradeLength < high-spread*pipsize then
IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
classifierlong=1
candleentrylong=barindex-scanL
BREAK
ENDIF
ENDIF
NEXT

//for short trades
classifiershort=0
FOR scanS=1 to candlesback DO
IF classifiershort[scanS]=1 then
BREAK
ENDIF
ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
IF close[scanS]-ShortTradeLength > low+spread*pipsize then
IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
classifiershort=1
candleentryshort=barindex-scanS
BREAK
ENDIF
ENDIF
NEXT

///////////////////////
// NEURAL NETWORK
///////////////////////
// ...INITIAL VALUES...
once a11=1
once a12=1
once a13=1
once a14=1
once a21=1
once a22=1
once a23=1
once a24=1
once a31=1
once a32=1
once a33=1
once a34=1
once a41=1
once a42=1
once a43=1
once a44=1
once a51=1
once a52=1
once a53=1
once a54=1
once a61=1
once a62=1
once a63=1
once a64=1
once Fbias1=0
once Fbias2=0
once Fbias3=0
once Fbias4=0
once Fbias5=0
once Fbias6=0
once b11=1
once b12=1
once b13=1
once b14=1
once b15=1
once b16=1
once b21=1
once b22=1
once b23=1
once b24=1
once b25=1
once b26=1
once Obias1=0
once Obias2=0

// ...DEFINITION OF INPUTS...
//ANGLE DEFINITION
ONCE PANGLE1=ROUND(SQRT(P1/2))
CURVE1=AVERAGE[P1](CLOSE)
ANGLE1=ATAN(CURVE1-CURVE1[1])*180/3.1416
ANGLEAVERAGE1=WeightedAverage[PANGLE1](ANGLE1)
ONCE PANGLE2=ROUND(SQRT(P2/2))
CURVE2=AVERAGE[P2](CLOSE)
ANGLE2=ATAN(CURVE2-CURVE2[1])*180/3.1416
ANGLEAVERAGE2=WeightedAverage[PANGLE2](ANGLE2)
variable1= (close-CURVE1)/CURVE1 *100 //or to be defined
variable2= (CURVE1-CURVE2)/CURVE2 *100 //or to be defined
variable3= ANGLEAVERAGE1 // to be defined
variable4= ANGLEAVERAGE2 // to be defined

// >>> LEARNING PROCESS <<<
// If the classifier has detected a winning trade in the past
//IF hour > 7 and hour < 21 then

//STORING THE LEARNING DATA
IF classifierlong=1 or classifiershort=1 THEN
candleentry0010=candleentry0009
Y10010=Y10009
Y20010=Y20009
candleentry0009=candleentry0008
Y10009=Y10008
Y20009=Y20008
candleentry0008=candleentry0007
Y10008=Y10007
Y20008=Y20007
candleentry0007=candleentry0006
Y10007=Y10006
Y20007=Y20006
candleentry0006=candleentry0005
Y10006=Y10005
Y20006=Y20005
candleentry0005=candleentry0004
Y10005=Y10004
Y20005=Y20004
candleentry0004=candleentry0003
Y10004=Y10003
Y20004=Y20003
candleentry0003=candleentry0002
Y10003=Y10002
Y20003=Y20002
candleentry0002=candleentry0001
Y10002=Y10001
Y20002=Y20001
candleentry0001=max(candleentrylong,candleentryshort)
Y10001=classifierlong
Y20001=classifiershort
ENDIF

IF BARINDEX > 1000 THEN
IF classifierlong=1 or classifiershort=1 THEN
IF hour > 8 and hour < 21 then
FOR i=1 to 10 DO // THERE ARE BETTER IDEAS
ETAi=ETA*(0.7*i/10+0.3) //Learning Rate
IF i = 1 THEN
candleentry=candleentry0010
Y1=Y10010
Y2=Y20010
ENDIF
IF i = 2 THEN
candleentry=candleentry0009
Y1=Y10009
Y2=Y20009
ENDIF
IF i = 3 THEN
candleentry=candleentry0008
Y1=Y10008
Y2=Y20008
ENDIF
IF i = 4 THEN
candleentry=candleentry0007
Y1=Y10007
Y2=Y20007
ENDIF
IF i = 5 THEN
candleentry=candleentry0006
Y1=Y10006
Y2=Y20006
ENDIF
IF i = 6 THEN
candleentry=candleentry0005
Y1=Y10005
Y2=Y20005
ENDIF
IF i = 7 THEN
candleentry=candleentry0004
Y1=Y10004
Y2=Y20004
ENDIF
IF i = 8 THEN
candleentry=candleentry0003
Y1=Y10003
Y2=Y20003
ENDIF
IF i = 9 THEN
candleentry=candleentry0002
Y1=Y10002
Y2=Y20002
ENDIF
IF i = 10 THEN
candleentry=candleentry0001
Y1=Y10001
Y2=Y20001
ENDIF

// >>> INPUT FOR NEURONS <<<
input1=variable1[barindex-candleentry]
input2=variable2[barindex-candleentry]
input3=variable3[barindex-candleentry]
input4=variable4[barindex-candleentry]

// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))

// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))

// >>> PARTIAL DERIVATIVES OF COST FUNCTION <<<
// ... CROSS-ENTROPY AS COST FUNCTION ...
// COST = - ( Y1*LOG(output1)+(1-Y1)*LOG(1-output1) ) - ( Y2*LOG(output2)+(1-Y2)*LOG(1-output2) )
DerObias1 = (output1-Y1) * 1
DerObias2 = (output2-Y2) * 1
Derb11 = (output1-Y1) * F1
Derb12 = (output1-Y1) * F2
Derb13 = (output1-Y1) * F3
Derb14 = (output1-Y1) * F4
Derb15 = (output1-Y1) * F5
Derb16 = (output1-Y1) * F6
Derb21 = (output2-Y2) * F1
Derb22 = (output2-Y2) * F2
Derb23 = (output2-Y2) * F3
Derb24 = (output2-Y2) * F4
Derb25 = (output2-Y2) * F5
Derb26 = (output2-Y2) * F6

//Implementing BackPropagation
//(fixed: the original updated b13..b26 from b11/b12 by a copy-paste mistake)
Obias1=Obias1-ETAi*DerObias1
Obias2=Obias2-ETAi*DerObias2
b11=b11-ETAi*Derb11
b12=b12-ETAi*Derb12
b13=b13-ETAi*Derb13
b14=b14-ETAi*Derb14
b15=b15-ETAi*Derb15
b16=b16-ETAi*Derb16
b21=b21-ETAi*Derb21
b22=b22-ETAi*Derb22
b23=b23-ETAi*Derb23
b24=b24-ETAi*Derb24
b25=b25-ETAi*Derb25
b26=b26-ETAi*Derb26

// >>> PARTIAL DERIVATIVES OF COST FUNCTION (LAYER) <<<
DerFbias1 = (output1-Y1) * b11 * F1*(1-F1) * 1 + (output2-Y2) * b21 * F1*(1-F1) * 1
DerFbias2 = (output1-Y1) * b12 * F2*(1-F2) * 1 + (output2-Y2) * b22 * F2*(1-F2) * 1
DerFbias3 = (output1-Y1) * b13 * F3*(1-F3) * 1 + (output2-Y2) * b23 * F3*(1-F3) * 1
DerFbias4 = (output1-Y1) * b14 * F4*(1-F4) * 1 + (output2-Y2) * b24 * F4*(1-F4) * 1
DerFbias5 = (output1-Y1) * b15 * F5*(1-F5) * 1 + (output2-Y2) * b25 * F5*(1-F5) * 1
DerFbias6 = (output1-Y1) * b16 * F6*(1-F6) * 1 + (output2-Y2) * b26 * F6*(1-F6) * 1
Dera11 = (output1-Y1) * b11 * F1*(1-F1) * input1 + (output2-Y2) * b21 * F1*(1-F1) * input1
Dera12 = (output1-Y1) * b11 * F1*(1-F1) * input2 + (output2-Y2) * b21 * F1*(1-F1) * input2
Dera13 = (output1-Y1) * b11 * F1*(1-F1) * input3 + (output2-Y2) * b21 * F1*(1-F1) * input3
Dera14 = (output1-Y1) * b11 * F1*(1-F1) * input4 + (output2-Y2) * b21 * F1*(1-F1) * input4
Dera21 = (output1-Y1) * b12 * F2*(1-F2) * input1 + (output2-Y2) * b22 * F2*(1-F2) * input1
Dera22 = (output1-Y1) * b12 * F2*(1-F2) * input2 + (output2-Y2) * b22 * F2*(1-F2) * input2
Dera23 = (output1-Y1) * b12 * F2*(1-F2) * input3 + (output2-Y2) * b22 * F2*(1-F2) * input3
Dera24 = (output1-Y1) * b12 * F2*(1-F2) * input4 + (output2-Y2) * b22 * F2*(1-F2) * input4
Dera31 = (output1-Y1) * b13 * F3*(1-F3) * input1 + (output2-Y2) * b23 * F3*(1-F3) * input1
Dera32 = (output1-Y1) * b13 * F3*(1-F3) * input2 + (output2-Y2) * b23 * F3*(1-F3) * input2
Dera33 = (output1-Y1) * b13 * F3*(1-F3) * input3 + (output2-Y2) * b23 * F3*(1-F3) * input3
Dera34 = (output1-Y1) * b13 * F3*(1-F3) * input4 + (output2-Y2) * b23 * F3*(1-F3) * input4
Dera41 = (output1-Y1) * b14 * F4*(1-F4) * input1 + (output2-Y2) * b24 * F4*(1-F4) * input1
Dera42 = (output1-Y1) * b14 * F4*(1-F4) * input2 + (output2-Y2) * b24 * F4*(1-F4) * input2
Dera43 = (output1-Y1) * b14 * F4*(1-F4) * input3 + (output2-Y2) * b24 * F4*(1-F4) * input3
Dera44 = (output1-Y1) * b14 * F4*(1-F4) * input4 + (output2-Y2) * b24 * F4*(1-F4) * input4
Dera51 = (output1-Y1) * b15 * F5*(1-F5) * input1 + (output2-Y2) * b25 * F5*(1-F5) * input1
Dera52 = (output1-Y1) * b15 * F5*(1-F5) * input2 + (output2-Y2) * b25 * F5*(1-F5) * input2
Dera53 = (output1-Y1) * b15 * F5*(1-F5) * input3 + (output2-Y2) * b25 * F5*(1-F5) * input3
Dera54 = (output1-Y1) * b15 * F5*(1-F5) * input4 + (output2-Y2) * b25 * F5*(1-F5) * input4
Dera61 = (output1-Y1) * b16 * F6*(1-F6) * input1 + (output2-Y2) * b26 * F6*(1-F6) * input1
Dera62 = (output1-Y1) * b16 * F6*(1-F6) * input2 + (output2-Y2) * b26 * F6*(1-F6) * input2
Dera63 = (output1-Y1) * b16 * F6*(1-F6) * input3 + (output2-Y2) * b26 * F6*(1-F6) * input3
Dera64 = (output1-Y1) * b16 * F6*(1-F6) * input4 + (output2-Y2) * b26 * F6*(1-F6) * input4

//Implementing BackPropagation
Fbias1=Fbias1-ETAi*DerFbias1
Fbias2=Fbias2-ETAi*DerFbias2
Fbias3=Fbias3-ETAi*DerFbias3
Fbias4=Fbias4-ETAi*DerFbias4
Fbias5=Fbias5-ETAi*DerFbias5
Fbias6=Fbias6-ETAi*DerFbias6
a11=a11-ETAi*Dera11
a12=a12-ETAi*Dera12
a13=a13-ETAi*Dera13
a14=a14-ETAi*Dera14
a21=a21-ETAi*Dera21
a22=a22-ETAi*Dera22
a23=a23-ETAi*Dera23
a24=a24-ETAi*Dera24
a31=a31-ETAi*Dera31
a32=a32-ETAi*Dera32
a33=a33-ETAi*Dera33
a34=a34-ETAi*Dera34
a41=a41-ETAi*Dera41
a42=a42-ETAi*Dera42
a43=a43-ETAi*Dera43
a44=a44-ETAi*Dera44
a51=a51-ETAi*Dera51
a52=a52-ETAi*Dera52
a53=a53-ETAi*Dera53
a54=a54-ETAi*Dera54
a61=a61-ETAi*Dera61
a62=a62-ETAi*Dera62
a63=a63-ETAi*Dera63
a64=a64-ETAi*Dera64

//(fixed inside this comment: the DerFbias3 and DerFbias5 squared terms were mistyped)
//GradientNorm = SQRT(DerObias1*DerObias1 + DerObias2*DerObias2 + Derb11*Derb11+Derb12*Derb12+Derb13*Derb13+Derb14*Derb14+Derb15*Derb15+Derb16*Derb16 + Derb21*Derb21+Derb22*Derb22+Derb23*Derb23+Derb24*Derb24+Derb25*Derb25+Derb26*Derb26 + DerFbias1*DerFbias1+DerFbias2*DerFbias2+DerFbias3*DerFbias3+DerFbias4*DerFbias4+DerFbias5*DerFbias5+DerFbias6*DerFbias6 + Dera11*Dera11+Dera12*Dera12+Dera13*Dera13+Dera14*Dera14 + Dera21*Dera21+Dera22*Dera22+Dera23*Dera23+Dera24*Dera24 + Dera31*Dera31+Dera32*Dera32+Dera33*Dera33+Dera34*Dera34 + Dera41*Dera41+Dera42*Dera42+Dera43*Dera43+Dera44*Dera44 + Dera51*Dera51+Dera52*Dera52+Dera53*Dera53+Dera54*Dera54 + Dera61*Dera61+Dera62*Dera62+Dera63*Dera63+Dera64*Dera64)
NEXT
ENDIF
ENDIF
//ENDIF

/////////////////
// NEW PREDICTION
/////////////////
// >>> INPUT NEURONS <<<
input1=variable1
input2=variable2
input3=variable3
input4=variable4

// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))

// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))
ENDIF

return output1 coloured(0,150,0) style(line,2) as "prediction long", output2 coloured(200,0,0) style(line,2) as "prediction short", 0.5 coloured(0,0,200) as "0.5", 0.6 coloured(0,0,200) as "0.6", 0.7 coloured(0,0,200) as "0.7", 0.8 coloured(0,0,200) as "0.8"

09/11/2018 at 7:39 PM #80200

09/11/2018 at 8:37 PM #80206
My two most recent posts above were while I was optimising my Strategy / System (not your Indicator).
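The storage block and the new learning-rate ramp in this version amount to a 10-slot shift register replayed with growing step sizes. A small Python sketch of the idea (my own, hypothetical names; the script itself keeps slots candleentry0001..0010 and uses ETAi=ETA*(0.7*i/10+0.3), so the oldest replayed sample gets 37% of the base rate and the newest the full rate):

```python
from collections import deque

ETA = 0.05          # base learning rate, as in the script's hyperparameters
BUFFER_SIZE = 10    # the script keeps 10 numbered slots

# newest sample on the right, oldest on the left (a shift register)
buffer = deque(maxlen=BUFFER_SIZE)

def store(candleentry, y1, y2):
    """Mimics the block that shifts candleentry000x / Y1000x / Y2000x."""
    buffer.append((candleentry, y1, y2))

def learning_rates():
    """ETAi = ETA*(0.7*i/10 + 0.3); i=1 is the oldest slot, i=10 the newest."""
    return [ETA * (0.7 * i / 10 + 0.3) for i in range(1, BUFFER_SIZE + 1)]

for n in range(12):          # store 12 events; only the last 10 survive
    store(n, n % 2, (n + 1) % 2)

rates = learning_rates()
assert len(buffer) == 10
assert buffer[-1][0] == 11                   # newest event kept
assert abs(rates[0] - ETA * 0.37) < 1e-12    # oldest sample: 37% of ETA
assert abs(rates[-1] - ETA) < 1e-12          # newest sample: full ETA
```

The deque with maxlen does automatically what the long chain of candleentry0010=candleentry0009 assignments does by hand.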
Here is an example with screenshots. I feel like I’m losing my grip! Ha.
If I optimise P1 and P2 then the top result (values 38 and 100) is Leo 18.
If I then insert values 38 and 100 into my System I get result Leo 19.
Anybody any suggestions or comments, esp Leo?
Edit / PS
Don’t worry just yet … I may have the reason, more later! 🙂
09/11/2018 at 9:14 PM #80211
I’ve got into the habit of naming my variables A + line number, so for line 8 (line 11 in my System) I used …

P2=A11 //FOR CURVE AS INPUT

But there is also an A11 in Leo’s code, so I was making P2 = A11 (by mistake), but it produced great results!! 🙂
Leo’s Neural Network code is beyond me, but I will try and explore what is going on re my blunder! 🙂
Also I will set my corrupted System going on Demo Forward Test and report results over on my own Topic using Leo’s Neural Network code.
So in summary … a storm in a tea cup, but I may have unearthed something interesting?
09/12/2018 at 3:56 AM #80219
I read your first post and was thinking about it; then my daughter finally went back to sleep (4:33 am) and I continued reading.
You take one weight, A11, and reset it every time to the value 38. Do not feel disappointed; I do not know either what effect that has on that particular neuron and how it propagates to the others. I can imagine the effect was like giving alcohol to that neuron. Haha
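For what it's worth, the "drunk neuron" can be illustrated numerically: pinning a11 to a large constant like 38 drives the first hidden neuron's sigmoid into saturation whenever input1 is comfortably away from zero, so F1 degenerates into a near-step function of the sign of input1. A hedged Python sketch (my own illustration with made-up inputs, not Leo's code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# hidden neuron: F1 = sigmoid(a11*input1 + a12*input2 + a13*input3 + a14*input4 + Fbias1)
def F1(a11, inputs, other_weights=(1.0, 1.0, 1.0), fbias=0.0):
    z = a11 * inputs[0] + sum(w * v for w, v in zip(other_weights, inputs[1:])) + fbias
    return sigmoid(z)

# with a11 pinned at 38, F1 saturates for any input1 away from zero...
assert F1(38.0, [0.5, 0.2, -0.1, 0.3]) > 0.999999
assert F1(38.0, [-0.5, 0.2, -0.1, 0.3]) < 1e-6
# ...whereas with a trained-scale weight F1 still varies smoothly
assert 0.2 < F1(1.0, [0.5, 0.2, -0.1, 0.3]) < 0.9
```

A saturated neuron also has F1*(1-F1) close to 0, so its gradient terms nearly vanish; the rest of the network then learns around it, which may be why the "corrupted" system still produced good results.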
1 user thanked author for this post.
09/30/2018 at 12:29 PM #81619
Awesome video about the math of neural networks. It is what I am coding here.
09/30/2018 at 9:17 PM #81637Leo
There is software published by Google for deep learning: “TensorFlow”.
In the following presentation , you will find interesting topics
In the meantime I have already made a list of additional data that could be included in the system:
Ret : daily return of the asset
HV : realized volatility for the past 5 sessions
M5 Momentum 5
M10 Momentum 10
VIX9d, formerly VXST, 9-day volatility index of the S&P 500 (= market sentiment short term)
VIX of S&P 500 (= market sentiment long term)
VVIX (= market sentiment momentum)
MO month of the year (seasonality)
DAY day of the week (seasonality)
10/01/2018 at 5:38 AM #81642
Hi didi059,
there are other libraries for machine learning, like scikit-learn, but we would need to learn another programming language and another interface with the broker. So far I do not know how to implement those libraries. We knew from the beginning that ProRealTime is not the right language for artificial intelligence.
Those indicators sound good; I will keep them in mind.
10/01/2018 at 4:21 PM #81685
Hi Leo,
Any idea what I should do to make it work universally with stocks?
Best,
Chris
10/02/2018 at 7:14 AM #81723Hi, Actually is already universal, If it is working that the issue 🙂
From line 106 to 123 you add your indicators or variable and the output is a prediction of going long or short base on those variable you choose
Actuallly in an strategy, those variables ca be in other time frames
2 users thanked author for this post.
10/06/2018 at 2:10 PM #82157
Hi,
thanks for the code.
If I’m not wrong you use “if barindex > xxxx then” to create a learning period.
Do you think that if I replace BARINDEX with “DEFPARAM PRELOADBARS = xxx” it could create a learning period and start trading immediately?
1 user thanked author for this post.
10/15/2018 at 4:05 AM #82757
Yes, you are right.
I use that to display it as an indicator. Preloaded bars are also used for the learning process.
I would keep the “if barindex > xxxx then” check anyway, because there are so many values involved that it is better to allow time for everything to load.