Neural networks programming with prorealtime
This topic has 126 replies, 8 voices, and was last updated 1 year ago by MobiusGrey.
Tagged: data mining, machine learning
08/24/2018 at 6:12 PM #79003
Discussion about the trading strategy made by GraHal from the classifier code can be found by following this link: Long only trading with trend confirmation on 5 minutes timeframe
08/26/2018 at 7:15 AM #79072
Hi all, here is the complete classifier.
Now that I have completed it, I realised that the classifier can be whatever you want: for example, fractals plus a loop to store the positions of the points, or any other method that detects the exact change of trend.
//Variables:
//candlesback=7
//ProfitRiskRatio=2
//spread=1.5
myATR=average[20](range)+std[20](range)
ExtraStopLoss=myATR
//ExtraStopLoss=3*spread*pipsize

//for long trades
classifierlong=0
FOR scanL=1 to candlesback DO
 IF classifierlong[scanL]=1 then
  BREAK
 ENDIF
 LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
 IF close[scanL]+LongTradeLength < high-spread*pipsize then
  IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
   classifierlong=1
   candleentrylong=barindex-scanL
   BREAK
  ENDIF
 ENDIF
NEXT
IF classifierlong=1 then
 DRAWSEGMENT(candleentrylong,close[barindex-candleentrylong],barindex,close[barindex-candleentrylong]+LongTradeLength) COLOURED(0,150,0)
 DRAWELLIPSE(candleentrylong-1,low[barindex-candleentrylong]-ExtraStopLoss,barindex+1,high+ExtraStopLoss) COLOURED(0,150,0)
ENDIF

//for short trades
classifiershort=0
FOR scanS=1 to candlesback DO
 IF classifiershort[scanS]=1 then
  BREAK
 ENDIF
 ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
 IF close[scanS]-ShortTradeLength > low+spread*pipsize then
  IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
   classifiershort=1
   candleentryshort=barindex-scanS
   BREAK
  ENDIF
 ENDIF
NEXT
IF classifiershort=1 then
 DRAWSEGMENT(candleentryshort,close[barindex-candleentryshort],barindex,close[barindex-candleentryshort]-ShortTradeLength) COLOURED(150,0,0)
 DRAWELLIPSE(candleentryshort-1,high[barindex-candleentryshort]+ExtraStopLoss,barindex+1,low-ExtraStopLoss) COLOURED(150,0,0)
ENDIF

return

08/26/2018 at 8:19 AM #79073
can be found by following this link:
The link has come up with 404 Not Found for me both yesterday and today.
Hey, thank you so much for sharing your latest @Leo … this is exciting new work!
Cheers
08/26/2018 at 10:12 PM #79135
Hi all,
I wrote the main structure of the neural network.
There is still a lot of work in progress… for example, figuring out the initial values for the weights and biases. I have added a photo with the scheme.
///////////////// CLASSIFIER /////////////
myATR=average[20](range)+std[20](range)
ExtraStopLoss=myATR
//ExtraStopLoss=3*spread*pipsize

//for long trades
classifierlong=0
FOR scanL=1 to candlesback DO
 IF classifierlong[scanL]=1 then
  BREAK
 ENDIF
 LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
 IF close[scanL]+LongTradeLength < high-spread*pipsize then
  IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
   classifierlong=1
   candleentrylong=barindex-scanL
   BREAK
  ENDIF
 ENDIF
NEXT

//for short trades
classifiershort=0
FOR scanS=1 to candlesback DO
 IF classifiershort[scanS]=1 then
  BREAK
 ENDIF
 ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
 IF close[scanS]-ShortTradeLength > low+spread*pipsize then
  IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
   classifiershort=1
   candleentryshort=barindex-scanS
   BREAK
  ENDIF
 ENDIF
NEXT

///////////////////////// NEURONAL NETWORK /////////////////////
variable1= // to be defined
//variable2= // to be defined
//variable3= // to be defined
//variable4= // to be defined

IF classifierlong=1 or classifiershort=1 THEN
 candleentry=max(candleentrylong,candleentryshort)

 // >>> INPUT NEURONS <<<
 input1=variable1[barindex-candleentry]
 input2=variable2[barindex-candleentry]
 input3=variable3[barindex-candleentry]
 input4=variable4[barindex-candleentry]

 // >>> FIRST LAYER OF NEURONS <<<
 F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
 F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
 F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
 F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
 F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
 F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
 F1=1/(1+EXP(-1*F1))
 F2=1/(1+EXP(-1*F2))
 F3=1/(1+EXP(-1*F3))
 F4=1/(1+EXP(-1*F4))
 F5=1/(1+EXP(-1*F5))
 F6=1/(1+EXP(-1*F6))

 // >>> OUTPUT NEURONS <<<
 output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
 output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
 output1=1/(1+EXP(-1*output1))
 output2=1/(1+EXP(-1*output2))
ENDIF

08/27/2018 at 8:11 AM #79140
For the weights, I would suggest using the percentage of a similarity as a coefficient, like the way I did in this previous version: https://www.prorealcode.com/topic/neural-networks-programming-with-prorealtime/#post-78789
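For anyone who wants to prototype the 4-6-2 forward pass from the post above outside ProRealTime, here is a minimal Python sketch of the same structure (4 inputs, 6 hidden sigmoid neurons, 2 sigmoid outputs); the all-zero weights are placeholders, not values from the posted code:

```python
import math

def sigmoid(x):
    # logistic activation, equivalent to 1/(1+EXP(-1*x)) in ProBuilder
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, a, fbias, b, obias):
    """One pass through the 4-6-2 network: 4 inputs, 6 hidden sigmoid
    neurons (F1..F6), 2 sigmoid outputs (output1, output2)."""
    hidden = [sigmoid(sum(a[i][j] * inputs[j] for j in range(4)) + fbias[i])
              for i in range(6)]
    outputs = [sigmoid(sum(b[k][i] * hidden[i] for i in range(6)) + obias[k])
               for k in range(2)]
    return hidden, outputs

# placeholder all-zero weights: every sigmoid(0) returns exactly 0.5
a = [[0.0] * 4 for _ in range(6)]
b = [[0.0] * 6 for _ in range(2)]
hidden, outputs = forward([1, 0, 1, 0], a, [0.0] * 6, b, [0.0] * 2)
print(outputs)  # [0.5, 0.5]
```

With untrained (zero) weights every neuron sits at 0.5, the midpoint of the sigmoid, which is one common reason to centre initial weights near zero.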
08/27/2018 at 9:36 AM #79150
For the weights, I would suggest using the percentage of a similarity as a coefficient, like the way I did in this previous version: https://www.prorealcode.com/topic/neural-networks-programming-with-prorealtime/#post-78789
but we need 4*6+6+2*6+2 = 44 initial values :s
I am still reading and learning about this topic.
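That parameter count can be double-checked with a few lines of Python (the 4-6-2 layer sizes come from the code earlier in the thread; the rest is plain arithmetic):

```python
# layer sizes used in this thread: 4 inputs, 6 hidden neurons, 2 outputs
n_in, n_hidden, n_out = 4, 6, 2

weights_hidden = n_in * n_hidden   # a11..a64 -> 24
biases_hidden = n_hidden           # Fbias1..Fbias6 -> 6
weights_out = n_hidden * n_out     # b11..b26 -> 12
biases_out = n_out                 # Obias1, Obias2 -> 2

total = weights_hidden + biases_hidden + weights_out + biases_out
print(total)  # 44
```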
08/27/2018 at 9:57 AM #79152
You are referring to this kind of line: F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1, right?
I assume that a11 … a14 are different weights that give more or less importance to each of your inputs. That is how weights are used in neural networks, such as the perceptron for instance. There is no single way to determine how weights are calculated or initialised.
Like I said in my previous post, you can set them as a statistical percentage of good values. So, let's assume that all your inputs (input1 to input4) are boolean variables (0=false, 1=true). For example, if your input1 is a value that gave better accuracy in your previous trade results (and you'll have to make a study for this, like the way I did in my example), you can weight the input like this:
input1 was true 53% of the time when a classifierlong occurred, so a11=0.53
F1 = 0.53*1+a12*input2+a13*input3+a14*input4+Fbias1
This is a rough idea of how you can temper an input.
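Nicolas's statistical-weight idea can be sketched in Python; the helper below is hypothetical (not from the original posts) and the history data is made up, purely to illustrate the calculation:

```python
def hit_rate_weight(input_flags, classifier_flags):
    """Hypothetical helper: fraction of classifier=1 bars on which the
    boolean input was also true, used directly as the initial weight."""
    hits = [i for i, c in zip(input_flags, classifier_flags) if c == 1]
    return sum(hits) / len(hits) if hits else 0.0

# made-up history: input1 true on 53 of the 100 bars where classifierlong=1
input1_history = [1] * 53 + [0] * 47
classifierlong_history = [1] * 100

a11 = hit_rate_weight(input1_history, classifierlong_history)
print(a11)  # 0.53
```

The same study would be repeated per input (a12, a13, a14, …), each against its own historical hit rate.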
08/27/2018 at 3:46 PM #79179
Thanks Nicolas,
Yes indeed; from what I have read so far, finding the initial values is a lot of trial and error. Others say that with experience one can find good initial values. The actual values are achieved during the learning process of the neural network.
Talking about the learning process…
Imagine that we build our Neural Network with success.
How do we do the learning process?
Is it the function DEFPARAM PreLoadBars = 50000?
Does it mean that 50k candles are loaded and the code is run through all of those candles? That would be great for the learning process.
08/27/2018 at 4:59 PM #79185
08/27/2018 at 8:16 PM #79194
Sorry Nicolas, I could not fully understand the concept of PreLoadBars = 10000.
At the moment of activating the strategy, does it mean that the program is run 10000 units of time before barindex=1?
I would like to understand this correctly, because the neural network should learn before taking trading decisions; otherwise we have to wait many bars before it actually buys or sells, which can be a pain in the neck with this methodology of trading.
Another way is to backtest using the GRAPH function and try to export and import all 44 values as initial values for a quicker learning process…
Puff… what if we are eager to create a bigger neural network with hundreds of weights!?
I put a lot of hope in that preloadbars function. Am I right? Or naive?
08/27/2018 at 8:38 PM #79195
Hi all,
I have completed the calculation of the partial derivatives for the learning process of the neural network. I would really appreciate it if someone could confirm them, or correct them if I have missed something.
///////////////// CLASSIFIER /////////////
myATR=average[20](range)+std[20](range)
ExtraStopLoss=myATR
//ExtraStopLoss=3*spread*pipsize

//for long trades
classifierlong=0
FOR scanL=1 to candlesback DO
 IF classifierlong[scanL]=1 then
  BREAK
 ENDIF
 LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
 IF close[scanL]+LongTradeLength < high-spread*pipsize then
  IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
   classifierlong=1
   candleentrylong=barindex-scanL
   BREAK
  ENDIF
 ENDIF
NEXT

//for short trades
classifiershort=0
FOR scanS=1 to candlesback DO
 IF classifiershort[scanS]=1 then
  BREAK
 ENDIF
 ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
 IF close[scanS]-ShortTradeLength > low+spread*pipsize then
  IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
   classifiershort=1
   candleentryshort=barindex-scanS
   BREAK
  ENDIF
 ENDIF
NEXT

///////////////////////// NEURONAL NETWORK /////////////////////
variable1= // to be defined
//variable2= // to be defined
//variable3= // to be defined
//variable4= // to be defined

IF classifierlong=1 or classifiershort=1 THEN
 candleentry=max(candleentrylong,candleentryshort)
 Y1=classifierlong
 Y2=classifiershort

 // >>> INPUT NEURONS <<<
 input1=variable1[barindex-candleentry]
 input2=variable2[barindex-candleentry]
 input3=variable3[barindex-candleentry]
 input4=variable4[barindex-candleentry]

 // >>> FIRST LAYER OF NEURONS <<<
 F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
 F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
 F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
 F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
 F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
 F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
 F1=1/(1+EXP(-1*F1))
 F2=1/(1+EXP(-1*F2))
 F3=1/(1+EXP(-1*F3))
 F4=1/(1+EXP(-1*F4))
 F5=1/(1+EXP(-1*F5))
 F6=1/(1+EXP(-1*F6))

 // >>> OUTPUT NEURONS <<<
 output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
 output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
 output1=1/(1+EXP(-1*output1))
 output2=1/(1+EXP(-1*output2))

 // >>> PARTIAL DERIVATIVES OF COST FUNCTION <<<
 // COST = (1/2)*((Y1-output1)^2 + (Y2-output2)^2)
 DerObias1 = (Y1-output1) * output1*(1-output1) * 1
 DerObias2 = (Y2-output2) * output2*(1-output2) * 1
 Derb11 = (Y1-output1) * output1*(1-output1) * F1
 Derb12 = (Y1-output1) * output1*(1-output1) * F2
 Derb13 = (Y1-output1) * output1*(1-output1) * F3
 Derb14 = (Y1-output1) * output1*(1-output1) * F4
 Derb15 = (Y1-output1) * output1*(1-output1) * F5
 Derb16 = (Y1-output1) * output1*(1-output1) * F6
 Derb21 = (Y2-output2) * output2*(1-output2) * F1
 Derb22 = (Y2-output2) * output2*(1-output2) * F2
 Derb23 = (Y2-output2) * output2*(1-output2) * F3
 Derb24 = (Y2-output2) * output2*(1-output2) * F4
 Derb25 = (Y2-output2) * output2*(1-output2) * F5
 Derb26 = (Y2-output2) * output2*(1-output2) * F6
 DerFbias1 = (Y1-output1) * output1*(1-output1) * b11 * F1*(1-F1) * 1 + (Y2-output2) * output2*(1-output2) * b21 * F1*(1-F1) * 1
 DerFbias2 = (Y1-output1) * output1*(1-output1) * b12 * F2*(1-F2) * 1 + (Y2-output2) * output2*(1-output2) * b22 * F2*(1-F2) * 1
 DerFbias3 = (Y1-output1) * output1*(1-output1) * b13 * F3*(1-F3) * 1 + (Y2-output2) * output2*(1-output2) * b23 * F3*(1-F3) * 1
 DerFbias4 = (Y1-output1) * output1*(1-output1) * b14 * F4*(1-F4) * 1 + (Y2-output2) * output2*(1-output2) * b24 * F4*(1-F4) * 1
 DerFbias5 = (Y1-output1) * output1*(1-output1) * b15 * F5*(1-F5) * 1 + (Y2-output2) * output2*(1-output2) * b25 * F5*(1-F5) * 1
 DerFbias6 = (Y1-output1) * output1*(1-output1) * b16 * F6*(1-F6) * 1 + (Y2-output2) * output2*(1-output2) * b26 * F6*(1-F6) * 1
 Dera11 = (Y1-output1) * output1*(1-output1) * b11 * F1*(1-F1) * input1 + (Y2-output2) * output2*(1-output2) * b21 * F1*(1-F1) * input1
 Dera12 = (Y1-output1) * output1*(1-output1) * b11 * F1*(1-F1) * input2 + (Y2-output2) * output2*(1-output2) * b21 * F1*(1-F1) * input2
 Dera13 = (Y1-output1) * output1*(1-output1) * b11 * F1*(1-F1) * input3 + (Y2-output2) * output2*(1-output2) * b21 * F1*(1-F1) * input3
 Dera14 = (Y1-output1) * output1*(1-output1) * b11 * F1*(1-F1) * input4 + (Y2-output2) * output2*(1-output2) * b21 * F1*(1-F1) * input4
 Dera21 = (Y1-output1) * output1*(1-output1) * b12 * F2*(1-F2) * input1 + (Y2-output2) * output2*(1-output2) * b22 * F2*(1-F2) * input1
 Dera22 = (Y1-output1) * output1*(1-output1) * b12 * F2*(1-F2) * input2 + (Y2-output2) * output2*(1-output2) * b22 * F2*(1-F2) * input2
 Dera23 = (Y1-output1) * output1*(1-output1) * b12 * F2*(1-F2) * input3 + (Y2-output2) * output2*(1-output2) * b22 * F2*(1-F2) * input3
 Dera24 = (Y1-output1) * output1*(1-output1) * b12 * F2*(1-F2) * input4 + (Y2-output2) * output2*(1-output2) * b22 * F2*(1-F2) * input4
 Dera31 = (Y1-output1) * output1*(1-output1) * b13 * F3*(1-F3) * input1 + (Y2-output2) * output2*(1-output2) * b23 * F3*(1-F3) * input1
 Dera32 = (Y1-output1) * output1*(1-output1) * b13 * F3*(1-F3) * input2 + (Y2-output2) * output2*(1-output2) * b23 * F3*(1-F3) * input2
 Dera33 = (Y1-output1) * output1*(1-output1) * b13 * F3*(1-F3) * input3 + (Y2-output2) * output2*(1-output2) * b23 * F3*(1-F3) * input3
 Dera34 = (Y1-output1) * output1*(1-output1) * b13 * F3*(1-F3) * input4 + (Y2-output2) * output2*(1-output2) * b23 * F3*(1-F3) * input4
 Dera41 = (Y1-output1) * output1*(1-output1) * b14 * F4*(1-F4) * input1 + (Y2-output2) * output2*(1-output2) * b24 * F4*(1-F4) * input1
 Dera42 = (Y1-output1) * output1*(1-output1) * b14 * F4*(1-F4) * input2 + (Y2-output2) * output2*(1-output2) * b24 * F4*(1-F4) * input2
 Dera43 = (Y1-output1) * output1*(1-output1) * b14 * F4*(1-F4) * input3 + (Y2-output2) * output2*(1-output2) * b24 * F4*(1-F4) * input3
 Dera44 = (Y1-output1) * output1*(1-output1) * b14 * F4*(1-F4) * input4 + (Y2-output2) * output2*(1-output2) * b24 * F4*(1-F4) * input4
 Dera51 = (Y1-output1) * output1*(1-output1) * b15 * F5*(1-F5) * input1 + (Y2-output2) * output2*(1-output2) * b25 * F5*(1-F5) * input1
 Dera52 = (Y1-output1) * output1*(1-output1) * b15 * F5*(1-F5) * input2 + (Y2-output2) * output2*(1-output2) * b25 * F5*(1-F5) * input2
 Dera53 = (Y1-output1) * output1*(1-output1) * b15 * F5*(1-F5) * input3 + (Y2-output2) * output2*(1-output2) * b25 * F5*(1-F5) * input3
 Dera54 = (Y1-output1) * output1*(1-output1) * b15 * F5*(1-F5) * input4 + (Y2-output2) * output2*(1-output2) * b25 * F5*(1-F5) * input4
 Dera61 = (Y1-output1) * output1*(1-output1) * b16 * F6*(1-F6) * input1 + (Y2-output2) * output2*(1-output2) * b26 * F6*(1-F6) * input1
 Dera62 = (Y1-output1) * output1*(1-output1) * b16 * F6*(1-F6) * input2 + (Y2-output2) * output2*(1-output2) * b26 * F6*(1-F6) * input2
 Dera63 = (Y1-output1) * output1*(1-output1) * b16 * F6*(1-F6) * input3 + (Y2-output2) * output2*(1-output2) * b26 * F6*(1-F6) * input3
 Dera64 = (Y1-output1) * output1*(1-output1) * b16 * F6*(1-F6) * input4 + (Y2-output2) * output2*(1-output2) * b26 * F6*(1-F6) * input4
 GradientNorm = SQRT(DerObias1*DerObias1+DerObias2*DerObias2+Derb11*Derb11+Derb12*Derb12+Derb13*Derb13+Derb14*Derb14+Derb15*Derb15+Derb16*Derb16+Derb21*Derb21+Derb22*Derb22+Derb23*Derb23+Derb24*Derb24+Derb25*Derb25+Derb26*Derb26+DerFbias1*DerFbias1+DerFbias2*DerFbias2+DerFbias3*DerFbias3+DerFbias4*DerFbias4+DerFbias5*DerFbias5+DerFbias6*DerFbias6+Dera11*Dera11+Dera12*Dera12+Dera13*Dera13+Dera14*Dera14+Dera21*Dera21+Dera22*Dera22+Dera23*Dera23+Dera24*Dera24+Dera31*Dera31+Dera32*Dera32+Dera33*Dera33+Dera34*Dera34+Dera41*Dera41+Dera42*Dera42+Dera43*Dera43+Dera44*Dera44+Dera51*Dera51+Dera52*Dera52+Dera53*Dera53+Dera54*Dera54+Dera61*Dera61+Dera62*Dera62+Dera63*Dera63+Dera64*Dera64)
ENDIF

08/27/2018 at 9:10 PM #79197
Now, every time a new input is generated by the classifier, every weight and bias can be improved (the neural network learns) using the equation I attached (epsilon is a small value that can be a variable to be optimised in walk-forward testing).
It would be great if some of you could help me with this project.
In fact, now that I have finished the back-propagation algorithm, we are very close to being able to create an indicator for prediction using a neural network.
All I did was apply what I learned here:
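One way to confirm the partial derivatives posted above is a numerical gradient check. The Python sketch below uses arbitrary made-up weights and inputs (none of them from the original code) and verifies the DerObias1 expression against a central finite difference of the cost; note that the Der* terms as written equal the negative of dCost/dw, so the learning update adds epsilon times Der:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# hypothetical example values (not from the original post)
inputs = [1.0, 0.0, 1.0, 0.5]
y = [1.0, 0.0]                 # Y1=classifierlong, Y2=classifiershort
a = [[0.1 * (i + j) for j in range(4)] for i in range(6)]  # hidden weights
fbias = [0.05 * i for i in range(6)]
b = [[0.2, -0.1, 0.3, 0.1, -0.2, 0.15],
     [0.1, 0.1, 0.1, 0.1, 0.1, 0.1]]
obias = [0.01, -0.02]

def hidden_layer():
    return [sigmoid(sum(a[i][j] * inputs[j] for j in range(4)) + fbias[i])
            for i in range(6)]

def cost(obias1):
    # total cost as a function of Obias1 only, all other parameters fixed
    f = hidden_layer()
    o1 = sigmoid(sum(b[0][i] * f[i] for i in range(6)) + obias1)
    o2 = sigmoid(sum(b[1][i] * f[i] for i in range(6)) + obias[1])
    return 0.5 * ((y[0] - o1) ** 2 + (y[1] - o2) ** 2)

# analytic expression from the post: DerObias1=(Y1-output1)*output1*(1-output1)
f = hidden_layer()
o1 = sigmoid(sum(b[0][i] * f[i] for i in range(6)) + obias[0])
der_obias1 = (y[0] - o1) * o1 * (1 - o1)

# central finite difference of the cost with respect to Obias1
h = 1e-6
numeric = (cost(obias[0] + h) - cost(obias[0] - h)) / (2 * h)

# Der* is the negative gradient, so DerObias1 should equal -dCost/dObias1,
# and the learning update would be: Obias1 = Obias1 + epsilon * DerObias1
print(abs(der_obias1 + numeric) < 1e-8)  # True
```

The same comparison can be repeated for any of the other 43 Der* expressions by perturbing the corresponding weight in the cost function.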
08/27/2018 at 9:41 PM #79200
At the moment of activating the strategy, does it mean that the program is run 10000 units of time before barindex=1?
Just try if a variable had incremented on the first bar:
defparam preloadbars=10000
a = a + 1
if a = 0 then
 buy at market
endif
graph a

If a >= 10000 on the first bar of the backtest, then the code has been read and executed ten thousand times (I have a very bad feeling about this! 🙂 )
08/27/2018 at 10:45 PM #79206
It would be great if some of you could help me with this project.
I only wish I could, Leo, and I feel sad that only Nicolas is helping you; you also need folks with more time than Nicolas can spare.
Your work is ground-breaking as far as most of us are concerned. I did read the reference you posted and broadly understood it, but it's the coding I struggle with.
Tomorrow I will see if I can contact @Maz as he is a capable member who sticks in my mind as one who similarly tried to move forward with complex collaborative projects on here.
08/28/2018 at 3:19 AM #79209