Discussing the strategy VECTORIAL DAX (M5)
11/28/2019 at 7:35 PM #113879
Edurecio – thank you for your contribution to this topic, but please only post in one language (the language of the forum that you are posting in), otherwise threads become very large if every possible language is catered for.
The platform has a built-in translation tool that others can use if they feel the need.
I’m sorry, VONASI. It will not happen again.
Can I rectify my text?
Can you modify it?
11/28/2019 at 10:09 PM #113884
Can I rectify my text? Can you modify it?
You have 5 minutes after posting to edit your post. After that only moderators can edit it. I will delete the non-English part for you as you have requested.
12/04/2019 at 12:14 PM #114164
Getting quieter by the week on Vectorials??
Anybody still having ongoing success and if Yes … with what version please?
The versions I’ve been Forward Testing have shown fewer and fewer profitable trades as the weeks went by, notably in the last few weeks. Is it a similar scenario with other users??
I do think it’s a great strategy, maybe all versions need a fresh round of optimisation??
Or, more practically … let’s work on just 1 or 2 versions??
12/04/2019 at 2:57 PM #114177
12/04/2019 at 3:08 PM #114178
Anybody still having ongoing success and if Yes … with what version please?
Unfortunately, big loss with V2 on NAS and DAX, and V6.1 MOD on DAX. I’ve relaunched them with the smallest position size possible… But yeah it looks overoptimized (especially v.6.1) 🙁
12/04/2019 at 3:34 PM #114184
I was thinking that it may be due to the calculation period used. I’ve noticed that my live versions were not always taking the same trades as in backtests… Maybe it’s due to the history available when launching the code live (max 99999 bars); might that explain why it’s performing so badly?
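One plausible mechanism, sketched below with synthetic prices (a generic illustration, not the Vectorial code): recursive indicators such as exponential averages depend on the entire preloaded history, so the same bar can show a slightly different indicator value live than in a backtest when less history is available at launch.

```python
# Generic illustration (synthetic prices, not the Vectorial code): a
# recursive indicator such as an EMA depends on all preloaded history,
# so less history at launch shifts its value slightly.
import random

def ema(prices, period):
    """Plain exponential moving average, seeded with the first price."""
    alpha = 2.0 / (period + 1)
    value = prices[0]
    for p in prices[1:]:
        value = alpha * p + (1 - alpha) * value
    return value

random.seed(42)
prices = [12000.0]
for _ in range(5000):
    prices.append(prices[-1] + random.gauss(0, 5))  # synthetic M5 closes

print(ema(prices, 200))         # value with the full history available
print(ema(prices[-300:], 200))  # value when only 300 bars were preloaded
# The two differ; a rule that compares price to the average near the
# threshold can then fire in backtest but not live (or vice versa).
# The gap shrinks as preloaded history grows relative to the period.
```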
12/04/2019 at 4:27 PM #114190live versions were not taking always the same trades as in backtest
Isn’t the above likely due to a fixed spread and fixed minimum distance in the backtest, compared with a variable spread and variable minimum distance in live trading?
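A toy illustration of that divergence (hypothetical DAX numbers): a buy-stop order fills on the ask, and ask = bid + spread, so the same bid path can trigger the order under a momentarily wide live spread but not under the fixed backtest spread.

```python
# Toy numbers (hypothetical): the same bid path triggers a buy-stop
# live but not in the backtest, because stop orders fill on the ask
# and ask = bid + spread.
stop_level = 13051.0   # hypothetical buy-stop on the DAX
bid_high = 13049.0     # highest bid reached during the bar

backtest_ask_high = bid_high + 1.0   # fixed backtest spread of 1.0
live_ask_high = bid_high + 2.5       # spread widened live, e.g. around news

print(backtest_ask_high >= stop_level)  # False -> no trade in the backtest
print(live_ask_high >= stop_level)      # True  -> trade taken live
```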
12/04/2019 at 4:34 PM #114191
12/04/2019 at 4:41 PM #114192
Good afternoon, attached are screenshots of the strategy on the Nasdaq. This is a live account; I waited for it to make two losing trades before activating it live. I also have Version 6.1 on the DAX and I have been losing since I put it on.
12/04/2019 at 4:52 PM #114196
Good afternoon, attached are screenshots of the strategy on the Nasdaq. This is a live account; I waited for it to make two losing trades before activating it live. I also have Version 6.1 on the DAX and I have been losing since I put it on.
Hello Nacho, is this with 1 unit of US Tech 100 or 0.5?
12/04/2019 at 5:07 PM #114197
12/04/2019 at 7:52 PM #114209
Hello, just a note about taking profit manually. I do it sometimes but there’s a catch to it.
Generally speaking, while in a position there can be multiple similar signals that would normally not be taken. So if you take profit manually (and restart the system instantly), there’s a chance that the strategy becomes out of sync for a short period. The same goes for frequently stopping and restarting a system. There’s a 50/50 chance you restart out of sync, but perhaps I’m wrong here.
Besides that, in 5.0p there’s a setting where the resetcounter can be set to 0 or 1. If it is set to the experimental value, stopping and restarting the system could have an influence because of the tradecounter.
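To make the out-of-sync risk concrete, here is a minimal sketch with a hypothetical signal stream (not the Vectorial code): a strategy that ignores signals while in a position will, after a manual close and instant restart, act on a signal the uninterrupted run would have skipped.

```python
# Minimal sketch (hypothetical signal stream, not the Vectorial code):
# closing a position manually and restarting at once makes the strategy
# act on a signal the uninterrupted run would have skipped.
signals = [1, 0, 1, 0, 0, 1, 0]   # 1 = entry signal on that bar

def run(signals, exit_after, manual_close_at=None):
    """Replay signals; positions last `exit_after` bars unless closed manually."""
    trades, in_position, bars_held = [], False, 0
    for bar, sig in enumerate(signals):
        if in_position:
            bars_held += 1
            if bars_held == exit_after or bar == manual_close_at:
                in_position = False           # position closed
        elif sig:                             # flat, so act on the signal
            trades.append(bar)
            in_position, bars_held = True, 0
    return trades

print(run(signals, exit_after=4))                     # [0, 5] uninterrupted
print(run(signals, exit_after=4, manual_close_at=1))  # [0, 2] after manual close
```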
12/04/2019 at 8:31 PM #114213
Hello.
As I said last week, I am trying to do a different optimization, month by month.
My approach is different so as not to fall into over-optimization.
When we optimize a system over a period of time we obtain the best parameters for that period, but they are the BEST: we cannot expect better results than those in the future, because they are calculated over a past period whose results are already known. They are valid parameters, but we do not know whether they will remain valid in the future. I would even say that no value calculated on the past will be completely or perfectly valid in the future, because the results will not repeat themselves.
I tried to make a “walk-forward” based on the past, validating it on the past as well.
For this I took the parameters that seemed most relevant to me (PeriodA, PeriodB, ChandelierA, ChandelierB, Angle1 and Angle2) and optimized them month by month.
So that, FOR EXAMPLE, with the optimal values of each of the months from August 2018 to September I obtained an average. That average was fed into the system that would have to run in October, and I observed how it behaved compared with the expected values, with the optimization values of that month once it had passed, and with the original values proposed by Fifi. Put another way, I did a “BACK FORWARD”: I applied the average of the previous months to the following month, month by month, adding each finished month to the average used for the next one.
The gains offered by a system with its values calculated this way are lower than those we obtain by making a total optimization of the system on data already known, but, as I said before, we cannot expect those values to be repeated. So future earnings will be lower than expected; how much lower? … I think they will be close to my calculation based on averages.
Think of it in mathematical terms: optimizing each point in time (here, a month) is like taking the derivative of a function at that point, and a full optimization is like knowing the derivative at every point; the sum of the average values of each month is then like the integral, the area under that curve. The concept is not exact, but it is a fair approximation.
Now I am busy with other things but in the next few days I will share my results.
I assume that they will not be the best, nor perfect, but they could serve for a better long-term optimization that avoids over-optimizing.
This is just another way of looking for possibly valid values for the system, and also a small reminder that the result of a total optimization BASED ON THE PAST does not guarantee that this result will be valid in the future.
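Edurecio's procedure can be sketched in generic Python (toy stand-ins for the optimizer and backtester; only the parameter names come from the post):

```python
# Minimal sketch of the month-by-month averaging walk-forward described
# above. `optimize_month` and `backtest_month` are toy stand-ins for
# whatever optimizer/backtester you use; they are not ProOrder calls.
import random

PARAMS = ["PeriodA", "PeriodB", "ChandelierA", "ChandelierB", "Angle1", "Angle2"]
random.seed(1)

def optimize_month(month):
    """Toy stand-in: pretend each month's optimum drifts randomly."""
    return {p: 20 + random.gauss(0, 2) for p in PARAMS}

def backtest_month(month, params):
    """Toy stand-in: score how far the averaged params sit from this
    month's (drawn at random here) optimum -- closer is better."""
    optimum = optimize_month(month)
    return -sum(abs(params[p] - optimum[p]) for p in PARAMS)

def walk_forward(months):
    """For each month, trade the average of the per-month optima so far."""
    history, results = [], {}
    for month in months:
        if history:
            averaged = {p: sum(h[p] for h in history) / len(history) for p in PARAMS}
            results[month] = backtest_month(month, averaged)
        history.append(optimize_month(month))  # finished month joins the pool
    return results

print(walk_forward(["2018-08", "2018-09", "2018-10", "2018-11"]))
```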
12/04/2019 at 10:34 PM #114214
So if you take profit manually (and restart the system instantly), there’s a chance that the strategy becomes out of sync for a short period. The same goes for frequently stopping and restarting a system.
Which is why something like my Robustness Tester is worth using. It can simulate random starting points and random trades and show you whether the original equity curve was just a lucky fit or whether the strategy performs well no matter when you trade.
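The principle behind such a tester can be sketched generically (a Monte Carlo resampling of a hypothetical trade list, not the actual Robustness Tester code):

```python
# Generic sketch of a robustness test by Monte Carlo resampling of a
# trade list (not the actual Robustness Tester tool): start the replay
# at a random trade and randomly skip trades, then look at how the
# resampled outcomes are distributed versus the original run.
import random

random.seed(7)
trades = [random.gauss(10, 60) for _ in range(300)]   # hypothetical per-trade P&L

def resampled_final_equity(trades, skip_prob=0.1):
    """Replay from a random starting trade, randomly skipping trades."""
    start = random.randrange(len(trades) // 2)
    total = 0.0
    for pnl in trades[start:]:
        if random.random() < skip_prob:
            continue                                   # simulate a missed trade
        total += pnl
    return total

finals = [resampled_final_equity(trades) for _ in range(1000)]
losers = sum(f <= 0 for f in finals)
print(f"{losers / 10:.1f}% of resampled runs ended at a loss")
```

If many resampled runs still end profitably, the original equity curve was probably not just a lucky fit to one particular starting point.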
12/04/2019 at 10:45 PM #114215
Edurecio – A while back I spent a lot of time trying to create a self-optimising strategy that checked what had worked best recently and then used those settings until the self-optimisation said that something else had worked better. It produced nice equity curves in back testing but completely failed in forward testing. In every strategy there is some sort of variable that is fixed – whether it is an actual value or just using closing price instead of median price. So something is optimised and fixed in every strategy before we even try to check what worked best recently for the other variables, and we hope that it keeps working for a bit longer with a new value for them. I gave up on the concept of self-optimisation, and it proved to me that regularly re-optimising does not work either.
Just my humble opinion.
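For reference, the loop VONASI describes boils down to something like this generic sketch (toy momentum scoring rule and hypothetical candidate settings; his actual implementation is not shown in the thread):

```python
# Generic sketch of a self-optimising loop (toy momentum score and
# hypothetical candidate settings; not VONASI's actual code): every
# `reopt_every` bars, switch to whichever setting scored best recently.
import random

def recent_score(setting, prices, lookback=500):
    """Toy stand-in for a backtest: momentum of `setting`-bar moves
    over the recent window (bigger = 'worked better recently')."""
    window = prices[-lookback:]
    return sum(window[i] - window[i - setting] for i in range(setting, len(window)))

def self_optimising(price_stream, candidates=(10, 20, 50), reopt_every=100):
    """Yield (bar, setting) showing which setting would be traded on each bar."""
    prices, active = [], candidates[0]
    for bar, price in enumerate(price_stream):
        prices.append(price)
        if bar > 0 and bar % reopt_every == 0 and len(prices) >= 500:
            active = max(candidates, key=lambda s: recent_score(s, prices))
        yield bar, active

# Usage with synthetic prices:
random.seed(3)
stream, p = [], 100.0
for _ in range(1000):
    p += random.gauss(0, 1)
    stream.append(p)
for bar, setting in self_optimising(stream):
    if bar % 250 == 0:
        print(bar, setting)   # watch the active setting change over time
```

As the post above notes, curves built this way can look good in backtesting precisely because something is always still fixed and fitted to the past.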