v11 bugs

  • This topic has 108 replies, 13 voices, and was last updated 1 year ago by GraHal.
  • #154127

    v11 is a powerful tool and it is wonderful to have it after such a long wait. There are some issues to be aware of, though:

    1. It fails to save changes. This was already an issue in 10.3, but it is now much worse. My workflow now includes exporting EVERY time I close the algo I’m working on, then importing it when I restart PRT. The changes eventually register but always seem to be a day behind.

    2. Exporting often causes it to crash! A half dozen times already I have got two dialogue boxes: one says ‘Export was successful’, plus a smaller one that says ‘Export in progress’ and never goes away. Then I have to close PRT via Task Manager and restart.

    3. There is a big discrepancy between a straight backtest and an optimization that includes those same values. For example, I had trailing-stop settings of 0.29% with a step of 0.028%. I ran an optimization over a range of values above and below that. It returned 0.31% with a step of 0.03%, but with worse performance than what I started with. When I clicked the 0.29 / 0.028 line in the optimization box, I got a completely different (worse) result than the original backtest. Tick-by-tick (TBT) was checked in both cases. (A sketch of the trailing-stop mechanics follows below.)
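
    To make the mechanics concrete, here is a rough Python sketch of a percent trailing stop with a step. This is only an illustration of the idea, not PRT’s actual engine; the function name and the sample price paths are made up. It shows how the evaluation granularity alone, bar closes versus a finer intrabar path, can change the exit price, which is one plausible source of backtest/optimization discrepancies:

        def trailing_stop_exit(prices, entry, trail_pct=0.29, step_pct=0.028):
            """Exit price of a long position, or None if the stop is never hit.

            The stop starts trail_pct percent below entry and ratchets upward
            in increments of step_pct percent of entry as new highs are made.
            """
            step = entry * step_pct / 100.0
            highest = entry
            stop = entry * (1 - trail_pct / 100.0)
            for p in prices:
                if p <= stop:                       # stop touched at this price point
                    return stop
                if p > highest:
                    highest = p
                    target = highest * (1 - trail_pct / 100.0)
                    while stop + step <= target:    # raise the stop in whole steps
                        stop += step
            return None

        # The same market sampled two ways: bar closes only, and a finer
        # tick-like path through those bars (with an intrabar dip the closes miss).
        closes = [100.0, 100.5, 101.2, 100.9, 100.2]
        ticks  = [100.0, 100.6, 100.5, 101.0, 100.25, 101.2, 100.9, 100.2]
        print(trailing_stop_exit(closes, entry=100.0))  # exits at one stop level...
        print(trailing_stop_exit(ticks,  entry=100.0))  # ...and at a different one

    The finer path exits at a different stop level than the close-only path, even though both pass through the same bars.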

    I have passed all this on to PRT; I am just posting here so others are aware. Is anyone else getting similar or different problems?

    #154145

    I confirm that it fails to save changes. I thought it was just me being tired, but it seems I am not alone.

    3. There is a big discrepancy between a straight backtest and an optimization that includes those same values. For example, I had trailing-stop settings of 0.29% with a step of 0.028%. I ran an optimization over a range of values above and below that. It returned 0.31% with a step of 0.03%, but with worse performance than what I started with. When I clicked the 0.29 / 0.028 line in the optimization box, I got a completely different (worse) result than the original backtest. Tick-by-tick (TBT) was checked in both cases.

    I have not noticed it yet, but it’s … frightening.

    #154152

    1/ I’ve never had an issue with saving, and I’ve been using v11 for more than a year, though I hit CTRL+S very often…

    2/ I thought I was alone (because I’m on a custom version). Do you have a lot of indicators/strategies in your platform?


    #154154

    Another bug: sometimes when you backtest a large history of data (1M, for example, over 2010-2020), PRT runs the backtest on 2017-2020 only. Then I launch the backtest again and it runs on 2010-2020. Not a “big” problem, but a waste of time.

    #154167

    3. There is a big discrepancy between a straight backtest and an optimization that includes those same values. For example, I had trailing-stop settings of 0.29% with a step of 0.028%. I ran an optimization over a range of values above and below that. It returned 0.31% with a step of 0.03%, but with worse performance than what I started with. When I clicked the 0.29 / 0.028 line in the optimization box, I got a completely different (worse) result than the original backtest. Tick-by-tick (TBT) was checked in both cases.

    Yes, I have the same problem. The results of a parameter optimization are often not identical to the results of a simple backtest with the same parameters. The difference is usually small, maybe 5-10% over 1 million bars, but still, this should not happen.

    Even worse: when I run a backtest twice (in a duplicated version), the second result is sometimes slightly different from the first run! And a third run may give yet another slightly different result! That does not make me very confident in the results, indeed.

    #154168

    Do you have a lot of indicators/strategies in your platform?

    At a guess I would say there are 300+ strategies. Do you think that is causing the saving issue?

    #154169

    Results of a parameter optimization are often not identical to the result of a simple backtest with the same parameters.

    Optimizations are not run with TBT, while simple backtests are, so the results can differ. But I’m sure you are aware of that?

    #154172

    I know tick-by-tick can cause changes, but in my recent optimizations the “tick-mode” column on the right side of the list has always been zero. That case usually does not occur in my systems.

    I had a similar problem in version 10.3: optimizations differed from simple backtests, and duplicating a backtest did not necessarily yield an identical result. The reason turned out to be an inconsistency in the ADX indicator, which was not reproducible when used in a backtest (as could be shown by graphing the ADX values). So in one run the ADX had a value of, e.g., 10.23, and in the next run of the same backtest the ADX was 10.41 at the same bar. This is where the inconsistent backtest results in v10.3 came from in my systems. But this has been almost entirely corrected by PRT now; deviations in ADX values have become very small (although not zero!). A sketch of why ADX is so sensitive follows below.
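
    For anyone wondering why an indicator like ADX can behave this way: Wilder’s ADX is recursively smoothed, so its value at a given bar depends on every bar since the calculation started. Here is a rough Python sketch of the textbook Wilder formula (an assumption on my part; PRT’s exact implementation is not public) showing that starting the same series a few bars later changes the ADX at the very same final bar:

        import random

        def wilder_adx(h, l, c, n=14):
            # True range and directional movement per bar
            tr, pdm, ndm = [], [], []
            for i in range(1, len(c)):
                tr.append(max(h[i] - l[i], abs(h[i] - c[i-1]), abs(l[i] - c[i-1])))
                up, dn = h[i] - h[i-1], l[i-1] - l[i]
                pdm.append(up if up > dn and up > 0 else 0.0)
                ndm.append(dn if dn > up and dn > 0 else 0.0)

            def smooth(xs):
                # Wilder smoothing: seed with a plain sum, then recurse,
                # so every later value depends on the chosen starting bar
                s = [sum(xs[:n])]
                for x in xs[n:]:
                    s.append(s[-1] - s[-1] / n + x)
                return s

            atr, spdm, sndm = smooth(tr), smooth(pdm), smooth(ndm)
            dx = []
            for a, p, m in zip(atr, spdm, sndm):
                pdi, ndi = 100 * p / a, 100 * m / a
                dx.append(100 * abs(pdi - ndi) / (pdi + ndi))
            adx = [sum(dx[:n]) / n]
            for x in dx[n:]:
                adx.append((adx[-1] * (n - 1) + x) / n)
            return adx

        # Deterministic synthetic bars, so the two runs see identical data
        random.seed(1)
        c = [100.0]
        for _ in range(80):
            c.append(c[-1] + random.uniform(-1, 1))
        h = [x + random.uniform(0.1, 0.5) for x in c]
        l = [x - random.uniform(0.1, 0.5) for x in c]
        print(wilder_adx(h, l, c)[-1])               # ADX at the last bar
        print(wilder_adx(h[10:], l[10:], c[10:])[-1])  # same last bar, later start

    The two printed values differ even though the final bars are identical. The deviation shrinks as the history grows, because Wilder smoothing forgets its starting point exponentially, which matches “very small, although not zero”.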

    #154179

    Even if the tick-mode column is equal to zero, it doesn’t mean that the exit prices have been tested precisely on the smallest timeframe available. The “tick-mode” column is there to flag bars where a take-profit and a stop-loss could both have been triggered in the same bar (without testing which one triggered first), and that could therefore make a huge difference if there are many such cases. A small sketch of the ambiguity is below.
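
    In other words, from bar data alone an engine cannot tell which level traded first when both lie inside one bar’s range; it has to pick a convention unless it replays ticks. A tiny Python illustration (the tie-breaking rules here are assumptions, not PRT’s actual logic):

        def bar_exit(high, low, tp, sl, optimistic=True):
            """Exit price of a long position for one OHLC bar, or None."""
            hit_tp, hit_sl = high >= tp, low <= sl
            if hit_tp and hit_sl:
                # Both levels lie inside the bar's range: the bar data cannot
                # say which traded first, so the engine must pick a convention.
                return tp if optimistic else sl
            if hit_tp:
                return tp
            if hit_sl:
                return sl
            return None

        # A long with TP at 101 and SL at 99, and one bar spanning both levels:
        print(bar_exit(high=101.5, low=98.5, tp=101.0, sl=99.0, optimistic=True))   # 101.0
        print(bar_exit(high=101.5, low=98.5, tp=101.0, sl=99.0, optimistic=False))  # 99.0

    Replaying actual ticks removes the guess, which is why TBT and non-TBT runs can legitimately differ when such bars occur.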

    #154180

    But when you click on a line, it should give a detailed report with TBT, no? And that should be the same as a straight backtest with TBT checked… but mine are off by 20%+.

    #154195

    I’m also having the saving problem. I can’t rename an indicator: I press CTRL+S, and when I exit the window it goes back to how it was before. I have to add the indicator to the chart for it to be saved.

    #154197

    … This is when I change the name of the indicator. And another bug (at least I think it is) is that the indicator automatically gets added to the chart when exiting out of it.

    #154207

    OK, I rechecked. Tick-by-tick mode or not makes no difference in the system I checked: exactly the same result in two backtests. However, when I duplicated the original backtest 10 times and let all these 10 systems run, I got 3 different results: 7493.40, 7494.00, and 7459.60. All the same system, just duplicated, and run in non-tick-by-tick mode. Maybe due to some minor differences in ADX?

    Backtest optimization was OK with this one, but I have seen several systems in the past few days where it was not, and the optimization showed results different from the backtest. Tick-by-tick never makes any difference in these systems, because both the take-profit and the stop-loss are quite wide and are therefore never triggered in the same bar (10-second chart).

    I will write more as soon as I find another example where optimization is off by several %.

    #154209

    Were you using historical mode (where the chart does not update)? If not, the results could differ because the time period will have shifted by a few bars between each run.

    #154210

    Yes, I used historical mode. However, the detailed report shows only the results of closed positions in both normal and historical mode (not of positions still open), so in principle there should be no difference at all.

    The same goes for the parameter optimization list: it adds up only the results of closed positions.
