Writing fast (and butt-ugly) PRT code – speed up indicators


  • #33840
    Maz

    As a trader-programmer you would rather spend as much time as you can on the premise of your problem, and less on the quirky logic optimization required to make your relatively simple indicator finish calculating some time this century. Optimizing your PRT indicator to make it run as quickly as machine-ly possible is something of a dark art – at least if you’re used to a rock-solid programming language like C, or like most others in fact!

    Have you ever wondered if the bull market will be over by the time your backtest finishes? Or by the time your indicator loads on 200k candles? Let’s take a closer look at their interpreted “language” and find out what makes things go snappier….

    In a normal programming language: if you had a bunch of conditions that you wanted to test for, say 10 or so, which all had to be true in order to proceed to the next part of the program, what would normally be the fastest way to get a logical true or false? Would you create a variable for each sub-condition and then AND them all together? No! Of course you wouldn’t! Because that would waste memory and time: assigning conditions to a stack of variables in RAM, variables which you are never going to use again, would be wasteful – or so you’d think. Do you see where I am going with this yet? 🙂

    Let’s look at an example, sketched below. Do you think this kind of code would be

    a) The fastest possible way to do this?
    b) The slowest possible way to do this?
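
    As an illustration only (the conditions are made up and this is not the author’s original code), the single-line style in question might look something like this:

        // hypothetical sketch: ten conditions tested inline, all assigned to one variable
        bc1 = close > open and close > close[1] and low > low[1] and high > high[1] and close > Average[20](close) and close > ExponentialAverage[50](close) and close > Average[200](close) and RSI[14](close) > 50 and volume > Average[20](volume) and close[1] > open[1]
        return bc1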

    You might be forgiven for thinking that the above would be fast. Why should it be fast? Because you aren’t wasting resources. You are assigning a bunch of unique conditions inline to a single variable stack (it’s a stack in PRT – bc1[x] for each candle). But if, like me, you thought that was fast in PRT, you would be very mistaken. It’s the slowest possible way to get the answer to bc1! Go figure! (Using brackets around the conditions makes no difference.)

    Here is a (slightly) faster method:
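
    With the same made-up conditions as above, the pattern being described here (each sub-condition stored in its own variable before they are combined) would look roughly like this:

        // hypothetical sketch: each sub-condition gets its own variable first...
        c1 = close > open
        c2 = close > close[1]
        c3 = low > low[1]
        c4 = high > high[1]
        c5 = close > Average[20](close)
        c6 = close > ExponentialAverage[50](close)
        c7 = close > Average[200](close)
        c8 = RSI[14](close) > 50
        c9 = volume > Average[20](volume)
        c10 = close[1] > open[1]
        // ...and only then are they combined into the final answer
        bc1 = c1 and c2 and c3 and c4 and c5 and c6 and c7 and c8 and c9 and c10
        return bc1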

    So, writing “inefficient” code – wasting resources by assigning each sub-condition to its own variable stack – is faster. This pattern, by the way, came from some original code I found on the site. So, can we make it even faster? Yes… we can. By this logic, the more you abstract, the faster your program gets. But there comes a point where too much abstraction is unnecessary, resource hungry and pretty ugly!

    Here is the fastest possible solution:
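
    A hypothetical sketch of this kind of “maximum abstraction” (not necessarily the author’s original code): the tests are nested inside if statements, so nothing beyond the first failed condition is evaluated at all. Shortened to five of the conditions above:

        // hypothetical sketch: nested ifs, evaluation stops at the first test that fails
        bc1 = 0
        if close > open then
         if close > close[1] then
          if low > low[1] then
           if high > high[1] then
            if close > Average[20](close) then
             bc1 = 1
            endif
           endif
          endif
         endif
        endif
        return bc1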

    Now, is that ugly or is that ugly? The joys of an interpreted language. Nevertheless I hope this has given some insight into what you might want to consider if speed is an issue for your indicator or backtest.

    All the best,

    M

    PS: even the fastest one is painfully slow!

     

    #33850

    This topic deserves to be shared on the blog so that it is not “lost” in the forum! 🙂 I’ll take care of it. Another great input from Maz, thank you very much!

    #33872

    Very interesting, thanks Maz.

    I was aware that a condition used several times in various “if” statements would benefit from being calculated once and stored in a variable that replaces the condition in those “if” statements, but I wasn’t aware this would also be true even if the condition is used only once. I thought it was because PRT reading a stored variable value is faster than re-doing the calculation, and so worth it only if the calculation is done more than once – something I summarised in my mind as: PRT time(calculate once only) < PRT time(calculate once + use it again) < PRT time(calculate it more than once).

    If what you say is true, I’d be interested to know what makes “calculating a condition into a variable and using the variable in the if statement only once” faster than just “calculating it once inside the if statement”. That’s a real question, not the start of a counter-argument. Without a quantified speed test of both ways to back this up experimentally, I’d need to understand the logic behind it theoretically before giving up the logic I laid out in my first paragraph.
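
    For concreteness, a minimal sketch of the two forms being compared (the condition itself is made up, purely illustrative):

        // form 1: the condition is calculated directly inside the if statement
        if close > Average[20](close) then
         signal = 1
        endif

        // form 2: the condition is stored in a variable first, then used only once
        c1 = close > Average[20](close)
        if c1 then
         signal = 1
        endif
        return signal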

    Also, and I guess that’s more a question for Nicolas in the wake of his meetings with PRT, in which he is informed in advance of what’s coming: Nicolas, you mentioned a few times (when discussing “call” speed improvements) that there is a new engine coming soon which will increase speed. Do you know if what Maz is highlighting is involved in the redesign? The answer might well decide whether our more complex codes would still benefit from some “ugly” rewriting anyway, or whether, on the contrary, it’s somehow too late to start a rewriting exercise for the longest, most complex pieces of code because the new engine could improve this before we’re even done rewriting…

    #33877

    Thank you Maz, very interesting!

    #33883
    Maz

    The problem is with their compiler. Somewhere along the line the interpreted (user) code has to be compiled into machine code. It seems that writing statements more akin to how you would do it in assembler would help their compiler do the job. Bottom line really is they (PRT) need to work on their compiler because they can’t expect their user base to know this or care about it much – as it’s well beyond the scope of trading strategy and belongs more in the software engineering field.

     

    #33884

    @Noobywan I don’t know yet. Though I know for sure that the new engine to come will be fast enough to use CALL as a function, like we do in other programming languages. What Maz discovered here is new to me; I’ll forward this thread to whom it may concern to get his point of view.

    Bottom line really is they (PRT) need to work on their compiler because they can’t expect their user base to know this or care about it much – as it’s well beyond the scope of trading strategy and belongs more in the software engineering field.

    I agree, that’s why I’ll make your post a blog article in the “learning” category for future reference.

    #75299
    Seb

    I assume this way of coding will help to speed up live trading execution as well?

    #75351

    The single-line version is probably the most concise, but for sure not the easiest to understand and maintain.

    Following a divide-and-conquer strategy, I think using a different variable for each condition is the easiest way to deal with the code and will be less time consuming when editing a strategy.

    I don’t care which coding style is more beautiful or uglier; what I care about is that when I need to edit the code, the faster I can do it the better. After months without looking at the code, a single line with so many conditions is harder to understand.

    As for resources, well… with so many GBs and TBs available, a few more variables can hardly be considered a waste!

    Moving data in small steps can sometimes be done with direct moves between registers, and between registers and memory, while a complicated assignment involves heavy use of the stack.

    In conclusion, my opinion is that the version with a separate variable for each condition is far more beautiful!

     

     

    #241792

    Very interesting read and many thanks to the OP!

     

    More housekeeping for me: not only this but, as Roberto said (years ago now), revisiting something years old can ‘challenge’ the old grey matter (in my case anyway) when it is all on a single line.

