Vincent D.

Major Updates of Hedge Signals and Strategy Tools

We have just released a new version of our strategy, along with updates to several other tools (and new tools). In fact, this is the most significant update we've implemented since I began working on this project in early 2021. You might wonder why these sudden changes have occurred. At the end of September, Nasdaq unexpectedly discontinued several critical datasets, providing little to no notice. It actually required extensive research and many Google searches to finally locate this press release:


Effective after the close of business on Friday, September 29, 2023, Nasdaq will cease the calculation and dissemination of the following indexes:


VOLS

VOLQ


The reasons behind Nasdaq's decision to discontinue these datasets remain unknown to me. The VOLQ index was introduced not long ago and had been actively promoted by Nasdaq. The CME Group even launched a futures product based on it. It seems particularly inconsiderate given that one of the benefits of this signal, as highlighted in a Nasdaq white paper, was its utility as a robust bottom indicator. The abrupt removal of this signal, in the middle of a market correction, leaves me wondering why. In fact, it was the VOLQ and VOLS that signaled a re-entry point back in September.


As if that weren't enough, a few days after this initial issue, the Nasdaq data feed for the Dark Pool stopped functioning. Through collaboration with a TradingView representative, we discovered that Nasdaq Data Link/Quandl (Nasdaq bought Quandl some years ago) had discontinued reporting this data without notice, opting instead to sell it as part of an institutional investor package. Although traces of this dataset still appear in Google searches, they now lead to a non-existent page error. FINRA, the regulatory agency that compiles this data, does make it available to the public for free, which they present as an act of goodwill toward retail investors. However, there's a significant drawback: their data is delayed by three days, rendering it virtually useless, since the maximum lead this data can provide is about that same duration.


To provide a brief overview, Dark Pool data represents stocks that have been traded over-the-counter, usually by large investors, and can sometimes offer an early signal before a real market downturn. A prime example occurred on February 20, 2020, when we observed an 8.06 standard deviation move in Dark Pool selling volume coinciding with the initial red candles from all-time highs. No price-action-based hedging strategy that I know of would have been able to issue a sell signal before February 24-26th.

Additional Reasons for Implementing Changes

The fact that this important data suddenly became permanently unavailable during a downtrend added significant stress to our team at a time when we could least afford it. This situation forced us to return to the design phase. While it may not have affected the core of our strategy, it impacted several substantial elements surrounding it. You may detect a hint of frustration in this text, and indeed, I was quite irritated. However, after a night or two of feeling angry, I began to see this issue as an opportunity to integrate some of the changes that had come to my mind over the past year. Although we launched the S&P500 strategy in April of this year, I had actually finished coding it by mid-August 2022 before I started assisting Jennifer with the completion of our BTC Strategy. I made one minor adjustment before its public release in April 2023, but most of the ideas I had contemplated over the last year had not been implemented.


Here is a summary of all changes that I made to the strategy:

(If you prefer to skip the details, please go directly to the final section to understand the impact of these changes.)

1. Replacing VOLQ, VOLS: The equations used by Nasdaq to calculate VOLQ and VOLS are detailed in their white paper. However, it is impossible for me to compute them in TradingView, as the required options data is not available on that platform. Consequently, I spent a night working through the equations and managed to emulate VOLQ and VOLS using a different but similar dataset that I processed. The result is a signal that very closely mirrors the original VOLQ and VOLS. I reintegrated this signal into the strategy, and after some minor tuning, it almost automatically aligned with the original strategy's statistics in terms of success rate and return.
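To give a rough idea of what such an emulation can look like (a minimal sketch only; the actual dataset and processing are not disclosed, and the linear fit below is an illustrative assumption, not the method itself), one could fit a related volatility series onto the discontinued index over the window where both existed, then extend the fitted series forward:

# Illustrative sketch: approximate a discontinued volatility index from a
# related series via least squares on the overlap window, then extend it
# forward using only the surviving series.
import numpy as np

def fit_proxy(proxy: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    # Fit target ~ a * proxy + b on the dates where both series exist.
    A = np.vstack([proxy, np.ones_like(proxy)]).T
    a, b = np.linalg.lstsq(A, target, rcond=None)[0]
    return float(a), float(b)

def emulate(proxy: np.ndarray, a: float, b: float) -> np.ndarray:
    # After the index is discontinued, only the proxy keeps updating.
    return a * proxy + b

Whatever the exact transformation, the validation step is the same: compare the emulated series against the historical index before trusting it inside the strategy.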


2. Not replacing DarkPool: I invested a considerable amount of time searching for a direct alternative to the Dark Pool data. I have now identified a data source without any lag, meaning that the data for the day will be released after the market closes, as it was previously. However, the challenge is that this data will not be accessible through TradingView. Therefore, my focus for the coming weeks will be on developing a Python-based Dark Pool indicator that will run on Google Colab and will also be able to send automatic SMS alerts.
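As a preview of what that Colab tool could look like (a hypothetical sketch: the real data source is not named here, DATA_URL is a placeholder, and Twilio is just one possible SMS provider, not necessarily the one we will use):

# Hypothetical sketch of the planned Colab indicator: fetch end-of-day dark
# pool volume from a provider (placeholder URL), compute a z-score of selling
# volume, and send an SMS alert via Twilio when it spikes.
import pandas as pd
from twilio.rest import Client  # pip install twilio

DATA_URL = "https://example.com/darkpool.csv"  # placeholder, not the real source
Z_THRESHOLD = 3.0

df = pd.read_csv(DATA_URL, parse_dates=["date"]).set_index("date")
sell = df["sell_volume"]
z = (sell - sell.rolling(252).mean()) / sell.rolling(252).std()

if z.iloc[-1] > Z_THRESHOLD:
    client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # Twilio credentials
    client.messages.create(
        body=f"Dark pool selling volume at {z.iloc[-1]:.1f} sigma",
        from_="+15550000000",  # placeholder numbers
        to="+15551111111",
    )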


3. Accounting for the negative impact of losing DarkPool: Losing the DarkPool signal instantly reduced the strategy's return by about one-third while positively affecting the strategy's hit rate. The primary benefit of incorporating DarkPool data into the strategy was to exit the market before the onset of a correction. Sometimes it generated false alerts, and at other times it indicated only a minor drop, such as the one we successfully hedged from June 16th to June 26th of this year. However, occasionally it preempted major downturns like the one in 2020. Despite some false alerts that never resulted in significant losses (and sometimes even yielded gains), the additional profit generated during massive corrections made the DarkPool data worthwhile.

To replace it, I investigated other datasets associated with the same demographic of wealthy investors who trade in the DarkPool, looking for indicators of upcoming market movements. The solution partly lay in the options market. It involved applying high-pass filters and other mathematical techniques (which I prefer to keep confidential) to the Put-to-Call ratio and then analyzing the statistics of the signal, much like we did for the DarkPool data. By doing this, I managed to create a signal that issued far fewer false alerts than the DarkPool and than our previous version of the Put-to-Call indicator, while providing about a one-day lead compared to other metrics. This had a tremendously positive impact on both the hit rate and the return rate of the strategy.
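To make the general idea concrete without revealing the confidential parts, here is a generic sketch: one of the simplest high-pass filters is to subtract a slow moving average from the raw series, keeping only the fast fluctuations, and then flag statistical outliers, much as we did with the 8.06 sigma DarkPool reading mentioned earlier. The spans and thresholds below are illustrative placeholders, not our actual values:

# Generic high-pass filtering + outlier detection on a put-to-call ratio series.
import pandas as pd

def highpass(series: pd.Series, span: int = 20) -> pd.Series:
    # Remove the slow trend; keep only the fast fluctuations.
    return series - series.ewm(span=span).mean()

def outlier_signal(series: pd.Series, window: int = 252, k: float = 2.5) -> pd.Series:
    # True on days where the filtered value exceeds k rolling standard deviations.
    z = (series - series.rolling(window).mean()) / series.rolling(window).std()
    return z > k

# pcr = pd.Series(...)  # daily put-to-call ratio
# alerts = outlier_signal(highpass(pcr))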


In severe market corrections like those in 2020, I cannot fully recover the lead time that the DarkPool data provided. While the previous strategy version hedged on Friday, February 21st of that year, the new one hedges on Monday, February 24th. Despite this, the overall return is now higher than before. Moreover, this new strategy does not require after-hours trading. The DarkPool signal was always reported between 6 PM and 8:30 PM Eastern Time, which necessitated trading after hours on the day of the signal or pre-market the following day. I realize many people cannot trade during these times, so these signals did not always result in the return suggested by the strategy. For instance, the June 16th signal we received around 8:30 PM was only acted upon on Monday, June 19th (US stock market was closed, but we used the open Canadian market to hedge). Also, the after-market price sometimes varied significantly from the closing price, particularly during earnings season.


Nevertheless, the positive takeaway is that this new strategy, which generates even greater returns, does not compel us to trade after hours as the DarkPool did. So, there will be no more inconvenient 8:30 PM hedge signals.


4. Adding redundancy: The inherent risk of any data-driven strategy is that if a correction unfolds in an unprecedented manner, the unique data signature might fail to trigger a sell or buy signal, or trigger it very late. We encountered such a blind spot in our BTC strategy, which I realized in a moment of epiphany in January and corrected before any harm was done. In contrast, price-action-driven strategies, while generally not yielding very high returns, do not suffer from this problem: regardless of how the correction unfolds, a fast EMA will inevitably cross a slower one, and then cross back during the new uptrend.


To mitigate this, the strategy incorporates several modalities, each capable of triggering a buy or sell signal. There is also a "safety switch" that will automatically have us re-enter the market at some point. While I wasn't concerned about the sell side, thanks to one particular dataset, I sought to introduce more redundancy on the buy side of the strategy to be sure we react quickly no matter what. The fact that the data issue forced us to recode the entire strategy provided the opportunity to stress test it in various scenarios and to design new safety features that will ensure we re-enter the market as quickly as possible during an uptrend. I am more confident than ever that it will not fail in this respect. These new modalities have also contributed to an increase in the strategy's return.
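The price-action fallback described above is simple enough to sketch (this is the generic idea, not our actual parameters):

# Minimal sketch of an EMA-cross "safety switch": force a re-entry when the
# fast EMA crosses back above the slow one, regardless of what the
# data-driven signals are doing.
import pandas as pd

def ema_safety_switch(close: pd.Series, fast: int = 20, slow: int = 50) -> pd.Series:
    f = close.ewm(span=fast).mean()
    s = close.ewm(span=slow).mean()
    # True on the bar where the fast EMA crosses above the slow one.
    return (f > s) & (f.shift(1) <= s.shift(1))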


5. Removing conditions: One of the pitfalls of designing a strategy is overfitting. Here is what ChatGPT has to say about overfitting:


Overfitting occurs when a model learns the details and noise in the training data to the extent that it negatively impacts the performance on new, unseen data. Essentially, the model becomes too tailored or "fit" to the specific data it was trained on, and fails to generalize to new data. The more complex the model, the higher the risk of overfitting. Think of it like a tailor making a suit. If the tailor makes a suit that fits one person's body perfectly, down to every tiny contour and irregularity, that suit might not fit anyone else. Similarly, a very complex model might fit the training data perfectly but perform poorly on new data. The danger of overfitting grows with the complexity of the model because a more complex model has the capacity to capture more and more of the minute details (or noise) in the training data.


Although ChatGPT explains this in the context of AI, manually designing a strategy leads to the same problem. To use its example of a suit perfectly tailored for someone in the context of the stock market, the most overfitted algorithm we could think of would be one that simply has the dates of every past correction hard-coded into it. This would lead to the maximum historical return and a 100% hit rate but would be ineffective in the future.


There are steps we can take to minimize overfitting, like dividing data into three buckets (something we have done; see the sketch after the code example below), but reducing the dimensionality of the algorithm was also a method I envisioned to minimize overfitting while maximizing the expected future return. Indeed, the S&P500 strategy is the result of work I started in early 2021, and I felt that it had accumulated too many complex internal conditions over that period. I'm not referring to the amount of different data, since these data add redundancy, but I felt that the conditions determining whether we should hedge based on these data values were becoming overly complex or were sometimes overlapping, potentially leading to some overfitting. So, one of the challenges I set for myself was to simplify the conditions as much as possible while continuing to improve the strategy's return and hit rate. Since an example is worth a thousand words, consider changing conditions like this:


bool longConditionBEAR = ((ta.crossover(diffangleRWB, 0.23)) and (VIX < 23 or (varVIX > 0.18 and VIX < 30)) and not (doji_data or QQQdoji_data) and not (width9D1M < -0.069)) or (BearSignal and diffangleRWB > 0.2) or (percentconfB > 5 and not ((sumslope > 0.17 and not BearSignal) or f_somethingHappened(testbearT, 10)) and varVIX > 0.18)


to something more like that:


bool longConditionBEAR = ta.crossover(diffangleRWB, 0.23) and (VIX < 23 or (varVIX > 0.18 and VIX < 30))
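On the three-bucket point mentioned above, the sketch is straightforward (the 60/20/20 split below is illustrative, not our actual cut-off dates):

# Divide the history into three buckets: design on the first, validate on the
# second, and keep the third untouched until the very end.
import pandas as pd

def three_buckets(df: pd.DataFrame):
    n = len(df)
    train = df.iloc[: int(n * 0.6)]                  # design the conditions here
    validate = df.iloc[int(n * 0.6) : int(n * 0.8)]  # check for overfitting here
    holdout = df.iloc[int(n * 0.8) :]                # touch only once, at the end
    return train, validate, holdout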



6. Creating two different strategy environments: Until this last correction, the strategy had a near-perfect hit rate on the S&P500 since the beginning of 2023, keeping us out during market downturns while not missing any uptrends. When we switched from bear market mode to bull in June, I wrote in a blog article that the strategy would behave very differently, no longer being as defensive as it was over the last year. Despite saying that, we did have a hedge triggered immediately by the DarkPool that had us surfing a successful short. That type of hedge in a bull market was the exception, not the norm. What we experienced in August was what I was expecting: not hedging on a move down on low volatility.

Although we did that for half a month, on August 16th one of the datasets finally told us to hedge. This was not just any data. This one, somehow related to market breadth, is the most important dataset in the strategy. In fact, someone could hedge successfully using only this dataset, with a 70% success rate. That 70% is only because this data is usually the last one to trigger a buy. If we only look at the sell side, 100% of the time that this dataset is over its threshold, the market is not doing well. It does not signal a sell in all our hedge events, and not all corrections where it flashes a sell signal are massive ones, but 100% of the major corrections have this data going over its threshold. This is why it is so important and why I consider it our main risk indicator.

When we bought back into the market thanks to other modalities, although I am pretty sure that this buy will end up on the winning side, I didn't like the fact that we weren't in a very defensive stance considering that this risk-related indicator was significantly in the red. Therefore, acknowledging that a risky market environment doesn't require the same strategy as a less risky one, I created a new mode where the strategy is more defensive in that environment. The strategy was already doing something similar in a bear market by switching into a defensive stance, and it served us well, so I thought doing something similar when risk seems high was a positive move. An orange background will now highlight in our Hedge signal when this part of the strategy is triggered.
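Conceptually, the new mode is just a switch on the risk dataset (the names and numbers below are placeholders for illustration; the real thresholds and parameter differences are internal to the strategy):

# Illustrative dual-environment switch: when the main risk dataset is over
# its threshold, the strategy runs with a more defensive parameter set.
def select_mode(risk_value: float, risk_threshold: float) -> str:
    return "defensive" if risk_value > risk_threshold else "normal"

# Hypothetical example: tighter exits and more confirmations while the risk
# dataset stays in the red (shown with an orange background on the chart).
PARAMS = {
    "normal":    {"exit_z": 3.0, "reentry_confirmations": 1},
    "defensive": {"exit_z": 2.0, "reentry_confirmations": 2},
}
params = PARAMS[select_mode(risk_value=1.4, risk_threshold=1.0)]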



I also made a standalone indicator where you can visualize this dataset (and a redundant dataset provided by another data provider). This will allow you to gauge the level of risk in the current environment. This one is called ☂️ WU SP500 Hedging Signal - Risk Dataset.


The reason I made it available separately is twofold. The first is that it plays such an important role in the strategy and now drives a distinct strategy behavior. The second is that this correction made me realize something: by trying to optimize the strategy's return, we had somehow drifted away from our primary objective, which is sleeping well during drawdowns. I was okay with not being hedged for most of this correction since it unfolded on very moderate volatility, but considering how high our risk indicator rose, it could have made more sense for some people to be market-neutral, even if this would have meant re-entering the market at the same price or higher. So, depending on the type of investor each person is, some could decide in the future to stay hedged as long as this indicator is on. In any case, I am sure many will appreciate being able to monitor this indicator.


7. Correcting the drift of the two redundant indicators of our risk signal: If you remember, one of the things that I didn't like when we received our hedge signal in mid-August was that not only did the blue signal stop right on the threshold, but the strategy also got repainted as that number was slightly revised each night, with the effect of shifting our hedge. Having a signal that stops on a threshold is something that will statistically happen, although I would prefer signals to always be far under or far over the line. What really bothered me here is that while this was happening, the other redundant signal (the orange line) was actually very far from its threshold. You see, sometimes these two signals trigger a hedge on the same day; other times, one leads the other; but having them that far apart was unprecedented. Since this can happen, I made a fix that now requires them to be relatively close when only one fires a signal, or, failing that, requires the firing signal to be considerably over its threshold. The rationale is that, in the worst case, this could delay the hedge unnecessarily by one day, but in the best case, it could prevent hedging on a false signal. I think I should have anticipated this issue from the beginning, but I feel that this new version solves it in a robust manner.
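The fix itself boils down to a small rule (sketched below with placeholder numbers; the real thresholds, tolerance, and margin are backtested values):

# Dual-confirmation rule: accept a hedge when at least one redundant signal
# fires AND either the two signals roughly agree or the firing one is
# decisively over its threshold.
def confirmed_hedge(blue: float, orange: float, threshold: float = 1.0,
                    closeness: float = 0.15, margin: float = 0.25) -> bool:
    fired = max(blue, orange) > threshold
    close_together = abs(blue - orange) <= closeness
    decisively_over = max(blue, orange) > threshold + margin
    return fired and (close_together or decisively_over)

In the mid-August case, the blue signal stopped right on its threshold while the orange one was far away, which is exactly the situation a rule like this now filters out.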


8. Update to our Margin Risk Indicator:

Since our Margin indicator also used the hedge signal internally as a watchdog, sometimes going off margin when the hedge signal was leading the margin one, I had to update our Margin Risk Indicator. I also took the opportunity to do some tuning, as the new hedge signal changed some statistics. Additionally, following some comments that I received, I decided to make our internal aggregate risk signal, which we use for deciding when to go on and off margin, available as an external indicator. This one, called "☂️ WU S&P500 Margin Risk Indicator - Internal Signal," shows the sum of all the different signals we compute for analyzing the risk of going on margin. The higher this signal is, the more favorable the environment for being on margin. One of the conditions for the go-on-margin signal is having this indicator over 0 (note that it could eventually dip under that threshold while we remain on margin).

The issue with this signal alone is that, like every calm moment in life, it is eventually followed by a storm. The CNN Fear and Greed Index suffers from the same problem: every time it reaches a high value, we end up overbought and have at least a minor correction. The colors on our margin indicator are there to help assess this risk. They are not a direct function of the level of the internal signal, but rather a function of backtested conditions on each signal that signify we are approaching a top. You can now see the numeric value associated with the color by going into the parameters of this new signal and clicking on "See the risk level associated with color." Note that the theoretical maximum value for the color is 15. Under most circumstances, the strategy will go off margin when it is over 12. I hope that giving you access to this internal signal used in our margin signal will help you in your own everyday investment decisions.
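For those curious how such an aggregate and its color score can relate (a hypothetical sketch; the individual signals, weights, and conditions are not disclosed):

# The internal signal is a simple sum of the individual margin-risk signals;
# above 0 is one condition for going on margin.
def aggregate_signal(signals: list[float]) -> float:
    return sum(signals)

# The color is a separate 0-15 score: each backtested "approaching a top"
# condition that is true contributes its weight (placeholder weights here).
def color_risk(top_conditions: list[bool], weights: list[int]) -> int:
    return sum(w for fired, w in zip(top_conditions, weights) if fired)

# Example: favorable environment, but go off margin if the color exceeds 12.
on_margin_ok = (aggregate_signal([0.6, -0.1, 0.3]) > 0
                and color_risk([True, False, True], [5, 5, 5]) <= 12)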

I am really happy with how this signal has performed so far since going live. It had been in the red for more than a year after dipping below the threshold in December 2021. I was starting to lose faith that it would ever go back into the green, something it finally did in May of this year. This generated a very good signal that, unfortunately, we didn't play. Please note that at this moment, the recent correction has pushed this signal back considerably into the red. I don't expect to see it turn green for at least another month of a stable market with low volatility (it didn't really like the three green candles on big gaps). So don't expect us to go on margin for a while, but be sure that when it flashes green, we will take the signal.


The impact of all these changes:

All this data going offline, combined with all the modifications that I made, results in a slightly different trade history. For the first part of 2023, nothing really changed, since we were in a bear market mode where no change was made. However, on the same night that we exited the bear market, we had a successful hedge signal triggered by the DarkPool. This signal will no longer show up; it will not display even if you load the previous version of the strategy, as the data now outputs a zero value.


The changes we made to our internal risk indicator, more specifically accounting for data drift, shifted our sell signal from August 21st to September 18th. Despite almost a one-month difference, this shift amounted to only a modest +0.85% price difference. The final change was to our last buy signal, which shifted both because our last sell shifted and because it was originally triggered by VOLQ, which doesn't exist anymore.



Even though the last hedge did not go as I would have liked, nothing worried me to the point of modifying the strategy right away. I definitely would have kept it as it was, since it would have ended up within the original strategy's hit rate and still yielded a good 2023 return. The fact that three important datasets went offline almost simultaneously forced my hand. I respected my rule not to use the present data (I left out the period from July 2020 to now during the redesign phase). This is also probably one of the reasons why the new modified hedge signals for the ongoing correction are not necessarily perfect (I would have preferred to unhedge on Wednesday, November 1st). In a year or two, optimizing the strategy on these dates will be possible without infringing on my three-data-bucket rule. But I am still happy with the results. In fact, they align relatively well with the levels at which the QQQ strategy used by the IOFund triggered over that correction.


Speaking of the results, the new overall stats are incredible, particularly since the strategy no longer involves trading after hours and is about 30% simpler. The overall hit rate over the last 20 years went from 72-74% to 83.4%, which is the highest I have ever seen for a hedge signal. The return over the same period went from $41.5K per $1K invested to $62K. I had a version that yielded more return with a lower hit rate, but I preferred this version, which prioritizes success rate over return. Here are the overall key stats of the strategy.



Although some of these stats are very impressive, at no point do I expect that we will match them in the future. If someone shows you such stats with confidence and tells you that this is what they will do in the future, you should be skeptical. In all the machine learning research projects that I have done and published (you can search for them on Google Scholar), underperforming by 10% compared to the results we had on past data is very typical. This is one of the reasons I like the fact that the hit rate is now above 80%, as this allows us to realistically expect something in the 70% range, which is still very solid considering the ratio of the average winning trade to the average losing trade. Results that are 10% under these would have more than a 10% negative impact on the 20-year return, as that number is the result of compounding. But even if we were to achieve only a quarter of these results, it would still be incredible, particularly if played on UPRO.
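A quick back-of-the-envelope illustration of that compounding point (hypothetical numbers, not the strategy's actual per-trade stats):

# Trimming each trade's edge by 10% shrinks the compounded multiple by far
# more than 10% over many trades.
n_trades = 100
avg_gain = 0.031  # hypothetical average gain per trade

full = (1 + avg_gain) ** n_trades           # ~21.2x
reduced = (1 + avg_gain * 0.9) ** n_trades  # ~15.7x
print(f"full: {full:.1f}x, reduced: {reduced:.1f}x, "
      f"drop: {1 - reduced / full:.0%}")    # drop: ~26%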


Conclusion

It's already been more than two years since I made the first trade based on the initial version of this algo. It changed considerably when I started working with the IOFund in early 2022 to make a Nasdaq-dedicated version. The Nasdaq version, despite some minor changes throughout the years, will celebrate its second anniversary in February. That will mark the beginning of a period long enough to conduct a thorough analysis of the two-year results.

The challenging market conditions over these two years have been highly instructive, and I believe that the changes we've recently implemented, based on what we've learned, are more than mere quick fixes. This new version of our S&P500 strategy is probably the biggest incremental change we've made since the very beginning. It's the first version where I am comfortable with almost every trade it has made in the past, and I expect the changes to have a long-term positive impact on the strategy's future returns.

That being said, I must say upfront that I was forced to change the strategy at a time when my schedule was relatively busy and while we were in the middle of a correction. There were some days over the last month when I coded from early morning to the middle of the night. Coding it in TradingView is not the long part; what takes hours is all the optimization and analysis done in a C program, of which I gave a teaser in this blog post. The version I uploaded is, I think, very good, but I still have some ideas in mind that I wanted to investigate; it's just that time was not on my side. Now that this is completed and I am very comfortable with the results, I will probably go back to testing these ideas. This means that I could very well make another set of changes at the end of 2023 or early 2024. I expect them to be more minor, but stay tuned!



To summarize:


☂️ WU SP500 Hedging Signal (2.0) - New version: Check your SP500 member page FAQ section

☂️ WU SP500 Hedging Signal - Risk Dataset - has been added to your TV Invite-only scripts

☂️ WU SP500 Margin Risk Indicator - Internal Signal : has been added to your TV Invite-only scripts


WU SP500 Dark Pool volume - no longer available



19 Comments


Excellent Blog


Some time ago I saw a video about your service. Can you please provide the link?


Aamir
Jan 28

Re: Dark Pools - Some research led me to Intrinio.com. They offer two products providing dark pool data:


- https://intrinio.com/financial-market-data/stock-prices-tick-history

- https://intrinio.com/financial-market-data/stock-prices-delayed-sip


Could this be a sufficient replacement?


(Note: I'm not associated with Intrinio nor have I used their services :))



Aamir
Jan 28

Hi Vincent,


I'm eager to hear your insights on integrating your S&P 500 and BTC strategies into a broader portfolio or asset allocation plan.


Have you conducted any analyses or research to determine the ideal long-term portfolio structure that best complements these strategies? Does your overall portfolio approach align with frameworks like the Permanent Portfolio, All Weather, or similar asset allocation models, or do you not adhere to such strategies?


Thanks!


Gordon Johnson
Dec 28, 2023

Vincent,

An interesting read to be sure. I am not sure where I should post my question. It has to do with hedge methodology. Almost all my funds are tied up in retirement funds. I can either raise cash by selling partial positions and buy 2x or 3x inverse funds, OR, I can sell deep in the money calls and use the cash to buy a natural hedge, so to speak.


I am not looking for specific advice but am wondering if you or your team has ever modeled one strategy versus the other? It would seem that if we thought a downturn would be a certain size or duration, we could use that info to target specific options.



Twingems
Nov 11, 2023

Hello Vincent,


Making lemonade from lemons. Thank you for taking the opportunity to improve the strategy. The market is an organism and it's constantly changing which is why no one strategy works for any one market. I was wondering which WU signals work for time periods beyond "DAY." I believe the Hedge signal is only optimized for "Day" but the other signals seem to change with the time selected. I don't know if that information is reliable. Have you considered trying to make the hedge signal work with the 4-hour interval? I think in some instances it is more sensitive and could have better results. I hope you are taking the time to relax this weekend! Let's ho…

Vincent D.
Nov 11, 2023
Replying to Twingems

Haha, I did have this lemonade analogy in mind when I wrote that text. Most of the tools I made work only on a daily timeframe. I often use the Band-breaker successfully on shorter timeframes like 1-3 minutes, but besides that, I have mostly stayed on daily charts for backtesting. Regarding the hedge signal, some of its data is reported with a delay, and some only stabilizes around the close, but I could try to test it on a 4-hour interval. It could indeed be useful. About our UPRO, it didn't take long to rebound from where we went. I looked yesterday in our WU account, something I don't do often, and our UPRO was now only about…
