Hi Guys,
A few weeks ago, I suggested that a top-down, econometric-driven approach to making investment decisions might be beneficial to many Forum members. If you favor an aggressive, frequent trading policy, this type of analysis is definitely not your cup of tea.
I’m sure everyone has been exposed to the financial community’s warnings that missing just a few of the best-performing days would do irreparable damage to your portfolio. Of course, these same sell-side hucksters usually neglect to demonstrate the very positive impact on portfolio returns that missing the worst-performing days would similarly produce. Both presentations are just bad science. The likelihood of either event series happening is near zero, and it is a worthless exercise to worry about a nonevent.
Market timing can have a major impact on end wealth, especially when considering the basic equity/fixed income portfolio asset allocation mix. A prudent question is: How does an amateur investor accomplish this task with respectable reliability? The market unknowns and uncertainties will never be fully eliminated. This smells like a job for probability theory and statistical analyses.
Several models that confront this issue have been developed and updated over many decades. I have often mentioned, though never endorsed, one such model proposed and developed by Elaine Garzarelli. After some initial successes, her predictions have recorded mixed results in more recent market stress tests.
Garzarelli’s market timing model includes 14 separate indicators. These 14 signaling components are grouped into 4 equally weighted sectors. The four groupings reflect Cyclical, Monetary, Value, and Market Sentiment factors.
The Cyclical group contains industrial production and corporate earnings measurements. The Monetary group consists of seven elements related to monetary policy, such as interest rates, yield curves, and money supply. The Value group comprises inverted composite corporate earnings yield-to-interest rate metrics and P/E equations. The Sentiment group incorporates surveys of the number of bullish financial advisors and mutual fund cash levels. All of this is very complex, with a lot of computer-driven number crunching and linear curve-fitting analytics.
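For those who like to see the structure spelled out, here is a toy sketch in Python of the equal-weighting idea only. The group names follow the description above, but the indicator scores, their scaling, and the exact split of the 14 indicators are placeholders of my own, not Garzarelli's actual inputs:

def composite_signal(group_scores):
    """Average the (pre-standardized) indicator scores within each group,
    then give the four group averages equal 25 % weights."""
    group_means = [sum(scores) / len(scores) for scores in group_scores.values()]
    return sum(group_means) / len(group_means)

# Placeholder readings only -- not the model's real indicators or values.
example = {
    "cyclical":  [0.4, -0.1],
    "monetary":  [0.2, 0.3, -0.2, 0.1, 0.0, 0.4, 0.1],
    "value":     [-0.3, 0.2, 0.1],
    "sentiment": [0.1, -0.4],
}
print(round(composite_signal(example), 3))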
Simpler models do exist and have demonstrated reasonable predictive accuracy. As Albert Einstein remarked: “Everything should be made as simple as possible, but not one bit simpler.”
A much simpler method is to deploy moving averages to modify the baseline equity/fixed income mix. Simply increase the fixed income allocation when an equity market proxy such as the S&P 500 Index closes below its 200-day moving average, and reverse the adjustment when the index crosses back above that average.
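For the do-it-yourself crowd, a minimal sketch of that rule might look like the following. The 60/40 baseline, the 10 % shift, and the function name are my own illustrative choices, not part of any published recipe:

import pandas as pd

def ma_timing_weights(prices, base_equity=0.60, shift=0.10, window=200):
    """Daily equity weight: the baseline mix, trimmed by `shift` whenever the
    index closes below its trailing moving average, and restored to the
    baseline once it closes back above it."""
    ma = prices.rolling(window).mean()
    weights = pd.Series(base_equity, index=prices.index)
    weights[prices < ma] = base_equity - shift
    return weights.clip(0.0, 1.0)

# Usage: pass a pd.Series of S&P 500 (or index fund proxy) daily closing prices.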
An alternate path to forecasting market behavior is to explore corporate earnings growth rate possibilities. Excluding speculative perturbations (which in excess contribute to the creation of bubbles and panics), earnings growth directly impacts the fundamental returns delivered by the equity markets.
A tight correlation exists between corporate profits and the GDP growth rate. So, if the GDP growth rate could be reliably forecasted, then the tightly linked market movements could be accurately assessed. The simplest definition of a recession is two consecutive quarters of negative GDP growth. Therefore, if we can reliably project an upcoming recession, we can anticipate poor equity market performance.
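That two-negative-quarters rule is easy to codify. Here is a small sketch; the function name and the pandas bookkeeping are simply my illustration of the definition:

import pandas as pd

def recession_flag(gdp_growth):
    """Flag each quarter that completes a stretch of two consecutive quarters
    of negative growth -- the rule-of-thumb definition used above.
    `gdp_growth` holds quarter-over-quarter real GDP growth rates."""
    negative = gdp_growth < 0
    return negative & negative.shift(1, fill_value=False)

# Example: recession_flag(pd.Series([0.8, -0.2, -0.5, 0.3])) flags the third quarter.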
One challenging alternate portfolio realignment approach is to adjust the top-tier asset allocation holdings as a function of a recession probability projection (RPP). There are many complex models used to complete that RPP analysis. None are totally reliable signaling instruments.
One such RPP model has been developed and updated by Credit Suisse. Credit Suisse (CS) wisely cautions that “modeling is an aid to judgment, not a substitute for it.” That’s a very perceptive warning for any complex econometric model. A paper that summarizes the CS analytical approach is found at the following confusing and extended address:
http://doc.research-and-analytics.csfb.com/docView?language=ENG&source=ulg&format=PDF&document_id=856579291&serialid=ZXa19to77uOvxxu3QDrFZhjlKfCBy8H58U1BxvxgQG4=

The CS construction is representative of a host of competing formulations; I have no idea of the relative accuracy or the false signal frequency of these alternate RPP models, or even of the CS model itself.
The CS model includes factors such as the Fed Funds rate, the S&P 500 percentage change, payroll growth, housing permits, consumer expectations, jobless claims, the TED short-term interest rate spread, and relative energy prices. Most of the required input data are entered as either 6-month or year-over-year changes.
Some modelers use over two hundred signal generators. I have no idea how they handle this data overload. I get confused when the factor count reaches the high single digits, and I become seriously suspicious of potential data-mining contamination.
I prefer a modern form of Occam’s Razor: The simplest explanation (read model) is usually the most robust and reliable. Simple RPP models also exist.
The New York Fed has examined the recession forecasting issue extensively. They have examined several independent variables to guide a recession prediction, and have concluded that “in predicting recessions two or more quarters in the future, the yield curve dominates the other variables.”
They conclude: “The yield curve – specifically the spread between the interest rates on the ten-year Treasury note and the three-month Treasury bill – is a valuable forecasting tool.”
To examine their findings and judge whether this indicator is a candidate to be added to your decision-making toolbox, I suggest you access this summary paper:
http://www.newyorkfed.org/research/current_issues/ci2-7.pdf

The Fed’s correlation suggests that if the spread decreases to 0.22 % (an almost flat yield curve), the probability of recession climbs to 20 %. This value was extracted from a table published in the referenced paper. Applying that model from 1960 to 2010, the NY Fed recently counted 11 recession signals: all 8 recessions recorded in that timeframe were correctly identified, but three false signals were also triggered. Nothing in the econometric modeling arena is perfect.
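For the curious, the Fed’s spread-to-probability mapping can be mimicked with a probit-style formula. The coefficients below are purely illustrative: I back-solved them so that a 0.22 % spread returns roughly a 20 % probability, matching the table entry quoted above; they are not the Fed’s published estimates:

from scipy.stats import norm

# Illustrative coefficients only, back-solved from the single table point cited above.
ALPHA, BETA = -0.66, -0.81

def recession_probability(spread_pct):
    """Probit-style map from the 10-year minus 3-month Treasury spread
    (in percentage points) to a recession probability."""
    return norm.cdf(ALPHA + BETA * spread_pct)

print(round(recession_probability(0.22), 2))  # about 0.20, matching the cited table entry
print(round(recession_probability(2.00), 2))  # a comfortably steep curve implies a much lower probability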
I personally check this single-parameter Fed model at least weekly. The requisite data are commonly available in any daily newspaper’s financial section. It is yet another excellent illustration of Occam’s Razor in action.
Of course none of these methods and techniques are flawless. Since some residual uncertainty always remains, I make my equity/fixed income mix adjustments incrementally over time, and more aggressively as the probability moves further past the 20 % recession-probability tipping point that I established for my own purposes based on the referenced study. You get to choose your own tipping point.
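Here is one way such an incremental rule could be written down. Every number in it (the baseline mix, the sensitivity, the maximum shift) is a personal choice of mine for illustration, not something prescribed by any of the models above:

def adjusted_equity_weight(recession_prob, base_equity=0.60,
                           tipping_point=0.20, sensitivity=0.50, max_shift=0.15):
    """Trim the equity weight gradually as the recession probability rises past
    the chosen tipping point; leave the baseline mix alone below it."""
    excess = max(0.0, recession_prob - tipping_point)
    return base_equity - min(max_shift, sensitivity * excess)

# e.g. a 30 % recession probability trims a 60/40 mix to 55/45;
# probabilities at or below 20 % leave the baseline untouched.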
The overarching goal is to control and mitigate risk. This approach will attenuate the downside dangers that are always present and ready to take a huge bite out of end wealth accumulation. The simple mathematics are such that recovering from any percentage loss requires a yet higher percentage gain; for example, a 25 % loss requires a 33 % gain just to get back to even. Investing is never easy, so…
Simplicity is always good.
By the way, current application of the techniques outlined in this submittal (even the more complex formulations) concludes that the probability of a near-term recession is low, in the single digits. That’s comforting given some weakening economic indicators and the unnerving character of recent market performance. But none of these techniques are foolproof.
Best Regards.