Hi Guys,
I recently created a rather mild storm when I proposed that, in general, it is a waste of precious resources (like your limited time) to seek out and study the financial, economic, and investment forecasts that are typically generated about this time every year. I was unprepared for the soft uproar.
Until now, I had never really explored this topic with any committed research; I had merely based my apparently controversial opinion on general experience with such forecasts.
I suppose my upfront bias was likely fortified by a large body of familiar quotes attributed to some very noteworthy and prestigious practitioners and world leaders.
To illustrate, Winston Churchill famously observed that “If you put two economists in a room, you get two opinions, unless one of them is Lord Keynes, in which case you get three opinions.” That certainly has been my experience.
It is not extraordinary that, as informed private investors, we demand the performance track record and financial history of any mutual fund manager who is a candidate for our portfolios. Yet we do not impose that same rather benign requirement when judging the merits and shortcomings of any forecaster who comes within earshot or eyeshot.
Perhaps the reason for that laxity or oversight is that such minimal scorecards are rarely available. That’s too bad, since trust must be established by prior performance assessments. In baseball parlance, all I seek is a well-documented batting average.
Many years ago I subscribed to the now-defunct Worth magazine. Each year that monthly magazine published many stock-picker selections, sector performance estimates, and market return forecasts from expert consultants, market gurus, and financial writers.
For several years I saved these forecasts and compared them against realized results and the new annual forecasts. Accuracy was dismal, and each year’s fresh forecasts differed dramatically from the previous year’s projections. I wrote to Worth’s publishers and challenged them to maintain, score, and annually report updates on their predictions. I wanted a scorecard, and to my surprise they acknowledged the request with positive action.
At a minimum, that decision demonstrated courage from an unlikely quarter. Unfortunately, Worth’s assembled experts did not improve with age, and their yearly scorecard remained dismal. Perhaps that’s why the magazine stopped publishing a few years later.
I still believe it is essential, when establishing credibility, that any forecaster owes his public a fair accounting of his prognostication record. Given today’s technology, that task is a simple matter.
The questions to be addressed are simplicity itself. How many experts participated? What was the average prediction? How accurate (or, mostly, how inaccurate) was each forecaster’s prediction this past year? What is each forecaster’s accumulated accuracy record? These are not difficult demands.
How did each expert compare to the mean and/or median forecast? How many experts were more accurate than the group average? At this moment I’m thinking in terms of “The Wisdom of Crowds”. Maybe some group herding instincts come into play here.
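None of this bookkeeping requires anything exotic. As a minimal sketch, assuming we have each expert’s published return forecast and the realized result (the forecaster names and all numbers below are invented purely for illustration), the whole scorecard reduces to a few lines of Python:

```python
# Minimal one-year forecast scorecard sketch.
# All forecaster names and numbers are hypothetical.
from statistics import mean, median

# Each forecaster's predicted annual market return (%) vs. the realized return.
forecasts = {"Expert A": 12.0, "Expert B": 4.5, "Expert C": -2.0, "Expert D": 8.0}
realized = 6.3  # invented realized return (%)

consensus = median(forecasts.values())
print(f"Participants: {len(forecasts)}")
print(f"Mean forecast: {mean(forecasts.values()):.1f}%, median: {consensus:.1f}%")

# Absolute error for each forecaster and for the consensus forecast.
errors = {name: abs(pred - realized) for name, pred in forecasts.items()}
consensus_error = abs(consensus - realized)

for name, err in sorted(errors.items(), key=lambda kv: kv[1]):
    verdict = "beat" if err < consensus_error else "trailed"
    print(f"{name}: off by {err:.1f} points ({verdict} the consensus)")

beat_count = sum(err < consensus_error for err in errors.values())
print(f"{beat_count} of {len(forecasts)} forecasters beat the consensus")
```

Extending this to each forecaster’s accumulated multi-year record is just a matter of keeping the annual error tables around, which is exactly the scorecard I keep asking for.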
Some of these questions are being routinely addressed in studies, both academic and in the popular media. Here is a link to one such study reported by the Cleveland Federal Reserve Bank:
http://www.clevelandfed.org/research/Commentary/2007/0315.pdf

It is interesting to note that the Cleveland Fed is here testing the accuracy of private forecasters while simultaneously ignoring its own depressingly poor national GDP growth rate extrapolations. In a direct way, this is equivalent to “the pot calling the kettle black”.
The authors of the referenced article reinforce my earlier ad hoc assertions with the following summary paragraph:
“We find little evidence that any forecaster consistently predicts better than the consensus (median) forecast and, further, we find that forecasters who gave better-than-average predictions in one year were unable to sustain their superior forecasting performance—at least no more than random chance would suggest.”
This conclusion mirrors similar findings from the extensive S&P SPIVA and Persistence Scorecard studies. Prediction persistence is a challenging chore for any forecaster. The researchers failed to identify any “hot hand” phenomenon. The future, with its unfathomable Black Swan events, is forever uncertain and eludes our forecasting capabilities.
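For the curious, here is a hedged sketch of the kind of persistence check the researchers describe: did the forecasters with better-than-median errors in one year repeat that feat the next year more often than the roughly 50% that chance alone would produce? The error figures below are invented for illustration only.

```python
# Sketch of a simple persistence check. Do forecasters with
# better-than-median errors in year 1 repeat in year 2 more often
# than the ~50% chance alone would produce? Data are invented.
from statistics import median

# Absolute forecast errors by forecaster for two consecutive years.
errors = {
    "Expert A": (1.2, 3.4),
    "Expert B": (0.8, 0.9),
    "Expert C": (2.5, 1.1),
    "Expert D": (3.1, 2.8),
    "Expert E": (1.9, 0.4),
    "Expert F": (0.5, 2.2),
}

med1 = median(e[0] for e in errors.values())
med2 = median(e[1] for e in errors.values())

winners_y1 = [name for name, (e1, _) in errors.items() if e1 < med1]
repeaters = [name for name in winners_y1 if errors[name][1] < med2]

rate = len(repeaters) / len(winners_y1)
print(f"Year-1 above-median forecasters: {winners_y1}")
print(f"Repeated above-median in year 2: {repeaters} ({rate:.0%})")
# Under pure chance we'd expect roughly 50% repeaters; a persistent
# "hot hand" would push this rate well above that across many year pairs.
```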
The Cleveland Fed finding is also consistent with the research summarized in the Guru section of the CXO Advisory Group website, which focuses on market experts’ stock-selection prescience. Over a long timeframe, CXO demonstrates that market wizards struggle to maintain even a 50% accuracy scorecard. Ken Fisher seems to be an imperfect exception, but an exception nevertheless.
Making predictions is easy work; a fair scoring of those predictions introduces the predictor to hell’s fire. Sometimes experts are spot on target; sometimes they completely miss. Luck often impacts outcomes. Misguided or overconfident forecasters should be held accountable.
I long remember a Forbes magazine article from about 1993 in which global strategy guru Barton Biggs projected an extended US equity bear market; he endorsed foreign emerging-market exposure instead. I partially acted on his recommendation. A three-year emerging-market disaster followed. My portfolio still retains erosive burn scars from that ill-timed move. But I learned.
In summary, the current empirical evidence is overwhelming. From personal experience, from industry guru evaluations, from collections of mutual fund management performance assessments, and from academic studies of the economic elite’s forecasting record, the assembled data clearly demonstrate that the experts are no more successful at projections than a fair coin flip.
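To make the coin-flip benchmark concrete, here is a small sketch, with an invented hit count, of testing whether a forecaster’s directional calls are statistically distinguishable from chance:

```python
# Sketch: is a forecaster's directional hit rate distinguishable from
# a fair coin? The hit count below is invented for illustration.
from math import comb

def binom_two_sided_p(hits: int, n: int, p: float = 0.5) -> float:
    """Two-sided binomial test p-value against chance accuracy p."""
    expected = n * p
    observed_dev = abs(hits - expected)
    # Sum the probabilities of all outcomes at least as far from the
    # expectation as the observed hit count.
    return sum(
        comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(n + 1)
        if abs(k - expected) >= observed_dev
    )

hits, calls = 27, 50  # e.g., 27 correct directional calls out of 50
print(f"Hit rate: {hits / calls:.0%}, p-value vs. coin flip: "
      f"{binom_two_sided_p(hits, calls):.2f}")
```

With 27 correct calls out of 50 (a 54% hit rate), the p-value comes out near 0.67, so even a record that sounds respectable is statistically indistinguishable from coin flipping.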
The game these experts play is very asymmetric in its outcome attributions. Their potential clientele bear all the financial risk, while a forgiving, uncritical media and a forgetful investor cohort permit the myth to continue.
Okay, I accept that my arguments might not be completely compelling. Forecaster prescience and follies are debatable stuff that might inspire further MFO discussion down the road, and maybe even some heated controversy. So be it.
Merry Christmas.