Last week the Wall Street Journal published a story slamming Morningstar (“The Morningstar Mirage”), arguing the firm’s star ratings were virtually useless as predictors of performance. The Journal showed that both five-star funds and one-star funds regressed toward the mean over time. But it overstated its case because the funds didn’t regress all the way: five-star funds ended up doing much better than one-star funds three, five and even ten years on. The pattern is striking: higher stars predicted higher future star ratings over all the horizons the Journal examined.
Morningstar, my former employer, has seized on this fact to argue that the star rating is “moderately predictive.” But weakly predicting performance within a peer group of actively managed funds is not a feat. You could do the same by simply sorting on expense ratio. Indeed, that’s what Morningstar found in a 2010 study: expense ratios slightly outdid star ratings on “success ratio,” the percentage of funds that both survived and outperformed their peers. It’s unclear to me whether the star ratings have added any information beyond what’s captured by low expenses.
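For readers unfamiliar with the metric, here is a minimal sketch of how a success ratio might be computed. The fund data is invented and the peer benchmark (the median return of surviving funds) is a simplifying assumption; the key feature is that the denominator includes funds that died along the way, so survivorship bias counts against the cohort.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Fund:
    name: str
    survived: bool            # did the fund still exist at period end?
    annualized_return: float  # only meaningful if the fund survived

def success_ratio(funds):
    """Share of the starting cohort that both survived the period
    and beat the median return of the surviving peers."""
    survivors = [f for f in funds if f.survived]
    if not survivors:
        return 0.0
    peer_median = median(f.annualized_return for f in survivors)
    winners = [f for f in survivors if f.annualized_return > peer_median]
    # Denominator is the full starting cohort, dead funds included.
    return len(winners) / len(funds)

# Hypothetical five-fund cohort; fund "C" merged away mid-period.
cohort = [
    Fund("A", True, 0.08),
    Fund("B", True, 0.05),
    Fund("C", False, 0.0),
    Fund("D", True, 0.07),
    Fund("E", True, 0.04),
]
print(success_ratio(cohort))  # 2 of 5 funds survived and beat the median -> 0.4
```

Sorting cohorts by expense ratio or by star rating and comparing their success ratios is, in essence, the comparison Morningstar's 2010 study ran.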
The Journal would have been more persuasive if it had demonstrated that even five-star funds ended up underperforming index funds (which I believe is the case). After all, the practical choice facing investors is not between an active fund and its average or median peer, but between an active fund and an appropriately matched investable index fund. The distinction is of practical importance because it’s much harder to beat an index fund than it is to beat a median peer. For example, the Vanguard Total Stock Market ETF VTI is in the top 18%, 13%, and 10% of its category over the trailing 5, 10 and 15 years as of the end of September. A fund that beats its peer group but not its logical index fund competitor is still a loser as far as the investor is concerned. Peer group comparisons make more sense when you’re dealing with an asset class that lacks cheap, efficient index trackers.
I think most knowledgeable investors would agree that picking funds solely on historical performance is a dumb thing to do. The Journal demonstrated through anecdotes and fund flow data that many investors and advisors act like financial astrologers, switching funds with the ebb and flow of the star ratings. This touches on the bigger story: a huge chunk of the nation’s savings is managed by useless or even harmful “helpers,” either because they lack the requisite knowledge or because their incentives are skewed.
If you walk into a bank or brokerage and ask for financial advice, a salesperson (who will call himself an advisor or wealth manager) will steer you into pricey, mediocre proprietary products. This salesperson will have some rudimentary investment knowledge, but he will mostly be regurgitating whatever sales scripts were given to him to push the firm’s model portfolios. (These models, by the way, are designed to hit that sweet spot of looking complicated enough for clients to feel as if they’re getting value for their money, but not so unconventional as to lag the benchmark by much.)
Most people calling themselves financial advisors are salespeople, not technical experts. This wouldn’t be a problem if not for the fact that many misrepresent their expertise. There’s even a lucrative sub-industry of phony qualifications and awards for financial advisors, such as the “Five Star Professional” designation, which mostly signals that the advisor is enough of a huckster to pay for one.
If 80% of the least competent and least honest advisors disappeared and were replaced by index funds or robo-advisors, the world would be better in almost every way. I’ve seen how billion-dollar registered investment advisors, the wealth-management units of major banks, and small-time brokers manage money. The better ones offer overpriced closet index strategies; the worst ones cheat their clients.
The problem has a simple cause. Anyone who needs an advisor is, almost by definition, poorly equipped to assess the competence of one, so they often resort to shortcuts like picking the person who is the most likeable, successful, or confident. Competition does not drive out technically inept financial advisors (though it does drive out socially inept ones), because the typical prospect assumes a successful advisor must have been successful on behalf of his clients, and many don’t know how to gauge whether their advisor did a good job or whether they got value for the fees they paid. As a result, there’s little correlation between an advisor’s investing or financial planning chops and his ability to make tons of money as a purveyor of “advice.”
Even if the star rating disappeared, the underlying problem would still exist: bad investors and bad advisors would still control hundreds of billions of dollars. They wouldn’t behave more rationally when deprived of one piece of performance information. In fact, they would simply chase raw performance, a far worse outcome than chasing risk-adjusted return, which the star rating measures.
In an ideal world, Morningstar would include fees as an explicit component of the star rating calculation. Doing this would nudge the least sophisticated investors into lower-cost funds and encourage fund companies to lower expense ratios, particularly for funds on the cusp of a higher star rating. Fees need not be so large a component that companies could turn awful-performing funds into four- or five-star funds by slashing expense ratios; that would delegitimize the rating in the eyes of performance chasers and fund company marketers, defeating the purpose of including fees in the first place. That Morningstar hasn’t done so, despite study after study showing that fees are among the best predictors of relative performance, is puzzling if you take at face value Morningstar’s claim that the star rating exists solely to help individual investors make better decisions.
The Journal raised a good point about how Morningstar makes money off the star ratings. Incentives are powerful. The human brain is a marvelous machine for rationalizing one’s self-interest as society’s interest. The 4% of revenue that the star ratings and related intellectual property licenses account for sounds small, but the marginal dollar earned from licensing is pure profit. If that IP-licensing revenue disappeared, Morningstar’s stock price would take a bat to the head.
However, the Journal went too far when it insinuated that Morningstar’s fund analysts were somehow influenced by BlackRock to upgrade the company’s “parent rating” to positive. I worked as a fund analyst for years. My former colleagues were sincere. No one brought up advertising or licensing revenue from such-and-such client when discussing a fund, and I felt no implicit pressure to be nice to our biggest clients. We had an idealism that was only possible due to our isolation from the marketing and sales side of the business. The dirty work done by the sales and marketing folks paid our salaries, but we could act in ways that hurt them without suffering financial consequences. I’m sure some of them resented us for that, me especially, because I was unsparing in my criticism. I’m grateful that I could speak my mind, something that wouldn’t have been possible if Morningstar simply paid lip service to its stated values.