Morningstar Analyst Ratings for Mutual Funds

I would be interested to hear what people here think of Morningstar's new analyst ratings. As of 12/31/11 they had rated 410 funds, yet only 17 received "Negative" ratings. That looks like a massive positive bias. Morningstar's response is that it will even out over time. However, they have 21 index funds with "medal" ratings, which are supposed to be reserved for funds with "sustainable advantages over a relevant benchmark." If somebody can explain how an index fund can have a sustainable advantage over the benchmark it is tracking, I am all ears. By Morningstar's own definition, the cheapest index funds should get a "Neutral" rating: "Fund that isn't likely to deliver standout returns, but also isn't likely to significantly underperform." Yet not one index fund has been rated lower than Bronze.

You can see their full listing of rated funds (as of 12/31/11), as well as a breakdown of their current distribution of ratings here http://www.wallstreetrant.com/2012/01/morningstar-ramps-up-analyst-ratings.html

Comments

  • Yer makin' sense about those Index funds.
  • good stuff, WSR!
  • Fund Alarm,

    Thanks, glad you liked it!
  • One thing you can count on from M* is that they do not make decisions without much thought. Just as the star ratings have become a marketing tool, not only for M* but also for the funds that receive 4 and 5 stars, it's hard to imagine the new analyst ratings are not being considered from a marketing standpoint, too. I can see it now, an advertisement for a fund that touts its "5-star rating and gold analyst rating from M*".

    To date, of the 167 funds we currently track closely, only 34 have analyst ratings, with 20 rated Gold, 11 Silver, and 3 Bronze. A few are surprising; a couple are very strange. How the individual M* Pillar ratings equate to the overall analyst rating is certainly not consistent: five positive ratings do not guarantee Gold, and one neutral or negative rating might still yield a Gold. So, on the surface at least, the star rating appears to be more consistent (even if a fund is stuck in the wrong asset class, which is more common than we might think). Clearly the personalities of the analysts and their biases will influence their ratings. That will be good or not so good. My experience is that some of the analysts really do not have a good grasp of some of the funds they review. But I guess that will happen whether it's M* or someone else. Make no mistake, though: this will be used for marketing purposes.
  • Of course this will be used for marketing (for which M* will receive a healthy licensing fee). That does not make the analyst ratings inherently suspect (just as one may question the somewhat subjective classifications but not question the objective awarding of stars once funds are classified).

    That said, forward looking ratings must be subjective at some point.

    The 5+ pillars not equaling Gold doesn't bother me as much as it does others. Suppose the +/0/- ratings equate to "scores" of 67-100 (+), 35-66 (neutral), and 0-34 (-). Then it's easy to see a fund rated positive on all five pillars getting a "score" of 350 (5 x 70), while a fund with four pluses and a negative gets a score of 400+ (e.g., 4 x 100 plus a low negative), well above that.

    I'm pretty sure I read somewhere on M* (not going back to look for it now) that they were looking at the fund overall, and it wasn't a matter of adding up the number of pluses. It sounded to me more like what I described above: dividing into thirds (or tertiles, if one prefers) does not provide adequate granularity to infer a total "score".
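    The tertile argument above can be sketched in a few lines of Python. All of the cutoffs are my assumptions for illustration, not Morningstar's actual methodology:

```python
# Hypothetical illustration: map each pillar rating ("+", "0", "-") to an
# assumed score range and show that a fund with five "+" pillars can still
# total less than a fund with four "+" pillars and one "-".
RANGES = {"+": (67, 100), "0": (35, 66), "-": (0, 34)}

def total_range(pillars):
    """Min and max possible total 'score' for a string of pillar ratings."""
    lo = sum(RANGES[p][0] for p in pillars)
    hi = sum(RANGES[p][1] for p in pillars)
    return lo, hi

# A five-"+" fund at the bottom of each range totals 5 x 67 = 335, while a
# four-"+"/one-"-" fund at the top totals 4 x 100 + 34 = 434.
print(total_range("+++++"))   # (335, 500)
print(total_range("++++-"))   # (268, 434)
```

    In other words, the three buckets overlap so much in implied "score" that the overall medal cannot be inferred from the pillar count alone, which is consistent with what M* describes.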

  • For the most part, their analyst ratings are garbage.

    "Beware of false knowledge; it is more dangerous than ignorance." -- George Bernard Shaw
  • @BobC you say of the funds that "we" track -- what exactly is your business if you don't mind me asking?

    Also, regarding the marketing: yes, I think it will definitely be used for marketing purposes, especially by the fund companies receiving the ratings. It will be interesting to see how this affects flows; I think the effect will be rather dramatic, unfortunately. Although, for funds with good ratings, it may help retain assets that would otherwise leave just because the "star" rating went down.
  • Reply to @WallStreetRanter: First, good to have you here at the Observer! I am a principal in a fee-only RIA firm in Columbus. Our client portfolios include mutual funds, ETFs, and individual stocks. Over the years, we have tracked anywhere from 120-180 funds at a time, including those we are currently using in accounts, as well as those we are watching as potential replacements, or those that are well known but not ones we would use. This gives us a broad spectrum of what's out there. We will add or delete funds as our research and the markets dictate. The list is generated from screens we do for each of the asset classes we use. There is no way we could monitor 26,000 mutual funds, 1,400 ETFs, 600 closed-end funds, and all the other options out there. So we do our screens and keep our eyes and ears open to what is going on "out there". Among other research, we have a subscription to Morningstar Office, which has complete coverage of domestic and global funds, ETFs, stocks, bonds, money markets, hedge funds, 529s and SMAs, all updated every day. Hope this helps.
  • edited February 2012
    @BobC Thanks for the info!

    Fortunately there aren't actually 26,000 individual mutual funds (unless you count each individual share class of every fund). But point taken! You mean you can't monitor the inner workings of over 8,400 different funds? Why not? ;)
  • I think that the "new ratings" are a waste of time.

    Given how poorly most funds performed during the downturn, I think they are simply being rolled out now to provide - as suggested above - another metric that M* can market (and funds can include in advertising) even when the "traditional" star ratings are poor or unexceptional.

    I am a subscriber to M*'s print products, which prompts an additional critique. In the detailed print products, the new ratings replace several blocks of description that summarized the people at, and process followed by, each fund. Frankly, I found the old (displaced) information much more useful than the "new ratings".

    Also, the "new ratings" could have been presented compactly on the printed page, but M* chose NOT to do so, wasting a lot of space. I think they did this to make the whole process more opaque and mysterious.

    But the key observation is that they have produced a seemingly wise metric that can be sold or licensed to funds and used in advertising, even when a fund's historical "risk-adjusted" performance is unremarkable.

    "When the Lord hands you lemons, make lemonade." - This is Morningstar's "lemonade". Drink up!

  • MJG
    edited February 2012
    Hi Wall Street Ranter,

    First an honest disclaimer: I am not an expert on any Morningstar rating system.

    However, I have used the earlier Morningstar "Star" ratings as a partial negative screen in my mutual fund buying decisions. I am only loosely familiar with their evolving Olympic medal award system.

    I used the Star ratings to eliminate candidate funds that sported a one- or two-star rating; I retained for further assessment those funds given a three- to five-star rating. Accumulated practical data from Morningstar user studies reported that the lower-rated funds indeed produced lower future returns, whereas the upper echelon of rated funds tended to deliver average to above-average rewards, but freely migrated within the upper three ratings.

    The Star system has matured over time. When it was originally introduced, it only compared equity returns to the S&P 500 Index, a very flawed benchmark for international and small cap funds. Over the years, Morningstar has significantly improved their product.

    The Star system is one-dimensional and formulaic in construction. Morningstar has always cautioned users that it is backward looking. The number ratings are assigned assuming a Bell curve distribution of past performance: the five stars segment that curve into symmetric sections, so the number of 5-star funds granted should equal the number of 1-star funds, which suffer outflows as a penalty for that rating.
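    The symmetric bucketing described above can be sketched as follows. The percentile breakpoints here reflect Morningstar's published star distribution (bottom 10% get 1 star, next 22.5% get 2, middle 35% get 3, next 22.5% get 4, top 10% get 5); the rest is illustrative:

```python
# Sketch of symmetric star bucketing by performance percentile.
import bisect

# Cumulative breakpoints: 10% / 22.5% / 35% / 22.5% / 10%.
BREAKS = [0.10, 0.325, 0.675, 0.90]

def stars(percentile):
    """Map a fund's performance percentile (0 = worst, 1 = best) to stars."""
    return 1 + bisect.bisect_right(BREAKS, percentile)

# Symmetric tails: as many 5-star funds as 1-star funds.
print(stars(0.05), stars(0.50), stars(0.95))   # 1 3 5
```

    Note that the segments are symmetric but not equal in width: the middle 3-star bucket is the widest, which is why most funds cluster there.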

    I suspect Morningstar recognized these limitations, and their Olympic award scheme is an attempt to enhance their overall system's robustness. The Olympic formula is less rigorous since it incorporates evaluators' opinions and has five evaluation dimensions. For example, fund management tenure is now an integral part of the Morningstar assessment.

    Morningstar will deploy their Olympic awards as a supplement to their Star rankings; they will not replace the Stars. Yes, it should contribute to the firm's bottom line.

    Morningstar has always had a difficult time explaining why their 5-star funds did not remain 5-star in subsequent periods. Performance persistence is a critical shortcoming in the history of most mutual funds. That dismal finding suggests that mutual fund managers are often more lucky than skilled in assembling their portfolios and in their timing decisions. Morningstar has suffered the same fate in their top-tier selections and in their manager-of-the-year awards. Predicting the future is hazardous to your wealth and your reputation.

    Standard & Poor's has assessed fund-management persistence, and the relative returns delivered by active management contrasted with their passive counterparts, for many years. Their reports on the matter are published several times each year and are titled the Persistence Scorecards and SPIVA reports. Here is a link to the S&P website and their documents:

    http://www.standardandpoors.com/indices/spiva/en/us

    Historically these reports conclude that fund managers struggle to beat their benchmarks and have difficulty maintaining performance persistence. Investment-category winners change in the marketplace, and managers fail to adapt and adopt quickly enough. The fund cost hurdle is too high to overcome in many instances. The percentage of winners is usually below what would be expected from random occurrences under a Bell curve distribution.

    Costs matter greatly. That is why Morningstar has historically granted more than 3 stars to index funds with low costs. The cost structure of actively managed funds shifts the percentage of winners in the active category below the average market return that the index fund earns. Hence, in a comparative Bell curve environment (the Morningstar 5-star system), an index mutual fund can and does get an above-average ranking.
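    The cost argument can be illustrated with a toy simulation. Every number below (market return, volatility, expense ratios) is an assumption chosen only to show the mechanism, not real data:

```python
# Toy simulation: gross active returns are symmetric around the market
# return, but a higher expense ratio shifts the whole active distribution
# down, so a cheap index fund lands above the median active fund net of fees.
import random

random.seed(0)
MARKET = 0.07          # assumed gross market return
ACTIVE_ER = 0.012      # assumed average active expense ratio
INDEX_ER = 0.001       # assumed index fund expense ratio

active_net = [random.gauss(MARKET, 0.03) - ACTIVE_ER for _ in range(10_000)]
index_net = MARKET - INDEX_ER

beaten = sum(r < index_net for r in active_net) / len(active_net)
print(f"Index fund beats {beaten:.0%} of active funds net of fees")
```

    With these assumed numbers the index fund beats well over half of the active funds, purely because of the fee gap, which is the point being made above.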

    The same will be true in the new Olympic award system since performance is one of the 5 evaluation categories. Also, since some of the inputs to the new system are highly subjective, I imagine the rankings will be even more distorted.

    However, I do not object to an attempt to make the assessments more forward looking. The warts in the old system have been exposed; only time will tell if the new approach will be more prescient. Overall, I welcome Morningstar’s more embracing experiment. I wish them well.

    Best Regards.
  • @MJG Yes, I have no objection to attempting to give forward-looking measurements. So far my objection is to the way these "medals" are being awarded (and what does a "Gold" rating really mean if it's given out for replicating an index's return?). I am also concerned about how this can affect fund flows (although, obviously, that is out of Morningstar's control).
  • Hi MJG,
    Please check your link re:S&P. It brings up a blank page when I attempt to retrieve it.
    prinx
  • MJG
    edited February 2012
    Reply to @prinx:

    Hi Prinx,

    Thanks for the heads-up. The link worked when I originally submitted it. I checked just now and experienced a problem also.

    I independently visited the S&P site and gained access with the following:

    http://www.standardandpoors.com/indices/spiva/en/us

    This appears to be identical to the original address. I'm puzzled why the original seems to be contaminated.

    Good luck with this repost. It is slow, but it does get you there.
  • Reply to @WallStreetRanter: Good point. New Constructs monitors the inner workings of all the equity funds. Our ratings are based on the combined ratings of the individual stocks held by a fund.
    http://www.newconstructs.com/nc/fundscreener/fund-screener-premium.htm
  • Reply to @MJG: Both links work for me. Perhaps the fact that the data is stored on a server named "Blob" has something to do with the sluggish response.
  • Just wanted to bump this conversation because now we can take a one-year look back at these ratings. You can see the complete post here: "Performance of Morningstar's New Analyst Ratings For Mutual Funds in 2012".

    But here is the gist of it
    image

    The Wall Street Ranter
  • Reply to @dtrainer: I've seen your work on MarketWatch.com. F- !!!
  • Reply to @WallStreetRanter: If you don't like M*'s analyst ratings, might I suggest you come up with something better. Most of us here at MFO and the old FundAlarm website are not country rubes. When selecting funds, we use M*, along with Lipper and other web resources, as reference guides before purchasing a fund.
  • edited February 2013
    Reply to @WallStreetRanter:

    Hi WSR. From David's February commentary:
    Morningstar’s “analyst ratings” have come in for a fair amount of criticism lately. Chuck Jaffe notes that, like the stock analysts of yore, Morningstar seems never to have met a fund that it doesn’t like. “The problem,” Jaffe writes, “is the firm’s analysts like nearly two-thirds of the funds they review, while just 5% of the rated funds get negative marks. That’s less fund watchdog, and more fund lap dog” (“The Fund Industry’s Worst Offenders of 2012,” 12/17/12). Morningstar, he observes, “howls at that criticism.”
    You and Mr. Jaffe see things the same way.

    Another thing that irks me a bit with the Olympic system is the inconsistency. For example...

    image

    image

    How can you possibly have an awful 2.5 ER, be "Neutral" on process (unless it's meant to be a pun...but it's not) and still receive a Gold?

    Compare with:

    image

    image

    Apparently that 1.0 ER at Fairholme causes the downgrade.

    Finally...

    image

    image

    See the "Positive" for performance?

    image

    (I know, it's really about long, long term performance...hard to argue=).)

    image
  • edited February 2013
    Reply to @MJG: MJG, you remain a poet=). Thanks for the SPIVA Persistence update:

    image

    The Dodge & Cox funds are good examples of high-profile funds receiving 5 stars (based only on relative performance) leading up to 2008. They have been 2-3 stars ever since, but under the Olympic system they still hold Gold ratings (which are subjective and only partially based on performance).