I'd suggest skipping M* entirely when researching bond fund allocation. It's easy to find almost any fund's allocation using the fund's own reporting, and that info usually has the distinct advantage of being basically accurate. M* bollixed up their bond analysis several years ago, and it's really not worth the time using them for bond allocation info any longer.
You seem to be conflating "allocation" with "analysis". M* reports funds' allocations exactly as reported by the funds. On the other hand, M* calculates its own weighted average of credit quality. It does this because a simple average isn't meaningful.
To understand this, think about star ratings. M* grades on a bell curve. That makes sense because fewer than 20% of funds perform at 'A' (5*) level, while lots more than 20% perform at a mediocre 'C' (3*) level.
If you don't like this bell curve, you can always look at Lipper ratings, which rate fully 20% of funds "A" (5). I actually find the Lipper ratings helpful in one respect: they rate different aspects of a fund separately, like consistency and returns. But the scale itself, being unweighted, isn't as informative as M*'s bell curve.
Similar idea with credit ratings. According to S&P data (see figure below), virtually no AAA bonds (0.00%) default within a year, almost as few BBB bonds (0.17%) default, while over a quarter (26.82%) of CCC bonds default within the span of a year. Note that default doesn't mean the issuer goes bust; more likely it just stops paying interest.
Still, think about a portfolio containing one AAA bond and one CCC bond, vs. a portfolio containing two BBB bonds. The latter has less than a 0.34% chance of any bond defaulting. The former has a better than one-in-four chance of a default, though the impact is cut in half since the CCC bond represents only half the portfolio.
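To see where those numbers come from, here's a minimal sketch using the S&P one-year default rates quoted above, assuming (a simplification) that defaults are independent:

```python
# Probability that at least one bond in a portfolio defaults within a year,
# assuming independent defaults. Rates are the S&P one-year figures above.
def p_any_default(rates):
    p_none = 1.0
    for r in rates:
        p_none *= 1.0 - r  # probability this bond survives the year
    return 1.0 - p_none

aaa_ccc = p_any_default([0.0000, 0.2682])  # one AAA bond + one CCC bond
two_bbb = p_any_default([0.0017, 0.0017])  # two BBB bonds

print(f"AAA + CCC portfolio: {aaa_ccc:.4%}")  # 26.82%, driven by the CCC bond
print(f"BBB + BBB portfolio: {two_bbb:.4%}")  # just under 0.34%
```

The two-BBB portfolio comes out at 1 - (1 - 0.0017)^2 ≈ 0.3397%, which is why I said "less than 0.34%."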
Both portfolios "average" BBB in credit rating, if we take a simple average. That average doesn't tell us much, because the two portfolios have such disparate risk profiles.
What we can do with the first portfolio is weight the bond ratings by their impact. So while we might rate AAA as "A" (giving it a numeric value of 1), we might rate CCC bonds as "Z" (giving it a numeric value of 26, corresponding to its 26+% chance of defaulting). Now when we average the two bonds, we get (1 + 26) / 2 = 13.5, which might correspond to "high quality" junk.
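As a sketch of that weighting (the numeric scale here is my illustrative one from the paragraph above, not Morningstar's actual proprietary mapping):

```python
# Hypothetical mapping from letter rating to a numeric score scaled roughly
# to one-year default risk: AAA -> 1, CCC -> 26 (matching its ~26% default
# rate). BBB -> 2 is likewise illustrative. NOT Morningstar's real scale.
SCORE = {"AAA": 1, "BBB": 2, "CCC": 26}

def weighted_avg_score(holdings):
    """holdings: list of (rating, portfolio_weight) pairs summing to 1.0."""
    return sum(SCORE[rating] * weight for rating, weight in holdings)

mixed = weighted_avg_score([("AAA", 0.5), ("CCC", 0.5)])   # (1 + 26) / 2 = 13.5
two_bbb = weighted_avg_score([("BBB", 0.5), ("BBB", 0.5)])  # 2.0
```

On this risk-scaled axis the AAA/CCC portfolio (13.5) lands nowhere near the two-BBB portfolio (2.0), even though a naive letter-grade average calls them both "BBB."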
That gives us a better sense of what to expect from the portfolio in terms of defaults. If getting a sense of portfolio default risk is what we want from an average credit rating, then this method of averaging is more meaningful.
If what we want from an average credit rating is to compress the bond allocation (how many A's, how many B's, etc.) down to a single number, then a simple average is better. Personally, if I want to know what the allocation of bonds is, I just look at a bar chart or table showing the whole distribution. There aren't so many grades of bonds that the picture gets confusing.
