Here's a statement of the obvious: The opinions expressed here are those of the participants, not those of the Mutual Fund Observer. We cannot vouch for the accuracy or appropriateness of any of it, though we do encourage civility and good humor.


Another view on monte carlo

https://www.yahoo.com/finance/news/why-hate-monte-carlo-analysis-140252542.html
I started reading & then closed. More for those still working.

Derf

Comments

  • I think people know that I'm not enamored with "Monte Carlo". That's because IMHO it is misunderstood, misrepresented, and misapplied. So it may come as a surprise, but I think the criticisms in the column miss the mark.

    Given a numeric problem, one can try to find an equation that "solves" the problem. Sometimes that is not possible. For example, consider x² − x − 2 = 0. One can algebraically solve this equation exactly (x = 2 and x = −1). Or one can use numeric methods (e.g. Newton's method) to have a computer plug numbers in and gradually get close to the solution(s).
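    The numeric approach can be sketched in a few lines. This is a hypothetical illustration of Newton's method applied to that same quadratic, not anything from the article:

```python
# Newton's method: repeat x <- x - f(x)/f'(x) until f(x) is near zero.
def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

f  = lambda x: x * x - x - 2   # f(x) = x^2 - x - 2, exact roots 2 and -1
df = lambda x: 2 * x - 1       # its derivative

root_a = newton(f, df, x0=3.0)    # starting above 2, converges to 2
root_b = newton(f, df, x0=-3.0)   # starting below -1, converges to -1
```

    The computer never "solves" anything algebraically; it just closes in on the answer.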

    Monte Carlo is just another numeric technique for "solving" a problem, in this case a problem in probabilities. It plugs in random numbers and sees what pops out. Do this enough times and the distribution of outcomes comes very close to the actual outcome probabilities.
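    A toy illustration of that idea (my own sketch, estimating a probability whose exact value we already know):

```python
import random

# Estimate P(two dice sum to 7), which is exactly 1/6, by plugging in
# random numbers and seeing what pops out.
random.seed(42)
trials = 200_000
hits = sum(1 for _ in range(trials)
           if random.randint(1, 6) + random.randint(1, 6) == 7)
estimate = hits / trials   # converges toward 1/6 as trials grows
```

    With 200K trials the estimate lands within a fraction of a percent of 1/6. The randomness is the solver; the dice are the model.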

    It's a tool. No better or worse than the model it is "solving". And that is the problem. The models are usually simplistic, not accounting for skew in market results, not accounting for short term persistence of returns, not accounting for the starting point (current state of the market), and so on.

    This article goes off on tangents. It criticizes Monte Carlo models because the market is not perfectly efficient. But it never says why the efficiency or inefficiency of the market should have any particular effect on the likelihood of different market outcomes. Even if market inefficiencies did have an effect, that could be incorporated into a model. At worst what you have is not a problem with Monte Carlo as a tool, but with the models that are fed into the tool.

    The article goes on to say that planners misuse the results by encouraging people to focus on the most probable outcomes. I usually see the opposite: people are encouraged to follow a plan where success is "certain". (Think 4% rule.) Regardless, this is a problem with the way Monte Carlo is "sold" to DIYers and applied by planners, not with the tool itself.

    ISTM that the criticisms in the article have nothing to do with Monte Carlo, but with any technique (e.g. looking at historical returns) that predicts possible outcomes. The clue is in the line: "I'm not a fan of financial plans that use straight-line projections or Monte Carlo risk analysis to support investment proposals."
  • Oh, Lord! Let's just hope that ol' MJG doesn't see this one.
    :)
  • Hi msf,

    Any tool for whatever purpose is dangerous if it is “misunderstood, misrepresented, and misapplied”. That’s as close to a universal truth as anything.

    But a Monte Carlo analysis properly constructed, parametrically used, and carefully interpreted can yield meaningful investment insights. Doing a carefully selected set of simulations can locate rough return boundaries and risks. What-if scenarios can be rapidly explored. No specific forecasted outcome is typically output, but a projection of possibilities, with the odds attached, helps for planning purposes.

    It’s an imperfect tool, but no perfect tool exists. Monte Carlo analyses have served me well over decades. They helped me to identify what was important and what was of only secondary impact.

    If you haven’t used it, I suggest you give it a more detailed examination. You just might decide to add it to your investment tool kit. It is not difficult given the resources currently available on the Internet.

    Best Wishes
  • Hi Old Joe,

    Thanks for keeping me in mind. My reply wasn’t all that bad! Or was it? No it wasn’t.

    Best Wishes
  • @MJG- for you, no, it wasn't.

    :) Just kidding. Glad to see that you're still around.

    OJ
  • I've always agreed with MJG on this one. It's a great tool for what it is. Can anyone find a better one? It's probability. There is no exact.
  • @MikeM 's comment illustrates what I meant in saying that the Monte Carlo method is misunderstood. The method is nothing but a technique that "solves a problem by generating suitable random numbers and observing that fraction of the numbers obeying some property or properties."
    http://mathworld.wolfram.com/MonteCarloMethod.html

    In our current context, that "problem" is a model for portfolio values vs. time. We could build a performance model by saying, e.g. that the value of a portfolio in one year has a 1/4 chance of being flat, a 1/4 chance of being down 5%, a 1/4 chance of being up 5%, and a 1/4 chance of being up 10%.

    A Monte Carlo tool would take these odds, iterate 30 times with a random number generator to simulate 30 years of investing, and come up with a result. It would repeat this, say, 100K times, and produce a histogram of the results.
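    In code, that toy model and solver might look like this (my own sketch of the idea, not any particular product's implementation):

```python
import random
from collections import Counter

# The toy model above: each year is flat, -5%, +5%, or +10%, each with odds 1/4.
OUTCOMES = [0.00, -0.05, 0.05, 0.10]

def simulate_30_years(rng):
    """One trial: compound 30 randomly drawn annual returns on $1."""
    value = 1.0
    for _ in range(30):
        value *= 1 + rng.choice(OUTCOMES)
    return value

rng = random.Random(0)
results = [simulate_30_years(rng) for _ in range(100_000)]

# Crude histogram of final values, bucketed to the nearest dollar.
histogram = Counter(round(v) for v in results)
```

    Note that the model is the four-entry OUTCOMES table; everything else is generic solver machinery. Swap in a different table (or distribution) and the solver does not change at all.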

    Obviously the model I gave (1/4 down, 1/4 flat, 1/2 up) is both simplistic and wrong. But that's a problem only with the model, not with the Monte Carlo method. When MJG speaks of resources on the internet (e.g. Vanguard's Nest Egg Simulator) he is conflating a model builder with a Monte Carlo solver. To speak of such resources as Monte Carlo tools is to propagate a fundamental misunderstanding of what Monte Carlo methods are.

    Here's how David Blanchett and Wade Pfau describe this confusion:
    One common criticism is that such tools may not incorporate the “fat tailed” nature of return distributions, as well as things like autocorrelation (which is when returns of a variable, like inflation, are correlated over time).

    But this argument is like saying all cars are slow. There are no constraints to Monte Carlo simulation, only constraints users create in a model (or constraints that users are forced to deal with when using someone else’s model). Non-normal asset-class returns and autocorrelations can be incorporated into Monte Carlo simulations, albeit with proper care. Like any model, you need quality inputs to get quality outputs.
    They go on to suggest that at least it's good at hitting the broad side of a barn (" if the calculated success rate is low, say 15-25%, we know that a client plan generating these numbers is in danger").

    I'm not convinced even of this any more. As I wrote in another post, Pfau used Monte Carlo simulation to show that, post retirement, a rising allocation to equities has a better chance of success than the traditional declining glide path. But then he changed his model to incorporate mean reversion of bond yields (starting from a current low yield) and reached the completely opposite conclusion: the traditional glide path works better.
    https://mutualfundobserver.com/discuss/discussion/51256/these-are-professionals-don-t-try-this-at-home

    If professionals, using much more sophisticated models than anything you'd find on the web (or are likely to design yourself) can come up with contradictory results, I'm not going to have much faith in the results put out by these free programs. Their code, their Monte Carlo simulations are likely perfect. It's their models that I question.

    To the extent that people view the two (simulations and models) as one and the same, Monte Carlo methods are fundamentally misunderstood. The author of the original article, like most everyone else, conflates the model with the simulation technique, criticizing the latter for supposed flaws in the former.
  • Have you written about ORP and his modeling and methods, at least some of which he publishes onsite? (Too lazy to delve, at least right now.)
  • See https://www.i-orp.com/ModelDescription/MonteCarlo.html

    It makes curious use of historical returns; see Appendices B and C. Not that I have any idea how I would make use of historical returns, but partitioning into positive and negative years seems odd.

    Just as I don't think it should matter to an investor whether they get a return of plus 0.1% or negative 0.1% (the numbers are virtually the same), I question whether this year's return being positive or negative 0.1% will make a big difference in next year's return. There's nothing magical about the number 0. But this model draws that distinction.

    Some models, including this one, randomly select historical returns in an attempt to preserve some context. (Partitioning into positive and negative years is just a refinement.) My intuition says that there is some information buried in historical data, but randomly picking the year tends to eviscerate this. That's just a hunch, I've not looked into it. It's probably worth looking for papers where historical data is used in this way (Portfolio Visualizer may also offer this option) to see whether the rationales given seem to make sense.
  • Fwiw, I called your post to the attention of financial developer James Welch, who responded:

    Interesting/amusing, [and] quite so. The idea is for ORP/MC to capture the idea of returns to be following trends.

    Personally, I find 3-PEAT to be a better tool for retirement policy evaluation.


    And one of my kids is a management consultant who teaches MC at his firm to younger consultants:

    This is good, although I feel like the point sort of boils down to "any tool is perfect, it is the use of the tool that was the problem." I mean, sure.... "You say my model of the economy was wrong, but the math was correct -- it just wasn't the correct model." Tomato / tomahto?

    Long ago I built some cool Monte Carlo models to prove to an airline board that there was a pretty high chance they'd be unable to avoid bankruptcy in the next 10 years ... the highest-probability path was okay, but too much cumulative risk of terrorism, oil prices, labor strikes, etc., which made the real "average" path much worse.
    And then I read
    Black Swan and spent the next weeks thinking about that. I actually wrote the training class on MC and probability and incorporated all that fat tail stuff.

    We showed the board the “cone” of future performance in 2008 and said we think they have a 65% chance of bankruptcy in the next 10 years. The hard part was the “okay, so what should we do?” They asked a bigger consulting firm the same thing and their solution boiled down to lobbying the govt for tax breaks (!).

    We told them they needed “super-scale”, which worked on paper but seemed a little far-fetched, as they were already one of the world’s bigger airlines, with $20B in revenue. But 20 months later they merged with another.
  • msf
    edited August 2019
    I agree that 3-PEAT seems to be the better tool. But since this entire thread is on MC, and ORP includes MC, that's what I addressed.

    It seemed pretty clear that the idea of partitioning samples into Pos and Neg sets was to contextualize the return for the current clock tick (year). Perhaps, since there is short term persistence of performance (various papers suggest this lasts up to six months), if the simulated current return were positive, the next tick's (year's) return would be somewhat more likely to trend the same way as historical up years.

    A problem with this, as I tried to explain, is that there's nothing magical about zero as the partitioning criterion. One could easily imagine, for example, partitioning at the median return - so that the two sets of returns would have the same size. Alternatively, using the mean return, to rather similar effect.
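    The two partitioning choices are easy to put side by side (the returns below are made up purely for illustration):

```python
import statistics

# Ten hypothetical annual returns, just to compare partitioning rules.
returns = [0.12, -0.03, 0.07, 0.21, -0.11, 0.04, 0.15, -0.01, 0.09, 0.02]

# ORP-style: partition at zero.
pos = [r for r in returns if r >= 0]   # 7 of the 10 years
neg = [r for r in returns if r < 0]    # 3 of the 10 years

# Alternative: partition at the median, so the two sets are equal in size.
med = statistics.median(returns)
above = [r for r in returns if r >= med]
below = [r for r in returns if r < med]
```

    Zero gives lopsided sets (most market years are up); the median gives balanced ones. Nothing in the mechanics privileges one cut point over the other.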

    As I wrote, these are just hunches. I asked about theoretical underpinning for any such partitioning. In fact, the ORP page I cited notes that by using random intervals (rather than random sample points) trends are preserved. It is telling that the page is silent about trends when it comes to Pos and Neg partitioning.

    Personally, I like bootstrapping. It's a nonparametric way to randomly generate returns without worrying about the accuracy of yet another submodel (such as a normal distribution parameterized by mean and std dev). I just don't see how it "capture[s] the idea of returns to be following trends."
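    A minimal bootstrap sketch (hypothetical historical returns; the point is the sampling with replacement, not the numbers):

```python
import random

# Bootstrap a 30-year path by drawing annual returns, with replacement,
# directly from a historical record. No fitted distribution, no parameters.
historical = [0.12, -0.03, 0.07, 0.21, -0.11, 0.04, 0.15, -0.01, 0.09, 0.02]

def bootstrap_path(returns, years, rng):
    value = 1.0
    for _ in range(years):
        # Each year is an independent draw, so any trend in the record is lost.
        value *= 1 + rng.choice(returns)
    return value

rng = random.Random(1)
paths = [bootstrap_path(historical, 30, rng) for _ in range(10_000)]
```

    The independent draws are exactly why, as noted, this approach does not "capture the idea of returns following trends."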

    The anecdote given illustrates a deficiency in MC simulation. "The hardest part was the 'okay, so what should we do?'" MC simulations don't explain anything. As ORP's FAQ says: "MC offers no clues as to how it arrived at its results." If you don't know how you got to a result, it certainly is hard to figure out what to change to improve the distribution of outcomes.
  • One thing about 3-peat is that the results change so much as a function of investing span. Naturally.

    When I put in start date of 1982, which I guess is when I started doing serious active investing, or say I did, my results are rockin'.

    When I put in 1974, when I was 27 and getting notionally more serious about financing my life in general, the results are comparatively dismaying.

    So one hardly knows when to set the begin point. I therefore now proclaim that modern market behaviors started with the big bull of the early 1980s.
  • I suspect these Welch retirement-finance articles will be of interest to most everyone here:

    https://www.i-orp.com/DI/articles.html
  • @davidmoran and @msf, thank you for pointing out the link above. Again, more tools to explore and learn. How does this MC simulation differ from Financial Engine's?
  • ORP provides multiple simulators. One is an MC simulator. Another is 3-PEAT, which is not an MC simulator.

    Primarily where ORP's MC simulator differs from other MC simulators (as near as I can tell) is that it does not model market returns. (I haven't found a description of FE's engine, so I'm speaking in general terms here.)

    Most MC simulators model market returns by assuming they follow some sort of distribution pattern (bell curve, lognormal, etc.). For example, if you assume that market returns will be randomly distributed along a bell curve, you've got a market return model with two parameters: mean and standard deviation. (Those are the types of parameters MJG said he wanted.) It's still up to you to figure out the values for those two parameters.
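    A two-parameter market model of that sort reduces to very little code. The mean and standard deviation below are illustrative guesses, not recommended inputs; picking those two numbers is the hard part:

```python
import random

# Assume annual returns are normally distributed with a chosen mean and
# standard deviation. Those two numbers ARE the entire market model.
MEAN, STDEV = 0.05, 0.15

rng = random.Random(7)
final_values = []
for _ in range(50_000):
    value = 1.0
    for _ in range(30):
        value *= 1 + rng.gauss(MEAN, STDEV)   # one random year
    final_values.append(value)
```

    Everything interesting about the output is baked into those two parameters before the simulation even starts.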

    ORP takes the opposite approach. Rather than assuming a model for the distribution of annual returns (such as a bell curve) and then fitting the real-world data to it, ORP says: forget about a model altogether. It throws all the real, historical annual returns into a black box and randomly selects returns out of that box. No parameters, just a grab bag of actual returns.

    (ORP tweaks this approach, but that's the general idea.)

    One can argue that this is more representative of the real world, or that because we're working with a finite sample of returns (i.e. just the relatively few that we know from history) this historical sample set may be misleading.

    ORP cites this paper by Jim Otar. He discusses the above as "Flaw #1" in MC.
    http://www.retirementoptimizer.com/articles/MCArticle.pdf

    You'll notice that I've said nothing about all the bells and whistles that different actual programs throw in. They may incorporate SS benefits, retirement age, etc. From a conceptual perspective, I don't find them especially interesting, because they are just fixed adjustments to inflow/outflow. Of course in the real world, they're very important if what you care about is the yes/no output: will I last 30 (or however many) years into retirement?