Identity Verified Thinker in Science / Social Sciences / Psychology
Michael Smithson
Michael Smithson is a Professor in the Psychology Department at The Australian National University. He has written 6 books, co-edited 3, and published more than 120 refereed articles and book chapters. His research interests focus on how people think about and respond to unknowns.



You Can Never Plan the Future by the Past

Jul. 2, 2011 8:00 pm

The title of this post is, of course, a famous quotation from Edmund Burke. This is a personal account of an attempt to find an appropriate substitute for such a plan. My siblings and I persuaded our parents that the best option for financing their long-term in-home care is a reverse mortgage. At first glance, the problem seems fairly well-structured: Choose the best reverse-mortgage setup for my elderly parents. After all, this is the kind of problem for which economists and actuaries claim to have appropriate methods.

There are two viable strategies for utilizing the loan from a reverse mortgage: Take out a line of credit from which my parents can draw as they wish, or a tenured (fixed) schedule of monthly payments to their nominated savings account. The line of credit (LOC) option’s main attraction is its flexibility. However, the LOC runs out when the equity in my parents’ property is exhausted, whereas the tenured payments (TP) continue as long as they live in their home. So if either of them is sufficiently long-lived then the TP could be the safer option. On the other hand, the LOC may be more robust against unexpected expenses (e.g., medical emergencies or house repairs). Of course, one can opt for a mixture of TP and LOC.

So, this sounds like a standard optimization problem: What’s the optimal mix of TP and LOC? Here we run into the first hurdle: “Optimal” by what criteria? One criterion is to maximize the expected remaining equity in the property. This criterion might appeal to their offspring, but it doesn’t do my parents much good. Another criterion, one that should appeal to my parents, is maximizing the expected funds available to them. Fortunately, my siblings and I are more concerned for our parents’ welfare than with what we’d get from the equity, so we’re happy to go with the second criterion. Nevertheless, this issue poses a deeper problem in general: How would a family with interests in both criteria come up with an appropriate weighting for each, especially if family members disagreed about their importance?

Having settled on an optimization criterion, the next step would seem to be computing the expected payout to my parents for various mixtures of TP and LOC. But wait a minute. Surely we should also worry about the possibility that some financial exigency could exhaust their funds altogether. So we could arguably consider a third criterion: Minimizing the probability of their running out of funds. Now we encounter a second hurdle: How do we weigh maximizing the expected payout to our parents against the likelihood that their funds could run out? It might seem that maximizing payout would also minimize that probability, but this is not necessarily so. A strategy that maximized expected payout could also increase the variability of the available funds over time, so that the probability of ruin is increased.
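To see why, consider a toy Monte Carlo sketch. All figures here (starting balance, spending, income draws) are hypothetical, not my parents' numbers: strategy B has a higher expected income draw than strategy A but is far more variable, and that variability alone raises the chance of the balance ever hitting zero.

```python
import random
import statistics

def run_strategy(income_mean, income_sd, start=100_000, years=20,
                 spending=12_000, n_sims=10_000, seed=1):
    """Each year the balance gains a random income draw and loses fixed
    spending. Returns (mean final balance, probability of ruin), where
    ruin means the balance hit zero at some point."""
    rng = random.Random(seed)
    finals, ruined_count = [], 0
    for _ in range(n_sims):
        balance, ruined = float(start), False
        for _ in range(years):
            balance += rng.gauss(income_mean, income_sd) - spending
            if balance <= 0:
                ruined = True
                break
        finals.append(max(balance, 0.0))
        ruined_count += ruined
    return statistics.mean(finals), ruined_count / n_sims

# Strategy A: steady income. Strategy B: higher expected income, far more variable.
mean_a, ruin_a = run_strategy(income_mean=13_000, income_sd=1_000)
mean_b, ruin_b = run_strategy(income_mean=14_000, income_sd=15_000)
```

With these (made-up) parameters, B ends with the higher mean balance yet is the only strategy with an appreciable probability of ruin: higher expected payout and higher ruin risk at the same time.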

Then there are the unknowns: How long our parents might live, what expenses they might incur (e.g., medical or in-home care), inflation, the behaviour of the LIBOR index that determines the interest rate on what is drawn down from the mortgage, and appreciation or depreciation of the property value. It is possible to come up with plausible-looking models for each of these using standard statistical tools, and that’s exactly what I did.

I pulled down life-expectancy tables for American men and women born when my parents were born, more than two decades of monthly data on inflation in the USA, a similar amount of monthly data on the LIBOR, and likewise for real-estate values in the area where my parents live. I fitted several “lifetime” distributions to the relevant parts of the life-expectancy tables to model the probability of my parents living 1, 2, 3, … years longer, given that they have survived to their mid-80s, and arrived at a model that fitted the data very well. I modeled the inflation, LIBOR and real-estate data with standard time-series (ARIMA) models whose squared correlations with the data were .91, .98, and .91 respectively: all very good fits.
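The conditional survival probabilities that such a lifetime model must reproduce come straight out of a life table. A minimal sketch, with hypothetical survivor counts standing in for the real cohort figures:

```python
# Hypothetical survivor counts l_x ("number still alive at age x"),
# standing in for the real cohort life-table figures.
survivors = {85: 1000, 86: 920, 87: 830, 88: 730, 89: 620, 90: 510,
             91: 400, 92: 300, 93: 210, 94: 130, 95: 70, 96: 25, 97: 0}

def conditional_death_probs(l_x, current_age):
    """P(death in year current_age + k | alive at current_age), k = 0, 1, ..."""
    alive_now = l_x[current_age]
    ages = sorted(a for a in l_x if a >= current_age)
    return [(l_x[a] - l_x[a + 1]) / alive_now for a in ages[:-1]]

# Conditional on having survived to 85; the terms telescope, so they sum to 1.
probs = conditional_death_probs(survivors, 85)
```

A fitted lifetime distribution is essentially a smooth curve through these conditional probabilities, which can then be sampled inside a simulation.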

Finally, my brothers and sisters-in-law obtained the necessary information from my mother regarding our parents’ expenses in the recent past, their income from pensions and so on, and we made some reasonable forecasts of additional expenses that we can foresee in the near term. The transition in this post from “I” to “we” is crucial. This was very much a joint effort. In particular, my youngest brother’s sister-in-law made most of the running on determining the ins and outs of reverse mortgages. She has a terrifically analytical intelligence, and we were able to cross-check one another’s perceptions, intuitions, and calculations.

Armed with all of this information and well-fitted models, it would seem that all we should need to do is run a large enough batch of simulations of the future for each reverse-mortgage scenario under consideration to get reliable estimates of expected payout, expected equity, the probability of ruin, and so on. The inflation model would simulate fluctuations in expenses, the LIBOR model would do so for the interest-rates, the real-estate model for the property value, and the life-expectancy model for how long our parents would live.
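A stripped-down version of such a simulation batch might look like the following. Everything here is a stand-in: the starting property value, lending limit, expense shortfall, and the simple random walks for inflation, interest, and property value are hypothetical placeholders for the fitted ARIMA and lifetime models, not the actual calculations we used.

```python
import random

def simulate_scenario(tp_fraction, n_sims=5_000, seed=2):
    """Sketch of one scenario's simulation batch. `tp_fraction` of the
    principal limit funds tenured payments (annualised here); the rest sits
    in a line of credit. Returns (expected total payout, expected remaining
    equity, estimated probability of ruin)."""
    rng = random.Random(seed)
    payouts, equities, ruin_count = [], [], 0
    for _ in range(n_sims):
        home_value = 400_000.0                          # hypothetical starting value
        principal_limit = 0.6 * home_value              # hypothetical lending limit
        tp_annual = tp_fraction * principal_limit / 15  # spread over ~15 years
        loc_left = (1 - tp_fraction) * principal_limit
        loan_balance, paid_out = 0.0, 0.0
        expenses = 12_000.0                  # hypothetical net annual shortfall
        rate = 0.05
        years_alive = max(1, int(rng.gauss(8, 4)))      # stand-in lifetime draw
        ruined = False
        for _ in range(years_alive):
            rate = min(max(rate + rng.gauss(0.0, 0.01), 0.01), 0.12)  # interest walk
            expenses *= 1 + rng.gauss(0.03, 0.015)                    # inflation walk
            home_value *= 1 + rng.gauss(0.02, 0.08)                   # property walk
            need = expenses
            if rng.random() < 0.1:                      # random "shock" expenditure
                need += rng.uniform(5_000, 75_000)
            tp_draw = min(tp_annual, need)   # tenured payments continue while alive
            loc_draw = min(loc_left, need - tp_draw)    # the LOC can run dry
            loc_left -= loc_draw
            paid_out += tp_draw + loc_draw
            loan_balance = (loan_balance + tp_draw + loc_draw) * (1 + rate)
            if need - tp_draw - loc_draw > 1e-9:        # couldn't cover expenses
                ruined = True
                break
        payouts.append(paid_out)
        equities.append(max(home_value - loan_balance, 0.0))
        ruin_count += ruined
    return (sum(payouts) / n_sims,
            sum(equities) / n_sims,
            ruin_count / n_sims)

# Compare a pure line of credit with a pure tenured-payment plan.
payout_loc, equity_loc, ruin_loc = simulate_scenario(tp_fraction=0.0)
payout_tp, equity_tp, ruin_tp = simulate_scenario(tp_fraction=1.0)
```

Each call is, in effect, a batch of thousands of one-path "spreadsheets"; sweeping `tp_fraction` over a grid yields the expected payout, expected equity, and ruin probability for each candidate mix.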

But there are at least two flaws in my approach. First, it assumes that my parents’ life-spans can best be estimated by considering them as if they are randomly chosen from the population of American men and women born when they were born who have survived to their mid-80’s. Should I take additional characteristics about them into account and base my estimates on only those who share those characteristics as well as their nation and birth-year? What about diet, or body-mass index, or various aspects of their medical histories? This issue is known as the reference-class problem, and it bedevils every school of statistical inference.

What did I do about this? I fudged my life-expectancy model to be “conservative,” i.e., so that it assumes my parents have somewhat longer life-spans than the original model suggests. In short, I tweaked my model as a risk-averse agent would: the longer my parents live, the greater the risk that they will run short of funds.

The second flaw in my approach is more fundamental. It assumes that the future is going to be just like the past. And before anyone says anything, yes, I’ve read Taleb’s The Black Swan (and was aware of most of the material he covered before reading his book), and yes, I’m aware of most criticisms that have been raised against the kind of models I’ve constructed. The most problematic assumption in my models is what is called stationarity, i.e., that the process driving the ups and downs of, say, the LIBOR index has stable characteristics. There were clear indications that the real-estate market fluctuations in the area where my parents live do not resemble a stationary process, and therefore I should not trust my ARIMA model very much despite its high correlation with the data.
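One quick, informal symptom of non-stationarity is that the level of the series shifts between its earlier and later stretches. A sketch of such a check, run here on toy series rather than the actual real-estate data:

```python
def halves_shift(series):
    """Crude stationarity check: how far apart are the means of the first
    and second halves of the series, measured in overall standard
    deviations? A stationary process should show a small shift; a
    drifting one, a large shift."""
    n = len(series) // 2
    first, second = series[:n], series[n:2 * n]
    mean = sum(series) / len(series)
    sd = (sum((x - mean) ** 2 for x in series) / len(series)) ** 0.5 or 1.0
    return abs(sum(first) / n - sum(second) / n) / sd

# A steadily drifting series (like a trending property market) versus a
# series that fluctuates around a fixed level.
drifting = [float(i) for i in range(100)]
level_bound = [1.0 if i % 2 == 0 else -1.0 for i in range(100)]
```

This is only a diagnostic heuristic; formal unit-root tests exist, but even this crude check flags a drifting series, and a high in-sample correlation for an ARIMA fit offers no protection against such drift changing character in the future.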

Let me also point out the difference between my approach and the materials provided to us by potential lenders and the HUD counsellor. Their scenarios and forecasts are one-shot spreadsheets that don’t simulate my parents’ expenses, the impact of inflation, or fluctuations in real-estate markets. Indeed, the standard assumption about the latter in their spreadsheets is a constant appreciation in property value of 4% per year.

My simulations are equivalent to 10,000 spreadsheets for each scenario, each spreadsheet an appropriate random sample from an uncertain future, and capable of being tweaked to include possibilities such as substantial real-estate downturns. I also incorporated random “shock” expenditures on the order of $5K-$75K to see how vulnerable each scenario was to unexpected expenses.

The upshot of all this was that the mix of LOC and TP had a substantial effect on the probability of running out of money, but not a large impact on expected balance or equity (the other factors had large impacts on those). So at least we could home in on a robust mix of LOC and TP, one that would have a lower risk of running out of money than others. This criterion became the primary driver in our choice. We also can monitor how our parents’ situation evolves and revise the mix if necessary.

What about maximizing expected utility? Or optimizing in any sense of the term? No, and no. The deep unknowns inherent even in this relatively well-structured problem make those unattainable goals. What can we do instead? Taleb’s advice is to pay attention to consequences instead of probabilities. This is known as “dominance reasoning.” If option A yields better outcomes than option B no matter what the probabilities of those outcomes are, choose option A. But life often isn’t that simple. We can’t do that here because the comparative outcomes of alternative mixtures of LOC and TP depend on probabilities.
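Dominance itself is easy to state in code; the difficulty is that real options rarely satisfy it. A small sketch with hypothetical payoffs:

```python
def dominates(a, b):
    """True if option `a` pays at least as much as `b` in every state of
    the world and strictly more in at least one. No probabilities over
    the states are needed for this comparison."""
    pairs = list(zip(a, b))
    return all(x >= y for x, y in pairs) and any(x > y for x, y in pairs)

# Payoffs across three hypothetical states of the world.
option_a = [120, 90, 60]
option_b = [100, 90, 50]   # A dominates B: choose A whatever the probabilities
option_c = [150, 40, 70]   # neither A nor C dominates: probabilities now matter
```

Comparing A and C shows the usual situation: each option is better in some state, so choosing between them requires weighing how likely the states are, which is exactly what dominance reasoning was supposed to spare us.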

Instead, we have ended up closer to the “bounded rationality” that Herbert Simon wrote about. We can’t claim to have optimized, but we do have robustness and corrigibility on our side, two important criteria for good decision making under ignorance (described in my recent post on that topic). Perhaps most importantly, the simulations gave us insights none of our intuitions could into how variable the future can be and the consequences of that variability. Burke was right. We can’t plan the future by the past. But sometimes we can chart a steerable course into that future, armed with a few clues from the past to give us an honest check on our intuitions, and a generous measure of scepticism about relying too much on those clues.

Mike Sutton
July 15, 2011 at 8:43 am
A paradox?

Hi Michael

An interesting blog as usual.

I suspect Taleb's get-out clause for your problem would be to insure against losses. But I'm not sure that is a good, cost-effective investment strategy, or whether it's even possible.

On the theme of the fallacy of induction, a thought occurred to me on reading Taleb's essential ode to Popper. Namely: Karl Popper's fallacy of induction is in actual fact derived from what has happened in the past, in terms of what the past teaches us about the foolishness of relying on the past to predict the future.

Is that not a paradox? Is the fallacy of induction not in fact based on inductive logic itself? After all, might we not anticipate that one day we will improve our abilities to compute 'known unknowns' and 'unknown unknowns' in order to use the past to accurately predict the future?

Perhaps quantum computing will serendipitously provide the key? If so, then is it right to say that Taleb is so ironically blinded by his own Black Swan obsession that he has missed this paradox?

On another point, Taleb would advise you to invest some of the resources for your parents' future (that you can afford to lose) on the chance of capitalising on an unimaginable Black Swan event. In this case current stem-cell research and gene-replacement therapy might soon prolong your parents' lives by 30 years, during which time new breakthroughs might add another 30, etc. The way things are going, immortality might not be what the past would forecast. For example, see:

I've been wrestling with these ideas for the past couple of weeks.

Any thoughts or knowledge you might have on the notion of Popper’s fallacy of induction being a paradox would be very much appreciated.


Thinker's Post
Michael Smithson
July 16, 2011 at 3:40 am

Thanks for your interesting comments and suggestions. The Popperian view of induction is pretty interesting, and apparently has been controversial. I think what you're getting at is the possibility that his argument is circular. As Hume and followers of his philosophy have pointed out, an inductive justification of induction would be circular. So, even if for all recorded time induction had worked infallibly, we could not use that evidence to conclude "therefore, induction always works", because that conclusion would itself be based on induction.

I'm sure Popper knew about this issue. He didn't regard his principle of falsificationism as an inductive principle. Instead, he regarded falsification as a matter of testing hypotheses by deducing predictions from them and then rejecting those hypotheses whose predictions turned out to be wrong. Given falsification, Popper declared that in doing science there is no need to bring induction into it at all. Instead, an hypothesis survives (but never is confirmed) until a counter-example is found, whereupon it is falsified. For Popper, I think, the hypothesis that "induction always works" was a falsified hypothesis. If instead he had claimed on the basis of observing lots of failures of induction that "induction never works," then that would be circular reasoning. The Edmund Burke quotation has a flavour of that.

Hume's arguments about establishing cause amount to saying that induction cannot be justified. Since Hume, various philosophers have sought to justify induction on grounds of reasoning, probabilistic inference, and other criteria for rationality. Most of the arguments I have any familiarity with try to specify the conditions under which induction is a good (if not certain) bet. One of the trickier aspects of all this is that there are multiple forms of induction (and meta-inductive arguments as well), so I find that if I'm reading a paper on induction I have to begin by ascertaining which kind(s) the authors are writing about: Enumerative, singular predictive, probabilistic, ... But I'm sure there are plenty of philosophers out there better able than I to comment on your idea.


Mike Sutton
July 16, 2011 at 12:20 pm

Many thanks Michael. That is most useful: historical and more detailed background to the issue.

Essentially, yes I agree, Popper certainly falsified the induction always works idea.

What I am getting at is related to how (for example) Popper's black swan event example has been developed by writers such as Taleb.

My point is that Taleb (among others) does not allow for the possibility that his entire thesis would be falsified if a mega black swan event came along (e.g. progress in quantum computing) and made it possible to accurately predict the future (including black swan events) in the affairs of man by analyzing past events.

In other words Taleb has, ironically, failed to imagine the possibility that one day social scientists might be able to predict the social future by analyzing past events. Perhaps such an imaginable, yet 'improbable' outcome should be named a rainbow-coloured swan with a finny-anny border? Who knows, one day we just might discover or make one and it will be the Anti-Black Swan.

Thinker's Post
Michael Smithson
July 16, 2011 at 9:43 pm

I'm a quantum-duffer too, so I have no idea whether the emergence of an ability to predict even Black Swan events is "imaginable yet improbable" or "imaginable yet impossible." Any process that has a genuinely random component would seem to be unpredictable in principle (unless one has a way of peering into the future itself), but it is also quite difficult to be certain about whether processes that pass tests for randomness really are random after all.

Given my ignorance about quantum physics, I can't assign probability 0 to your hypothetical event; at best I can assign a lower probability of 0 and a nonzero but very small upper probability :-) So, by my own lights, you may have a point.

Dennis Lendrem
July 4, 2011 at 2:03 pm
You Can Never...

Plan The Future From The Past, But... you'd be a mug to plan the future and ignore it!

All models are wrong, but some are more useful than others.

- George Box.

I've been following and enjoying your blog!

Thinker's Post
Michael Smithson
July 4, 2011 at 7:27 pm

Thanks, Dennis, both for your comment and reminding me of the George Box aphorism. My post wasn't intended to be anti-planning, and my family and I certainly found my modeling exercises to be useful. What did strike me, though, was how even this relatively well-structured problem had uncertainties lurking in it that made standard prescriptions such as maximizing expected utility so difficult to achieve.

During that planning process I was reminded of an anecdote: when faced with the decision of whether to take a job offer from a competing university, a decision theorist was heard to complain about how complex this decision was. Colleagues asked him why he didn’t just make his decision according to his own prescriptions: choose a prior, add up the probability-weighted utilities associated with each of his options, and choose according to the criterion of maximum expected utility. The decision theorist replied, “Come on, this is serious!” (Gigerenzer, 2004, p. 62).

Reference: Gigerenzer, Gerd, “Fast and Frugal Heuristics: The Tools of Bounded Rationality,” in D.J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making. (Oxford: Blackwell, 2004, pp. 62-88).
