It is January again, time for the annual round of predictions. Most headlines talk about what “will” happen. Whatever happened to the word “may”? In an age when we crave certainty more than ever, these predictions give people “facts” to hold on to and create “truths” when none exist.
Mathematical models have never been as widely discussed as in 2021, and many seem to expect “the science” or “the modellers” to give us all the answers. Yes, if enough people make predictions, some will inevitably be right. However, hindsight can create an illusion of knowledge; witness all the articles explaining with confidence, and in detail, exactly why something happened, but only after the event.
Beware of survivorship bias: the error of looking only at the successful. The philosopher and author Nassim Taleb uses Casanova as an example: how many would-be Casanovas failed? How many predictive “experts” eventually fall from grace? Even Warren Buffett has made poor investment decisions.
I am sure most financial modellers will agree that there is no crystal ball. Why, then, do financial modellers tell me about their “inherent circularities” or insist that they need copy-paste iterative macros? Crystal-ball gazing is usually the cause of these common modelling problems. Of course, models predict, but they can never know.
There is a subtle difference between predicting the future (the possible) and knowing the future (the definite). Being clear about this distinction is a crucial modelling skill: building any forward-looking definite knowledge into a model is an error of logic. There are no crystal balls. Circularity does not exist. And breaking a circularity with a copy-paste iterative macro does not solve the underlying logic problem.
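To make the point concrete, here is a minimal sketch of one classic “circularity” from project-finance models: a financing fee charged on total debt drawn, where the fee itself is funded by debt. The 2% fee rate and the funding amount are hypothetical figures of my own, not from this article. Spreadsheets often “solve” this with iterative recalculation or a copy-paste macro, but simple algebra removes the circularity entirely:

```python
# A hypothetical example (the numbers are illustrative, not from the article):
# a financing fee of 2% is paid on total debt drawn, and the fee itself is
# funded by debt, so:
#     debt = funding_need + fee_rate * debt
# Rearranging gives a closed form with no circular reference:
#     debt = funding_need / (1 - fee_rate)

FEE_RATE = 0.02
FUNDING_NEED = 1_000_000.0  # illustrative figure


def debt_algebraic(funding_need: float, fee_rate: float) -> float:
    """Closed-form solution: no iteration, no circularity."""
    return funding_need / (1 - fee_rate)


def debt_iterative(funding_need: float, fee_rate: float, tol: float = 1e-9) -> float:
    """Fixed-point iteration mimicking a spreadsheet's iterative recalc."""
    debt = funding_need  # initial guess: ignore the fee
    while True:
        new_debt = funding_need + fee_rate * debt
        if abs(new_debt - debt) < tol:
            return new_debt
        debt = new_debt


if __name__ == "__main__":
    print(f"algebraic: {debt_algebraic(FUNDING_NEED, FEE_RATE):,.2f}")
    print(f"iterative: {debt_iterative(FUNDING_NEED, FEE_RATE):,.2f}")
```

Both routes land on the same answer, which is the point: the “circularity” was never real, only an equation left unrearranged.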
Whether you are reading headlines or building a financial model, remember that the word “will” should often be replaced by the word “may”. As the statistician George Box wrote: “All models are wrong, but some are useful”.