When we make decisions we nearly always do so in the context of something or other. In fact, about the only time we're asked to make contextless choices is in academic exams and laboratory-based psychology experiments. As these are the two most familiar situations faced by the academics generating the theories that underpin most of modern finance, we shouldn't be awfully surprised if their great ideas are somewhat lacking in any understanding of … well, anything, really.

Being trained to think logically and probabilistically is a necessary part of being a modern economist, but it's hardly a requirement for most people in most professions most of the time. You don't find many baristas making Bayesian inferences about which particular coffee to pour next. We clearly don't rationalise most decisions; we make them quickly and effortlessly.
We don't optimise, we satisfice.
Satisficing is a concept introduced by Herbert Simon, who argued that people don't attempt to solve problems and make judgements in an optimal manner, but merely in a way that is sufficient and satisfying – hence, satisficing. This approach contrasts with both classical economics and the mainstream of behavioural finance, each of which sets up a norm of rational behaviour against which to measure people. In the former case the expectation is that people will act in accordance with these principles; in the latter, that they will try, fail and, in so doing, generate predictable irrationality: so-called behavioural biases.
Now while it's undoubtedly true that the behavioural approach has taken us away from a standard of rational behaviour that not even the most anally-retentive classical economist could hope to match, it also seems that the underlying focus on some goal of rationality against which we're to be measured has generated its own biases. Some of the results of this research program simply don't make sense when translated to the real world, and increasingly it looks as though these deviations occur because there are errors at the heart of behavioural finance.
Simon's idea wasn't just that we weren't achieving some great goal of rationality but that we weren't even bothering to try, thus rendering a large part of the research program of behavioural finance irrelevant. After all, if you consistently score people against the rules of snap when they're playing bridge, you're going to get some odd results. Those results may even be consistently strange, but consistency doesn't make them any less irrelevant.
Choosing Toasters and Stocks
Satisficing suggests that rather than trying to meet some optimal goal we're mostly quite satisfied if we achieve one that's just good enough. If we want to buy a new toaster we generally have a few basic rules about what we're looking for: mainly that it crisps bread effectively, is reasonably cheap and occupies less than 4% of our available kitchen workspace. Most of us don't go around ferociously weighing up all the myriad features of toasters in order to make a choice.
Generally something similar happens when we go stockpicking. We're looking, roughly, for the right sort of stock that fits our criteria. We may even plug these criteria into some kind of stock filter and do some basic analysis of what we find. Mostly, though, we don't hunt around forever because we're limited in time and processing power. Mostly we simply make do with something that looks good enough.
Good-enough satisficing is at the heart of Gerd Gigerenzer's implementation of Simon's ideas: if Simon was right, then relatively simple satisficing algorithms should generate the kinds of decisions we see humans make. There's reason to doubt this will work – after all, we make some pretty complex decisions in real life, so the idea that some almost trivial set of checking criteria will lead to even halfway logical decisions seems unlikely, compared to the sophisticated statistical analysis most of economics would expect the process to require.
Of course, Gigerenzer found pretty much the opposite of what common sense would indicate: common sense is actually pretty good at making halfway intelligent decisions. His classic experiment with Daniel Goldstein involved asking participants questions like: which city has the larger population, Hamburg or Cologne?
Take the Best
The researchers postulated a way of making inferences from memory, a so-called probabilistic mental model. The model assumes that people have only limited information about the task in hand – in this case, German cities – and use it to make best guesses about which city has the larger population. The main algorithm they used is called "Take the Best", which works by scanning memory for anything relevant to the task in question – do we know anything about Hamburg or Cologne? – and then deciding whether the information that first comes to mind discriminates between the options. If it does, it's used to make the decision; if not, we pass on and look for the next cue.
In the case of the German cities the experimenters came up with a plausible list of bits of information someone might hold – whether it's a university town, has a major soccer team, is the capital, etc. Running their algorithm against various partial bits of knowledge, they successfully generated high rates of correct decisions, comparable to those produced by more traditional decision-making processes.
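A minimal sketch of how Take the Best might look in code, assuming a hand-picked cue ordering and invented cue values – these are illustrative guesses, not the measured data from the original study, and for brevity every cue value is treated as known:

```python
# Cues in (assumed) order of validity. The 1/0 values below are invented
# for illustration - they are not the data from the original experiment.
CUES = ["is_capital", "has_top_soccer_team", "is_university_town"]

CITIES = {
    "Hamburg": {"is_capital": 0, "has_top_soccer_team": 1, "is_university_town": 1},
    "Cologne": {"is_capital": 0, "has_top_soccer_team": 0, "is_university_town": 1},
}

def take_the_best(a, b):
    """Decide which city is larger using the first cue that discriminates."""
    for cue in CUES:
        va, vb = CITIES[a][cue], CITIES[b][cue]
        if va == vb:
            continue  # cue doesn't discriminate; pass on to the next one
        return a if va > vb else b  # commit to this cue and stop searching
    return "guess"  # no cue discriminates: fall back to guessing

print(take_the_best("Hamburg", "Cologne"))  # -> Hamburg (via the soccer cue)
```

Note the stopping rule: the algorithm commits to the first discriminating cue and never weighs cues against one another, which is what makes it "fast and frugal".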
Predicting Irrationality, Darwin-Style
This approach also makes an interesting and completely counterintuitive prediction. Under classical theories, the more knowledge you have the better your success rate should be. Take the Best, however, predicts that beyond a point your success rate will decline as you gain more knowledge. The reason is that the cues used to make the predictions don't correlate perfectly with the target concept – not all larger cities have soccer teams, for example – so the more information you have, the more chances the algorithm has to betray you. When the researchers tried this out in real life, that's exactly what they found: German students were better at judging which of two American cities had the larger population than they were with cities in their own country – the so-called less-is-more effect.
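The non-monotonic prediction falls straight out of the accuracy formula Goldstein and Gigerenzer derived for the recognition heuristic. The numbers below – recognition validity, knowledge validity and the size of the reference class – are assumptions for illustration; it's the shape of the curve, not the exact values, that matters:

```python
from math import comb

N = 100       # size of the reference class (e.g. number of cities)
ALPHA = 0.8   # recognition validity: P(the recognised city really is larger)
BETA = 0.6    # knowledge validity: accuracy when both cities are recognised

def accuracy(n):
    """Expected proportion correct when n of the N cities are recognised."""
    pairs = comb(N, 2)
    p_none = comb(N - n, 2) / pairs   # neither recognised -> pure guess (0.5)
    p_one = n * (N - n) / pairs       # exactly one recognised -> recognition cue
    p_both = comb(n, 2) / pairs       # both recognised -> fall back on knowledge
    return 0.5 * p_none + ALPHA * p_one + BETA * p_both

# Recognising every city scores worse than recognising only some of them:
print(accuracy(100))             # full knowledge: 0.6
print(round(accuracy(60), 3))    # partial knowledge beats it
```

As long as recognition is a better cue than the rest of your knowledge (ALPHA > BETA), accuracy peaks at partial recognition – the less-is-more effect in miniature.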
Moreover, using the same approach the researchers were able to explain the strange appearance, disappearance and inversion of the overconfidence bias seen in various experimental situations, depending on whether we're asked how confident we are in a single decision or how often we were correct across many. This is because behind the approach is the idea that we carry a toolkit of these satisficing heuristics, applied as circumstances demand – and different situations invoke different heuristics.
The key is that these heuristics are adaptive: they're constructed for different tasks and they utilise information in the environment – so-called "ecological rationality" – in order to ensure accuracy. So, at root, satisficing is a Darwinian theory, assuming that complexity in decision making arises out of adaptive principles. As Todd and Gigerenzer suggest:
"The adaptive toolbox is inspired by a Darwinian vision of decision making in humans, animals, and artificial agents. First, just as evolution does not follow a grand plan but results in a patchwork of solutions for specific problems, so the toolbox is structured as a collection of mechanisms that each do a particular job. Second, just as evolution produces adaptations that are bound to their particular context, the heuristics in the adaptive toolbox are not good or bad, rational or irrational, per se, but only relative to a particular environment. From these two features springs the potential power of simple heuristics: They can perform astonishingly well when used in a suitable environment."
Kick the Bricks
There's a pleasing "ecological rationality" to the idea that most human decision making works through satisficing heuristics rather than internalised statistical reasoning. If this idea is correct it kicks a few bricks out of the foundations of most of the existing approaches to understanding human behaviour in financial systems. It also holds out some hope that the mish-mash of confusing and contradictory evidence from existing research may actually be underpinned by some common theme.
We shouldn't hold out too much hope, though. If these ideas are correct we're dealing with an adaptive theory, so as soon as we get a handle on it it'll mutate; and, as we've seen, such theories are fiendishly difficult to disprove – falsifiability being a key criterion for scientific plausibility. Still, t'would be boring to know everything, don't ya think?