Last week, I wrote about superstition, inspired by an Isaac Asimov article called “Knock Plastic!” In revisiting that essay, I find that Asimov has collected six broad examples of what he calls “Security Beliefs.” They are called this because such beliefs are “so comforting and so productive of feelings of security” that all men employ them from time to time. Here they are:
- There exist supernatural forces that can be cajoled or forced into protecting mankind.
- There is no such thing, really, as death.
- There is some purpose to the Universe.
- Individuals have special powers that will enable them to get something for nothing.
- You are better than the next fellow.
- If anything goes wrong, it’s not one’s own fault.
I’ve been thinking a lot about these things, and the extent to which they manifest in my life. When asked to explain my actions (usually only to myself), I can usually come up with a pretty good reason for doing what I did. But did I really do it for that reason?
Last week, I also referenced this: “It seems that our brains are constantly formulating alternatives, and then rejecting most of them at the last instant.” What process do we use to reject the alternatives and eventually select the winner? I’d like to think it was something logical and rational, but that strikes me as something of a security belief in itself (or perhaps just a demonstration of Asimov’s 5th security belief).
When we refer to logic, we are usually referring to a definitive conclusion that can be inferred from the evidence at hand. Furthermore, this deductive process is highly objective and repeatable, meaning that multiple people working under the same rules with the same evidence should all get the same (correct) answer. Obviously, this is a very valuable process; mathematics, for instance, is based on deductive logic.
However, there are limits to this kind of logic, and there are many situations in which it does not apply. For example, we are rarely in possession of all the evidence necessary to come to a logical conclusion. In such cases, decisions are often required, and we must fall back on some other form of reasoning, usually referred to as induction. Induction is based on a set of heuristics, or guidelines, which we have all been maintaining during the course of our lives. We produce this set of guidelines by extrapolating from our experiences, and by sharing our observations. Unlike deductive logic, this process appears to be innate, or at the very least, something that we are bred to do. It also appears to be very useful, as it allows us to operate in situations which we do not understand. We won’t exactly know why we’re acting the way we are, just that our past experience has shown that acting that way is good. It is almost a non-thinking process, and we all do it constantly.
The problem with this process is that it is inherently subjective and not always accurate. It is extremely useful, but it doesn’t invariably produce the desired results. Superstitions are actually heuristics, albeit generally false ones. But they arise because producing such explanations is a necessary part of our lives. We cannot explain everything we see, and since we often need to act on what we see, we must rely on less than perfect heuristics and processes.
Like it or not, most of what we do is guided by these imperfect processes. Strangely, these non-thinking processes work exceedingly well; so much so that we are rarely inclined to think that there is anything “wrong” with our behavior. I recently stumbled upon this, by Dave Rodgers:
Most of the time, people have little real idea why they do the things they do. They just do them. Mostly the reasons why have to do with emotions and feelings, and little to nothing to do with logic or reason. Those emotions and feelings are the products of complex interactions between certain hardwired behaviors and perceptual receivers; a set of beliefs that are cognitively accessible, but most often function below the level of consciousness in conjunction with the more genetically fixed apparatus mentioned before; and certain habits of behavior which are also usually unconscious. …
If we’re asked “why” we did something, most of the time we’ll be able to craft what appears to be a perfectly rational explanation. That explanation will almost invariably involve making assertions that cast ourselves in the best light. That is to say, among the set of possible explanations, we will choose the ones that make us feel best about ourselves. Some people have physical or mental deficiencies that cause them to make the opposite choice, but similar errors occur in either case. The explanation will not rely on the best available evidence, but instead will rely on ambiguous or incomplete information that is difficult to thoroughly refute, or false information which is nevertheless contained within the accepted set of shared beliefs, and which allows us to feel as good or bad about ourselves as we feel is normal.
Dave seems to think that the processes I’m referring to are “emotional” and “feeling” based but I am not sure that is so. Extrapolating from a set of heuristics doesn’t seem like an emotional process to me, but at this point we reach a rather pedantic discussion of what “emotion” really is.
The point here is that our actions aren’t always perfectly reasonable or rational, and that is not necessarily a bad thing. If we could not act unless we could reach a logical conclusion, we would do very little. We do things because they work, not necessarily because we reasoned that they would work before we did them. Afterwards, we justify our actions, and store away any learned heuristics for future use (or modify existing ones to account for the new data). Most of the time, this process works. However, these heuristics will fail from time to time as well. When you’re me, rooting for a sports team or betting modest amounts of money on a race, failure doesn’t mean much. In other situations, however, failure is not so benign. Yet, despite the repercussions, failure is still inevitable and necessary in these situations. In the case of war, for instance, this can be indeed difficult and heartbreaking, but no less necessary. [thanks to Jonathon Delacour for the Dave Rodgers post]
Mark,
When you get the chance, you should read Dr. Antonio Damasio’s books, especially Descartes’ Error.
Your comment on heuristics is correct, but those heuristic models are coded in what Damasio calls “dispositional representations,” which are essentially “maps” of how certain situations make us “feel.” The story of Phineas Gage, the 19th-century railroad worker who had an iron rod driven through his brain and survived, is a fascinating glimpse at how our brains function. The part of his brain where the maps were stored or processed was destroyed, and he was incapable of making a good decision. Since Gage, there have been many, many more people with brain injuries to the same region with similar effects. We simply lack the cognitive resources to constantly be developing logical reasoning in real time for the myriad situations we confront. We can, and do, use logic, but only in a relatively limited number of circumstances.
Finally, there is also the issue of “homeostasis.” By that I mean the brain/mind comes to embrace a certain state as “normal” and works to preserve that state as a regulatory mechanism, even though we’re not consciously aware of it. This is observed by psychologists in families of alcoholics, when the alcoholic begins to undertake some sort of therapy. In many cases, another family member will act out in order to create the kind of disruption and stress within the family that was “normal” when the alcoholic was drinking.
While this is largely unconscious behavior, it is cognitively accessible if it can be pointed out and recognized, and much of the time it can be. Cognitive therapy is often effective on these kinds of behaviors.
One of the great difficulties we face is this rather large “blind spot” we have with regard to our own behaviors and thinking. There are other behaviors that are not especially “rational,” but if I pointed them out to you, you would be able to construct seemingly rational explanations for them. We might wish to look to the cases of prisoner abuse in Iraq, or if that’s too controversial, just look at the Stanford Prison Experiment or Milgram’s authority experiments. Look at the prevalence of ideological thinking in public discourse. Look at TV talk shows where people endure ritual humiliation – or even American Idol for that matter.
Yes, we probably can add a seventh belief to Asimov’s security beliefs – the belief that humans are rational animals, or that this interior “voice” is what’s in charge of our lives. It ain’t necessarily so.