Intelligence analysis is plagued by many of the cognitive traps also encountered in other disciplines. The first systematic study of the specific pitfalls lying between an intelligence analyst and clear thinking, "Psychology of Intelligence Analysis", was carried out by Dick Heuer in 1999. According to Heuer, these "cognitive traps for intelligence analysis" may be rooted either in the analyst's organizational culture or in his or her own personality.
The most common personality trap, known as mirror-imaging, is the analyst's assumption that the people being studied think like the analyst does. An important variation is to confuse actual subjects with one's information or images about them, much as the apple one eats differs from the ideas and issues it may raise. This poses a dilemma for the scientific method in general, since science uses information and theory to represent complex natural systems as if theoretical constructs might be in control of indefinable natural processes. An inability to distinguish subjects from what one is thinking about them is also studied under the heading of functional fixedness, first examined in Gestalt psychology and in relation to the subject–object problem.
Experienced analysts may recognize that they have fallen prey to mirror-imaging if they discover that they are unwilling to examine variants of what they consider most reasonable in light of their personal frame of reference. Less-perceptive analysts affected by this trap may regard legitimate objections as a personal attack, rather than looking beyond ego to the merits of the question. Peer review (especially by people from a different background) can be a wise safeguard. Organizational culture can also create traps which render individual analysts unwilling to challenge acknowledged experts in the group.
See also: False consensus effect
Another trap, target fixation, has an analogy in aviation: it occurs when pilots become so intent on delivering their ordnance that they lose sight of the big picture and crash into the target. This is a more basic human tendency than many realize. Analysts may fixate on one hypothesis, looking only at evidence that is consistent with their preconceptions and ignoring other relevant views. The desire for rapid closure is another form of idea fixation.
"Familiarity with terrorist methods, repeated attacks against U.S. facilities overseas, combined with indications that the continental United States was at the top of the terrorist target list might have alerted us that we were in peril of a significant attack. And yet, for reasons those who study intelligence failure will find familiar, 9/11 fits very much into the norm of surprise caused by a breakdown of intelligence warning." The breakdown happened, in part, because there was poor information-sharing among analysts (in different FBI offices, for example). At a conceptual level, US intelligence knew that al-Qaeda actions almost always involve multiple, near-simultaneous attacks; however, the FBI did not assimilate piecemeal information on oddly behaving foreign flight-training students into this context.
On the day of the hijackings (under tremendous time pressure), no analyst associated the multiple hijackings with the multiple-attack signature of al-Qaeda. The failure to conceive that a major attack could occur within the US left the country unprepared. For example, irregularities detected by the Federal Aviation Administration and North American Air Defense Command did not flow into a center where analysts could consolidate this information and (ideally) collate it with earlier reports of odd behavior among certain pilot trainees, or the possibility of hijacked airliners being used as weapons.
Inappropriate analogies are yet another cognitive trap. Though analogies may be extremely useful they can become dangerous when forced, or when they are based on assumptions of cultural or contextual equivalence. Avoiding such analogies is difficult when analysts are merely unconscious of differences between their own context and that of others; it becomes extremely difficult when they are unaware that important knowledge is missing. Difficulties associated with admitting one's ignorance are an additional barrier to avoiding such traps. Such ignorance can take the form of insufficient study: a lack of factual information or understanding; an inability to mesh new facts with old; or a simple denial of conflicting facts.
Even extremely creative thinkers may find it difficult to gain support within their organization. Often more concerned with appearances, managers may suppress conflict born of creativity in favor of the status quo. A special case of stereotyping is stovepiping, whereby a group heavily invested in a particular collection technology ignores valid information from other sources (functional specialization). It was a Soviet tendency to value HUMINT (HUMan INTelligence), gathered from espionage, above all other sources; Soviet OSINT was forced to go outside the state intelligence organization to develop the USA (later USA–Canada) Institute of the Soviet Academy of Sciences.
Another specialization problem may come as a result of security compartmentalization. An analytic team with unique access to a source may overemphasize that source's significance. This can be a major problem with long-term HUMINT relationships, in which partners develop personal bonds.
Groups (like individual analysts) can also reject evidence which contradicts prior conclusions. When this happens it is often difficult to assess whether the inclusion of certain analysts in the group was the thoughtful application of deliberately contrarian "red teams", or the politicized insertion of ideologues to militate for a certain policy. Monopolization of the information flow (as caused by the latter) has also been termed "stovepiping", by analogy with intelligence-collection disciplines.
There are many levels at which one can misunderstand another culture, be it that of an organization or a country. One frequently encountered trap is the rational-actor hypothesis, which ascribes rational behavior to the other side, according to a definition of rationality from one's own culture.
The social anthropologist Edward T. Hall illustrated one such conflict with an example from the American Southwest. "Anglo" drivers became infuriated when "Hispanic" traffic police would cite them for going 1 mi/h over the speed limit, although a Hispanic judge would later dismiss the charge. "Hispanic" drivers, on the other hand, were convinced that "Anglo" judges were unfair because they would not take extenuating circumstances into account and dismiss charges.
Both cultures were rational with regard to law enforcement and the adjudication of charges; indeed, both believed that one of the two had to be flexible and the other had to be formal. However, in the Anglo culture, the police had discretion with regard to issuing speeding tickets, and the court was expected to stay within the letter of the law. In the Hispanic culture, the police were expected to be strict, but the courts would balance the situation. There was a fundamental misunderstanding; both sides were ethnocentric, and both incorrectly assumed the other culture was a mirror image of itself. In that example, denial of rationality was the result in both cultures, yet each was acting rationally within its own value set.
In a subsequent interview, Hall spoke widely about intercultural communication. He summed up years of study with this statement: "I spent years trying to figure out how to select people to go overseas. This is the secret. You have to know how to make a friend. And that is it!"
To make a friend, one has to understand the culture of the potential friend, one's own culture, and how things which are rational in one may not translate to the other. According to Hall:
If we can get away from theoretical paradigms and focus more on what is really going on with people, we will be doing well. I have two models that I used originally. One is the linguistics model, that is, descriptive linguistics. And the other one is animal behavior. Both involve paying close attention to what is happening right under our nose. There is no way to get answers unless you immerse yourself in a situation and pay close attention. From this, the validity and integrity of patterns is experienced. In other words, the pattern can live and become a part of you.
The main thing that marks my methodology is that I really do use myself as a control. I pay very close attention to myself, my feelings because then I have a base. And it is not intellectual.
Proportionality bias assumes that small things in one culture are small in every culture. In reality, cultures prioritize differently. In Western (especially Northern European) culture, time schedules are important, and being late can be a major discourtesy. Waiting one's turn is the cultural norm, and failing to stand in line is a cultural failing. "Honor killing" seems bizarre in some cultures but is an accepted part of others.
Even within a culture, however, individuals remain individual. Presumption of unitary action by organizations is another trap. In Japanese culture, the lines of authority are very clear, but the senior individual will also seek consensus. American negotiators may push for quick decisions, but the Japanese need to build consensus first; once it exists, they may execute it faster than Americans.
The analyst's country (or organization) is not identical to that of its opponent. One error is to mirror-image the opposition, assuming it will act as one's own country and culture would under the same circumstances. "It seemed inconceivable to the U.S. planners in 1941 that the Japanese would be so foolish as to attack a power whose resources so exceeded those of Japan, thus virtually guaranteeing defeat".
In like manner, no analyst in US Navy force protection conceived of an Arleigh Burke-class destroyer such as the USS Cole being attacked with a small suicide boat, much like those the Japanese planned to use extensively against invasion forces during World War II.
An opponent's cultural framework affects its approach to technology. That complicates the task of one's own analysts in assessing the opponent's resources and how they may be used, and in defining intelligence targets accordingly. Mirror-imaging, committing to a set of common assumptions rather than challenging those assumptions, has figured in numerous intelligence failures.
In the Pacific Theater of World War II, the Japanese seemed to believe that their language was so complex that even if their cryptosystems, such as the Type B Cipher Machine (Code Purple), were broken, outsiders would not really understand the content. That was not strictly true, but it was sufficiently true that there were cases in which even the intended recipients did not clearly understand the writer's intent.
On the other side, the US Navy assumed that ships anchored in the shallow waters of Pearl Harbor were safe from torpedo attack even though in 1940, at the Battle of Taranto, the British had made successful shallow-water torpedo attacks against Italian warships in harbor.
Even if intelligence services had credited the September 11, 2001 conspirators with the organizational capacity necessary to hijack four airliners simultaneously, no one would have suspected that the hijackers' weapon of choice would be the box cutter.
Likewise, the US Navy underestimated the danger of suicide boats in harbor and set rules of engagement that allowed an unidentified boat to sail into the USS Cole without being warned off or fired on. A Burke-class destroyer is one of the most powerful warships ever built, but US security policies did not protect the docked USS Cole.
Mirror-imaging can be a major problem for policymakers, as well as analysts. During the Vietnam War, Lyndon B. Johnson and Robert S. McNamara assumed that Ho Chi Minh would react to situations in the same manner as they would. Similarly, in the run-up to the Gulf War, there was a serious misapprehension, independent of politically motivated intelligence manipulation, that Saddam Hussein would view the situation regarding Kuwait as the State Department and the White House did.
Opposing countries are not monolithic, even within their governments. There can be bureaucratic competition, which becomes associated with different ideas. Some dictators, such as Hitler and Stalin, were known for creating internal dissension, so that only the leader was in complete control. A current issue, which analysts understand but which politicians may not understand (or may wish to exploit by playing on domestic fears), is the actual political and power structure of Iran; one must not equate the power of Iran's president with that of the president of the United States.
Opponents are not always rational. They may have a greater risk tolerance than one's own country. Maintaining the illusion of a WMD threat appears to have been one of Saddam Hussein's survival strategies. Returning to the Iranian example, an apparently irrational statement from Iranian President Mahmoud Ahmadinejad would not carry the weight of a similar statement by Supreme Leader Ali Khamenei. Analysts sometimes assume that the opponent is totally wise and knows all of the other side's weaknesses. Despite that danger, opponents are unlikely to act according to one's best-case scenario; they may take the worst-case approach, to which one is most vulnerable.
Analysts should form hypotheses but must also be prepared to reexamine them repeatedly in light of new information, instead of searching for evidence that buttresses a favored theory. They must remember that the enemy may be deliberately deceiving them with information that seems plausible to the enemy. Donald Bacon observed that "the most successful deception stories were apparently as reasonable as the truth. Allied strategic deception, as well as Soviet deception in support of the operations at Stalingrad, Kursk, and the 1944 summer offensive, all exploited German leadership's preexisting beliefs and were, therefore, incredibly effective." Deceptions that Hitler considered implausible were not accepted. Western deception staffs alternated "ambiguous" and "misleading" deceptions; the former were intended simply to confuse analysts, and the latter to make one false alternative appear especially likely.
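The discipline of reexamining hypotheses as new evidence arrives can be modeled as Bayesian updating over competing hypotheses. The sketch below is illustrative only and is not drawn from the source; the hypotheses, priors, and likelihoods are invented for the example.

```python
# Illustrative sketch: Bayesian updating as a model of revisiting
# competing hypotheses when new evidence arrives. All hypothesis
# names and numbers here are invented for illustration.

def update(priors, likelihoods):
    """Return the posterior P(H|E) for each hypothesis via Bayes' rule."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two competing hypotheses with an analyst's initial priors.
beliefs = {"attack_planned": 0.2, "routine_activity": 0.8}

# New evidence that is three times more likely under the first hypothesis.
evidence_likelihood = {"attack_planned": 0.9, "routine_activity": 0.3}

beliefs = update(beliefs, evidence_likelihood)
# 0.2*0.9 = 0.18 vs 0.8*0.3 = 0.24, so P(attack_planned)
# rises from 0.2 to 0.18/0.42, about 0.43.
print(beliefs)
```

The point of the model is the one Heuer makes in prose: the initially favored hypothesis must lose probability mass whenever the evidence fits a rival better, rather than being shored up by selective reading of that evidence.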
Of all modern militaries, the Russians treat strategic deception (or, in their term, maskirovka, which goes beyond the English phrase to include deception, operational security and concealment) as an integral part of all planning. The highest levels of command are involved.
Bacon wrote further:
The battle of Kursk was also an example of effective Soviet maskirovka. While the Germans were preparing for their Kursk offensive, the Soviets created a story that they intended to conduct only defensive operations at Kursk. The reality was the Soviets planned a large counteroffensive at Kursk once they blunted the German attack.... German intelligence for the Russian Front assumed the Soviets would conduct only "local" attacks around Kursk to "gain" a better jumping off place for the winter offensive.
The counterattack by the Steppe Front stunned the Germans.
The opponent may try to overload one's analytical capabilities, a gambit that appeals to those preparing the intelligence budget and to agencies whose fast track to promotion lies in data collection; one's own side may produce so much raw data that the analyst is overwhelmed, even without enemy assistance.