Intelligence Agencies Have an Impossible Mission in Uncovering Terror Plots; Here’s Why

This article harkens back to lessons learned in the wake of 9/11 from the subsequent analysis of our intelligence capabilities, weaknesses, successes and failures. As more history emerges regarding the San Bernardino and Paris attackers, we see that intelligence gathering and assessment is perhaps a more difficult job than the general public realizes.

No matter how sophisticated an intelligence agency may be, to prevent an attack, the agency must “be right 100% of the time,” whereas a terrorist only has to “be right once” to successfully execute a terror plot. The evildoers’ strategy is masterful in that it relies on individuals who are often already in our midst, working alone or in small groups to evade detection. If we were unable to prevent the tragedy of 9/11—such an elaborate scheme committed by a relatively large group of foreigners—how on earth can we be successful in preventing attacks by self-radicalized fanatics and lone wolves?

As I look at the unfolding news about San Bernardino, with new information breaking on a daily basis, I am reminded of an article Malcolm Gladwell wrote in 2003 that was so brilliant it remains as relevant today as when he wrote it. The piece is worth a read for anyone interested in how fragments of intelligence are pieced together to become actionable.

As Chief Kirk mentioned in his article, Train On!, a well-armed citizenry could provide a last line of defense in the event of an attack. When you read this piece and build an understanding of the intelligence apparatus – together with its numerous challenges – I think you will ask the same question I do: even if we are very good at gathering and analyzing intelligence, who would want their life and their families’ lives to rest solely in the hands of these organizations?

Intelligence work is a perilous endeavor with many pitfalls…and it’s clear that even the most capable organizations fail at times.

Connecting the Dots
March 10, 2003
by Malcolm Gladwell

The paradoxes of intelligence reform.
1. In the fall of 1973, the Syrian Army began to gather a large number of tanks, artillery batteries, and infantry along its border with Israel. Simultaneously, to the south, the Egyptian Army cancelled all leaves, called up thousands of reservists, and launched a massive military exercise, building roads and preparing anti-aircraft and artillery positions along the Suez Canal. On October 4th, an Israeli aerial reconnaissance mission showed that the Egyptians had moved artillery into offensive positions. That evening, AMAN, the Israeli military intelligence agency, learned that portions of the Soviet fleet near Port Said and Alexandria had set sail, and that the Soviet government had begun airlifting the families of Soviet advisers out of Cairo and Damascus. Then, at four o’clock in the morning on October 6th, Israel’s director of military intelligence received an urgent telephone call from one of the country’s most trusted intelligence sources. Egypt and Syria, the source said, would attack later that day. Top Israeli officials immediately called a meeting. Was war imminent? The head of AMAN, Major General Eli Zeira, looked over the evidence and said he didn’t think so. He was wrong. That afternoon, Syria attacked from the east, overwhelming the thin Israeli defenses in the Golan Heights, and Egypt attacked from the south, bombing Israeli positions and sending eight thousand infantry streaming across the Suez. Despite all the warnings of the previous weeks, Israeli officials were caught by surprise. Why couldn’t they connect the dots?

If you start on the afternoon of October 6th and work backward, the trail of clues pointing to an attack seems obvious; you’d have to conclude that something was badly wrong with the Israeli intelligence service. On the other hand, if you start several years before the Yom Kippur War and work forward, re-creating what people in Israeli intelligence knew in the same order that they knew it, a very different picture emerges. In the fall of 1973, Egypt and Syria certainly looked as if they were preparing to go to war. But, in the Middle East of the time, countries always looked as if they were going to war. In the fall of 1971, for instance, both Egypt’s President and its minister of war stated publicly that the hour of battle was approaching. The Egyptian Army was mobilized. Tanks and bridging equipment were sent to the canal. Offensive positions were readied. And nothing happened. In December of 1972, the Egyptians mobilized again. The Army furiously built fortifications along the canal. A reliable source told Israeli intelligence that an attack was imminent. Nothing happened. In the spring of 1973, the President of Egypt told Newsweek that everything in his country “is now being mobilized in earnest for the resumption of battle.” Egyptian forces were moved closer to the canal. Extensive fortifications were built along the Suez. Blood donors were rounded up. Civil-defense personnel were mobilized. Blackouts were imposed throughout Egypt. A trusted source told Israeli intelligence that an attack was imminent. It didn’t come. Between January and October of 1973, the Egyptian Army mobilized nineteen times without going to war. The Israeli government couldn’t mobilize its Army every time its neighbors threatened war. Israel is a small country with a citizen Army. Mobilization was disruptive and expensive, and the Israeli government was acutely aware that if its Army was mobilized and Egypt and Syria weren’t serious about war, the very act of mobilization might cause them to become serious about war.

Nor did the other signs seem remarkable. The fact that the Soviet families had been sent home could have signified nothing more than a falling-out between the Arab states and Moscow. Yes, a trusted source called at four in the morning, with definite word of a late afternoon attack, but his last two attack warnings had been wrong. What’s more, the source said that the attack would come at sunset, and an attack so late in the day wouldn’t leave enough time for opening air strikes. Israeli intelligence didn’t see the pattern of Arab intentions, in other words, because, until Egypt and Syria actually attacked, on the afternoon of October 6, 1973, their intentions didn’t form a pattern. They formed a Rorschach blot. What is clear in hindsight is rarely clear before the fact. It’s an obvious point, but one that nonetheless bears repeating, particularly when we’re in the midst of assigning blame for the surprise attack of September 11th.

2. Of the many postmortems conducted after September 11th, the one that has received the most attention is “The Cell: Inside the 9/11 Plot, and Why the F.B.I. and C.I.A. Failed to Stop It” (Hyperion; $24.95), by John Miller, Michael Stone, and Chris Mitchell. The authors begin their tale with El Sayyid Nosair, the Egyptian who was arrested in November of 1990 for shooting Rabbi Meir Kahane, the founder of the Jewish Defense League, in the ballroom of the Marriott Hotel in midtown Manhattan. Nosair’s apartment in New Jersey was searched, and investigators found sixteen boxes of files, including training manuals from the Army Special Warfare School; copies of teletypes that had been routed to the Joint Chiefs of Staff; bombmaking manuals; and maps, annotated in Arabic, of landmarks like the Statue of Liberty, Rockefeller Center, and the World Trade Center.

According to “The Cell,” Nosair was connected to gunrunners and to Islamic radicals in Brooklyn, who were in turn behind the World Trade Center bombing two and a half years later, which was masterminded by Ramzi Yousef, who then showed up in Manila in 1994, apparently plotting to kill the Pope, crash a plane into the Pentagon or the C.I.A., and bomb as many as twelve transcontinental airliners simultaneously. And who was Yousef associating with in the Philippines? Mohammed Khalifa, Wali Khan Amin Shah, and Ibrahim Munir, all of whom had fought alongside, pledged a loyalty oath to, or worked for a shadowy Saudi Arabian millionaire named Osama bin Laden.

Miller was a network-television correspondent throughout much of the past decade, and the best parts of “The Cell” recount his own experiences in covering the terrorist story. He is an extraordinary reporter. At the time of the first World Trade Center attack, in February of 1993, he clapped a flashing light on the dashboard of his car and followed the wave of emergency vehicles downtown. (At the bombing site, he was continuously trailed by a knot of reporters–I was one of them–who had concluded that the best way to learn what was going on was to try to overhear his conversations.) Miller became friends with the F.B.I. agents who headed the New York counterterrorist office–Neil Herman and John O’Neill, in particular–and he became as obsessed with Al Qaeda as they were. He was in Yemen, with the F.B.I., after Al Qaeda bombed the U.S.S. Cole. In 1998, at the Marriott in Islamabad, he and his cameraman met someone known to them only as Akhtar, who spirited them across the border into the hills of Afghanistan to interview Osama bin Laden. In “The Cell,” the period from 1990 through September 11th becomes a seamless, devastating narrative: the evolution of Al Qaeda. “How did this happen to us?” the book asks in its opening pages. The answer, the authors argue, can be found by following the “thread” connecting Kahane’s murder to September 11th. In the events of the past decade, they declare, there is a clear “recurring pattern.”

The same argument is made by Senator Richard Shelby, vice-chairman of the Senate Select Committee on Intelligence, in his investigative report on September 11th, released this past December. The report is a lucid and powerful document, in which Shelby painstakingly points out all the missed or misinterpreted signals pointing to a major terrorist attack. The C.I.A. knew that two suspected Al Qaeda operatives, Khalid al-Mihdhar and Nawaf al-Hazmi, had entered the country, but the C.I.A. didn’t tell the F.B.I. or the N.S.C. An F.B.I. agent in Phoenix sent a memo to headquarters that began with the sentence “The purpose of this communication is to advise the bureau and New York of the possibility of a coordinated effort by Osama Bin Laden to send students to the United States to attend civilian aviation universities and colleges.” But the F.B.I. never acted on the information, and failed to connect it with reports that terrorists were interested in using airplanes as weapons. The F.B.I. took into custody the suspected terrorist Zacarias Moussaoui, on account of his suspicious behavior at flight school, but was unable to integrate his case into a larger picture of terrorist behavior. “The most fundamental problem . . . is our Intelligence Community’s inability to ‘connect the dots’ available to it before September 11, 2001, about terrorists’ interest in attacking symbolic American targets,” the Shelby report states.

The phrase “connect the dots” appears so often in the report that it becomes a kind of mantra. There was a pattern, as plain as day in retrospect, yet the vaunted American intelligence community simply could not see it.

None of these postmortems, however, answer the question raised by the Yom Kippur War: Was this pattern obvious before the attack? This question–whether we revise our judgment of events after the fact–is something that psychologists have paid a great deal of attention to. For example, on the eve of Richard Nixon’s historic visit to China, the psychologist Baruch Fischhoff asked a group of people to estimate the probability of a series of possible outcomes of the trip. What were the chances that the trip would lead to permanent diplomatic relations between China and the United States? That Nixon would meet with the leader of China, Mao Tse-tung, at least once? That Nixon would call the trip a success? As it turned out, the trip was a diplomatic triumph, and Fischhoff then went back to the same people and asked them to recall what their estimates of the different outcomes of the visit had been. He found that the subjects now, overwhelmingly, “remembered” being more optimistic than they had actually been. If you originally thought that it was unlikely that Nixon would meet with Mao, afterward, when the press was full of accounts of Nixon’s meeting with Mao, you’d “remember” that you had thought the chances of a meeting were pretty good. Fischhoff calls this phenomenon “creeping determinism”–the sense that grows on us, in retrospect, that what has happened was actually inevitable–and the chief effect of creeping determinism, he points out, is that it turns unexpected events into expected events. As he writes, “The occurrence of an event increases its reconstructed probability and makes it less surprising than it would have been had the original probability been remembered.”

To read the Shelby report, or the seamless narrative from Nosair to bin Laden in “The Cell,” is to be convinced that if the C.I.A. and the F.B.I. had simply been able to connect the dots, what happened on September 11th should not have been a surprise at all. Is this a fair criticism, or is it just a case of creeping determinism?

3. On August 7, 1998, two Al Qaeda terrorists detonated a cargo truck filled with explosives outside the United States Embassy in Nairobi, killing two hundred and thirteen people and injuring more than four thousand. Miller, Stone, and Mitchell see the Nairobi embassy bombing as a textbook example of intelligence failure. The C.I.A., they tell us, had identified an Al Qaeda cell in Kenya well before the attack, and its members were under surveillance. They had an eight-page letter, written by an Al Qaeda operative, speaking of the imminent arrival of “engineers”–the code word for bombmakers–in Nairobi.

The United States Ambassador to Kenya, Prudence Bushnell, had begged Washington for more security. A prominent Kenyan lawyer and legislator says that the Kenyan intelligence service warned U.S. intelligence about the plot several months before August 7th, and in November of 1997 a man named Mustafa Mahmoud Said Ahmed, who worked for one of Osama bin Laden’s companies, walked into the United States Embassy in Nairobi and told American intelligence of a plot to blow up the building. What did our officials do? They forced the leader of the Kenyan cell–a U.S. citizen–to return home, and then abruptly halted their surveillance of the group. They ignored the eight-page letter. They allegedly showed the Kenyan intelligence service’s warning to the Mossad, which dismissed it, and after questioning Ahmed they decided that he wasn’t credible. After the bombing, “The Cell” tells us, a senior State Department official phoned Bushnell and asked, “How could this have happened?”

“For the first time since the blast,” Miller, Stone, and Mitchell write, “Bushnell’s horror turned to anger. There was too much history. ‘I wrote you a letter,’ she said.”

This is all very damning, but doesn’t it fall into the creeping-determinism trap? It’s an edited version of the past. What we don’t hear about is all the other people whom American intelligence had under surveillance, how many other warnings they received, and how many other tips came in that seemed promising at the time but led nowhere. The central challenge of intelligence gathering has always been the problem of “noise”: the fact that useless information is vastly more plentiful than useful information. Shelby’s report mentions that the F.B.I.’s counterterrorism division has sixty-eight thousand outstanding and unassigned leads dating back to 1995. And, of those, probably no more than a few hundred are useful. Analysts, in short, must be selective, and the decisions made in Kenya, by that standard, do not seem unreasonable. Surveillance on the cell was shut down, but, then, its leader had left the country. Bushnell warned Washington–but, as “The Cell” admits, there were bomb warnings in Africa all the time. Officials at the Mossad thought the Kenyan intelligence was dubious, and the Mossad ought to know. Ahmed may have worked for bin Laden, but he failed a polygraph test, and it was also learned that he had previously given similar–groundless–warnings to other embassies in Africa. When a man comes into your office, fails a lie-detector test, and is found to have shopped the same unsubstantiated story all over town, can you be blamed for turning him out?

Miller, Stone, and Mitchell make the same mistake when they quote from a transcript of a conversation that was recorded by Italian intelligence in August of 2001 between two Al Qaeda operatives, Abdel Kader Es Sayed and a man known as al Hilal. This, they say, is yet another piece of intelligence that “seemed to forecast the September 11 attacks.”

“I’ve been studying airplanes,” al Hilal tells Es Sayed. “If God wills, I hope to be able to bring you a window or a piece of a plane the next time I see you.”

“What, is there a jihad planned?” Es Sayed asks.

“In the future, listen to the news and remember these words: ‘Up above,’” al Hilal replies.

Es Sayed thinks that al Hilal is referring to an operation in his native Yemen, but al Hilal corrects him: “But the surprise attack will come from the other country, one of those attacks you will never forget.”

A moment later al Hilal says about the plan, “It is something terrifying that goes from south to north, east to west. The person who devised this plan is a madman, but a genius. He will leave them frozen [in shock].”

This is a tantalizing exchange. It would now seem that it refers to September 11th. But in what sense was it a “forecast”? It gave neither time nor place nor method nor target. It suggested only that there were terrorists out there who liked to talk about doing something dramatic with an airplane–which did not, it must be remembered, reliably distinguish them from any other terrorists of the past thirty years.

In the real world, intelligence is invariably ambiguous. Information about enemy intentions tends to be short on detail. And information that’s rich in detail tends to be short on intentions. In April of 1941, for instance, the Allies learned that Germany had moved a huge army up to the Russian front. The intelligence was beyond dispute: the troops could be seen and counted. But what did it mean? Churchill concluded that Hitler wanted to attack Russia. Stalin concluded that Hitler was serious about attacking, but only if the Soviet Union didn’t meet the terms of the German ultimatum. The British foreign secretary, Anthony Eden, thought that Hitler was bluffing, in the hope of winning further Russian concessions. British intelligence thought–at least, in the beginning–that Hitler simply wanted to reinforce his eastern frontier against a possible Soviet attack. The only way for this piece of intelligence to have been definitive would have been if the Allies had had a second piece of intelligence–like the phone call between al Hilal and Es Sayed–that demonstrated Germany’s true purpose. Similarly, the al Hilal phone call would have been definitive only if we’d also had intelligence as detailed as the Allied knowledge of German troop movements. But rarely do intelligence services have the luxury of both kinds of information. Nor are their analysts mind readers. It is only with hindsight that human beings acquire that skill.

“The Cell” tells us that, in the final months before September 11th, Washington was frantic with worry:

A spike in phone traffic among suspected al Qaeda members in the early part of the summer [of 2001], as well as debriefings of [an al Qaeda operative in custody] who had begun cooperating with the government, convinced investigators that bin Laden was planning a significant operation–one intercepted al Qaeda message spoke of a “Hiroshima-type” event–and that he was planning it soon. Through the summer, the CIA repeatedly warned the White House that attacks were imminent.

The fact that these worries did not protect us is not evidence of the limitations of the intelligence community. It is evidence of the limitations of intelligence.

4. In the early nineteen-seventies, a professor of psychology at Stanford University named David L. Rosenhan gathered together a painter, a graduate student, a pediatrician, a psychiatrist, a housewife, and three psychologists. He told them to check into different psychiatric hospitals under aliases, with the complaint that they had been hearing voices. They were instructed to say that the voices were unfamiliar, and that they heard words like “empty,” “thud,” and “hollow.” Apart from that initial story, the pseudo patients were instructed to answer every question truthfully, to behave as they normally would, and to tell the hospital staff–at every opportunity–that the voices were gone and that they had experienced no further symptoms. The eight subjects were hospitalized, on average, for nineteen days. One was kept for almost two months. Rosenhan wanted to find out if the hospital staffs would ever see through the ruse. They never did.

Rosenhan’s test is, in a way, a classic intelligence problem. Here was a signal (a sane person) buried in a mountain of conflicting and confusing noise (a mental hospital), and the intelligence analysts (the doctors) were asked to connect the dots–and they failed spectacularly. In the course of their hospital stay, the eight pseudo patients were given a total of twenty-one hundred pills. They underwent psychiatric interviews, and sober case summaries documenting their pathologies were written up. They were asked by Rosenhan to take notes documenting how they were treated, and this quickly became part of their supposed pathology. “Patient engaging in writing behavior,” one nurse ominously wrote in her notes. Having been labelled as ill upon admission, they could not shake the diagnosis. “Nervous?” a friendly nurse asked one of the subjects as he paced the halls one day. “No,” he corrected her, to no avail, “bored.”

The solution to this problem seems obvious enough. Doctors and nurses need to be made alert to the possibility that sane people sometimes get admitted to mental hospitals. So Rosenhan went to a research-and-teaching hospital and informed the staff that at some point in the next three months he would once again send over one or more of his pseudo patients. This time, of the hundred and ninety-three patients admitted in the three-month period, forty-one were identified by at least one staff member as being almost certainly sane. Once again, however, they were wrong. Rosenhan hadn’t sent anyone over. In attempting to solve one kind of intelligence problem (overdiagnosis), the hospital simply created another problem (underdiagnosis). This is the second, and perhaps more serious, consequence of creeping determinism: in our zeal to correct what we believe to be the problems of the past, we end up creating new problems for the future.

Pearl Harbor, for example, was widely considered to be an organizational failure. The United States had all the evidence it needed to predict the Japanese attack, but the signals were scattered throughout the various intelligence services. The Army and the Navy didn’t talk to each other. They spent all their time arguing and competing. This was, in part, why the Central Intelligence Agency was created, in 1947–to insure that all intelligence would be collected and processed in one place.

Twenty years after Pearl Harbor, the United States suffered another catastrophic intelligence failure, at the Bay of Pigs: the Kennedy Administration grossly underestimated the Cubans’ capacity to fight and their support for Fidel Castro. This time, however, the diagnosis was completely different. As Irving L. Janis concluded in his famous study of “groupthink,” the root cause of the Bay of Pigs fiasco was that the operation was conceived by a small, highly cohesive group whose close ties inhibited the beneficial effects of argument and competition. Centralization was now the problem. One of the most influential organizational sociologists of the postwar era, Harold Wilensky, went out of his way to praise the “constructive rivalry” fostered by Franklin D. Roosevelt, which, he says, is why the President had such formidable intelligence on how to attack the economic ills of the Great Depression. In his classic 1967 work “Organizational Intelligence,” Wilensky pointed out that Roosevelt would
use one anonymous informant’s information to challenge and check another’s, putting both on their toes; he recruited strong personalities and structured their work so that clashes would be certain. . . . In foreign affairs, he gave Moley and Welles tasks that overlapped those of Secretary of State Hull; in conservation and power, he gave Ickes and Wallace identical missions; in welfare, confusing both functions and initials, he assigned PWA to Ickes, WPA to Hopkins; in politics, Farley found himself competing with other political advisors for control over patronage. The effect: the timely advertisement of arguments, with both the experts and the President pressured to consider the main choices as they came boiling up from below.

The intelligence community that we had prior to September 11th was the direct result of this philosophy. The F.B.I. and the C.I.A. were supposed to be rivals, just as Ickes and Wallace were rivals. But now we’ve changed our minds. The F.B.I. and the C.I.A., Senator Shelby tells us disapprovingly, argue and compete with one another. The September 11th story, his report concludes, “should be an object lesson in the perils of failing to share information promptly and efficiently between (and within) organizations.” Shelby wants recentralization and more focus on coöperation. He wants a “central national level knowledge-compiling entity standing above and independent from the disputatious bureaucracies.” He thinks the intelligence service should be run by a small, highly cohesive group, and so he suggests that the F.B.I. be removed from the counterterrorism business entirely.

The F.B.I., according to Shelby, is governed by deeply-entrenched individual mindsets that prize the production of evidence-supported narratives of defendant wrongdoing over the drawing of probabilistic inferences based on incomplete and fragmentary information in order to support decision-making. . . . Law enforcement organizations handle information, reach conclusions, and ultimately just think differently than intelligence organizations. Intelligence analysts would doubtless make poor policemen, and it has become very clear that policemen make poor intelligence analysts.

In his State of the Union Message, President George W. Bush did what Shelby wanted, and announced the formation of the Terrorist Threat Integration Center–a special unit combining the antiterrorist activities of the F.B.I. and the C.I.A. The cultural and organizational diversity of the intelligence business, once prized, is now despised.

The truth is, though, that it is just as easy, in the wake of September 11th, to make the case for the old system. Isn’t it an advantage that the F.B.I. doesn’t think like the C.I.A.? It was the F.B.I., after all, that produced two of the most prescient pieces of analysis–the request by the Minneapolis office for a warrant to secretly search Zacarias Moussaoui’s belongings, and the now famous Phoenix memo. In both cases, what was valuable about the F.B.I.’s analysis was precisely the way in which it differed from the traditional “big picture,” probabilistic inference-making of the analyst. The F.B.I. agents in the field focussed on a single case, dug deep, and came up with an “evidence-supported narrative of defendant wrongdoing” that spoke volumes about a possible Al Qaeda threat.

The same can be said for the alleged problem of rivalry. “The Cell” describes what happened after police in the Philippines searched the apartment that Ramzi Yousef shared with his co-conspirator, Abdul Hakim Murad. Agents from the F.B.I.’s counterterrorism unit immediately flew to Manila and “bumped up against the C.I.A.” As the old adage about the Bureau and the Agency has it, the F.B.I. wanted to string Murad up and the C.I.A. wanted to string him along. The two groups eventually worked together, but only because they had to. It was a relationship “marred by rivalry and mistrust.” But what’s wrong with this kind of rivalry?

As Miller, Stone, and Mitchell tell us, the real objection of Neil Herman–the F.B.I.’s former domestic counterterrorism chief–to working with the C.I.A. had nothing to do with procedure. He just didn’t think the Agency was going to be of any help in finding Ramzi Yousef. “Back then, I don’t think the C.I.A. could have found a person in a bathroom,” Herman says. “Hell, I don’t think they could have found the bathroom.” The assumption of the reformers is always that the rivalry between the F.B.I. and the C.I.A. is essentially marital, that it is the dysfunction of people who ought to work together but can’t. But it could equally be seen as a version of the marketplace rivalry that leads to companies working harder and making better products.

There is no such thing as a perfect intelligence system, and every seeming improvement involves a tradeoff. A couple of months ago, for example, a suspect in custody in Canada, who was wanted in New York on forgery charges, gave police the names and photographs of five Arab immigrants, who he said had crossed the border into the United States. The F.B.I. put out an alert on December 29th, posting the names and photographs on its Web site, in the “war on terrorism” section. Even President Bush joined in, saying, “We need to know why they have been smuggled into the country, what they’re doing in the country.” As it turned out, the suspect in Canada had made the story up. Afterward, an F.B.I. official said that the agency circulated the photographs in order to “err on the side of caution.”

Our intelligence services today are highly sensitive. But this kind of sensitivity is not without its costs. As the political scientist Richard K. Betts wrote in his essay “Analysis, War, and Decision: Why Intelligence Failures Are Inevitable,” “Making warning systems more sensitive reduces the risk of surprise, but increases the number of false alarms, which in turn reduces sensitivity.” When we run out and buy duct tape to seal our windows against chemical attack, and nothing happens, and when the government’s warning light is orange for weeks on end, and nothing happens, we soon begin to doubt every warning that comes our way. Why was the Pacific fleet at Pearl Harbor so unresponsive to signs of an impending Japanese attack? Because, in the week before December 7, 1941, they had checked out seven reports of Japanese submarines in the area–and all seven were false. Rosenhan’s psychiatrists used to miss the sane; then they started to see sane people everywhere. That is a change, but it is not exactly progress.

5. In the wake of the Yom Kippur War, the Israeli government appointed a special investigative commission, and one of the witnesses called was Major General Zeira, the head of AMAN. Why, they asked, had he insisted that war was not imminent? His answer was simple:

The Chief of Staff has to make decisions, and his decisions must be clear. The best support that the head of AMAN can give the Chief of Staff is to give a clear and unambiguous estimate, provided that it is done in an objective fashion. To be sure, the clearer and sharper the estimate, the clearer and sharper the mistake–but this is a professional hazard for the head of AMAN.

The historians Eliot A. Cohen and John Gooch, in their book “Military Misfortunes,” argue that it was Zeira’s certainty that had proved fatal: “The culpable failure of AMAN’s leaders in September and October 1973 lay not in their belief that Egypt would not attack but in their supreme confidence, which dazzled decision-makers. . . . Rather than impress upon the prime minister, the chief of staff and the minister of defense the ambiguity of the situation, they insisted–until the last day–that there would be no war, period.”

But, of course, Zeira gave an unambiguous answer to the question of war because that is what the politicians and the public demanded of him. No one wants ambiguity. Today, the F.B.I. gives us color-coded warnings and speaks of “increased chatter” among terrorist operatives, and the information is infuriating to us because it is so vague. What does “increased chatter” mean? We want a prediction. We want to believe that the intentions of our enemies are a puzzle that intelligence services can piece together, so that a clear story emerges. But there rarely is a clear story–at least, not until afterward, when some enterprising journalist or investigative committee decides to write one.
