Security & Intelligence

Sharks, Deer, and Risk

Here’s a question: Which animal poses the greater risk to the average person, a deer or a shark?

Most people’s initial reaction (mine included) is to answer that the shark is the more dangerous animal. Statistically speaking, however, the average American is much more likely to be killed by a deer (due to collisions with vehicles) than by a shark attack. Truly accurate statistics for deer collisions don’t exist, but estimates place the number of accidents in the hundreds of thousands. Deer accidents cause millions of dollars’ worth of damage, thousands of injuries, and hundreds of deaths every year.

Shark attacks, on the other hand, are much less frequent. Each year, approximately 50 to 100 shark attacks are reported. “World-wide, over the past decade, there have been an average of 8 shark attack fatalities per year.”

It seems clear that deer actually pose a greater risk to the average person than sharks do. So why do people think the reverse is true? There are a number of reasons, among them the fact that deer don’t intentionally cause death and destruction (not that we know of, anyway) and are usually harmed or killed in the process themselves, while sharks attack their victims directly, in a seemingly malicious manner (though I don’t believe sharks to be malicious either).

I’ve been reading Bruce Schneier’s book, Beyond Fear, recently. It’s excellent, and at one point he draws a distinction between what security professionals refer to as “threats” and “risks.”

A threat is a potential way an attacker can attack a system. Car burglary, car theft, and carjacking are all threats … When security professionals talk about risk, they take into consideration both the likelihood of the threat and the seriousness of a successful attack. In the U.S., car theft is a more serious risk than carjacking because it is much more likely to occur.

Everyone makes risk assessments every day, but most everyone also has different tolerances for risk. It’s essentially a subjective decision, and it turns out that most of us rely on imperfect heuristics and inductive reasoning when it comes to these sorts of decisions (because it’s not like we have the statistics handy). Most of the time, these heuristics serve us well (and it’s a good thing too), but what this really ends up meaning is that when people make a risk assessment, they’re basing their decision on a perceived risk, not the actual risk.
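To put a rough number on the gap between perceived and actual risk, here’s a minimal back-of-the-envelope sketch in Python using the figures cited above. The population figure, the midpoint chosen for “hundreds of deaths,” and the use of the worldwide shark total as a generous upper bound for the U.S. are all assumptions for illustration only; the point is the order-of-magnitude gap, not the precise values.

```python
# Back-of-the-envelope comparison of actual risk, using the rough figures cited
# in this post. All numbers are ballpark assumptions for illustration only.

US_POPULATION = 300_000_000      # rough U.S. population for the period

# Deer: "hundreds" of deaths per year from vehicle collisions; assume ~150.
deer_deaths_per_year = 150
# Sharks: ~8 fatalities per year worldwide; treat this as a generous upper
# bound for the U.S. alone.
shark_deaths_per_year = 8

# Risk, as security professionals use the term, combines likelihood and
# seriousness. Both outcomes here are fatal, so likelihood is what differs.
deer_likelihood = deer_deaths_per_year / US_POPULATION
shark_likelihood = shark_deaths_per_year / US_POPULATION

print(f"Deer:   ~1 in {1 / deer_likelihood:,.0f} chance of a fatal collision per year")
print(f"Sharks: ~1 in {1 / shark_likelihood:,.0f} chance of a fatal attack per year")
print(f"Deer pose roughly {deer_likelihood / shark_likelihood:.0f}x the risk")
```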

Schneier includes a few interesting theories about why people’s perceptions get skewed, including this:

Modern mass media, specifically movies and TV news, has degraded our sense of natural risk. We learn about risks, or we think we are learning, not by directly experiencing the world around us and by seeing what happens to others, but increasingly by getting our view of things through the distorted lens of the media. Our experience is distilled for us, and it’s a skewed sample that plays havoc with our perceptions. Kids try stunts they’ve seen performed by professional stuntmen on TV, never recognizing the precautions the pros take. The five o’clock news doesn’t truly reflect the world we live in — only a very few small and special parts of it.

Slices of life with immediate visual impact get magnified; those with no visual component, or that can’t be immediately and viscerally comprehended, get downplayed. Rarities and anomalies, like terrorism, are endlessly discussed and debated, while common risks like heart disease, lung cancer, diabetes, and suicide are minimized.

When I first considered the Deer/Shark dilemma, my immediate thoughts turned to film. This may be a reflection on how much movies play a part in my life, but I suspect some others would also immediately think of Bambi, with its cuddly, cute, and innocent deer, and Jaws, with its maniacal great white shark. Indeed, Fritz Schranck once wrote about these “rats with antlers” (as some folks refer to deer) and how “Disney’s ability to make certain animals look just too cute to kill” has deterred many people from hunting and eating deer. When you look at the deer collision statistics, what you see is that what Disney has really done is endanger us all!

Given the above, one might be tempted to pursue some form of censorship to keep the media from degrading our ability to determine risk. However, I would argue that this is wrong. Freedom of speech is ultimately a security measure, and if we’re to consider abridging that freedom, we must also seriously consider the risks of that action. We might be able to slightly improve our risk decision-making with censorship, but at what cost?

Schneier himself recently wrote about this subject on his blog, in response to an article arguing that suicide bombings in Iraq shouldn’t be reported (because doing so scares people and serves the terrorists’ ends). It turns out there are a lot of reasons why the media’s focus on horrific events in Iraq causes problems, but almost any way you slice it, it’s still wrong to censor the news:

It’s wrong because the danger of not reporting terrorist attacks is greater than the risk of continuing to report them. Freedom of the press is a security measure. The only tool we have to keep government honest is public disclosure. Once we start hiding pieces of reality from the public — either through legal censorship or self-imposed “restraint” — we end up with a government that acts based on secrets. We end up with some sort of system that decides what the public should or should not know.

Like all of security, this comes down to a basic tradeoff. As I’m fond of saying, human beings don’t so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Risk can be difficult to determine, and the media’s sensationalism doesn’t help, but censorship isn’t a realistic solution to that problem because it introduces problems of its own (and those new problems are worse than the one we’re trying to solve in the first place). Plus, both Jaws and Bambi really are great movies!

Spy Blogs

We Need Spy Blogs, by Kris Alexander: An interesting article advocating the use of blogging on Intelink, the US intelligence community’s classified, highly secure mini-Internet.

A vast amount of information was available to us on Intelink, but there was no simple way to find and use the data efficiently. For instance, our search engine was an outdated version of AltaVista. (We’ve got Google now, a step in the right direction.) And while there were hundreds of people throughout the world reading the same materials, there was no easy way to learn what they thought. Somebody had answers to my questions, I knew, but how were we ever to connect?

It’s clear that we’re using a lot of technology to help our intelligence organizations, but data isn’t the same thing as intelligence. Perhaps unsurprisingly, Alexander points to a few Army initiatives that are leading the way. Army Knowledge Online provides a sort of virtual workspace for each unit – so even soldiers in reserve units who are spread out over a wide area are linked. The Center for Army Lessons Learned, which resembles a blog, allows soldiers to “post white papers on subjects ranging from social etiquette at Iraqi funerals to surviving convoy ambushes.”

Apparently the rest of the intelligence community has not kept up with the Army, perhaps confirming the lack of discipline hypothesized in my recent post A Tale of Two Software Projects. Of course, failure to keep up with technology is not a new criticism, even from within the CIA, but it is worth noting.

The first step toward reform: Encourage blogging on Intelink. When I Google “Afghanistan blog” on the public Internet, I find 1.1 million entries and tons of useful information. But on Intelink there are no blogs. Imagine if the experts in every intelligence field were turned loose – all that’s needed is some cheap software. It’s not far-fetched to picture a top-secret CIA blog about al Qaeda, with postings from Navy Intelligence and the FBI, among others. Leave the bureaucratic infighting to the agency heads. Give good analysts good tools, and they’ll deliver outstanding results.

And why not tap the brainpower of the blogosphere as well? The intelligence community does a terrible job of looking outside itself for information. From journalists to academics and even educated amateurs – there are thousands of people who would be interested and willing to help. Imagine how much traffic an official CIA Iraq blog would attract. If intelligence organizations built a collaborative environment through blogs, they could quickly identify credible sources, develop a deep backfield of contributing analysts, and engage the world as a whole.

Indeed.

A Tale of Two Software Projects

A few weeks ago, David Foster wrote an excellent post about two software projects. One was a failure, and one was a success.

The first project was the FBI’s new Virtual Case File system, a tool that would allow agents to better organize, analyze, and communicate data on criminal and terrorism cases. After three years and over $100 million, it was announced that the system may be totally unusable. How could this happen?

When it became clear that the project was in trouble, Aerospace Corporation was contracted to perform an independent evaluation. It recommended that the software be abandoned, saying that “lack of effective engineering discipline has led to inadequate specification, design and development of VCF.” SAIC has said it believes the problem was caused largely by the FBI: specifically, too many specification changes during the development process…an SAIC executive asserted that there were an average of 1.3 changes per day during the development. SAIC also believes that the current system is useable and can serve as a base for future development.

I’d be interested to see what the actual distribution of changes was (as opposed to the “average changes per day,” which seems awfully vague and somewhat obtuse to me), but I don’t find it that hard to believe that this sort of thing happened (especially because the software development firm was a separate entity). I’ve had some experience with gathering requirements, and it certainly can be a challenge, especially when you don’t know the processes currently in place. This does not excuse anything, however, and the question remains: how could this happen?

The second project, the success, may be able to shed some light on that. DARPA was tapped by the US Army to help protect troops from enemy snipers. The requested application would spot incoming bullets and identify their point of origin, and it would have to be easy to use, mobile, and durable.

The system would identify bullets from their sound…the shock wave created as they travelled through the air. By using multiple microphones and precisely timing the arrival of the “crack” of the bullet, its position could, in theory, be calculated. In practice, though, there were many problems, particularly the high levels of background noise–other weapons, tank engines, people shouting. All these had to be filtered out. By Thanksgiving weekend, the BBN team was at Quantico Marine Base, collecting data from actual firing…in terrible weather, “snowy, freezing, and rainy” recalls DARPA Program Manager Karen Wood. Steve Milligan, BBN’s Chief Technologist, came up with the solution to the filtering problem: use genetic algorithms. These are a kind of “simulated evolution” in which equations can mutate, be tested for effectiveness, and sometimes even “mate,” over thousands of simulated generations (more on genetic algorithms here).

By early March, 2004, the system was operational and had a name–“Boomerang.” 40 of them were installed on vehicles in Iraq. Based on feedback from the troops, improvements were requested. The system has now been reduced in size, shielded from radio interference, and had its display improved. It now tells soldiers the direction, range, and elevation of a sniper.
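For those unfamiliar with the technique the quote mentions, here is a minimal sketch of a genetic algorithm: a population of candidate solutions is scored for fitness, the best candidates “mate” and mutate, and the process repeats over many generations. The toy fitness function (matching a made-up set of filter coefficients) is an assumption for illustration; BBN’s actual acoustic filtering problem was obviously far more complex and is not public.

```python
import random

# A toy genetic algorithm in the spirit described above: candidate solutions
# mutate, are scored for fitness, and "mate" over many generations.
# The target coefficients below are invented, not BBN's actual filter.

TARGET = [0.2, -0.5, 1.0, 0.7]   # hypothetical "ideal" filter coefficients

def fitness(candidate):
    # Higher is better: negative squared error against the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Randomly nudge some coefficients.
    return [c + random.gauss(0, rate) if random.random() < 0.3 else c
            for c in candidate]

def crossover(a, b):
    # Single-point crossover: prefix from one parent, suffix from the other.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=50, generations=200):
    population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 4]          # keep the fittest quarter
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best candidate:", [round(c, 3) for c in best])
    print("fitness:", round(fitness(best), 5))
```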

Now what was the biggest difference between the remarkable success of the Boomerang system and the spectacular failure of the Virtual Case File system? Obviously, the two projects present very different challenges, so a direct comparison doesn’t necessarily tell the whole story. However, it seems to me that discipline (in the case of the Army) or the lack of discipline (in the case of the FBI) might have been a major contributor to the outcomes of these two projects.

It’s obviously no secret that discipline plays a major role in the Army, but there is more to it than just that. Independence and initiative also play an important role in a military culture. In Neal Stephenson’s Cryptonomicon, the way the character Bobby Shaftoe (a Marine Raider, which is “…like a Marine, only more so.”) interacts with his superiors provides some insight (page 113 in my version):

Having now experienced all the phases of military existence except for the terminal ones (violent death, court-martial, retirement), he has come to understand the culture for what it is: a system of etiquette within which it becomes possible for groups of men to live together for years, travel to the ends of the earth, and do all kinds of incredibly weird shit without killing each other or completely losing their minds in the process. The extreme formality with which he addresses these officers carries an important subtext: your problem, sir, is doing it. My gung-ho posture says that once you give the order I’m not going to bother you with any of the details – and your half of the bargain is you had better stay on your side of the line, sir, and not bother me with any of the chickenshit politics that you have to deal with for a living.

Good military officers are used to giving an order, then staying out of their subordinates’ way as they carry out that order. I didn’t see any explicit measurement, but I would assume that there weren’t too many specification changes during the development of the Boomerang system. Of course, the developers themselves made all sorts of changes to specifics, and they also incorporated feedback from the Army in the field into their development process, but that is standard stuff.

I suspect that the FBI is not completely to blame, but as the report says, there was a “lack of effective engineering discipline.” The FBI and SAIC share that failure. I suspect, from the number of changes requested by the FBI and the number of government managers involved, that micromanagement played a significant role. As Foster notes, we should be leveraging our technological abilities in the war on terror, and he suggests a loosely based oversight committee (headed by “a Director of Industrial Mobilization”) to make sure things like this don’t happen very often. Sounds like a reasonable idea to me…

Open Source Security

A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. In a follow up post, I examined how this concept could be applied to a broader range of information dissemination processes. That post focused on computer security and how full disclosure of system vulnerabilities actually improves security in the long run. Ironically, public scrutiny is the only reliable way to improve security.

Full disclosure is certainly not perfect. By definition, it increases risk in the short term, which is why opponents are able to make persuasive arguments against it. Like all security, it is a matter of tradeoffs. Does the long term gain justify the short term risk? As I’m fond of saying, human beings don’t so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn’t quite as bad as the old). There is no solution here, only a less disadvantaged system.

Now I’d like to broaden the subject even further, and apply the concept of open security to national security. With respect to national security, the stakes are higher and thus the argument will be more difficult to sustain. If people are unwilling to deal with a few computer viruses in the short term in order to increase long term security, imagine how unwilling they’ll be to risk a terrorist attack, even if that risk ultimately closes a few security holes. This may be prudent, and it is quite possible that a secrecy approach is more necessary at the national security level. Secrecy is certainly a key component of intelligence and other similar aspects of national security, so open security techniques would definitely not be a good idea in those areas.

However, there are certain vulnerabilities in processes and systems we use that could perhaps benefit from open security. John Robb has been doing some excellent work describing how terrorists (or global guerrillas, as he calls them) can organize a more effective campaign in Iraq. He postulates a Bazaar of violence, which takes its lessons from the open source programming community (using Eric Raymond’s essay The Cathedral and the Bazaar as a starting point):

The decentralized, and seemingly chaotic guerrilla war in Iraq demonstrates a pattern that will likely serve as a model for next generation terrorists. This pattern shows a level of learning, activity, and success similar to what we see in the open source software community. I call this pattern the bazaar. The bazaar solves the problem: how do small, potentially antagonistic networks combine to conduct war?

Not only does the bazaar solve the problem, it appears able to scale to disrupt larger, more stable targets. The bazaar essentially represents the evolution of terrorism as a technique into something more effective: a highly decentralized strategy that is nevertheless able to learn and innovate. Unlike traditional terrorism, it seeks to leverage gains from sabotaging infrastructure and disrupting markets. By focusing on such targets, the bazaar does not experience diminishing returns in the same way that traditional terrorism does. Once established, it creates a dynamic that is very difficult to disrupt.

I’m a little unclear as to what the purpose of the bazaar is – the goal appears to be a state of perpetual violence that is capable of keeping a nation in a position of failure/collapse. That our enemies seek to use this strategy in Iraq is obvious, but success essentially means perpetual failure. What I’m unclear on is how they seek to parlay this result into a successful state (which I assume is their long term goal – perhaps that is not a wise assumption).

In any case, reading about the bazaar can be pretty scary, especially when news from Iraq seems to correlate well with the strategy. Of course, not every attack in Iraq correlates, but this strategy is supposedly new and relatively dynamic. It is constantly improving on itself. They are improvising new tactics and learning from them in an effort to further define this new method of warfare.

As one of the commenters on his site notes, it is tempting to claim that John Robb’s analysis is essentially an instruction manual for a guerrilla organization, but that misses the point. It’s better to know where we are vulnerable before we discover that some weakness is being exploited.

One thing that Robb is a little short on is actual, concrete ways to fight the bazaar (there are some, and he has pointed out situations where U.S. forces attempted to thwart bazaar tactics, but such examples are not frequent). However, he still provides a valuable service in exposing security vulnerabilities. It seems appropriate that we adopt open source security techniques in order to fight an enemy that employs an open source platform. Vulnerabilities need to be exposed so that we may devise effective counter-measures.

Open Security and Full Disclosure

A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I felt that the media could learn from such a model. Interestingly enough, such concepts can be applied to wider scenarios concerning information dissemination, particularly security.

Bruce Schneier has often written about such issues, and most of the information that follows is summarized from several of his articles, recent and old. The question with respect to computer security systems is this: Is publishing computer and network or software vulnerability information a good idea, or does it just help attackers?

When such a vulnerability exists, it creates what Schneier calls a Window of Exposure in which the vulnerability can still be exploited. This window exists until a patch is issued and installed. There are five key phases which define the size of the window:

Phase 1 is before the vulnerability is discovered. The vulnerability exists, but no one can exploit it. Phase 2 is after the vulnerability is discovered, but before it is announced. At that point only a few people know about the vulnerability, but no one knows to defend against it. Depending on who knows what, this could either be an enormous risk or no risk at all. During this phase, news about the vulnerability spreads — either slowly, quickly, or not at all — depending on who discovered the vulnerability. Of course, multiple people can make the same discovery at different times, so this can get very complicated.

Phase 3 is after the vulnerability is announced. Maybe the announcement is made by the person who discovered the vulnerability in Phase 2, or maybe it is made by someone else who independently discovered the vulnerability later. At that point more people learn about the vulnerability, and the risk increases. In Phase 4, an automatic attack tool to exploit the vulnerability is published. Now the number of people who can exploit the vulnerability grows exponentially. Finally, the vendor issues a patch that closes the vulnerability, starting Phase 5. As people install the patch and re-secure their systems, the risk of exploit shrinks. Some people never install the patch, so there is always some risk. But it decays over time as systems are naturally upgraded.

The goal is to minimize the impact of the vulnerability by reducing the window of exposure (the area under the risk-over-time curve). There are two basic approaches: secrecy and full disclosure.
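To make the “area under the curve” idea concrete, here is a toy model that assigns an assumed risk level to each of the five phases and sums risk over time. The risk levels and durations are invented purely for illustration; the only point is that total exposure is dominated by how long the high-risk phases (announcement through patch adoption) are allowed to last.

```python
# A toy model of Schneier's window of exposure. Risk levels and durations are
# invented for illustration; only the shape of the argument matters.

PHASES = [
    # (phase description,                 assumed risk level per day)
    ("1: undiscovered",                    0.0),
    ("2: discovered, unannounced",         0.2),
    ("3: announced, no attack tool",       0.5),
    ("4: automatic attack tool published", 1.0),
    ("5: patch available, slow adoption",  0.3),
]

def total_exposure(durations_in_days):
    """Area under the risk curve: sum of (risk level x phase duration)."""
    return sum(risk * days
               for (_, risk), days in zip(PHASES, durations_in_days))

# Scenario A: vendor patches quickly after the announcement.
fast_patch = [365, 30, 5, 2, 60]
# Scenario B: the same vulnerability, but the patch takes months.
slow_patch = [365, 30, 90, 60, 60]

print("exposure, fast patch:", total_exposure(fast_patch))
print("exposure, slow patch:", total_exposure(slow_patch))
```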

The secrecy approach seeks to reduce the window of exposure by limiting public access to vulnerability information. In a different essay about network outages, Schneier gives a good summary of why secrecy doesn’t work well:

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

… Secrecy prevents people from assessing their own risks.

Secrecy may work on paper, but in practice, keeping vulnerabilities secret removes motivation to fix the problem (it is possible that a company could utilize secrecy well, but it is unlikely that all companies would do so and it would be foolish to rely on such competency). The other method of reducing the window of exposure is to disclose all information about the vulnerability publicly. Full Disclosure, as this method is called, seems counterintuitive, but Schneier explains:

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn’t bother fixing them, believing in the security of secrecy.

Ironically, publishing details about vulnerabilities leads to a more secure system. Of course, this isn’t perfect. Obviously publishing vulnerabilities constitutes a short term danger, and can sometimes do more harm than good. But the alternative, secrecy, is worse. As Schneier is fond of saying, security is about tradeoffs. As I’m fond of saying, human beings don’t so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn’t quite as bad as the old). There is no solution here, only a less disadvantaged system.

This is what makes advocating open security systems like full disclosure difficult. Opponents will always be able to point to its flaws, and secrecy advocates are good at exploiting the intuitive (but not necessarily correct) nature of their systems. Open security systems are just counter-intuitive, and there is a tendency to not want to increase risk in the short term (as things like full disclosure do). Unfortunately, that means that the long term danger increases, as there is less incentive to fix security problems.

By the way, Schneier has started a blog. It appears to be made up of the same content that he normally releases monthly in the Crypto-Gram newsletter, but spread out over time. I think it will be interesting to see if Schneier starts responding to events in a more timely fashion, as that is one of the keys to the success of blogs (and it’s something that I’m bad at, unless news breaks on a Sunday).

Recent Cloak and Dagger Happenings

Bruce Schneier attempts to untangle the news that the NSA has been reading Iranian codes, and that Ahmed Chalabi informed the Iranians. In doing so, he runs into the massive difficulties of attempting to analyze an intelligence story like this from the outside. Indeed, what follows is practically useless, unless you enjoy this cat and mouse stuff like I do…

As ordinary citizens without serious security clearances, we don’t know which machines’ codes the NSA compromised, nor do we know how. It’s possible that the U.S. broke the mathematical encryption algorithms that the Iranians used, as the British and Poles did with the German codes during World War II. It’s also possible that the NSA installed a “back door” into the Iranian machines. This is basically a deliberately placed flaw in the encryption that allows someone who knows about it to read the messages.

There are other possibilities: the NSA might have had someone inside Iranian intelligence who gave them the encryption settings required to read the messages. John Walker sold the Soviets this kind of information about U.S. naval codes for years during the 1980s. Or the Iranians could have had sloppy procedures that allowed the NSA to break the encryption. …

Whatever the methodology, this would be an enormous intelligence coup for the NSA. It was also a secret in itself. If the Iranians ever learned that the NSA was reading their messages, they would stop using the broken encryption machines, and the NSA’s source of Iranian secrets would dry up. The secret that the NSA could read the Iranian secrets was more important than any specific Iranian secrets that the NSA could read.

The result was that the U.S. would often learn secrets they couldn’t act upon, as action would give away their secret. During World War II, the Allies would go to great lengths to make sure the Germans never realized that their codes were broken. The Allies would learn about U-boat positions, but wouldn’t bomb the U-boats until they spotted the U-boat by some other means…otherwise the Nazis might get suspicious.

There’s a story about Winston Churchill and the bombing of Coventry: supposedly he knew the city would be bombed but could not warn its citizens. The story is apocryphal, but is a good indication of the extreme measures countries take to protect the secret that they can read an enemy’s secrets.

And there are many stories of slip-ups. In 1986, after the bombing of a Berlin disco, then-President Reagan said that he had irrefutable evidence that Qadaffi was behind the attack. Libyan intelligence realized that their diplomatic codes were broken, and changed them. The result was an enormous setback for U.S. intelligence, all for just a slip of the tongue.

There are also cases when compromised codes are used… The Japanese attack on Midway was extraordinarily complex, and it relied on completely surprising the Americans. US cryptanalysts had partially broken the Japanese code, and were able to deduce most of the Japanese attack plan, but they were missing two key pieces of information – the time and place of the attack. They were able to establish that the target of the attack was represented by the letters AF, and they suspected that Midway was a plausible target. To confirm that Midway was the target, the US military sent an uncoded message indicating that the island’s desalination plant had broken down. Shortly thereafter, a Japanese message was intercepted indicating that AF would be running low on water. However, such clear-cut confirmation of an intelligence coup is quite rare, and the Iranian news is nearly impossible to decipher. You get stuck in a recursive and byzantine “what if” structure – what if they know we know they know?

Iranian intelligence supposedly tried to test Chalabi’s claim by sending a message about an Iranian weapons cache. If the U.S. acted on this information, then the Iranians would know that their codes were broken. The U.S. didn’t, which suggests they’re being very smart about this. Maybe they knew the Iranians suspected, or maybe they were waiting to manufacture a plausible fictitious reason for knowing about the weapons cache.

So Iran’s Midway-style attempt to confirm Chalabi’s claim did not bear fruit. If, that is, Chalabi even told them anything. Who knows? Everything is open to speculation when it comes to this.

If the Iranians knew that the U.S. knew, why didn’t they pretend not to know and feed the U.S. false information? Or maybe they’ve been doing that for years, and the U.S. finally figured out that the Iranians knew. Maybe the U.S. knew that the Iranians knew, and are using the fact to discredit Chalabi.

I’d like to know more about this story, but it seems woefully underreported in the media and it is way too cloak and dagger to accurately analyze with the information currently available. The sad thing is that I suspect we’ll never be able to figure it out.

Thinking about Security

I’ve been making my way through Bruce Schneier’s Crypto-Gram newsletter archives, and I came across this excellent summary of how to think about security. He breaks security down into five simple questions that should be asked of a proposed security solution, some obvious, some not so much. In the post 9/11 era, we’re being presented with all sorts of security solutions, and so Schneier’s system can be quite useful in evaluating proposed security systems.

This five-step process works for any security measure, past, present, or future:

1) What problem does it solve?

2) How well does it solve the problem?

3) What new problems does it add?

4) What are the economic and social costs?

5) Given the above, is it worth the costs?

What this process basically does is force you to judge the tradeoffs of a security system. All too often, we either assume a proposed solution doesn’t create problems of its own, or assume that because a proposed solution isn’t a perfect solution, it’s useless. Security is a tradeoff. It doesn’t matter whether a proposed security system makes us safer; what matters is whether the system is worth the tradeoffs (or price, if you prefer). For instance, in order to make your computer invulnerable to external attacks from the internet, all you need to do is disconnect it from the internet. However, that means you can no longer access the internet, which is presumably why you had a connection to protect in the first place, and it doesn’t protect against attacks from those who have physical access to your computer. That is the price you pay for a perfectly secure solution to internet attacks. The old saying still holds: A perfectly secure system is a perfectly useless system.
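As a trivial illustration, here are Schneier’s five questions encoded as a simple checklist and applied to the disconnect-from-the-internet example above. The structure is just a sketch, and the answers are a paraphrase of the reasoning in this post, not anything from Schneier.

```python
# Schneier's five questions as a simple checklist, applied to the
# "disconnect from the internet" example discussed above.

QUESTIONS = [
    "What problem does it solve?",
    "How well does it solve the problem?",
    "What new problems does it add?",
    "What are the economic and social costs?",
    "Given the above, is it worth the costs?",
]

def evaluate(measure, answers):
    # Print each question alongside the answer for the proposed measure.
    print(f"Security measure: {measure}")
    for question, answer in zip(QUESTIONS, answers):
        print(f"  {question}\n    -> {answer}")

evaluate(
    "Disconnect the computer from the internet",
    [
        "External attacks over the internet.",
        "Perfectly -- no network path, no network attack.",
        "No internet access; physical-access attacks remain.",
        "You lose everything you wanted the connection for.",
        "Almost never -- the tradeoff defeats the purpose.",
    ],
)
```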

In the post 9/11 world we’re constantly being bombarded by new security measures, but at the same time, we’re being told that a solution which is not perfect is worthless. It’s rare that a new security measure will provide a clear benefit without causing any problems. It’s all about tradeoffs…

I had intended to apply Schneier’s system to a contemporary security “solution,” but I can’t seem to think of anything at the moment. Perhaps more later. In the meantime, check out Schneier’s recent review of “I am Not a Terrorist” Cards, in which he tears apart a proposed security system that sounds interesting on the surface but makes little sense when you take a closer look (which Schneier does mercilessly).

The Eisenhower Ten

The Eisenhower Ten, by CONELRAD: An excellent article detailing a rather strange episode in U.S. history. During 1958 and 1959, President Eisenhower issued ten letters to mostly private citizens granting them unprecedented power in the event of a “national emergency” (i.e. nuclear war). Naturally, the Kennedy administration was less than thrilled with the existence of these letters, which, strangely enough, did not contain expiration dates.

So who made up this Shadow Government?

…of the nine, two of the positions were filled by Eisenhower cabinet secretaries and another slot was filled by the Chairman of the Board of Governors of the Federal Reserve. The remaining six were very accomplished captains of industry who, as time has proven, could keep a secret to the grave. It should be noted that the sheer impressiveness of the Emergency Administrator roster caused Eisenhower Staff Secretary Gen. Andrew J. Goodpaster (USA, Ret.) to gush, some 46 years later, “that list is absolutely glittering in terms of its quality.” In his interview with CONELRAD, the retired general also emphasized how seriously the President took the issue of Continuity of Government: “It was deeply on his mind.”

Eisenhower apparently assembled the list himself, and if that is the case, the quality of the list was no doubt “glittering”. Eisenhower was a good judge of talent, and one of the astounding things about his command of allied forces during WWII was that he successfully assembled an integrated military command made up of both British and American officers, and they were actually effective on the battlefield. I don’t doubt that he would be able to assemble a group of Emergency Administrators that would fit the job, work well together, and provide the country with a reasonably effective continuity of government in the event of the unthinkable.

Upon learning of these letters, Kennedy’s National Security Advisor, McGeorge Bundy, asserted that the “outstanding authority” of the Emergency Administrators should be terminated… but what happened after that is somewhat of a mystery. Some correspondence exists suggesting that several of the Emergency Administrators were indeed relieved of their duties, but there are still questions as to whether or not Kennedy retained the services of three of the Eisenhower Ten and whether Kennedy established an emergency administration of his own.

It is Gen. Goodpaster’s assertion that because Eisenhower practically wrote the book on Continuity of Government, the practice of having Emergency Administrators waiting in the wings for the Big One was a tradition that continued throughout the Cold War and perhaps even to this day.

On March 1, 2002, the New York Times reported that Bush had indeed set up a “shadow government” in the wake of the 9/11 terror attacks. This news was, of course, greeted with much consternation, and understandably so. Though there may be a historical precedent (even if it is a controversial one) for such a thing, the details of such an open-ended policy are still a bit fuzzy to me…

CONELRAD has done an excellent job collecting, presenting, and analyzing information pertaining to the Eisenhower Ten, and I highly recommend that anyone interested in the issue of continuity of government check it out. Even with that, there are still lots of unanswered questions about the practice, but it is fascinating reading…

The New Paradigm of Intelligence Agility

Whether or not you believe that 9/11 and subsequent events involved massive intelligence failures, it has become clear that our intelligence capabilities lack agility. As a nation, we have not moved beyond the Cold War paradigm of threat-based strategic thinking. This thinking was well suited to deterring and defeating specific threats, but it has left us unprepared to respond effectively to emerging threats such as terrorism.

The problem with most calls for intelligence or military reform in the post-9/11 era is that they are all still stuck in that Cold War paradigm. In the future, we may be able to cope with the terrorist threat, but what about the next big threat to come along? The true solution, as Bruce Berkowitz suggests, is not to simply change the list of specific threats, but to be agile. We need to be able to respond to new and emerging threats quickly and effectively.

Fortunately, it may not be possible to respond effectively to terrorism without instituting at least a measure of agility in our intelligence community. When planning against the Soviets, we had the luxury of knowing that the “threat changed incrementally, came from a known geographic location, and was most likely to follow a well-understood attack plan.” The nature of terrorists is less static than that of the Soviets, so if we are to succeed, we will need to orient ourselves towards a condition of agility. The Soviets required an intense focus of resources on a single threat, whereas terrorism requires our resources to be more dispersed. Agility will give us the ability to evaluate new and emerging threats, and to dynamically adjust resources based on where we need them.

So, in this context, what is agility? Berkowitz has the answer:

For an intelligence organization, agility can be defined as having four features. First, the organization needs to be able to move people and other resources quickly and efficiently as requirements change. Second, it needs to be able to draw on expertise and information sources from around the world. Third, it needs to be able to move information easily so that all of the people required to produce an intelligence product can work together effectively. And, fourth, it needs to be able to deliver products to consumers when needed and in the form they require to do their job.

And how do we achieve this goal? The answer isn’t necessarily a dramatic restructuring of our intelligence community. Agility in this context depends on unglamorous, mundane things like standardized clearances and feedback loops between managers and analysts. We should be encouraging innovation in analysis and ways to penetrate targets. Perhaps most important is the need for a system to escalate activities when the stakes are high:

[We need] Procedures that tell everyone when the stakes are high and they should take more risks and act more aggressively – despite the potential costs. The Defense Department has these procedures… the “Defense Condition,” or DEFCON, system. The CIA does not.

Our intelligence community correctly recognized the threat that terrorism posed long before 9/11, but it lacked the organizational agility to shift resources to counter that threat. Currently, we are doing a better job of confronting terrorism, but we will need to be agile if we are to respond to the next big threat. As Bruce Schneier comments, taking away pocket knives and box cutters doesn’t improve airline security:

People who think otherwise don’t understand what allowed the terrorists to take over four planes two years ago. It wasn’t a small knife. It wasn’t a box cutter. The critical weapon that the terrorists had was surprise. With surprise they could have taken the planes over with their bare hands. Without surprise they couldn’t have taken the planes over, even if they had guns.

And surprise has been confiscated on all flights since 9/11. It doesn’t matter what weapons any potential new hijackers have; the passengers will no longer allow them to take over airplanes. I don’t believe that airplane hijacking is a thing of the past, but when the next plane gets taken over it will be because a group of hijackers figured out a clever new weapon that we haven’t thought of, and not because they snuck some small pointy objects through security.

I’ve been hard on the intelligence community (or rather, the way they interact with our politicians) lately, but theirs is truly a thankless job. By their nature, they don’t get to publicize their successes, but we all see their failures. Unfortunately we cannot know how successful they’ve been in the past two years, but given how few terrorist attacks there have been during that period, the outlook is promising. We may be more agile than we know…

The State of U.S. Intelligence

Over the past few years, I’ve spent a fair amount of time reading up on the intelligence community and its varied strengths and weaknesses. I’ve also spent a fair amount of time defending the Bush administration (or pointing out flaws in the arguments against the administration) in various forums, if only because no one else would. However, I’ve come to believe that our intelligence community is in poor shape… not really because of those we have working at these agencies, but because of the interaction between the intelligence community and the rest of the government.

The problem appears to be more systemic than deliberate, as questionable practices such as “stovepiping” (the practice of taking a piece of intelligence or a request, bypassing the chain of command, and bringing it straight to the highest authority) became commonplace in the administration, even before 9/11. Basically, the Bush administration fixed the system so that they got raw intelligence without the proper analysis (intelligence is usually subjected to a thorough vetting). Given that they were also openly (and perhaps rightfully) distrustful of the intelligence community (and that the feeling was mutual), is it any wonder that they tried to bypass the system?

Don’t get me wrong, what the administration has done is clearly wrong and the “stovepiping” situation should be corrected immediately. There appear to be some spiteful and petty actions being taken by both the White House and the intelligence community, and no one is benefiting from this. A very cynical feeling is running through one of the most important areas of our national security. This feeling is exemplified by the recently leaked memo written by a member of Senator Jay Rockefeller’s (D-WVa) staff. The memo recommends that Democrats launch an investigation “into pre-war Iraq intelligence in such a way that it could bring maximum embarrassment to President Bush in his re-election campaign.” It has been fairly suggested that this memo is only a desperate response to the Bush administration’s maneuverings, but this does not excuse the downright destructive course of action that the memo advocates.

Bob Kerrey, a former vice-chairman of the Senate Select Committee on Intelligence, wrote an excellent op-ed on this subject:

The production of a memo by an employee of a Democratic member of the Senate Select Committee on Intelligence is an example of the destructive side of partisan politics. That it probably emerged as a consequence of an increasingly partisan environment in Washington and may have been provoked by equally destructive Republican acts is neither a comfort nor a defensible rationalization.

I have no doubt that there are Republican memos of a similar nature floating about, but the Senate Intelligence Committee, by virtue of its importance, is supposed to be beyond partisan politics, and it has been in the past. It isn’t now. This, too, is unacceptable and needs to be corrected. Indeed, the Senate Intelligence Committee hasn’t held an open hearing in months, nor has it released any preliminary findings or provided any other insight. Its website hasn’t been updated in months and contains spelling errors on every page (“Jurisdicton”!?).

The blame does not lie with any one governmental entity, but their stubborn refusal to play well together, especially with something as important as intelligence, is troubling to say the least. We are a nation at war, and if we are to succeed, we must trust in our government to effectively evaluate intelligence at all levels. The practice of “stovepiping” must end, and the White House will need to trust in the intelligence community to provide accurate, useful, and timely information. For their part, the intelligence community will have to provide this information and live up to certain expectations – and, for example, when the Vice President asks for something to be checked out, you might want to put someone competent on the case. Sending a former ambassador to Niger without any resources other than his own contacts, no matter how knowledgeable he may be, simply doesn’t cut it. He didn’t even file a formal report. I don’t pretend to know how or why those involved acted the way they did, but I do know that the end result was representative of the troubling breakdown of communication between the CIA and the White House.

And the Senate Intelligence Committee could perhaps learn something from the House Intelligence Committee, which, in a genuinely constructive act of bipartisan oversight of intelligence, “challenged the CIA’s refusal to comply with their request for a copy of the recent report by David Kay on the search for Iraqi weapons of mass destruction.”

Of course, it must also be said that public acknowledgements about intelligence failures before 9/11 or the Iraq war may also prove to be counterproductive as they could reveal valuable intelligence sources (which would be “silenced” by our enemies). Such information cannot be made public without jeopardizing the lives of our people, and it shouldn’t. In the end, we must trust in our government and they must trust in themselves if we are to accomplish anything. If the past few years are any indication, however, we may be in a lot of trouble. [thanks to Secrecy News for Intelligence Committee info]