
Security & Intelligence
Wednesday, January 23, 2008

Ok, I'm slacking. The top 10 movies of 2007 will be posted this Sunday. In the meantime, I leave you with this anti-terrorism suggestion from Charlie Stross (and yes, I'm posting this a few months late, but it's still funny):
The solution to protecting the London Underground from terrorist suicide bombers can be summed up in one word: Daleks. One Dalek per tube platform, behind a door at the end. Fit them with cameras and remote controls and run them from Ken Livingstone's office. Any sign of terrorism on the platform? Whoosh! The doors open and the Dalek comes out, shrieking "exterminate!" in a demented rasp reminiscent of Michael Howard during his tenure as Home Secretary, only less merciful.

The British are trained from birth to know the two tactics for surviving a Dalek attack; run up the stairs (or escalator), or hide behind the sofa. There are no sofas in the underground, but there are plenty of escalators. Switch them to run upwards when the Dalek is out, and you can clear a platform in seconds.

Suicide bombers are by definition Un-British, and will therefore be unable to pass a citizenship test, much less deal with the Menace from Skaro.
Posted by Mark on January 23, 2008 at 08:13 PM .: link :.

End of This Day's Posts

Wednesday, February 21, 2007

Link Dump
Various links for your enjoyment:
  • The Order of the Science Scouts of Exemplary Repute and Above Average Physique: Like the Boy Scouts, but for scientists. Aside from the goofy name, they've got an ingenious and hilarious list of badges, including: The "my degree inadvertantly makes me competent in fixing household appliances" badge, The "I've touched human internal organs with my own hands" badge, The "has frozen stuff just to see what happens" badge (oh come on, who hasn't done that?), The "I bet I know more computer languages than you, and I'm not afraid to talk about it" badge (well, I used to know a bunch), and of course, The "dodger of monkey shit" badge ("One of our self explanatory badges."). Sadly, I qualify for fewer of these than I'd like. Of course, I'm not a scientist, but still. I'm borderline on many, though (for instance, the "I blog about science" badge requires that I maintain a blog where at least a quarter of the material is about science - I certainly blog about technology a lot, but explicitly science? Debatable, I guess.)
  • Dr. Ashen and Gizmodo Review The Gamespower 50 (YouTube): It's a funny review of a crappy portable video game device; just watch it. The games on this thing are so bad (there's actually one called "Grass Cutter," which is exactly what you think it is - a game where you mow the lawn).
  • Count Chocula Vandalism on Wikipedia: Some guy came up with an absurdly comprehensive history for Count Chocula:
    Ernst Choukula was born the third child to Estonian landowers in the late autumn of 1873. His parents, Ivan and Brushken Choukula, were well-established traders of Baltic grain who-- by the early twentieth century--had established a monopolistic hold on the export markets of Lithuania, Latvia and southern Finland. A clever child, Ernst advanced quickly through secondary schooling and, at the age of nineteen, was managing one of six Talinn-area farms, along with his father, and older brother, Grinsh. By twenty-four, he appeared in his first "barrelled cereal" endorsement, as the Choukula family debuted "Ernst Choukula's Golden Wheat Muesli", a packaged mix that was intended for horses, mules, and the hospital ridden. Belarussian immigrant silo-tenders started cutting the product with vodka, creating a crude mush-paste they called "gruhll" or "gruell," and would eat the concoction each morning before work.
    It goes on like that for a while. That particular edit has been removed from the real article, but there appears to actually be quite a debate on the Talk page as to whether or not to mention it in the official article.
  • The Psychology of Security by Bruce Schneier: A long draft of an article that delves into psychological reasons we make the security tradeoffs that we do. Interesting stuff.
  • The Sagan Diary by John Scalzi (Audio Book): I've become a great fan of Scalzi's fiction, and his latest work is available here as audio (a book is available too, but it appears to be a limited run). Since the book is essentially the diary of a woman, he got various female authors and friends to read a chapter each. This makes for somewhat uneven listening, as some readings are better than others. Now that I think about it, this book probably won't make sense if you haven't read Old Man's War and/or The Ghost Brigades. However, they're both wonderful books of the military scifi school (maybe I'll write a blog post or two about them in the near future).
Posted by Mark on February 21, 2007 at 08:16 PM .: link :.


Wednesday, February 14, 2007

Intellectual Property, Copyright and DRM
Roy over at 79Soul has started a series of posts dealing with Intellectual Property. His first post sets the stage with an overview of the situation, and he begins to explore some of the issues, starting with the definition of theft. I'm going to cover some of the same ground in this post, and then some other things which I assume Roy will cover in his later posts.

I think most people have an intuitive understanding of what intellectual property is, but it might be useful to start with a brief definition. Perhaps a good place to start would be Article 1, Section 8 of the U.S. Constitution:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;
I started with this for a number of reasons. First, because I live in the U.S. and most of what follows deals with U.S. IP law. Second, because it's actually a somewhat controversial stance. The fact that IP is only secured for "limited times" is the key. In England, for example, an author does not merely hold a copyright on their work, they have a Moral Right.
The moral right of the author is considered to be -- according to the Berne convention -- an inalienable human right. This is the same serious meaning of "inalienable" the Declaration of Independence uses: not only can't these rights be forcibly stripped from you, you can't even give them away. You can't sell yourself into slavery; and neither can you (in Britain) give the right to be called the author of your writings to someone else.
The U.S. is different. It doesn't grant an inalienable moral right of ownership; instead, it allows copyright. In other words, in the U.S., such works are considered property (i.e. they can be sold, traded, bartered, or given away). This represents a fundamental distinction that needs to be made: some systems emphasize the creator's inherent, permanent rights, while others, like the U.S. system, grant only limited protections. When put that way, the U.S. system sounds pretty awful, except that it was designed for something different: our system was built to advance science and the "useful arts." The U.S. system still rewards creators, but only as a means to an end. Copyright is granted so that there is an incentive to create. However, such protections are only granted for "limited Times." This is because when a copyright is eternal, the system stagnates as protected parties stifle competition (this need not be malicious). Copyright is thus limited so that when a work is no longer protected, it becomes freely available for everyone to use and to build upon. This is known as the public domain.

The end goal here is the advancement of society, and both protection and expiration are necessary parts of the mix. The balance between the two is important, and as Roy notes, one of the things that appears to have upset the balance is technology. This, of course, extends as far back as the printing press, records, cassettes, VHS, and other similar technologies, but more recently, a convergence between new compression techniques and increasing bandwidth of the internet created an issue. Most new recording technologies were greeted with concern, but physical limitations and costs generally put a cap on the amount of damage that could be done. With computers and large networks like the internet, such limitations became almost negligible. Digital copies of protected works became easy to copy and distribute on a very large scale.
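To put rough, illustrative numbers on that convergence (the bitrates and link speeds below are my own assumptions for the sake of the arithmetic, not figures from any source): compression shrank a CD-quality track by roughly a factor of ten, and broadband cut transfer times by another factor of twenty-five or so.

```python
def transfer_seconds(size_bytes, link_bps):
    """Time to move a file over a link, ignoring protocol overhead."""
    return size_bytes * 8 / link_bps

# A ~4-minute track: CD-quality WAV (44.1 kHz, 16-bit, stereo) vs. a 128 kbps MP3.
wav_bytes = 44_100 * 2 * 2 * 240   # ~42 MB uncompressed
mp3_bytes = 128_000 // 8 * 240     # ~3.8 MB compressed

for name, bps in [("56k dialup", 56_000), ("1.5 Mbps broadband", 1_500_000)]:
    print(f"{name}: WAV {transfer_seconds(wav_bytes, bps) / 60:.0f} min, "
          f"MP3 {transfer_seconds(mp3_bytes, bps) / 60:.1f} min")
```

With these assumed numbers, the raw track takes over an hour and a half on dialup, while the MP3 moves over broadband in about twenty seconds - which is roughly the point at which the physical limitations that used to cap the damage disappeared.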

The first major issue came up as a result of Napster, a peer-to-peer music sharing service that essentially promoted widespread copyright infringement. Lawsuits followed, and the original Napster service was shut down, only to be replaced by numerous decentralized peer-to-peer systems and darknets. This meant that no single entity could be sued for the copyright infringement that occurred on the network, but it resulted in a number of (probably ill-advised) lawsuits against regular folks (the anonymity of internet technology and the state of recordkeeping being what they are, this sometimes led to hilarious cases, like when the RIAA sued a 79-year-old man who doesn't even own a computer or know how to operate one).

Roy discusses the various arguments for or against this sort of file sharing, noting that the essential difference of opinion is the definition of the word "theft." For my part, I think it's pretty obvious that downloading something for free that you'd normally have to pay for is morally wrong. However, I can see some grey area. A few months ago, I pre-ordered Tool's most recent album, 10,000 Days, from Amazon. A friend who already had the album sent me a copy over the internet before I had actually received my copy of the CD. Does this count as theft? I would say no.

The concept of borrowing a book, CD, or DVD also seems pretty harmless to me, and I don't have a moral problem with borrowing an electronic copy, then deleting it afterwards (or purchasing it, if I liked it enough), though I can see how such a practice represents a bit of a slippery slope and wouldn't hold up in an honest debate (nor should it). It's too easy to abuse such an argument, or to apply it in retrospect. I suppose there are arguments to be made with respect to making distinctions between benefits and harms, but I generally find those arguments unpersuasive (though perhaps interesting to consider).

There are some other issues that need to be discussed as well. The concept of Fair Use allows limited use of copyrighted material without requiring permission from the rights holders. For example, including a screenshot of a film in a movie review. You're also allowed to parody copyrighted works, and in some instances make complete copies of a copyrighted work. There are rules pertaining to how much of the copyrighted work can be used and in what circumstances, but this is not the venue for such details. The point is that copyright is not absolute and consumers have rights as well.

Another topic that must be addressed is Digital Rights Management (DRM). This refers to a range of technologies used to combat digital copying of protected material. The goal of DRM is to use technology to automatically limit the abilities of a consumer who has purchased digital media. In some cases, this means that you won't be able to play an optical disc on a certain device, in others it means you can only use the media a certain number of times (among other restrictions).

To be blunt, DRM sucks. For the most part, it benefits no one. It's confusing, it basically amounts to treating legitimate customers like criminals while only barely (if that much) slowing down the piracy it purports to be thwarting, and it's led to numerous disasters and unintended consequences. Essential reading on this subject is this talk given to Microsoft by Cory Doctorow. It's a long but well-written and straightforward read that I can't summarize briefly (please read the whole thing). Some details of his argument may be debatable, but as a whole, I find it quite compelling. Put simply, DRM doesn't work and it's bad for artists, businesses, and society as a whole.

Now, the IP industries that are pushing DRM are not that stupid. They know DRM is a fundamentally absurd proposition: the whole point of selling IP media is so that people can consume it, and you can't build a system that prevents them from doing so without defeating that purpose. The only way to perfectly secure a piece of digital media is to make it unusable (i.e. the only perfectly secure system is a perfectly useless one). That's why DRM systems are broken so quickly. It's not that the programmers are necessarily bad, it's that the entire concept is fundamentally flawed. Again, the IP industries know this, which is why they pushed the Digital Millennium Copyright Act (DMCA). As with most laws, the DMCA is a complex beast, but what it boils down to is that no one is allowed to circumvent measures taken to protect copyright. Thus, even though the copy protection on DVDs is obscenely easy to bypass, it is illegal to do so. In theory, this might be fine. In practice, this law has extended far beyond what I'd consider reasonable and has also been heavily abused. For instance, some software companies have attempted to use the DMCA to prevent security researchers from exposing bugs in their software. The law is sometimes used to silence critics by threatening them with a lawsuit, even though no copyright infringement was committed. The Chilling Effects project seems to be a good source for information regarding the DMCA and its various effects.

DRM combined with the DMCA can be stifling. A good example of how awful DRM is, and how the DMCA can affect the situation, is the Sony Rootkit Debacle. Boing Boing has a ridiculously comprehensive timeline of the entire fiasco. In short, Sony put DRM on certain CDs. The general idea was to prevent people from putting the CDs in their computers and ripping them to MP3s. To accomplish this, Sony surreptitiously installed software on customers' computers without their knowledge. A security researcher happened to notice this, and in researching the matter found that the Sony DRM had installed a rootkit that made the computer vulnerable to various attacks. Rootkits are black-hat cracker tools used to disguise the workings of malicious software. Attempting to remove the rootkit broke the Windows installation. Sony reacted slowly and poorly, releasing a service pack that supposedly removed the rootkit but actually opened up new security vulnerabilities. And it didn't end there. Reading through the timeline is astounding (as a result, I tend to shy away from Sony these days). Though I don't believe he was called on it, the security researcher who discovered these vulnerabilities was technically breaking the law, because the rootkit was intended to protect copyright.

A few months ago, my Windows computer died and I decided to give Linux a try. I wanted to see if I could get Linux to do everything I needed it to do. As it turns out, I could, but not legally. Watching DVDs on Linux is technically illegal, because I'm circumventing the copy protection on DVDs. Similar issues exist for other media formats. The details are complex, but in the end, it turns out that I'm not legally able to watch my legitimately purchased DVDs on my computer (I have since purchased a new computer that has an approved player installed). Similarly, if I were to purchase a song from the iTunes Music Store, it comes in a DRMed format. If I want to use that song on a portable device that doesn't support Apple's DRM format (let's say my phone), I'd have to convert it to a format that the device could understand, which would be illegal.

Which brings me to my next point, which is that DRM isn't really about protecting copyright. I've already established that it doesn't really accomplish that goal (and indeed, even works against many of the reasons copyright was put into place), so why is it still being pushed? One can only really speculate, but I'll bet that part of the issue has to do with IP owners wanting to "undercut fair use and then create new revenue streams where there were previously none." To continue an earlier example, if I buy a song from the iTunes Music Store and I want to put it on my non-Apple phone (not that I don't want one of those), the music industry would just love it if I were forced to buy the song again, in a format that is readable by my phone. Of course, that format would be incompatible with other devices, so I'd have to purchase the song again if I wanted to listen to it on those devices. When put in those terms, it's pretty easy to see why IP owners like DRM, and given the average person's reaction to such a scheme, it's also easy to see why IP owners are always careful to couch the debate in terms of piracy. This won't last forever, but it could be a bumpy ride.

Interestingly enough, distributors of digital media like Apple and Yahoo have recently come out against DRM. For the most part, these are just symbolic gestures. Cynics will look at Steve Jobs' Thoughts on Music and say that he's just passing the buck. He knows customers don't like or understand DRM, so he's just making a calculated PR move by blaming it on the music industry. Personally, I can see that, but I also think it's a very good thing. I find it encouraging that other distributors are following suit, and I also hope and believe this will lead to better things. Apple has proven that there is a large market for legally purchased music files on the internet, and other companies have even shown that selling DRM-free files yields higher sales. Indeed, the eMusic service sells high quality, variable bit rate MP3 files without DRM, and that has established eMusic as the #2 retailer of downloadable music behind the iTunes Music Store. Incidentally, this was not done for pure ideological reasons - it just made business sense. As yet, these pronouncements are only symbolic, but now that online media distributors have established themselves as legitimate businesses, they have ammunition with which to challenge the IP holders. This won't happen overnight, but I think the process has begun.

Last year, I purchased a computer game called Galactic Civilizations II (and posted about it several times). This game was notable to me (in addition to the fact that it's a great game) in that it was the only game I'd purchased in years that featured no CD copy protection (i.e. DRM). As a result, when I bought a new computer, I experienced none of the usual fumbling for 16-digit CD keys that I normally experience when trying to reinstall a game. Brad Wardell, the owner of the company that made the game, explained his thoughts on copy protection on his blog a while back:
I don't want to make it out that I'm some sort of kumbaya guy. Piracy is a problem and it does cost sales. I just don't think it's as big of a problem as the game industry thinks it is. I also don't think inconveniencing customers is the solution.
For him, it's not that piracy isn't an issue, it's that it's not worth imposing draconian copy protection measures that infuriate customers. The game sold much better than expected. I doubt this was because they didn't use DRM, but I can guarantee one thing: People don't buy games because they want DRM. However, this shows that you don't need DRM to make a successful game.

The future isn't all bright, though. Peter Gutmann's excellent Cost Analysis of Windows Vista Content Protection provides a good example of how things could get considerably worse:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called "premium content", typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it's not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server).
This is infuriating. In case you can't tell, I've never liked DRM, but at least it could be avoided. I generally take articles like the one I'm referencing with a grain of salt, but if true, it means that the DRM in Vista is so oppressive that it will raise the price of hardware… And since Microsoft commands such a huge share of the market, hardware manufacturers have to comply, even though some people (Linux users, Mac users) don't need the draconian hardware requirements. This is absurd. Microsoft should have enough clout to stand up to the media giants; there's no reason the DRM in Vista has to be so invasive (or even exist at all). As Gutmann speculates in his cost analysis, some of the potential effects of this are particularly egregious, to the point where I can't see consumers standing for it.

My previous post dealt with Web 2.0, and I posted a YouTube video that summarized how changing technology is going to force us to rethink a few things: copyright, authorship, identity, ethics, aesthetics, rhetorics, governance, privacy, commerce, love, family, ourselves. All of these are true. Earlier, I wrote that the purpose of copyright was to benefit society, and that protection and expiration were both essential. The balance between protection and expiration has been upset by technology. We need to rethink that balance. Indeed, many people smarter than I already have. The internet is replete with examples of people who have profited from giving things away for free. Creative Commons allows you to share your content so that others can reuse and remix it, but I don't think it has been adopted to the extent that it should be.

To some people, reusing or remixing music, for example, is not a good thing. This is certainly worthy of a debate, and it is a discussion that needs to happen. Personally, I don't mind it. For an example of why, watch this video detailing the history of the Amen Break. There are amazing things that can happen as a result of sharing, reusing and remixing, and that's only a single example. The current copyright environment seems to stifle such creativity, not least because copyright lasts so long (currently the life of the author plus 70 years). In a world where technology has enabled an entire generation to accelerate the creation and consumption of media, it seems foolish to lock up so much material for what could easily be over a century. Despite all that I've written, I have to admit that I don't have a definitive answer. I'm sure I can come up with something that would work for me, but this is larger than me. We all need to rethink this, and many other things. Maybe that Web 2.0 thing can help.

Update: This post has mutated into a monster. Not only is it extremely long, but I reference several other long, detailed documents and even somewhere around 20-25 minutes of video. It's a large subject, and I'm certainly no expert. Also, I generally like to take a little more time when posting something this large, but I figured getting a draft out there would be better than nothing. Updates may be made...

Update 2.15.07: Made some minor copy edits, and added a link to an Ars Technica article that I forgot to add yesterday.
Posted by Mark on February 14, 2007 at 11:44 PM .: link :.


Sunday, October 15, 2006

Link Dump
I've been quite busy lately so once again it's time to unleash the chain-smoking monkey research squad and share the results:
  • The Truth About Overselling!: Ever wonder how web hosting companies can offer obscene amounts of storage and bandwidth these days? It turns out that these web hosting companies are offering more than they actually have. Josh Jones of Dreamhost explains why this practice is popular and how they can get away with it (short answer: most people don't use or need anywhere near that much bandwidth).
  • Utterly fascinating pseudo-mystery on Metafilter. Someone got curious about a strange flash advertisement, and a whole slew of people started investigating, analyzing the flash file, plotting stuff on a map, etc... Reminded me a little of that whole Publius Enigma thing [via Chizumatic].
  • Weak security in our daily lives: "Right now, I am going to give you a sequence of minimal length that, when you enter it into a car's numeric keypad, is guaranteed to unlock the doors of said car. It is exactly 3129 keypresses long, which should take you around 20 minutes to go through." [via Schneier]
  • America's Most Fonted: The 7 Worst Fonts: Fonts aren't usually a topic of discussion here, but I thought it was funny that the Kaedrin logo (see upper left hand side of this page) uses the #7 worst font. But it's only the logo and that's ok... right? RIGHT?
  • Architecture is another topic rarely discussed here, but I thought that the new trend of secret rooms was interesting. [via Kottke]
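Incidentally, the 3129 figure in the car-keypad item above is consistent with a de Bruijn sequence: assuming a 5-button keypad that checks a sliding window of the last 5 presses, a cyclic B(5, 5) sequence covers all 5^5 = 3125 possible codes, plus 4 trailing presses to flush the final window. A sketch using the standard FKM (Lyndon word) construction:

```python
def de_bruijn(k, n):
    """Cyclic de Bruijn sequence B(k, n) via the FKM (Lyndon word) algorithm."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

cyclic = de_bruijn(5, 5)        # 5 buttons, 5-press codes
presses = cyclic + cyclic[:4]   # repeat first 4 symbols so every window appears
print(len(presses))             # 3129
```

Every length-5 window of `presses` is a distinct code, so entering the whole thing is guaranteed to hit the right one - hence the quoted "exactly 3129 keypresses."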
That's all for now. Things appear to be slowing down, so that will hopefully mean more time for blogging (i.e. less link dumpy type posts).
Posted by Mark on October 15, 2006 at 11:09 PM .: link :.


Saturday, August 26, 2006

Travelling Link Dump
I'll be on vacation this week, so Kaedrin compatriots Samael and DyRE will be posting in my stead, though they may not be able to post tomorrow. In any case, here are some links to chew on while I'm gone.
  • Bruce Schneier Facts: In the style of the infamous Chuck Norris Facts, some enterprising folks have come up with facts for security expert Bruce Schneier. "Bruce Schneier only smiles when he finds an unbreakable cryptosystem. Of course, Bruce Schneier never smiles." and "There is an otherwise featureless big black computer in Ft. Meade that has a single dial with three settings: Off, Standby, and Schneier." Heh, Cryptonerd humor.
  • Khaaan! [via the Ministry]
  • Neal Stephenson Q&A (.ram Real Video): I hate Real Player too, but it's worth it to see the man in action. It's from a few years ago, but it's great stuff.
  • I Smell a Mash-Up: James Grimmelmann notes the irony of Weird Al Yankovic's new song entitled Don’t Download This Song (available for free download, naturally) that parodies the RIAA's anti-downloading efforts.
  • How to read: Nick Hornby tells us to read what we like:
    It's set in stone, apparently: books must be hard work, otherwise they're a waste of time. And so we grind our way through serious, and sometimes seriously dull, novels, or enormous biographies of political figures, and every time we do so, books come to seem a little more like a duty, and Pop Idol starts to look a little more attractive. Please, please, put it down.

    And please, please stop patronising those who are reading a book - The Da Vinci Code, maybe - because they are enjoying it.

    For a start, none of us knows what kind of an effort this represents for the individual reader. It could be his or her first full-length adult novel; it might be the book that finally reveals the purpose and joy of reading to someone who has hitherto been mystified by the attraction that books exert on others. And anyway, reading for enjoyment is what we should all be doing.

    ...The regrettable thing about the culture war we still seem to be fighting is that it divides books into two camps, the trashy and the worthwhile. No one who is paid to talk about books for a living seems to be able to convey the message that this isn't how it works, that 'good' books can provide every bit as much pleasure as 'trashy' ones.
That's all for now. I hope everyone has a great week. I now leave you in the capable hands of the guest bloggers, Sam & DyRE....
Posted by Mark on August 26, 2006 at 11:09 AM .: Comments (0) | link :.


Sunday, October 16, 2005

Operation Solar Eagle
One of the major challenges faced in Iraq is electricity generation. Even before the war, neglect of an aging infrastructure forced scheduled blackouts. To compensate for the outages, Saddam distributed power to desired areas, while denying power to other areas. The war naturally worsened the situation (especially in the immediate aftermath, as there was no security at all), and the coalition and fledgling Iraqi government have been struggling to restore and upgrade power generation facilities since the end of major combat. Many improvements have been made, but attacks on the infrastructure have kept generation at or around pre-war levels for most areas (even if overall generation has increased, the equitable distribution of power means that some people are getting more than they used to, while others are not - ironic, isn't it?).

Attacks on the infrastructure have presented a significant problem, especially because some members of the insurgency seem to be familiar enough with Iraq's power network to attack key nodes, thus increasing the effects of their attacks. Consequently, security costs have gone through the roof. The ongoing disruption and inconsistency of power generation puts the new government under a lot of pressure. The inability to provide basic services like electricity delegitimizes the government and makes it more difficult to prevent future attacks and restore services.

When presented with this problem, my first thought was that solar power may actually help. There are many non-trivial problems with a solar power generation network, but Iraq's security situation combined with lowered expectations and an already insufficient infrastructure does much to mitigate the shortcomings of solar power.

In America, solar power is usually passed over as a large scale power generation system, but things that are problems in America may not be so problematic in Iraq. What are the considerations?
  • Demand: One of the biggest problems with solar power is that it's difficult to schedule power generation to meet demand (demand doesn't go down when the sun does, nor does demand necessarily coincide with peak generation), and a lot of energy is wasted because there isn't a reliable way to store energy (battery systems help, but they're not perfect and they also drive up the costs). The irregularity in generation isn't as bad as wind, but it is still somewhat irregular. In America, this is a deal breaker because we need power generation to match demand, so if we were to rely on solar power on a large scale, we'd have to make sure we have enough backup capacity running to make up for any shortfall (there's much more to it than that, but that's the high-level view). In Iraq, this isn't as big of a deal. The irregularity of conventional generation due to attacks on infrastructure is at least comparable if not worse than solar irregularity. It's also worth noting that it's difficult to scale solar power to a point where it would make a difference in America, as we use truly mammoth amounts of energy. Iraq's demands aren't as high (both in terms of absolute power and geographic distribution), and thus solar doesn't need to scale as much in Iraq.
  • Economics: Solar power requires a high initial capital investment, and also requires regular maintenance (which can be costly as well). In America, this is another dealbreaker, especially when coupled with the fact that its irregular nature requires backup capacity (which is wasteful and expensive as well). However, in Iraq, the cost of securing conventional power generation and transmission is also exceedingly high, and the prevalence of outages has cost billions in repairs and lost productivity. The decentralized nature of solar power thus becomes a major asset in Iraq, as solar power (if using batteries and if connected to the overall grid) can provide a seamless, uninterruptible supply of electricity. Attacks on conventional systems won't have quite the impact they once did, and attacks on the solar network won't be anywhere near as effective (more on this below). Given the increased cost of conventional production (and securing that production) in Iraq, and given the resilience of such a decentralized system, solar power becomes much more viable despite its high initial expense. The expense is probably the most significant challenge to overcome in Iraq.
  • Security: There are potential gains, as well as new potential problems, to be considered here. First, as mentioned in the economics section, a robust solar power system would help lessen the impact of attacks on conventional infrastructure, thus preventing expensive losses in productivity. Another hope here is that we'd see a corresponding decrease in attacks (less effective attacks should become less desirable). Also, the decentralized nature of solar power means that attacks on the solar infrastructure are much more difficult. However, this does not mean that there is no danger. First, even if attacks on conventional infrastructure decrease, they probably won't cease altogether (though, again, the solar network could help mitigate the effects of such attacks). And there is also a new problem: theft. In Iraq's struggling economy, theft of solar equipment is a major potential problem. Then again, once an area has solar power installed, individual homeowners and businesses won't be likely to neglect their most reliable power supply. Any attacks on the system would actually be attacks on specific individuals or businesses, which would further alienate the population and decrease support for the attackers. However, this assumes that the network is already installed. Those who set up the network (most likely outsiders) will be particularly vulnerable during the installation. Once installed, solar power is robust, but if terrorists attempt to prevent the installation (which seems likely, given that they target many external companies operating in Iraq with the intention of forcing them to leave), that would certainly be a problem (at the very least, it would increase costs).
  • Other Benefits: If an installed solar power network helps deter attacks on power generation infrastructure, the success will cascade across several other vectors. A stable and resilient power network that draws from diverse energy sources will certainly help improve Iraq's economic prospects. Greater energy independence and an improved national energy infrastructure will also lend legitimacy to the new Iraqi government, making it stronger and better able to respond to the challenges of rebuilding the country. If successful and widespread, it could become one of the largest solar power systems in the world, and much would be learned along the way. This knowledge would be useful for everyone, not just Iraqis. Obviously, there are also environmental benefits to such a system (and probably less bureaucratic red tape, like environmental impact statements, as well; indeed, while NIMBY might be a problem in America, I doubt it would be one in Iraq, given current conditions there).
In researching this issue, I came across a recent study prepared at the Naval Postgraduate School called Operation Solar Eagle. The report is excellent, and it considers most of the above, and much more (in far greater detail as well). Many of my claims above are essentially assumptions, but this report provides more concrete evidence. One suggestion they make with regard to the problem of theft is to use an RFID system to keep track of solar power equipment. Lots of other interesting stuff in there as well.

As shown above, there are obviously many challenges to completing such a project, particularly with respect to economic feasibility, but it seems to me to be an interesting idea. I'm glad that there are others thinking about it as well, though at this point it would be really nice to see something a little more concrete (or at least an explanation as to why this wouldn't work).
Posted by Mark on October 16, 2005 at 08:52 PM .: Comments (2) | link :.

End of This Day's Posts

Tuesday, August 16, 2005

Encrypted Confessions
Bruce Schneier points to an AP story about a convicted child-molester and suspected murderer who used cryptography to secure his tell-all diary:
Joseph Duncan III is a computer expert who bragged online, days before authorities believe he killed three people in Idaho, about a tell-all journal that would not be accessed for decades, authorities say.

Duncan, 42, a convicted sex offender, figured technology would catch up in 30 years, "and then the world will know who I really was, and what I really did, and what I really thought," he wrote May 13.
Schneier points out that such cases are often used by the government to illustrate the dangers of allowing regular people to encrypt data. "How can we allow people to use strong encryption, they ask, if it means not being able to convict monsters like Duncan?"

Schneier does a good job pointing out a few reasons why, but he dances around one of the most obvious: If Duncan thought the diary would be readable now, he never would have written it. His goal was a delayed release. He wanted to wait 30 years before the details of his confession were known. I guess it was an attempt to secure some sort of perverted legacy. But he never would have done so if he thought it would be released now (and used against him).

Encryption didn't allow him to commit the crimes, nor did it allow him to cover them up, as the data was encrypted under the assumption that it could not be broken for 30 years (which seems to me to be an unwise assumption, but look who we're talking about here). Indeed, since it is quite possible that the authorities will break the diary in the short term, you could even argue that encryption is actually helping the authorities prosecute the man (as he wouldn't have written the diary in the first place if he knew it would be broken so quickly). Could the fact that he knew he could encrypt a confession have contributed to his motivation for the crimes? I doubt it, but stranger things have happened.
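As an aside, whether a "30 years" assumption is sensible depends almost entirely on key length and how fast an attacker can guess. Here's a back-of-the-envelope sketch (my own illustrative numbers, not anything from the AP story or from Schneier):

```python
# Back-of-the-envelope: years needed to exhaust a symmetric key space by
# brute force, assuming an attacker who can test `guesses_per_sec` keys
# per second. The billion-guesses-per-second rate below is an assumption
# for illustration, not a claim about any real attacker.

def years_to_brute_force(key_bits, guesses_per_sec):
    seconds = 2 ** key_bits / guesses_per_sec
    return seconds / (60 * 60 * 24 * 365)

rate = 1e9  # assumed: one billion key guesses per second
for bits in (40, 56, 128):
    print(f"{bits}-bit key: ~{years_to_brute_force(bits, rate):.2g} years")
```

The point: a weak or short key falls in hours or years, well within Duncan's 30-year window, while a modern 128-bit key is astronomically out of reach for brute force alone. So "technology will catch up" is really a bet about key length and about algorithmic breaks, not about raw computing speed.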

All technology is a double-edged sword: it has good and bad uses, and it's used by honest citizens and criminals alike. Fortunately, as Schneier notes, the good usually outweighs the bad for almost all technologies.
Posted by Mark on August 16, 2005 at 12:25 AM .: Comments (2) | link :.

End of This Day's Posts

Sunday, July 17, 2005

Magic Security
In Harry Potter and the Half-Blood Prince, there are a number of new security measures suggested by the Ministry of Magic (as Voldemort and his army of Death Eaters have been running amok). Some of them are common sense, but some are much more questionable. Since I've also been reading prominent muggle and security expert Bruce Schneier's book, Beyond Fear, I thought it might be fun to analyze one of the Ministry of Magic's security measures according to Schneier's 5-step process.

Here is the security measure I've chosen to evaluate, as shown on page 42 of my edition:
Agree on security questions with close friends and family, so as to detect Death Eaters masquerading as others by use of the Polyjuice Potion.
For those not in the know, Polyjuice Potion allows the drinker to assume the appearance of someone else, presumably someone you know. Certainly a dangerous attack. The proposed solution is a "security question", set up in advance, so that you can verify the identity of the person in question.
  • Step 1: What assets are you trying to protect? The Ministry of Magic claims that its solution addresses the problem of impersonation by way of the Polyjuice Potion. However, this security measure essentially boils down to a form of identification, so what we're really trying to protect is an identity. The identity is, in itself, a security measure - for example, once verified, it could allow entrance to an otherwise restricted area.
  • Step 2: What are the risks to those assets? The risk is that someone could be impersonating a friend or family member (by using the aforementioned Polyjuice Potion) in an effort to gain entrance to a restricted area or otherwise gain the trust of a certain group of people. Unfortunately, the risk does not end there, as the Ministry's communication implies it does - it is also quite possible that an attacker could put your friend or family member under the Imperius Curse (a spell that grants the caster control of a victim). Because both the Polyjuice Potion and the Imperius Curse can be used to foil an identity-based system, any proposed solution should account for both. It isn't known how frequent such attacks are, but it is implied that both are increasing in frequency.
  • Step 3: How well does the security solution mitigate those risks? Not very well. First, it is quite possible for an attacker to figure out the security questions and answers ahead of time, through simple research or through direct observation and reconnaissance. Since the security questions need to be set up in the first place, it's quite possible that an attacker could impersonate someone and set up the security questions while in disguise. Indeed, even Professor Dumbledore alludes to the ease with which an attacker could subvert this system. Heck, we're talking about attackers who are most likely witches or wizards themselves. There may be a spell of some sort that would allow them to get the answer from a victim (the Imperius Curse is one example, and I'm sure there are all sorts of truth serums or charms that could be used as well). The solution works somewhat better in the case of the Polyjuice Potion, but since we've concluded that the Imperius Curse also needs to be considered, and since the solution provides almost no security in that case, the security question ends up being a poor solution to the identity problem.
  • Step 4: What other risks does the security solution cause? The most notable risk is that of a false positive. If the attacker successfully answers the security question, they achieve a certain level of trust. When you use identity as a security measure, you make impersonating that identity (or manipulating the person in question via the Imperius Curse) a much more valuable attack.
  • Step 5: What trade-offs does the security solution require? This solution is inexpensive and easy to implement, but also ineffective and inconvenient. It would also require a certain amount of vigilance to implement indefinitely. After weeks of strict adherence to the security measure, I think you'd find people getting complacent. They'd skip using the security measure when they're in a hurry, for example. When nothing bad happens, it would only reinforce the inconvenience of the practice. It's also worth noting that this system could be used in conjunction with other security measures, but even then, it's not all that useful.
It seems to me that this isn't a very effective security measure, especially when you consider that the attacker is likely a witch or wizard. This is apparent to many of the characters in the book as well. As such, I'd recommend a magic countermeasure. If you need to verify someone's identity, you should probably use a charm or spell of some sort instead of the easily subverted "security question" system. It shouldn't be difficult. In Harry Potter's universe, it would probably amount to pointing a wand at someone and saying "Identico!" (or some other such word vaguely related to "identity" or "identify"), at which point you could find out who the person is and whether they're under the Imperius Curse.
Posted by Mark on July 17, 2005 at 12:21 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, July 10, 2005

Security Theater
In response to Thursday's terrorist attacks in London, the United States raised the threat level for mass transit. As a result, the public saw "more police officers, increased video surveillance, the presence of dogs trained to sniff for bombs and inspections of trash containers around transit stations."

This is a somewhat sensible reaction, on numerous levels (though, ironically, not as much with respect to security). There is a small increase in actual security, but the response struck me as more effective as a piece of security theater. In the NY Times article referenced above, a police officer carrying a submachine gun is pictured. One of Kaedrin's 3 loyal readers wondered if that was really necessary. The truth is that it probably didn't provide much in the way of extra security, but security decisions are often made by those who have an agenda that encompasses more than just security. In Bruce Schneier's excellent book Beyond Fear, he calls this sort of thing security theater.
In 1970, there was no airline security in the U.S.: no metal detectors, no X-ray machines, and no ID checks. After a hijacking in 1972 ... airlines were required to post armed guards in passenger boarding areas. This countermeasure was less to decrease the risk of hijacking than to decrease the anxiety of passengers. After 9/11, the U.S. government posted armed National Guard troops at airport checkpoints primarily for the same reason (but were smart enough not to give them bullets). Of course airlines would prefer it if all their flights were perfectly safe, but actual hijackings and bombings are rare events whereas corporate earnings statements come out every quarter. For an airline, for the economy, and for the country, judicious use of security theater calmed fears... and that was a good thing.
I wonder if the submachine gun the police officer was carrying was loaded? I would assume it actually wasn't, as a submachine gun is about the worst thing you could use on a crowded mass transit system.

The important thing to note here is that security decisions are often based on more than just security considerations. As security theater, Thursday's heightened alert level reduced public anxiety. On a more cynical level, it's also an example of politicians and businesses hedging their bets (if an attack did come, they could at least claim they weren't caught completely off-guard). Sometimes, those in power have to do something quickly to address a security problem. Most people are comforted by action, even if their security isn't improved very much as a result. However, as Schneier notes, security theater is largely a palliative measure. In a world where security risks are difficult to judge, security theater can easily be confused with the real thing. It's important to understand such actions for what they are. At the same time, it should also be noted that such actions do provide some value, often extending beyond the realm of security (which can be important too).

Update: Minor additions and grammar changes.

Update 7.22.05: John Robb notes the added cost (i.e., the monetary cost, the inconvenience, the impact on civil liberties, etc.) of the extra security measures implemented as a result of the recent attempts in London, and how the costs have spread throughout the US. Robb also notes that Schneier himself has commented on the specific measure of searching bags. To clarify my comments above, I think the value provided by security theater is, at best, a short-term value, depending on your perspective. Is that value worth the added costs? If you're a leader or politician, probably. If you're a commuter, probably not. Politicians and other leaders usually have a different agenda than commuters, and they're the ones making the decisions.
Posted by Mark on July 10, 2005 at 10:26 PM .: Comments (0) | link :.

End of This Day's Posts

Monday, June 13, 2005

Guns and Pools
Kevin Baker posts a newspaper headline which demonstrates one of the points I made in Sharks, Deer, and Risk: "A child is 100 times more likely to drown in a pool than be killed by a gun." Kevin looked at the numbers a bit closer and came to the conclusion that the ratio is more like 175:1, but in either case, it demonstrates the point about perceived risks versus actual risks made in my post.
Posted by Mark on June 13, 2005 at 02:10 PM .: link :.

End of This Day's Posts

Sunday, May 29, 2005

Sharks, Deer, and Risk
Here's a question: Which animal poses the greater risk to the average person, a deer or a shark?

Most people's initial reaction (mine included) to that question is to answer that the shark is the more dangerous animal. Statistically speaking, the average American is much more likely to be killed by deer (due to collisions with vehicles) than by a shark attack. Truly accurate statistics for deer collisions don't exist, but estimates place the number of accidents in the hundreds of thousands. Millions of dollars worth of damage are caused by deer accidents, as are thousands of injuries and hundreds of deaths, every year.

Shark attacks, on the other hand, are much less frequent. Each year, approximately 50 to 100 shark attacks are reported. "World-wide, over the past decade, there have been an average of 8 shark attack fatalities per year."

It seems clear that deer actually pose a greater risk to the average person than sharks. So why do people think the reverse is true? There are a number of reasons, among them the fact that deer don't intentionally cause death and destruction (not that we know of anyway) and they are also usually harmed or killed in the process, while sharks directly attack their victims in a seemingly malicious manner (though I don't believe sharks to be malicious either).
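To put rough numbers on the comparison (the deer figure below is an assumed placeholder, since, as noted above, accurate statistics don't exist; the shark figure is the worldwide average quoted above):

```python
# Rough relative-risk comparison from the figures cited in this post.
# "Hundreds of deaths" per year from deer collisions in the US -- I'll
# assume 150 as a conservative illustrative value. Shark fatalities:
# roughly 8 per year worldwide over the past decade.

deer_deaths_per_year = 150    # assumed, for illustration only
shark_deaths_per_year = 8     # worldwide average cited above

ratio = deer_deaths_per_year / shark_deaths_per_year
print(f"By this estimate, deer kill roughly {ratio:.0f}x as many people as sharks")
```

Even with a deliberately low-ball deer figure, the ratio comes out well over an order of magnitude - and that's comparing US deer deaths against worldwide shark deaths, which understates the gap further.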

I've been reading Bruce Schneier's book, Beyond Fear, recently. It's excellent, and at one point he draws a distinction between what security professionals refer to as "threats" and "risks."
A threat is a potential way an attacker can attack a system. Car burglary, car theft, and carjacking are all threats ... When security professionals talk about risk, they take into consideration both the likelihood of the threat and the seriousness of a successful attack. In the U.S., car theft is a more serious risk than carjacking because it is much more likely to occur.
Everyone makes risk assessments every day, but most everyone also has different tolerances for risk. It's essentially a subjective decision, and it turns out that most of us rely on imperfect heuristics and inductive reasoning when it comes to these sorts of decisions (because it's not like we have the statistics handy). Most of the time, these heuristics serve us well (and it's a good thing too), but what this really ends up meaning is that when people make a risk assessment, they're basing their decision on a perceived risk, not the actual risk.

Schneier includes a few interesting theories about why people's perceptions get skewed, including this:
Modern mass media, specifically movies and TV news, has degraded our sense of natural risk. We learn about risks, or we think we are learning, not by directly experiencing the world around us and by seeing what happens to others, but increasingly by getting our view of things through the distorted lens of the media. Our experience is distilled for us, and it’s a skewed sample that plays havoc with our perceptions. Kids try stunts they’ve seen performed by professional stuntmen on TV, never recognizing the precautions the pros take. The five o’clock news doesn’t truly reflect the world we live in -- only a very few small and special parts of it.

Slices of life with immediate visual impact get magnified; those with no visual component, or that can’t be immediately and viscerally comprehended, get downplayed. Rarities and anomalies, like terrorism, are endlessly discussed and debated, while common risks like heart disease, lung cancer, diabetes, and suicide are minimized.
When I first considered the Deer/Shark dilemma, my immediate thoughts turned to film. This may be a reflection of how much movies play a part in my life, but I suspect some others would also immediately think of Bambi, with its cuddly, cute, and innocent deer, and Jaws, with its maniacal great white shark. Indeed, Fritz Schranck once wrote about these "rats with antlers" (as some folks refer to deer) and how "Disney's ability to make certain animals look just too cute to kill" has deterred many people from hunting and eating deer. When you look at the deer collision statistics, what you see is that what Disney has really done is endanger us all!

Given the above, one might be tempted to pursue some form of censorship to keep the media from degrading our ability to determine risk. However, I would argue that this is wrong. Freedom of speech is ultimately a security measure, and if we're to consider abridging that freedom, we must also seriously consider the risks of that action. We might be able to slightly improve our risk decision-making with censorship, but at what cost?

Schneier himself recently wrote about this subject on his blog, in response to an article which argues that suicide bombings in Iraq shouldn't be reported (because the reports scare people and serve the terrorists' ends). It turns out, there are a lot of reasons why the media's focus on horrific events in Iraq causes problems, but almost any way you slice it, it's still wrong to censor the news:
It's wrong because the danger of not reporting terrorist attacks is greater than the risk of continuing to report them. Freedom of the press is a security measure. The only tool we have to keep government honest is public disclosure. Once we start hiding pieces of reality from the public -- either through legal censorship or self-imposed "restraint" -- we end up with a government that acts based on secrets. We end up with some sort of system that decides what the public should or should not know.
Like all of security, this comes down to a basic tradeoff. As I'm fond of saying, human beings don't so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Risk can be difficult to determine, and the media's sensationalism doesn't help, but censorship isn't a realistic solution to that problem because it introduces problems of its own (and those new problems are worse than the one we're trying to solve in the first place). Plus, both Jaws and Bambi really are great movies!
Posted by Mark on May 29, 2005 at 08:50 PM .: link :.

End of This Day's Posts

Sunday, May 15, 2005

Spy Blogs
We Need Spy Blogs, by Kris Alexander: An interesting article advocating the use of blogging on Intelink, the US intelligence community's classified, highly secure mini-Internet.
A vast amount of information was available to us on Intelink, but there was no simple way to find and use the data efficiently. For instance, our search engine was an outdated version of AltaVista. (We've got Google now, a step in the right direction.) And while there were hundreds of people throughout the world reading the same materials, there was no easy way to learn what they thought. Somebody had answers to my questions, I knew, but how were we ever to connect?
It's clear that we're using a lot of technology to help our intelligence organizations, but data isn't the same thing as intelligence. Perhaps unsurprisingly, Alexander points to a few Army initiatives that are leading the way. Army Knowledge Online provides a sort of virtual workspace for each unit - so even soldiers in reserve units who are spread out over a wide area are linked. The Center for Army Lessons Learned, which resembles a blog, allows soldiers to "post white papers on subjects ranging from social etiquette at Iraqi funerals to surviving convoy ambushes."

Apparently the rest of the intelligence community has not kept up with the Army, perhaps confirming the lack of discipline hypothesized in my recent post A Tale of Two Software Projects. Of course, failure to keep up with technology is not a new criticism, even from within the CIA, but it is worth noting.
The first step toward reform: Encourage blogging on Intelink. When I Google "Afghanistan blog" on the public Internet, I find 1.1 million entries and tons of useful information. But on Intelink there are no blogs. Imagine if the experts in every intelligence field were turned loose - all that's needed is some cheap software. It's not far-fetched to picture a top-secret CIA blog about al Qaeda, with postings from Navy Intelligence and the FBI, among others. Leave the bureaucratic infighting to the agency heads. Give good analysts good tools, and they'll deliver outstanding results.

And why not tap the brainpower of the blogosphere as well? The intelligence community does a terrible job of looking outside itself for information. From journalists to academics and even educated amateurs - there are thousands of people who would be interested and willing to help. Imagine how much traffic an official CIA Iraq blog would attract. If intelligence organizations built a collaborative environment through blogs, they could quickly identify credible sources, develop a deep backfield of contributing analysts, and engage the world as a whole.
Posted by Mark on May 15, 2005 at 11:56 AM .: link :.

End of This Day's Posts

Sunday, March 13, 2005

A tale of two software projects
A few weeks ago, David Foster wrote an excellent post about two software projects. One was a failure, and one was a success.

The first project was the FBI's new Virtual Case File system; a tool that would allow agents to better organize, analyze and communicate data on criminal and terrorism cases. After 3 years and over 100 million dollars, it was announced that the system may be totally unusable. How could this happen?
When it became clear that the project was in trouble, Aerospace Corporation was contracted to perform an independent evaluation. It recommended that the software be abandoned, saying that "lack of effective engineering discipline has led to inadequate specification, design and development of VCF." SAIC has said it believes the problem was caused largely by the FBI: specifically, too many specification changes during the development process...an SAIC executive asserted that there were an average of 1.3 changes per day during the development. SAIC also believes that the current system is useable and can serve as a base for future development.
I'd be interested to see what the actual distribution of changes were (as opposed to the "average changes per day", which seems awfully vague and somewhat obtuse to me), but I don't find it that hard to believe that this sort of thing happened (especially because the software development firm was a separate entity). I've had some experience with gathering requirements, and it certainly can be a challenge, especially when you don't know the processes currently in place. This does not excuse anything, however, and the question remains: how could this happen?

The second project, the success, may be able to shed some light on that. DARPA was tapped by the US Army to help protect troops from enemy snipers. The requested application would spot incoming bullets and identify their point of origin, and it would have to be easy to use, mobile, and durable.
The system would identify bullets from their sound...the shock wave created as they travelled through the air. By using multiple microphones and precisely timing the arrival of the "crack" of the bullet, its position could, in theory, be calculated. In practice, though, there were many problems, particularly the high levels of background noise--other weapons, tank engines, people shouting. All these had to be filtered out. By Thanksgiving weekend, the BBN team was at Quantico Marine Base, collecting data from actual firing...in terrible weather, "snowy, freezing, and rainy" recalls DARPA Program Manager Karen Wood. Steve Milligan, BBN's Chief Technologist, came up with the solution to the filtering problem: use genetic algorithms. These are a kind of "simulated evolution" in which equations can mutate, be tested for effectiveness, and sometimes even "mate," over thousands of simulated generations (more on genetic algorithms here.)

By early March, 2004, the system was operational and had a name--"Boomerang." 40 of them were installed on vehicles in Iraq. Based on feedback from the troops, improvements were requested. The system has now been reduced in size, shielded from radio interference, and had its display improved. It now tells soldiers the direction, range, and elevation of a sniper.
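The genetic algorithm idea is conceptually simple, even if BBN's actual filtering problem was not. Here's a toy sketch (my own illustration, not BBN's implementation): a population of candidate parameter vectors is scored by a fitness function, the fittest survive, and new candidates are bred from survivors via crossover and mutation.

```python
import random

# Toy genetic algorithm (illustrative only): evolve a vector of
# parameters to maximize a fitness function -- here, simply "closeness
# to a known target vector". A real system would score candidates
# against measured data instead.

TARGET = [0.3, -1.2, 4.0, 0.7]

def fitness(individual):
    # Higher is better: negative squared error against the target.
    return -sum((a - b) ** 2 for a, b in zip(individual, TARGET))

def mutate(individual, rate=0.2, scale=0.5):
    # Each gene has a `rate` chance of receiving Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in individual]

def crossover(mom, dad):
    # One-point crossover: prefix from one parent, suffix from the other.
    point = random.randrange(1, len(mom))
    return mom[:point] + dad[point:]

def evolve(pop_size=60, generations=200):
    population = [[random.uniform(-5, 5) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 4]          # selection (elitism)
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children               # next generation
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the top quarter of each generation survives unchanged, the best fitness never regresses, and after a couple hundred generations the population converges close to the target. The appeal for a problem like acoustic filtering is that nothing here requires an analytical model of the noise: you only need a way to score how well a candidate performs.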
Now what was the biggest difference between the remarkable success of the Boomerang system and the spectacular failure of the Virtual Case File system? Obviously, the two projects present very different challenges, so a direct comparison doesn't necessarily tell the whole story. However, it seems to me that discipline (in the case of the Army) or the lack of discipline (in the case of the FBI) might have been a major contributor to the outcomes of these two projects.

It's obviously no secret that discipline plays a major role in the Army, but there is more to it than just that. Independence and initiative also play an important role in a military culture. In Neal Stephenson's Cryptonomicon, the way the character Bobby Shaftoe (a Marine Raider, which is "...like a Marine, only more so.") interacts with his superiors provides some insight (page 113 in my version):
Having now experienced all the phases of military existence except for the terminal ones (violent death, court-martial, retirement), he has come to understand the culture for what it is: a system of etiquette within which it becomes possible for groups of men to live together for years, travel to the ends of the earth, and do all kinds of incredibly weird shit without killing each other or completely losing their minds in the process. The extreme formality with which he addresses these officers carries an important subtext: your problem, sir, is doing it. My gung-ho posture says that once you give the order I'm not going to bother you with any of the details - and your half of the bargain is you had better stay on your side of the line, sir, and not bother me with any of the chickenshit politics that you have to deal with for a living.
Good military officers are used to giving an order, then staying out of their subordinate's way as they carry out that order. I didn't see any explicit measurement, but I would assume that there weren't too many specification changes during the development of the Boomerang system. Of course, the developers themselves made all sorts of changes to specifics and they also incorporated feedback from the Army in the field in their development process, but that is standard stuff.

I suspect that the FBI is not completely to blame, but as the report says, there was a "lack of effective engineering discipline." The FBI and SAIC share that failure. I suspect, from the number of changes requested by the FBI and the number of government managers involved, that micromanagement played a significant role. As Foster notes, we should be leveraging our technological abilities in the war on terror, and he suggests a loosely organized oversight committee (headed by "a Director of Industrial Mobilization") to make sure things like this don't happen very often. Sounds like a reasonable idea to me...
Posted by Mark on March 13, 2005 at 08:47 PM .: link :.

End of This Day's Posts

Sunday, November 07, 2004

Open Source Security
A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. In a follow up post, I examined how this concept could be applied to a broader range of information dissemination processes. That post focused on computer security and how full disclosure of system vulnerabilities actually improves security in the long run. Ironically, public scrutiny is the only reliable way to improve security.

Full disclosure is certainly not perfect. By definition, it increases risk in the short term, which is why opponents are able to make persuasive arguments against it. Like all security, it is a matter of tradeoffs. Does the long term gain justify the short term risk? As I'm fond of saying, human beings don't so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn't quite as bad as the old). There is no solution here, only a less disadvantaged system.

Now I'd like to broaden the subject even further, and apply the concept of open security to national security. With respect to national security, the stakes are higher and thus the argument will be more difficult to sustain. If people are unwilling to deal with a few computer viruses in the short term in order to increase long term security, imagine how unwilling they'll be to risk a terrorist attack, even if that risk ultimately closes a few security holes. This may be prudent, and it is quite possible that a secrecy approach is more necessary at the national security level. Secrecy is certainly a key component of intelligence and other similar aspects of national security, so open security techniques would definitely not be a good idea in those areas.

However, there are certain vulnerabilities in processes and systems we use that could perhaps benefit from open security. John Robb has been doing some excellent work describing how terrorists (or global guerrillas, as he calls them) can organize a more effective campaign in Iraq. He postulates a Bazaar of violence, which takes its lessons from the open source programming community (using Eric Raymond's essay The Cathedral and the Bazaar as a starting point):
The decentralized, and seemingly chaotic guerrilla war in Iraq demonstrates a pattern that will likely serve as a model for next generation terrorists. This pattern shows a level of learning, activity, and success similar to what we see in the open source software community. I call this pattern the bazaar. The bazaar solves the problem: how do small, potentially antagonistic networks combine to conduct war?
Not only does the bazaar solve the problem, it appears able to scale to disrupt larger, more stable targets. The bazaar essentially represents the evolution of terrorism as a technique into something more effective: a highly decentralized strategy that is nevertheless able to learn and innovate. Unlike traditional terrorism, it seeks to leverage gains from sabotaging infrastructure and disrupting markets. By focusing on such targets, the bazaar does not experience diminishing returns in the same way that traditional terrorism does. Once established, it creates a dynamic that is very difficult to disrupt.

I'm a little unclear as to what the purpose of the bazaar is - the goal appears to be a state of perpetual violence that is capable of keeping a nation in a position of failure/collapse. That our enemies seek to use this strategy in Iraq is obvious, but success essentially means perpetual failure. What I'm unclear on is how they seek to parlay this result into a successful state (which I assume is their long term goal - perhaps that is not a wise assumption).

In any case, reading about the bazaar can be pretty scary, especially when news from Iraq seems to correlate well with the strategy. Of course, not every attack in Iraq correlates, but this strategy is supposedly new and relatively dynamic. It is constantly improving on itself. They are improvising new tactics and learning from them in an effort to further define this new method of warfare.

As one of the commenters on his site notes, it is tempting to claim that John Robb's analysis is essentially an instruction manual for a guerrilla organization, but that misses the point. It's better to know where we are vulnerable before we discover that some weakness is being exploited.

One thing that Robb is a little short on is actual, concrete ways with which to fight the bazaar (there are some, and he has pointed out situations where U.S. forces attempted to thwart bazaar tactics, but such examples are not frequent). However, he still provides a valuable service in exposing security vulnerabilities. It seems appropriate that we adopt open source security techniques in order to fight an enemy that employs an open source platform. Vulnerabilities need to be exposed so that we may devise effective counter-measures.
Posted by Mark on November 07, 2004 at 08:56 PM .: link :.

End of This Day's Posts

Sunday, October 10, 2004

Open Security and Full Disclosure
A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I felt that the media could learn from such a model. Interestingly enough, such concepts can be applied to wider scenarios concerning information dissemination, particularly security.

Bruce Schneier has often written about such issues, and most of the information that follows is summarized from several of his articles, recent and old. The question with respect to computer security systems is this: Is publishing computer and network or software vulnerability information a good idea, or does it just help attackers?

When such a vulnerability exists, it creates what Schneier calls a Window of Exposure in which the vulnerability can still be exploited. This window exists until the vulnerability is patched and installed. There are five key phases which define the size of the window:
Phase 1 is before the vulnerability is discovered. The vulnerability exists, but no one can exploit it. Phase 2 is after the vulnerability is discovered, but before it is announced. At that point only a few people know about the vulnerability, but no one knows to defend against it. Depending on who knows what, this could either be an enormous risk or no risk at all. During this phase, news about the vulnerability spreads -- either slowly, quickly, or not at all -- depending on who discovered the vulnerability. Of course, multiple people can make the same discovery at different times, so this can get very complicated.

Phase 3 is after the vulnerability is announced. Maybe the announcement is made by the person who discovered the vulnerability in Phase 2, or maybe it is made by someone else who independently discovered the vulnerability later. At that point more people learn about the vulnerability, and the risk increases. In Phase 4, an automatic attack tool to exploit the vulnerability is published. Now the number of people who can exploit the vulnerability grows exponentially. Finally, the vendor issues a patch that closes the vulnerability, starting Phase 5. As people install the patch and re-secure their systems, the risk of exploit shrinks. Some people never install the patch, so there is always some risk. But it decays over time as systems are naturally upgraded.
The goal is to minimize the impact of the vulnerability by reducing the window of exposure (the area under the curve in figure 1). There are two basic approaches: secrecy and full disclosure.
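Schneier's "area under the curve" framing can be made concrete with a toy model: assign each phase a rough risk level and a duration, and the window of exposure is simply the sum of risk times duration across phases. All of the numbers below are invented purely for illustration; the point is only that full disclosure shortens the dangerous middle phases by pressuring the vendor to patch quickly.

```python
# Toy model of Schneier's "window of exposure". Risk rises as knowledge
# of a vulnerability spreads (Phases 1-4), then decays after a patch
# ships (Phase 5). Phase lengths and risk levels are made up.

def window_of_exposure(phases):
    """Sum of risk * duration over each phase -- the 'area under the curve'."""
    return sum(risk * duration for risk, duration in phases)

# Each entry: (risk level 0-10, duration in weeks) for Phases 1 through 5.
secrecy = [
    (0, 10),   # Phase 1: vulnerability exists, undiscovered
    (2, 20),   # Phase 2: discovered but kept quiet -- a long, uncertain lull
    (6, 8),    # Phase 3: eventually leaks or is rediscovered and announced
    (9, 6),    # Phase 4: attack tool circulates before a patch is ready
    (3, 10),   # Phase 5: patch released; risk decays as systems update
]

full_disclosure = [
    (0, 10),   # Phase 1: same starting point
    (2, 1),    # Phase 2: short -- the discoverer announces quickly
    (6, 2),    # Phase 3: public announcement pressures the vendor
    (9, 2),    # Phase 4: exploit appears, but the patch is close behind
    (3, 10),   # Phase 5: patch released
]

for name, phases in (("secrecy", secrecy), ("full disclosure", full_disclosure)):
    print(f"{name}: total exposure = {window_of_exposure(phases)}")
```

Under these made-up numbers, the secrecy timeline accumulates far more total exposure, most of it in the long Phase 2 lull where attackers may know about the flaw but defenders have no way to protect themselves.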

The secrecy approach seeks to reduce the window of exposure by limiting public access to vulnerability information. In a different essay about network outages, Schneier gives a good summary of why secrecy doesn't work well:
The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they're lost they're lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there's no way to recover security. Trying to base security on secrecy is just plain bad design.

... Secrecy prevents people from assessing their own risks.
Secrecy may work on paper, but in practice, keeping vulnerabilities secret removes motivation to fix the problem (it is possible that a company could utilize secrecy well, but it is unlikely that all companies would do so and it would be foolish to rely on such competency). The other method of reducing the window of exposure is to disclose all information about the vulnerability publicly. Full Disclosure, as this method is called, seems counterintuitive, but Schneier explains:
Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn't bother fixing them, believing in the security of secrecy.
Ironically, publishing details about vulnerabilities leads to a more secure system. Of course, this isn't perfect. Obviously publishing vulnerabilities constitutes a short term danger, and can sometimes do more harm than good. But the alternative, secrecy, is worse. As Schneier is fond of saying, security is about tradeoffs. As I'm fond of saying, human beings don't so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn't quite as bad as the old). There is no solution here, only a less disadvantaged system.

This is what makes advocating open security systems like full disclosure difficult. Opponents will always be able to point to its flaws, and secrecy advocates are good at exploiting the intuitive (but not necessarily correct) nature of their systems. Open security systems are just counter-intuitive, and there is a tendency to not want to increase risk in the short term (as things like full disclosure do). Unfortunately, that means that the long term danger increases, as there is less incentive to fix security problems.

By the way, Schneier has started a blog. It appears to be made up of the same content that he normally releases monthly in the Crypto-Gram newsletter, but spread out over time. I think it will be interesting to see if Schneier starts responding to events in a more timely fashion, as that is one of the keys to the success of blogs (and it's something that I'm bad at, unless news breaks on a Sunday).
Posted by Mark on October 10, 2004 at 11:56 AM .: link :.

End of This Day's Posts

Sunday, June 27, 2004

Recent Cloak and Dagger Happenings
Bruce Schneier attempts to untangle the news that the NSA has been reading Iranian codes, and that Ahmed Chalabi informed the Iranians. In doing so, he runs across the massive difficulties of attempting to analyze an intelligence happening. Indeed, what follows is practically useless, unless you enjoy this cat and mouse stuff like I do...
As ordinary citizens without serious security clearances, we don't know which machines' codes the NSA compromised, nor do we know how. It's possible that the U.S. broke the mathematical encryption algorithms that the Iranians used, as the British and Poles did with the German codes during World War II. It's also possible that the NSA installed a "back door" into the Iranian machines. This is basically a deliberately placed flaw in the encryption that allows someone who knows about it to read the messages.

There are other possibilities: the NSA might have had someone inside Iranian intelligence who gave them the encryption settings required to read the messages. John Walker sold the Soviets this kind of information about U.S. naval codes for years during the 1980s. Or the Iranians could have had sloppy procedures that allowed the NSA to break the encryption. ...

Whatever the methodology, this would be an enormous intelligence coup for the NSA. It was also a secret in itself. If the Iranians ever learned that the NSA was reading their messages, they would stop using the broken encryption machines, and the NSA's source of Iranian secrets would dry up. The secret that the NSA could read the Iranian secrets was more important than any specific Iranian secrets that the NSA could read.

The result was that the U.S. would often learn secrets they couldn't act upon, as action would give away their secret. During World War II, the Allies would go to great lengths to make sure the Germans never realized that their codes were broken. The Allies would learn about U-boat positions, but wouldn't bomb the U-boats until they spotted the U-boat by some other means...otherwise the Nazis might get suspicious.

There's a story about Winston Churchill and the bombing of Coventry: supposedly he knew the city would be bombed but could not warn its citizens. The story is apocryphal, but is a good indication of the extreme measures countries take to protect the secret that they can read an enemy's secrets.

And there are many stories of slip-ups. In 1986, after the bombing of a Berlin disco, then-President Reagan said that he had irrefutable evidence that Qadaffi was behind the attack. Libyan intelligence realized that their diplomatic codes were broken, and changed them. The result was an enormous setback for U.S. intelligence, all for just a slip of the tongue.
There are also cases when compromised codes are used... The Japanese attack on Midway was extraordinarily complex, and it relied on completely surprising the Americans. US cryptanalysts had partially broken the Japanese code, and were able to deduce most of the Japanese attack plan, but they were missing two key pieces of information - the time and place of the attack. They were able to establish that the target of the attack was represented by the letters AF, and they suspected that Midway was a plausible target. To confirm that Midway was the target, the US military sent an uncoded message indicating that the island's desalination plant had broken down. Shortly thereafter, a Japanese message was intercepted indicating that AF would be running low on water. However, such clarity in intelligence coups like this is quite rare, and the Iranian news is near impossible to decipher. You get stuck in a recursive and byzantine "what if" structure - what if they know we know they know?
Iranian intelligence supposedly tried to test Chalabi's claim by sending a message about an Iranian weapons cache. If the U.S. acted on this information, then the Iranians would know that its codes were broken. The U.S. didn't, which showed they're very smart about this. Maybe they knew the Iranians suspected, or maybe they were waiting to manufacture a plausible fictitious reason for knowing about the weapons cache.
So Iran's Midway-style attempt to confirm Chalabi's claim did not bear fruit. If, that is, Chalabi even told them anything. Who knows? Everything is open to speculation when it comes to this.
If the Iranians knew that the U.S. knew, why didn't they pretend not to know and feed the U.S. false information? Or maybe they've been doing that for years, and the U.S. finally figured out that the Iranians knew. Maybe the U.S. knew that the Iranians knew, and are using the fact to discredit Chalabi.
I'd like to know more about this story, but it seems woefully underreported in the media and it is way too cloak and dagger to accurately analyze with the information currently available. The sad thing is that I suspect we'll never be able to figure it out.
Posted by Mark on June 27, 2004 at 08:59 PM .: link :.

End of This Day's Posts

Sunday, April 04, 2004

Thinking about Security
I've been making my way through Bruce Schneier's Crypto-Gram newsletter archives, and I came across this excellent summary of how to think about security. He breaks security down into five simple questions that should be asked of a proposed security solution, some obvious, some not so much. In the post-9/11 era, we're being presented with all sorts of security solutions, and so Schneier's system can be quite useful in evaluating proposed security systems.
This five-step process works for any security measure, past, present, or future:

1) What problem does it solve?
2) How well does it solve the problem?
3) What new problems does it add?
4) What are the economic and social costs?
5) Given the above, is it worth the costs?
What this process basically does is force you to judge the tradeoffs of a security system. All too often, we either assume a proposed solution doesn't create problems of its own, or assume that because a proposed solution isn't perfect, it's useless. Security is a tradeoff. It doesn't matter whether a proposed security system makes us safe. What matters is whether the system is worth the tradeoffs (or price, if you prefer). For instance, in order to make your computer invulnerable to external attacks from the internet, all you need to do is disconnect it from the internet. However, that means you can no longer access the internet! That is the price you pay for a perfectly secure solution to internet attacks. And it doesn't protect against attacks from those who have physical access to your computer. Also, you presumably wanted to use the internet, seeing as you had a connection to protect in the first place. The old saying still holds: a perfectly secure system is a perfectly useless system.
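The five questions make a handy reusable checklist. As a minimal sketch (the example measure and its answers are my own, borrowing the disconnected-computer scenario above):

```python
# A minimal sketch of Schneier's five-step evaluation as a checklist.
# The example measure and answers below are my own illustration.

QUESTIONS = [
    "What problem does it solve?",
    "How well does it solve the problem?",
    "What new problems does it add?",
    "What are the economic and social costs?",
    "Given the above, is it worth the costs?",
]

def evaluate(measure, answers):
    """Pair each of the five questions with an answer; reject partial analyses."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("all five questions must be answered")
    print(f"Evaluating: {measure}")
    for question, answer in zip(QUESTIONS, answers):
        print(f"  {question}\n    -> {answer}")

# Hypothetical example: disconnecting a machine from the internet.
evaluate("air-gapping a workstation", [
    "External network attacks.",
    "Perfectly -- no network, no network attacks.",
    "No internet access; physical-access attacks remain.",
    "Loss of all the value the connection provided.",
    "Only for machines that never needed the network.",
])
```

Requiring all five answers is the point of the exercise: it prevents the common mistake of stopping after question 2, where nearly every security measure looks like a good idea.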

In the post 9/11 world we're constantly being bombarded by new security measures, but at the same time, we're being told that a solution which is not perfect is worthless. It's rare that a new security measure will provide a clear benefit without causing any problems. It's all about tradeoffs...

I had intended to apply Schneier's system to a contemporary security "solution," but I can't seem to think of anything at the moment. Perhaps more later. In the mean time, check out Schneier's recent review of "I am Not a Terrorist" Cards, in which he tears apart a proposed security system that sounds interesting on the surface but makes little sense when you take a closer look (which Schneier does mercilessly).
Posted by Mark on April 04, 2004 at 11:09 PM .: link :.

End of This Day's Posts

Sunday, February 22, 2004

The Eisenhower Ten
The Eisenhower Ten by CONELRAD: An excellent article detailing a rather strange episode in U.S. history. During 1958 and 1959, President Eisenhower issued ten letters to mostly private citizens granting them unprecedented power in the event of a "national emergency" (i.e. nuclear war). Naturally, the Kennedy administration was less than thrilled with the existence of these letters, which, strangely enough, did not contain expiration dates.

So who made up this Shadow Government?
...of the nine, two of the positions were filled by Eisenhower cabinet secretaries and another slot was filled by the Chairman of the Board of Governors of the Federal Reserve. The remaining six were very accomplished captains of industry who, as time has proven, could keep a secret to the grave. It should be noted that the sheer impressiveness of the Emergency Administrator roster caused Eisenhower Staff Secretary Gen. Andrew J. Goodpaster (USA, Ret.) to gush, some 46 years later, "that list is absolutely glittering in terms of its quality." In his interview with CONELRAD, the retired general also emphasized how seriously the President took the issue of Continuity of Government: "It was deeply on his mind."
Eisenhower apparently assembled the list himself, and if that is the case, the quality of the list was no doubt "glittering". Eisenhower was a good judge of talent, and one of the astounding things about his command of allied forces during WWII was that he successfully assembled an integrated military command made up of both British and American officers, and they were actually effective on the battlefield. I don't doubt that he would be able to assemble a group of Emergency Administrators that would fit the job, work well together, and provide the country with a reasonably effective continuity of government in the event of the unthinkable.

Upon learning of these letters, Kennedy's National Security Advisor, McGeorge Bundy, asserted that the "outstanding authority" of the Emergency Administrators should be terminated... but what happened after that is somewhat of a mystery. Some correspondence exists suggesting that several of the Emergency Administrators were indeed relieved of their duties, but there are still questions as to whether Kennedy retained the services of three of the Eisenhower Ten and whether he established an emergency administration of his own.
It is Gen. Goodpaster's assertion that because Eisenhower practically wrote the book on Continuity of Government, the practice of having Emergency Administrators waiting in the wings for the Big One was a tradition that continued throughout the Cold War and perhaps even to this day.
On March 1, 2002, the New York Times reported that Bush had indeed set up a "shadow government" in the wake of the 9/11 terror attacks. This news was, of course, greeted with much consternation, and understandably so. Though there may be a historical precedent (even if it is a controversial one) for such a thing, the details of such an open-ended policy are still a bit fuzzy to me...

CONELRAD has done an excellent job collecting, presenting, and analyzing information pertaining to the Eisenhower Ten, and I highly recommend that anyone interested in the issue of continuity of government check it out. Even with that, there are still lots of unanswered questions about the practice, but it is still fascinating reading....
Posted by Mark on February 22, 2004 at 09:31 PM .: link :.

End of This Day's Posts

Thursday, November 20, 2003

The New Paradigm of Intelligence Agility
Whether you believe 9/11 and subsequent events to include massive intelligence failures or not, it has become clear that our intelligence capabilities lack agility. As a nation, we have not moved beyond the Cold War paradigm of threat-based strategic thinking. This thinking was well suited to deterring and defeating specific threats, but has left us unprepared to effectively respond to emerging threats such as terrorism.

The problem with most calls for intelligence or military reform in the post-9/11 era is that they are all still stuck in that Cold War paradigm. In the future, we may be able to cope with the terrorist threat, but what about the next big threat to come along? The true solution, as Bruce Berkowitz suggests, is not to simply change the list of specific threats, but to be agile. We need to be able to respond to new and emerging threats quickly and effectively.

Fortunately, the ability to effectively respond to terrorism may not be possible without instituting at least a measure of agility in our intelligence community. When planning against the Soviets, we had the luxury of knowing that the "threat changed incrementally, came from a known geographic location, and was most likely to follow a well-understood attack plan." The nature of terrorists is less static than that of the Soviets, so if we are to succeed, we will need to orient ourselves towards a condition of agility. The Soviets required an intense focus of resources on a single threat, whereas terrorism requires our resources to be more dispersed. Agility will give us the ability to evaluate new and emerging threats, and to dynamically adjust resources based on where we need them.

So, in this context, what is agility? Berkowitz has the answer:
For an intelligence organization, agility can be defined as having four features. First, the organization needs to be able to move people and other resources quickly and efficiently as requirements change. Second, it needs to be able to draw on expertise and information sources from around the world. Third, it needs to be able to move information easily so that all of the people required to produce an intelligence product can work together effectively. And, fourth, it needs to be able to deliver products to consumers when needed and in the form they require to do their job.
And how do we achieve this goal? The answer isn't necessarily a dramatic restructuring of our intelligence community. Agility in this context depends on unglamorous, mundane things like standardized clearances and feedback loops between managers and analysts. We should be encouraging innovation in analysis and ways to penetrate targets. Perhaps most important is the need for a system to escalate activities when the stakes are high:
[We need] Procedures that tell everyone when the stakes are high and they should take more risks and act more aggressively-despite the potential costs. The Defense Department has these procedures... the "Defense Condition," or DEFCON, system. The CIA does not.
Our intelligence community correctly recognized the threat that terrorism posed long before 9/11, but lacked the organizational agility to shift resources to counter that threat. Currently, we are doing a better job of confronting terrorism, but we will need to be agile if we are to respond to the next big threat. As Bruce Schneier comments, taking away pocket knives and box cutters doesn't improve airline security:
People who think otherwise don't understand what allowed the terrorists to take over four planes two years ago. It wasn't a small knife. It wasn't a box cutter. The critical weapon that the terrorists had was surprise. With surprise they could have taken the planes over with their bare hands. Without surprise they couldn't have taken the planes over, even if they had guns.

And surprise has been confiscated on all flights since 9/11. It doesn't matter what weapons any potential new hijackers have; the passengers will no longer allow them to take over airplanes. I don't believe that airplane hijacking is a thing of the past, but when the next plane gets taken over it will be because a group of hijackers figured out a clever new weapon that we haven't thought of, and not because they snuck some small pointy objects through security.
I've been hard on the intelligence community (or rather, the way they interact with our politicians) lately, but theirs is truly a thankless job. By their nature, they don't get to publicize their successes, but we all see their failures. Unfortunately we cannot know how successful they've been in the past two years, but given how few terrorist attacks there have been during that period, the outlook is promising. We may be more agile than we know...
Posted by Mark on November 20, 2003 at 12:37 AM .: link :.

End of This Day's Posts

Sunday, November 09, 2003

The State of U.S. Intelligence
Over the past few years, I've spent a fair amount of time reading up on the intelligence community and its varied strengths and weaknesses. I've also spent a fair amount of time defending the Bush administration (or pointing out flaws in the arguments against the administration) in various forums, if only because no one else would. However, I've come to believe that our intelligence community is in poor shape... not really because of those we have working at these agencies, but because of the interaction between the intelligence community and the rest of the government.

The problem appears to be more systemic than deliberate, as questionable practices such as "stovepiping" (the practice of taking a piece of intelligence or a request, bypassing the chain of command, and bringing it straight to the highest authority) became commonplace in the administration, even before 9/11. Basically, the Bush administration fixed the system so that they got raw intelligence without the proper analysis (intelligence is usually subjected to a thorough vetting). Given that they were also openly (and perhaps rightfully) distrustful of the intelligence community (and that the feeling was mutual), is it any wonder that they tried to bypass the system?

Don't get me wrong, what the administration has done is clearly wrong and the "stovepiping" situation should be corrected immediately. There appears to be some spiteful and petty actions being taken by both the White House and the Intelligence Community, and no one is benefiting from this. A very cynical feeling is running through one of the most important areas of our national security. This feeling is exemplified by the recent leaked memo written by a member of Senator Jay Rockefeller's (D-WVa) staff. The memo recommends that Democrats launch an investigation "into pre-war Iraq intelligence in such a way that it could bring maximum embarrassment to President Bush in his re-election campaign." It has been fairly suggested that this memo is only a desperate response to the Bush administration's maneuverings, but this does not excuse the downright destructive course of action that the memo advocates.

Bob Kerrey, a former vice-chairman of the Senate Select Committee on Intelligence, wrote an excellent op-ed on this subject:
The production of a memo by an employee of a Democratic member of the Senate Select Committee on Intelligence is an example of the destructive side of partisan politics. That it probably emerged as a consequence of an increasingly partisan environment in Washington and may have been provoked by equally destructive Republican acts is neither a comfort nor a defensible rationalization.
I have no doubt that there are Republican memos of a similar nature floating about, but the Senate Intelligence Committee, by virtue of its importance, is supposed to be beyond partisan politics, and it has been in the past. It isn't now. This, too, is unacceptable and needs to be corrected. Indeed, the Senate Intelligence Committee hasn't held an open hearing in months, nor has it released any preliminary findings or provided any other insight. Its website hasn't been updated in months and contains spelling errors on every page ("Jurisdicton"!?).

The blame does not lie with any one governmental entity, but their stubborn refusal to play well together, especially with something as important as intelligence, is troubling to say the least. We are a nation at war, and if we are to succeed, we must trust in our government to effectively evaluate intelligence at all levels. The practice of "stovepiping" must end, and the White House will need to trust in the intelligence community to provide accurate, useful, and timely information. For their part, the intelligence community will have to provide this information and live up to certain expectations - and, for example, when the Vice President asks for something to be checked out, you might want to put someone competent on the case. Sending a former ambassador to Niger without any resources other than his own contacts, no matter how knowledgeable he may be, simply doesn't cut it. He didn't even file a formal report. I don't pretend to know how or why those involved acted the way they did, but I do know that the end result was representative of the troubling breakdown of communication between the CIA and the White House.

And the Senate Intelligence Committee could perhaps learn something from the House intelligence Committee, which, in a genuinely constructive act of bipartisan oversight of intelligence, "challenged the CIA's refusal to comply with their request for a copy of the recent report by David Kay on the search for Iraqi weapons of mass destruction."

Of course, it must also be said that public acknowledgements about intelligence failures before 9/11 or the Iraq war may also prove to be counterproductive as they could reveal valuable intelligence sources (which would be "silenced" by our enemies). Such information cannot be made public without jeopardizing the lives of our people, and it shouldn't. In the end, we must trust in our government and they must trust in themselves if we are to accomplish anything. If the past few years are any indication, however, we may be in a lot of trouble. [thanks to Secrecy News for Intelligence Committee info]
Posted by Mark on November 09, 2003 at 10:00 PM .: link :.

End of This Day's Posts

Sunday, October 12, 2003

Treason. Such an ugly word. Aldrich Ames prefers "spying." Such rationalizations are a part of what made Ames one of the most cold-blooded traitors in U.S. history. He also remains the most damaging mole (to our knowledge) to betray the CIA.

Spying was in Ames' blood. His father was a spy, and he spent summers working for the agency (nothing devious of course; he was only 16 and simply helped prepare resources, such as fake money, for training exercises.) With the help of his father, he was later hired by the agency and began training to become a case officer in the Directorate of Operations, the CIA's covert branch. His early career proved to be lackluster. He seemed to have difficulty recruiting spies.

He eventually caught a few breaks managing already-turned "assets" (as spies are referred to) and began to make some progress. He was, however, consistently passed over for promotions due to his lack of recruiting ability. His personal life was a mess and his marriage was falling apart. He began drinking heavily. In order to prove his worth, he took a tour in Mexico City, where he once again failed to recruit a single spy. His failure in Mexico City only led to more drinking and disillusionment. His agency friends were worried about him, and set him up with Maria del Rosario, a cultural attaché at the Colombian Embassy in Mexico. Ames promptly fell in love.

Thanks to an agency friend who only knew of Ames' success with managing assets, Ames was finally promoted, and moved back to Washington. He was named counterintelligence branch chief in Soviet operations, a job that would give him access to nearly all of the agency's Soviet cases. Eventually, Rosario came to join him, and he divorced his first wife and remarried.

At the time, the CIA was enjoying an extensive network of intelligence assets, penetrating every aspect of the Soviet system. The range and degree of programs was wider than it had ever been, and Ames had access to all of it. Meanwhile, Rosario was running up huge bills that Ames simply couldn't afford to pay. She talked with her mother on the telephone every day, running up enormous long-distance phone bills. The phone bills along with other gratuitous spending and the cost of his divorce put Ames in deep debt.

When and how Ames exactly began his espionage for the Soviet Union is still debated. Ames claims that he had come up with the "perfect scam." In exchange for $50,000 (roughly the amount of debt he had run up), he would give the Soviets the names of three Russians spying for the CIA. However, the three agents he claims he gave up were actually "double agents" who still worked for the KGB. This was a rather elegant proposal: he was able to shield the U.S. and the CIA from harm because he was only giving the KGB the names of its own agents.

The FBI and CIA disagree, however. They claim Ames gave up the CIA agents who were most likely to discover Ames' betrayal.

Regardless of how malicious he was when he started, this act represented the first step down a slippery slope, indeed. Two days after Ames had received his first payment from the Soviets, the infamous Walker spy ring was broken up and arrested for betraying Naval secrets to the Soviets (and not long after that, another Soviet spy, Ronald Pelton, was arrested for giving away, among other things, the cable tapping operation known as Operation Ivy Bells.) The timing of Walker's arrest was suspicious and Ames became scared.
"I knew how well we had the Soviet system penetrated. It was only a matter of time before one of our spies learned what I had done. I was very vulnerable."
Ames immediately moved to protect himself. He met with his Soviet handlers and gave them the names of all of the CIA's "human assets" that he knew (with the exception of one friend whom he did not want to betray, but later did - on two occasions!), along with several pounds of CIA intelligence reports (apparently, he simply whisked them out of the CIA's offices in his briefcase.) The Soviet Politburo, severely embarrassed by the CIA's success in recruiting spies, ordered a mass arrest, executing many of the spies that Ames sold out.

Naturally, the CIA noticed that its spies were disappearing and ordered an investigation. Still reeling from the paralyzing effects of a career-destroying witch-hunt a few years earlier, the agency did not focus the investigation on finding a mole, preferring to explore other logical explanations. CIA investigators mistakenly concluded that the "1985 losses" (as they became known) were unrelated. Some were thought to have been caused by a defecting agent, others by mistakes made by the spies themselves. This was not entirely convincing, however, and several hard-nosed agents pressed for further investigation.

One of the CIA officers assigned to the case had a background in accounting and had the brilliantly obvious insight that the best way to find a mole is to look for unexplained wealth among your own agents (such a tactic might have helped nail Pelton, who sold out Ivy Bells for $35,000 to pay off his debts, and maybe Walker too.)

All during this time, Ames was working for, and getting paid (rather generously) by, the Soviets. He made no attempt to hide his newfound wealth, nor did his free-spending wife: an expensive wardrobe, a Jaguar sports car, Rolex watches, and so on. Most assumed that Rosario came from a wealthy family (some rather sloppy investigation seemed to confirm this, though the family, while socially prominent, was actually poor), but one agent who knew her and Ames from Mexico City knew that wasn't true, and reported it.

That proved to be Ames' undoing. His and his wife's overspending was a vital clue, though it didn't actually prove anything. One investigator noticed, however, that Ames had made several suspicious bank deposits in 1985. These deposits happened to coincide with the days he had lunch with his Soviet handler (whom everyone thought Ames was trying to develop as an "asset"). Ames had taken few precautions to hide his payments, and it was easy to build a case from there.
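The investigator's approach - matching the dates of unexplained deposits against the dates of known meetings - is simple enough to sketch. Everything below (the dates, the window size) is hypothetical, purely to illustrate the correlation technique:

```python
from datetime import date, timedelta

# Hypothetical data: dates of large, unexplained bank deposits, and
# dates the subject was known to have met a foreign contact.
deposits = [date(1985, 5, 18), date(1985, 7, 6), date(1985, 11, 2)]
meetings = [date(1985, 5, 17), date(1985, 7, 5), date(1985, 9, 12)]

def correlated(deposits, meetings, window_days=3):
    """Return (meeting, deposit) pairs where the deposit falls within
    window_days after a known meeting."""
    hits = []
    for d in deposits:
        for m in meetings:
            if timedelta(0) <= d - m <= timedelta(days=window_days):
                hits.append((m, d))
                break
    return hits

# Two of the three deposits land within a day of a meeting - the kind
# of pattern that turns suspicion into a case.
print(correlated(deposits, meetings))
```

The point isn't the code, of course; it's that the signal was sitting in plain financial records all along, waiting for someone to line up two columns of dates.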

On February 21, 1994, Ames and his wife were arrested by the FBI. Investigators had found several damning pieces of evidence, including letters to and from his Soviet handlers, and further evidence of his and his wife's gluttony. She was sentenced to 5 years in prison, then deported to Colombia. He was sentenced to life in prison. He jokes that, ironically, he sealed his own fate: the KGB had no one to swap for him. It had killed all of the spies it had arrested who were worth trading.
Ames would later attempt to rationalize his treason. "A lot of the barriers that should have stopped me from betraying my country were gone," he said. "The first barrier was the idea that political intelligence matters. It doesn't." Ames said he had become disillusioned because several presidents, beginning with Richard Nixon, had ignored the CIA's findings because they did not suit the White House's political agenda. "I realized these men's actions do not excuse mine, but they did influence my decision making and help grease the slope...I also had come to believe that the CIA was morally corrupt. The CIA is all about maintaining and expanding American imperial power, which I had come to think was wrong... and finally, I did not feel any sense of loyalty to what mass culture had become. How does treason fit into all of this? In some ways, not at all. I would love to say that I did what I did out of some moral outrage over our country's acts of imperialism or a political statement or out of anger toward the CIA or even a love of the Soviet Union. But the sad truth is that I did what I did because of the money and I can't get away from that. I wanted a future. I wanted what I saw [Rosario and I] could have together. Taking the money was essential to the recreation of myself and the continuous of us as a couple."
Interestingly enough, a recent Nicholas Kristof column in the New York Times reports that the CIA suspected Aldrich Ames gave up Valerie Plame's identity to the Soviets before his arrest, thus compromising her undercover security long before White House officials reportedly leaked the information. I generally take Kristof with a grain of salt, however, so you're free to take from that what you want...

Furthermore, the investigator who has taken up the Plame case is one John Dion, the head of the Justice Department's counterespionage division. He also just happens to have been the lead investigator on the Aldrich Ames case (as well as on former FBI agent Robert Hanssen, another infamous spy.)

In case you can't tell, I'm endlessly fascinated by these tales of espionage. For more information regarding the Ames case, check out:

Update: Now that I think about it, the fact that Dion, the man who prosecuted Ames, is investigating the Plame case may be what caused Kristof to point to Ames as the one who outed Plame... I've seen reporters make bigger stretches, but who knows?
Posted by Mark on October 12, 2003 at 10:50 PM .: link :.

End of This Day's Posts

Sunday, June 08, 2003

Oshkosh b' Gosh
The Cold War really was an amazingly strange time. I was alive during that time, but I was too young to really understand what was going on. Had I been older and aware of some of the things that are now known about that era, I'm not sure how I would have reacted. A while back I read a book about submarine espionage called Blind Man's Bluff, and I was shocked by the daring and audacity of our submarine forces.

One story in particular caught my eye. Operation Ivy Bells was a 1970s U.S. Navy and NSA plot to bug Soviet underwater communications cables in the Sea of Okhotsk*. Submarines periodically serviced the device and recovered tapes from it, providing U.S. intelligence with tons of valuable data. It's an utterly fascinating story, and it demonstrates yet again America's reliance on technology. (There is much more to the story than I will go into here, but I wrote a more detailed summary at E2. Read the whole thing, as they say... but if you really want to get into the details, you should check out the book.)

The wildly successful cable tapping operations in the Okhotsk were eventually discovered by the Soviets in the early 1980s. It was originally thought that the discovery was caused by a U.S. submarine mishap in which a sub fell on the cable (*ahem*), but when all the intelligence was analyzed, that explanation just didn't fit. In 1985, U.S. authorities arrested Ronald W. Pelton, a former NSA employee who had sold out the Okhotsk cable tapping operation to the Soviets for $35,000. Yes, the Soviets were able to uncover one of our most important secrets for a paltry $35,000. Another spy named John Walker (and a ring of friends and family members whom he had recruited) was also caught in 1985. Between the two of them, the Soviets got just as good a look at our communications as we had of theirs, and they didn't need to spend years on research, invest millions of dollars in technology, or risk their submariners' lives.

Now, the contrast between the way the Soviets went about information gathering and the way we did is an interesting one. The Soviets used a low-tech, inexpensive methodology that was very successful (a defecting KGB agent referred to the Walker ring as "the most important espionage victory in KGB history.") The U.S. spent millions of dollars on technology and research, then daringly entered Soviet waters to place the taps. The U.S. method was just as successful, but more costly. Then again, the research and technology that enabled the cable tapping operations weren't exclusive to these missions.

It's an interesting example of how a secure system can be undone by simple human interactions, isn't it?

* Okhotsk was typically mispronounced as "Oshkosh" by those who partook in these missions (hence the title of this post and of a chapter in the book)
Posted by Mark on June 08, 2003 at 11:01 PM .: link :.

End of This Day's Posts

Sunday, May 25, 2003

Security & Technology
The other day, I was looking around for some new information on Quicksilver (Neal Stephenson's new novel, a follow up to Cryptonomicon) and I came across Stephenson's web page. I like everything about that page, from the low-tech simplicity of its design, to the pleading tone of the subject matter (the "continuous partial attention" bit always gets me). At one point, he gives a summary of a talk he gave in Toronto a few years ago:
Basically I think that security measures of a purely technological nature, such as guns and crypto, are of real value, but that the great bulk of our security, at least in modern industrialized nations, derives from intangible factors having to do with the social fabric, which are poorly understood by just about everyone. If that is true, then those who wish to use the Internet as a tool for enhancing security, freedom, and other good things might wish to turn their efforts away from purely technical fixes and try to develop some understanding of just what the social fabric is, how it works, and how the Internet could enhance it. However this may conflict with the (absolutely reasonable and understandable) desire for privacy.
And that quote got me thinking about technology and security, and how technology never really replaces human beings; it just makes certain tasks easier, quicker, and more efficient. There was a lot of talk about this sort of thing in the early 90s, when certain security experts were promoting the use of strong cryptography and digital agents that would choose products and spend our money for us.

As it turns out, most of those security experts seem to be changing their mind. There are several reasons for this, chief among them fallibility and, quite frankly, a lack of demand. It is impossible to build an infallible system (at least, it's impossible to recognize that you have built such a system), but even if you had accomplished such a feat, what good would it be? A perfectly secure system is also a perfectly useless system. Besides that, you have human ignorance to contend with. How many of you actually encrypt your email? It sounds odd, but most people don't even notice the little yellow lock that comes up in their browser when they are using a secure site.
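That "little yellow lock" corresponds to certificate verification, which modern languages expose directly. As a minimal sketch using Python's standard `ssl` module (the function below is illustrative, not from any of the sources discussed here), this is roughly the check a browser performs before showing the lock:

```python
import socket
import ssl

def peer_certificate(hostname, port=443):
    """Handshake over TLS, verifying the server's certificate against the
    system's trusted CAs; returns the peer certificate dict on success.
    A bad certificate or mismatched hostname raises ssl.SSLError instead
    of silently proceeding - the "lock" is this check succeeding."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# Verification is on by default in the default context:
default = ssl.create_default_context()
print(default.verify_mode == ssl.CERT_REQUIRED, default.check_hostname)
```

The machinery works; the human-factors problem is that almost nobody looks at what it reports.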

Applying this to our military, there are some who advocate technology (specifically airpower) as a replacement for the grunt. The recent war in Iraq stands in stark contrast to these arguments, despite the fact that the civilian planners overruled the military's request for additional ground forces. In fact, Rumsfeld and his civilian advisors had wanted to send significantly fewer ground forces, because they believed that airpower could do virtually everything by itself. The only reason there were as many as there were was because General Franks fought long and hard for increased ground forces (being a good soldier, you never heard him complain, but I suspect there will come a time when you hear about this sort of thing in his memoirs).

None of which is to say that airpower or technology is unnecessary, nor do I think that ground forces alone can win a modern war. The major lesson of this war is that we need balanced forces in order to respond with flexibility and depth to the varied and changing threats our country faces. Technology plays a large part in this, as it makes our forces more effective and more likely to succeed. But, to paraphrase a common argument, we need to keep in mind that weapons don't fight wars, soldiers do. While the technology we used provided a great deal of security, it's also true that the social fabric of our armed forces was undeniably important in the victory.

One thing Stephenson points to is an excerpt from a Sherlock Holmes story in which Holmes argues:
...the lowest and vilest alleys in London do not present a more dreadful record of sin than does the smiling and beautiful country-side...The pressure of public opinion can do in the town what the law cannot accomplish...But look at these lonely houses, each in its own fields, filled for the most part with poor ignorant folk who know little of the law. Think of the deeds of hellish cruelty, the hidden wickedness which may go on, year in, year out, in such places, and none the wiser.
Once again, the war in Iraq provides us with a great example. Embedding reporters in our units was a controversial move, and there are several reasons the decision could have been made. One reason may very well have been that having reporters around while we fought the war may have made our troops behave better than they would have otherwise. So when we watch the reports on TV, all we see are the professional, honorable soldiers who bravely fought an enemy which was fighting dirty (because embedding reporters revealed that as well).

Communications technology made embedding reporters possible, but it was the complex social interactions that really made it work (well, to our benefit at least). We don't derive security straight from technology, we use it to bolster our already existing social constructs, and the further our technology progresses, the easier and more efficient security becomes.

Update 6.6.03 - Tacitus discusses some similar issues...
Posted by Mark on May 25, 2003 at 02:03 PM .: link :.

End of This Day's Posts

Wednesday, March 19, 2003

Imperative of Intelligence Reform
September 11 and the Imperative of Reform in the U.S. Intelligence Community - Additional Views of Senator Richard C. Shelby : When the findings and recommendations of the congressional joint inquiry into September 11 were published last year, Senator Shelby (R-AL) independently released a lengthy document detailing his "additional views". It's interesting and more readable than most such discussions, and Shelby proposes some fairly radical concepts:
Intelligence collectors - whose status and bureaucratic influence depends to no small extent upon the monopolization of "their" information-stream - often fail to recognize the importance of providing analysts with "deep" access to data. The whole point of intelligence analysis against transnational targets is to draw patterns out of a mass of seemingly unrelated information, and it is crucial that the analysis of such patterns not be restricted only to personnel from a single agency. As Acting DIA Director Lowell Jacoby observed in his written testimony before the Joint Inquiry, "information considered irrelevant noise by one set of analysts may provide critical clues or reveal significant relationships when subjected to analytic scrutiny by another."

This suggests that the fundamental intellectual assumptions that have guided our Intelligence Community's approach to managing national security information for half a century may be in some respects crucially flawed, in that it may not be true that information-holders - the traditional arbiters of who can see "their" data - are the entities best placed to determine whether outsiders have any "need to know" data in their possession. Analysts who seek access to information, it turns out, may well be the participants best equipped to determine what their particular expertise and contextual understanding can bring to the analysis of certain types of data.
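The pattern-drawing Shelby and Jacoby describe can be reduced to a toy example. Everything below - the names, agencies, and records - is hypothetical, purely to show how joining two agencies' data streams surfaces a connection that neither information-holder could see alone:

```python
from datetime import date

# Hypothetical records held by two separate agencies; neither list
# looks significant on its own.
agency_a = [  # travel records
    {"name": "K. Volkov", "city": "Vienna", "date": "2002-03-04"},
    {"name": "J. Smith", "city": "Geneva", "date": "2002-05-11"},
]
agency_b = [  # financial transfers
    {"name": "J. Smith", "city": "Geneva", "date": "2002-05-12"},
    {"name": "A. Jones", "city": "Oslo", "date": "2002-06-01"},
]

def to_date(s):
    """Parse an ISO-style YYYY-MM-DD string into a date."""
    return date(*map(int, s.split("-")))

# Cross-agency join: the same name in the same city within a day,
# across both streams, is exactly the kind of pattern that only an
# analyst with "deep" access to both datasets can find.
matches = [
    (a["name"], a["city"])
    for a in agency_a
    for b in agency_b
    if a["name"] == b["name"] and a["city"] == b["city"]
    and abs((to_date(a["date"]) - to_date(b["date"])).days) <= 1
]
print(matches)
```

In Jacoby's terms, each record is "irrelevant noise" to its own agency; the significance only appears when a second analyst joins the streams.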
Also notable is his assertion that hard wiring our intelligence community to deal with the terrorist threat is "precisely the wrong answer, because such an approach would surely leave us unprepared for the next major threat, whatever it turns out to be." Rather, "we need an Intelligence Community agile enough to evolve as threats evolve, on a continuing basis." [via FAS's excellent Secrecy News]
Posted by Mark on March 19, 2003 at 02:11 PM .: link :.

End of This Day's Posts

Saturday, March 15, 2003

Democracy Vs. Secrecy
Democracies and Their Spies by Bruce Berkowitz : The other day, I was discussing some of the evidence presented by Colin Powell at the UN, and, as is readily apparent, the presentation did not warrant a conclusion that an invasion of Iraq is necessary. By its very nature, intelligence requires secrecy. Public knowledge places everyone on a level playing field, but intelligence, by its scarcity and exclusivity, tilts the field to your advantage. Thus, what can be released at any given time must be limited to that which does not nullify whatever advantage said intelligence provides. At this point, however, you are faced with a difficult question:
Now the challenge of operating an intelligence organization in a democracy becomes clear: Voting is essential for democracy; freedom of information is essential for voting; but free-flowing information defeats the functions of intelligence. Or, to put it another way, information is the engine that makes democracy work, whereas the effectiveness of intelligence depends on restricting the flow of information.
Berkowitz seeks to answer this challenge by examining how much secrecy usually exists in a democracy. As it turns out, secrecy in a democratic government is actually a common, and sometimes even necessary, occurrence:
Democracies are not strangers to secrets. Protecting secrets when appropriate, disclosing secrets when proper, and managing secrecy are all normal parts of the democratic process. The same principles that are used to strike a balance among competing interests in a democracy can be used to oversee intelligence secrets as well.
The article is well written and organized, and it provides at least a partial answer to the burning questions that intelligence faces. I say "partial" because Berkowitz's answer is strategic in nature, meaning that it's looking at the long-term effects of keeping and releasing intelligence. In the short term, though, it sure would be nice to know what our government knows about Iraq.
Posted by Mark on March 15, 2003 at 04:06 PM .: link :.

End of This Day's Posts

Sunday, December 15, 2002

Homeland Defence the First Time
The Kaiser Sows Destruction by Michael Warner : In the wake of the 9/11 attacks, American intelligence agencies are sure to respond in profound ways. Though it is impossible to predict the long-term impact of the attacks on those agencies, history suggests that we are following in the steps of our predecessors.
On a summer night in New York City in 1916, a pier laden with a thousand tons of munitions destined for Britain, France, and Russia in their war against Imperial Germany suddenly caught fire and exploded with a force that scarred the Statue of Liberty with shrapnel, shattered windows in Times Square, rocked the Brooklyn Bridge, and woke sleepers as far away as Maryland. Within days, local authorities had concluded that the blasts at "Black Tom" pier were the work of German saboteurs seeking to destroy supplies headed from neutral America to Germany's enemies.
Black Tom was but one of many incidents in the two-year German sabotage campaign in America before and during WWI, but it made a deep impression, and the parallels between the American response then and now are striking. The effects of the German sabotage campaign on American intelligence took at least three decades to work themselves out, and it is likely that the 9/11 attacks will also exert significant pressures for change in the American intelligence community for a long time to come.

Which is why the appointment of Henry Kissinger to head an official inquiry into national security problems, and his subsequent stepping down, are ultimately pointless. As Fritz Schranck notes:
"...the creation and appointment of 'official commissions' is a time-honored way to create a record on which political campaigns can be run. More often than not, these commissions exist to create the illusion of substantive action, while focused on the reality of political chit-building. Reviewing the facts and current laws and devising a non-partisan set of recommendations on the commission's subject matter is a distant second in priority. (By the way, the official commission technique is used at all levels of government.)"
Official commissions run by politicians have their uses, but the real progress will be made by the agencies themselves, whose leaders must also play the political game to get the resources needed to institute reforms. As history showed us during the German sabotage campaign and our response, this can be an incredibly slow process, taking decades to iron out the details. The intelligence community has a thankless job: the war it fights is only visible when it fails, and its best hope is to fight to a stalemate.
Posted by Mark on December 15, 2002 at 09:02 AM .: link :.

End of This Day's Posts

Tuesday, July 30, 2002

Spy Games
Working with the CIA by Garrett Jones : An interesting and informative article written by a retired case officer for the CIA. His stated goal is to provide insight into the working relationship between the military and the CIA. Basically, what it comes down to is communication: The CIA doesn't understand enough about the Military and its operations, and, conversely, the Military doesn't understand enough about the CIA and its operations. Good, effective communication is essential. In the course of explaining the ins-and-outs of the profession, Jones illuminates some of the unique logistical challenges of the profession, as well as some of the "pretty strange people" you meet when recruiting intelligence "assets":
Before everything else, human assets are recruited because they have access to secret information that can be obtained in no other manner. This means that not only may the asset not be a nice person, it also means he was not selected because he was brave, smart, or particularly hard-working.
Thus, by definition, the best assets are pretty strange people. The case officers handling these assets normally develop a fairly complicated relationship with their assets, becoming everything from father confessor to morale booster, from disciplinarian to best buddy. Like sausages and laws, if you have a queasy stomach, you don't want to see the case officer-asset relationship up close.
As usual, crappy movies and video games have given us the wrong idea about the intelligence community... Spies aren't super-commandos or James Bond-like secret agents; they are mostly just repeating what they've heard from people or what has come across their desk. They do not react favorably to being asked to do something new and strange. Additionally, Jones notes that "existing CIA stations were not established in order to support your mission, and existing CIA human assets were not originally recruited to support your mission". What this means is that intelligence is slow, and that there will be a lot of frustration and anxiety before the situation improves. Again, it's a fascinating article, and well worth the read. [found via the Punchstack]
Posted by Mark on July 30, 2002 at 11:44 PM .: link :.

End of This Day's Posts

Where am I?
This page contains entries posted to the Kaedrin Weblog in the Security & Intelligence Category.


Copyright © 1999 - 2012 by Mark Ciocco.