Best Entries

Polarized Debate

This is yet another in a series of posts fleshing out ideas initially presented in a post regarding Reflexive Documentary filmmaking and the media. In short, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I expanded the scope of the concepts originally presented in that post to include a broader range of information dissemination processes, which led to a post on computer security and a post on national security.

I had originally planned to apply the same concepts to debating in a relatively straightforward manner. I’ll still do that, but recent events have led me to reconsider my position, thus there will most likely be some unresolved questions at the end of this post.

So the obvious implication with respect to debating is that a debate can be more productive when each side exposes their own biases and agenda in making their argument. Of course, this is pretty much required by definition, but what I’m getting at here is more a matter of tactics. Debating tactics often take poor forms, with participants scoring cheap points by using intuitive but fallacious arguments.

I’ve done a lot of debating in various online forums, often taking a less than popular point of view (I tend to be a contrarian, and am comfortable on the defense). One thing that I’ve found is that as a debate heats up, the arguments become polarized. I sometimes find myself defending someone or something that I normally wouldn’t. This is, in part, because a polarizing debate forces you to dispute everything your opponent argues. To concede one point irrevocably weakens your position, or so it seems. Of course, the fact that I’m a contrarian, somewhat competitive, and stubborn also plays a part in this. Emotions sometimes flare, attitudes clash, and you’re often left feeling dirty after such a debate.

None of which is to say that polarized debate is bad. My whole reason for participating in such debates is to get others to consider more than one point of view. If a few lurkers read a debate and come away from it confused or at least challenged by some of the ideas presented, I consider that a win. There isn’t anything inherently wrong with partisanship, and as frustrating as some debates are, I find myself looking back on them as good learning experiences. In fact, taking an extreme position and thinking from that biased standpoint helps you understand not only that viewpoint, but the extreme opposite as well.

The problem with such debates, however, is that they really are divisive. A debate which becomes polarized might end up providing you with a more balanced view of an issue, but such debates sometimes also present an unrealistic picture of it. An example of this is abortion. Debates on that topic are usually heated and emotional, but the issue polarizes, and people who would otherwise come down somewhere around the middle end up arguing an extreme position for or against.

Again, I normally chalk this polarization up as a good thing, but after the election, I’m beginning to see the wisdom in perhaps pursuing a more moderated approach. With all the red/blue dichotomies being thrown around with reckless abandon, talk of moving to Canada and even talk of secession(!), it’s pretty obvious that the country has become overly polarized.

I’ve been writing about Benjamin Franklin recently on this here blog, and I think his debating style is particularly apt to this discussion:

Franklin was worried that his fondness for conversation and eagerness to impress made him prone to “prattling, punning and joking, which only made me acceptable to trifling company.” Knowledge, he realized, “was obtained rather by the use of the ear than of the tongue.” So in the Junto, he began to work on his use of silence and gentle dialogue.

One method, which he had developed during his mock debates with John Collins in Boston and then when discoursing with Keimer, was to pursue topics through soft, Socratic queries. That became the preferred style for Junto meetings. Discussions were to be conducted “without fondness for dispute or desire of victory.” Franklin taught his friends to push their ideas through suggestions and questions, and to use (or at least feign) naive curiosity to avoid contradicting people in a manner that could give offense. … It was a style he would urge on the Constitutional Convention sixty years later. [This is an excerpt from the recent biography Benjamin Franklin: An American Life by Walter Isaacson]

This contrasts rather sharply with what passes for civilized debate these days. Franklin actually considered it rude to directly contradict or dispute someone, something I had always found to be confusing. I typically favor a frank exchange of ideas (i.e. saying what you mean), but I’m beginning to come around. In the wake of the election, a lot of advice has been offered up for liberals and the left, and a lot of suggestions center around the idea that they need to “reach out” to more voters. This has been received with indignation by liberals and leftists, and one could hardly blame them. From their perspective, conservatives and the right are just as bad, if not worse, and they read such advice as if they’re being asked to give up their values. Irrespective of which side is right, I think the general thrust of the advice is that liberal arguments must be more persuasive. No matter how much we might want to paint the country into red and blue partitions, a more accurate map would show only a few small areas of red and blue drowning in a sea of purple. The Democrats don’t need to convince that many people to get a more favorable outcome in the next election.

And so perhaps we should be fighting the natural polarization of a debate and take a cue from Franklin, who stressed the importance of deferring, or at least pretending to defer, to others:

“Would you win the hearts of others, you must not seem to vie with them, but to admire them. Give them every opportunity of displaying their own qualifications, and when you have indulged their vanity, they will praise you in turn and prefer you above others… Such is the vanity of mankind that minding what others say is a much surer way of pleasing them than talking well ourselves.”

There are weaknesses to such an approach, especially if your opponent does not return the favor, but I think it is well worth considering. That the country has so many opposing views is not necessarily bad, and indeed is necessary in a democracy, where ideas must compete. But perhaps we need less spin and more moderation… In his essay “Apology for Printers” Franklin opines:

“Printers are educated in the belief that when men differ in opinion, both sides ought equally to have the advantage of being heard by the public; and that when Truth and Error have fair play, the former is always an overmatch for the latter.”

Indeed.

Update: Andrew Olmsted posted something along these lines, and he has a good explanation as to why debates often go south:

I exaggerate for effect, but anyone spending much time on a site devoted to either party quickly runs up against the assumption that the other side isn’t just wrong, but evil. And once you’ve made that assumption, it would be wrong to even negotiate with the other side, because any compromise you make is taking the country one step closer to that evil. The enemy must be fought tooth and nail, because his goals are so heinous.

… We tend to assume the worst of those we’re arguing with; that he’s ignoring this critical point, or that he understands what we’re saying but is being deliberately obtuse. So we end up getting frustrated, saying something nasty, and cutting off any opportunity for real dialogue.

I don’t know that we’re a majority, as Olmsted hopes, but there’s more than just a few of us, at least…

Arranging Interests in Parallel

I have noticed a tendency on my part to, on occasion, quote a piece of fiction, and then comment on some wisdom or truth contained therein. This sort of thing is typically frowned upon in rigorous debate, as fiction is, by definition, contrived, and thus referencing it in a serious argument is rightly seen as undesirable. Fortunately for me, this blog, though often taking a serious tone, is ultimately an exercise in thinking for myself. The point is to have fun. This is why I will sometimes quote fiction to make a point, and it’s also why I enjoy questionable exercises like speculating about historical figures. As I mentioned in a post on Benjamin Franklin, such exercises usually end up saying more about me and my assumptions than anything else. But it’s my blog, so that is more or less appropriate.

Astute readers must at this point be expecting to receive a citation from a piece of fiction, followed by an application of the relevant concepts to some ends. And they would be correct.

Early on in Neal Stephenson’s novel The System of the World, Daniel Waterhouse reflects on what is required of someone in his position:

He was at an age where it was never possible to pursue one errand at a time. He must do many at once. He guessed that people who had lived right and arranged things properly must have it all rigged so that all of their quests ran in parallel, and reinforced and supported one another just so. They gained reputations as conjurors. Others found their errands running at cross purposes and were never able to do anything; they ended up seeming mad, or else perceived the futility of what they were doing and gave up, or turned to drink.

Naturally, I believe there is some truth to this. In fact, the life of Benjamin Franklin, a historical figure from approximately the same time period as Dr. Waterhouse, provides us with a more tangible reference point.

Franklin was known to mix private interests with public ones, and to leverage both to further his business interests. The consummate example of Franklin’s proclivities was the Junto, a club of young workingmen formed by Franklin in the fall of 1727. The Junto was a small club composed of enterprising tradesmen and artisans who discussed issues of the day and also endeavored to form a vehicle for the furtherance of their own careers. The enterprise was typical of Franklin, who was always eager to form associations for mutual benefit, and who aligned his interests so they ran in parallel, reinforcing and supporting one another.

A more specific example of Franklin’s knack for aligning interests is when he produced the first recorded abortion debate in America. At the time, Franklin was running a print shop in Philadelphia. His main competitor, Andrew Bradford, published the town’s only newspaper. The paper was meager, but very profitable in both moneys and prestige (which led him to be more respected by merchants and politicians, and thus more likely to get printing jobs), and Franklin decided to launch a competing newspaper. Unfortunately, another rival printer, Samuel Keimer, caught wind of Franklin’s plan and immediately launched a hastily assembled newspaper of his own. Franklin, realizing that it would be difficult to launch a third paper right away, vowed to crush Keimer:

In a competitive bank shot, Franklin decided to write a series of anonymous letters and essays, along the lines of the Silence Dogood pieces of his youth, for Bradford’s [American Weekly Mercury] to draw attention away from Keimer’s new paper. The goal was to enliven, at least until Keimer was beaten, Bradford’s dull paper, which in its ten years had never published any such features.

The first two pieces were attacks on poor Keimer, who was serializing entries from an encyclopedia. His initial installment included, innocently enough, an entry on abortion. Franklin pounced. Using the pen names “Martha Careful” and “Celia Shortface,” he wrote letters to Bradford’s paper feigning shock and indignation at Keimer’s offense. As Miss Careful threatened, “If he proceeds farther to expose the secrets of our sex in that audacious manner [women would] run the hazard of taking him by the beard in the next place we meet him.” Thus Franklin manufactured the first recorded abortion debate in America, not because he had any strong feelings on the issue, but because he knew it would sell newspapers. [This is an excerpt from the recent biography Benjamin Franklin: An American Life by Walter Isaacson]

Franklin’s many actions of the time certainly weren’t running at cross purposes, and he did manage to align his interests in parallel. He truly was a master, and we’ll be hearing more about him on this blog soon.

This isn’t the first time I’ve written about this subject either. In a previous post, On the Overloading of Information, I noted one of the main reasons why blogging continues to be an enjoyable activity for me, despite changing interests and desires:

I am often overwhelmed by a desire to consume various things – books, movies, music, etc… The subjects of such things are also varied and, as such, often don’t mix very well. That said, the only thing I have really found that works is to align those subjects that do mix in such a way that they overlap. This is perhaps the only reason blogging has stayed on my plate for so long: since the medium is so free-form and since I have absolute control over what I write here and when I write it, it is easy to align my interests in such a way that they overlap with my blog (i.e. I write about what interests me at the time).

One way you can tell that my interests have shifted over the years is that the format and content of my writing here has also changed. I am once again reminded of Neal Stephenson’s original minimalist homepage in which he speaks of his ongoing struggle against what Linda Stone termed as “continuous partial attention,” as that curious feature of modern life only makes the necessity of aligning interests in parallel that much more important.

Aligning blogging with my other core interests, such as reading fiction, is one of the reasons I frequently quote fiction, even in reference to a serious topic. Yes, such a practice is frowned upon, but blogging is a hobby, the idea of which is to have fun. Indeed, Glenn Reynolds, progenitor of one of the most popular blogging sites around, also claims to blog for fun, and interestingly enough, he has quoted fiction in support of his own serious interests as well (more than once). One other interesting observation is that all references to fiction in this post, including even Reynolds’ references, are from Neal Stephenson’s novels. I’ll leave it as an exercise for the reader to figure out what significance, if any, that holds.

Open Source Security

A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. In a follow up post, I examined how this concept could be applied to a broader range of information dissemination processes. That post focused on computer security and how full disclosure of system vulnerabilities actually improves security in the long run. Ironically, public scrutiny is the only reliable way to improve security.

Full disclosure is certainly not perfect. By definition, it increases risk in the short term, which is why opponents are able to make persuasive arguments against it. Like all security, it is a matter of tradeoffs. Does the long term gain justify the short term risk? As I’m fond of saying, human beings don’t so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn’t quite as bad as the old). There is no solution here, only a less disadvantaged system.

Now I’d like to broaden the subject even further, and apply the concept of open security to national security. With respect to national security, the stakes are higher and thus the argument will be more difficult to sustain. If people are unwilling to deal with a few computer viruses in the short term in order to increase long term security, imagine how unwilling they’ll be to risk a terrorist attack, even if that risk ultimately closes a few security holes. This may be prudent, and it is quite possible that a secrecy approach is more necessary at the national security level. Secrecy is certainly a key component of intelligence and other similar aspects of national security, so open security techniques would definitely not be a good idea in those areas.

However, there are certain vulnerabilities in processes and systems we use that could perhaps benefit from open security. John Robb has been doing some excellent work describing how terrorists (or global guerrillas, as he calls them) can organize a more effective campaign in Iraq. He postulates a Bazaar of violence, which takes its lessons from the open source programming community (using Eric Raymond’s essay The Cathedral and the Bazaar as a starting point):

The decentralized, and seemingly chaotic guerrilla war in Iraq demonstrates a pattern that will likely serve as a model for next generation terrorists. This pattern shows a level of learning, activity, and success similar to what we see in the open source software community. I call this pattern the bazaar. The bazaar solves the problem: how do small, potentially antagonistic networks combine to conduct war?

Not only does the bazaar solve the problem, it appears able to scale to disrupt larger, more stable targets. The bazaar essentially represents the evolution of terrorism as a technique into something more effective: a highly decentralized strategy that is nevertheless able to learn and innovate. Unlike traditional terrorism, it seeks to leverage gains from sabotaging infrastructure and disrupting markets. By focusing on such targets, the bazaar does not experience diminishing returns in the same way that traditional terrorism does. Once established, it creates a dynamic that is very difficult to disrupt.

I’m a little unclear as to what the purpose of the bazaar is – the goal appears to be a state of perpetual violence that is capable of keeping a nation in a position of failure/collapse. That our enemies seek to use this strategy in Iraq is obvious, but success essentially means perpetual failure. What I’m unclear on is how they seek to parlay this result into a successful state (which I assume is their long term goal – perhaps that is not a wise assumption).

In any case, reading about the bazaar can be pretty scary, especially when news from Iraq seems to correlate well with the strategy. Of course, not every attack in Iraq correlates, but this strategy is supposedly new and relatively dynamic. It is constantly improving on itself. They are improvising new tactics and learning from them in an effort to further define this new method of warfare.

As one of the commenters on his site notes, it is tempting to claim that John Robb’s analysis is essentially an instruction manual for a guerrilla organization, but that misses the point. It’s better to know where we are vulnerable before we discover that some weakness is being exploited.

One thing that Robb is a little short on is actual, concrete ways to fight the bazaar (there are some, and he has pointed out situations where U.S. forces attempted to thwart bazaar tactics, but such examples are not frequent). However, he still provides a valuable service in exposing security vulnerabilities. It seems appropriate that we adopt open source security techniques in order to fight an enemy that employs an open source platform. Vulnerabilities need to be exposed so that we may devise effective counter-measures.

Open Security and Full Disclosure

A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I felt that the media could learn from such a model. Interestingly enough, such concepts can be applied to wider scenarios concerning information dissemination, particularly security.

Bruce Schneier has often written about such issues, and most of the information that follows is summarized from several of his articles, recent and old. The question with respect to computer security systems is this: Is publishing information about computer, network, or software vulnerabilities a good idea, or does it just help attackers?

When such a vulnerability exists, it creates what Schneier calls a Window of Exposure in which the vulnerability can still be exploited. This window exists until a patch is published and installed. There are five key phases which define the size of the window:

Phase 1 is before the vulnerability is discovered. The vulnerability exists, but no one can exploit it. Phase 2 is after the vulnerability is discovered, but before it is announced. At that point only a few people know about the vulnerability, but no one knows to defend against it. Depending on who knows what, this could either be an enormous risk or no risk at all. During this phase, news about the vulnerability spreads — either slowly, quickly, or not at all — depending on who discovered the vulnerability. Of course, multiple people can make the same discovery at different times, so this can get very complicated.

Phase 3 is after the vulnerability is announced. Maybe the announcement is made by the person who discovered the vulnerability in Phase 2, or maybe it is made by someone else who independently discovered the vulnerability later. At that point more people learn about the vulnerability, and the risk increases. In Phase 4, an automatic attack tool to exploit the vulnerability is published. Now the number of people who can exploit the vulnerability grows exponentially. Finally, the vendor issues a patch that closes the vulnerability, starting Phase 5. As people install the patch and re-secure their systems, the risk of exploit shrinks. Some people never install the patch, so there is always some risk. But it decays over time as systems are naturally upgraded.

The goal is to minimize the impact of the vulnerability by reducing the window of exposure (the area under the curve in figure 1). There are two basic approaches: secrecy and full disclosure.
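Schneier’s window of exposure can be made concrete with a toy calculation: treat risk as a piecewise-constant function of time across the five phases, and total exposure is then the area under that curve (duration times risk level, summed over the phases). The sketch below is in Python; the phase durations and risk levels are invented for illustration and are not drawn from Schneier, but they show how full disclosure can shrink the total area even though it raises risk earlier:

```python
# Toy model of the "window of exposure": risk is piecewise-constant
# across the five phases, and total exposure is the area under the
# risk curve. All numbers are hypothetical, chosen only to
# illustrate the tradeoff between secrecy and full disclosure.

def total_exposure(phases):
    """phases: list of (duration_in_days, risk_level) tuples."""
    return sum(duration * risk for duration, risk in phases)

# Secrecy: the announcement is delayed, so the quiet phase 2 drags
# on and the vendor feels little pressure to patch quickly.
secrecy = [
    (30, 0.0),   # phase 1: vulnerability exists, undiscovered
    (180, 0.2),  # phase 2: discovered but unannounced, long-lived
    (60, 0.6),   # phase 3: eventually announced
    (30, 1.0),   # phase 4: automatic attack tool published
    (90, 0.3),   # phase 5: patch out, residual risk decays
]

# Full disclosure: announcing early raises risk immediately, but
# the public pressure shortens phases 3 and 4 dramatically.
disclosure = [
    (30, 0.0),   # phase 1
    (7, 0.2),    # phase 2: discovery is announced quickly
    (14, 0.6),   # phase 3: short, because the vendor is on notice
    (7, 1.0),    # phase 4
    (90, 0.3),   # phase 5
]

print(total_exposure(secrecy))     # larger area under the curve
print(total_exposure(disclosure))  # smaller, despite higher early risk
```

The numbers are the whole argument here: disclosure makes the early part of the curve worse but compresses the most dangerous phases, so which approach wins depends entirely on how much the announcement actually accelerates the patch.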

The secrecy approach seeks to reduce the window of exposure by limiting public access to vulnerability information. In a different essay about network outages, Schneier gives a good summary of why secrecy doesn’t work well:

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

… Secrecy prevents people from assessing their own risks.

Secrecy may work on paper, but in practice, keeping vulnerabilities secret removes motivation to fix the problem (it is possible that a company could utilize secrecy well, but it is unlikely that all companies would do so and it would be foolish to rely on such competency). The other method of reducing the window of exposure is to disclose all information about the vulnerability publicly. Full Disclosure, as this method is called, seems counterintuitive, but Schneier explains:

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn’t bother fixing them, believing in the security of secrecy.

Ironically, publishing details about vulnerabilities leads to a more secure system. Of course, this isn’t perfect. Obviously publishing vulnerabilities constitutes a short term danger, and can sometimes do more harm than good. But the alternative, secrecy, is worse. As Schneier is fond of saying, security is about tradeoffs. As I’m fond of saying, human beings don’t so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn’t quite as bad as the old). There is no solution here, only a less disadvantaged system.

This is what makes advocating open security systems like full disclosure difficult. Opponents will always be able to point to its flaws, and secrecy advocates are good at exploiting the intuitive (but not necessarily correct) nature of their systems. Open security systems are just counter-intuitive, and there is a tendency to not want to increase risk in the short term (as things like full disclosure do). Unfortunately, that means that the long term danger increases, as there is less incentive to fix security problems.

By the way, Schneier has started a blog. It appears to be made up of the same content that he normally releases monthly in the Crypto-Gram newsletter, but spread out over time. I think it will be interesting to see if Schneier starts responding to events in a more timely fashion, as that is one of the keys to the success of blogs (and it’s something that I’m bad at, unless news breaks on a Sunday).

A Reflexive Media

“To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!”

– Anne Morrow Lindbergh

There are many types of documentary films. The most common form of documentary is referred to as Direct Address (aka Voice of God). In such a documentary, the viewer is directly acknowledged, usually through narration and voice-overs. There is very little ambiguity and it is pretty obvious how you’re expected to interpret these types of films. Many television and news programs use this style, to varying degrees of success. Ken Burns’ famous Civil War and Baseball series use this format eloquently, but most traditional propaganda films also fall into this category (a small caveat: most films are hybrids, rarely falling exclusively into one category). Such films give the illusion of being an invisible witness to certain events and are thus very persuasive and powerful.

The problem with Direct Address documentaries is that they grew out of a belief that Truth is knowable through objective facts. In a recent sermon he posted on the web, Donald Sensing spoke of the difference between facts and the Truth:

Truth and fact are not the same thing. We need only observe the presidential race to discern that. John Kerry and allies say that the results of America’s war against Iraq is mostly a failure while George Bush and allies say they are mostly success. Both sides have the same facts, but both arrive at a different “truth.”

People rarely fight over facts. What they argue about is what the facts mean, what is the Truth the facts indicate.

I’m not sure Sensing chose the best example here, but the concept itself is sound. Any documentary is biased in the Truth that it presents, even if the facts are undisputed. In a sense objectivity is impossible, which is why documentary scholar Bill Nichols admires films which seek to contextualize themselves, exposing their limitations and biases to the audience.

Reflexive Documentaries use many devices to acknowledge the filmmaker’s presence, perspective, and selectivity in constructing the film. It is thought that films like this are much more honest about their subjectivity, and thus provide a much greater service to the audience.

An excellent example of a Reflexive documentary is Errol Morris’ brilliant film, The Thin Blue Line. The film examines the “truth” around the murder of a Dallas policeman. The use of colored lighting throughout the film eventually correlates with who is innocent or guilty, and Morris is also quite manipulative through his use of editing – deconstructing and reconstructing the case to demonstrate just how problematic finding the truth can be. His use of framing calls attention to itself, daring the audience to question the intents of the filmmakers. The use of interviews in conjunction with editing is carefully structured to demonstrate the subjectivity of the film and its subjects. As you watch the movie, it becomes quite clear that Morris is toying with you, the viewer, and that he wants you to be critical of the “truth” he is presenting.

Ironically, a documentary becomes more objective when it acknowledges its own biases and agenda. In other words, a documentary becomes more objective when it admits its own subjectivity. There are many other forms of documentary not covered here (i.e. direct cinema/cinema verité, interview-based, performative, mock-documentaries, etc… most of which mesh together as they did in Morris’ Blue Line to form a hybrid).

In Bill Nichols’ seminal essay, Voice of Documentary (Can’t seem to find a version online), he says:

“Documentary filmmakers have a responsibility not to be objective. Objectivity is a concept borrowed from the natural sciences and from journalism, with little place in the social sciences or documentary film.”

I always found it funny that Nichols equates the natural sciences with journalism, as it seems to me that modern journalism is much more like a documentary than a natural science. As such, I think the lessons of Reflexive documentaries (and its counterparts) should apply to the realm of journalism.

The media emphatically does not acknowledge its biases. By bias, I don’t mean anything as short-sighted as liberal or conservative media bias, I mean structural bias of which political orientation is but a small part (that link contains an excellent essay on the nature of media bias, one that I find presents a more complete picture and is much more useful than the tired old ideological bias we always hear so much about*). Such subjectivity does exist in journalism, yet the media stubbornly persists in the firm belief that it is presenting the objective truth.

The recent CBS scandal, consisting of a story bolstered by what appear to be obviously forged documents, provides us with an immediate example. Terry Teachout makes this observation regarding how few prominent people are willing to admit that they are wrong:

I was thinking today about how so few public figures are willing to admit (for attribution, anyway) that they’ve done something wrong, no matter how minor. But I wasn’t thinking of politicians, or even of Dan Rather. A half-remembered quote had flashed unexpectedly through my mind, and thirty seconds’ worth of Web surfing produced this paragraph from an editorial in a magazine called World War II:

Soon after he had completed his epic 140-mile march with his staff from Wuntho, Burma, to safety in India, an unhappy Lieutenant General Joseph W. Stilwell was asked by a reporter to explain the performance of Allied armies in Burma and give his impressions of the recently concluded campaign. Never one to mince words, the peppery general responded: “I claim we took a hell of a beating. We got run out of Burma and it is as humiliating as hell. I think we ought to find out what caused it, and go back and retake it.”

Stilwell spoke those words sixty-two years ago. When was the last time that such candor was heard in like circumstances? What would happen today if similar words were spoken by some equally well-known person who’d stepped in it up to his eyebrows?

As he points out later in his post, I don’t think we’re going to be seeing such admissions any time soon. Again, CBS provides a good example. Rather than admit the possibility that they may be wrong, their response to the criticisms of their sources has been vague, dismissive, and entirely reliant on their reputation as a trustworthy staple of journalism. They have not yet comprehensively responded to any of the numerous questions about the documents; questions which range from “conflicting military terminology to different word-processing techniques”. It appears their strategy is to escape the kill zone by focusing on the “truth” of their story, that Bush’s service in the Air National Guard was less than satisfactory. They won’t admit that the documents are forgeries, and by focusing on the arguably important story, they seek to distract from any discussion of their own wrongdoing – in effect claiming that the documents aren’t important because the story is “true” anyway.

Should they admit they were wrong? Of course they should, but they probably won’t. If they won’t, it will not be because they think the story is right, nor because they think the documents are genuine. They won’t admit wrongdoing, and they won’t correct their methodologies or policies, because to do so would be to acknowledge to the public that they are something less than an objective purveyor of truth.

Yet I would argue that they should do so, that it is their duty to do so just as it is the documentarian’s responsibility to acknowledge their limitations and agenda to their audience.

It is also interesting to note that weblogs contrast with the media by doing just that. Glenn Reynolds notes that the internet is a low-trust medium, which paradoxically indicates that it is more trustworthy than the media (because blogs and the like acknowledge their bias and agenda, admit when they’re wrong, and correct their mistakes):

The Internet, on the other hand, is a low-trust environment. Ironically, that probably makes it more trustworthy.

That’s because, while arguments from authority are hard on the Internet, substantiating arguments is easy, thanks to the miracle of hyperlinks. And, where things aren’t linkable, you can post actual images. You can spell out your thinking, and you can back it up with lots of facts, which people then (thanks to Google, et al.) find it easy to check. And the links mean that you can do that without cluttering up your narrative too much, usually, something that’s impossible on TV and nearly so in a newspaper.

(This is actually a lot like the world lawyers live in — nobody trusts us enough to take our word for, well, much of anything, so we back things up with lots of footnotes, citations, and exhibits. Legal citation systems are even like a primitive form of hypertext, really, one that’s been around for six or eight hundred years. But I digress — except that this perhaps explains why so many lawyers take naturally to blogging).

You can also refine your arguments, updating — and even abandoning them — in realtime as new facts or arguments appear. It’s part of the deal.

This also means admitting when you’re wrong. And that’s another difference. When you’re a blogger, you present ideas and arguments, and see how they do. You have a reputation, and it matters, but the reputation is for playing it straight with the facts you present, not necessarily the conclusions you reach.

The mainstream media as we know it is on the decline. They will no longer be able to get by on their brand or their reputations alone. The collective intelligence of the internet, combined with the natural reflexiveness of its environment, has already provided a challenge to the underpinnings of journalism. On the internet, the dominance of the media is constantly challenged by individuals who question the “truth” presented to them in the media. I do not think that blogs have the power to eclipse the media, but their influence is unmistakable. The only question that remains is whether the media will rise to the challenge. If the way CBS has reacted is any indication, then, sadly, we still have a long way to go.

* Yes, I do realize the irony of posting this just after I posted about liberal and conservative tendencies in online debating, and I hinted at that with my “Update” in that post.


Thanks to Jay Manifold for the excellent Structural Bias of Journalism link.

Benjamin Franklin: American, Blogger & LIAR!

I’ve been reading a biography of Benjamin Franklin (Benjamin Franklin: An American Life by Walter Isaacson), and several things have struck me about the way in which he conducted himself. As with a lot of historical figures, there is a certain aura that surrounds the man which is seen as impenetrable today, but it’s interesting to read about how he was perceived in his time and contrast that with how he would be perceived today. As usual, there is a certain limit to the usefulness of such speculation, as it necessarily must be based on certain assumptions that may or may not be true (as such this post might end up saying more about me and my assumptions than Franklin!). In any case, I find such exercises interesting, so I’d like to make a few observations.

The first is that he would have probably made a spectacular blogger, if he chose to engage in such an activity (Ken thinks he would definitely be a blogger, but I’m not so sure). Not only did he have all the makings of a wonderful blogger, I think he’d have been extremely creative with the format. He was something of a populist; his writing was humorous, self-deprecating, and often quite profound at the same time. His range of knowledge and interest was wide, and his tone was often quite congenial. All qualities valued in any blogger.

He was incredibly prolific (another necessity for a successful blog), and often wrote the letters to his paper himself under assumed names, and structured them in such a way as to gently deride his competitors while making some other interesting point. For instance, Franklin once published two letters, written under two different pseudonyms, in which he manufactured the first recorded abortion debate in America – not because of any strong feelings on the issue, but because he knew it would sell newspapers and because his competitor was serializing entries from an encyclopedia at the time and had started with “Abortion.” Thus the two letters were not only interesting in themselves, but also provided ample opportunity to impugn his competitor.

One thing I think we’d see in a Franklin blog is entire comment threads consisting of a full back-and-forth debate, with all entries written by Franklin himself under assumed names. I can imagine him working around other “real” commenters with his own pseudonyms, and otherwise having fun with the format (he’d almost certainly make a spectacular troll as well).

If there was ever a man who could make a living out of blogging, I think Franklin was it. This is, in part, why I’m not sure he’d truly end up as a pure blogger, as even in his day, Franklin was known to mix private interests with public ones, and to leverage both to further his business interests. He could certainly have organized something akin to The Junto on the internet, where a group of likeminded fellows got together (whether it be physically or virtually over the internet) and discussed issues of the day and also endeavored to form a vehicle for the furtherance of their own careers.

Then again, perhaps Franklin would simply have started his own newspaper and had nothing to do with blogging (or perhaps he would attempt to mix the two in some new way). The only problem would be that the types of satire and hoaxes he could get away with in his newspapers in the early 18th century would not really be possible in today’s atmosphere (such playfulness has long ago left the medium, but is alive and well in the blogosphere, which is one thing that would tend to favor his participation).

Which brings me to my next point: I have to wonder how Franklin would have done in today’s political climate. Would he have been able to achieve political prominence? Would he want to? Would the anonymous letters and hoaxes in his newspapers have gotten him into trouble? I can imagine the self-righteous indignation now: “His newspaper is a farce! He’s a LIAR!” And the Junto? I don’t even want to think of the conspiracy theories that could be conjured with that sort of thing in mind.

One thing Franklin was exceptionally good at was managing his personal image, but would he be able to do so in today’s atmosphere? I suspect he would have done well in our time, but I don’t know how politically active he would be (and I suppose there is something to be said about his participation being partly influenced by the fact that he was a part of a revolution, not a true politician of the kind we have today). I know the basic story of his life, but I haven’t gotten that far in the book, so perhaps I should revisit this subject later. And thus ends my probably inaccurate, but interesting nonetheless, discussion of Franklin in our times. Expect more references to Franklin in the future, as I have been struck by quite a few things about his life that are worth discussing today.

A Village of Expectation

It’s funny how much your expectations influence how much you like or dislike a movie. I’m often disappointed by long awaited films, Star Wars: Episode I being the typical example. Decades of waiting and an unprecedented pre-release hype served only to elevate expectations for the film to unreachable heights. So when the time came, meesa not so impressed. I enjoyed the film and I don’t think it was that bad, but my expectations far outweighed the experience.

Conversely, when I go to watch a movie I think will stink, I’m often pleasantly surprised. Sometimes these movies are bad, but I thought they would be so much worse than they were that I ended up enjoying them. A recent example of this was I, Robot. As an avid Isaac Asimov fan, I was appalled by the previews for the film, which featured legions of apparently rebelling CGI robots, and naturally thought it would be stupefyingly bad, as such events were antithetical to Asimov’s nuanced robot stories. Of course, I went to see it, and about halfway through, I was surprised to find that I was enjoying myself. It contains a few mentions of the Three Laws, positronics, and the name Susan Calvin for one of the main characters, but other than those minor details, the story doesn’t even begin to resemble anything out of Asimov, so I was able to disassociate the two and enjoy the film on its own merits. And it was enjoyable.

Of course, I became aware of this phenomenon a long time ago, and have always tried to learn as little as possible about movies before they come out. I used to read up on all the movie news and look forward to tons of movies, but I found that going in with a clean slate is the best way to see a film. So I tend to shy away from reading reviews, though I will glance at the star rating of a few critics I know and respect. (Obviously it is not a perfectly clean slate, but you get the point.)

Earlier this week, I realized that M. Night Shyamalan’s The Village was being released, and made plans to see it. Shyamalan, the writer, director, and producer of such films as The Sixth Sense, Unbreakable, and Signs, has become known for the surprise ending, where some fact is revealed which totally changes the perspective of everything that came before it. This is unfortunate, because the twists and turns of a story are less effective if we’re expecting them. What’s more, if we know it’s coming, we wrack our brains trying to figure out what the surprise will be, hypothesizing several different versions of the story in our heads, one of which is bound to be accurate. I’ve never been that impressed with Shyamalan, but he has always produced solid films that were entertaining enough. There are often little absurdities or plot holes, but never enough to completely drain my goodwill dry (though Signs came awfully close). I think he’ll mature into a better filmmaker as time goes on.

The Village has its share of twists and turns, but of course, we expect them and so they really don’t come as any surprise (and, to be honest, Shyamalan laid on the hints pretty thickly). Fortunately, knowing what is coming doesn’t completely destroy the film, as it would in some of his other films. I’ve tried to avoid spoilers by speaking in generalities, but if you haven’t seen the film, you might want to skip down to the next paragraph (I don’t think I ruined anything, but better safe than sorry). Shyamalan has always relied more on brooding atmosphere and building tension than on gratuitous action and gore, and The Village is no exception. Once again, he does resort to the use of “Boo!” moments, something that has always rubbed me the wrong way in his films, but I’m beginning to come around. He has become quite adept at employing that device, even if it is a cheap thrill. He must realize it, because at one point I think he deliberately eschews the “Boo!” moment in favor of a more meticulous and subtle approach. There are several instances of masterful staging in the film, which is part of why knowing the twists ahead of time doesn’t ruin the film.

Now I was looking forward to this film, but as I mentioned before, I’ve never been blown away by Shyamalan (with the possible exception of Unbreakable, which I still think is the best of his films) so I didn’t have tremendously high expectations. I expected a well done, but not brilliant, film. On Friday, I checked out Ebert’s rating and glanced at Rotten Tomatoes, both of which served to further deflate my expectations. By the time I saw the film, I was expecting a real dud and was pleasantly surprised to find another solid effort from Shyamalan. It’s not for everybody, and those who are expecting another bombshell ending will be disappointed, but that doesn’t matter much in my opinion. The movie is what it is, and I judge it on its own merits, not on inflated expectations of twist endings and shocking revelations.

Would I have enjoyed it as much if I had been expecting something more out of it? Probably not, and there’s the rub. Does it matter? That is a difficult question to answer. No matter how you slice it, what you expect of a film forces a point of reference. When you see the film, you judge it based on that. So now the question becomes: is it right to intentionally force the point of reference low, so as to make sure you enjoy the movie? That too is a difficult question to answer. For my money, it is to some extent advisable to keep a check on high expectations, but I suppose you could get carried away with it. In any case, I enjoyed The Village and I look forward to Shyamalan’s next film, albeit with a wary sense of trepidation.

With great freedom, comes great responsibility…

David Foster recently wrote about a letter to the New York Times which echoed sentiments regarding Iraq that appear to be commonplace in certain circles:

While we have removed a murderous dictator, we have left the Iraqi people with a whole new set of problems they never had to face before…

I’ve often written about the tradeoffs inherent in solving problems, and the invasion of Iraq is no exception. Let us pretend for a moment that everything that happened in Iraq over the last year went exactly as planned. Even in that best case scenario, the Iraqis would be facing “a whole new set of problems they never had to face before.” There was no action that could have been taken regarding Iraq (and this includes inaction) that would have resulted in an ideal situation. We weren’t really seeking to solve the problems of Iraq, so much as we were exchanging one set of problems for another.

Yes, the Iraqis are facing new problems they have never had to face before, but the point is that the new problems are more favorable than the old problems. The biggest problem they are facing is, in short, freedom. Freedom is an odd thing, and right now, halfway across the world, the Iraqis are finding that out for themselves. Freedom brings great benefits, but also great responsibility. Freedom allows you to express yourself without fear of retribution, but it also allows those you hate to express things that make your blood boil. Freedom means you have to acknowledge their views, no matter how repulsive or disgusting you may find them (there are limits, of course, but that is another subject). That isn’t easy.

A little while ago, Steven Den Beste wrote about Jewish immigrants from the Soviet Union:

About 1980 (I don’t remember exactly) there was a period in which the USSR permitted huge numbers of Jews to leave and move to Israel. A lot of them got off the jet in Tel Aviv and instantly boarded another one bound for New York, and ended up here.

For most of them, our society was quite a shock. They were free; they were out of the cage. But with freedom came responsibility. The State didn’t tell them what to do, but the State also didn’t look out for them.

The State didn’t prevent them from doing what they wanted, but the State also didn’t prevent them from screwing up royally. One of the freedoms they discovered they had was the freedom to starve.

There are a lot of people who ended up in the U.S. because they were fleeing oppression, and when they got here, they were confronted with “a whole new set of problems they never had to face before.” Most of them were able to adapt to the challenges of freedom and prosper, but don’t confuse prosperity with utopia. These people did not solve their problems, they traded them for a set of new problems. For most of them, the problems associated with freedom were more favorable than the problems they were trying to escape from. For some, the adjustment just wasn’t possible, and they returned to their homes.

Defecting North Koreans face a host of challenges upon their arrival in South Korea (if they can make it that far), including the standard freedom related problems: “In North Korea, the state allocates everything from food to jobs. Here, having to do their own shopping, banking or even eating at a food court can be a trying experience.” The differences between North Korea and South Korea are so vast that many defectors cannot adapt, despite generous financial aid, job training and other assistance from civic and religious groups. Only about half of the defectors are able to wrangle jobs, but even then, it’s hard to say that they’ve prospered. But at the same time, are their difficulties now worse than their previous difficulties? Moon Hee, a defector who is having difficulties adjusting, comments: “The present, while difficult, is still better than the past when I did not even know if there would be food for my next meal.”

There is something almost paradoxical about freedom. You see, it isn’t free. Yes, freedom brings benefits, but you must pay the price. If you want to live in a free country, you have to put up with everyone else being free too, and that’s harder than it sounds. In a sense, we aren’t really free, because the freedom we live with and aspire to is a limiting force.

On the subject of Heaven, Saint Augustine once wrote:

The souls in bliss will still possess the freedom of will, though sin will have no power to tempt them. They will be more free than ever–so free, in fact, from all delight in sinning as to find, in not sinning, an unfailing source of joy. …in eternity, freedom is that more potent freedom which makes all sin impossible. – Saint Augustine, City of God (Book XXII, Chapter 30)

Augustine’s concept of a totally free will is seemingly contradictory. For him, freedom, True Freedom, is doing the right thing all the time (I’m vastly simplifying here, but you get the point). Outside of Heaven, however, doing the right thing, as we all know, isn’t easy. Just ask Spider-Man.

I never really read the comics, but in the movies (which appear to be true to their source material) Spider-Man is all about the conflict between responsibilities and desires. Matthew Yglesias is actually upset with the second film because it has a happy ending:

Being the good guy — doing the right thing — really sucks, because doing the right thing doesn’t just mean avoiding wrongdoing, it means taking affirmative action to prevent it. There’s no time left for Peter’s life, and his life is miserable. Virtue is not its own reward, it’s virtue, the rewards go to the less consciencious. There’s no implication that it’s all worthwhile because God will make it right in the End Times, the life of the good guy is a bleak one. It’s an interesting (and, I think, a correct) view and it’s certainly one that deserves a skilled dramatization, which is what the film gives you right up until the very end. But then — ta da! — it turns out that everyone does get to be happy after all. A huge letdown.

Of course, plenty of people have noted that the Spider-Man story doesn’t end with the second movie, and that the third is bound to be filled with the complications of superhero dating (which are not limited to Spider-Man).

Spider-Man grapples with who he is. He has gained all sorts of powers, and with those powers, he has also gained a certain freedom. It could be very liberating, but as the saying goes: With great power comes great responsibility. He is not obligated to use his powers for good or at all, but he does. However, for a good portion of the second film he shirks his duties because a life of pure duty has totally ruined his personal life. This is that conflict between responsibilities and desires I mentioned earlier. It turns out that there are limits to Spider-Man’s altruism.

For Spider-Man, it is all about tradeoffs, though he may have learned it the hard way. First he took on too much responsibility, and then too little. Will he ever strike a delicate balance? Will we? For we are all, in a manner of speaking, Spider-Man. We all grapple with similar conflicts, though they manifest in our lives with somewhat less drama. Balancing your personal life with your professional life isn’t as exciting, but it can be quite challenging for some.

And so the people of Iraq are facing new challenges; problems they have never had to face before. Like Spider-Man, they’re going to have to deal with their newfound responsibilities and find a way to balance them with their desires. Freedom isn’t easy, and if they really want it, they’ll need to do more than just avoid problems; they’ll have to actively solve them. Or, rather, trade one set of problems for another. Because with great freedom, comes great responsibility.

Kill Faster!

Ralph Peters writes about his experience keeping track of combat in Iraq during the tumultuous month of April:

During the initial fighting in Fallujah, I tuned in al-Jazeera and the BBC. At the same time, I was getting insider reports from the battlefield, from a U.S. military source on the scene and through Kurdish intelligence. I saw two different battles.

Peters’ disenchantment with the media is hardly unique. Reports of the inadequacy of the media are legion. Eric M. Johnson is a U.S. Marine who served in Iraq and recently wrote about media bias:

Iraq veterans often say they are confused by American news coverage, because their experience differs so greatly from what journalists report. Soldiers and Marines point to the slow, steady progress in almost all areas of Iraqi life and wonder why they don’t get much notice – or in many cases, any notice at all.

Part of the explanation is Rajiv Chandrasekaran, the Baghdad bureau chief for the Washington Post. He spent most of his career on the metro and technology beats, and has only four years of foreign reporting, two of which are in Iraq. The 31-year-old now runs a news operation that can literally change the world, heading a bureau that is the source for much of the news out of Iraq.

… Chandrasekaran’s crew generates a relentlessly negative stream of articles from Iraq – and if there are no events to report, they resort to man-on-the-street interviews and cobble together a story from that.

It goes on from there, pointing out several examples and further evidence of the substandard performance of the media in Iraq. Then you have this infamous report from the Daily Telegraph’s correspondent Toby Harnden.

The other day, while taking a break by the Al-Hamra Hotel pool, fringed with the usual cast of tattooed defense contractors, I was accosted by an American magazine journalist of serious accomplishment and impeccable liberal credentials.

She had been disturbed by my argument that Iraqis were better off than they had been under Saddam and I was now – there was no choice about this – going to have to justify my bizarre and dangerous views. I’ll spare you most of the details because you know the script – no WMD, no ‘imminent threat'(though the point was to deal with Saddam before such a threat could emerge), a diversion from the hunt for bin Laden, enraging the Arab world. Etcetera.

But then she came to the point. Not only had she ‘known’ the Iraq war would fail but she considered it essential that it did so because this would ensure that the ‘evil’ George W. Bush would no longer be running her country. Her editors back on the East Coast were giggling, she said, over what a disaster Iraq had turned out to be. ‘Lots of us talk about how awful it would be if this worked out.’ Startled by her candour, I asked whether thousands more dead Iraqis would be a good thing.

She nodded and mumbled something about Bush needing to go. By this logic, I ventured, another September 11 on, say, September 11 would be perfect for pushing up John Kerry’s poll numbers. ‘Well, that’s different – that would be Americans,’ she said, haltingly. ‘I guess I’m a bit of an isolationist.’ That’s one way of putting it.

Yikes. I wish I knew a little more about this unnamed “magazine journalist of serious accomplishment and impeccable liberal credentials”, but it is a chilling admission nonetheless.

Again, the inadequacy of the media has become painfully obvious over the past few years. How to deal with this? At a discussion forum the other day, someone posted this article concerning FOX News bias along with this breathless message:

This shouldn’t come as any surprise. How can a NEWS organization possibly be allowed to lie like this? FOX should be removed from the air and those who are in charge should be removed from the media business and not be allowed to do anything whatsoever where news and media are concerned.

they’re clearly out to deceive the American public.

Well, I suppose that is one way of dealing with media bias. But Ralph Peters’ response is drastically different. He assumes the media can’t or shouldn’t be changed. I tend to take his side, as arbitrarily removing a news organization from the air and blacklisting those in charge seems like a cure that is much worse than the disease to me, but that leads to some unpleasant consequences. Back to the Peters article:

The media is often referred to off-handedly as a strategic factor. But we still don’t fully appreciate its fatal power. Conditioned by the relative objectivity and ultimate respect for facts of the U.S. media, we fail to understand that, even in Europe, the media has become little more than a tool of propaganda.

That propaganda is increasingly, viciously, mindlessly anti-American. When our forces engage in tactical combat, dishonest media reporting immediately creates drag on the chain of command all the way up to the president.

Real atrocities aren’t required. Everything American soldiers do is portrayed as an atrocity. World opinion is outraged, no matter how judiciously we fight.

The implication for tactical combat — war at the bayonet level — is clear: We must direct our doctrine, training, equipment, organization and plans toward winning low-level fights much faster. Before the global media can do what enemy forces cannot do and stop us short. We can still win the big campaigns. But we’re apt to lose thereafter, in the dirty end-game fights.

… Our military must rise to its responsibility to reduce the pressure on the National Command Authority — in essence, the president — by rapidly and effectively executing orders to root out enemy resistance or nests of terrorists.

To do so, we must develop the capabilities to fight within the “media cycle,” before journalists sympathetic to terrorists and murderers can twist the facts and portray us as the villains. Before the combat encounter is politicized globally. Before allied leaders panic. And before such reporting exacerbates bureaucratic rivalries within our own system.

[emphasis mine] This is bound to be a difficult process, and will take years to perfect. If we proceed on this path, we’ll have to suffer many short term problems, including a much higher casualty rate, perhaps for both sides (and even civilians). If we don’t proceed along this path – if we don’t learn to kill quickly – then we’ll lose slowly.

For its part, the military has shown some initiative in dealing with the media. Wretchard writes about a Washington Post article describing the victory that the First Armored Division won over Moqtada Al-Sadr’s militia:

In what was probably the most psychologically revealing moment of the battle, infantrymen fought six hours for the possession of one damaged Humvee, of no tactical value, simply so that the network news would not have the satisfaction of displaying the piece of junk in the hands of Sadr’s men.

… Ted Koppel was determined to read the names of 700 American servicemen who have died in Iraq to remind us how serious was their loss. Michael Moore has dedicated his film Fahrenheit 9/11 to the Americans who died in Afghanistan. And they did a land office business. But at least they didn’t get to show Sadr’s militiamen dancing around a battered Humvee. The men of the First Armored paid the price to stop that screening and those concerned can keep the change.

I don’t know that Peters’ pessimism is totally warranted, but there is an element of pragmatism involved that should be considered. It is certainly frustrating though.

***

It is noteworthy that media bias goes both ways. I tended to be conservative leaning in this post, but liberals have a lot to gripe about too. I’ve written about this before. Peters wrote that killing faster would help the situation, but that is from a military perspective. From our perspective, the only thing we can do is take the media with a grain of salt and do our best to point out their failures and herald their successes. It’s not easy, but that is the price we must pay for freedom of speech. Hopefully more on this in a later post. [thanks to Donald Sensing for the Toby Harnden pointer]

Religion isn’t as comforting as it seems

Steven Den Beste is an atheist, yet he is unlike any atheist I have ever met in that he seems to understand theists (in the general sense of the term) and doesn’t hold their beliefs against them. As such, I have gained an immense amount of respect for him and his beliefs. He speaks with conviction about his beliefs, but he is not evangelistic.

In his latest post, he asks one of the great unanswerable questions: What am I? I won’t pretend to have any of the answers, but I do object to one thing he said. It is a belief that is common among atheists (though theists are little better):

Is a virus alive? I don’t know. Is a hive mind intelligent? I don’t know. Is there actually an identifiable self with continuity of existence which is typing these words? I really don’t know. How much would that self have to change before we decide that the continuity has been disrupted? I think I don’t want to find out.

Most of those kinds of questions either become moot or are easily answered within the context of standard religions. Those questions are uniquely troubling only for those of us who believe that life and intelligence are emergent properties of certain blobs of mass which are built in certain ways and which operate in certain kinds of environments. We might be forced to accept that identity is just as mythical as the soul. We might be deluding ourselves into thinking that identity is real because we want it to be true.

[Emphasis added] The idea that these kinds of unanswerable questions are not troubling to a believer, or are easily answered, is a common one, but I believe it to be false. Religion is no more comforting than any other system of beliefs, including atheism. Religion does provide a vocabulary for the unanswerable, but all that does is help us grapple with the questions – it doesn’t solve anything, and I don’t think it is any more comforting. I believe in God, but if you asked me what God really is, I wouldn’t be able to give you a definitive answer. Actually, I might be able to do that, but “God is a mystery” is hardly comforting or all that useful.

Elsewhere in the essay, he refers to the Christian belief in the soul:

To a Christian, life and self are ultimately embodied in a person’s soul. Death is when the soul separates from the body, and that which makes up the essence of a person is embodied in the soul (as it were).

He goes on to list some conundrums that would be troubling to the believer, but they all touch on the most troubling question of all: what the heck is the soul in the first place? Trying to answer that is no more comforting to a theist than trying to answer the questions he’s asking himself. The only real difference is a matter of vocabulary. All religion has done is shift the focus of the question.

Den Beste goes on to say that there are many ways in which atheism is cold and unreassuring, but he fails to recognize the ways in which religion is cold and unreassuring. For instance, there is no satisfactory theodicy that I have ever seen, and I’ve spent a lot of time studying such things (16 years of Catholic schooling, baby!). A theodicy is essentially an attempt to reconcile God’s existence with the existence of evil. Why does God allow evil to exist? Again, there is no satisfactory answer to that question, not least because there is no satisfactory definition of either God or evil!

Now, theists often view atheists in a similar manner. While Den Beste laments the cold and unreassuring aspects of atheism, a believer often sees the reverse. To some believers, if you remove God from the picture, you also remove all concept of morality and responsibility. Yet that is not the case, and Den Beste himself provides an excellent example of a morally responsible atheist. The grass is always greener on the other side, as they say.

All of this is speaking generally, of course. Not all religions are the same, and some are more restrictive and closed-minded than others. I suppose it is a matter of degree, with one religion or individual being more open-minded than another, but I don’t really know of any objective way to measure that sort of thing. I know that there are some believers who aren’t troubled by such questions and proclaim their beliefs in blind faith, but I don’t count myself among them, nor do I think that attitude is inherent in religion (perhaps it is inherent in some religions, but even then, religion does not exist in a vacuum and must be reconciled with the rest of the world).

Part of my trouble with this may be that I seem to have the ability to switch mental models rather easily, viewing a problem from a number of different perspectives and attempting to figure out the best way to approach it. I seem to be able to reconcile my various perspectives with each other as well (for example, I have no problem reconciling science and religion), though the boundaries are blurry and I can sometimes come up with contradictory conclusions. This is in itself somewhat troubling, but at the same time, it is also somewhat of an advantage that I can approach a problem in a number of different ways. The trick is knowing which approach to use for which problem, which is hardly an easy proposition. Furthermore, I gather that I am somewhat odd in this ability, at least among believers. I used to debate religion a lot on the internet, and after a time, many refused to think of me as a Catholic because I didn’t seem to align with their perception of what Catholics are. I always found that rather amusing, though I guess I can understand the sentiment.

Unlike Den Beste, I do harbor some doubt in my beliefs, mainly because I recognize them as beliefs. They are not facts, and I must concede the possibility that they are incorrect. Like all sets of beliefs, mine have an aspect that is very troubling and uncomforting; there is a price we all pay for believing what we believe. And yet, believe we must. If we required our beliefs to be facts in order to act, we would do nothing. The value we receive from our beliefs outweighs the price we pay, or so we hope…

I suppose Steven could see this as missing the forest for the trees, but I posted it because the issue of beliefs discussed above fits nicely with several recent posts I made under the guise of Superstition and Security Beliefs (and Heuristics). They might provide a little more detail on the way I think about these subjects.