
Best Entries
Every once in a while I'll write a series of posts that I think are very high quality and that I'm really proud of. But then some time passes, and I write some more, and the good stuff eventually gets pushed off the main page to languish in the obscurity of the archives. Taking my cue from some other bloggers, I've decided to collect some of my better posts here on this page in the hopes that they'll get some more exposure.
Sunday, March 14, 2010

Remix Culture and Soviet Montage Theory
A video mashup of The Beastie Boys' popular and amusing Sabotage video with scenes from Battlestar Galactica has been making the rounds recently. It's well done, but a little on the disposable side of remix culture. The video led Sonny Bunch to question "remix culture":
It’s quite good. But, ultimately, what’s the point?

Leaving aside the questions of copyright and the rest: Seriously…what’s the point? Does this add anything to the culture? I won’t dispute that there’s some technical prowess in creating this mashup. But so what? What does it add to our understanding of the world, or our grasp of the problems that surround us? Anything? Nothing? Is it just “there” for us to have a chuckle with and move on? Is this the future of our entertainment?
These are good questions, and I'm not surprised that the BSG Sabotage video prompted them. The implication of Sonny's post is that he thinks it is an unoriginal waste of talent (he may be playing a bit of devil's advocate here, but I'm willing to play along because these are interesting questions and because it will give me a chance to pedantically lecture about film history later in this post!). In the comments, Julian Sanchez makes a good point (based on a video he produced earlier that was referenced by someone else in the comment thread), one that I'll expand on below:
First, the argument I’m making in that video is precisely that exclusive focus on the originality of the contribution misses the value in the activity itself. The vast majority of individual and collective cultural creation practiced by ordinary people is minimally “original” and unlikely to yield any final product of wide appeal or enduring value. I’m thinking of, e.g., people singing karaoke, playing in a garage band, drawing, building models, making silly YouTube videos, improvising freestyle poetry, whatever. What I’m positing is that there’s an intrinsic value to having a culture where people don’t simply get together to consume professionally produced songs and movies, but also routinely participate in cultural creation. And the value of that kind of cultural practice doesn’t depend on the stuff they create being particularly awe-inspiring.
To which Sonny responds:
I’m actually entirely with you on the skill that it takes to produce a video like the Brooklyn hipsters did — I have no talent for lighting, camera movements, etc. I know how hard it is to edit together something like that, let alone shoot it in an aesthetically pleasing manner. That’s one of the reasons I find the final product so depressing, however: An impressive amount of skill and talent has gone into creating something that is not just unoriginal but, in a way, anti-original. These are people who are so devoid of originality that they define themselves not only by copying a video that they’ve seen before but by copying the very personalities of characters that they’ve seen before.
Another good point, but I think Sonny is missing something here. The talents of the BSG Sabotage editor or the Brooklyn hipsters are certainly admirable, but while we can speculate, we don't necessarily know their motivations. About 10 years ago, a friend and amateur filmmaker showed me a video one of his friends had produced. It took scenes from Star Wars and Star Trek II: The Wrath of Khan and recut them so it looked like the Millennium Falcon was fighting the Enterprise. It would show Han Solo shooting, then cut to the Enterprise being hit. Shatner would exclaim "Fire!" and then it would cut to a blast hitting the Millennium Falcon. And so on. Another video from the same guy took the musical number George Lucas had added to Return of the Jedi in the Special Edition, laid Wu-Tang Clan in as the soundtrack, then re-edited the video elements so everything matched up.

These videos sound fun, but not particularly original or even special in this day and age. However, these videos were made ten to fifteen years ago. I was watching them on a VHS(!) and the person making the edits was using analog techniques and equipment. It turns out that these videos were how he honed his craft before he officially got a job as an editor in Hollywood. I'm sure there were tons of other videos, probably much less impressive, that he had created before the ones I'm referencing. Now, I'm not saying that the BSG Sabotage editor or the Brooklyn Hipsters are angling for professional filmmaking jobs, but it's quite possible that they are at least exploring their own possibilities. I would also bet that these people have been making videos like this (though probably much less sophisticated) since they were kids. The only big difference now is that technology has enabled them to make a slicker experience and, more importantly, to distribute it widely.

It's also worth noting that this sort of thing is not without historical precedent. Indeed, the history of editing and montage is filled with it. In the 1910s and 1920s, Russian filmmaker Lev Kuleshov conducted a series of famous experiments that helped demonstrate the role of editing in film. In these experiments, he would show a man with an expressionless face, then cut to various other shots. In one example, he showed the expressionless face, then cut to a bowl of soup. When prompted, audiences would report that the man was hungry. Kuleshov then took the exact same footage of the expressionless face and cut to a pretty girl. This time, audiences reported that the man was in love. Another experiment alternated between the expressionless face and a coffin, a juxtaposition that led audiences to believe that the man was stricken with grief. This phenomenon has become known as the Kuleshov Effect.

For the current discussion, one notable aspect of these experiments is that Kuleshov was working entirely from pre-existing material. And this sort of thing was not uncommon, either. At the time, there was a shortage of raw film stock in Russia. Filmmakers had to make do with what they had, and often spent their time re-cutting existing material, which led to what's now called Soviet Montage Theory. When D.W. Griffith's Intolerance, which used advanced editing techniques (it featured a series of cross-cut narratives which eventually converged in the last reel), opened in Russia in 1919, it quickly became very popular. The Russian film community saw this as a validation and popularization of their theories and also as an opportunity. Russian critics and filmmakers were impressed by the film's technical qualities, but dismissed the story as "bourgeois", claiming that it failed to resolve issues of class conflict, and so on. So, not having much raw film stock of their own, they took to playing with Griffith's film, re-editing certain sections of the film to make it more "agitational" and revolutionary.

The extent to which this happened is a bit unclear, and certainly public exhibitions were not as dramatically altered as I'm making it out to be. However, there are Soviet versions of the movie that contained small edits and a newly filmed prologue. This was done to "sharpen the class conflict" and "anti-exploitation" aspects of the film, while still attempting to respect the author's original intentions. This was part of a larger trend of adding Soviet propaganda to pre-existing works of art, and given the ideals of socialism, it makes sense. (The preceding is a simplification of history, of course... see this chapter from Inside the Film Factory for a more detailed discussion of Intolerance and its impact on Russian cinema.) In the Russian film world, things really began to take off with Sergei Eisenstein and films like Battleship Potemkin. Watch that film today, and you'll be struck by how modern-feeling the editing is, especially during the infamous Odessa Steps sequence (which you'll also recognize if you've ever seen Brian De Palma's "homage" in The Untouchables).

Now, I'm not really suggesting that the woman who produced BSG Sabotage is going to be the next Eisenstein, merely that the act of cutting together pre-existing footage is not necessarily a sad waste of talent. I've drastically simplified the history of Soviet Montage Theory above, but there are parallels between Soviet filmmakers then and YouTube videomakers today. Due to limited resources and knowledge, they began experimenting with pre-existing footage. They learned from the experience and went on to grander modifications of larger works of art (Griffith's Intolerance). This eventually culminated in original works of art, like those produced by Eisenstein.

Now, YouTube videomakers haven't quite made that expressive leap yet, but it's only been a few years. It's going to take time, and obviously editing and montage are already well established features of film, so innovation won't necessarily come from that direction. But that doesn't mean that nothing of value can emerge from this sort of thing, nor does messing around with videos on YouTube limit these young artists to film. While Roger Ebert's criticisms are valid, more and more I'm seeing interactivity as the unexplored territory of art. Video games like Heavy Rain are an interesting experience and hint at something along these lines, but they are still severely limited in many ways (in other words, Ebert is probably right when it comes to that game). It will take a lot of experimentation to get to a point where maybe Ebert would be wrong (if it's even possible at all). Learning about the visual medium of film by editing together videos of pre-existing material would be an essential step in the process. Improving the technology with which to do so is also an important step. And so on.

To return back to the BSG Sabotage video for a moment, I think that it's worth noting the origins of that video. The video is clearly having fun by juxtaposing different genres and mediums (it is by no means the best or even a great example of this sort of thing, but it's still there. For a better example of something built entirely from pre-existing works, see Shining.). Battlestar Galactica was a popular science fiction series, beloved by many, and this video comments on the series slightly by setting the whole thing to an unconventional music choice (though given the recent Star Trek reboot's use of the same song, I have to wonder what the deal is with SF and Sabotage). Ironically, even the "original" Beastie Boys video was nothing more than a pastiche of 70s cop television shows. While I'm no expert, the music on Ill Communication, in general, has a very 70s feel to it. I suppose you could say that association only exists because of the Sabotage video itself, but even other songs on that album have that feel - for one example, take Sabrosa. Indeed, the Beastie Boys are themselves known for this sort of appropriation of pre-existing work. Their album Paul's Boutique infamously contains literally hundreds of samples and remixes of popular music. I'm not sure how they got away with some of that stuff, but I suppose this happened before getting sued for sampling was common. Nowadays, in order to get away with something like Paul's Boutique, you'll need to have deep pockets, which sorta defeats the purpose of using a sample in the first place. After all, samples are used in the absence of resources, not just because of a lack of originality (though I guess that's part of it). In 2004 Nate Harrison put together this exceptional video explaining how a 6 second drum beat (known as the Amen Break) exploded into its own sub-culture:


There is certainly some repetition here, and maybe some lack of originality, but I don't find this sort of thing "sad". To be honest, I've never been a big fan of hip hop music, but I can't deny the impact it's had on our culture and all of our music. As I write this post, I'm listening to Danger Mouse's The Grey Album:
It uses an a cappella version of rapper Jay-Z's The Black Album and couples it with instrumentals created from a multitude of unauthorized samples from The Beatles' LP The Beatles (more commonly known as The White Album). The Grey Album gained notoriety due to the response by EMI in attempting to halt its distribution.
I'm not familiar with Jay-Z's album and I'm probably less familiar with The White Album than I should be, but I have to admit that this combination, and the artistry with which the two seemingly incompatible works are merged into one cohesive whole, is impressive. Despite the lack of an official release (that would have made Danger Mouse money), The Grey Album made many best of the year (and best of the decade) lists. I see some parallels between the 1980s and 1990s use of samples, remixes, and mashups, and what was happening in Russian film in the 1910s and 1920s. There is a pattern worth noticing here: New technology enables artists to play with existing art, then apply what they've learned to something more original later. Again, I don't think that the BSG Sabotage video is particularly groundbreaking, but that doesn't mean that the entire remix culture is worthless. I'm willing to bet that remix culture will eventually contribute towards something much more original than BSG Sabotage...

Incidentally, the director of the original Beastie Boys Sabotage video? Spike Jonze, who would go on to direct movies like Being John Malkovich, Adaptation., and Where the Wild Things Are. I think we'll see some parallels between the oft-maligned music video directors, who started to emerge in the film world in the 1990s, and YouTube videomakers. At some point in the near future, we're going to see film directors coming from the world of short-form internet videos. Will this be a good thing? I'm sure there are lots of people who hate the music video aesthetic in film, but it's hard to really be that upset that people like David Fincher and Spike Jonze are making movies these days. I doubt YouTubers will have a more popular style, and I don't think they'll be dominant or anything, but I think they will arrive. Or maybe YouTube videomakers will branch out into some other medium or create something entirely new (as I mentioned earlier, there's a lot of room for innovation in the interactive realm). In all honesty, I don't really know where remix culture is going, but maybe that's why I like it. I'm looking forward to seeing where it leads.
Posted by Mark on March 14, 2010 at 02:18 PM .: link :.


End of This Day's Posts

Sunday, February 14, 2010

Best Films of 2009
As of right now, I've seen 78 movies that were released in 2009. That's probably fewer than a lot of critics have seen, but more than most folks. Overall, I had a much better feeling about this year than I had in the past couple of years. I had a really difficult time with my 2008 list (which I'm actually pretty happy with now, after a year of reflection), but here in 2009, things came together pretty easily. I had 9 movies right away, and the 10th came when I finally caught up with a film I knew I would like.

As always, lists like this are inherently subjective and I know that gets on some people's nerves - both from a "you're stupid because you don't like the same movies I do" perspective as well as from the "lists are inherently evil" argument. Indeed, with this year also marking the end of the decade, the multitude of best of the decade lists has prompted an increase in the typical backlash of anti-list sentiment. This post covers the usual complaints about lists: they're lazy criticism and basically represent filthy linkbait whoring. There's obviously more to it than that (read the full post). He makes some good points and there are certainly a lot of crappy lists out there (hey, here's one!), but on the other hand, who the hell cares what he thinks? I like lists. Apparently Americans Love Lists (and you know who doesn't like lists? Joseph Stalin!) So without further ado:

Top 10 Movies of 2009
* In roughly reverse order
  • (500) Days of Summer: This has emerged as something of a polarizing movie for some reason, but count me among the film's admirers. Great performances, genuine emotion, a playful, non-linear narrative structure and a wonderful ending all helped elevate this movie above the usual romantic comedy cliches.
    More Info: [IMDB] [DVD] [BD] [My Cryptic Twitter Review]
  • The Brothers Bloom: Rian Johnson's sophomore effort is perhaps not as tight as Brick, but it's still a blast. It hits all the con movie tropes while still managing to carve out an identity of its own, and while the ending isn't quite perfect, it's still better than I was expecting. All of the performances are good, but Rachel Weisz was a revelation and Rinko Kikuchi steals every scene she's in... Overall, it's a big barrel of fun and well worth watching (and judging from the box office results, you haven't seen it).
    More Info: [IMDB] [DVD] [BD]
  • Paranormal Activity: This low-budget found-footage horror flick isn't especially innovative and it's not as artistically accomplished as most films on this list, but I'll be damned if it wasn't the creepiest movie of the year. I still get chills thinking about this movie, and I'm very rarely scared by horror movies. The movie employs an effective scheme of tension and release and, thankfully, it also features a tripod (which mitigates many of the issues associated with found-footage movies). It was perhaps hyped too much upon initial release, but I saw it in ideal conditions, which may have something to do with how much I enjoyed it.
    More Info: [IMDB] [DVD] [BD] [Capsule Review]
  • Anvil! The Story of Anvil: This documentary follows the trials and tribulations of a once-influential heavy metal rock band that failed to ever find a real audience. It's a tale of perseverance and hope in the face of adversity, and even though their music isn't especially great (at least, not today - apparently their early stuff heavily influenced bands like Metallica, Slayer, and Anthrax), you can't help but root for these guys.
    More Info: [IMDB] [DVD]
  • A Serious Man: Yet another Coen brothers curveball, I found myself surprisingly riveted to the screen on this one. It has a big smattering of the Coens' trademark humor and at least one exceptionally well executed set piece (not exactly the right term, but I'm trying not to give anything away here). An excellent performance by Michael Stuhlbarg and the usual stable of great side performances (including the scene-stealing Fred Melamed, playing the smarmy Sy Ableman) anchor this film. The ending is abrupt and will undoubtedly infuriate some people, but I found it surprisingly fitting. But then, I'm apparently a sucker for the Coen Brothers.
    More Info: [IMDB] [DVD] [BD]
  • Star Trek: The most fun I've had in a movie theater all year. J.J. Abrams took an old, crusty franchise and made it fresh and interesting again. I wish there was a little more science in the fiction, but in the end, it's a highly enjoyable, action packed, crowd-pleasing popcorn film.
    More Info: [IMDB] [DVD] [BD] [Full Review]
  • Up: The first 20 minutes of this movie are the most devastating of any movie this year (in a good way). Luckily, the rest of the movie reels it back in, leaving you feeling pretty good by the end (which is no small feat considering the intensity of the prologue). Oh, and did I mention that this is an animated kids' movie? Pixar continues its amazing streak of great films.
    More Info: [IMDB] [DVD] [BD]
  • Red Cliff: John Woo's triumphant return to Hong Kong is a wonderful movie and his best since he left. Whether armies are being strategically maneuvered or a woman is pouring tea, Woo manages an elegance that has eluded most of his filmography. He's always choreographed excellent, almost balletic, action sequences, but everything in this film is pulled off with the same precision. So you get wonderful epic battle sequences (a first for Woo, I think) and also some more personal touches. I saw the theatrical cut, but there is apparently a two-part, 5 hour version that I am now quite interested in seeing.
    More Info: [IMDB] [DVD] [BD] [Capsule Review]
  • Fantastic Mr. Fox: A near perfect melding of Wes Anderson's quirky aesthetic with a classic children's story. The stop motion animation looks great and Anderson's visual style complements Roald Dahl's story quite well. Great voice performances from George Clooney, Meryl Streep and Jason Schwartzman (ok and Bill Murray and hell, everyone else too) and overall just a wonderfully fun experience. I'm suddenly interested in Wes Anderson again, as I think he'd fallen into a bit of a rut before this film, which shows that he's capable of growing as a filmmaker.
    More Info: [IMDB] [DVD] [BD]
  • Inglourious Basterds: The single most audacious movie of the year (if not the decade). Anchored by Quentin Tarantino's best writing since Pulp Fiction and a manic villainous performance from Christoph Waltz, playing Colonel Hans "The Jew Hunter" Landa like a Nazi version of Columbo, this movie pulls no punches and never falters. Mildly controversial when it came out, I think such criticism ignores Tarnatino's expert use of exformation, while at the same time exploding any preconceived notions of his WWII epic. Truly an astounding movie and without a doubt my favorite of the year.
    More Info: [IMDB] [DVD] [BD] [Full Review] [Winner of 3 Kaedrin Movie Awards]
Honorable Mention
* In alphabetical order
  • 4bia: This Thai horror anthology, the awful title of which is supposed to be a play on the word "phobia," has a lot going for it. As you might expect from the fact that it's an anthology, there's not a lot holding it together and some of the segments are better than the others. It was an early year favorite of mine, but eventually it yielded to other films. Also, as time went on, it began to feel more derivative than I had originally thought (a few of the segments feel exactly like other movies... interestingly, I think my favorite segment was also the least scary and most referential). Still, there's something to be said for a well executed genre pic, and this one fits that bill well. Definitely worth a watch for horror fans.
    More Info: [IMDB] [DVD] [Capsule Review]
  • Bronson: The semi-true story of Michael Peterson (aka Charles Bronson), the UK's most infamous prisoner. Ultimately not a lot of insight into Bronson, but the film is stylish and features one of the most spectacular performances of the year from Tom Hardy. As Bronson, Hardy is a font of volcanic rage and so, despite there not being much here, the film is never boring. I don't normally like this kind of movie, but I couldn't help but respect what this movie has done.
    More Info: [IMDB] [DVD] [BD] [Capsule Review]
  • Crank: High Voltage: I can't believe how much I enjoyed this movie. Indeed, I seriously considered it for a top 10 position, but it ultimately got pushed off the list by the Coen Brothers. This is a movie that just seems like it would be terrible, but again, I found myself very enthusiastically embracing the movie for what it is. It's just a huge amount of fun, playful and energetic filmmaking at its best. Probably not for everyone, but I had a lot of fun with it.
    More Info: [IMDB] [DVD] [BD]
  • Drag Me to Hell: Sam Raimi's return to his horror roots didn't blow me away the way it did with some other folks, but I did have a lot of fun with it. Really, it was the little things that I enjoyed the most. The handkerchief as villain motif, the anvil in the shed, and so on. It doesn't really approach Raimi's earlier low budget films, but it's still quite entertaining and well worth a watch for fans of the genre.
    More Info: [IMDB] [DVD] [BD]
  • Duplicity: Another strong contender for the top 10, I think this is a criminally underrated movie. I think perhaps this tale of corporate espionage and one-upmanship suffered from being released during a global economic depression. Still, it's well written and entertaining. The only bad thing to say about it is that the chemistry between Clive Owen and Julia Roberts wasn't exactly lighting the screen on fire. That's a small complaint though, and this movie would make a great rental. Check it out.
    More Info: [IMDB] [DVD] [BD]
  • The Hangover: I think this might have been the most I laughed in a theater this year. Sure it's completely random and overly raunchy, but I do like that sort of thing from time to time, and this movie is a fine example of the genre. In any other year, it might also have the best cameo, but as we'll see below, there's some stiff competition this year.
    More Info: [IMDB] [DVD] [BD]
  • The House of the Devil: I finally caught up with this brooding horror film last night, and I have to admit that it gave me pause about including Paranormal Activity in my top 10. Both movies are quasi-haunted house movies, but similarities wind up being mostly superficial. The House of the Devil is made with more artistry and in a more unconventional manner. It's a masterpiece of misdirection and tension building. Unlike the repeated tension and release of Paranormal Activity, The House of the Devil opts to continually build tension while withholding release. This is an interesting approach and the foreboding atmosphere of dread is hard to shake. I wish I was able to catch this a few months ago, as I'd like to see how well it ages. Highly recommended for fans of slow burning horror films.
    More Info: [IMDB] [DVD] [BD]
  • The Hurt Locker: Director Kathryn Bigelow's tense tale of a bomb defusing squad in Iraq is getting a lot of Oscar buzz, and Bigelow is certainly deserving of the best director title. Unfortunately, I'm not a huge fan of the movie as a whole. The action scenes are exceptionally well done, but some of the other sequences are a bit lackluster and the film ends without much of a real resolution. It's the best Iraq war movie made yet, but then again, that's not saying much.
    More Info: [IMDB] [DVD] [BD]
  • Moon: This little science fiction film features a great double performance by Sam Rockwell and a reasonably good SF story too. Unfortunately, I found myself nitpicking a lot of the plot points, especially towards the end, which makes for a less satisfactory experience. I think a lot of SF fans are so starved for good, hard SF movies (as opposed to huge budget special effects extravaganzas like Avatar or most super hero movies) that they're willing to overlook some of the less rational plot points. So I go back and forth on this. Sometimes I love it, sometimes I'm infuriated by the plot.
    More Info: [IMDB] [DVD] [BD]
  • Playing Columbine: What can I say, I'm a sucker for video game documentaries. The film is directed by Danny Ledonne, the creator of a game called Super Columbine Massacre RPG! where you actually play Eric Harris and Dylan Klebold and act out the massacre. Unsurprisingly, the game was very controversial and this movie delves into that a bit, but Ledonne wisely uses his game as a mere jumping-off point, preferring instead to explore broader and more interesting concepts such as the demonization of video games in the media, the value of video games as an artistic medium, censorship, responsibility and the nature of violence and school violence. If you like video games, it's well worth a watch, though I guess it's not available on DVD yet.
    More Info: [IMDB] [Full Review]
  • Surveillance: Jennifer Lynch (yes, daughter of David) directed this rather twisted tale. The film begins with a modern, dark Rashomon type feel, but it eventually eschews that style for something else. It's perhaps a little too reliant on the big twist, but I thought it was rather well done. It's also worth noting for some unconventional casting choices and surprisingly good performances. I'm apparently somewhat alone in even liking the movie at all, but I thought it was pretty good.
    More Info: [IMDB] [DVD] [BD]
  • Trick 'r Treat: This long-awaited horror anthology was worth the wait, but I think perhaps my expectations had become too inflated. Still, it's a worthy movie and one that I think will take its rightful place among Halloween themed movies, if only because of the way it incorporates all sorts of Halloween lore and rituals as plot elements (in a way that no other movie has). Unlike the aforementioned 4bia, the various segments here are all interconnected, and the movie benefits from that structure. Well worth a Halloween night watch next year.
    More Info: [IMDB] [DVD] [BD] [Capsule Review]
  • Watchmen: This movie adaptation of Alan Moore and Dave Gibbons' classic graphic novel Watchmen was a long time coming. It's certainly not perfect, but I think it's about as good as an adaptation could ever be. It's a little uneven, but it absolutely nails some areas of the story. Given that the comic book was created specifically to show off the comic book medium, I'm still surprised that the movie turned out as well as it did. Again, not perfect, but well worth it.
    More Info: [IMDB] [Amazon] [Full Review]
  • Zombieland: I'm not a big fan of zombie stories and I'm also not a big fan of Woody Harrelson, yet I really had a lot of fun with this movie. Sharply written, well acted and it also features the best cameo of the year. Just a big ball of fun, it hits all the right notes. What more can you ask for?
    More Info: [IMDB] [DVD] [BD] [Capsule Review]
Just Missed the Cut...
But still worthwhile, in their own way. Presented without comment and in no particular order:
Should Have Seen
Despite the fact that I've seen 78 movies this year (and that this post features 30+ of my favorites), there were a few that got away... mostly due to limited releases, though a few of the flicks listed below didn't interest me as much when they were released as they did when I heard more about them. Unlike last year, I'm not really expecting any of these to break into the top 10, though I guess there's always a chance. Anyway, in no particular order:

Well, that wraps up 2009... actually a pretty solid year for movies from my perspective. Not the best ever or anything, but probably better than the past couple years. Hey, perhaps I should put together a best of the decade list? Eh, that would be really difficult (not to mention really late), but perhaps I'll give it a shot at some point. I also want to post a top 100 of all time eventually... but that's even harder! Someday...
Posted by Mark on February 14, 2010 at 06:26 PM .: link :.


End of This Day's Posts

Sunday, December 13, 2009

Visual Literacy and Rembrandt's J'accuse
Perhaps the most fascinating film I saw at the 18½ Philadelphia Film Festival was Rembrandt's J'accuse. It's a documentary where British director Peter Greenaway deconstructs Rembrandt's most famous painting: Night Watch. It's arguably the 4th most celebrated painting in art history (preceded only by the Mona Lisa, The Last Supper, and the ceiling of the Sistine Chapel...) and Greenaway believes it's also an accusation of murder. The movie plays like a forensic detective story as Greenaway analyzes the painting from top to bottom. It's an interesting topic for a documentary, though I think the film ultimately falters a bit in its investigation (either that, or Greenaway is trying to do something completely different).

(Note: you can click on the images below for higher resolution versions.)

Night Watch

Greenaway began his career as a painter and he contends that most people are visually illiterate, which is an interesting point. We really do live in a text-based culture. Our education system encourages textual learning over visuals, from the alphabet to vocabulary and reading skills. The proportion of time spent reading paintings the way we read text is minute (if it happens at all). As such, our ability to analyze visual art forms like paintings is ill-informed and impoverished. Greenaway even takes the opportunity to rag on the state of modern cinema (which is a whole other discussion, as sometimes even bad movies are visually well constructed, but I digress). In any case, I do think Greenaway has a point here. Our culture is awash in visual information - television, movies, photography, etc... - and yet, we spend very little time questioning the veracity of what we're shown. They say that a picture is worth a thousand words, which is really just a way of saying that pictures can easily convey massive amounts of information. Pictures seem inherently trustworthy and persuasive, but this can, in itself, cause issues. Malcolm Gladwell examined this in his essay, The Picture Problem:
You can build a high-tech camera, capable of taking pictures in the middle of the night, in other words, but the system works only if the camera is pointed in the right place, and even then the pictures are not self-explanatory. They need to be interpreted, and the human task of interpretation is often a bigger obstacle than the technical task of picture-taking. ... pictures promise to clarify but often confuse. ... Is it possible that we place too much faith in pictures?
Gladwell is, of course, casting suspicion on images, but he's actually making many of the same points as Greenaway. What Gladwell is really saying is that human beings are visually illiterate. As Greenaway notes towards the beginning of the film, is what we see really what we see? Or do we only see what we want to see? Both Gladwell and Greenaway seem to agree that interpretation is key (though Gladwell might be a bit more pessimistic about the feasibility of doing so). Though this concept is not explicitly referenced later in the film, I do believe it is essential to understanding the film.

One of the first clues that Greenaway examines is the public nature of Rembrandt's painting. For the most part, public museums didn't start appearing until the mid-19th century. The Night Watch, by contrast, was on public display from day one (1642). In a time when paintings were private luxuries, usually viewed only by the rich and those who commissioned the paintings, the Night Watch was viewed by all. In a lot of ways, the painting is unusual and prompts questions, most of which don't seem to have any sort of satisfactory answers. This leads to all sorts of speculation and theories about the motives behind the painting and what it really depicts. One way to look at it is to view it as an accusation. An indictment of conspiracy. Greenaway starts with this idea and proceeds to examine 34 interconnected mysteries about the painting. The mysteries all serve to illuminate one thing: The content of the painting. What is it about? Who are the players? What is the accusation?

I will not go through all 34 mysteries, but as an example, the first mystery is about the Dutch Militia. At the time of the painting, there was a century-long Dutch tradition of the group military portrait. The Dutch had been involved in a long, drawn-out guerrilla war with the Spanish. Local militias were formed all throughout the country to protect their towns from their enemies. These local companies were comprised of regular citizens and volunteers, many of them important local figures, and they liked to have themselves painted, usually in uniform and in a powerful light to inspire solidarity and confidence. As the war wound down, these militias became less about the military and more about politics and power. It was a prestigious thing to be in a militia and they became more of a gentleman's club than a military organization. In the Night Watch, Rembrandt chose to break many of the traditions associated with the common Dutch military portrait. Many of the future mysteries examine these differences in great detail.

After seeing the movie I was struck by numerous things. First, for a filmmaker ostensibly crusading against visual illiteracy, I find it strange that Greenaway has chosen to present his argument as a gigantic wall of text. He narrates the entire film. Occasionally, he'll cut to "reenactments," which are scenes from his previous film, a fictional retelling of Rembrandt's painting, but even those consist primarily of characters spouting dialogue (these scenes rarely provide insight, though it's nice to break up the narration with something a little more theatrical).

Indeed, the grand majority of the mysteries are concerned with context (i.e. the cultural and historical traditions, the timing of the painting, who commissioned the painting, etc...). There is a concept from communication theory called exformation that I think is relevant here.
Effective communication depends on a shared body of knowledge between the persons communicating. In using words, sounds and gestures the speaker has deliberately thrown away a huge body of information, though it remains implied. This shared context is called exformation.
Wikipedia also has an excellent anecdotal example of the concept in action:
In 1862 the author Victor Hugo wrote to his publisher asking how his most recent book, Les Miserables, was getting on. Hugo just wrote “?” in his message, to which his publisher replied “!”, to indicate it was selling well. This exchange of messages would have no meaning to a third party because the shared context is unique to those taking part in it. The amount of information (a single character) was extremely small, and yet because of exformation a meaning is clearly conveyed.
Similarly, when Rembrandt painted the Night Watch and it was put on display, most of the viewers knew the subjects in the painting and the circumstances in which it was painted. As modern viewers, we do not have any of that shared knowledge. In order to understand the visuals of The Night Watch, one must first understand the context of the painting, something that is primarily established through text. For example, one of the mysteries of the painting has to do with the lighting. Rembrandt was one of the pioneers of artificial lighting in paintings, and this was the result of improvements to the technology of the day. There were apparently big improvements in the use of candles and mirrors, and so Rembrandt enjoyed playing with lighting, making the painting seem almost theatrical. To modern viewers, this sort of playful use of lighting isn't special - it's something we've seen a million times before and in a million other contexts. In Rembrandt's time, it was different. It called attention to itself and caused much speculation. Modern audiences thus need to be informed of this, and again, Greenaway accomplishes this mostly through the use of text.

To be sure, there are some interesting visualization techniques that Greenaway employs when talking about specific aspects of the painting. For example, when discussing the aforementioned use of lighting, Greenaway does his own manipulation, exaggerating the lighting in the painting to underline his point:

Lighting Effects

Unfortunately, these are not used as often as I would have hoped, nor are they always necessary or enlightening, and indeed there are numerous distractions throughout. For instance, the frame is often comprised of several overlapping and moving boxes. Sometimes this is used well, but it often feels visually overwhelming. Indeed, the audio is sometimes also overwhelming - Greenaway's narration is overlaid on top of music and sometimes even a woman's voice reciting the names of famous people who have seen Night Watch (the inclusion of which has always confused me). I'm sure it's challenging to make a movie about a painting without just putting up a static shot of the painting (and that's certainly not desirable), but does the screen need to be so busy? The visual components of the film seem to take a back seat to the textual elements... Interestingly, this is a film that seems to work a lot better on the small screen, as it's not nearly as overwhelming as it was in the theater.

Visually Overwhelming

Furthermore, the text presented to us is so dense that it can be hard to follow at times. This is at least partially due to the massive amount of exformation, unfamiliar European names, different cultural traditions, etc... There are 34 people depicted in the painting (plus a dog!), and it can be tough to keep track of who is who. I suppose I should not be surprised that someone obsessed with visual literacy is not a master writer, but perhaps there is something else going on here...

Next, I was struck by the inclusion of Greenaway's face, which is often positioned in a box right in the center of the frame. Why do that? Why is he calling so much attention to himself? My first inclination is that it's a breathtakingly arrogant strategy. Also, the sound of his voice (an overly deliberate pronunciation mixed with a stereotypical European accent) lends the impression of arrogance and pretentiousness. I think that may still be part of it, but again, there is more going on here.

Look at me!

There are many types of documentary films. The most common form of documentary is referred to as Direct Address (also known as Expositional Mode). In such a documentary, the viewer is directly acknowledged, usually through narration and voice-overs. There is very little ambiguity and it is pretty obvious how you're expected to interpret these types of films. Many television and news programs use this style, to varying degrees of success. Ken Burns' infamous Civil War and Baseball series use this format eloquently, but most traditional propaganda films also fall into this category. The disembodied nature of a voice-over lends an air of authority and even omniscience to a film's subject matter (this type of voice-over is often referred to as "Voice of God" narration). As such, these films are open to abuse through manipulative rhetoric and social propaganda.

By contrast, Reflexive Documentaries use many devices to acknowledge the filmmaker's presence, perspective, and selectivity in constructing the film. It is thought that films like this are much more honest about their subjectivity, and thus provide a much greater service to the audience.

An excellent example of a Reflexive documentary is Errol Morris' brilliant film, The Thin Blue Line. The film examines the "truth" around the murder of a Dallas policeman. The use of colored lighting throughout the film eventually correlates with who is innocent or guilty, and Morris is also quite manipulative through his use of editing - deconstructing and reconstructing the case to demonstrate just how problematic finding the truth can be. His use of framing calls attention to itself, daring the audience to question the intents of the filmmakers. The use of interviews in conjunction with editing is carefully structured to demonstrate the subjectivity of the film and its subjects. As you watch the movie, it becomes quite clear that Morris is toying with you, the viewer, and that he wants you to be critical of the "truth" he is presenting.

Ironically, a documentary becomes more objective when it acknowledges its own biases and agenda. In other words, a documentary becomes more objective when it admits its own subjectivity.

Greenaway could easily have employed a direct address narration with this film, but he does not. Instead, he conspicuously inserts himself right into the middle of the frame. Indeed, later in the film, Greenaway appears dressed in a ridiculous getup more suited to appear within the painting than in the movie. It's almost like he's daring us to question this visual choice. Why?

Perhaps because of the third thing that struck me - Greenaway is the only narrator in the film. Most documentaries feature many talking heads, experts and historians, and even some contrary opinions, among other expositional techniques. This film does not. Why? Could it be that Greenaway's story is complete bullshit? After all, his story is delivered in textual form. With his visuals, Greenaway is emphasizing his own subjectivity. A cursory glance around the internet (hardly a comprehensive search, but still) reveals that Greenaway appears to be the only one who subscribes to this theory of murder and accusation.

So I'm left with something of a dilemma. This movie is an impressive bit of speculation and interpretation, but I have no idea if it's true or not. The visual elements of the film seem to emphasize that it is an emphatically subjective interpretation of the painting, but that this sort of speculation on the visual composition is still important, and that we should do more of this sort of thing (something I would agree with).

Or maybe I'm reading way too much into the movie and he employs so much text simply because he thinks we're visually illiterate morons. At this point, I really don't know how to rate this film or gauge how much I enjoyed it. Upon first viewing it in the theater, I have to say that I didn't like it very much. And yet, it still fascinated me, to the point where I started writing this post and rewatching the film to make sure my interpretation fit. Indeed, as previously mentioned, I found it much more watchable on the small screen. If this post at all interests you, I suggest checking it out. It's actually available on Netflix's Watch Instantly feature (and thus can be viewed through a computer, a PS3, an Xbox, or any number of other Netflix streaming-ready boxes).

More screenshots and comments in the extended entry...

Update: More on Visual Literacy (in response to comments in this post)

This is the title screen of the film, and it's one example of the sensory overload that Greenaway employs. The building in the background is where the Night Watch now resides (the Rijksmuseum in Amsterdam). The shot is taken from far away, though with many things in the foreground, including a police car with flashing lights. Given the murder-mystery nature of the film, that part makes symbolic sense. Making less sense is the additional police car inset on the right of the screen (it's harder to see in a static screenshot, but that box is filmed separately, apparently during the day, so the lighting is different; in the movie, that box actually scrolls across the screen). Also inset on the right is a miniature version of the title screen. I have no idea what purpose that serves. And scrolling from right to left across the bottom of the screen is a list of signatures. These names are the aforementioned famous people who have publicly visited the Night Watch, and they are also being read by a female voice (again, I have no real idea why this is being done, as it only serves to add to the disorienting sensory experience).

Rembrandt's J'accuse

Interwoven within the documentary are scenes from Greenaway's earlier fictional retelling of the same story, Nightwatching. It stars Martin Freeman (who starred in the British version of The Office and a bunch of other stuff, including The Hitchhiker's Guide to the Galaxy). I found these scenes really strange at first. They seemed very out of place, at least until I found out that they were from an earlier Greenaway film. Then it made sense.

Rembrandt and friends

As previously mentioned, Greenaway does employ some visualization efforts to help call out certain features and structures within the painting. Some of the interesting ones are below. The first is one that silhouettes out the main actors in the drama of the painting. Then there's one that numbers all of the participants (you'll have to click on the image to get a good look at that one). There are a few that attempt to visualize the lines of sight of all the characters (only two are looking directly at the audience - this is one of the mysteries that Greenaway explores).

Silhouettes

The players, all numbered

Lines of sight

More lines of sight

One of the things that interested me about the film was that many of the "mysteries" are probably things that most people would notice if you asked them to stare at the painting for an hour. They don't have the exformation to read the painting correctly, but they'd easily be able to pick out a lot of the most salient features. For instance, it's easy to question why the girl in the painting is so prominent. It's the brightest part of the painting, and your eyes go there almost immediately upon viewing it. If given some time, you can even see that there's another girl behind the first, and her face is obscured (it turns out that Rembrandt painted it this way because the girl had horrible burns on her face and was thus self-conscious about it). I think the grand majority of the mysteries that Greenaway examines would be found if only someone took the time to really study the painting. Of course, I suspect most people don't actually do that sort of thing, so Greenaway does have a point, but still.

Little Girl's Obscured Face

Below is the aforementioned "ridiculous getup" that Greenaway puts on at one point. Again, I think this is how he is stressing his own subjective involvement in what we're seeing.

Greenaway and his ridiculous getup

Well, I think that just about wraps up my thoughts on Rembrandt's J'accuse. In closing, I'll give you one of the final shots of the film, which is a sorta reprise of the title screen. It's still cluttered and busy, but somehow not quite as pointless as the title screen.

More visually overwhelming stuff

It was an intriguing movie, I guess. It would be even more interesting if I could hear what other art historians and experts thought about it...
Posted by Mark on December 13, 2009 at 08:04 PM .: link :.


End of This Day's Posts

Sunday, September 20, 2009

Six Weeks of Halloween 2009: Week 1 - Universal Horror
It's that time of year again. Halloween is my favorite time of the year, and it provides a convenient excuse to explore one of my favorite genres of film (as I have done for the past couple of years). In preparation for this year's six week celebration of Halloween, I pretty quickly drew up a list that could easily take me through ten weeks... I doubt I'll get through them all, but I'm going to have fun trying. Highlights include this week's look at classic Universal Horror films, a sampling of the later Monster revival with Hammer Horror, perhaps some Vincent Price, and of course, some slashers and miscellaneous horrors to round out the pack (including the much anticipated Trick 'r Treat, amongst others). If you can't get enough Halloween madness here, be sure to visit Kernunrex, who's been doing this whole Six Weeks of Halloween thing a lot longer than I have... (Someday I'll redesign Kaedrin so as to allow for an easy switch to Halloween colors like he does... that day is probably not coming anytime soon, but still.)

It's the nicest weather Earth has ever had!*

As previously mentioned, this year's marathon kicks off with a look at Universal Studios' classic monster films. I've seen two of the following films before, but not since I was very young, so I figured it would be worth revisiting (as a result, I now want to revisit the original novels upon which the following films were based, which, if my current queue is any indication, means I'll get to them sometime in the 2020s). Here goes:
  • Frankenstein's Fiancee (Robot Chicken)
  • Frankenhooker (trailer)
  • Frankenstein (1910 - Full Movie)
  • Frankenstein (1931): My memories of Frankenstein were fond but not overly enthusiastic. I remember these films being hokey and over-the-top, and to be sure, there are elements of that here, but it is much more effective than I remember it being. Adapted from Mary Shelley's classic novel of the same name, the film is dramatically different from both the novel and the many stage variations of the preceding century. Despite the changes, the movie retains the feel and thematic resonance of the novel. This cautionary tale of technology gone awry strikes a chord throughout history, perhaps even more now than when it was written. It certainly helps that James Whale was behind the camera and Boris Karloff was in front of it, and the movie has aged quite well (it is perhaps the best of today's choices). ****

    Karloff's Frankenstein's Monster

  • Young Frankenstein (trailer)
  • Frankenstein for President
  • Abbott and Costello Meet Frankenstein (trailer)
  • Bride of Frankenstein (1935): I may have seen bits and pieces of this film before, but never the whole thing. This direct sequel to the 1931 film features mostly the same cast and crew, and as such, the technical aspects of the film are superb. Indeed, they may even surpass the original. Karloff is given more to do in this film, and while he was wonderfully expressive in the first film, he goes above and beyond in this film, infusing the Monster with emotion and even evoking sympathy. Director James Whale had also honed his skills in the intervening years and the Bride's creation scene is particularly well done, especially when it comes to the editing. This film's special effects also stand out, as when Dr. Pretorius displays his miniature experiments for Dr. Frankenstein (the scene holds up remarkably well, which is more than I can say for a lot of special effects from the era... (or even modern effects, for that matter)). Another standout scene is when the Monster encounters an elderly blind man, who teaches the Monster about bread, wine, and rudimentary English. He also introduces the Monster to the concept of friendship, which drives the rest of the story. I must admit that the story does get to be a bit more silly in this installment, but it still works very well. Thematically, the film expands upon the original, and adds some new twists of its own. The ending is actually quite moving, as the Monster realizes what he is and where he belongs. Many consider this sequel to be superior to the first film, and in many ways, it is. However, it is sillier and more over-the-top than the previous film. It is still a wonderful film in its own right, and something I'm glad I caught up with. ****

    Dr. Pretorius

  • Vampire 7:00-8:00AM, Vampire 1:00-2:00PM, and Vampire 8:00-9:00PM (Robot Chicken)
  • Bart Simpson's Dracula (The Simpsons: Treehouse of Horror IV)
  • Vampire Chase (Robot Chicken)
  • Dracula (1931): I was curious to revisit this film in light of the pop-culture craze for vampires we're experiencing right now. There are many who believe that vampires have been watered down these days:
    Once upon a time, vampires were monsters. Creatures of the night. Beasts who crawled from their coffins at night; consorted with spiders, bats, and rats; ravaged women and tore out the throats of men. They were demonic; spawns of Satan. The best known image of the vampire is that of Bela Lugosi, whose intonation of the line: "I never drink… wine" has become the standard.
    And indeed, many recent vampire stories take a less monstrous approach, favoring instead a more emotional and empathetic creature (though I must admit that I don't mind that approach either, just that it has become the pervasive approach). So in revisiting this classic film, it was refreshing to see Dracula portrayed as something unnatural and evil. Director Tod Browning is at his creepy best when framing Lugosi's Dracula onscreen. Lugosi's menacing glare is undeniably effective and his Dracula is indeed a creature to fear. Alas, the mechanics of the plot (and, uh, the special effects) leave something to be desired. This is a little disappointing, though still quite entertaining and better than most of today's vampire stories (I'm looking at you, Twilight!). Someday, perhaps, I'll check out the Spanish language version of this film, which was apparently shot at the same time using the same sets. Some believe it to be superior to the English language version... ***

    Lugosi's Dracula

One of the surprising things about all three of the above movies is that they are all between 70 and 75 minutes in length, significantly shorter than even the shortest movies in theaters today. It's worth noting that many of the above films are also restored from cut versions. In particular, the scenes missing from the original Frankenstein are quite important (they were restored in 1986 and most DVDs of the film include them), especially the scene where the Monster plays with the little girl. It's actually quite a disturbing scene, but Karloff was always able to walk that line between evil and misunderstood, creating a monster that was scary and sympathetic at the same time.

It's also interesting to note that the characters of Dracula and Frankenstein are two of the most frequently utilized fictional characters in the history of film. Dracula has 200+ appearances, while Frankenstein has a mere 80+ roles. And I think both will continue to rack up the appearances. Interestingly, I think there are several more recent horror icons that could give the classics a run for their money... Jason Voorhees, Michael Myers, and Freddy Krueger have established themselves pretty firmly in modern film culture, but I'm not sure they will ever be as prolific as the old Universal classic monsters. Why? Devin Faraci has speculated on this:
There is one major obstacle that's stopping Freddy and Jason and Mike Myers and Leatherface from really getting to that position of being among the truly eternal monsters of filmland: copyright. While the versions of the Universal Monsters we love are copyrighted in terms of their appearance (although a zillion manufacturers of Halloween ephemera have skirted the edges of that legality), the characters themselves are in the public domain. This is what has allowed them to become such prominent forces in film, keeping them going in permutation after permutation. If Universal outright owned the characters then Hammer, for instance, would never have been able to reinvent them in the 50s and 60s (my colleague Ryan Rotten very astutely notes that what Platinum Dunes is doing with the characters of Jason, Freddy and Leatherface, and what Rob Zombie is doing with Michael Myers, is very similar to what Hammer did with the Universal Monsters, recasting them and re-presenting them for a new generation with new tastes). In fact, the copyright on the Gill-Man from The Creature from the Black Lagoon may be one of the things keeping him from really ascending and going places as a character. Being tightly controlled by Universal keeps him from escaping into the pop culture world at large.
Perhaps audiences will still be squirming in their seats in fear of Jason, Mike, and Freddy a century from now, but maybe not. One thing is for sure though: Audiences will still be entertained by updates on Frankenstein and Dracula...

* With apologies to the MST3K Movie for that joke, though it works even better on the newer variations on the logo...
Posted by Mark on September 20, 2009 at 12:00 PM .: link :.


End of This Day's Posts

Sunday, August 16, 2009

Noir Ends
In my first post on Noir, I kinda made light of the body count that our two heroes were racking up, as well as the fact that French society never seemed to notice when a few dozen nameless hitmen were discovered in a park or abandoned building somewhere. I was making a joke of it, but it always sorta bothered me. There are a few hundred people who die during the course of this series. While they're all portrayed as mostly nameless, faceless victims, I couldn't help but wonder what the consequences of their deaths were. Were they married? Did they have kids? Friends? And so on. Warning: The rest of the post contains major spoilers!

One of the things I wondered about was how well Mireille and Kirika were able to deal with the amount of death and destruction they were doling out. For the most part, they seem to deal with it remarkably well. Kirika seems to be more affected by it than Mireille. As the series goes on, she seems less and less enthused with what she's capable of doing.... but there's something off about her reaction that took me a while to place. I finally realized what it was - it reminded me of Crime and Punishment (I suppose I should note spoilers for that novel as well), in particular, this paragraph (page 623 in my edition) where Raskolnikov laments his punishment:
... even if fate had sent him no more than remorse - burning remorse that destroyed the heart, driving away sleep, the kind of remorse to escape those fearsome torments the mind clutches at the noose and the well, oh, how glad he would have been! Torment and tears - after all, that is life, too. But he felt no remorse for his crime.
In essence, Raskolnikov felt no guilt or remorse for his crime, but that lack of feeling, that lack of guilt, was just as horrible as anything he could have imagined. That's very much how I thought Kirika felt during the second half of the series. In his take on the series, Steven Den Beste does an excellent job describing the duality of Kirika:
Kirika had two parts inside. One part was a killing machine. It was created by Altena through training and indoctrination, and once it seemed ready, Kirika's memory was wiped and she was placed in Japan, so that she could begin to face the Trials which were required of all candidates for Noir to prove their fitness. Events after that point were not planned, because they depended on what Kirika herself did, and how she reacted to the process. Hints were left which might lead Kirika to Mireille, but if they had not, she would have faced her trials alone.

The other side of Kirika was a lonely girl, who wanted nothing more than a normal life, a name, a home, and someone to love and be loved by. The series shows us those two sides of Kirika, gradually building them up to tangible presences, and in episode #25 Kirika is forced to choose one over the other.
The killing machine part of Kirika's personality was capable of evil, without remorse or guilt, but the human side of her personality recognized how horrible that was and the series is essentially about Kirika's internal struggle. Mireille seemed to be much more neutral. The other piece of the puzzle is Chloe, who seems to take a perverse pleasure in what she is capable of, and as the series progresses, she becomes more and more creepy.

Kirika and Chloe

Ultimately, when Kirika is forced to choose between Mireille and Chloe, she chooses Mireille (who I guess is supposed to represent the human side of Kirika's personality). As Steven notes, the series does not end there and neither does Kirika's internal struggle. She is still capable of horrible evil and is not sure she could live with herself. Altena still attempts to appeal to the killing machine portion of Kirika's personality, but she ultimately fails, and Mireille succeeds in saving Kirika. At the very end, it's clear that Kirika and Mireille will continue on together and that they love each other (like sisters). I am once again reminded of Dostoyevsky (page 630 in my edition - replace the male pronouns with female pronouns and this could easily apply to Kirika):
... at this point a new story begins, the story of a man's gradual renewal, his gradual rebirth, his gradual transition from one world to another, of his growing acquaintance with a new, hitherto completely unknown reality. This might constitute the theme of a new narrative - our present narrative is, however, at an end.
There's a lot more to the ending of the series that I'm skipping over, but Steven's post covers that in plenty of detail and I don't see a need to repeat all that... It's not a perfect series, but the ending did make it worthwhile for me. I wouldn't say that I was as taken with it as Steven or Alex, but neither was I as disappointed with it as Ben. I thought the series was a bit too long (a little too much filler, perhaps) and unevenly paced, but the ending made up for any issues I may have had with the series.

As usual, more screenshots and commentary in the extended entry...

Kirika and Mireille and a pool table

I didn't notice this at first, but the table that Mireille uses to do her work is a pool table. Not sure what the significance of that is, but I guess you could make something symbolic out of it, like that Mireille and Kirika are stuck playing the Soldats' game or something.

Cargo

Cargo containers in the least organized port in the world. Seriously, look at those things.

Kirika double-fisting pistols

As mentioned above, Kirika, seen here double-fisting some pistols, John Woo style, is the main character of the series. This is interesting because at first glance, the series seems to be primarily about Mireille. As the series progresses, Mireille takes a back seat to Kirika and Chloe, then comes back to the foreground at the end.

The Soldats

The Soldats in their stereotypical lair, sitting next to a fireplace and sipping port. We find out more about the Soldats later in the series, but their ultimate plan and Altena's plan for Noir all ends up taking a backseat to Kirika's internal struggle, which is the true conflict of the series. That's a good thing too, as giant conspiracies tend to bore me...

Faceless Henchman #346

As the series progresses, Kirika, Mireille and Chloe encounter more and more hired killers, and in this case, the killers are literally faceless. Not a single one seems to be able to hold a candle to any of the Noirs though, which makes me wonder how challenging these "trials" are supposed to be for Noir.

Chloe and Kirika

This scene really bothered me. Not so much when it happened as in the next episode, when we find out... that it doesn't really mean anything. It serves a purpose - Mireille begins to realize just how much she cares for Kirika, etc... - but it's kind of a cheap shot. Also, I'm not really sure what happened. Did Chloe actually shoot Kirika? Why is Kirika fine afterwards? I didn't get it.

Chloe

Towards the end of the series, we learn that Kirika killed Mireille's parents (apparently when Kirika was extremely young). Chloe was also there, and the screenshot above is her after she sees Kirika kill. Kinda creepy.

Kirika, with sword

Chloe, with sword

Towards the end of the series, Kirika and Chloe are reunited at Altena's home and have an awesome swordfight (as a training exercise).
Chloe, with sword

Kirika wins the training session, and in the screenshot above you see something that is a recurring image. Often, when Kirika's killing machine personality is in control, her hair covers her eyes, making her faceless and symbolizing emotionlessness. I didn't really notice this until later in the series, so I'm not sure it applies to the whole series, but I did see it multiple times.

Mireille and Kirika

Mireille and Kirika have a faceoff towards the end of the series. They are legitimately trying to kill one another, but in the end, neither can pull the trigger.

Mireille and Kirika

This is the last shot in the series. The saturated, washed out brightness of this type of shot usually symbolizes transcendence or resolution, and that certainly fits with the ending of the series.

Well, that about covers it. Next up in the Anime queue is Miyazaki's Ponyo, which I should be seeing sometime this week. It's actually getting a pretty wide release - it's even playing at the local multiplex...
Posted by Mark on August 16, 2009 at 02:08 PM .: link :.


End of This Day's Posts

Sunday, June 28, 2009

Interrupts and Context Switching
To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (for example, 5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc... When you get into the guts of a computer and start looking at how they work, it seems amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform millions of these operations per second, so it still feels fast. The processor performs these operations in a serial fashion - basically a single-file line of operations.

This single-file line can be quite inefficient, and there are times when you want a computer to be processing many different things at once rather than one thing at a time. Most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself. For instance, when a program needs some data, it may have to read that data from the hard drive first. This may only take a few milliseconds, but the CPU would be idle during that time - quite inefficient. To improve efficiency, computers use multitasking. A CPU can still only be running one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called Context Switching. Ironically, the act of context switching adds a fair amount of overhead to the computing process. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load the state of the CPU from memory. Fortunately, this overhead is often offset by the efficiency gained from frequent context switches.
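
To make the save-and-restore idea a bit more concrete, here's a toy sketch in Python (my own illustration, not how a real operating system actually does it): each task is a generator, so its local state is automatically saved when it yields control and restored when the scheduler resumes it.

    from collections import deque

    def task(name, steps):
        # A toy "process": do a little work, then yield the CPU.
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # local state (including the loop counter) is saved here

    def round_robin(tasks):
        # A toy scheduler: resume a task for one step, then re-queue it.
        queue = deque(tasks)
        while queue:
            current = queue.popleft()
            try:
                next(current)          # "context switch" back into the task
                queue.append(current)  # give the other tasks a turn
            except StopIteration:
                pass                   # task finished, drop it

    round_robin([task("A", 3), task("B", 2)])

Run it and the output interleaves A and B - the illusion of doing two things at once, produced by a strictly serial worker.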

If you can do context switches frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Signaling the CPU to do a context switch is often accomplished with a mechanism called an Interrupt. For the most part, the computers we're all using are Interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.
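
As a second hedged sketch (again my own illustration, and Unix-only, since it abuses a timer signal as a stand-in for a hardware timer interrupt), the busy loop below gets "interrupted" every 10 milliseconds, and the handler tells the loop to switch to the other task:

    import signal

    switch_requested = False

    def timer_interrupt(signum, frame):
        # Plays the role of an interrupt handler: just flag the scheduler
        # that the running task's time slice is up.
        global switch_requested
        switch_requested = True

    signal.signal(signal.SIGALRM, timer_interrupt)
    signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)  # "interrupt" every 10 ms

    tasks = ["A", "B"]
    work = {"A": 0, "B": 0}
    current = 0

    while work["A"] < 1_000_000 or work["B"] < 1_000_000:
        work[tasks[current]] += 1                  # the currently running "process"
        if switch_requested:                       # an interrupt arrived...
            switch_requested = False
            current = (current + 1) % len(tasks)   # ...so do a context switch

    signal.setitimer(signal.ITIMER_REAL, 0, 0)     # cancel the timer
    print(work)

Neither task ever volunteers to stop; the periodic interrupt is what forces the switches, which is roughly the arrangement described above.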

This might sound tedious to us, but computers are excellent at this sort of processing. They will do millions of operations per second, and generally have no problem switching from one program to the other and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing - we can't change the speed of light or the size of atoms or a number of other physical constraints, and so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be to figure out how to use parallel computing as well as we now use serial computing. Hence all the talk about Multi-core processing (most commonly used with 2 or 4 cores).

Parallel computing can, in principle, do many things that are far beyond our current technological capabilities. For a perfect example of this, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things which are far beyond the abilities of the biggest and most complex computers in existence. The reason is that there are truly massive numbers of neurons in our brain, and they're all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it's not so much the number of neurons we have as how they're organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn't mean they're proportionally more intelligent. An elephant's brain is much larger than a human's brain, but elephants are obviously much less intelligent than humans.

Of course, we know very little about the details of how our brains work (and I'm not an expert), but it seems clear that brain size and neuron count are not as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are "digital" in that if you were to take a snapshot of the brain at a given instant, each neuron would be either "on" or "off" (i.e. a 1 or a 0). However, neurons don't work like digital electronics. When a neuron fires, it doesn't just turn on, it pulses. What's more, each neuron is accepting input from and providing output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
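
As a very loose illustration of the "weighted connections" idea, here's the sort of crude artificial neuron used in software neural networks (my own toy sketch in Python; real neurons pulse and constantly rewire themselves, which this ignores entirely). Each incoming connection has a weight, and the neuron only fires if the weighted sum of its inputs crosses a threshold:

    def neuron(inputs, weights, threshold=1.0):
        # Sum each input scaled by the strength ("influence") of its connection.
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Three incoming connections with different strengths.
    print(neuron([1, 0, 1], [0.9, 0.4, 0.3]))  # 0.9 + 0.0 + 0.3 = 1.2, so it "fires" (1)
    print(neuron([0, 1, 1], [0.9, 0.4, 0.3]))  # 0.0 + 0.4 + 0.3 = 0.7, below threshold (0)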

This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable - things humans cherish and that computers can't really do on their own.

However, this all comes with its own set of tradeoffs. With respect to this post, the most relevant is that humans aren't particularly good at context switches. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious - heart pumping, breathing, processing sensory input, etc... Those are also things that we never really cease doing (while we're alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).

In a computer, everything happens in serial, and thus it is easy to predict how various inputs will impact the system. What's more, when a computer stores its CPU's current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, doing this sort of context switching is much more difficult. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don't necessarily have a more effective memory system; they're just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so doing a context switch introduces a lot of thrash in whatever you were originally doing, because you will have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time doing). When you're working on something specific, you're dedicating a significant portion of your conscious brainpower towards that task. In other words, you're probably engaging millions if not billions of neurons in the task. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. In a computer, you only need to ensure the current state of a single CPU is saved. Your brain, on the other hand, has a much tougher job, and its memory isn't quite as reliable as a computer's memory. I like to refer to this as mental inertia. This sort of issue manifests itself in many different ways.

One thing I've found is that it can be very difficult to get started on a project, but once I get going, it becomes much easier to remain focused and get a lot accomplished. But getting started can be a problem for me, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky - it's called Fire and Motion. A quick excerpt:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.

Somewhere between step 8 and step 9 there seems to be a bug, because I can't always make it across that chasm. For me, just getting started is the only hard thing. An object at rest tends to remain at rest. There's something incredible heavy in my brain that is extremely hard to get up to speed, but once it's rolling at full speed, it takes no effort to keep it going.
I've found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as "flow" or being "in the zone." This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.

From my own experience a couple of years ago during a particularly demanding project, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hopes that they will be able to get more done because no one else is around (and they complain when people do show up that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.

A key component of flow is finding a large, uninterrupted chunk of time in which to work. That can be difficult to do at a lot of workplaces, including mine. We're a 24/7 company, and the nature of our business requires frequent interruptions, so many of us are in a near constant state of context switching. Between phone calls, emails, and instant messaging, we're sure to be interrupted many times an hour if we try to keep up with them all. What's more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have a lot of meetings on our calendars, which only makes it more difficult to concentrate on something important.

Tell me if this sounds familiar: You wake up early and during your morning routine, you plan out what you need to get done at work today. Let's say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails and by the end of the day, you've accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.

Another example: if it's 2:40 pm and I know I have a meeting at 3 pm, should I start working on a task I know will take me 3 solid hours or so to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I'll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are when I get back to my desk to get started again, I'm going to have to refamiliarize myself with the project and what I had already done before proceeding.

Of course, none of what I'm saying here is especially new, but in today's world it can be useful to remind ourselves that we don't need to always be connected or constantly monitoring emails, RSS, facebook, twitter, etc... Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It's funny, when you look at a lot of attempts to increase productivity, efforts tend to focus on managing time. While important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).

(Note: As long and ponderous as this post is, it's actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored towards the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that could be repurposed in my professional life (and vice/versa).)
Posted by Mark on June 28, 2009 at 03:44 PM .: link :.


End of This Day's Posts

Wednesday, June 17, 2009

The Motion Control Sip Test
A few weeks ago, Microsoft and Sony unveiled rival motion control systems, presumably in response to Nintendo's dominant market position. The Wii has sold much better than both the Xbox 360 and the PS3 (to the point where sales of the Xbox and PS3 combined are around the same as the Wii), so I suppose it's only natural for the competition to adapt. To be honest, I'm not sure how wise that would be... or rather, I'm not sure Sony and Microsoft are imitating the right things. Microsoft's Project Natal seems quite ambitious in that it relies completely on gestures and voice (no controllers!). The Sony motion control system, which relies on a camera and two handheld wands, seems somewhat similar to the Wii in that there are still controllers and buttons. Incidentally, Nintendo actually released Wii Motion Plus, an improvement to its already dominant system.

My first thought for a way to compete with the Wii would have been along similar lines, though not for the reasons I suspect drove Microsoft and Sony to their solutions. The problem for MS & Sony is that the Wii is the unquestionable winner of this generation of gaming consoles, and everyone knows that. A third party video game developer can create a game for a console with an install base of 20 million (the PS3), 30 million (Xbox) or 50 million (Wii). Since the PS3 and Xbox have similar controllers, third parties can often release games on both consoles, though there is overhead in porting your code to both systems. This gives a rough parity between those two systems and the Wii... until you realize that developing games for the Xbox/PS3 means HD, and that means those games will be much more costly (in both time and money) to develop. On the other hand, you could reach the same size audience by developing a game for the Wii, using standard definition (which is much easier to develop for) and not having to worry about compatibility issues between two consoles.

The problem with Natal and Sony's Wands is that they basically represent brand new consoles. This totally negates the third party advantage of releasing a game on both platforms. Now a third party developer who wants to create a motion control game is forced to choose between two underperforming platforms and one undisputed leader in the field. How do you think that's going to go?

Microsoft's system seems to be the most interesting in that they're trying something much different than Nintendo or Sony. But "interesting" doesn't necessarily translate into successful, and from what I've read, Natal is a long ways away from production quality. Yeah, the marketing video they created is pretty neat, but from what I can tell, it doesn't quite work that well yet. Even MS execs are saying that what's in the video is "conceptual" and what they "hope" to have at launch. If they launch it at all. I'd be surprised if what we're seeing is ever truly launched. Yeah, the Minority Report interface (which is basically what Natal is) really looks cool, but I have my doubts about how easy it will be to actually use. Won't your arms get tired? Why use motion gestures for something that is so much easier and more precise with a mouse?

Sony's system seems to be less ambitious, but also too different from Nintendo's Wiimote. If I were at Sony, I would have tried to duplicate the Wiimote almost exactly. Why? Because then you give 3rd party developers the option of developing for Wii then porting to PS3, thus enlarging the pie from 50 million to 70 million with minimal effort. Sure the graphics wouldn't be as impressive as other PS3 efforts, but as the Wii has amply demonstrated, you don't need unbelievable graphics to be successful. The PS3 would probably need a way to upscale the SD graphics to ensure they don't look horrible, but that should be easy enough. I'm sure there would be some sort of legal issue with that idea, but I'm also sure Sony could weasel their way out of any such troubles. To be clear, this strategy wouldn't have a chance at cutting into Wii sales - it's more of a holding pattern, a way to stop the bleeding (it might help them compete with MS though). Theoretically, Sony's system isn't done yet either and could be made into something that could get Wii ports, but somehow I'm doubting that will actually be in the works.

The big problem with both Sony's and Microsoft's answers to the Wiimote is that they've completely misjudged what made the Wii successful. It's not the Wiimote and motion controls, though that's part of it. It's that Nintendo courted everyone, not just video gamers. They courted grandmas and kids and "hardcore" gamers and "casual" gamers and everyone in between. They changed video games from solitary entertainment to something that is played in living rooms with families and friends. They moved into the Blue Ocean and disrupted the gaming industry. The unique control system was important, but I think that's because the control system was a signifier that the Wii was for everyone. The fact that it was simple and intuitive was more important than motion controls. The most important part of the process wasn't motion controls, but rather Wii Sports. Yes, Wii Sports uses motion controls, and it uses them exceptionally well. It's also extremely simple and easy to use, and it was targeted towards everyone. It was a lot of fun to pop in Wii Sports and play some short games with your friends or family (or coworkers or enemies or strangers off the street or whoever).

The big problem for me is that even Nintendo hasn't improved on motion controls much since then. It's been 3 years since Wii Sports, and yet it's still probably the best example of motion controls in action. I have not played any Wii Motion Plus games yet, so for me, the jury is still out on that one. However, I'm not that interested in playing the games I'm seeing for Motion Plus, let alone the prospect of paying for yet another peripheral for my Wii (though it does seem to be cheap). The other successful games for the Wii weren't so much successful for their motion controls as for other, intangible factors. Mario Kart is successful... because it's always successful (incidentally, while I still enjoy playing with friends every now and again, the motion controls have nothing to do with that - it's more just the nostalgia I have for the original Mario Kart). Wii Fit has been an amazing success story for Nintendo, but it introduced a completely new peripheral, and its success is probably more due to the fact that Nintendo was targeting more than just the core gamer audience with software that broadened what was possible on a video game console. Again, Nintendo's success is due to their strategy of creating new customers and the marketing campaigns that follow the same strategy. The Wii has a lot of games with less than imaginative motion controls - games which simply replace random button mashing with random stick waggling. But where Nintendo is most successful seems to be where they target a broader audience. They also seem to be quite adept at playing on people's nostalgia, hence I find myself playing new Mario, Zelda, and Metroid games, even when I don't like some of them (I'm looking at you, Metroid Prime 3!)

Motion controls play a part in this, but they're the least important part. Why? Because the same complaints I have for Natal and the Minority Report interface apply to the Wii (or the new PS3 system, for that matter). Take Metroid Prime 3, for example. An FPS for the Wii! Watch how motion controls will revolutionize the FPS! Well, not so much. There are a lot of reasons I don't like the game, but one of them was that you constantly had to have your Wiimote pointed up. If your hand strayed or you wanted to rest your wrist for a moment, your POV strayed too. There are probably some other ways to do an FPS on the Wii, but I'm not especially convinced (The Conduit looks promising, I guess) that a true FPS game will work that well on the Wii (heck, it doesn't work that well on a PS3 or Xbox when compared to the PC). That's probably why rail shooters have been much more successful on the Wii.

Part of the issue I have is that motion controls are great for short periods of time, but even when you're playing a great motion control game like Wii Sports, playing for long periods has adverse effects (Wii elbow, anyone?). Maybe that's a good thing; maybe gamers shouldn't spend so much time playing video games... but personally, I enjoy a nice marathon session every now and again.

You know what this reminds me of? New Coke. Seriously. Why did Coca-Cola change their time-honored and fabled secret formula? Because of the Pepsi Challenge. In the early 1980s, Coke was losing ground to Pepsi. Coke had long been the most popular soft drink, so they were quite concerned about their diminishing lead. Pepsi was growing closer to parity every day, and that's when Pepsi started running commercials pitting Coke against Pepsi. The Pepsi Challenge took dedicated Coke drinkers and asked them to take a sip from two different glasses, one labeled Q and one labeled M. Invariably, people chose the M glass, which was revealed to contain Pepsi. Coke initially disputed the results... until they started running private sip tests of their own. It turns out that people really did prefer Pepsi (hard as that may be to believe for those of us who love Coke!). So Coke started tinkering with their secret formula, attempting to make it lighter and sweeter (i.e. more like Pepsi). Eventually, they got to a point where their new formulation consistently outperformed Pepsi in sip tests, and thus New Coke was born. Of course, we all know what happened. New Coke was a disaster. Coke drinkers were outraged, the company's sales plunged, and Coke was forced to bring back the original formula as "Classic Coke" just a few months later (at which point New Coke practically disappeared). What's more, Pepsi's seemingly unstoppable ascendance never materialized. For the past 20-30 years, Coke has beaten Pepsi despite sip tests which say it should be the other way around. What was going on here? Malcolm Gladwell explains this incident and the aftermath in his book Blink:
The difficulty with interpreting the Pepsi Challenge findings begins with the fact that they were based on what the industry calls a sip test or a CLT (central location test). Tasters don’t drink the entire can. They take a sip from a cup of each of the brands being tested and then make their choice. Now suppose I were to ask you to test a soft drink a little differently. What if you were to take a case of the drink home and tell me what you think after a few weeks? Would that change your opinion? It turns out it would. Carol Dollard, who worked for Pepsi for many years in new-product development, says, “I’ve seen many times when the CLT will give you one result and the home-use test will give you the exact opposite. For example, in a CLT, consumers might taste three or four different products in a row, taking a sip or a couple sips of each. A sip is very different from sitting and drinking a whole beverage on your own. Sometimes a sip tastes good and a whole bottle doesn’t. That’s why home-use tests give you the best information. The user isn’t in an artificial setting. They are at home, sitting in front of the TV, and the way they feel in that situation is the most reflective of how they will behave when the product hits the market.”

Dollard says, for instance, that one of the biases in a sip test is toward sweetness: “If you only test in a sip test, consumers will like the sweeter product. But when they have to drink a whole bottle or can, that sweetness can get really overpowering or cloying.” Pepsi is sweeter than Coke, so right away it had a big advantage in a sip test. Pepsi is also characterized by a citrusy flavor burst, unlike the more raisiny-vanilla taste of Coke. But that burst tends to dissipate over the course of an entire can, and that is another reason Coke suffered by comparison. Pepsi, in short, is a drink built to shine in a sip test. Does this mean that the Pepsi Challenge was a fraud? Not at all. It just means that we have two different reactions to colas. We have one reaction after taking a sip, and we have another reaction after drinking a whole can.
To me, motion controls seem like a video game sip test. The analogy isn't perfect, because I think that motion controls are here to stay, but I think the idea is relevant. Coke is like Sony - they look at a successful competitor and completely misjudge what made them successful. Yes, motion controls are a part of the Wii's success, but their true success lies elsewhere. In small doses and optimized for certain games (like bowling or tennis), nothing can beat motion controls. In larger doses with other types of games, motion controls have a long ways to go (and they make my arm sore). Microsoft and Sony certainly don't seem to be abandoning their standard controllers, and even the Wii has a "Classic Controller", and I think that's about right. Motion controls have secured a place in gaming going forward, but I don't see it completely displacing good old-fashioned button mashing either.

Update: Incidentally, I forgot to mention the best motion control game I've played since Wii Sports has been... Flower, for the PS3. Flower is also probably a good example of a game that makes excellent use of motion controls, but hasn't achieved anywhere near the success of Nintendo's games. It's not because it isn't a good game (it is most definitely an excellent game, and the motion controls are great), it's because it doesn't expand the audience the way Nintendo does. If Natal and Sony's new system do make it to market, and if they do manage to release good games (and those are two big "ifs"), I suspect it won't matter much...
Posted by Mark on June 17, 2009 at 06:40 PM .: link :.


End of This Day's Posts

Sunday, June 07, 2009

A Decade of Kaedrin
It's hard to believe, but it has been ten years since I started this website. The exact date is a bit hard to pinpoint, as the site was launched on my student account at Villanova, which existed and was accessible on the web as far back as 1997. However, as near as I can tell, the site now known as Kaedrin began in earnest on May 31, 1999 at approximately 8 pm. That's when I wrote and published the first entry in The Rebel Fire Alarms, an interactive story written in tandem with my regular visitors. I called these efforts Tandem Stories, and they were my primary reason for creating the website. Other content was being published as well - mostly book, movie, and music reviews - but the primary focus was the tandem stories, because I wanted to do something different on an internet that was filled with boring, uninspired, static homepages that were almost never updated. At the time, the only form of interaction you were likely to see on a given website was a forum of some kind, so I thought the tandem stories were something of a differentiator for my site, and they were, though I never really knew how many different people visited the site. As time went on, interactivity on the web, even of the interactive story variety, became more common, so that feature became less and less unique...

I did, however, have a regular core of visitors, most of whom knew me from the now defunct 4degreez message boards (which have since morphed into 4th Kingdom, still a vibrant community site). To my everlasting surprise and gratitude, several of these folks are still regular visitors, and while most of what I do here is for my own benefit, I have to admit that I never would have gotten this far without them. So a big thank you to those who are still with me!

But I'm getting ahead of myself here. Below is a rough timeline of my website, starting with my irrelevant student account homepage (which was basically a default page with some personal details filled in), moving on to the first incarnation of Kaedrin, and progressing through several redesigns and technologies until we arrive at the site you're looking at now (be forewarned, this gets to be pretty long, though it's worth noting that the site has looked pretty much like it does today since way back in 2001, so the bulk of the redesigning happened in the 1999-2001 timeframe)...
  • 1997-1999: As I started to take computer programming courses in college, I gained access to a student account on the university website. By default, all student accounts came with a bare-bones homepage which we were encouraged to personalize. I never really did much with it, though I thought it was funny to see some of the courses I was taking back in the day: MAT 1050 - Who cares about math, and HIS 3140 - The History of the spork. Also of note: the fact that we referred to it as an "electronic mail address" and that Google was not on my radar yet... Sometime during this timeframe I started considering a more comprehensive "homepage" and made a few stabs that never really got beyond the Photoshop stage (thankfully for you!). Among these ill-fated designs was the uber-nerdy logic gate design shown below (click for larger, more complete version):

    Old, bad, nerdy design

    I'm not really embarrassed so much at the logic gate aspect of the design (which I thought was mildly clever at the time) as the font choice. Gah. Anyway, it was during this timeframe that the first designs for a site called Kaedrin started. The first drafts of the now iconic (well, to me) Kaedrin logo were created during this timeframe. They were not used, but every logo since then has used the same Viner Hand ITC font, though these days the logo isn't quite as prominent as it once was (as you'll see below).
  • May 1999 - Kaedrin v1.0: Again, I've had difficulty pinpointing the exact date when I launched Kaedrin in earnest, but judging from the timestamp of the first entry in The Rebel Fire Alarms, I gather that the site had been fully launched in May of 1999 (just as I was finishing up the semester and had some free time on my hands). Thanks to my participation on 4degreez.com (which may have been known as the T.A.S. Boards at the time, I don't remember exactly), I immediately had a built-in audience of like 5 people, which was pretty cool. That summer was filled with updates and content (this was before blogs, so updates came in the form of reviews for books, movies, and music, amongst other stuff that was popular on the web at the time, like sound clips and funny pictures, etc...). The layout initially featured mostly red text on a black background, but I found that to be a bit hard on the eyes, so in August I tried to soften the colors a bit (though even the new color scheme was pretty tough on the eyes). I can't seem to find an example of the full red on black, but here's the tweaked version (Click the image to see the full HTML page).

    Kaedrin: Version 1.0

    For the full effect, you have to click through to the HTML page and mouse over the left navigation. Back in the day, CSS support was minimal, so to do those rollovers I had to write some custom JavaScript. I don't think any of the links off the page will work, but it's worth viewing just for the fun of it. Also worth noting: the animated copyright logo gif thingy and the fact that I had a guestbook (which was all the rage back in the day). Finally, on a high resolution monitor today it's difficult to notice, but at 800x600 the Kaedrin logo is enormous!
  • May 2000 - Kaedrin v2.0: After graduating college and initiating a job search, I decided that the old homepage design wasn't very professional looking. During the course of my senior year, I had spent time learning and thinking about usability and accessibility, and my site at the time was not especially great in those respects (i.e. I figured out for certain that dark red and blue text on a black background was a bad thing). Also, being stuck with a modem connection (after the school's snappy T3 lines) made me more acutely aware of page loading speeds (and the old page was rather image heavy). So I came up with a much cleaner and simpler design (Click the image to see the full HTML page).

    Kaedrin: Version 2.0

    This was certainly an improvement and when I eventually did find a job, my boss mentioned that she liked my site, so mission accomplished, I guess. Unfortunately, a "much cleaner and simpler design" also meant a more boring design, so it wasn't long before I started fiddling around with the layout again. This was a little vexing because I was maintaining all of the pages on the site by hand, and converting to the new layout was a monumental pain in the ass. As such, many of the design tweaks made during this (rather short) era were inconsistent throughout the site.
  • July 2000 - Kaedrin Weblog launched: The summer of 2000 is also when I discovered weblogs (the yellow-heavy designs of dack and kottke were my first exposure to the world of weblogs) and the relatively new Blogger. I remember being amazed at the fully featured blogging software that these crazy Pyra people were giving away for free! It's easy enough to pinpoint my first blog entry, but to be perfectly honest, I'm not sure what the design of the blog was like. It was probably something along the lines of the v2 design, but I'm also virtually positive that the v3.0 design was pioneered on the blog, due to the fact that Blogger was something of a light CMS in that I could tweak the design for all blog pages rather easily. I do vaguely remember having a lot of issues with my free web-hosting company (at the time, I believe it was someone called "redrival"), and in particular their FTP sucked. I think there was a time when I would write an entry on Blogger, publish it to one free host, then transfer the code over to the new host. This is perhaps part of why the initial months of the blog were somewhat sparse in terms of entries, but things got going pretty well in September 2000, and I posted a record-high 29 posts in December 2000.
  • November 2000 - Kaedrin v3.0: Due to the blandness of the v2.0 site and the fact that Blogger provided easily updatable templates, I came up with a different design. It was still clean and simple, and ultimately it didn't last too long because it was still pretty boring. In fact, I'm pretty sure I never got around to updating the entire site. Just the homepage and the blog got this new design. (Click the image to see the full HTML page).

    Kaedrin: Version 3.0

    Ultimately not that much different than v2.0 (I suppose you could consider it more of a v2.5 than a new version, though it's probably different enough to count). It's still got the big honkin Kaedrin logo, but for some reason I liked this better... and there's also the first appearance of the "You are here" bar at the top of the page. While I liked this design better than v2.0, I wasn't very happy with it and almost immediately started working on something new. I was also getting pretty fed up with hand coding all these pages for what amounted to minor layout tweaks. One thing that helped in that respect was Blogger, which worked like a CMS-lite, allowing quick and easy layout changes with the click of a mouse. Here is the first design for the blog that I could find. (Click the image to see the full HTML page).

    Kaedrin Weblog

    Interestingly, it seems that I decided to forgo the Kaedrin logo in favor of a little HTML text thingy. Also, I had completely forgotten about the blog's original subtitle, which could use some explaining. Back in the 1990s it was popular to use "handles" instead of your real name. When I first started posting to message boards and the like, I absent-mindedly chose the moniker "tallman" because I was a big fan of a certain cheesy 1970s horror movie that featured a character who went by that name. Since a lot of popular blogs at the time had playful titles like Boing Boing and the like, I went with "The Royal Kingdom of Tallmania". I have no idea what possessed me to do that, and it wasn't long before the subtitle was dropped in favor of just "Kaedrin Weblog".
  • January 2001 - Kaedrin.com and v4.0: After dealing with the hassle of free hosting companies, I finally realized that I had a steady income and could probably afford a professional hosting service and a real domain, so I bought kaedrin.com and started work on a new design. Fed up with manually coding redesigns, I devised a kludgey XSLT solution that allowed me to completely separate content from design. So I put all my content into XML files and coded the new design into some XSL stylesheets. This design may look somewhat familiar (Click the image to see the full HTML page):

    Kaedrin Version 4.0

    Being obsessed with download speeds and page rendering, I devised an interesting layout for the blog. Instead of using the typical single-table design, I put the blog navigation at the top (instead of to the left or right) and I put each entry in its own table. The idea was that browsers render content as it's downloaded, and if you have one large table with a lot of content, it could take a while to appear. So having a series of smaller tables on the page, while increasing the overall page size, makes the page seem to load quicker. All in all, I rather liked the look of this layout, though I don't think it's something I'll be returning to at any point (Click the image to see the full HTML page):

    Kaedrin Weblog

    While I like what I was able to do with that navigation at the top, I think there were ultimately more things that needed to go into the navigation than that space could fit. I broke down and put it all in a big table in later designs (see below).
  • July 2002 - Movable Type: After a couple of years, I had finally gotten fed up with Blogger's centralized system. Blogger was growing faster than they could keep up with, and so the service was experiencing frequent downtime and even when you could access it, it was often mind-numbingly slow. Around this time, a few other solutions were becoming available, one of which was Movable Type (I started with version 2.x - also, it's worth mentioning that Wordpress was not available yet). This solution increased functionality (most notably bringing comments into the fold) and provided a much stabler system for blogging. The design changed to take advantage of some of this stuff and to make my blog more consistent with certain blogging standards. This one should look really familiar (Click the image to see the full HTML page):

    Kaedrin Weblog - Powered by Movable Type

    That's basically the same design as today, except for the date and some of the junk in the right navigation.
  • And from there it was a series of tiny, incremental improvements, upgrades, and design tweaks. It's funny, I didn't realize until now just how little the site has changed since 2002. Also funny: the fact that I had finally devised a way to make redesigns a lot easier (i.e. my XSLT solution) and then basically stopped redesigning. Then again, it came in really handy when I wanted to do some little things. For instance, the original v4.0 design didn't have the same borders around the main content area that I use today (it did have a small border at the top of the area, but it was barely noticeable and it was coded using spacers - yuck). I suppose the grand majority of the work that I've done has been behind the scenes: upgrading software, switching databases, fighting spam, and did I mention upgrades? In 2004, the main homepage was updated to account for the fact that the grand majority of the updates on the site were coming from the blog, and the design has remained largely unchanged since then. Around the same time, I tried to make sure the blog and homepage were valid HTML 4.01 (this is perhaps not the case for every page on the blog, as I'm sure I missed an & somewhere, and of course, embedding video never validates, but otherwise, it should be pretty good).
  • Of course, the big visible thing that I was doing all throughout was blogging. When I started out, technology made it somewhat difficult to update the blog. Eventually I got Blogger working with my host at the time and enjoyed 3 months or so of somewhat prolific blogging. Of course, at the time, I was posting mostly just links and minor commentary, and this eventually trailed off because others were much better at that than I was. December 2000 is still my most prolific month when it comes to the number of posts (29 posts that month), but again, those were mostly just links and assorted short comments. From there, things trailed off for a couple of years until May 2003, when I established my weekly posting schedule. This made the blog a bit more consistent, and gradually, I started to find more and more visitors. Not a lot, mind you. Even today, it's doubtful that I have more than a few dozen semi-regular visitors (if that many). Actually, if you're reading this, you probably know most of the recent history of the blog, which basically amounts to at least 2 posts a week.
Whew, I didn't realize that trip down memory lane would take quite so long, but it was interesting to revisit just how tumultuous the design was in the early years and how it has calmed down considerably since then... Hopefully things will continue to improve around here though, so what kinds of things can you expect in the near future? I have a few ideas:
  • CSS Layout: The site currently uses a table based layout, primarily because it was designed and coded in 2001, when browser support for CSS was pretty bad, so CSS layouts weren't really an option. In 2007 (has it really been that long?), I put together a mockup of the site using a CSS layout, but never got around to actually implementing it. There were a few things about the layout that were bugging me and I never found the time to fix them. Someday, I'll dust off my mockups, finalize them, and launch them to the world. Having a CSS layout would also allow me to optimize for other media like cell phone browsers, print (my goal is to make it easier to read Kaedrin on the can), the Wii browser, etc... None of those things is a particularly burning need, which is probably why I've put this off so long...
  • Weblog Post Designs: I've never really been too happy with the way each post is laid out. For one thing, I feel like I've always given too much prominence to the date - which is something I could probably just remove. Also, the post title should perhaps be a bit larger (and be linked to the permalink).
  • Homepage: The homepage has largely become irrelevant and should probably just redirect to the weblog, as that's where 99% of the content is these days. Again, this doesn't seem to be a burning need, so I haven't spent much time looking into that, but it would be pretty easy to accomplish.
  • Comments: The comments functionality is a bit of a mess and could use some work.
  • Post Content: I feel like I've been in a bit of a rut lately, mostly relying on various crutches like movie reviews, etc... and not writing as much about things that really interest me. Not that movies or video games don't interest me, but I used to write more posts about technology and culture, which is something I'd like to get back into. The issue is that those posts are a lot harder to write, which I think is part of why I've been avoiding them...
So there you have it. Ten years of Kaedrin. Hopefully, it will last another ten years, though perhaps it will be in a completely different format by then... If you have any comments, questions, or suggestions, feel free to leave a comment...
Posted by Mark on June 07, 2009 at 09:38 AM .: link :.


End of This Day's Posts

Sunday, February 15, 2009

Best Films of 2008
I saw somewhere on the order of 70 movies that were released in 2008. Most critics see more than that, but your average moviegoer probably sees far fewer. I have to say, I've been really disappointed with 2008. It's been a rough year for movies and I had a really hard time cobbling together a top 10 (hence the extreme lateness of this post). Spots 6 through 10 on my list are somewhat weak and probably wouldn't have made the list in either 2006 or 2007. On the other hand, the films near the top of the list are great, and would compete with the films of the last two years.

Of course, making a top 10 list is an inherently subjective exercise. I've noted before that these lists tend to tell you more about the people compiling them than about the movies themselves. The hosts of the Filmcouch podcast were recently talking about how these sorts of lists are an autobiographical exercise and invited listeners to send in their top 5 lists, at which point they would psychoanalyze each list and try to come up with a picture of who its owner was. I submitted my list, and they tried to figure me out by the movies I listed. Before I go through their results, I should probably let you see my full list, so here goes:

Top 10 Movies of 2008
* In roughly reverse order
  • Man on Wire: This documentary follows French tightrope walker Philippe Petit's amazing high-wire stunt performed between the World Trade Center towers in 1974. This act was, of course, illegal, and indeed, the film carries with it many of the conventions and tropes of the heist movie... except that Petit wasn't stealing anything, he was just obsessed with tightrope walking (and had been performing various other similar stunts around the world, such as his walk across the towers of Notre Dame). The story is amazing and Petit is bewildering. I'm particularly thankful that director James Marsh decided to completely ignore the 9/11 angle, as such sermonizing would be unnecessary and distracting.
    More Info: [IMDB] [Amazon]
  • Slumdog Millionaire: Danny Boyle's Dickensian romp across India is getting a lot of attention these days and is seemingly a frontrunner for the Best Picture Oscar. There seems to be something of a backlash as well, which I feel is somewhat undeserved. I certainly don't think it's the best film of the year, but it features an interesting mix of dark and edgy material with a more optimistic undertone. There are moments of extreme violence and tragedy, but the movie is ultimately an uplifting experience. Of the Oscar nominees, it's my favorite.
    More Info: [IMDB]
  • Teeth: Adventurous filmmaking at its best, this movie is about a teenage girl who has teeth... down there. This is most unfortunate for all the males in the movie, especially the ones who attempt to take advantage of our heroine (which is to say, most of them). As a male, it was sometimes hard for me to watch (let's just say the film gets graphic), but in the end, I had a lot of fun with the movie. Despite its B movie/horror roots, the film delves deeper than you might expect, exploring the nature of sexual power and male/female interactions. If you think you can handle the gore, it's a good film.
    More Info: [IMDB] [Amazon]
  • The Bank Job: Based on the true story of the 1971 Baker Street bank robbery, this movie follows a band of amateur thieves as they plan and execute their heist, which is aimed at the safe deposit boxes rather than the standard cash. What they don't plan on is that the safe deposit boxes also contain loads of dirty secrets, and there are people who don't want those secrets to come out. Nefarious acts ensue. I have to say that I was really taken with this movie. It seems like a by-the-numbers heist movie, but I'd say it's the best heist movie made in the last several years (and I like me some heist movies).
    More Info: [IMDB] [Amazon]
  • Mad Detective: Directors Johnny To and Ka-Fai Wai have crafted an exceptional police procedural and infused it with a giddy wackiness in the form of their main character, Bun, who can see the inner personalities of people. Bun's talents are explained in a stunning visual manner and the film's climax is a cinematic masterpiece. Unfortunately, this film is hard to find and it took me a while to get to it, but it was well worth the wait (it actually displaced the original number 10 movie on this list and may deserve to be even higher on the list than I placed it).
    More Info: [IMDB] [Amazon] [Full Review]
  • Forgetting Sarah Marshall: A movie that almost perfectly walks the fine line between romantic comedy and raunchy comedy, never straying too far from either. I'd say this is a tough trick to pull off, but this sort of mix seems to be producer Judd Apatow's specialty. Still, I think even among those films, this one is a winner. The film feels fresh and all of the characters in the movie are surprisingly well developed. The film is written by and stars Jason Segel, who goes all out in his performance. Mila Kunis is wonderful, as are the other supporting characters played by Kristen Bell, Russell Brand, Bill Hader and Jonah Hill. Excellent stuff.
    More Info: [IMDB] [Amazon] [Winner of 2 Kaedrin Movie Awards]
  • Let the Right One In: This Swedish horror film follows a lonely 12-year-old boy, bullied by schoolmates, who falls in love with his neighbor. She happens to be a vampire. Set against a stark and beautiful snowy backdrop (excellent cinematography here), this film is not your typical vampire movie. It's more contemplative and subtle. There are moments of violence and gore, but they highlight the sadness of a vampire stuck in the body of a 12-year-old girl. It's clear that vampires are a bad thing, an evil thing, but they're also sad creatures (and not in the whiny romantic, woe-is-me Interview with the Vampire way), which kinda endears you to them. It's also surprisingly tender, as you see the relationship between the young boy and the vampire blossom. There is a Hollywood remake coming, but from what I've heard so far, you'd do far better to watch the original.
    More Info: [IMDB] [Amazon]
  • Timecrimes: An intricate Spanish time-travel thriller, and my favorite film of the 2008 Philly film festival. It has a light and humorous feel to it, but it's got a dark edge and it doesn't shy away from consequences. It's intelligent and rewards thought, but it's not difficult to follow or understand (which can be a problem with some time travel movies). Perhaps it's just my affinity for time travel stories, but I loved this movie.
    More Info: [IMDB] [Amazon] [Capsule Review]
  • The Counterfeiters: This movie actually won the 2007 Oscar for best foreign-language film last year, so perhaps a bit of a cheat, but it did not get a theatrical release until this year. And it's a fantastic film. It follows the story of Jewish artists and counterfeiters forced to produce fake foreign currency, destined for use by the Nazis to destabilize the economies of the UK and US. The film contains a series of fascinating moral dilemmas. Do you refuse to help the enemy and endanger your lives and the lives of those around you? Or do you protect them while aiding your enemy? There are no easy answers here, and there are two main characters who both espouse differing answers. Neither and both are proven right, if that makes any sense. Not an easy movie, but extremely compelling and highly recommended.
    More Info: [IMDB] [Amazon]
  • The Dark Knight: It's an obvious choice for me, and while I can perhaps see some flaws in the film, I can't deny that it was the most enjoyable, entertaining and thought provoking (not an easy mixture) moviegoing experience of the year. One of my criteria for compiling a list like this is rewatch value, and when you consider that I've already seen this movie 5 times (while I have not seen any of the others on this list more than 2 times), it has to be at the top of my list. It's like a crime story that happens to feature a man dressed as a bat fighting a man dressed as a clown. This is another movie that features intricate plotting and a focus on consequences. There are no easy answers here either. Heath Ledger's inspired turn as the Joker is destined to become a classic, and the character is the perfect foil for Batman. The worst thing I can say about the movie is that the sequel has nowhere to go and will certainly pale in comparison.
    More Info: [IMDB] [Amazon] [Winner of 2 Kaedrin Movie Awards] [Blog Post]
So how did the Filmcouch hosts do in psychoanalyzing me? For the record, the top 5 I sent them was a little different - I had The Bank Job where Forgetting Sarah Marshall is in the above list. Anyway, their first observation was that I was a relatively young male, which is certainly true. The next thing they noticed was that all of these movies are about people who are operating under the radar (i.e. counterfeiters, bank robbers, vigilantes, vampires, etc...), so they think I'm drawn to people who operate outside the system (or smarter than the system). This may be partially true (see next paragraph for more). They also noticed that most of the movies touch on the idea that sometimes you have to do a bad thing to make things right (i.e. two wrongs make a right), and in some cases, sympathy for people doing bad things (but a recognition that such sympathy is strange). Because of that, they see me as someone who likes shades of gray. Again, this is probably partially true (more below).

I found their comments interesting, and it did make me wonder about why I really did choose the movies that I did. I think there is some truth in what they say, but I wouldn't say that I am the person they describe. There are some things that I'm fascinated by that aren't things I'd actually do. For instance, I've written before about vigilantes, and despite what the hosts of Filmcouch may think, I'm not a vigilante, and don't really have a desire to become one. What fascinates me about vigilante stories, though, is consequences. This is something that The Dark Knight did in spades, and it also features prominently in a lot of the other movies on the list. I wouldn't say that I particularly like the idea of "two wrongs make a right" but I am fascinated by situations in which the only possible alternatives are wrong. What do you do when no available option is right? How do you counter someone like the Joker? What are the consequences of time travel? What happens if you become a vampire when you're 12 years old? Do you help the Nazis destabilize the Allied economy, or do you protect your fellow concentration camp prisoners? I'm also the type of person who thinks the devil is in the details, and so I like movies that show that sort of thing. Again, Batman is a good example. Everyone agrees that fighting crime is an honorable thing, but when you get down to the details of such an endeavor, things become a lot more complicated. Sure, Batman could spend all his time taking down the criminals on the streets - but then he's not getting at the root of the problem. But taking on the root of the problem has consequences. And so on. So I suppose their "shades of gray" thing might be somewhat accurate as well. But the point remains: while I may be fascinated by vigilantes in film, that doesn't mean that I want to be a vigilante, nor does it mean that I would tolerate a vigilante in my community. Something similar could probably be said for other people prominently featured in my list (i.e. vampires, bank robbers, etc...) I'm fascinated by them, but it's not like I want to be them. Perhaps there's a cathartic value in these movies as well. They mentioned that I might be someone who likes to operate outside the system, but in fact, I do no such thing in my life. I'm pretty firmly ensconced within the system. But I suspect that's exactly what makes people who operate outside the system so fascinating... So anyway, that's what Filmcouch thinks. Not a bad job, but perhaps you can't truly read someone's soul through a list of 5 movies :p

Honorable Mention
* In alphabetical order
  • 4 Months, 3 Weeks and 2 Days: Brutal drama about a woman who helps her friend get an illegal abortion. The film takes place in Romania towards the end of the Communist era, and it's not a very pleasant film, though it is very well made. Strange as it may seem for a movie about abortion, it doesn't take a side in the pro-life/pro-choice debate, and is more effective because of that.
    More Info: [IMDB] [Amazon]
  • Baghead: This ultra-low-budget (reputedly around $1000) horror film has its share of flaws, but it's also quite an entertaining flick. Aside from its low-budget nature, there's nothing particularly groundbreaking here, but I've always maintained that there is something to be said for a well-executed genre film, and this movie does its job well enough.
    More Info: [IMDB] [Amazon]
  • Body of Lies: This underrated (and, uh, poorly titled) spy movie was actually reasonably smart and entertaining. It has a distinct political viewpoint on the war on terror, but it doesn't overplay its hand and keeps the lecturing to a minimum. The movie focuses more on the plotting of the story than the politics, and I think it works reasonably well.
    More Info: [IMDB] [Amazon]
  • Burn After Reading: The Coen brothers' perplexing follow-up to the critically lauded No Country for Old Men is about as different from that film as possible. I'm very much reminded of their follow-up to Fargo, which was The Big Lebowski. I didn't care much for Lebowski the first time I saw it, but as time went on, I came around. I have a similar feeling about this movie, though I still don't think it's near the top of the Coen brothers' films. My biggest issue with the movie is that none of the characters are particularly likeable. On the other hand, several are pretty funny, Brad Pitt's performance is hilarious, and the scenes at the CIA offices with J.K. Simmons and David Rasche are priceless.
    More Info: [IMDB] [Amazon]
  • The Curious Case of Benjamin Button: I actually enjoyed this more than I expected. I'm always game for a David Fincher film, but the previews for this looked awful. So I came away from the film with a pretty good feeling, but that said, there were a bunch of things I didn't particularly care for. Many have mentioned this film's similarities to Forrest Gump, a movie I loathe, so it's interesting that I don't mind this movie and even enjoyed it. Not Fincher's best work, but an interesting diversion.
    More Info: [IMDB]
  • The Fall: A gorgeous feast for the eyes. The story follows a man in a hospital who tells a story to a little girl in order to coax her into getting him some morphine. Most of the film takes place in the imaginary world the man creates, which is visually impressive, but the story he tells is somewhat lacking. Of course, that's kinda the point, because the man is kinda making things up as he goes along, but that doesn't make it much better. Ultimately, there are parallels between the real world and the imaginary one, and in the end, I did enjoy the film.
    More Info: [IMDB] [Amazon]
  • In Bruges: I really liked this movie right up until the end, which I felt was rather stupid and glib in attempting to tie everything together. There are some stereotypical characters here: the two hitmen who are opposites of each other - one a philosophical type, the other more hedonistic. Fortunately, the writers do a really good job with those characters, and Brendan Gleeson and Colin Farrell give excellent performances too. If it weren't for the ending, this film would probably be in the top 10.
    More Info: [IMDB] [Amazon]
  • Iron Man: This was one of the more enjoyable and fun experiences of the year, and one of the better superhero movies, but I nevertheless felt it was somewhat overrated. It's a good, solid film. Robert Downey Jr. gives an excellent performance. The explosions and action were cool. But ultimately, I don't think this film carries the weight of a movie like The Dark Knight, and there are certain aspects which are lacking in this film. For instance, I thought the film lacked a credible villain. I suppose the reveal of the true villain was supposed to be something of a surprise, but it was blatantly obvious from the start who the bad guy was going to be, and the climactic battle was a bit too silly for me. With a box of scraps!
    More Info: [IMDB] [Amazon]
  • Kung Fu Panda: Is there a more common trope than anthropomorphized animals in American animated movies? Despite the cliche, this film was a lot of fun.
    More Info: [IMDB] [Amazon]
  • Ladrón Que Roba a Ladrón: It's like a Latino Ocean's Eleven! It even has a Latino George Clooney lookalike (but he's the villain in this film). Unfortunately, it's not quite as good as Ocean's Eleven, but it is still a rather entertaining heist film. It doesn't quite hit all the appropriate notes and the various twists aren't quite twisty enough, but it gets the job done and is definitely worth a watch.
    More Info: [IMDB] [Amazon]
  • The Promotion: This odd and underseen comedy stars Seann William Scott and John C. Reilly as assistant managers at a supermarket who are vying for the same promotion. It's offbeat and quirky and fun, but with a darker edge (which I'm assuming is why it didn't get much of a release). That said, it's got an interesting sort of understated humor that works well. I enjoyed this a lot and think it could be interchangeable with my number 10...
    More Info: [IMDB] [Amazon]
  • Role Models: This is probably the funniest movie of the year, and if not for the more cliched story, it might have been in the top 10. Still, it was much better than some of the other high-profile comedies this year, and all of the comedic performances were well done and funny.
    More Info: [IMDB] [Amazon]
  • Spiral: [Note: This was originally my #10 film, but was unseated once I saw Mad Detective. I've preserved my original thoughts here, with some additional notes.] Unquestionably the weakest movie on this list and I have to say that it just barely squeaks onto the list [Again, it has since been knocked off the list]. It's not a great movie, and in objective terms, several of the honorable mentions probably deserve to be here ahead of Spiral. But for some reason, this movie got under my skin and stuck with me, so here it is. It's a slow-burning thriller that I'm betting most people haven't even heard of (another reason to give it some love, I guess), but I did enjoy it quite a bit.
    More Info: [IMDB] [Amazon]
  • Wall-E: The first half of this film was spectacular and ambitious filmmaking, but as soon as the humans showed up, things started to get less interesting. It's still a wonderful film, and I have to give credit to a movie that spends the first 45 minutes or so with almost no dialogue... and yet manages to be compelling and interesting. Visually impressive, funny, and touching.
    More Info: [IMDB] [Amazon]
  • The Wrestler: Darren Aronofsky's character portrait of a down-on-his-luck professional wrestler is very well made, but ultimately a little too cliched for my tastes. It's an excellent movie, but it's not really my type of movie. However, Mickey Rourke's performance is amazing and the final shot in the movie is exceptional.
    More Info: [IMDB] [Full Review]
  • Zack & Miri Make a Porno: I've always been a fan of Kevin Smith's brand of raunchy humor, and this film is no exception. It's perhaps not the funniest movie of the year, but I still laughed a lot and, as usual, Smith grounds the film with a heart you don't often find in raunchy comedies. I don't think it's his best work, but I do think it was criminally underseen.
    More Info: [IMDB] [Amazon]
Bottom 5 Movies of the Year
Perhaps as evidence of how bad a year this was, I am listing out my 5 least favorite movies. Typically, I'd have a tough time with this list, because I generally try to avoid bad movies and am usually somewhat successful in that. This year, I was not.
  • The Happening: The worst dialogue delivered in the worst possible way make this film laughable. The story is rather pointless as well. I've been something of a Shyamalan apologist in the past, as I liked The Village and even Lady in the Water, but this movie is just indefensible.
    More Info: [IMDB]
  • Speed Racer: Matty Robinson (of Filmspotting fame) described the movie thusly: "It's like a Skittles-induced stroke." Of course, he was being favorable to the movie, which is something I'm not inclined to do. It is visually OK, but everything else was pretty awful (except for Christina Ricci, who was unfortunately given nothing to do).
    More Info: [IMDB]
  • Storm: My least favorite movie of the 2008 Philly film festival. It has a lot of interesting ideas, none of which are followed through in any detail, instead devolving into an incomprehensible stew of cliches and unlikeable characters.
    More Info: [IMDB] [Capsule Review]
  • Sukiyaki Western Django: I have to give Takashi Miike credit for trying something new and different, but ultimately the film didn't work for me at all. Perhaps I was in the wrong mood or something, but I just couldn't get into this movie.
    More Info: [IMDB]
  • The X Files: I Want to Believe: This could have made an excellent creature of the week type episode of the original series, but instead the movie attempts to tie in way too much of the series' baggage, thus creating a mess of a storyline. I really liked the show a lot, but found this movie terrible.
    More Info: [IMDB]
Should Have Seen
There are a couple of these that might even have potential for unseating my number 10 movie, but I couldn't get to them for whatever reason (usually that it wasn't playing near me or otherwise available). For instance, I ordered Mad Detective (co-directed by Kaedrin favorite Johnny To) on blu-ray on January 21, but according to Amazon, the delivery estimate is sometime in early March!? Well, that just about covers it for 2008. The only thing that remains is the annual liveblogging of the Oscars (which are next Sunday? Yikes, time flies!) Anyway, here's to hoping that 2009 is a better year!

Update 2.21.09: Well that didn't take long. I saw Mad Detective last night and decided that it needed to be on the top 10. This knocks Spiral off the list and into the Honorable Mentions. Also worth noting are the comments to this post, where I have an interesting discussion with Adam from Filmcouch. And finally, the Filmcouch podcast mentioned my comments on this week's show as well. Thanks guys!
Posted by Mark on February 15, 2009 at 09:25 PM .: link :.


End of This Day's Posts

Sunday, January 11, 2009

2008 Kaedrin Movie Awards
As of today, I've seen 62 movies that would be considered 2008 releases. This is on par with my 2007 viewing and perhaps a bit less than 2006. So I'm not your typical movie critic, but I've probably seen more than your average moviegoer. As such, this constitutes the kickoff of my year-end movie recap. The categories for this year's movie awards are the same as last year's, and the process will proceed in a similar manner. Nominations will be announced today, and starting next week, I'll announce the winners (new winners announced every day). After that, there might be some miscellaneous awards, followed by a top 10 list.

As I've mentioned before, 2008 has been a weak year for movies. Not sure if this was because of the writers' strike, some other shift in studio strategy (the independent arms of many studios seem to be closing up shop, for instance), or that my taste has become more discriminating, but whatever the case, I've had trouble compiling my top 10. Indeed, I'm still not sure I've got a good list yet and am still scrambling to catch up with some of the lesser-known films of the year (many of which had minimal releases and are not out on DVD just yet). This is why these awards and my top 10 are a little later than last year. However, one of the things I like about doing these awards is that they allow me to give some love to films that I like, but which aren't necessarily great or are otherwise flawed (as such, the categories may seem a bit eclectic). Some of these movies will end up on my top 10, but the vast majority of them will not.

The rules for this are the same as last year: Nominated movies must have been released in 2008 and I have to have seen the movie (and while I have seen a lot of movies, I don't pretend to have seen a comprehensive selection - don't let that stop you from suggesting something though). Also, I suppose I should mention the requisite disclaimer that these sorts of lists are inherently subjective and personal. But that's all part of the fun, right?

Best Villain/Badass
It's been a pretty good year for villainy! At least on par with last year, if not better. As with the past two years, my picks in this category are for individuals, not groups (i.e. no vampires or zombies as a group). Winner Announced!

Best Hero/Badass
A distinct step down in terms of heroic badassery this year, but it's not a terrible year either. Again limited to individuals and not groups. Winner Announced!

Best Comedic Performance
Not a particularly strong year when it comes to comedy, but there still seem to be plenty of good performances, even in films I thought were lackluster... Winner Announced!

Breakthrough Performance
Not a particularly huge year for breakthrough performances either, but definitely several interesting choices. As with previous years, my main criterion for this category was whether, after watching a movie, I immediately looked up the actor/actress on IMDB to see what else they've done (or where they came from). This sometimes happens even for well-established actors/actresses, and this year was no exception. Winner Announced!

Most Visually Stunning
Winner Announced!

Best Sci-Fi or Horror Film
I'm a total genre hound, despite genres generally receiving very little attention from critics. As usual, there was a dearth of quality SF this year, especially because I don't consider Iron Man or The Dark Knight SF. However, a strong showing from the horror genre rounds out the nominations well. Plus, disappointed by the poor showing of SF, I cheated by nominating a 2007 SF film... I can't even fudge the release dates the way I can with some independent or foreign flicks - by every measurement I can think of, it's a 2007 film. But it was such a small film that flew under just about everyone's radar (including mine!) that I'm going to include it, just to give it some attention, because I really did enjoy it. Winner Announced!

Best Sequel
Honestly, I only saw 4 or 5 sequels all year, so this was a difficult category to populate (as it is every year). Still, there were at least two really great sequels this year... Winner Announced!

Biggest Disappointment
Always a difficult award to figure out, as there are different ways in which a movie can disappoint. Usually, expectations play just as big a part in this as the actual quality of the film, and it's possible for a decent movie to win the award because of astronomical expectations. This year had several obvious choices though. Usually I manage to avoid the real stinkers, but this year I saw two genuinely awful movies... in the theater! Winner Announced!

Best Action Sequences
This is a kinda by-the-numbers year for action sequences. Nothing particularly groundbreaking or incredible, but there were some well executed, straightforward action movies this year. These aren't really individual action sequences, but rather an overall estimation of each film. Winner Announced!

Best Plot Twist/Surprise
Not a particularly strong year for the plot twist either. Winner Announced!

Best High Concept Film
This was a new category last year, and like last year, I had a little difficulty coming up with this list, but overall, not bad. Winner Announced!

Anyone have any suggestions (for either category or nominations)? Comments, complaints and suggestions are welcome, as always.

It looks like The Dark Knight is leading the way with an impressive 6 nominations (rivaled only by the 8 nominations earned by Grindhouse last year... with the caveat that Grindhouse is technically 2 movies in one). Not far behind is Hellboy II with a respectable 5 nominations. Surprisingly, both Forgetting Sarah Marshall and The Signal earned 3 nominations, while a whole slew of other films garnered 2 noms, and an even larger number earned a single nomination. As I mentioned earlier, I'm going to give myself a week to think about each of these. I might end up adding to the nominations if I end up seeing something new. Winners will be announced starting next Sunday or Monday. As with the last two years, there will be a small set of Arbitrary Awards after the standard awards are given out, followed by the top 10.

Update: Added a new plot twist nominee (Spiral), because I just watched it and it deserves it!

Update 1.25.09: Arbitrary Awards announced!

Update 2.15.09: Top 10 of 2008 has finally been posted!
Posted by Mark on January 11, 2009 at 11:46 AM .: link :.


End of This Day's Posts

Sunday, December 07, 2008

Anathem
I finished Neal Stephenson's latest novel, Anathem, a few weeks back. Overall, I enjoyed it heartily. I don't think it's his best work (a distinction that still belongs to Cryptonomicon or maybe Snow Crash), but it's way above anything I've read recently. It's a dense novel filled with interesting and complex ideas, but I had no problem keeping up once I got started. This is no small feat in a book that is around 900 pages long.

On the other hand, my somewhat recent discussion with Alex regarding the ills of Cryptonomicon has led me to believe that perhaps the reason I like Neal Stephenson's novels so much is that he tunes into the same geeky frequencies I do. I think Shamus hit the nail on the head with this statement:
In fact, I have yet to introduce anyone to the book and have them like it. I’m slowly coming to the realization that Cryptonomicon is not a book for normal people. Flaws aside, there are wonderful parts to this book. The problem is, you have to really love math, history, and programming to derive enjoyment from them. You have to be odd in just the right way to love the book. Otherwise the thing is a bunch of wanking.
Similarly, Anathem is not a book for normal people. If you have any interest in Philosophy and/or Quantum Physics, this is the book for you. Otherwise, you might find it a bit dry... but you don't need to be in love with those subjects to enjoy the book. You just need to find it interesting. I, for one, don't know much about Quantum Physics at all, and I haven't read any (real) Philosophy since college, and I didn't have any problems. In fact, I was pretty much glued to the book the whole time. One of the reasons I could tell I loved this book was that I wasn't really aware of what page I was on until I neared the end (at which point dealing with the physicality of the book itself made it pretty obvious how much was left).

Minor spoilers ahead, though I try to keep this to a minimum.

The story takes place on another planet named Arbre and is told in first person by a young man named Erasmus. Right away, this yields the interesting effect of doing away with the multi-threaded storylines of most of Stephenson's other novels and providing a somewhat more linear progression of the story (at least, until you get towards the end of the novel, when the linearity becomes dubious... but I digress). Erasmus, who is called Raz by his friends, is an Avout - someone who has taken certain vows to concentrate on studies of science, history and philosophy. The Avout are cloistered in areas called Concents, which are kind of like monasteries except that the focus of the Avout is centered around scholarship and not religion. Concents are isolated from the rest of the world (the area beyond a Concent's walls is referred to as Extramuros or the Saecular World), but there are certain periods in which the gates open and the Avout mix with the Saecular world (these periods are called Apert). Each concent is split up into smaller Maths, which are categorized by the number of years which lapse between each Apert.

Each type of Math has interesting characteristics. Unarian maths have Apert every year, and are apparently a common way to achieve higher education before getting a job in the Saecular world (kinda like college or maybe grad-school). Decenarian maths have Apert once every ten years. Raz and most of the characters in the story are "tenners." Centenarian maths have Apert once every century (and are referred to as hundreders) and Millenarian maths have Apert once every thousand years (and are called thousanders).

I suppose after reading the last two paragraphs, you'll notice that Stephenson has spent a fair amount of time devising new words and concepts for his alien planet. At first, this seems a bit odd and it might take some getting used to, but after the first 50-100 pages, it's pretty easy to keep up with all the new history and terminology. There's a glossary in the back of the book for reference, but I honestly didn't find that I needed it very often (at least, not the way I did while reading Dune, for instance). Much has been made of Stephenson's choice in this matter, as well as his choice to set the story on an alien planet that has a history that is roughly analogous to Earth's history. Indeed, it seems like there is a one-to-one relationship between many historical figures and concepts on Arbre and Earth. Take, for instance, Protas:
Protas, the greatest fid of Thelenes, had climbed to the top of a mountain near Ethras and looked down upon the plain that nourished the city-state and observed the shadows of the clouds, and compared their shapes. He had had his famous upsight that while the shapes of the shadows undeniably answered to those of the clouds, the latter were infinitely more complex and more perfectly realized than the former, which were distorted not only by the loss of a spatial dimension but also by being projected onto terrain that was of irregular shape. Hiking back down, he had extended that upsight by noting that the mountain seemed to have a different shape every time he turned round to look back at it, even though he knew it had one absolute form and that these seeming changes were mere figments of his shifting point of view. From there, he had moved on to his greatest upsight of all, which was that these two observations - the one concerning the clouds, the other concerning the mountain - were themselves both shadows cast into his mind by the same greater, unifying idea. (page 84)
Protas is clearly an analog to Plato (and thus, Thelenes is similar to Socrates) and the concepts described above run parallel to Plato's concept of the Ideal (even going so far as to talk about shadows and the like, calling to mind Plato's metaphor of the cave). There are literally dozens of these types of relationships in the book. Adrakhones is analogous to Pythagoras, Gardan's Steelyard is similar to Occam's Razor, and so on. Personally, I rather enjoyed picking up on these similarities, but the referential nature of the setting might seem rather indulgent on Stephenson's part (at least, it might seem so to someone who hasn't read the book). I even speculated as much while I was reading the book, but as a reader noted in the comments to my post, that's not all there is to it. It turns out that Stephenson's choice to set the story on Arbre, a planet that has a history suspiciously similar to Earth, was not an indulgence at all. Indeed, it becomes clear later in the book that these similarities are actually vital to the story being told.

This sort of thing represents a sorta meta-theme of the book. Where Cryptonomicon is filled with little anecdotes and tangents that are somewhat related to the story, Anathem is tighter. Concepts that are seemingly tangential and irrelevant wind up playing an important role later in the book. Don't get me wrong, there are certainly a few tangents or anecdotes that are just that, but despite the 900+ page length of the book, Stephenson does a reasonably good job juggling ideas, most of which end up being important later in the book.

The first couple hundred pages of the novel take place within a Concent, and thus you get a pretty good idea of what life is like for the Avout. It's always been clear that Stephenson appreciates the opportunity to concentrate on something without having any interruptions. His old website quoted former Microsoft employee Linda Stone's concept of "continuous partial attention," which is something most people are familiar with these days. Cell phones, emails, Blackberries/iPhones, TV, and even the internet are all pieces of technology which allow us to split our attention and multi-task, but at the same time, such technology also serves to make it difficult to find a few uninterrupted hours with which to delve into something. Well, in a Concent, the Avout have no such distractions. They lead a somewhat regimented, simple life with few belongings and spend most of their time thinking, talking, building and writing. Much of their time is spent in Socratic dialogue with one another. At first, this seems rather odd, but it's clear that these people are first rate thinkers. And while philosophical discussions can sometimes be a bit dry, Stephenson does his best to liven up the proceedings. Take, for example, this dialogue between Raz and his mentor, Orolo:
"Describe worrying," he went on.

"What!?"

"Pretend I'm someone who has never worried. I'm mystified. I don't get it. Tell me how to worry."

"Well... I guess the first step is to envision a sequence of events as they might play out in the future."

"But I do that all the time. And yet I don't worry."

"It is a sequence of events with a bad end."

"So, you're worried that a pink dragon will fly over the concent and fart nerve gas on us?"

"No," I said with a nervous chuckle.

"I don't get it," Orolo claimed, deadpan. "That is a sequence of events with a bad end."

"But it's nonsensical. There are no nerve-gas-farting pink dragons."

"Fine," he said, "a blue one, then." (page 198)
And this goes on for a few pages as well. Incidentally, this is also an example of one of those things that seems like it's an irrelevant tangent, but returns later in the story.

So the Avout are a patient bunch, willing to put in hundreds of years of study to figure out something you or I might find trivial. I was reminded of the great unglamorous march of technology, only amplified. Take, for instance, these guys:
Bunjo was a Millenarian math built around an empty salt mine two miles underground. Its fraas and suurs worked in shifts, sitting in total darkness waiting to see flashes of light from a vast array of crystalline particle detectors. Every thousand years they published their results. During the First Millenium they were pretty sure they had seen flashes on three separate occasions, but since then they had come up empty. (page 262)
As you might imagine, there is some tension between the Saecular world and the Avout. Indeed, there have been several "sacks" of the various Concents. This happens when the Saecular world gets freaked out by something the Avout are working on and attacks them. However, at the time of the novel, things are relatively calm. Total isolation is not possible, so there are Hierarchs from the Avout who keep in touch with the Saecular world, and thus when the Saecular world comes across a particularly daunting problem or crisis, they can call on the Avout to provide some experts for guidance. Anathem tells the story of one such problem (let's say they are faced with an external threat), and it leads to an unprecedented gathering of Avout outside of their concents.

I realize that I've spent almost 2000 words without describing the story in anything but a vague way, but I'm hesitant to give away too much of the story. However, I will mention that the book is not all philosophical dithering and epic worldbuilding. There are martial artists (who are Avout from a Concent known as the Ringing Vale, which just sounds right), cross-continental survival treks, and even some space travel. All of this is mixed together well, and while I wouldn't characterize the novel as an action story, there's more than enough there to keep things moving. In fact, I don't want to give the impression that the story takes a back seat at any point during the novel. Most of the world building I've mentioned is something that comes through incidentally in the telling of the story. There are certainly "info-dumps" from time to time, but even those are generally told within the framework of the story.

There are quite a few characters in the novel (as you might expect, when you consider its length), but the main ones are reasonably well defined and interesting. Erasmus turns out to be a typical Stephensonian character - a very smart man who is constantly thrust into feuds between geniuses (i.e. a Randy/Daniel Waterhouse type). As such, he is a likeable fellow who is easy to relate to and empathize with. He has several Avout friends, each of whom plays an important role in the story, despite being separated from time to time. There's even a bit of a romance between Raz and one of the other Avout, though this does proceed somewhat unconventionally. During the course of the story, Raz even makes some Extramuros friends. One is his sister Cord, who seems to be rather bright, especially when it comes to mechanics. Another is Sammann, who is an Ita (basically a techno-nerd who is always connected to networks, etc...). Raz's mentor Orolo has been in the Concent for much longer than Raz, and is thus always ten steps ahead of him (he's the one who brought up the nerve-gas-farting pink dragons above).

Another character who doesn't make an appearance until later on in the story is Fraa Jad. He's a Millenarian, so if Orolo is always ten steps ahead, Jad is probably a thousand steps ahead. He has a habit of biding his time and dropping a philosophical bomb into a conversation, like this:
Fraa Jad threw his napkin on the table and said: "Consciousness amplifies the weak signals that, like cobwebs spun between trees, web Narratives together. Moreover, it amplifies them selectively and in that way creates feedback loops that steer the Narratives." (page 701)
If that doesn't make a lot of sense, that's because it doesn't. In the book, the characters surrounding Jad spend a few pages trying to unpack what was said there. That might seem a bit tedious, but it's actually kinda funny when he does stuff like that, and his ideas actually are driving the plot forward, in a way. One thing Stephenson doesn't spend much time discussing is the details of how the Millenarians continue to exist. He doesn't explicitly come out and say it, but the people on Arbre seem to have life spans similar to humans (perhaps a little longer), so it's a little unclear how things like Millenarian Maths can exist. He does mention that thousanders have managed to survive longer than others, but it's not clear how or why. If one were so inclined, one could perhaps draw a parallel between the Thousanders in Anathem and the Eruditorium in Cryptonomicon and the Baroque Cycle. Indeed, Enoch Root would probably fit right in at a Millenarian Math... but I'm pretty sure I'm just reading way too much into this and that Stephenson wasn't intentionally trying to draw such a parallel. It's still an interesting thought though.

Overall, Stephenson has created and sustained a detailed world, and he has done so primarily through telling the story. Indeed, I'm only really touching the surface of what he's created here, and honestly, so is he. It's clear that Stephenson could easily have made this into another 3000-page, Baroque Cycle-style trilogy, delving into the details of the history and culture of Arbre, but despite the novel's length, he does keep things relatively tight. The ending of the novel probably won't do much to convince those who don't like his endings that he's turned over a new leaf, but I enjoyed it and thought it ranked well among his previous books. There are some who will consider the quasi-loose-ends in the story to be frustrating, but I thought it actually worked out well and was internally consistent with the rest of the story (it's hard to describe this without going into too much detail). In the end, this is Stephenson's best work since Cryptonomicon and the best book I've read in years. It will probably be enjoyed by anyone who is already a Stephenson fan. Otherwise, I'm positive that there are people out there who are just the right kind of weird to really enjoy this book. I expect that anyone who is deeply interested in Philosophy or Quantum Physics would have a ball. Personally, I'm not too experienced in either realm, but I still enjoyed the book immensely. Here's to hoping we don't have to wait another 4 years for a new Stephenson novel...
Posted by Mark on December 07, 2008 at 08:39 PM .: link :.


End of This Day's Posts

Sunday, June 15, 2008

Rewatching Movies
One of the cable channels was playing Ocean's Eleven all weekend, and that's one of those movies I always find myself watching when it comes on (this time, I even went to the shelf and fired up the DVD, so as to avoid commercials). Of course, there are tons of new, never-seen-before things I want to watch. My Netflix queue currently has around 140 movies in it (and this seems to be growing with time, despite the rate at which I go through my rentals). I've got a DVD set of Banner of the Stars that I'm only about 1/3 of the way through. My DVR has a couple episodes of the few TV shows I follow queued up for me. Yet I find myself watching Ocean's Eleven for the umpteenth time. And loving every second of it.

In actuality, I've noticed myself doing this sort of thing less and less over the years. When I was younger, I would watch and rewatch certain movies almost daily. There are several movies that have probably moved up into triple digit rewatches (for the curious, the films in this list include The Terminator, Aliens, The Empire Strikes Back, Return of the Jedi and Phantasm). Others I've only rewatched dozens of times. As time goes on, I find myself less and less likely to rewatch things. I think Netflix has become a big part of that, because I want to get my money's worth from the service, and the only way to do that is to continually watch new movies. In recent years, I've also come to realize that even though I've seen way more movies than the average person, there are still a lot of holes in my film knowledge. I do find myself limited by time these days, so when it comes down to rewatching an old favorite or potentially discovering a new one, I tend to favor the new films these days. But I still relapse (focusing on novelty has its own challenges), and I do find myself rewatching movies on a regular basis.

Get away from her you bitch!

Why is that? There are some people who never rewatch movies, but even with my declining repeat viewings, I don't count myself among them. Some films almost demand you to watch them again. For instance, I recently watched Andrei Tarkovsky's thoughtful, if difficult, SF film Solaris. This is a film that seems designed to reveal itself only upon multiple viewings. Tarkovsky is somewhat infamous for this sort of thing, and there are a lot of movies out there that are like that. Upon repeated viewings, these films take on added dimensions. You start to notice things. Correlations, strange relationships, and references become more apparent.

Other films, however, are just a lot of fun to rewatch. This raises a lot of interesting questions. Why is a movie fun even when we know the ending? Indeed, why do some reviewers even include a rating for rewatchability? In some cases we just like spending time with certain characters or settings and don't mind that we already know the outcome. I've made a distinction between these films and the ones that demand multiple viewings, but many of the benefits of repeat viewings apply to both types of movies. Rewatching a film can be a richer, deeper experience, and you start to notice things you didn't upon first viewing. Indeed, one interesting thing about rewatching movies is that while the movie is the same, you are not. Context matters. Every time we rewatch something, we bring our knowledge and experience (which is always changing) to the table. Sometimes this can be trivial (like noticing a reference or homage you didn't know about), but I've often heard about movies that become more poignant to people after they have children or as they grow older. Similarly, rewatching a movie can transport us back to the context in which we first saw it. I still remember the excitement and the spectacle of going to see Batman or Terminator 2 on opening day. Those were fun experiences from my childhood, even if I don't particularly love either movie. Heck, just the thought of how often I used to rewatch some movies is a fun memory that gets brought up whenever I think about those movies today...

I'll be back when you watch this movie 200 more times...

There are also a lot of fascinating psychological implications to rewatching movies. As I mentioned before, we sometimes rewatch movies to revisit characters we consider friends or situations we find satisfying. In the case of comedies, we want to laugh. In the case of horror films, we want to scare ourselves or feel suspense. And strangely, even though we know the outcomes of these movies, they still seem to be able to elicit these various emotions as we rewatch them. Even movies that depict true stories can generate suspense or fear when we already know how the story will turn out. Two recent, high-profile examples of this are United 93 and Zodiac. Both of those films were immersive enough upon first viewing that I felt suspense at various parts of the story, even though I knew on an intellectual level where both films were heading. David Bordwell has explored this concept thoroughly and references several interesting theories as to why rewatching movies remains powerful:
Normally we say that suspense demands an uncertainty about how things will turn out. Watching Hitchcock’s Notorious for the first time, you feel suspense at certain points-when the champagne is running out during the cocktail party, or when Devlin escorts the drugged Alicia out of Sebastian’s house. That’s because, we usually say, you don’t know if the spying couple will succeed in their mission.

But later you watch Notorious a second time. Strangely, you feel suspense, moment by moment, all over again. You know perfectly well how things will turn out, so how can there be uncertainty? How can you feel suspense on the second, or twenty-second viewing?
Here's one theory he covers:
...in general, when we reread a novel or rewatch a film, our cognitive system doesn’t apply its prior knowledge of what will happen. Why? Because our minds evolved to deal with the real world, and there you never know exactly what will happen next. Every situation is unique, and no course of events is literally identical to an earlier one. “Our moment-by-moment processes evolved in response to the brute fact of nonrepetition” (Experiencing Narrative Worlds, 171). Somehow, this assumption that every act is unique became our default for understanding events, even fictional ones we’ve encountered before.
He goes into a lot more detail about this theory and others in his post. Several of the theories he covers touch on what I find most interesting about the subject, which is that our brain seems to have compartmentalized the processing of various data. I'm going to simplify drastically for effect here, but I think the general idea is right (I'm not a neurologist though, so take it with a grain of salt). When processing visual and audio data, there is a part of the brain that is, for lack of a better term, stateless. It picks up a stimulus, immediately renders it (into a visual or audio representation), then shuttles it off to another part of the brain which interprets the output. This interpretation seems to be where our brain slows down. The initial processing is involuntary and unconscious and it doesn't take other data (like memories) into account. We don't have to consciously think about it, it just happens. Something similar happens when we first begin to interpret data. Our brain seems to be unconsciously and continually forming different interpretations and then rejecting most of them. The rejected thoughts are displaced by new alternatives which incorporate more of our knowledge and experience (and perhaps this part happens in a more conscious fashion). We've all had the experience of thinking something that almost immediately disturbed us and left us wondering where that thought came from. Bordwell gives a common example (I've read about this exact example at least three times from different people):
Standing at a viewing station on a mountaintop, safe behind the railing, I can look down and feel fear. I don’t really believe I’ll fall. If I did, I would back away fast. I imagine I’m going to fall; perhaps I even picture myself plunging into the void and, a la Björk, slamming against the rocks at the bottom. Just the thought of it makes my palms clammy on the rail.
So perhaps one reason it doesn't matter that we know how a movie will turn out is that there is a part of us that is blindly processing data without incorporating what we already know. Another reason we still feel emotions like suspense during a movie we've seen before is because we can imagine what would happen if it didn't turn out the way we know it will. In both cases, there is a conscious intellectual response which can negate our instinctual thoughts, but such responses seem to happen after the fact (at which point, you've already experienced the emotion in question and can't just take it back). One of the most beautiful things about laughter is that it happens involuntarily. We don't (always) have to think about it, we just do it. Dennis Miller once wrote about this:
The truth is the human sense of humor tends to be barbaric and it has been that way all along. I'm sure on the eve of the nativity when the tall Magi smacked his forehead on the crossbeam while entering the stable, Joseph took a second away from pondering who impregnated his wife and laughed his little carpenter ass off. A sense of humor is exactly that: a sense. Not a fact, not etched in stone, not an empirical math equation but just what the word intones: a sense of what you find funny. And obviously, everybody has a different sense of what's funny. If you need confirmation on that I would remind you that Saved by the Bell recently celebrated the taping of their 100th episode. Oh well, one man's Molier is another man's Screech and you know something thats the way it should be.
Indeed, humor generally dissipates when you try to explain it. You either get it or you don't.

I could probably go on and on about this, but Bordwell has done an excellent job in his post (there's an interesting bit about mirror neurons, for instance), and unlike me, he's got lots of references. I do find the subject fascinating though, and I began wondering about the impact of people rewatching movies so often. After all, this is a somewhat recent trend we're talking about (not that people didn't rewatch movies before the advent of the VCR and DVD, but that technology has obviously increased the amount of rewatching).

We're living in an on-demand era right now, meaning that we can choose what we want to watch whenever we want (well, we're not quite there yet, but we're moving quickly in that direction). If I want to rewatch Solaris a hundred times and analyze it like the Zapruder film, I'm free to do so (and it might even be a rewarding effort). In the past, things weren't necessarily like that though. James Berardinelli recently wrote about rewatching movies, and he provides some interesting historical context:
30 years ago, if you loved a movie, re-watching it involved patience and hard work. A big Hollywood picture might show up in prime time (ABC regularly aired the James Bond movies on Sunday nights) but smaller/older films were relegated to late night or weekend afternoon showings. Lovers of High Noon (for example) might have to wait a couple of years and religiously check TV listings before being rewarded by its appearance on "The Million Dollar Movie" at 12:30 am some night.

One reason why pre-1980 movie lovers are generally better educated in cinema than their post-1980 counterparts is that TV-based movie watching in the '60s and '70s meant seeing what was provided, and that typically covered many genres and eras of film. I can recall watching a silent film (The Cabinet of Dr. Caligari) on a local station one afternoon in 1977. When was the last time a silent movie aired on any over-the-air television station? The advent of video in the early 1980s and its rapid adoption during the middle of the decade allowed viewers to "program" their home movie watching. They could now see what they wanted to see rather than what was on TV.
Again, this trend has continued, and the degree to which you can program your viewing schedule is ever increasing. Even during the 1980s when I was growing up, I found myself beholden to the broadcast schedules more often than not. Sure, I could tape things with a VCR, but I usually found myself browsing the channels looking for something to watch. There was a certain serendipity to discovering movies in those days. I distinctly remember the first time I saw a Spaghetti Western (For a Few Dollars More), getting hooked, and watching a bunch of others (Cinemax was running a series of them that month). The last time I remember something like that happening was about 5-6 years ago when I caught an Italian horror marathon on some cable movie channel. And the only reason I watched that was because I had seen Suspiria before and wanted to watch it again. It was followed by several Mario Bava films that were very interesting. Today, I look back on some of the films I watched in my childhood, even ones I cherished, and I wonder why I ever bothered to watch them in the first place. It was probably because nothing else was on. The advent of digital cable has changed things as well, because it doesn't encourage blind channel surfing. There's a program guide built right in, so you can browse that to find what you want. Unfortunately, that means you could skip right over something you would otherwise like (and that might have caught your eye if you saw a glimpse of it). There's also a lot more to choose from (perhaps leading to a paradox of choice situation).

Of course, there are other ways for film lovers to discover new films they wouldn't otherwise have watched. On a personal level, listening to various film podcasts, especially Filmspotting and All Movie Talk (which is sadly now defunct, though still worth listening to if you love movies), has been incredibly helpful in finding and exploring various genres or eras of film that I had not been acquainted with. One effective technique that Filmspotting has employed is the use of marathons, in which they watch 5-6 movies from a genre or filmmaker they are not particularly familiar with. Of course, this, too, is subject to the whims of listeners - many (including myself) will avoid films that don't have an immediate appeal. Still, I've found myself playing along with several of their marathons and watching movies I don't think I would ever watch on my own.

One interesting film experiment is currently being conducted by a blogger named Matthew Dessem. He wanted to learn more about foreign films and found that the Criterion Collection was an interesting place to start. It contains a good mix of the old, new, foreign, and independent, and it goes in a somewhat random order. He started writing a review for each movie at his blog, The Criterion Contraption. He's about 80 or so movies into the collection, and his reviews are exceptionally good (apparently the product of about 15 hours of work each). In an interview, Dessem explains his reasoning for watching the collection in order and why he writes reviews for each one:
I began writing about the films simply as a way of keeping myself intellectually honest: thinking about how each movie was supposed to work, paying attention to what was effective and what was not. Given the chance to not engage with a difficult film, I'll usually take it, unless I have to come up with something coherent to say about it.
Later in the interview, he expands on why he watches the films in the order Criterion put them out:
Mostly, it keeps me honest. If I had the choice to watch the films in any order, I would quickly jump to all the films I most want to see, and never get around to the ones that seem less interesting. That means I'd miss out on a lot of discoveries, which was one of my main goals to begin with. But jumping around from country to country and decade to decade has its own rewards: like any good 21st century citizen, I have a pretty good case of apophenia, so I'll often see connections that don't exist between films.
I can definitely see where he's coming from. Looking through the Criterion catalog, I see a lot of movies that I'd probably skip if I didn't require myself to watch them in order (as it is now, I've seen somewhere around 10% of the movies, and in no particular order - I sorta fell into the trap where I "quickly jump to all the films I most want to see, and never get around to the ones that seem less interesting." Except, of course, I haven't decided to watch all the Criterion Collection movies.) Indeed, some of the movies I have seen I probably wouldn't recommend except in certain circumstances (for example, I wouldn't recommend Equinox to anyone but die-hard horror fans).

However, while there are ways for us film lovers to seek out and expand our knowledge of film, I do wonder about the casual moviegoers. Is the recent trend of remakes (or reimaginings or whatever they call them these days) partially the result of this phenomenon? I wonder how many of the younger generation saw Rob Zombie's limp remake of Halloween and then sought out the brilliant original. That is perhaps too high-profile an example. How about the original Ocean's Eleven? As it turns out, I have not seen that movie, despite loving the remake. I've added it to my Netflix queue. It rests at position 116 right now, which means I'll probably get to it sometime within the next five years. Now if you'll excuse me, I'm going to rewatch The Empire Strikes Back. It is my destiny.

I have seen this a hundred times, but I get the chills during this scene every time...

Update: Added some screenshots from movies I've watched a bazillion times. Also just want to note that while I spent most of my time talking about movies here, the same goes for books and music. I don't tend to reread books much (perhaps due to the time commitment reading a book takes), but on the other hand, music gets better with multiple listenings (so much so that no one even questions the practice of listening to music multiple times).
Posted by Mark on June 15, 2008 at 08:21 PM .: link :.


End of This Day's Posts

Sunday, January 27, 2008

Best Films of 2007
I saw somewhere on the order of 60 movies that were released in 2007. This is somewhat lower than most critics, but higher than your average moviegoer. Also unlike most critics, I don't consider this to be a spectacular year for film. For instance, I left several films off my 2006 list that would have been shoo-ins this year. If I were to take a more objective stance, limiting my picks to the movies with the best technical qualities, the list would be somewhat easier to assemble. But that's a boring way to put together a list, and absolute objectivity is not possible in any case. Movies that really caught my attention and interested me were somewhat fewer this year. Don't get me wrong, I love movies and there were a lot of good ones this year, but there were few that really clicked with me. As such, a lot of the top 10 could easily be exchanged with a movie from the Honorable Mention section. So without further ado:

Top 10 Movies of 2007
* In roughly reverse order
  • Zodiac: This one barely makes it on this list. It's one of the few early year releases that has made it on the list, and as such, it's something I actually want to revisit. But of all the early year films I saw, I remember this being the most interesting and best made. If you know about the Zodiac killer, you know the ending won't provide any real explanations (nor should it) as the killer was never caught in real life. As such, this does diminish some of the tension from the film. Still, director David Fincher has made an impeccable film. It's not as showy or spectacular as his previous efforts. Stylistically, it's rather straightforward, and yet, it's a gorgeous film to look at, and Fincher does manage to imbue some tension throughout the film, which focuses more on the obsession of those trying to find the Zodiac than the Zodiac himself.
    More Info: [IMDB] [Amazon]
  • Gone Baby Gone: It basically starts out as a straightforward crime thriller and mystery, and those elements are very well done. But the ending introduces a moral dilemma that has no good answers. You can't help but put yourself into the movie and think about what you would do in such a case, and to be honest, I don't know what I'd do. I suppose I should mention that this is Ben Affleck's directing debut, and he proves shockingly adept behind the camera.
    More Info: [IMDB] [Amazon]
  • The Bourne Ultimatum: A fantastic action film, and one of the few sequels worth its salt in a year of particularly bad sequels. Paul Greengrass' infamous shaky camera is actually put to good use here, and the film also features good performances and great stuntwork. Some may be put off by the camera work, but when you look at a film like this, and then you look at a film like Transformers, you can see a huge difference in style and talent.
    More Info: [IMDB] [Amazon]
  • Superbad: Hands down, the funniest movie of the year. I'm a sucker for raunchy humor with a heart, and this movie has that in spades. Great performances by Jonah Hill and the deadpan Michael Cera, as well as just about everyone else. Of all the movies on this list, this one probably has the most replay value, and is also probably the most quotable.
    More Info: [IMDB] [Amazon]
  • Stardust: This might be the most thoroughly enjoyable movie of the year. A great adventure film that evokes The Princess Bride (perhaps unfairly inviting comparisons) while asserting an identity of its own. In a year filled with dark, heavy-hitting dramas, it was nice to sit down to a well done fantasy film. Well directed with good performances (including an unusual turn by Robert DeNiro as a flamboyant pirate) and nice visuals, but the real strength of this film is the story, which retains the fun feeling of a fantasy while skirting darker, edgier material.
    More Info: [IMDB] [Amazon]
  • The King of Kong: A Fistful of Quarters: Documentary films don't generally find much of an audience in theaters, but The King of Kong should be in every video game enthusiast's Netflix queue. It delves into the rough and tumble world of competitive video gaming for classic games, particularly Donkey Kong, but it does so kinda like an inspirational sports film. You've got your lovable underdog who has never won anything in his life, and of course the villainous champion who looks down on the underdog and seeks to steal his thunder. It's a great movie and highly recommended for video game fans.
    More Info: [IMDB] [Amazon]
  • The Orphanage: Certainly the creepiest movie of the year. Though perhaps not exactly a horror film, it establishes a high level of tension throughout, and the story, while a little odd, works pretty well too. A Spanish-language film that gets unfairly compared to Pan's Labyrinth, it is nonetheless worth watching for any fan of ghost stories.
    More Info: [IMDB] [Amazon]
  • The Lives of Others: This film actually won the Oscar for Best Foreign Language Film last year (beating out Pan's Labyrinth - a surprise to me), so I might be cheating a bit, but it didn't really have a theatrical release in the U.S. until 2007, so I'm putting it on this list. Set in East Germany during the Cold War, this film follows a Stasi agent who begins to feel for the subjects he's surveilling. It doesn't sound like much, and it's not exactly action-packed, but it is quite compelling and one of the most powerful films of the year. All of the technical aspects of the film are brilliant, especially the script and the nuanced acting by Ulrich Mühe. This film would be amongst the top of any year's list.
    More Info: [IMDB] [Amazon]
  • Grindhouse: I'm referring, of course, to the theatrical release of this film. I say this because a lot of critics like to separate the two features and heap praise on Tarantino's Death Proof (which I'll grant is probably the better of the two, if I were forced to choose), but to me, nothing beats the full experience of the theatrical version. It starts out with a hilarious "fake" trailer, then moves into Robert Rodriguez's Planet Terror, an over-the-top zombie action film done in true grindhouse style (missing reels and all). Following that, we get three more absolutely brilliant fake trailers and Tarantino's wonderful Death Proof. The films are dark, they're edgy, and they're probably not for everyone. In attempting to emulate 70s grindhouse cinema, the filmmakers have lovingly reproduced the tropes, some of which may bother audiences (particularly the awkward pacing of both features, which is actually brilliance in disguise). It's a crime that the theatrical version is not available on DVD. The double-billing was poorly advertised, so it looks like the studio opted to split the films up and give longer cuts of each their own DVD. Supposedly, a 6-disc boxed set containing everything is in the works.
    More Info: [IMDB] [Planet Terror | Death Proof] [Winner of 3 Kaedrin Movie Awards]
  • No Country for Old Men: The Coen brothers have outdone themselves. This is perhaps a boring pick, as this film is at or near the top of most top 10 lists, but that happened for a reason. It's a great damn film. Gorgeous photography, tension-filled action, and that rare brand of dark humor that the Coens are so good at. It also features the most memorable and terrifying villain in years. The ending is uncompromising and ambiguous (which may turn some viewers off), but I found it quite appropriate. Of all the films this year, this one is the best made and most entertaining (if a little dark), a combo that's certainly difficult to pull off.
    More Info: [IMDB] [Amazon] [Winner of 3 Kaedrin Movie Awards]
Honorable Mention
As I mentioned above, a lot of these honorable mentions would probably do fine for the bottom half of the top 10 (the top half is pretty strong, actually). In some cases, I really struggled with a lot of the below picks. If my mood were different, I bet some things would change. These are all good movies and worth watching too.
  • Juno: This film could easily have made my top 10 list, and it's the dark horse pick for the Best Picture Oscar. Funny comedies that are also smart and clever are rare, and this is a wonderful example. Juno's too-cool-for-school hipster dialogue was definitely a turn-off for portions of the film (particularly the beginning), but it sorta grows on you too, and by the end, you're so involved in the story that it's not noticeable. Of particular note are Ellen Page's brilliant performance as the title character and the turns by her parents, played ably by J.K. Simmons and Allison Janney. Michael Cera puts in another subdued performance, but hey, he's great at that and it fits well.
    More Info: [IMDB]
  • Waitress: Yet another unexpected-pregnancy movie (there were three this year, the others being Juno and Knocked Up). It's a "chick flick," but I found that I really enjoyed it. Aside from the fact that nearly everyone in the movie is cheating on their partner, it's really quite an endearing movie, and it's very sad indeed that writer/director Adrienne Shelly will not be making any more films (she died shortly after production). Great performances by Keri Russell and Nathan Fillion (of Firefly/Serenity fame) and a nice turn by Andy Griffith as the crotchety-old-man-with-a-heart-of-gold.
    More Info: [IMDB] [Amazon]
  • Rescue Dawn: Werner Herzog's great film depicting a Vietnam POW's struggle for survival in the jungle could easily have made the top 10 (a lot of the films in the honorable mention could have). I'm not that familiar with Herzog, but after seeing this film, I'd definitely like to check out some of his older classics. Good performances by Christian Bale (one of the best of his generation) and Steve Zahn (who is normally relegated to comic relief, but does a nice job in this dramatic role).
    More Info: [IMDB] [Amazon]
  • Sunshine: Solid space-based science fiction is somewhat of a rarity these days (actually, SF in general seems to be), and this film manages to pull it off. It's a little cliche-ridden (some good, some bad), but I really enjoyed this film, even the ending, which seems to strike a lot of people the wrong way (I loved it). Good ensemble cast, wonderful high-contrast lighting, and a decent story. Perhaps not the greatest film, but there's something to be said for a well executed genre film.
    More Info: [IMDB] [Amazon]
  • Ratatouille: Brad Bird is perhaps my favorite American animator working today, and this film really is a delight. It is, perhaps, not as seamless as his previous efforts (I was particularly taken with his last film, The Incredibles), but it's still quite a good film. The story follows a rat who seems to have developed a talent for cooking. This rat eventually teams up with a young human guy so that they can elevate the cuisine at a famous French restaurant. It sounds silly, and well, it is I guess, but who cares? It's fun. The one ironic bit is that the character of the rat is much more compelling than any of the human characters. There are a lot of nice touches in the movie, and I'm quite looking forward to Bird's next project (whatever that might be).
    More Info: [IMDB] [Amazon]
  • Michael Clayton: This slow-burning legal thriller was actually quite good. Helmed by Bourne collaborator Tony Gilroy, this film goes perhaps a little too far at times, but is otherwise a keenly constructed thriller. At times, it doesn't seem like there's really that much going on in the film, but Gilroy somehow manages to keep the pace high (a neat trick, that) and I did genuinely find myself surprised by the ending.
    More Info: [IMDB] [Amazon]
  • There Will Be Blood: Amazing character study from director Paul Thomas Anderson. The first 20 minutes of the film are an outstanding exercise in breaking from tradition (there's almost no dialogue, but it's also compelling material and necessary for the story). The over-the-top ending is a little strange and leaves you wondering "Why?" but it's also oddly appropriate. It's one of those movies that has grown on me the more I think about it. Daniel Day Lewis gives an amazing performance (yeah, I'll even give it to him considering the last 20 minutes of the movie) and director Anderson is at the top of his game. Oh, and I DRINK YOUR MILKSHAKE!!!! I DRINK IT UP!!!!!!
    More Info: [IMDB]
  • Eastern Promises: Well, the premise of this film isn't all that exciting, but I found Viggo Mortensen's performance riveting and his character provided most of the film's interesting twists and turns. It's worth watching because of him and his character, but it's also a flawed film (especially in comparison to the other recent Cronenberg/Mortensen collaboration, A History of Violence).
    More Info: [IMDB] [Amazon]
  • Hot Fuzz: Among the better comedies this year, Hot Fuzz is an effective action movie parody. While much of that is overt, there are some great subtle touches as well (particularly with respect to Simon Pegg's performance, as he evokes shades of Schwarzenegger in Predator or the T-1000 in T2). Ultimately, the story devolves into something rather stupid, which puts this a peg below Shaun of the Dead (which was made by the same filmmaking team), but it's still quite entertaining.
    More Info: [IMDB] [Amazon]
  • Black Book: Despite the involvement of Paul Verhoeven (whom I generally dislike, with rare exceptions), this turns out to be one of the more involving historical thrillers that I've seen in recent years. It's not a profound journey, but it's got some wonderful pot-boileresque elements, and it managed to pull me into the story, which was complex and well done.
    More Info: [IMDB] [Amazon]
Should have seen: Well, there you have it. A little late, but I made it. That just about wraps up the Kaedrin movie awards; hope you enjoyed them. I don't know if I'll do another Top 10 Box Office Performance analysis, but if I do, it probably won't be for a little while (that actually might make it a little more accurate, too).
Posted by Mark on January 27, 2008 at 08:18 PM .: link :.


End of This Day's Posts

Sunday, November 18, 2007

The Paradise of Choice?
A while ago, I wrote a post about the Paradox of Choice based on a talk by Barry Schwartz, the author of a book by the same name. The basic argument Schwartz makes is that choice is a double-edged sword. Choice is a good thing, but too much choice can have negative consequences, usually in the form of some kind of paralysis (where there are so many choices that you simply avoid the decision) and consumer remorse (elevated expectations, anticipated regret, etc...). The observations made by Schwartz struck me as being quite astute, and I've been keenly aware of situations where I find myself confronted with a paradox of choice ever since. Indeed, just knowing and recognizing these situations seems to help deal with the negative aspects of having too many choices available.

This past summer, I read Chris Anderson's book, The Long Tail, and I was pleasantly surprised to see a chapter in his book titled "The Paradise of Choice." In that chapter, Anderson explicitly addresses Schwartz's book. However, while I liked Anderson's book and generally agreed with his basic points, I think his dismissal of the Paradox of Choice is off target. Part of the problem, I think, is that Anderson is much more concerned with the choices themselves rather than the consequences of those choices (which is what Schwartz focuses on). It's a little difficult to tell, though, as Anderson only dedicates 7 pages or so to the topic. As such, his arguments don't really eviscerate Schwartz's work. There are some good points though, so let's take a closer look.

Anderson starts with a summary of Schwartz's main concepts, and points to some of Schwartz's conclusions (from page 171 in my edition):
As the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.
Now, the way Anderson presents this is a bit out of context, but we'll get to that in a moment. Anderson continues and then responds to some of these points (again, page 171):
As an antidote to this poison of our modern age, Schwartz recommends that consumers "satisfice," in the jargon of social science, not "maximize". In other words, they'd be happier if they just settled for what was in front of them rather than obsessing over whether something else might be even better. ...

I'm skeptical. The alternative to letting people choose is choosing for them. The lessons of a century of retail science (along with the history of Soviet department stores) are that this is not what most consumers want.
Anderson has completely missed the point here. Later in the chapter, he spends a lot of time establishing that people do, in fact, like choice. And he's right. My problem is twofold: First, Schwartz never denies that choice is a good thing, and second, he never advocates removing choice in the first place. Yes, people love choice, the more the better. However, Schwartz found that even though people preferred more options, they weren't necessarily happier because of it. That's why it's called the paradox of choice - people obviously prefer something that ends up having negative consequences. Schwartz's book isn't some sort of crusade against choice. Indeed, it's more of a guide for how to cope with being given too many choices. Take "satisficing." As Tom Slee notes in a critique of this chapter, Anderson misstates Schwartz's definition of the term. He makes it seem like satisficing is settling for something you might not want, but Schwartz's definition is much different:
To satisfice is to settle for something that is good enough and not worry about the possibility that there might be something better. A satisficer has criteria and standards. She searches until she finds an item that meets those standards, and at that point, she stops.
Settling for something that is good enough to meet your needs is quite different from just settling for what's in front of you. Again, I'm not sure Anderson is really arguing against Schwartz. Indeed, Anderson even acknowledges part of the problem, though he again misstates Schwartz's arguments:
Vast choice is not always an unalloyed good, of course. It too often forces us to ask, "Well, what do I want?" and introspection doesn't come naturally to all. But the solution is not to limit choice, but to order it so it isn't oppressive.
Personally, I don't think the problem is that introspection doesn't come naturally to some people (though that could be part of it); it's more that some people just don't give a crap about certain things and don't want to spend time figuring them out. In Schwartz's talk, he gave an example about going to the Gap to buy a pair of jeans. Of course, the Gap offers a wide variety of jeans (as of right now: Standard Fit, Loose Fit, Boot Fit, Easy Fit, Morrison Slim Fit, Low Rise Fit, Toland Fit, Hayes Fit, Relaxed Fit, Baggy Fit, Carpenter Fit). The clerk asked him what he wanted, and he said, "I just want a pair of jeans!"
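
Schwartz's distinction between satisficing and maximizing is really an algorithmic one, and a tiny sketch makes it concrete. Here's a minimal Python illustration of my own (the jean styles and "fit scores" are invented, not anything from Schwartz or the Gap): the satisficer stops at the first option that meets her standard, while the maximizer refuses to decide until she's weighed every option on the shelf.

    # Hypothetical catalog of jean styles with made-up fit scores (0-10).
    jeans = [("Standard Fit", 6), ("Loose Fit", 4), ("Boot Fit", 7),
             ("Easy Fit", 8), ("Relaxed Fit", 5), ("Baggy Fit", 3)]

    def satisfice(options, good_enough=6):
        # Return the first option that meets the standard, then stop looking.
        for style, score in options:
            if score >= good_enough:
                return style
        return None  # nothing met the standard

    def maximize(options):
        # Examine every option and insist on the single best one.
        return max(options, key=lambda pair: pair[1])[0]

    print(satisfice(jeans))  # Standard Fit -- good enough, search over
    print(maximize(jeans))   # Easy Fit -- but only after weighing every style on the shelf

The satisficer's effort stays flat no matter how many styles the Gap adds; the maximizer's search (and, Schwartz would argue, her anticipated regret) grows with every new option.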

The second part of Anderson's statement is interesting, though. Aside from again misstating Schwartz's argument (he does not advocate limiting choice!), Anderson makes a worthwhile observation: how a choice is presented matters. Yes, the Gap has a wide variety of jean styles, but look at their website again. At the top of the page is a little guide to what each of the styles means. For the most part, it's helpful, and I think that's what Anderson is getting at. Too much choice can be oppressive, but if you have the right guide, you can get the best of both worlds. The only problem is that finding the right guide is not as easy as it sounds. The jean style guide at the Gap is neat and helpful, but you do have to click through a bunch of stuff and read it. This is easier than going to a store and trying on all the varieties, but it's still a pain for someone who just wants a pair of jeans, dammit.

Anderson spends some time fleshing out these guides to making choices, noting the differences between offline and online retailers:
In a bricks-and-mortar store, products sit on the shelf where they have been placed. If a consumer doesn't know what he or she wants, the only guide is whatever marketing material may be printed on the package, and the rough assumption that the product offered in the greatest volume is probably the most popular.

Online, however, the consumer has a lot more help. There are a nearly infinite number of techniques to tap the latent information in a marketplace and make that selection process easier. You can sort by price, by ratings, by date, and by genre. You can read customer reviews. You can compare prices across products and, if you want, head off to Google to find out as much about the product as you can imagine. Recommendations suggest products that 'people like you' have been buying, and surprisingly enough, they're often on-target. Even if you know nothing about the category, ranking best-sellers will reveal the most popular choice, which both makes selection easier and also tends to minimize post-sale regret. ...

... The paradox of choice is simply an artifact of the limitations of the physical world, where the information necessary to make an informed choice is lost.
He's making a very good point, though I think he's a bit too optimistic about how effective these guides to buying really are. For one thing, there are times when a choice isn't clear, even if you do have a guide. Also, while I think recommendations based on what other customers have purchased are important and helpful, who among us hasn't seen absurd recommendations? From my personal experience, a lot of people don't like the connotations of recommendations either (how do they know so much about me? etc...). Personally, I really like recommendations, but I'm a geek and I like to figure out why they're offering me what they are (Amazon actually tells you why something is recommended, which is really neat). In any case, from my own anecdotal observations, no one puts much faith in probabilistic systems like recommendations or ratings (for a number of reasons, such as cheating or distrust). There's nothing wrong with that, and that's part of why such systems are effective: ironically, acknowledging their imperfections allows users to better utilize them. Anderson knows this, but I think he's still a bit too optimistic about our tools for traversing the long tail. Personally, I think they need a lot of work.
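
To give a sense of how simple the machinery behind "people like you have been buying" can be, here's a toy co-occurrence sketch in Python (my own illustration, not how Amazon or anyone else actually does it): for a given item, it just counts which other items show up in purchase histories that also contain that item.

    from collections import Counter

    # Invented purchase histories, purely for illustration.
    purchases = {
        "alice": {"Suspiria", "Deep Red", "Black Sabbath"},
        "bob":   {"Suspiria", "Deep Red", "The Bird with the Crystal Plumage"},
        "carol": {"High Noon", "Rio Bravo"},
    }

    def recommend(item, histories, top_n=3):
        # Count how often other items appear in baskets that also contain this item.
        counts = Counter()
        for basket in histories.values():
            if item in basket:
                counts.update(basket - {item})
        return [title for title, _ in counts.most_common(top_n)]

    print(recommend("Suspiria", purchases))
    # ['Deep Red', ...] -- Deep Red co-occurs most often; the rest are one-off coincidences

Notice that with this little data, the toy happily recommends one-off coincidences right alongside the genuinely popular pairing, which is exactly where those absurd recommendations come from. Real systems layer on ratings, weighting, and far more data, but they're still probabilistic guesses at heart.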

When I was younger, one of the big problems in computing was storage. Computers are the perfect data gathering tool, but you need somewhere to store all that data. In the 1980s and early 1990s, computers and networks were significantly limited by hardware, particularly storage. By the late 1990s, Moore's law had eroded this deficiency significantly, and today, the problem of storage is largely solved. You can buy a terabyte of storage for just a couple hundred dollars. However, as I'm fond of saying, we don't so much solve problems as trade one set of problems for another. Now that we have the ability to store all this information, how do we get at it in a meaningful way? When hardware was limited, analysis was easy enough. Now, though, you have so much data available that the simple analyses of the past don't cut it anymore. We're capturing all this new information, but are we really using it to its full potential?

I recently caught up with Malcolm Gladwell's article on the Enron collapse. The really crazy thing about Enron was that they didn't really hide what they were doing. They fully acknowledged and disclosed what they were doing... there was just so much complexity to their operations that no one really recognized the issues. They were "caught" because someone had the persistence to dig through all the public documentation that Enron had provided. Gladwell goes into a lot of detail, but here are a few excerpts:
Enron's downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the nineteen-seventies. To expose the White House coverup, Bob Woodward and Carl Bernstein used a source-Deep Throat-who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flower pot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn't being followed, and meet his source in an underground parking garage at 2 A.M. ...

Did Jonathan Weil have a Deep Throat? Not really. He had a friend in the investment-management business with some suspicions about energy-trading companies like Enron, but the friend wasn't an insider. Nor did Weil's source direct him to files detailing the clandestine activities of the company. He just told Weil to read a series of public documents that had been prepared and distributed by Enron itself. Woodward met with his secret source in an underground parking garage in the hours before dawn. Weil called up an accounting expert at Michigan State.

When Weil had finished his reporting, he called Enron for comment. "They had their chief accounting officer and six or seven people fly up to Dallas," Weil says. They met in a conference room at the Journal's offices. The Enron officials acknowledged that the money they said they earned was virtually all money that they hoped to earn. Weil and the Enron officials then had a long conversation about how certain Enron was about its estimates of future earnings. ...

Of all the moments in the Enron unravelling, this meeting is surely the strangest. The prosecutor in the Enron case told the jury to send Jeffrey Skilling to prison because Enron had hidden the truth: You're "entitled to be told what the financial condition of the company is," the prosecutor had said. But what truth was Enron hiding here? Everything Weil learned for his Enron expose came from Enron, and when he wanted to confirm his numbers the company's executives got on a plane and sat down with him in a conference room in Dallas.
Again, there's a lot more detail in Gladwell's article. Just how complicated was the public documentation that Enron had released? Gladwell gives some examples, including this one:
Enron's S.P.E.s were, by any measure, evidence of extraordinary recklessness and incompetence. But you can't blame Enron for covering up the existence of its side deals. It didn't; it disclosed them. The argument against the company, then, is more accurately that it didn't tell its investors enough about its S.P.E.s. But what is enough? Enron had some three thousand S.P.E.s, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty S.P.E. disclosure statements from various corporations-that is, summaries of the deals put together for interested parties-and found that on average they ran to forty single-spaced pages. So a summary of Enron's S.P.E.s would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That's what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That's what the Powers Committee put together. The committee looked only at the "substance of the most significant transactions," and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was "with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation."
Again, Gladwell's article has a lot of other details and is a fascinating read. What interested me the most, though, was the problem created by so much data. That much information is useless if you can't sift through it quickly or effectively enough. Bringing this back to the paradise of choice, the current systems we have for making such decisions are better than ever, but still require a lot of improvement. Anderson is mostly talking about simple consumer products, so none are really as complicated as the Enron case, but even then, there are still a lot of problems. If we're really going to overcome the paradox of choice, we need better information analysis tools to help guide us. That said, Anderson's general point still holds:
More choice really is better. But now we know that variety alone is not enough; we also need information about that variety and what other consumers before us have done with the same choices. ... The paradox of choice turned out to be more about the poverty of help in making that choice than a rejection of plenty. Order it wrong and choice is oppressive; order it right and it's liberating.
Personally, while the help in making choices has improved, there's still a long way to go before we can really tackle the paradox of choice (though, again, just knowing about the paradox of choice seems to do wonders in coping with it).

As a side note, I wonder if the video game playing generations are better at dealing with too much choice - video games are all about decisions, so I wonder if folks who grew up working on their decision making apparatus are more comfortable with being deluged by choice.
Posted by Mark on November 18, 2007 at 09:47 PM .: link :.


End of This Day's Posts

Sunday, August 05, 2007

Manuals, or the lack thereof...
When I first started playing video games and using computer applications, I remember having to read the instruction manuals to figure out what was happening on screen. I don't know if this was because I was young and couldn't figure this stuff out, or because some of the controls were obtuse and difficult. It was perhaps a combination of both, but I think the latter was more prevalent, especially as applications and games became more complex and powerful. I remember sitting down at a computer running DOS and loading up WordPerfect. The interface that appears is rather simplistic, and the developers apparently wanted to avoid the "clutter" of on-screen menus, so they used keyboard combinations. According to Wikipedia, WordPerfect used "almost every possible combination of function keys with Ctrl, Alt, and Shift modifiers." I vaguely remember needing to use those stupid keyboard templates (little pieces of laminated paper that fit snugly around the keyboard keys, helping you remember what key or combo does what).

Video Games used to have great manuals too. I distinctly remember several great manuals from the Atari 2600 era. For example, the manual for Pitfall II was a wonderful document done in the style of Pitfall Harry's diary. The game itself had little in the way of exposition, so you had to read the manual to figure out that you were trying to rescue your niece Rhonda and her cat, Quickclaw, who became trapped in a catacomb while searching for the fabled Raj diamond. Another example for the Commodore 64 was Temple of Apshai. The game had awful graphics, but each room you entered had a number, and you had to consult your manual to get a description of the room.

By the time of the NES, the importance of manuals had waned from Apshai levels, but they were still somewhat necessary at times, and gaming companies still went to a lot of trouble to produce helpful documents. The one that stands out in my mind was the manual for Dragon Warrior III, which was huge (at least 50 pages) and also contained a nice fold-out chart of most of the monsters and weapons in the game (with really great artwork). PC games were also getting more complex, and as Roy noted recently, companies like Sierra put together really nice instruction manuals for complex games like the King's Quest series.

In the early 1990s, my family got its first Windows PC, and several things changed. With the Word for Windows software, you didn't need any of those silly keyboard templates. Everything you needed to do was in a menu somewhere, and you could just point and click instead of having to memorize strange keyboard combos. Naturally, computer purists love the keyboard, and with good reason. If you really want to be efficient, the keyboard is the way to go, which is why Linux users are so fond of the command line and simple looking but powerful applications like Emacs. But for your average user, the GUI was very important, and made things a lot easier to figure out. Word had a user manual, and it was several hundred pages long, but I don't think I ever cracked it open, except maybe in curiosity (not because I needed to).

The trends of improving interfaces and less useful manuals continued throughout the next decade, and today, well, I can't think of the last time I had to consult a physical manual for anything. Steven Den Beste has been playing around with Flash for a while, but he says he never looks at the manual. "Manuals are for wimps." In his post, Roy wonders where all the manuals have gone. He speculates that manufacturing costs are a primary culprit, and I have no doubt that they are, but there are probably a couple of other reasons as well. For one, interfaces have become much more intuitive and easy to use. This is in part due to familiarity with computers and the emergence of consistent standards for things like dialog boxes (of course, when you eschew those standards, you get what Jakob Nielsen describes as a catastrophic failure). If you can easily figure it out through the interface, what use are the manuals? With respect to gaming, in-game tutorials have largely taken the place of instruction manuals. Another thing that has perhaps affected official instruction manuals is the rise of unofficial walkthroughs and game guides. Visit a local bookstore and you'll find entire bookcases devoted to video game guides and walkthroughs. As nice as the manual for Pitfall II was, you really didn't need much more than 10 pages to explain how to play that game, but several hundred pages barely do justice to some of the more complex video games in today's market. Perhaps the reason gaming companies don't give you instruction manuals with the game is not just that printing the manual is costly, but that they can sell you a more detailed and useful one.

Steven Johnson's book Everything Bad is Good for You has a chapter on Video Games that is very illuminating (in fact, the whole book is highly recommended - even if you don't totally agree with his premise, he still makes a compelling argument). He talks about the official guides and why they're so popular:
The dirty little secret of gaming is how much time you spend not having fun. You may be frustrated; you may be confused or disoriented; you may be stuck. When you put the game down and move back into the real world, you may find yourself mentally working through the problem you've been wrestling with, as though you were worrying a loose tooth. If this is mindless escapism, it's a strangely masochistic version.
He gives an example of a man who spends six months working as a smith (mindless work) in Ultima Online so that he can attain a certain ability, and he also talks about how people spend tons of money on guides for getting past various roadblocks. Why would someone do this? Johnson spends a fair amount of time going into the neurological underpinnings of this, most notably what he calls the "reward circuitry of the brain." In games, rewards are everywhere. More life, more magic spells, new equipment, etc... And how do we get these rewards? Johnson thinks there are two main modes of intellectual labor that go into video gaming, and he calls them probing and telescoping.

Probing is essentially exploration of the game and its possibilities. Much of this is simply the unconscious exploration of the controls and the interface, figuring out how the game works and how you're supposed to interact with it. However, probing also takes the more conscious form of figuring out the limitations of the game. For instance, in a racing game, it's usually interesting to see if you can turn your car around backwards, pick up a lot of speed, then crash head-on into a car going the "correct" way. Or, in Rollercoaster Tycoon, you can creatively place balloon stands next to a roller coaster to see what happens (the result is hilarious). Probing the limits of game physics and finding ways to exploit them are half the fun (or challenge) of video games these days, which is perhaps another reason why manuals are becoming less frequent.

Telescoping has more to do with the game's objectives. Once you've figured out how to play the game through probing, you seek to exploit your knowledge to achieve the game's objectives, which are often nested in a hierarchical fashion. For instance, to save the princess, you must first enter the castle, but you need a key to get into the castle, and the key is guarded by a dragon, etc... Indeed, the structure is sometimes even more complicated, and you essentially build this hierarchy of goals in your head as the game progresses. This is called telescoping.
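
The nested structure Johnson describes is easy to picture as a little tree of goals. Here's a rough Python sketch of my own (using the save-the-princess example above, not anything from Johnson's book) in which a goal can't be attempted until its subgoals are finished:

    class Goal:
        def __init__(self, name, subgoals=()):
            self.name = name
            self.subgoals = list(subgoals)

        def plan(self):
            # Subgoals come before the goal itself: a post-order walk of the tree.
            steps = []
            for sub in self.subgoals:
                steps.extend(sub.plan())
            steps.append(self.name)
            return steps

    defeat_dragon = Goal("defeat the dragon")
    get_key = Goal("get the key", [defeat_dragon])
    enter_castle = Goal("enter the castle", [get_key])
    save_princess = Goal("save the princess", [enter_castle])

    print(save_princess.plan())
    # ['defeat the dragon', 'get the key', 'enter the castle', 'save the princess']

The player assembles roughly this structure in his head from the top down, while the game forces him to execute it from the bottom up.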

So why is this important? Johnson has the answer (page 41 in my edition):
... far more than books or movies or music, games force you to make decisions. Novels may activate our imagination, and music may conjure up powerful emotions, but games force you to decide, to choose, to prioritize. All the intellectual benefits of gaming derive from this fundamental virtue, because learning how to think is ultimately about learning to make the right decisions: weighing evidence, analyzing situations, consulting your long term goals, and then deciding. No other pop culture form directly engages the brain's decision-making apparatus in the same way. From the outside, the primary activity of a gamer looks like a fury of clicking and shooting, which is why much of the conventional wisdom about games focuses on hand-eye coordination. But if you peer inside the gamer's mind, the primary activity turns out to be another creature altogether: making decisions, some of them snap judgements, some long-term strategies.
Probing and telescoping are essential to learning in any sense, and the way Johnson describes them in the book reminds me of a number of critical thinking methods. Probing, developing a hypothesis, reprobing, and then rethinking the hypothesis is essentially the same thing as the scientific method or the hermeneutic circle. As such, it should be interesting to see if video games ever really catch on as learning tools. There have been a lot of attempts at this sort of thing, but they're often stifled by the reputation of video games as a "colossal waste of time" (in recent years, the benefits of gaming have been acknowledged more and more, though not usually as dramatically as Johnson does in his book).

Another interesting use for video games might be evaluation. A while ago, Bill Simmons made an offhand reference to EA Sports' Madden games in the context of hiring football coaches (this shows up at #29 on his list):
The Maurice Carthon fiasco raises the annual question, "When teams are hiring offensive and defensive coordinators, why wouldn't they have them call plays in video games to get a feel for their play calling?" Seriously, what would be more valuable, hearing them B.S. about the philosophies for an hour, or seeing them call plays in a simulated game at the all-Madden level? Same goes for head coaches: How could you get a feel for a coach until you've played poker and blackjack with him?
When I think about how such a thing would actually go down, I'm not so sure, because the football world created by Madden, as complex and comprehensive as it is, still isn't exactly the same as the real football world. However, I think the concept is still sound. Theoretically, you could see how a prospective coach would actually react to a new, and yet similar, football paradigm and how they'd find weaknesses and exploit them. The actual plays they call aren't that important; what you'd be trying to figure out is whether or not the coach was making intelligent decisions.

So where are manuals headed? I suspect that they'll become less and less prevalent as time goes on and interfaces become more and more intuitive (though there is still a long way to go before I'd say that computer interfaces are truly intuitive, I think they're much more intuitive now than they were ten years ago). We'll see more interactive demos and in-game tutorials, and perhaps even games used as teaching tools. I could probably write a whole separate post about how this applies to Linux, which actually does require you to look at manuals sometimes (though at least it has a relatively consistent way of treating manuals; even when the documentation is bad, you can usually find it). Manuals and passive teaching devices will become less important. And to be honest, I don't think we'll miss them. They're annoying.
Posted by Mark on August 05, 2007 at 10:58 AM .: link :.


End of This Day's Posts

Sunday, June 10, 2007

Referential
A few weeks ago, I wrote about how context matters when consuming art. As sometimes happens when writing an entry, that one got away from me and I never got around to the point I originally started with (that entry was originally entitled "Referential" but I changed it when I realized that I wasn't going to write anything about references), which was how much of our entertainment these days references its predecessors. This takes many forms, some overt (homages, parody), some a little more subtle.

I originally started thinking about this while watching an episode of Family Guy. The show is infamous for its random cutaway gags - little vignettes that have no connection to the story, but which often make some obscure reference to pop culture. For some reason, I started thinking about what it would be like to watch an episode of Family Guy with someone from, let's say, the 17th century. Let's further speculate that this person isn't a blithering idiot, but perhaps a member of the Royal Society or something (i.e. a bright fellow).

This would naturally be something of a challenge. There are some technical explanations that would be necessary. For example, we'd have to explain electricity, cable networks, signal processing, and how the television works (which at least involves discussions of light and color). The concept of an animated show, at least, would probably be easy to explain (but it would involve a discussion of how the human eye works, to a degree).

There's more to it, of course, but moving past all that, once we start watching the show, we're going to have to explain why we're laughing at pretty much all of the jokes. Again, most of the jokes are simply references to and parodies of other pieces of pop culture. Watching an episode of Family Guy with Isaac Newton (to pick a prominent Royal Society member) would necessitate a pause just about every minute to explain what each reference was from and why Family Guy's take on it made me laugh. Then there's the fact that Family Guy rarely has any sort of redeemable lesson and often deliberately skews towards actively encouraging evil (something along the lines of "I think the important thing to remember is that it's ok to lie, so long as you don't get caught." I don't think that exact line is in an episode, but it could be.) This works fine for us, as we're so steeped in popular culture that we get that Family Guy is just lampooning the notion that we could learn important life lessons via a half-hour sitcom. But I'm sure Isaac Newton would be appalled.

For some reason, I find this fascinating, and I try to imagine how I would explain various jokes. For instance, the episode I was watching featured a joke concerning the "cool side of the pillow." They cut to a scene in bed where Peter flips over the pillow and sees Billy Dee Williams' face, which proceeds to give a speech about how cool this side of the pillow is, ending with "Works every time." This joke alone would require a whole digression into Star Wars and how most of the stars of that series struggled to overcome their typecasting and couldn't find a lot of good work, so people like Billy Dee Williams ended up doing commercials for a malt liquor named Colt 45, which had these really cheesy commercials where Billy Dee talked like that. And so on. It could probably take an hour before my guest would even come close to understanding the context of the joke (and I'm not even touching the tip of the iceberg with this post).

And the irony of this whole thing is that jokes that are explained simply aren't funny. To be honest, I'm not even sure why I find these simple gags funny (that, of course, is the joy of humor - you don't usually have to understand it or think about it, you just laugh). Seriously, why is it funny when Family Guy blatantly references some classic movie or show? Again, I'm not sure, but that sort of humor has been steadily growing over the past 30 years or so.

Not all comedies are that blatant about their referential humor though (indeed, Family Guy itself doesn't rely solely upon such references). A recent example of a good referential film is Shaun of the Dead, which somehow manages to be both a parody of zombie movies and a good zombie movie in its own right. It pays homage to all the classic zombie films and also makes fun of other genres (notably the romantic comedy), but in doing so, the filmmakers have made a good zombie movie in itself. The filmmakers have recently released a new film called Hot Fuzz, which attempts the same trick for action movies and buddy comedies. It is, perhaps, not as successful as Shaun, but the sheer number of references in the film is astounding. There are the obvious and explicit ones like Point Break and Bad Boys II, but there are also tons of subtle homages that I'd wager most people wouldn't get. For instance, when Simon Pegg yells in the movie, he's doing a pitch-perfect impersonation of Arnold Schwarzenegger in Predator. And when he chases after a criminal, he imitates the way Robert Patrick's T-1000 runs in Terminator 2.

References don't need to be part of a comedy either (though comedies make for the easiest examples). Hop on IMDB, go to just about any recent movie, and click on the "Movie Connections" link in the left navigation. For instance, did you know that the aforementioned T2 references The Wizard of Oz and The Killing, amongst dozens of other references? Most of the time, these references are really difficult to pick out, especially when you're viewing a foreign film or show that's pulling from a different cultural background. References don't have to be story or character based either - they can be the way a scene is composed or the way the lighting is set (e.g., the Venetian blinds in noir films).

Now, this doesn't just apply to art either. A lot of common knowledge in today's world is referential. Most formal writing includes references and bibliographies, for instance, and a non-fiction book will often assume basic familiarity with a subject. When I was in school, I was always annoyed at the amount of rote memorization they made us do. Why memorize it if I could just look it up? Shouldn't you be focusing on my critical thinking skills instead of making me memorize arbitrary lists of facts? Sometimes this complaining was probably warranted, but most of it wasn't. So much of what we do in today's world requires a well-rounded familiarity with a large number of subjects (including history, science, and culture, amongst many other things). There simply isn't any substitute for actual knowledge. Though it was a pain at the time, I'm glad emphasis was put on memorization during my education. A while back, David Foster noted that schools are actually moving away from this, and he made several important distinctions. He takes a song as an example:
Jakob Dylan has a song that includes the following lines:

Cupid, don't draw back your bow
Sam Cooke didn't know what I know


Think of how much you need to know in order to understand these two simple lines:

1) You need to know that, in mythology, Cupid symbolizes love
2) And that Cupid's chosen instrument is the bow and arrow
3) Also that there was a singer/songwriter named Sam Cooke
4) And that he had a song called "Cupid," which included the line "Cupid, draw back your bow."

... "Progressive" educators, loudly and in large numbers, insist that students should be taught "thinking skills" as opposed to memorization. But consider: If it's not possible to understand a couple of lines from a popular song without knowing by heart the references to which it alludes--without memorizing them--what chance is there for understanding medieval history, or modern physics, without having a ready grasp of the topics which these disciplines reference?

And also consider: in the Dylan case, it's not just what you need to know to appreciate the song. It's what Dylan needed to know to create it in the first place. Had he not already had the reference points--Cupid, the bow and arrow, the Sam Cooke song--in his head, there's no way he would have been able to create his own lines. The idea that he could have just "looked them up," which educators often suggest is the way to deal with factual knowledge, would be ludicrous in this context. And it would also be ludicrous in the context of creating new ideas about history or physics.
As Foster notes, this doesn't mean that "thinking skills" are unimportant, just that knowledge is important too. You need to have a quality data set in order to use those "thinking skills" effectively.

Human beings tend to leverage knowledge to create new knowledge. This has a lot of implications, one of which is intellectual property law. Giving limited copyright to intellectual property is important, because the data in that property eventually becomes available for all to build upon. It's ironic that educators are considering less of a focus on memorization, as this requirement of referential knowledge has been increasing for some time. Students need a base of knowledge to both understand and compose new works. References help you avoid reinventing the wheel every time you need to create something, which leads to my next point.

I think part of the reason references are becoming more and more common these days is that they make entertainment a little less passive. Watching TV or a movie is, of course, a passive activity, but if you make lots of references and homages, the viewer is required to think through those references. If the viewer has the appropriate knowledge, such a TV show or movie becomes a little more cognitively engaging. It makes you think, it calls to mind previous work, and it forces you to contextualize what you're watching based on what you know about other works. References are part of the complexity of modern television and film, and Steven Johnson spends a significant amount of time talking about this subject in his book Everything Bad is Good for You (from page 85 of my edition):
Nearly every extended sequence in Seinfeld or The Simpsons, however, will contain a joke that makes sense only if the viewer fills in the proper supplementary information -- information that is deliberately withheld from the viewer. If you haven't seen the "Mulva" episode, or if the name "Art Vandelay" means nothing to you, then the subsequent references -- many of them arriving years after their original appearance -- will pass on by unappreciated.

At first glance, this looks like the soap opera tradition of plotlines extending past the frame of individual episodes, but in practice the device has a different effect. Knowing that George uses the alias Art Vandelay in awkward social situations doesn't help you understand the plot of the current episode; you don't draw on past narratives to understand the events in the present one. In the 180 Seinfeld episodes that aired, seven contain references to Art Vandelay: in George's actually referring to himself with that alias or invoking the name as part of some elaborate lie. He tells a potential employer at a publishing house that he likes to read the fiction of Art Vandelay, author of Venetian Blinds; in another, he tells an unemployment insurance caseworker that he's applied for a latex salesman job at Vandelay Industries. For storytelling purposes, the only thing that you need to know here is that George is lying in a formal interview; any fictitious author or latex manufacturer would suffice. But the joke arrives through the echo of all those earlier Vandelay references; it's funny because it's making a subtle nod to past events held offscreen. It's what we'd call in a real-world context an "in-joke" -- a joke that's funny only to people who get the reference.
I know some people who hate Family Guy and Seinfeld, but I realized a while ago that they don't hate those shows because of their content or because they were offended (though some people certainly are), but rather because they simply don't get the references. They didn't grow up watching TV in the 80s and 90s, so many of the references are simply lost on them. Family Guy would be particularly vexing if you didn't share the pop culture knowledge of that show's writers. These reference-heavy shows are also a lot easier to watch and rewatch, over and over again. Why? Because each episode is not self-contained, you often find yourself noticing something new every time you watch. This also sometimes works in reverse. I remember the first time I saw Bill Shatner's campy rendition of Rocket Man, I suddenly understood a bit on Family Guy which I had thought was just random (but was really a reference).

Again, I seem to be focusing on comedy, but it's not necessarily limited to that genre. Eric S. Raymond has written a lot about how science fiction jargon has evolved into a sophisticated code that implicitly references various ideas, conventions and tropes of the genre:
In looking at an SF-jargon term like, say, "groundcar", or "warp drive" there is a spectrum of increasingly sophisticated possible decodings. The most naive is to see a meaningless, uninterpretable wordlike noise and stop there.

The next level up is to recognize that uttering the word "groundcar" or "warp drive" actually signifies something that's important for the story, but to lack the experience to know what that is. The motivated beginning reader of SF is in this position; he must, accordingly, consciously puzzle out the meaning of the term from the context provided by the individual work in which it appears.

The third level is to recognize that "ground car" and "warp drive" are signifiers shared, with a consistent and known meaning, by many works of SF -- but to treat them as isolated stereotypical signs, devoid of meaning save inasmuch as they permit the writer to ratchet forward the plot without requiring imaginative effort from the reader.

Viewed this way, these signs emphasize those respects in which the work in which they appear is merely derivative from previous works in the genre. Many critics (whether through laziness or malice) stop here. As a result they write off all SF, for all its pretensions to imaginative vigor, as a tired jumble of shopworn cliches.

The fourth level, typical of a moderately experienced SF reader, is to recognize that these signifiers function by permitting the writer to quickly establish shared imaginative territory with the reader, so that both parties can concentrate on what is unique about their communication without having to generate or process huge expository lumps. Thus these "stereotypes" actually operate in an anti-stereotypical way -- they permit both writer and reader to focus on novelty.

At this level the reader begins to develop quite analytical habits of reading; to become accustomed to searching the writer's terminology for what is implied (by reference to previous works using the same signifiers) and what kinds of exceptions and novelties convey information about the world and the likely plot twists.

It is at this level, for example, that the reader learns to rely on "groundcar" as a tip-off that the normal transport mode in the writer's world is by personal flyer. At this level, also, the reader begins to analytically compare the author's description of his world with other SFnal worlds featuring personal flyers, and to recognize that different kinds of flyers have very different implications for the rest of the world.

For example, the moderately experienced reader will know that worlds in which the personal fliers use wings or helicopter-like rotors are probably slightly less advanced in other technological ways than worlds in which they use ducted fans -- and way behind any world in which the flyers use antigravity! Once he sees "groundcar" he will be watching for these clues.

The very experienced SF reader, at the fifth level, can see entire worlds in a grain of jargon. When he sees "groundcar" he associates to not only technical questions about flyer propulsion but socio-symbolic ones about why the culture still uses groundcars at all (and he has a repertoire of possible answers ready to check against the author's reporting). He is automatically aware of a huge range of consequences in areas as apparently far afield as (to name two at random) the architectural style of private buildings, and the ecological consequences of accelerated exploitation of wilderness areas not readily accessible by ground transport.
While comedy makes for convenient examples, I think this better illustrates the cognitive demands of referential art. References require you to be grounded in various subjects, and they'll often require you to think through the implications of those subjects in a new context. References allow writers to pack incredible amounts of information into even the smallest space. This, of course, requires the consumer to decode that information (using available knowledge and critical thinking skills), making the experience less passive and more engaging. The use of references will continue to flourish and accelerate in both art and scholarship, and new forms will emerge. One could even argue that aggregation in various weblogs is simply an exercise in referential work. Just look at this post, in which I reference several books and movies, in many cases assuming familiarity. Indeed, the whole structure of the internet is based on the concept of links -- essentially a way to reference other documents. Perhaps this is part of the cause of the rising complexity and information density of modern entertainment. We can cope with it now, because we have such systems to help us out.
Posted by Mark on June 10, 2007 at 03:08 PM .: link :.


End of This Day's Posts

Sunday, February 18, 2007

World Domination Via Dice
One of my favorite board games is Risk. I have lots of fond memories of getting annihilated by my family members (I don't think I've ever played the game without being the youngest person at the table) and have long since mastered the fundamentals. I also hold it responsible for my early knowledge of world geography and geopolitics (and thus my early thoughts were warped, but at least I knew where the Middle East was, even if the map is a little broad).

The key to Risk is Australia

The key to Risk is Australia. The Greeks knew it; the Carthaginians knew it; now you know it. Australia has only four territories to conquer and, more importantly, only one entrance point, and thus only one territory to defend. Conquering Australia early on guarantees an extra two armies a turn, which is huge at that stage of the game. Later on, that advantage lessens, but after securing Australia, you should be off to a very good start. If you're not in a position to take over Australia, South America will do. It also has only four territories, but it has two entrances and thus two territories to defend. On the bright side, it's also adjacent to Africa and North America, which are good continents to expand to (though they're both considerably more difficult to hold than Australia). This being the internet, there are, of course, people who have thought about the subject a lot more than I have and developed many detailed strategies.
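Out of curiosity, here's a rough way to put numbers on that intuition. It's just a quick sketch in Python, and the continent bonuses and border counts are from my memory of the classic board, so double-check them against your own copy before trusting the output.

    # Rank the classic Risk continents by how cheap they are to hold:
    # few border territories to defend, few territories to conquer.
    # Bonus and border figures are from memory and may be slightly off.
    continents = {
        # name: (bonus armies per turn, territories, border territories)
        "Australia":     (2, 4, 1),
        "South America": (2, 4, 2),
        "Africa":        (3, 6, 3),
        "North America": (5, 9, 3),
        "Europe":        (5, 7, 4),
        "Asia":          (7, 12, 5),
    }

    # Sort by borders to defend, then by territories to conquer.
    for name, (bonus, size, borders) in sorted(
            continents.items(), key=lambda kv: (kv[1][2], kv[1][1])):
        print(f"{name:14}  +{bonus}/turn  {size:2} territories  "
              f"{borders} border(s) to defend")

By this crude measure, Australia and South America come out on top, which matches the conventional wisdom above.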

Like many of the classic games, the original has become dwarfed by variants - games set in another universe (LotR Risk) or in a futuristic setting (Risk: 2042) - but I've never played those. However, I recently ran across a little internet game called Dice Wars. It's got the general Risk-like gameplay and the concept of world domination via dice, but there are many key differences:
  • The Map and Extra Armies: A different map is generated for each game. One of the other differences is that the number of extra armies (or Dice, in this game) you get per turn is based solely on the number of territories you control (and there's no equivalent to turning in Risk cards for more armies). This nullifies the Australia strategy of conquering an easily-defensible continent, but the general strategy remains: you need to maneuver your forces so as to minimize the number of exposed territories, slowly and carefully expanding your empire.
  • Army Placement and Size: Unlike Risk, you can't choose where to place your armies (nor can you do "free moves" at the end of your turn, which are normally used to consolidate defenses or prepare a forward thrust). If you mount a successful attack, you must move all of your armies except one that you leave behind. This makes extended thrusts difficult, as you'll leave a trail of easily conquered territories behind you. This is one of the more annoying differences. Another difference is that any one territory can only have a certain number of armies (i.e. there is a maximum). This changes the dynamic, adding another element of entropy. Again, it's somewhat annoying, but it's easy enough to work around.
  • Attacking and Defending: In Risk, the attacker rolls a maximum of 3 dice, while the defender rolls a maximum of 2. Ties go to the defender, but attackers still have the statistical advantage, no matter how many armies are facing off: if both territories have an equal number of armies, the attacker has the edge. In Dice Wars, the number of dice rolled is equal to the number of armies, and instead of matching up individual dice against each other, each side simply totals up its dice. If the attacker's total is greater than the defender's, the attacker wins. Again, ties go to the defender. So in this case, if two territories have the same number of armies, the statistical advantage goes to the defender. Of course, you generally try to avoid such a situation in both games, but again, the dynamic is quite different here (a rough simulation of both systems follows this list).
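To make that last difference concrete, here's a minimal Monte Carlo sketch in Python of the two combat systems as I understand them from the descriptions above. It's not taken from either game's actual code, just a quick illustration of the mechanics.

    import random

    def risk_combat_round(attackers, defenders):
        # One round of classic Risk combat: up to 3 attacker dice vs up to
        # 2 defender dice, highest dice paired off, ties going to the defender.
        # Returns (attacker_losses, defender_losses).
        a = sorted((random.randint(1, 6) for _ in range(min(3, attackers))), reverse=True)
        d = sorted((random.randint(1, 6) for _ in range(min(2, defenders))), reverse=True)
        a_loss = d_loss = 0
        for atk, dfn in zip(a, d):
            if atk > dfn:
                d_loss += 1
            else:
                a_loss += 1
        return a_loss, d_loss

    def dice_wars_attacker_wins(attackers, defenders):
        # One Dice Wars attack: every die on each side is rolled and summed;
        # the attacker wins only with a strictly greater total (ties to defender).
        return sum(random.randint(1, 6) for _ in range(attackers)) > \
               sum(random.randint(1, 6) for _ in range(defenders))

    trials = 100_000

    a_loss = d_loss = 0
    for _ in range(trials):
        al, dl = risk_combat_round(3, 2)
        a_loss += al
        d_loss += dl
    print(f"Risk, 3 dice vs 2: attacker loses {a_loss / trials:.2f} armies per round, "
          f"defender loses {d_loss / trials:.2f}")

    wins = sum(dice_wars_attacker_wins(3, 3) for _ in range(trials))
    print(f"Dice Wars, 3 dice vs 3 dice: attacker wins {wins / trials:.1%} of attacks")

With equal stacks, the Dice Wars attacker wins a bit under half the time, since ties go to the defender -- which is exactly why you try to avoid even-strength attacks in that game.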
The game's familiar mechanics make it easy to pick up, but the differences above make it a little more difficult to master. Here's an example game:

[Screenshot: a Dice Wars game in progress]

Of course, I'd already played a bit to get to this point, and you can probably spot my strategy here. I started with a concentration of territories towards the middle of the map, and thus focused on consolidating my forces in that area. By the time I got to the screenshot above, I'd narrowed my exposure down to four territories. I began expanding to the right, and eventually conquered all of the green territories, thus limiting my exposure to only two territories. From there it was just a matter of slowly expanding that wall of two (at one point my exposure widened back to three) until I won. Another nice feature of this game is the "History" button that appears at the end. Click it, and you can watch the game progress quickly through every battle, showing you the entire war in a matter of seconds. Neat. It's a fun game, but in the end, I think I still prefer Risk. [hat tip to Hypercubed for the game]
Posted by Mark on February 18, 2007 at 08:33 PM .: link :.


End of This Day's Posts

Wednesday, February 14, 2007

Intellectual Property, Copyright and DRM
Roy over at 79Soul has started a series of posts dealing with Intellectual Property. His first post sets the stage with an overview of the situation, and he begins to explore some of the issues, starting with the definition of theft. I'm going to cover some of the same ground in this post, and then some other things which I assume Roy will cover in his later posts.

I think most people have an intuitive understanding of what intellectual property is, but it might be useful to start with a brief definition. Perhaps a good place to start would be Article 1, Section 8 of the U.S. Constitution:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;
I started with this for a number of reasons. First, because I live in the U.S. and most of what follows deals with U.S. IP law. Second, because it's actually a somewhat controversial stance. The fact that IP is only secured for "limited Times" is the key. In England, for example, an author does not merely hold a copyright on their work; they have a Moral Right.
The moral right of the author is considered to be -- according to the Berne convention -- an inalienable human right. This is the same serious meaning of "inalienable" the Declaration of Independence uses: not only can't these rights be forcibly stripped from you, you can't even give them away. You can't sell yourself into slavery; and neither can you (in Britain) give the right to be called the author of your writings to someone else.
The U.S. is different. It doesn't grant an inalienable moral right of ownership; instead, it grants copyright. In other words, in the U.S., such works are considered property (i.e. they can be sold, traded, bartered, or given away). This is a fundamental distinction: some systems treat authorship as an inalienable individual right, while the U.S. system grants a more limited, transferable one. When put that way, the U.S. system sounds pretty awful, except that it was designed for something different: our system was built to advance science and the "useful Arts." The U.S. system still rewards creators, but only as a means to an end. Copyright is granted so that there is an incentive to create. However, such protections are only granted for "limited Times." This is because when a copyright is eternal, the system stagnates as protected parties stifle competition (this need not be malicious). Copyright is thus limited so that when a work is no longer protected, it becomes freely available for everyone to use and build upon. This is known as the public domain.

The end goal here is the advancement of society, and both protection and expiration are necessary parts of the mix. The balance between the two is important, and as Roy notes, one of the things that appears to have upset the balance is technology. This, of course, extends as far back as the printing press, records, cassettes, VHS, and other similar technologies, but more recently, the convergence of new compression techniques with the increasing bandwidth of the internet created a real issue. Most new recording technologies were greeted with concern, but physical limitations and costs generally put a cap on the amount of damage that could be done. With computers and large networks like the internet, those limitations became almost negligible. Digital copies of protected works became easy to make and distribute on a very large scale.

The first major issue came up as a result of Napster, a peer-to-peer music sharing service that essentially promoted widespread copyright infringement. Lawsuits followed, and the original Napster service was shut down, only to be replaced by numerous decentralized peer-to-peer systems and darknets. This meant that no single entity could be sued for the copyright infringement that occurred on the network, but it resulted in a number of (probably ill-advised) lawsuits against regular folks (the anonymity of internet technology and the state of recordkeeping being what they are, this sometimes leads to hilarious cases, like when the RIAA sued a 79-year-old man who doesn't even own a computer or know how to operate one).

Roy discusses the various arguments for or against this sort of file sharing, noting that the essential difference of opinion is over the definition of the word "theft." For my part, I think it's pretty obvious that downloading something for free that you'd normally have to pay for is morally wrong. However, I can see some grey area. A few months ago, I pre-ordered Tool's most recent album, 10,000 Days, from Amazon. A friend who already had the album sent me a copy over the internet before I had actually received my copy of the CD. Does this count as theft? I would say no.

The concept of borrowing a book, CD or DVD also seems pretty harmless to me, and I don't have a moral problem with borrowing an electronic copy, then deleting it afterwards (or purchasing it, if I liked it enough), though I can see how such a practice represents a bit of a slippery slope and wouldn't hold up in an honest debate (nor should it). It's too easy to abuse such an argument, or to apply it in retrospect. I suppose there are arguments to be made with respect to making distinctions between benefits and harms, but I generally find those arguments unpersuasive (though perhaps interesting to consider).

There are some other issues that need to be discussed as well. The concept of Fair Use allows limited use of copyrighted material without requiring permission from the rights holders. For example, including a screenshot of a film in a movie review. You're also allowed to parody copyrighted works, and in some instances make complete copies of a copyrighted work. There are rules pertaining to how much of the copyrighted work can be used and in what circumstances, but this is not the venue for such details. The point is that copyright is not absolute and consumers have rights as well.

Another topic that must be addressed is Digital Rights Management (DRM). This refers to a range of technologies used to combat digital copying of protected material. The goal of DRM is to use technology to automatically limit the abilities of a consumer who has purchased digital media. In some cases, this means that you won't be able to play an optical disc on a certain device, in others it means you can only use the media a certain number of times (among other restrictions).

To be blunt, DRM sucks. For the most part, it benefits no one. It's confusing, it basically amounts to treating legitimate customers like criminals while only barely (if at all) slowing down the piracy it purports to thwart, and it's led to numerous disasters and unintended consequences. Essential reading on this subject is this talk given to Microsoft by Cory Doctorow. It's a long but well written and straightforward read that I can't summarize briefly (please read the whole thing). Some details of his argument may be debatable, but as a whole, I find it quite compelling. Put simply, DRM doesn't work and it's bad for artists, businesses, and society as a whole.

Now, the IP industries that are pushing DRM are not that stupid. They know DRM is a fundamentally absurd proposition: the whole point of selling IP media is so that people can consume it. You can't build a system that prevents people from doing so, because the whole point of having the media in the first place is to use it. The only way to perfectly secure a piece of digital media is to make it unusable (i.e. the only perfectly secure system is a perfectly useless one). That's why DRM systems are broken so quickly. It's not that the programmers are necessarily bad, it's that the entire concept is fundamentally flawed. Again, the IP industries know this, which is why they pushed the Digital Millennium Copyright Act (DMCA). As with most laws, the DMCA is a complex beast, but what it boils down to is that no one is allowed to circumvent measures taken to protect copyright. Thus, even though the copy protection on DVDs is obscenely easy to bypass, it is illegal to do so. In theory, this might be fine. In practice, this law has extended far beyond what I'd consider reasonable and has also been heavily abused. For instance, some software companies have attempted to use the DMCA to prevent security researchers from exposing bugs in their software. The law is sometimes used to silence critics by threatening them with a lawsuit, even though no copyright infringement was committed. The Chilling Effects project seems to be a good source for information regarding the DMCA and its various effects.

DRM combined with the DMCA can be stifling. A good example of how awful DRM is, and how the DMCA can affect the situation, is the Sony Rootkit Debacle. Boing Boing has a ridiculously comprehensive timeline of the entire fiasco. In short, Sony put DRM on certain CDs. The general idea was to prevent people from putting the CDs in their computers and ripping them to MP3s. To accomplish this, Sony surreptitiously installed software on customers' computers (without their knowledge). A security researcher happened to notice this, and in researching the matter found that the Sony DRM had installed a rootkit that made the computer vulnerable to various attacks. Rootkits are black-hat cracker tools used to disguise the workings of malicious software. Attempting to remove the rootkit broke the Windows installation. Sony reacted slowly and poorly, releasing a service pack that supposedly removed the rootkit but actually opened up new security vulnerabilities. And it didn't end there. Reading through the timeline is astounding (as a result, I tend to shy away from Sony these days). Though I don't believe he was called on it, the security researcher who discovered these vulnerabilities was technically breaking the law, because the rootkit was intended to protect copyright.

A few months ago, my Windows computer died and I decided to give Linux a try. I wanted to see if I could get Linux to do everything I needed it to do. As it turns out, I could, but not legally. Watching DVDs on Linux is technically illegal, because doing so circumvents the copy protection on DVDs. Similar issues exist for other media formats. The details are complex, but in the end, it turns out that I'm not legally able to watch my legitimately purchased DVDs on my computer (I have since purchased a new computer that has an approved player installed). Similarly, if I were to purchase a song from the iTunes Music Store, it would come in a DRMed format. If I wanted to use that song on a portable device (let's say my phone, which doesn't support Apple's DRM format), I'd have to convert it to a format that my portable device could understand, which would be illegal.

This brings me to my next point: DRM isn't really about protecting copyright. I've already established that it doesn't really accomplish that goal (and indeed, even works against many of the reasons copyright was put into place), so why is it still being pushed? One can only really speculate, but I'll bet that part of the issue has to do with IP owners wanting to "undercut fair use and then create new revenue streams where there were previously none." To continue an earlier example, if I buy a song from the iTunes Music Store and I want to put it on my non-Apple phone (not that I don't want one of those), the music industry would just love it if I were forced to buy the song again, in a format that is readable by my phone. Of course, that format would be incompatible with other devices, so I'd have to purchase the song yet again if I wanted to listen to it on those devices. When put in those terms, it's pretty easy to see why IP owners like DRM, and given the average person's reaction to such a scheme, it's also easy to see why IP owners are always careful to couch the debate in terms of piracy. This won't last forever, but it could be a bumpy ride.

Interestingly enough, distributors of digital media like Apple and Yahoo have recently come out against DRM. For the most part, these are just symbolic gestures. Cynics will look at Steve Jobs' Thoughts on Music and say that he's just passing the buck. He knows customers don't like or understand DRM, so he's making a calculated PR move by blaming it on the music industry. Personally, I can see that, but I also think it's a very good thing. I find it encouraging that other distributors are following suit, and I also hope and believe this will lead to better things. Apple has proven that there is a large market for legally purchased music files on the internet, and other companies have even shown that selling DRM-free files yields higher sales. Indeed, the emusic service sells high quality, variable bit rate MP3 files without DRM, which has established emusic as the #2 retailer of downloadable music behind the iTunes Music Store. Incidentally, this was not done for purely ideological reasons - it just made business sense. As yet, these pronouncements are only symbolic, but now that online media distributors have established themselves as legitimate businesses, they have ammunition with which to challenge the IP holders. This won't happen overnight, but I think the process has begun.

Last year, I purchased a computer game called Galactic Civilizations II (and posted about it several times). This game was notable to me (in addition to the fact that it's a great game) in that it was the only game I'd purchased in years that featured no CD copy protection (i.e. DRM). As a result, when I bought a new computer, I experienced none of the usual fumbling for 16 digit CD Keys that I normally experience when trying to reinstall a game. Brad Wardell, the owner of the company that made the game, explained his thoughts on copy protection on his blog a while back:
I don't want to make it out that I'm some sort of kumbaya guy. Piracy is a problem and it does cost sales. I just don't think it's as big of a problem as the game industry thinks it is. I also don't think inconveniencing customers is the solution.
For him, it's not that piracy isn't an issue, it's that it's not worth imposing draconian copy protection measures that infuriate customers. The game sold much better than expected. I doubt this was because they didn't use DRM, but I can guarantee one thing: People don't buy games because they want DRM. However, this shows that you don't need DRM to make a successful game.

The future isn't all bright, though. Peter Gutmann's excellent Cost Analysis of Windows Vista Content Protection provides a good example of how things could get considerably worse:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called "premium content", typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it's not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server).
This is infuriating. In case you can't tell, I've never liked DRM, but at least it could previously be avoided. I generally take articles like the one I'm referencing with a grain of salt, but if true, it means that the DRM in Vista is so oppressive that it will raise the price of hardware… And since Microsoft commands such a huge share of the market, hardware manufacturers have to comply, even though some people (Linux users, Mac users) don't need the draconian hardware requirements. This is absurd. Microsoft should have enough clout to stand up to the media giants; there's no reason the DRM in Vista has to be so invasive (or even exist at all). As Gutmann speculates in his cost analysis, some of the potential effects of this are particularly egregious, to the point where I can't see consumers standing for it.

My previous post dealt with Web 2.0, and I posted a YouTube video that summarized how changing technology is going to force us to rethink a few things: copyright, authorship, identity, ethics, aesthetics, rhetorics, governance, privacy, commerce, love, family, ourselves. All of these are true. Earlier, I wrote that the purpose of copyright was to benefit society, and that protection and expiration were both essential. The balance between protection and expiration has been upset by technology. We need to rethink that balance. Indeed, many people smarter than I already have. The internet is replete with examples of people who have profited off of giving things away for free. Creative Commons allows you to share your content so that others can reuse and remix your content, but I don't think it has been adopted to the extent that it should be.

To some people, reusing or remixing music, for example, is not a good thing. This is certainly worthy of a debate, and it is a discussion that needs to happen. Personally, I don't mind it. For an example of why, watch this video detailing the history of the Amen Break. There are amazing things that can happen as a result of sharing, reusing and remixing, and that's only a single example. The current copyright environment seems to stifle such creativity, not least because copyright lasts so long (currently the life of the author plus 70 years). In a world where technology has enabled an entire generation to accelerate the creation and consumption of media, it seems foolish to lock up so much material for what could easily be over a century. Despite all that I've written, I have to admit that I don't have a definitive answer. I'm sure I can come up with something that would work for me, but this is larger than me. We all need to rethink this, and many other things. Maybe that Web 2.0 thing can help.
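As a quick sanity check on that "over a century" claim, the arithmetic is simple. Here's a trivial sketch in Python; the ages are made-up examples, not data about any particular author.

    # "Life of the author plus 70 years": a work published when its author
    # is 30, by an author who lives to 80, stays under copyright for
    # (80 - 30) + 70 = 120 years after publication.
    def years_protected(age_at_publication, age_at_death, term_after_death=70):
        return (age_at_death - age_at_publication) + term_after_death

    print(years_protected(30, 80))  # 120 years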

Update: This post has mutated into a monster. Not only is it extremely long, but I reference several other long, detailed documents and even somewhere around 20-25 minutes of video. It's a large subject, and I'm certainly no expert. Also, I generally like to take a little more time when posting something this large, but I figured getting a draft out there would be better than nothing. Updates may be made...

Update 2.15.07: Made some minor copy edits, and added a link to an Ars Technica article that I forgot to add yesterday.
Posted by Mark on February 14, 2007 at 11:44 PM .: link :.


End of This Day's Posts

Wednesday, January 24, 2007

Top 10 Box Office Performance
So after looking at a bunch of top 10 films of 2006 lists, and compiling my own, I began to wonder just how popular these movies really were. Film critics are notorious for picking films that the average viewer finds boring or pretentious. Indeed, my list features a few such picks, and when I think about it, there are very few movies on the list to which I'd give an unqualified recommendation. For instance, some of the movies on my list are very violent or otherwise graphic, and some people just don't like that sort of thing (understandably, of course). United 93 is a superb film, but not everyone wants to relive 9/11. And so on. As I mentioned before, top 10 lists are extremely personal and usually end up saying more about the person compiling the list than anything else, but I thought it would be interesting to see just how mainstream these lists really are. After all, there is a wealth of box office information available for every movie, and if you want to know how popular something is, economic data seems quite useful (though, as we'll see, perhaps not useful enough).

So I took nine top 10 lists (including my own) and compiled box office data from Box Office Mojo (since they don't always have budget information, I sometimes referenced IMDB or Wikipedia) and did some crunching (not much, I'm no statistician). I chose the lists of some of my favorite critics (like the Filmspotting guys and the local guy), and then threw in a few others for good measure (I wanted a New York critic, for instance).

The data collected includes domestic gross, budget and the number of theaters (widest release). From that data, I calculated the net gross and dollars per theater (DPT). You'd think this would be pretty conclusive data, but the more I thought about it, the more I realized just how incomplete a picture it paints. Remember, we're using this data to evaluate various top 10 lists, so when I chose domestic gross, I inadvertently skewed the evaluation against lists that featured foreign films (however, since I'm trying to figure out whose list works best in the U.S., I think it's a fair metric). So the gross only gives us part of the picture. The budget is an interesting metric, as it tells us how much money a film's backers thought it would make and provides a handy benchmark for evaluation (unfortunately, I was not able to find budget figures for a number of the smaller films, further skewing the totals you'll see). Net gross is a great metric because it incorporates a couple of different things: it's not just a measure of how popular a movie is, it's a measure of how popular a movie is versus how much it cost to make (i.e. how much a film's producers believed in the film). In the context of a top 10 list, it's almost like pretending that the list's creator was the head of a studio who chose which films to greenlight. It's not a perfect metric, but it's pretty good. The number of theaters the film showed in is interesting because it shows how much faith theater chains had in the movie (and in looking at the numbers, it seems that the highest grossing films also had the most theaters). However, this could be misleading because it's only the widest release; I doubt there are many films where the number of theaters doesn't drop considerably after opening weekend. Dollars per theater is perhaps the least interesting metric, but I thought it worth including.
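The arithmetic behind the table below is trivial, but for the sake of transparency, here's a small sketch in Python of how the per-list totals were computed. The two films in it are made-up placeholders, not figures from the actual data set; films with missing budget data simply dropped out of the budget totals, which is part of why the numbers are a bit skewed.

    # Per-list totals: sum the per-film figures, then derive net gross
    # (gross minus budget) and dollars per theater (gross / theaters).
    # The films below are hypothetical placeholders for illustration.
    films = [
        # (title, domestic gross, budget, widest theater count)
        ("Example Film A", 50_000_000, 20_000_000, 2_500),
        ("Example Film B", 12_000_000, 15_000_000, 800),
    ]

    gross = sum(g for _, g, _, _ in films)
    budget = sum(b for _, _, b, _ in films)
    theaters = sum(t for _, _, _, t in films)

    net_gross = gross - budget
    dpt = gross / theaters

    print(f"Gross: ${gross:,}  Budget: ${budget:,}  Net Gross: ${net_gross:,}")
    print(f"Theaters: {theaters:,}  DPT: ${dpt:,.2f}")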

One other thing to note is that I gathered all of this data earlier this week (Sunday and Monday), and some of the films have only recently hit wide distribution (notably Pan's Labyrinth and Children of Men, neither of which has recouped its costs yet) and will make more money. Some films will also be re-released around Oscar season, as the studios seek to cash in on their award-winning films.

I've posted all of my data on a public Google Spreadsheet (each list is on a separate tab), and I've linked each list below to their respective tab with all the data broken out. This table features the totals for the metrics I went over above: Domestic Gross, Budget, Net Gross, Theaters, and Dollars Per Theater (DPT).

List (Critic)                        Gross         Budget        Net Gross      Theaters   DPT
Kaedrin (Mark Ciocco)                $484,154,522  $319,850,000  $164,092,855   16,675     $29,034.75
Reelviews (James Berardinelli)       $586,767,062  $607,000,000  -$20,674,428   16,217     $36,182.22
Filmspotting (Adam Kempenaar)        $210,592,457  $234,850,000  -$27,159,180   8,589      $24,518.86
Filmspotting (Sam Van Hallgren)      $79,756,419   $152,204,055  -$73,445,839   4,467      $17,854.58
Philadelphia Inquirer (Steven Rea)   $236,690,299  $239,000,000  -$40,474,006   10,239     $23,116.54
The New York Times (A.O. Scott)      $104,484,584  $92,358,000   $11,238,032    3,641      $28,696.67
Rolling Stone (Peter Travers)        $419,088,036  $264,400,000  $119,130,515   14,784     $28,347.41
Washington Post (Stephen Hunter)     $540,183,488  $362,900,000  $169,683,807   15,394     $35,090.52
The Onion AV Club (Scott Tobias)     $195,779,774  $191,580,000  $1,308,777     6,844      $28,606.05


This was quite an interesting exercise, and it would appear from the numbers that perhaps not all film critics are as out of touch as originally thought. Or are they? Let's take a closer look.
  • Kaedrin (Mark Ciocco): The most surprising thing about my list is that every single film in my top 10 made a profit. In addition, my high net gross figure (around $164 million, which ended up being second out of the nine lists) isn't overly dependent on any single film (the biggest profit vehicle on my list was Inside Man, with about $43 million, or about a quarter of my net gross). The only real wild card here is Lady Vengeance, which made only about $212 thousand. Its budget figure wasn't available, and it was a foreign film that was only released in 15 theaters (I saw it on DVD). Given this data, I think my list is the most well-rounded of all the surveyed lists. Not to pat myself on the back here, but my list is among the top 3 for all of the metrics (and #1 in theaters). Plus, as you'll read below, the lists that appear ahead of me have certain outliers that skew the data a bit. However, even with all of that, I might not have the most mainstream list.
  • Reelviews (James Berardinelli): James is probably the world's greatest amateur critic, and his list is quite good (it shares 4 films with my own). Indeed, his list leads the domestic gross and budget categories, as well as dollars per theater. But look at that net gross metric! Almost -$21 million. Ouch. What happened? Superman Returns happened. It made a little more than $200 million at the box office, but it cost $270 million to make. This skews James' numbers considerably; he would have been around $50 million in the green if it weren't for Superman. He also has two films that were released in fewer than 25 theaters, which skews the numbers a bit as well.
  • Filmspotting (Adam Kempenaar): Of the two critics on the Filmspotting podcast, Adam is by far the one I agree with more often, but his list is among the more unprofitable ones. This is due in great part to his inclusion of Children of Men, which has only recently come out in wide release, and which still has to make almost $50 million before it recoups its cost (I think it will make more money, but not enough to break even). To a lesser extent, his inclusion of two foreign films (Pan's Labyrinth and Volver) has also skewed the results a bit (both films did well at the foreign box office). Given those disclaimers, Adam's list isn't as bad as it seems, but it's still not too hot. It is, however, better than his co-host's:
  • Filmspotting (Sam Van Hallgren): I think it's safe to say that Sam takes the award for least mainstream critic. He's got the worst Domestic Gross and Net Gross of the group, by a significant margin. Like his co-host Adam, this can partly be explained by his inclusion of Children of Men and other small, independent, or foreign films. But it's a pretty toxic list. Only two films on his list turned a profit, which is a pretty miserable showing. Interestingly enough, I still think Sam is a pretty good critic. You don't have to agree with a critic to get something useful out of them, and I know what I'm getting with Sam. Plus, it helps that he's got a good foil in his co-host Adam.
  • Philadelphia Inquirer (Steven Rea): I kinda like my local critic's list, and it's definitely worth noting that his pick of the Chinese martial arts epic Curse of the Golden Flower has impacted his list considerably (as a high-budget foreign film that did well internationally, but which understandably didn't do that great domestically). That choice alone (-$40 million) put him in the red. He's also got Pan's Labyrinth on his list, which will go on to make more money. Plus, he suffers from a data problem in that I couldn't find budget figures for The Queen, which has made around $35 million and almost certainly turned a profit. Even with those caveats, he's still only treading water.
  • The New York Times (A.O. Scott): I wanted to choose a critic from both New York and LA (since most LA critics seemed to have a lot of ties in their lists, I decided not to include them), and A.O. Scott's list provides a decent example of why. Three of his picks were shown in 6 theaters or fewer. This is more or less what you'd expect from a New York critic. New York is one of the two cities that gets these small movies, so you'd expect its critics to show their superiority by including such films on their lists (I'm sure they're good films too, but I think this is an interesting dynamic). In any case, it's worth noting that Mr Scott (heh) actually turned a profit. How could this be? Well, he included Little Miss Sunshine on his list. That movie has a net gross of around $50 million, which gave Mr Scott significant breathing room for his other picks.
  • Rolling Stone (Peter Travers): I've always thought of this guy as your typical critic who doesn't like anything popular, but his list is pretty decent, and he turns out to be near the top in terms of net gross with $119 million. One caveat here is that he does feature a tie in his list (so he has 11 films), but the tie consists of the two Clint Eastwood war flicks, both of which have lost considerable amounts of money (in other words, this list is actually a little undervalued by my metrics). So how did his list get so high? He also had Little Miss Sunshine on his list, which, as already mentioned, was quite the moneymaker. But even bigger than that, he included Borat. Borat is a low-budget movie that made huge amounts of cash, and its net gross comes in at almost $110 million! Those two films account for the vast majority of his net gross. However, of all the lists, I think his is probably the most mainstream (while still retaining a critic's edge), and it gives my list a run for its money.
  • Washington Post (Stephen Hunter): I wanted to choose a critic from WaPo because it's one of the other "papers of record," and much to my amazement, his list turns out to have the highest net gross! He seems to feature the most obscure picks, with 4 films that I couldn't even find budget data for (but which seem pretty small anyway). He's got both Little Miss Sunshine and Borat, which proves to be quite a profitable duo, and he's also got big moneymakers like The Departed and Casino Royale. It's an interesting list.
  • The Onion AV Club (Scott Tobias): He scrapes by with around $1 million net gross, though it should be noted that his list features Children of Men (a big loss film) and a couple of movies that I couldn't find budgets for. It's an interesting list, but it comes in somewhere around the upper middle of the pack.
Whew! That took longer than I thought. Which critic is the most mainstream? I think a case could be made for my list, Peter Travers' list, or Stephen Hunter's list. I think I'd give it to Peter Travers, with myself in a close second place and Stephen Hunter nipping at our heels.

Statistically, the biggest positive outliers appeared to be Little Miss Sunshine and Borat, and the biggest negative outliers appeared to be Flags of our Fathers and Children of Men (both of which will make more money, as they are currently in theaters).

Obviously, this list is not authoritative, and I've already spent too much time harping on the qualitative issues with my metrics, but I found it to be an interesting exercise (if I ever do something similar again, I'm going to need to find a way to automate some of the data gathering, though). Well, this pretty much shuts the door on the 2006 Kaedrin Awards season. I hope you enjoyed it.
Posted by Mark on January 24, 2007 at 11:40 PM .: link :.


End of This Day's Posts

Sunday, January 21, 2007

Best Films of 2006
Top 10 lists are intensely personal affairs. When it comes to movies (or art in general), you have to walk the narrow line between subjective and objective evaluations, and I inevitably end up with a list that says more about me than the movies I selected. James Berardinelli says it well:
I would be surprised if anyone else (critic or otherwise) has an identical Top 10 list to mine. But therein lies the enjoyment of examining individual Top 10 lists: they provide insight into the mindset of the one who has assembled them. It doesn't matter whether one agrees with their choices or not; that's irrelevant. It's about learning something about a person through the movies they like. I don't like "group" lists. To me, they are valueless - a generic popularity contest that reveals nothing.
I actually kinda like "group" lists, but I digress. The point is that these are generally movies that I liked or that otherwise moved me. Context matters. Some films are on the list because I had low expectations that were exceeded beyond imagination, and some are there because I had a great theater-going experience (apparently a rarity in this day and age). As I've done in years past, my top 10 is listed in roughly reverse order, with the best last.

Top 10 Movies of 2006
* In roughly reverse order
  • Thank You for Smoking: The bottom two slots in the top 10 were very hard to fill, as there were essentially 4 films (with 4 very different styles) I wanted to include. I went into this film expecting bland, heavy-handed activism and found myself astounded. This film somehow manages to make a tobacco lobbyist a sympathetic character without excusing the tobacco industry. That said, big tobacco really isn't the target of the film - it's more about media spin and the power of argument than anything else. Aaron Eckhart turns in a great performance as said lobbyist, and I'm not sure anyone else could have pulled this off. It's a humorous film that displays an almost libertarian attitude towards the power of debate. It has its flaws, but it won me over.
    More Info: [IMDB] [Amazon]
  • The Descent: This was the best horror film of the year, and one of the most enjoyable moviegoing experiences as well. Solid direction and acting, brilliant cinematography, and well executed scare sequences contribute to a tension filled film.
    More Info: [IMDB] [Amazon] [Full Review]
  • Clerks II: What can I say, I'm just a sucker for Kevin Smith's brand of raunchy pop-culture laden humor. As usual, he mixes the comedy into a more conventional dramatic story, and in this case, he's more than successful. Borat was funny, but Clerks II was both funny and moving.
    More Info: [IMDB] [Amazon]
  • Casino Royale: I've never been all that enamored with James Bond, but this reboot of the franchise was a revelation - quite possibly the most enjoyable moviegoing experience and pleasant surprise of the year for me. The film has its flaws, but it overcomes them with its action-packed charm.
    More Info: [IMDB] [Amazon] [Winner of 3 Kaedrin Movie Awards]
  • Inside Man: I'm not normally a fan of Spike Lee "Joints," but this film had me on the edge of my seat. It's a heist film, though it does make use of a historical implausibility and some macguffins. There are hints of Lee's more typical material, but it's done with a surprisingly deft touch (none of the heavy-handedness that I expected from him). Not the best heist film of all time, but a solid and surprisingly entertaining film.
    More Info: [IMDB] [Amazon]
  • Lady Vengeance: The third and final film in Chan-wook Park's "Vengeance Trilogy," this film has a reputation for being the worst of the three. I, on the other hand, think it might be my favorite, for two reasons. First, its story is far more believable than the other two, and second, this film actually ends with a touch of hope. The film is perhaps not as twisted as its sister films, but it's still pretty messed up. The vengeance isn't as layered as in the other films, but that only serves to differentiate them. I enjoyed it a lot.
    More Info: [IMDB] [Amazon]
  • Hard Candy: It is perhaps an uncomfortable film to watch (especially for the guys), but it is also quite a good film. It deals with pedophilia and features only two characters and one major setting. Given these traits, it's amazing that the film manages to retain a lot of tension and challenge viewers with its shifting sympathies. Excellent performances by both leads, though Ellen Page's performance is particularly noteworthy.
    More Info: [IMDB] [Amazon] [Capsule Review]
  • Brick: Sam Spade goes to high school in this remarkable high-concept mixture of genres. Writer/director Rian Johnson nails the tone of the film, creating a stylized world filled with mixtures of the old and new. Perhaps not for everyone, but I thoroughly enjoyed it.
    More Info: [IMDB] [Amazon] [Capsule Review]
  • The Departed: Scorsese returns to form with this violent, stylized remake of Infernal Affairs. Excellent directing, acting, and music, and an engaging story that retains the original's feel while adding some flourishes of its own.
    More Info: [IMDB] [Amazon]
  • United 93: A movie about 9/11 could have come off as horribly exploitive, but director Paul Greengrass managed to create an amazingly emotional experience without being manipulative. Unquestionably the most emotional experience I had at the movies this year (if not ever), for what I assume are obvious reasons.
    More Info: [IMDB] [Amazon]
Honorable Mention
As I've already mentioned above, the first two of the Honorable Mentions listed below could probably be interchanged with the number 9 or 10 slots in the top 10. Part of why it was so hard to choose is that these four films are just so different from one another. Indeed, the last two spots have changed back and forth several times (I started this list a while ago).
  • Pan's Labyrinth: This could easily have been 9 or 10 on my list. Guillermo del Toro's visually stunning tale of a young girl who seeks to escape her unpleasant reality with a fantasy world which ends up being... not much of an escape. It's a great film, if a little bit of a downer. It actually ends on a note that is simultaneously tragic and triumphant, which is strange but impressive. Ultimately, I decided against it because it just didn't surprise and excite me the way the other films on the list did.
    More Info: [IMDB] [Amazon]
  • The Matador: Pierce Brosnan plays against character (the anti-Bond) in this quirky film about a hit man (Brosnan) and his unlikely friendship with everyman/businessman Greg Kinnear. Dark humor, a sharp script and a progression that seems strange at first, but makes more sense as the film goes on. Again, this is interchangeable with the 9 or 10 picks above, and it's probably more of a crowd-pleaser than you'd expect.
    More Info: [IMDB] [Amazon]
  • The Proposition: An Australian take on the western, this is a brutal film that is quite original, but also lacking something. Showcasing the grimy desolation of the untamed outback, this film also features one of the best opening scenes of the year (a disorienting gunfight that thrusts you into the story). Ultimately, it doesn't work as well as it might seem, but it's an interesting film.
    More Info: [IMDB] [Amazon]
  • Apocalypto: Mel Gibson's offscreen shenanigans aside, this is actually a decent action/suspense film with one of the better chase sequences of the year. I didn't think I'd be all that enthralled with the setting of the film, but Gibson managed to keep things interesting enough. A well made film that was nowhere near the disaster I thought it would be (seriously, who watched that trailer and thought it would be good?)
    More Info: [IMDB] [Amazon]
  • The Fountain: Darren Aronofsky's trippy exploration of love and mortality is best described by the phrase "Interesting Failure." It is undoubtedly the most gorgeous movie of the year, and all of the technical aspects of the film (direction, acting, cinematography, etc...) are outstanding. Unfortunately, it doesn't add up to a whole lot, though there are deeper themes at work in the story that I admit I haven't taken the time to parse (repeated viewings may fix that).
    More Info: [IMDB] [Amazon] [Full Review]
  • Mission Impossible III: Tom Cruise's offscreen shenanigans aside (do we see a trend here?), MI III was actually one of the more enjoyable popcorn flicks of last summer. I think a large portion of the credit goes to Philip Seymour Hoffman's small role as the villain. It's probably the most enjoyable in the series, though I still don't mind the first film.
    More Info: [IMDB] [Amazon]
  • The Illusionist: One of two good turn-of-the-century magician films, this movie was enjoyable. Writer/director Neil Burger makes some interesting stylistic choices and manages to coax a good performance out of Jessica Biel of all people. Ed Norton and Paul Giamatti are also excellent, of course.
    More Info: [IMDB] [Amazon]
  • The Prestige: The other (and seemingly more popular) turn-of-the-century magician film features an excellent cast and an intriguing story (even though I think they cheated a bit). Director Christopher Nolan is not as stylish as Burger, but he has crafted a good film.
    More Info: [IMDB] [Amazon]
  • Slither: Underrated and fun film in the cheesy horror/sci-fi/comedy tradition of Tremors. It's not the best of its kind, but it was quite enjoyable and well done.
    More Info: [IMDB] [Amazon]
Worth Commenting
These are all decent films, but for some reason, I don't find them as engaging as everyone else does.
  • Children of Men: If there is a film that has less faith in humanity, I can't think of one. This is one of the most depressing films of the year, and a few minutes of what I thought was "pretend hope" towards the end of the movie wasn't enough to redeem it in my eyes. It's well made, and there are some harrowing action sequences and long shots that are quite impressive, but it's fundamentally pessimistic - a trait I just can't stand in a movie.
    More Info: [IMDB] [Amazon]
  • Little Miss Sunshine: A fine film, but I must admit being a little baffled by the popular response to this movie. It's not your typical Hollywood fare, which might be part of it, but it is emphatically your typical independent movie fare. I liked it, but didn't love it.
    More Info: [IMDB] [Amazon]
  • V for Vendetta: A decent film that I found to be very sloppy and not all that engaging. The story seemed muddled, unnecessarily repetitive and manipulative, and the action sequences were edited to death. It wasn't a bad movie, but it wasn't that great either.
    More Info: [IMDB] [Amazon]
Should have seen: Allrighty then! That about wraps it up for the 2006 movie awards, and it's about time. That said, I do have another idea for a post related to my top 10. Don't worry, it's not all about the movies (it's more of a meta-top-10 type post, whatever that means).

In any case, comments are welcome. Feel free to express your outrage or approval in the comments.
Posted by Mark on January 21, 2007 at 10:06 PM .: link :.


End of This Day's Posts

Sunday, December 03, 2006

Aliens Board Game
A little while ago, I became reacquainted with a game that I used to play often - the Aliens board game. I hadn't played the game in about ten years, and I found it interesting for a number of reasons. Gameplay is a bit of a mixture of other gaming styles, combining the arbitrary nature and futility of board games with the wonky dice and damage-table style of RPGs (OK, you shot the alien with your pulse rifle. Roll for acid!). I noticed a few things about the game that I never did before, some good, some bad.

Before I get into those observations, I'll have to explain the mechanics of the game a bit. The game comes with a few maps and a couple of scenarios you can play, each of which basically re-enacts a memorable scene from the movie where the colonial marines get their asses handed to them (i.e. the initial encounter with the aliens under the reactor, the later encounter and retreat through the air ducts, and a single-player scenario where Ripley rescues Newt and fights the alien queen). There was also an expansion pack which featured an additional scenario. Since we'd all played the game countless times in our youth, we decided to mix things up a little and combine the regular map with the expansion map. Basically, we start at one end of the map and have to make our way to the other end. This is easier said than done.

We handed out all the player cards randomly. Most of the characters are colonial marines, but there is a surprising amount of variability between characters and their abilities. Most characters get two moves per turn, though Ripley, Apone, and Bishop get three. In terms of weaponry, some of the characters are significantly better than others. Hicks, Ripley and Apone have quality weapons to choose from. Drake and Vasquez have those awesome smart guns. On the opposite end of the spectrum, there's the Burke character, who has no weapons (he's essentially used as alien bait, as he should be). Since there were only a few of us, we each got multiple characters to play (which is a good thing, for reasons I'll get into in a moment). I ended up with three relatively lame characters: Corporal Dietrich (armed with only a pistol), Lieutenant Gorman (whose pulse rifle was the most powerful weapon in my group), and Private Wierzbowski (armed with an incinerator). Gorman's an OK character to play, except he's a tool in the movie. Dietrich isn't quite as useless as Burke, but damn near so. Wierzbowski isn't the greatest character to play, but he's awesome in the movie (The Wierzbowski Hunters are one of those wonderful phenomena that could only exist on the internet).

[Screenshot: our characters on the game map]
That's it man, game over man, game over!

As already mentioned, our goal is to make our way from one side of the map to the other. Every turn, four aliens are added to the board in semi-random places (as the game proceeds, more aliens are added per turn). While most of the players only have two moves per turn, the aliens have four moves. If an alien enters on or next to your position, you have to roll a ten sided die. Most of the time, the result is that you are "grabbed" by the alien. Essentially, you need to be rescued by one of the other players, illustrating the cooperative nature of the game.
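If it helps to see that turn structure laid out, here's a tiny sketch in Python of the contact roll described above. The real game resolves these checks with a results table that I don't have in front of me, so the "grabbed on a 4 or higher" threshold below is a placeholder assumption standing in for "most of the time," not the published rule.

    import random

    def contact_check():
        # Roll a ten-sided die when an alien ends its move on or next to
        # a marine. The 4+ threshold is a placeholder, not the actual table.
        roll = random.randint(1, 10)
        return "grabbed" if roll >= 4 else "shrugged it off"

    results = [contact_check() for _ in range(10_000)]
    grabbed = results.count("grabbed") / len(results)
    print(f"grabbed {grabbed:.0%} of the time")  # grabbed marines need a rescue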

So the game begins, and the initial four aliens are inserted onto the board. The way the game goes for a while is that we take out all of the aliens, and move forward if possible. Eventually my characters are leading the pack and make it to the next map (half way there!), and the DM equivalent decides that we need to start adding more aliens per turn. At this point, we're fending off aliens from all directions, and we start to take on more and more casualties. Some aspects of the game were becoming clearer to me:
  • Weapons & Range: As I previously noted, my three characters were armed with a pistol, a pulse rifle, and an incinerator. The pistol is next to useless (if you ever play the game, don't choose the pistol - use the incinerator), as its range is absurdly low and even then, you have to make a tough roll to hit your target. The pulse rifle is actually a decent weapon with a good, long range. The incinerator is another short range weapon, and I cannot use it to rescue any of my teammates (I could kill the alien, but I'd also be burning my teammate).
  • Turns: As it turns out, I'm the last person to go each turn, so in addition to my mostly short-range weapons, there usually aren't any aliens left for me to shoot at. So every turn, I end up moving forward, while everyone else is stuck rescuing their teammates (sometimes me, even though I can't return the favor).
  • The Aliens: Even if you don't start adding more and more aliens per turn, the game becomes more challenging because as you progress throughout the map, the aliens begin to surround you and they're more difficult to attack when they're coming from multiple directions (if you can get two aliens lined up in a row, a single shot can kill both aliens...)
As a result of my turn placement and my characters' lame short-range weapons, I ended up leading the pack. Lieutenant Gorman, my only decent combat soldier, got attacked by an alien relatively early on, and when a teammate shot the alien, Gorman got sprayed by acid and died. This left me with Dietrich (pistol) and Wierzbowski (incinerator).

We had come to a standoff. The second map had more walls and obstructed views, so it took the aliens longer to reach us, but we also couldn't pick them off from afar. Wierzbowski finally proved useful, as you can use the incinerator to set up a "fire wall" that the aliens can't cross for a turn (This ability is particularly useful on the second map because of all the choke points). Still, our ranks were being worn down. I was able to block the forward onslaught, but the aliens came in on the flank and mounted a devastating attack. More than 50% of the original team had perished, and some of us were wounded (which makes it harder to hit targets). Dietrich had become completely disabled, so I had Wierzbowski pick her up in the hopes of feeding her to an alien if I got into trouble.

The game was running a little long at this point, so the DM decided to insert the alien queen (this isn't really supposed to happen, but we like a challenge). The queen is significantly more difficult to deal with, and she managed to kill the remainder of our team... except Wierzbowski who had made his way into a room with a single block choke point. Using the firewall ability, I was able to make it to the final hallway before being attacked. I managed to take out a couple of aliens with my incinerator, but I had to sacrifice Dietrich in order to get away. Alas, the queen had made her way around, and the valiant Wierzbowski finally succumbed to her deadly advance.

Our variations on the rules aside, it's actually a pretty well balanced game. The aliens are appropriately formidable, and they only become more so as the game progresses. As in the movie, you can't really complete a scenario without taking significant casualties, and even though our team did pretty well, there's no guarantee that we'd have made it (even if we didn't add the queen). The game was made in 1989 and is no longer available. You can find it on eBay, but it commands a relatively high price, and it's not really worth that price these days. In the 90s, the game was a lot of fun; other games have since far surpassed it (especially video games). Still, it's nice to play an old favorite every now and again.

* I should note that the game does not come with those nice figurines in the picture above. The game has these chintzy cardboard pieces with pictures of the characters and aliens. Functional, but not as nice as the figurines. Also, yes, I'm a huge nerd and can name all the colonial marines without having to look them up.
Posted by Mark on December 03, 2006 at 08:04 PM .: link :.


End of This Day's Posts

Wednesday, November 29, 2006

Animation Marathon: Grave of the Fireflies
Of the six films chosen for the Animation Marathon, Grave of the Fireflies was the only one that I hadn't heard much about. The only thing I knew about it was that it was sad. Infamously sad. After watching the movie, I can say that it certainly does live up to those expectations. It's a heartbreaking movie, all the more so because it's animated. Spoilers ahead...

The film begins by showing us a 14 year old boy lying dead on a subway platform, so you can't really say that the filmmakers were trying to hide the tragedy in this film. The boy's name is Seita, and through flashbacks, we learn how he came to meet his end. Set during the last days of World War II, the story is kicked off by the American firebombing of Seita's city. Seita's father is in the Japanese Navy, and Seita's mother is horribly wounded by the bombing, eventually succumbing to her wounds. The entire city is destroyed, leaving Seita and his little 4 year old sister Setsuko homeless. For a time, they take refuge with an aunt, who seems nice at first, but gets grumpier as she realizes that Seita isn't willing to contribute to the war effort, or to help around the house. Eventually, Seita finds an unused bomb shelter where he can live with his sister without being a burden on their aunt. It being wartime, food is scarce, and Seita struggles and ultimately fails to support his sister.

This isn't quite like any other animated movie I've ever seen. It's a powerful and evocative film. It has moments of great beauty, even though it's also quite sad. It displays a patience that's not common in animated movies. There are contemplative pauses. Characters and their actions are allowed time to breathe. The animation is often visually striking, even when it's used in service of less-than-pleasant events (such as the landscape shot of the city as it burns).

After I finished the film, I was infuriated. Obviously no one really enjoys watching two kids starve, suffer, and die after losing their family and home to a war, but it's not just sad. As I said before, it's infuriating. I was so pissed off at Seita because he made a lot of boneheaded, prideful decisions that were ultimately responsible for the death of his sister (and eventually, himself). At one point in the film, as Seita begs a farmer for food, the farmer tells him to swallow his pride and go back to his aunt. Seita refuses, and hence the tragedy. But at least he's young and thus reckless, which is understandable. While I was upset at Seita's actions, I really couldn't blame only him, and the film did prompt some empathy for that character. I can't say the same of the aunt. Who lets two young kids go off to live by themselves in wartime? Yeah, Seita wasn't pulling his weight, but hell, your job as an adult is to teach children about responsibilities... It was wartime, for crying out loud. There had to be plenty to do. Yeah, it's sad. Especially when it comes to Setsuko, who was only 4 years old. But other than that, it was infuriating, and I wasn't sure how I was going to rate the movie. Then I read about some context in the Onion A.V. Club review of the movie (emphasis mine):
Adapting a semi-autobiographical book by Akiyuki Nosaka, Takahata scripted and directed Fireflies while his Studio Ghibli partner, Hayao Miyazaki, was scripting and directing his own classic, My Neighbor Totoro. The two films were produced and screened as a package, because Totoro was considered a difficult sell, while Fireflies, as an "educational" adaptation of a well-known historical book, had a guaranteed audience. But while both films won high praise at home and abroad, it's hard to imagine the initial impact of watching them back to back. Totoro is a bubbly, joyous film about the wonders of childhood, while Fireflies follows two children as they starve, suffer, and die after American planes firebomb their town.

...Nosaka, who lost his own young sister under similar circumstances, apparently intended his book in part to chronicle his shameful pride, while Takahata explains ... that he wanted viewers to learn a moral lesson from Seita's hubris. Instead, he reports, they mostly sympathized with the boy, which is easy to do.
It turns out that my feelings about the film were exactly what the filmmakers were going for, which kinda turned me around and made me realize that the film really is brilliant (in other words, my expectation that the film would simply be "sad" left me feeling strange because, while it was certainly sad, it was also infuriating. Now that I know the infuriating part was intentional, it makes a lot more sense.) As the Onion article brilliantly summarizes, the film is "not so much an anti-war statement as it is a protest against basic human selfishness, and the way it only worsens during trying times." And that's sad, but it's also quite annoying.

The animation is very well done, and while some might think that something this serious would not be appropriate in animation, I'm not sure it would work any other way. One of the most beautiful scenes in the film shows the two children using fireflies to light their abandoned bomb shelter. It's a scene I think would look cheesy and fake in a live action film, but which works wonderfully in an animated film. Roger Ebert describes it well:
It isn't the typical material of animation. But for "Grave of the Fireflies," I think animation was the right choice. Live action would have been burdened by the weight of special effects, violence and action. Animation allows Takahata to concentrate on the essence of the story, and the lack of visual realism in his animated characters allows our imagination more play; freed from the literal fact of real actors, we can more easily merge the characters with our own associations.
In the end, while this is definitely an excellent film, I find it difficult to actually recommend it (for what I hope are obvious reasons). This type of movie is not for everyone, and while I do think it is brilliantly executed, I don't especially want to watch it again. Ever. In an odd sort of way, that's a testament to how well the film does what it does. (***1/2)

Filmspotting's review is not up yet, but should be up tomorrow. Check it out, as they are also reviewing The Fountain (which I reviewed on Monday).

(In a strange stroke of coincidence, I had actually watched Miyazaki's My Neighbor Totoro just a few days before Fireflies, not quite mimicking the back to back screenings mentioned in the Onion article, but close enough to know that it was an odd combo indeed (and I can't imagine the playful and fun Totoro being a "harder sell" than the gut-punch of Fireflies.))
Posted by Mark on November 29, 2006 at 11:25 PM .: Comments (4) | link :.


End of This Day's Posts

Sunday, October 29, 2006

Adventures in Linux, Paradox of Choice Edition
Last week, I wrote about the paradox of choice: having too many options often leads to something akin to buyer's remorse (paralysis, regret, dissatisfaction, etc...), even if the choice you end up making is ultimately a good one. I had attended a talk given by Barry Schwartz on the subject (which he's written a book about) and I found his focus on the psychological impact of making decisions fascinating. In the course of my ramblings, I made an offhand comment about computers and software:
... the amount of choices in assembling your own computer can be stifling. This is why computer and software companies like Microsoft, Dell, and Apple (yes, even Apple) insist on mediating the user's experience with their hardware & software by limiting access (i.e. by limiting choice). This turns out to be not so bad, because the number of things to consider really is staggering.
The foolproofing that these companies do can sometimes be frustrating, but for the most part, it works out well. Linux, on the other hand, is the poster child for freedom and choice, and that's part of why it can be a little frustrating to use, even if it is technically a better, more stable operating system (I'm sure some OSX folks will get a bit riled with me here, but bear with me). You see this all the time with open source software, especially when switching from regular commercial software to open source.

One of the admirable things about Linux is that it is very well thought out, and nearly every design decision is made for a specific reason. The problem, of course, is that those reasons tend to have something to do with making programmers' lives easier... and most regular users aren't programmers. I dabble a bit here and there, but not enough to really benefit from these efficiencies. I learned most of what I know working with Windows and Mac OS, so when some enterprising open source developer decides that he doesn't like the way a certain Windows application works, you end up seeing some radical new design or paradigm which needs to be learned in order to use it. In recent years a lot of work has gone into making Linux friendlier for the regular user, and usability (especially during the installation process) has certainly improved. Still, a lot of room for improvement remains, and I think part of that has to do with the number of choices people have to make.

Let's start at the beginning and take an old Dell computer that we want to install Linux on (this is basically the computer I'm running right now). First question: which distribution of Linux do we want to use? Well, to be sure, we could start from scratch and just install the Linux kernel and build upwards from there (which would make the process I'm about to describe even more difficult). However, even Linux has its limits, so there are lots of distributions of Linux which package the OS, desktop environments, and a whole bunch of software together. This makes things a whole lot easier, but at the same time, there are a ton of distributions to choose from. The distributions differ in a lot of ways for various reasons, including technical (issues like hardware support), philosophical (some distros pooh-pooh commercial involvement) and organizational (things like support and updates). These are all good reasons, but when it's time to make a decision, what distro do you go with? Fedora? Suse? Mandriva? Debian? Gentoo? Ubuntu? A quick look at Wikipedia reveals a comparison of Linux distros, but there are a whopping 67 distros listed and compared in several different categories. Part of the reason there are so many distros is that there are a lot of specialized distros built off of a base distro. For example, Ubuntu has several distributions, including Kubuntu (which defaults to the KDE desktop environment), Edubuntu (for use in schools), Xubuntu (which uses yet another desktop environment called Xfce), and, of course, Ubuntu: Christian Edition (Linux for Christians!).

So here's our first choice. I'm going to pick Ubuntu, primarily because their tagline is "Linux for Human Beings" and hey, I'm human, so I figure this might work for me. Ok, and it has a pretty good reputation for being an easy to use distro focused more on users than things like "enterprises."

Alright, the next step is to choose a desktop environment. Lucky for us, this choice is a little easier, but only because Ubuntu splits desktop environments into different distributions (unlike many others which give you the choice during installation). For those who don't know what I'm talking about here, I should point out that a desktop environment is basically an operating system's GUI - it uses the desktop metaphor and includes things like windows, icons, folders, and abilities like drag-and-drop. Microsoft Windows and Mac OSX are desktop environments, but they're relatively locked down (to ensure consistency and ease of use (in theory, at least)). For complicated reasons I won't go into, Linux has a modular system that allows for several different desktop environments. As with Linux distributions, there are many desktop environments. However, there are really only two major players: KDE and Gnome. Which is better appears to be a perennial debate amongst Linux geeks, but they're both pretty capable (there are a couple of other semi-popular ones like Xfce and Enlightenment, and then there's the old standby, twm (Tom's Window Manager)). We'll just go with the default Gnome installation.

Note that we haven't even started the installation process and if we're a regular user, we've already made two major choices, each of which will make you wonder things like: Would I have this problem if I installed Suse instead of Ubuntu? Is KDE better than Gnome?

But now we're ready for installation. This, at least, isn't all that bad, depending on the computer you're starting with. Since we're using an older Dell model, I'm assuming that the hardware is fairly standard stuff and that it will all be supported by my distro (if I were using a more bleeding-edge box, I'd probably want to check out some compatibility charts before installing). As it turns out, Ubuntu, with its focus on creating a distribution that human beings can understand, has a pretty painless installation. It was actually a little easier than Windows, and when I was finished, I didn't have to remove the mess of icons and trial software offers (purchasing a Windows PC through someone like HP is apparently even worse). When you're finished installing Ubuntu, you're greeted with a desktop that looks like this (click the pic for a larger version):

Default Ubuntu Desktop (click for larger)

No desktop clutter, no icons, no crappy trial software. It's beautiful! It's a little different from what we're used to, but not horribly so. Windows users will note that there are two bars, one on the top and one on the bottom, but everything is pretty self explanatory and this desktop actually improves on several things that are really strange about Windows (i.e. to turn off your computer, you first click on "Start!"). Personally, I think having two toolbars is a bit much, so I get rid of one of them and customize the other so that it has everything I need (I also put it at the bottom of the screen for several reasons I won't go into here, as this entry is long enough as it is).

Alright, we're almost home free, and the installation was a breeze. Plus, lots of free software has been installed, including Firefox, Open Office, and a bunch of other good stuff. We're feeling pretty good here. I've got most of my needs covered by the default software, but let's just say we want to install Amarok so that we can update our iPod. Now we're faced with another decision: How do we install this application? Since Ubuntu has so thoughtfully optimized their desktop for human use, one of the things we immediately notice in the "Applications" menu is an option which says "Add/Remove..." and when you click on it, a list of software comes up; it appears that all you need to do is select what you want and it will install it for you. Sweet! However, the list of software there doesn't include every program, so sometimes you need to use the Synaptic package manager, which is also a GUI application installation program (though it appears to break each piece of software into smaller bits). Also, in looking around the web, you see that someone has explained that you should download and install software by typing this at the command line: apt-get install amarok. But wait! We really should be using the aptitude command instead of apt-get to install applications.

If you're keeping track, that's four different ways to install a program, and I haven't even gotten into repositories (main, restricted, universe, multiverse, oh my!), downloadable package files (these operate more or less the way a Windows user would download a .exe installation file, though not exactly), let alone downloading the source code and compiling (sounds fun, doesn't it?). To be sure, they all work, and they're all pretty easy to figure out, but there's little consistency, especially when it comes to support (most of the time, you'll get a command line in response to a question, which is completely at odds with the expectations of someone switching from Windows). Also, in the case of Amarok, I didn't fare so well (for reasons belabored in that post).
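For the curious, here's roughly what those command-line routes look like, using Amarok as the example package (just a sketch: the .deb and tarball filenames below are made-up placeholders, and exact package names and steps can vary between Ubuntu releases):

    # the two package-manager commands mentioned above
    sudo apt-get install amarok
    sudo aptitude install amarok

    # installing a downloaded package file (placeholder filename)
    sudo dpkg -i amarok_1.4_i386.deb

    # the classic compile-from-source route (hypothetical tarball)
    tar xzf amarok-1.4.tar.gz && cd amarok-1.4
    ./configure && make && sudo make install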

Once installed, most software works pretty much the way you'd expect. As previously mentioned, open source developers sometimes get carried away with their efficiencies, which can be confusing to a newbie, but for the most part, it works just fine. There are some exceptions, like the absurd Blender, but then, that's not exactly an application that everyone needs.

Believe it or not, I'm simplifying here. There are that many choices in Linux. Ubuntu tries its best to make things as simple as possible (with considerable success), but when using Linux, it's inevitable that you'll run into something that requires you to break down the metaphorical walls of the GUI and muck around in the complicated swarm of text files and command lines. Again, it's not that difficult to figure this stuff out, but all these choices contribute to the same decision fatigue I discussed in my last post: anticipated regret (there are so many distros - I know I'm going to choose the wrong one), actual regret (should I have installed Suse?), dissatisfaction, escalation of expectations (I've spent so much time figuring out what distro to use that it's going to perfectly suit my every need!), and leakage (i.e. a bad installation process will affect what you think of a program, even after installing it - your feelings before installing leak into the usage of the application).

None of this is to say that Linux is bad. It is free, in every sense of the word, and I believe that's a good thing. But if they ever want to create a desktop that will rival Windows or OSX, someone needs to create a distro that clamps down on some of these choices. Or maybe not. It's hard to advocate something like this when you're talking about software that is so deeply predicated on openness and freedom. However, as I concluded in my last post:
Without choices, life is miserable. When options are added, welfare is increased. Choice is a good thing. But too much choice causes the curve to level out and eventually start moving in the other direction. It becomes a matter of tradeoffs. Regular readers of this blog know what's coming: We don't so much solve problems as we trade one set of problems for another, in the hopes that the new set of problems is more favorable than the old.
Choice is a double edged sword, and by embracing that freedom, Linux has to deal with the bad as well as the good (just as Microsoft and Apple have to deal with the bad aspects of suppressing freedom and choice). Is it possible to create a Linux distro that is as easy to use as Windows or OSX while retaining the openness and freedom that makes it so wonderful? I don't know, but it would certainly be interesting.
Posted by Mark on October 29, 2006 at 07:18 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, October 22, 2006

The Paradox of Choice
At the UI11 Conference I attended last week, one of the keynote presentations was made by Barry Schwartz, author of The Paradox of Choice: Why More Is Less. Though he believes choice to be a good thing, his presentation focused more on the negative aspects of offering too many choices. He walks through a number of examples that illustrate the problems with our "official syllogism" which is:
  • More freedom means more welfare
  • More choice means more freedom
  • Therefore, more choice means more welfare
In the United States, we have operated as if this syllogism is unambiguously true, and as a result, we're deluged with choices. Just take a look at a relatively small supermarket: there are 285 cookies, 75 iced teas, 275 cereals, 40 toothpastes, 230 soups, and 175 salad dressings (not including 12 extra virgin olive oils and 18 vinegars which could be combined to make hundreds of vinaigrettes) to choose from (and this was supposedly a smaller supermarket). At your typical Circuit City, the sheer breadth of stereo components allows you to create any one of 6.5 million possible stereo systems. And this applies all throughout our lives, extending even to working, marriage, and whether or not to have children. In the past, these things weren't much of a question. Today, everything is a choice. [thanks to Jesper Rønn-Jensen for his notes on Schwartz's talk - it's even got pictures!]
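(A quick back-of-the-envelope check on that vinaigrette aside, using only the counts cited above: 12 olive oils × 18 vinegars works out to 216 possible pairings, so "hundreds" is about right. Presumably the 6.5 million stereo figure comes from the same sort of multiplication, just across many more component categories.)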

So how do we react to all these choices? Luke Wroblewski provides an excellent summary, which I will partly steal (because, hey, he's stealing from Schwartz after all):
  • Paralysis: When faced with so many choices, people are often overwhelmed and put off the decision. I often find myself in such a situation: Oh, I don't have time to evaluate all of these options, I'll just do it tomorrow. But, of course, tomorrow is usually not so different than today, so you see a lot of procrastination.
  • Decision Quality: Of course, you can't procrastinate forever, so when forced to make a decision, people will often use simple heuristics to evaluate the field of options. In retail, this often boils down to evaluation based mostly on Brand and Price. I also read a recent paper on feature fatigue (full article not available, but the abstract is there) that fits nicely here.

    In fields where there are many competing products, you see a lot of feature bloat. Loading a product with all sorts of bells and whistles will differentiate that product and often increase initial sales. However, all of these additional capabilities come at the expense of usability. What's more, even when people know this, they still choose high-feature models. The only thing that really helps is when someone actually uses a product for a certain amount of time, at which point they realize that they either don't use the extra features or that the tradeoffs in terms of usability make the additional capabilities considerably less attractive. Part of the problem is perhaps that usability is an intangible and somewhat subjective attribute of a product. Intellectually, everyone knows that it is important, but when it comes down to decision-time, most people base their decisions on something that is more easily measured, like number of features, brand, or price. This is also part of why focus groups are so bad at measuring usability. I've been to a number of focus groups that start with a series of exercises in front of a computer, then end with a roundtable discussion about their experiences. Usually, the discussion was completely at odds with what the people actually did when in front of the computer. Watch what they do, not what they say...
  • Decision Satisfaction: When presented with a lot of choices, people may actually do better for themselves, yet they often feel worse due to regret or anticipated regret. Because people resort to simplifying their decision making process, and because they know they're simplifying, they might also wonder if one or more of the options they cut was actually better than what they chose. A little while ago, I bought a new cell phone. I actually did a fair amount of work evaluating the options, and I ended up going with a low-end no-frills phone... and instantly regretted it. Of course, the phone itself wasn't that bad (and for all I know, it was better than the other phones I passed over), but I regret dismissing some of the other options, such as the camera (how many times over the past two years have I wanted to take a picture and thought Hey, if I had a camera on my phone I could have taken that picture!)
  • Escalation of expectations: When we have so many choices and we do so much work evaluating all the options, we begin to expect more. When things were worse (i.e. when there were fewer choices), it was much easier to exceed expectations. In the cell phone example above, part of the regret was no doubt fueled by the fact that I spent a lot of time figuring out which phone to get.
  • Maximizer Impact: There are some people who always want to have the best, and the problems inherent in too many choices hit these people the hardest.
  • Leakage: The conditions present when you're making a decision exert influence long after the decision has actually been made, contributing to the dissatisfaction (i.e. regret, anticipated regret) and escalation of expectations outlined above.
As I was watching this presentation, I couldn't help but think of various examples in my own life that illustrated some of the issues. There was the cell phone choice which turned out badly, but I also thought about things I had chosen that had come out well. For example, about a year ago, I bought an iPod, and I've been extremely happy with it (even though it's not perfect), despite the fact that there were many options which I considered. Why didn't the process of evaluating all the options evoke a feeling of regret? Because my initial impulse was to purchase the iPod, and I looked at the other options simply out of curiosity. I also had the opportunity to try out some of the players, and that experience helped enormously. And finally, the one feature that had given me pause was video: the Cowon iAudio X5 had video capabilities, and the iPod (when I started looking around) didn't. As it turned out, about a week later the Video iPod was released and made my decision very easy. I got that and haven't looked back since. The funny thing is that since I've gotten that iPod, I haven't used the video feature for anything useful. Not even once.

Another example is my old PC which has recently kicked the bucket. I actually assembled that PC from a bunch of parts, rather than going through a mainstream company like Dell, and the number of components available would probably make the Circuit City stereo example I gave earlier look tiny by comparison. Interestingly, this diversity of choices for PCs is often credited as part of the reason PCs overtook Macs:
Back in the early days of Macintoshes, Apple engineers would reportedly get into arguments with Steve Jobs about creating ports to allow people to add RAM to their Macs. The engineers thought it would be a good idea; Jobs said no, because he didn't want anyone opening up a Mac. He'd rather they just throw out their Mac when they needed new RAM, and buy a new one.

Of course, we know who won this battle. The "Wintel" PC won: The computer that let anyone throw in a new component, new RAM, or a new peripheral when they wanted their computer to do something new. Okay, Mac fans, I know, I know: PCs also "won" unfairly because Bill Gates abused his monopoly with Windows. Fair enough.

But the fact is, as Hill notes, PCs never aimed at being perfect, pristine boxes like Macintoshes. They settled for being "good enough" -- under the assumption that it was up to the users to tweak or adjust the PC if they needed it to do something else.
But as Schwartz would note, the amount of choices in assembling your own computer can be stifling. This is why computer and software companies like Microsoft, Dell, and Apple (yes, even Apple) insist on mediating the user's experience with their hardware by limiting access (i.e. by limiting choice). This turns out to be not so bad, because the number of things to consider really is staggering. So why was I so happy with my computer? Because I really didn't make many of the decisions - I simply went over to Ars Technica's System Guide and used their recommendations. When it comes time to build my next computer, what do you think I'm going to do? Indeed, Ars is currently compiling recommendations for their October system guide, due out sometime this week. My new computer will most likely be based off of their "Hot Rod" box. (Linux presents some interesting issues in this context as well, though I think I'll save that for another post.)

So what are the lessons here? One of the big ones is to separate the analysis from the choice by getting recommendations from someone else (see the Ars Technica example above). In the market for a digital camera? Call a friend (preferably one who is into photography) and ask them what to get. Another thing that strikes me is that just knowing about this can help you overcome it to a degree. Try to keep your expectations in check, and you might open up some room for pleasant surprises (doing this is surprisingly effective with movies). If possible, try using the product first (borrow a friend's, use a rental, etc...). Don't try to maximize the results so much; settle for things that are good enough (this is what Schwartz calls satisficing).

Without choices, life is miserable. When options are added, welfare is increased. Choice is a good thing. But too much choice causes the curve to level out and eventually start moving in the other direction. It becomes a matter of tradeoffs. Regular readers of this blog know what's coming: We don't so much solve problems as we trade one set of problems for another, in the hopes that the new set of problems is more favorable than the old. So where is the sweet spot? That's probably a topic for another post, but my initial thoughts are that it would depend heavily on what you're doing and the context in which you're doing it. Also, if you were to take a wider view of things, there's something to be said for maximizing options and then narrowing the field (a la the free market). Still, the concept of choice as a double edged sword should not be all that surprising... after all, freedom isn't easy. Just ask Spider Man.
Posted by Mark on October 22, 2006 at 10:56 AM .: Comments (2) | link :.


End of This Day's Posts

Sunday, June 18, 2006

Novelty
David Wong's article on the coming video game crash seems to have inspired Steven Den Beste, who agrees with Wong that there will be a gaming crash and also thinks that the same problems affect other forms of entertainment. The crux of the problem appears to be novelty. Part of the problem appears to be evolutionary as well. As humans, we are conditioned for certain things, and it seems that two of our instincts are conflicting.

The first instinct is the human tendency to rely on induction. Correlation does not imply causation, but most of the time, we act like it does. We develop a complex set of heuristics and guidelines that we have extrapolated from past experiences. We do so because circumstances require us to make all sorts of decisions without possessing the knowledge or understanding necessary to provide a correct answer. Induction allows us to operate in situations which we do not understand. Psychologist B. F. Skinner famously explored and exploited this trait in his experiments. Den Beste notes this in his post:
What you do is to reward the animal (usually by giving it a small amount of food) for progressively behaving in ways which is closer to what you want. The reason Skinner studied it was because he (correctly) thought he was empirically studying the way that higher thought in animals worked. Basically, they're wired to believe that "correlation often implies causation". Which is true, by the way. So when an animal does something and gets a reward it likes (e.g. food) it will try it again, and maybe try it a little bit differently just to see if that might increase the chance or quantity of the reward.
So we're hard wired to create these heuristics. This has many implications, from Cargo Cults to Superstition and Security Beliefs.

The second instinct is the human drive to seek novelty, also noted by Den Beste:
The problem is that humans are wired to seek novelty. I think it's a result of our dietary needs. Lions can eat zebra meat exclusively their entire lives without trouble; zebras can eat grass exclusively their entire lives. They don't need novelty, but we do. Primates require a quite varied diet in order to stay healthy, and if we eat the same thing meal after meal we'll get sick. Individuals who became restless and bored with such a diet, and who sought out other things to eat, were more likely to survive. And when you found something new, you were probably deficient in something that it provided nutritionally, so it made sense to like it for a while -- until boredom set in, and you again sought out something new.
The drive for diversity affects more than just our diet. Genetic diversity has been shown to impart broader immunity to disease. Children from diverse parentage tend to develop a blend of each parent's defenses (this has other implications, particularly for the tendency for human beings to work together in groups). The biological benefits of diversity are not limited to humans either. Hybrid strains of many crops have been developed over the years because by selectively mixing the best crops to replant the next year, farmers were promoting the best qualities in the species. The simple act of crossing different strains resulted in higher yields and stronger plants.

The problem here is that evolution has made the biological need for diversity and novelty dependent on our inductive reasoning instincts. As such, what we find is that those we rely upon for new entertainment, like Hollywood or the video game industry, are constantly trying to find a simple formula for a big hit.
It's hard to come up with something completely new. It's scary to even make the attempt. If you get it wrong you can flush amazingly large amounts of money down the drain. It's a long-shot gamble. Every once in a while something new comes along, when someone takes that risk, and the audience gets interested...
Indeed, the majority of big films made today appear to be remakes, sequels or adaptations. One interesting thing I've noticed is that something new and exciting often fails at the box office. Such films usually gain a following on video or television though. Sometimes this is difficult to believe. For instance, The Shawshank Redemption is a very popular film. In fact, it occupies the #2 spot (just behind The Godfather) on IMDB's top rated films. And yet, the film only made $28 million (ranked 52 in 1994) in theaters. To be sure, that's not a modest chunk of change, but given the universal love for this film, you'd expect that number to be much higher. I think part of the reason this movie failed at the box office is that marketers are just as susceptible to these novelty problems as everyone else. I mean, how do you market a period prison drama that has an awkward title and no big stars? It doesn't sound like a movie that would be popular, even though everyone seems to love it.

Which brings up another point. Not only is it difficult to create novelty, it can also be difficult to find novelty. This is the crux of the problem: we require novelty, but we're programmed to seek out new things via correlation. There is no place to go for perfect recommendations, and novelty for the sake of novelty isn't necessarily enjoyable. I can seek out some bizarre musical style and listen to it, but the simple fact that it is novel does not guarantee that it will be enjoyable. I can't rely upon how a film is marketed because that is often misleading or, at least, not really representative of the movie (or whatever). Once we do find something we like, our instinct is often to exhaust that author or director or artist's catalog. Usually, by the end of that process, the artist's work begins to seem a little stale, for obvious reasons.

Seeking out something that is both novel and enjoyable is more difficult than it sounds. It can even be a little scary. Many times, things we think will be new actually turn out to be retreads. Other times, something may actually be novel, but unenjoyable. This leads to another phenomenon that Den Beste mentions: the "Unwatched pile." Den Beste is talking about anime, and at this point, he's begun to accumulate a bunch of anime DVDs which he's bought but never watched. I've had similar things happen with books and movies. In fact, I have several books on my shelf, just waiting to be read, but for some of them, I'm not sure I'm willing to put in the time and effort to read them. Why? Because, for whatever reason, I've begun to experience a sort of diminishing returns when it comes to certain types of books. These are similar to other books I've read, and thus I probably won't enjoy them as much (even if they are good books).

The problem is that we know something novel is out there; it's just a matter of finding it. At this point, I've gotten sick of most mass consumption entertainment, and have moved on to more niche forms of entertainment. This is really a signal versus noise problem, a traversal of the long tail problem - an analysis problem. What's more, with globalization and the internet, the world is getting smaller... access to new forms of entertainment keeps expanding (for example, here in the US, anime was around 20 years ago, but it was nowhere near as common as it is today). This is essentially a subset of a larger information aggregation and analysis problem that we're facing. We're adrift in a sea of information, and must find better ways to navigate.
Posted by Mark on June 18, 2006 at 03:55 PM .: Comments (6) | link :.


End of This Day's Posts

Thursday, May 25, 2006

Pitfall II: Lost Caverns
Perhaps I've gone too far. I'm in an underground cavern beneath Peru. It seems to be a complex maze, perhaps eight chambers wide and over three times as deep. Niece Rhonda has disappeared, along with Quickclaw, our cowardly cat. I am beset by all manner of subterranean creatures in this vast, ancient labyrinth. And all because of a rock--the Raj diamond. It was stolen a century ago, and hidden here.
- An excerpt from Pitfall Harry's diary
Pitfall II: Lost Caverns - Cover Art; click for a larger version
Cover Art
Without a doubt, the greatest game ever made for the Atari 2600 was Pitfall II: Lost Caverns. The original Pitfall! set the standard for Atari adventure games as it sent our intrepid hero, an Indiana Jones clone named Pitfall Harry, to a jungle where he must avoid the likes of scorpions, crocodiles, quicksand and tar pits (amongst other things). The goal of the first game was simply to collect 32 bars of gold in 20 minutes without dying 3 times, a typical Atari-era video game goal. The sequel improves upon nearly every aspect of the original game and far surpasses the competition.

To start, the game actually has a legitimate goal, not some arbitrary point score. Your goal is to collect the Raj diamond, rescue your niece Rhonda and also your cowardly cat Quickclaw (with an added bonus for collecting a rare rat and the usual gold bars). What's more, you are given an infinite number of lives and unlimited time with which to accomplish these goals (there are scattered checkpoints, and when you die, you are transported back to the last one you reached, losing points along the way). You're given a few new abilities (like the ability to swim) and you face a new series of hazards, including poisonous frogs, bats, condors and electric eels.

From a technological standpoint, Pitfall II pushed the envelope both visually and musically. It was one of the largest games ever created for the 2600 (a whopping 10k), and it included features like smooth scrolling, an expansive map, relatively high-resolution graphics, varying scenery, detailed animations and a first-rate musical score that was detailed and varied (quite an accomplishment considering that most 2600 games did not feature music at all). Obviously, all of these things are trivial by current standards, but at the time, this was an astounding feat. Indeed, it was only made possible because of custom hardware built inside the game cartridge that enhanced the 2600's video and audio capabilities.

You start the game in the jungle. In a perverse maneuver, the game's designers made sure that you could see Quickclaw (one of your primary objectives) immediately beneath your starting point, but to actually reach him you must traverse the entire map!

So close, yet so far away...
So close, yet so far away...

Again, the sequel imbues Pitfall Harry with a few extra abilities, including the ability to swim. Naturally, this benefit does not come without danger, as shown by the electric eel swimming alongside our hero (you can't see it in the screenshot, but the eel alternates between a white squiggly line and a black squiggly line, thus conveying its electric nature). Also of note is the rather nice graphical element of the waterfall.

Swimming with an electric eel
Swimming with an electric eel

As you explore the caverns, you run across various checkpoints marked with a cross. When you touch a cross, it becomes your new starting point whenever you die.

I think that green thing is supposed to be a poison frog.
I think that green thing is supposed to be a poison frog.

At various points in the game you are faced with a huge, vertical open space. Sometimes you just have to jump. One of the great things about this game, though, is that there is a surprising amount of freedom of movement. You could, if you wanted, just take the ladder down to the bottom of the cavern instead of jumping (though at one point, if you want to get the Raj diamond, you'll need to face the abyss). Plus, there are all sorts of gold bars hidden around the caves in places that you don't have to go. Obviously, there are a limited number of specific paths you can take - it's no GTA III - but given the constraints at the time, this was a neat aspect of the game.

Stepping into the abyss
Stepping into the abyss

Another innovation in Pitfall II is Harry's ability to grab onto a rising balloon and ride it to the top of the cavern (a necessary step at one point), dodging bats along the way. A pretty unique and exciting sequence for its time.

That's some powerful helium in that balloon
That's some powerful helium in that balloon

The valiant Pitfall Harry, about to rescue his niece Rhonda.

Rhonda!
Rhonda!

The designers' cruel sense of placement strikes again. I can see the Raj diamond, but how do you get there? Luckily, the game's freedom of movement allows you to backtrack if you want (and when you want).

Curse you, game designers!
Curse you, game designers!

The final portion of the map is still, to this day, challenging. Up until this point in the game, you've only had to dodge a bat here, a condor there. This section requires you to really get your timing and reflexes in order, as you must complete a long sequence of evasions before you get to the top. Nevertheless, success was imminent.

Victory is mine!
Victory is mine!

Naturally, the game does not hold up compared to the games of today in terms of technology or gameplay, but what is remarkable about this game is how close it got. And that it did so at a time when many of these concepts were unheard of. Sure, there are still some elements taken from the "Do it again, stupid" school of game design, but given the constraints of the 6 year old hardware and the fact that nearly every other game ever released for the console was much worse in this respect, I think it's worth cutting the game some slack (plus, as Shamus notes in the referenced post, these sorts of things are still common today!)

Everything about this game, from the packaging and manual (which is actually an excellent document done in the style of Pitfall Harry's aforementioned diary) to the graphics and music to the innovative gameplay and freedom of movement, is exceptional. Without a doubt, my favorite game for the 2600. Stay tuned for the honorable mentions!
Posted by Mark on May 25, 2006 at 09:09 PM .: Comments (3) | link :.


End of This Day's Posts

Wednesday, March 01, 2006

GalCiv II: Rise of the Kaedrinians!
Galactic Civilizations II continues to occupy the majority of my free time, and I wanted to try showing a game example (similar to this one by one of the game's creators, though my example won't be as thorough). I'll be showing how I was able to secure good long term prospects at the beginning of my second game.

I played my first game as the Terran Alliance (humans), and one of the most enjoyable things I've noticed about the game is the ability to customize various aspects, such as planet names and ship designs. So this time, I decided to create a new race, the Kaedrinians (long time readers should get a kick out of that), and installed tallman as their emperor.


(Click images for a larger version, usually with more information)

Welcome to planet Kaedrin
Welcome to planet Kaedrin!

I set up the galaxy so it was relatively small and had relatively few habitable planets. This may turn out to be my undoing. My typical strategy in these types of games is to expand quickly and get a foothold in several star systems to start. The Kaedrin system was blessed with two habitable planets, Kaedrin (my homeworld) and Vizzard II, a very low quality planet. After a cursory examination of the surrounding star systems, I had not found any other habitable planets. As I expanded my search, I saw that most of my opponents were luckier in terms of colonization. Nevertheless, I was able to secure one planet that was relatively far away from my homeworld. However, that planet was also of relatively low quality. Low quality planets don't support nearly as many enhancements or as much production capacity. These planets would serve me well at the beginning of the game, but would become less and less important as time went on.

The Snathi (click for larger image and more details)
The Snathi
This simply would not do. I had to act quickly if I wanted to secure long term survival (let alone domination). My general strategy is to focus on trade and influence to start, but this time I decided to focus mostly on my military might and, secondarily, economics and trade (having a strong economy would help power the military machine). The goal here would be to find a weak race and take over their planets early. Most of the major races were pretty well established, but I found the perfect opportunity with one of the minor races: the Snathi, a cuddly but apparently "evil" race of squirrel-like beings. One of the nice things about this game is that the creators seem to have a genuine sense of humor. Their description of the Snathi includes this little gem: "... after billions of years of hoarding their proverbial 'nuts,' the Snathi have metaphorically 'climbed out of their tree' and will 'gnaw the galaxy with their squirrel-like teeth'... so to speak."

Despite their cuteness factor, I could not let such nefarious beings continue to exist. Plus, their planet was of an obscenely high quality. It was a real gem. The highest quality planet I'd seen in the galaxy, and thus ideally suited for my purposes of galactic expansion. The Snathi appeared to be farther along in cultivating their planet than I, and were churning out constructors and freighters at a relatively high rate. Lucky for me, neither of those ship classes had any military capacity (no weapons or shields). However, this fortuitous state would not hold forever. I had to act fast if I was to take the planet (I also had to worry about one of the other major civilizations making a run for this ripe planet. Luckily, because they only had one planet, I didn't have to worry about the annoying surrender factor.)

In order to invade, I would need to research a few technologies and build an invasion fleet. The fleet would include a troop transport and a combat escort. The transport ship is one of the core ships and once I had researched the planetary invasion technology, building that ship would be simple. The combat escort, however, presented me with an opportunity to utilize my favorite feature of GalCiv II, the customized ship builder. After researching a number of technologies, I was finally ready to design my first warship, the Space Lion:

The Space Lion Class Battle Cruiser
The Space Lion Class Battle Cruiser

Armed with Stinger II missiles and basic Shields, the Space Lion wasn't unstoppable, but she packed an impressive punch despite being constructed so early in the game. After constructing my fleet and making the long journey to the Snathi homeworld, I was ready to invade. There was just one problem. My technology was still relatively unsophisticated, so I could only transport around 1 billion troops for the invasion. And the Snathi homeworld had a population of 16 billion! I was drastically outnumbered, so I decided to pay a little extra and use one of the specialized invasion tactics. Many of the invasion tactics result in a large advantage for the invader, but also lower planetary quality and improvements, which is antithetical to my purpose for the invasion. Thus I decided to go with Information Warfare. This would cause a significant portion of the enemy troops to join my ranks, thus mitigating their numerical superiority (though I would still be outnumbered), but more importantly, it would leave the planet quality and improvements unharmed. The invasion begins:

The Snathi Invasion
The Snathi Invasion

Victory! The Information Warfare tactic paid off in spades, giving me an extra 2.5 billion troops. I was still outnumbered, but my advantage factor was so much higher that it did not matter. I was able to dispatch the adorable but monstrous Snathi with relative ease. The planet was mine!

My New Planet
My New Planet

And what a planet it was. Look at all those manufacturing and technology centers. In terms of industry and research, it was significantly better than my own homeworld of Kaedrin, and I suspect it will quickly become the jewel of the Kaedrinian empire, researching, building and producing more than any other planet. Will I succeed in galactic conquest? Nothing is definite, but now that I have secured this planet, I am primed and ready to go. I'll end my account here, as time does not permit recapping the entire game, but I thought this was a natural place to stop.

Update: Read more on this campaign: The continuing adventures of the Kaedrinians
Posted by Mark on March 01, 2006 at 09:00 PM .: Comments (4) | link :.


End of This Day's Posts

Tuesday, December 27, 2005

Silent Hitchcock
Browsing the discount DVD rack while doing a little last-minute shopping, I came across this collection of 9 Hitchcock films for a measly $8. I love Hitchcock, yet I haven't seen many of his films (and he was an extremely prolific director), so I picked it up. It turns out that all of the films on the DVDs are from Hitchcock's pre-Hollywood period, dating from the mid 1920s to the late 1930s. It even includes a 1927 silent film, among Hitchcock's first efforts, called The Lodger.

By today's standards (or even the standards set by Hitchcock's later work), it's not especially impressive, but I haven't seen much in the way of silent films, so this particular movie intrigued me. The conventions of silent films are different enough from what we're all familiar with that it almost seems like a different medium. The film moves at a very deliberate pace, revealing information slowly in many varied ways (though, it seems, rarely through dialogue). In fact, I even played around with watching the film at 2X speed and didn't have any problem keeping up with what was happening on screen. Not having any real experience with silent films, I don't know if this (or any other aspect of the movie) was unusual or not, but it seemed to work well enough.

Details, screenshots, sarcasm and more below the fold.

Also Spoilers, but if you're up for it, you can watch the movie at World Cinema Online... (Click images for a larger version)

The killer had a long nose and floppy ears.
The killer had a long nose and floppy ears.

From the fog and the constable, it's obvious that London is in the grip of a Jack the Ripper-style serial killer called "The Avenger." The film opens just after a murder, with a lady describing our villain to the police.

Tall he was - and his face all wrapped up. ... A scarf covered the lower half of his face ... Another Avenger Crime.

Here we see a few of the varied ways in which the film communicates information about the murder to the audience. From these scenes (among others), we gather the following facts about the killer:
  • He is tall.
  • His lower face is covered by a scarf.
  • The murders have occurred on several successive Tuesdays.
  • All of the victims were fair-haired women.
  • The killer leaves a calling card bearing his name (The Avenger) with each victim.
Sounds like a creepy guy, no? Anyway, the film then takes us to the Bunting household, where we're introduced to the family (a Landlady, her husband, and their fair-haired daughter Daisy, who is being courted by a policeman named Joe) which has a room available to rent. Naturally, someone comes to inquire about the room:

I'm not a murderer!

Excellent reveal of the Lodger. I think this is the most striking image in the film, and it immediately set off warning bells in my head.

No, really, I'm not the Avenger!

See, without my hat and scarf, I'm much creepier. Woops, I mean less creepier. Yeah.

You gonna get it, woman!

The-man-who-is-clearly-not-The-Avenger is playing chess with the Landlady's fair-haired daughter Daisy, who has deftly outmaneuvered her non-murderous opponent. At this point, he literally says "Be careful. I'll get you yet." No foreshadowing here, move along...

Oh, and despite the fact that the Lodger is clearly a psychopath, Daisy is falling for him, much to the dismay of Joe, her policeman friend (who happens to be investigating some series of murders or something).

You're under arrest, weenie.

The characters in the film have finally figured out that the new lodger is The Avenger, and policeman Joe searches the premises and finds a hidden bag in his room containing a map of all the killings, various newspaper clippings, and a photograph of the oddball with one of the victims. Our villain is handcuffed but promptly escapes with the help of Daisy (who thinks he's innocent, of course).

Wha-wha-wha-whaaaat?

"My God, he is innocent! The real Avenger was taken red-handed ten minutes ago." Ah Ha! Hitchcock strikes again.

Rabble, Rabble, Rabble! Rabble!

Oh no, someone spotted the handcuffs! An angry mob has emerged and is chasing the now-exonerated Lodger. For a moment, I really wondered if the mob would take him out, but it seems that film noir hadn't yet emerged, as our beloved Lodger takes a beating, but ends up fine. And he gets the girl, too:

I love you, weenie.

In case you can't tell from all the sarcasm, the "twist" at the end of the story wasn't exactly earth-shattering. These days, we're so zonked out on Lost and 24 that our minds immediately and cynically formulate all the ways the filmmakers are trying to trick us. Were audiences that cynical 80 years ago? Or did the ending truly surprise them? To be honest, there was a part of me that thought that he really could have been the killer. Also, as I hinted at above, this film seems to resemble film noir, and the angry mob scene was somewhat effective in that light.

Ultimately, I enjoyed the film greatly, even if much of my fascination has to do with the context and conventions of silent films. This was apparently the first film where Hitchcock really displayed his own style, and you can see a lot of themes in this film that would later become Hitchcock staples (e.g. the wrongly accused man, voyeurism, etc.). More on the background of the film can be found at this Wikipedia entry.

So one film down, eight to go. I have to admit, part of the inspiration to get this set is that Cinecast is currently doing a Hitchcock marathon, though it seems that the only film on their list that is in this DVD set is The 39 Steps.
Posted by Mark on December 27, 2005 at 12:52 AM .: Comments (0) | link :.


End of This Day's Posts

Sunday, October 16, 2005

Operation Solar Eagle
One of the major challenges faced in Iraq is electricity generation. Even before the war, neglect of an aging infrastructure forced scheduled blackouts. To compensate for the outages, Saddam distributed power to desired areas, while denying power to other areas. The war naturally worsened the situation (especially in the immediate aftermath, as there was no security at all), and the coalition and fledgling Iraqi government have been struggling to restore and upgrade power generation facilities since the end of major combat. Many improvements have been made, but attacks on the infrastructure have kept generation at or around pre-war levels for most areas (even if overall generation has increased, the equitable distribution of power means that some people are getting more than they used to, while others are not - ironic, isn't it?).

Attacks on the infrastructure have presented a significant problem, especially because some members of the insurgency seem to be familiar enough with Iraq's power network to attack key nodes, thus increasing the effects of their attacks. Consequently, security costs have gone through the roof. The ongoing disruption and inconsistency of power generation puts the new government under a lot of pressure. The inability to provide basic services like electricity delegitimizes the government and makes it more difficult to prevent future attacks and restore services.

When presented with this problem, my first thought was that solar power may actually help. There are many non-trivial problems with a solar power generation network, but Iraq's security situation combined with lowered expectations and an already insufficient infrastructure does much to mitigate the shortcomings of solar power.

In America, solar power is usually passed over as a large scale power generation system, but things that are problems in America may not be so problematic in Iraq. What are the considerations?
  • Demand: One of the biggest problems with solar power is that it's difficult to schedule power generation to meet demand (demand doesn't go down when the sun does, nor does demand necessarily coincide with peak generation), and a lot of energy is wasted because there isn't a reliable way to store energy (battery systems help, but they're not perfect and they also drive up the costs). The irregularity in generation isn't as bad as wind, but it is still somewhat irregular. In America, this is a deal breaker because we need power generation to match demand, so if we were to rely on solar power on a large scale, we'd have to make sure we have enough backup capacity running to make up for any shortfall (there's much more to it than that, but that's the high-level view). In Iraq, this isn't as big of a deal. The irregularity of conventional generation due to attacks on infrastructure is at least comparable if not worse than solar irregularity. It's also worth noting that it's difficult to scale solar power to a point where it would make a difference in America, as we use truly mammoth amounts of energy. Iraq's demands aren't as high (both in terms of absolute power and geographic distribution), and thus solar doesn't need to scale as much in Iraq.
  • Economics: Solar power requires a high initial capital investment, and also requires regular maintenance (which can be costly as well). In America, this is another dealbreaker, especially when coupled with the fact that its irregular nature requires backup capacity (which is wasteful and expensive as well). However, in Iraq, the cost of securing conventional power generation and transmission is also exceedingly high, and the prevalence of outages has cost billions in repairs and lost productivity. The decentralized nature of solar power thus becomes a major asset in Iraq, as solar power (if using batteries and if connected to the overall grid) can provide a seamless interruptible supply of electricity. Attacks on conventional systems won't have quite the impact they once did, and attacks on the solar network won't be anywhere near as effective (more on this below). Given the increased cost of conventional production (and securing that production) in Iraq, and given the resilience of such a decentralized system, solar power becomes much more viable despite its high initial expense. This is probably the most significant challenge to overcome in Iraq.
  • Security: There are potential gains, as well as new potential problems to be considered here. First, as mentioned in the economics section, a robust solar power system would help lessen the impact of attacks on conventional infrastructure, thus preventing expensive losses in productivity. Another hope here is that we will see a corresponding decrease in attacks (less effective attacks should become less desirable). Also, the decentralized nature of solar power means that attacks on the solar infrastructure are much more difficult. However, this does not mean that there is no danger. First, even if attacks on conventional infrastructure decrease, they probably won't cease altogether (though, again, the solar network could help mitigate the effects of such attacks). And there is also a new problem that is introduced: theft. In Iraq's struggling economy, theft of solar equipment is a major potential problem. Then again, once an area has solar power installed, individual homeowners and businesses won't be likely to neglect their most reliable power supply. Any attacks on the system would actually be attacks on specific individuals or businesses, which would further alienate the population and erode support for the attackers. However, this assumes that the network is already installed. Those who set up the network (most likely outsiders) will be particularly vulnerable during that time. Once installed, solar power is robust, but if terrorists attempt to prevent the installation (which seems likely, given that the terrorists seem to target many external companies operating in Iraq with the intention of forcing them to leave), that would certainly be a problem (at the very least, it would increase costs).
  • Other Benefits: If an installed solar power network helps deter attacks on power generation infrastructure, the success will cascade across several other vectors. A stable and resilient power network that draws from diverse energy sources will certainly help improve Iraq's economic prospects. Greater energy independence and an improved national energy infrastructure will also lend legitimacy to the new Iraqi government, making it stronger and better able to respond to the challenges of rebuilding their country. If successful and widespread, it could become one of the largest solar power systems in the world, and much would be learned along the way. This knowledge would be useful for everyone, not just Iraqis. Obviously, there are also environmental benefits to such a system (and probably a lack of bureaucratic red-tape like environmental impact statements as well. Indeed, while NIMBY might be a problem in America, I doubt it would be a problem in Iraq, due to their current conditions).
In researching this issue, I came across a recent study prepared at the Naval Postgraduate School called Operation Solar Eagle. The report is excellent; it considers most of the above and much more, in far greater detail. Many of my claims above are essentially assumptions, but this report provides more concrete evidence. One suggestion they make with regard to the problem of theft is to use an RFID system to keep track of solar power equipment. Lots of other interesting stuff in there as well.

As shown above, there are obviously many challenges to completing such a project, particularly with respect to economic feasibility, but it seems to me to be an interesting idea. I'm glad that there are others thinking about it as well, though at this point it would be really nice to see something a little more concrete (or at least an explanation as to why this wouldn't work).
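To make the cost trade-off a little more concrete, here's a minimal back-of-the-envelope sketch. Every number in it is an assumed placeholder (the Operation Solar Eagle report has real estimates); the only point is to show how security costs and expected outage losses can erode the apparent price advantage of centralized generation.

    # Hypothetical annualized-cost comparison; all figures below are assumptions,
    # not data from the NPS report.

    def annual_cost(capital, lifetime_years, maintenance, security,
                    attacks_per_year, loss_per_attack):
        """Amortized capital + upkeep + security + expected attack losses."""
        return (capital / lifetime_years + maintenance + security
                + attacks_per_year * loss_per_attack)

    centralized = annual_cost(
        capital=2.0e9, lifetime_years=30, maintenance=5.0e7,
        security=3.0e8,         # guarding plants and transmission lines
        attacks_per_year=12,    # assumed successful attacks on key nodes
        loss_per_attack=5.0e7,  # repairs plus lost productivity
    )

    solar = annual_cost(
        capital=4.0e9, lifetime_years=25, maintenance=1.0e8,
        security=2.0e7,         # little to guard once panels are installed
        attacks_per_year=1,     # a decentralized network is hard to knock out
        loss_per_attack=5.0e6,  # each attack only affects a small area
    )

    print(f"centralized: ~${centralized / 1e6:,.0f}M per year")
    print(f"solar:       ~${solar / 1e6:,.0f}M per year")

Under those made-up numbers the distributed system comes out ahead; flip the assumptions and it doesn't. That sensitivity is exactly what the report works through properly.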
Posted by Mark on October 16, 2005 at 08:52 PM .: Comments (2) | link :.


End of This Day's Posts

Sunday, September 04, 2005

The Pendulum Swings
I've often commented that human beings don't so much solve problems as they trade one set of problems for another (in the hope that the new set of problems is more favorable than the old). Yet that process doesn't always follow a linear trajectory. Initial reactions to a problem often cause problems of their own. Reactions to those problems often take the form of an over-correction. And so it continues, like the swinging of a pendulum, back and forth, until it reaches its final equilibrium.

This is, of course, nothing new. Hegel's philosophy of argument works in exactly that way. You start with a thesis, some sort of claim that becomes generally accepted. Then comes the antithesis, as people begin to find holes in the original thesis and develop an alternative. For a time, the thesis and antithesis vie to establish dominance, but neither really wins. In the end, a synthesis comprised of the best characteristics of the thesis and antithesis emerges.

Naturally, it's rarely so cut and dry, and the process continues as the synthesis eventually takes on the role of the thesis, with new antitheses arising to challenge it. It works like a pendulum, oscillating back and forth until it reaches a stable position (a new synthesis). There are some interesting characteristics of pendulums that are also worth noting in this context. Steven Den Beste once described the two stable states of the pendulum: one in which the weight hangs directly below the hinge, and one in which the weight is balanced directly above the hinge.
On the left, the weight hangs directly below the hinge. On the right, it's balanced directly above it. Both states are stable. But if you slightly perturb the weight, they don't react the same way. When the left weight is moved off to the side, the force of gravity tries to center it again. In practice, if the hinge has a good bearing, the system then will oscillate around the base state and eventually stop back where it started. But if the right weight is perturbed, then gravity pulls the weight away and the right system will fail and convert to the left one.

The left state is robust. The right state is fragile. The left state responds to challenges by trying to maintain itself; the right state responds to challenges by shattering.
Not all systems are robust, but it's worth noting that even robust systems are not immune to perturbation. The point isn't that they can't fail, it's that when they do fail, they fail gracefully. Den Beste applies the concept to all sorts of things, including governments and economic systems, and I think the analogy is apt. In the coming months and years, we're going to see a lot of responses to the tragedy of Hurricane Katrina. Katrina represents a massive perturbation; it's set the pendulum swinging, and it'll be a while before it reaches its resting place. There will be many new policies that will result. Some of them will be good, some will be bad, and some will set new cycles into action. Disaster preparedness will become more prevalent as time goes on, and the plans will get better too. But not all at once, because we don't so much solve problems as trade one set of disadvantages for another, in the hopes that we can get that pendulum to rest in its stable state.
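For the physically inclined, here's a minimal numerical sketch of the two states Den Beste describes, using a crude damped-pendulum integration with arbitrarily chosen parameters. Nudge the hanging state and it settles back where it started; nudge the balanced state and it converts to the hanging one.

    import math

    def settle(theta0, dt=0.001, steps=30000, damping=0.5):
        """Damped pendulum (angle measured from straight down), crude Euler steps."""
        theta, omega = theta0, 0.0
        for _ in range(steps):
            alpha = -math.sin(theta) - damping * omega  # gravity + damping, g/L = 1
            omega += alpha * dt
            theta += omega * dt
        return theta

    hanging = settle(0.1)             # weight below the hinge, nudged slightly
    balanced = settle(math.pi - 0.1)  # weight above the hinge, nudged slightly

    print(f"start {0.1:.2f} rad -> end {hanging:.2f} rad (returns to the hanging state)")
    print(f"start {math.pi - 0.1:.2f} rad -> end {balanced:.2f} rad (falls over to the hanging state)")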

Glenn Reynolds has collected a ton of worthy places to donate for hurricane relief here. It's also worth noting that many employers are matching donations to the Red Cross (mine is), so you might want to go that route if it's available...
Posted by Mark on September 04, 2005 at 11:02 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, July 17, 2005

Magic Security
In Harry Potter and the Half-Blood Prince, there are a number of new security measures suggested by the Ministry of Magic (as Voldemort and his army of Death Eaters have been running amok). Some of them are common sense, but others are much more questionable. Since I've also been reading prominent muggle and security expert Bruce Schneier's book, Beyond Fear, I thought it might be fun to analyze one of the Ministry of Magic's security measures according to Schneier's 5-step process.

Here is the security measure I've chosen to evaluate, as shown on page 42 of my edition:
Agree on security questions with close friends and family, so as to detect Death Eaters masquerading as others by use of the Polyjuice Potion.
For those not in the know, Polyjuice Potion allows the drinker to assume the appearance of someone else, presumably someone you know. Certainly a dangerous attack. The proposed solution is a "security question", set up in advance, so that you can verify the identity of the person in question.
  • Step 1: What assets are you trying to protect? The Ministry of Magic claims that its solution addresses the problem of impersonation by way of the Polyjuice Potion. However, this security measure essentially boils down to a form of identification, so what we're really trying to protect is an identity. The identity is, in itself, a security measure - for example, once verified, it could allow entrance to an otherwise restricted area.
  • Step 2: What are the risks to those assets? The risk is that someone could be impersonating a friend or family member (by using the aforementioned Polyjuice Potion) in an effort to gain entrance to a restricted area or otherwise gain the trust of a certain group of people. Unfortunately, the risk does not end there as the Ministry implies in its communication - it is also quite possible that an attacker could put your friend or family member under the Imperius Curse (a spell that grants the caster control of a victim). Because both the Polyjuice Potion and the Imperius Curse can be used to foil an identity-based system, any proposed solution should account for both. It isn't known how frequent such attacks are, but it is implied that both attacks are increasing in frequency.
  • Step 3: How well does the security solution mitigate those risks? Not very well. First, it is quite possible for an attacker to figure out the security questions and answers ahead of time. They could do so through simple research, or through direct observation and reconnaissance (see the sketch after this list). Since the security questions need to be set up in the first place, it's quite possible that an attacker could impersonate someone and set up the security questions while in disguise. Indeed, even Professor Dumbledore alludes to the ease with which an attacker could subvert this system. Heck, we're talking about attackers who are most likely witches or wizards themselves. There may be a spell of some sort that would allow them to get the answer from a victim (the Imperius Curse is one example, and I'm sure there are all sorts of truth serums or charms that could be used as well). The solution works somewhat better in the case of the Polyjuice Potion, but since we've concluded that the Imperius Curse also needs to be considered, and since this would provide almost no security in that case, the security question ends up being a poor solution to the identity problem.
  • Step 4: What other risks does the security solution cause? The most notable risk is that of a false positive. If the attacker successfully answers the security question, they achieve a certain level of trust. When you use identity as a security measure, you make impersonating that identity (or manipulating the person in question via the Imperius Curse) a much more valuable attack.
  • Step 5: What trade-offs does the security solution require? This solution is inexpensive and easy to implement, but also ineffective and inconvenient. It would also require a certain amount of vigilance to maintain indefinitely. After weeks of strict adherence to the security measure, I think you'd find people getting complacent. They'd skip using the security measure when they're in a hurry, for example. When nothing bad happens, it would only reinforce the sense that the practice is an unnecessary inconvenience. It's also worth noting that this system could be used in conjunction with other security measures, but even then, it's not all that useful.
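To put the Step 3 weakness in muggle terms: a pre-agreed security question is just a reusable shared secret, so anyone who observes (or extracts) a single exchange can replay it indefinitely. The sketch below is my own illustration, and the question and answer in it are made up.

    # A static security question is a reusable shared secret: one observed
    # exchange is enough to impersonate someone forever after.

    security_questions = {
        "ron": ("What did your brothers turn your teddy bear into?", "a spider"),
    }

    def verify(name, answer):
        _question, expected = security_questions[name]
        return answer.strip().lower() == expected

    print(verify("ron", "A spider"))   # True: the real Ron passes

    # A Death Eater who overheard that exchange (or put Ron under the Imperius
    # Curse beforehand) simply repeats the same answer while wearing Ron's face.
    overheard = "A spider"
    print(verify("ron", overheard))    # True: so does the impostor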
It seems to me that this isn't a very effective security measure, especially when you consider that the attacker is likely a witch or wizard. This is obviously apparent to many of the characters in the book as well. As such, I'd recommend a magic countermeasure. If you need to verify someone's identity, you should probably use a charm or spell of some sort to do so instead of the easily subverted "security question" system. It shouldn't be difficult. In Harry Potter's universe, it would probably amount to pointing a wand at someone and saying "Identico!" (or some other such word that is vaguely related to the words Identity or Identify) at which point you could find out who the person is and if they're under the Imperius Curse.
Posted by Mark on July 17, 2005 at 12:21 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, May 29, 2005

Sharks, Deer, and Risk
Here's a question: Which animal poses the greater risk to the average person, a deer or a shark?

Most people's initial reaction (mine included) to that question is to answer that the shark is the more dangerous animal. Statistically speaking, however, the average American is much more likely to be killed by deer (due to collisions with vehicles) than by a shark attack. Truly accurate statistics for deer collisions don't exist, but estimates place the number of accidents in the hundreds of thousands. Millions of dollars' worth of damage is caused by deer accidents every year, along with thousands of injuries and hundreds of deaths.

Shark attacks, on the other hand, are much less frequent. Each year, approximately 50 to 100 shark attacks are reported. "World-wide, over the past decade, there have been an average of 8 shark attack fatalities per year."
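Putting rough numbers on it makes the gap obvious. The figures below are just the approximate ones cited above (a few hundred deer-related deaths per year in the U.S., about 8 shark fatalities per year worldwide) plus ballpark population counts, so treat the result as an order-of-magnitude comparison rather than a statistic.

    # Order-of-magnitude annual fatality risk, using the rough figures above.
    us_population = 3.0e8         # ballpark, circa 2005
    world_population = 6.5e9      # ballpark, circa 2005
    deer_deaths_per_year = 150    # "hundreds of deaths" -- a conservative pick
    shark_deaths_per_year = 8     # worldwide average cited above

    deer_risk = deer_deaths_per_year / us_population
    shark_risk = shark_deaths_per_year / world_population

    print(f"deer:  about 1 in {1 / deer_risk:,.0f} per year")
    print(f"shark: about 1 in {1 / shark_risk:,.0f} per year")
    print(f"deer are roughly {deer_risk / shark_risk:.0f} times the risk")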

It seems clear that deer actually pose a greater risk to the average person than sharks. So why do people think the reverse is true? There are a number of reasons, among them the fact that deer don't intentionally cause death and destruction (not that we know of anyway) and they are also usually harmed or killed in the process, while sharks directly attack their victims in a seemingly malicious manner (though I don't believe sharks to be malicious either).

I've been reading Bruce Schneier's book, Beyond Fear, recently. It's excellent, and at one point he draws a distinction between what security professionals refer to as "threats" and "risks."
A threat is a potential way an attacker can attack a system. Car burglary, car theft, and carjacking are all threats ... When security professionals talk about risk, they take into consideration both the likelihood of the threat and the seriousness of a successful attack. In the U.S., car theft is a more serious risk than carjacking because it is much more likely to occur.
Everyone makes risk assessments every day, but most everyone also has different tolerances for risk. It's essentially a subjective decision, and it turns out that most of us rely on imperfect heuristics and inductive reasoning when it comes to these sorts of decisions (because it's not like we have the statistics handy). Most of the time, these heuristics serve us well (and it's a good thing too), but what this really ends up meaning is that when people make a risk assessment, they're basing their decision on a perceived risk, not the actual risk.

Schneier includes a few interesting theories about why people's perceptions get skewed, including this:
Modern mass media, specifically movies and TV news, has degraded our sense of natural risk. We learn about risks, or we think we are learning, not by directly experiencing the world around us and by seeing what happens to others, but increasingly by getting our view of things through the distorted lens of the media. Our experience is distilled for us, and it’s a skewed sample that plays havoc with our perceptions. Kids try stunts they’ve seen performed by professional stuntmen on TV, never recognizing the precautions the pros take. The five o’clock news doesn’t truly reflect the world we live in -- only a very few small and special parts of it.

Slices of life with immediate visual impact get magnified; those with no visual component, or that can’t be immediately and viscerally comprehended, get downplayed. Rarities and anomalies, like terrorism, are endlessly discussed and debated, while common risks like heart disease, lung cancer, diabetes, and suicide are minimized.
When I first considered the Deer/Shark dilemma, my immediate thoughts turned to film. This may be a reflection on how much movies play a part in my life, but I suspect some others would also immediately think of Bambi, with its cuddly, cute and innocent deer, and Jaws, with its maniacal great white shark. Indeed, Fritz Schranck once wrote about these "rats with antlers" (as some folks refer to deer) and how "Disney's ability to make certain animals look just too cute to kill" has deterred many people from hunting and eating deer. When you look at the deer collision statistics, what you see is that what Disney has really done is to endanger us all!

Given the above, one might be tempted to pursue some form of censorship to keep the media from degrading our ability to determine risk. However, I would argue that this is wrong. Freedom of speech is ultimately a security measure, and if we're to consider abridging that freedom, we must also seriously consider the risks of that action. We might be able to slightly improve our risk decisionmaking with censorship, but at what cost?

Schneier himself recently wrote about this subject on his blog, in response to an article arguing that suicide bombings in Iraq shouldn't be reported (because it scares people and serves the terrorists' ends). It turns out there are a lot of reasons why the media's focus on horrific events in Iraq causes problems, but almost any way you slice it, it's still wrong to censor the news:
It's wrong because the danger of not reporting terrorist attacks is greater than the risk of continuing to report them. Freedom of the press is a security measure. The only tool we have to keep government honest is public disclosure. Once we start hiding pieces of reality from the public -- either through legal censorship or self-imposed "restraint" -- we end up with a government that acts based on secrets. We end up with some sort of system that decides what the public should or should not know.
Like all of security, this comes down to a basic tradeoff. As I'm fond of saying, human beings don't so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Risk can be difficult to determine, and the media's sensationalism doesn't help, but censorship isn't a realistic solution to that problem because it introduces problems of its own (and those new problems are worse than the one we're trying to solve in the first place). Plus, both Jaws and Bambi really are great movies!
Posted by Mark on May 29, 2005 at 08:50 PM .: link :.


End of This Day's Posts

Friday, April 22, 2005

What is a Weblog, Part II
What is a weblog? My original thoughts leaned towards thinking of blogs as a genre within the internet. Like all genres, there is a common set of conventions that define the blogging genre, but the boundaries are soft and some sites are able to blur the lines quite thoroughly. Furthermore, each individual probably has their own definition as to what constitutes a blog (again similar to genres). The very elusiveness of a definition for blog indicates that perception becomes an important part of determining whether or not something is a blog. It has become clear that there is no one answer, but if we spread the decision out to a broad number of people, each with their own independent definition of blog, we should be able to come to the conclusion that a borderline site like Slashdot is a blog because most people call it a blog.

So now that we have a (non)definition for what a blog is, just how important are blogs? Caesar at ArsTechnica writes that, according to a new poll, Americans are somewhat ambivalent about blogs. In particular, they don't trust blogs.

I don't particularly mind this, however. For the most part, blogs don't make much of an effort to be impartial, and as I've written before, it is the blogger's willingness to embrace their subjectivity that is their primary strength. Making mistakes on a blog is acceptable, so long as you learn from your mistakes. Since blogs are typically more informal, it's easier for bloggers to acknowledge their mistakes.

Lexington Green from ChicagoBoyz recently wrote about blogging to a writer friend of his:
To paraphrase Truman Capote's famous jibe against Jack Kerouac, blogging is not writing, it is typing. A writer who is blogging is not writing, he is blogging. A concert pianist who is sitting down at the concert grand piano in Carnegie Hall in front of a packed house is the equivalent to an author publishing a finished book. The same person sitting down at the piano in his neighborhood bar on a Saturday night and knocking out a few old standards, doing a little improvisation, and even doing some singing -- that is blogging. Same instrument -- words, piano -- different medium. We forgive the mistakes and wrong-guesses because we value the immediacy and spontaneity. Plus, publish a book, it is fixed in stone. Write a blog post you later decide is completely wrong, it is actually good, since it gives you a good hook for a later post explaining your thoughts that led to the changed conclusion. The essence of a blog is to air things informally, to throw things out, to say "this interests me because ..." From time to time a more considered and article-like post is good. But most people read blogs by skimming. If a post is too long, in my observation, it does not get much response and may not be read at all.
Of course, his definition of what a blog is could be argued (as there are some popular and thoughtful bloggers who routinely write longer, more formal essays), but it actually struck me as being an excellent general description of blogging. Note his favorable attitude towards mistakes ("it gives you a good hook for a later post" is an excellent quote, though I think you might have to be a blogger to fully understand it). In the blogosphere, it's ok to be wrong:
Everyone makes mistakes. It's a fact of life. It isn't a cause for shame, it's just reality. Just as engineers are in the business of producing successful designs which can be fabricated out of less-than-ideal components, the engineering process is designed to produce successful designs out of a team made up of engineers every one of which screws up routinely. The point of the process is not to prevent errors (because that's impossible) but rather to try to detect them and correct them as early as possible.

There's nothing wrong with making a mistake. It's not that you want to be sloppy; everyone should try to do a good job, but we don't flog people for making mistakes.
The problem with the mainstream media is that they purport to be objective, as if they're just reporting the facts. Striving for objectivity can be a very good thing, but total objectivity is impossible, and if you deny the inherent subjectivity in journalism, then something is lost.

One thing Caesar mentions is that "the sensationalism surrounding blogs has got to go. Blogs don't solve world hunger, cure disease, save damsels in distress, or any of the other heroic things attributed to them." I agree with this too, though I do think there is something sensational about blogs, or more generally, the internet.

Steven Den Beste once wrote about what he thought were the four most important inventions of all time:
In my opinion, the four most important inventions in human history are spoken language, writing, movable type printing and digital electronic information processing (computers and networks). Each represented a massive improvement in our ability to distribute information and to preserve it for later use, and this is the foundation of all other human knowledge activities. There are many other inventions which can be cited as being important (agriculture, boats, metal, money, ceramic pottery, postmodernist literary theory) but those have less pervasive overall effects.
Regardless of whether or not you agree with the notion that these are the most important inventions, it is undeniable that the internet provides a stairstep in communication capability, which, in turn, significantly improves the process of large-scale collaboration that is so important to human existence.
When knowledge could only spread by speech, it might take a thousand years for a good idea to cross the planet and begin to make a difference. With writing it could take a couple of centuries. With printing it could happen in fifty years.

With computer networks, it can happen in a week if not less. After I've posted this article to a server in San Diego, it will be read by someone on the far side of a major ocean within minutes. That's a radical change in capability; a sufficient difference in degree to represent a difference in kind. It means that people all over the world can participate in debate about critical subjects with each other in real time.
And it appears that blogs, with their low barrier to entry and automated software processes, will play a large part in the worldwide debate. There is, of course, a ton of room for improvement, but things are progressing rapidly now and perhaps even accelerating. It is true that some blogging proponents are preaching triumphalism, but that's part of the charm. They're allowed to be wrong and if you look closely at what happens when someone makes such a comment, you see that for every exaggerated claim, there are 10 counters in other blogs that call bullshit. Those blogs might be on the long tail and probably won't garner as much attention, but that's part of the point. Blogs aren't trustworthy, which is precisely why they're so important.

Update 4.24.05: I forgot to link the four most important inventions article (and I changed some minor wording: I had originally referred to the four "greatest" inventions, which was not the wording Den Beste had used).
Posted by Mark on April 22, 2005 at 06:49 PM .: link :.


End of This Day's Posts

Sunday, April 17, 2005

What is a Weblog?
Caesar at ArsTechnica has written a few entries recently concerning blogs which interested me. The first simply asks: What, exactly, is a blog? Once you get past the overly-general definitions ("a blog is a frequently updated webpage"), it becomes a surprisingly difficult question.

Caesar quotes Wikipedia:
A weblog, web log or simply a blog, is a web application which contains periodic time-stamped posts on a common webpage. These posts are often but not necessarily in reverse chronological order. Such a website would typically be accessible to any Internet user. "Weblog" is a portmanteau of "web" and "log". The term "blog" came into common use as a way of avoiding confusion with the term server log.
Of course, as Caesar notes, the majority of internet sites could probably be described in such a way. What differentiates blogs from discussion boards, news organizations, and the like?

Reading through the resulting discussion provides some insight, but practically every definition is either too general or too specific.

Many people like to refer to weblogs as a medium in their own right. I can see the point, but I think it's more general than that. The internet is the medium, whereas a weblog is basically a set of common conventions used to communicate through that medium. Among the conventions are things like a main page with chronological posts, permalinks, archives, comments, calendars, syndication (RSS), blogging software (CMS), trackbacks, &c. One problem is that no single convention is, in itself, definitive of a weblog. It is possible to publish a weblog without syndication, comments, or a calendar. Depending on the conventions being eschewed, such blogs may be unusual, but may still be just as much a blog as any other site.
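Another way to see how thin the technical definition is: most of those conventions hang off a handful of fields and a sort order. Here's a minimal sketch of what I mean; it's my own toy model, not the schema of Movable Type or any other real blogging package.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Entry:
        """Roughly the minimum a CMS needs to generate the usual blog conventions."""
        title: str
        body: str
        posted: datetime
        permalink: str                      # a stable URL others can link to
        comments: list = field(default_factory=list)

    def front_page(entries, count=10):
        """Main-page convention: most recent posts first."""
        return sorted(entries, key=lambda e: e.posted, reverse=True)[:count]

    def monthly_archive(entries, year, month):
        """Archive convention: everything posted in a given month."""
        return [e for e in entries
                if e.posted.year == year and e.posted.month == month]

Everything else (trackbacks, syndication, calendars) is just another view over the same handful of fields, which is why no single convention can carry the definition by itself.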

For lack of a better term, I tend to think of weblogs as a genre. This is, of course, not totally appropriate but I think it does communicate what I'm getting at. A genre is typically defined as a category of artistic expression marked by a distinctive style, form, or content. However, anyone who is familiar with genre film or literature knows that there are plenty of movies or books that are difficult to categorize. As such, specific genres such as horror, sci-fi, or comedy are actually quite inclusive. Some genres, Drama in particular, are incredibly broad and are often accompanied by the conventions of other genres (we call such pieces "cross-genre," though I think you could argue that almost everything incorporates "Drama"). The point here is that there is often a blurry line between what constitutes one genre from another.

On the medium of the internet, there are many genres, one of which is a weblog. Other genres include commercial sites (i.e. sites that try to sell you things, Amazon.com, Ebay, &c.), reference sites (i.e. dictionaries & encyclopedias), Bulletin Board Systems and Forums, news sites, personal sites, weblogs, wikis, and probably many, many others.

Any given site is probably made up of a combination of genres and it is often difficult to pinpoint any one genre as being representative. Take, for example, Kaedrin.com. It is a personal site with some random features, a bunch of book & movie reviews, a forum, and, of course, a weblog (which is what you're reading now). Everything is clearly delineated here at Kaedrin, but other sites blur the lines between genres on every page. Take ArsTechnica itself: Is it a news site or a blog or something else entirely? I would say that the front page is really a combination of many different things, one of which is a blog. It's a "cross-genre" webpage, but that doesn't necessarily make it any less effective (though there is something to be said for simplicity and it is quite possible to load a page up with too much stuff, just as it's possible for a book or movie to be too ambitious and take on too much at once) just as Alien isn't necessarily a less effective Science Fiction film because it incorporates elements of Horror and Drama (or vice-versa).

Interestingly, much of what a weblog is can be defined as an already existing literary genre: the journal. People have kept journals and diaries all throughout history. The major difference between a weblog and a journal is that a weblog is published for all to see on the public internet (and also that weblogs can be linked together through the use of the hyperlink and the infrastructure of the internet). Historically, diaries were usually private, but there are notable exceptions which have been published in book form. Theoretically, one could take such diaries and publish them online - would they be blogs? Take, for instance, The Diary of Samuel Pepys which is currently being published daily as if it's a weblog circa 1662 (i.e. Today's entry is dated "Thursday 17 April 1662"). The only difference is that the author of that diary is dead and thus doesn't interact or respond to the rest of the weblog community (though there is still interaction allowed in the form of annotations).

A few other random observations about blogs:
  • Software: Many people brought up the fact that most blogs are produced with the assistance of Weblogging Software, such as Blogger or Movable Type. From my perspective, such tools are necessary for the spread of weblogs, but shouldn't be a part of the definition. They assist in the spread of weblogs because they automate the overly-technical details of publishing a website and make it easy for normal folks to participate. They're also useful for automatically propagating weblog conventions like permalinks, comments, trackbacks, and archives. However, it's possible to do all of this without the use of blogging specific software and it's also possible to use blogging software for other purposes (for instance, Kaedrin's very own Tandem Stories are powered by Movable Type). It's interesting that other genres have their own software as well, particularly bulletin boards and forums. Ironically, one could use such BBS software to publish a blog (or power tandem stories), if they were so inclined. The Pepys blog mentioned above actually makes use of wiki software (though that software powers the entries, it's mostly used to allow annotations). To me content management systems are important, but they don't define so much as propagate the genre.
  • Personality: One fairly common theme in definitions is that weblogs are personal - they're maintained by a person (or small group of people), not an official organization. A personality gets through. There is also the perception that a blog is less filtered than official communications. Part of the charm of weblogs is that you can be wrong (more on this later, possibly in another post). I'm actually not sure how important this is to the definition of a blog. Someone who posts nothing but links doesn't display much of a personality, except through more subtle means (the choice of links can tell you a lot about an individual, albeit in an indirect way that could lead to much confusion).
  • Communities: Any given public weblog is part of a community, whether it wants to be or not. The boundaries of any specific weblog are usually well delineated, but since weblogs are part of the internet, which is an on-demand medium (as opposed to television or radio, which are broadcast), blogs are often seen as relative to one another. Entries and links from different blogs are aggregated, compared, correlated and published in other weblogs. Any blog which builds enough of a readership provides a way to connect people who share various interests through the infrastructure of the internet.
Some time ago, Derek Powazek asked What the Hell is a Weblog? You tell me. and published all the answers. It turns out that I answered this myself (last one on that page), many years ago:
I don't care what the hell a weblog is. It is what I say it is. It's something I update whenever I find an interesting tidbit on the web. And it's fun. So there.
Heh. Interesting to note that my secondary definition there ("something I update whenever I find an interesting tidbit on the web") has changed significantly since I contributed that definition. This is why, I suppose, I had originally supplied the primary definition ("I don't care what the hell a weblog is. It is what I say it is.") and to be honest, I don't think that's changed (though I guess you could call that definition "too general"). Blogging is whatever I want it to be. Of course, I could up and call anything a blog, but I suppose it is also required that others perceive your blog as a blog. That way, the genre still retains some shape, but is still permeable enough to allow some flexibility.

I had originally intended to make several other points in this post, but since it has grown to a rather large size, I'll save them for other posts. Hopefully, I'll gather the motivation to do so before next week's scheduled entry, but there's no guarantee...
Posted by Mark on April 17, 2005 at 08:27 PM .: link :.


End of This Day's Posts

Sunday, March 20, 2005

Time Travel in Donnie Darko
By popular request, here is a brief analysis of time travel used in the movie Donnie Darko. As I've mentioned before, Donnie Darko is an enigmatic film and I'm not sure it makes total sense. At a very high level everything seems to fit, but when you start to drill down into the details things become less clear.

In the commentary track of the Director's Cut DVD, writer/director Richard Kelly attempts to clarify some of the more mystifying aspects of the film, but he still leaves a lot of wiggle room and ambiguity. He describes the time travel in the film as being driven by a "comic book logic," which should give you an idea of how rigorously the subject is treated in the film (i.e. not very). Time travel is essentially a deus ex machina; it drives the story, but its internal mechanics are unimportant. So this analysis isn't really intended to be very rigorous either, just a few thoughts and attempts to clarify or at least call out some of the more confusing concepts.

Before I really get into it, I suppose I should mention that what follows contains many SPOILERS, so read on at your own risk. Another thing that might be useful is to go over other less-than-rigorous time travel theories that have been presented in film and literature. This list isn't meant to be complete, but these four theories will help in dissecting Donnie Darko. Again, many SPOILERS, especially in the case of Lightning (as I'm assuming most people haven't read it).
  • The Terminator: The main timeline is set, and traveling back in time cannot change anything. Indeed, traveling back in time to change the present will sometimes cause the very thing you're trying to avoid, as happens in The Terminator (for obvious dramatic reasons). This is among the more plausible time travel theories, as it avoids those messy paradoxes. As such, it is one of the more popular theories, used in many other stories (like 12 Monkeys and, funnily enough, Bill & Ted's Excellent Adventure). A more pretentious name for this is Circular Causation, but I think The Terminator gets the point across...
  • Back to the Future: There are, I suppose, many ways to interpret time travel in this movie, but in this theory, there is still only one timeline, but you can change the past (and thus the present). In this theory, it's possible to go back in time and kill your father (before he had you), and in such a case you will "disappear." This is also a common theory, but the presence of paradox makes it less plausible. There are probably ways to explain this theory in terms of alternate universes (multiple timelines) as well...
  • The End of Eternity: In Isaac Asimov's novel, a group of people known as Eternals develop time travel and decide to improve upon history by introducing carefully calculated changes in the timeline. There is more to it than that, but the concept of a society using time travel to manipulate history is the important part, and it's relevant to DD.
  • Lightning: In Dean Koontz's novel, time travel is only allowed in one direction: to the future. This takes care of the "kill your father" paradox rather neatly. You can, however, change the future. There is a catch though, which is probably more for dramatic effect, but which bears importance in the Donnie Darko discussion - essentially, fate doesn't like it when you attempt to change something in the future: "Destiny struggles to reassert the pattern that was meant to be." Not particularly scientific, but interesting and again, relevant to DD.
Donnie Darko sort of contains elements of all four, and since it includes the Back to the Future theory, it also sort of includes a paradox. To start, here is a diagram that will help visualize the time travel present in the film:
Donnie Darko Timeline
It's not really to scale, but you get the point. Basically, the main timeline is displayed in the line segment AD (and it is a thicker line, as it is the timeline that is meant to be). BC (the black line) represents the tangent universe, a sort of alternate timeline, and this is where the majority of the film takes place. CB (the grey line) represents the time travel in the film. More details listed below:
  • AB - Point A is the start of the film, and the segment AB takes place before the tangent universe begins.
  • BC - Point B is the point at which an airplane engine lands on Donnie Darko's house. It is also the point at which the tangent universe begins. It is unclear why or how the tangent universe begins, but in the main timeline Donnie is killed, while in the tangent universe, Donnie is sort of called out of his room by a mysterious force and thus is not killed by the engine. As the movie tells it, shortly after point C, the entire universe (I assume this includes the main timeline as well) is destroyed. This implies that tangent universes must be resolved and cannot be allowed to continue. The film references a fictional book which describes the tangent universe thusly:
    If a Tangent Universe occurs, it will be highly unstable, sustaining itself for no longer than several weeks.

    Eventually it will collapse upon itself, forming a black hole within the Primary Universe capable of destroying all existence.
    This particular information is referenced in the Director's Cut, but not in the theatrical cut.
  • CB - This segment is represented by the grey line between points C and B. At point C, a jet engine falls off an aircraft and travels back in time, hitting Donnie's house at point B. I assume that this event is what causes the tangent universe to form in the first place, which is paradoxical - how can the tangent universe exist when it is caused by itself?
  • BD - The period immediately following point B is shown in the film, but the rest of the segment is not. It is unclear whether the jet engine falls off the plane at point D (which parallels point C). I get the impression that it doesn't, but if it did, it might help resolve the paradox shown in CB.
Even after all this, there are still many, many, many questions to be answered. There are a few other things we need to establish first.

First, does Donnie have some sort of superpower? Donnie is obviously different from other people. The film doesn't make any explicit reference to his powers, but they are sort of implied by his visits to a psychiatrist and his visions. I suppose the water trails he sees (which show the future path of a person, sometimes including himself) could be an expression of his abilities (as they allow him to see into the future). It's clear that Donnie made a decision near the end of the movie that he was going to "fix" the universe and allow himself to be killed by the jet engine, but it's not clear how that happens. Does Donnie actually cause that to happen, or is he just aware of it happening and going along for the ride? There is a sort of messianic theme in the movie, so I'm assuming that Donnie has some sort of power to send himself and/or the jet engine back in time and link the two universes together (and to collapse the tangent universe without destroying all of existence).

Richard Kelly, in explaining his take on the story, indicated that he wanted to communicate that there was some sort of technology at work in the tangent universe, manipulating everyone's actions, and attempting to set things right. It is unclear what exactly this technology is, how it works, or who is using it, but his point is that someone is orchestrating events in the tangent universe so as to fix the universe (or to allow Donnie the opportunity to fix things). When he mentioned this concept, I immediately thought of Asimov's Eternals, people who manipulated time and history for the betterment of mankind. In Donnie Darko, perhaps there exists a similar group of people who are tasked with ensuring that tangent universes are closed. Or perhaps, Donnie himself is subconsciously manipulating events to help fix things.

I also thought of Koontz's Lightning and that infamous line "Destiny struggles to reassert the pattern that was meant to be." In that scenario, there isn't really a technology at work, just fate, perhaps augmented by Donnie's supernatural abilities. Indeed, it could be some sort of combination of these three explanations: Donnie Darko has powers which are augmented by some sort of technology and fate.

What is Frank (the demonic looking bunny), and what role does he play in the story? This is very unclear. He may be a ghost, he may be the result of Donnie's unconscious awareness of the future, or he may be a projection from the technological puppet-masters.

There are obviously a number of other explanations. What if the timeline actually follows a linear path (i.e. the linear presentation in the movie)? In that scenario, the timeline would go from A to B to C to D, except that B and D are essentially the same point in time (perhaps the main timeline stopped while the tangent universe worked itself out). So the time travel line would occur between CD.

And of course, this doesn't really take into account all the themes of the film. I suppose I should also note that I've been analyzing the Director's Cut, which references a lot more of the fictional book, The Philosophy Of Time Travel by Roberta Sparrow (a character in the film). The Director's Cut gives more information on the guiding forces in the story, and it gives a more sci-fi bent than the theatrical cut, but both cuts are sufficiently ambiguous as to allow multiple interpretations, many of which end up being pretty silly when you drill down into the details, and some don't make much sense, but in the end that doesn't really matter all that much because you have to figure it out for yourself...
Posted by Mark on March 20, 2005 at 01:34 PM .: link :.


End of This Day's Posts

Sunday, February 20, 2005

The Stability of Three
One of the things I've always respected about Neal Stephenson is his attitude (or rather, the lack thereof) regarding politics:
Politics - These I avoid for the simple reason that artists often make fools of themselves, and begin to produce bad art, when they decide to get political. A novelist needs to be able to see the world through the eyes of just about anyone, including people who have this or that set of views on religion, politics, etc. By espousing one strong political view a novelist loses the power to do this. Anyone who has convinced himself, based on reading my work, that I hold this or that political view, is probably wrong. What is much more likely is that, for a while, I managed to get inside the head of a fictional character who held that view.
Having read and enjoyed several of his books, I think this attitude has served him well. In a recent interview in Reason magazine, Stephenson makes several interesting observations. The whole thing is great, and many people are interested in his comments regarding American technology and science, but I found one other tidbit very interesting. Strictly speaking, it doesn't break with his attitude about politics, but it is somewhat political:
Speaking as an observer who has many friends with libertarian instincts, I would point out that terrorism is a much more formidable opponent of political liberty than government. Government acts almost as a recruiting station for libertarians. Anyone who pays taxes or has to fill out government paperwork develops libertarian impulses almost as a knee-jerk reaction. But terrorism acts as a recruiting station for statists. So it looks to me as though we are headed for a triangular system in which libertarians and statists and terrorists interact with each other in a way that I’m afraid might turn out to be quite stable.
I took particular note of what he describes as a "triangular system" because it's something I've seen before...

One of the primary goals of the American Constitutional Convention was to devise a system that would be resistant to tyranny. The founders were clearly aware of the damage that an unrestrained government could do, so they tried to design the new system in such a way that it wouldn't become tyrannical. Democratic institutions like mandatory periodic voting and direct accountability to the people played a large part in this, but the founders did some interesting structural work as well.

Taking their cue from the English Parliament's relationship with the King of England, the founders decided to create a legislative branch separate from the executive. This, in turn, placed the two governing bodies in competition. However, this isn't a very robust system. If one of the governing bodies becomes more powerful than the other, they can leverage their advantage to accrue more power, thus increasing the imbalance.

A two-way balance of power is unstable, but a three-way balance turns out to be very stable. If any one body becomes more powerful than the other two, the two usually can and will temporarily unite, and their combined power will still exceed that of the third. So the founders added a third governing body, an independent judiciary.
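A crude toy model illustrates the difference. The rule below (the strongest branch grabs more power unless the others can jointly outweigh it) is my own simplification for the sake of illustration, not anything from the founders or from chaos theory proper.

    # Toy model: each round, the strongest branch accrues power unless the
    # remaining branches can jointly outweigh it.

    def power_shares(powers, rounds=50, grab=0.05):
        powers = list(powers)
        for _ in range(rounds):
            strongest = powers.index(max(powers))
            others = sum(powers) - powers[strongest]
            if powers[strongest] > others:
                powers[strongest] *= 1 + grab   # unchecked, the lead snowballs
        total = sum(powers)
        return [round(p / total, 2) for p in powers]

    print("two branches:  ", power_shares([1.1, 1.0]))       # the early lead runs away
    print("three branches:", power_shares([1.1, 1.0, 1.0]))  # the other two hold the line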

The result was a bizarre sort of stable oscillation of power between the three major branches of the federal government. Major shifts in power (such as wars) disturbed the system, but it always fell back to a preferred state of flux. This stable oscillation turns out to be one of the key elements of Chaos theory, and is referred to as a strange attractor. These "triangular systems" are particularly good at this, and there are many other examples...

Some argue that the Cold War stabilized considerably when China split from the Soviet Union. Once it became a three-way conflict, there was much less of a chance of unbalance (and as unbalance would have led to nuclear war, this was obviously a good thing).

Steven Den Beste once noted this stabilizing power of three in the interim Iraqi constitution, where the Iraqis instituted a Presidency Council of 3 Presidents representing each of the 3 major factions in Iraq:
...those writing the Iraqi constitution also had to create a system acceptable to the three primary factions inside of Iraq. If they did not, the system would shake itself to pieces and there was a risk of Iraqi civil war.

The divisions within Iraq are very real. But this constitution takes advantage of the fact that there are three competing factions none of which really trusts the other. This constitution leverages that weakness, and makes it into a strength.
It should be interesting to see if that structure will be maintained in the new Iraqi constitution.

As for Stephenson's speculation that a triangular system consisting of libertarians, statists, and terrorists may develop, I'm not sure. They certainly seem to feed off one another in a way that would facilitate such a system, but I'm not positive it would work out that way, nor do I think it is a particularly desirable state to be in, all the more so because its triangular structure could make it a very stable system. In any case, I thought it was an interesting observation and well worth considering...
Posted by Mark on February 20, 2005 at 08:06 PM .: link :.


End of This Day's Posts

Sunday, January 30, 2005

Elections in Iraq
Iraq held its first national elections in over 50 years today. I don't have much to add to what has already been said, but I will note that it doesn't surprise me that the insurgents were quieter than expected. One of the big advantages of terrorism is the surprise factor, and on a day like today, security forces are expecting attacks and are much more likely to spot unusual activities and investigate. My guess is that attacks will intensify in the coming weeks, as the insurgents test the new government...

Lots of people are commenting on this so I'll try to perform some of that information aggregation that blogs are known for, starting with the Iraqi Blogs, then moving on to the rest of the blogosphere...

Update: Moved all the links into the extended entry. Click below to read on... Iraqi Blogs:
  • Friends of Democracy: Michael Totten is selecting, editing, and posting the reports and photos of Iraqis on the ground in Iraq.
  • Zeyad: An Iraqi dentist travelling in Jordan comments on the elections...
  • Alaa: Another Iraqi blogger comments:
    I bow in respect and awe to the men and women of our people who, armed only with faith and hope are going to the polls under the very real threats of being blown to pieces. These are the real braves; not the miserable creatures of hate who are attacking one of the noblest things that has ever happened to us. Have you ever seen anything like this? Iraq will be O.K. with so many brave people, it will certainly O.K.; I can say no more just now; I am just filled with pride and moved beyond words.
  • hammorabi, another Iraqi blogger, has pictures and some other comments as well.
  • Iraq the Model: More Iraqi bloggers weigh in: "We had all kinds of feelings in our minds while we were on our way to the ballot box except one feeling that never came to us, that was fear."
  • Ali, brother of the Iraq the Model bloggers, comments. He recalls the last time he voted in Iraq:
    This was the same place I went in 1996 to cast my vote in a poll asking if we wanted to have Saddam as a president for life or not. I had to go at that time. The threats for anyone who refused to take that poll were no less than the death penalty. Still our district was one of the places were one could vote secretly, occasionally though. They trusted our neighborhood because it's mainly Sunni military officers who live here with their families. I and some of my friends chose "NO" but we were scared to death as we marked the paper and remained so for days.
    He doesn't seem as worried about his vote today, though.
  • Khalid Jarrar, another Iraqi blogger, didn't vote and isn't too enthused with the less than scientific estimate of 72% voter turnout.
  • Kurdo: Pictures! Lots of purple fingers - "All these fingers are up for you terrorist, anti-democracy, pro-beheading, suicide-bombers, Baathiest, Saddamist and anti-peace people."
  • Shlonkom Bakazay?, yet another Iraqi blogger, has several posts. He doesn't seem too happy with the way the elections are being portrayed:
    I, too, am simply nauseated by the coverage of American news outlets. It's made out to be an exercise in self-help or validation for all the death and misery that has been put directly upon Iraq and America. It is, however, completely understandable that people are tremendously enthusiastic about being able to go through this exercise...even under such draconian lock-down. The next couple months will go a long way to explain what will happen in Iraq. Let us all hope for the best.
  • Raed Jarrar has some pointed words for the Bush administration:
    The cowardly and corrupt bush administration, working along with the dirty allow(ie) government is coercing Iraqis to vote. The allow(ie) puppets are threatening Iraqis who don't vote that they will not get their monthly food rations. ... and this is one of the main reasons of why millions of poor and destroyed Iraqis were dragged out of their homes today and sent to election centers in the middle of explosions and bullets. They don't give a damn about elections, they want food.
    Besides the food rumor, he is also not too happy about the turnout estimates...
  • Baghdad Dweller: "Say it loud and clear: I am a Sunni, I am an Iraqi and I voted" He also has an exclusive picture of Al-Zarqawi and he wants to know why so many Americans don't think Democracy will work in Iraq...
  • I wonder if Ahmed voted?
  • Abu Khaleel is having some election night jitters:
    On the one hand, I am passionately for democracy in principle. It is the only hope for Iraq. On the other hand, I am passionately against these particular elections. They are only an ugly, distorted imitation of democracy. I am convinced that they will not lead to stability... or even democracy.
    The elections seem to have gone better than he anticipated (this post was written last night, before the elections). Look for more from him later...
  • Xosh 7al, from the Kurdistan Bloggers Union, is making up words. He's got lots more today too...
  • No Pain No Gain: Yet another Iraqi blogger...
  • A star from Mosul: "We'd all like to vote for the best man but he's never a candidate." Ain't that the truth. Welcome to Democracy, Aunt Najma!
  • Ferid has some photos.
Other blogs:
  • Glenn Reynolds: Duh. Lots of stuff. Just keep scrolling. Many of the links in this post came from Instapundit, and it's proven to be a good starting point (as always), so thanks Glenn!
  • memeorandum: This site is great on a day like today. It follows several news stories, and the blogs that link to them.
  • Ann Althouse notes a NYT article headline change, with the help of Memeorandum... (Update: Kevin Drum responds) She's got lots more, including a post on Kerry's appearance on Meet the Press.
  • John Robb advises caution:
    What is the role of elections if the state is in failure? If the elections bring in a new government that can't revive the state, what will that mean? We need to remember that this election is going to be a demonstration of the value of democracy. A failed demonstration would have negative consequences.
  • Juan Cole has lots of stuff, and doesn't seem too enthused.
  • Belmont Club: Wretchard responds to some of Cole's claims...
  • The Indepundit has lots of pictures juxtaposed with quotes.
  • David Foster finds himself reminded of the Alamo(!?)
  • The Commissar reads the news to his daughter.
  • John Weidner comments.
  • Ryan Stiles is a security advisor in Iraq and has a few comments. "What's Next? Well for tonight, I imagine it's dodging the celebratory fire."
  • BuzzMachine: Jeff Jarvis has tons of stuff. Just keep scrolling.
    This morning, I asked myself whether I would go to vote if I thought I could be bombed at the polling place or shot because of my blue finger. I don't think I'd have that courage. Most Americans would not (hell, most of us don't vote even in the lap of safety). Remember that every single Iraqi who came to vote today is a victory for democracy.
  • Fritz Schranck likes the blue finger look, and thinks that Americans should use it in our own elections.
  • The Wall Street Journal has a roundup of blogs commenting on the election
  • John Cole (not to be confused with Juan) notes some shifting of the goalposts.
  • Chester is all over the story, including some live blogging.
    1:03 This is Geraldo's finest hour. He can't contain his excitement on the ground in Baghdad -- he just said, "I refuse to speak in measured tones. This is truly exhilirating." And he called this, his sixth trip to Iraq since the war started, as the best one yet. Fox is just letting him go. He just compared the election to the fall of the Berlin Wall and 1776.
    Heh. His finest hour? That's not exactly saying much...
  • Powerline: They've got lots of interesting posts, as usual. Surprisingly, there's more praise of Geraldo. Maybe I should give the guy a break.
  • Donald Sensing notes a courageous act by an Iraqi policeman. "Police Constable Abd al Amir cannot be awarded the [Medal of Honor] by the US government, for only members of the US military are eligible for the award. One hopes he will be appropriately memorialized by the new Iraqi government."
  • Ace of Spades HQ has been posting up a storm, and guest poster Dave gives the blogosphere the "Jack Burton Kick-Ass Award for Excellence." Heh.
  • Captains Quarters also has lots (the next two links come from him - thanks Captain Ed!).
  • Kevin McCullough is blue finger blogging...
  • Radio Blogger: Covers Iraqis voting in Lake Forest, CA yesterday...
  • Arthur Chrenkoff has a three part live-blogging series of posts (one, two, three)
  • Daily Kos: "This Election is simply, in my estimation, an exercise in pretty pictures. Why? Because Elections are to choose governments, not to celebrate the day."
  • Scrappleface: "Iraqi Voting Disrupts News Reports of Bombings"
  • Iraq Election Newswire: Jeff Garzik is providing an excellent roundup of links to MSM news articles.
  • Blackfive
  • Joshua Claybourn: "Flashback: Following WWII, Germany's first election took four years, and Japan's two."
  • Crooked Timber:
    The best possible outcome of the weekend’s election is a successful completion of the present government’s term followed by another real election. It’s often said that the key moment in the growth of a democracy is not its first election but its second, because ... a democracy is a system where governments lose elections.
  • Paul Cella has some comments concerning the Iraqi elections and some historical analogies (including references to regicide!)
  • The Command Post is on top of the story, and also has a great roundup of Iraqi election posts
  • Winds of Change has an Iraq Report (a regular feature at WoC, but this one focuses on the elections).
  • Dean Esmay has been blogging up a storm, and has thoughtfully created an index of his 14 posts (see the "Related posts" at the end of each entry), and put them all in a category (with a single link), making my job that much easier... Thanks Dean!
  • Derek Lowe hopes that Iraq and other Middle Eastern countries step up their scientific endeavors: "Although I generally don't comment on current political events here, I wanted to congratulate the Iraqis who voted in their election this weekend. From a scientist's point of view, it would be a fine thing if they (and the other countries in the region) could have their affairs in good enough order to join the research efforts that are going on in so many other countries. ... I'm showing my biases here, because I think that scientific research is one of the greatest endeavors of the human race. The more hands and minds we have working on the big problems, the better the chances of solutions."
  • Iraqi Election Watch includes a roundup of Iraqi Media, Blogs, and more... [via Volokh]
  • The Counter-Terrorism Blog has the scoop on today's attacks.
  • Michelle Malkin has lots of posts, including one about a ten year old's show of solidarity with Iraqis, one about the lack of leftist blogging about the election (many of the big names don't have much about the elections on their page, if anything at all) and one about women voting. Lots more too.
  • Andrew Sullivan has lots, as you'd expect.
    I think the anti-war left's failure to believe in democracy is a greater failing than the pro-war right's failure to grapple with some of the serious failings of the endeavor. But I hope today that everyone, whatever their view of the war or occupation, can rejoice in the defeat of evil and terror. It's truly inspiring.
    And another one:
    I don't want to be excitable, but aren't you feeling euphoric? It's almost a classic tale of good defeating evil. We always needed the Iraqi people to seize freedom for themselves. Given the chance, they have. This is their victory, made possible by those amazing Western troops. This day eclipses - although, alas, it cannot undo - any errors we have made. Only freedom can defeat terror. Today, freedom won.
  • Pejman Yousefzadeh: "Those who deride this expression of defiance and this irrevocable march towards freedom will themselves be derided by history--and rightfully so. No one thinks for a moment that Iraq's challenges have come to an end, but for all of the obstacles placed in the way of the Iraqis, this day represents a smashing triumph."
  • Joe Gandelman has some comments and a nice roundup of links as well. "The context of this election is unprecedented in recent history -- and perhaps in all of history."
  • Mark Slover has some good stuff, including a roundup and an interesting comparison between voter turnouts of several major democracies.
  • Kevin Drum wonders how the voting turnout splits between the Kurds, Shiites, and the Sunnis (with a prediction of about 70%/70%/20% respectively). He has lots more too.
  • Armed Liberal has a post at Winds of Change:
    I've been betting on the existence of the 'silent middle' in Iraq and throughout the Muslim world, and I'll take a stand here and say that what this election proves, conclusively, is that such a middle exists. Now we'd damn well better do a good job of reaching out to them.
More to come...

Several Updates: Gah! Information overload. Many links added, but I think I'm done for the night. The funny thing is that I haven't even begun to scrape the tip of all the good information that's out there. Partaking in an exercise like this is one of the things that really puts the need for good information aggregation into perspective. But this is a start, I guess...

Another Update: I lied, several new links.
Posted by Mark on January 30, 2005 at 07:06 PM .: link :.


End of This Day's Posts

Sunday, December 12, 2004

Stigmergic Notes
I've been doing a lot of reading and thinking about the concepts discussed in my last post. It's a fascinating, if a little bewildering, topic. I'm not sure I have a great handle on it, but I figured I'd share a few thoughts.

There are many systems that are incredibly flexible, yet they came into existence, grew, and self-organized without any actual planning. Such systems are often referred to as Stigmergic Systems. To a certain extent, free markets have self-organized, guided by such emergent effects as Adam Smith's "invisible hand". Many organisms are able to quickly adapt to changing conditions using a technique of continuous reproduction and selection. To an extent, there are forces on the internet that are beginning to self-organize and produce useful emergent properties, blogs among them.

Such systems are difficult to observe, and it's hard to really get a grasp on what a given system is actually indicating (or what properties are emerging). This is, in part, the way such systems are supposed to work. When many people talk about blogs, they find it hard to believe that a system composed mostly of small, irregularly updated, and downright mediocre (if not worse) blogs can have truly impressive emergent properties (I tend to model the ideal output of the blogosphere as an information resource). Believe it or not, blogging wouldn't work without all the crap. There are a few reasons for this:

The System Design: The idea isn't to design a perfect system. The point is that these systems aren't planned, they're self-organizing. What we design are systems which allow this self-organization to occur. In nature, this is accomplished through constant reproduction and selection (for example, some biological systems can be represented as a function of genes. There are tens of thousands of genes, with a huge and diverse number of combinations. Each combination can be judged based on some criteria, such as survival and reproduction. Nature introduces random mutations so that gene combinations vary. Efficient combinations are "selected" and passed on to the next generation through reproduction, and so on).
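That reproduction-and-selection loop is easy to sketch in code. Here is a minimal, made-up genetic algorithm (the fitness criterion is deliberately trivial), just to illustrate the mechanism of mutation plus selection described above:

    import random

    def fitness(genome):
        """Trivial stand-in for 'survival and reproduction': count the 1s."""
        return sum(genome)

    def evolve(pop_size=30, genome_len=20, generations=40, mutation_rate=0.02, seed=1):
        """Minimal reproduction-and-selection loop: the fitter half reproduces,
        offspring recombine their parents' genes, and random mutation keeps
        introducing variation for selection to act on."""
        rng = random.Random(seed)
        population = [[rng.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]          # selection
            offspring = []
            for _ in range(pop_size):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(genome_len)
                child = a[:cut] + b[cut:]                 # recombination
                child = [bit ^ (rng.random() < mutation_rate) for bit in child]  # mutation
                offspring.append(child)
            population = offspring
        return max(fitness(g) for g in population)

    print("best fitness after evolution:", evolve())  # approaches genome_len (20)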

The important thing with respect to blogs is the tools we use. To a large extent, blogging is simply an extension of many mechanisms already available on the internet, most especially the link. Other weblog-specific mechanisms like blogrolls, permanent links, comments (with links of course) and trackbacks have added functionality to the link and made it more powerful. For a number of reasons, weblogs tend to follow a power-law distribution, which spontaneously produces a sort of hierarchical organization. Many believe that such a distribution is inherently unfair, as many excellent blogs don't get the attention they deserve, but while many of the larger bloggers seek to promote smaller blogs (some even providing mechanisms for promotion), I'm not sure there is any reliable way to systemically "fix" the problem without harming the system's self-organizational abilities.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.
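That's the key point about the power law: free choice alone, repeated widely enough, produces the skew. A toy preferential-attachment simulation (my own sketch, with invented parameters; it isn't taken from the piece quoted above) shows it. Each new reader mostly follows the attention that already exists, occasionally discovering a blog at random, and a small subset of blogs ends up with a disproportionate share of the readers:

    import random
    from collections import Counter

    def simulate_choices(n_readers=20000, n_blogs=200, seed=2):
        """Each new reader usually follows existing attention (picks a blog in
        proportion to the readers it already has) and occasionally picks one at
        random. No blog 'sells out'; the skew emerges from free choice alone."""
        rng = random.Random(seed)
        picks = list(range(n_blogs))              # seed every blog with one reader
        for _ in range(n_readers):
            if rng.random() < 0.1:
                picks.append(rng.randrange(n_blogs))   # independent discovery
            else:
                picks.append(rng.choice(picks))        # follow existing attention
        return Counter(picks)

    counts = simulate_choices()
    top10 = sum(c for _, c in counts.most_common(10))
    print(f"top 10 of 200 blogs hold {top10} of {sum(counts.values())} readers")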
This self-organization is one of the essential properties of weblogs; any attempt to work around it will end up doing harm in the long run, because the real goal is to find a state in which weblogs work most efficiently. How can the weblog community be arranged so that it self-organizes into its best configuration? That is the real question, and that is what we should be trying to accomplish (emphasis mine):
...although the purpose of this example is to build an information resource, the main strategy is concerned with creating an efficient system of collaboration. The information resource emerges as an outcome if this is successful.
Failure is Important: Self-Organizing systems tend to have attractors (a preferred state of the system), such that these systems will always gravitate towards certain positions (or series of positions), no matter where they start. Surprising as it may seem, self-organization only really happens when you expose a system in a steady state to an environment that can destabilize it. By disturbing a steady state, you might cause the system to take up a more efficient position.

It's tempting to dismiss weblogs as a fad because so many of them are crap. But that crap is actually necessary because it destabilizes the system. Bloggers often add their perspective to the weblog community in the hopes that this new information will change the way others think (i.e. they are hoping to induce change - this is roughly referred to as Stigmergy). That new information will often prompt other individuals to respond in some way or another (even if not directly responding). Essentially, change is introduced into the system and this can cause unpredictable and destabilizing effects. Sometimes this destabilization actually helps the system, sometimes (and probably more often than not) it doesn't. Regardless of its direct effects, the process is essential because it is helping the system become increasingly comprehensive. I touched on this in my last post (among several others), in which I claim that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. An individual blog may fail to solve a problem, but that failure is important too when you look at the systemic level. Of course, all of this also muddies the waters and causes the system to deteriorate to a state where it is less efficient to use. For every success story like Rathergate, there are probably 10 bizarre and absurd conspiracy theories to contend with.
This is the dilemma faced by all biological systems. The effects that cause them to become less efficient are also the effects that enable them to evolve into more efficient forms. Nature solves this problem with its evolutionary strategy of selecting for the fittest. This strategy makes sure that progress is always in a positive direction only.
So what weblogs need is a selection process that separates the good blogs from the bad. This ties in with the aforementioned power-law distribution of weblogs. Links, be they blogroll links or links to an individual post, essentially represent a sort of currency of the blogosphere and provide an essential internal feedback loop. There is a rudimentary form of this sort of thing going on, and it has proven to be very successful (as Jeremy Bowers notes, it certainly seems to do so much better than the media, whose selection process appears to be simple heuristics). However, the weblog system is still young and I think there is considerable room for improvement in its selection processes. We've only seen the tip of the iceberg here. Syndication, aggregation, and filtering need to improve considerably. Note that all of those things are systemic improvements. None of them directly act upon the weblog community or the desired informational output of the community. They are improvements to the strategy of creating an efficient system of collaboration. A better informational output emerges as an outcome if the systemic improvements are successful.
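To make the "links as currency" idea a bit more concrete, here is a crude sketch of a link-weighted ranking pass over a hypothetical set of blogs (the blogs and links are invented, and this isn't meant to describe how any actual aggregator or search engine works - it's just the general shape of a selection process driven by links):

    def rank_by_links(links, iterations=50, damping=0.85):
        """Toy link-weighted ranking: a blog's score is fed by the scores of the
        blogs linking to it, so links act as the 'currency' of attention."""
        blogs = set(links) | {b for targets in links.values() for b in targets}
        score = {b: 1.0 / len(blogs) for b in blogs}
        for _ in range(iterations):
            new = {b: (1 - damping) / len(blogs) for b in blogs}
            for src, targets in links.items():
                if not targets:
                    continue
                share = damping * score[src] / len(targets)
                for dst in targets:
                    new[dst] += share
            score = new
        return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

    # hypothetical blogroll: who links to whom
    links = {
        "alpha": ["beta", "gamma"],
        "beta":  ["gamma"],
        "gamma": ["alpha"],
        "delta": ["gamma", "alpha"],   # delta links out, but nobody links to delta
    }
    for blog, s in rank_by_links(links):
        print(blog, round(s, 3))

The blogs that attract links float to the top; the blog nobody links to sinks, no matter how much it links out. That, in miniature, is the feedback loop doing the selecting.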

This is truly a massive subject, and I'm only beginning to understand some of the deeper concepts, so I might end up repeating myself a bit in future posts on this subject, as I delve deeper into the underlying concepts and gain a better understanding. The funny thing is that it doesn't seem like the subject itself is very well defined, so I'm sure lots will be changing in the future. Below are a few links to information that I found helpful in writing this post.
Posted by Mark on December 12, 2004 at 11:15 PM .: link :.


End of This Day's Posts

Sunday, December 05, 2004

An Epic in Parallel Form
Tyler Cowen has an interesting post on the scholarly content of blogging in which he speculates as to how blogging and academic scholarship fit together. In so doing he makes some general observations about blogging:
Blogging is a fundamentally new medium, akin to an epic in serial form, but combining the functions of editor and author. Who doesn't dream of writing an epic?

Don't focus on the single post. Rather a good blog provides you a whole vision of what a field is about, what the interesting questions are, and how you might answer them. It is also a new window onto a mind. And by packaging intellectual content with some personality, bloggers appeal to the biological instincts of blog readers. Be as intellectual as you want, you still are programmed to find people more memorable than ideas.
It's an interesting perspective. Many blogs are general in subject, but some of the ones that really stand out have some sort of narrative (for lack of a better term) that you can follow from post to post. As Cowen puts it, an "epic in serial form." The suggestion that reading a single blog many times is more rewarding than reading the best posts from many different blogs is interesting. But while a single blog may give you a broad view of what a field is about, it can also be rewarding to aggregate the specific views of a wide variety of individuals, even biased and partisan individuals. As Cowen mentions, the blogosphere as a whole is the relevant unit of analysis. Even if each individual view is unimpressive on its own, that may not be the case when taken collectively. In a sense, while each individual is writing a flawed epic in serial form, they are all contributing to an epic in parallel form.

Which brings up another interesting aspect of blogs. When the blogosphere tackles a subject, it produces a diverse set of opinions and perspectives, all published independently by a network of analysts who are all doing work in parallel. The problem here is that the decentralized nature of the blogosphere makes aggregation difficult. Determining the "answer" of a group as large and diverse as the blogosphere, based on all of the disparate information it has produced, is incredibly difficult, especially when the majority of the data represents the opinions of various analysts. A deficiency in aggregation is part of where groupthink comes from, but some groups are able to harness their disparity into something productive. The many are smarter than the few, but only if the many are able to aggregate their data properly.
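A toy example makes the aggregation point concrete (the numbers here are invented): if each analyst's estimate of some quantity is noisy but independent, the aggregate of many estimates lands much closer to the truth than any one analyst does. The catch, and part of why poor aggregation feeds groupthink, is that the errors have to be reasonably independent and somebody has to actually do the aggregating:

    import random
    import statistics

    def estimate_errors(n_analysts, truth=100.0, noise=25.0, trials=500, seed=3):
        """Each 'analyst' makes an independent, noisy estimate of some quantity.
        Compare a lone analyst's typical error with the error of the averaged
        (aggregated) estimate across all analysts."""
        rng = random.Random(seed)
        solo_errors, crowd_errors = [], []
        for _ in range(trials):
            estimates = [rng.gauss(truth, noise) for _ in range(n_analysts)]
            solo_errors.append(abs(estimates[0] - truth))
            crowd_errors.append(abs(statistics.mean(estimates) - truth))
        return statistics.mean(solo_errors), statistics.mean(crowd_errors)

    solo, crowd = estimate_errors(n_analysts=50)
    print(f"typical lone analyst error: {solo:.1f}")
    print(f"aggregated estimate error:  {crowd:.1f}")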

In theory, blogs represent a self-organizing system that has the potential to evolve and display emergent properties (a sort of human hive mind). In practice, it's a little more difficult to say. I think it's clear that the spontaneous appearance of collective thought, as implemented through blogs or other communication systems, is happening frequently on the internet. However, each occurrence is isolated and only represents an incremental gain in productivity. In other words, a system will sometimes self-organize in order to analyze a problem and produce an enormous amount of data which is then aggregated into a shared vision (a vision which is much more sophisticated than anything that one individual could come up with), but the structure that appears in that case will disappear as the issue dies down. The incredible increase in analytic power is not a permanent stair step, nor is it ubiquitous. Indeed, it can also be hard to recognize the signal in a great sea of noise.

Of course, such systems are constantly and spontaneously self-organizing, themselves tackling problems in parallel. Some systems will compete with others, some systems will organize around trivial issues, some systems won't be nearly as effective as others. Because of this, it might be that we don't even recognize when a system really transcends its perceived limitations. Such systems are not limited to blogs, either. In fact they are quite common, and they appear in lots of different types of systems. Business markets are, in part, self-organizing, with emergent properties like Adam Smith's "invisible hand". Open Source software is another example of a self-organizing system.

Interestingly enough, this subject ties in nicely with a series of posts I've been working on regarding the properties of Reflexive documentaries, polarized debates, computer security, and national security. One of the general ideas discussed in those posts is that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. Ironically, in acknowledging one's own subjectivity, one becomes more objective and reliable. This applies on an individual basis, but becomes much more powerful when it is part of an emergent system of analysis as discussed above. Blogs are excellent at this sort of thing precisely because they are made up of independent parts that make no pretense at objectivity. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. The news media represents a competing system (the journalist being the media's equivalent of the blogger), one that is much more rigid and unyielding. The interplay between blogs and the media is fascinating, and you can see each medium evolving in response to the other (the degree to which this is occurring is naturally up for debate). You might even be able to make the argument that blogs are, themselves, emergent properties of the mainstream media.

Personally, I don't think I have that exact sort of narrative going here, though I do believe I've developed certain thematic consistencies in terms of the subjects I cover here. I'm certainly no expert and I don't post nearly often enough to establish the sort of narrative that Cowen is talking about, but I do think a reader would benefit from reading multiple posts. I try to make up for my low posting frequency by writing longer, more detailed posts, often referencing older posts on similar subjects. However, I get the feeling that if I were to break up my posts into smaller, more digestible pieces, the overall time it would take to read and produce the same material would be significantly longer. Of course, my content is rarely scholarly in nature, and my subject matter varies from week to week as well, but I found this interesting to think about nonetheless.

I think I tend to be more of an aggregator than anything else, which is interesting because I've never thought about what I do in those terms. It's also somewhat challenging, as one of my weaknesses is being timely with information. Plus aggregation appears to be one of the more tricky aspects of a system such as the ones discussed above, and with respect to blogs, it is something which definitely needs some work...

Update 12.13.04: I wrote some more on the subject. I also made a minor edit to this entry, moving one paragraph lower down. No content has actually changed, but the new order flows better.
Posted by Mark on December 05, 2004 at 09:23 PM .: link :.


End of This Day's Posts

Sunday, November 21, 2004

Polarized Debate
This is yet another in a series of posts fleshing out ideas initially presented in a post regarding Reflexive Documentary filmmaking and the media. In short, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I expanded the scope of the concepts originally presented in that post to include a broader range of information dissemination processes, which led to a post on computer security and a post on national security.

I had originally planned to apply the same concepts to debating in a relatively straightforward manner. I'll still do that, but recent events have lead me to reconsider my position, thus there will most likely be some unresolved questions at the end of this post.

So the obvious implication with respect to debating is that a debate can be more productive when each side exposes their own biases and agenda in making their argument. Of course, this is pretty much required by definition, but what I'm getting at here is more a matter of tactics. Debating tactics often take poor forms, with participants scoring cheap points by using intuitive but fallacious arguments.

I've done a lot of debating in various online forums, often taking a less than popular point of view (I tend to be a contrarian, and am comfortable on the defense). One thing that I've found is that as a debate heats up, the arguments become polarized. I sometimes find myself defending someone or something that I normally wouldn't. This is, in part, because a polarizing debate forces you to dispute everything your opponent argues. To concede one point irrevocably weakens your position, or so it seems. Of course, the fact that I'm a contrarian, somewhat competitive, and stubborn also plays a part in this. Emotions sometimes flare, attitudes clash, and you're often left feeling dirty after such a debate.

None of which is to say that polarized debate is bad. My whole reason for participating in such debates is to get others to consider more than one point of view. If a few lurkers read a debate and come away from it confused or at least challenged by some of the ideas presented, I consider that a win. There isn't anything inherently wrong with partisanship, and as frustrating as some debates are, I find myself looking back on them as good learning experiences. In fact, taking an extreme position and thinking from that biased standpoint helps you understand not only that viewpoint, but the extreme opposite as well.

The problem with such debates, however, is that they really are divisive. A debate which becomes polarized might end up providing you with a more balanced view of an issue, but such debates sometimes also present an unrealistic view of the issue. An example of this is abortion. Debates on that topic are usually heated and emotional, but the issue polarizes, and people who would come down somewhere around the middle end up arguing an extreme position for or against.

Again, I normally chalk this polarization up as a good thing, but after the election, I'm beginning to see the wisdom in perhaps pursuing a more moderated approach. With all the red/blue dichotomies being thrown around with reckless abandon, talk of moving to Canada and even talk of secession(!), it's pretty obvious that the country has become overly polarized.

I've been writing about Benjamin Franklin recently on this here blog, and I think his debating style is particularly apt to this discussion:
Franklin was worried that his fondness for conversation and eagerness to impress made him prone to "prattling, punning and joking, which only made me acceptable to trifling company." Knowledge, he realized, "was obtained rather by the use of the ear than of the tongue." So in the Junto, he began to work on his use of silence and gentle dialogue.

One method, which he had developed during his mock debates with John Collins in Boston and then when discoursing with Keimer, was to pursue topics through soft, Socratic queries. That became the preferred style for Junto meetings. Discussions were to be conducted "without fondness for dispute or desire of victory." Franklin taught his friends to push their ideas through suggestions and questions, and to use (or at least feign) naive curiosity to avoid contradicting people in a manner that could give offense. ... It was a style he would urge on the Constitutional Convention sixty years later. [This is an excerpt from the recent biography Benjamin Franklin: An American Life by Walter Isaacson]
This contrasts rather sharply with what passes for civilized debate these days. Franklin actually considered it rude to directly contradict or dispute someone, something I had always found to be confusing. I typically favor a frank exchange of ideas (i.e. saying what you mean), but I'm beginning to come around. In the wake of the election, a lot of advice has been offered up for liberals and the left, and a lot of suggestions center around the idea that they need to "reach out" to more voters. This has been received with indignation by liberals and leftists, and one could hardly blame them. From their perspective, conservatives and the right are just as bad if not worse and they read such advice as if they're being asked to give up their values. Irrespective of which side is right, I think the general thrust of the advice is that liberal arguments must be more persuasive. No matter how much we might want to paint the country into red and blue partitions, if you really want to be accurate, you'd see only a few small areas of red and blue drowning in a sea of purple. The Democrats don't need to convince that many people to get a more favorable outcome in the next election.

And so perhaps we should be fighting the natural polarization of a debate and take a cue from Franklin, who stressed the importance of deferring, or at least pretending to defer, to others:
"Would you win the hearts of others, you must not seem to vie with them, but to admire them. Give them every opportunity of displaying their own qualifications, and when you have indulged their vanity, they will praise you in turn and prefer you above others... Such is the vanity of mankind that minding what others say is a much surer way of pleasing them than talking well ourselves."
There are weaknesses to such an approach, especially if your opponent does not return the favor, but I think it is well worth considering. That the country has so many opposing views is not necessarily bad, and indeed, is a necessity in democracy for ideas to compete. But perhaps we need less spin and more moderation... In his essay "Apology for Printers" Franklin opines:
"Printers are educated in the belief that when men differ in opinion, both sides ought equally to have the advantage of being heard by the public; and that when Truth and Error have fair play, the former is always an overmatch for the latter."
Indeed.

Update: Andrew Olmsted posted something along these lines, and he has a good explanation as to why debates often go south:
I exaggerate for effect, but anyone spending much time on site devoted to either party quickly runs up against the assumption that the other side isn't just wrong, but evil. And once you've made that assumption, it would be wrong to even negotiate with the other side, because any compromise you make is taking the country one step closer to that evil. The enemy must be fought tooth and nail, because his goals are so heinous.

... We tend to assume the worst of those we're arguing with; that he's ignoring this critical point, or that he understands what we're saying but is being deliberately obtuse. So we end up getting frustrated, saying something nasty, and cutting off any opportunity for real dialogue.
I don't know that we're a majority, as Olmsted hopes, but there's more than just a few of us, at least...
Posted by Mark on November 21, 2004 at 03:29 PM .: link :.


End of This Day's Posts

Thursday, November 11, 2004

Arranging Interests in Parallel
I have noticed a tendency on my part to, on occasion, quote a piece of fiction, and then comment on some wisdom or truth contained therein. This sort of thing is typically frowned upon in rigorous debate as fiction is, by definition, contrived and thus referencing it in a serious argument is rightly seen as undesirable. Fortunately for me, this blog, though often taking a serious tone, is ultimately an exercise in thinking for myself. The point is to have fun. This is why I will sometimes quote fiction to make a point, and it's also why I enjoy questionable exercises like speculating about historical figures. As I mentioned in a post on Benjamin Franklin, such exercises usually end up saying more about me and my assumptions than anything else. But it's my blog, so that is more or less appropriate.

Astute readers must at this point be expecting to receive a citation from a piece of fiction, followed by an application of the relevant concepts to some ends. And they would be correct.

Early on in Neal Stephenson's novel The System of the World, Daniel Waterhouse reflects on what is required of someone in his position:
He was at an age where it was never possible to pursue one errand at a time. He must do many at once. He guessed that people who had lived right and arranged things properly must have it all rigged so that all of their quests ran in parallel, and reinforced and supported one another just so. They gained reputations as conjurors. Others found their errands running at cross purposes and were never able to do anything; they ended up seeming mad, or else perceived the futility of what they were doing and gave up, or turned to drink.
Naturally, I believe there is some truth to this. In fact, the life of Benjamin Franklin, a historical figure from approximately the same time period as Dr. Waterhouse, provides us with a more tangible reference point.

Franklin was known to mix private interests with public ones, and to leverage both to further his business interests. The consummate example of Franklin's proclivities was the Junto, a club of young workingmen formed by Franklin in the fall of 1727. The Junto was a small club composed of enterprising tradesman and artisans who discussed issues of the day and also endeavored to form a vehicle for the furtherance of their own careers. The enterprise was typical of Franklin, who was always eager to form associations for mutual benefit, and who aligned his interests so they ran in parallel, reinforcing and supporting one another.

A more specific example of Franklin's knack for aligning interests is when he produced the first recorded abortion debate in America. At the time, Franklin was running a print shop in Philadelphia. His main competitor, Andrew Bradford, published the town's only newspaper. The paper was meager, but very profitable in both money and prestige (which led him to be more respected by merchants and politicians, and thus more likely to get printing jobs), and Franklin decided to launch a competing newspaper. Unfortunately, another rival printer, Samuel Keimer, caught wind of Franklin's plan and immediately launched a hastily assembled newspaper of his own. Franklin, realizing that it would be difficult to launch a third paper right away, vowed to crush Keimer:
In a competitive bank shot, Franklin decided to write a series of anonymous letters and essays, along the lines of the Silence Dogood pieces of his youth, for Bradford's [American Weekly Mercury] to draw attention away from Keimer's new paper. The goal was to enliven, at least until Keimer was beaten, Bradford's dull paper, which in its ten years had never published any such features.

The first two pieces were attacks on poor Keimer, who was serializing entries from an encyclopedia. His initial installment included, innocently enough, an entry on abortion. Franklin pounced. Using the pen names "Martha Careful" and "Celia Shortface," he wrote letters to Bradford's paper feigning shock and indignation at Keimer's offense. As Miss Careful threatened, "If he proceeds farther to expose the secrets of our sex in that audacious manner [women would] run the hazard of taking him by the beard in the next place we meet him." Thus Franklin manufactured the first recorded abortion debate in America, not because he had any strong feelings on the issue, but because he knew it would sell newspapers. [This is an excerpt from the recent biography Benjamin Franklin: An American Life by Walter Isaacson]
Franklin's many actions of the time certainly weren't running at cross purposes, and he did manage to align his interests in parallel. He truly was a master, and we'll be hearing more about him on this blog soon.

This isn't the first time I've written about this subject before either. In a previous post, On the Overloading of Information, I noted one of the main reasons why blogging continues to be an enjoyable activity for me, despite changing interests and desires:
I am often overwhelmed by a desire to consume various things - books, movies, music, etc... The subject of such things is also varied and, as such, often don't mix very well. That said, the only thing I have really found that works is to align those subjects that do mix in such a way that they overlap. This is perhaps the only reason blogging has stayed on my plate for so long: since the medium is so free-form and since I have absolute control over what I write here and when I write it, it is easy to align my interests in such a way that they overlap with my blog (i.e. I write about what interests me at the time).
One way you can tell that my interests have shifted over the years is that the format and content of my writing here have also changed. I am once again reminded of Neal Stephenson's original minimalist homepage in which he speaks of his ongoing struggle against what Linda Stone termed "continuous partial attention," as that curious feature of modern life only makes the necessity of aligning interests in parallel that much more important.

Aligning blogging with my other core interests, such as reading fiction, is one of the reasons I frequently quote fiction, even in reference to a serious topic. Yes, such a practice is frowned upon, but blogging is a hobby, the idea of which is to have fun. Indeed, Glenn Reynolds, progenitor of one of the most popular blogging sites around, also claims to blog for fun, and interestingly enough, he has quoted fiction in support of his own serious interests as well (more than once). One other interesting observation is that all references to fiction in this post, including even Reynolds' references, are from Neal Stephenson's novels. I'll leave it as an exercise for the reader to figure out what significance, if any, that holds.
Posted by Mark on November 11, 2004 at 11:45 PM .: link :.


End of This Day's Posts

Sunday, November 07, 2004

Open Source Security
A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. In a follow up post, I examined how this concept could be applied to a broader range of information dissemination processes. That post focused on computer security and how full disclosure of system vulnerabilities actually improves security in the long run. Ironically, public scrutiny is the only reliable way to improve security.

Full disclosure is certainly not perfect. By definition, it increases risk in the short term, which is why opponents are able to make persuasive arguments against it. Like all security, it is a matter of tradeoffs. Does the long term gain justify the short term risk? As I'm fond of saying, human beings don't so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn't quite as bad as the old). There is no solution here, only a less disadvantaged system.

Now I'd like to broaden the subject even further, and apply the concept of open security to national security. With respect to national security, the stakes are higher and thus the argument will be more difficult to sustain. If people are unwilling to deal with a few computer viruses in the short term in order to increase long term security, imagine how unwilling they'll be to risk a terrorist attack, even if that risk ultimately closes a few security holes. This may be prudent, and it is quite possible that a secrecy approach is more necessary at the national security level. Secrecy is certainly a key component of intelligence and other similar aspects of national security, so open security techniques would definitely not be a good idea in those areas.

However, there are certain vulnerabilities in processes and systems we use that could perhaps benefit from open security. John Robb has been doing some excellent work describing how terrorists (or global guerillas, as he calls them) can organize a more effective campaign in Iraq. He postulates a Bazaar of violence, which takes its lessons from the open source programming community (using Eric Raymond's essay The Cathedral and the Bazaar as a starting point):
The decentralized, and seemingly chaotic guerrilla war in Iraq demonstrates a pattern that will likely serve as a model for next generation terrorists. This pattern shows a level of learning, activity, and success similar to what we see in the open source software community. I call this pattern the bazaar. The bazaar solves the problem: how do small, potentially antagonistic networks combine to conduct war?
Not only does the bazaar solve the problem, it appears able to scale to disrupt larger, more stable targets. The bazaar essentially represents the evolution of terrorism as a technique into something more effective: a highly decentralized strategy that is nevertheless able to learn and innovate. Unlike traditional terrorism, it seeks to leverage gains from sabotaging infrastructure and disrupting markets. By focusing on such targets, the bazaar does not experience diminishing returns in the same way that traditional terrorism does. Once established, it creates a dynamic that is very difficult to disrupt.

I'm a little unclear as to what the purpose of the bazaar is - the goal appears to be a state of perpetual violence that is capable of keeping a nation in a position of failure/collapse. That our enemies seek to use this strategy in Iraq is obvious, but success essentially means perpetual failure. What I'm unclear on is how they seek to parlay this result into a successful state (which I assume is their long term goal - perhaps that is not a wise assumption).

In any case, reading about the bazaar can be pretty scary, especially when news from Iraq seems to correlate well with the strategy. Of course, not every attack in Iraq correlates, but this strategy is supposedly new and relatively dynamic. It is constantly improving on itself. They are improvising new tactics and learning from them in an effort to further define this new method of warfare.

As one of the commenters on his site notes, it is tempting to claim that John Robb's analysis is essentially an instruction manual for a guerilla organization, but that misses the point. It's better to know where we are vulnerable before we discover that some weakness is being exploited.

One thing that Robb is a little short on is actual, concrete ways to fight the bazaar (there are some, and he has pointed out situations where U.S. forces attempted to thwart bazaar tactics, but such examples are not frequent). However, he still provides a valuable service in exposing security vulnerabilities. It seems appropriate that we adopt open source security techniques in order to fight an enemy that employs an open source platform. Vulnerabilities need to be exposed so that we may devise effective counter-measures.
Posted by Mark on November 07, 2004 at 08:56 PM .: link :.


End of This Day's Posts

Sunday, October 10, 2004

Open Security and Full Disclosure
A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I felt that the media could learn from such a model. Interestingly enough, such concepts can be applied to wider scenarios concerning information dissemination, particularly security.

Bruce Schneier has often written about such issues, and most of the information that follows is summarized from several of his articles, recent and old. The question with respect to computer security systems is this: Is publishing information about computer, network, and software vulnerabilities a good idea, or does it just help attackers?

When such a vulnerability exists, it creates what Schneier calls a Window of Exposure in which the vulnerability can still be exploited. This window exists until a patch is developed, released, and installed. There are five key phases which define the size of the window:
Phase 1 is before the vulnerability is discovered. The vulnerability exists, but no one can exploit it. Phase 2 is after the vulnerability is discovered, but before it is announced. At that point only a few people know about the vulnerability, but no one knows to defend against it. Depending on who knows what, this could either be an enormous risk or no risk at all. During this phase, news about the vulnerability spreads -- either slowly, quickly, or not at all -- depending on who discovered the vulnerability. Of course, multiple people can make the same discovery at different times, so this can get very complicated.

Phase 3 is after the vulnerability is announced. Maybe the announcement is made by the person who discovered the vulnerability in Phase 2, or maybe it is made by someone else who independently discovered the vulnerability later. At that point more people learn about the vulnerability, and the risk increases. In Phase 4, an automatic attack tool to exploit the vulnerability is published. Now the number of people who can exploit the vulnerability grows exponentially. Finally, the vendor issues a patch that closes the vulnerability, starting Phase 5. As people install the patch and re-secure their systems, the risk of exploit shrinks. Some people never install the patch, so there is always some risk. But it decays over time as systems are naturally upgraded.
The goal is to minimize the impact of the vulnerability by reducing the window of exposure (the area under the curve in figure 1). There are two basic approaches: secrecy and full disclosure.
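Schneier's "area under the curve" can be made concrete with a toy model (the phase lengths and risk levels below are numbers I invented purely for illustration - they aren't Schneier's). Each phase contributes its duration times its risk level; whichever strategy you favor, the window shrinks by shortening the high-risk phases. That is exactly what full disclosure is betting it can do by pressuring vendors to patch sooner, at the cost of moving the dangerous phases earlier:

    def window_of_exposure(phases):
        """phases: list of (days, risk_level) pairs covering the life of a bug.
        The window of exposure is the area under this piecewise-constant curve."""
        return sum(days * risk for days, risk in phases)

    # invented numbers, purely illustrative
    secrecy = [
        (30, 0.0),    # phase 1: vulnerability exists but is undiscovered
        (120, 0.3),   # phase 2: known to a few; the vendor sits on it
        (20, 0.7),    # phase 3: word eventually gets out
        (30, 1.0),    # phase 4: an automated exploit tool circulates
        (60, 0.4),    # phase 5: a patch finally ships; uptake is slow
    ]
    full_disclosure = [
        (30, 0.0),    # phase 1: same starting point
        (7, 0.3),     # phase 2: the researcher discloses quickly
        (14, 0.7),    # phase 3: announcement pressures the vendor
        (14, 1.0),    # phase 4: exploit tools appear sooner, but briefly
        (30, 0.4),    # phase 5: the patch ships sooner and risk decays
    ]
    print("secrecy window:        ", window_of_exposure(secrecy))
    print("full disclosure window:", window_of_exposure(full_disclosure))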

The secrecy approach seeks to reduce the window of exposure by limiting public access to vulnerability information. In a different essay about network outages, Schneier gives a good summary of why secrecy doesn't work well:
The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they're lost they're lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there's no way to recover security. Trying to base security on secrecy is just plain bad design.

... Secrecy prevents people from assessing their own risks.
Secrecy may work on paper, but in practice, keeping vulnerabilities secret removes motivation to fix the problem (it is possible that a company could utilize secrecy well, but it is unlikely that all companies would do so and it would be foolish to rely on such competency). The other method of reducing the window of exposure is to disclose all information about the vulnerability publicly. Full Disclosure, as this method is called, seems counterintuitive, but Schneier explains:
Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn't bother fixing them, believing in the security of secrecy.
Ironically, publishing details about vulnerabilities leads to a more secure system. Of course, this isn't perfect. Obviously publishing vulnerabilities constitutes a short term danger, and can sometimes do more harm than good. But the alternative, secrecy, is worse. As Schneier is fond of saying, security is about tradeoffs. As I'm fond of saying, human beings don't so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn't quite as bad as the old). There is no solution here, only a less disadvantaged system.

This is what makes advocating open security systems like full disclosure difficult. Opponents will always be able to point to its flaws, and secrecy advocates are good at exploiting the intuitive (but not necessarily correct) nature of their systems. Open security systems are just counter-intuitive, and there is a tendency to not want to increase risk in the short term (as things like full disclosure do). Unfortunately, that means that the long term danger increases, as there is less incentive to fix security problems.

By the way, Schneier has started a blog. It appears to be made up of the same content that he normally releases monthly in the Crypto-Gram newsletter, but spread out over time. I think it will be interesting to see if Schneier starts responding to events in a more timely fashion, as that is one of the keys to the success of blogs (and it's something that I'm bad at, unless news breaks on a Sunday).
Posted by Mark on October 10, 2004 at 11:56 AM .: link :.


End of This Day's Posts

Wednesday, September 15, 2004

A Reflexive Media
"To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!" - Anne Murrow Lindbergh
There are many types of documentary films. The most common form of documentary is referred to as Direct Address (aka Voice of God). In such a documentary, the viewer is directly acknowledged, usually through narration and voice-overs. There is very little ambiguity and it is pretty obvious how you're expected to interpret these types of films. Many television and news programs use this style, to varying degrees of success. Ken Burns' famous Civil War and Baseball series use this format eloquently, but most traditional propaganda films also fall into this category (a small caveat: most films are hybrids, rarely falling exclusively into one category). Such films give the illusion of being an invisible witness to certain events and are thus very persuasive and powerful.

The problem with Direct Address documentaries is that they grew out of a belief that Truth is knowable through objective facts. In a recent sermon he posted on the web, Donald Sensing spoke of the difference between facts and the Truth:
Truth and fact are not the same thing. We need only observe the presidential race to discern that. John Kerry and allies say that the results of America's war against Iraq is mostly a failure while George Bush and allies say they are mostly success. Both sides have the same facts, but both arrive at a different "truth."

People rarely fight over facts. What they argue about is what the facts mean, what is the Truth the facts indicate.
I'm not sure Sensing chose the best example here, but the concept itself is sound. Any documentary is biased in the Truth that it presents, even if the facts are undisputed. In a sense objectivity is impossible, which is why documentary scholar Bill Nichols admires films which seek to contextualize themselves, exposing their limitations and biases to the audience.

Reflexive Documentaries use many devices to acknowledge the filmmaker's presence, perspective, and selectivity in constructing the film. It is thought that films like this are much more honest about their subjectivity, and thus provide a much greater service to the audience.

An excellent example of a Reflexive documentary is Errol Morris' brilliant film, The Thin Blue Line. The film examines the "truth" around the murder of a Dallas policeman. The use of colored lighting throughout the film eventually correlates with who is innocent or guilty, and Morris is also quite manipulative through his use of editing - deconstructing and reconstructing the case to demonstrate just how problematic finding the truth can be. His use of framing calls attention to itself, daring the audience to question the intents of the filmmakers. The use of interviews in conjunction with editing is carefully structured to demonstrate the subjectivity of the film and its subjects. As you watch the movie, it becomes quite clear that Morris is toying with you, the viewer, and that he wants you to be critical of the "truth" he is presenting.

Ironically, a documentary becomes more objective when it acknowledges its own biases and agenda. In other words, a documentary becomes more objective when it admits its own subjectivity. There are many other forms of documentary not covered here (i.e. direct cinema/cinema verité, interview-based, performative, mock-documentaries, etc... most of which mesh together as they did in Morris' Blue Line to form a hybrid).

In Bill Nichols' seminal essay, Voice of Documentary (Can't seem to find a version online), he says:
"Documentary filmmakers have a responsibility not to be objective. Objectivity is a concept borrowed from the natural sciences and from journalism, with little place in the social sciences or documentary film."
I always found it funny that Nichols equates the natural sciences with journalism, as it seems to me that modern journalism is much more like a documentary than a natural science. As such, I think the lessons of Reflexive documentaries (and its counterparts) should apply to the realm of journalism.

The media emphatically do not acknowledge their biases. By bias, I don't mean anything as short-sighted as liberal or conservative media bias; I mean the structural bias of which political orientation is but a small part (that link contains an excellent essay on the nature of media bias, one that I find presents a more complete picture and is much more useful than the tired old ideological bias we always hear so much about*). Such subjectivity does exist in journalism, yet the media stubbornly persist in the firm belief that they are presenting the objective truth.

The recent CBS scandal, consisting of a story bolstered by what appear to be obviously forged documents, provides us with an immediate example. Terry Teachout makes this observation regarding how few prominent people are willing to admit that they are wrong:
I was thinking today about how so few public figures are willing to admit (for attribution, anyway) that they’ve done something wrong, no matter how minor. But I wasn’t thinking of politicians, or even of Dan Rather. A half-remembered quote had flashed unexpectedly through my mind, and thirty seconds’ worth of Web surfing produced this paragraph from an editorial in a magazine called World War II:
Soon after he had completed his epic 140-mile march with his staff from Wuntho, Burma, to safety in India, an unhappy Lieutenant General Joseph W. Stilwell was asked by a reporter to explain the performance of Allied armies in Burma and give his impressions of the recently concluded campaign. Never one to mince words, the peppery general responded: "I claim we took a hell of a beating. We got run out of Burma and it is as humiliating as hell. I think we ought to find out what caused it, and go back and retake it."
Stilwell spoke those words sixty-two years ago. When was the last time that such candor was heard in like circumstances? What would happen today if similar words were spoken by some equally well-known person who’d stepped in it up to his eyebrows?
As he points out later in his post, I don't think we're going to be seeing such admissions any time soon. Again, CBS provides a good example. Rather than admit the possibility that they may be wrong, their response to the criticisms of their sources has been vague, dismissive, and entirely reliant on their reputation as a trustworthy staple of journalism. They have not yet comprehensively responded to any of the numerous questions about the documents; questions which range from "conflicting military terminology to different word-processing techniques". It appears their strategy is to escape the kill zone by focusing on the "truth" of their story, that Bush's service in the Air National Guard was less than satisfactory. They won't admit that the documents are forgeries, and by focusing on the arguably important story, they seek to deflect attention from any discussion of their own wrongdoing - in effect claiming that the documents aren't important because the story is "true" anyway.

Should they admit they were wrong? Of course they should, but they probably won't. If they won't, it will not be because they think the story is right, nor because they think the documents are genuine. They won't admit wrongdoing and they won't correct their methodologies or policies because to do so would be to acknowledge to the public that they are something less than an objective purveyor of truth.

Yet I would argue that they should do so, that it is their duty to do so just as it is the documentarian's responsibility to acknowledge their limitations and agenda to their audience.

It is also interesting to note that weblogs contrast with the media by doing just that. Glenn Reynolds notes that the internet is a low-trust medium, which paradoxically indicates that it is more trustworthy than the media (because blogs and the like acknowledge their bias and agenda, admit when they're wrong, and correct their mistakes):
The Internet, on the other hand, is a low-trust environment. Ironically, that probably makes it more trustworthy.

That's because, while arguments from authority are hard on the Internet, substantiating arguments is easy, thanks to the miracle of hyperlinks. And, where things aren't linkable, you can post actual images. You can spell out your thinking, and you can back it up with lots of facts, which people then (thanks to Google, et al.) find it easy to check. And the links mean that you can do that without cluttering up your narrative too much, usually, something that's impossible on TV and nearly so in a newspaper.

(This is actually a lot like the world lawyers live in -- nobody trusts us enough to take our word for, well, much of anything, so we back things up with lots of footnotes, citations, and exhibits. Legal citation systems are even like a primitive form of hypertext, really, one that's been around for six or eight hundred years. But I digress -- except that this perhaps explains why so many lawyers take naturally to blogging).

You can also refine your arguments, updating -- and even abandoning them -- in realtime as new facts or arguments appear. It's part of the deal.

This also means admitting when you're wrong. And that's another difference. When you're a blogger, you present ideas and arguments, and see how they do. You have a reputation, and it matters, but the reputation is for playing it straight with the facts you present, not necessarily the conclusions you reach.
The mainstream media as we know it is on the decline. They will no longer be able to get by on their brand or their reputations alone. The collective intelligence of the internet, combined with the natural reflexiveness of its environment, has already provided a challenge to the underpinnings of journalism. On the internet, the dominance of the media is constantly challenged by individuals who question the "truth" presented to them in the media. I do not think that blogs have the power to eclipse the media, but their influence is unmistakable. The only question that remains is if the media will rise to the challenge. If the way CBS has reacted is any indication, then, sadly, we still have a long way to go.

* Yes, I do realize the irony of posting this just after I posted about liberal and conservative tendencies in online debating, and I hinted at that with my "Update" in that post.

Thanks to Jay Manifold for the excellent Structural Bias of Journalism link.
Posted by Mark on September 15, 2004 at 11:07 PM .: link :.


End of This Day's Posts

Thursday, September 09, 2004

Benjamin Franklin: American, Blogger & LIAR!
I've been reading a biography of Benjamin Franklin (Benjamin Franklin: An American Life by Walter Isaacson), and several things have struck me about the way in which he conducted himself. As with a lot of historical figures, there is a certain aura that surrounds the man which is seen as impenetrable today, but it's interesting to read about how he was perceived in his time and contrast that with how he would be perceived today. As usual, there is a certain limit to the usefulness of such speculation, as it necessarily must be based on certain assumptions that may or may not be true (as such this post might end up saying more about me and my assumptions than Franklin!). In any case, I find such exercises interesting, so I'd like to make a few observations.

The first is that he would have probably made a spectacular blogger, if he chose to engage in such an activity (Ken thinks he would definitely be a blogger, but I'm not so sure). Not only does he have all the makings of a wonderful blogger, I think he'd be extremely creative with the format. He was something of a populist; his writing was humorous, self-deprecating, and often quite profound at the same time. His range of knowledge and interest was wide, and his tone was often quite congenial. All qualities valued in any blogger.

He was incredibly prolific (another necessity for a successful blog), and often wrote the letters to his paper himself under assumed names, and structured them in such a way as to gently deride his competitors while making some other interesting point. For instance, Franklin once published two letters, written under two different pseudonyms, in which he manufactured the first recorded abortion debate in America - not because of any strong feelings on the issue, but because he knew it would sell newspapers and because his competitor was serializing entries from an encyclopedia at the time and had started with "Abortion." Thus the two letters were not only interesting in themselves, but also provided ample opportunity to impugn his competitor.

One thing I think we'd see in a Franklin blog is entire comment threads consisting of a full back-and-forth debate, with all entries written by Franklin himself under assumed names. I can imagine him working around other "real" commenters with his own pseudonyms, and otherwise having fun with the format (he'd almost certainly make a spectacular troll as well).

If there was ever a man who could make a living out of blogging, I think Franklin was it. This is, in part, why I'm not sure he'd truly end up as a pure blogger, as even in his day, Franklin was known to mix private interests with public ones, and to leverage both to further his business interests. He could certainly have organized something akin to The Junto on the internet, where a group of likeminded fellows got together (whether it be physically or virtually over the internet) and discussed issues of the day and also endeavored to form a vehicle for the furtherance of their own careers.

Then again, perhaps Franklin would simply have started his own newspaper and had nothing to do with blogging (or perhaps he would attempt to mix the two in some new way). The only problem would be that the types of satire and hoaxes he could get away with in his newspapers in the early 18th century would not really be possible in today's atmosphere (such playfulness has long ago left the medium, but is alive and well in the blogosphere, which is one thing that would tend to favor his participation).

Which brings me to my next point: I have to wonder how Franklin would have done in today's political climate. Would he have been able to achieve political prominence? Would he want to? Would his anonymous letters and the hoaxes in his newspapers have gotten him into trouble? I can imagine the self-righteous indignation now: "His newspaper is a farce! He's a LIAR!" And the Junto? I don't even want to think of the conspiracy theories that could be conjured with that sort of thing in mind.

One thing Franklin was exceptionally good at was managing his personal image, but would he be able to do so in today's atmosphere? I suspect he would have done well in our time, but I don't know how politically active he would be (and I suppose there is something to be said about his participation being partly influenced by the fact that he was a part of a revolution, not a true politician of the kind we have today). I know the basic story of his life, but I haven't gotten that far in the book, so perhaps I should revisit this subject later. And thus ends my probably inaccurate, but interesting nonetheless, discussion of Franklin in our times. Expect more references to Franklin in the future, as I have been struck by quite a few things about his life that are worth discussing today.
Posted by Mark on September 09, 2004 at 10:00 PM .: link :.


End of This Day's Posts

Sunday, August 01, 2004

A Village of Expectation
It's funny how much your expectations influence how much you like or dislike a movie. I'm often disappointed by long awaited films, Star Wars: Episode I being the typical example. Decades of waiting and an unprecedented pre-release hype served only to elevate expectations for the film to unreachable heights. So when the time came, meesa not so impressed. I enjoyed the film and I don't think it was that bad, but my expectations far outweighed the experience.

Conversely, when I go to watch a movie I think will stink, I'm often pleasantly surprised. Sometimes these movies are bad, but I thought they would be so much worse than they were that I ended up enjoying them. A recent example of this was I, Robot. As an avid Isaac Asimov fan, I was appalled by the previews for the film, which featured legions of apparently rebelling CGI robots, and naturally thought it would be stupefyingly bad, as such events were antithetical to Asimov's nuanced robot stories. Of course, I went to see it, and about halfway through, I was surprised to find that I was enjoying myself. It contains a few mentions of the three laws and positronics, and the name Susan Calvin is used for one of the main characters, but other than those minor details, the story doesn't even begin to resemble anything out of Asimov, so I was able to disassociate the two and enjoy the film on its own merits. And it was enjoyable.

Of course, I became aware of this phenomenon a long time ago, and have always tried to learn as little as possible about movies before they come out. I used to read up on all the movie news and look forward to tons of movies, but I found that going in with a clean slate is the best way to see a film. So I tend to shy away from reading reviews, though I will glance at the star rating of a few critics I know and respect. (Obviously it is not a perfectly clean slate, but you get the point.)

Earlier this week, I realized that M. Night Shyamalan's The Village was being released, and made plans to see it. Shyamalan, the writer, director, and producer of such films as The Sixth Sense, Unbreakable, and Signs, has become known for the surprise ending, where some fact is revealed which totally changes the perspective of everything that came before it. This is unfortunate, because the twists and turns of a story are less effective if we're expecting them. What's more, if we know it's coming, we wrack our brains trying to figure out what the surprise will be, hypothesizing several different versions of the story in our heads, one of which is bound to be accurate. I've never been that impressed with Shyamalan, but he has always produced solid films that were entertaining enough. There are often little absurdities or plot holes, but never enough to completely drain my goodwill (though Signs came awfully close). I think he'll mature into a better filmmaker as time goes on.

The Village has its share of twists and turns, but of course, we expect them and so they really don't come as any surprise (and, to be honest, Shyamalan laid on the hints pretty thickly). Fortunately, knowing what is coming doesn't completely destroy the film, as it would in some of his other films. I've tried to avoid spoilers by speaking in generalities, but if you haven't seen the film, you might want to skip down to the next paragraph (I don't think I ruined anything, but better safe than sorry). Shyamalan has always relied more on brooding atmosphere and building tension than on gratuitous action and gore, and The Village is no exception. Once again, he does resort to the use of "Boo!" moments, something that has always rubbed me the wrong way in his films, but I'm beginning to come around. He has become quite adept at employing that device, even if it is a cheap thrill. He must realize it, because at one point I think he deliberately eschews the "Boo!" moment in favor of a more meticulous and subtle approach. There are several instances of masterful staging in the film, which is part of why knowing the twists ahead of time doesn't ruin the film.

Now I was looking forward to this film, but as I mentioned before, I've never been blown away by Shyamalan (with the possible exception of Unbreakable, which I still think is the best of his films) so I didn't have tremendously high expectations. I expected a well done, but not brilliant, film. On Friday, I checked out Ebert's rating and glanced at Rotten Tomatoes, both of which served to further deflate my expectations. By the time I saw the film, I was expecting a real dud and was pleasantly surprised to find another solid effort from Shyamalan. It's not for everybody, and those who are expecting another bombshell ending will be disappointed, but that doesn't matter much in my opinion. The movie is what it is, and I judge it on its own merits, not on inflated expectations of twist endings and shocking revelations.

Would I have enjoyed it as much if I had been expecting something more out of it? Probably not, and there's the rub. Does it matter? That is a difficult question to answer. No matter how you slice it, what you expect of a film forces a point of reference. When you see the film, you judge it based on that. So now the question becomes, is it right to intentionally force the point of reference low, so as to make sure you enjoy the movie? That too is a difficult question to answer. For my money, it is to some extent advisable to keep a check on high expectations, but I suppose you could get carried away with it. In any case, I enjoyed The Village and I look forward to Shyamalan's next film, albeit with a wary sense of trepidation.
Posted by Mark on August 01, 2004 at 07:34 PM .: link :.


End of This Day's Posts

Sunday, July 18, 2004

With great freedom, comes great responsibility...
David Foster recently wrote about a letter to the New York Times which echoed sentiments regarding Iraq that appear to be commonplace in certain circles:
While we have removed a murderous dictator, we have left the Iraqi people with a whole new set of problems they never had to face before...
I've often written about the tradeoffs inherent in solving problems, and the invasion of Iraq is no exception. Let us pretend for a moment that everything that happened in Iraq over the last year went exactly as planned. Even in that best case scenario, the Iraqis would be facing "a whole new set of problems they never had to face before." There was no action that could have been taken regarding Iraq (and this includes inaction) that would have resulted in an ideal situation. We weren't really seeking to solve the problems of Iraq, so much as we were exchanging one set of problems for another.

Yes, the Iraqis are facing new problems they have never had to face before, but the point is that the new problems are more favorable than the old problems. The biggest problem they are facing is, in short, freedom. Freedom is an odd thing, and right now, halfway across the world, the Iraqis are finding that out for themselves. Freedom brings great benefits, but also great responsibility. Freedom allows you to express yourself without fear of retribution, but it also allows those you hate to express things that make your blood boil. Freedom means you have to acknowledge their views, no matter how repulsive or disgusting you may find them (there are limits, of course, but that is another subject). That isn't easy.

A little while ago, Steven Den Beste wrote about Jewish immigrants from the Soviet Union:
About 1980 (I don't remember exactly) there was a period in which the USSR permitted huge numbers of Jews to leave and move to Israel. A lot of them got off the jet in Tel Aviv and instantly boarded another one bound for New York, and ended up here.

For most of them, our society was quite a shock. They were free; they were out of the cage. But with freedom came responsibility. The State didn't tell them what to do, but the State also didn't look out for them.

The State didn't prevent them from doing what they wanted, but the State also didn't prevent them from screwing up royally. One of the freedoms they discovered they had was the freedom to starve.
There are a lot of people who ended up in the U.S. because they were fleeing oppression, and when they got here, they were confronted with "a whole new set of problems they never had to face before." Most of them were able to adapt to the challenges of freedom and prosper, but don't confuse prosperity with utopia. These people did not solve their problems, they traded them for a set of new problems. For most of them, the problems associated with freedom were more favorable than the problems they were trying to escape from. For some, the adjustment just wasn't possible, and they returned to their homes.

Defecting North Koreans face a host of challenges upon their arrival in South Korea (if they can make it that far), including the standard freedom related problems: "In North Korea, the state allocates everything from food to jobs. Here, having to do their own shopping, banking or even eating at a food court can be a trying experience." The differences between North Korea and South Korea are so vast that many defectors cannot adapt, despite generous financial aid, job training and other assistance from civic and religious groups. Only about half of the defectors are able to wrangle jobs, but even then, it's hard to say that they've prospered. But at the same time, are their difficulties now worse than their previous difficulties? Moon Hee, a defector who is having difficulties adjusting, comments: "The present, while difficult, is still better than the past when I did not even know if there would be food for my next meal."

There is something almost paradoxical about freedom. You see, it isn't free. Yes, freedom brings benefits, but you must pay the price. If you want to live in a free country, you have to put up with everyone else being free too, and that's harder than it sounds. In a sense, we aren't really free, because the freedom we live with and aspire to is a limiting force.

On the subject of Heaven, Saint Augustine once wrote:
The souls in bliss will still possess the freedom of will, though sin will have no power to tempt them. They will be more free than ever–so free, in fact, from all delight in sinning as to find, in not sinning, an unfailing source of joy. ...in eternity, freedom is that more potent freedom which makes all sin impossible. - Saint Augustine, City of God (Book XXII, Chapter 30)
Augustine's concept of a totally free will is seemingly contradictory. For him, freedom, True Freedom, is doing the right thing all the time (I'm vastly simplifying here, but you get the point). Outside of Heaven, however, doing the right thing, as we all know, isn't easy. Just ask Spider-Man.

I never really read the comics, but in the movies (which appear to be true to their source material) Spider-Man is all about the conflict between responsibilities and desires. Matthew Yglesias is actually upset with the second film because it has a happy ending:
Being the good guy -- doing the right thing -- really sucks, because doing the right thing doesn't just mean avoiding wrongdoing, it means taking affirmative action to prevent it. There's no time left for Peter's life, and his life is miserable. Virtue is not its own reward, it's virtue, the rewards go to the less conscientious. There's no implication that it's all worthwhile because God will make it right in the End Times, the life of the good guy is a bleak one. It's an interesting (and, I think, a correct) view and it's certainly one that deserves a skilled dramatization, which is what the film gives you right up until the very end. But then -- ta da! -- it turns out that everyone does get to be happy after all. A huge letdown.
Of course, plenty of people have noted that the Spider-Man story doesn't end with the second movie, and that the third is bound to be filled with the complications of superhero dating (which are not limited to Spider-Man).

Spider-Man grapples with who he is. He has gained all sorts of powers, and with those powers, he has also gained a certain freedom. It could be very liberating, but as the saying goes: With great power comes great responsibility. He is not obligated to use his powers for good or at all, but he does. However, for a good portion of the second film he shirks his duties because a life of pure duty has totally ruined his personal life. This is that conflict between responsibilities and desires I mentioned earlier. It turns out that there are limits to Spider-Man's altruism.

For Spider-Man, it is all about tradeoffs, though he may have learned it the hard way. First he took on too much responsibility, and then too little. Will he ever strike a delicate balance? Will we? For we are all, in a manner of speaking, Spider-Man. We all grapple with similar conflicts, though they manifest in our lives with somewhat less drama. Balancing your personal life with your professional life isn't as exciting, but it can be quite challenging for some.

And so the people of Iraq are facing new challenges; problems they have never had to face before. Like Spider-Man, they're going to have to deal with their newfound responsibilities and find a way to balance them with their desires. Freedom isn't easy, and if they really want it, they'll need to do more than just avoid problems, they'll have to actively solve them. Or, rather, trade one set of problems for another. Because with great freedom, comes great responsibility.
Posted by Mark on July 18, 2004 at 09:16 PM .: link :.


End of This Day's Posts

Sunday, July 04, 2004

Kill Faster!
Ralph Peters writes about his experience keeping track of combat in Iraq during the tumultuous month of April:
During the initial fighting in Fallujah, I tuned in al-Jazeera and the BBC. At the same time, I was getting insider reports from the battlefield, from a U.S. military source on the scene and through Kurdish intelligence. I saw two different battles.
Peters' disenchantment with the media is hardly unique. Reports of the inadequacy of the media are legion. Eric M. Johnson is a U.S. Marine who served in Iraq and recently wrote about media bias:
Iraq veterans often say they are confused by American news coverage, because their experience differs so greatly from what journalists report. Soldiers and Marines point to the slow, steady progress in almost all areas of Iraqi life and wonder why they don't get much notice – or in many cases, any notice at all.

Part of the explanation is Rajiv Chandrasekaran, the Baghdad bureau chief for the Washington Post. He spent most of his career on the metro and technology beats, and has only four years of foreign reporting, two of which are in Iraq. The 31-year-old now runs a news operation that can literally change the world, heading a bureau that is the source for much of the news out of Iraq.

... Chandrasekaran's crew generates a relentlessly negative stream of articles from Iraq – and if there are no events to report, they resort to man-on-the-street interviews and cobble together a story from that.
It goes on from there, pointing out several examples and further evidence of the substandard performance of the media in Iraq. Then you have this infamous report from the Daily Telegraph's correspondent Toby Harnden.
The other day, while taking a break by the Al-Hamra Hotel pool, fringed with the usual cast of tattooed defense contractors, I was accosted by an American magazine journalist of serious accomplishment and impeccable liberal credentials.

She had been disturbed by my argument that Iraqis were better off than they had been under Saddam and I was now - there was no choice about this - going to have to justify my bizarre and dangerous views. I’ll spare you most of the details because you know the script - no WMD, no 'imminent threat'(though the point was to deal with Saddam before such a threat could emerge), a diversion from the hunt for bin Laden, enraging the Arab world. Etcetera.

But then she came to the point. Not only had she 'known' the Iraq war would fail but she considered it essential that it did so because this would ensure that the 'evil' George W. Bush would no longer be running her country. Her editors back on the East Coast were giggling, she said, over what a disaster Iraq had turned out to be. 'Lots of us talk about how awful it would be if this worked out.' Startled by her candour, I asked whether thousands more dead Iraqis would be a good thing.

She nodded and mumbled something about Bush needing to go. By this logic, I ventured, another September 11 on, say, September 11 would be perfect for pushing up John Kerry's poll numbers. 'Well, that’s different - that would be Americans,' she said, haltingly. 'I guess I’m a bit of an isolationist.' That’s one way of putting it.
Yikes. I wish I knew a little more about this unnamed "magazine journalist of serious accomplishment and impeccable liberal credentials", but it is a chilling admission nonetheless.

Again, the inadequacy of the media has become painfully obvious over the past few years. How to deal with this? At a discussion forum the other day, someone posted this article concerning FOX News bias along with this breathless message:
This shouldn't come as any surprise. How can a NEWS organization possibly be allowed to lie like this? FOX should be removed from the air and those who are in charge should be removed from the media business and not be allowed to do anything whatsoever where news and media are concerned.

they're clearly out to deceive the American public.
Well, I suppose that is one way of dealing with media bias. But Ralph Peters' response is drastically different. He assumes the media can't or shouldn't be changed. I tend to take his side, as arbitrarily removing a news organization from the air and blacklisting those in charge seems like a cure that is much worse than the disease to me, but that leads to some unpleasant consequences. Back to the Peters article:
The media is often referred to off-handedly as a strategic factor. But we still don't fully appreciate its fatal power. Conditioned by the relative objectivity and ultimate respect for facts of the U.S. media, we fail to understand that, even in Europe, the media has become little more than a tool of propaganda.

That propaganda is increasingly, viciously, mindlessly anti-American. When our forces engage in tactical combat, dishonest media reporting immediately creates drag on the chain of command all the way up to the president.

Real atrocities aren't required. Everything American soldiers do is portrayed as an atrocity. World opinion is outraged, no matter how judiciously we fight.

... The implication for tactical combat — war at the bayonet level — is clear: We must direct our doctrine, training, equipment, organization and plans toward winning low-level fights much faster. Before the global media can do what enemy forces cannot do and stop us short. We can still win the big campaigns. But we're apt to lose thereafter, in the dirty end-game fights.

... Our military must rise to its responsibility to reduce the pressure on the National Command Authority — in essence, the president — by rapidly and effectively executing orders to root out enemy resistance or nests of terrorists.

To do so, we must develop the capabilities to fight within the "media cycle," before journalists sympathetic to terrorists and murderers can twist the facts and portray us as the villains. Before the combat encounter is politicized globally. Before allied leaders panic. And before such reporting exacerbates bureaucratic rivalries within our own system.
[emphasis mine] This is bound to be a difficult process, and will take years to perfect. If we proceed on this path, we'll have to suffer many short term problems, including a much higher casualty rate, perhaps for both sides (and even civilians). If we don't proceed along this path; if we don't learn to kill quickly, then we'll lose slowly.

For its part, the military has shown some initiative in dealing with the media. Wretchard writes about a Washington Post article describing the victory that the First Armored Division won over Moqtada Al-Sadr's militia:
In what was probably the most psychologically revealing moment of the battle, infantrymen fought six hours for the possession of one damaged Humvee, of no tactical value, simply so that the network news would not have the satisfaction of displaying the piece of junk in the hands of Sadr's men.

... Ted Koppel was determined to read the names of 700 American servicemen who have died in Iraq to remind us how serious was their loss. Michael Moore has dedicated his film Fahrenheit 9/11 to the Americans who died in Afghanistan. And they did a land office business. But at least they didn't get to show Sadr's militiamen dancing around a battered Humvee. The men of the First Armored paid the price to stop that screening and those concerned can keep the change.
I don't know that Peters' pessimism is totally warranted, but there is an element of pragmatism involved that should be considered. It is certainly frustrating though.
***
It is noteworthy that media bias goes both ways. I tended to be conservative leaning in this post, but liberals have a lot to gripe about too. I've written about this before. Peters wrote that killing faster would help the situation, but that is from a military perspective. From our perspective, the only thing we can do is take the media with a grain of salt and do our best to point out their failures and herald their successes. It's not easy, but that is the price we must pay for freedom of speech. Hopefully more on this in a later post. [thanks to Donald Sensing for the Toby Harnden pointer]
Posted by Mark on July 04, 2004 at 06:06 PM .: link :.


End of This Day's Posts

Friday, June 11, 2004

Religion isn't as comforting as it seems
Steven Den Beste is an atheist, yet he is unlike any atheist I have ever met in that he seems to understand theists (in the general sense of the term) and doesn't hold their beliefs against them. As such, I have gained an immense amount of respect for him and his beliefs. He speaks with conviction about his beliefs, but he is not evangelistic.

In his latest post, he asks one of the great unanswerable questions: What am I? I won't pretend to have any of the answers, but I do object to one thing he said. It is a belief that is common among atheists (though theists are little better):
Is a virus alive? I don't know. Is a hive mind intelligent? I don't know. Is there actually an identifiable self with continuity of existence which is typing these words? I really don't know. How much would that self have to change before we decide that the continuity has been disrupted? I think I don't want to find out.

Most of those kinds of questions either become moot or are easily answered within the context of standard religions. Those questions are uniquely troubling only for those of us who believe that life and intelligence are emergent properties of certain blobs of mass which are built in certain ways and which operate in certain kinds of environments. We might be forced to accept that identity is just as mythical as the soul. We might be deluding ourselves into thinking that identity is real because we want it to be true.
[Emphasis added] The idea that these types of unanswerable questions are not troubling to believers, or are easily answered by them, is a common one, but I believe it to be false. Religion is no more comforting than any other system of beliefs, including atheism. Religion does provide a vocabulary for the unanswerable, but all that does is help us grapple with the questions - it doesn't solve anything, and I don't think it is any more comforting. I believe in God, but if you asked me what God really is, I wouldn't be able to give you a definitive answer. Actually, I might be able to do that, but "God is a mystery" is hardly comforting or all that useful.

Elsewhere in the essay, he refers to the Christian belief in the soul:
To a Christian, life and self are ultimately embodied in a person's soul. Death is when the soul separates from the body, and that which makes up the essence of a person is embodied in the soul (as it were).
He goes on to list some conundrums that would be troubling to the believer but they all touch on the most troubling thing - what the heck is the soul in the first place? Trying to answer that is no more comforting to a theist than trying to answer the questions he's asking himself. The only real difference is a matter of vocabulary. All religion has done is shifted the focus of the question.

Den Beste goes on to say that there are many ways in which atheism is cold and unreassuring, but fails to recognize the ways in which religion is cold and unreassuring. For instance, there is no satisfactory theodicy that I have ever seen, and I've spent a lot of time studying such things (16 years of Catholic schooling, baby!) A theodicy is essentially an attempt to reconcile God's existence with the existence of evil. Why does God allow evil to exist? Again, there is no satisfactory answer to that question, not least because there is no satisfactory definition of either God or evil!

Now, theists often view atheists in a similar manner. While Den Beste laments the cold and unreassuring aspects of atheism, a believer almost sees the reverse. To some believers, if you remove God from the picture, you also remove all concept of morality and responsibility. Yet that is not the case, and Den Beste provides an excellent example of a morally responsible atheist. The grass is greener on the other side, as they say.

All of this is generally speaking, of course. Not all religions are the same, and some are more restrictive and closed-minded than others. I suppose it can be a matter of degrees, with one religion or individual being more open minded than the other, but I don't really know of any objective way to measure that sort of thing. I know that there are some believers who aren't troubled by such questions and proclaim their beliefs in blind faith, but I don't count myself among them, nor do I think it is something that is inherent in religion (perhaps it is inherent in some religions, but even then, religion does not exist in a vacuum and must be reconciled with the rest of the world).

Part of my trouble with this may be that I seem to have the ability to switch mental models rather easily, viewing a problem from a number of different perspectives and attempting to figure out the best way to approach it. I seem to be able to reconcile my various perspectives with each other as well (for example, I seem to have no problem reconciling science and religion with each other), though the boundaries are blurry and I can sometimes come up with contradictory conclusions. This is in itself somewhat troubling, but at the same time, it is also somewhat of an advantage that I can approach a problem in a number of different ways. The trick is knowing which approach to use for which problem; hardly an easy proposition. Furthermore, I gather that I am somewhat odd in this ability, at least among believers. I used to debate religion a lot on the internet, and after a time, many refused to think of me as a Catholic because I didn't seem to align with others' perception of what Catholics are. I always found that rather amusing, though I guess I can understand the sentiment.

Unlike Den Beste, I do harbor some doubt in my beliefs, mainly because I recognize them as beliefs. They are not facts and I must concede the idea that my beliefs are incorrect. Like all sets of beliefs, there is an aspect of my beliefs that is very troubling and uncomforting, and there is a price we all pay for believing what we believe. And yet, believe we must. If we required our beliefs to be facts in order to act, we would do nothing. The value we receive from our beliefs outweighs the price we pay, or so we hope...

I suppose this could be seen by Steven to be missing the forest for the trees, but the reason I posted it is because the issue of beliefs discussed above fits nicely with several recent posts I made under the guise of Superstition and Security Beliefs (and Heuristics). They might provide a little more detail on the way I think regarding these subjects.
Posted by Mark on June 11, 2004 at 12:09 AM .: link :.


End of This Day's Posts

Sunday, May 23, 2004

Superstition
One of my favorite anecdotes (probably apocryphal, as these things usually go) tells of a horseshoe that hung on the wall over Niels Bohr's desk. One day, an exasperated visitor could not help asking, "Professor Bohr, you are one of the world's greatest scientists. Surely you cannot believe that object will bring you good luck." "Of course not," Bohr replied, "but I understand it brings you luck whether you believe or not."

I've had two occasions with which to be obsessively superstitious this weekend. The first was Saturday night's depressing Flyers game. Due to a poorly planned family outing (thanks a lot, Mike!), I missed the first period and a half of the game. During that time, the Flyers went down 2-0. As soon as I started watching, they scored a goal, much to my relief. But as the game ground to a less than satisfactory close, I could not help but think, what if I had been watching for that first period?

Even as I thought that, though, I recognized how absurd and arrogant a thought like that is. As a fan, I obviously cannot participate in the game, but all fans like to believe they are a factor in the outcome of the game and will thus go to extreme superstitious lengths to ensure the team wins. That way, there is some sort of personal pride to be gained (or lost, in my case) from the team winning, even though there really isn't.

I spent the day today at the Belmont Racetrack, betting on the ponies. Longtime readers know that I have a soft spot for gambling, but that I don't do it very often nor do I ever really play for high stakes. One of the things I really enjoy is people watching, because some people go to amusing lengths to perform superstitious acts that will bring them that mystical win.

One of my friends informed me of his superstitious strategy today. His entire betting strategy dealt with the name of the horse. If the horse's name began with an "S" (i.e. Secretariat, Seabiscuit, etc...) it was bound to be good. He also made an impromptu decision that names which displayed alliteration (i.e. Seattle Slew, Barton Bank, etc...) were also more likely to win. So today, when he spied "Seaside Salute" in the program, which exhibited both alliteration and the letter "S", he decided it was a shoo-in! Of course, he only bet it to win, and it placed, thus he got screwed out of a modest amount of money.

[Photo: John R. Velazquez, aboard Maddalena, rides to win the first race at Churchill Downs]

Like I should talk. My entire betting strategy revolves around John R. Velazquez, the best jockey in the history of horse racing. This superstition did not begin with me, as several friends discovered this guy a few years ago, but it has been passed on and I cannot help but believe in the power of JRV. When I bet on him, I tend to win. When I bet against him, he tends to be riding the horse that screws me over. As a result, I need to seriously consider the consequences of crossing JRV whenever I choose to bet on someone else.

Now, if I were to collect historical data regarding my bets for or against JRV (which is admittedly a very small data set, and thus not terribly conclusive either way, but stay with me here) I wouldn't be surprised to find that my beliefs are unwarranted. But that is the way of the superstition - no amount of logic or evidence is strong enough to be seriously considered (while any supporting evidence is, of course, trumpeted with glee).
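For what it's worth, the check itself would be trivial; the problem is the sample size. Here is a minimal sketch (the win/loss numbers and the 25% baseline are entirely made up for illustration, not my actual record) showing how easily a small run of wins is explained by luck alone:

from math import comb

def p_at_least(wins, bets, p=0.25):
    """P(X >= wins) for X ~ Binomial(bets, p): the chance of doing at least this well by luck."""
    return sum(comb(bets, k) * p**k * (1 - p)**(bets - k) for k in range(wins, bets + 1))

# Say I bet with JRV 8 times and won 4, against a 25% baseline chance of winning any given bet:
print(p_at_least(4, 8))  # ~0.11 - roughly one-in-nine odds of happening by pure chance

Not that a number like that would dent the superstition, of course.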

Now, I don't believe for a second that watching the Flyers makes them play better, nor do I believe that betting on (or against) John R. Velazquez will increase (or decrease) my chances of winning. But I still think those things... after all, what could I lose?

This could be a manifestation of a few different things. It could be a relatively benign "security belief" (or "pleasing falsehood" as some like to call it - I'm sure there are tons of names for it) which, as long as you realize what you're dealing with, can actually be fun (as my obsession with JRV is). It could also be brought on by what Steven Den Beste calls the High cliff syndrome:
It seems that our brains are constantly formulating alternatives, and then rejecting most of them at the last instant. ... All of us have had the experience of thinking something which almost immediately horrified us, "Why would I think such a thing?" I call it "High cliff syndrome".

At a viewpoint in eastern Oregon on the Crooked River, looking over a low stone fence into a deep canyon with sheer walls, a little voice inside me whispered, "Jump!" AAAGH! I became nervous, and my palms started sweating, and I decided I was no longer having fun and got back into my car and continued on my way.
It seems to be one of the profound truths of human existence that we can conceive of impossible situations that we know will never be possible. None of us are immune, from one of the great scientific minds of our time to the lowliest casino hound. This essay was, in fact, inspired by an Isaac Asimov essay called "Knock Plastic!" (as published in Magic) in which Asimov confesses his habitual knocking of wood (of course, he became a little worried over the fact that natural wood was being used less and less in ordinary construction... until, of course, someone introduced him to the joys of knocking on plastic). The insights driven by such superstitious "security beliefs" must indeed be kept in perspective, but that includes realizing that we all think these things and that sometimes, it really can't hurt to indulge in a superstition.

Update: More on Security Beliefs here.
Posted by Mark on May 23, 2004 at 09:32 PM .: link :.


End of This Day's Posts

Sunday, May 02, 2004

The Unglamorous March of Technology
We live in a truly wondrous world. The technological advances over just the past 100 years are astounding, but, in their own way, they're also absurd and even somewhat misleading, especially when you consider how these advances are discovered. More often than not, we stumble onto something profound by dumb luck or by brute force. When you look at how a major technological feat was accomplished, you'd be surprised by how unglamorous it really is. That doesn't make the discovery any less important or impressive, but we often take the results of such discoveries for granted.

For instance, how was Pi originally calculated? Chris Wenham provides a brief history:
So according to the Bible it's an even 3. The Egyptians thought it was 3.16 in 1650 B.C.. Ptolemy figured it was 3.1416 in 150 AD. And on the other side of the world, probably oblivious to Ptolemy's work, Zu Chongzhi calculated it to 355/113. In Baghdad, circa 800 AD, al-Khwarizmi agreed with Ptolemy; 3.1416 it was, until James Gregory begged to differ in the late 1600s.

Part of the reason why it was so hard to find the true value of Pi (π) was the lack of a good way to precisely measure a circle's circumference when your piece of twine would stretch and deform in the process of taking it. When Archimedes tried, he inscribed two polygons in a circle, one fitting inside and the other outside, so he could calculate the average of their boundaries (he calculated π to be 3.1418). Others found you didn't necessarily need to draw a circle: Georges Buffon found that if you drew a grid of parallel lines, each 1 unit apart, and dropped a pin on it that was also 1 unit in length, then the probability that the pin would fall across a line was 2/π. In 1901, someone dropped a pin 34080 times and got an average of 3.1415929.
π is an important number, and being able to figure out its value has played a significant role in the advance of technology. While all of these numbers are pretty much the same (to varying degrees of precision), isn't it absurd that someone figured out π by dropping 34,000 pins on a grid? We take π for granted today; we don't have to go about finding the value of π, we just use it in our calculations.
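As an aside, Buffon's pin-dropping experiment is easy to replay in software. Here is a minimal sketch of my own (not from Wenham's article) that simulates the same 34,080 drops; the geometry gives a crossing probability of 2/π, so π falls out as 2 × drops ÷ crossings.

import math
import random

def estimate_pi(drops=34080, seed=1):
    """Buffon's needle: unit-length pin dropped on parallel lines spaced one unit apart."""
    random.seed(seed)
    crossings = 0
    for _ in range(drops):
        y = random.uniform(0.0, 0.5)              # distance from the pin's center to the nearest line
        theta = random.uniform(0.0, math.pi / 2)  # pin's angle relative to the lines
        if y <= 0.5 * math.sin(theta):            # the pin crosses a line
            crossings += 1
    return 2.0 * drops / crossings

print(estimate_pi())

Run it a few times with different seeds and you get answers scattered around 3.14, about as rough as the 1901 hand count, which is rather fitting for a discovery made by brute force.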

In Quicksilver, Neal Stephenson portrays several experiments performed by some of the greatest minds in history, and many of the things they did struck me as especially unglamorous. Most would point to the dog and bellows scene as a prime example of how unglamorous the unprecedented age of discovery recounted in the book really was (and they'd be right), but I'll choose something more mundane (page 141 in my edition):
"Help me measure out three hundred feet of thread," Hooke said, no longer amused.

They did it by pulling the thread off of a reel, and stretching it alongside a one-fathom-long rod, and counting off fifty fathoms. One end of the thread, Hooke tied to a heavy brass slug. He set the scale up on the platform that Daniel had improvised over the mouth of the well, and put the slug, along with its long bundle of thread, on the pan. He weighed the slug and thread carefully - a seemingly endless procedure disturbed over and over by light gusts of wind. To get a reliable measurement, they had to devote a couple of hours to setting up a canvas wind-screen. Then Hooke spent another half hour peering at the scale's needle through a magnifying lens while adding or subtracting bits of gold foil, no heavier than snowflakes. Every change caused the scale to teeter back and forth for several minutes before settling into a new position. Finally, Hooke called out a weight in pounds, ounces, grains, and fractions of grains, and Daniel noted it down. Then Hooke tied the free end of the thread to a little eye he had screwed on the bottom of the pan, and he and Daniel took turns lowering the weight into the well, letting it drop a few inches at a time - if it got to swinging, and scraped against the chalky sides of the hole, it would pick up a bit of extra weight, and ruin the experiment. When all three hundred feet had been let out, Hooke went for a stroll, because the weight was swinging a little bit, and its movements would disturb the scale. Finally, it settled down enough that he could go back to work with his magnifying glass and his tweezers.
And, of course, the experiment was a failure. Why? The scale was not precise enough! The book is filled with similar such experiments, some successful, some not.

Another example is telephones. Pick one up, enter a few numbers on the keypad and voila! you're talking to someone halfway across the world. Pretty neat, right? But how does that system work, behind the scenes? Take a look at the photo on the right. This is a typical intersection in a typical American city, and it is absolutely absurd. Look at all those wires! Intersections like that are all over the world, which is part of the reason I can pick up my phone and talk to someone so far away. One other part of the reason I can do that is that almost everyone has a phone. And yet, this system is perceived to be elegant.

Of course, the telephone system has grown over the years, and what we have now is elegant compared to what we used to have:
The engineers who collectively designed the beginnings of the modern phone system in the 1940's and 1950's only had mechanical technologies to work with. Vacuum tubes were too expensive and too unreliable to use in large numbers, so pretty much everything had to be done with physical switches. Their solution to the problem of "direct dial" with the old rotary phones was quite clever, actually, but by modern standards was also terribly crude; it was big, it was loud, it was expensive and used a lot of power and worst of all it didn't really scale well. (A crossbar is an N² solution.) ... The reason the phone system handles the modern load is that the modern telephone switch bears no resemblance whatever to those of 1950's. Except for things like hard disks, they contain no moving parts, because they're implemented entirely in digital electronics.
So we've managed to get rid of all the moving parts and make things run more smoothly and reliably, but isn't it still an absurd system? It is, but we don't really stop to think about it. Why? Because we've hidden the vast and complex backend of the phone system behind innocuous looking telephone numbers. All we need to know to use a telephone is how to operate it (i.e. how to punch in numbers) and what number we want to call. Wenham explains, in a different essay:
The numbers seem pretty simple in design, having an area code, exchange code and four digit number. The area code for Manhattan is 212, Queens is 718, Nassau County is 516, Suffolk County is 631 and so-on. Now let's pretend it's my job to build the phone routing system for Emergency 911 service in the New York City area, and I have to route incoming calls to the correct police department. At first it seems like I could use the area and exchange codes to figure out where someone's coming from, but there's a problem with that: cell phone owners can buy a phone in Manhattan and get a 212 number, and yet use it in Queens. If someone uses their cell phone to report an accident in Queens, then the Manhattan police department will waste precious time transferring the call.

Area codes are also used to determine the billing rate for each call, and this is another way the abstraction leaks. If you use your Manhattan-bought cell phone to call someone ten yards away while vacationing in Los Angeles, you'll get charged long distance rates even though the call was handled by a local cell tower and local exchange. Try as you might, there is no way to completely abstract the physical nature of the network.
He also mentions cell phones, which are somewhat less absurd than plain old telephones, but when you think about it, all we've done with cell phones is abstract the telephone lines. We're still connecting to a cell tower (and towers need to be placed densely throughout the world), and from there, a call is often routed through the plain old telephone system. If we could see the RF layer in action, we'd be astounded; it would make the telephone wires look organized and downright pleasant by comparison.

The act of hiding the physical nature of a system behind an abstraction is very common, but it turns out that all major abstractions are leaky. Even so, leaky abstractions are still, to some degree, useful.
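To make Wenham's 911 example concrete, here is a minimal sketch (my own illustration, not from either essay) of a dispatcher that routes calls purely by area code. The mapping and function names are hypothetical; the point is simply that the moment a Manhattan-numbered cell phone calls from Queens, the abstraction leaks.

```python
# Hypothetical sketch of Wenham's 911 example: routing purely by area code.
# The mapping and names are illustrative, not taken from any real system.
AREA_CODE_TO_DEPARTMENT = {
    "212": "Manhattan police",
    "718": "Queens police",
    "516": "Nassau County police",
    "631": "Suffolk County police",
}

def route_911_call(caller_number: str) -> str:
    """Pick a police department based only on the caller's area code."""
    area_code = caller_number[:3]
    return AREA_CODE_TO_DEPARTMENT.get(area_code, "state dispatch")

# A cell phone bought in Manhattan keeps its 212 number wherever it roams,
# so an accident reported from Queens still lands with the wrong department:
print(route_911_call("2125550123"))  # -> "Manhattan police"
```

The abstraction - a number identifies a place - works fine for landlines and breaks as soon as the phone can move.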

One of the most glamorous technological advances of the past 50 years was the advent of space travel. Thinking of the heavens is an awe-inspiring and humbling experience, to be sure, but when you start breaking things down to the point where we can put a man in space, things get very dicey indeed. When it comes to space travel, there is no more glamorous a person than the astronaut, but again, how does one become an astronaut? By poring over and memorizing giant phone-book-sized manuals filled with technical specifications and detailed schematics. Hardly a glamorous proposition.

Steven Den Beste recently wrote a series of articles concerning the critical characteristics of space warships, and it is fascinating reading, but one of the things that struck me about the whole concept was just how unglamorous space battles would be. It sounds like a battle using the weapons and defenses described would be punctuated by long periods of waiting followed by a short burst of activity in which one side was completely disabled. This is, perhaps, the reason so many science fiction movies and books seem to flout the rules of physics. As a side note, I think a spectacular film could be made while still obeying the rules of physics, but only because we're so used to the absurd, physics-defying space battles.

None of this is to say that technological advances aren't worthwhile or that those who discover new and exciting concepts are somehow not impressive. If anything, I'm more impressed at what we've achieved over the years. And yet, since we take these advances for granted, we marginalize the effort that went into their discovery. This is due in part to the necessary abstractions we make to implement various systems. But when abstractions hide the crude underpinnings of technology, we come to see the technology and its creation as glamorous, thus bestowing honors upon those who make the discovery (perhaps for the wrong reasons). It's an almost paradoxical cycle. Perhaps because of this, we expect newer discoveries and innovations to somehow be less crude, but we must realize that all of our discoveries are inherently crude.

And while we've discovered a lot, it is still crude and could use improvements. Some technologies have stayed the same for thousands of years. Look at toilet paper. For all of our wondrous technological advances, we're still wiping our ass with a piece of paper. The Japanese have the most advanced toilets in the world, but they've still not figured out a way to bypass the simple toilet paper (or, at least, abstract the process). We've got our work cut out for us. Luckily, we're willing to go to absurd lengths to achieve our goals.
Posted by Mark on May 02, 2004 at 09:47 PM .: link :.


End of This Day's Posts

Sunday, April 04, 2004

Thinking about Security
I've been making my way through Bruce Schneier's Crypto-Gram newsletter archives, and I came across this excellent summary of how to think about security. He breaks security down into five simple questions that should be asked of a proposed security solution, some obvious, some not so much. In the post 9/11 era, we're being presented with all sorts of security solutions, and so Schneier's system can be quite useful in evaluating them.
This five-step process works for any security measure, past, present, or future:

1) What problem does it solve?
2) How well does it solve the problem?
3) What new problems does it add?
4) What are the economic and social costs?
5) Given the above, is it worth the costs?
What this process basically does is force you to judge the tradeoffs of a security system. All too often, we either assume a proposed solution doesn't create problems of its own, or assume that because a proposed solution isn't perfect, it's useless. Security is a tradeoff. It doesn't matter whether a proposed security system makes us safe; what matters is whether the system is worth the tradeoffs (or price, if you prefer). For instance, in order to make your computer invulnerable to external attacks from the internet, all you need to do is disconnect it from the internet. However, that means you can no longer access the internet! That is the price you pay for a perfectly secure solution to internet attacks. And it doesn't protect against attacks from those who have physical access to your computer. Besides, you presumably want to use the internet, seeing as you had a connection you wanted to protect. The old saying still holds: a perfectly secure system is a perfectly useless system.
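As a rough illustration (my own, not Schneier's), the five questions can be treated as a simple checklist and walked through for the "disconnect from the internet" measure described above:

```python
# A minimal, hypothetical sketch: Schneier's five questions as a checklist,
# filled in for the "disconnect the computer from the internet" measure.
SCHNEIER_QUESTIONS = (
    "What problem does it solve?",
    "How well does it solve the problem?",
    "What new problems does it add?",
    "What are the economic and social costs?",
    "Given the above, is it worth the costs?",
)

disconnect_evaluation = (
    "External attacks arriving over the internet.",
    "Perfectly -- no connection, no remote attack.",
    "No internet access; physical-access attacks remain possible.",
    "You lose everything you wanted the connection for in the first place.",
    "Almost never -- the tradeoff defeats the purpose of being online at all.",
)

for question, answer in zip(SCHNEIER_QUESTIONS, disconnect_evaluation):
    print(f"{question}\n  -> {answer}")
```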

In the post 9/11 world we're constantly being bombarded by new security measures, but at the same time, we're being told that a solution which is not perfect is worthless. It's rare that a new security measure will provide a clear benefit without causing any problems. It's all about tradeoffs...

I had intended to apply Schneier's system to a contemporary security "solution," but I can't seem to think of anything at the moment. Perhaps more later. In the meantime, check out Schneier's recent review of "I am Not a Terrorist" Cards, in which he tears apart a proposed security system that sounds interesting on the surface but makes little sense when you take a closer look (which Schneier does mercilessly).
Posted by Mark on April 04, 2004 at 11:09 PM .: link :.


End of This Day's Posts

Sunday, March 21, 2004

Inherently Funny Words, Humor, and Howard Stern
Here's a question: Which of the following words is most inherently funny?
  • Boob (and its variations, such as boobies and boobery)
  • Chinchilla
  • Aardvark
  • Urinal
  • Stroganoff
  • Poopie
  • Underpants
  • Underroos
  • Fart
  • Booger
Feel free to advocate your favorites or suggest new ones in the comments. Some words are just funny for no reason. Why is that? In Neil Simon's The Sunshine Boys, a character says:
Words with a 'k' in it are funny. Alkaseltzer is funny. Chicken is funny. Pickle is funny. All with a 'k'. 'L's are not funny. 'M's are not funny. Cupcake is funny. Tomatoes is not funny. Lettuce is not funny. Cucumber's funny. Cab is funny. Cockroach is funny -- not if you get 'em, only if you say 'em.
Well, that is certainly a start, but it doesn't really tell the whole story. Words with an "oo" sound are also often funny, especially when used in reference to bodily functions (as in poop, doody, booger, boobies, etc...) In fact, bodily functions are just plain funny. Witness fart.

Of course, ultimately it's a subjective thing. To me, boobies are funnier than breasts, even though they mean the same thing. To you, perhaps not. It's the great mystery of humor, and one of the most beautiful things about laughter is that it happens involuntarily. We don't (always) have to think about it, we just do it. Here's a quote from Dennis Miller to illustrate the point:
The truth is the human sense of humor tends to be barbaric and it has been that way all along. I'm sure on the eve of the nativity when the tall Magi smacked his forehead on the crossbeam while entering the stable, Joseph took a second away from pondering who impregnated his wife and laughed his little carpenter ass off. A sense of humor is exactly that: a sense. Not a fact, not etched in stone, not an empirical math equation but just what the word intones: a sense of what you find funny. And obviously, everybody has a different sense of what's funny. If you need confirmation on that I would remind you that Saved by the Bell recently celebrated the taping of their 100th episode. Oh well, one man's Moliere is another man's Screech and you know something, that's the way it should be.
There has been a lot of controversy recently about the FCC's proposed fines against Howard Stern (which may have been temporarily postponed). Stern has been fined many times before, including "$600,000 after Stern discussed masturbating to a picture of Aunt Jemima." Stern, of course, has flown off the handle at the prospect of new fines. Personally, I think he's overreacting a bit by connecting the whole thing with Bush and the religious right, but part of the reason he is so successful is that his overreaction isn't totally uncalled for. At the core of his argument is a serious concern about censorship, and a worry about the FCC abusing its authority.

On the other hand, some people don't see what all the fuss is about. What's wrong with having a standard for the public airwaves that broadcasters must live up to? Well, in theory, nothing. I'm not wild about the idea, but there are things I can understand people not wanting to be broadcast over public airwaves. The problem is deciding what is acceptable.

Just what is the standard? Sure, you've got the 7 dirty words, that's easy enough, but how do you define decency? The fines proposed against Stern are supposedly from a 3 year old broadcast. Does that sound right to you? Recently Stern wanted to do a game in which the loser had to let someone fart in their face. Now, I can understand some people thinking that's not very nice, but does it qualify as "indecent"? Apparently, it might, and Stern was not allowed to proceed with the game (he was given the option to place the loser in a small booth, and then have someone fart in the booth). Would it actually have resulted in a fine? Who knows? And that is the real problem with standards. If you want to propose a standard, it has to be clear, and you need to draw a line between what is hurtful and what is simply disgusting or offensive. You may be upset at Stern's asking a Nigerian woman if she eats monkeys, but does that deserve a fine from the government? And how much? And is it really the job of the government to decide these sorts of things? In the free market, advertisers can choose (and have chosen) not to advertise on Stern's program.

At the bottom of this post, Lawrence Theriot makes a good point about that:
Yes a lot of what Stern does could be considered indecent by a large portion of the population (which is the Supreme Court standard) but in this case it's important to consider WHERE those people might live and to what degree they are likely to be exposed to Stern's brand of humor before you decide that those people need federal protection from hearing his show. Or, in other words, might the market have already acted to protect those people in a very real way that makes Federal action unnecessary?

Stern is on something like 75 radio stations in the US and almost every one of them is concentrated in a city. Most people who think Stern is indecent do not live in city centers. They tend to live in "fly-over" country where Stern's show does not reach.

Rush Limbaugh by comparison (which no one could un-ironically argue is indecent in any way) is on 600 stations around the country, and reaches about the same number of listeners as Howard does (10 million to 14 million I think). So in effect, we can see that the market has acted to protect most of those who do not want to hear the kind of radio that Stern does. Stern's show, which could be considered indecent is not very widely available, when you compare it to Limbaugh's show which is available in virtually every single corner of the country, and yet a comparable number of people seem to want to tune in to both shows.

Further, when you take into account the fact that in a city like Miami (where Stern was taken off the air last week) there may be as many as a million people who want to hear his show, any argument that Stern needs to be censored on indecency grounds seems to fly right out the window.

Anyway, I think both sides are making some decent points in this argument, but I hadn't heard one up until now that took the market and demographics into account until last night, and we all know how much faith I put in the market to solve a lot of society's toughest questions, so I thought I'd point this one out as having had an impact on me.
In the end, I don't know the answer, but there is no easy solution here. I can see why people want standards, but standards can be quite impractical. On the other hand, I can see why Stern is so irate at the prospect of being fined for something he said 3 years ago - and also never knowing if what he's going to say qualifies as "indecent" (and not really being able to take such a thing to court to really decide). Dennis Miller again:
We should question it all; poke fun at it all; piss off on it all; rail against it all; and most importantly, for Christ's sake, laugh at it all. Because the only thing separating holy writ from complete bullshit is your perspective. Its your only weapon. Keep the safety off. Don't take yourself too seriously.
In the end, Stern makes a whole lot of people laugh and he doesn't take himself all that seriously. Personally, I don't want to fine him for that, but if you do, you need to come up with a standard that makes sense and is clear and practical to implement. I get the feeling this wouldn't be an issue if he was clearly right or clearly wrong...
Posted by Mark on March 21, 2004 at 09:04 PM .: link :.


End of This Day's Posts

Sunday, February 22, 2004

The Eisenhower Ten
The Eisenhower Ten by CONELRAD: An excellent article detailing a rather strange episode in U.S. history. During 1958 and 1959, President Eisenhower issued ten letters to mostly private citizens granting them unprecedented power in the event of a "national emergency" (i.e. nuclear war). Naturally, the Kennedy administration was less than thrilled with the existence of these letters, which, strangely enough, did not contain expiration dates.

So who made up this Shadow Government?
...of the nine, two of the positions were filled by Eisenhower cabinet secretaries and another slot was filled by the Chairman of the Board of Governors of the Federal Reserve. The remaining six were very accomplished captains of industry who, as time has proven, could keep a secret to the grave. It should be noted that the sheer impressiveness of the Emergency Administrator roster caused Eisenhower Staff Secretary Gen. Andrew J. Goodpaster (USA, Ret.) to gush, some 46 years later, "that list is absolutely glittering in terms of its quality." In his interview with CONELRAD, the retired general also emphasized how seriously the President took the issue of Continuity of Government: "It was deeply on his mind."
Eisenhower apparently assembled the list himself, and if that is the case, the quality of the list was no doubt "glittering". Eisenhower was a good judge of talent, and one of the astounding things about his command of allied forces during WWII was that he successfully assembled an integrated military command made up of both British and American officers, and they were actually effective on the battlefield. I don't doubt that he would be able to assemble a group of Emergency Administrators that would fit the job, work well together, and provide the country with a reasonably effective continuity of government in the event of the unthinkable.

Upon learning of these letters, Kennedy's National Security Advisor, McGeorge Bundy, asserted that the "outstanding authority" of the Emergency Administrators should be terminated... but what happened after that is somewhat of a mystery. Some correspondence exists suggesting that several of the Emergency Administrators were indeed relieved of their duties, but there are still questions as to whether Kennedy retained the services of 3 of the Eisenhower Ten and whether he established an emergency administration of his own.
It is Gen. Goodpaster's assertion that because Eisenhower practically wrote the book on Continuity of Government, the practice of having Emergency Administrators waiting in the wings for the Big One was a tradition that continued throughout the Cold War and perhaps even to this day.
On March 1, 2002, the New York Times reported that Bush had indeed set up a "shadow government" in the wake of the 9/11 terror attacks. This news was, of course, greeted with much consternation, and understandably so. Though there may be a historical precedent (even if it is a controversial one) for such a thing, the details of such an open-ended policy are still a bit fuzzy to me...

CONELRAD has done an excellent job collecting, presenting, and analyzing information pertaining to the Eisenhower Ten, and I highly recommend that anyone interested in the issue of continuity of government check it out. Even so, there are still lots of unanswered questions about the practice, but it makes for fascinating reading....
Posted by Mark on February 22, 2004 at 09:31 PM .: link :.


End of This Day's Posts

Thursday, February 19, 2004

Welcome to the Hotel Baghdad
Steve Mumford has made his way back to Iraq and posted the seventh installment of his brilliant Baghdad Journal. Once again, he puts the traditional media reporting to shame with his usual balanced and thoughtful views. Read the whole thing, as they say.

For those who are not familiar with Mumford, he is a New York artist who has travelled to Iraq a few times in the past year and published several "journal" entries detailing his exploits. I've been posting his stuff since I found it last fall, and all of the installments to date are excellent. I highly recommend you check them out; there's usually some nice art as well. In the most recent installment, he meets up with several friends he has made, and written about, on previous visits:
At Hewar, I meet Qassim, who says he's waiting for some of "your countrymen." He's preparing one of his renowned grilled fish lunches. Soon the guests arrive: it's the Quakers with Bruce Cockburn, who eye me warily. I don't think Qassim realizes how much foreigners tend to avoid one another in their jealous rush to befriend Iraqis. Or maybe he does, and enjoys watching the snubs and one-upmanship. I take my leave, and relax in the teahouse, when the artists Ahmed al Safi and Haider Wadi show up. They seem like old friends now, and I'm happy to see them.

That evening Ahmed and the painter Esam Pasha come by the hotel for dinner. Esam gives me a great bear hug. It's terrific to see him again.
Again, excellent reading. [Thanks must go again to Lexington Green from Chicago Boyz for introducing me to Mumford's writings last fall]

Updates: Several updates have been made, adding links to new columns in the series.
Posted by Mark on February 19, 2004 at 09:51 PM .: link :.


End of This Day's Posts

Sunday, February 15, 2004

Deterministic Chaos and the Simulated Universe
After several months of absence, Chris Wenham has returned with a new essay entitled 2 + 2. In it, he explores a common idea:
Many have speculated that you could simulate a working universe inside a computer. Maybe it wouldn't be exactly the same as ours, and maybe it wouldn't even be as complex, either, but it would have matter and energy and time would elapse so things could happen to them. In fact, tiny little universes are simulated on computers all the time, for both scientific work and for playing games in. Each one obeys simplified laws of physics the programmers have spelled out for them, with some less simplified than others.
As always, the essay is well done and thought provoking, exploring the idea from several mathematical angles. But it makes the assumption that the universe is both deterministic and infinitely quantifiable. I am certainly no expert on chaos theory, but it seems to me that it has an important bearing on this subject.

A system is said to be deterministic if its future states are strictly dependent on current conditions. Historically, it was thought that all processes occurring in the universe were deterministic, and that if we knew enough about the rules governing the behavior of the universe and had accurate measurements of its current state, we could predict what would happen in the future. Naturally, this theory has proven very useful in modeling real world events such as the path of a flying object or the ebb and flow of the tides, but there have always been systems which were more difficult to predict. Weather, for instance, is notoriously tricky to predict. It was always thought that these difficulties stemmed from an incomplete knowledge of how the system works or inaccurate measurement techniques.

In his essay, Wenham discusses how a meteorologist named Edward Lorenz stumbled upon the essence of what is referred to as chaos (or nonlinear dynamics, as it is often called):
Lorenz's simulation worked by processing some numbers to get a result, and then processing the result to get the next result, thus predicting the weather two moments of time into the future. Let's call them result1, which was fed back into the simulation to get result2. result3 could then be figured out by plugging result2 into the simulation and running it again. The computer was storing resultn to six decimal places internally, but only printing them out to three. When it was time to calculate result3 the following day, he re-entered result2, but only to three decimal places, and it was this that led to the discovery of something profound.

Given just an eentsy teensty tiny little change in the input conditions, the result was wild and unpredictable.
This phenomenon is called "sensitive dependence on initial conditions." For the systems in which we can successfully make good predictions (such as the path of a flying object), only a reasonable approximation of the initial state is necessary to make a reasonably accurate prediction. In a system exhibiting sensitive dependence, however, reasonable approximations of the initial state do not yield reasonable approximations of the future state.
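To see how devastating a tiny rounding difference can be, here is a minimal sketch (my own illustration, not from Wenham's essay) using the logistic map, a textbook chaotic system. The two starting values differ only past the third decimal place, much like Lorenz's truncated printout:

```python
# A small illustration of sensitive dependence on initial conditions, using
# the logistic map x_next = r * x * (1 - x) with r = 4 (a chaotic regime).
# This is a sketch, not Lorenz's actual weather model.
def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

full = logistic_trajectory(0.123456)   # "full precision" initial condition
rounded = logistic_trajectory(0.123)   # the same value, kept to three decimals

for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: {full[n]:.6f} vs {rounded[n]:.6f} "
          f"(difference {abs(full[n] - rounded[n]):.6f})")
# The two runs agree early on, then drift until they are completely unrelated --
# the same behavior Lorenz saw after re-entering his results to three decimals.
```

No matter how many extra decimal places you keep, there is always a horizon beyond which the prediction becomes worthless, which is exactly the problem with long term forecasts.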

So here comes the important part: for a chaotic system such as weather, in order to make useful long term predictions, you need measurements of the initial conditions with infinite accuracy. What this means is that even a deterministic system, which in theory can be modeled by mathematical equations, can generate behavior which seems random and unpredictable. This manifests itself in nature all the time. Weather is the typical example, but there is also evidence that the human brain is governed by deterministic chaos. Indeed, our brain's ability to generate seemingly unpredictable behavior is an important component of both survival and creativity.

So my question is, if it is not possible to quantify the initial conditions of a chaotic system with infinite accuracy, is that system really deterministic? In a sense, yes, even though it is impossible to calculate it:
Michelangelo claimed the statue was already in the block of stone, and he just had to chip away the unnecessary parts. And in a literal sense, an infinite number of universes of all types and states should exist in thin air, indifferent to whether or not we discover the rules that exactly reveal their outcome. Our own universe could even be the numerical result of a mathematical equation that nobody has bothered to sit down and solve yet.

But we'd be here, waiting for them to discover us, and everything we'll ever do.
The answer might be there, whether we can calculate it or not, but even if it is, can we really do anything useful with it? In the movie Pi, a mathematician stumbles upon an enigmatic 216 digit number which is supposedly the representation of the infinite, the true name of God, and thus holds the key to deterministic chaos. But it's just a number, and no one really knows what to do with it, not even the mathematician who discovered it (he could make accurate predictions for the stock market, but he could not understand why, and the ability came at a price). In the end, it drove him mad. I don't pretend to have any answers here, but I think the makers of Pi got it right.
Posted by Mark on February 15, 2004 at 02:33 PM .: link :.


End of This Day's Posts

Sunday, January 25, 2004

Pynchon : Stephenson :: Apples : Oranges
The publication of Cryptonomicon led to lots of comparisons with Thomas Pynchon's Gravity's Rainbow in reviews. This was mostly based on the rather flimsy convergences of WWII and technology in the two novels. There were also some thematic similarities, but given the breadth of themes in Gravity's Rainbow, that isn't really a surprise. They did not resemble each other stylistically, nor did the narratives really resemble one another. There was, I suppose, a certain amount of playfulness present in both works, but in the end, anyone who read one and then the other would be struck by the contrast.

However, having recently read Stephenson's Quicksilver, I can see more of a resemblance to Pynchon. With Quicksilver, Stephenson displays a great deal more playfulness with style and narrative. He's become more willing to cut loose, explore language, fit the style to the situation he is describing and even slip out of "novel" format, whether it be the laundry-list compilation style of Royal Society meeting notes (for example, pages 182 - 186), the epistolary exploits of Eliza (pages 636 - 659, among many others), or theater script format (pages 716 - 729). Stephenson isn't quite as spastic as Pynchon, but the similarities between their styles are more than skin deep. In addition to this playfulness in the narrative style, Stephenson, like Pynchon, associates certain styles with specific characters (most notably the epistolary style that is used for Eliza). Again, Stephenson is much less radical than Pynchon, and only applies a fraction of the techniques that Pynchon employs in his novel, but Stephenson has progressed nicely in his recent works.

Most of the time, Stephenson is considerably more prosaic than Pynchon, and even when he does branch out stylistically, it is done in service of the story. The Eliza letters again provide a good example. The epistolary style allows Stephenson to write for a different audience. We know this, and thus Stephenson has a good time messing with us, especially towards the end of the novel where he takes it a step further and shows Eliza's encrypted letters and journal entries as translated by Bonaventure Rossignol (in the form of a letter to Louis XIV). All of this serves to further the plot. Pynchon, on the other hand, is more concerned with playfully exploring the narrative by experimenting with the English language. The plot takes a secondary role to the style, and to a certain extent the style drives the plot (well, that might be a bit of a stretch). Pynchon is one of the few who can pull that off, and Stephenson's style doesn't really compare. They're two different things, really.

Nate has a great post on this very subject, and he shows that a comparison of Quicksilver with Pynchon's novel Mason & Dixon is more apt:
The style of Mason Dixon is a synthesis of old and new that hews remarkably close to the old. Stephenson, on the other hand, writes in a much more modern style, only occasionally dotting his prose with historical flourishes ... The distinction here is an old one; classical rhetoricians spoke of Asiatic versus Attic style - the former is ornate, lush, and detailed, while the latter is lean, clean, and direct. Stephenson is a master of Attic style - a fact that's often obscured because, while his sentences are direct and elegant, their substance is often convoluted and complex. You can see it more clearly in his nonfiction - look at his explanation of the Metaweb for an excellent example. Pynchon, as an Asiatic writer, will elicit more "oohs" and "ahhs" for the power and grace of his prose, but will tend to lose his readers when he's trying to be florid and tackling difficult material at the same time. Obviously, both authors will tend toward the Attic or the Asiatic at different points, but in general, Stephenson wants his language to transparently convey his message, while Pynchon demands a certain amount of attention for the language itself.
I haven't read Mason & Dixon (it's in the queue), but from what I've heard this sounds pretty accurate. Again, he makes the point that Pynchon and Stephenson are on different playing fields, appropriating their styles to serve different purposes... and it shows. Stephenson is a lot more fun to read for someone like me because I prefer storytelling to experimental narrative fiction.

I recently read Pynchon's The Crying of Lot 49, and was shocked by the clarity of the straightforward and yet still vibrant prose. In that respect, I think Stephenson's work might resemble Crying more than the novels discussed in this post...

Update: As I write this, Pynchon is making his appearance on the Simpsons. Coincidence?
Posted by Mark on January 25, 2004 at 08:19 PM .: link :.


End of This Day's Posts

Sunday, January 18, 2004

To the Moon!
President Bush has laid out his vision for space exploration. Reaction has mostly been lukewarm. Naturally, there are opponents and proponents, but in my mind it is a good start. That we've changed focus to include long term manned missions on the Moon and a mission to Mars is a bold enough move for now. What is difficult is that this is a program that will span several decades... and several administrations. There will be competition and distractions. To send someone to Mars on the schedule Bush has set requires a consistent will among the American electorate as well. However, given the technology currently available, it might prove to be a wise move.

A few months ago, in writing about the death of the Galileo probe, I examined the future of manned space flight and drew a historical analogy with the pyramids. I wrote:
Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" has inherent value, well beyond the simple science involved.

Such projects are not without their historical equivalents. There are all sorts of theories explaining why the ancient Egyptian pyramids were built, but none are as persuasive as the idea that they were built to unify Egypt's people and cultures. At the time, almost everything was being done on a local scale. With the possible exception of various irrigation efforts that linked together several small towns, there existed no project that would encompass the whole of Egypt. Yes, an insane amount of resources were expended, but the product was truly awe-inspiring, and still is today.

Those who built the pyramids were not slaves, as is commonly thought. They were mostly farmers from the tribes along the River Nile. They depended on the yearly cycle of flooding of the Nile to enrich their fields, and during the months that their fields were flooded, they were employed to build pyramids and temples. Why would a common farmer give his time and labor to pyramid construction? There were religious reasons, of course, and patriotic reasons as well... but there was something more. Building the pyramids created a certain sense of pride and community that had not existed before. Markings on pyramid casing stones describe those who built the pyramids. Tally marks and names of "gangs" (groups of workers) indicate a sense of pride in their workmanship and respect between workers. The camaraderie that resulted from working together on such a monumental project united tribes that once fought each other. Furthermore, the building of such an immense structure implied an intense concentration of people in a single area. This drove a need for large-scale food storage, among other social constructs. The Egyptian society that emerged from the Pyramid Age was much different from the one that preceded it (some claim that this was the emergence of the state as we now know it.)

"What mattered was not the pyramid - it was the construction of the pyramid." If the pyramid was a machine for social progress, so too can the Space program be a catalyst for our own society.

Much like the pyramids, space travel is a testament to what the human race is capable of. Sure it allows us to do research we couldn't normally do, and we can launch satellites and space-based telescopes from the shuttle (much like pyramid workers were motivated by religion and a sense of duty to their Pharaoh), but the space program also serves to do much more. Look at the Columbia crew - men, women, white, black, Indian, Israeli - working together in a courageous endeavor, doing research for the benefit of mankind, traveling somewhere where few humans have been. It brings people together in a way few endeavors can, and it inspires the young and old alike. Human beings have always dared to "boldly go where no man has gone before." Where would we be without the courageous exploration of the past five hundred years? We should continue to celebrate this most noble of human spirits, should we not?
We should, and I'm glad we're orienting ourselves in this direction. Bush's plan appeals to me because of its pragmatism. It doesn't seek to simply fly to Mars; it seeks to leverage the Moon first. We've already been to the Moon, but it still holds much value as a destination in itself, as well as a testing ground and possibly even a base from which to launch, or at least support, our Mars mission. Some, however, find the financial side of things a little too pragmatic:
In its financial aspects, the Bush plan also is pragmatic -- indeed, too much so. The president's proposal would increase NASA's budget very modestly in the near term, pushing more expensive tasks into the future. This approach may avoid an immediate political backlash. But it also limits the prospects for near-term technological progress. Moreover, it gives little assurance that the moon-Mars program will survive the longer haul, amid changing administrations, economic fluctuations, and competition from voracious entitlement programs.
There's that problem of keeping everyone interested and happy in the long run again, but I'm not so sure we should be too worried... yet. Wretchard draws an important distinction: we've laid out a plan to voyage to Mars - not a plan to develop the technology to do so. Efforts will be proceeding on the basis of current technology, but as Wretchard also notes in a different post, current technology may be unsuitable for the task:
Current launch costs are on the order of $8,000/lb, a number that will have to be reduced by a factor of ten for the habitation of the moon, the establishment of La Grange transfer stations or flights to Mars to be feasible. This will require technology, and perhaps even basic physics, that does not even exist. Simply building bigger versions of the Saturn V will not work. That would be "like trying to upgrade Columbus's Nina, Pinta, and Santa Maria with wings to speed up the Atlantic crossing time. A jet airliner is not a better sailing ship. It is a different thing entirely." The dream of settling Mars must await an unforeseen development.
Naturally, the unforeseen development is notoriously tricky, and while we must pursue alternate forms of propulsion, it would be unwise to hold off on the voyage until this development occurs. We must strike a delicate balance between concentrating on the goal and developing the means to achieve that goal. As Wretchard notes, this is largely dependent on timing. What is also important here is that we are able to recognize this development when it happens and that we leave our program agile enough to react effectively to it.

Recognizing this development will prove interesting. At what point does a technology become mature enough to use for something this important? This may be relatively straightforward, but it is possible that we could jump the gun and proceed too early (or, conversely, wait too long). Once recognized, we need to be agile, by which I mean that we must develop the capacity to seamlessly adapt the current program to exploit this new development. This will prove challenging, and will no doubt require a massive increase in funding, as it will also require a certain amount of institutional agility - moving people and resources to where we need them, when we need them. Once we recognize our opportunity, we must pounce without hesitation.

It is a bold and challenging, yet judiciously pragmatic, vision that Bush has laid out, but this is only the first step. The truly important challenges are still a few years off. What is important is that we recognize and exploit any technological advances on our way to Mars, and we can only do so if we are agile enough to effectively react. Exploration of the frontiers is a part of my country's identity, and it is nice to see us proceeding along these lines again. Like the Egyptians so long ago, this mammoth project may indeed inspire a unity amongst our people. In these troubled times, that would be a welcome development. Though Europe, Japan, and China have also shown interest in such an endeavor, I, along with James Lileks, like the idea of an American being the first man on Mars:
When I think of an American astronaut on Mars, I can't imagine a face for the event. I can tell you who staffed the Apollo program, because they were drawn from a specific stratum of American life. But things have changed. Who knows who we'd send to Mars? Black pilot? White astrophysicist? A navigator whose parents came over from India in 1972? Asian female doctor? If we all saw a bulky person bounce out of the landing craft and plant the flag, we'd see that wide blank mirrored visor. Sex or creed or skin hue - we'd have no idea.

This is the quintessence of America: whatever face you'd see when the visor was raised, it wouldn't be a surprise.
Indeed.

Update 1.21.04: More here.
Posted by Mark on January 18, 2004 at 05:16 PM .: link :.


End of This Day's Posts

Tuesday, December 30, 2003

Each will have his personal Rocket
I finally finished my review of Thomas Pynchon's novel Gravity's Rainbow. Since I blogged about the novel often, I figured I'd let everyone know it's out there. Oddly, when writing the review, I wrote the last paragraph first:
If I were to meet Thomas Pynchon tomorrow, I wouldn't know whether to shake his hand or sucker-punch him. Probably both. I'd extend my right arm, take his hand in mine, give one good pump, then yank him towards my swinging left fist. As he lay crumpled on the ground beneath me, gasping in pain, I'd point a bony finger right between his eyes and say "That was for Gravity's Rainbow." I think he'd understand.
Heh. I also wrote up a rather lengthy selection of quotes from the novel, with some added commentary. And in case you missed the previous bloggery about Gravity's Rainbow, here they are, in all their glory.

Update: Only marginally on-topic, but Pynchon is due to be on the Simpsons this season. Typical hermit-like behavior. Thanks to Nate for the link. Also, I recently completed Quicksilver and wanted to comment on the differences/similarities between Pynchon and Stephenson, but it turns out that Nate has already done so on his blog a while back. He does a great job, but I still think I'll be posting something on that subject relatively soon...
Posted by Mark on December 30, 2003 at 09:47 PM .: link :.


End of This Day's Posts

Sunday, December 14, 2003

Ladies and gentlemen, we got him
U.S. forces have captured Saddam Hussein. This is exceptional news! And it figures that I had just commented on how intelligence successes are transparent, that we never see them. D'oh! This is a major intelligence victory. We developed an intelligence infrastructure that allowed us to find Hussein, who had buried himself in a hole in a family member's cellar. We captured him with shovels. This will most likely lead to an intelligence windfall, as already-captured Iraqi officials who may have been biting their tongues for fear of Saddam may start talking... (not to mention Saddam himself)



The circumstances of the arrest are about as good as we could ever hope:
  • It is speculated that he was turned in by a family member (this is looking less likely, I'm not sure how we found him...)
  • Not a single shot fired, not even by Saddam. He had ample opportunity to shoot himself, but he didn't. That he was captured alive and well will be very beneficial, as it will shut up those conspiracy theorists who would have claimed that it was very convenient that Saddam "killed himself." I've actually seen people who said the same thing about Saddam's sons express surprise that he was taken alive.
  • That it took so long to get him demonstrates just how dedicated and persistent we are when it comes to tracking down someone of Saddam's importance. I wonder how Osama must feel...
  • That his actions were so cowardly (and his visual appearance) will go a long way towards demolishing his image.
This will increase support from the U.S. public as well as support from the Iraqi people. A major worry of Iraqis was that Saddam would come back and punish those who cooperated with the coalition. No more. This will allow the Iraqi people to embrace the new government without fear of retribution from Saddam (though they do still have to worry about the terrorists). And this will represent a major blow to the terrorists. No one knows how involved Hussein was in the attacks against coalition forces, but in almost any scenario, this is bad for the terrorists. I believed Bush to be very vulnerable, but this is big for him. The Democratic candidates have been roundly criticising Bush for this, and this will hurt them.

A lot will depend on how things go from here. The impending trial and how it is executed will be very important. We will also need to make sure Saddam doesn't kill himself or get killed (a la Goering or Oswald). If he turns up dead, we'll lose out on a lot.

Lots of others are commenting on this, so here goes:
  • Glenn Reynolds: Duh. He has several good posts, including one in which he mentions: "THE LESSON: Saddam's capture also shows the importance of patience, and of ignoring the kvetching of the Coalition Of The Pissy. While people bitched, the military just kept gathering intelligence and keeping Saddam on the run until he slipped and they caught him."
  • A BBC reporters log: "We all imagined that if the Americans got a tip off they would just bomb somewhere off the face of the earth." [via Instapundit]
  • Steven Den Beste: "He'll almost certainly end up on trial in an Iraqi tribunal which was created just a few days ago."
  • Merde in France: "Baghdad celebrates, and Paris frowns."
  • Hammorabi: An Iraqi blogger comments
  • Baghdad Skies: Another blog run by an Iraqi
  • Deeds: CPA member John Galt comments
  • Buzz Machine: Lots of good stuff from Jeff Jarvis
  • The Command Post: They're all over this. More here.
  • L.T. Smash: A veteran of this war comments and has a good collection of links...
  • IRAQ THE MODEL: Iraqi blogger Omar comments: "Thank you American, British, Spanish, Italian, Australian, Ukrainian, Japanese and all the coalition people and all the good people on earth. God bless the 1st brigade. God bless the 4th infantry division. God bless Iraq. God bless America. God bless the coalition people and soldiers. God bless all the freedom loving people on earth. I wish I could hug you all."
  • Dean Esmay: "Score!" My thoughts exactly!
  • Belmont Club: Wretchard comments and makes a good point too: "The magnificence of nations often conceals the smallness of their acts; and from their petty corruptions and idiocies this tapestry of tragedy has been woven." Saddam wasn't the only one responsible for the suffering of Iraqis... Look for more from him, as he has proven very insightful...
  • Random Jottings: John Weidner comments. "My guess is that they will now sneer that 'we were promised peace after Saddam was captured.' Well. Tough luck."
  • Porphyrogenitus: Porphy comments: "Today, for me, is a day of happiness for the people of Iraq, off of whom finally the shadow of Saddam will lift."
  • Winds of Change is on the case...
  • The Dissident Frogman: "I'm under the impression that Saddam Hussein would deserve an award for the Most Ridiculous Fall for a Dictator"
  • Sneaking Suspicions: Fritz Schrank comments: "And by the way, who told Hussein it was a good idea to try to pass himself off as Ted Kazsynski?" Heheh, check out the picture he has...
  • Tacitus: "Got him. Good. Now comes the real fun -- weeks and months of debriefing and interrogation at our hands, followed by trial at the hands of his fellow Iraqis. There are so many questions that he can answer: his regime's true WMD status; the nature of and preparation for the Ba'athist-supported insurgency; the tragically long missing persons list from Kuwait and among his own people; the true extent of his collaboration with terror networks abroad. Psychologically, it will be a fascinating experience -- the closest we may ever have come to having a truly Stalinesque personality in the dock. Will he prove himself pliable and brittle, or will sick megalomania impart qualities of fierce resistance?"
  • Jim Miller: "I just heard that December 13 may become a national holiday."
  • Donald Sensing: "CNN says that an Iraqi gave the tip to US forces. Only three hours later, we had him."
  • Baghdaddy: He comments: "Early Sunday morning, the U.S. Army delivered to the peoples of the world, an early Christmas present. The capture of Saddam Hussien. There is such celebrating among the general population, that the spirit of Baghdad has changed to one of jubilation. ... The celebratory fire, and the smiles on everyones faces is reminisent of the victory scene at the end of Return of The Jedi, when the Death Star was destroyed signifying the end of the Empire. The scene here in Baghdad is truly one worthy of a John Williams soundtrack!" Ha!
  • A Small Victory: Michelle has lots of stuff... "We got the bastard!"
  • The Messopotamian: Iraqi blogger Alaa comments: "The Baghdadis are expressing what they really think again. Can you hide this now CNN & others? I don't like swearing, but for those foul friends of the murderers, of all nationalities and kinds, it is like a spike has shot up their asholes to come out of their mouths."
  • Chicago Boyz: Lex comments: "All morning I have been breaking into a smile and Motorhead's Ace of Spades has been running through my head" Other ChicagoBoyz comment.
  • Solport: Don Quixote comments...
  • Horsefeathers: Stephen Rittenberg has a roundup of the Democratic candidate's reactions
  • Tim Blair has lots, including a roundup of Aussie reactions...
  • Calpundit: Kevin Drum comments
  • Joe User/Right Wing Techie: Brad Wardell comments...
  • Lee Harris comments "The man who called upon his countrymen and fellow Muslims to sacrifice their own lives in suicide attacks, to blow themselves to bits in order to glorify his name, failed to follow his own instructions. He refused the grand opportunity of a martyr's death..."
  • Boots on the Ground: Kevin, a soldier in Iraq, comments on this and his experiences when Uday and Qusay were killed.
  • The End Zone: Hamas is echoing Lee Harris: "CNN reports the head of Palestinian Hamas has issued a statement expressing outrage that Saddam would encourage martrydom in others, yet personally go down without a fight."
  • HipperCritical has an anti-war blogger reaction roundup... [via instapundit]
  • Power Line has lots of good info...
  • Andrew Olmsted comments with a nice Bull Durham reference: "Yes, it is phenomonal news that Saddam has been captured, and I've been fairly bouncing up and down with excitement since I heard the news. ... But as good as this news is, this moment, too, is over."
  • Wolverines!
Gah! Information overload! I could probably find a million other links to put here. Perhaps more later...

Update: I've been updating the link list like crazy...


V is for Victory!

A Thumbs up from Kuwaitis

Update: Dean Esmay steals my picture! Hee hee. He's got more good stuff as well..

Update 12.15.03: And I thought yesterday represented information overload. Tons of new stuff appearing today, much of it excellent, and a lot of it having to do with the challenge of what to do with Hussein...
  • Belmont Club: I told you so - another excellent and insightful article today which examines the strengths of Saddam's current position.
  • Chicago Boyz: Along the same lines, Lex questions the assumption that "it will go well for the 'prosecution' and end without too much hassle in Saddam's execution."
  • Steven Den Beste weighs in on the situation, focusing more on the success of US intelligence and the importance and effects of what we do with Saddam.
  • Ralph Peters also talks about the intelligence successes in Iraq.
Posted by Mark on December 14, 2003 at 11:52 AM .: link :.


End of This Day's Posts

Wednesday, December 03, 2003

Is the Christmas Tree Christian?
The Winter Solstice occurs when your hemisphere is leaning farthest away from the sun (because of the tilted axis of the earth's rotation), and thus this is the time of the year when daylight is the shortest and the sun has its lowest arc in the sky.

No one is really sure when exactly it happened (or who started the idea), but this period of time eventually took on an obvious symbolic meaning to human beings. Many geographically diverse cultures throughout history have recognized the winter solstice as a turning point, a return of the sun. Solstice celebrations and ceremonies were common, sometimes performed out of a fear that the failing light of the sun would never return unless humans demonstrated their worth through celebration or vigil.

It has been claimed that the Mesopotamians were among the first to celebrate the winter solstice with a 12 day festival of renewal, designed to help the god Marduk tame the monsters of chaos for one more year. Other theories go as far back as 10,000 years. More recently, the Romans celebrated the winter solstice with a festival called Saturnalia in honor of Saturn, the god of agriculture.

Integral to many of these celebrations were plants and trees that remained green all year. Evergreens reminded them of all the green plants that would grow again when the sun returned; they symbolized the solstice and the triumph of life over death.

In the early days of Christianity, the birth of Christ was not celebrated (instead, Easter was, and possibly still is, the main holiday of Christianity). In the fourth century, the Church decided to make the birth of Christ a holiday to be celebrated. There was only one problem - the Bible makes no mention of when Christ was born. Although there was some evidence to draw from, the Church chose to celebrate Christmas on December 25. It is believed that this date was chosen to coincide with traditional winter solstice festivals such as the Roman pagan Saturnalia festival, in the hopes that Christmas would be more popularly embraced by the people of the world. And embraced it was, but the Church found that as the holiday spread, their choice to hold Christmas at the same time as solstice celebrations did not allow the Church to dictate how the holiday was celebrated. And so many of the pagan traditions of the solstice survived over the next millennium, even though pagan religions had largely given way to Christianity.

And so the importance of evergreens in these celebrations continued. The use of the Christmas tree, as we now know it, is generally credited to sixteenth century Germans, specifically the Protestant reformer Martin Luther, who is thought to be the first to add lighted candles to a tree.

While the Germans found a certain significance in the pagan traditions concerning evergreens, it was not a universally held belief. For instance, the Christmas tree did not gain traction in America until the mid-nineteenth century. Up until then, Christmas trees were generally seen as pagan symbols and mocked by New England Puritans. But the tradition eventually took hold, thanks to German settlers in Pennsylvania (among others) and the increasing secularization of the holiday in America. In the past century, the Christmas tree has only gained in popularity, as more and more people adopted the tradition of displaying a decorated evergreen in their home. After all this time, Christmas trees have become an American tradition.

There has been a lot of controversy lately concerning the presence (or, I suppose, the removal and thus absence) of Christmas trees in schools. Personally, I don't see what is so controversial about it, as a Christmas tree is more of a secular, rather than religious, symbol. Joshua Claybourn quotes the Supreme Court thusly:
"The Christmas tree, unlike the menorah, is not itself a religious symbol. Although Christmas trees once carried religious connotations, today they typify the secular celebration of Christmas." Allegheny v. American Civil Liberties Union Greater Pittsburgh Chapter, 492 U.S. 573, 109 S.Ct. 3086.
It does not represent a religious idea, but rather the idea of renewal that accompanied the winter solstice. One can associate Christian ideas with the tree, as Martin Luther did so long ago, but that does not make it inherently Christian. Indeed, I think of the entire Christmas holiday as more secular than not, though I guess my being Christian might have something to do with it. This idea is worth further exploring in the future, so expect more posts on the historical Christmas.

Update: Patrick Belton notes the strange correlations between Christmas Trees and Prostitution in Virginia.
Posted by Mark on December 03, 2003 at 11:31 PM .: link :.


End of This Day's Posts

Wednesday, November 12, 2003

The Iraqi Art Scene
Steve Mumford's latest Baghdad Journal is up, and it is, as usual, excellent. In it, he actually focuses on the burgeoning Iraqi art scene (How dare he? I've become so accustomed to his other observations that I was somewhat surprised to see him talking about art. Then I remembered that he is an artist and that his articles are published in an internet art magazine. Duh.) Instead of showcasing Mumford's art, as previous installments have done, this article exhibits the works of various Iraqi artists that Mumford was impressed with (and for good reason, at least according to my unrefined eyes). The artistic community is growing in Iraq, in no small part due to the newfound access they have to information from around the world...
Of the younger generation, Ahmed Al-Safi is a particularly talented painter and sculptor who's managed to make a living selling his art. He paints simple, almost crudely rendered figures reminiscent of the German Neo-Expressionists of the 1980s (whose work he immediately investigated on the web when I told him about them). Ahmed has a wonderful studio in the slummy but picturesque part of town near Tarea Square, where he has bronze-casting facilities.
Emphasis mine. Change is coming to the Iraqi art scene, and while they are now soaking up that which is newly available to them, I find myself eager to see what the Iraqis contribute back to the world art scene...
One widely repeated observation here is that abstraction was a convenient technique for a time when all narrative content was suspect. Everyone expects art to change with the passing of Saddam's regime, though at this point, no one I talked to is making any predictions about future trends in Iraqi art. I've seen no video art and practically no photography in Baghdad. Installation art is unknown. Indeed, few artists in Iraq have even heard of Andy Warhol. Now that communication with the rest of the world is starting to open up, Iraqi artists will discover just how large an ocean they're swimming in.
I'm not an artist, but I know what I like, and if the art that Mumford posted is any indication, I hope and believe we'll find that the Iraqis will be strong swimmers in the large ocean of art. More on this subject later...

Update: I just thought I'd pick one of my favorite paintings to display here...


Muayad Muhsin, oil on canvas, 2002

Mumford describes Muayad Muhsin as "a younger surrealist painter from Hilla" and I like this painting a lot. I don't know art, but have some general knowledge of the visual medium from film, and while it may be foolish to apply film theory to art, I think it might provide some insight. The cool colors suggest an aloof tranquility, a calmness, but the oblique angle produces a sense of visual irresolution and unresolved anxiety. It suggests tension, transition, and impending change. The end result is a feeling of calm, but tense and unstable, transition. It seems appropriate...
Posted by Mark on November 12, 2003 at 12:42 AM .: link :.


End of This Day's Posts

Sunday, November 02, 2003

Horror
Halloween has passed* but since horror is one of my favorite genres, I figured I'd list out some good examples of horror books & movies, because it's always fun to scare yourself witless. When it comes to film, horror is one of the more difficult genres to execute effectively and, as such, the genuinely great horror films are few and far between. What's left are a series of downright creepy, but flawed, films. Because of their flaws, many horror films are often overlooked and underrated, and these are the films I'd like to mention here. Books, on the other hand, tend to be overlooked and underrated as a medium. Horror books doubly so.

Film
I've never been a fan of the classic 1950s horror films like The Mummy, Dracula, or Frankenstein... They're not without their charm, but when it comes to the classics, I prefer their source material to the films. For classics, I would mention Halloween (1978, it started the lackluster "slasher" sub-genre, but it is an excellent film, particularly its soundtrack), Jaws (1975, another excellent soundtrack here, but there was plenty else that made people afraid to go back into the water again...), Psycho (1960, the sudden shifts and feints coupled with, again, a distinctive and effective soundtrack, make for a brutally effective film), Alien (1979, "In space, no one can hear you scream." Director Ridley Scott really knew how to turn the screws with this one), The Exorcist (1973, The power of Christ compels you... to wet yourself in despair whilst watching this film) and The Shining (1980, Kubrick's interpretation of King's masterwork is significantly different, but it is also one of the few examples of an adaptation that works well in its own right).

But those are all films we know and love. What about the ones we haven't seen? Director John Carpenter built an impressive string of neglected horror films throughout the 1980s and early 1990s (a pity that he has since lost his touch). Aside from the classic Halloween, Carpenter directed the 1982 remake of The Thing, which was brilliantly updated and downright creepy. It has its fill of scary moments, not the least of which is the cryptic and ambiguous ending. He followed that with Christine. Adapting the novel by Stephen King, Carpenter was able to make a silly story creepy with the sheer will of his technical mastery (not his best, but impressive nonetheless). His 1987 film Prince of Darkness was flawed but undeniably effective. Many have not heard of In the Mouth of Madness, but it has become one of my favorite horror films of the 1990s.

If you're not scared away by subtitles or foreign films, check out Dario Argento's seminal 1977 gorefest Suspiria, which boasts opening and ending scenes amongst the best in the genre. Argento's rival, Lucio Fulci, also has an impressive series of gory horror classics, such as the 1980 film The Gates of Hell. Both Argento and Fulci have an impressive body of work and are worth checking out if you don't mind them being in Italian...

The 1970's and early 1980's were an excellent period in horror filmmaking. Excluding the films already mentioned (a significant portion of the classics are from the 1970s), you may want to check out the 1980 movie The Changeling, an excellent ghost story, or perhaps the disturbing 1981 film The Incubus. And how could I write about horror movies without mentioning my beloved 1979 cheesy creepfest Phantasm. Other 70s flicks to check out: The Hills Have Eyes (1977), Dawn of the Dead (1978), Salem's Lot (a 1979 TV miniseries based on Stephen King's book), The Omen (1976), Carrie (1976), Blue Sunshine (1976, almost forgotten today), The Wicker Man (1973), The Legend of Hell House (1973, a personal favorite, adapted from a novel by Richard Matheson, who we'll get to in a moment), and of course we can't forget that lovable flesh-wearing cannibal, Leatherface, in The Texas Chainsaw Massacre (1974).

Ok, so I think I've inundated you with enough movies for now (hopefully many of which you've never heard of), so let's move on to books (naturally, I could go on and on just listing good horror flicks, but this is at least a good start).

Literature
My knowledge of horror literature is less extensive than my knowledge of horror film, but I have a fair base to work from. We all know the classics, Dracula, Frankenstein, and the works of Edgar Allan Poe, but there are many overlooked horror stories floating around as well.

M.R. James (1862-1936) is one of the originators of the modern Ghost Story, and his oeuvre contains several exemplary entries in the sub-genre. His works are public domain, so follow the link above for online versions... I especially enjoyed the creepy Count Magnus.

Shirley Jackson's The Haunting of Hill House is a classic that is rightly praised as one of the finest horror novels ever written.

Richard Matheson's brilliant I Am Legend is a study of isolation and grim irony that turns the traditional vampire story on its head. This might be one of the most influential novels you've never heard of, as there have been many derivatives, particularly in film.

H.P. Lovecraft is another fantastic short story author whose work has been tremendously influential to modern horror. His infamous Cthulhu Mythos and Necronomicon were ingenious creations, and many have seized on them and attempted to follow in his footsteps. Indeed, many even believe his fictional Necronomicon to be real!

You might have noticed Stephen King's name mentioned a few times already, and there is a reason so many of his books are turned into movies. I've never been a huge King fan, but The Shining is among the best horror novels I've read. I've always preferred Dean Koontz (sadly he has absolutely no good film adaptations), who wrote such notable horror staples as Phantoms, Midnight, and The Servants of Twilight. Both Koontz and King can be hit-or-miss, but when they're on, there's no one better.

Other books of note: Clive Barker's The Hellbound Heart (which was adapted into the 1987 film Hellraiser) is an excellent short read (about 120 pages), and some of his longer works, such as The Great and Secret Show and Imajica, are also good. F. Paul Wilson's The Keep is one of the few books that has ever truly scared me while reading it. I've always found William Peter Blatty's novel, The Exorcist, to be more effective than the movie (and that is saying a lot!). Brian Lumley's Necroscope series is an interesting take on the vampire legend, and his Titus Crow series builds on Lovecraft's Cthulhu Mythos nicely.

Well, there you have it. That should keep you busy for the next few years...

* One would think that this post should have been made last week, and one would be right, but then one would also not be too familiar with how we do things here at Kaedrin. Note that the best movies of 2001 is due sometime around mid-2004. Heh. This whole being timely with content thing is something I have always had difficulty with and need to work on, but that is another topic for another post...
Posted by Mark on November 02, 2003 at 07:51 PM .: link :.


End of This Day's Posts

Monday, October 20, 2003

Hindsight isn't Necessarily 20/20
It is conventional wisdom that hindsight is 20/20, but is that really accurate? I get the feeling that when people speak of clarity in hindsight, what they are really talking about is creeping determinism. They aren't really examining the varied and complex details of a scenario so much as they are rationalizing an outcome perceived to have been inevitable (since it has already happened, surely it must have been obvious). This is known in logic as "begging the question" or "circular logic."

In the creeping determinism sense, hindsight is liberally filtered to the point where only evidence that leads to the scenario's conclusion is seen. All other evidence is dismissed as inaccurate or irrelevant.

Which leads me to an excellent article by Adam Garfinkle called Foreign Policy Immaculately Conceived. In it, he argues:
The immaculate conception theory of U.S. foreign policy operates from three central premises. The first is that foreign policy decisions always involve one and only one major interest or principle at a time. The second is that it is always possible to know the direct and peripheral impact of crisis-driven decisions several months or years into the future. The third is that U.S. foreign policy decisions are always taken with all principals in agreement and are implemented down the line as those principals intend - in short, they are logically coherent.
When these premises are laid out in such a way, one can't help but see them for what they really are. And yet so much of what passes for commentary these days is based wholly upon this immaculate conception theory of U.S. foreign policy.

Case in point, the American liberation/occupation of Iraq is often portrayed as a failure. They say that we are not "winning the hearts and minds" of the Iraqis, or that we have "gone into the God business" and that "we want the Iraqis to love us for destroying their orchards too." (Never mind that this is emphatically not what we're doing, but I digress) These people are engaging in creeping determinism before the situation has even played out! They've started with a conclusion, that we have failed in Iraq, and they then collect any and all negative aspects of the occupation and proclaim this outcome inevitable (some perhaps hoping for a form of self-fulfilling prophecy).

But even this is hardly new. Jessica's Well points to a pair of magnificent historical examples. Do you remember that other time when we were mired in a quagmire, failing to win the hearts and minds of our occupied foes? The one in Europe, circa 1946? Yes, you know, the one that resulted in Europe's longest unbroken peaceful period since Charlemagne? These articles are amazingly familiar. Replace "Hitler" with "Saddam", "Nazis" with "Baathists", and "Germany" with "Iraq" and you'll see what I mean.

Naturally, since the overwhelmingly positive results of the US military occupation of Europe are generally acknowledged, these articles are pushed to the wayside, dismissed as irrelevant and forgotten forever (or until an intrepid blogger takes the initiative to post them). Success in Europe was by no means inevitable, both during and after the war, and in a certain respect, these articles are a great example of creeping determinism or Garfinkle's immaculate conception theory of U.S. foreign policy.

They're also an example of just how shortsighted pessimistic reporting on a lengthy process can be. As Garfinkle notes:
American presidents, who have to make the truly big decisions of U.S. foreign policy, must come to a judgment with incomplete information, often under stress and merciless time constraints, and frequently with their closest advisors painting one another in shades of disagreement. The choices are never between obviously good and obviously bad, but between greater and lesser sets of risks, greater and lesser prospects of danger. Banal as it sounds, we do well to remind ourselves from time to time that things really are not so simple, even when one's basic principles are clear and correct.
Indeed. Hindsight isn't necessarily 20/20, but it always purports to be.

Update 10.21.03 - I don't remember where I found this, but I had bookmarked it: That Was Then: Allen W. Dulles on the Occupation of Germany provides some more perspective on post-war Germany. He outlined many of the difficulties they faced and lamented, despite his obvious respect for those in charge, that "the problems inherent in the situation are almost too much for us." It's an excellent piece, so read the whole thing, as they say...
Posted by Mark on October 20, 2003 at 08:58 PM .: link :.


End of This Day's Posts

Wednesday, October 15, 2003

Style as Substance
Kill Bill: Volume 1 is one of those movies that I've been keeping track of for years. From the beginning, I wondered why Tarantino was choosing such material for his next film. The plot certainly isn't edgy. Uma Thurman plays The Bride, a woman who miraculously survives a bullet to the head on her wedding day (the groom was not so lucky). After an extended stay in a coma, she awakes and makes a list of five people to exact revenge upon. Then she goes and kills them. That's the plot.

And yet it's still a good film (not a great film, but good). The plot doesn't matter. Nor, really, do the characters. None of them are developed, or really likable. You root for the Bride, a textbook anti-hero, not because she's been wronged and is seeking revenge, but because she's such a badass. It is the style of the film that gets me, and like it or not, Tarantino is a master of style. The man knows how to manipulate the audience, and he is brutally unmerciful in this outing.

Let me rewind a bit. Do you remember the scene in Pulp Fiction where Vincent blows Marvin's head off by accident? Somehow, Tarantino is able to make that scene, and the ensuing events, funny. Not ha-ha funny, it's still black comedy, but funny nonetheless. You don't really know why you are laughing, but you are. And that is what this movie is like. It's like two hours of that one scene in Pulp Fiction.

Blood. Hundreds of gallons of it. Spraying, shooting, fountains of blood. The grisly murder rate in this film approaches triple digits. It's not for everyone. James Lileks says he had "no desire to see clever violence," and that is certainly understandable. These scenes are cold, merciless, and often disgusting, yet I found myself laughing. It's just a natural reaction when you see someone's head cut off and blood sprays out like a sprinkler. The gore is so over the top that it eventually ceases to be disgusting and takes on a blurry, surreal quality. Tarantino knows this works, but he's not content to leave it there.

This isn't an easy movie. It's not the roller coaster kung-fu action flick it's advertised as. It's difficult. Why? Because in those moments where the gore goes beyond the surreal, you still sense gravity in the violence. Tarantino grounds the violence just enough so that you laugh when it happens, but you're hit by an aftertaste of guilt a few seconds later. The blood may be completely over the top, but other details are what got me. The gurgling, the spasms, the screams. These things creeped the hell out of me. And on top of that, towards the end of the film, Tarantino keeps the film rocketing along at such a pace that your conscience can't keep up with the violence, and you know it. That is, I suppose, the essence of black comedy. It's not easy and it's not fun, but it makes you laugh anyway.

It's a difficult effect to convey, though. It's not as obvious as I'm describing. The black comedy is more subtle than you might think from reading this, so take it with a grain of salt.

Walter sums it up perfectly:
I think Tarantino wanted a 180 from Pulp Fiction's tone. I think he feinted high and then socked us in the gut. And it worked. Bold as hell, and he pulled it off. Now I'm sick to my stomach, but I respect the bastard.
I don't like this movie the way I like Tarantino's other work. I like it like I like Taxi Driver or Requiem for a Dream, which is to say, I don't like it, but it is so well done that I can't stop myself from watching it. The filmmakers, damn them, are so good at manipulating the elements of cinema that I'm spellbound even as I'm whimpering.

Kill Bill doesn't have the weight of Taxi Driver or Requiem and it's a flawed film, but it has its moments of brilliance too. There is a lot more to say about it, but I find myself at a loss. It is difficult to describe because what's important about this film isn't what happens, it's how it happens. It's style as substance, and Tarantino makes it work. Damn him.
Posted by Mark on October 15, 2003 at 08:29 PM .: link :.


End of This Day's Posts

Wednesday, September 24, 2003

Pynchon's 1984
I stopped by the bookstore tonight to pick up Quicksilver and while I was there, I happened upon the new edition of George Orwell's Nineteen Eighty-Four. This new edition contains a foreword by none other than Thomas Pynchon, vaunted author and recluse whose similarly prophetic novel, Gravity's Rainbow, has been giving me headaches for the past year or so... Pynchon was a good choice; he's able to place Orwell's novel, including its conception and composition, in its proper cultural and historical context while at the same time applying the humanistic themes of the novel to current times (without, I might add, succumbing to the temptation to list out what Orwell did or didn't "get right" - indeed, Pynchon even takes a humorous swipe at the tendency to do so - "Orwellian, dude!"). And to top that off, I'm a sucker for his style - whatever one he might be employing at the time (this time around it's his nonfiction style, with an alternating elegance and brazenness that works so well).

It's interesting reading, though I don't agree with everything he says. Towards the beginning of the foreword, he mentions this bit:
Now, those of fascistic disposition - or merely those among us who remain all too ready to justify any government action, whether right or wrong - will immediately point out that this is prewar thinking, and that the moment enemy bombs begin to fall on one's homeland, altering the landscape and producing casualties among friends and neighbours, all this sort of thing, really, becomes irrelevant, if not indeed subversive. With the homeland in danger, strong leadership and effective measures become of the essence, and if you want to call that fascism, very well, call it whatever you please, no one is likely to be listening, unless it's for the air raids to be over and the all clear to sound. But the unseemliness of an argument - let alone a prophecy - in the heat of some later emergency, does not necessarily make it wrong. One could certainly argue that Churchill's war cabinet had behaved on occasion no differently from a fascist regime, censoring news, controlling wages and prices, restricting travel, subordinating civil liberties to self-defined wartime necessity.
Though he doesn't clearly come out and say it and he is careful even with his historical example, Pynchon clearly fears for America's future in the wake of the "war on terror" and sees Orwell's work not only as a commentary on the perils of communism, but as a warning to democracy. As a general point, I can see that, but you could read Pynchon as believing that Orwell's point equally applies to the policies of, say, the current administration, which I think is a bit of a stretch. For one thing, our system of limited governance already has mechanisms for self-examination and public debate, not to mention checks and balances between certain key elements of the government. For another, our primary enemies now are no longer the forces of progress.

As Pynchon himself notes, Orwell failed to see religious fundamentalism as a threat, and today this is the main enemy we face. It isn't the progress of science and technology that threatens us (at least not in the way expected), but rather a reversion to fundamentalist religion, and Pynchon is hesitant to see that. He tends to be obsessed with the mechanics of paranoia and conspiracy when it comes to technology. This is exemplified by his attitude towards the internet:
...the internet, a development that promises social control on a scale those quaint old 20th-century tyrants with their goofy moustaches could only dream about.
As erich notes, perhaps someone should introduce Pynchon to the hacker subculture, where anarchists deface government and corporate websites, bored kids bring corporate websites to their knees with viruses or DDOS attacks, and bloggers aggregate and debate. Or perhaps our problem will be that with an increase in informational transparency, "Orwellian" scrutiny will to some extent become democratized; abuse of privacy will no longer be limited to corporations and states. As William Gibson notes:
"1984" remains one of the quickest and most succinct routes to the core realities of 1948. If you wish to know an era, study its most lucid nightmares. In the mirrors of our darkest fears, much will be revealed. But don't mistake those mirrors for road maps to the future, or even to the present.

We've missed the train to Oceania, and live today with stranger problems.
Stranger problems indeed. But Pynchon isn't all frowns, he actually ends on a note of hope regarding the appendix, which provides an explanation of Newspeak:
why end a novel as passionate, violent and dark as this one with what appears to be a scholarly appendix?

The answer may lie in simple grammar. From its first sentence, "The Principles of Newspeak" is written consistently in the past tense, as if to suggest some later piece of history, post-1984, in which Newspeak has become literally a thing of the past - as if in some way the anonymous author of this piece is by now free to discuss, critically and objectively, the political system of which Newspeak was, in its time, the essence. Moreover, it is our own pre-Newspeak English language that is being used to write the essay. Newspeak was supposed to have become general by 2050, and yet it appears that it did not last that long, let alone triumph, that the ancient humanistic ways of thinking inherent in standard English have persisted, survived, and ultimately prevailed, and that perhaps the social and moral order it speaks for has even, somehow, been restored.

... In its hints of restoration and redemption, perhaps "The Principles of Newspeak" serves as a way to brighten an otherwise bleakly pessimistic ending - sending us back out into the streets of our own dystopia whistling a slightly happier tune than the end of the story by itself would have warranted.
Overall, Pynchon's essay is excellent and thought-provoking, if a little paranoid. He tackles more than I have commented on, and he does so in affable style. A commentor at erich's site concludes:
Orwell, to his everlasting credit, saw clearly the threat posed by communism, and spoke out forcefully against it. Unfortunately, as Pynchon's new introduction reminds us, the same cannot be said for far too many on the Left, who remain incapable of making rational distinctions between our constitutional republic and the slavery over which we won a great triumph in the last century.
Indeed.

Update - Most of the text of Pynchon's essay can be found here.

Another Update - Rodney Welch notices that Pynchon's theory regarding the appendix appears to have been lifted by Guardian columnist Margaret Atwood. Dave Kipen comments that it's possible that both are paraphrasing an old idea, but he doubts it. Any Orwellians care to shed some light on the originality of the "happy ending" theory?

Another Update: More here.
Posted by Mark on September 24, 2003 at 12:40 AM .: link :.


End of This Day's Posts

Monday, September 08, 2003

My God! It's full of stars!
What Galileo Saw by Michael Benson : A great New Yorker article on the remarkable success of the Galileo probe. James Grimmelmann provides some fantastic commentary:
Launched fifteen years ago with technology that was a decade out of date at the time, Galileo discovered the first extraterrestrial ocean, holds the record for most flybys of planets and moons, pointed out a dual star system, and told us about nine more moons of Jupiter.

Galileo's story is the story of improvisational engineering at its best. When its main 134 KBps antenna failed to open, NASA engineers decided to have it send back images using its puny 10bps antenna. 10 bits per second! 10!

To fit images over that narrow a channel, they needed to teach Galileo some of the tricks we've learned about data compression in the last few decades. And to teach an old satellite new tricks, they needed to upgrade its entire software package. Considering that upgrading your OS rarely goes right here on Earth, pulling off a half-billion-mile remote install is pretty impressive.
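To put that 10 bits per second in perspective, here's a bit of back-of-the-envelope arithmetic. The image size and compression ratio below are illustrative assumptions on my part, not actual mission figures:

LINK_RATE_BPS = 10                  # bits per second, per the quoted figure

def days_to_send(bits, rate_bps=LINK_RATE_BPS):
    """Transmission time, in days, for a payload of the given size."""
    return bits / rate_bps / 86400  # 86,400 seconds in a day

# A hypothetical 800x800 image at 8 bits per pixel
raw_bits = 800 * 800 * 8

print(f"Uncompressed:     {days_to_send(raw_bits):.1f} days")       # ~5.9 days
print(f"10:1 compression: {days_to_send(raw_bits / 10):.1f} days")  # ~0.6 days

Nearly a week per picture without compression; you can see why teaching the old probe new compression tricks was worth the risk of a remote software upgrade.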
And the brilliance doesn't end there:
As if that wasn't enough hacker brilliance, design changes in the wake of the Challenger explosion completely ruled out the original idea of just sending Galileo out to Mars and slingshotting towards Jupiter. Instead, two Ed Harris characters at NASA figured out a triple bank shot -- a Venus flyby, followed by two Earth flybys two years apart -- to get it out to Jupiter. NASA has come in for an awful lot of criticism lately, but there are still some things they do amazingly well.
Score another one for NASA (while you're at it, give Grimmelmann a few points for the Ed Harris reference). Who says NASA can't do anything right anymore? Grimmelmann observes:
The Galileo story points out, I think, that the problem is not that NASA is messed-up, but that manned space flight is messed-up.
...
Manned spaceflight is, in the Ursula K. LeGuin sense, perverse. It's an act of pure conspicuous waste, like eating fifty hotdogs or memorizing ten thousand digits of pi. We do it precisely because it is difficult verging on insane.
Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" has inherent value, well beyond the simple science involved.

Such projects are not without their historical equivalents. There are all sorts of theories explaining why the ancient Egyptian pyramids were built, but none are as persuasive as the idea that they were built to unify Egypt's people and cultures. At the time, almost everything was being done on a local scale. With the possible exception of various irrigation efforts that linked together several small towns, there existed no project that would encompass the whole of Egypt. Yes, an insane amount of resources were expended, but the product was truly awe-inspiring, and still is today.

Those who built the pyramids were not slaves, as is commonly thought. They were mostly farmers from the tribes along the River Nile. They depended on the yearly cycle of flooding of the Nile to enrich their fields, and during the months that their fields were flooded, they were employed to build pyramids and temples. Why would a common farmer give his time and labor to pyramid construction? There were religious reasons, of course, and patriotic reasons as well... but there was something more. Building the pyramids created a certain sense of pride and community that had not existed before. Markings on pyramid casing stones describe those who built the pyramids. Tally marks and names of "gangs" (groups of workers) indicate a sense of pride in their workmanship and respect between workers. The camaraderie that resulted from working together on such a monumental project united tribes that once fought each other. Furthermore, the building of such an immense structure implied an intense concentration of people in a single area. This drove a need for large-scale food storage, among other social constructs. The Egyptian society that emerged from the Pyramid Age was much different from the one that preceded it (some claim that this was the emergence of the state as we now know it).

"What mattered was not the pyramid - it was the construction of the pyramid." If the pyramid was a machine for social progress, so too can the Space program be a catalyst for our own society.

Much like the pyramids, space travel is a testament to what the human race is capable of. Sure it allows us to do research we couldn't normally do, and we can launch satellites and space-based telescopes from the shuttle (much like pyramid workers were motivated by religion and a sense of duty to their Pharaoh), but the space program also serves to do much more. Look at the Columbia crew - men, women, white, black, Indian, Israeli - working together in a courageous endeavor, doing research for the benefit of mankind, traveling somewhere where few humans have been. It brings people together in a way few endeavors can, and it inspires the young and old alike. Human beings have always dared to "boldly go where no man has gone before." Where would we be without the courageous exploration of the past five hundred years? We should continue to celebrate this most noble of human spirits, should we not?

In the meantime, Galileo is nearing its end. On September 21st, around 3 p.m. EST, Galileo will be vaporized as it plummets toward Jupiter's atmosphere, sending back whatever data it still can. This planned destruction is the answer to an intriguing ethical dilemma.
In 1996, Galileo conducted the first of eight close flybys of Europa, producing breathtaking pictures of its surface, which suggested that the moon has an immense ocean hidden beneath its frozen crust. These images have led to vociferous scientific debate about the prospects for life there; as a result, NASA officials decided that it was necessary to avoid the possibility of seeding Europa with alien life-forms.
I had never really given thought to the idea that one of our space probes could "infect" another planet with our "alien" life-forms, though it does make perfect sense. Reaction to the decision among those who worked on Galileo is mixed, most recognizing the rationale, but not wanting to let go anyway (understandable, I guess)...

For more on the pyramids, check out this paper by Marcell Graeff. The information he referenced that I used in this article came primarily from Kurt Mendelssohn's book The Riddle of the Pyramids.

Update 9.25.03 - Steven Den Beste has posted an excellent piece on the Galileo mission and more...
Posted by Mark on September 08, 2003 at 11:06 PM .: link :.


End of This Day's Posts

Sunday, August 10, 2003

The King Lives!
Cult films are (generally) commercially unsuccessful movies that have limited appeal, but nevertheless attract a fiercely loyal following among fans over time. They often exhibit very strange characters, surreal settings, bizarre plotting, dark humor, and otherwise quirky and eccentric characteristics. These obscure films often cross genres (horror, sci-fi, fantasy, etc...) and are highly stylized, straying from conventional filmmaking techniques. Many are made by fiercely independent maverick filmmakers with a very low budget (read: cheesy), often showcasing the performance of talented newcomers.

Almost by definition, they're not popular at the time of their release, usually because they exist outside the box, eschewing typical narrative styles and other technical conventions. They achieve cult-film status later, developing a loyal fanbase over time, often through word-of-mouth recommendations (and, as we'll see, the actions of fans themselves). They elicit an eerie passion among their fans, who enthusiastically champion the films, leading to repeated public viewings (midnight movie showings are particularly prevalent in cult films), fan clubs, and active audience participation (e.g. dressing up as the oddball characters, mercilessly MST3King a film, or uh, jumping around in front of a camera with a broomstick). Cult movie followers often get together and argue over the mundane details and varied merits of their favorite films.

While these films are not broadly appealing, they are tremendously popular among certain narrow groups such as college students or independent film lovers. The internet has been immensely enabling in these respects, allowing movie geeks to locate one another and participate in the aforementioned laborious debates and arguments among other interactive fun.

One of the first examples of a cult movie is Tod Browning's 1932 film, Freaks, which was deliberately made to be "the strangest...most startling human story ever screened," and featured real-life freaks as circus performers. Perhaps the most infamous cult film is The Rocky Horror Picture Show, a 1975 film which inspired a craze of interactive, midnight movie screenings where members of the audience dress up as any of the garish and trashy characters and sing along with the music.

Sometimes a cult film will break out of its small fanbase and hit the mainstream. Frank Capra's classic It's a Wonderful Life didn't become popular until many years after its initial release. Repeated television showings during the Christmas season, however, have become a holiday tradition.

Stanley Kubrick's A Clockwork Orange and Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, Ridley Scott's Blade Runner, and Francis Ford Coppola's Apocalypse Now are all considered to be classics of modern cinema today, yet were all largely ignored by audiences at the time of their release.

Most cult films don't fare that well, though I can't say that bothers anyone. Their unpopularity is generally considered to be a part of their charm. They're strange beasts, these cult films, and their appeal is hard to pin down. They're often very flawed films in one way or another, yet they strike a passionate chord with specific audiences, and their flaws, strangely, become endearing to their fans. Outsiders just don't "get it".

This doesn't just apply to movies either. Many authors don't become popular until after their deaths (Kafka, Lovecraft), and many works are initially shunned but eventually pick up that devoted cult following through word of mouth and interactive fun and games. The Lord of the Rings was hardly a mass phenomenon when it was first published, but a small and extremely devoted fanbase grew, and it wasn't too long until people were creating role-playing games like Dungeons & Dragons based in part on Tolkien's enormously imaginative universe. D&D itself garnered a cult following of its own, as has role-playing in its own right. Lord of the Rings is now immensely popular, and its stunningly brilliant movie adaptations by cult filmmaker Peter Jackson (known for his disgusting work in Bad Taste, Meet the Feebles, and Dead Alive, among others) have met with both popular and critical success.

***

One of my favorite cult films is the cheesy 1979 horror flick, Phantasm. Several years ago, as I first began to explore internet communities, I realized that I needed a "handle," as it was called. I was watching said horror flick almost every day at the time, so I chose tallman as my handle, despite the fact that I do not resemble the nefarious Tall Man present in the Phantasm films (and that, uh, I'm not tall). It is inexplicably one of my favorite films of all time, and it is a dreadful movie. The effects are awful, the acting is often laughable, and the plot is incoherent at times (especially the ending). But I still love it; I cherish the creepy, surreal atmosphere, and to this day the Tall Man haunts my dreams (nightmares, actually). The bad effects and acting make me laugh, but there are some genuinely brilliant moments here too, and the unreality of the ending actually serves to heighten the tension, providing an eerie ambiguity that lasts long after viewing. The score is especially haunting, and the mortuary sets, when combined with director (and producer, and writer, and cinematographer, and editor, and did I mention that cult filmmakers are often fiercely independent?) Don Coscarelli's talented visual style, are stunningly effective.

Like many cult films, it has become a cinematically important film, sparking the rise of surreality in many horror films from the 1980's (most notably A Nightmare on Elm Street, which lifted the ending almost verbatim).

Another favorite cult hit is Sam "For Love of the Game" Raimi's (er, I guess that should be Sam "Spiderman" Raimi's) Evil Dead films, featuring the coolest B-Movie actor ever, Bruce Campbell. Raimi's inventive camera-work and Campbell's gloriously over-the-top performance make these films a joy to watch.

The reason I started this post, which has gotten completely out of hand as I've laboriously digressed into the nature of cult filmmaking (sorry 'bout that), was because of a new film, destined for cult success, in which Phantasm director Don Coscarelli and Evil Dead actor Bruce Campbell join forces.

The new film is called Bubba Ho-Tep, and it looks like a doozy. Based on a short story by cult author Joe R. Lansdale, it tells the "true" story of what became of Elvis Presley (he didn't die on a toilet) and JFK (he didn't die in Dallas). Oh, did I mention that JFK is now black (THEY dyed him that color; the conspiracy theorists should love that)? We find this unlikely duo in an East Texas rest home which has become the target of an evil Egyptian entity ("Some sorta... Bubba Ho-Tep," as Campbell's Elvis opines). Naturally, the two old coots aren't going to just let Bubba Ho-Tep run hog-wild through their peaceful nursing home, and so they rush forward on their walkers and their wheelchairs to save the day. It's got that mix of the absurd that just screams cult film.

The trailer is great, and it features some of those trademark Coscarelli visuals (which I never realized he had before, but he does. It's tempting to throw out the term Auteur, but I'm way too subjective when it comes to Coscarelli), music that sounds suspiciously like the Phantasm theme, and Campbell's typically cheeky delivery (including Elvis-fu, complete with cheesy sound effects). I can't wait to see this film. Alas, it doesn't look like it's coming to Philly very soon, but I'm hoping it will eventually make its way over here so that I can partake of it in all its B-Movie glory. The King lives!
Posted by Mark on August 10, 2003 at 11:08 AM .: link :.


End of This Day's Posts

Friday, August 08, 2003

Villainous Brits!
A few weeks ago, the regular weather guy on the radio was sick and a British meteorologist filled in. And damned if I didn't think it was the best weather forecast I'd ever heard! The report, which called for rain on a weekend in which I was traveling, turned out to be completely inaccurate, much to my surprise. I really shouldn't have been surprised, though. I know full well the limitations of meteorology, and weather reports can't be that accurate. Truth be told, I subconsciously placed a higher value on the weather report because it was delivered in a British accent. It's not his fault; he can predict the weather no better than anyone else in the world, but the British accent carries with it an intellectual stereotype; when I hear one, I automatically associate it with intelligence.

Which brings me to John Patterson's recent article in the Guardian in which he laments the inevitable placement of British characters and actors in the villainous roles (while all the cheeky Yanks get the heroic roles):
Meanwhile, in Hollywood and London, the movie version of the special relationship has long played itself out in like manner. Our cut-price actors come over and do their dirty work, as villains and baddies and psychopaths, even American ones, while the cream of their prohibitively expensive acting talent Concordes it over the pond to steal the lion's share of our heroic roles. Either way, we lose.
One might wonder why Patterson is so upset that American actors get the heroic parts in American movies, but even if you ignore that, Patterson is stretching it pretty thin.

As Steven Den Beste notes, this theory doesn't go too far in explaining James Bond or Spy Kids. Never mind that the Next Generation captain of the starship Enterprise was a Brit (playing a Frenchman, no less). Ian McKellen plays Gandalf; Ewan McGregor plays Obi Wan Kenobi. The list goes on and on.

All that aside, however, it is true that British actors and characters often do portray the villain. It may even be as lopsided as Patterson contends, but the notion that such a thing implies some sort of deeply-rooted American contempt for the British is a bit off.

As anyone familiar with film will tell you, the villain needs to be so much more than just vile, wicked or depraved to be convincing. A villainous dolt won't create any tension with the audience; you need someone with brains or nobility. Ever notice how educated villains are? Indeed, there seems to be a preponderance of doctors that become supervillains (Dr. Demento, Dr. Octopus, Dr. Doom, Dr. Evil, Dr. Frankenstein, Dr. No, Dr. Sardonicus, Dr. Strangelove, etc...) - does this reflect an antipathy towards doctors? The abundance of British villains is no more odd than the abundance of doctors. As my little episode with the weatherman shows, when Americans hear a British accent, they hear intelligence. (This also explains the Gladiator case in which Joaquin Phoenix, who was born in Puerto Rico, by the way, puts on a veiled British accent.)

The very best villains are the ones that are honorable, the ones with whom the audience can sympathize. Once again, the American assumption of British honor lends a certain depth and complexity to a character that is difficult to pull off otherwise. Who was the more engaging villain in X-Men, Magneto or Sabretooth? Obviously, the answer is Magneto, played superbly by British actor Ian McKellen. Having endured Nazi death camps as a child, he's not bent on domination of the world, he's attempting to avoid living through a second holocaust. He's not a megalomaniac, and his motivation strikes a chord with the audience. Sabretooth, on the other hand, is a hulking but pea-brained menace who contributes little to the conflict (much to the dismay of fans of the comic, in which Sabretooth is apparently quite shrewd).

Such characters are challenging. It's difficult to portray a villain as both evil and brilliant, sleazy and funny, moving and tragic. In fact, it is because of the complexity of this duality that villains are often the most interesting characters. That British actors are often chosen to do so is a testament to their capability and talent.

Some would attribute this to the training of the stage that is much less common in the U.S. British actors can do a daring and audacious performance while still fitting into an ensemble. It's also worth noting that many British actors are relatively unknown outside of the UK. Since they are capable of performing such a difficult role, and since they are unfamiliar to US audiences, it makes the films more interesting.

In the end, there's really very little that Patterson has to complain about, especially when he tries to port this issue over to politics. While a case may be made that there are a lot of British villains in movies (and there are plenty of villains that aren't), that doesn't mean there is anything malicious behind it; indeed, depending on how you look at it, it could be considered a compliment that British culture lends itself to the complexity and intelligence required for a good villain we all love to hate (and hate to love). [thanks to USS Clueless for the Guardian article]
Posted by Mark on August 08, 2003 at 09:36 AM .: link :.


End of This Day's Posts

Sunday, May 25, 2003

Security & Technology
The other day, I was looking around for some new information on Quicksilver (Neal Stephenson's new novel, a follow up to Cryptonomicon) and I came across Stephenson's web page. I like everything about that page, from the low-tech simplicity of its design, to the pleading tone of the subject matter (the "continuous partial attention" bit always gets me). At one point, he gives a summary of a talk he gave in Toronto a few years ago:
Basically I think that security measures of a purely technological nature, such as guns and crypto, are of real value, but that the great bulk of our security, at least in modern industrialized nations, derives from intangible factors having to do with the social fabric, which are poorly understood by just about everyone. If that is true, then those who wish to use the Internet as a tool for enhancing security, freedom, and other good things might wish to turn their efforts away from purely technical fixes and try to develop some understanding of just what the social fabric is, how it works, and how the Internet could enhance it. However this may conflict with the (absolutely reasonable and understandable) desire for privacy.
And that quote got me to thinking about technology and security, and how technology never really replaces human beings; it just makes certain tasks easier, quicker, and more efficient. There was a lot of talk about this sort of thing around the early 90s, when certain security experts were promoting the use of strong cryptography and digital agents that would choose what products we would buy and spend our money for us.

As it turns out, most of those security experts seem to be changing their mind. There are several reasons for this, chief among them fallibility and, quite frankly, a lack of demand. It is impossible to build an infallible system (at least, it's impossible to recognize that you have built such a system), but even if you had accomplished such a feat, what good would it be? A perfectly secure system is also a perfectly useless system. Besides that, you have human ignorance to contend with. How many of you actually encrypt your email? It sounds odd, but most people don't even notice the little yellow lock that comes up in their browser when they are using a secure site.
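For what it's worth, the machinery behind that little yellow lock isn't hard to poke at yourself. Here's a minimal sketch using Python's standard library that opens a TLS connection and inspects the server's certificate; the hostname is just an example, and this is only a peek at the handshake, not a security tool:

import socket
import ssl

def peek_certificate(host, port=443):
    # create_default_context() verifies the certificate against the system CA store
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return cert["subject"], cert["notAfter"]

subject, expires = peek_certificate("example.com")
print(subject)
print("Expires:", expires)

All of that negotiation happens silently on every "secure site" visit, which is rather the point: the crypto is there whether or not anyone notices the lock.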

Applying this to our military, there are some who advocate technology (specifically airpower) as a replacement for the grunt. The recent war in Iraq stands in stark contrast to these arguments, despite the fact that the civilian planners overruled the military's request for additional ground forces. In fact, Rumsfeld and his civilian advisors had wanted to send significantly fewer ground forces, because they believed that airpower could do virtually everything by itself. The only reason there were as many as there were was because General Franks fought long and hard for increased ground forces (being a good soldier, you never heard him complain, but I suspect there will come a time when you hear about this sort of thing in his memoirs).

None of which is to say that airpower or technology are not necessary, nor do I think that ground forces alone can win a modern war. The major lesson of this war is that we need to have balanced forces in order to respond with flexibility and depth to the varied and changing threats our country faces. Technology plays a large part in this, as it makes our forces more effective and more likely to succeed. But, to paraphrase a common argument, we need to keep in mind that weapons don't fight wars, soldiers do. While the technology we used provided us with a great deal of security, it's also true that the social fabric of our armed forces was undeniably important in the victory.

One thing Stephenson points to is an excerpt from a Sherlock Holmes novel in which Holmes argues:
...the lowest and vilest alleys in London do not present a more dreadful record of sin than does the smiling and beautiful country-side...The pressure of public opinion can do in the town what the law cannot accomplish...But look at these lonely houses, each in its own fields, filled for the most part with poor ignorant folk who know little of the law. Think of the deeds of hellish cruelty, the hidden wickedness which may go on, year in, year out, in such places, and none the wiser.
Once again, the war in Iraq provides us with a great example. Embedding reporters in our units was a controversial move, and there are several reasons the decision could have been made. One reason may very well have been that having reporters around while we fought the war may have made our troops behave better than they would have otherwise. So when we watch the reports on TV, all we see are the professional, honorable soldiers who bravely fought an enemy which was fighting dirty (because embedding reporters revealed that as well).

Communications technology made embedding reporters possible, but it was the complex social interactions that really made it work (well, to our benefit at least). We don't derive security straight from technology, we use it to bolster our already existing social constructs, and the further our technology progresses, the easier and more efficient security becomes.

Update 6.6.03 - Tacitus discusses some similar issues...
Posted by Mark on May 25, 2003 at 02:03 PM .: link :.


End of This Day's Posts

Sunday, May 11, 2003

To hit or not to hit, that is the question
Gambling is a strange vice. Anyone with a brain in their head knows the games are rigged in the Casino's favor, and anyone with a knowledge of Mathematics knows how thoroughly the odds are in the Casino's favor. But that doesn't stop people from dropping their paychecks in a few hours. I stopped by Atlantic City this weekend, and I played some blackjack. The swings are amazing. I only played for about an hour, but I am always fascinated by the others at the table and even my own reactions.

I don't play to win, rather, I don't expect to win, but I like to gamble. I like having a stack of chips in front of me, I like the sounds and the smells and the gaudy flashing lights (I like the deliberately structured chaos of the Casino). I allot myself a fixed budget for the night, and it usually adds up to approximately what I'd spend on a good night out. People watching isn't really my thing, but it's hard not to enjoy it at a Casino, and that's something I spend a lot of time doing. Some people have the strangest superstitions and beliefs, and it's fun to step back and observe them at work. Even though I know the statistical underpinnings of how gambling works at a Casino, I still find myself thinking the same superstitious stuff because it's only natural.

For instance, a lot of people think that if a player sitting at their table makes incorrect playing actions, it hurts their own odds. Statistically, this is not true, but when that guy sat down at third base and started hitting on his 16 when the dealer was showing a 5, you better believe a lot of people got upset. In reality, that moron's actions have just as much a chance of helping other players as hurting them, but that's no consolation to someone who lost a hundred bucks in the short time since that guy sat down. Similarly, many people have progressive betting strategies that are "guaranteed" to win. Except, you know, they don't actually work (unless they're based on counting, but that's another story).

The odds in AC for Blackjack give the House an edge of about 0.44%. That doesn't sound like much, but it's plenty for the Casino, because they would have an unfair advantage even if the odds were dead even. Don't forget, the Casino has deep pockets, and you don't. In order to take advantage of a prosperous swing in the game, you need to weather the House's streaks. If you're playing with $1000, you might be able to swing it, but don't forget, the Casino is playing with millions of dollars. They will break your bank if you spend enough time there, even if they didn't have the statistical advantage. That's why you get comps when you win. They're trying to keep you there so as to bring you closer to the statistical curve.
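To see how that small edge plus deep pockets plays out, here's a crude Monte Carlo sketch. It assumes flat $10 even-money bets with a fixed 0.44% house edge, which is a gross simplification of actual blackjack (no doubling, splitting, or 3:2 payouts), but it illustrates the grind:

import random

HOUSE_EDGE = 0.0044
P_WIN = (1 - HOUSE_EDGE) / 2       # chance of winning a flat even-money bet

def session(bankroll=1000, bet=10, hands=1000):
    for _ in range(hands):
        if bankroll < bet:
            break                  # busted out before the session ended
        bankroll += bet if random.random() < P_WIN else -bet
    return bankroll

random.seed(42)
results = [session() for _ in range(10000)]
print(f"Average ending bankroll: ${sum(results) / len(results):.0f}")
print(f"Sessions ending down: {sum(r < 1000 for r in results) / len(results):.0%}")

The average player gives up a few percent of the bankroll over a thousand hands, and well over half the sessions end in the red; the Casino, playing millions of such sessions, just collects the average.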

The only way you can really win at Blackjack is to have the luck of a quick streak and the willpower to stop while you're up (as I noted before, if you're up a lot, the Casino will do their best to keep you playing), but that's a fragile system - you can't count on that, though it will happen sometimes. The only way to consistently win at Blackjack is to count cards. That can give you an advantage of around 1% (more on certain hands, less on others) - depending on the House rules. This isn't Rain Man - you aren't keeping track of every card that comes out of the deck (rather, you're keeping a relative score of high value cards to low cards), and you don't get an automatic winning edge on every hand. Depending on the count, the dealer can still play consistently better than you - but the dealer can't double down or split, and they only get even money for Blackjack. That's where the advantage comes from.

Of course, you have to have a pretty big bankroll to compensate for the Casino's natural "deep pockets" advantage, and you'll need to spend hundreds of hours practicing at home. Blackjack is fast and you need to be able to keep a running tab of the high/low card ratio (and you need to do some other calculations to get the true count), all the while you must appear to be playing normally, talking with the other players, dealing with the deliberately designed chaotic distractions of the Casino and generally trying not to come off as someone who is intensely concentrating. No small feat.
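For the curious, a common system of the sort described above is the Hi-Lo count, and the bookkeeping itself is simple; the hard part is doing it flawlessly amid the chaos. Here's a minimal sketch (the card labels and the six-deck shoe are just illustrative assumptions):

# Hi-Lo: 2-6 count as +1, 7-9 as 0, and tens, faces, and aces as -1.
# The "true count" divides the running count by the decks still unseen.

HI_LO = {**{str(r): +1 for r in range(2, 7)},
         **{str(r): 0 for r in range(7, 10)},
         **{r: -1 for r in ("10", "J", "Q", "K", "A")}}

def running_count(cards_seen):
    return sum(HI_LO[c] for c in cards_seen)

def true_count(cards_seen, total_decks=6):
    decks_remaining = total_decks - len(cards_seen) / 52
    return running_count(cards_seen) / max(decks_remaining, 0.5)

seen = ["2", "5", "K", "6", "3", "A", "9", "4"]
print(running_count(seen))          # +3: more low cards have left the shoe
print(round(true_count(seen), 2))   # about 0.51, with roughly 5.85 decks left

A positive count means the shoe is rich in tens and aces, which is when the counter raises the bet; doing that arithmetic every few seconds while chatting with the dealer is the part that takes hundreds of hours of practice.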

I'm not sure if that'd take all the fun out of it, not to mention draw the Casino's attention to me (which can't be fun), but it would be an interesting talent to have, and it's a must if you want to win. At the very least, it's a good idea to get the basic strategy down. Do that and you'll be better than most of the people out there (even if you just memorize the Hard Totals table, you'll be in good shape).
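For reference, the hit-or-stand portion of that Hard Totals table boils down to just a few rules. Here's a stripped-down sketch that deliberately ignores doubling, splitting, and soft hands:

def hard_total_action(player_total, dealer_upcard):
    """dealer_upcard is 2-11, where 11 stands in for an ace."""
    if player_total >= 17:
        return "stand"
    if 13 <= player_total <= 16:
        return "stand" if 2 <= dealer_upcard <= 6 else "hit"
    if player_total == 12:
        return "stand" if 4 <= dealer_upcard <= 6 else "hit"
    return "hit"   # 11 or less: always take a card

print(hard_total_action(16, 5))    # stand -- unlike the guy at third base
print(hard_total_action(16, 10))   # hit
print(hard_total_action(12, 3))    # hit

Memorize even that much and you'll already be giving the House less than the table average does.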
Posted by Mark on May 11, 2003 at 09:12 PM .: link :.


End of This Day's Posts

Saturday, July 13, 2002

Chef Wars
Call Me Lenny by James Grimmelmann : Taco Bell is running a new ad called "Chef Wars" and it is an Iron Chef parody. The commercial is pathetic and James laments that Iron Chef is no longer considered to be a piece of elite culture. Essentially, Iron Chef is no longer cool because it has become so popular that even culturally bereft Taco Bell customers will understand the reference.

As a long time fan of Iron Chef, I suppose I can relate to James. Several years ago, a few drunk friends and I discovered Iron Chef one late night and fell in love with it. In the years that followed, it grew more and more popular, to the point where there was even a pointless American version (hosted by Bill Shatner) and a rather funny parody on Saturday Night Live. Seeing those things made it less fun to be an Iron Chef fan, and to a certain extent, I agree with that point. But in a different way, Iron Chef is just as cool as it ever was and, in my mind, a genuinely good show is well... good, no matter how popular it is.

As commentor Julia (at the bottom) notes, there are two main issues that James is hitting on:
  1. The watering down of concepts from 30 minutes to 30 seconds completely distorts and lessens the impact of the elements that make the original great.
  2. The idea that a cultural item becomes less "cool" when it goes from 1 million to 100 million consumers.
Certainly, there is truth in those statements, but that is not all that is at work here. Iron Chef is a great show, and will always be so. After a while, a piece of culture will lose its "new and exciting" flavour, but if the show is good, it's good. James gives away how uncool he really is when he admits that he's only seen 6 episodes or so. Isn't it just a sham then? A facade? A ruse? Of what use is the cool if you never really enjoy it?

I suppose it all comes down to exclusion. Things are cool, in part, because you are cool enough to recognize them as such. But if everyone is cool, what's the point? Which brings us to Malcolm Gladwell and his Coolhunt:
"In this sense, the third rule of cool fits perfectly into the second: the second rule says that cool cannot be manufactured, only observed, and the third says that it can only be observed by those who are themselves cool. And, of course, the first rule says that it cannot accurately be observed at all, because the act of discovering cool causes it to take flight, so if you add all three together they describe a closed loop, the hermenuetic circle of coolhunting, a phenomenon whereby not only can the uncool not see cool but cool cannot be even adequately described to them."
But is it cool to just recognize something as cool? James recognized Iron Chef as cool, but he didn't really enjoy it. So I guess that we should seek the cool, but not be fooled into thinking something is cool simply because it is going to be big one day...
Posted by Mark on July 13, 2002 at 02:19 PM .: link :.


End of This Day's Posts

Tuesday, October 09, 2001

The Fifty Nine Story Crisis
In 1978, William J. LeMessurier, one of the nation's leading structural engineers, received a phone call from an engineering student in New Jersey. The young man had been tasked with writing a paper about the unique design of the Citicorp tower in New York. The building's dramatic design was necessitated by the placement of a church. Rather than tear down the church, the designers, Hugh Stubbins and Bill LeMessurier, set their fifty-nine-story tower on four massive, nine-story-high stilts positioned at the center of each side rather than at each corner. This daring scheme allowed the designers to cantilever the building's corners, leaving room for the church beneath the northwest corner.

Thanks to the prodding of the student (whose name was lost in the swirl of subsequent events), LeMessurier discovered a subtle conceptual error in the design of the building's wind braces: they were unusually sensitive to certain diagonal winds known as quartering winds. This alone wasn't cause for worry, as the wind braces would have absorbed the extra load under normal circumstances. But the circumstances were not normal. A crucial change had been made during construction: the braces were fastened together with bolts instead of welds (welds being considered stronger than necessary and overly expensive), and the contractors had interpreted the New York building code in such a way as to exempt many of the tower's diagonal braces from load-bearing calculations, so they had used far too few bolts. Together, these changes multiplied the strain produced by quartering winds. Statistically, a storm severe enough to tear a joint apart could be expected about once every sixteen years (what meteorologists call a sixteen-year storm). This was alarmingly frequent. To further complicate matters, hurricane season was fast approaching.
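Just to put "alarmingly frequent" in perspective, here's the back-of-the-envelope math (the ten-year window below is an arbitrary example of mine, not a figure from the article):

    # A "sixteen-year storm" has roughly a 1-in-16 chance of hitting in any given year.
    annual_prob = 1.0 / 16
    years = 10
    # Chance of at least one such storm over the window.
    prob_at_least_one = 1 - (1 - annual_prob) ** years
    print(f"{annual_prob:.1%} per year, {prob_at_least_one:.0%} over {years} years")

Roughly a six percent chance every single year, and close to a coin flip over a decade - not the kind of odds you want hanging over midtown Manhattan.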

The potential for a complete catastrophic failure was there, and because the building was located in Manhattan, the danger applied to nearly the entire city. The fall of the Citicorp building would likely cause a domino effect, taking a devastating toll on New York.

The story of this oversight, though amazing, is dwarfed by the series of events that led to the building's eventual structural integrity. To avert disaster, LeMessurier quickly and bravely blew the whistle - on himself. LeMessurier and other experts immediately drew up a plan in which workers would reinforce the joints by welding heavy steel plates over them.

Astonishingly, just after Citicorp issued a bland and uninformative press release, all of the major newspapers in New York went on strike. This fortuitous turn of events allowed Citicorp to save face and avoid any potential embarrassment. Construction began immediately, with builders and welders working from 5 p.m. until 4 a.m. to apply the steel "band-aids" to the ailing joints. They built plywood boxes around the joints so as not to disturb the tenants, who remained largely oblivious to the seriousness of the problem.

Instead of lawsuits and public panic, the Citicorp crisis was met with efficient teamwork and a swift solution. In the end, LeMessurier's reputation was enhanced for his courageous honesty, and the story of Citicorp's building is now a textbook example of how to respond to a high-profile, potentially disastrous problem.

Most of this information came from a New Yorker article by Joe Morgenstern (published May 29, 1995). It's a fascinating story, and I found myself thinking about it during the tragedies of September 11. What if those towers had toppled over in Manhattan? Fortunately, the WTC towers were extremely well designed - they didn't even noticeably rock when the planes hit - and when they did come down, they collapsed in on themselves. They would still be standing today, too, if it weren't for the intense heat that weakened the steel supports.
Posted by Mark on October 09, 2001 at 08:04 AM .: link :.


End of This Day's Posts

Thursday, July 26, 2001

The Dune You'll Never See
Dune: The Movie You Will Never See by Alejandro Jodorowsky : The cult filmmaker's personal recollection of the failed production. The circumstances of Jodorowsky's planned 1970s production of Frank Herbert's novel Dune are inherently fascinating, if only because of the sheer creative power of the collaborators Jodorowsky was able to assemble. Pink Floyd offered to write the score at the peak of their creativity. Salvador Dali, Gloria Swanson, and Orson Welles were cast. Dan O'Bannon (fresh off of Dark Star) was hired to supervise special effects; illustrator Chris Foss to design spacecraft; H.R. Giger to design the world of Geidi Prime and the Harkonnens; artist Jean 'Moebius' Giraud drew thousands of sketches. The project eventually collapsed in 1977 and was subsequently passed on to Ridley Scott, and then to David Lynch, whose 1984 film was panned by audiences and critics alike.

Interestingly enough, this failed production has been surprisingly influential. "...the visual aspect of Star Wars strangely resembled our style. To make Alien, they called Moebius, Foss, Giger, O'Bannon, etc. The project signalled to Americans the possibility of making a big show of science-fiction films, outside of the scientific rigour of 2001: A Space Odyssey."

In reading his account of the failed production, it becomes readily apparent that Jodorowsky's Dune would only bear a slight resemblance to Herbert's novel. "I feel fervent admiration towards Herbert and at the same time conflict [...] I did everything to keep him away from the project... I had received a version of Dune and I wanted to transmit it: the myth had to abandon the literary form and become image..." In all fairness, this is not necessarily a bad thing, especially in the case of Dune, which many considered to be unfilmable (Lynch, it is said, tried to keep his story as close to the novel as possible - and look what happened there). Film and literature are two very different forms, and, as such, they use different tools to accomplish the same tasks. Movies must use a different "language" to express the same ideas.

I find the prospect of Jodorowsky's Dune fascinating, but I must also admit that I, like many others, would have been apprehensive about his vision. Would Jodorowsky's Dune have been able to live up to his ambition? Some think not:
Theory and retrospect are fine and in theory Jodorowsky's DUNE sounds too good to be true. But then again, anyone that reads his description and explanation of El Topo and then actually watches the thing is going to feel slightly conned. They might then come to the conclusion that Jodorowsky says lots, but means little.
Having seen El Topo, I can understand where this guy's coming from. I lack the ability to adequately describe the oddity, the disturbing phenomenon, that is El Topo. I can only say that it is the weirdest movie that I have ever seen (nay, experienced). But for all its disquieting peculiarity, I think it contains a certain raw power that really affects the viewer. It's that sort of thing, I think, that might have made Dune great.

In case you couldn't tell, Alejandro Jodorowsky is a strange, if fascinating, fellow. He wrote the script and soundtrack, handled direction, and starred in the previously mentioned El Topo, which was hailed by John Lennon as a masterpiece (thus securing Jodorowsky's cult status). His followup, The Holy Mountain, continued along the same lines of thought. It was at this point that the director took the opportunity to work on Dune, which, as we have already found out, was a failure. Nevertheless, Jodorowsky plunges on, still making his own brand of bizarre films. As he says at the end of his account of the Dune debacle, "I have triumphed because I have learned to fail."
Posted by Mark on July 26, 2001 at 09:38 PM .: link :.


End of This Day's Posts

Friday, March 30, 2001

Hard Drinkin' Lincoln
I attended a lecture at Villanova University last night which was quite interesting. The speaker was Mike Reiss, one of the writer/producers of the Simpsons (among various other stints at The Tonight Show with Johnny Carson and the ever-popular Alf). He doesn't work at the Simpsons as much as he used to, but still hangs around the offices occasionally. Some interesting tidbits* from the lecture:
  • On Maude Flanders' death: "The character just sucked. She sucked and the woman who voiced her wanted a raise... so we killed her."
  • On the rumored Simpsons Movie: "It's in the contract that a Simpsons movie must be written by Matt Groening himself." Apparently, Matt Groening does literally nothing with the show anymore, and he never has done much, so Mike said we shouldn't expect a movie anytime soon.
  • Since the Simpsons, he has had a few pet projects, one of which was two series of cartoons for the now defunct Icebox.com. The animated shorts were called "Hard Drinkin' Lincoln" and "Queer Duck". They were quite entertaining. (sorry, but I couldn't find any of them online)
  • In the Q & A, someone from the audience asked if the Simpson's writers (and the way they used to shock people in earlier episodes) were influenced by the Dada movement of the early 20th century. Mike laughed and said "We're just dirty".
  • Mike was one of the creators of Troy McClure; you might remember him from such movies as "The Contrabulous Fabtraption of Professor Horatio Hufnagel" and "'P' is for Psycho".
  • Mr. Smithers was originally black (observe the first few episodes closely, and you can see the "black" Smithers), but they thought having him be the servant of an old, rich, white guy could be offensive. So they made him white, gay, and in love with Mr. Burns.
  • Mr. Burns' character wasn't always supposed to be evil. The evil parts are based on Fox president Barry Diller.
  • How could they get away with [insert offensive antics here]? "Hey, we work for Fox."
  • Conan O'Brien is funny (even after a 16 hour workday).
There's lots more that I can't remember at the moment, but it was a good time and I enjoyed myself immensely. If you ever get a chance to see this guy speak, check him out.

* - I'm going from memory here, so some of the quotes might be a little off, but you get the gist of it.
Posted by Mark on March 30, 2001 at 01:40 PM .: link :.


End of This Day's Posts

Copyright © 1999 - 2012 by Mark Ciocco.