Every once in a while I'll have a series of posts which I think are very high quality and am really proud of. But then some time passes, and I write some more, and the good stuff eventually gets pushed off the main page to languish in the obscurity of the archives. Taking my cue from some other bloggers, I've decided to collect some of my better posts here on this page in the hopes that they'll get some more exposure.
Sunday, March 14, 2010
Remix Culture and Soviet Montage Theory
A video mashup of The Beastie Boys' popular and amusing Sabotage video with scenes from Battlestar Galactica has been making the rounds recently. It's well done, but a little on the disposable side of remix culture. The video led Sonny Bunch to question "remix culture":
It’s quite good. But, ultimately, what’s the point?

These are good questions, and I'm not surprised that the BSG Sabotage video prompted them. The implication of Sonny's post is that he thinks it is an unoriginal waste of talent (he may be playing a bit of devil's advocate here, but I'm willing to play along because these are interesting questions and because it will give me a chance to pedantically lecture about film history later in this post!). In the comments, Julian Sanchez makes a good point (based on a video he produced earlier that was referenced by someone else in the comment thread), which I'll expand on later in this post:
First, the argument I’m making in that video is precisely that exclusive focus on the originality of the contribution misses the value in the activity itself. The vast majority of individual and collective cultural creation practiced by ordinary people is minimally “original” and unlikely to yield any final product of wide appeal or enduring value. I’m thinking of, e.g., people singing karaoke, playing in a garage band, drawing, building models, making silly YouTube videos, improvising freestyle poetry, whatever. What I’m positing is that there’s an intrinsic value to having a culture where people don’t simply get together to consume professionally produced songs and movies, but also routinely participate in cultural creation. And the value of that kind of cultural practice doesn’t depend on the stuff they create being particularly awe-inspiring.

To which Sonny responds:
I’m actually entirely with you on the skill that it takes to produce a video like the Brooklyn hipsters did — I have no talent for lighting, camera movements, etc. I know how hard it is to edit together something like that, let alone shoot it in an aesthetically pleasing manner. That’s one of the reasons I find the final product so depressing, however: An impressive amount of skill and talent has gone into creating something that is not just unoriginal but, in a way, anti-original. These are people who are so devoid of originality that they define themselves not only by copying a video that they’ve seen before but by copying the very personalities of characters that they’ve seen before.

Another good point, but I think Sonny is missing something here. The talents of the BSG Sabotage editor or the Brooklyn hipsters are certainly admirable, but while we can speculate, we don't necessarily know their motivations. About 10 years ago, a friend and amateur filmmaker showed me a video one of his friends had produced. It took scenes from Star Wars and Star Trek II: The Wrath of Khan and recut them so it looked like the Millennium Falcon was fighting the Enterprise. It would show Han Solo shooting, then cut to the Enterprise being hit. Shatner would exclaim "Fire!" and then it would cut to a blast hitting the Millennium Falcon. And so on. Another video from the same guy took the musical number George Lucas had added to Return of the Jedi in the Special Edition, laid Wu-Tang Clan in as the soundtrack, then re-edited the video elements so everything matched up.
These videos sound fun, but not particularly original or even special in this day and age. However, these videos were made ten to fifteen years ago. I was watching them on a VHS(!) and the person making the edits was using analog techniques and equipment. It turns out that these videos were how he honed his craft before he officially got a job as an editor in Hollywood. I'm sure there were tons of other videos, probably much less impressive, that he had created before the ones I'm referencing. Now, I'm not saying that the BSG Sabotage editor or the Brooklyn Hipsters are angling for professional filmmaking jobs, but it's quite possible that they are at least exploring their own possibilities. I would also bet that these people have been making videos like this (though probably much less sophisticated) since they were kids. The only big difference now is that technology has enabled them to make a slicker experience and, more importantly, to distribute it widely.
It's also worth noting that this sort of thing is not without historical precedent. Indeed, the history of editing and montage is filled with this sort of thing. In the 1910s and 1920s, Russian filmmaker Lev Kuleshov conducted a series of famous experiments that helped demonstrate the role of editing in films. In these experiments, he would show a man with an expressionless face, then cut to various other shots. In one example, he showed the expressionless face, then cut to a bowl of soup. When prompted, audiences would report that the man was hungry. Kuleshov then took the exact same footage of the expressionless face and cut to a pretty girl. This time, audiences reported that the man was in love. Another experiment alternated between the expressionless face and a coffin, a juxtaposition that led audiences to believe that the man was stricken with grief. This phenomenon has become known as the Kuleshov Effect.
For the current discussion, one notable aspect of these experiments is that Kuleshov was working entirely from pre-existing material. And this sort of thing was not uncommon, either. At the time, there was a shortage of raw film stock in Russia. Filmmakers had to make do with what they had, and often spent their time re-cutting existing material, which led to what's now called Soviet Montage Theory. When D.W. Griffith's Intolerance, which used advanced editing techniques (it featured a series of cross-cut narratives which eventually converged in the last reel), opened in Russia in 1919, it quickly became very popular. The Russian film community saw this as a validation and popularization of their theories and also as an opportunity. Russian critics and filmmakers were impressed by the film's technical qualities, but dismissed the story as "bourgeois", claiming that it failed to resolve issues of class conflict, and so on. So, not having much raw film stock of their own, they took to playing with Griffith's film, re-editing certain sections of the film to make it more "agitational" and revolutionary.
The extent to which this happened is a bit unclear, and certainly public exhibitions were not as dramatically altered as I'm making it out to be. However, there are Soviet versions of the movie that contained small edits and a newly filmed prologue. This was done to "sharpen the class conflict" and "anti-exploitation" aspects of the film, while still attempting to respect the author's original intentions. This was part of a larger trend of adding Soviet propaganda to pre-existing works of art, and given the ideals of socialism, it makes sense. (The preceding is a simplification of history, of course... see this chapter from Inside the Film Factory for a more detailed discussion of Intolerance and its impact on Russian cinema.) In the Russian film world, things really began to take off with Sergei Eisenstein and films like Battleship Potemkin. Watch that film today, and you'll be struck by how modern-feeling the editing is, especially during the famous Odessa Steps sequence (which you'll also recognize if you've ever seen Brian De Palma's "homage" in The Untouchables).
Now, I'm not really suggesting that the woman who produced BSG Sabotage is going to be the next Eisenstein, merely that the act of cutting together pre-existing footage is not necessarily a sad waste of talent. I've drastically simplified the history of Soviet Montage Theory above, but there are parallels between Soviet filmmakers then and YouTube videomakers today. Due to limited resources and knowledge, they began experimenting with pre-existing footage. They learned from the experience and went on to grander modifications of larger works of art (Griffith's Intolerance). This eventually culminated in original works of art, like those produced by Eisenstein.
Now, YouTube videomakers haven't quite made that expressive leap yet, but it's only been a few years. It's going to take time, and obviously editing and montage are already well established features of film, so innovation won't necessarily come from that direction. But that doesn't mean that nothing of value can emerge from this sort of thing, nor does messing around with videos on YouTube limit these young artists to film. While Roger Ebert's criticisms are valid, more and more, I'm seeing interactivity as the unexplored territory of art. Video games like Heavy Rain are an interesting experience and hint at something along these lines, but they are still severely limited in many ways (in other words, Ebert is probably right when it comes to that game). It will take a lot of experimentation to get to a point where maybe Ebert would be wrong (if it's even possible at all). Learning about the visual medium of film by editing together videos of pre-existing material would be an essential step in the process. Improving the technology with which to do so is also an important step. And so on.
To return to the BSG Sabotage video for a moment, I think it's worth noting the origins of that video. The video is clearly having fun by juxtaposing different genres and mediums (it is by no means the best or even a great example of this sort of thing, but it's still there. For a better example of something built entirely from pre-existing works, see Shining.). Battlestar Galactica was a popular science fiction series, beloved by many, and this video comments on the series slightly by setting the whole thing to an unconventional music choice (though given the recent Star Trek reboot's use of the same song, I have to wonder what the deal is with SF and Sabotage). Ironically, even the "original" Beastie Boys video was nothing more than a pastiche of 70s cop television shows. While I'm no expert, the music on Ill Communication, in general, has a very 70s feel to it. I suppose you could say that association only exists because of the Sabotage video itself, but even other songs on that album have that feel - for one example, take Sabrosa. Indeed, the Beastie Boys are themselves known for this sort of appropriation of pre-existing work. Their album Paul's Boutique famously contains literally hundreds of samples and remixes of popular music. I'm not sure how they got away with some of that stuff, but I suppose this happened before getting sued for sampling was common. Nowadays, in order to get away with something like Paul's Boutique, you'll need to have deep pockets, which sorta defeats the purpose of using a sample in the first place. After all, samples are used in the absence of resources, not just because of a lack of originality (though I guess that's part of it). In 2004, Nate Harrison put together this exceptional video explaining how a six-second drum beat (known as the Amen Break) exploded into its own sub-culture:
There is certainly some repetition here, and maybe some lack of originality, but I don't find this sort of thing "sad". To be honest, I've never been a big fan of hip hop music, but I can't deny the impact it's had on our culture and all of our music. As I write this post, I'm listening to Danger Mouse's The Grey Album:
It uses an a cappella version of rapper Jay-Z's The Black Album and couples it with instrumentals created from a multitude of unauthorized samples from The Beatles' LP The Beatles (more commonly known as The White Album). The Grey Album gained notoriety due to the response by EMI in attempting to halt its distribution.

I'm not familiar with Jay-Z's album and I'm probably less familiar with The White Album than I should be, but I have to admit that this combination and the artistry with which the two seemingly incompatible works are combined into one cohesive whole is impressive. Despite the lack of an official release (that would have made Danger Mouse money), The Grey Album made many best of the year (and best of the decade) lists. I see some parallels between the 1980s and 1990s use of samples, remixes, and mashups, and what was happening in Russian film in the 1910s and 1920s. There is a pattern worth noticing here: New technology enables artists to play with existing art, then apply their learnings to something more original later. Again, I don't think that the BSG Sabotage video is particularly groundbreaking, but that doesn't mean that the entire remix culture is worthless. I'm willing to bet that remix culture will eventually contribute towards something much more original than BSG Sabotage...
Incidentally, the director of the original Beastie Boys Sabotage video? Spike Jonze, who would go on to direct movies like Being John Malkovich, Adaptation., and Where the Wild Things Are. I think we'll see some parallels between the oft-maligned music video directors, who started to emerge in the film world in the 1990s, and YouTube videomakers. At some point in the near future, we're going to see film directors coming from the world of short-form internet videos. Will this be a good thing? I'm sure there are lots of people who hate the music video aesthetic in film, but it's hard to really be that upset that people like David Fincher and Spike Jonze are making movies these days. I doubt YouTubers will have a more popular style, and I don't think they'll be dominant or anything, but I think they will arrive. Or maybe YouTube videomakers will branch out into some other medium or create something entirely new (as I mentioned earlier, there's a lot of room for innovation in the interactive realm). In all honesty, I don't really know where remix culture is going, but maybe that's why I like it. I'm looking forward to seeing where it leads.
Posted by Mark on March 14, 2010 at 02:18 PM .: link :.
Sunday, February 14, 2010
Best Films of 2009
As of right now, I've seen 78 movies that were released in 2009. This is probably less than a lot of critics, but more than most folks. Overall, I had a much better feeling about this year than I had in the past couple years. I had a really difficult time with my 2008 list (which I'm actually pretty happy with now, after a year of reflection), but here in 2009, things came together pretty easily. I had 9 movies right away and the 10th movie came when I finally caught up to a movie I knew I would like.
As always, lists like this are inherently subjective and I know that gets on some people's nerves. Both from a "you're stupid because you don't like the same movies I do" perspective as well as the "lists are inherently evil" argument. Indeed, due to this year also marking the end of the decade, the multitude of best of the decade lists has also prompted an increase in the typical backlash of anti-list sentiment. This post covers the usual complaints about lists: they're lazy criticism and basically represent filthy linkbait whoring. There's obviously more to it than that (read the full post). He makes some good points and there are certainly a lot of crappy lists out there (hey, here's one!), but on the other hand, who the hell cares what he thinks? I like lists. Apparently Americans Love Lists (and you know who doesn't like lists? Joseph Stalin!) So without further ado:
Top 10 Movies of 2009
* In roughly reverse order
* In alphabetical order
But still worthwhile, in their own way. Presented without comment and in no particular order:
Despite the fact that I've seen 78 movies this year (and that this post features 30+ of my favorites), there were a few that got away... mostly due to limited releases, though a few of the flicks listed below didn't interest me as much when they were released as they did when I heard more about them. Unlike last year, I'm not really expecting any of these to break into the top 10, though I guess there's always a chance. Anyway, in no particular order:
Posted by Mark on February 14, 2010 at 06:26 PM .: link :.
Sunday, December 13, 2009
Visual Literacy and Rembrandt's J'accuse
Perhaps the most fascinating film I saw at the 18½ Philadelphia Film Festival was Rembrandt's J'accuse. It's a documentary where British director Peter Greenaway deconstructs Rembrandt's most famous painting: Night Watch. It's arguably the 4th most celebrated painting in art history (preceded only by the Mona Lisa, The Last Supper, and the ceiling of the Sistine Chapel...) and Greenaway believes it's also an accusation of murder. The movie plays like a forensic detective story as Greenaway analyzes the painting from top to bottom. It's an interesting topic for a documentary, though I think the film ultimately falters a bit in its investigation (either that, or Greenaway is trying to do something completely different).
(Note, you can click on the images below for a higher resolution image.)
Greenaway began his career as a painter and he contends that most people are visually illiterate, which is an interesting point. We really do live in a text-based culture. Our education system encourages textual learning over visuals, from the alphabet to vocabulary and reading skills. The proportion of time spent "reading paintings as they do text" is minute (if it happens at all). As such, our ability to analyze visual art forms like paintings is ill-informed and impoverished. Greenaway even takes the opportunity to rag on the state of modern cinema (which is a whole other discussion, as sometimes even bad movies are visually well constructed, but I digress). In any case, I do think Greenaway has a point here. Our culture is awash in visual information - television, movies, photography, etc... - and yet, we spend very little time questioning the veracity of what we're shown. They say that a picture is worth a thousand words, which is really just a way of saying that pictures can easily convey massive amounts of information. Pictures are inherently trustworthy and persuasive, but this can, in itself, cause issues. Malcolm Gladwell examined this in his essay, The Picture Problem:
You can build a high-tech camera, capable of taking pictures in the middle of the night, in other words, but the system works only if the camera is pointed in the right place, and even then the pictures are not self-explanatory. They need to be interpreted, and the human task of interpretation is often a bigger obstacle than the technical task of picture-taking. ... pictures promise to clarify but often confuse. ... Is it possible that we place too much faith in pictures?

Gladwell is, of course, casting suspicion on images, but he's actually making many of the same points as Greenaway. What Gladwell is really saying is that human beings are visually illiterate. As Greenaway notes towards the beginning of the film, is what we see really what we see? Or do we only see what we want to see? Both Gladwell and Greenaway seem to agree that interpretation is key (though Gladwell might be a bit more pessimistic about the feasibility of doing so). Though this concept is not explicitly referenced later in the film, I do believe it is essential to understanding the film.
One of the first clues that Greenaway examines is the public nature of Rembrandt's painting. For the most part, public museums didn't start appearing until the mid 19th century. The Night Watch, by contrast, was on public display from day one (1642). In a time when paintings were private luxuries, usually viewed only by the rich and those who commissioned the paintings, the Night Watch was viewed by all. In a lot of ways, the painting is unusual and prompts questions, most of which don't seem to have any sort of satisfactory answers. This leads to all sorts of speculation and theories about the motives behind the painting and what it really depicts. One way to look at it is to view it as an accusation. An indictment of conspiracy. Greenaway starts with this idea and proceeds to examine 34 interconnected mysteries about the painting. The mysteries all serve to illuminate one thing: The content of the painting. What is it about? Who are the players? What is the accusation?
I will not go through all 34 mysteries, but as an example, the first mystery is about the Dutch Militia. At the time of the painting, there was a century-long Dutch tradition of the group military portrait. The Dutch had been involved in a long, drawn-out guerrilla war with the Spanish. Local militias were formed all throughout the country to protect their towns from their enemies. These local companies were made up of regular citizens and volunteers, many of them important local figures, and they liked to have themselves painted, usually in uniform and in a powerful light to inspire solidarity and confidence. As the war wound down, these militias became less about the military and more about politics and power. It was a prestigious thing to be in a militia and they became more of a gentleman's club than a military organization. In the Night Watch, Rembrandt chose to break many of the traditions associated with the common Dutch military portrait. Many of the later mysteries examine these differences in great detail.
After seeing the movie I was struck by numerous things. First, for a filmmaker ostensibly crusading against visual illiteracy, I find it strange that Greenaway has chosen to present his argument as a gigantic wall of text. He narrates the entire film. Occasionally, he'll cut to "reenactments": scenes from his previous film, a fictional retelling of Rembrandt's painting, but even those consist primarily of characters spouting dialogue (these scenes rarely provide insight, though it's nice to break up the narration with something a little more theatrical).
Indeed, the grand majority of the mysteries are concerned with context (i.e. the cultural and historical traditions, the timing of the painting, who commissioned the painting, etc...). There is a concept from communication theory called exformation that I think is relevant here.
Effective communication depends on a shared body of knowledge between the persons communicating. In using words, sounds and gestures the speaker has deliberately thrown away a huge body of information, though it remains implied. This shared context is called exformation.

Wikipedia also has an excellent anecdotal example of the concept in action:
In 1862 the author Victor Hugo wrote to his publisher asking how his most recent book, Les Miserables, was getting on. Hugo just wrote “?” in his message, to which his publisher replied “!”, to indicate it was selling well. This exchange of messages would have no meaning to a third party because the shared context is unique to those taking part in it. The amount of information (a single character) was extremely small, and yet because of exformation a meaning is clearly conveyed.

Similarly, when Rembrandt painted the Night Watch and it was put on display, most of the viewers knew the subjects in the painting and the circumstances in which it was painted. As modern viewers, we do not have any of that shared knowledge. In order to understand the visual of The Night Watch, one must first understand the context of the painting, something that is primarily established through text. For example, one of the mysteries of the painting has to do with the lighting. Rembrandt was one of the pioneers of artificial lighting in paintings, and this was the result of improvements to technology of the day. There were apparently big improvements in the use of candles and mirrors, and so Rembrandt enjoyed playing with lighting, making the painting seem almost theatrical. As modern viewers, this sort of playful use of lighting isn't special - it's something we've seen a million times before and in a million other contexts. In Rembrandt's time, it was different. It called attention to itself and caused much speculation. Modern audiences thus need to be informed of this, and again, Greenaway accomplishes this mostly through the use of text.
To be sure, there are some interesting visualization techniques that Greenaway employs when talking about specific aspects of the painting. For example, when discussing the aforementioned use of lighting, Greenaway does his own manipulation, exaggerating the lighting in the painting to underline his point:
Unfortunately, these are not used as often as I would have hoped, nor are they always necessary or enlightening, and indeed there are numerous distractions throughout. For instance, the frame is often composed of several overlapping and moving boxes. Sometimes this is used well, but it often feels visually overwhelming. Indeed, the audio is sometimes also overwhelming, with Greenaway's narration being overlaid on top of music and sometimes even a woman's voice reciting the names of famous people who have seen Night Watch (the inclusion of which has always confused me). I'm sure it's challenging to make a movie about a painting without just putting up a static shot of the painting (and that's certainly not desirable), but does the screen need to be so busy? The visual components of the film seem to take a back seat to the textual elements... Interestingly, this is a film that works a lot better on the small screen, where it's not nearly as overwhelming as it was in the theater.
Furthermore, the text presented to us is so dense that it can be hard to follow at times. This is at least partially due to the massive amount of exformation, unfamiliar European names, different cultural traditions, etc... There are 34 people depicted in the painting (plus a dog!), and it can be tough to keep track of who is who. I suppose I should not be surprised that someone obsessed with visual literacy is not a master writer, but perhaps there is something else going on here...
Next, I was struck by the inclusion of Greenaway's face, which is often positioned in a box right in the center of the frame. Why do that? Why is he calling so much attention to himself? My first inclination is that it's a breathtakingly arrogant strategy. Also, the sound of his voice (sometimes overly deliberate pronunciation mixed with a stereotypical European accent) lends an impression of arrogance and pretentiousness. I think that may still be part of it, but again, there is more going on here.
Look at me!
There are many types of documentary films. The most common form of documentary is referred to as Direct Address (also known as Expositional Mode). In such a documentary, the viewer is directly acknowledged, usually through narration and voice-overs. There is very little ambiguity and it is pretty obvious how you're expected to interpret these types of films. Many television and news programs use this style, to varying degrees of success. Ken Burns' famous Civil War and Baseball series use this format eloquently, but most traditional propaganda films also fall into this category. The disembodied nature of a voice-over lends an air of authority and even omniscience to a film's subject matter (this type of voice-over is often referred to as "Voice of God" narration). As such, these films are open to abuse through manipulative rhetoric and social propaganda.
By contrast, Reflexive Documentaries use many devices to acknowledge the filmmaker's presence, perspective, and selectivity in constructing the film. It is thought that films like this are much more honest about their subjectivity, and thus provide a much greater service to the audience.
An excellent example of a Reflexive documentary is Errol Morris' brilliant film, The Thin Blue Line. The film examines the "truth" around the murder of a Dallas policeman. The use of colored lighting throughout the film eventually correlates with who is innocent or guilty, and Morris is also quite manipulative through his use of editing - deconstructing and reconstructing the case to demonstrate just how problematic finding the truth can be. His use of framing calls attention to itself, daring the audience to question the intents of the filmmakers. The use of interviews in conjunction with editing is carefully structured to demonstrate the subjectivity of the film and its subjects. As you watch the movie, it becomes quite clear that Morris is toying with you, the viewer, and that he wants you to be critical of the "truth" he is presenting.
Ironically, a documentary becomes more objective when it acknowledges its own biases and agenda. In other words, a documentary becomes more objective when it admits its own subjectivity.
Greenaway could easily have employed a direct address narration with this film, but he does not. Instead, he conspicuously inserts himself right into the middle of the frame. Indeed, later in the film, Greenaway appears dressed in a ridiculous getup more suited to appear within the painting than in the movie. It's almost like he's daring us to question this visual choice. Why?
Perhaps because of the third thing that struck me - Greenaway is the only narrator in the film. Most documentaries feature many talking heads, experts and historians, and even some contrary opinions, among other expositional techniques. This film does not. Why? Could it be that Greenaway's story is complete bullshit? After all, his story is delivered in textual form. With his visuals, Greenaway is emphasizing his own subjectivity. A cursory glance around the internet (hardly a comprehensive search, but still) reveals that Greenaway appears to be the only one who subscribes to this theory of murder and accusation.
So I'm left with something of a dilemma. This movie is an impressive bit of speculation and interpretation, but I have no idea if it's true or not. The visual elements of the film seem to emphasize that it is an emphatically subjective interpretation of the painting, but that this sort of speculation on the visual composition is still important, and that we should do more of this sort of thing (something I would agree with).
Or maybe I'm reading way too much into the movie and he employs so much text simply because he thinks we're visually illiterate morons. At this point, I really don't know how to rate this film; I'm having a lot of trouble gauging how much I enjoyed it. Upon first viewing it in the theater, I have to say that I didn't like it very much. And yet, it still fascinated me, to the point where I started writing this post and rewatching the film to make sure my interpretation fit. Indeed, as previously mentioned, I found it much more watchable on the small screen. If this post at all interests you, I suggest checking it out. It's actually available on Netflix's Watch Instantly feature (and thus can be viewed through a computer, a PS3 or Xbox, or any number of other Netflix streaming ready boxes).
More screenshots and comments in the extended entry...
Update: More on Visual Literacy (in response to comments in this post)
This is the title screen of the film, and it's one example of the sensory overload that Greenaway employs. The building in the background is where the Night Watch now resides (the Rijksmuseum in Amsterdam). The shot is taken from far away, with many things in the foreground though, including a police car with flashing lights. Given the murder-mystery nature of the film, that part makes symbolic sense. Making less sense is the additional police car inset on the right of the screen (it's harder to see in a static screenshot, but that box is filmed separately, and apparently during the day, so the lighting is different. In the movie, that box actually scrolls across the screen.). Inset on the right is a miniature version of the title screen. I have no idea what purpose that serves. And scrolling from right to left across the bottom of the screen is a list of signatures. These names are the aforementioned famous people who have publicly visited the Night Watch, and they are also being read by a female voice (again, I have no real idea why this is being done, as it only serves to add to the disorienting sensory experience).
Interwoven within the documentary are scenes from Greenaway's earlier fictional retelling of the same story, Nightwatching. It stars Martin Freeman (who starred in the British version of The Office and a bunch of other stuff, including The Hitchhiker's Guide to the Galaxy). I found these scenes really strange at first. They seemed very out of place, at least until I found out that they were lifted from that earlier film. Then it made sense.
As previously mentioned, Greenaway does employ some visualization efforts to help call out certain features and structures within the painting. Some of the interesting ones are below. The first is one that silhouettes out the main actors in the drama of the painting. Then there's one that numbers all of the participants (you'll have to click on the image to get a good look at that one). There are a few that attempt to visualize the lines of sight of all the characters (only two are looking directly at the audience - this is one of the mysteries that Greenaway explores).
One of the things that interested me about the film was that many of the "mysteries" are probably things that most people would notice if you asked them to stare at the painting for an hour. They don't have the exformation to read the painting correctly, but they'd easily be able to pick out a lot of the most salient features. For instance, it's easy to question why the girl in the painting is so prominent. It's the brightest part of the painting, and your eyes go there almost immediately upon viewing it. If given some time, you can even see that there's another girl behind the first, and her face is obscured (it turns out that Rembrandt painted it this way because the girl had horrible burns on her face and was thus self-conscious about it). I think the great majority of the mysteries that Greenaway examines would be found if only someone took the time to really study the painting. Of course, I suspect most people don't actually do that sort of thing, so Greenaway does have a point, but still.
Below is the aforementioned "ridiculous getup" that Greenaway puts on at one point. Again, I think this is how he is stressing his own subjective involvement in what we're seeing.
Well, I think that just about wraps up my thoughts on Rembrandt's J'accuse. In closing, I'll give you one of the final shots of the film, which is a sorta reprise of the title screen. It's still cluttered and busy, but somehow not quite as pointless as the title screen.
It was an intriguing movie, I guess. It would be even more interesting if I could hear what other art historians and experts thought about it...
Posted by Mark on December 13, 2009 at 08:04 PM .: link :.
Sunday, September 20, 2009
Six Weeks of Halloween 2009: Week 1 - Universal Horror
It's that time of year again. Halloween is my favorite time of the year, and it provides a convenient excuse to explore one of my favorite genres of film (as I have done for the past couple of years). In preparation for this year's six week celebration of Halloween, I pretty quickly drew up a list that could easily take me through ten weeks... I doubt I'll get through them all, but I'm going to have fun trying. Highlights include this week's look at classic Universal Horror films, a sampling of the later Monster revival with Hammer Horror, perhaps some Vincent Price, and of course, some slashers and miscellaneous horrors to round out the pack (including the much anticipated Trick 'r Treat, amongst others). If you can't get enough Halloween madness here, be sure to visit Kernunrex, who's been doing this whole Six Weeks of Halloween thing a lot longer than I have... (Someday I'll redesign Kaedrin so as to allow for an easy switch to Halloween colors like he does... that day is probably not coming anytime soon, but still.)
It's the nicest weather Earth has ever had!*
As previously mentioned, this year's marathon kicks off with a look at Universal Studios' classic monster films. I've seen two of the following films before, but not since I was very young, so I figured it would be worth revisiting (as a result, I now want to revisit the original novels upon which the following films were based, which if my current queue is any indication, means I'll get to them sometime in the 2020s). Here goes:
It's also interesting to note that the characters of Dracula and Frankenstein are two of the most frequently utilized fictional characters in the history of film. Dracula has 200+ appearances, while Frankenstein has a mere 80+ roles. And I think both will continue to rack up the appearances. Interestingly, I think there are several more recent horror icons that could give the classics a run for their money... Jason Voorhees, Michael Myers, and Freddy Krueger have established themselves pretty firmly in modern film culture, but I'm not sure they will ever be as prolific as the old Universal classic monsters. Why? Devin Faraci has speculated on this:
There is one major obstacle that's stopping Freddy and Jason and Mike Myers and Leatherface from really getting to that position of being among the truly eternal monsters of filmland: copyright. While the versions of the Universal Monsters we love are copyrighted in terms of their appearance (although a zillion manufacturers of Halloween ephemera have skirted the edges of that legality), the characters themselves are in the public domain. This is what has allowed them to become such prominent forces in film, keeping them going in permutation after permutation. If Universal outright owned the characters then Hammer, for instance, would never have been able to reinvent them in the 50s and 60s (my colleague Ryan Rotten very astutely notes that what Platinum Dunes is doing with the characters of Jason, Freddy and Leatherface, and what Rob Zombie is doing with Michael Myers, is very similar to what Hammer did with the Universal Monsters, recasting them and re-presenting them for a new generation with new tastes). In fact, the copyright on the Gill-Man from The Creature from the Black Lagoon may be one of the things keeping him from really ascending and going places as a character. Being tightly controlled by Universal keeps him from escaping into the pop culture world at large.Perhaps audiences will still be squirming in their seats in fear of Jason, Mike, and Freddy a century from now, but maybe not. One thing is for sure though: Audiences will still be entertained by updates on Frankenstein and Dracula...
* With apologies to the MST3K Movie for that joke, though it works even better on the newer variations on the logo...
Posted by Mark on September 20, 2009 at 12:00 PM .: link :.
Sunday, August 16, 2009
In my first post on Noir, I kinda made light of the body count that our two heroes were racking up, as well as the fact that French society never seemed to notice when a few dozen nameless hitmen were discovered in a park or abandoned building somewhere. I was making a joke of it, but it always sorta bothered me. There are a few hundred people who die during the course of this series. While they're all portrayed as mostly nameless, faceless victims, I couldn't help but wonder what the consequences of their deaths were. Were they married? Did they have kids? Friends? And so on. Warning: The rest of the post contains major spoilers!
One of the things I wondered about was how well Mireille and Kirika were able to deal with the amount of death and destruction they were doling out. For the most part, they seem to deal with it remarkably well. Kirika seems to be more affected by it than Mireille. As the series goes on, she seems less and less enthused with what she's capable of doing... but there's something off about her reaction that took me a while to place. I finally realized what it was - it reminded me of Crime and Punishment (I suppose I should note spoilers for that novel as well), in particular, this paragraph (page 623 in my edition) where Raskolnikov laments his punishment:
... even if fate had sent him no more than remorse - burning remorse that destroyed the heart, driving away sleep, the kind of remorse to escape those fearsome torments the mind clutches at the noose and the well, oh, how glad he would have been! Torment and tears - after all, that is life, too. But he felt no remorse for his crime.In essence, Raskolnikov felt no guilt or remorse for his crime, but that lack of feeling, that lack of guilt was just as horrible as he could have imagined. That's very much how I thought Kirika felt during the second half of the series. In his take on the series, Steven Den Beste does an excellent job describing the duality of Kirika:
Kirika had two parts inside. One part was a killing machine. It was created by Altena through training and indoctrination, and once it seemed ready, Kirika's memory was wiped and she was placed in Japan, so that she could begin to face the Trials which were required of all candidates for Noir to prove their fitness. Events after that point were not planned, because they depended on what Kirika herself did, and how she reacted to the process. Hints were left which might lead Kirika to Mireille, but if they had not, she would have faced her trials alone.The killing machine part of Kirika's personality was capable of evil, without remorse or guilt, but the human side of her personality recognized how horrible that was and the series is essentially about Kirika's internal struggle. Mireille seemed to be much more neutral. The other piece of the puzzle is Chloe, who seems to take a perverse pleasure in what she is capable of, and as the series progresses, she becomes more and more creepy.
Kirika and Chloe
Ultimately, when Kirika is forced to choose between Mireille and Chloe, she chooses Mireille (who I guess is supposed to represent the human side of Kirika's personality). As Steven notes, the series does not end there and neither does Kirika's internal struggle. She is still capable of horrible evil and is not sure she could live with herself. Altena still attempts to appeal to the killing machine portion of Kirika's personality, but she ultimately fails, and Mireille succeeds in saving Kirika. At the very end, it's clear that Kirika and Mireille will continue on together and that they love each other (like sisters). I am once again reminded of Dostoyevsky (page 630 in my edition - replace the male pronouns with female pronouns and this could easily apply to Kirika):
... at this point a new story begins, the story of a man's gradual renewal, his gradual rebirth, his gradual transition from one world to another, of his growing acquaintance with a new, hitherto completely unknown reality. This might constitute the theme of a new narrative - our present narrative is, however, at an end.There's a lot more to the ending of the series that I'm skipping over, but Steven's post covers that in plenty of detail and I don't see a need to repeat all that... It's not a perfect series, but the ending did make it worthwhile for me. I wouldn't say that I was as taken with it as Steven or Alex, but neither was I as disappointed with it as Ben. I thought the series was a bit too long (a little too much filler, perhaps) and unevenly paced, but the ending made up for any issues I may have had with the series.
As usual, more screenshots and commentary in the extended entry...
I didn't notice this at first, but the table that Mireille uses to do her work is a pool table. Not sure what the significance of that is, but I guess you could make something symbolic out of it, like that Mireille and Kirika are stuck playing the Soldats' game or something.
Cargo containers in the least organized port in the world. Seriously, look at those things.
As mentioned above, Kirika, seen here double-fisting some pistols, John Woo style, is the main character of the series. This is interesting because at first glance, the series seems to be primarily about Mireille. As the series progresses, Mireille takes a back seat to Kirika and Chloe, then comes to the foreground at the end.
The Soldats in their stereotypical lair, sitting next to a fireplace and sipping port. We find out more about the Soldats later in the series, but their ultimate plan and Altena's plan for Noir all ends up taking a backseat to Kirika's internal struggle, which is the true conflict of the series. That's a good thing too, as giant conspiracies tend to bore me...
As the series progresses, Kirika, Mireille and Chloe encounter more and more hired killers, and in this case, the killers are literally faceless. Not a single one seems to be able to hold a candle to any of the Noirs though, which makes me wonder how challenging these "trials" are supposed to be for Noir.
This scene really bothered me. Not so much when it happened as in the next episode, when we find out... that it doesn't really mean anything. It serves a purpose - Mireille begins to realize just how much she cares for Kirika, etc... - but it's kind of a cheap shot. Also, I'm not really sure what happened. Did Chloe actually shoot Kirika? Why is Kirika fine afterwards? I didn't get it.
Towards the end of the series, we learn that Kirika killed Mireille's parents (apparently when Kirika was extremely young). Chloe was also there, and the screenshot above is her after she sees Kirika kill. Kinda creepy.
Towards the end of the series, Kirika and Chloe are reunited at Altena's home and have an awesome swordfight (as a training exercise).
Kirika wins the training session, and in the screenshot above you see something that is a recurring image. Often, when Kirika's killing machine personality is in control, her hair covers her eyes, making her faceless and symbolizing emotionlessness. I didn't really notice this until later in the series, so I'm not sure it applies to the whole series, but I did see it multiple times.
Mireille and Kirika have a faceoff towards the end, and they are legitimately trying to kill one another, but in the end, neither can pull the trigger.
This is the last shot in the series. The saturated, washed out brightness of this type of shot usually symbolizes transcendence or resolution, and that certainly fits with the ending of the series.
Well, that about covers it. Next up in the Anime queue is Miyazaki's Ponyo, which I should be seeing sometime this week. It's actually getting a pretty wide release - it's even playing at the local multiplex...
Posted by Mark on August 16, 2009 at 02:08 PM .: link :.
Sunday, June 28, 2009
Interrupts and Context Switching
To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (e.g. 5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc... When you get into the guts of a computer and start looking at how it works, it seems amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform millions of these operations per second, so it still feels fast. The processor performs these operations in a serial fashion - basically a single-file line of operations.
This single-file line can be quite inefficient, and there are times when you want a computer to be processing many different things at once, rather than one thing at a time. For example, most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself. For instance, when a program needs some data, it may have to read that data from the hard drive first. This may only take a few milliseconds, but the CPU would be idle during that time - quite inefficient. To improve efficiency, computers use multitasking. A CPU can still only be running one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called Context Switching. Ironically, the act of context switching adds a fair amount of overhead to the computing process. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load the state of the CPU from memory. Fortunately, this overhead is often offset by the efficiency gained with frequent context switches.
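The save-state/restore-state dance can be sketched with Python generators, whose paused frames stand in for saved CPU state. This is a toy illustration of the scheduling idea, not how a real kernel does it; the task names and step counts are made up:

```python
from collections import deque

def make_task(name, steps, log):
    """A toy 'process': does one unit of work per time slice, then yields."""
    def task():
        for i in range(steps):
            log.append(f"{name}{i}")
            yield  # pause here; the generator's frame is our saved "CPU state"
    return task()

def round_robin(tasks):
    """Switch between tasks after every time slice until all are finished."""
    queue = deque(tasks)
    while queue:
        current = queue.popleft()
        try:
            next(current)          # "restore state" and run one time slice
            queue.append(current)  # not done yet: reschedule it
        except StopIteration:
            pass                   # task finished; drop it from the queue

log = []
round_robin([make_task("A", 2, log), make_task("B", 2, log)])
print(log)  # interleaved: ['A0', 'B0', 'A1', 'B1']
```

The interleaved output is the whole point: neither task ran to completion before the other started, yet each picked up exactly where it left off - that's the "perfect accuracy" of machine state restoration discussed below.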
If you can do context switches frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Signaling the CPU to do a context switch is often accomplished with the use of a command called an Interrupt. For the most part, the computers we're all using are Interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.
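Interrupt-driven dispatch can be simulated in miniature as well: pending interrupts are handled in priority order, with higher-priority requests serviced first. The interrupt names and priority numbers here are purely illustrative:

```python
import heapq

class InterruptController:
    """A toy interrupt controller: dispatches pending interrupts in
    priority order (lower number = higher priority)."""

    def __init__(self):
        self.pending = []  # min-heap of (priority, name) tuples

    def raise_irq(self, priority, name):
        heapq.heappush(self.pending, (priority, name))

    def dispatch_all(self):
        """Context-switch to each handler, highest priority first."""
        handled = []
        while self.pending:
            _priority, name = heapq.heappop(self.pending)
            handled.append(name)
        return handled

ctrl = InterruptController()
ctrl.raise_irq(3, "disk-read-complete")
ctrl.raise_irq(1, "timer")
ctrl.raise_irq(2, "keyboard")
handled = ctrl.dispatch_all()
print(handled)  # ['timer', 'keyboard', 'disk-read-complete']
```

Note that the disk interrupt, raised first, is serviced last: priority, not arrival order, decides who preempts whom.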
This might sound tedious to us, but computers are excellent at this sort of processing. They will do millions of operations per second, and generally have no problem switching from one program to the other and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing - we can't change the speed of light or the size of atoms or a number of other physical constraints, and so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be to figure out how to use parallel computing as well as we now use serial computing. Hence all the talk about Multi-core processing (most commonly used with 2 or 4 cores).
Parallel computing can do many things that are far beyond our current technological capabilities. For a perfect example of this, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things which are far beyond the abilities of the biggest and most complex computers in existence. The reason is that there are truly massive numbers of neurons in our brain, and they're all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it's not so much the number of neurons we have as how they're organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn't mean they're proportionally more intelligent. An elephant's brain is much larger than a human's brain, but elephants are obviously much less intelligent than humans.
Of course, we know very little about the details of how our brains work (and I'm not an expert), but it seems clear that brain size and neuron count are not as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are "digital" in that if you were to take a snapshot of the brain at a given instant, each neuron would be either "on" or "off" (i.e. a 1 or a 0). However, neurons don't work like digital electronics. When a neuron fires, it doesn't just turn on, it pulses. What's more, each neuron is accepting input from and providing output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
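The weighted-connection idea can be sketched with a toy artificial neuron: inputs are multiplied by connection weights, summed, and squashed into a firing rate. The weights and the sigmoid function here are standard artificial-neural-network simplifications, not a model of real biological neurons:

```python
import math

def neuron(inputs, weights, bias=0.0):
    """A toy neuron: weighted sum of inputs, squashed to a 0-1 'firing rate'."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Identical inputs, different connection weights: the organization
# (the weights), not the raw input, determines how strongly it fires.
strong = neuron([1, 1, 1], [2.0, 1.5, 1.0])
weak = neuron([1, 1, 1], [0.1, 0.1, 0.1])
print(strong > weak)  # True
```

Even this crude sketch shows why organization trumps raw count: two "brains" with the same neurons and the same inputs behave completely differently depending on how the connections are weighted.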
This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable - things humans cherish and that computers can't really do on their own.
However, this all comes with its own set of tradeoffs. With respect to this post, the most relevant of which is that humans aren't particularly good at doing context switches. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious - heart pumping, breathing, processing sensory input, etc... Those are also things that we never really cease doing (while we're alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).
In a computer, everything is happening in serial and thus it is easy to predict how various inputs will impact the system. What's more, when a computer stores its CPU's current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, doing this sort of context switching is much more difficult. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don't necessarily have a more effective memory system, they're just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so doing a context switch introduces a lot of thrash in what you were originally doing because you will have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time doing). When you're working on something specific, you're dedicating a significant portion of your conscious brainpower towards that task. In other words, you're probably engaging millions if not billions of neurons in the task. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. In a computer, you need to ensure the current state of a single CPU is saved. Your brain, on the other hand, has a much tougher job, and its memory isn't quite as reliable as a computer's memory. I like to refer to this as mental inertia. This sort of issue manifests itself in many different ways.
One thing I've found is that it can be very difficult to get started on a project, but once I get going, it becomes much easier to remain focused and get a lot accomplished. But getting started can be a problem for me, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky - it's called Fire and Motion. A quick excerpt:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.I've found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as "flow" or being "in the zone." This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.
From my own personal experience a couple of years ago during a particularly demanding project, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hopes that they will be able to get more done because no one else is here (and complain when people are here that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.
A key component of flow is finding a large, uninterrupted chunk of time in which to work. It's also something that can be difficult to do at a lot of workplaces. Mine is a 24/7 company, and the nature of our business requires frequent interruptions, and thus many of us are in a near constant state of context switching. Between phone calls, emails, and instant messaging, we're sure to be interrupted many times an hour if we're constantly keeping up with them. What's more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have large amounts of meetings on our calendars, which only makes it more difficult to concentrate on something important.
Tell me if this sounds familiar: You wake up early and during your morning routine, you plan out what you need to get done at work today. Let's say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails and by the end of the day, you've accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.
Another example: if it's 2:40 pm and I know I have a meeting at 3 pm - should I start working on a task I know will take me 3 solid hours or so to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I'll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are when I get back to my desk to get started again, I'm going to have to refamiliarize myself with the project and what I had already done before proceeding.
Of course, none of what I'm saying here is especially new, but in today's world it can be useful to remind ourselves that we don't need to always be connected or constantly monitoring emails, RSS, facebook, twitter, etc... Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It's funny, when you look at a lot of attempts to increase productivity, efforts tend to focus on managing time. While important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).
(Note: As long and ponderous as this post is, it's actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored towards the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that could be repurposed in my professional life (and vice/versa).)
Posted by Mark on June 28, 2009 at 03:44 PM .: link :.
Wednesday, June 17, 2009
The Motion Control Sip Test
A few weeks ago, Microsoft and Sony unveiled rival motion control systems, presumably in response to Nintendo's dominant market position. The Wii has sold much better than both the Xbox 360 and the PS3 (to the point where sales of the Xbox and PS3 combined are around the same as the Wii), so I suppose it's only natural for the competition to adapt. To be honest, I'm not sure how wise that would be... or rather, I'm not sure Sony and Microsoft are imitating the right things. Microsoft's Project Natal seems quite ambitious in that it relies completely on gestures and voice (no controllers!). The Sony motion control system, which relies on a camera and two handheld wands, seems somewhat similar to the Wii in that there are still controllers and buttons. Incidentally, Nintendo recently released Wii Motion Plus, an improvement to its already dominant system.
My first thought at a way to compete with the Wii would have been along similar lines, but not for the reasons I suspect Microsoft and Sony released their solutions. The problem for MS & Sony is that the Wii is the unquestionable winner of this generation of gaming consoles, and everyone knows that. A third party video game developer can create a game for a console with an install base of 20 million (the PS3), 30 million (Xbox) or 50 million (Wii). Since the PS3 and Xbox have similar controllers, 3rd parties can often release games on both consoles, though there is overhead in porting your code to both systems. This gives a rough parity between those two systems and the Wii... until you realize that developing games for the Xbox/PS3 means HD and that means those games will be much more costly (in both time and money) to develop. On the other hand, you could reach the same size audience by developing a game for the Wii, using standard definition (which is much easier to develop for) and not having to worry about compatibility issues between two consoles.
The problem with Natal and Sony's Wands is that they basically represent brand new consoles. This totally negates the third party advantage of releasing a game on both platforms. Now a third party developer who wants to create a motion control game is forced to choose between two underperforming platforms and one undisputed leader in the field. How do you think that's going to go?
Microsoft's system seems to be the most interesting in that they're trying something much different than Nintendo or Sony. But "interesting" doesn't necessarily translate into successful, and from what I've read, Natal is a long ways away from production quality. Yeah, the marketing video they created is pretty neat, but from what I can tell, it doesn't quite work that well yet. Even MS execs are saying that what's in the video is "conceptual" and what they "hope" to have at launch. If they launch it at all. I'd be surprised if what we're seeing is ever truly launched. Yeah, the Minority Report interface (which is basically what Natal is) really looks cool, but I have my doubts about how easy it will be to actually use. Won't your arms get tired? Why use motion gestures for something that is so much easier and more precise with a mouse?
Sony's system seems to be less ambitious, but also too different from Nintendo's Wiimote. If I were at Sony, I would have tried to duplicate the Wiimote almost exactly. Why? Because then you give 3rd party developers the option of developing for Wii then porting to PS3, thus enlarging the pie from 50 million to 70 million with minimal effort. Sure the graphics wouldn't be as impressive as other PS3 efforts, but as the Wii has amply demonstrated, you don't need unbelievable graphics to be successful. The PS3 would probably need a way to upscale the SD graphics to ensure they don't look horrible, but that should be easy enough. I'm sure there would be some sort of legal issue with that idea, but I'm also sure Sony could weasel their way out of any such troubles. To be clear, this strategy wouldn't have a chance at cutting into Wii sales - it's more of a holding pattern, a way to stop the bleeding (it might help them compete with MS though). Theoretically, Sony's system isn't done yet either and could be made into something that could get Wii ports, but somehow I'm doubting that will actually be in the works.
The big problem with both Sony and Microsoft's answer to the Wiimote is that they've completely misjudged what made the Wii successful. It's not the Wiimote and motion controls, though that's part of it. It's that Nintendo courted everyone, not just video gamers. They courted grandmas and kids and "hardcore" gamers and "casual" gamers and everyone in between. They changed video games from solitary entertainment to something that is played in living rooms with families and friends. They moved into the Blue Ocean and disrupted the gaming industry. The unique control system was important, but I think that's because the control system was a signifier that the Wii was for everyone. The fact that it was simple and intuitive was more important than the motion controls themselves. The most important part of the process wasn't motion controls, but rather Wii Sports. Yes, Wii Sports uses motion controls, and it uses them exceptionally well. It's also extremely simple and easy to use, and it was targeted at everyone. It was a lot of fun to pop in Wii Sports and play some short games with your friends or family (or coworkers or enemies or strangers off the street or whoever).
The big problem for me is that even Nintendo hasn't improved on motion controls much since then. It's been 3 years since Wii Sports, and yet it's still probably the best example of motion controls in action. I have not played any Wii Motion Plus games yet, so for me, the jury is still out on that one. However, I'm not that interested in playing the games I'm seeing for Motion Plus, let alone the prospect of paying for yet another peripheral for my Wii (though it does seem to be cheap). The other successful games for the Wii weren't successful so much for their motion controls as for other, intangible factors. Mario Kart is successful... because it's always successful (incidentally, while I still enjoy playing with friends every now and again, the motion controls have nothing to do with that - it's more just the nostalgia I have for the original Mario Kart). Wii Fit has been an amazing success story for Nintendo, but it introduced a completely new peripheral, and its success is probably more due to the fact that Nintendo was targeting more than just the core gamer audience with software that broadened what was possible on a video game console. Again, Nintendo's success is due to their strategy of creating new customers and their marketing campaigns that follow the same strategy. The Wii has a lot of games with less than imaginative motion controls - games which simply replace random button mashing with random stick waggling. But where they're most successful seems to be where they target a broader audience. They also seem to be quite adept at playing on people's nostalgia, hence I find myself playing new Mario, Zelda, and Metroid games, even when I don't like some of them (I'm looking at you, Metroid Prime 3!)
Motion controls play a part in this, but they're the least important part. Why? Because the same complaints I have for Natal and the Minority Report interface apply to the Wii (or the new PS3 system, for that matter). For example, take Metroid Prime 3. An FPS for the Wii! Watch how motion controls will revolutionize the FPS! Well, not so much. There are a lot of reasons I don't like the game, but one of them was that you constantly had to have your Wiimote pointed up. If your hand strayed or you wanted to rest your wrists for a moment, your POV also strayed. There are probably some other ways to do an FPS on the Wii, but I'm not especially convinced (The Conduit looks promising, I guess) that a true FPS game will work that well on a Wii (heck, it doesn't work that well on a PS3 or Xbox when compared to the PC). That's probably why rail shooters have been much more successful on the Wii.
Part of the issue I have is that motion controls are great for short periods of time, but even when you're playing a great motion control game like Wii Sports, playing for long periods of time has adverse effects (Wii elbow, anyone?). Maybe that's a good thing; maybe gamers shouldn't spend so much time playing video games... but personally, I enjoy a nice marathon session every now and again.
You know what this reminds me of? New Coke. Seriously. Why did Coca-Cola change their time-honored and fabled secret formula? Because of the Pepsi Challenge. In the early 1980s, Coke was losing ground to Pepsi. Coke had long been the most popular soft drink, so they were quite concerned about their diminishing lead. Pepsi was growing closer to parity every day, and that's when they started running these commercials pitting Coke vs. Pepsi. The Pepsi Challenge took dedicated Coke drinkers and asked them to take a sip from two different glasses, one labeled Q and one labeled M. Invariably, people chose the M glass, which was revealed to contain Pepsi. Coke initially disputed the results... until they started privately running sip tests of their own. It turns out that people really did prefer Pepsi (hard as that may be for those of us who love Coke!). So Coke started tinkering with their secret formula, attempting to make it lighter and sweeter (i.e. more like Pepsi). Eventually, they got to a point where their new formulation consistently outperformed Pepsi in sip tests, and thus New Coke was born. Of course, we all know what happened. New Coke was a disaster. Coke drinkers were outraged, the company's sales plunged, and Coke was forced to bring back the original formula as "Classic Coke" just a few months later (at which point New Coke practically disappeared). What's more, Pepsi's seemingly unstoppable ascendance never materialized. For the past 20-30 years, Coke has beaten Pepsi despite sip tests which say that it should be the other way around. What was going on here? Malcolm Gladwell explains this incident and the aftermath in his book Blink:
The difficulty with interpreting the Pepsi Challenge findings begins with the fact that they were based on what the industry calls a sip test or a CLT (central location test). Tasters don’t drink the entire can. They take a sip from a cup of each of the brands being tested and then make their choice. Now suppose I were to ask you to test a soft drink a little differently. What if you were to take a case of the drink home and tell me what you think after a few weeks? Would that change your opinion? It turns out it would. Carol Dollard, who worked for Pepsi for many years in new-product development, says, “I’ve seen many times when the CLT will give you one result and the home-use test will give you the exact opposite. For example, in a CLT, consumers might taste three or four different products in a row, taking a sip or a couple sips of each. A sip is very different from sitting and drinking a whole beverage on your own. Sometimes a sip tastes good and a whole bottle doesn’t. That’s why home-use tests give you the best information. The user isn’t in an artificial setting. They are at home, sitting in front of the TV, and the way they feel in that situation is the most reflective of how they will behave when the product hits the market.”

To me, motion controls seem like a video game sip test. The analogy isn't perfect, because I think that motion controls are here to stay, but I think the idea is relevant. Coke is like Sony - they look at a successful competitor and completely misjudge what made them successful. Yes, motion controls are a part of the Wii's success, but their true success lies elsewhere. In small doses and optimized for certain games (like bowling or tennis), nothing can beat motion controls. In larger doses with other types of games, motion controls have a long way to go (and they make my arm sore).
Microsoft and Sony certainly don't seem to be abandoning their standard controllers, and even the Wii has a "Classic Controller", and I think that's about right. Motion controls have secured a place in gaming going forward, but I don't see it completely displacing good old-fashioned button mashing either.
Update: Incidentally, I forgot to mention the best motion control game I've played since Wii Sports has been... Flower, for the PS3. Flower is also probably a good example of a game that makes excellent use of motion controls, but hasn't achieved anywhere near the success of Nintendo's games. It's not because it isn't a good game (it is most definitely an excellent game, and the motion controls are great), it's because it doesn't expand the audience the way Nintendo does. If Natal and Sony's new system do make it to market, and if they do manage to release good games (and those are two big "ifs"), I suspect it won't matter much...
Posted by Mark on June 17, 2009 at 06:40 PM .: link :.
Sunday, June 07, 2009
A Decade of Kaedrin
It's hard to believe, but it has been ten years since I started this website. The exact date is a bit hard to pinpoint, as the site was launched on my student account at Villanova, which existed and was accessible on the web as far back as 1997. However, as near as I can tell, the site now known as Kaedrin began in earnest on May 31, 1999 at approximately 8 pm. That's when I wrote and published the first entry in The Rebel Fire Alarms, an interactive story written in tandem with my regular visitors. I called these efforts Tandem Stories, and they were my primary reason for creating the website. Other content was being published as well - mostly book, movie, and music reviews - but the primary focus was the tandem stories, because I wanted to do something different on an internet that was filled with boring, uninspired, static homepages that were almost never updated. At the time, the only form of interaction you were likely to see on a given website was a forum of some kind, so I thought the tandem stories were something of a differentiator for my site, and they were, though I never really knew how many different people visited the site. As time went on, interactivity on the web, even of the interactive story variety, became more common, so that feature became less and less unique...
I did, however, have a regular core of visitors, most of whom knew me from the now defunct 4degreez message boards (which has since morphed into 4th Kingdom, which is still a vibrant community site). To my everlasting surprise and gratitude, several of these folks are still regular visitors and while most of what I do here is for my own benefit, I have to admit that I never would have gotten this far without them. So a big thank you to those who are still with me!
But I'm getting ahead of myself here. Below is a rough timeline of my website, starting with my irrelevant student account homepage (which was basically a default page with some personal details filled in), moving on to the first incarnation of Kaedrin, and progressing through several redesigns and technologies until you get to the site you're looking at now (be forewarned, this gets to be pretty long, though it's worth noting that the site looked pretty much like it does today way back in 2001, so the bulk of the redesigning happened in the 1999-2001 timeframe)...
Posted by Mark on June 07, 2009 at 09:38 AM .: link :.
Sunday, February 15, 2009
Best Films of 2008
I saw somewhere on the order of 70 movies that were released in 2008. Most critics see more than that, but your average moviegoer probably sees far fewer. I have to say, I've been really disappointed with 2008. It's been a rough year for movies, and I had a really hard time cobbling together a top 10 (hence the extreme lateness of this post). Spots 6-10 on my list are somewhat weak and probably wouldn't have made the list in either 2006 or 2007. On the other hand, the films near the top of the list are great, and would compete with the films of the last two years.
Of course, making a top 10 list is an inherently subjective exercise. I've noted before that these lists tend to tell you more about those compiling the list than about the movies on the list. The hosts of the Filmcouch podcast were recently talking about how these sorts of lists are an autobiographical exercise and invited listeners to send in their top 5 lists, at which point they would psychoanalyze each list and try to come up with a picture of who the list's owner was. I submitted my list, and they tried to figure me out by the movies I listed. Before I go through their results, I should probably let you see my full list, so here goes:
Top 10 Movies of 2008
* In roughly reverse order
I found their comments interesting, and it did make me wonder why I really did choose the movies that I did. I think there is some truth in what they say, but I wouldn't say that I am the person they describe. There are some things that I'm fascinated by that aren't things I'd actually do. For instance, I've written before about vigilantes, and despite what the hosts of Filmcouch may think, I'm not a vigilante, and don't really have a desire to be one. What fascinates me about vigilante stories, though, is consequences. This is something that The Dark Knight did in spades, and it also features prominently in a lot of the other movies on the list. I wouldn't say that I particularly like the idea of "two wrongs make a right," but I am fascinated by situations in which the only possible alternatives are wrong. What do you do when no available option is right? How do you counter someone like the Joker? What are the consequences of time travel? What happens if you become a vampire when you're 12 years old? Do you help the Nazis destabilize the Allied economy, or do you protect your fellow concentration camp prisoners? I'm also the type of person who thinks the devil is in the details, and so I like movies that show that sort of thing. Again, Batman is a good example. Everyone agrees that fighting crime is an honorable thing, but when you get down to the details of such an endeavor, things become a lot more complicated. Sure, Batman could spend all his time taking down the criminals on the streets - but then he's not getting at the root of the problem. But taking on the root of the problem has consequences. And so on. So I suppose their "shades of gray" thing might be somewhat accurate as well. But the point remains: while I may be fascinated by vigilantes in film, that doesn't mean that I want to be a vigilante, nor does it mean that I would tolerate a vigilante in my community.
Something similar could probably be said for the other people prominently featured in my list (i.e. vampires, bank robbers, etc...) I'm fascinated by them, but it's not like I want to be them. Perhaps there's a cathartic value in these movies as well. They mentioned that I might be someone who likes to operate outside the system, but in fact, I do no such thing in my life. I'm pretty firmly ensconced within the system. But I suspect that's what makes people who operate outside the system fascinating... So anyway, that's what Filmcouch thinks. Not a bad job, but perhaps you can't truly read someone's soul through a list of 5 movies :p
* In alphabetical order
Perhaps as evidence of how bad a year this was, I'm also listing out my 5 least favorite movies. Typically, I'd have a tough time with this list, because I generally try to avoid bad movies and am usually somewhat successful in that. This year, I was not.
There are a couple of these that might even have potential for unseating my number 10 movie, but I couldn't get to them for whatever reason (usually that it wasn't playing near me or otherwise available). For instance, I ordered Mad Detective (co-directed by Kaedrin favorite Johnny To) on blu-ray on January 21, but according to Amazon, the delivery estimate is sometime in early March!?
Update 2.21.09: Well, that didn't take long. I saw Mad Detective last night and decided that it needed to be on the top 10. This knocks Spiral off the list and into the Honorable Mentions. Also worth noting are the comments to this post, where I have an interesting discussion with Adam from Filmcouch. And finally, the Filmcouch podcast mentioned my comments on this week's podcast as well. Thanks guys!
Posted by Mark on February 15, 2009 at 09:25 PM .: link :.
Sunday, January 11, 2009
2008 Kaedrin Movie Awards
As of today, I've seen 62 movies that would be considered 2008 releases. This is on par with my 2007 viewing and perhaps a bit less than 2006. So I'm not your typical movie critic, but I've probably seen more than your average moviegoer. As such, this constitutes the kickoff of my year-end movie recap. The categories for this year's movie awards are the same as last year, and things will proceed in a similar manner. Nominations will be announced today, and starting next week, I'll announce the winners (new winners announced every day). After that, there might be some miscellaneous awards, followed by a top 10 list.
As I've mentioned before, 2008 has been a weak year for movies. I'm not sure if this was because of the writers' strike, some other shift in studio strategy (the independent arms of many studios seem to be closing up shop, for instance), or my taste becoming more discriminating, but whatever the case, I've had trouble compiling my top 10. Indeed, I'm still not sure I've got a good list yet and am still scrambling to catch up with some of the lesser-known films of the year (many of which had minimal releases and are not out on DVD just yet). This is why these awards and my top 10 are a little later than last year. However, one of the things I like about doing these awards is that they allow me to give some love to films that I like, but which aren't necessarily great or are otherwise flawed (as such, the categories may seem a bit eclectic). Some of these movies will end up on my top 10, but the vast majority of them will not.
The rules for this are the same as last year: Nominated movies must have been released in 2008 and I have to have seen the movie (and while I have seen a lot of movies, I don't pretend to have seen a comprehensive selection - don't let that stop you from suggesting something though). Also, I suppose I should mention the requisite disclaimer that these sorts of lists are inherently subjective and personal. But that's all part of the fun, right?
It's been a pretty good year for villainy! At least on par with last year, if not better. As with the past two years, my picks in this category are for individuals, not groups (i.e. no vampires or zombies as a group).
A distinct step down in terms of heroic badassery this year, but it's not a terrible year either. Again limited to individuals and not groups.
Best Comedic Performance
Not a particularly strong year when it comes to comedy, but there still seem to be plenty of good performances, even in films I thought were lackluster...
Not a particularly huge year for breakthrough performances either, but definitely several interesting choices. As with previous years, my main criterion for this category was whether, after watching a movie, I immediately looked up the actor/actress on IMDB to see what else they've done (or where they came from). This sometimes happens even for well-established actors/actresses, and this year was no exception.
Most Visually Stunning
Best Sci-Fi or Horror Film
I'm a total genre hound, despite genres generally receiving very little attention from critics. As usual, there was a dearth of quality SF this year, especially because I don't consider Iron Man or The Dark Knight SF. However, a strong showing from the horror genre rounds out the nominations well. Plus, disappointed by the poor showing of SF, I cheated by nominating a 2007 SF film... I can't even fudge the release dates the way I can with some independent or foreign flicks - by every measurement I can think of, it's a 2007 film. But it was such a small film that flew under just about everyone's radar (including mine!) that I'm going to include it, just to give it some attention, because I really did enjoy it. Winner Announced!
Honestly, I only saw 4 or 5 sequels all year, so this was a difficult category to populate (as it is every year). Still, there were at least two really great sequels this year... Winner Announced!
Always a difficult award to figure out, as there are different ways in which a movie can disappoint. Usually, expectations play just as big a part in this as the actual quality of the film, and it's possible for a decent movie to win the award because of astronomical expectations. This year had several obvious choices though. Usually I manage to avoid the real stinkers, but this year I saw two genuinely awful movies... in the theater!
Best Action Sequences
This is a kinda by-the-numbers year for action sequences. Nothing particularly groundbreaking or incredible, but there were some well executed, straightforward action movies this year. These aren't really individual action sequences, but rather an overall estimation of each film. Winner Announced!
Best Plot Twist/Surprise
Not a particularly strong year for the plot twist either. Winner Announced!
Best High Concept Film
This was a new category last year, and like last year, I had a little difficulty coming up with this list, but overall, not bad. Winner Announced!
Anyone have any suggestions (for either category or nominations)? Comments, complaints and suggestions are welcome, as always.
It looks like The Dark Knight is leading the way with an impressive 6 nominations (rivaled only by the 8 nominations earned by Grindhouse last year... with the caveat that Grindhouse is technically 2 movies in one). Not far behind is Hellboy II with a respectable 5 nominations. Surprisingly, both Forgetting Sarah Marshall and The Signal earned 3 nominations, while a whole slew of other films garnered 2 noms, and an even larger number earned a single nomination. As I mentioned earlier, I'm going to give myself a week to think about each of these. I might end up adding to the nominations if I end up seeing something new. Winners will be announced starting next Sunday or Monday. As with the last two years, there will be a small set of Arbitrary Awards after the standard awards are given out, followed by the top 10.
Update: Added a new plot twist nominee (Spiral), because I just watched it and it deserves it!
Update 1.25.09: Arbitrary Awards announced!
Update 2.15.09: Top 10 of 2008 has finally been posted!
Posted by Mark on January 11, 2009 at 11:46 AM .: link :.
Sunday, December 07, 2008
I finished Neal Stephenson's latest novel, Anathem, a few weeks back. Overall, I enjoyed it heartily. I don't think it's his best work (a distinction that still belongs to Cryptonomicon or maybe Snow Crash), but it's way above anything I've read recently. It's a dense novel filled with interesting and complex ideas, but I had no problem keeping up once I got started. This is no small feat in a book that is around 900 pages long.
On the other hand, my somewhat recent discussion with Alex regarding the ills of Cryptonomicon has led me to believe that perhaps the reason I like Neal Stephenson's novels so much is that he tunes into the same geeky frequencies I do. I think Shamus hit the nail on the head with this statement:
In fact, I have yet to introduce anyone to the book and have them like it. I’m slowly coming to the realization that Cryptonomicon is not a book for normal people. Flaws aside, there are wonderful parts to this book. The problem is, you have to really love math, history, and programming to derive enjoyment from them. You have to be odd in just the right way to love the book. Otherwise the thing is a bunch of wanking.

Similarly, Anathem is not a book for normal people. If you have any interest in Philosophy and/or Quantum Physics, this is the book for you. Otherwise, you might find it a bit dry... but you don't need to be in love with those subjects to enjoy the book. You just need to find them interesting. I, for one, don't know much about Quantum Physics at all, and I haven't read any (real) Philosophy since college, and I didn't have any problems. In fact, I was pretty much glued to the book the whole time. One of the reasons I could tell I loved this book was that I wasn't really aware of what page I was on until I neared the end (at which point dealing with the physicality of the book itself made it pretty obvious how much was left).
Minor spoilers ahead, though I try to keep this to a minimum.
The story takes place on another planet named Arbre and is told in first person by a young man named Erasmus. Right away, this yields the interesting effect of negating the multi-threaded stories of most of Stephenson's other novels and providing a somewhat more linear progression of the story (at least, until you get towards the end of the novel, when the linearity becomes dubious... but I digress). Erasmus, who is called Raz by his friends, is an Avout - someone who has taken certain vows to concentrate on studies of science, history and philosophy. The Avout are cloistered in areas called Concents, which are kind of like monasteries except that the focus of the Avout is centered on scholarship and not religion. Concents are isolated from the rest of the world (the area beyond a Concent's walls is referred to as Extramuros or the Saecular World), but there are certain periods in which the gates open and the Avout mix with the Saecular world (these periods are called Apert). Each concent is split up into smaller Maths, which are categorized by the number of years which lapse between each Apert.
Each type of Math has interesting characteristics. Unarian maths have Apert every year, and are apparently a common way to achieve higher education before getting a job in the Saecular world (kinda like college or maybe grad-school). Decenarian maths have Apert once every ten years. Raz and most of the characters in the story are "tenners." Centenarian maths have Apert once every century (and are referred to as hundreders) and Millenarian maths have Apert once every thousand years (and are called thousanders).
I suppose after reading the last two paragraphs, you'll notice that Stephenson has spent a fair amount of time devising new words and concepts for his alien planet. At first, this seems a bit odd and it might take some getting used to, but after the first 50-100 pages, it's pretty easy to keep up with all the new history and terminology. There's a glossary in the back of the book for reference, but I honestly didn't find that I needed it very often (at least, not the way I did while reading Dune, for instance). Much has been made of Stephenson's choice in this matter, as well as his choice to set the story on an alien planet that has a history that is roughly analogous to Earth's history. Indeed, it seems like there is a one-to-one relationship between many historical figures and concepts on Arbre and Earth. Take, for instance, Protas:
Protas, the greatest fid of Thelenes, had climbed to the top of a mountain near Ethras and looked down upon the plain that nourished the city-state and observed the shadows of the clouds, and compared their shapes. He had had his famous upsight that while the shapes of the shadows undeniably answered to those of the clouds, the latter were infinitely more complex and more perfectly realized than the former, which were distorted not only by the loss of a spatial dimension but also by being projected onto terrain that was of irregular shape. Hiking back down, he had extended that upsight by noting that the mountain seemed to have a different shape every time he turned round to look back at it, even though he knew it had one absolute form and that these seeming changes were mere figments of his shifting point of view. From there, he had moved on to his greatest upsight of all, which was that these two observations - the one concerning the clouds, the other concerning the mountain - were themselves both shadows cast into his mind by the same greater, unifying idea. (page 84)

Protas is clearly an analog to Plato (and thus, Thelenes is similar to Socrates) and the concepts described above run parallel to Plato's concept of the Ideal (even going so far as to talk about shadows and the like, calling to mind Plato's metaphor of the cave). There are literally dozens of these types of relationships in the book. Adrakhones is analogous to Pythagoras, Gardan's Steelyard is similar to Occam's Razor, and so on. Personally, I rather enjoyed picking up on these similarities, but the referential nature of the setting might seem rather indulgent on Stephenson's part (at least, it might seem so to someone who hasn't read the book). I even speculated as much while I was reading the book, but as a reader noted in the comments to my post, that's not all there is to it.
It turns out that Stephenson's choice to set the story on Arbre, a planet that has a history suspiciously similar to Earth, was not an indulgence at all. Indeed, it becomes clear later in the book that these similarities are actually vital to the story being told.
This sort of thing represents a sorta meta-theme of the book. Where Cryptonomicon is filled with little anecdotes and tangents that are somewhat related to the story, Anathem is tighter. Concepts that are seemingly tangential and irrelevant wind up playing an important role later in the book. Don't get me wrong, there are certainly a few tangents or anecdotes that are just that, but despite the 900+ page length of the book, Stephenson does a reasonably good job juggling ideas, most of which end up being important later in the book.
The first couple hundred pages of the novel take place within a Concent, and thus you get a pretty good idea of what life is like for the Avout. It's always been clear that Stephenson appreciates the opportunity to concentrate on something without having any interruptions. His old website quoted former Microsoft employee Linda Stone's concept of "continuous partial attention," which is something most people are familiar with these days. Cell phones, emails, Blackberries/iPhones, TV, and even the internet are all pieces of technology which allow us to split our attention and multi-task, but at the same time, such technology also serves to make it difficult to find a few uninterrupted hours with which to delve into something. Well, in a Concent, the Avout have no such distractions. They lead a somewhat regimented, simple life with few belongings and spend most of their time thinking, talking, building and writing. Much of their time is spent in Socratic dialogue with one another. At first, this seems rather odd, but it's clear that these people are first rate thinkers. And while philosophical discussions can sometimes be a bit dry, Stephenson does his best to liven up the proceedings. Take, for example, this dialogue between Raz and his mentor, Orolo:
"Describe worrying," he went on.

And this goes on for a few pages as well. Incidentally, this is also an example of one of those things that seems like it's an irrelevant tangent, but returns later in the story.
So the Avout are a patient bunch, willing to put in hundreds of years of study to figure out something you or I might find trivial. I was reminded of the great unglamourous march of technology, only amplified. Take, for instance, these guys:
Bunjo was a Millenarian math built around an empty salt mine two miles underground. Its fraas and suurs worked in shifts, sitting in total darkness waiting to see flashes of light from a vast array of crystalline particle detectors. Every thousand years they published their results. During the First Millenium they were pretty sure they had seen flashes on three separate occasions, but since then they had come up empty. (page 262)

As you might imagine, there is some tension between the Saecular world and the Avout. Indeed, there have been several "sacks" of the various Concents. This happens when the Saecular world gets freaked out by something the Avout are working on and attacks them. However, at the time of the novel, things are relatively calm. Total isolation is not possible, so there are Hierarchs from the Avout who keep in touch with the Saecular world, and thus when the Saecular world comes across a particularly daunting problem or crisis, they can call on the Avout to provide some experts for guidance. Anathem tells the story of one such problem (let's say they are faced with an external threat), and it leads to an unprecedented gathering of Avout outside of their concents.
I realize that I've spent almost 2000 words without describing the story in anything but a vague way, but I'm hesitant to give away too much. However, I will mention that the book is not all philosophical dithering and epic worldbuilding. There are martial artists (who are Avout from a Concent known as the Ringing Vale, which just sounds right), cross-continental survival treks, and even some space travel. All of this is mixed together well, and while I wouldn't characterize the novel as an action story, there's more than enough there to keep things moving. In fact, I don't want to give the impression that the story takes a back seat at any point during the novel. Most of the world building I've mentioned is something that comes through incidentally in the telling of the story. There are certainly "info-dumps" from time to time, but even those are generally told within the framework of the story.
There are quite a few characters in the novel (as you might expect, when you consider its length), but the main ones are reasonably well defined and interesting. Erasmus turns out to be a typical Stephensonian character - a very smart man who is constantly thrust into feuds between geniuses (i.e. a Randy/Daniel Waterhouse type). As such, he is a likeable fellow who is easy to relate to and empathize with. He has several Avout friends, each of whom plays an important role in the story, despite being separated from time to time. There's even a bit of a romance between Raz and one of the other Avout, though this does proceed somewhat unconventionally. During the course of the story, Raz even makes some Extramuros friends. One is his sister Cord, who seems to be rather bright, especially when it comes to mechanics. Another is Sammann, who is an Ita (basically a techno-nerd who is always connected to networks, etc...). Raz's mentor Orolo has been in the Concent for much longer than Raz, and is thus always ten steps ahead of him (he's the one who brought up the nerve-gas-farting pink dragons above).
Another character who doesn't make an appearance until later on in the story is Fraa Jad. He's a Millenarian, so if Orolo is always ten steps ahead, Jad is probably a thousand steps ahead. He has a habit of biding his time and dropping a philosophical bomb into a conversation, like this:
Fraa Jad threw his napkin on the table and said: "Consciousness amplifies the weak signals that, like cobwebs spun between trees, web Narratives together. Moreover, it amplifies them selectively and in that way creates feedback loops that steer the Narratives." (page 701)

If that doesn't make a lot of sense, that's because it doesn't. In the book, the characters surrounding Jad spend a few pages trying to unpack what was said there. That might seem a bit tedious, but it's actually kinda funny when he does stuff like that, and his ideas actually are driving the plot forward, in a way. One thing Stephenson doesn't spend much time discussing is the details of how the Millenarians continue to exist. He doesn't explicitly come out and say it, but the people on Arbre seem to have life spans similar to humans (perhaps a little longer), so it's a little unclear how things like Millenarian Maths can exist. He does mention that thousanders have managed to survive longer than others, but it's not clear how or why. If one were so inclined, they could perhaps draw a parallel between the Thousanders in Anathem and the Societas Eruditorum in Cryptonomicon and the Baroque Cycle. Indeed, Enoch Root would probably fit right in at a Millenarian Math... but I'm pretty sure I'm just reading way too much into this and that Stephenson wasn't intentionally trying to draw such a parallel. It's still an interesting thought though.
Overall, Stephenson has created and sustained a detailed world, and he has done so primarily through telling the story. Indeed, I'm only really touching the surface of what he's created here, and honestly, so is he. It's clear that Stephenson could easily have made this into another 3000 page Baroque Cycle style trilogy, delving into the details of the history and culture of Arbre, but despite the novel's length, he does keep things relatively tight. The ending of the novel probably won't do much to convince those who don't like his endings that he's turned over a new leaf, but I enjoyed it and thought it ranks well among his previous books. There are some who will consider the quasi-loose-ends in the story to be frustrating, but I thought it actually worked out well and was internally consistent with the rest of the story (it's hard to describe this without going into too much detail). In the end, this is Stephenson's best work since Cryptonomicon and the best book I've read in years. It will probably be enjoyed by anyone who is already a Stephenson fan. Otherwise, I'm positive that there are people out there who are just the right kind of weird that would really enjoy this book. I expect that anyone who is deeply interested in Philosophy or Quantum Physics would have a ball. Personally, I'm not too experienced in either realm, but I still enjoyed the book immensely. Here's to hoping we don't have to wait another 4 years for a new Stephenson novel...
Posted by Mark on December 07, 2008 at 08:39 PM .: link :.
Sunday, June 15, 2008
One of the cable channels was playing Ocean's Eleven all weekend, and that's one of those movies I always find myself watching when it comes on (this time, I even went to the shelf and fired up the DVD, so as to avoid commercials). Of course, there are tons of new, never-seen-before things I want to watch. My Netflix queue currently has around 140 movies in it (and this seems to be growing with time, despite the rate at which I go through my rentals). I've got a DVD set of Banner of the Stars that I'm only about 1/3 of the way through. My DVR has a couple episodes of the few TV shows I follow queued up for me. Yet I find myself watching Ocean's Eleven for the umpteenth time. And loving every second of it.
In actuality, I've noticed myself doing this sort of thing less and less over the years. When I was younger, I would watch and rewatch certain movies almost daily. There are several movies that have probably moved up into triple digit rewatches (for the curious, the films in this list include The Terminator, Aliens, The Empire Strikes Back, Return of the Jedi and Phantasm). Others I've only rewatched dozens of times. As time goes on, I find myself less and less likely to rewatch things. I think Netflix has become a big part of that, because I want to get my money's worth from the service, and the only way to do that is to continually watch new movies. In recent years, I've also come to realize that even though I've seen way more movies than the average person, there are still a lot of holes in my film knowledge. I do find myself limited by time these days, so when it comes down to rewatching an old favorite or potentially discovering a new one, I tend to favor the new films these days. But I still relapse (focusing on novelty has its own challenges), and I do find myself rewatching movies on a regular basis.
Why is that? There are some people who never rewatch movies, but even with my declining repeat viewings, I don't count myself among them. Some films almost demand you to watch them again. For instance, I recently watched Andrei Tarkovsky's thoughtful, if difficult, SF film Solaris. This is a film that seems designed to reveal itself only upon multiple viewings. Tarkovsky is somewhat infamous for this sort of thing, and there are a lot of movies out there that are like that. Upon repeated viewings, these films take on added dimensions. You start to notice things. Correlations, strange relationships, and references become more apparent.
Other films, however, are just a lot of fun to rewatch. This raises a lot of interesting questions. Why is a movie fun even when we know the ending? Indeed, why do some reviewers even include a rating for rewatchability? In some cases we just like spending time with certain characters or settings and don't mind that we already know the outcome. I've made a distinction between these films and the ones that demand multiple viewings, but the two types share many of the same benefits of repeat viewing. Rewatching a film can be a richer, deeper experience, and you start to notice things you didn't upon first viewing. Indeed, one interesting thing about rewatching movies is that while the movie is the same, you are not. Context matters. Every time we rewatch something, we bring our knowledge and experience (which is always changing) to the table. Sometimes this can be trivial (like noticing a reference or homage you didn't know about), but I've often heard of movies becoming more poignant to people after they have children or as they grow older. Similarly, rewatching a movie can transport us back to the context in which we first saw the movie. I still remember the excitement and the spectacle of going to see Batman or Terminator 2 on opening day. Those were fun experiences from my childhood, even if I don't particularly love either movie. Heck, just the thought of how often I used to rewatch some movies is a fun memory that gets brought up whenever I think about those movies today...
There are also a lot of fascinating psychological implications to rewatching movies. As I mentioned before, we sometimes rewatch movies to revisit characters we consider friends or situations we find satisfying. In the case of comedies, we want to laugh. In the case of horror films, we want to scare ourselves or feel suspense. And strangely, even though we know the outcomes of these movies, they still seem to be able to elicit these various emotions as we rewatch them. Even movies that depict true stories can generate suspense or fear when we know how the story will turn out. Two recent, high-profile examples of this are United 93 and Zodiac. Both of those films were immersive enough upon first viewing that I felt suspense at various parts of the story, even though I knew on an intellectual level where both films were heading. David Bordwell has explored this concept thoroughly and references several interesting theories as to why rewatching movies remains powerful:
Normally we say that suspense demands an uncertainty about how things will turn out. Watching Hitchcock’s Notorious for the first time, you feel suspense at certain points-when the champagne is running out during the cocktail party, or when Devlin escorts the drugged Alicia out of Sebastian’s house. That’s because, we usually say, you don’t know if the spying couple will succeed in their mission.

Here's one theory he covers:
...in general, when we reread a novel or rewatch a film, our cognitive system doesn’t apply its prior knowledge of what will happen. Why? Because our minds evolved to deal with the real world, and there you never know exactly what will happen next. Every situation is unique, and no course of events is literally identical to an earlier one. “Our moment-by-moment processes evolved in response to the brute fact of nonrepetition” (Experiencing Narrative Worlds, 171). Somehow, this assumption that every act is unique became our default for understanding events, even fictional ones we’ve encountered before.

He goes into a lot more detail about this theory and others in his post. Several of the theories he covers touch on what I find most interesting about the subject, which is that our brain seems to have compartmentalized the processing of various data. I'm going to simplify drastically for effect here, but I think the general idea is right (I'm not a neurologist though, so take it with a grain of salt). When processing visual and audio data, there is a part of the brain that is, for lack of a better term, stateless. It picks up a stimulus, immediately renders it (into a visual or audio representation), then shuttles it off to another part of the brain which interprets the output. This interpretation seems to be where our brain slows down. The initial processing is involuntary and unconscious, and it doesn't take other data (like memories) into account. We don't have to consciously think about it; it just happens. Something similar happens when we first begin to interpret data. Our brain seems to be unconsciously and continually forming different interpretations and then rejecting most of them. The rejected thoughts are displaced by new alternatives which incorporate more of our knowledge and experience (and perhaps this part happens in a more conscious fashion).
We've all had the experience of thinking something that almost immediately disturbed us because we wonder where that thought came from. Bordwell gives a common example (I've read about this exact example at least three times from different people):
Standing at a viewing station on a mountaintop, safe behind the railing, I can look down and feel fear. I don’t really believe I’ll fall. If I did, I would back away fast. I imagine I’m going to fall; perhaps I even picture myself plunging into the void and, a la Björk, slamming against the rocks at the bottom. Just the thought of it makes my palms clammy on the rail.

So perhaps one reason it doesn't matter that we know how a movie will turn out is that there is a part of us that is blindly processing data without incorporating what we already know. Another reason we still feel emotions like suspense during a movie we've seen before is because we can imagine what would happen if it didn't turn out the way we know it will. In both cases, there is a conscious intellectual response which can negate our instinctual thoughts, but such responses seem to happen after the fact (at which point, you've already experienced the emotion in question and can't just take it back). One of the most beautiful things about laughter is that it happens involuntarily. We don't (always) have to think about it, we just do it. Dennis Miller once wrote about this:
The truth is the human sense of humor tends to be barbaric and it has been that way all along. I'm sure on the eve of the nativity when the tall Magi smacked his forehead on the crossbeam while entering the stable, Joseph took a second away from pondering who impregnated his wife and laughed his little carpenter ass off. A sense of humor is exactly that: a sense. Not a fact, not etched in stone, not an empirical math equation but just what the word intones: a sense of what you find funny. And obviously, everybody has a different sense of what's funny. If you need confirmation on that I would remind you that Saved by the Bell recently celebrated the taping of their 100th episode. Oh well, one man's Molier is another man's Screech and you know something thats the way it should be.

Indeed, humor generally dissipates when you try to explain it. You either get it or you don't.
I could probably go on and on about this, but Bordwell has done an excellent job in his post (there's an interesting bit about mirror neurons, for instance), and unlike me, he's got lots of references. I do find the subject fascinating though, and I began wondering about the impact of people rewatching movies so often. After all, this is a somewhat recent trend we're talking about (not that people didn't rewatch movies before the advent of the VCR and DVD, but that technology has obviously increased the amount of rewatching).
We're living in an on-demand era right now, meaning that we can choose what we want to watch whenever we want (well, we're not quite there yet, but we're moving quickly in that direction). If I want to rewatch Solaris a hundred times and analyze it like the Zapruder film, I'm free to do so (and it might even be a rewarding effort). In the past, things weren't necessarily like that though. James Berardinelli recently wrote about rewatching movies, and he provides some interesting historical context:
30 years ago, if you loved a movie, re-watching it involved patience and hard work. A big Hollywood picture might show up in prime time (ABC regularly aired the James Bond movies on Sunday nights) but smaller/older films were relegated to late night or weekend afternoon showings. Lovers of High Noon (for example) might have to wait a couple of years and religiously check TV listings before being rewarded by its appearance on "The Million Dollar Movie" at 12:30 am some night.

Again, this trend has continued, and the degree to which you can program your viewing schedule is ever increasing. Even during the 1980s when I was growing up, I found myself beholden to the broadcast schedules more often than not. Sure, I could tape things with a VCR, but I usually found myself browsing the channels looking for something to watch. There was a certain serendipity to discovering movies in those days. I distinctly remember the first time I saw a Spaghetti Western (For a Few Dollars More), getting hooked, and watching a bunch of others (Cinemax was running a series of them that month). The last time I remember something like that happening was about 5-6 years ago when I caught an Italian horror marathon on some cable movie channel. And the only reason I watched that was because I had seen Suspiria before and wanted to watch it again. It was followed by several Mario Bava films that were very interesting. Today, I look back on some of the films I watched in my childhood, even ones I cherished, and I wonder why I ever bothered to watch them in the first place. It was probably because nothing else was on. The advent of digital cable has changed things as well, because it doesn't encourage blind television surfing. There's a program guide built right in, so you can browse that to find what you want. Unfortunately, that means you could skip right over something you would otherwise like (and that may have caught your eye if you saw a glimpse of it).
There's also a lot more to choose from (perhaps leading to a paradox of choice situation).
Of course, there are other ways for film lovers to discover new films they wouldn't otherwise have watched. On a personal level, listening to various film podcasts, especially Filmspotting and All Movie Talk (which is sadly now defunct, though still worth listening to if you love movies), has been incredibly helpful in finding and exploring various genres or eras of film that I had not been acquainted with. One effective technique that Filmspotting has employed is the use of marathons, in which they watch 5-6 movies from a genre or filmmaker they are not particularly familiar with. Of course, this, too, is subject to the whims of listeners - many (including myself) will avoid films that don't have an immediate appeal. Still, I've found myself playing along with several of their marathons and watching movies I don't think I would ever watch on my own.
One interesting film experiment is currently being conducted by a blogger named Matthew Dessem. He wanted to learn more about foreign films and found that the Criterion Collection was an interesting place to start. It contains a good mix of the old, new, foreign, and independent, and it goes in a somewhat random order. He started writing a review for each movie at his blog, The Criterion Contraption. He's about 80 or so movies into the collection, and his reviews are exceptionally good (apparently the product of about 15 hours of work each). In an interview, Dessem explains his reasoning for watching the collection in order and why he writes reviews for each one:
I began writing about the films simply as a way of keeping myself intellectually honest: thinking about how each movie was supposed to work, paying attention to what was effective and what was not. Given the chance to not engage with a difficult film, I'll usually take it, unless I have to come up with something coherent to say about it.

Later in the interview, he expands on why he watches the films in the order Criterion put them out:
Mostly, it keeps me honest. If I had the choice to watch the films in any order, I would quickly jump to all the films I most want to see, and never get around to the ones that seem less interesting. That means I'd miss out on a lot of discoveries, which was one of my main goals to begin with. But jumping around from country to country and decade to decade has its own rewards: like any good 21st century citizen, I have a pretty good case of apophenia, so I'll often see connections that don't exist between films.

I can definitely see where he's coming from. Looking through the catalog of Criterion, I see a lot of movies that I'd probably skip if I didn't require myself to watch them in order (as it is now, I've seen somewhere around 10% of the movies, and there's no particular order I've gone in - I sorta fell into the trap where I "quickly jump to all the films I most want to see, and never get around to the ones that seem less interesting". Except, of course, I haven't decided to watch all the Criterion Collection movies.) Indeed, some of the movies I have seen, I probably wouldn't recommend except in certain circumstances (for example, I wouldn't recommend Equinox to anyone but die-hard horror fans).
However, while there are ways for us film lovers to seek out and expand our knowledge of film, I do wonder about the casual moviegoers. Is the recent trend of remakes (or reimaginings or whatever they call them these days) partially the result of this phenomenon? I wonder how many of the younger generation saw Rob Zombie's limp remake of Halloween and then sought out the brilliant original? That is perhaps too high-profile of an example. How about the original Ocean's Eleven? As it turns out, I have not seen that movie, despite loving the remake. I've added it to my Netflix queue. It rests at position 116 right now, which means I'll probably get to it sometime within the next five years. Now if you'll excuse me, I'm going to rewatch The Empire Strikes Back. It is my destiny.
Update: Added some screenshots from movies I've watched a bazillion times. Also just want to note that while I spent most of my time talking about movies here, the same goes for books and music. I don't tend to reread books much (perhaps due to the time commitment reading a book takes), but on the other hand, music gets better with multiple listenings (so much so that no one even questions the practice of listening to music multiple times).
Posted by Mark on June 15, 2008 at 08:21 PM .: link :.
Sunday, January 27, 2008
Best Films of 2007
I saw somewhere on the order of 60 movies that were released in 2007. This is somewhat lower than most critics, but higher than your average moviegoer. Also unlike most critics, I don't consider this to be a spectacular year for film. For instance, I left several films off my 2006 list that would have been shoo-ins this year. If I were to take a more objective stance, limiting my picks to the movies with the best technical qualities, the list would be somewhat easier to assemble. But that's a boring way to make a list, and absolute objectivity is impossible in any case. Fewer movies really caught my attention and interested me this year. Don't get me wrong, I love movies and there were a lot of good ones this year, but few really clicked with me. As such, a lot of the top 10 could easily be exchanged with a movie from the Honorable Mention section. So without further ado:
Top 10 Movies of 2007
* In roughly reverse order
As I mentioned above, a lot of these honorable mentions would probably do fine for the bottom half of the top 10 (the top half is pretty strong, actually). In some cases, I really struggled with a lot of the below picks. If my mood were different, I bet some things would change. These are all good movies and worth watching too.
Posted by Mark on January 27, 2008 at 08:18 PM .: link :.
Sunday, November 18, 2007
The Paradise of Choice?
A while ago, I wrote a post about the Paradox of Choice based on a talk by Barry Schwartz, the author of a book by the same name. The basic argument Schwartz makes is that choice is a double-edged sword. Choice is a good thing, but too much choice can have negative consequences, usually in the form of some kind of paralysis (where there are so many choices that you simply avoid the decision) and consumer remorse (elevated expectations, anticipated regret, etc...). The observations made by Schwartz struck me as being quite astute, and I've been keenly aware of situations where I find myself confronted with a paradox of choice ever since. Indeed, just knowing and recognizing these situations seems to help deal with the negative aspects of having too many choices available.
This past summer, I read Chris Anderson's book, The Long Tail, and I was pleasantly surprised to see a chapter in his book titled "The Paradise of Choice." In that chapter, Anderson explicitly addresses Schwartz's book. However, while I liked Anderson's book and generally agreed with his basic points, I think his dismissal of the Paradox of Choice is off target. Part of the problem, I think, is that Anderson is much more concerned with the choices themselves than with the consequences of those choices (which is what Schwartz focuses on). It's a little difficult to tell though, as Anderson only dedicates 7 pages or so to the topic. As such, his arguments don't really eviscerate Schwartz's work. There are some good points though, so let's take a closer look.
Anderson starts with a summary of Schwartz's main concepts, and points to some of Schwartz's conclusions (from page 171 in my edition):
As the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.

Now, the way Anderson presents this is a bit out of context, but we'll get to that in a moment. Anderson continues and then responds to some of these points (again, page 171):
As an antidote to this poison of our modern age, Schwartz recommends that consumers "satisfice," in the jargon of social science, not "maximize". In other words, they'd be happier if they just settled for what was in front of them rather than obsessing over whether something else might be even better. ...

Anderson has completely missed the point here. Later in the chapter, he spends a lot of time establishing that people do, in fact, like choice. And he's right. My problem is twofold: First, Schwartz never denies that choice is a good thing, and second, he never advocates removing choice in the first place. Yes, people love choice, the more the better. However, Schwartz found that even though people preferred more options, they weren't necessarily happier because of it. That's why it's called the paradox of choice - people obviously prefer something that ends up having negative consequences. Schwartz's book isn't some sort of crusade against choice. Indeed, it's more of a guide for how to cope with being given too many choices. Take "satisficing." As Tom Slee notes in a critique of this chapter, Anderson misstates Schwartz's definition of the term. He makes it seem like satisficing is settling for something you might not want, but Schwartz's definition is much different:
To satisfice is to settle for something that is good enough and not worry about the possibility that there might be something better. A satisficer has criteria and standards. She searches until she finds an item that meets those standards, and at that point, she stops.

Settling for something that is good enough to meet your needs is quite different than just settling for whatever happens to be in front of you. Again, I'm not sure Anderson is really arguing against Schwartz. Indeed, Anderson even acknowledges part of the problem, though he again misstates Schwartz's arguments:
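The distinction is easy to make concrete in code. Here's a minimal sketch (the jean styles and scores are invented for illustration, not taken from Schwartz): a satisficer sets a standard and stops at the first option that meets it, while a maximizer has to examine every option before committing to any of them.

```python
def satisfice(options, score, threshold):
    """Return the first option that meets the threshold, then stop looking."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing was good enough

def maximize(options, score):
    """Examine every option and return the single best one."""
    return max(options, key=score)

jeans = ["Relaxed Fit", "Boot Fit", "Easy Fit", "Standard Fit", "Baggy Fit"]
fit_score = {"Relaxed Fit": 6, "Boot Fit": 4, "Easy Fit": 8,
             "Standard Fit": 9, "Baggy Fit": 3}.get

print(satisfice(jeans, fit_score, threshold=7))  # Easy Fit - good enough, stop
print(maximize(jeans, fit_score))                # Standard Fit - after checking all
```

The satisficer walks away happy after three tries; the maximizer can't stop until the entire rack has been inspected, which is exactly the behavior Schwartz associates with decision paralysis and regret.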
Vast choice is not always an unalloyed good, of course. It too often forces us to ask, "Well, what do I want?" and introspection doesn't come naturally to all. But the solution is not to limit choice, but to order it so it isn't oppressive.

Personally, I don't think the problem is that introspection doesn't come naturally to some people (though that could be part of it), it's more that some people just don't give a crap about certain things and don't want to spend time figuring it out. In Schwartz's talk, he gave an example about going to the Gap to buy a pair of jeans. Of course, the Gap offers a wide variety of jeans (as of right now: Standard Fit, Loose Fit, Boot Fit, Easy Fit, Morrison Slim Fit, Low Rise Fit, Toland Fit, Hayes Fit, Relaxed Fit, Baggy Fit, Carpenter Fit). The clerk asked him what he wanted, and he said "I just want a pair of jeans!"
The second part of Anderson's statement is interesting though. Aside from again misstating Schwartz's argument (he does not advocate limiting choice!), the observation that the way a choice is presented is important is interesting. Yes, the Gap has a wide variety of jean styles, but look at their website again. At the top of the page is a little guide to what each of the styles means. For the most part, it's helpful, and I think that's what Anderson is getting at. Too much choice can be oppressive, but if you have the right guide, you can get the best of both worlds. The only problem is that finding the right guide is not as easy as it sounds. The jean style guide at Gap is neat and helpful, but you do have to click through a bunch of stuff and read it. This is easier than going to a store and trying all the varieties on, but it's still a pain for someone who just wants a pair of jeans dammit.
Anderson spends some time fleshing out these guides to making choices, noting the differences between offline and online retailers:
In a bricks-and-mortar store, products sit on the shelf where they have been placed. If a consumer doesn't know what he or she wants, the only guide is whatever marketing material may be printed on the package, and the rough assumption that the product offered in the greatest volume is probably the most popular.

I think it's a very good point he's making, though I think he's a bit too optimistic about how effective these guides to buying really are. For one thing, there are times when a choice isn't clear, even if you do have a guide. Also, while I think recommendations based on what other customers purchase are important and helpful, who among us hasn't seen absurd recommendations? From my personal experience, a lot of people don't like the connotations of recommendations either (how do they know so much about me? etc...). Personally, I really like recommendations, but I'm a geek and I like to figure out why they're offering me what they are (Amazon actually tells you why something is recommended, which is really neat). In any case, from my own anecdotal observations, few people put much faith in probabilistic systems like recommendations or ratings (for a number of reasons, such as cheating or distrust). There's nothing wrong with that, and that's part of why such systems are effective. Ironically, acknowledging their imperfections allows users to better utilize the systems. Anderson knows this, but I think he's still a bit too optimistic about our tools for traversing the long tail. Personally, I think they need a lot of work.
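For the curious, the basic idea behind "customers who bought this also bought..." recommendations can be sketched in a few lines. This is only a toy illustration built on invented purchase data and raw co-occurrence counts - real retail systems are vastly more sophisticated than this:

```python
from collections import Counter
from itertools import permutations

# Invented purchase "baskets" - each set is one customer's order.
purchases = [
    {"Anathem", "Cryptonomicon", "Snow Crash"},
    {"Anathem", "Cryptonomicon"},
    {"Anathem", "Cryptonomicon", "Quicksilver"},
    {"Anathem", "Snow Crash"},
    {"Quicksilver", "The Long Tail"},
]

# Count how often each ordered pair of items appears in the same basket.
co_occurrence = Counter()
for basket in purchases:
    for a, b in permutations(basket, 2):
        co_occurrence[(a, b)] += 1

def recommend(item, n=2):
    """Items most frequently bought alongside `item`, most frequent first."""
    scores = Counter({b: count for (a, b), count in co_occurrence.items()
                      if a == item})
    return [title for title, _ in scores.most_common(n)]

print(recommend("Anathem"))  # ['Cryptonomicon', 'Snow Crash']
```

Even this toy version shows why recommendations sometimes look absurd: with sparse data, a single odd purchase can dominate the counts, which is exactly the kind of imperfection users learn to discount.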
When I was younger, one of the big problems in computing was storage. Computers are the perfect data gathering tool, but you need somewhere to store all that data. In the 1980s and early 1990s, computers and networks were significantly limited by hardware, particularly storage. By the late 1990s, Moore's law had eroded this deficiency significantly, and today, the problem of storage is largely solved. You can buy a terabyte of storage for just a couple hundred dollars. However, as I'm fond of saying, we don't so much solve problems as trade one set of problems for another. Now that we have the ability to store all this information, how do we get at it in a meaningful way? When hardware was limited, analysis was easy enough. Now, though, you have so much data available that the simple analyses of the past don't cut it anymore. We're capturing all this new information, but are we really using it to its full potential?
I recently caught up with Malcolm Gladwell's article on the Enron collapse. The really crazy thing about Enron was that they didn't really hide what they were doing. They fully acknowledged and disclosed what they were doing... there was just so much complexity to their operations that no one really recognized the issues. They were "caught" because someone had the persistence to dig through all the public documentation that Enron had provided. Gladwell goes into a lot of detail, but here are a few excerpts:
Enron's downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the nineteen-seventies. To expose the White House coverup, Bob Woodward and Carl Bernstein used a source-Deep Throat-who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flower pot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn't being followed, and meet his source in an underground parking garage at 2 A.M. ...

Again, there's a lot more detail in Gladwell's article. Just how complicated was the public documentation that Enron had released? Gladwell gives some examples, including this one:
Enron's S.P.E.s were, by any measure, evidence of extraordinary recklessness and incompetence. But you can't blame Enron for covering up the existence of its side deals. It didn't; it disclosed them. The argument against the company, then, is more accurately that it didn't tell its investors enough about its S.P.E.s. But what is enough? Enron had some three thousand S.P.E.s, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty S.P.E. disclosure statements from various corporations-that is, summaries of the deals put together for interested parties-and found that on average they ran to forty single-spaced pages. So a summary of Enron's S.P.E.s would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That's what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That's what the Powers Committee put together. The committee looked only at the "substance of the most significant transactions," and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was "with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation."

Again, Gladwell's article has a lot of other details and is a fascinating read. What interested me the most, though, was the problem created by so much data. That much information is useless if you can't sift through it quickly or effectively enough. Bringing this back to the paradise of choice, the current systems we have for making such decisions are better than ever, but still require a lot of improvement.
Anderson is mostly talking about simple consumer products, so none are really as complicated as the Enron case, but even then, there are still a lot of problems. If we're really going to overcome the paradox of choice, we need better information analysis tools to help guide us. That said, Anderson's general point still holds:
More choice really is better. But now we know that variety alone is not enough; we also need information about that variety and what other consumers before us have done with the same choices. ... The paradox of choice turned out to be more about the poverty of help in making that choice than a rejection of plenty. Order it wrong and choice is oppressive; order it right and it's liberating.

Personally, while the help in making choices has improved, there's still a long way to go before we can really tackle the paradox of choice (though, again, just knowing about the paradox of choice seems to do wonders in coping with it).
As a side note, I wonder if the video game playing generations are better at dealing with too much choice - video games are all about decisions, so I wonder if folks who grew up working on their decision making apparatus are more comfortable with being deluged by choice.
Posted by Mark on November 18, 2007 at 09:47 PM .: link :.
Sunday, August 05, 2007
Manuals, or the lack thereof...
When I first started playing video games and using computer applications, I remember having to read the instruction manuals to figure out what was happening on screen. I don't know if this was because I was young and couldn't figure this stuff out, or because some of the controls were obtuse and difficult. It was perhaps a combination of both, but I think the latter was more prevalent, especially as applications and games became more complex and powerful. I remember sitting down at a computer running DOS and loading up WordPerfect. The interface that appeared was rather simplistic; the developers apparently wanted to avoid the "clutter" of on-screen menus, so they used keyboard combinations. According to Wikipedia, WordPerfect used "almost every possible combination of function keys with Ctrl, Alt, and Shift modifiers." I vaguely remember needing to use those stupid keyboard templates (little pieces of laminated paper that fit snugly around the keyboard keys, helping you remember what key or combo does what).
Video Games used to have great manuals too. I distinctly remember several great manuals from the Atari 2600 era. For example, the manual for Pitfall II was a wonderful document done in the style of Pitfall Harry's diary. The game itself had little in the way of exposition, so you had to read the manual to figure out that you were trying to rescue your niece Rhonda and her cat, Quickclaw, who became trapped in a catacomb while searching for the fabled Raj diamond. Another example for the Commodore 64 was Temple of Apshai. The game had awful graphics, but each room you entered had a number, and you had to consult your manual to get a description of the room.
By the time of the NES, the importance of manuals had waned from Apshai levels, but they were still somewhat necessary at times, and gaming companies still went to a lot of trouble to produce helpful documents. The one that stands out in my mind was the manual for Dragon Warrior III, which was huge (at least 50 pages) and also contained a nice fold-out chart of most of the monsters and weapons in the game (with really great artwork). PC games were also getting more complex, and as Roy noted recently, companies like Sierra put together really nice instruction manuals for complex games like the King's Quest series.
In the early 1990s, my family got its first Windows PC, and several things changed. With the Word for Windows software, you didn't need any of those silly keyboard templates. Everything you needed to do was in a menu somewhere, and you could just point and click instead of having to memorize strange keyboard combos. Naturally, computer purists love the keyboard, and with good reason. If you really want to be efficient, the keyboard is the way to go, which is why Linux users are so fond of the command line and simple looking but powerful applications like Emacs. But for your average user, the GUI was very important, and made things a lot easier to figure out. Word had a user manual, and it was several hundred pages long, but I don't think I ever cracked it open, except maybe in curiosity (not because I needed to).
The trends of improving interfaces and less useful manuals continued throughout the next decade, and today, well, I can't think of the last time I had to consult a physical manual for anything. Steven Den Beste has been playing around with Flash for a while, but he says he never looks at the manual. "Manuals are for wimps." In his post, Roy wonders where all the manuals have gone. He speculates that manufacturing costs are a primary culprit, and I have no doubt that they are, but there are probably a couple of other reasons as well. For one, interfaces have become much more intuitive and easy to use. This is in part due to familiarity with computers and the emergence of consistent standards for things like dialog boxes (of course, when you eschew those standards, you get what Jakob Nielsen describes as a catastrophic failure). If you can easily figure it out through the interface, what use are the manuals? With respect to gaming, in-game tutorials have largely taken the place of instruction manuals. Another thing that has perhaps affected official instruction manuals is the rise of unofficial walkthroughs and game guides. Visit a local bookstore and you'll find entire bookcases devoted to video game guides and walkthroughs. As nice as the manual for Pitfall II was, you really didn't need much more than 10 pages to explain how to play that game, but several hundred pages barely does justice to some of the more complex video games in today's market. Perhaps the reason gaming companies don't give you instruction manuals with the game is not just that printing the manual is costly, but that they can sell you a more detailed and useful one.
Steven Johnson's book Everything Bad is Good for You has a chapter on Video Games that is very illuminating (in fact, the whole book is highly recommended - even if you don't totally agree with his premise, he still makes a compelling argument). He talks about the official guides and why they're so popular:
The dirty little secret of gaming is how much time you spend not having fun. You may be frustrated; you may be confused or disoriented; you may be stuck. When you put the game down and move back into the real world, you may find yourself mentally working through the problem you've been wrestling with, as though you were worrying a loose tooth. If this is mindless escapism, it's a strangely masochistic version.

He gives an example of a man who spends six months working as a smith (mindless work) in Ultima Online so that he can attain a certain ability, and he also talks about how people spend tons of money on guides for getting past various roadblocks. Why would someone do this? Johnson spends a fair amount of time going into the neurological underpinnings of this, most notably what he calls the "reward circuitry of the brain." In games, rewards are everywhere. More life, more magic spells, new equipment, etc... And how do we get these rewards? Johnson thinks there are two main modes of intellectual labor that go into video gaming, and he calls them probing and telescoping.
Probing is essentially exploration of the game and its possibilities. Much of this is simply the unconscious exploration of the controls and the interface, figuring out how the game works and how you're supposed to interact with it. However, probing also takes the more conscious form of figuring out the limitations of the game. For instance, in a racing game, it's usually interesting to see if you can turn your car around backwards, pick up a lot of speed, then crash head-on into a car going the "correct" way. Or, in Rollercoaster Tycoon, you can creatively place balloon stands next to a roller coaster to see what happens (the result is hilarious). Probing the limits of game physics and finding ways to exploit them are half the fun (or challenge) of video games these days, which is perhaps another reason why manuals are becoming less frequent.
Telescoping has more to do with the game's objectives. Once you've figured out how to play the game through probing, you seek to exploit your knowledge to achieve the game's objectives, which are often nested in a hierarchical fashion. For instance, to save the princess, you must first enter the castle, but you need a key to get into the castle and the key is guarded by a dragon, etc... Indeed, the structure is sometimes even more complicated, and you essentially build this hierarchy of goals in your head as the game progresses. This is called telescoping.
So why is this important? Johnson has the answer (page 41 in my edition):
... far more than books or movies or music, games force you to make decisions. Novels may activate our imagination, and music may conjure up powerful emotions, but games force you to decide, to choose, to prioritize. All the intellectual benefits of gaming derive from this fundamental virtue, because learning how to think is ultimately about learning to make the right decisions: weighing evidence, analyzing situations, consulting your long term goals, and then deciding. No other pop culture form directly engages the brain's decision-making apparatus in the same way. From the outside, the primary activity of a gamer looks like a fury of clicking and shooting, which is why much of the conventional wisdom about games focuses on hand-eye coordination. But if you peer inside the gamer's mind, the primary activity turns out to be another creature altogether: making decisions, some of them snap judgements, some long-term strategies.

Probing and telescoping are essential to learning in any sense, and the way Johnson describes them in the book reminds me of a number of critical thinking methods. Probing, developing a hypothesis, reprobing, and then rethinking the hypothesis is essentially the same thing as the scientific method or the hermeneutic circle. As such, it should be interesting to see if video games ever really catch on as learning tools. There have been a lot of attempts at this sort of thing, but they're often stifled by the reputation of video games being a "colossal waste of time" (in recent years, the benefits of gaming are being acknowledged more and more, though not usually as dramatically as Johnson does in his book).
Another interesting use for video games might be evaluation. A while ago, Bill Simmons made an offhand reference to EA Sports' Madden games in the context of hiring football coaches (this shows up at #29 on his list):
The Maurice Carthon fiasco raises the annual question, "When teams are hiring offensive and defensive coordinators, why wouldn't they have them call plays in video games to get a feel for their play calling?" Seriously, what would be more valuable, hearing them B.S. about the philosophies for an hour, or seeing them call plays in a simulated game at the all-Madden level? Same goes for head coaches: How could you get a feel for a coach until you've played poker and blackjack with him?

When I think about how such a thing would actually go down, I'm not so sure, because the football world created by Madden, as complex and comprehensive as it is, still isn't exactly the same as the real football world. However, I think the concept is still sound. Theoretically, you could see how a prospective coach would actually react to a new, and yet similar, football paradigm and how they'd find weaknesses and exploit them. The actual plays they call aren't that important; what you'd be trying to figure out is whether the coach was making intelligent decisions.
So where are manuals headed? I suspect that they'll become less and less prevalent as time goes on and interfaces become more and more intuitive (though there is still a long way to go before I'd say that computer interfaces are truly intuitive, I think they're much more intuitive now than they were ten years ago). We'll see more interactive demos and in-game tutorials, and perhaps even games used as teaching tools. I could probably write a whole separate post about how this applies to Linux, which actually does require you to look at manuals sometimes (though at least they have a relatively consistent way of treating manuals; even when the documentation is bad, you can usually find it). Manuals and passive teaching devices will become less important. And to be honest, I don't think we'll miss them. They're annoying.
Posted by Mark on August 05, 2007 at 10:58 AM .: link :.
Sunday, June 10, 2007
A few weeks ago, I wrote about how context matters when consuming art. As sometimes happens when writing an entry, that one got away from me and I never got around to the point I originally started with (that entry was originally entitled "Referential" but I changed it when I realized that I wasn't going to write anything about references), which was how much of our entertainment these days references its predecessors. This takes many forms, some overt (homages, parody), some a little more subtle.
I originally started thinking about this while watching an episode of Family Guy. The show is infamous for its random cutaway gags - little vignettes that have no connection to the story, but which often make some obscure reference to pop culture. For some reason, I started thinking about what it would be like to watch an episode of Family Guy with someone from, let's say, the 17th century. Let's further speculate that this person isn't a blithering idiot, but perhaps a member of the Royal Society or something (i.e. a bright fellow).
This would naturally be something of a challenge. There are some technical explanations that would be necessary. For example, we'd have to explain electricity, cable networks, signal processing and how the television works (which at least involves discussions on light and color). The concept of an animated show, at least, would probably be easy to explain (but it would involve a discussion of how the human eye works, to a degree).
There's more to it, of course, but moving past all that, once we start watching the show, we're going to have to explain why we're laughing at pretty much all of the jokes. Again, most of the jokes are simply references to and parodies of other pieces of pop culture. Watching an episode of Family Guy with Isaac Newton (to pick a prominent Royal Society member) would necessitate a pause just about every minute to explain what each reference was from and why Family Guy's take on it made me laugh. Then there's the fact that Family Guy rarely has any sort of redeemable lesson and often deliberately skews towards actively encouraging evil (something along the lines of "I think the important thing to remember is that it's ok to lie, so long as you don't get caught." I don't think that exact line is in an episode, but it could be.) This works fine for us, as we're so steeped in popular culture that we get the fact that Family Guy is just lampooning the notion that we could learn important life lessons via a half-hour sitcom. But I'm sure Isaac Newton would be appalled.
For some reason, I find this fascinating, and try to imagine how I would explain various jokes. For instance, the episode I was watching featured a joke concerning "cool side of the pillow." They cut to a scene in bed where Peter flips over the pillow and sees Billy Dee Williams' face, which proceeds to give a speech about how cool this side of the pillow is, ending with "Works every time." This joke alone would require a whole digression into Star Wars and how most of the stars of that series struggled to overcome their typecasting and couldn't find a lot of good work, so people like Billy Dee Williams ended up doing commercials for a malt liquor named Colt 45, which had these really cheesy commercials where Billy Dee talked like that. And so on. It could probably take an hour before my guest would even come close to understanding the context of the joke (I'm not even touching the tip of the iceberg with this post).
And the irony of this whole thing is that jokes that are explained simply aren't funny. To be honest, I'm not even sure why I find these simple gags funny (that, of course, is the joy of humor - you don't usually have to understand it or think about it, you just laugh). Seriously, why is it funny when Family Guy blatantly references some classic movie or show? Again, I'm not sure, but that sort of humor has been steadily growing over the past 30 years or so.
Not all comedies are that blatant about their referential humor though (indeed, Family Guy itself doesn't solely rely upon such references). A recent example of a good referential film is Shaun of the Dead, which somehow manages to be both a parody and an example of a good zombie movie. It pays homage to all the classic zombie films and it also makes fun of other genres (notably the romantic comedy), but in doing so, the filmmakers have also made a good zombie movie in itself. The filmmakers have recently released a new film called Hot Fuzz, which attempts the same trick for action movies and buddy comedies. It is, perhaps, not as successful as Shaun, but the sheer number of references in the film is astounding. There are the obvious and explicit ones like Point Break and Bad Boys II, but there are also tons of subtle homages that I'd wager most people wouldn't get. For instance, when Simon Pegg yells in the movie, he's doing a pitch perfect impersonation of Arnold Schwarzenegger in Predator. And when he chases after a criminal, he imitates the way Robert Patrick's T-1000 runs from Terminator 2.
References don't need to be part of a comedy either (though comedies seem to make the easiest examples). Hop on IMDB and go to just about any recent movie, and click on the "Movie Connections" link in the left navigation. For instance, did you know that the aforementioned T2 references The Wizard of Oz and The Killing, amongst dozens of other references? Most of the time, these references are really difficult to pick out, especially when you're viewing a foreign film or show that's pulling from a different cultural background. References don't have to be story or character based - they can be the way a scene is composed or the way the lighting is set (i.e. the Venetian blinds in Noir films).
Now, this doesn't just apply to art either. A lot of common knowledge in today's world is referential. Most formal writing includes references and bibliographies, for instance, and a non-fiction book will often assume basic familiarity with a subject. When I was in school, I was always annoyed at the amount of rote memorization they made us do. Why memorize it if I could just look it up? Shouldn't you be focusing on my critical thinking skills instead of making me memorize arbitrary lists of facts? Sometimes this complaining was probably warranted, but most of it wasn't. So much of what we do in today's world requires a well-rounded familiarity with a large number of subjects (including history, science, culture, amongst many other things). There simply isn't any substitute for actual knowledge. Though it was a pain at the time, I'm glad emphasis was put on memorization during my education. A while back, David Foster noted that schools are actually moving away from this, and makes several important distinctions. He takes an example of a song:
Jakob Dylan has a song that includes the following lines:

As Foster notes, this doesn't mean that "thinking skills" are unimportant, just that knowledge is important too. You need to have a quality data set in order to use those "thinking skills" effectively.
Human beings tend to leverage knowledge to create new knowledge. This has a lot of implications, one of which is intellectual property law. Giving limited copyright to intellectual property is important, because the data in that property eventually becomes available for all to build upon. It's ironic that educators are considering less of a focus on memorization, as this requirement of referential knowledge has been increasing for some time. Students need a base of knowledge to both understand and compose new works. References help you avoid reinventing the wheel every time you need to create something, which leads to my next point.
I think part of the reason references are becoming more and more common these days is that they make entertainment a little less passive. Watching TV or a movie is, of course, a passive activity, but if you make lots of references and homages, the viewer is required to think through those references. If the viewer has the appropriate knowledge, such a TV show or movie becomes a little more cognitively engaging. It makes you think, it calls to mind previous work, and it forces you to contextualize what you're watching based on what you know about other works. References are part of the complexity of modern television and film, and Steven Johnson spends a significant amount of time talking about this subject in his book Everything Bad is Good for You (from page 85 of my edition):
Nearly every extended sequence in Seinfeld or The Simpsons, however, will contain a joke that makes sense only if the viewer fills in the proper supplementary information -- information that is deliberately withheld from the viewer. If you haven't seen the "Mulva" episode, or if the name "Art Vandelay" means nothing to you, then the subsequent references -- many of them arriving years after their original appearance -- will pass on by unappreciated.

I know some people who hate Family Guy and Seinfeld, but I realized a while ago that they don't hate those shows because of the contents of the shows or because they were offended (though some people certainly are), but rather because they simply don't get the references. They didn't grow up watching TV in the 80s and 90s, so many of the references are simply lost on them. Family Guy would be particularly vexing if you didn't have the pop culture knowledge of the writers of that show. These reference-heavy shows are also a lot easier to watch and rewatch, over and over again. Why? Because each episode is not self-contained, you often find yourself noticing something new every time you watch. This also sometimes works in reverse. I remember the first time I saw Bill Shatner's campy rendition of Rocket Man, I suddenly understood a bit on Family Guy which I thought was just a bit based on being random (but was really a reference).
Again, I seem to be focusing on comedy, but it's not necessarily limited to that genre. Eric S. Raymond has written a lot about how science fiction jargon has evolved into a sophisticated code that implicitly references various ideas, conventions and tropes of the genre:
In looking at an SF-jargon term like, say, "groundcar", or "warp drive" there is a spectrum of increasingly sophisticated possible decodings. The most naive is to see a meaningless, uninterpretable wordlike noise and stop there.

While comedy makes for convenient examples, I think this better illustrates the cognitive demands of referential art. References require you to be grounded in various subjects, and they'll often require you to think through the implications of those subjects in a new context. References allow writers to pack incredible amounts of information into even the smallest space. This, of course, requires the consumer to decode that information (using available knowledge and critical thinking skills), making the experience less passive and more engaging. The use of references will continue to flourish and accelerate in both art and scholarship, and new forms will emerge. One could even argue that aggregation in various weblogs is simply an exercise in referential work. Just look at this post, in which I reference several books and movies, in many cases assuming familiarity. Indeed, the whole structure of the internet is based on the concept of links -- essentially a way to reference other documents. Perhaps this is part of the cause of the rising complexity and information density of modern entertainment. We can cope with it now, because we have such systems to help us out.
Posted by Mark on June 10, 2007 at 03:08 PM .: link :.
Sunday, February 18, 2007
World Domination Via Dice
One of my favorite board games is Risk. I have lots of fond memories of getting annihilated by my family members (I don't think I've ever played the game without being the youngest person at the table) and have long since mastered the fundamentals. I also hold it responsible for my early knowledge of world geography and geopolitics (and thus my early thoughts were warped, but at least I knew where the Middle East was, even if the map is a little broad).
The key to Risk is Australia. The Greeks knew it; the Carthaginians knew it; now you know it. Australia only has four territories to conquer and more importantly, it only has one entrance point, and thus only one territory to defend. Conquering Australia early in the game guarantees an extra two armies a turn, which is huge at that point in the game. Later in the game, that advantage lessens, but after securing Australia, you should be off to a very good start. If you're not in a position to take over Australia, South America will do. It also only has four territories, but it has two entrances and thus two territories to defend. On the bright side, it's also adjacent to Africa and North America, which are good continents to expand to (though they're both considerably more difficult to hold than Australia). This being the internet, there are, of course, some people who have thought about the subject a lot more than I and developed many detailed strategies.
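The attacker's edge in Risk combat is easy to check empirically. Here's a quick Monte Carlo sketch (a toy, not anyone's official implementation) of the standard full-strength dice round: the attacker rolls three dice, the defender rolls two, the highest dice are paired off, and ties go to the defender:

```python
import random

def risk_battle_round(attack_dice=3, defend_dice=2):
    """Simulate one round of Risk combat: sort both sides' dice,
    pair highest vs. highest, ties go to the defender.
    Returns (attacker_losses, defender_losses)."""
    attack = sorted(random.choices(range(1, 7), k=attack_dice), reverse=True)
    defend = sorted(random.choices(range(1, 7), k=defend_dice), reverse=True)
    a_loss = d_loss = 0
    for a, d in zip(attack, defend):  # only the top min(3, 2) dice are compared
        if a > d:
            d_loss += 1
        else:
            a_loss += 1
    return a_loss, d_loss

def estimate_odds(trials=100_000):
    """Average losses per side over many 3-vs-2 rounds."""
    a_total = d_total = 0
    for _ in range(trials):
        a, d = risk_battle_round()
        a_total += a
        d_total += d
    return a_total / trials, d_total / trials
```

Run `estimate_odds()` and you'll see the defender loses slightly more armies per round than the attacker in a 3-vs-2 exchange, which is why a big stack grinding down a lone choke point (like Australia's single entrance) works so reliably.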
Like many of the classic games, the original has become dwarfed by variants - games set in another universe (LotR Risk) or in a futuristic setting (Risk: 2042) - but I've never played those. However, I recently ran across a little internet game called Dice Wars. It's got the general Risk-like gameplay and concept of world domination via dice, but there are many key differences:
Of course, I'd already played a bit to get to this point, and you can probably spot my strategy here. I started with a concentration of territories towards the middle of the map, and thus focused on consolidating my forces in that area. By the time I got to the screenshot above, I'd narrowed down my exposure to four territories. I began expanding to the right, and eventually conquered all of the green territories, thus limiting my exposure to only two territories. From there it was just a matter of slowly expanding that wall of two (at one point I needed to expand back to an exposure of three) until I won. Another nice feature of this game is the "History" button that appears at the end. Click it, and you watch the game progress really quickly through every battle, showing you the entire war in a matter of seconds. Neat. It's a fun game, but in the end, I think I still prefer Risk. [hat tip to Hypercubed for the game]
Posted by Mark on February 18, 2007 at 08:33 PM .: link :.
Wednesday, February 14, 2007
Intellectual Property, Copyright and DRM
Roy over at 79Soul has started a series of posts dealing with Intellectual Property. His first post sets the stage with an overview of the situation, and he begins to explore some of the issues, starting with the definition of theft. I'm going to cover some of the same ground in this post, and then some other things which I assume Roy will cover in his later posts.
I think most people have an intuitive understanding of what intellectual property is, but it might be useful to start with a brief definition. Perhaps a good place to start would be Article 1, Section 8 of the U.S. Constitution:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;

I started with this for a number of reasons. First, because I live in the U.S. and most of what follows deals with U.S. IP law. Second, because it's actually a somewhat controversial stance. The fact that IP is only secured for "limited Times" is the key. In England, for example, an author does not merely hold a copyright on their work, they have a Moral Right.
The moral right of the author is considered to be -- according to the Berne convention -- an inalienable human right. This is the same serious meaning of "inalienable" the Declaration of Independence uses: not only can't these rights be forcibly stripped from you, you can't even give them away. You can't sell yourself into slavery; and neither can you (in Britain) give the right to be called the author of your writings to someone else.

The U.S. is different. It doesn't grant an inalienable moral right of ownership; instead, it allows copyright. In other words, in the U.S., such works are considered property (i.e. they can be sold, traded, bartered, or given away). This represents a fundamental distinction that needs to be made: some systems emphasize individual rights and rewards, and other systems are more limited. When put that way, the U.S. system sounds pretty awful, except that it was designed for something different: our system was built to advance science and the "useful arts." The U.S. system still rewards creators, but only as a means to an end. Copyright is granted so that there is an incentive to create. However, such protections are only granted for "limited Times." This is because when a copyright is eternal, the system stagnates as protected parties stifle competition (this need not be malicious). Copyright is thus limited so that when a work is no longer protected, it becomes freely available for everyone to use and to build upon. This is known as the public domain.
The end goal here is the advancement of society, and both protection and expiration are necessary parts of the mix. The balance between the two is important, and as Roy notes, one of the things that appears to have upset the balance is technology. This, of course, extends as far back as the printing press, records, cassettes, VHS, and other similar technologies, but more recently, the convergence of new compression techniques and the increasing bandwidth of the internet created a new kind of problem. Most new recording technologies were greeted with concern, but physical limitations and costs generally put a cap on the amount of damage that could be done. With computers and large networks like the internet, such limitations became almost negligible. Digital copies of protected works became easy to make and distribute on a very large scale.
The first major issue came up as a result of Napster, a peer-to-peer music sharing service that essentially promoted widespread copyright infringement. Lawsuits followed, and the original Napster service was shut down, only to be replaced by numerous decentralized peer-to-peer systems and darknets. This meant that no single entity could be sued for the copyright infringement that occurred on the network, but it resulted in a number of (probably ill-advised) lawsuits against regular folks (the anonymity of internet technology and the state of recordkeeping being what they are, this sometimes leads to hilarious cases, like when the RIAA sued a 79-year-old guy who didn't even own a computer or know how to operate one).
Roy discusses the various arguments for or against this sort of file sharing, noting that the essential difference of opinion is the definition of the word "theft." For my part, I think it's pretty obvious that downloading something for free that you'd normally have to pay for is morally wrong. However, I can see some grey area. A few months ago, I pre-ordered Tool's most recent album, 10,000 Days, from Amazon. A friend who already had the album sent me a copy over the internet before I had actually received my copy of the CD. Does this count as theft? I would say no.
The concept of borrowing a book, CD, or DVD also seems pretty harmless to me, and I don't have a moral problem with borrowing an electronic copy, then deleting it afterward (or purchasing it, if I liked it enough), though I can see how such a practice represents a bit of a slippery slope and wouldn't hold up in an honest debate (nor should it). It's too easy to abuse such an argument, or to apply it in retrospect. I suppose there are arguments to be made with respect to making distinctions between benefits and harms, but I generally find those arguments unpersuasive (though perhaps interesting to consider).
There are some other issues that need to be discussed as well. The concept of Fair Use allows limited use of copyrighted material without requiring permission from the rights holders. For example, you can include a screenshot of a film in a movie review. You're also allowed to parody copyrighted works, and in some instances make complete copies of a copyrighted work. There are rules pertaining to how much of the copyrighted work can be used and in what circumstances, but this is not the venue for such details. The point is that copyright is not absolute and consumers have rights as well.
Another topic that must be addressed is Digital Rights Management (DRM). This refers to a range of technologies used to combat digital copying of protected material. The goal of DRM is to use technology to automatically limit the abilities of a consumer who has purchased digital media. In some cases, this means that you won't be able to play an optical disc on a certain device, in others it means you can only use the media a certain number of times (among other restrictions).
To be blunt, DRM sucks. For the most part, it benefits no one. It's confusing, it basically amounts to treating legitimate customers like criminals while only barely (if that much) slowing down the piracy it purports to be thwarting, and it's led to numerous disasters and unintended consequences. Essential reading on this subject is this talk given to Microsoft by Cory Doctorow. It's a long but well-written and straightforward read that I can't summarize briefly (please read the whole thing). Some details of his argument may be debatable, but as a whole, I find it quite compelling. Put simply, DRM doesn't work and it's bad for artists, businesses, and society as a whole.
Now, the IP industries that are pushing DRM are not that stupid. They know DRM is a fundamentally absurd proposition: the whole point of selling IP media is so that people can consume it. You can't make a system that will prevent people from doing so, as the whole point of having the media in the first place is so that people can use it. The only way to perfectly secure a piece of digital media is to make it unusable (i.e. the only perfectly secure system is a perfectly useless one). That's why DRM systems are broken so quickly. It's not that the programmers are necessarily bad, it's that the entire concept is fundamentally flawed. Again, the IP industries know this, which is why they pushed the Digital Millennium Copyright Act (DMCA). As with most laws, the DMCA is a complex beast, but what it boils down to is that no one is allowed to circumvent measures taken to protect copyright. Thus, even though the copy protection on DVDs is obscenely easy to bypass, it is illegal to do so. In theory, this might be fine. In practice, this law has extended far beyond what I'd consider reasonable and has also been heavily abused. For instance, some software companies have attempted to use the DMCA to prevent security researchers from exposing bugs in their software. The law is sometimes used to silence critics by threatening them with a lawsuit, even though no copyright infringement was committed. The Chilling Effects project seems to be a good source for information regarding the DMCA and its various effects.
DRM combined with the DMCA can be stifling. A good example of how awful DRM is, and how the DMCA can affect the situation, is the Sony Rootkit Debacle. Boing Boing has a ridiculously comprehensive timeline of the entire fiasco. In short, Sony put DRM on certain CDs. The general idea was to prevent people from putting the CDs in their computer and ripping them to MP3s. To accomplish this, Sony surreptitiously installed software on customers' computers without their knowledge. A security researcher happened to notice this, and in researching the matter found that the Sony DRM had installed a rootkit that made the computer vulnerable to various attacks. Rootkits are black-hat cracker tools used to disguise the workings of their malicious software. Attempting to remove the rootkit broke the Windows installation. Sony reacted slowly and poorly, releasing a service pack that supposedly removed the rootkit, but which actually opened up new security vulnerabilities. And it didn't end there. Reading through the timeline is astounding (as a result, I tend to shy away from Sony these days). Though I don't believe he was called on it, the security researcher who discovered these vulnerabilities was technically breaking the law, because the rootkit was intended to protect copyright.
A few months ago, my Windows computer died and I decided to give Linux a try. I wanted to see if I could get Linux to do everything I needed it to do. As it turns out, I could, but not legally. Watching DVDs on Linux is technically illegal, because I'm circumventing the copy protection on DVDs. Similar issues exist for other media formats. The details are complex, but in the end, it turns out that I'm not legally able to watch my legitimately purchased DVDs on my computer (I have since purchased a new computer that has an approved player installed). Similarly, if I were to purchase a song from the iTunes Music Store, it comes in a DRMed format. If I want to use that format on a portable device (let's say my phone, which doesn't support Apple's DRM format), I'd have to convert it to a format that my portable device could understand, which would be illegal.
Which brings me to my next point, which is that DRM isn't really about protecting copyright. I've already established that it doesn't really accomplish that goal (and indeed, even works against many of the reasons copyright was put into place), so why is it still being pushed? One can only really speculate, but I'll bet that part of the issue has to do with IP owners wanting to "undercut fair use and then create new revenue streams where there were previously none." To continue an earlier example, if I buy a song from the iTunes Music Store and I want to put it on my non-Apple phone (not that I don't want one of those), the music industry would just love it if I were forced to buy the song again, in a format that is readable by my phone. Of course, that format would be incompatible with other devices, so I'd have to purchase the song again if I wanted to listen to it on those devices. When put in those terms, it's pretty easy to see why IP owners like DRM, and given the average person's reaction to such a scheme, it's also easy to see why IP owners are always careful to couch the debate in terms of piracy. This won't last forever, but it could be a bumpy ride.
Interestingly enough, distributors of digital media like Apple and Yahoo have recently come out against DRM. For the most part, these are just symbolic gestures. Cynics will look at Steve Jobs' Thoughts on Music and say that he's just passing the buck. He knows customers don't like or understand DRM, so he's just making a calculated PR move by blaming it on the music industry. Personally, I can see that, but I also think it's a very good thing. I find it encouraging that other distributors are following suit, and I also hope and believe this will lead to better things. Apple has proven that there is a large market for legally purchased music files on the internet, and other companies have even shown that selling DRM-free files yields higher sales. Indeed, the emusic service sells high quality, variable bit rate MP3 files without DRM, and it has established emusic as the #2 retailer of downloadable music behind the iTunes Music Store. Incidentally, this was not done for pure ideological reasons - it just made business sense. As yet, these pronouncements are only symbolic, but now that online media distributors have established themselves as legitimate businesses, they have ammunition with which to challenge the IP holders. This won't happen overnight, but I think the process has begun.
Last year, I purchased a computer game called Galactic Civilizations II (and posted about it several times). This game was notable to me (in addition to the fact that it's a great game) in that it was the only game I'd purchased in years that featured no CD copy protection (i.e. DRM). As a result, when I bought a new computer, I experienced none of the usual fumbling for 16 digit CD Keys that I normally experience when trying to reinstall a game. Brad Wardell, the owner of the company that made the game, explained his thoughts on copy protection on his blog a while back:
I don't want to make it out that I'm some sort of kumbaya guy. Piracy is a problem and it does cost sales. I just don't think it's as big of a problem as the game industry thinks it is. I also don't think inconveniencing customers is the solution.

For him, it's not that piracy isn't an issue; it's that it's not worth imposing draconian copy protection measures that infuriate customers. The game sold much better than expected. I doubt this was because it lacked DRM, but I can guarantee one thing: people don't buy games because they want DRM. However, this shows that you don't need DRM to make a successful game.
The future isn't all bright, though. Peter Gutmann's excellent Cost Analysis of Windows Vista Content Protection provides a good example of how things could get considerably worse:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called "premium content", typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it's not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server).

This is infuriating. In case you can't tell, I've never liked DRM, but at least it could be avoided. I generally take articles like the one I'm referencing with a grain of salt, but if true, it means that the DRM in Vista is so oppressive that it will raise the price of hardware. And since Microsoft commands such a huge share of the market, hardware manufacturers have to comply, even though some people (Linux users, Mac users) don't need the draconian hardware requirements. This is absurd. Microsoft should have enough clout to stand up to the media giants; there's no reason the DRM in Vista has to be so invasive (or even exist at all). As Gutmann speculates in his cost analysis, some of the potential effects of this are particularly egregious, to the point where I can't see consumers standing for it.
My previous post dealt with Web 2.0, and I posted a YouTube video that summarized how changing technology is going to force us to rethink a few things: copyright, authorship, identity, ethics, aesthetics, rhetorics, governance, privacy, commerce, love, family, ourselves. All of these are true. Earlier, I wrote that the purpose of copyright was to benefit society, and that protection and expiration were both essential. The balance between protection and expiration has been upset by technology. We need to rethink that balance. Indeed, many people smarter than I already have. The internet is replete with examples of people who have profited by giving things away for free. Creative Commons allows you to share your work so that others can reuse and remix it, but I don't think it has been adopted to the extent that it should be.
To some people, reusing or remixing music, for example, is not a good thing. This is certainly a discussion worth having. Personally, I don't mind it. For an example of why, watch this video detailing the history of the Amen Break. There are amazing things that can happen as a result of sharing, reusing and remixing, and that's only a single example. The current copyright environment seems to stifle such creativity, not least because copyright lasts so long (currently the life of the author plus 70 years). In a world where technology has enabled an entire generation to accelerate the creation and consumption of media, it seems foolish to lock up so much material for what could easily be over a century. Despite all that I've written, I have to admit that I don't have a definitive answer. I'm sure I can come up with something that would work for me, but this is larger than me. We all need to rethink this, and many other things. Maybe that Web 2.0 thing can help.
Update: This post has mutated into a monster. Not only is it extremely long, but I reference several other long, detailed documents and even somewhere around 20-25 minutes of video. It's a large subject, and I'm certainly no expert. Also, I generally like to take a little more time when posting something this large, but I figured getting a draft out there would be better than nothing. Updates may be made...
Update 2.15.07: Made some minor copy edits, and added a link to an Ars Technica article that I forgot to add yesterday.
Posted by Mark on February 14, 2007 at 11:44 PM .: link :.
Wednesday, January 24, 2007
Top 10 Box Office Performance
So after looking at a bunch of top 10 films of 2006 lists, and compiling my own, I began to wonder just how popular these movies really were. Film critics are notorious for picking films that the average viewer thinks are boring or pretentious. Indeed, my list features a few such picks, and when I think about it, there are very few movies on the list that I'd give an unqualified recommendation. For instance, some of the movies on my list are very violent or otherwise graphic, and some people just don't like that sort of thing (understandably, of course). United 93 is a superb film, but not everyone wants to relive 9/11. And so on. As I mentioned before, top 10 lists are extremely personal and usually end up saying more about the person compiling the list than anything else, but I thought it would be interesting to see just how mainstream these lists really are. After all, there is a wealth of box office information available for every movie, and if you want to know how popular something is, economic data seems to be quite useful (though, as we'll see, perhaps not useful enough).
So I took nine top 10 lists (including my own) and compiled box office data from Box Office Mojo (since they don't always have budget information, I sometimes referenced IMDB or Wikipedia) and did some crunching (not much, I'm no statistician). I chose the lists of some of my favorite critics (like the Filmspotting guys and the local guy), and then threw in a few others for good measure (I wanted a New York critic, for instance).
The data collected includes domestic gross, budget and the number of theaters (widest release). From that data, I calculated the net gross and dollars per theater (DPT). You'd think this would be pretty conclusive data, but the more I thought about it, the more I realized just how incomplete a picture this paints. Remember, we're using this data to evaluate various top 10 lists, so when I chose domestic gross, I inadvertently skewed the evaluation against lists that featured foreign films (however, I am trying to figure out whose list works best in the U.S. so I think it is a fair metric). So the gross only gives us part of the picture. The budget is an interesting metric, as it provides information about how much money a film's backers thought it would make and it provides a handy benchmark with which to evaluate (unfortunately, I was not able to find budget figures for a number of the smaller films, further skewing the totals you'll see). Net Gross is a great metric because it incorporates a couple of different things: it's not just a measure of how popular a movie is, it's a measure of how popular a movie is versus how much it cost to make (i.e. how much a film's producers believed in the film). In the context of a top 10 list, it's almost like pretending that the list creator was the head of a studio who chose what films to greenlight. It's not a perfect metric, but it's pretty good. The number of theaters the film showed in is an interesting metric because it shows how much faith theater chains had in the movie (and in looking at the numbers, it seems that the highest grossing films also had the most theaters). However, this could again be misleading because it's only the widest release. I doubt there are many films where the number of theaters doesn't drop considerably after opening weekend. Dollars per theater is perhaps the least interesting metric, but I thought it interesting enough to include.
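The arithmetic behind the two derived metrics is simple. Here's a minimal sketch (the film names and figures below are made-up placeholders, not actual 2006 box office data):

```python
# Derived box-office metrics: net gross and dollars per theater (DPT).
# All figures are illustrative placeholders, not real data.

def net_gross(domestic_gross, budget):
    """How much a film earned beyond what it cost to make."""
    return domestic_gross - budget

def dollars_per_theater(domestic_gross, theaters):
    """Domestic gross divided by the widest theater count."""
    return domestic_gross / theaters

films = [
    # (title, domestic gross, budget, widest release)
    ("Hypothetical Sleeper Hit", 60_000_000, 8_000_000, 1_600),
    ("Hypothetical Flop", 25_000_000, 40_000_000, 2_500),
]

for title, gross, budget, theaters in films:
    print(f"{title}: net gross ${net_gross(gross, budget):,}, "
          f"DPT ${dollars_per_theater(gross, theaters):,.0f}")
```

Note that net gross can easily go negative (a film that hasn't recouped its budget), which is exactly the situation described below for Pan's Labyrinth and Children of Men at the time the data was gathered.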
One other thing to note is that I gathered all of this data earlier this week (Sunday and Monday), and some of the films just recently hit wide distribution (notably Pan's Labyrinth and Children of Men, neither of which have recouped costs yet) and will make more money. Some films will be re-released around Oscar season, as the studios seek to cash in on their award winning films.
I've posted all of my data on a public Google Spreadsheet (each list is on a separate tab), and I've linked each list below to their respective tab with all the data broken out. This table features the totals for the metrics I went over above: Domestic Gross, Budget, Net Gross, Theaters, and Dollars Per Theater (DPT).
This was quite an interesting exercise, and it would appear from the numbers, that perhaps not all film critics are as out of touch as originally thought. Or are they? Let's take a closer look.
Statistically, the biggest positive outliers appeared to be Little Miss Sunshine and Borat, and the biggest negative outliers appeared to be Flags of our Fathers and Children of Men (both of which will make more money, as they are currently in theaters).
Obviously, this list is not authoritative, and I've already spent too much time harping on the qualitative issues with my metrics, but I found it to be an interesting exercise (if I ever do something similar again, I'm going to need to find a way to automate some of the data gathering, though). Well, this pretty much shuts the door on the 2006 Kaedrin Awards season. I hope you enjoyed it.
Posted by Mark on January 24, 2007 at 11:40 PM .: link :.
Sunday, January 21, 2007
Best Films of 2006
Top 10 lists are intensely personal affairs. When it comes to movies (or art in general), you have to walk the narrow line between subjective and objective evaluations, and I inevitably end up with a list that says more about me than the movies I selected. James Berardinelli says it well:
I would be surprised if anyone else (critic or otherwise) has an identical Top 10 list to mine. But therein lies the enjoyment of examining individual Top 10 lists: they provide insight into the mindset of the one who has assembled them. It doesn't matter whether one agrees with their choices or not; that's irrelevant. It's about learning something about a person through the movies they like. I don't like "group" lists. To me, they are valueless - a generic popularity contest that reveals nothing.

I actually kinda like "group" lists, but I digress. The point is that these are generally movies that I like or otherwise moved me. Context matters. Some films are on the list because I had low expectations that were exceeded beyond imagination, and some are there because I had a great theater-going experience (apparently a rarity in this day and age). As I've done in years past, my top 10 is listed in a roughly reverse order, with the best last.
Top 10 Movies of 2006
* In roughly reverse order
As I've already mentioned above, the first two of the Honorable Mentions listed below could probably be interchanged with numbers 9 or 10 in the top 10. Part of why it was so hard to choose was that these four films are just so different from one another. Indeed, the last two have changed back and forth several times (I started this list a while ago).
These are all decent films, but for some reason, I don't find them as engaging as everyone else does.
In any case, comments are welcome. Feel free to express your outrage or approval in the comments.
Posted by Mark on January 21, 2007 at 10:06 PM .: link :.
Sunday, December 03, 2006
Aliens Board Game
A little while ago, I became reacquainted with a game that I used to play often - the Aliens board game. I haven't played the game in about ten years or so, and I found it interesting for a number of reasons. Gameplay is a bit of a mixture of other gaming styles, combining the arbitrary nature and futility of board games with the wonky dice and damage-table style of RPGs (Ok, you shot the alien with your pulse rifle. Roll for acid!) I noticed a few things about the game that I never did before, some good, some bad.
Before I get into those observations, I'll have to explain the mechanics of the game a bit. The game comes with a few maps and there are a couple of scenarios that you can play, each of which basically re-enacts a memorable scene from the movie where the colonial marines get their asses handed to them (i.e. the initial encounter with the aliens under the reactor, the later encounter and retreat through the air ducts, and a single player scenario where Ripley rescues Newt and fights the alien queen). There was also an expansion pack which featured an additional scenario. Since we'd all played the game countless times in our youth, we decided to mix things up a little and combine the regular map with the expansion map. Basically, we start at one end of the map and have to make our way to the other end. This is easier said than done.
We hand out all the player cards randomly. Most of the characters are colonial marines, but there is a surprising amount of variability between characters and their abilities. Most characters are given two moves per turn, though Ripley, Apone, and Bishop have three. In terms of weaponry, some of the characters are significantly better than others. Hicks, Ripley and Apone have quality weapons to choose from. Drake and Vasquez have those awesome smart guns. On the opposite end of the spectrum, there's the Burke character, who has no weapons (he's essentially used as alien bait, as he should be). Since there were only a few of us, we each got multiple characters to play with (which is a good thing, for reasons I'll get into in a moment). I ended up with three relatively lame characters: Corporal Dietrich (who was armed with only a pistol), Lieutenant Gorman (whose Pulse Rifle was the most powerful weapon in my group), and Private Wierzbowski (who was armed with an incinerator). Gorman's an ok character to play, except he's a tool in the movie. Dietrich isn't quite as useless as Burke, but damn near so. Wierzbowski isn't the greatest character to play, but he's awesome in the movie (The Wierzbowski Hunters are one of those wonderful phenomena that could only be possible on the internet).
That's it man, game over man, game over! *
As already mentioned, our goal is to make our way from one side of the map to the other. Every turn, four aliens are added to the board in semi-random places (as the game proceeds, more aliens are added per turn). While most of the players only have two moves per turn, the aliens have four moves. If an alien enters on or next to your position, you have to roll a ten sided die. Most of the time, the result is that you are "grabbed" by the alien. Essentially, you need to be rescued by one of the other players, illustrating the cooperative nature of the game.
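That per-turn contact check can be sketched roughly as follows. The post doesn't reproduce the game's actual roll table, so the "grabbed on 7 or less" threshold below is an invented stand-in for "most of the time":

```python
import random

SPAWNS_PER_TURN = 4   # aliens added to the board each turn (rises later in the game)
ALIEN_MOVES = 4       # aliens get 4 moves; most marines only get 2
GRAB_THRESHOLD = 7    # hypothetical: "grabbed" on a d10 roll of 7 or less

def d10():
    """Roll the ten sided die the game uses for contact checks."""
    return random.randint(1, 10)

def alien_contact():
    """Resolve an alien entering on or next to a marine's position."""
    if d10() <= GRAB_THRESHOLD:
        return "grabbed"   # must be rescued by another player
    return "escaped"

random.seed(1)
results = [alien_contact() for _ in range(1000)]
# With this invented threshold, about 70% of contacts end in a grab,
# which matches the "most of the time" feel described above.
print(results.count("grabbed"), "of 1000 contacts ended in a grab")
```

The spawn rate versus move rate is the interesting design lever: four new aliens per turn, each moving twice as fast as a marine, is what produces the movie's sense of being slowly overwhelmed.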
So the game begins, and the initial four aliens are inserted onto the board. The way the game goes for a while is that we take out all of the aliens, and move forward if possible. Eventually my characters are leading the pack and make it to the next map (halfway there!), and the DM equivalent decides that we need to start adding more aliens per turn. At this point, we're fending off aliens from all directions, and we start to take on more and more casualties. Some aspects of the game were becoming clearer to me.
We had come to a standoff. The second map had more walls and obstructed views, so it took the aliens longer to reach us, but we also couldn't pick them off from afar. Wierzbowski finally proved useful, as you can use the incinerator to set up a "fire wall" that the aliens can't cross for a turn (this ability is particularly useful on the second map because of all the choke points). Still, our ranks were being worn down. I was able to block the forward onslaught, but the aliens came in on the flank and mounted a devastating attack. More than 50% of the original team had perished, and some of us were wounded (which makes it harder to hit targets). Dietrich had become completely disabled, so I had Wierzbowski pick her up in the hopes of feeding her to an alien if I got into trouble.
The game was running a little long at this point, so the DM decided to insert the alien queen (this isn't really supposed to happen, but we like a challenge). The queen is significantly more difficult to deal with, and she managed to kill the remainder of our team... except Wierzbowski who had made his way into a room with a single block choke point. Using the firewall ability, I was able to make it to the final hallway before being attacked. I managed to take out a couple of aliens with my incinerator, but I had to sacrifice Dietrich in order to get away. Alas, the queen had made her way around, and the valiant Wierzbowski finally succumbed to her deadly advance.
Our variations on the rules aside, it's actually a pretty well balanced game. The aliens are appropriately formidable, and they only become more so as the game progresses. As in the movie, you can't really complete a scenario without taking significant casualties, and even though our team did pretty well, there's no guarantee that we'd have made it (even if we didn't add the queen). The game was made in 1989 and is no longer available. You can find it on eBay, but it commands a relatively high price tag... It's an interesting game, but it's not really worth that price these days. In the 90s, the game was a lot of fun. These days, other games have far surpassed it (especially video games). Still, it's nice to play an old favorite every now and again.
* I should note that the game does not come with those nice figurines in the picture above. The game has these chintzy cardboard pieces with pictures of the characters and aliens. Functional, but not as nice as the figurines. Also, yes, I'm a huge nerd and can name all the colonial marines without having to look them up.
Posted by Mark on December 03, 2006 at 08:04 PM .: link :.
Wednesday, November 29, 2006
Animation Marathon: Grave of the Fireflies
Of the six films chosen for the Animation Marathon, Grave of the Fireflies was the only one that I hadn't heard much about. The only thing I knew about it was that it was sad. Infamously sad. After watching the movie, I can say that it certainly does live up to those expectations. It's a heartbreaking movie, all the more so because it's animated. Spoilers ahead...
The film begins by showing us a 14 year old boy lying dead on a subway platform, so you can't really say that the filmmakers were trying to hide the tragedy in this film. The boy's name is Seita, and through flashbacks, we learn how he came to meet his end. Set during the last days of World War II, the story is kicked off by the American firebombing of Seita's city. Seita's father is in the Japanese Navy and Seita's mother is horribly wounded by the bombing, eventually succumbing to her wounds. The entire city is destroyed, leaving Seita and his little 4-year-old sister Setsuko homeless. For a time, they take refuge with an Aunt, who seems nice at first, but gets grumpier as she realizes that Seita isn't willing to contribute to the war effort, or to help around the house. Eventually, Seita finds an unused bomb shelter where he can live with his sister without being a burden on their Aunt. It being wartime, food is scarce, and Seita struggles and ultimately fails to support his sister.
This isn't quite like any other animated movie I've ever seen. It's a powerful and evocative film. It has moments of great beauty, even though it's also quite sad. It displays a patience that's not common in animated movies. There are contemplative pauses. Characters and their actions are allowed time to breathe. The animations are often visually striking, even when they're used in service of less-than-pleasant events (such as the landscape shot of the city as it burns).
After I finished the film, I was infuriated. Obviously no one really enjoys watching two kids starve, suffer, and die after losing their family and home to a war, but it's not just sad. As I said before, it's infuriating. I was so pissed off at Seita because he made a lot of boneheaded, prideful decisions that were ultimately responsible for the death of his sister (and eventually, himself). At one point in the film, as Seita begs a farmer for food, the farmer tells him to swallow his pride and go back to his aunt. Seita refuses, and hence the tragedy. But at least he's young and thus reckless, which is understandable. While I was upset at Seita's actions, I really couldn't blame only him, and the film did prompt some empathy for that character. I can't say the same of the Aunt. Who lets two young kids go off to live by themselves in wartime? Yeah, Seita wasn't pulling his weight, but hell, your job as an adult is to teach children about responsibilities... It was wartime for crying out loud. There had to be plenty to do. Yeah, it's sad. Especially when it comes to Setsuko, who was only 4 years old. But other than that, it was infuriating, and I wasn't sure how I was going to rate the movie. Then I read about some context in the Onion A.V. Club review of the movie (emphasis mine):
Adapting a semi-autobiographical book by Akiyuki Nosaka, Takahata scripted and directed Fireflies while his Studio Ghibli partner, Hayao Miyazaki, was scripting and directing his own classic, My Neighbor Totoro. The two films were produced and screened as a package, because Totoro was considered a difficult sell, while Fireflies, as an "educational" adaptation of a well-known historical book, had a guaranteed audience. But while both films won high praise at home and abroad, it's hard to imagine the initial impact of watching them back to back. Totoro is a bubbly, joyous film about the wonders of childhood, while Fireflies follows two children as they starve, suffer, and die after American planes firebomb their town.

It turns out that my feelings about the film were exactly what the filmmakers were going for, which kinda turned me around and made me realize that the film really is brilliant (in other words, my expectation of the film as having to be "sad" made me feel strange because, while it was certainly sad, it was also infuriating. Now that I know the infuriating part was intentional, it makes a lot more sense.) As the Onion article brilliantly summarizes, it's "not so much an anti-war statement as it is a protest against basic human selfishness, and the way it only worsens during trying times." And that's sad, but it's also quite annoying.
The animation is very well done, and while some might think that something this serious would not be appropriate in animation, I'm not sure it would work any other way. One of the most beautiful scenes in the film shows the two children using fireflies to light their abandoned bomb shelter. It's a scene I think would look cheesy and fake in a live action film, but which works wonderfully in an animated film. Roger Ebert describes it well:
It isn't the typical material of animation. But for "Grave of the Fireflies," I think animation was the right choice. Live action would have been burdened by the weight of special effects, violence and action. Animation allows Takahata to concentrate on the essence of the story, and the lack of visual realism in his animated characters allows our imagination more play; freed from the literal fact of real actors, we can more easily merge the characters with our own associations.

In the end, while this is definitely an excellent film, I find it difficult to actually recommend it (for what I hope are obvious reasons). This type of movie is not for everyone, and while I do think it is brilliantly executed, I don't especially want to watch it again. Ever. In an odd sort of way, that's a testament to how well the film does what it does. (***1/2)
Filmspotting's review is not up yet, but should be up tomorrow. Check it out, as they are also reviewing The Fountain (which I reviewed on Monday).
(In a strange stroke of coincidence, I had actually watched Miyazaki's My Neighbor Totoro just a few days before Fireflies, not quite mimicking the back to back screenings mentioned in the Onion article, but close enough to know that it was an odd combo indeed (and I can't imagine the playful and fun Totoro being a "harder sell" than the gut-punch of Fireflies.))
Sunday, October 29, 2006
Adventures in Linux, Paradox of Choice Edition
Last week, I wrote about the paradox of choice: having too many options often leads people to something akin to buyer's remorse (paralysis, regret, dissatisfaction, etc...), even if their choice was ultimately a good one. I had attended a talk given by Barry Schwartz on the subject (which he's written a book about) and I found his focus on the psychological impact of making decisions fascinating. In the course of my ramblings, I made an offhand comment about computers and software:
... the amount of choices in assembling your own computer can be stifling. This is why computer and software companies like Microsoft, Dell, and Apple (yes, even Apple) insist on mediating the user's experience with their hardware & software by limiting access (i.e. by limiting choice). This turns out to be not so bad, because the number of things to consider really is staggering.

The foolproofing that these companies do can sometimes be frustrating, but for the most part, it works out well. Linux, on the other hand, is the poster child for freedom and choice, and that's part of why it can be a little frustrating to use, even if it is technically a better, more stable operating system (I'm sure some OSX folks will get a bit riled with me here, but bear with me). You see this all the time with open source software, especially when switching from regular commercial software to open source.
One of the admirable things about Linux is that it is very well thought out, and nearly every design decision is made for a specific reason. The problem, of course, is that those reasons tend to have something to do with making programmers' lives easier... and most regular users aren't programmers. I dabble a bit here and there, but not enough to really benefit from these efficiencies. I learned most of what I know working with Windows and Mac OS, so when some enterprising open source developer decides that he doesn't like the way a certain Windows application works, you end up seeing some radical new design or paradigm which needs to be learned in order to use it. In recent years a lot of work has gone into making Linux friendlier for the regular user, and usability (especially during the installation process) has certainly improved. Still, a lot of room for improvement remains, and I think part of that has to do with the number of choices people have to make.
Let's start at the beginning and take an old Dell computer that we want to install Linux on (this is basically the computer I'm running right now). First question: which distribution of Linux do we want to use? Well, to be sure, we could start from scratch and just install the Linux Kernel and build upwards from there (which would make the process I'm about to describe even more difficult). However, even Linux has its limits, so there are lots of distributions of Linux which package the OS, desktop environments, and a whole bunch of software together. This makes things a whole lot easier, but at the same time, there are a ton of distributions to choose from. The distributions differ in a lot of ways for various reasons, including technical (issues like hardware support), philosophical (some distros pooh-pooh commercial involvement) and organizational (things like support and updates). These are all good reasons, but when it's time to make a decision, what distro do you go with? Fedora? Suse? Mandriva? Debian? Gentoo? Ubuntu? A quick look at Wikipedia reveals a comparison of Linux distros, but there are a whopping 67 distros listed and compared in several different categories. Part of the reason there are so many distros is that there are a lot of specialized distros built off of a base distro. For example, Ubuntu has several distributions, including Kubuntu (which defaults to the KDE desktop environment), Edubuntu (for use in schools), Xubuntu (which uses yet another desktop environment called Xfce), and, of course, Ubuntu: Christian Edition (linux for Christians!).
So here's our first choice. I'm going to pick Ubuntu, primarily because their tagline is "Linux for Human Beings" and hey, I'm human, so I figure this might work for me. Ok, and it has a pretty good reputation for being an easy to use distro focused more on users than things like "enterprises."
Alright, the next step is to choose a desktop environment. Lucky for us, this choice is a little easier, but only because Ubuntu splits desktop environments into different distributions (unlike many others which give you the choice during installation). For those who don't know what I'm talking about here, I should point out that a desktop environment is basically an operating system's GUI - it uses the desktop metaphor and includes things like windows, icons, folders, and abilities like drag-and-drop. Microsoft Windows and Mac OSX are desktop environments, but they're relatively locked down (to ensure consistency and ease of use (in theory, at least)). For complicated reasons I won't go into, Linux has a modular system that allows for several different desktop environments. As with linux distributions, there are many desktop environments. However, there are really only two major players: KDE and Gnome. Which is better appears to be a perennial debate amongst linux geeks, but they're both pretty capable (there are a couple of other semi-popular ones like Xfce and Enlightenment, and then there's the old standby, twm (Tom's Window Manager)). We'll just go with the default Gnome installation.
Note that we haven't even started the installation process and if we're a regular user, we've already made two major choices, each of which will make you wonder things like: Would I have this problem if I installed Suse instead of Ubuntu? Is KDE better than Gnome?
But now we're ready for installation. This, at least, isn't all that bad, depending on the computer you're starting with. Since we're using an older Dell model, I'm assuming that the hardware is fairly standard stuff and that it will all be supported by my distro (if I were using a more bleeding edge type box, I'd probably want to check out some compatibility charts before installing). As it turns out, Ubuntu and its focus on creating a distribution that human beings can understand has a pretty painless installation. It was actually a little easier than Windows, and when I was finished, I didn't have to remove the mess of icons and trial software offers (purchasing a Windows PC through someone like HP is apparently even worse). When you're finished installing Ubuntu, you're greeted with a desktop that looks like this (click the pic for a larger version):
No desktop clutter, no icons, no crappy trial software. It's beautiful! It's a little different from what we're used to, but not horribly so. Windows users will note that there are two bars, one on the top and one on the bottom, but everything is pretty self explanatory and this desktop actually improves on several things that are really strange about Windows (i.e. to turn off your computer, first click on "Start!"). Personally, I think having two toolbars is a bit much so I get rid of one of them, and customize the other so that it has everything I need (I also put it at the bottom of the screen for several reasons I won't go into here as this entry is long enough as it is).
Alright, we're almost home free, and the installation was a breeze. Plus, lots of free software has been installed, including Firefox, Open Office, and a bunch of other good stuff. We're feeling pretty good here. I've got most of my needs covered by the default software, but let's just say we want to install Amarok, so that we can update our iPod. Now we're faced with another decision: How do we install this application? Since Ubuntu has so thoughtfully optimized their desktop for human use, one of the things we immediately notice in the "Applications" menu is an option which says "Add/Remove..." and when you click on it, a list of software comes up and it appears that all you need to do is select what you want and it will install it for you. Sweet! However, the list of software there doesn't include every program, so sometimes you need to use the Synaptic package manager, which is also a GUI application installation program (though it appears to break each piece of software into smaller bits). Also, in looking around the web, you see that someone has explained that you should download and install software by typing this in the command line: apt-get install amarok. But wait! We really should be using the aptitude command instead of apt-get to install applications.
If you're keeping track, that's four different ways to install a program, and I haven't even gotten into repositories (main, restricted, universe, multiverse, oh my!), downloadable package files (these operate more or less the way a Windows user would download a .exe installation file, though not exactly), let alone downloading the source code and compiling (sounds fun, doesn't it?). To be sure, they all work, and they're all pretty easy to figure out, but there's little consistency, especially when it comes to support (most of the time, you'll get a command line in response to a question, which is completely at odds with the expectations of someone switching from Windows). Also, in the case of Amarok, I didn't fare so well (for reasons belabored in that post).
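For reference, here's a sketch of those four routes side by side, using Amarok as the example package. The menu paths and commands are the stock Ubuntu tools as I remember them from this era, so treat the exact wording as approximate:

```shell
# Four ways to install the same application on Ubuntu (circa 2006):

# 1. Applications > Add/Remove...         - curated GUI list (simplest)
# 2. System > Administration > Synaptic   - full GUI package manager

# 3. The classic command-line front end:
sudo apt-get install amarok

# 4. The newer front end, with smarter dependency resolution:
sudo aptitude install amarok
```

The saving grace is that all four routes ultimately talk to the same underlying package database, which is why you can mix and match them; the inconsistency is in the interfaces, not the packages themselves.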
Once installed, most software works pretty much the way you'd expect. As previously mentioned, open source developers sometimes get carried away with their efficiencies, which can sometimes be confusing to a newbie, but for the most part, it works just fine. There are some exceptions, like the absurd Blender, but then, that's not exactly a hugely popular application that everyone needs.
Believe it or not, I'm simplifying here. There are that many choices in Linux. Ubuntu tries its best to make things as simple as possible (with considerable success), but when using Linux, it's inevitable that you'll run into something that requires you to break down the metaphorical walls of the GUI and muck around in the complicated swarm of text files and command lines. Again, it's not that difficult to figure this stuff out, but all these choices contribute to the same decision fatigue I discussed in my last post: anticipated regret (there are so many distros - I know I'm going to choose the wrong one), actual regret (should I have installed Suse?), dissatisfaction, escalation of expectations (I've spent so much time figuring out what distro to use that it's going to perfectly suit my every need!), and leakage (i.e. a bad installation process will affect what you think of a program, even after installing it - your feelings before installing leak into the usage of the application).
None of this is to say that Linux is bad. It is free, in every sense of the word, and I believe that's a good thing. But if they ever want to create a desktop that will rival Windows or OSX, someone needs to create a distro that clamps down on some of these choices. Or maybe not. It's hard to advocate something like this when you're talking about software that is so deeply predicated on openness and freedom. However, as I concluded in my last post:
Without choices, life is miserable. When options are added, welfare is increased. Choice is a good thing. But too much choice causes the curve to level out and eventually start moving in the other direction. It becomes a matter of tradeoffs. Regular readers of this blog know what's coming: We don't so much solve problems as we trade one set of problems for another, in the hopes that the new set of problems is more favorable than the old.

Choice is a double-edged sword, and by embracing that freedom, Linux has to deal with the bad as well as the good (just as Microsoft and Apple have to deal with the bad aspects of suppressing freedom and choice). Is it possible to create a Linux distro that is as easy to use as Windows or OSX while retaining the openness and freedom that makes it so wonderful? I don't know, but it would certainly be interesting.
Sunday, October 22, 2006
The Paradox of Choice
At the UI11 Conference I attended last week, one of the keynote presentations was made by Barry Schwartz, author of The Paradox of Choice: Why More Is Less. Though he believes choice to be a good thing, his presentation focused more on the negative aspects of offering too many choices. He walks through a number of examples that illustrate the problems with our "official syllogism" which is:
So how do we react to all these choices? Luke Wroblewski provides an excellent summary, which I will partly steal (because, hey, he's stealing from Schwartz after all):
Another example is my old PC which has recently kicked the bucket. I actually assembled that PC from a bunch of parts, rather than going through a mainstream company like Dell, and the number of components available would probably make the Circuit City stereo example I gave earlier look tiny by comparison. Interestingly, this diversity of choices for PCs is often credited as part of the reason PCs overtook Macs:
Back in the early days of Macintoshes, Apple engineers would reportedly get into arguments with Steve Jobs about creating ports to allow people to add RAM to their Macs. The engineers thought it would be a good idea; Jobs said no, because he didn't want anyone opening up a Mac. He'd rather they just throw out their Mac when they needed new RAM, and buy a new one.

But as Schwartz would note, the amount of choices in assembling your own computer can be stifling. This is why computer and software companies like Microsoft, Dell, and Apple (yes, even Apple) insist on mediating the user's experience with their hardware by limiting access (i.e. by limiting choice). This turns out to be not so bad, because the number of things to consider really is staggering. So why was I so happy with my computer? Because I really didn't make many of the decisions - I simply went over to Ars Technica's System Guide and used their recommendations. When it comes time to build my next computer, what do you think I'm going to do? Indeed, Ars is currently compiling recommendations for their October system guide, due out sometime this week. My new computer will most likely be based off of their "Hot Rod" box. (Linux presents some interesting issues in this context as well, though I think I'll save that for another post.)
So what are the lessons here? One of the big ones is to separate the analysis from the choice by getting recommendations from someone else (see the Ars Technica example above). In the market for a digital camera? Call a friend (preferably one who is into photography) and ask them what to get. Another thing that strikes me is that just knowing about this can help you overcome it to a degree. Try to keep your expectations in check, and you might open up some room for pleasant surprises (doing this is surprisingly effective with movies). If possible, try using the product first (borrow a friend's, use a rental, etc...). Don't try to maximize the results so much; settle for things that are good enough (this is what Schwartz calls satisficing).
Without choices, life is miserable. When options are added, welfare is increased. Choice is a good thing. But too much choice causes the curve to level out and eventually start moving in the other direction. It becomes a matter of tradeoffs. Regular readers of this blog know what's coming: We don't so much solve problems as we trade one set of problems for another, in the hopes that the new set of problems is more favorable than the old. So where is the sweet spot? That's probably a topic for another post, but my initial thoughts are that it would depend heavily on what you're doing and the context in which you're doing it. Also, if you were to take a wider view of things, there's something to be said for maximizing options and then narrowing the field (a la the free market). Still, the concept of choice as a double-edged sword should not be all that surprising... after all, freedom isn't easy. Just ask Spider-Man.
Sunday, June 18, 2006
David Wong's article on the coming video game crash seems to have inspired Steven Den Beste, who agrees with Wong that there will be a gaming crash and also thinks that the same problems affect other forms of entertainment. The crux of the problem appears to be novelty. Part of the problem appears to be evolutionary as well. As humans, we are conditioned for certain things, and it seems that two of our instincts are conflicting.
The first instinct is the human tendency to rely on induction. Correlation does not imply causation, but most of the time, we act like it does. We develop a complex set of heuristics and guidelines that we have extrapolated from past experiences. We do so because circumstances require us to make all sorts of decisions without possessing the knowledge or understanding necessary to provide a correct answer. Induction allows us to operate in situations which we do not understand. Psychologist B. F. Skinner famously explored and exploited this trait in his experiments. Den Beste notes this in his post:
What you do is to reward the animal (usually by giving it a small amount of food) for progressively behaving in ways which is closer to what you want. The reason Skinner studied it was because he (correctly) thought he was empirically studying the way that higher thought in animals worked. Basically, they're wired to believe that "correlation often implies causation". Which is true, by the way. So when an animal does something and gets a reward it likes (e.g. food) it will try it again, and maybe try it a little bit differently just to see if that might increase the chance or quantity of the reward.

So we're hard-wired to create these heuristics. This has many implications, from Cargo Cults to Superstition and Security Beliefs.
The second instinct is the human drive to seek novelty, also noted by Den Beste:
The problem is that humans are wired to seek novelty. I think it's a result of our dietary needs. Lions can eat zebra meat exclusively their entire lives without trouble; zebras can eat grass exclusively their entire lives. They don't need novelty, but we do. Primates require a quite varied diet in order to stay healthy, and if we eat the same thing meal after meal we'll get sick. Individuals who became restless and bored with such a diet, and who sought out other things to eat, were more likely to survive. And when you found something new, you were probably deficient in something that it provided nutritionally, so it made sense to like it for a while -- until boredom set in, and you again sought out something new.

The drive for diversity affects more than just our diet. Genetic diversity has been shown to impart broader immunity to disease. Children from diverse parentage tend to develop a blend of each parent's defenses (this has other implications, particularly for the tendency for human beings to work together in groups). The biological benefits of diversity are not limited to humans either. Hybrid strains of many crops have been developed over the years because by selectively mixing the best crops to replant the next year, farmers were promoting the best qualities in the species. The simple act of crossing different strains resulted in higher yields and stronger plants.
The problem here is that evolution has made the biological need for diversity and novelty dependent on our inductive reasoning instincts. As such, what we find is that those we rely upon for new entertainment, like Hollywood or the video game industry, are constantly trying to find a simple formula for a big hit.
It's hard to come up with something completely new. It's scary to even make the attempt. If you get it wrong you can flush amazingly large amounts of money down the drain. It's a long-shot gamble. Every once in a while something new comes along, when someone takes that risk, and the audience gets interested...

Indeed, the majority of big films made today appear to be remakes, sequels or adaptations. One interesting thing I've noticed is that something new and exciting often fails at the box office. Such films usually gain a following on video or television though. Sometimes this is difficult to believe. For instance, The Shawshank Redemption is a very popular film. In fact, it occupies the #2 spot (just behind The Godfather) on IMDB's top rated films. And yet, the film only made $28 million (ranked 52 in 1994) in theaters. To be sure, that's not a modest chunk of change, but given the universal love for this film, you'd expect that number to be much higher. I think part of the reason this movie failed at the box office was that marketers are just as susceptible to these novelty problems as everyone else. I mean, how do you market a period prison drama that has an awkward title and no big stars? It doesn't sound like a movie that would be popular, even though everyone seems to love it.
Which brings up another point. Not only is it difficult to create novelty, it can also be difficult to find novelty. This is the crux of the problem: we require novelty, but we're programmed to seek out new things via correlation. There is no place to go for perfect recommendations, and novelty for the sake of novelty isn't necessarily enjoyable. I can seek out some bizarre musical style and listen to it, but the simple fact that it is novel does not guarantee that it will be enjoyable. I can't rely upon how a film is marketed because that is often misleading or, at least, not really representative of the movie (or whatever). Once we do find something we like, our instinct is often to exhaust that author or director or artist's catalog. Usually, by the end of that process, the artist's work begins to seem a little stale, for obvious reasons.
Seeking out something that is both novel and enjoyable is more difficult than it sounds. It can even be a little scary. Many times, things we think will be new actually turn out to be retreads. Other times, something may actually be novel, but unenjoyable. This leads to another phenomenon that Den Beste mentions: the "Unwatched pile." Den Beste is talking about Anime, and at this point, he's begun to accumulate a bunch of anime DVDs which he's bought but never watched. I've had similar things happen with books and movies. In fact, I have several books on my shelf, just waiting to be read, but for some of them, I'm not sure I'm willing to put in the time and effort to read them. Why? Because, for whatever reason, I've begun to experience some set of diminishing returns when it comes to certain types of books. These are similar to other books I've read, and thus I probably won't enjoy these as much (even if they are good books).
The problem is that we know something novel is out there, it's just a matter of finding it. At this point, I've gotten sick of most of the mass consumption entertainment, and have moved on to more niche forms of entertainment. This is really a signal-versus-noise problem, a matter of traversing the long tail. An analysis problem. What's more, with globalization and the internet, the world is getting smaller... access to new forms of entertainment keeps popping up (for example, here in the US, anime was around 20 years ago, but it was nowhere near as common as it is today). This is essentially a subset of a larger information aggregation and analysis problem that we're facing. We're adrift in a sea of information, and must find better ways to navigate.
Thursday, May 25, 2006
Pitfall II: Lost Caverns
Perhaps I've gone too far. I'm in an underground cavern beneath Peru. It seems to be a complex maze, perhaps eight chambers wide and over three times as deep. Niece Rhonda has disappeared, along with Quickclaw, our cowardly cat. I am beset by all manner of subterranean creatures in this vast, ancient labyrinth. And all because of a rock--the Raj diamond. It was stolen a century ago, and hidden here.

Pitfall II: Lost Caverns. The original Pitfall! set the standard for Atari adventure games as it sent our intrepid hero, an Indiana Jones clone named Pitfall Harry, to a jungle where he must avoid the likes of scorpions, crocodiles, quicksand and tar pits (amongst other things). The goal of the first game was simply to collect 32 bars of gold in 20 minutes without dying 3 times, a typical Atari-era video game goal. The sequel improves upon nearly every aspect of the original game and far surpasses the competition.
To start, the game actually has a legitimate goal, not some arbitrary point score. Your goal is to collect the Raj diamond, rescue your niece Rhonda and also your cowardly cat Quickclaw (with an added bonus for collecting a rare rat and the usual gold bars). What's more, you are given an infinite amount of lives and time with which to accomplish these goals (there are scattered checkpoints and when you die, you are transported back to the last one you reached, deducting points as you go). You're given a few new abilities (like the ability to swim) and you face a new series of hazards, including poisonous frogs, bats, condors and electric eels.
From a technological standpoint, Pitfall II pushed the envelope both visually and musically. It was one of the largest games ever created for the 2600 (a whopping 10k), and it included features like smooth scrolling, an expansive map, relatively high-resolution graphics, varying scenery, detailed animations and a first-rate musical score that was detailed and varied (quite an accomplishment considering that most 2600 games did not feature music at all). Obviously, all of these things are trivial by current standards, but at the time, this was an astounding feat. Indeed, it was only made possible because of custom hardware built inside the game cartridge that enhanced the 2600's video and audio capabilities.
You start the game in the jungle. In a perverse maneuver, the game's designers made sure that you could see Quickclaw (one of your primary objectives) immediately beneath your starting point, but to actually reach him you must traverse the entire map!
So close, yet so far away...
Again, the sequel imbues Pitfall Harry with a few extra abilities, including the ability to swim. Naturally, this benefit does not come without danger, as shown by the electric eel swimming alongside our hero (you can't see it in the screenshot, but the eel alternates between a white squiggly line and a black squiggly line, thus conveying its electric nature). Also of note is the rather nice graphical element of the waterfall.
Swimming with an electric eel
As you explore the caverns, you run across various checkpoints marked with a cross. When you touch a cross, it becomes your new starting point whenever you die.
I think that green thing is supposed to be a poison frog.
At various points in the game you are faced with a huge, vertical open space. Sometimes you just have to jump. One of the great things about this game, though, is that there is a surprising amount of freedom of movement. You could, if you wanted, just take the ladder down to the bottom of the cavern instead of jumping (though at one point, if you want to get the Raj ring, you'll need to face the abyss). Plus, there are all sorts of gold bars hidden around the caves in places that you don't have to go. Obviously, there are a limited number of specific paths you can take - it's no GTA III - but given the constraints at the time, this was a neat aspect of the game.
Stepping into the abyss
Another innovation in Pitfall II is Harry's ability to grab onto a rising balloon and ride it to the top of the cavern (a necessary step at one point), dodging bats along the way. A pretty unique and exciting sequence for its time.
That's some powerful helium in that balloon
The valiant Pitfall Harry, about to rescue his niece Rhonda.
The designers' cruel sense of placement strikes again. I can see the Raj diamond, but how do you get there? Luckily, the game's freedom of movement allows you to backtrack if you want (and when you want).
Curse you, game designers!
The final portion of the map is still, to this day, challenging. Up until this point in the game, you've only had to dodge a bat here, a condor there. This section requires you to really get your timing and reflexes in order, as you must complete a long sequence of evasions before you get to the top. Nevertheless, success was imminent.
Victory is mine!
Naturally, the game does not hold up against the games of today in terms of technology or gameplay, but what is remarkable about this game is how close it got. And that it did so at a time when many of these concepts were unheard-of. Sure, there are still some elements taken from the "Do it again, stupid" school of game design, but given the constraints of the 6 year old hardware and the fact that nearly every other game ever released for the console was much worse in this respect, I think it's worth cutting the game some slack (plus, as Shamus notes in the referenced post, these sorts of things are still common today!)
Everything about this game, from the packaging and manual (which is actually an excellent document done in the style of Pitfall Harry's aforementioned diary) to the graphics and music to the innovative gameplay and freedom of movement, is exceptional. Without a doubt, my favorite game for the 2600. Stay tuned for the honorable mentions!
Wednesday, March 01, 2006
GalCiv II: Rise of the Kaedrinians!
Galactic Civilizations II continues to occupy the majority of my free time, and I wanted to try showing a game example (similar to this one by one of the game's creators, though my example won't be as thorough). I'll be showing how I was able to secure good long term prospects at the beginning of my second game.
I played my first game as the Terran Alliance (humans), and one of the most enjoyable things I've noticed about the game is the ability to customize various aspects, such as planet names and ship designs. So this time, I decided to create a new race, the Kaedrinians (long time readers should get a kick out of that), and installed tallman as their emperor.
(Click images for a larger version, usually with more information)
Welcome to planet Kaedrin!
I set up the galaxy so it was relatively small and had relatively few habitable planets. This may turn out to be my undoing. My typical strategy in these types of games is to expand quickly and get a foothold in several star systems to start. The Kaedrin system was blessed with two habitable planets, Kaedrin (my homeworld) and Vizzard II, a very low quality planet. After a cursory examination of the surrounding star systems, I had not found any other habitable planets. As I expanded my search, I saw that most of my opponents were luckier in terms of colonization. Nevertheless, I was able to secure one planet that was relatively far away from my homeworld. However, that planet was also of relatively low quality. Low quality planets don't support nearly as many enhancements or as much production capacity. They would serve me well at the beginning of the game, but would become less and less important as time went on.
Despite their cuteness factor, I could not let such nefarious beings continue to exist. Plus, their planet was of an obscenely high quality. It was a real gem. The highest quality planet I'd seen in the galaxy, and thus ideally suited for my purposes of galactic expansion. The Snathi appeared to be farther along in cultivating their planet than I, and were churning out constructors and freighters at a relatively high rate. Lucky for me, neither of those ship classes had any military capacity (no weapons or shields). However, this fortuitous state would not hold forever. I had to act fast if I was to take the planet (I also had to worry about one of the other major civilizations making a run for this ripe planet. Luckily, because they only had one planet, I didn't have to worry about the annoying surrender factor.)
In order to invade, I would need to research a few technologies and build an invasion fleet. The fleet would include a troop transport and a combat escort. The transport ship is one of the core ships and once I had researched the planetary invasion technology, building that ship would be simple. The combat escort, however, presented me with an opportunity to utilize my favorite feature of GalCiv II, the customized ship builder. After researching a number of technologies, I was finally ready to design my first warship, the Space Lion:
The Space Lion Class Battle Cruiser
Armed with Stinger II missiles and basic Shields, the Space Lion wasn't unstoppable, but she packed an impressive punch despite being constructed so early in the game. After constructing my fleet and making the long journey to the Snathi homeworld, I was ready to invade. There was just one problem. My technology was still relatively unsophisticated, so I could only transport around 1 billion troops for the invasion. And the Snathi homeworld had a population of 16 billion people! I was drastically outnumbered, so I decided to pay a little extra and use one of the specialized invasion tactics. Many of the invasion tactics result in a large advantage for the invader, but also lower planetary quality and improvements, which is antithetical to my purpose for the invasion. Thus I decided to go for Information Warfare. This would cause a significant portion of the enemy troops to join my ranks, thus mitigating their numerical superiority (though I would still be outnumbered), but more importantly, it would leave the planet quality and improvements unharmed. The invasion begins:
The Snathi Invasion
Victory! The Information Warfare tactic paid off in spades, giving me an extra 2.5 billion troops. I was still outnumbered, but my advantage factor was so much higher that it did not matter. I was able to dispatch the adorable but monstrous Snathi with relative ease. The planet was mine!
My New Planet
And what a planet it was. Look at all those manufacturing and technology centers. In terms of industry and research, it was significantly better than my own homeworld of Kaedrin, and I suspect it will quickly become the jewel of the Kaedrinian empire, researching, building and producing more than any other planet. Will I succeed in galactic conquest? Nothing is definite, but now that I have secured this planet, I am primed and ready to go. I'll end my account here, as time does not permit recapping the entire game, but I thought this was a natural place to stop.
Update: Read more on this campaign: The continuing adventures of the Kaedrinians
Tuesday, December 27, 2005
Browsing the discount DVD rack while doing a little last-minute shopping, I came across this collection of 9 Hitchcock films for a measly $8. I love Hitchcock, yet I haven't seen many of his films (and he was an extremely prolific director), so I picked it up. It turns out that all of the films on the DVDs are from Hitchcock's pre-Hollywood period, dating from the mid 1920s to the late 1930s. It even includes a 1927 silent film, among Hitchcock's first efforts, called The Lodger.
By today's standards (or even the standards set by Hitchcock's later work), it's not especially impressive, but I haven't seen much in the way of silent films, so this particular movie intrigued me. The conventions of silent films are different enough from what we're all familiar with that it almost seems like a different medium. The film moves at a very deliberate pace, revealing information slowly in many varied ways (though, it seems, rarely through dialogue). In fact, I even played around with watching the film at 2X speed and didn't have any problem keeping up with what was happening on screen. Not having any real experience with silent films, I don't know if this (or any other aspect of the movie) was unusual or not, but it seemed to work well enough.
Details, screenshots, sarcasm and more below the fold.
Also Spoilers, but if you're up for it, you can watch the movie at World Cinema Online... (Click images for a larger version)
The killer had a long nose and floppy ears.
From the Fog and the Constable, it's obvious London is in the grip of a Jack the Ripper-style serial killer called "The Avenger." The film opens just after a murder, with a lady describing our villain to the police.
Here we see a few of the varied ways in which the film communicates information about the murder to the audience. From these scenes (among others), we gather the following facts about the killer:
I'm not a murderer!
Excellent reveal of the Lodger. I think this is the most striking image in the film, and it immediately set off warning bells in my head.
No, really, I'm not the Avenger!
See, without my hat and scarf, I'm much creepier. Woops, I mean less creepier. Yeah.
You gonna get it, woman!
The-man-who-is-clearly-not-The-Avenger is playing chess with the Landlady's fair-haired daughter Daisy, who has deftly outmaneuvered her non-murderous opponent. At this point, he literally says "Be careful. I'll get you yet." No foreshadowing here, move along...
Oh, and despite the fact that the Lodger is clearly a psychopath, Daisy is falling for him, much to the dismay of Joe, her policeman friend (who happens to be investigating some series of murders or something).
You're under arrest, weenie.
The characters in the film have finally figured out that the new lodger is The Avenger, and policeman Joe searches the premises and finds a hidden bag in his room containing a map of all the killings, various newspaper clippings, and a photograph of the oddball with one of the victims. Our villain is handcuffed but promptly escapes with the help of Daisy (who thinks he's innocent, of course).
"My God, he is innocent! The real Avenger was taken red-handed ten minutes ago." Ah Ha! Hitchcock strikes again.
Rabble, Rabble, Rabble! Rabble!
Oh no, someone spotted the handcuffs! An angry mob has emerged and is chasing the now-exonerated Lodger. For a moment, I really wondered if the mob would take him out, but it seems that film noir hadn't yet emerged, as our beloved Lodger takes a beating, but ends up fine. And he gets the girl, too:
I love you, weenie.
In case you can't tell from all the sarcasm, the "twist" at the end of the story wasn't exactly earth-shattering. These days, we're so zonked out on Lost and 24 that our minds immediately and cynically formulate all the ways the filmmakers are trying to trick us. Were audiences that cynical 80 years ago? Or did the ending truly surprise them? To be honest, there was a part of me that thought that he really could have been the killer. Also, as I hinted at above, this film seems to resemble film noir, and the angry mob scene was somewhat effective in that light.
Ultimately, I enjoyed the film greatly, even if much of my fascination has to do with the context and conventions of silent films. This was apparently the first film where Hitchcock really displayed his own style, and you really can see a lot of themes in this film that would later become Hitchcock staples (e.g. the wrongly accused man, voyeurism, etc...). More on the background of the film can be found at this Wikipedia entry.
So one film down, eight to go. I have to admit, part of the inspiration to get this set is that Cinecast is currently doing a Hitchcock marathon, though it seems that the only film on their list that is in this DVD set is The 39 Steps.
Sunday, October 16, 2005
Operation Solar Eagle
One of the major challenges faced in Iraq is electricity generation. Even before the war, neglect of an aging infrastructure forced scheduled blackouts. To compensate for the outages, Saddam distributed power to desired areas, while denying power to other areas. The war naturally worsened the situation (especially in the immediate aftermath, as there was no security at all), and the coalition and fledgling Iraqi government have been struggling to restore and upgrade power generation facilities since the end of major combat. Many improvements have been made, but attacks on the infrastructure have kept generation at or around pre-war levels for most areas (even if overall generation has increased, the equitable distribution of power means that some people are getting more than they used to, while others are not - ironic, isn't it?).
Attacks on the infrastructure have presented a significant problem, especially because some members of the insurgency seem to be familiar enough with Iraq's power network to attack key nodes, thus increasing the effects of their attacks. Consequently, security costs have gone through the roof. The ongoing disruption and inconsistency of power generation puts the new government under a lot of pressure. The inability to provide basic services like electricity delegitimizes the government and makes it more difficult to prevent future attacks and restore services.
When presented with this problem, my first thought was that solar power may actually help. There are many non-trivial problems with a solar power generation network, but Iraq's security situation combined with lowered expectations and an already insufficient infrastructure does much to mitigate the shortcomings of solar power.
In America, solar power is usually passed over as a large scale power generation system, but things that are problems in America may not be so problematic in Iraq. What are the considerations?
As shown above, there are obviously many challenges to completing such a project, most specifically with respect to economic feasibility, but it seems to me to be an interesting idea. I'm glad that there are others thinking about it as well, though at this point it would be really nice to see something a little more concrete (or at least an explanation as to why this wouldn't work).
Sunday, September 04, 2005
The Pendulum Swings
I've often commented that human beings don't so much solve problems as they trade one set of problems for another (in the hope that the new set of problems is more favorable than the old). Yet that process doesn't always follow a linear trajectory. Initial reactions to a problem often cause problems of their own. Reactions to those problems often take the form of an over-correction. And so it continues, like the swinging of a pendulum, back and forth, until it reaches its final equilibrium.
This is, of course, nothing new. Hegel's philosophy of argument works in exactly that way. You start with a thesis, some sort of claim that becomes generally accepted. Then comes the antithesis, as people begin to find holes in the original thesis and develop an alternative. For a time, the thesis and antithesis vie to establish dominance, but neither really wins. In the end, a synthesis comprised of the best characteristics of the thesis and antithesis emerges.
Naturally, it's rarely so cut and dried, and the process continues as the synthesis eventually takes on the role of the thesis, with new antitheses arising to challenge it. It works like a pendulum, oscillating back and forth until it reaches a stable position (a new synthesis). There are some interesting characteristics of pendulums that are also worth noting in this context. Steven Den Beste once described the two stable states of the pendulum: one in which the weight hangs directly below the hinge, and one in which the weight is balanced directly above the hinge.
On the left, the weight hangs directly below the hinge. On the right, it's balanced directly above it. Both states are stable. But if you slightly perturb the weight, they don't react the same way. When the left weight is moved off to the side, the force of gravity tries to center it again. In practice, if the hinge has a good bearing, the system then will oscillate around the base state and eventually stop back where it started. But if the right weight is perturbed, then gravity pulls the weight away and the right system will fail and convert to the left one.

Not all systems are robust, but it's worth noting that even robust systems are not immune to perturbation. The point isn't that they can't fail, it's that when they do fail, they fail gracefully. Den Beste applies the concept to all sorts of things, including governments and economic systems, and I think the analogy is apt. In the coming months and years, we're going to see a lot of responses to the tragedy of hurricane Katrina. Katrina represents a massive perturbation; it's set the pendulum swinging, and it'll be a while before it reaches its resting place. There will be many new policies that will result. Some of them will be good, some will be bad, and some will set new cycles into action. Disaster preparedness will become more prevalent as time goes on, and the plans will get better too. But not all at once, because we don't so much solve problems as trade one set of disadvantages for another, in the hopes that we can get that pendulum to rest in its stable state.
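Den Beste's two pendulum states are easy to check numerically. Here's a minimal toy simulation of my own (the gravity and friction constants are arbitrary, chosen just for illustration): perturb each equilibrium by a tenth of a radian and see where the weight ends up.

```python
import math

def simulate(theta0, steps=30_000, dt=0.001, g_over_l=9.8, damping=0.5):
    """Integrate a damped pendulum with semi-implicit Euler steps.

    theta = 0 is the weight hanging below the hinge (the robust state);
    theta = pi is the weight balanced above it (the precarious one).
    """
    theta, omega = theta0, 0.0
    for _ in range(steps):
        omega += (-g_over_l * math.sin(theta) - damping * omega) * dt
        theta += omega * dt
    return theta

# Perturb each equilibrium by 0.1 radians and let 30 seconds pass.
hanging = simulate(0.1)             # oscillates, damps, settles back near 0
balanced = simulate(math.pi - 0.1)  # falls away and ends up hanging instead
```

The hanging weight returns to where it started; the balanced weight converts to the hanging state. That's exactly the graceful-failure distinction Den Beste is drawing.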
Glenn Reynolds has collected a ton of worthy places to donate for hurricane relief here. It's also worth noting that many employers are matching donations to the Red Cross (mine is), so you might want to go that route if it's available...
Sunday, July 17, 2005
In Harry Potter and the Half-Blood Prince, there are a number of new security measures suggested by the Ministry of Magic (as Voldemort and his army of Death Eaters have been running amok). Some of them are common sense but some of them are much more questionable. Since I've also been reading prominent muggle and security expert Bruce Schneier's book, Beyond Fear, I thought it might be fun to analyze one of the Ministry of Magic's security measures according to Schneier's 5 step process.
Here is the security measure I've chosen to evaluate, as shown on page 42 of my edition:
Agree on security questions with close friends and family, so as to detect Death Eaters masquerading as others by use of the Polyjuice Potion.

For those not in the know, Polyjuice Potion allows the drinker to assume the appearance of someone else, presumably someone you know. Certainly a dangerous attack. The proposed solution is a "security question", set up in advance, so that you can verify the identity of the person in question.
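The Ministry's measure is a classic pre-shared challenge-response scheme, and the mechanics are easy to sketch. This toy version is entirely my own invention (the names, the question, and the plaintext storage are all illustrative; a real system would never store answers in the clear):

```python
import hmac

# Question/answer pairs agreed on in person, before any Polyjuice mischief.
# Plaintext storage is for illustration only.
shared_secrets = {
    ("Harry", "Ron"): ("What form does my Patronus take?", "a stag"),
}

def verify(claimant, verifier, answer):
    """Return True only if the claimant gives the pre-arranged answer."""
    pair = shared_secrets.get((claimant, verifier))
    if pair is None:
        return False  # no question was pre-arranged, so no way to verify
    _question, expected = pair
    # Normalize, then compare in constant time (good habit, if nothing else).
    return hmac.compare_digest(answer.strip().lower(), expected)

print(verify("Harry", "Ron", "A stag"))    # True: probably the real Harry
print(verify("Harry", "Ron", "a ferret"))  # False: possible impostor
```

The code only captures the mechanism, of course; Schneier's process is about whether the mechanism is worth its cost, which is the interesting part of the analysis.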
Sunday, May 29, 2005
Sharks, Deer, and Risk
Here's a question: Which animal poses the greater risk to the average person, a deer or a shark?
Most people's initial reaction (mine included) to that question is to answer that the shark is the more dangerous animal. Statistically speaking, however, the average American is much more likely to be killed by deer (due to collisions with vehicles) than by a shark attack. Truly accurate statistics for deer collisions don't exist, but estimates place the number of accidents in the hundreds of thousands. Millions of dollars worth of damage are caused by deer accidents, as are thousands of injuries and hundreds of deaths, every year.
Shark attacks, on the other hand, are much less frequent. Each year, approximately 50 to 100 shark attacks are reported. "World-wide, over the past decade, there have been an average of 8 shark attack fatalities per year."
It seems clear that deer actually pose a greater risk to the average person than sharks. So why do people think the reverse is true? There are a number of reasons, among them the fact that deer don't intentionally cause death and destruction (not that we know of anyway) and they are also usually harmed or killed in the process, while sharks directly attack their victims in a seemingly malicious manner (though I don't believe sharks to be malicious either).
I've been reading Bruce Schneier's book, Beyond Fear, recently. It's excellent, and at one point he draws a distinction between what security professionals refer to as "threats" and "risks."
A threat is a potential way an attacker can attack a system. Car burglary, car theft, and carjacking are all threats ... When security professionals talk about risk, they take into consideration both the likelihood of the threat and the seriousness of a successful attack. In the U.S., car theft is a more serious risk than carjacking because it is much more likely to occur.

Everyone makes risk assessments every day, but most everyone also has different tolerances for risk. It's essentially a subjective decision, and it turns out that most of us rely on imperfect heuristics and inductive reasoning when it comes to these sorts of decisions (because it's not like we have the statistics handy). Most of the time, these heuristics serve us well (and it's a good thing too), but what this really ends up meaning is that when people make a risk assessment, they're basing their decision on a perceived risk, not the actual risk.
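Schneier's likelihood-times-seriousness framing makes the deer/shark comparison a back-of-envelope calculation. Using the rough figures above (a couple hundred U.S. deer-collision deaths a year, about 8 shark fatalities worldwide) plus round population numbers of my own, the per-person annual risk works out like so:

```python
# Annual fatality risk per person = deaths per year / population at risk.
# All inputs are rough: ~200 U.S. deer-collision deaths per year (the
# estimate is "hundreds"), ~8 shark fatalities per year worldwide, and
# round population figures.
deer_deaths, us_population = 200, 300_000_000
shark_deaths, world_population = 8, 6_500_000_000

deer_risk = deer_deaths / us_population
shark_risk = shark_deaths / world_population

print(f"deer:  1 in {round(1 / deer_risk):,} per year")
print(f"shark: 1 in {round(1 / shark_risk):,} per year")
print(f"deer risk is roughly {deer_risk / shark_risk:.0f}x the shark risk")
```

Even if you quibble with any of these inputs by a factor of two or three, the gap is a couple of orders of magnitude. It's the perceived risk that's out of line, not the arithmetic.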
Schneier includes a few interesting theories about why people's perceptions get skewed, including this:
Modern mass media, specifically movies and TV news, has degraded our sense of natural risk. We learn about risks, or we think we are learning, not by directly experiencing the world around us and by seeing what happens to others, but increasingly by getting our view of things through the distorted lens of the media. Our experience is distilled for us, and it’s a skewed sample that plays havoc with our perceptions. Kids try stunts they’ve seen performed by professional stuntmen on TV, never recognizing the precautions the pros take. The five o’clock news doesn’t truly reflect the world we live in -- only a very few small and special parts of it.

When I first considered the Deer/Shark dilemma, my immediate thoughts turned to film. This may be a reflection of how much movies play a part in my life, but I suspect some others would also immediately think of Bambi, with its cuddly, cute, and innocent deer, and Jaws, with its maniacal great white shark. Indeed, Fritz Schranck once wrote about these "rats with antlers" (as some folks refer to deer) and how "Disney's ability to make certain animals look just too cute to kill" has deterred many people from hunting and eating deer. When you look at the deer collision statistics, what you see is that what Disney has really done is to endanger us all!
Given the above, one might be tempted to pursue some form of censorship to keep the media from degrading our ability to determine risk. However, I would argue that this is wrong. Freedom of speech is ultimately a security measure, and if we're to consider abridging that freedom, we must also seriously consider the risks of that action. We might be able to slightly improve our risk decisionmaking with censorship, but at what cost?
Schneier himself recently wrote about this subject on his blog, in response to an article which argues that suicide bombings in Iraq shouldn't be reported (because it scares people and serves the terrorists' ends). It turns out there are a lot of reasons why the media's focus on horrific events in Iraq causes problems, but almost any way you slice it, it's still wrong to censor the news:
It's wrong because the danger of not reporting terrorist attacks is greater than the risk of continuing to report them. Freedom of the press is a security measure. The only tool we have to keep government honest is public disclosure. Once we start hiding pieces of reality from the public -- either through legal censorship or self-imposed "restraint" -- we end up with a government that acts based on secrets. We end up with some sort of system that decides what the public should or should not know.

Like all of security, this comes down to a basic tradeoff. As I'm fond of saying, human beings don't so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Risk can be difficult to determine, and the media's sensationalism doesn't help, but censorship isn't a realistic solution to that problem because it introduces problems of its own (and those new problems are worse than the one we're trying to solve in the first place). Plus, both Jaws and Bambi really are great movies!
Posted by Mark on May 29, 2005 at 08:50 PM .: link :.
Friday, April 22, 2005
What is a Weblog, Part II
What is a weblog? My original thoughts leaned towards thinking of blogs as a genre within the internet. Like all genres, there is a common set of conventions that define the blogging genre, but the boundaries are soft and some sites are able to blur the lines quite thoroughly. Furthermore, each individual probably has their own definition as to what constitutes a blog (again similar to genres). The very elusiveness of a definition for blog indicates that perception becomes an important part of determining whether or not something is a blog. It has become clear that there is no one answer, but if we spread the decision out to a large number of people, each with their own independent definition of blog, we should be able to come to the conclusion that a borderline site like Slashdot is a blog because most people call it a blog.
So now that we have a (non)definition for what a blog is, just how important are blogs? Caesar at Arstechnica writes that according to a new poll, Americans are somewhat ambivalent on blogs. In particular, they don't trust blogs.
I don't particularly mind this, however. For the most part, blogs don't make much of an effort to be impartial, and as I've written before, it is the blogger's willingness to embrace their subjectivity that is their primary strength. Making mistakes on a blog is acceptable, so long as you learn from your mistakes. Since blogs are typically more informal, it's easier for bloggers to acknowledge their mistakes.
Lexington Green from ChicagoBoyz recently wrote about blogging to a writer friend of his:
To paraphrase Truman Capote's famous jibe against Jack Kerouac, blogging is not writing, it is typing. A writer who is blogging is not writing, he is blogging. A concert pianist who is sitting down at the concert grand piano in Carnegie Hall in front of a packed house is the equivalent to an author publishing a finished book. The same person sitting down at the piano in his neighborhood bar on a Saturday night and knocking out a few old standards, doing a little improvisation, and even doing some singing -- that is blogging. Same instrument -- words, piano -- different medium. We forgive the mistakes and wrong-guesses because we value the immediacy and spontaneity. Plus, publish a book, it is fixed in stone. Write a blog post you later decide is completely wrong, it is actually good, since it gives you a good hook for a later post explaining your thoughts that led to the changed conclusion. The essence of a blog is to air things informally, to throw things out, to say "this interests me because ..." From time to time a more considered and article-like post is good. But most people read blogs by skimming. If a post is too long, in my observation, it does not get much response and may not be read at all.

Of course, his definition of what a blog is could be argued (as there are some popular and thoughtful bloggers who routinely write longer, more formal essays), but it actually struck me as being an excellent general description of blogging. Note his favorable attitude towards mistakes ("it gives you a good hook for a later post" is an excellent quote, though I think you might have to be a blogger to fully understand it). In the blogosphere, it's ok to be wrong:
Everyone makes mistakes. It's a fact of life. It isn't a cause for shame, it's just reality. Just as engineers are in the business of producing successful designs which can be fabricated out of less-than-ideal components, the engineering process is designed to produce successful designs out of a team made up of engineers every one of which screws up routinely. The point of the process is not to prevent errors (because that's impossible) but rather to try to detect them and correct them as early as possible.

The problem with the mainstream media is that they purport to be objective, as if they're just reporting the facts. Striving for objectivity can be a very good thing, but total objectivity is impossible, and if you deny the inherent subjectivity in journalism, then something is lost.
One thing Caesar mentions is that "the sensationalism surrounding blogs has got to go. Blogs don't solve world hunger, cure disease, save damsels in distress, or any of the other heroic things attributed to them." I agree with this too, though I do think there is something sensational about blogs, or more generally, the internet.
Steven Den Beste once wrote about what he thought were the four most important inventions of all time:
In my opinion, the four most important inventions in human history are spoken language, writing, movable type printing and digital electronic information processing (computers and networks). Each represented a massive improvement in our ability to distribute information and to preserve it for later use, and this is the foundation of all other human knowledge activities. There are many other inventions which can be cited as being important (agriculture, boats, metal, money, ceramic pottery, postmodernist literary theory) but those have less pervasive overall affects.

Regardless of whether or not you agree with the notion that these are the most important inventions, it is undeniable that the internet provides a stairstep in communication capability, which, in turn, significantly improves the process of large-scale collaboration that is so important to human existence.
When knowledge could only spread by speech, it might take a thousand years for a good idea to cross the planet and begin to make a difference. With writing it could take a couple of centuries. With printing it could happen in fifty years.

And it appears that blogs, with their low barrier to entry and automated software processes, will play a large part in the worldwide debate. There is, of course, a ton of room for improvement, but things are progressing rapidly now and perhaps even accelerating. It is true that some blogging proponents are preaching triumphalism, but that's part of the charm. They're allowed to be wrong and if you look closely at what happens when someone makes such a comment, you see that for every exaggerated claim, there are 10 counters in other blogs that call bullshit. Those blogs might be on the long tail and probably won't garner as much attention, but that's part of the point. Blogs aren't trustworthy, which is precisely why they're so important.
Update 4.24.05: I forgot to link the four most important inventions article (and I changed some minor wording: I had originally referred to the four "greatest" inventions, which was not the wording Den Beste had used).
Posted by Mark on April 22, 2005 at 06:49 PM .: link :.
Sunday, April 17, 2005
What is a Weblog?
Caesar at ArsTechnica has written a few entries recently concerning blogs which interested me. The first simply asks: What, exactly, is a blog? Once you get past the overly-general definitions ("a blog is a frequently updated webpage"), it becomes a surprisingly difficult question.
Caesar quotes Wikipedia:
A weblog, web log or simply a blog, is a web application which contains periodic time-stamped posts on a common webpage. These posts are often but not necessarily in reverse chronological order. Such a website would typically be accessible to any Internet user. "Weblog" is a portmanteau of "web" and "log". The term "blog" came into common use as a way of avoiding confusion with the term server log.

Of course, as Caesar notes, the majority of internet sites could probably be described in such a way. What differentiates blogs from discussion boards, news organizations, and the like?
Reading through the resulting discussion provides some insight, but practically every definition is either too general or too specific.
Many people like to refer to Weblogs as a medium in itself. I can see the point, but I think it's more general than that. The internet is the medium, whereas a weblog is basically a set of common conventions for communicating through that medium. Among the conventions are things like a main page with chronological posts, permalinks, archives, comments, calendars, syndication (RSS), blogging software (CMS), trackbacks, &c. One problem is that no single convention is, in itself, definitive of a weblog. It is possible to publish a weblog without syndication, comments, or a calendar. Depending on the conventions being eschewed, such blogs may be unusual, but may still be just as much a blog as any other site.
For lack of a better term, I tend to think of weblogs as a genre. This is, of course, not totally appropriate but I think it does communicate what I'm getting at. A genre is typically defined as a category of artistic expression marked by a distinctive style, form, or content. However, anyone who is familiar with genre film or literature knows that there are plenty of movies or books that are difficult to categorize. As such, specific genres such as horror, sci-fi, or comedy are actually quite inclusive. Some genres, Drama in particular, are incredibly broad and are often accompanied by the conventions of other genres (we call such pieces "cross-genre," though I think you could argue that almost everything incorporates "Drama"). The point here is that there is often a blurry line between what constitutes one genre from another.
On the medium of the internet, there are many genres, one of which is the weblog. Other genres include commercial sites (i.e. sites that try to sell you things, Amazon.com, Ebay, &c.), reference sites (i.e. dictionaries & encyclopedias), Bulletin Board Systems and Forums, news sites, personal sites, wikis, and probably many, many others.
Any given site is probably made up of a combination of genres and it is often difficult to pinpoint any one genre as being representative. Take, for example, Kaedrin.com. It is a personal site with some random features, a bunch of book & movie reviews, a forum, and, of course, a weblog (which is what you're reading now). Everything is clearly delineated here at Kaedrin, but other sites blur the lines between genres on every page. Take ArsTechnica itself: Is it a news site or a blog or something else entirely? I would say that the front page is really a combination of many different things, one of which is a blog. It's a "cross-genre" webpage, but that doesn't necessarily make it any less effective (though there is something to be said for simplicity and it is quite possible to load a page up with too much stuff, just as it's possible for a book or movie to be too ambitious and take on too much at once) just as Alien isn't necessarily a less effective Science Fiction film because it incorporates elements of Horror and Drama (or vice-versa).
Interestingly, much of what a weblog is can be defined as an already existing literary genre: the journal. People have kept journals and diaries all throughout history. The major difference between a weblog and a journal is that a weblog is published for all to see on the public internet (and also that weblogs can be linked together through the use of the hyperlink and the infrastructure of the internet). Historically, diaries were usually private, but there are notable exceptions which have been published in book form. Theoretically, one could take such diaries and publish them online - would they be blogs? Take, for instance, The Diary of Samuel Pepys, which is currently being published daily as if it's a weblog circa 1662 (i.e. Today's entry is dated "Thursday 17 April 1662"). The only difference is that the author of that diary is dead and thus doesn't interact or respond to the rest of the weblog community (though there is still interaction allowed in the form of annotations).
A few other random observations about blogs:
I don't care what the hell a weblog is. It is what I say it is. Its something I update whenever I find an interesting tidbit on the web. And its fun. So there.

Heh. Interesting to note that my secondary definition there ("something I update whenever I find an interesting tidbit on the web") has changed significantly since I contributed that definition. This is why, I suppose, I had originally supplied the primary definition ("I don't care what the hell a weblog is. It is what I say it is.") and to be honest, I don't think that's changed (though I guess you could call that definition "too general"). Blogging is whatever I want it to be. Of course, I could up and call anything a blog, but I suppose it is also required that others perceive your blog as a blog. That way, the genre still retains some shape, but is still permeable enough to allow some flexibility.
I had originally intended to make several other points in this post, but since it has grown to a rather large size, I'll save them for other posts. Hopefully, I'll gather the motivation to do so before next week's scheduled entry, but there's no guarantee...
Posted by Mark on April 17, 2005 at 08:27 PM .: link :.
Sunday, March 20, 2005
Time Travel in Donnie Darko
By popular request, here is a brief analysis of time travel used in the movie Donnie Darko. As I've mentioned before, Donnie Darko is an enigmatic film and I'm not sure it makes total sense. At a very high level everything seems to fit, but when you start to drill down into the details things become less clear.
In the commentary track of the Director's Cut DVD, writer/director Richard Kelly attempts to clarify some of the more mystifying aspects of the film, but he still leaves a lot of wiggle room and ambiguity. He describes the time travel in the film as being driven by a "comic book logic," which should give you an idea of just how rigorously the subject is treated in the film (i.e. not very). Time travel is essentially a deus ex machina; it drives the story, but its internal mechanics are unimportant. So this analysis isn't really intended to be very rigorous either, just a few thoughts and attempts to clarify or at least call out some of the more confusing concepts.
Before I really get into it, I suppose I should mention that what follows contains many SPOILERS, so read on at your own risk. Another thing that might be useful is to go over other less than rigorous time travel theories that have been presented in film and literature. This list isn't meant to be complete, but these four theories will help in dissecting Donnie Darko. Again, many SPOILERS, especially in the case of Lightning (as I'm assuming most people haven't read it).
First, does Donnie have some sort of superpower? Donnie is obviously different from other people. The film doesn't show any sort of explicit references to his powers, but it is sort of implied by his visits to a psychiatrist and his visions. I suppose the water trails he sees (which show the future path of a person, sometimes including himself) could be an expression of his abilities (as it allows him to see into the future). It's clear that Donnie made a decision near the end of the movie that he was going to "fix" the universe and allow himself to be killed by the jet engine, but it's not clear how that happens. Does Donnie actually cause that to happen, or is he just aware of it happening and going along for the ride? There is a sort of messianic theme in the movie, so I'm assuming that Donnie has some sort of power to send himself and/or the jet engine back in time and link the two universes together (and to collapse the tangent universe without destroying all of existence).
Richard Kelly, in explaining his take on the story, indicated that he wanted to communicate that there was some sort of technology at work in the tangent universe, manipulating everyone's actions, and attempting to set things right. It is unclear what exactly this technology is, how it works, or who is using it, but his point is that someone is orchestrating events in the tangent universe so as to fix the universe (or to allow Donnie the opportunity to fix things). When he mentioned this concept, I immediately thought of Asimov's Eternals, people who manipulated time and history for the betterment of mankind. In Donnie Darko, perhaps there exists a similar group of people who are tasked with ensuring that tangent universes are closed. Or perhaps, Donnie himself is subconsciously manipulating events to help fix things.
I also thought of Koontz's Lightning and that infamous line "Destiny struggles to reassert the pattern that was meant to be." In that scenario, there isn't really a technology at work, just fate, perhaps augmented by Donnie's supernatural abilities. Indeed, it could be some sort of combination of these three explanations: Donnie Darko has powers which are augmented by some sort of technology and fate.
What is Frank (the demonic looking bunny), and what role does he play in the story? This is very unclear. He may be a ghost, he may be the result of Donnie's unconscious awareness of the future, or he may be a projection from the technological puppet-masters.
There are obviously a number of other explanations. What if the timeline actually follows a linear path (i.e. the linear presentation in the movie)? In that scenario, the timeline would go from A to B to C to D, except that B and D are essentially the same point in time (perhaps the main timeline stopped while the tangent universe worked itself out). So the time travel would occur between C and D.
And of course, this doesn't really take into account all the themes of the film. I suppose I should also note that I've been analyzing the Director's Cut, which references a lot more of the fictional book, The Philosophy Of Time Travel by Roberta Sparrow (a character in the film). The Director's Cut gives more information on the guiding forces in the story, and it has more of a sci-fi bent than the theatrical cut, but both cuts are sufficiently ambiguous as to allow multiple interpretations. Many of those interpretations end up being pretty silly when you drill down into the details, and some don't make much sense, but in the end that doesn't matter all that much because you have to figure it out for yourself...
Posted by Mark on March 20, 2005 at 01:34 PM .: link :.
Sunday, February 20, 2005
The Stability of Three
One of the things I've always respected about Neal Stephenson is his attitude (or rather, the lack thereof) regarding politics:
Politics - These I avoid for the simple reason that artists often make fools of themselves, and begin to produce bad art, when they decide to get political. A novelist needs to be able to see the world through the eyes of just about anyone, including people who have this or that set of views on religion, politics, etc. By espousing one strong political view a novelist loses the power to do this. Anyone who has convinced himself, based on reading my work, that I hold this or that political view, is probably wrong. What is much more likely is that, for a while, I managed to get inside the head of a fictional character who held that view.

Having read and enjoyed several of his books, I think this attitude has served him well. In a recent interview in Reason magazine, Stephenson makes several interesting observations. The whole thing is great, and many people are interested in his comments regarding American technology and science, but I found one other tidbit very interesting. Strictly speaking, it doesn't break with his attitude about politics, but it is somewhat political:
Speaking as an observer who has many friends with libertarian instincts, I would point out that terrorism is a much more formidable opponent of political liberty than government. Government acts almost as a recruiting station for libertarians. Anyone who pays taxes or has to fill out government paperwork develops libertarian impulses almost as a knee-jerk reaction. But terrorism acts as a recruiting station for statists. So it looks to me as though we are headed for a triangular system in which libertarians and statists and terrorists interact with each other in a way that I’m afraid might turn out to be quite stable.

I took particular note of what he describes as a "triangular system" because it's something I've seen before...
One of the primary goals of the American Constitutional Convention was to devise a system that would be resistant to tyranny. The founders were clearly aware of the damage that an unrestrained government could do, so they tried to design the new system in such a way that it wouldn't become tyrannical. Democratic institutions like mandatory periodic voting and direct accountability to the people played a large part in this, but the founders also did some interesting structural work as well.
Taking their cue from the English Parliament's relationship with the King of England, the founders decided to create a legislative branch separate from the executive. This, in turn, placed the two governing bodies in competition. However, this isn't a very robust system. If one of the governing bodies becomes more powerful than the other, they can leverage their advantage to accrue more power, thus increasing the imbalance.
A two-way balance of power is unstable, but a three-way balance turns out to be very stable. If any one body becomes more powerful than the other two, the two usually can and will temporarily unite, and their combined power will still exceed the third. So the founders added a third governing body, an independent judiciary.
The result was a bizarre sort of stable oscillation of power between the three major branches of the federal government. Major shifts in power (such as wars) disturbed the system, but it always fell back to a preferred state of flux. This stable oscillation turns out to be one of the key elements of Chaos theory, and is referred to as a strange attractor. These "triangular systems" are particularly good at this, and there are many other examples...
Some argue that the Cold War stabilized considerably when China split from the Soviet Union. Once it became a three-way conflict, there was much less of a chance of unbalance (and as unbalance would have lead to nuclear war, this was obviously a good thing).
Steven Den Beste once noted this stabilizing power of three in the interim Iraqi constitution, where the Iraqis instituted a Presidency Council of 3 Presidents representing each of the 3 major factions in Iraq:
...those writing the Iraqi constitution also had to create a system acceptable to the three primary factions inside of Iraq. If they did not, the system would shake itself to pieces and there was a risk of Iraqi civil war.

It should be interesting to see if that structure will be maintained in the new Iraqi constitution.
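Just to make the two-versus-three dynamic concrete, here's a toy simulation. To be clear, the alliance and transfer rules below are entirely my own invention for illustration, not drawn from any of the sources above. In the two-way version, the stronger body leverages its advantage to take from the weaker; in the three-way version, the two weaker bodies temporarily unite against the strongest:

```python
def step_two_way(powers, rate=0.1):
    # The stronger body leverages its lead to take from the weaker one.
    strong, weak = (0, 1) if powers[0] >= powers[1] else (1, 0)
    transfer = min(rate * (powers[strong] - powers[weak]), powers[weak])
    powers[strong] += transfer
    powers[weak] -= transfer
    return powers

def step_three_way(powers, rate=0.1):
    # The two weaker bodies temporarily unite against the strongest,
    # chipping away a fraction of its power each round.
    strongest = max(range(3), key=lambda i: powers[i])
    transfer = rate * powers[strongest]
    powers[strongest] -= transfer
    for i in range(3):
        if i != strongest:
            powers[i] += transfer / 2
    return powers

two, three = [5.0, 3.0], [5.0, 3.0, 2.0]
for _ in range(100):
    two = step_two_way(two)
    three = step_three_way(three)

print(two)    # the imbalance runs away: one body ends up with everything
print(three)  # power keeps oscillating, but no single body ever dominates
```

Under these (admittedly crude) rules, the two-way system collapses into total dominance, while the three-way system settles into exactly the sort of stable oscillation described above: whoever is on top gets ganged up on, so no branch can run away with the game.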
As for Stephenson's speculation that a triangular system consisting of libertarians, statists, and terrorists may develop, I'm not sure. They certainly seem to feed off one another in a way that would facilitate such a system, but I'm not positive it would work out that way, nor do I think it is a particularly desirable state to be in, especially since its triangular structure could make it a very stable system. In any case, I thought it was an interesting observation and well worth considering...
Posted by Mark on February 20, 2005 at 08:06 PM .: link :.
Sunday, January 30, 2005
Elections in Iraq
Iraq held its first national elections in over 50 years today. I don't have much to add to what has already been said, but I will note that it doesn't surprise me that the insurgents were quieter than expected. One of the big advantages of terrorism is the surprise factor, and on a day like today, security forces are expecting attacks and are much more likely to spot unusual activities and investigate. My guess is that attacks will intensify in the coming weeks, as the insurgents test the new government...
Lots of people are commenting on this so I'll try to perform some of that information aggregation that blogs are known for, starting with the Iraqi Blogs, then moving on to the rest of the blogosphere...
Update: Moved all the links into the extended entry. Click below to read on... Iraqi Blogs:
Several Updates: Gah! Information overload. Many links added, but I think I'm done for the night. The funny thing is that I haven't even begun to scrape the tip of all the good information that's out there. Partaking in an exercise like this is one of the things that really puts the need for good information aggregation into perspective. But this is a start, I guess...
Another Update: I lied, several new links.
Posted by Mark on January 30, 2005 at 07:06 PM .: link :.
Sunday, December 12, 2004
I've been doing a lot of reading and thinking about the concepts discussed in my last post. It's a fascinating, if a little bewildering, topic. I'm not sure I have a great handle on it, but I figured I'd share a few thoughts.
There are many systems that are incredibly flexible, yet they came into existence, grew, and self-organized without any actual planning. Such systems are often referred to as Stigmergic Systems. To a certain extent, free markets have self-organized, guided by such emergent effects as Adam Smith's "invisible hand". Many organisms are able to quickly adapt to changing conditions using a technique of continuous reproduction and selection. To an extent, there are forces on the internet that are beginning to self-organize and produce useful emergent properties, blogs among them.
Such systems are difficult to observe, and it's hard to really get a grasp on what a given system is actually indicating (or what properties are emerging). This is, in part, the way such systems are supposed to work. When many people talk about blogs, they find it hard to believe that a system composed mostly of small, irregularly updated, and downright mediocre (if not worse) blogs can have truly impressive emergent properties (I tend to model the ideal output of the blogosphere as an information resource). Believe it or not, blogging wouldn't work without all the crap. There are a few reasons for this:
The System Design: The idea isn't to design a perfect system. The point is that these systems aren't planned, they're self-organizing. What we design are systems which allow this self-organization to occur. In nature, this is accomplished through constant reproduction and selection (for example, some biological systems can be represented as a function of genes. There are hundreds of thousands of genes, with a huge and diverse number of combinations. Each combination can be judged based on some criteria, such as survival and reproduction. Nature introduces random mutations so that gene combinations vary. Efficient combinations are "selected" and passed on to the next generation through reproduction, and so on).
The important thing with respect to blogs is the tools we use. To a large extent, blogging is simply an extension of many mechanisms already available on the internet, most especially the link. Other weblog specific mechanisms like blogrolls, permanent-links, comments (with links of course) and trackbacks have added functionality to the link and made it more powerful. For a number of reasons, weblogs tend to be affected by power-law distribution, which spontaneously produces a sort of hierarchical organization. Many believe that such a distribution is inherently unfair, as many excellent blogs don't get the attention they deserve, but while many of the larger bloggers seek to promote smaller blogs (some even providing mechanisms for promotion), I'm not sure there is any reliable way to systemically "fix" the problem without harming the system's self-organizational abilities.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.

This self-organization is one of the important things about weblogs; any attempt to get around it will end up harming the system in the long run. The real question is how the weblog community can be arranged to self-organize and find its most efficient configuration, and that is what we should be trying to accomplish (emphasis mine):
...although the purpose of this example is to build an information resource, the main strategy is concerned with creating an efficient system of collaboration. The information resource emerges as an outcome if this is successful.

Failure is Important: Self-Organizing systems tend to have attractors (a preferred state of the system), such that these systems will always gravitate towards certain positions (or series of positions), no matter where they start. Surprising as it may seem, self-organization only really happens when you expose a system in a steady state to an environment that can destabilize it. By disturbing a steady state, you might cause the system to take up a more efficient position.
It's tempting to dismiss weblogs as a fad because so many of them are crap. But that crap is actually necessary because it destabilizes the system. Bloggers often add their perspective to the weblog community in the hopes that this new information will change the way others think (i.e. they are hoping to induce change - this is roughly referred to as Stigmergy). That new information will often prompt other individuals to respond in some way or another (even if not directly responding). Essentially, change is introduced in the system and this can cause unpredictable and destabilizing effects. Sometimes this destabilization actually helps the system, sometimes (and probably more often than not) it doesn't. Regardless of its direct effects, the process is essential because it is helping the system become increasingly comprehensive. I touched on this in my last post, among several others, in which I claim that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. An individual blog may fail to solve a problem, but that failure is important too when you look at the systemic level. Of course, all of this is also muddying the waters and causing the system to deteriorate to a state where it is less efficient to use. For every success story like Rathergate, there are probably 10 bizarre and absurd conspiracy theories to contend with.
This is the dilemma faced by all biological systems. The effects that cause them to become less efficient are also the effects that enable them to evolve into more efficient forms. Nature solves this problem with its evolutionary strategy of selecting for the fittest. This strategy makes sure that progress is always in a positive direction only.

So what weblogs need is a selection process that separates the good blogs from the bad. This ties in with the aforementioned power-law distribution of weblogs. Links, be they blogroll links or links to an individual post, essentially represent a sort of currency of the blogosphere and provide an essential internal feedback loop. There is a rudimentary form of this sort of thing going on, and it has proven to be very successful (as Jeremy Bowers notes, it certainly seems to do so much better than the media whose selection process appears to be simple heuristics). However, the weblog system is still young and I think there is considerable room for improvement in its selection processes. We've only hit the tip of the iceberg here. Syndication, aggregation, and filtering need to improve considerably. Note that all of those things are systemic improvements. None of them directly act upon the weblog community or the desired informational output of the community. They are improvements to the strategy of creating an efficient system of collaboration. A better informational output emerges as an outcome if the systemic improvements are successful.
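The power-law distribution that links produce falls out of a surprisingly simple process. Here's an illustrative sketch (the numbers are arbitrary assumptions of mine): five hundred blogs start out equal, and each new inbound link goes to a blog with probability proportional to the links it already has. Nobody sells out and nobody games anything, yet a handful of blogs ends up with a wildly disproportionate share of the attention:

```python
import random

def simulate_links(num_blogs=500, new_links=10000, seed=42):
    rng = random.Random(seed)
    links = [1] * num_blogs  # everyone starts with a single "virtual" link
    for _ in range(new_links):
        # Weighted choice: already-popular blogs attract most new links.
        target = rng.choices(range(num_blogs), weights=links)[0]
        links[target] += 1
    return sorted(links, reverse=True)

links = simulate_links()
top_share = sum(links[:5]) / sum(links)
print(f"The top 1% of blogs hold {top_share:.0%} of all links")
```

The "rich get richer" rule here is a crude stand-in for blogrolls, aggregators, and word of mouth, but it's enough to reproduce the basic shape Shirky describes: the act of free choice, repeated widely enough, concentrates attention all by itself.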
This is truly a massive subject, and I'm only beginning to understand some of the deeper concepts, so I might end up repeating myself a bit in future posts on this subject, as I delve deeper into the underlying concepts and gain a better understanding. The funny thing is that it doesn't seem like the subject itself is very well defined, so I'm sure lots will be changing in the future. Below are a few links to information that I found helpful in writing this post.
Posted by Mark on December 12, 2004 at 11:15 PM .: link :.
Sunday, December 05, 2004
An Epic in Parallel Form
Tyler Cowen has an interesting post on the scholarly content of blogging in which he speculates as to how blogging and academic scholarship fit together. In so doing he makes some general observations about blogging:
Blogging is a fundamentally new medium, akin to an epic in serial form, but combining the functions of editor and author. Who doesn't dream of writing an epic?

It's an interesting perspective. Many blogs are general in subject, but some of the ones that really stand out have some sort of narrative (for lack of a better term) that you can follow from post to post. As Cowen puts it, an "epic in serial form." The suggestion that reading a single blog many times is more rewarding than reading the best posts from many different blogs is interesting. But while a single blog may give you a broad view of what a field is about, it can also be rewarding to aggregate the specific views of a wide variety of individuals, even biased and partisan individuals. As Cowen mentions, the blogosphere as a whole is the relevant unit of analysis. Even if each individual view is unimpressive on its own, that may not be the case when taken collectively. In a sense, while each individual is writing a flawed epic in serial form, they are all contributing to an epic in parallel form.
Which brings up another interesting aspect of blogs. When the blogosphere tackles a subject, it produces a diverse set of opinions and perspectives, all published independently by a network of analysts who are all doing work in parallel. The problem here is that the decentralized nature of the blogosphere makes aggregation difficult. Determining the "answer" of a group as large and diverse as the blogosphere, based on all of the disparate information it has produced, is incredibly difficult, especially when the majority of the data represents the opinions of various analysts. A deficiency in aggregation is part of where groupthink comes from, but some groups are able to harness their disparity into something productive. The many are smarter than the few, but only if the many are able to aggregate their data properly.
In theory, blogs represent a self-organizing system that has the potential to evolve and display emergent properties (a sort of human hive mind). In practice, it's a little more difficult to say. I think it's clear that the spontaneous appearance of collective thought, as implemented through blogs or other communication systems, is happening frequently on the internet. However, each occurrence is isolated and only represents an incremental gain in productivity. In other words, a system will sometimes self-organize in order to analyze a problem and produce an enormous amount of data which is then aggregated into a shared vision (a vision which is much more sophisticated than anything that one individual could come up with), but the structure that appears in that case will disappear as the issue dies down. The incredible increase in analytic power is not a permanent stair step, nor is it ubiquitous. Indeed, it can also be hard to recognize the signal in a great sea of noise.
Of course, such systems are constantly and spontaneously self-organizing; themselves tackling problems in parallel. Some systems will compete with others, some systems will organize around trivial issues, some systems won't be nearly as effective as others. Because of this, it might be that we don't even recognize when a system really transcends its perceived limitations. Of course, such systems are not limited to blogs. In fact they are quite common, and they appear in lots of different types of systems. Business markets are, in part, self-organizing, with emergent properties like Adam Smith's "invisible hand". Open Source software is another example of a self-organizing system.
Interestingly enough, this subject ties in nicely with a series of posts I've been working on regarding the properties of Reflexive documentaries, polarized debates, computer security, and national security. One of the general ideas discussed in those posts is that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. Ironically, in acknowledging one's own subjectivity, one becomes more objective and reliable. This applies on an individual basis, but becomes much more powerful when it is part of an emergent system of analysis as discussed above. Blogs are excellent at this sort of thing precisely because they are made up of independent parts that make no pretense at objectivity. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. The news media represents a competing system (the journalist being the media's equivalent of the blogger), one that is much more rigid and unyielding. The interplay between blogs and the media is fascinating, and you can see each medium evolving in response to the other (the degree to which this is occurring is naturally up for debate). You might even be able to make the argument that blogs are, themselves, emergent properties of the mainstream media.
Personally, I don't think I have that exact sort of narrative going here, though I do believe I've developed certain thematic consistencies in terms of the subjects I cover here. I'm certainly no expert and I don't post nearly often enough to establish the sort of narrative that Cowen is talking about, but I do think a reader would benefit from reading multiple posts. I try to make up for my low posting frequency by writing longer, more detailed posts, often referencing older posts on similar subjects. However, I get the feeling that if I were to break up my posts into smaller, more digestible pieces, the overall time it would take to read and produce the same material would be significantly longer. Of course, my content is rarely scholarly in nature, and my subject matter varies from week to week as well, but I found this interesting to think about nonetheless.
I think I tend to be more of an aggregator than anything else, which is interesting because I've never thought about what I do in those terms. It's also somewhat challenging, as one of my weaknesses is being timely with information. Plus aggregation appears to be one of the more tricky aspects of a system such as the ones discussed above, and with respect to blogs, it is something which definitely needs some work...
Update 12.13.04: I wrote some more on the subject. I also made a minor edit to this entry, moving one paragraph lower down. No content has actually changed, but the new order flows better.
Posted by Mark on December 05, 2004 at 09:23 PM .: link :.
Sunday, November 21, 2004
This is yet another in a series of posts fleshing out ideas initially presented in a post regarding Reflexive Documentary filmmaking and the media. In short, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I expanded the scope of the concepts originally presented in that post to include a broader range of information dissemination processes, which lead to a post on computer security and a post on national security.
I had originally planned to apply the same concepts to debating in a relatively straightforward manner. I'll still do that, but recent events have lead me to reconsider my position, thus there will most likely be some unresolved questions at the end of this post.
So the obvious implication with respect to debating is that a debate can be more productive when each side exposes their own biases and agenda in making their argument. Of course, this is pretty much required by definition, but what I'm getting at here is more a matter of tactics. Debating tactics often take poor forms, with participants scoring cheap points by using intuitive but fallacious arguments.
I've done a lot of debating in various online forums, often taking a less than popular point of view (I tend to be a contrarian, and am comfortable on the defense). One thing that I've found is that as a debate heats up, the arguments become polarized. I sometimes find myself defending someone or something that I normally wouldn't. This is, in part, because a polarizing debate forces you to dispute everything your opponent argues. To concede one point irrevocably weakens your position, or so it seems. Of course, the fact that I'm a contrarian, somewhat competitive, and stubborn also plays a part in this. Emotions sometimes flare, attitudes clash, and you're often left feeling dirty after such a debate.
None of which is to say that polarized debate is bad. My whole reason for participating in such debates is to get others to consider more than one point of view. If a few lurkers read a debate and come away from it confused or at least challenged by some of the ideas presented, I consider that a win. There isn't anything inherently wrong with partisanship, and as frustrating as some debates are, I find myself looking back on them as good learning experiences. In fact, taking an extreme position and thinking from that biased standpoint helps you understand not only that viewpoint, but the extreme opposite as well.
The problem with such debates, however, is that they really are divisive. A debate which becomes polarized might end up providing you with a more balanced view of an issue, but such debates sometimes also present an unrealistic view of the issue. An example of this is abortion. Debates on that topic are usually heated and emotional, but the issue polarizes, and people who would come down somewhere around the middle end up arguing an extreme position for or against.
Again, I normally chalk this polarization up as a good thing, but after the election, I'm beginning to see the wisdom in perhaps pursuing a more moderated approach. With all the red/blue dichotomies being thrown around with reckless abandon, talk of moving to Canada and even talk of secession(!), it's pretty obvious that the country has become overly polarized.
I've been writing about Benjamin Franklin recently on this here blog, and I think his debating style is particularly apt to this discussion:
Franklin was worried that his fondness for conversation and eagerness to impress made him prone to "prattling, punning and joking, which only made me acceptable to trifling company." Knowledge, he realized, "was obtained rather by the use of the ear than of the tongue." So in the Junto, he began to work on his use of silence and gentle dialogue.
This contrasts rather sharply with what passes for civilized debate these days. Franklin actually considered it rude to directly contradict or dispute someone, something I had always found to be confusing. I typically favor a frank exchange of ideas (i.e. saying what you mean), but I'm beginning to come around. In the wake of the election, a lot of advice has been offered up for liberals and the left, and a lot of suggestions center around the idea that they need to "reach out" to more voters. This has been received with indignation by liberals and leftists, and one could hardly blame them. From their perspective, conservatives and the right are just as bad if not worse and they read such advice as if they're being asked to give up their values. Irrespective of which side is right, I think the general thrust of the advice is that liberal arguments must be more persuasive. No matter how much we might want to paint the country into red and blue partitions, if you really want to be accurate, you'd see only a few small areas of red and blue drowning in a sea of purple. The Democrats don't need to convince that many people to get a more favorable outcome in the next election.
And so perhaps we should be fighting the natural polarization of a debate and take a cue from Franklin, who stressed the importance of deferring, or at least pretending to defer, to others:
"Would you win the hearts of others, you must not seem to vie with them, but to admire them. Give them every opportunity of displaying their own qualifications, and when you have indulged their vanity, they will praise you in turn and prefer you above others... Such is the vanity of mankind that minding what others say is a much surer way of pleasing them than talking well ourselves."
There are weaknesses to such an approach, especially if your opponent does not return the favor, but I think it is well worth considering. That the country has so many opposing views is not necessarily bad, and indeed, is a necessity in democracy for ideas to compete. But perhaps we need less spin and more moderation... In his essay "Apology for Printers" Franklin opines:
"Printers are educated in the belief that when men differ in opinion, both sides ought equally to have the advantage of being heard by the public; and that when Truth and Error have fair play, the former is always an overmatch for the latter."
Indeed.
Update: Andrew Olmsted posted something along these lines, and he has a good explanation as to why debates often go south:
I exaggerate for effect, but anyone spending much time on a site devoted to either party quickly runs up against the assumption that the other side isn't just wrong, but evil. And once you've made that assumption, it would be wrong to even negotiate with the other side, because any compromise you make is taking the country one step closer to that evil. The enemy must be fought tooth and nail, because his goals are so heinous.
I don't know that we're a majority, as Olmsted hopes, but there's more than just a few of us, at least...
Posted by Mark on November 21, 2004 at 03:29 PM .: link :.
Thursday, November 11, 2004
Arranging Interests in Parallel
I have noticed a tendency on my part to, on occasion, quote a piece of fiction, and then comment on some wisdom or truth contained therein. This sort of thing is typically frowned upon in rigorous debate as fiction is, by definition, contrived and thus referencing it in a serious argument is rightly seen as undesirable. Fortunately for me, this blog, though often taking a serious tone, is ultimately an exercise in thinking for myself. The point is to have fun. This is why I will sometimes quote fiction to make a point, and it's also why I enjoy questionable exercises like speculating about historical figures. As I mentioned in a post on Benjamin Franklin, such exercises usually end up saying more about me and my assumptions than anything else. But it's my blog, so that is more or less appropriate.
Astute readers must at this point be expecting to receive a citation from a piece of fiction, followed by an application of the relevant concepts to some ends. And they would be correct.
Early on in Neal Stephenson's novel The System of the World, Daniel Waterhouse reflects on what is required of someone in his position:
He was at an age where it was never possible to pursue one errand at a time. He must do many at once. He guessed that people who had lived right and arranged things properly must have it all rigged so that all of their quests ran in parallel, and reinforced and supported one another just so. They gained reputations as conjurors. Others found their errands running at cross purposes and were never able to do anything; they ended up seeming mad, or else perceived the futility of what they were doing and gave up, or turned to drink.
Naturally, I believe there is some truth to this. In fact, the life of Benjamin Franklin, a historical figure from approximately the same time period as Dr. Waterhouse, provides us with a more tangible reference point.
Franklin was known to mix private interests with public ones, and to leverage both to further his business interests. The consummate example of Franklin's proclivities was the Junto, a club of young workingmen formed by Franklin in the fall of 1727. The Junto was a small club composed of enterprising tradesman and artisans who discussed issues of the day and also endeavored to form a vehicle for the furtherance of their own careers. The enterprise was typical of Franklin, who was always eager to form associations for mutual benefit, and who aligned his interests so they ran in parallel, reinforcing and supporting one another.
A more specific example of Franklin's knack for aligning interests is when he produced the first recorded abortion debate in America. At the time, Franklin was running a print shop in Philadelphia. His main competitor, Andrew Bradford, published the town's only newspaper. The paper was meager, but very profitable in both money and prestige (which led him to be more respected by merchants and politicians, and thus more likely to get printing jobs), and Franklin decided to launch a competing newspaper. Unfortunately, another rival printer, Samuel Keimer, caught wind of Franklin's plan and immediately launched a hastily assembled newspaper of his own. Franklin, realizing that it would be difficult to launch a third paper right away, vowed to crush Keimer:
In a competitive bank shot, Franklin decided to write a series of anonymous letters and essays, along the lines of the Silence Dogood pieces of his youth, for Bradford's [American Weekly Mercury] to draw attention away from Keimer's new paper. The goal was to enliven, at least until Keimer was beaten, Bradford's dull paper, which in its ten years had never published any such features.
Franklin's many actions of the time certainly weren't running at cross purposes, and he did manage to align his interests in parallel. He truly was a master, and we'll be hearing more about him on this blog soon.
This isn't the first time I've written about this subject, either. In a previous post, On the Overloading of Information, I noted one of the main reasons why blogging continues to be an enjoyable activity for me, despite changing interests and desires:
I am often overwhelmed by a desire to consume various things - books, movies, music, etc... The subject of such things is also varied and, as such, often don't mix very well. That said, the only thing I have really found that works is to align those subjects that do mix in such a way that they overlap. This is perhaps the only reason blogging has stayed on my plate for so long: since the medium is so free-form and since I have absolute control over what I write here and when I write it, it is easy to align my interests in such a way that they overlap with my blog (i.e. I write about what interests me at the time).
One way you can tell that my interests have shifted over the years is that the format and content of my writing here has also changed. I am once again reminded of Neal Stephenson's original minimalist homepage in which he speaks of his ongoing struggle against what Linda Stone termed as "continuous partial attention," as that curious feature of modern life only makes the necessity of aligning interests in parallel that much more important.
Aligning blogging with my other core interests, such as reading fiction, is one of the reasons I frequently quote fiction, even in reference to a serious topic. Yes, such a practice is frowned upon, but blogging is a hobby, the idea of which is to have fun. Indeed, Glenn Reynolds, progenitor of one of the most popular blogging sites around, also claims to blog for fun, and interestingly enough, he has quoted fiction in support of his own serious interests as well (more than once). One other interesting observation is that all references to fiction in this post, including even Reynolds' references, are from Neal Stephenson's novels. I'll leave it as an exercise for the reader to figure out what significance, if any, that holds.
Posted by Mark on November 11, 2004 at 11:45 PM .: link :.
Sunday, November 07, 2004
Open Source Security
A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. In a follow up post, I examined how this concept could be applied to a broader range of information dissemination processes. That post focused on computer security and how full disclosure of system vulnerabilities actually improves security in the long run. Ironically, public scrutiny is the only reliable way to improve security.
Full disclosure is certainly not perfect. By definition, it increases risk in the short term, which is why opponents are able to make persuasive arguments against it. Like all security, it is a matter of tradeoffs. Does the long term gain justify the short term risk? As I'm fond of saying, human beings don't so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn't quite as bad as the old). There is no solution here, only a less disadvantaged system.
Now I'd like to broaden the subject even further, and apply the concept of open security to national security. With respect to national security, the stakes are higher and thus the argument will be more difficult to sustain. If people are unwilling to deal with a few computer viruses in the short term in order to increase long term security, imagine how unwilling they'll be to risk a terrorist attack, even if that risk ultimately closes a few security holes. This may be prudent, and it is quite possible that a secrecy approach is more necessary at the national security level. Secrecy is certainly a key component of intelligence and other similar aspects of national security, so open security techniques would definitely not be a good idea in those areas.
However, there are certain vulnerabilities in processes and systems we use that could perhaps benefit from open security. John Robb has been doing some excellent work describing how terrorists (or global guerillas, as he calls them) can organize a more effective campaign in Iraq. He postulates a Bazaar of violence, which takes its lessons from the open source programming community (using Eric Raymond's essay The Cathedral and the Bazaar as a starting point):
The decentralized, and seemingly chaotic guerrilla war in Iraq demonstrates a pattern that will likely serve as a model for next generation terrorists. This pattern shows a level of learning, activity, and success similar to what we see in the open source software community. I call this pattern the bazaar. The bazaar solves the problem: how do small, potentially antagonistic networks combine to conduct war?
Not only does the bazaar solve the problem, it appears able to scale to disrupt larger, more stable targets. The bazaar essentially represents the evolution of terrorism as a technique into something more effective: a highly decentralized strategy that is nevertheless able to learn and innovate. Unlike traditional terrorism, it seeks to leverage gains from sabotaging infrastructure and disrupting markets. By focusing on such targets, the bazaar does not experience diminishing returns in the same way that traditional terrorism does. Once established, it creates a dynamic that is very difficult to disrupt.
I'm a little unclear as to what the purpose of the bazaar is - the goal appears to be a state of perpetual violence that is capable of keeping a nation in a position of failure/collapse. That our enemies seek to use this strategy in Iraq is obvious, but success essentially means perpetual failure. What I'm unclear on is how they seek to parlay this result into a successful state (which I assume is their long term goal - perhaps that is not a wise assumption).
In any case, reading about the bazaar can be pretty scary, especially when news from Iraq seems to correlate well with the strategy. Of course, not every attack in Iraq correlates, but this strategy is supposedly new and relatively dynamic. It is constantly improving on itself. They are improvising new tactics and learning from them in an effort to further define this new method of warfare.
As one of the commenters on his site notes, it is tempting to claim that John Robb's analysis is essentially an instruction manual for a guerilla organization, but that misses the point. It's better to know where we are vulnerable before we discover that some weakness is being exploited.
One thing that Robb is a little short on is actual, concrete ways with which to fight the bazaar (there are some, and he has pointed out situations where U.S. forces attempted to thwart bazaar tactics, but such examples are not frequent). However, he still provides a valuable service in exposing security vulnerabilities. It seems appropriate that we adopt open source security techniques in order to fight an enemy that employs an open source platform. Vulnerabilities need to be exposed so that we may devise effective counter-measures.
Posted by Mark on November 07, 2004 at 08:56 PM .: link :.
Sunday, October 10, 2004
Open Security and Full Disclosure
A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I felt that the media could learn from such a model. Interestingly enough, such concepts can be applied to wider scenarios concerning information dissemination, particularly security.
Bruce Schneier has often written about such issues, and most of the information that follows is summarized from several of his articles, recent and old. The question with respect to computer security systems is this: Is publishing information about computer, network, or software vulnerabilities a good idea, or does it just help attackers?
When such a vulnerability exists, it creates what Schneier calls a Window of Exposure in which the vulnerability can still be exploited. This window exists until a patch for the vulnerability is created and installed. There are five key phases which define the size of the window:
Phase 1 is before the vulnerability is discovered. The vulnerability exists, but no one can exploit it. Phase 2 is after the vulnerability is discovered, but before it is announced. At that point only a few people know about the vulnerability, but no one knows to defend against it. Depending on who knows what, this could either be an enormous risk or no risk at all. During this phase, news about the vulnerability spreads -- either slowly, quickly, or not at all -- depending on who discovered the vulnerability. Of course, multiple people can make the same discovery at different times, so this can get very complicated.
The goal is to minimize the impact of the vulnerability by reducing the window of exposure (the area under the curve in Schneier's risk-over-time graph). There are two basic approaches: secrecy and full disclosure.
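Schneier's window-of-exposure idea lends itself to a toy sketch. The phase boundaries and per-phase risk levels below are purely illustrative assumptions of mine (they are not Schneier's numbers); the point is only to show how announcing early and patching quickly shrinks the "area under the curve":

```python
# Toy model of Schneier's "window of exposure": risk over time across the
# phases of a vulnerability's life. All numbers are illustrative assumptions.

def risk_at(day, announce_day, patch_day, install_day):
    """Piecewise risk level on a given day (day 0 = discovery)."""
    if day < 0:
        return 0.0   # Phase 1: undiscovered -- no one can exploit it
    if day < announce_day:
        return 0.2   # Phase 2: discovered but unannounced -- few attackers know
    if day < patch_day:
        return 1.0   # Phase 3: announced, no patch yet -- peak exposure
    if day < install_day:
        return 0.5   # Phase 4: patch exists but isn't widely installed
    return 0.1       # Phase 5: patch deployed; only stragglers remain exposed

def window_of_exposure(announce_day, patch_day, install_day, horizon=365):
    """Approximate the 'area under the curve' by summing daily risk."""
    return sum(risk_at(d, announce_day, patch_day, install_day)
               for d in range(horizon))

# Secrecy: announcement comes late, and a quietly-known bug gets patched slowly.
secrecy = window_of_exposure(announce_day=180, patch_day=300, install_day=350)
# Full disclosure: immediate announcement pressures the vendor into a fast patch.
disclosure = window_of_exposure(announce_day=0, patch_day=30, install_day=90)

print(round(secrecy, 1), round(disclosure, 1))  # → 182.5 87.5
```

Under these made-up numbers, full disclosure buys a brief spike of peak risk but less than half the total exposure, which is the tradeoff argument in miniature.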
The secrecy approach seeks to reduce the window of exposure by limiting public access to vulnerability information. In a different essay about network outages, Schneier gives a good summary of why secrecy doesn't work well:
The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they're lost they're lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there's no way to recover security. Trying to base security on secrecy is just plain bad design.
Secrecy may work on paper, but in practice, keeping vulnerabilities secret removes motivation to fix the problem (it is possible that a company could utilize secrecy well, but it is unlikely that all companies would do so and it would be foolish to rely on such competency). The other method of reducing the window of exposure is to disclose all information about the vulnerability publicly. Full Disclosure, as this method is called, seems counterintuitive, but Schneier explains:
Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn't bother fixing them, believing in the security of secrecy.
Ironically, publishing details about vulnerabilities leads to a more secure system. Of course, this isn't perfect. Obviously publishing vulnerabilities constitutes a short term danger, and can sometimes do more harm than good. But the alternative, secrecy, is worse. As Schneier is fond of saying, security is about tradeoffs. As I'm fond of saying, human beings don't so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn't quite as bad as the old). There is no solution here, only a less disadvantaged system.
This is what makes advocating open security systems like full disclosure difficult. Opponents will always be able to point to its flaws, and secrecy advocates are good at exploiting the intuitive (but not necessarily correct) nature of their systems. Open security systems are just counter-intuitive, and there is a tendency to not want to increase risk in the short term (as full disclosure does). Unfortunately, that means that the long term danger increases, as there is less incentive to fix security problems.
By the way, Schneier has started a blog. It appears to be made up of the same content that he normally releases monthly in the Crypto-Gram newsletter, but spread out over time. I think it will be interesting to see if Schneier starts responding to events in a more timely fashion, as that is one of the keys to the success of blogs (and it's something that I'm bad at, unless news breaks on a Sunday).
Posted by Mark on October 10, 2004 at 11:56 AM .: link :.
Wednesday, September 15, 2004
A Reflexive Media
"To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!" - Anne Morrow Lindbergh
There are many types of documentary films. The most common form of documentary is referred to as Direct Address (aka Voice of God). In such a documentary, the viewer is directly acknowledged, usually through narration and voice-overs. There is very little ambiguity and it is pretty obvious how you're expected to interpret these types of films. Many television and news programs use this style, to varying degrees of success. Ken Burns' famous Civil War and Baseball series use this format eloquently, but most traditional propaganda films also fall into this category (a small caveat: most films are hybrids, rarely falling exclusively into one category). Such films give the illusion of being an invisible witness to certain events and are thus very persuasive and powerful.
The problem with Direct Address documentaries is that they grew out of a belief that Truth is knowable through objective facts. In a recent sermon he posted on the web, Donald Sensing spoke of the difference between facts and the Truth:
Truth and fact are not the same thing. We need only observe the presidential race to discern that. John Kerry and allies say that the results of America's war against Iraq is mostly a failure while George Bush and allies say they are mostly success. Both sides have the same facts, but both arrive at a different "truth."
I'm not sure Sensing chose the best example here, but the concept itself is sound. Any documentary is biased in the Truth that it presents, even if the facts are undisputed. In a sense objectivity is impossible, which is why documentary scholar Bill Nichols admires films which seek to contextualize themselves, exposing their limitations and biases to the audience.
Reflexive Documentaries use many devices to acknowledge the filmmaker's presence, perspective, and selectivity in constructing the film. It is thought that films like this are much more honest about their subjectivity, and thus provide a much greater service to the audience.
An excellent example of a Reflexive documentary is Errol Morris' brilliant film, The Thin Blue Line. The film examines the "truth" around the murder of a Dallas policeman. The use of colored lighting throughout the film eventually correlates with who is innocent or guilty, and Morris is also quite manipulative through his use of editing - deconstructing and reconstructing the case to demonstrate just how problematic finding the truth can be. His use of framing calls attention to itself, daring the audience to question the intents of the filmmakers. The use of interviews in conjunction with editing is carefully structured to demonstrate the subjectivity of the film and its subjects. As you watch the movie, it becomes quite clear that Morris is toying with you, the viewer, and that he wants you to be critical of the "truth" he is presenting.
Ironically, a documentary becomes more objective when it acknowledges its own biases and agenda. In other words, a documentary becomes more objective when it admits its own subjectivity. There are many other forms of documentary not covered here (i.e. direct cinema/cinema verité, interview-based, performative, mock-documentaries, etc... most of which mesh together as they did in Morris' Blue Line to form a hybrid).
In Bill Nichols' seminal essay, Voice of Documentary (Can't seem to find a version online), he says:
"Documentary filmmakers have a responsibility not to be objective. Objectivity is a concept borrowed from the natural sciences and from journalism, with little place in the social sciences or documentary film."
I always found it funny that Nichols equates the natural sciences with journalism, as it seems to me that modern journalism is much more like a documentary than a natural science. As such, I think the lessons of Reflexive documentaries (and its counterparts) should apply to the realm of journalism.
The media emphatically do not acknowledge their biases. By bias, I don't mean anything as short-sighted as liberal or conservative media bias, I mean structural bias of which political orientation is but a small part (that link contains an excellent essay on the nature of media bias, one that I find presents a more complete picture and is much more useful than the tired old ideological bias we always hear so much about*). Such subjectivity does exist in journalism, yet the media stubbornly persist in their firm belief that they are presenting the objective truth.
The recent CBS scandal, consisting of a story bolstered by what appear to be obviously forged documents, provides us with an immediate example. Terry Teachout makes this observation regarding how few prominent people are willing to admit that they are wrong:
I was thinking today about how so few public figures are willing to admit (for attribution, anyway) that they’ve done something wrong, no matter how minor. But I wasn’t thinking of politicians, or even of Dan Rather. A half-remembered quote had flashed unexpectedly through my mind, and thirty seconds’ worth of Web surfing produced this paragraph from an editorial in a magazine called World War II:
Soon after he had completed his epic 140-mile march with his staff from Wuntho, Burma, to safety in India, an unhappy Lieutenant General Joseph W. Stilwell was asked by a reporter to explain the performance of Allied armies in Burma and give his impressions of the recently concluded campaign. Never one to mince words, the peppery general responded: "I claim we took a hell of a beating. We got run out of Burma and it is as humiliating as hell. I think we ought to find out what caused it, and go back and retake it."
Stilwell spoke those words sixty-two years ago. When was the last time that such candor was heard in like circumstances? What would happen today if similar words were spoken by some equally well-known person who’d stepped in it up to his eyebrows?
As he points out later in his post, I don't think we're going to be seeing such admissions any time soon. Again, CBS provides a good example. Rather than admit the possibility that they may be wrong, their response to the criticisms of their sources has been vague, dismissive, and entirely reliant on their reputation as a trustworthy staple of journalism. They have not yet comprehensively responded to any of the numerous questions about the documents; questions which range from "conflicting military terminology to different word-processing techniques". It appears their strategy is to escape the kill zone by focusing on the "truth" of their story, that Bush's service in the Air National Guard was less than satisfactory. They won't admit that the documents are forgeries, and by focusing on the arguably important story, they seek to distract from any discussion of their own wrongdoing - in effect claiming that the documents aren't important because the story is "true" anyway.
Should they admit they were wrong? Of course they should, but they probably won't. If they won't, it will not be because they think the story is right, nor because they think the documents are genuine. They won't admit wrongdoing and they won't correct their methodologies or policies because to do so would be to acknowledge to the public that they are less than just an objective purveyor of truth.
Yet I would argue that they should do so, that it is their duty to do so just as it is the documentarian's responsibility to acknowledge their limitations and agenda to their audience.
It is also interesting to note that weblogs contrast with the media by doing just that. Glenn Reynolds notes that the internet is a low-trust medium, which paradoxically indicates that it is more trustworthy than the media (because blogs and the like acknowledge their bias and agenda, admit when they're wrong, and correct their mistakes):
The Internet, on the other hand, is a low-trust environment. Ironically, that probably makes it more trustworthy.
The mainstream media as we know it is on the decline. They will no longer be able to get by on their brand or their reputations alone. The collective intelligence of the internet, combined with the natural reflexiveness of its environment, has already provided a challenge to the underpinnings of journalism. On the internet, the dominance of the media is constantly challenged by individuals who question the "truth" presented to them in the media. I do not think that blogs have the power to eclipse the media, but their influence is unmistakable. The only question that remains is if the media will rise to the challenge. If the way CBS has reacted is any indication, then, sadly, we still have a long way to go.
* Yes, I do realize the irony of posting this just after I posted about liberal and conservative tendencies in online debating, and I hinted at that with my "Update" in that post.
Thanks to Jay Manifold for the excellent Structural Bias of Journalism link.
Posted by Mark on September 15, 2004 at 11:07 PM .: link :.
Thursday, September 09, 2004
Benjamin Franklin: American, Blogger & LIAR!
I've been reading a biography of Benjamin Franklin (Benjamin Franklin: An American Life by Walter Isaacson), and several things have struck me about the way in which he conducted himself. As with a lot of historical figures, there is a certain aura that surrounds the man which is seen as impenetrable today, but it's interesting to read about how he was perceived in his time and contrast that with how he would be perceived today. As usual, there is a certain limit to the usefulness of such speculation, as it necessarily must be based on certain assumptions that may or may not be true (as such this post might end up saying more about me and my assumptions than Franklin!). In any case, I find such exercises interesting, so I'd like to make a few observations.
The first is that he would have probably made a spectacular blogger, if he chose to engage in such an activity (Ken thinks he would definitely be a blogger, but I'm not so sure). He not only has all the makings of a wonderful blogger, I think he'd be extremely creative with the format. He was something of a populist, his writing was humorous, self-deprecating, and often quite profound at the same time. His range of knowledge and interest was wide, and his tone was often quite congenial. All qualities valued in any blogger.
He was incredibly prolific (another necessity for a successful blog), and often wrote the letters to his paper himself under assumed names, and structured them in such a way as to gently deride his competitors while making some other interesting point. For instance, Franklin once published two letters, written under two different pseudonyms, in which he manufactured the first recorded abortion debate in America - not because of any strong feelings on the issue, but because he knew it would sell newspapers and because his competitor was serializing entries from an encyclopedia at the time and had started with "Abortion." Thus the two letters were not only interesting in themselves, but also provided ample opportunity to impugn his competitor.
One thing I think we'd see in a Franklin blog is entire comment threads consisting of a full back-and-forth debate, with all entries written by Franklin himself under assumed names. I can imagine him working around other "real" commenters with his own pseudonyms, and otherwise having fun with the format (he'd almost certainly make a spectacular troll as well).
If there was ever a man who could make a living out of blogging, I think Franklin was it. This is, in part, why I'm not sure he'd truly end up as a pure blogger, as even in his day, Franklin was known to mix private interests with public ones, and to leverage both to further his business interests. He could certainly have organized something akin to The Junto on the internet, where a group of like-minded fellows got together (whether physically or virtually) and discussed issues of the day while also forming a vehicle for the furtherance of their own careers.
Then again, perhaps Franklin would simply have started his own newspaper and had nothing to do with blogging (or perhaps he would attempt to mix the two in some new way). The only problem would be that the types of satire and hoaxes he could get away with in his newspapers in the early 18th century would not really be possible in today's atmosphere (such playfulness has long ago left the medium, but is alive and well in the blogosphere, which is one thing that would tend to favor his participation).
Which brings me to my next point: I have to wonder how Franklin would have done in today's political climate. Would he have been able to achieve political prominence? Would he want to? Would the anonymous letters and hoaxes in his newspapers have gotten him into trouble? I can imagine the self-righteous indignation now: "His newspaper is a farce! He's a LIAR!" And the Junto? I don't even want to think of the conspiracy theories that could be conjured with that sort of thing in mind.
One thing Franklin was exceptionally good at was managing his personal image, but would he be able to do so in today's atmosphere? I suspect he would have done well in our time, but I don't know how politically active he would be (and I suppose there is something to be said about his participation being partly influenced by the fact that he was a part of a revolution, not a true politician of the kind we have today). I know the basic story of his life, but I haven't gotten that far in the book, so perhaps I should revisit this subject later. And thus ends my probably inaccurate, but interesting nonetheless, discussion of Franklin in our times. Expect more references to Franklin in the future, as I have been struck by quite a few things about his life that are worth discussing today.
Posted by Mark on September 09, 2004 at 10:00 PM .: link :.
Sunday, August 01, 2004
A Village of Expectation
It's funny how much your expectations influence how much you like or dislike a movie. I'm often disappointed by long awaited films, Star Wars: Episode I being the typical example. Decades of waiting and an unprecedented pre-release hype served only to elevate expectations for the film to unreachable heights. So when the time came, meesa not so impressed. I enjoyed the film and I don't think it was that bad, but my expectations far outweighed the experience.
Conversely, when I go to watch a movie I think will stink, I'm often pleasantly surprised. Sometimes these movies are bad, but I thought they would be so much worse than they were that I ended up enjoying them. A recent example of this was I, Robot. As an avid Isaac Asimov fan, I was appalled by the previews for the film, which featured legions of apparently rebelling CGI robots, and naturally thought it would be stupefyingly bad, as such events were antithetical to Asimov's nuanced robot stories. Of course, I went to see it, and about halfway through, I was surprised to find that I was enjoying myself. It contains a few mentions of the three laws and positronics, and the name Susan Calvin is used for one of the main characters, but other than those minor details, the story doesn't even begin to resemble anything out of Asimov, so I was able to disassociate the two and enjoy the film on its own merits. And it was enjoyable.
Of course, I became aware of this phenomenon a long time ago, and have always tried to learn as little as possible about movies before they come out. I used to read up on all the movie news and look forward to tons of movies, but I found that going in with a clean slate is the best way to see a film. So I tend to shy away from reading reviews, though I will glance at the star rating of a few critics I know and respect. (Obviously it is not a perfectly clean slate, but you get the point.)
Earlier this week, I realized that M. Night Shyamalan's The Village was being released, and made plans to see it. Shyamalan, the writer, director, and producer of such films as The Sixth Sense, Unbreakable, and Signs, has become known for the surprise ending, where some fact is revealed which totally changes the perspective of everything that came before it. This is unfortunate, because the twists and turns of a story are less effective if we're expecting them. What's more, if we know it's coming, we wrack our brains trying to figure out what the surprise will be, hypothesizing several different versions of the story in our head, one of which is bound to be accurate. I've never been that impressed with Shyamalan, but he has always produced solid films that were entertaining enough. There are often little absurdities or plot holes, but never enough to completely drain my goodwill dry (though Signs came awfully close). I think he'll mature into a better filmmaker as time goes on.
The Village has its share of twists and turns, but of course, we expect them and so they really don't come as any surprise (and, to be honest, Shyamalan laid on the hints pretty thickly). Fortunately, knowing what is coming doesn't completely destroy the film, as it would in some of his other films. I've tried to avoid spoilers by speaking in generalities, but if you haven't seen the film, you might want to skip down to the next paragraph (I don't think I ruined anything, but better safe than sorry). Shyamalan has always relied more on brooding atmosphere and building tension than on gratuitous action and gore, and The Village is no exception. Once again, he does resort to the use of "Boo!" moments, something that has always rubbed me the wrong way in his films, but I'm beginning to come around. He has become quite adept at employing that device, even if it is a cheap thrill. He must realize it, because at one point I think he deliberately eschews the "Boo!" moment in favor of a more meticulous and subtle approach. There are several instances of masterful staging in the film, which is part of why knowing the twists ahead of time doesn't ruin the film.
Now I was looking forward to this film, but as I mentioned before, I've never been blown away by Shyamalan (with the possible exception of Unbreakable, which I still think is the best of his films) so I didn't have tremendously high expectations. I expected a well done, but not brilliant, film. On Friday, I checked out Ebert's rating and glanced at Rotten Tomatoes, both of which served to further deflate my expectations. By the time I saw the film, I was expecting a real dud and was pleasantly surprised to find another solid effort from Shyamalan. It's not for everybody, and those who are expecting another bombshell ending will be disappointed, but that doesn't matter much in my opinion. The movie is what it is, and I judge it on its own merits, not on inflated expectations of twist endings and shocking revelations.
Would I have enjoyed it as much if I had been expecting something more out of it? Probably not, and there's the rub. Does it matter? That is a difficult question to answer. No matter how you slice it, what you expect of a film forces a point of reference. When you see the film, you judge it based on that. So now the question becomes, is it right to intentionally force the point of reference low, so as to make sure you enjoy the movie? That too is a difficult question to answer. For my money, it is to some extent advisable to keep a check on high expectations, but I suppose you could get carried away with it. In any case, I enjoyed The Village and I look forward to Shyamalan's next film, albeit with a wary sense of trepidation.
Posted by Mark on August 01, 2004 at 07:34 PM .: link :.
Sunday, July 18, 2004
With great freedom, comes great responsibility...
David Foster recently wrote about a letter to the New York Times which echoed sentiments regarding Iraq that appear to be commonplace in certain circles:
While we have removed a murderous dictator, we have left the Iraqi people with a whole new set of problems they never had to face before...I've often written about the tradeoffs inherent in solving problems, and the invasion of Iraq is no exception. Let us pretend for a moment that everything that happened in Iraq over the last year went exactly as planned. Even in that best case scenario, the Iraqis would be facing "a whole new set of problems they never had to face before." There was no action that could have been taken regarding Iraq (and this includes inaction) that would have resulted in an ideal situation. We weren't really seeking to solve the problems of Iraq, so much as we were exchanging one set of problems for another.
Yes, the Iraqis are facing new problems they have never had to face before, but the point is that the new problems are more favorable than the old problems. The biggest problem they are facing is, in short, freedom. Freedom is an odd thing, and right now, halfway across the world, the Iraqis are finding that out for themselves. Freedom brings great benefits, but also great responsibility. Freedom allows you to express yourself without fear of retribution, but it also allows those you hate to express things that make your blood boil. Freedom means you have to acknowledge their views, no matter how repulsive or disgusting you may find them (there are limits, of course, but that is another subject). That isn't easy.
A little while ago, Steven Den Beste wrote about Jewish immigrants from the Soviet Union:
About 1980 (I don't remember exactly) there was a period in which the USSR permitted huge numbers of Jews to leave and move to Israel. A lot of them got off the jet in Tel Aviv and instantly boarded another one bound for New York, and ended up here.There are a lot of people who ended up in the U.S. because they were fleeing oppression, and when they got here, they were confronted with "a whole new set of problems they never had to face before." Most of them were able to adapt to the challenges of freedom and prosper, but don't confuse prosperity with utopia. These people did not solve their problems, they traded them for a set of new problems. For most of them, the problems associated with freedom were more favorable than the problems they were trying to escape from. For some, the adjustment just wasn't possible, and they returned to their homes.
Defecting North Koreans face a host of challenges upon their arrival in South Korea (if they can make it that far), including the standard freedom related problems: "In North Korea, the state allocates everything from food to jobs. Here, having to do their own shopping, banking or even eating at a food court can be a trying experience." The differences between North Korea and South Korea are so vast that many defectors cannot adapt, despite generous financial aid, job training and other assistance from civic and religious groups. Only about half of the defectors are able to wrangle jobs, but even then, it's hard to say that they've prospered. But at the same time, are their difficulties now worse than their previous difficulties? Moon Hee, a defector who is having difficulties adjusting, comments: "The present, while difficult, is still better than the past when I did not even know if there would be food for my next meal."
There is something almost paradoxical about freedom. You see, it isn't free. Yes, freedom brings benefits, but you must pay the price. If you want to live in a free country, you have to put up with everyone else being free too, and that's harder than it sounds. In a sense, we aren't really free, because the freedom we live with and aspire to is a limiting force.
On the subject of Heaven, Saint Augustine once wrote:
The souls in bliss will still possess the freedom of will, though sin will have no power to tempt them. They will be more free than ever; so free, in fact, from all delight in sinning as to find, in not sinning, an unfailing source of joy. ...in eternity, freedom is that more potent freedom which makes all sin impossible. - Saint Augustine, City of God (Book XXII, Chapter 30)Augustine's concept of a totally free will is seemingly contradictory. For him, freedom, True Freedom, is doing the right thing all the time (I'm vastly simplifying here, but you get the point). Outside of Heaven, however, doing the right thing, as we all know, isn't easy. Just ask Spider-Man.
I never really read the comics, but in the movies (which appear to be true to their source material) Spider-Man is all about the conflict between responsibilities and desires. Matthew Yglesias is actually upset with the second film because it has a happy ending:
Being the good guy -- doing the right thing -- really sucks, because doing the right thing doesn't just mean avoiding wrongdoing, it means taking affirmative action to prevent it. There's no time left for Peter's life, and his life is miserable. Virtue is not its own reward, it's virtue, the rewards go to the less conscientious. There's no implication that it's all worthwhile because God will make it right in the End Times, the life of the good guy is a bleak one. It's an interesting (and, I think, a correct) view and it's certainly one that deserves a skilled dramatization, which is what the film gives you right up until the very end. But then -- ta da! -- it turns out that everyone does get to be happy after all. A huge letdown.Of course, plenty of people have noted that the Spider-Man story doesn't end with the second movie, and that the third is bound to be filled with the complications of superhero dating (which are not limited to Spider-Man).
Spider-Man grapples with who he is. He has gained all sorts of powers, and with those powers, he has also gained a certain freedom. It could be very liberating, but as the saying goes: With great power comes great responsibility. He is not obligated to use his powers for good or at all, but he does. However, for a good portion of the second film he shirks his duties because a life of pure duty has totally ruined his personal life. This is that conflict between responsibilities and desires I mentioned earlier. It turns out that there are limits to Spider-Man's altruism.
For Spider-Man, it is all about tradeoffs, though he may have learned it the hard way. First he took on too much responsibility, and then too little. Will he ever strike a delicate balance? Will we? For we are all, in a manner of speaking, Spider-Man. We all grapple with similar conflicts, though they manifest in our lives with somewhat less drama. Balancing your personal life with your professional life isn't as exciting, but it can be quite challenging for some.
And so the people of Iraq are facing new challenges; problems they have never had to face before. Like Spider-Man, they're going to have to deal with their newfound responsibilities and find a way to balance them with their desires. Freedom isn't easy, and if they really want it, they'll need to do more than just avoid problems, they'll have to actively solve them. Or, rather, trade one set of problems for another. Because with great freedom, comes great responsibility.
Posted by Mark on July 18, 2004 at 09:16 PM .: link :.
Sunday, July 04, 2004
Ralph Peters writes about his experience keeping track of combat in Iraq during the tumultuous month of April:
During the initial fighting in Fallujah, I tuned in al-Jazeera and the BBC. At the same time, I was getting insider reports from the battlefield, from a U.S. military source on the scene and through Kurdish intelligence. I saw two different battles.Peters' disenchantment with the media is hardly unique. Reports of the inadequacy of the media are legion. Eric M. Johnson is a U.S. Marine who served in Iraq and recently wrote about media bias:
Iraq veterans often say they are confused by American news coverage, because their experience differs so greatly from what journalists report. Soldiers and Marines point to the slow, steady progress in almost all areas of Iraqi life and wonder why they don't get much notice or, in many cases, any notice at all.It goes on from there, pointing out several examples and further evidence of the substandard performance of the media in Iraq. Then you have this infamous report from the Daily Telegraph's correspondent Toby Harnden.
The other day, while taking a break by the Al-Hamra Hotel pool, fringed with the usual cast of tattooed defense contractors, I was accosted by an American magazine journalist of serious accomplishment and impeccable liberal credentials.Yikes. I wish I knew a little more about this unnamed "magazine journalist of serious accomplishment and impeccable liberal credentials", but it is a chilling admonition nonetheless.
Again, the inadequacy of the media has become painfully obvious over the past few years. How to deal with this? At a discussion forum the other day, someone posted this article concerning FOX News bias along with this breathless message:
This shouldn't come as any surprise. How can a NEWS organization possibly be allowed to lie like this? FOX should be removed from the air and those who are in charge should be removed from the media business and not be allowed to do anything whatsoever where news and media are concerned.Well, I suppose that is one way of dealing with media bias. But Ralph Peters' response is drastically different. He assumes the media can't or shouldn't be changed. I tend to take his side, as arbitrarily removing a news organization from the air and blacklisting those in charge seems like a cure that is much worse than the disease to me, but that leads to some unpleasant consequences. Back to the Peters article:
The media is often referred to off-handedly as a strategic factor. But we still don't fully appreciate its fatal power. Conditioned by the relative objectivity and ultimate respect for facts of the U.S. media, we fail to understand that, even in Europe, the media has become little more than a tool of propaganda.[emphasis mine] This is bound to be a difficult process, and will take years to perfect. If we proceed on this path, we'll have to suffer many short term problems, including a much higher casualty rate, perhaps for both sides (and even civilians). If we don't proceed along this path; if we don't learn to kill quickly, then we'll lose slowly.
For its part, the military has shown some initiative in dealing with the media. Wretchard writes about a Washington Post article describing the victory that the First Armored Division won over Moqtada Al-Sadr's militia:
In what was probably the most psychologically revealing moment of the battle, infantrymen fought six hours for the possession of one damaged Humvee, of no tactical value, simply so that the network news would not have the satisfaction of displaying the piece of junk in the hands of Sadr's men.I don't know that Peters' pessimism is totally warranted, but there is an element of pragmatism involved that should be considered. It is certainly frustrating though.
Posted by Mark on July 04, 2004 at 06:06 PM .: link :.
Friday, June 11, 2004
Religion isn't as comforting as it seems
Steven Den Beste is an atheist, yet he is unlike any atheist I have ever met in that he seems to understand theists (in the general sense of the term) and doesn't hold their beliefs against them. As such, I have gained an immense amount of respect for him and his beliefs. He speaks with conviction about his beliefs, but he is not evangelistic.
In his latest post, he asks one of the great unanswerable questions: What am I? I won't pretend to have any of the answers, but I do object to one thing he said. It is a belief that is common among atheists (though theists are little better):
Is a virus alive? I don't know. Is a hive mind intelligent? I don't know. Is there actually an identifiable self with continuity of existence which is typing these words? I really don't know. How much would that self have to change before we decide that the continuity has been disrupted? I think I don't want to find out.[Emphasis added] The idea that these types of unanswerable questions are not troubling to a believer, or are somehow easier for a believer to answer, is a common one, but I believe it to be false. Religion is no more comforting than any other system of beliefs, including atheism. Religion does provide a vocabulary for the unanswerable, but all that does is help us grapple with the questions - it doesn't solve anything and I don't think it is any more comforting. I believe in God, but if you asked me what God really is, I wouldn't be able to give you a definitive answer. Actually, I might be able to do that, but "God is a mystery" is hardly comforting or all that useful.
Elsewhere in the essay, he refers to the Christian belief in the soul:
To a Christian, life and self are ultimately embodied in a person's soul. Death is when the soul separates from the body, and that which makes up the essence of a person is embodied in the soul (as it were).He goes on to list some conundrums that would be troubling to the believer but they all touch on the most troubling thing - what the heck is the soul in the first place? Trying to answer that is no more comforting to a theist than trying to answer the questions he's asking himself. The only real difference is a matter of vocabulary. All religion has done is shifted the focus of the question.
Den Beste goes on to say that there are many ways in which atheism is cold and unreassuring, but fails to recognize the ways in which religion is cold and unreassuring. For instance, there is no satisfactory theodicy that I have ever seen, and I've spent a lot of time studying such things (16 years of Catholic schooling, baby!) A theodicy is essentially an attempt to reconcile God's existence with the existence of evil. Why does God allow evil to exist? Again, there is no satisfactory answer to that question, not least because there is no satisfactory definition of either God or evil!
Now, theists often view atheists in a similar manner. While Den Beste laments the cold and unreassuring aspects of atheism, a believer almost sees the reverse. To some believers, if you remove God from the picture, you also remove all concept of morality and responsibility. Yet, that is not the case, and Den Beste provides an excellent example of a morally responsible atheist. The grass is greener on the other side, as they say.
All of this is generally speaking, of course. Not all religions are the same, and some are more restrictive and closed-minded than others. I suppose it can be a matter of degrees, with one religion or individual being more open minded than the other, but I don't really know of any objective way to measure that sort of thing. I know that there are some believers who aren't troubled by such questions and proclaim their beliefs in blind faith, but I don't count myself among them, nor do I think it is something that is inherent in religion (perhaps it is inherent in some religions, but even then, religion does not exist in a vacuum and must be reconciled with the rest of the world).
Part of my trouble with this may be that I seem to have the ability to switch mental models rather easily, viewing a problem from a number of different perspectives and attempting to figure out the best way to approach a problem. I seem to be able to reconcile my various perspectives with each other as well (for example, I seem to have no problem reconciling science and religion with each other), though the boundaries are blurry and I can sometimes come up with contradictory conclusions. This is in itself somewhat troubling, but at the same time, it is also somewhat of an advantage that I can approach a problem in a number of different ways. The trick is knowing which approach to use for which problem; hardly an easy proposition. Furthermore, I gather that I am somewhat odd in this ability, at least among believers. I used to debate religion a lot on the internet, and after a time, many refused to think of me as a Catholic because I didn't seem to align with others' perception of what Catholics are. I always found that rather amusing, though I guess I can understand the sentiment.
Unlike Den Beste, I do harbor some doubt in my beliefs, mainly because I recognize them as beliefs. They are not facts, and I must concede the possibility that my beliefs are incorrect. Like all sets of beliefs, there is an aspect of my beliefs that is very troubling and uncomforting, and there is a price we all pay for believing what we believe. And yet, believe we must. If we required our beliefs to be facts in order to act, we would do nothing. The value we receive from our beliefs outweighs the price we pay, or so we hope...
I suppose this could be seen by Steven to be missing the forest for the trees, but the reason I posted it is because the issue of beliefs discussed above fits nicely with several recent posts I made under the guise of Superstition and Security Beliefs (and Heuristics). They might provide a little more detail on the way I think regarding these subjects.
Posted by Mark on June 11, 2004 at 12:09 AM .: link :.
Sunday, May 23, 2004
One of my favorite anecdotes (probably apocryphal, as these things usually go) tells of a horseshoe that hung on the wall over Niels Bohr's desk. One day, an exasperated visitor could not help asking, "Professor Bohr, you are one of the world's greatest scientists. Surely you cannot believe that object will bring you good luck." "Of course not," Bohr replied, "but I understand it brings you luck whether you believe or not."
I've had two occasions to be obsessively superstitious this weekend. The first was Saturday night's depressing Flyers game. Due to a poorly planned family outing (thanks a lot Mike!), I missed the first period and a half of the game. During that time, the Flyers went down 2-0. As soon as I started watching, they scored a goal, much to my relief. But as the game ground to a less than satisfactory close, I could not help but think, what if I had been watching for that first period?
Even as I thought that, though, I recognized how absurd and arrogant a thought like that is. As a fan, I obviously cannot participate in the game, but all fans like to believe they are a factor in the outcome of the game and will thus go to extreme superstitious lengths to ensure the team wins. That way, there is some sort of personal pride to be gained (or lost, in my case) from the team winning, even though there really isn't.
I spent the day today at the Belmont Racetrack, betting on the ponies. Longtime readers know that I have a soft spot for gambling, but that I don't do it very often nor do I ever really play for high stakes. One of the things I really enjoy is people watching, because some people go to amusing lengths to perform superstitious acts that will bring them that mystical win.
One of my friends informed me of his superstitious strategy today. His entire betting strategy dealt with the name of the horse. If the horse's name began with an "S" (e.g. Secretariat, Seabiscuit, etc...) it was bound to be good. He also made an impromptu decision that names which displayed alliteration (e.g. Seattle Slew, Barton Bank, etc...) were also more likely to win. So today, when he spied "Seaside Salute" in the program, which exhibited both alliteration and the letter "S", he decided it was a shoo-in! Of course, he only bet it to win, and it placed, thus he got screwed out of a modest amount of money.
Like I should talk. My entire betting strategy revolves around John R. Velazquez, the best jockey in the history of horse racing. This superstition did not begin with me, as several friends discovered this guy a few years ago, but it has been passed on and I cannot help but believe in the power of JRV. When I bet on him, I tend to win. When I bet against him, he tends to be riding the horse that screws me over. As a result, I need to seriously consider the consequences of crossing JRV whenever I choose to bet on someone else.
Now, if I were to collect historical data regarding my bets for or against JRV (which is admittedly a very small data set, and thus not terribly conclusive either way, but stay with me here) I wouldn't be surprised to find that my beliefs are unwarranted. But that is the way of the superstition - no amount of logic or evidence is strong enough to be seriously considered (while any supporting evidence is, of course, trumpeted with glee).
Now, I don't believe for a second that watching the Flyers makes them play better, nor do I believe that betting on (or against) John R. Velazquez will increase (or decrease) my chances of winning. But I still think those things... after all, what could I lose?
This could be a manifestation of a few different things. It could be a relatively benign "security belief" (or "pleasing falsehood" as some like to call it - I'm sure there are tons of names for it) which, as long as you realize what you're dealing with, can actually be fun (as my obsession with JRV is). It could also be brought on by what Steven Den Beste calls the High cliff syndrome.
It seems that our brains are constantly formulating alternatives, and then rejecting most of them at the last instant. ... All of us have had the experience of thinking something which almost immediately horrified us, "Why would I think such a thing?" I call it "High cliff syndrome".It seems to be one of the profound truths of human existence that we can conceive of impossible situations that we know will never be possible. None of us are immune, from one of the great scientific minds of our time to the lowliest casino hound. This essay was, in fact, inspired by an Isaac Asimov essay called "Knock Plastic!" (as published in Magic) in which Asimov confesses his habitual knocking of wood (of course, he became a little worried over the fact that natural wood was being used less and less in ordinary construction... until, of course, someone introduced him to the joys of knocking on plastic). The insights driven by such superstitious "security beliefs" must indeed be kept into perspective, but that includes realizing that we all think these things and that sometimes, it really can't hurt to indulge in a superstition.
Update: More on Security Beliefs here.
Posted by Mark on May 23, 2004 at 09:32 PM .: link :.
Sunday, May 02, 2004
The Unglamorous March of Technology
We live in a truly wondrous world. The technological advances over just the past 100 years are astounding, but, in their own way, they're also absurd and even somewhat misleading, especially when you consider how these advances are discovered. More often than not, we stumble onto something profound by dumb luck or by brute force. When you look at how a major technological feat was accomplished, you'd be surprised by how unglamorous it really is. That doesn't make the discovery any less important or impressive, but we often take the results of such discoveries for granted.
For instance, how was Pi originally calculated? Chris Wenham provides a brief history:
So according to the Bible it's an even 3. The Egyptians thought it was 3.16 in 1650 B.C.. Ptolemy figured it was 3.1416 in 150 AD. And on the other side of the world, probably oblivious to Ptolemy's work, Zu Chongzhi calculated it to 355/113. In Bagdad, circa 800 AD, al-Khwarizmi agreed with Ptolemy; 3.1416 it was, until James Gregory begged to differ in the late 1600s.π is an important number, and being able to figure out what it is has played a significant role in the advance of technology. While all of these numbers are pretty much the same (to varying degrees of precision), isn't it absurd that someone figured out π by dropping 34,000 pins on a grid? We take π for granted today; we don't have to go about finding the value of π, we just use it in our calculations.
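Incidentally, that pin-dropping experiment (the classic Buffon's needle) is trivial to simulate on a modern computer. Here's a quick Python toy of my own devising, not anyone's historical method; the function name and parameters are mine, I've assumed the needle length equals the line spacing, and note the amusing cheat that we already use π to pick a random angle:

```python
import math
import random

def estimate_pi(n_drops, needle_len=1.0, line_gap=1.0):
    """Estimate pi via Buffon's needle: drop needles on a floor ruled with
    parallel lines and count how many cross a line. A needle crosses with
    probability 2*L/(pi*d), so pi is roughly 2*L*n / (d * crossings)."""
    crossings = 0
    for _ in range(n_drops):
        x = random.uniform(0, line_gap / 2)          # center's distance to nearest line
        theta = random.uniform(0, math.pi / 2)       # needle's angle vs. the lines
        if x <= (needle_len / 2) * math.sin(theta):  # tip reaches the line: a crossing
            crossings += 1
    return 2 * needle_len * n_drops / (line_gap * crossings)

random.seed(0)
print(estimate_pi(100_000))  # hovers around 3.14
```

A hundred thousand virtual pins take a fraction of a second, where the original experimenters spent days with 34,000 real ones; the accuracy creeps up agonizingly slowly either way, which is part of what makes the brute-force approach so unglamorous.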
In Quicksilver, Neal Stephenson portrays several experiments performed by some of the greatest minds in history, and many of the things they did struck me as especially unglamorous. Most would point to the dog and bellows scene as a prime example of how unglamorous the unprecedented age of discovery recounted in the book really was (and they'd be right), but I'll choose something more mundane (page 141 in my edition):
"Help me measure out three hundred feet of thread," Hooke said, no longer amused.

And, of course, the experiment was a failure. Why? The scale was not precise enough! The book is filled with similar experiments, some successful, some not.
Another example is telephones. Pick one up, enter a few numbers on the keypad and voila! You're talking to someone halfway across the world. Pretty neat, right? But how does that system work behind the scenes? Take a look at the photo on the right. This is a typical intersection in a typical American city, and it is absolutely absurd. Look at all those wires! Intersections like that are all over the world, which is part of the reason I can pick up my phone and talk to someone so far away. Another part of the reason is that almost everyone has a phone. And yet, this system is perceived to be elegant.
Of course, the telephone system has grown over the years, and what we have now is elegant compared to what we used to have:
The engineers who collectively designed the beginnings of the modern phone system in the 1940's and 1950's only had mechanical technologies to work with. Vacuum tubes were too expensive and too unreliable to use in large numbers, so pretty much everything had to be done with physical switches. Their solution to the problem of "direct dial" with the old rotary phones was quite clever, actually, but by modern standards was also terribly crude; it was big, it was loud, it was expensive and used a lot of power and worst of all it didn't really scale well. (A crossbar is an N² solution.) ... The reason the phone system handles the modern load is that the modern telephone switch bears no resemblance whatever to those of the 1950's. Except for things like hard disks, they contain no moving parts, because they're implemented entirely in digital electronics.

So we've managed to get rid of all the moving parts and make things run more smoothly and reliably, but isn't it still an absurd system? It is, but we don't really stop to think about it. Why? Because we've hidden the vast and complex backend of the phone system behind innocuous-looking telephone numbers. All we need to know to use a telephone is how to operate it (i.e. how to punch in numbers) and what number we want to call. Wenham explains, in a different essay:
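The parenthetical about crossbars being an N² solution is just counting: a full crossbar needs a physical crosspoint at every intersection of input and output lines, so switch hardware grows with the square of the number of subscribers. A quick back-of-the-envelope sketch (illustrative line counts only, not real exchange sizes):

```python
def crossbar_crosspoints(lines: int) -> int:
    """A full crossbar needs one physical switch (crosspoint) for every
    input/output pair, so hardware grows quadratically with line count."""
    return lines * lines

# Ten times the lines means a hundred times the crosspoints:
for n in (100, 1_000, 10_000):
    print(f"{n:>6} lines -> {crossbar_crosspoints(n):>12,} crosspoints")
```

That quadratic blowup is exactly the "didn't really scale well" complaint, and why the digital switches that replaced crossbars were such a big deal.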
The numbers seem pretty simple in design, having an area code, exchange code and four digit number. The area code for Manhattan is 212, Queens is 718, Nassau County is 516, Suffolk County is 631 and so on. Now let's pretend it's my job to build the phone routing system for Emergency 911 service in the New York City area, and I have to route incoming calls to the correct police department. At first it seems like I could use the area and exchange codes to figure out where someone's coming from, but there's a problem with that: cell phone owners can buy a phone in Manhattan and get a 212 number, and yet use it in Queens. If someone uses their cell phone to report an accident in Queens, then the Manhattan police department will waste precious time transferring the call.

He also mentions cell phones, which are somewhat less absurd than plain old telephones, but when you think about it, all we've done with cell phones is abstract away the telephone lines. We're still connecting to a cell tower (and towers need to be placed densely throughout the world), and from there a call is often routed through the plain old telephone system. If we could see the RF layer in action, we'd be astounded; it would make the telephone wires look organized and downright pleasant by comparison.
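A deliberately naive sketch makes Wenham's 911 example concrete (the table and function names here are hypothetical, and real E911 systems route on the caller's measured location, not the billing area code):

```python
# Hypothetical area-code routing table (illustrative only, not real E911 logic).
AREA_CODE_TO_PSAP = {
    "212": "Manhattan PD",
    "718": "Queens PD",
    "516": "Nassau County PD",
    "631": "Suffolk County PD",
}

def route_by_caller_id(number: str) -> str:
    """Naive routing: assume the area code says where the caller is."""
    return AREA_CODE_TO_PSAP.get(number[:3], "statewide operator")

# The leak in the abstraction: a 212 cell phone dialed from Queens
# still looks like a Manhattan call.
print(route_by_caller_id("2125551234"))  # Manhattan PD, even if the caller is in Queens
```

The phone number abstracts away geography so well that when you actually need the geography back, the abstraction leaks.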
The act of hiding the physical nature of a system behind an abstraction is very common, but it turns out that all major abstractions are leaky. Even the leaks, though, are to some degree useful.
One of the most glamorous technological advances of the past 50 years was the advent of space travel. Thinking of the heavens is indeed an awe-inspiring and humbling experience, to be sure, but when you start breaking things down to the point where we can put a man in space, things get very dicey indeed. When it comes to space travel, there is no more glamorous a person than the astronaut, but again, how does one become an astronaut? By poring over and memorizing giant telephone-book-sized volumes filled with technical specifications and detailed schematics. Hardly a glamorous proposition.
Steven Den Beste recently wrote a series of articles concerning the critical characteristics of space warships, and it is fascinating reading, but one of the things that struck me about the whole concept was just how unglamorous space battles would be. It sounds like a battle using the weapons and defenses described would be punctuated by long periods of waiting followed by a short burst of activity in which one side was completely disabled. This is, perhaps, the reason so many science fiction movies and books seem to flout the rules of physics. As a side note, I think a spectacular film could be made while still obeying the rules of physics, precisely because we're so used to absurd, physics-defying space battles.
None of this is to say that technological advances aren't worthwhile or that those who discover new and exciting concepts are somehow unimpressive. If anything, I'm more impressed at what we've achieved over the years. And yet, since we take these advances for granted, we marginalize the effort that went into their discovery. This is due in part to the necessary abstractions we make to implement various systems. But when abstractions hide the crude underpinnings of technology, we see that technology and its creation as glamorous, thus bestowing honors upon those who made the discovery (perhaps for the wrong reasons). It's an almost paradoxical cycle. Perhaps because of this, we expect newer discoveries and innovations to somehow be less crude, but we must realize that all of our discoveries are inherently crude.
And while we've discovered a lot, it is still crude and could use improvements. Some technologies have stayed the same for thousands of years. Look at toilet paper. For all of our wondrous technological advances, we're still wiping our ass with a piece of paper. The Japanese have the most advanced toilets in the world, but they've still not figured out a way to bypass the simple toilet paper (or, at least, abstract the process). We've got our work cut out for us. Luckily, we're willing to go to absurd lengths to achieve our goals.
Posted by Mark on May 02, 2004 at 09:47 PM .: link :.
Sunday, April 04, 2004
Thinking about Security
I've been making my way through Bruce Schneier's Crypto-Gram newsletter archives, and I came across this excellent summary of how to think about security. He breaks security down into five simple questions that should be asked of a proposed security solution, some obvious, some not so much. In the post-9/11 era, we're being presented with all sorts of security solutions, and so Schneier's system can be quite useful in evaluating proposed security systems.
This five-step process works for any security measure, past, present, or future:

What this process basically does is force you to judge the tradeoffs of a security system. All too often, we either assume a proposed solution doesn't create problems of its own, or assume that because a proposed solution isn't a perfect solution, it's useless. Security is a tradeoff. It doesn't matter if a proposed security system makes us safe. What matters is whether the system is worth the tradeoffs (or price, if you prefer). For instance, in order to make your computer invulnerable to external attacks from the internet, all you need to do is disconnect it from the internet. However, that means you can no longer access the internet! That is the price you pay for a perfectly secure solution to internet attacks. And it doesn't protect against attacks from those who have physical access to your computer. Besides, you presumably want to use the internet, seeing as you had a connection you wanted to protect. The old saying still holds: a perfectly secure system is a perfectly useless system.
In the post 9/11 world we're constantly being bombarded by new security measures, but at the same time, we're being told that a solution which is not perfect is worthless. It's rare that a new security measure will provide a clear benefit without causing any problems. It's all about tradeoffs...
I had intended to apply Schneier's system to a contemporary security "solution," but I can't seem to think of anything at the moment. Perhaps more later. In the meantime, check out Schneier's recent review of "I am Not a Terrorist" Cards, in which he tears apart a proposed security system that sounds interesting on the surface but makes little sense when you take a closer look (which Schneier does, mercilessly).
Posted by Mark on April 04, 2004 at 11:09 PM .: link :.
Sunday, March 21, 2004
Inherently Funny Words, Humor, and Howard Stern
Here's a question: Which of the following words is most inherently funny?
Words with a 'k' in it are funny. Alkaseltzer is funny. Chicken is funny. Pickle is funny. All with a 'k'. 'L's are not funny. 'M's are not funny. Cupcake is funny. Tomatoes is not funny. Lettuce is not funny. Cucumber's funny. Cab is funny. Cockroach is funny -- not if you get 'em, only if you say 'em.

Well, that is certainly a start, but it doesn't really tell the whole story. Words with an "oo" sound are also often funny, especially when used in reference to bodily functions (as in poop, doody, booger, boobies, etc...) In fact, bodily functions are just plain funny. Witness fart.
Of course, ultimately it's a subjective thing. To me, boobies are funnier than breasts, even though they mean the same thing. To you, perhaps not. It's the great mystery of humor, and one of the most beautiful things about laughter is that it happens involuntarily. We don't (always) have to think about it, we just do it. Here's a quote from Dennis Miller to illustrate the point:
The truth is the human sense of humor tends to be barbaric and it has been that way all along. I'm sure on the eve of the nativity when the tall Magi smacked his forehead on the crossbeam while entering the stable, Joseph took a second away from pondering who impregnated his wife and laughed his little carpenter ass off. A sense of humor is exactly that: a sense. Not a fact, not etched in stone, not an empirical math equation but just what the word intones: a sense of what you find funny. And obviously, everybody has a different sense of what's funny. If you need confirmation on that I would remind you that Saved by the Bell recently celebrated the taping of their 100th episode. Oh well, one man's Moliere is another man's Screech, and you know something, that's the way it should be.

There has been a lot of controversy recently about the FCC's proposed fines against Howard Stern (which may have been temporarily postponed). Stern has been fined many times before, including "$600,000 after Stern discussed masturbating to a picture of Aunt Jemima." Stern, of course, has flown off the handle at the prospect of new fines. Personally, I think he's overreacting a bit by connecting the whole thing with Bush and the religious right, but part of the reason he is so successful is that his overreaction isn't totally uncalled for. At the core of his argument is a serious concern about censorship, and a worry about the FCC abusing its authority.
On the other hand, some people don't see what all the fuss is about. What's wrong with having a standard for the public airwaves that broadcasters must live up to? Well, in theory, nothing. I'm not wild about the idea, but there are things I can understand people not wanting to be broadcast over public airwaves. The problem is deciding what is acceptable.
Just what is the standard? Sure, you've got the 7 dirty words, that's easy enough, but how do you define decency? The fines proposed against Stern are supposedly for a 3-year-old broadcast. Does that sound right to you? Recently Stern wanted to do a game in which the loser had to let someone fart in their face. Now, I can understand some people thinking that's not very nice, but does it qualify as "indecent"? Apparently, it might, and Stern was not allowed to proceed with the game (he was given the option to place the loser in a small booth, and then have someone fart in the booth). Would it actually have resulted in a fine? Who knows? And that is the real problem with standards. If you want to propose a standard, it has to be clear, and you need to straddle a line between what is hurtful and what is simply disgusting or offensive. You may be upset at Stern's asking a Nigerian woman if she eats monkeys, but does that deserve a fine from the government? And how much? And is it really the job of the government to decide these sorts of things? In the free market, advertisers can choose (and have chosen) not to advertise on Stern's program.
At the bottom of this post, Lawrence Theriot makes a good point about that:
Yes, a lot of what Stern does could be considered indecent by a large portion of the population (which is the Supreme Court standard), but in this case it's important to consider WHERE those people might live and to what degree they are likely to be exposed to Stern's brand of humor before you decide that those people need federal protection from hearing his show. Or, in other words, might the market have already acted to protect those people in a very real way that makes Federal action unnecessary?

In the end, I don't know the answer, but there is no easy solution here. I can see why people want standards, but standards can be quite impractical. On the other hand, I can see why Stern is so irate at the prospect of being fined for something he said 3 years ago - and also never knowing whether what he's about to say qualifies as "indecent" (and not really being able to take such a thing to court to decide). Dennis Miller again:
We should question it all; poke fun at it all; piss off on it all; rail against it all; and most importantly, for Christ's sake, laugh at it all. Because the only thing separating holy writ from complete bullshit is your perspective. It's your only weapon. Keep the safety off. Don't take yourself too seriously.

In the end, Stern makes a whole lot of people laugh, and he doesn't take himself all that seriously. Personally, I don't want to fine him for that, but if you do, you need to come up with a standard that makes sense and is clear and practical to implement. I get the feeling this wouldn't be an issue if he were clearly right or clearly wrong...
Posted by Mark on March 21, 2004 at 09:04 PM .: link :.
Sunday, February 22, 2004
The Eisenhower Ten
The Eisenhower Ten by CONELRAD: An excellent article detailing a rather strange episode in U.S. history. During 1958 and 1959, President Eisenhower issued ten letters, mostly to private citizens, granting them unprecedented power in the event of a "national emergency" (i.e. nuclear war). Naturally, the Kennedy administration was less than thrilled with the existence of these letters, which, strangely enough, did not contain expiration dates.
So who made up this Shadow Government?
...of the nine, two of the positions were filled by Eisenhower cabinet secretaries and another slot was filled by the Chairman of the Board of Governors of the Federal Reserve. The remaining six were very accomplished captains of industry who, as time has proven, could keep a secret to the grave. It should be noted that the sheer impressiveness of the Emergency Administrator roster caused Eisenhower Staff Secretary Gen. Andrew J. Goodpaster (USA, Ret.) to gush, some 46 years later, "that list is absolutely glittering in terms of its quality." In his interview with CONELRAD, the retired general also emphasized how seriously the President took the issue of Continuity of Government: "It was deeply on his mind."

Eisenhower apparently assembled the list himself, and if that is the case, the quality of the list was no doubt "glittering". Eisenhower was a good judge of talent, and one of the astounding things about his command of Allied forces during WWII was that he successfully assembled an integrated military command made up of both British and American officers, and they were actually effective on the battlefield. I don't doubt that he would be able to assemble a group of Emergency Administrators that would fit the job, work well together, and provide the country with a reasonably effective continuity of government in the event of the unthinkable.
Upon learning of these letters, Kennedy's National Security Advisor, McGeorge Bundy, asserted that the "outstanding authority" of the Emergency Administrators should be terminated... but what happened after that is something of a mystery. Some correspondence exists suggesting that several of the Emergency Administrators were indeed relieved of their duties, but there are still questions as to whether Kennedy retained the services of 3 of the Eisenhower Ten and whether he established an emergency administration of his own.
It is Gen. Goodpaster's assertion that because Eisenhower practically wrote the book on Continuity of Government, the practice of having Emergency Administrators waiting in the wings for the Big One was a tradition that continued throughout the Cold War and perhaps even to this day.

On March 1, 2002, the New York Times reported that Bush had indeed set up a "shadow government" in the wake of the 9/11 terror attacks. This news was, of course, greeted with much consternation, and understandably so. Though there may be a historical precedent (even if it is a controversial one) for such a thing, the details of such an open-ended policy are still a bit fuzzy to me...
CONELRAD has done an excellent job collecting, presenting, and analyzing information pertaining to the Eisenhower Ten, and I highly recommend that anyone interested in the issue of continuity of government check it out. Even then, there are still lots of unanswered questions about the practice, but it makes for fascinating reading...
Posted by Mark on February 22, 2004 at 09:31 PM .: link :.
Thursday, February 19, 2004
Welcome to the Hotel Baghdad
Steve Mumford has made his way back to Iraq and posted the seventh installment of his brilliant Baghdad Journal. Once again, he puts the traditional media reporting to shame with his usual balanced and thoughtful views. Read the whole thing, as they say.
For those who are not familiar with Mumford, he is a New York artist who has travelled to Iraq a few times in the past year and published several "journal" entries detailing his exploits. I've been posting his stuff since I found it last fall. Here are all the installments to date:
At Hewar, I meet Qassim, who says he's waiting for some of "your countrymen." He's preparing one of his renowned grilled fish lunches. Soon the guests arrive: it's the Quakers with Bruce Cockburn, who eye me warily. I don't think Qassim realizes how much foreigners tend to avoid one another in their jealous rush to befriend Iraqis. Or maybe he does, and enjoys watching the snubs and one-upmanship. I take my leave, and relax in the teahouse, when the artists Ahmed al Safi and Haider Wadi show up. They seem like old friends now, and I'm happy to see them.

Again, excellent reading. [Thanks must go again to Lexington Green from Chicago Boyz for introducing me to Mumford's writings last fall]
Updates: Several updates have been made, adding links to new columns in the series.
Posted by Mark on February 19, 2004 at 09:51 PM .: link :.
Sunday, February 15, 2004
Deterministic Chaos and the Simulated Universe
After several months of absence, Chris Wenham has returned with a new essay entitled 2 + 2. In it, he explores a common idea:
Many have speculated that you could simulate a working universe inside a computer. Maybe it wouldn't be exactly the same as ours, and maybe it wouldn't even be as complex, either, but it would have matter and energy, and time would elapse so things could happen to them. In fact, tiny little universes are simulated on computers all the time, for both scientific work and for playing games in. Each one obeys simplified laws of physics the programmers have spelled out for them, with some less simplified than others.

As always, the essay is well done and thought-provoking, exploring the idea from several mathematical angles. But it makes the assumption that the universe is both deterministic and infinitely quantifiable. I am certainly no expert on chaos theory, but it seems to me that it bears on this subject.
A system is said to be deterministic if its future states are strictly dependent on current conditions. Historically, it was thought that all processes occurring in the universe were deterministic, and that if we knew enough about the rules governing the behavior of the universe and had accurate measurements of its current state, we could predict what would happen in the future. Naturally, this theory has proven very useful in modeling real-world events such as the path of a flying object or the ebb and flow of the tides, but there have always been systems which were more difficult to predict. Weather, for instance, is notoriously tricky. It was always thought that these difficulties stemmed from an incomplete knowledge of how the system works or from inaccurate measurement techniques.
In his essay, Wenham discusses how a meteorologist named Edward Lorenz stumbled upon the essence of what is referred to as chaos (or nonlinear dynamics, as it is often called):
Lorenz's simulation worked by processing some numbers to get a result, and then processing the result to get the next result, thus predicting the weather two moments of time into the future. Let's call the first result result1, which was fed back into the simulation to get result2. result3 could then be figured out by plugging result2 into the simulation and running it again. The computer was storing each result to six decimal places internally, but only printing them out to three. When it was time to calculate result3 the following day, he re-entered result2, but only to three decimal places, and it was this that led to the discovery of something profound.

This phenomenon is called "sensitive dependence on initial conditions." For the systems in which we could successfully make good predictions (such as the path of a flying object), only a reasonable approximation of the initial state is necessary to make a reasonably accurate prediction. In a system exhibiting sensitive dependence, however, even a reasonable approximation of the initial state fails to yield a reasonable approximation of the future state.
So here comes the important part: for a chaotic system such as weather, in order to make useful long-term predictions, you need measurements of initial conditions with infinite accuracy. What this means is that even a deterministic system, which in theory can be modeled by mathematical equations, can generate behavior which seems random and unpredictable. This manifests itself in nature all the time. Weather is the typical example, but there is also evidence that the human brain is governed by deterministic chaos. Indeed, our brain's ability to generate seemingly unpredictable behavior is an important component of both survival and creativity.
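Lorenz's discovery is easy to reproduce in miniature. Here's a minimal sketch using the logistic map (a standard textbook chaotic system, not Lorenz's actual weather model): two trajectories whose starting points differ by one part in a million track each other for a while, then end up nowhere near each other.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map; the map is chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0: float, steps: int) -> float:
    """Iterate the map `steps` times from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = logistic(x)
    return x

# Initial states differing in the sixth decimal place, like Lorenz's
# truncated printout:
a, b = 0.3, 0.3 + 1e-6
print(trajectory(a, 50), trajectory(b, 50))  # wildly different values
```

The tiny rounding error roughly doubles each iteration, so after a few dozen steps it has swallowed the whole prediction. That is sensitive dependence in a dozen lines.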
So my question is, if it is not possible to quantify the initial conditions of a chaotic system with infinite accuracy, is that system really deterministic? In a sense, yes, even though it is impossible to calculate it:
Michelangelo claimed the statue was already in the block of stone, and he just had to chip away the unnecessary parts. And in a literal sense, an infinite number of universes of all types and states should exist in thin air, indifferent to whether or not we discover the rules that exactly reveal their outcome. Our own universe could even be the numerical result of a mathematical equation that nobody has bothered to sit down and solve yet.

The answer might be there, whether we can calculate it or not, but even if it is, can we really do anything useful with it? In the movie Pi, a mathematician stumbles upon an enigmatic 216-digit number which is supposedly the representation of the infinite, the true name of God, and thus holds the key to deterministic chaos. But it's just a number, and no one really knows what to do with it, not even the mathematician who discovered it (he could make accurate predictions about the stock market with it, though he could not understand why, and it came at a price). In the end, it drove him mad. I don't pretend to have any answers here, but I think the makers of Pi got it right.
Posted by Mark on February 15, 2004 at 02:33 PM .: link :.
Sunday, January 25, 2004
Pynchon : Stephenson :: Apples : Oranges
The publication of Cryptonomicon led to lots of comparisons with Thomas Pynchon's Gravity's Rainbow in reviews. This was mostly based on the rather flimsy convergence of WWII and technology in the two novels. There were also some thematic similarities, but given the breadth of themes in Gravity's Rainbow, that isn't really a surprise. They did not resemble each other stylistically, nor did the narratives really resemble one another. There was, I suppose, a certain amount of playfulness present in both works, but in the end, anyone who read one and then the other would be struck by the contrast.
However, having recently read Stephenson's Quicksilver, I can see more of a resemblance to Pynchon. With Quicksilver, Stephenson displays a great deal more playfulness with style and narrative. He's become more willing to cut loose, explore language, fit the style to the situation he is describing, and even slip out of "novel" format, whether it be the laundry-list compilation style of Royal Society meeting notes (for example, pages 182 - 186), the epistolary exploits of Eliza (pages 636 - 659, among many others), or theater script format (pages 716 - 729). Stephenson isn't quite as spastic as Pynchon, but the similarities between their styles are more than skin deep. In addition to this playfulness in narrative style, Stephenson, like Pynchon, associates certain styles with specific characters (most notably the epistolary style used for Eliza). Again, Stephenson is much less radical than Pynchon, and applies only a fraction of the techniques Pynchon employs, but he has progressed nicely in his recent works.
Most of the time, Stephenson is considerably more prosaic than Pynchon, and even when he does branch out stylistically, it is done in service of the story. The Eliza letters again provide a good example. The epistolary style allows Stephenson to write for a different audience. We know this, and thus Stephenson has a good time messing with us, especially towards the end of the novel, where he takes it a step further and shows Eliza's encrypted letters and journal entries as translated by Bonaventure Rossignol (in the form of a letter to Louis XIV). All of this serves to further the plot. Pynchon, on the other hand, is more concerned with playfully exploring the narrative by experimenting with the English language. The plot takes a secondary role to the style, and to a certain extent the style drives the plot (well, that might be a bit of a stretch), and while Pynchon is one of the few who can pull it off, Stephenson's style doesn't really compare. They're two different things, really.
Nate has a great post on this very subject, and he shows that a comparison of Quicksilver with Pynchon's novel Mason & Dixon is more apt:
The style of Mason & Dixon is a synthesis of old and new that hews remarkably close to the old. Stephenson, on the other hand, writes in a much more modern style, only occasionally dotting his prose with historical flourishes ... The distinction here is an old one; classical rhetoricians spoke of Asiatic versus Attic style - the former is ornate, lush, and detailed, while the latter is lean, clean, and direct. Stephenson is a master of Attic style - a fact that's often obscured because, while his sentences are direct and elegant, their substance is often convoluted and complex. You can see it more clearly in his nonfiction - look at his explanation of the Metaweb for an excellent example. Pynchon, as an Asiatic writer, will elicit more "oohs" and "ahhs" for the power and grace of his prose, but will tend to lose his readers when he's trying to be florid and tackling difficult material at the same time. Obviously, both authors will tend toward the Attic or the Asiatic at different points, but in general, Stephenson wants his language to transparently convey his message, while Pynchon demands a certain amount of attention for the language itself.

I haven't read Mason & Dixon (it's in the queue), but from what I've heard this sounds pretty accurate. Again, he makes the point that Pynchon and Stephenson are on different playing fields, appropriating their styles to serve different purposes... and it shows. Stephenson is a lot more fun to read for someone like me, because I prefer storytelling to experimental narrative fiction.
I recently read Pynchon's The Crying of Lot 49, and was shocked by the clarity of the straightforward and yet still vibrant prose. In that respect, I think Stephenson's work might resemble Crying more than the novels discussed in this post...
Update: As I write this, Pynchon is making his appearance on the Simpsons. Coincidence?
Posted by Mark on January 25, 2004 at 08:19 PM .: link :.
Sunday, January 18, 2004
To the Moon!
President Bush has laid out his vision for space exploration. Reaction has mostly been lukewarm. Naturally, there are opponents and proponents, but in my mind it is a good start. That we've changed focus to include long term manned missions on the Moon and a mission to Mars is a bold enough move for now. What is difficult is that this is a program that will span several decades... and several administrations. There will be competition and distractions. To send someone to Mars on the schedule Bush has set requires a consistent will among the American electorate as well. However, given the technology currently available, it might prove to be a wise move.
A few months ago, in writing about the death of the Galileo probe, I examined the future of manned space flight and drew a historical analogy with the pyramids. I wrote:
Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" has inherent value, well beyond the simple science involved.

We should, and I'm glad we're orienting ourselves in this direction. Bush's plan appeals to me because of its pragmatism. It doesn't seek to simply fly to Mars; it seeks to leverage the Moon first. We've already been to the Moon, but it still holds much value as a destination in itself, as well as a testing ground and possibly even a base from which to launch, or at least support, our Mars mission. Some, however, see the financial side of things as a little too pragmatic:
In its financial aspects, the Bush plan also is pragmatic -- indeed, too much so. The president's proposal would increase NASA's budget very modestly in the near term, pushing more expensive tasks into the future. This approach may avoid an immediate political backlash. But it also limits the prospects for near-term technological progress. Moreover, it gives little assurance that the moon-Mars program will survive the longer haul, amid changing administrations, economic fluctuations, and competition from voracious entitlement programs.

There's that problem of keeping everyone interested and happy in the long run again, but I'm not so sure we should be too worried... yet. Wretchard draws an important distinction: we've laid out a plan to voyage to Mars - not a plan to develop the technology to do so. Efforts will be proceeding on the basis of current technology, but as Wretchard also notes in a different post, current technology may be unsuitable for the task:
Current launch costs are on the order of $8,000/lb, a number that will have to be reduced by a factor of ten for the habitation of the moon, the establishment of Lagrange transfer stations, or flights to Mars to be feasible. This will require technology, and perhaps even basic physics, that does not even exist. Simply building bigger versions of the Saturn V will not work. That would be "like trying to upgrade Columbus's Nina, Pinta, and Santa Maria with wings to speed up the Atlantic crossing time. A jet airliner is not a better sailing ship. It is a different thing entirely." The dream of settling Mars must await an unforeseen development.

Naturally, the unforeseen development is notoriously tricky, and while we must pursue alternate forms of propulsion, it would be unwise to hold off on the voyage until this development occurs. We must strike a delicate balance between concentration on the goal and the means to achieve that goal. As Wretchard notes, this is largely dependent on timing. What is also important here is that we are able to recognize this development when it happens, and that we leave our program agile enough to react to it effectively.
Recognizing this development will prove interesting. At what point does a technology become mature enough to use for something this important? This may be relatively straightforward, but it is possible that we could jump the gun and proceed too early (or, conversely, wait too long). Once recognized, we need to be agile, by which I mean that we must develop the capacity to seamlessly adapt the current program to exploit this new development. This will prove challenging; it will no doubt require a massive increase in funding, as well as a certain amount of institutional agility - moving people and resources to where we need them, when we need them. Once we recognize our opportunity, we must pounce without hesitation.
It is a bold and challenging, yet judiciously pragmatic, vision that Bush has laid out, but this is only the first step. The truly important challenges are still a few years off. What is important is that we recognize and exploit any technological advances on our way to Mars, and we can only do so if we are agile enough to effectively react. Exploration of the frontiers is a part of my country's identity, and it is nice to see us proceeding along these lines again. Like the Egyptians so long ago, this mammoth project may indeed inspire a unity amongst our people. In these troubled times, that would be a welcome development. Though Europe, Japan, and China have also shown interest in such an endeavor, I, along with James Lileks, like the idea of an American being the first man on Mars:
When I think of an American astronaut on Mars, I can't imagine a face for the event. I can tell you who staffed the Apollo program, because they were drawn from a specific stratum of American life. But things have changed. Who knows who we'd send to Mars? Black pilot? White astrophysicist? A navigator whose parents came over from India in 1972? Asian female doctor? If we all saw a bulky person bounce out of the landing craft and plant the flag, we'd see that wide blank mirrored visor. Sex or creed or skin hue - we'd have no idea.

Indeed.
Update 1.21.04: More here.
Posted by Mark on January 18, 2004 at 05:16 PM .: link :.
Tuesday, December 30, 2003
Each will have his personal Rocket
I finally finished my review of Thomas Pynchon's novel Gravity's Rainbow. Since I blogged about the novel often, I figured I'd let everyone know it's out there. Oddly, when writing the review, I wrote the last paragraph first:
If I were to meet Thomas Pynchon tomorrow, I wouldn't know whether to shake his hand or sucker-punch him. Probably both. I'd extend my right arm, take his hand in mine, give one good pump, then yank him towards my swinging left fist. As he lay crumpled on the ground beneath me, gasping in pain, I'd point a bony finger right between his eyes and say "That was for Gravity's Rainbow." I think he'd understand.

Heh. I also wrote up a rather lengthy selection of quotes from the novel, with some added commentary. And in case you missed the previous bloggery about Gravity's Rainbow, here they are, in all their glory:
Posted by Mark on December 30, 2003 at 09:47 PM .: link :.
Sunday, December 14, 2003
Ladies and gentlemen, we got him
U.S. forces have captured Saddam Hussein. This is exceptional news! And it figures that I had just commented on how intelligence successes are transparent, that we never see them. D'oh! This is a major intelligence victory. We developed an intelligence infrastructure that allowed us to find Hussein, who had buried himself in a hole in a family member's cellar. We captured him with shovels. This will most likely lead to an intelligence windfall, as already captured Iraqi officials who may have been biting their tongues for fear of Saddam may start talking... (not to mention Saddam himself)
The circumstances of the arrest are about as good as we could ever hope:
A lot will depend on how things go from here. The impending trial and how it is executed will be very important. We will also need to make sure Saddam doesn't kill himself or get killed (a la Goering or Oswald). If he turns up dead, we'll lose out on a lot.
Lots of others are commenting on this, so here goes:
Update: I've been updating the link list like crazy...
Update: Dean Esmay steals my picture! Hee hee. He's got more good stuff as well..
Update 12.15.03: And I thought yesterday represented information overload. Tons of new stuff appearing today, much of it excellent, and a lot of it having to do with the challenge of what to do with Hussein...
Posted by Mark on December 14, 2003 at 11:52 AM .: link :.
Wednesday, December 03, 2003
Is the Christmas Tree Christian?
The Winter Solstice occurs when your hemisphere is leaning farthest away from the sun (because of the tilted axis of the earth's rotation), and thus this is the time of the year when daylight is the shortest and the sun has its lowest arc in the sky.
No one is really sure when exactly it happened (or who started the idea), but this period of time eventually took on an obvious symbolic meaning to human beings. Many geographically diverse cultures throughout history have recognized the winter solstice as a turning point, a return of the sun. Solstice celebrations and ceremonies were common, sometimes performed out of a fear that the failing light of the sun would never return unless humans demonstrated their worth through celebration or vigil.
It has been claimed that the Mesopotamians were among the first to celebrate the winter solstice with a 12 day festival of renewal, designed to help the god Marduk tame the monsters of chaos for one more year. Other theories go as far back as 10,000 years. More recently, the Romans celebrated the winter solstice with a festival called Saturnalia in honor of Saturn, the god of agriculture.
Integral to many of these celebrations were plants and trees that remained green all year. Evergreens reminded them of all the green plants that would grow again when the sun returned; they symbolized the solstice and the triumph of life over death.
In the early days of Christianity, the birth of Christ was not celebrated (instead, Easter was, and possibly still is, the main holiday of Christianity). In the fourth century, the Church decided to make the birth of Christ a holiday to be celebrated. There was only one problem - the Bible makes no mention of when Christ was born. Although there was some evidence to draw from, the Church chose to celebrate Christmas on December 25. It is believed that this date was chosen to coincide with traditional winter solstice festivals, such as the Roman pagan Saturnalia festival, in the hopes that Christmas would be more popularly embraced by the people of the world. And embraced it was, but the Church found that as the holiday spread, its choice to hold Christmas at the same time as solstice celebrations did not allow it to dictate how the holiday was celebrated. And so many of the pagan traditions of the solstice survived during the next millennia, even though pagan religions had largely given way to Christianity.
And so the importance of evergreens in these celebrations continued. The use of the Christmas tree, as we now know it, is generally credited to sixteenth-century Germans, specifically the Protestant reformer Martin Luther, who is thought to be the first to add lighted candles to a tree.
While the Germans found a certain significance in the pagan traditions concerning evergreens, it was not a universally held belief. For instance, the Christmas tree did not gain traction in America until the mid-nineteenth century. Up until then, Christmas trees were generally seen as pagan symbols and mocked by New England Puritans. But the practice caught on thanks to German settlers in Pennsylvania (among others) and the increasing secularization of the holiday in America. In the past century, the Christmas tree has grown in popularity, as more and more people adopted the tradition of displaying a decorated evergreen in their home. After all this time, Christmas trees have become an American tradition.
There has been a lot of controversy lately concerning the presence (or, I suppose, the removal and thus absence) of Christmas trees in schools. Personally, I don't see what is so controversial about it, as a Christmas tree is more of a secular, rather than religious, symbol. Joshua Claybourn quotes the Supreme Court thusly:
"The Christmas tree, unlike the menorah, is not itself a religious symbol. Although Christmas trees once carried religious connotations, today they typify the secular celebration of Christmas." Allegheny v. American Civil Liberties Union Greater Pittsburgh Chapter, 492 U.S. 573, 109 S.Ct. 3086.

It does not represent a religious idea, but rather the idea of renewal that accompanied the winter solstice. One can associate Christian ideas with the tree, as Martin Luther did so long ago, but that does not make it inherently Christian. Indeed, I think of the entire Christmas holiday as more secular than not, though I guess my being Christian might have something to do with it. This idea is worth exploring further, so expect more posts on the historical Christmas.
Update: Patrick Belton notes the strange correlations between Christmas Trees and Prostitution in Virginia.
Posted by Mark on December 03, 2003 at 11:31 PM .: link :.
Wednesday, November 12, 2003
The Iraqi Art Scene
Steve Mumford's latest Baghdad Journal is up, and it is, as usual, excellent. In it, he actually focuses on the burgeoning Iraqi art scene (How dare he? I've become so accustomed to his other observations that I was somewhat surprised to see him talking about art. Then I remembered that he is an artist and that his articles are published in an internet art magazine. Duh.) Instead of showcasing Mumford's art, as previous installments have done, this article exhibits the works of various Iraqi artists that Mumford was impressed with (and for good reason, at least according to my unrefined eyes). The artistic community is growing in Iraq, in no small part due to the newfound access they have to information from around the world...
Of the younger generation, Ahmed Al-Safi is a particularly talented painter and sculptor who's managed to make a living selling his art. He paints simple, almost crudely rendered figures reminiscent of the German Neo-Expressionists of the 1980s (whose work he immediately investigated on the web when I told him about them). Ahmed has a wonderful studio in the slummy but picturesque part of town near Tarea Square, where he has bronze-casting facilities.

Emphasis mine. Change is coming to the Iraqi art scene, and while they are now soaking up that which is newly available to them, I find myself eager to see what the Iraqis contribute back to the world art scene...
One widely repeated observation here is that abstraction was a convenient technique for a time when all narrative content was suspect. Everyone expects art to change with the passing of Saddam's regime, though at this point, no one I talked to is making any predictions about future trends in Iraqi art. I've seen no video art and practically no photography in Baghdad. Installation art is unknown. Indeed, few artists in Iraq have even heard of Andy Warhol. Now that communication with the rest of the world is starting to open up, Iraqi artists will discover just how large an ocean they're swimming in.

I'm not an artist, but I know what I like, and if the art that Mumford posted is any indication, I hope and believe we'll find that the Iraqis will be strong swimmers in the large ocean of art. More on this subject later...
Update: I just thought I'd pick one of my favorite paintings to display here...
oil on canvas
Mumford describes Muayad Muhsin as "a younger surrealist painter from Hilla" and I like this painting a lot. I don't know art, but I have some general knowledge of the visual medium from film, and while it may be foolish to apply film theory to art, I think it might provide some insight. The cool colors suggest an aloof tranquility, a calmness, but the oblique angle produces a sense of visual irresolution and unresolved anxiety. It suggests tension, transition, and impending change. The end result is a feeling of calm but tense, unstable transition. It seems appropriate...
Posted by Mark on November 12, 2003 at 12:42 AM .: link :.
Sunday, November 02, 2003
Halloween has passed* but since horror is one of my favorite genres, I figured I'd list out some good examples of horror books & movies, because it's always fun to scare yourself witless. When it comes to film, horror is one of the more difficult genres to execute effectively and, as such, the genuinely great horror films are few and far between. What's left is a series of downright creepy, but flawed, films. Because of their flaws, many horror films are often overlooked and underrated, and these are the films I'd like to mention here. Books, on the other hand, tend to be overlooked and underrated as a medium. Horror books doubly so.
I've never been a fan of the classic 1930s horror films like The Mummy, Dracula, or Frankenstein... They're not without their charm, but when it comes to the classics, I prefer their source material to the films. For classics, I would mention Halloween (1978, it started the lackluster "slasher" sub-genre, but it is an excellent film, particularly its soundtrack), Jaws (1975, another excellent soundtrack here, but there was plenty else that made people afraid to go back into the water again...), Psycho (1960, the sudden shifts and feints coupled with, again, a distinctive and effective soundtrack, make for a brutally effective film), Alien (1979, "In space, no one can hear you scream." Director Ridley Scott really knew how to turn the screws with this one), The Exorcist (1973, The power of Christ compels you... to wet yourself in despair whilst watching this film) and The Shining (1980, Kubrick's interpretation of King's masterwork is significantly different, but it is also one of the few examples of an adaptation that works well in its own right).
But those are all films we know and love. What about the ones we haven't seen? Director John Carpenter built an impressive string of neglected horror films throughout the 1980s and early 1990s (a pity that he has since lost his touch). Aside from the classic Halloween, Carpenter directed the 1982 remake of The Thing, which was brilliantly updated and downright creepy. It has its fill of scary moments, not the least of which is the cryptic and ambiguous ending. He followed that with Christine. Adapted from the novel by Stephen King, Carpenter was able to make a silly story creepy with the sheer will of his technical mastery (not his best, but impressive nonetheless). His 1987 film Prince of Darkness was flawed but undeniably effective. Many have not heard of In the Mouth of Madness, but it has become one of my favorite horror films of the 1990s.
If you're not scared away by subtitles or foreign films, check out Dario Argento's seminal 1977 gorefest Suspiria, which boasts opening and ending scenes amongst the best in the genre. Argento's rival, Lucio Fulci, also has an impressive series of gory horror classics, such as the 1980 film The Gates of Hell. Both Argento and Fulci have an impressive body of work and are worth checking out if you don't mind them being in Italian...
The 1970's and early 1980's were an excellent period in horror filmmaking. Excluding the films already mentioned (a significant portion of the classics are from the 1970s), you may want to check out the 1980 movie The Changeling, an excellent ghost story, or perhaps the disturbing 1981 film The Incubus. And how could I write about horror movies without mentioning my beloved 1979 cheesy creepfest Phantasm. Other 70s flicks to check out: The Hills Have Eyes (1977), Dawn of the Dead (1978), Salem's Lot (a 1979 TV miniseries based on Stephen King's book), The Omen (1976), Carrie (1976), Blue Sunshine (1976, almost forgotten today), The Wicker Man (1973), The Legend of Hell House (1973, a personal favorite, adapted from a novel by Richard Matheson, who we'll get to in a moment), and of course we can't forget that lovable flesh-wearing cannibal, Leatherface, in The Texas Chainsaw Massacre (1974).
Ok, so I think I've inundated you with enough movies, hopefully many of which you've never heard of, for now so let's move on to books (naturally, I could go on and on and on just listing out good horror flicks, but this is at least a good start).
My knowledge of horror literature is less extensive than horror film, but I have a fair base to work from. We all know the classics, Dracula, Frankenstein, and the works of Edgar Allan Poe, but there are many overlooked horror stories floating around as well.
M.R. James (1862-1936) is one of the originators of the modern ghost story, and his oeuvre contains several exemplary specimens of the sub-genre. His works are in the public domain, so follow the link above for online versions... I especially enjoyed the creepy Count Magnus.
Shirley Jackson's The Haunting of Hill House is a classic that is rightly praised as one of the finest horror novels ever written.
Richard Matheson's brilliant I Am Legend is a study of isolation and grim irony that turns the traditional vampire story on its head. This might be one of the most influential novels you've never heard of, as there have been many derivatives, particularly in film.
H.P. Lovecraft is another fantastic short story author whose work has been tremendously influential to modern horror. His infamous Cthulhu Mythos and Necronomicon were ingenious creations, and many have seized on them and attempted to follow in his footsteps. Indeed, many even believe his fictional Necronomicon to be real!
You might have noticed Stephen King's name mentioned a few times already, and there is a reason so many of his books are turned into movies. I've never been a huge King fan, but The Shining is among the best horror novels I've read. I've always preferred Dean Koontz (sadly he has absolutely no good film adaptations), who wrote such notable horror staples as Phantoms, Midnight, and The Servants of Twilight. Both Koontz and King can be hit-or-miss, but when they're on, there's no one better.
Other books of note: Clive Barker's The Hellbound Heart (which was adapted into the 1987 film Hellraiser) is an excellent short read (about 120 pages), and some of his longer works, such as The Great and Secret Show and Imajica, are also good. F. Paul Wilson's The Keep is one of the few books that has ever truly scared me while reading it. I've always found William Peter Blatty's novel, The Exorcist, to be more effective than the movie (and that is saying a lot!). Brian Lumley's Necroscope series is an interesting take on the vampire legend, and his Titus Crow series builds on Lovecraft's Cthulhu Mythos nicely.
Well, there you have it. That should keep you busy for the next few years...
* One would think that this post should have been made last week, and one would be right, but then one would also not be too familiar with how we do things here at Kaedrin. Note that the best movies of 2001 list is due sometime around mid-2004. Heh. This whole being-timely-with-content thing is something I have always had difficulty with and need to work on, but that is another topic for another post...
Posted by Mark on November 02, 2003 at 07:51 PM .: link :.
Monday, October 20, 2003
Hindsight isn't Necessarily 20/20
It is conventional wisdom that hindsight is 20/20, but is that really accurate? I get the feeling that when people speak of clarity in hindsight, what they are really talking about is creeping determinism. They aren't really examining the varied and complex details of a scenario so much as they are rationalizing an outcome perceived to have been inevitable (since it has already happened, surely it must have been obvious). This is known in logic as "begging the question" or "circular logic."
In the creeping determinism sense, hindsight is liberally filtered to the point where only evidence that leads to the scenario's conclusion is seen. All other evidence is dismissed as inaccurate or irrelevant.
Which leads me to an excellent article by Adam Garfinkle called Foreign Policy Immaculately Conceived. In it, he argues:
The immaculate conception theory of U.S. foreign policy operates from three central premises. The first is that foreign policy decisions always involve one and only one major interest or principle at a time. The second is that it is always possible to know the direct and peripheral impact of crisis-driven decisions several months or years into the future. The third is that U.S. foreign policy decisions are always taken with all principals in agreement and are implemented down the line as those principals intend - in short, they are logically coherent.

When these premises are laid out in such a way, one can't help but see them for what they really are. And yet so much of what passes for commentary these days is based wholly upon this immaculate conception theory of U.S. foreign policy.
Case in point, the American liberation/occupation of Iraq is often portrayed as a failure. They say that we are not "winning the hearts and minds" of the Iraqis, or that we have "gone into the God business" and that "we want the Iraqis to love us for destroying their orchards too." (Never mind that this is emphatically not what we're doing, but I digress) These people are engaging in creeping determinism before the situation has even played out! They've started with a conclusion, that we have failed in Iraq, and they then collect any and all negative aspects of the occupation and proclaim this outcome inevitable (some perhaps hoping for a form of self-fulfilling prophecy).
But even this is hardly new. Jessica's Well points to a pair of magnificent historical examples. Do you remember that other time when we were mired in a quagmire, failing to win the hearts and minds of our occupied foes? The one in Europe, circa 1946? Yes, you know, the one that resulted in Europe's longest unbroken peaceful period since Charlemagne? These articles are amazingly familiar. Replace "Hitler" with "Saddam", "Nazis" with "Baathists", and "Germany" with "Iraq" and you'll see what I mean.
Naturally, since the overwhelmingly positive results of the US military occupation of Europe are generally acknowledged, these articles are pushed by the wayside, dismissed as irrelevant and forgotten forever (or until an intrepid blogger takes the initiative to post it). Success in Europe was by no means inevitable, both during and after the war, and in a certain respect, these articles are a great example of creeping determinism or Garfinkle's immaculate conception theory of U.S. foreign policy.
They're also an example of just how shortsighted pessimistic reporting on a lengthy process can be. As Garfinkle notes:
American presidents, who have to make the truly big decisions of U.S. foreign policy, must come to a judgment with incomplete information, often under stress and merciless time constraints, and frequently with their closest advisors painting one another in shades of disagreement. The choices are never between obviously good and obviously bad, but between greater and lesser sets of risks, greater and lesser prospects of danger. Banal as it sounds, we do well to remind ourselves from time to time that things really are not so simple, even when one's basic principles are clear and correct.

Indeed. Hindsight isn't necessarily 20/20, but it always purports to be.
Update 10.21.03 - I don't remember where I found this, but I had bookmarked it: That Was Then: Allen W. Dulles on the Occupation of Germany provides some more perspective on post-war Germany. He outlined many of the difficulties they faced and lamented, despite his obvious respect for those in charge, that "the problems inherent in the situation are almost too much for us." It's an excellent piece, so read the whole thing, as they say...
Posted by Mark on October 20, 2003 at 08:58 PM .: link :.
Wednesday, October 15, 2003
Style as Substance
Kill Bill: Volume 1 is one of those movies that I've been keeping track of for years. From the beginning, I wondered why Tarantino was choosing such material for his next film. The plot certainly isn't edgy. Uma Thurman plays The Bride, a woman who miraculously survives a bullet to the head on her wedding day (the groom was not so lucky). After an extended stay in a coma, she awakes and makes a list of five people to exact revenge upon. Then she goes and kills them. That's the plot.
And yet it's still a good film (not a great film, but good). The plot doesn't matter. Nor, really, do the characters. None of them are developed, or really likable. You root for the Bride, a textbook anti-hero, not because she's been wronged and is seeking revenge, but because she's such a badass. It is the style of the film that gets me, and like it or not, Tarantino is a master of style. The man knows how to manipulate the audience, and he is brutally unmerciful in this outing.
Let me rewind a bit. Do you remember the scene in Pulp Fiction where Vincent blows Marvin's head off by accident? Somehow, Tarantino is able to make that scene, and the ensuing events, funny. Not ha-ha funny, it's still black comedy, but funny nonetheless. You don't really know why you are laughing, but you are. And that is what this movie is like. It's like two hours of that one scene in Pulp Fiction.
Blood. Hundreds of gallons of it. Spraying, shooting, fountains of blood. The grisly murder rate in this film approaches triple digits. It's not for everyone. James Lileks says he had "no desire to see clever violence," and that is certainly understandable. These scenes are cold, merciless, and often disgusting, yet I found myself laughing. It's just a natural reaction when you see someone's head cut off and blood sprays out like a sprinkler. The gore is so over the top that it eventually ceases to be disgusting and takes on a blurry, surreal quality. Tarantino knows this works, but he's not content to leave it there.
This isn't an easy movie. It's not the roller coaster kung-fu action flick it's advertised as. It's difficult. Why? Because in those moments where the gore goes beyond the surreal, you still sense gravity in the violence. Tarantino grounds the violence just enough so that you laugh when it happens, but you're hit by an aftertaste of guilt a few seconds later. The blood may be completely over the top, but other details are what got me. The gurgling, the spasms, the screams. These things creeped the hell out of me. And on top of that, towards the end of the film, Tarantino keeps the film rocketing along at such a pace that your conscience can't keep up with the violence, and you know it. That is, I suppose, the essence of black comedy. It's not easy and it's not fun, but it makes you laugh anyway.
It is difficult to say, though. It's not as obvious as I'm describing. The black comedy is more subtle than you might think from reading this, so take it with a grain of salt.
Walter sums it up perfectly:
I think Tarantino wanted a 180 from Pulp Fiction's tone. I think he feinted high and then socked us in the gut. And it worked. Bold as hell, and he pulled it off. Now I'm sick to my stomach, but I respect the bastard.

I don't like this movie the way I like Tarantino's other work. I like it like I like Taxi Driver or Requiem for a Dream, which is to say, I don't like it, but it is so well done that I can't stop myself from watching it. The filmmakers, damn them, are so good at manipulating the elements of cinema that I'm spellbound even as I'm whimpering.
Kill Bill doesn't have the weight of Taxi Driver or Requiem, and it's a flawed film, but it has its moments of brilliance too. There is a lot more to say about it, but I am at a loss to say more. It is difficult to describe because what's important about this film isn't what happens, it's how it happens. It's style as substance, and Tarantino makes it work. Damn him.
Posted by Mark on October 15, 2003 at 08:29 PM .: link :.
Wednesday, September 24, 2003
I stopped by the bookstore tonight to pick up Quicksilver and while I was there, I happened upon the new edition of George Orwell's Nineteen Eighty-Four. This new edition contains a foreword by none other than Thomas Pynchon, vaunted author and recluse whose similarly prophetic novel, Gravity's Rainbow, has been giving me headaches for the past year or so... Pynchon was a good choice; he's able to place Orwell's novel, including its conception and composition, in its proper cultural and historical context while at the same time applying the humanistic themes of the novel to current times (without, I might add, succumbing to the temptation to list out what Orwell did or didn't "get right" - indeed, Pynchon even takes a humorous swipe at the tendency to do so - "Orwellian, dude!"). And to top that off, I'm a sucker for his style - whatever one he might be employing at the time (this time around it's his nonfiction style, with an alternating elegance and brazenness that works so well).
It's interesting reading, though I don't agree with everything he says. Towards the beginning of the foreword, he mentions this bit:
Now, those of fascistic disposition - or merely those among us who remain all too ready to justify any government action, whether right or wrong - will immediately point out that this is prewar thinking, and that the moment enemy bombs begin to fall on one's homeland, altering the landscape and producing casualties among friends and neighbours, all this sort of thing, really, becomes irrelevant, if not indeed subversive. With the homeland in danger, strong leadership and effective measures become of the essence, and if you want to call that fascism, very well, call it whatever you please, no one is likely to be listening, unless it's for the air raids to be over and the all clear to sound. But the unseemliness of an argument - let alone a prophecy - in the heat of some later emergency, does not necessarily make it wrong. One could certainly argue that Churchill's war cabinet had behaved on occasion no differently from a fascist regime, censoring news, controlling wages and prices, restricting travel, subordinating civil liberties to self-defined wartime necessity.

Though he doesn't clearly come out and say it, and he is careful even with his historical example, Pynchon clearly fears for America's future in the wake of the "war on terror" and sees Orwell's work not only as a commentary on the perils of communism, but as a warning to democracy. As a general point, I can see that, but you could read Pynchon as believing that Orwell's point equally applies to the policies of, say, the current administration, which I think is a bit of a stretch. For one thing, our system of limited governance already has mechanisms for self-examination and public debate, not to mention checks and balances between certain key elements of the government. For another, our primary enemies now are no longer the forces of progress.
As Pynchon himself notes, Orwell failed to see religious fundamentalism as a threat, and today it is the main enemy we face. It isn't the progress of science and technology that threatens us (at least not in the way expected), but rather a reversion to fundamentalist religion, and Pynchon seems hesitant to see that. He tends to be obsessed with the mechanics of paranoia and conspiracy when it comes to technology, as exemplified by his attitude towards the internet:
...the internet, a development that promises social control on a scale those quaint old 20th-century tyrants with their goofy moustaches could only dream about.

As erich notes, perhaps someone should introduce Pynchon to the hacker subculture, where anarchists deface government and corporate websites, bored kids bring corporate websites to their knees with viruses or DDOS attacks, and bloggers aggregate and debate. Or perhaps our problem will be that with an increase in informational transparency, "Orwellian" scrutiny will to some extent become democratized; abuse of privacy will no longer be limited to corporations and states. As William Gibson notes:
"1984" remains one of the quickest and most succinct routes to the core realities of 1948. If you wish to know an era, study its most lucid nightmares. In the mirrors of our darkest fears, much will be revealed. But don't mistake those mirrors for road maps to the future, or even to the present.

Stranger problems indeed. But Pynchon isn't all frowns; he actually ends on a note of hope regarding the appendix, which provides an explanation of Newspeak:
why end a novel as passionate, violent and dark as this one with what appears to be a scholarly appendix?

Overall, Pynchon's essay is excellent and thought-provoking, if a little paranoid. He tackles more than I have commented on, and he does so in affable style. A commenter at erich's site concludes:
Orwell, to his everlasting credit, saw clearly the threat posed by communism, and spoke out forcefully against it. Unfortunately, as Pynchon's new introduction reminds us, the same cannot be said for far too many on the Left, who remain incapable of making rational distinctions between our constitutional republic and the slavery over which we won a great triumph in the last century.

Indeed.
Update - Most of the text of Pynchon's essay can be found here.
Another Update - Rodney Welch notices that Pynchon's theory regarding the appendix appears to have been lifted by Guardian columnist Margaret Atwood. Dave Kipen comments that it's possible that both are paraphrasing an old idea, but he doubts it. Any Orwellians care to shed some light on the originality of the "happy ending" theory?
Another Update: More here.
Posted by Mark on September 24, 2003 at 12:40 AM .: link :.
Monday, September 08, 2003
My God! It's full of stars!
What Galileo Saw by Michael Benson : A great New Yorker article on the remarkable success of the Galileo probe. James Grimmelmann provides some fantastic commentary:
Launched fifteen years ago with technology that was a decade out of date at the time, Galileo discovered the first extraterrestrial ocean, holds the record for most flybys of planets and moons, pointed out a dual star system, and told us about nine more moons of Jupiter.

And the brilliance doesn't end there:
As if that wasn't enough hacker brilliance, design changes in the wake of the Challenger explosion completely ruled out the original idea of just sending Galileo out to Mars and slingshotting towards Jupiter. Instead, two Ed Harris characters at NASA figured out a triple bank shot -- a Venus flyby, followed by two Earth flybys two years apart -- to get it out to Jupiter. NASA has come in for an awful lot of criticism lately, but there are still some things they do amazingly well.

Score another one for NASA (while you're at it, give Grimmelmann a few points for the Ed Harris reference). Who says NASA can't do anything right anymore? Grimmelmann observes:
The Galileo story points out, I think, that the problem is not that NASA is messed-up, but that manned space flight is messed-up.

Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but they're interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things which are "difficult verging on insane" has inherent value, well beyond the simple science involved.
Such projects are not without their historical equivalents. There are all sorts of theories explaining why the ancient Egyptian pyramids were built, but none are as persuasive as the idea that they were built to unify Egypt's people and cultures. At the time, almost everything was being done on a local scale. With the possible exception of various irrigation efforts that linked together several small towns, there existed no project that would encompass the whole of Egypt. Yes, an insane amount of resources were expended, but the product was truly awe-inspiring, and still is today.
Those who built the pyramids were not slaves, as is commonly thought. They were mostly farmers from the tribes along the River Nile. They depended on the yearly cycle of flooding of the Nile to enrich their fields, and during the months that their fields were flooded, they were employed to build pyramids and temples. Why would a common farmer give his time and labor to pyramid construction? There were religious reasons, of course, and patriotic reasons as well... but there was something more. Building the pyramids created a certain sense of pride and community that had not existed before. Markings on pyramid casing stones describe those who built the pyramids. Tally marks and names of "gangs" (groups of workers) indicate a sense of pride in their workmanship and respect between workers. The camaraderie that resulted from working together on such a monumental project united tribes that once fought each other. Furthermore, the building of such an immense structure implied an intense concentration of people in a single area. This drove a need for large-scale food storage, among other social constructs. The Egyptian society that emerged from the Pyramid Age was much different from the one that preceded it (some claim that this was the emergence of the state as we now know it).
"What mattered was not the pyramid - it was the construction of the pyramid." If the pyramid was a machine for social progress, so too can the Space program be a catalyst for our own society.
Much like the pyramids, space travel is a testament to what the human race is capable of. Sure it allows us to do research we couldn't normally do, and we can launch satellites and space-based telescopes from the shuttle (much like pyramid workers were motivated by religion and a sense of duty to their Pharaoh), but the space program also serves to do much more. Look at the Columbia crew - men, women, white, black, Indian, Israeli - working together in a courageous endeavor, doing research for the benefit of mankind, traveling somewhere where few humans have been. It brings people together in a way few endeavors can, and it inspires the young and old alike. Human beings have always dared to "boldly go where no man has gone before." Where would we be without the courageous exploration of the past five hundred years? We should continue to celebrate this most noble of human spirits, should we not?
In the meantime, Galileo is nearing its end. On September 21st, around 3 p.m. EST, Galileo will be vaporized as it plummets into Jupiter's atmosphere, sending back whatever data it still can. This fiery finale is exactly what was planned for Galileo; the answer to an intriguing ethical dilemma.
In 1996, Galileo conducted the first of eight close flybys of Europa, producing breathtaking pictures of its surface, which suggested that the moon has an immense ocean hidden beneath its frozen crust. These images have led to vociferous scientific debate about the prospects for life there; as a result, NASA officials decided that it was necessary to avoid the possibility of seeding Europa with alien life-forms.

I had never really given thought to the idea that one of our space probes could "infect" another planet with our "alien" life-forms, though it does make perfect sense. Reaction to the decision among those who worked on Galileo is mixed, most recognizing the rationale, but not wanting to let go anyway (understandable, I guess)...
For more on the pyramids, check out this paper by Marcell Graeff. The information he referenced that I used in this article came primarily from Kurt Mendelssohn's book The Riddle of the Pyramids.
Update 9.25.03 - Steven Den Beste has posted an excellent piece on the Galileo mission and more...
Posted by Mark on September 08, 2003 at 11:06 PM .: link :.
Sunday, August 10, 2003
The King Lives!
Cult films are (generally) commercially unsuccessful movies that have limited appeal, but nevertheless attract a fiercely loyal following among fans over time. They often exhibit very strange characters, surreal settings, bizarre plotting, dark humor, and otherwise quirky and eccentric characteristics. These obscure films often cross genres (horror, sci-fi, fantasy, etc...) and are highly stylized, straying from conventional filmmaking techniques. Many are made by fiercely independent maverick filmmakers with a very low budget (read: cheesy), often showcasing the performance of talented newcomers.
Almost by definition, they're not popular at the time of their release, usually because they exist outside the box, eschewing typical narrative styles and other technical conventions. They achieve cult-film status later, developing a loyal fanbase over time, often through word-of-mouth recommendations (and, as we'll see, the actions of fans themselves). They elicit an eerie passion among their fans, who enthusiastically champion the films, leading to repeated public viewings (midnight movie showings are particularly prevalent in cult films), fan clubs, and active audience participation (i.e. dressing up as the oddball characters, mercilessly MST3King a film, or uh, jumping around in front of a camera with a broomstick). Cult movie followers often get together and argue over the mundane details and varied merits of their favorite films.
While these films are not broadly appealing, they are tremendously popular among certain narrow groups such as college students or independent film lovers. The internet has been immensely enabling in these respects, allowing movie geeks to locate one another and participate in the aforementioned laborious debates and arguments among other interactive fun.
One of the first examples of a cult movie is Tod Browning's 1932 film, Freaks, which was deliberately made to be "the strangest...most startling human story ever screened," and featured real-life freaks as circus performers. Perhaps the most infamous cult film is The Rocky Horror Picture Show, a 1975 film which inspired a craze of interactive, midnight movie screenings where members of the audience dress up as any of the garish and trashy characters and sing along with the music.
Sometimes a cult film will break out of its small fanbase and hit the mainstream. Frank Capra's classic It's a Wonderful Life didn't become popular until many years after its initial release. Repeated television showings during the Christmas season, however, have become a holiday tradition.
Stanley Kubrick's A Clockwork Orange and Dr. Strangelove Or: How I Stopped Worrying and Learned to Love the Bomb, Ridley Scott's Blade Runner, and Frances Ford Coppola's Apocalypse Now are all considered to be classics of modern cinema today, yet were all largely ignored by audiences at the time of their release.
Most cult films don't fare that well, though I can't say that bothers anyone. Their unpopularity is generally considered to be a part of their charm. They're strange beasts, these cult films, and their appeal is hard to pin down. They're often very flawed films in one way or another, yet they strike a passionate chord with specific audiences, and their flaws, strangely, become endearing to their fans. Outsiders just don't "get it".
This doesn't just apply to movies either. Many authors don't become popular until after their deaths (Kafka, Lovecraft) and many works are initially shunned, but eventually pick up that devoted cult following through word of mouth and interactive fun and games. The Lord of the Rings was hardly a mainstream success when it was published, but a small and extremely devoted fanbase grew, and it wasn't too long until people were creating role-playing games like Dungeons & Dragons based in part on Tolkien's enormously imaginative universe. D&D itself garnered a cult following of its own, as has role-playing in its own right. Lord of the Rings is now immensely popular, and its stunningly brilliant movie adaptations by cult filmmaker Peter Jackson (known for his disgusting work in Bad Taste, Meet the Feebles, and Dead Alive, among others) have met with both popular and critical success.
One of my favorite cult films is the cheesy 1979 horror flick, Phantasm. Several years ago, as I first began to explore internet communities, I realized that I needed a "handle," as it was called. I was watching said horror flick almost every day at the time, so I chose tallman as my handle, despite the fact that I do not resemble the nefarious Tall Man present in the Phantasm films (and that, uh, I'm not tall). It is inexplicably one of my favorite films of all time, and it is a dreadful movie. The effects are awful, the acting is often laughable, and the plot is incoherent at times (especially the ending). But I still love the film; I cherish the creepy, surreal atmosphere, and to this day the Tall Man haunts my dreams (nightmares, actually). The bad effects and acting make me laugh, but there are some genuinely brilliant moments in the film, and the unreality of the ending actually serves to heighten the tension, providing an eerie ambiguity that lasts long after viewing. The score is especially haunting, and the mortuary sets, when combined with director (and producer, and writer, and cinematographer, and editor, and did I mention that cult filmmakers are often fiercely independent?) Don Coscarelli's talented visual style, are stunningly effective.
Like many cult films, it has become a cinematically important film, sparking the rise of surreality in many horror films from the 1980's (most notably A Nightmare on Elm Street, which lifted the ending almost verbatim).
Another favorite cult hit is Sam "For Love of the Game" Raimi's (er, I guess that should be Sam "Spiderman" Raimi's) Evil Dead films, featuring the coolest B-Movie actor ever, Bruce Campbell. Raimi's inventive camera-work and Campbell's gloriously over-the-top performance make these films a joy to watch.
The reason I started this post, which has gotten completely out of hand as I've laboriously digressed into the nature of cult filmmaking (sorry 'bout that), was because of a new film, destined for cult success, in which Phantasm director Don Coscarelli and Evil Dead actor Bruce Campbell join forces.
The new film is called Bubba Ho-Tep, and it looks like a doozy. Based on a short story by cult author Joe R. Lansdale, it tells the "true" story of what became of Elvis Presley (he didn't die on a toilet) and JFK (he didn't die in Dallas). Oh, did I mention that JFK is now black (THEY dyed him that color; the conspiracy theorists should love that)? We find this unlikely duo in an East Texas rest home which has become the target of an evil Egyptian entity ("Some sorta... Bubba Ho-Tep," as Campbell's Elvis opines). Naturally, the two old coots aren't going to just let Bubba Ho-Tep run hog-wild through their peaceful nursing home, and so they rush forward on their walkers and their wheelchairs to save the day. It's got that mix of the absurd that just screams cult film.
The trailer is great, and it features some of those trademark Coscarelli visuals (which I never realized he had before, but he does; it's tempting to throw out the term auteur, but I'm way too subjective when it comes to Coscarelli), music that sounds suspiciously like the Phantasm theme, and Campbell's typically cheeky delivery (including Elvis-fu, complete with cheesy sound effects). I can't wait to see this film. Alas, it doesn't look like it's coming to Philly very soon, but I'm hoping it will eventually make its way over here so that I can partake of it in all its B-Movie glory. The King lives!
Posted by Mark on August 10, 2003 at 11:08 AM .: link :.
Friday, August 08, 2003
A few weeks ago, the regular weather guy on the radio was sick and a British meteorologist filled in. And damned if I didn't think it was the best weather forecast I'd ever heard! The report, which called for rain on a weekend in which I was traveling, turned out to be completely inaccurate, much to my surprise. I really shouldn't have been surprised, though. I know full well the limitations of meteorology, and weather reports can't be that accurate. Truth be told, I subconsciously placed a higher value on the weather report because it was delivered in a British accent. It's not his fault; he can predict the weather no better than anyone else in the world, but the British accent carries with it an intellectual stereotype; when I hear one, I automatically associate it with intelligence.
Which brings me to John Patterson's recent article in the Guardian in which he laments the inevitable placement of British characters and actors in the villainous roles (while all the cheeky Yanks get the heroic roles):
Meanwhile, in Hollywood and London, the movie version of the special relationship has long played itself out in like manner. Our cut-price actors come over and do their dirty work, as villains and baddies and psychopaths, even American ones, while the cream of their prohibitively expensive acting talent Concordes it over the pond to steal the lion's share of our heroic roles. Either way, we lose.

One might wonder why Patterson is so upset that American actors get the heroic parts in American movies, but even if you ignore that, Patterson is stretching it pretty thin.
As Steven Den Beste notes, this theory doesn't go too far in explaining James Bond or Spy Kids. Never mind that the Next Generation captain of the starship Enterprise was a Brit (playing a Frenchman, no less). Ian McKellen plays Gandalf; Ewan McGregor plays Obi Wan Kenobi. The list goes on and on.
All that aside, however, it is true that British actors and characters often do portray the villain. It may even be as lopsided as Patterson contends, but the notion that such a thing implies some sort of deeply-rooted American contempt for the British is a bit off.
As anyone familiar with film will tell you, the villain needs to be so much more than just vile, wicked or depraved to be convincing. A villainous dolt won't create any tension with the audience; you need someone with brains or nobility. Ever notice how educated villains are? Indeed, there seems to be a preponderance of doctors who become supervillains (Dr. Demento, Dr. Octopus, Dr. Doom, Dr. Evil, Dr. Frankenstein, Dr. No, Dr. Sardonicus, Dr. Strangelove, etc...) - does this reflect an antipathy towards doctors? The abundance of British villains is no more odd than the abundance of doctors. As my little episode with the weatherman shows, when Americans hear a British accent, they hear intelligence. (This also explains the Gladiator case in which Joaquin Phoenix, who is Puerto Rican by the way, puts on a veiled British accent.)
The very best villains are the ones that are honorable, the ones with whom the audience can sympathize. Once again, the American assumption of British honor lends a certain depth and complexity to a character that is difficult to pull off otherwise. Who was the more engaging villain in X-Men, Magneto or Sabretooth? Obviously, the answer is Magneto, played superbly by British actor Ian McKellen. Having endured Nazi death camps as a child, he's not bent on domination of the world; he's attempting to avoid living through a second holocaust. He's not a megalomaniac, and his motivation strikes a chord with the audience. Sabretooth, on the other hand, is a hulking but pea-brained menace who contributes little to the conflict (much to the dismay of fans of the comic, in which Sabretooth is apparently quite shrewd).
Such characters are challenging. It's difficult to portray a villain as both evil and brilliant, sleazy and funny, moving and tragic. In fact, it is because of the complexity of this duality that villains are often the most interesting characters. That British actors are often chosen to do so is a testament to their capability and talent.
Some would attribute this to British actors' stage training, which is much less common in the U.S. They can deliver a daring and audacious performance while still fitting into an ensemble. It's also worth noting that many British actors are relatively unknown outside of the UK. Since they are capable of performing such a difficult role, and since they are unfamiliar to US audiences, casting them makes the films more interesting.
In the end, there's really very little that Patterson has to complain about, especially when he tries to port this issue over to politics. While a case may be made that there are a lot of British villains in movies (and there are plenty of villains who aren't British), that doesn't mean there is anything malicious behind it; indeed, depending on how you look at it, it could be considered a compliment that British culture lends itself to the complexity and intelligence required for a good villain we all love to hate (and hate to love). [thanks to USS Clueless for the Guardian article]
Posted by Mark on August 08, 2003 at 09:36 AM .: link :.
Sunday, May 25, 2003
Security & Technology
The other day, I was looking around for some new information on Quicksilver (Neal Stephenson's new novel, a follow up to Cryptonomicon) and I came across Stephenson's web page. I like everything about that page, from the low-tech simplicity of its design, to the pleading tone of the subject matter (the "continuous partial attention" bit always gets me). At one point, he gives a summary of a talk he gave in Toronto a few years ago:
Basically I think that security measures of a purely technological nature, such as guns and crypto, are of real value, but that the great bulk of our security, at least in modern industrialized nations, derives from intangible factors having to do with the social fabric, which are poorly understood by just about everyone. If that is true, then those who wish to use the Internet as a tool for enhancing security, freedom, and other good things might wish to turn their efforts away from purely technical fixes and try to develop some understanding of just what the social fabric is, how it works, and how the Internet could enhance it. However this may conflict with the (absolutely reasonable and understandable) desire for privacy.

And that quote got me thinking about technology and security, and how technology never really replaces human beings; it just makes certain tasks easier, quicker, and more efficient. There was a lot of talk about this sort of thing in the early 90s, when certain security experts were promoting the use of strong cryptography and digital agents that would choose what products we would buy and spend our money for us.
As it turns out, most of those security experts seem to be changing their mind. There are several reasons for this, chief among them fallibility and, quite frankly, a lack of demand. It is impossible to build an infallible system (at least, it's impossible to recognize that you have built such a system), but even if you had accomplished such a feat, what good would it be? A perfectly secure system is also a perfectly useless system. Besides that, you have human ignorance to contend with. How many of you actually encrypt your email? It sounds odd, but most people don't even notice the little yellow lock that comes up in their browser when they are using a secure site.
Applying this to our military, there are some who advocate technology (specifically airpower) as a replacement for the grunt. The recent war in Iraq stands in stark contrast to these arguments, despite the fact that the civilian planners overruled the military's request for additional ground forces. In fact, Rumsfeld and his civilian advisors had wanted to send significantly fewer ground forces, because they believed that airpower could do virtually everything by itself. The only reason there were as many as there were was because General Franks fought long and hard for increased ground forces (being a good soldier, you never heard him complain, but I suspect there will come a time when you hear about this sort of thing in his memoirs).
None of which is to say that airpower or technology are not necessary, nor do I think that ground forces alone can win a modern war. The major lesson of this war is that we need to have balanced forces in order to respond with flexibility and depth to the varied and changing threats our country faces. Technology plays a large part in this, as it makes our forces more effective and more likely to succeed. But, to paraphrase a common argument, we need to keep in mind that weapons don't fight wars, soldiers do. While the technology we used provided us with a great deal of security, it's also true that the social fabric of our armed forces was undeniably important in the victory.
One thing Stephenson points to is an excerpt from a Sherlock Holmes novel in which Holmes argues:
...the lowest and vilest alleys in London do not present a more dreadful record of sin than does the smiling and beautiful country-side...The pressure of public opinion can do in the town what the law cannot accomplish...But look at these lonely houses, each in its own fields, filled for the most part with poor ignorant folk who know little of the law. Think of the deeds of hellish cruelty, the hidden wickedness which may go on, year in, year out, in such places, and none the wiser.

Once again, the war in Iraq provides us with a great example. Embedding reporters in our units was a controversial move, and there are several reasons the decision could have been made. One reason may very well have been that having reporters around while we fought the war may have made our troops behave better than they would have otherwise. So when we watch the reports on TV, all we see are the professional, honorable soldiers who bravely fought an enemy which was fighting dirty (because embedding reporters revealed that as well).
Communications technology made embedding reporters possible, but it was the complex social interactions that really made it work (well, to our benefit at least). We don't derive security straight from technology, we use it to bolster our already existing social constructs, and the further our technology progresses, the easier and more efficient security becomes.
Update 6.6.03 - Tacitus discusses some similar issues...
Posted by Mark on May 25, 2003 at 02:03 PM .: link :.
Sunday, May 11, 2003
To hit or not to hit, that is the question
Gambling is a strange vice. Anyone with a brain in their head knows the games are rigged in the Casino's favor, and anyone with a knowledge of mathematics knows just how thoroughly. But that doesn't stop people from dropping their paychecks in a few hours. I stopped by Atlantic City this weekend and played some blackjack. The swings are amazing. I only played for about an hour, but I am always fascinated by the others at the table and even my own reactions.
I don't play to win; rather, I don't expect to win, but I like to gamble. I like having a stack of chips in front of me, I like the sounds and the smells and the gaudy flashing lights (I like the deliberately structured chaos of the Casino). I allot myself a fixed budget for the night, and it usually adds up to approximately what I'd spend on a good night out. People watching isn't really my thing, but it's hard not to enjoy it at a Casino, and that's something I spend a lot of time doing. Some people have the strangest superstitions and beliefs, and it's fun to step back and observe them at work. Even though I know the statistical underpinnings of how gambling works at a Casino, I even find myself thinking the same superstitious stuff, because it's only natural.
For instance, a lot of people think that if a player sitting at their table makes incorrect playing actions, it hurts their own odds. Statistically, this is not true, but when that guy sat down at third base and started hitting on his 16 when the dealer was showing a 5, you better believe a lot of people got upset. In reality, that moron's actions have just as much chance of helping other players as hurting them, but that's no consolation to someone who lost a hundred bucks in the short time since he sat down. Similarly, many people have progressive betting strategies that are "guaranteed" to win. Except, you know, they don't actually work (unless they're based on counting, but that's another story).
The odds in AC for Blackjack give the House an edge of about 0.44%. That doesn't sound like much, but it's plenty for the Casino, which would have an unfair advantage even if the odds were dead even. Don't forget, the Casino has deep pockets, and you don't. In order to take advantage of a prosperous swing in the game, you need to weather the House's streaks. If you're playing with $1000, you might be able to swing it, but don't forget, the Casino is playing with millions of dollars. They will break your bank if you spend enough time there, even if they didn't have the statistical advantage. That's why you get comps when you win: they're trying to keep you there so as to bring you closer to the statistical curve.
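The deep-pockets point can be made concrete with the classic gambler's ruin formula. This is only a rough sketch under simplifying assumptions: every hand is treated as a flat, even-money bet with a fixed win probability, ignoring doubles, splits, pushes, and 3:2 blackjack payouts, so the numbers are illustrative rather than exact.

```python
# Gambler's ruin: the chance of turning `start` betting units into
# `target` units before going broke, betting one unit per hand with
# win probability p. (Hypothetical model: each hand is an even-money
# coin flip; doubles, splits, and 3:2 payouts are ignored.)
def reach_target_prob(start: int, target: int, p: float) -> float:
    if p == 0.5:
        return start / target  # fair game: just the bankroll ratio
    r = (1 - p) / p  # ratio of loss probability to win probability
    return (1 - r ** start) / (1 - r ** target)

# A house edge of roughly 0.44% corresponds (in this toy model) to the
# player winning about 49.78% of even-money hands.
p_player = 0.4978

# $1000 bankroll in $10 units, trying to double it to $2000:
print(round(reach_target_prob(100, 200, p_player), 3))  # about 0.29
```

Even with a near-even game, the player doubles up well under half the time; shrink the bankroll or raise the target and the odds get worse still, which is the "deep pockets" advantage in a nutshell.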
The only way you can really win at Blackjack is to have the luck of a quick streak and the willpower to stop while you're up (as I noted before, if you're up a lot, the Casino will do their best to keep you playing), but that's a fragile system - you can't count on it, though it will happen sometimes. The only way to consistently win at Blackjack is to count cards. That can give you an advantage of around 1% (more on certain hands, less on others), depending on the House rules. This isn't Rain Man - you aren't keeping track of every card that comes out of the deck (rather, you're keeping a relative score of high-value cards to low cards), and you don't get an automatic winning edge on every hand. Depending on the count, the dealer can still play consistently better than you - but the dealer can't double down or split, and they only get even money for Blackjack. That's where the advantage comes from.
Of course, you have to have a pretty big bankroll to compensate for the Casino's natural "deep pockets" advantage, and you'll need to spend hundreds of hours practicing at home. Blackjack is fast and you need to be able to keep a running tab of the high/low card ratio (and you need to do some other calculations to get the true count), all the while you must appear to be playing normally, talking with the other players, dealing with the deliberately designed chaotic distractions of the Casino and generally trying not to come off as someone who is intensely concentrating. No small feat.
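The running tab described above can be sketched in a few lines. This is a simplified illustration of the widely documented Hi-Lo system (low cards 2-6 count +1, 7-9 count 0, tens and aces count -1); the deck-estimation shortcut in `true_count` is my own rough stand-in for eyeballing the discard tray, and betting or playing deviations are left out entirely.

```python
# Hi-Lo card counting sketch: low cards help the dealer, so seeing them
# leave the shoe is good for the player (+1); high cards favor the
# player, so seeing them go is bad (-1). A positive running count means
# the remaining shoe is rich in tens and aces.
HI_LO = {r: +1 for r in ('2', '3', '4', '5', '6')}
HI_LO.update({r: 0 for r in ('7', '8', '9')})
HI_LO.update({r: -1 for r in ('10', 'J', 'Q', 'K', 'A')})

def running_count(cards_seen):
    return sum(HI_LO[c] for c in cards_seen)

def true_count(cards_seen, decks_in_shoe=6):
    """Running count divided by an estimate of decks left in the shoe."""
    decks_left = decks_in_shoe - len(cards_seen) / 52
    return running_count(cards_seen) / max(decks_left, 0.5)

seen = ['5', '2', 'K', '6', '3', '10', '4']  # cards dealt so far
print(running_count(seen))  # -> 3 (low cards outnumber high cards)
```

Doing this arithmetic in your head, silently, at table speed, while chatting and sipping a free drink, is exactly the "no small feat" part.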
I'm not sure if that'd take all the fun out of it, not to mention drawing the Casino's attention to me (which can't be fun), but it would be an interesting talent to have, and it's a must if you want to win. At the very least, it's a good idea to get the basic strategy down. Do that and you'll be better than most of the people out there (even if you just memorize the Hard Totals table, you'll be in good shape).
Posted by Mark on May 11, 2003 at 09:12 PM .: link :.
Saturday, July 13, 2002
Call Me Lenny by James Grimmelmann : Taco Bell is running a new ad called "Chef Wars," an Iron Chef parody. The commercial is pathetic, and James laments that Iron Chef is no longer considered to be a piece of elite culture. Essentially, Iron Chef is no longer cool because it has become so popular that even culturally bereft Taco Bell customers will understand the reference.
As a longtime fan of Iron Chef, I suppose I can relate to James. Several years ago, a few drunk friends and I discovered Iron Chef late one night and fell in love with it. In the years that followed, it has grown more and more popular, to the point where there was even a pointless American version (hosted by William Shatner) and a rather funny parody on Saturday Night Live. Seeing those things made it less fun to be an Iron Chef fan, and to a certain extent, I agree with that point. But in a different way, Iron Chef is just as cool as it ever was and, in my mind, a genuinely good show is, well... good, no matter how popular it is.
As commenter Julia (at the bottom) notes, there are two main issues that James is hitting on:
I suppose it all comes down to exclusion. Things are cool, in part, because you are cool enough to recognize them as such. But if everyone is cool, what's the point? Which brings us to Malcolm Gladwell and his Coolhunt:
"In this sense, the third rule of cool fits perfectly into the second: the second rule says that cool cannot be manufactured, only observed, and the third says that it can only be observed by those who are themselves cool. And, of course, the first rule says that it cannot accurately be observed at all, because the act of discovering cool causes it to take flight, so if you add all three together they describe a closed loop, the hermeneutic circle of coolhunting, a phenomenon whereby not only can the uncool not see cool but cool cannot even be adequately described to them."

But is it cool to just recognize something as cool? James recognized Iron Chef as cool, but he didn't really enjoy it. So I guess that we should seek the cool, but not be fooled into thinking something is cool simply because it is going to be big one day...
Posted by Mark on July 13, 2002 at 02:19 PM .: link :.
Tuesday, October 09, 2001
The Fifty Nine Story Crisis
In 1978, William J. LeMessurier, one of the nation's leading structural engineers, received a phone call from an engineering student in New Jersey. The young man was tasked with writing a paper about the unique design of the Citicorp tower in New York. The building's dramatic design was necessitated by the placement of a church. Rather than tear down the church, the designers, Hugh Stubbins and Bill LeMessurier, set their fifty-nine-story tower on four massive, nine-story-high stilts, and positioned them at the center of each side rather than at each corner. This daring scheme allowed the designers to cantilever the building's four corners, allowing room for the church beneath the northwest side.
Thanks to the prodding of the student (whose name was lost in the swirl of subsequent events), LeMessurier discovered a subtle conceptual error in the design of the building's wind braces: they were unusually sensitive to certain diagonal winds known as quartering winds. This alone wasn't cause for worry, as the braces would have absorbed the extra load under normal circumstances. But the circumstances were not normal. There had been a crucial change during construction: the braces were fastened together with bolts instead of welds (welds were considered stronger than necessary and overly expensive), and the contractors had interpreted the New York building code in a way that exempted many of the tower's diagonal braces from load-bearing calculations, so they had used far too few bolts. Together, these changes multiplied the strain produced by quartering winds. Statistically, a storm severe enough to tear a joint apart could be expected once every sixteen years (what meteorologists call a sixteen-year storm). This was alarmingly frequent. To further complicate matters, hurricane season was fast approaching.
The potential for a complete catastrophic failure was there, and because the building was located in Manhattan, the danger extended to much of the city. The fall of the Citicorp building would likely have caused a domino effect, wreaking a devastating toll of destruction on New York.
The story of this oversight, though amazing, is dwarfed by the series of events that led to the building's eventual structural integrity. To avert disaster, LeMessurier quickly and bravely blew the whistle - on himself. LeMessurier and other experts immediately drew up a plan in which workers would reinforce the joints by welding heavy steel plates over them.
Astonishingly, just after Citicorp issued a bland and uninformative press release, all of the major newspapers in New York went on strike. This fortuitous turn of events allowed Citicorp to save face and avoid any potential embarrassment. Construction began immediately, with builders and welders working from 5 p.m. until 4 a.m. to apply the steel "band-aids" to the ailing joints. They built plywood boxes around the joints so as not to disturb the tenants, who remained largely oblivious to the seriousness of the problem.
Instead of lawsuits and public panic, the Citicorp crisis was met with efficient teamwork and a swift solution. In the end, LeMessurier's reputation was enhanced for his courageous honesty, and the story of Citicorp's building is now a textbook example of how to respond to a high-profile, potentially disastrous problem.
Most of this information came from a New Yorker article by Joe Morgenstern (published May 29, 1995). It's a fascinating story, and I found myself thinking about it during the tragedies of September 11. What if those towers had toppled over in Manhattan? Fortunately, the WTC towers were extremely well designed - they didn't even noticeably rock when the planes hit - and when they did come down, they collapsed in on themselves. They would still be standing today, too, if it weren't for the intense heat that weakened the steel supports.
Posted by Mark on October 09, 2001 at 08:04 AM .: link :.
Thursday, July 26, 2001
The Dune You'll Never See
Dune: The Movie You Will Never See by Alejandro Jodorowsky : The cult filmmaker's personal recollection of the failed production. The circumstances of Jodorowsky's planned 1970s production of Frank Herbert's novel Dune are inherently fascinating, if only because of the sheer creative power of the collaborators Jodorowsky was able to assemble. Pink Floyd, at the peak of their creativity, offered to write the score. Salvador Dali, Gloria Swanson, and Orson Welles were cast. Dan O'Bannon (fresh off of Dark Star) was hired to supervise special effects; illustrator Chris Foss to design spacecraft; H.R. Giger to design the world of Geidi Prime and the Harkonnens; artist Jean 'Moebius' Giraud drew thousands of sketches. The project eventually collapsed in 1977 and was subsequently passed on to Ridley Scott, and then to David Lynch, whose 1984 film was panned by audiences and critics alike.
Interestingly enough, this failed production has been surprisingly influential. "...the visual aspect of Star Wars strangely resembled our style. To make Alien, they called Moebius, Foss, Giger, O'Bannon, etc. The project signalled to Americans the possibility of making a big show of science-fiction films, outside of the scientific rigour of 2001: A Space Odyssey."
In reading his account of the failed production, it becomes readily apparent that Jodorowsky's Dune would bear only a slight resemblance to Herbert's novel. "I feel fervent admiration towards Herbert and at the same time conflict [...] I did everything to keep him away from the project... I had received a version of Dune and I wanted to transmit it: the myth had to abandon the literary form and become image..." In all fairness, this is not necessarily a bad thing, especially in the case of Dune, which many considered to be unfilmable (Lynch, it is said, tried to keep his story as close to the novel as possible - and look what happened there). Film and literature are two very different forms, and, as such, they use different tools to accomplish the same tasks. Movies must use a different "language" to express the same ideas.
I find the prospect of Jodorowsky's Dune fascinating, but I must also admit that I, like many others, would have been apprehensive about his vision. Would Jodorowsky's Dune have been able to live up to his ambition? Some think not:
Theory and retrospect are fine and in theory Jodorowsky's DUNE sounds too good to be true. But then again, anyone that reads his description and explanation of El Topo and then actually watches the thing is going to feel slightly conned. They might then come to the conclusion that Jodorowsky says lots, but means little.

Having seen El Topo, I can understand where this guy's coming from. I lack the ability to adequately describe the oddity, the disturbing phenomenon, that is El Topo. I can only say that it is the weirdest movie that I have ever seen (nay, experienced). But for all its disquieting peculiarity, I think it contains a certain raw power that really affects the viewer. It's that sort of thing, I think, that might have made Dune great.
In case you couldn't tell, Alejandro Jodorowsky is a strange, if fascinating, fellow. He wrote the script and soundtrack for, directed, and starred in the previously mentioned El Topo, which was hailed by John Lennon as a masterpiece (thus securing Jodorowsky's cult status). His follow-up, The Holy Mountain, continued along the same lines of thought. It was at this point that the director took the opportunity to work on Dune, which, as we have already found out, was a failure. Nevertheless, Jodorowsky plunges on, still making his own brand of bizarre films. As he says at the end of his account of the Dune debacle, "I have triumphed because I have learned to fail."
Posted by Mark on July 26, 2001 at 09:38 PM .: link :.
Friday, March 30, 2001
Hard Drinkin' Lincoln
I attended a lecture at Villanova University last night which was quite interesting. The speaker was Mike Reiss, one of the writer/producers of The Simpsons (among various other stints at The Tonight Show with Johnny Carson and the ever-popular Alf). He doesn't work on The Simpsons as much as he used to, but still hangs around the offices occasionally. Some interesting tidbits* from the lecture:
* - I'm going from memory here, so some of the quotes might be a little off, but you get the gist of it.
Posted by Mark on March 30, 2001 at 01:40 PM .: link :.
Where am I?
This page contains entries posted to the Kaedrin Weblog in the Best Entries Category.
Copyright © 1999 - 2012 by Mark Ciocco.