Best Entries

2008 Kaedrin Movie Awards

As of today, I’ve seen 62 movies that would be considered 2008 releases. This is on par with my 2007 viewing and perhaps a bit less than 2006. So I’m not your typical movie critic, but I’ve probably seen more than your average moviegoer. As such, this constitutes the kickoff of my year-end movie recap. The categories for this year’s movie awards are the same as last year’s, and things will proceed in a similar manner. Nominations will be announced today, and starting next week, I’ll announce the winners (new winners announced every day). After that, there might be some miscellaneous awards, followed by a top 10 list.

As I’ve mentioned before, 2008 has been a weak year for movies. I’m not sure if this was because of the writers’ strike, some other shift in studio strategy (the independent arms of many studios seem to be closing up shop, for instance), or because my taste has become more discriminating, but whatever the case, I’ve had trouble compiling my top 10. Indeed, I’m still not sure I’ve got a good list yet and am still scrambling to catch up with some of the lesser-known films of the year (many of which had minimal releases and are not out on DVD just yet). This is why these awards and my top 10 are a little later than last year. However, one of the things I like about doing these awards is that they allow me to give some love to films that I like, but which aren’t necessarily great or are otherwise flawed (as such, the categories may seem a bit eclectic). Some of these movies will end up on my top 10, but the vast majority of them will not.

The rules for this are the same as last year: Nominated movies must have been released in 2008 and I have to have seen the movie (and while I have seen a lot of movies, I don’t pretend to have seen a comprehensive selection – don’t let that stop you from suggesting something though). Also, I suppose I should mention the requisite disclaimer that these sorts of lists are inherently subjective and personal. But that’s all part of the fun, right?

Best Villain/Badass

It’s been a pretty good year for villainy! At least on par with last year, if not better. As with the past two years, my picks in this category are for individuals, not groups (i.e. no vampires or zombies as a group).

Winner Announced!

Best Hero/Badass

A distinct step down in terms of heroic badassery this year, but it’s not a terrible year either. Again limited to individuals and not groups.

Winner Announced!

Best Comedic Performance

Not a particularly strong year when it comes to comedy, but there still seem to be plenty of good performances, even in films I thought were lackluster…

Winner Announced!

Breakthrough Performance

Not a particularly huge year for breakthrough performances either, but there were definitely several interesting choices. As with previous years, my main criterion for this category was whether, immediately after watching a movie, I found myself looking up the actor/actress on IMDB to see what else they’ve done (or where they came from). This sometimes happens even for well-established actors/actresses, and this year was no exception.

Winner Announced!

Most Visually Stunning

Winner Announced!

Best Sci-Fi or Horror Film

I’m a total genre hound, despite genres generally receiving very little attention from critics. As usual, there was a dearth of quality SF this year, especially because I don’t consider Iron Man or The Dark Knight SF. However, a strong showing from the horror genre rounds out the nominations well. Plus, disappointed by the poor showing of SF, I cheated by nominating a 2007 SF film… I can’t even fudge the release dates the way I can with some independent or foreign flicks – by every measurement I can think of, it’s a 2007 film. But it was such a small film that flew under just about everyone’s radar (including mine!) that I’m going to include it, just to give it some attention, because I really did enjoy it.

Winner Announced!

Best Sequel

Honestly, I only saw 4 or 5 sequels all year, so this was a difficult category to populate (as it is every year). Still, there were at least two really great sequels this year…

Winner Announced!

Biggest Disappointment

Always a difficult award to figure out, as there are different ways in which a movie can disappoint. Usually, expectations play just as big a part in this as the actual quality of the film, and it’s possible for a decent movie to win the award because of astronomical expectations. This year had several obvious choices though. Usually I manage to avoid the real stinkers, but this year I saw two genuinely awful movies… in the theater!

Winner Announced!

Best Action Sequences

This is a kinda by-the-numbers year for action sequences. Nothing particularly groundbreaking or incredible, but there were some well executed, straightforward action movies this year. These aren’t really individual action sequences, but rather an overall estimation of each film.

Winner Announced!

Best Plot Twist/Surprise

Not a particularly strong year for the plot twist either.

Winner Announced!

Best High Concept Film

This was a new category last year, and like last year, I had a little difficulty coming up with this list, but overall, not bad.

Winner Announced!

Anyone have any suggestions (for either category or nominations)? Comments, complaints and suggestions are welcome, as always.

It looks like The Dark Knight is leading the way with an impressive 6 nominations (rivaled only by the 8 nominations earned by Grindhouse last year… with the caveat that Grindhouse is technically 2 movies in one). Not far behind is Hellboy II with a respectable 5 nominations. Surprisingly, both Forgetting Sarah Marshall and The Signal earned 3 nominations, while a whole slew of other films garnered 2 noms, and an even larger number earned a single nomination. As I mentioned earlier, I’m going to give myself a week to think about each of these. I might end up adding to the nominations if I see something new. Winners will be announced starting next Sunday or Monday. As with the last two years, there will be a small set of Arbitrary Awards after the standard awards are given out, followed by the top 10.

Update: Added a new plot twist nominee (Spiral), because I just watched it and it deserves it!

Update 1.25.09: Arbitrary Awards announced!

Update 2.15.09: Top 10 of 2008 has finally been posted!

Anathem

I finished Neal Stephenson’s latest novel, Anathem, a few weeks back. Overall, I enjoyed it heartily. I don’t think it’s his best work (a distinction that still belongs to Cryptonomicon or maybe Snow Crash), but it’s way above anything I’ve read recently. It’s a dense novel filled with interesting and complex ideas, but I had no problem keeping up once I got started. This is no small feat in a book that is around 900 pages long.

On the other hand, my somewhat recent discussion with Alex regarding the ills of Cryptonomicon has led me to believe that perhaps the reason I like Neal Stephenson’s novels so much is that he tunes into the same geeky frequencies I do. I think Shamus hit the nail on the head with this statement:

In fact, I have yet to introduce anyone to the book and have them like it. I’m slowly coming to the realization that Cryptonomicon is not a book for normal people. Flaws aside, there are wonderful parts to this book. The problem is, you have to really love math, history, and programming to derive enjoyment from them. You have to be odd in just the right way to love the book. Otherwise the thing is a bunch of wanking.

Similarly, Anathem is not a book for normal people. If you have any interest in Philosophy and/or Quantum Physics, this is the book for you. Otherwise, you might find it a bit dry… but you don’t need to be in love with those subjects to enjoy the book. You just need to find them interesting. I, for one, don’t know much about Quantum Physics at all, and I haven’t read any (real) Philosophy since college, and I didn’t have any problems. In fact, I was pretty much glued to the book the whole time. One of the ways I could tell I loved this book was that I wasn’t really aware of what page I was on until I neared the end (at which point the physicality of the book itself made it pretty obvious how much was left).

Minor spoilers ahead, though I try to keep this to a minimum.

The story takes place on another planet named Arbre and is told in the first person by a young man named Erasmus. Right away, this has the interesting effect of negating the multi-threaded stories of most of Stephenson’s other novels and providing a somewhat more linear progression of the story (at least until you get towards the end of the novel, when the linearity becomes dubious… but I digress). Erasmus, who is called Raz by his friends, is an Avout – someone who has taken certain vows to concentrate on the study of science, history and philosophy. The Avout are cloistered in areas called Concents, which are kind of like monasteries except that the focus of the Avout is on scholarship rather than religion. Concents are isolated from the rest of the world (the area beyond a Concent’s walls is referred to as Extramuros or the Saecular World), but there are certain periods in which the gates open and the Avout mix with the Saecular world (these periods are called Apert). Each Concent is split up into smaller Maths, which are categorized by the number of years that elapse between each Apert.

Each type of Math has interesting characteristics. Unarian maths have Apert every year, and are apparently a common way to achieve higher education before getting a job in the Saecular world (kinda like college or maybe grad-school). Decenarian maths have Apert once every ten years. Raz and most of the characters in the story are “tenners.” Centenarian maths have Apert once every century (and are referred to as hundreders) and Millenarian maths have Apert once every thousand years (and are called thousanders).

I suppose after reading the last two paragraphs, you’ll notice that Stephenson has spent a fair amount of time devising new words and concepts for his alien planet. At first, this seems a bit odd and it might take some getting used to, but after the first 50-100 pages, it’s pretty easy to keep up with all the new history and terminology. There’s a glossary in the back of the book for reference, but I honestly didn’t find that I needed it very often (at least, not the way I did while reading Dune, for instance). Much has been made of Stephenson’s choice in this matter, as well as his choice to set the story on an alien planet that has a history that is roughly analogous to Earth’s history. Indeed, it seems like there is a one-to-one relationship between many historical figures and concepts on Arbre and Earth. Take, for instance, Protas:

Protas, the greatest fid of Thelenes, had climbed to the top of a mountain near Ethras and looked down upon the plain that nourished the city-state and observed the shadows of the clouds, and compared their shapes. He had had his famous upsight that while the shapes of the shadows undeniably answered to those of the clouds, the latter were infinitely more complex and more perfectly realized than the former, which were distorted not only by the loss of a spatial dimension but also by being projected onto terrain that was of irregular shape. Hiking back down, he had extended that upsight by noting that the mountain seemed to have a different shape every time he turned round to look back at it, even though he knew it had one absolute form and that these seeming changes were mere figments of his shifting point of view. From there, he had moved on to his greatest upsight of all, which was that these two observations – the one concerning the clouds, the other concerning the mountain – were themselves both shadows cast into his mind by the same greater, unifying idea. (page 84)

Protas is clearly an analog to Plato (and thus, Thelenes is similar to Socrates) and the concepts described above run parallel to Plato’s concept of the Ideal (even going so far as to talk about shadows and the like, calling to mind Plato’s metaphor of the cave). There are literally dozens of these types of relationships in the book. Adrakhones is analogous to Pythagoras, Gardan’s Steelyard is similar to Occam’s Razor, and so on. Personally, I rather enjoyed picking up on these similarities, but the referential nature of the setting might seem rather indulgent on Stephenson’s part (at least, it might seem so to someone who hasn’t read the book). I even speculated as much while I was reading the book, but as a reader noted in the comments to my post, that’s not all there is to it. It turns out that Stephenson’s choice to set the story on Arbre, a planet that has a history suspiciously similar to Earth, was not an indulgence at all. Indeed, it becomes clear later in the book that these similarities are actually vital to the story being told.

This sort of thing represents a sorta meta-theme of the book. Where Cryptonomicon is filled with little anecdotes and tangents that are somewhat related to the story, Anathem is tighter. Concepts that are seemingly tangential and irrelevant wind up playing an important role later in the book. Don’t get me wrong, there are certainly a few tangents or anecdotes that are just that, but despite the 900+ page length of the book, Stephenson does a reasonably good job juggling ideas, most of which end up being important later in the book.

The first couple hundred pages of the novel take place within a Concent, and thus you get a pretty good idea of what life is like for the Avout. It’s always been clear that Stephenson appreciates the opportunity to concentrate on something without having any interruptions. His old website quoted former Microsoft employee Linda Stone’s concept of “continuous partial attention,” which is something most people are familiar with these days. Cell phones, emails, Blackberries/iPhones, TV, and even the internet are all pieces of technology which allow us to split our attention and multi-task, but at the same time, such technology also serves to make it difficult to find a few uninterrupted hours with which to delve into something. Well, in a Concent, the Avout have no such distractions. They lead a somewhat regimented, simple life with few belongings and spend most of their time thinking, talking, building and writing. Much of their time is spent in Socratic dialogue with one another. At first, this seems rather odd, but it’s clear that these people are first rate thinkers. And while philosophical discussions can sometimes be a bit dry, Stephenson does his best to liven up the proceedings. Take, for example, this dialogue between Raz and his mentor, Orolo:

“Describe worrying,” he went on.

What!?

“Pretend I’m someone who has never worried. I’m mystified. I don’t get it. Tell me how to worry.”

“Well… I guess the first step is to envision a sequence of events as they might play out in the future.”

“But I do that all the time. And yet I don’t worry.”

“It is a sequence of events with a bad end.”

“So, you’re worried that a pink dragon will fly over the concent and fart nerve gas on us?”

“No,” I said with a nervous chuckle.

“I don’t get it,” Orolo claimed, deadpan. “That is a sequence of events with a bad end.”

“But it’s nonsensical. There are no nerve-gas-farting pink dragons.”

“Fine,” he said, “a blue one, then.” (page 198)

And this goes on for a few pages as well. Incidentally, this is also an example of one of those things that seems like it’s an irrelevant tangent, but returns later in the story.

So the Avout are a patient bunch, willing to put in hundreds of years of study to figure out something you or I might find trivial. I was reminded of the great unglamorous march of technology, only amplified. Take, for instance, these guys:

Bunjo was a Millenarian math built around an empty salt mine two miles underground. Its fraas and suurs worked in shifts, sitting in total darkness waiting to see flashes of light from a vast array of crystalline particle detectors. Every thousand years they published their results. During the First Millenium they were pretty sure they had seen flashes on three separate occasions, but since then they had come up empty. (page 262)

As you might imagine, there is some tension between the Saecular world and the Avout. Indeed, there have been several “sacks” of the various Concents. This happens when the Saecular world gets freaked out by something the Avout are working on and attacks them. However, at the time of the novel, things are relatively calm. Total isolation is not possible, so there are Hierarchs from the Avout who keep in touch with the Saecular world, and thus when the Saecular world comes across a particularly daunting problem or crisis, they can call on the Avout to provide some experts for guidance. Anathem tells the story of one such problem (let’s say they are faced with an external threat), and it leads to an unprecedented gathering of Avout outside of their concents.

I realize that I’ve spent almost 2000 words without describing the story in anything but a vague way, but I’m hesitant to give away too much of it. However, I will mention that the book is not all philosophical dithering and epic worldbuilding. There are martial artists (who are Avout from a Concent known as the Ringing Vale, which just sounds right), cross-continental survival treks, and even some space travel. All of this is mixed together well, and while I wouldn’t characterize the novel as an action story, there’s more than enough there to keep things moving. In fact, I don’t want to give the impression that the story takes a back seat at any point during the novel. Most of the worldbuilding I’ve mentioned comes through incidentally in the telling of the story. There are certainly “info-dumps” from time to time, but even those are generally told within the framework of the story.

There are quite a few characters in the novel (as you might expect, considering its length), but the main ones are reasonably well defined and interesting. Erasmus turns out to be a typical Stephensonian character – a very smart man who is constantly thrust into feuds between geniuses (i.e. a Randy/Daniel Waterhouse type). As such, he is a likeable fellow who is easy to relate to and empathize with. He has several Avout friends, each of whom plays an important role in the story, despite being separated from the others from time to time. There’s even a bit of a romance between Raz and one of the other Avout, though it proceeds somewhat unconventionally. During the course of the story, Raz even makes some Extramuros friends. One is his sister Cord, who seems to be rather bright, especially when it comes to mechanics. Another is Sammann, who is an Ita (basically a techno-nerd who is always connected to networks, etc…). Raz’s mentor Orolo has been in the Concent for much longer than Raz, and is thus always ten steps ahead of him (he’s the one who brought up the nerve-gas-farting pink dragons above).

Another character who doesn’t make an appearance until later on in the story is Fraa Jad. He’s a Millenarian, so if Orolo is always ten steps ahead, Jad is probably a thousand steps ahead. He has a habit of biding his time and dropping a philosophical bomb into a conversation, like this:

Fraa Jad threw his napkin on the table and said: “Consciousness amplifies the weak signals that, like cobwebs spun between trees, web Narratives together. Moreover, it amplifies them selectively and in that way creates feedback loops that steer the Narratives.” (page 701)

If that doesn’t make a lot of sense, that’s because it doesn’t. In the book, the characters surrounding Jad spend a few pages trying to unpack what was said there. That might seem a bit tedious, but it’s actually kinda funny when he does stuff like that, and his ideas really are driving the plot forward, in a way. One thing Stephenson doesn’t spend much time discussing is how the Millenarians continue to exist. He doesn’t explicitly come out and say it, but the people on Arbre seem to have life spans similar to humans (perhaps a little longer), so it’s a little unclear how things like Millenarian Maths can exist. He does mention that thousanders have managed to survive longer than others, but it’s not clear how or why. If one were so inclined, one could perhaps draw a parallel between the Thousanders in Anathem and the Eruditorium in Cryptonomicon and the Baroque Cycle. Indeed, Enoch Root would probably fit right in at a Millenarian Math… but I’m pretty sure I’m just reading way too much into this and that Stephenson wasn’t intentionally trying to draw such a parallel. It’s still an interesting thought though.

Overall, Stephenson has created and sustained a detailed world, and he has done so primarily through telling the story. Indeed, I’m only really touching the surface of what he’s created here, and honestly, so is he. It’s clear that Stephenson could easily have made this into another 3000-page Baroque Cycle-style trilogy, delving into the details of the history and culture of Arbre, but despite the novel’s length, he keeps things relatively tight. The ending probably won’t do much to convince those who don’t like his endings that he’s turned over a new leaf, but I enjoyed it and thought it ranked well among those of his previous books. There are some who will consider the quasi-loose-ends in the story to be frustrating, but I thought it actually worked out well and was internally consistent with the rest of the story (it’s hard to describe this without going into too much detail). In the end, this is Stephenson’s best work since Cryptonomicon and the best book I’ve read in years. It will probably be enjoyed by anyone who is already a Stephenson fan. Otherwise, I’m positive that there are people out there who are just the right kind of weird to really enjoy this book. I expect that anyone who is deeply interested in Philosophy or Quantum Physics would have a ball. Personally, I’m not too experienced in either realm, but I still enjoyed the book immensely. Here’s hoping we don’t have to wait another 4 years for a new Stephenson novel…

Rewatching Movies

One of the cable channels was playing Ocean’s Eleven all weekend, and that’s one of those movies I always find myself watching when it comes on (this time, I even went to the shelf and fired up the DVD, so as to avoid commercials). Of course, there are tons of new, never-seen-before things I want to watch. My Netflix queue currently has around 140 movies in it (and this seems to be growing with time, despite the rate at which I go through my rentals). I’ve got a DVD set of Banner of the Stars that I’m only about 1/3 of the way through. My DVR has a couple episodes of the few TV shows I follow queued up for me. Yet I find myself watching Ocean’s Eleven for the umpteenth time. And loving every second of it.

In actuality, I’ve noticed myself doing this sort of thing less and less over the years. When I was younger, I would watch and rewatch certain movies almost daily. There are several movies that have probably moved up into triple digit rewatches (for the curious, the films in this list include The Terminator, Aliens, The Empire Strikes Back, Return of the Jedi and Phantasm). Others I’ve only rewatched dozens of times. As time goes on, I find myself less and less likely to rewatch things. I think Netflix has become a big part of that, because I want to get my money’s worth from the service, and the only way to do that is to continually watch new movies. In recent years, I’ve also come to realize that even though I’ve seen way more movies than the average person, there are still a lot of holes in my film knowledge. I do find myself limited by time these days, so when it comes down to rewatching an old favorite or potentially discovering a new one, I tend to favor the new films these days. But I still relapse (focusing on novelty has its own challenges), and I do find myself rewatching movies on a regular basis.

Get away from her you bitch!

Why is that? There are some people who never rewatch movies, but even with my declining repeat viewings, I don’t count myself among them. Some films almost demand you to watch them again. For instance, I recently watched Andrei Tarkovsky’s thoughtful, if difficult, SF film Solaris. This is a film that seems designed to reveal itself only upon multiple viewings. Tarkovsky is somewhat infamous for this sort of thing, and there are a lot of movies out there that are like that. Upon repeated viewings, these films take on added dimensions. You start to notice things. Correlations, strange relationships, and references become more apparent.

Other films, however, are just a lot of fun to rewatch. This raises a lot of interesting questions. Why is a movie fun even when we know the ending? Indeed, why do some reviewers even include a rating for rewatchability? In some cases we just like spending time with certain characters or settings and don’t mind that we already know the outcome. I’ve made a distinction between these films and the ones that demand multiple viewings, but many of the benefits of repeat viewings apply to both types of movies. Rewatching a film can be a richer, deeper experience, and you start to notice things you didn’t upon first viewing. Indeed, one interesting thing about rewatching movies is that while the movie is the same, you are not. Context matters. Every time we rewatch something, we bring our knowledge and experience (which is always changing) to the table. Sometimes this can be trivial (like noticing a reference or homage you didn’t know about), but I’ve often heard about movies that become more poignant to people after they have children or as they grow older. Similarly, rewatching a movie can transport us back to the context in which we first saw it. I still remember the excitement and the spectacle of going to see Batman or Terminator 2 on opening day. Those were fun experiences from my childhood, even if I don’t particularly love either movie. Heck, just the thought of how often I used to rewatch some movies is a fun memory that gets brought up whenever I think about those movies today…

I’ll be back when you watch this movie 200 more times...

There are also a lot of fascinating psychological implications to rewatching movies. As I mentioned before, we sometimes rewatch movies to revisit characters we consider friends or situations we find satisfying. In the case of comedies, we want to laugh. In the case of horror films, we want to scare ourselves or feel suspense. And strangely, even though we know the outcomes of these movies, they still seem to be able to elicit these various emotions as we rewatch them. Movies that depict true stories can generate suspense or fear even when we know how the story will turn out. Two recent, high-profile examples of this are United 93 and Zodiac. Both of those films were immersive enough upon first viewing that I felt suspense at various parts of the story, even though I knew on an intellectual level where both films were heading. David Bordwell has explored this concept thoroughly and references several interesting theories as to why rewatching movies remains powerful:

Normally we say that suspense demands an uncertainty about how things will turn out. Watching Hitchcock’s Notorious for the first time, you feel suspense at certain points-when the champagne is running out during the cocktail party, or when Devlin escorts the drugged Alicia out of Sebastian’s house. That’s because, we usually say, you don’t know if the spying couple will succeed in their mission.

But later you watch Notorious a second time. Strangely, you feel suspense, moment by moment, all over again. You know perfectly well how things will turn out, so how can there be uncertainty? How can you feel suspense on the second, or twenty-second viewing?

Here’s one theory he covers:

…in general, when we reread a novel or rewatch a film, our cognitive system doesn’t apply its prior knowledge of what will happen. Why? Because our minds evolved to deal with the real world, and there you never know exactly what will happen next. Every situation is unique, and no course of events is literally identical to an earlier one. “Our moment-by-moment processes evolved in response to the brute fact of nonrepetition” (Experiencing Narrative Worlds, 171). Somehow, this assumption that every act is unique became our default for understanding events, even fictional ones we’ve encountered before.

He goes into a lot more detail about this theory and others in his post. Several of the theories he covers touch on what I find most interesting about the subject, which is that our brain seems to have compartmentalized the processing of various data. I’m going to simplify drastically for effect here, but I think the general idea is right (I’m not a neurologist though, so take it with a grain of salt). When processing visual and audio data, there is a part of the brain that is, for lack of a better term, stateless. It picks up a stimulus, immediately renders it (into a visual or audio representation), then shuttles it off to another part of the brain which interprets the output. This interpretation seems to be where our brain slows down. The initial processing is involuntary and unconscious, and it doesn’t take other data (like memories) into account. We don’t have to consciously think about it, it just happens. Something similar happens when we first begin to interpret data. Our brain seems to be unconsciously and continually forming different interpretations and then rejecting most of them. The rejected thoughts are displaced by new alternatives which incorporate more of our knowledge and experience (and perhaps this part happens in a more conscious fashion). We’ve all had the experience of thinking something that almost immediately disturbed us, leaving us wondering where that thought came from. Bordwell gives a common example (I’ve read about this exact example at least three times from different people):

Standing at a viewing station on a mountaintop, safe behind the railing, I can look down and feel fear. I don’t really believe I’ll fall. If I did, I would back away fast. I imagine I’m going to fall; perhaps I even picture myself plunging into the void and, a la Björk, slamming against the rocks at the bottom. Just the thought of it makes my palms clammy on the rail.

So perhaps one reason it doesn’t matter that we know how a movie will turn out is that there is a part of us that is blindly processing data without incorporating what we already know. Another reason we still feel emotions like suspense during a movie we’ve seen before is because we can imagine what would happen if it didn’t turn out the way we know it will. In both cases, there is a conscious intellectual response which can negate our instinctual thoughts, but such responses seem to happen after the fact (at which point, you’ve already experienced the emotion in question and can’t just take it back). One of the most beautiful things about laughter is that it happens involuntarily. We don’t (always) have to think about it, we just do it. Dennis Miller once wrote about this:

The truth is the human sense of humor tends to be barbaric and it has been that way all along. I’m sure on the eve of the nativity when the tall Magi smacked his forehead on the crossbeam while entering the stable, Joseph took a second away from pondering who impregnated his wife and laughed his little carpenter ass off. A sense of humor is exactly that: a sense. Not a fact, not etched in stone, not an empirical math equation but just what the word intones: a sense of what you find funny. And obviously, everybody has a different sense of what’s funny. If you need confirmation on that I would remind you that Saved by the Bell recently celebrated the taping of their 100th episode. Oh well, one man’s Molier is another man’s Screech and you know something thats the way it should be.

Indeed, humor generally dissipates when you try to explain it. You either get it or you don’t.

I could probably go on and on about this, but Bordwell has done an excellent job in his post (there’s an interesting bit about mirror neurons, for instance), and unlike me, he’s got lots of references. I do find the subject fascinating though, and I began wondering about the impact of people rewatching movies so often. After all, this is a somewhat recent trend we’re talking about (not that people didn’t rewatch movies before the advent of the VCR and DVD, but that technology has obviously increased the amount of rewatching).

We’re living in an on-demand era right now, meaning that we can choose what we want to watch whenever we want (well, we’re not quite there yet, but we’re moving quickly in that direction). If I want to rewatch Solaris a hundred times and analyze it like the Zapruder film, I’m free to do so (and it might even be a rewarding effort). In the past, things weren’t necessarily like that though. James Berardinelli recently wrote about rewatching movies, and he provides some interesting historical context:

30 years ago, if you loved a movie, re-watching it involved patience and hard work. A big Hollywood picture might show up in prime time (ABC regularly aired the James Bond movies on Sunday nights) but smaller/older films were relegated to late night or weekend afternoon showings. Lovers of High Noon (for example) might have to wait a couple of years and religiously check TV listings before being rewarded by its appearance on “The Million Dollar Movie” at 12:30 am some night.

One reason why pre-1980 movie lovers are generally better educated in cinema than their post-1980 counterparts is that TV-based movie watching in the ’60s and ’70s meant seeing what was provided, and that typically covered many genres and eras of film. I can recall watching a silent film (The Cabinet of Dr. Caligari) on a local station one afternoon in 1977. When was the last time a silent movie aired on any over-the-air television station? The advent of video in the early 1980s and its rapid adoption during the middle of the decade allowed viewers to “program” their home movie watching. They could now see what they wanted to see rather than what was on TV.

Again, this trend has continued, and the degree to which you can program your viewing schedule is ever increasing. Even during the 1980s when I was growing up, I found myself beholden to the broadcast schedules more often than not. Sure, I could tape things with a VCR, but I usually found myself browsing the channels looking for something to watch. There was a certain serendipity to discovering movies in those days. I distinctly remember the first time I saw a Spaghetti Western (For a Few Dollars More), getting hooked, and watching a bunch of others (Cinemax was running a series of them that month). The last time I remember something like that happening was about 5-6 years ago when I caught an Italian horror marathon on some cable movie channel. And the only reason I watched that was because I had seen Suspiria before and wanted to watch it again. It was followed by several Mario Bava films that were very interesting. Today, I look back on some of the films I watched in my childhood, even ones I cherished, and I wonder why I ever bothered to watch them in the first place. It was probably because nothing else was on. The advent of digital cable has changed things as well, because it doesn’t encourage blind channel surfing. There’s a program guide built right in, so you can browse that to find what you want. Unfortunately, that means you could skip right over something you would otherwise like (and that may have caught your eye if you saw a glimpse of it). There’s also a lot more to choose from (perhaps leading to a paradox of choice situation).

Of course, there are other ways for film lovers to discover new films they wouldn’t otherwise have watched. On a personal level, listening to various film podcasts, especially Filmspotting and All Movie Talk (which is sadly now defunct, though still worth listening to if you love movies), has been incredibly helpful in finding and exploring various genres or eras of film that I had not been acquainted with. One effective technique that Filmspotting has employed is the use of marathons, in which they watch 5-6 movies from a genre or filmmaker they are not particularly familiar with. Of course, this, too, is subject to the whims of listeners – many (including myself) will avoid films that don’t have an immediate appeal. Still, I’ve found myself playing along with several of their marathons and watching movies I don’t think I would ever watch on my own.

One interesting film experiment is currently being conducted by a blogger named Matthew Dessem. He wanted to learn more about foreign films and found that the Criterion Collection was an interesting place to start. It contains a good mix of the old, new, foreign, and independent, and it goes in a somewhat random order. He started writing a review for each movie at his blog, The Criterion Contraption. He’s about 80 or so movies into the collection, and his reviews are exceptionally good (apparently the product of about 15 hours of work each). In an interview, Dessem explains his reasoning for watching the collection in order and why he writes reviews for each one:

I began writing about the films simply as a way of keeping myself intellectually honest: thinking about how each movie was supposed to work, paying attention to what was effective and what was not. Given the chance to not engage with a difficult film, I’ll usually take it, unless I have to come up with something coherent to say about it.

Later in the interview, he expands on why he watches the films in the order Criterion put them out:

Mostly, it keeps me honest. If I had the choice to watch the films in any order, I would quickly jump to all the films I most want to see, and never get around to the ones that seem less interesting. That means I’d miss out on a lot of discoveries, which was one of my main goals to begin with. But jumping around from country to country and decade to decade has its own rewards: like any good 21st century citizen, I have a pretty good case of apophenia, so I’ll often see connections that don’t exist between films.

I can definitely see where he’s coming from. Looking through the catalog of Criterion, I see a lot of movies that I’d probably skip if I didn’t require myself to watch them in order (as it is now, I’ve seen somewhere around 10% of the movies, and there’s no particular order I’ve gone in – I sorta fell into the trap where I “quickly jump to all the films I most want to see, and never get around to the ones that seem less interesting”. Except, of course, I haven’t decided to watch all the Criterion Collection movies.) Indeed some of the movies I have seen, I probably wouldn’t recommend except in certain circumstances (for example, I wouldn’t recommend Equinox to anyone but die-hard horror fans).

However, while there are ways for us film lovers to seek out and expand our knowledge of film, I do wonder about the casual moviegoers. Is the recent trend of remakes (or reimaginings or whatever they call them these days) partially the result of this phenomenon? I wonder how many of the younger generation saw Rob Zombie’s limp remake of Halloween and then sought out the brilliant original? That is perhaps too high-profile of an example. How about the original Ocean’s Eleven? As it turns out, I have not seen that movie, despite loving the remake. I’ve added it to my Netflix queue. It rests at position 116 right now, which means I’ll probably get to it sometime within the next five years. Now if you’ll excuse me, I’m going to rewatch The Empire Strikes Back. It is my destiny.

I have seen this a hundred times, but I get the chills during this scene every time...

Update: Added some screenshots from movies I’ve watched a bazillion times. Also just want to note that while I spent most of my time talking about movies here, the same goes for books and music. I don’t tend to reread books much (perhaps due to the time commitment reading a book takes), but on the other hand, music gets better with multiple listenings (so much so that no one even questions the practice of listening to music multiple times).

Best Films of 2007

I saw somewhere on the order of 60 movies that were released in 2007. This is somewhat lower than most critics, but higher than your average moviegoer. Also unlike most critics, I don’t consider this to have been a spectacular year for film. For instance, I left several films off my 2006 list that would have been shoo-ins this year. If I were to take a more objective stance, limiting my picks to the movies with the best technical qualities, the list would be somewhat easier. But that’s a boring way to assemble a list, and absolute objectivity is not possible in any case. Movies that really caught my attention and interested me were somewhat fewer this year. Don’t get me wrong, I love movies and there were a lot of good ones this year, but there were few movies that really clicked with me. As such, a lot of the top 10 could easily be exchanged with a movie from the Honorable Mention section. So without further ado:

Top 10 Movies of 2007

* In roughly reverse order

  • Zodiac: This one barely makes it on this list. It’s one of the few early year releases that has made it on the list, and as such, it’s something I actually want to revisit. But of all the early year films I saw, I remember this being the most interesting and best made. If you know about the Zodiac killer, you know the ending won’t provide any real explanations (nor should it) as the killer was never caught in real life. As such, this does diminish some of the tension from the film. Still, director David Fincher has made an impeccable film. It’s not as showy or spectacular as his previous efforts. Stylistically, it’s rather straightforward, and yet, it’s a gorgeous film to look at, and Fincher does manage to imbue some tension throughout the film, which focuses more on the obsession of those trying to find the Zodiac than the Zodiac himself.

    More Info: [IMDB] [Amazon]

  • Gone Baby Gone: It basically starts out as a straightforward crime thriller and mystery, and those elements are very well done. But the ending introduces a moral dilemma that has no good answers. You can’t help but put yourself into the movie and think about what you would do in such a case, and to be honest, I don’t know what I’d do. I suppose I should mention that this is Ben Affleck’s directorial debut, and he proves shockingly adept behind the camera.

    More Info: [IMDB] [Amazon]

  • The Bourne Ultimatum: A fantastic action film, and one of the few sequels worth its salt in a year of particularly bad sequels. Paul Greengrass’ infamous shaky camera is actually put to good use here, and the film also features good performances and great stuntwork. Some may be put off by the camera work, but when you look at a film like this, and then you look at a film like Transformers, you can see a huge difference in style and talent.

    More Info: [IMDB] [Amazon]

  • Superbad: Hands down, the funniest movie of the year. I’m a sucker for raunchy humor with a heart, and this movie has that in spades. Great performances by Jonah Hill and the deadpan Michael Cera, as well as just about everyone else. Of all the movies on this list, this one probably has the most replay value, and is also probably the most quotable.

    More Info: [IMDB] [Amazon]

  • Stardust: This might be the most thoroughly enjoyable movie of the year. A great adventure film that evokes The Princess Bride (perhaps unfairly inviting comparisons) while asserting an identity of its own. In a year filled with dark, heavy-hitting dramas, it was nice to sit down to a well done fantasy film. Well directed with good performances (including an unusual turn by Robert DeNiro as a flamboyant pirate) and nice visuals, the real strength of this film is the story, which retains the fun feeling of a fantasy while skirting darker, edgier material.

    More Info: [IMDB] [Amazon]

  • The King of Kong: A Fistful of Quarters: Documentary films don’t generally find much of an audience in theaters, but The King of Kong should be in every video game enthusiast’s Netflix queue. It delves into the rough and tumble world of competitive video gaming for classic games, particularly Donkey Kong, but it does so kinda like an inspirational sports film. You’ve got your lovable underdog who has never won anything in his life, and of course the villainous champion who looks down on the underdog and seeks to steal his thunder. It’s a great movie and highly recommended for video game fans.

    More Info: [IMDB] [Amazon]

  • The Orphanage: Certainly the creepiest movie of the year. Though perhaps not exactly a horror film, it establishes a high level of tension throughout, and the story, while a little odd, works pretty well too. A Spanish-language film that gets unfairly compared to Pan’s Labyrinth, it is nonetheless worth watching for any fan of ghost stories.

    More Info: [IMDB] [Amazon]

  • The Lives of Others: This film actually won the Oscar for best foreign-language film last year (beating out Pan’s Labyrinth – a surprise to me), so I might be cheating a bit, but it didn’t really have a theatrical release in the U.S. until 2007, so I’m putting it on this list. Set in East Germany during the Cold War, this film follows a Stasi agent who begins to feel for the subjects he’s surveilling. It doesn’t sound like much, and it’s not exactly action-packed, but it is quite compelling and one of the most powerful films of the year. All of the technical aspects of the film are brilliant, especially the script and the nuanced acting by Ulrich Mühe. This film would be near the top of any year’s list.

    More Info: [IMDB] [Amazon]

  • Grindhouse: I’m referring, of course, to the theatrical release of this film. I say this because a lot of critics like to separate the two features and heap praise on Tarantino’s Death Proof (which, I’ll grant, is probably the better of the two if I were forced to choose), but to me, nothing beats the full experience of the theatrical version. It starts out with a hilarious “fake” trailer, then moves into Robert Rodriguez’s Planet Terror, an over-the-top zombie action film done in true grindhouse style (missing reels and all). Following that, we get three more absolutely brilliant fake trailers and Tarantino’s wonderful Death Proof. The films are dark, they’re edgy, and they’re probably not for everyone. In attempting to emulate 70s grindhouse cinema, the filmmakers have lovingly reproduced its tropes, some of which may bother audiences (particularly the awkward pacing of both features, which is actually brilliance in disguise). It’s a crime that the theatrical version is not available on DVD. The double-billing was poorly advertised, so it looks like the studio opted to split the films up and give longer cuts of each their own DVD. Supposedly, a six-disc boxed set containing everything is in the works.

    More Info: [IMDB] [Planet Terror | Death Proof] [Winner of 3 Kaedrin Movie Awards]

  • No Country for Old Men: The Coen brothers have outdone themselves. This is perhaps a boring pick, as this film is at or near the top of most top 10 lists, but that happened for a reason. It’s a great damn film. Gorgeous photography, tension-filled action, and that rare brand of dark humor that the Coens are so good at. It also features the most memorable and terrifying villain in years. The ending is uncompromising and ambiguous (which may turn some viewers off), but I found it quite appropriate. Of all the films this year, this one is the best made and the most entertaining (if a little dark), a combo that’s certainly difficult to pull off.

    More Info: [IMDB] [Amazon] [Winner of 3 Kaedrin Movie Awards]

Honorable Mention

As I mentioned above, a lot of these honorable mentions would probably do fine for the bottom half of the top 10 (the top half is pretty strong, actually). In some cases, I really struggled with a lot of the below picks. If my mood were different, I bet some things would change. These are all good movies and worth watching too.

  • Juno: This film could easily have made my top 10 list, and it’s the dark horse pick for the Best Picture Oscar. Funny comedies that are also smart and clever are rare, and this is a wonderful example. Juno’s too-cool-for-school hipster dialogue was definitely a turn-off for portions of the film (particularly the beginning), but it sorta grows on you too, and by the end, you’re so involved in the story that it’s not noticeable. Of particular note here are Ellen Page’s brilliant performance as the title character and the turns by J.K. Simmons and Allison Janney as her parents. Michael Cera puts in another subdued performance, but hey, he’s great at that and it fits well.

    More Info: [IMDB]

  • Waitress: Yet another unexpected pregnancy movie (there were three this year, the others being Juno and Knocked Up). It’s a “chick flick,” but I found that I really enjoyed it. Aside from the fact that nearly everyone in the movie is cheating on their partner, it’s really quite an endearing movie, and it’s very sad indeed that writer/director Adrienne Shelly will not be making any more films (she died shortly after production). Great performances by Keri Russell and Nathan Fillion (of Firefly/Serenity fame) and a nice turn by Andy Griffith as the crotchety-old-man-with-a-heart-of-gold.

    More Info: [IMDB] [Amazon]

  • Rescue Dawn: Werner Herzog’s great film depicting a POW’s struggle for survival in the jungles of Vietnam could easily have made the top 10 (a lot of the films in the honorable mention could have). I’m not that familiar with Herzog, but after seeing this film, I’d definitely like to check out some of his older classics. Good performances by Christian Bale (one of the best of his generation) and Steve Zahn (who is normally relegated to comic relief, but does a nice job in this dramatic role).

    More Info: [IMDB] [Amazon]

  • Sunshine: Solid space-based science fiction is somewhat of a rarity these days (actually, SF in general seems to be), and this film manages to pull it off. It’s a little cliche-ridden (some good, some bad), but I really enjoyed this film, even the ending, which seems to strike a lot of people the wrong way (I loved it). Good ensemble cast, wonderful high-contrast lighting and a decent story. Perhaps not the greatest film, but there’s something to be said for a well executed genre film.

    More Info: [IMDB] [Amazon]

  • Ratatouille: Brad Bird is perhaps my favorite American animator working today, and this film really is a delight. It is, perhaps, not as seamless as his previous efforts (I was particularly taken with his last film, The Incredibles), but it’s still quite a good film. The story follows a rat who seems to have developed a talent for cooking. This rat eventually teams up with a young human guy so that they can elevate the cuisine at a famous French restaurant. It sounds silly, and well, it is I guess, but who cares? It’s fun. The one ironic bit is that the character of the rat is much more compelling than any of the human characters. There are a lot of nice touches in the movie, and I’m quite looking forward to Bird’s next project (whatever that might be).

    More Info: [IMDB] [Amazon]

  • Michael Clayton: This slow-burning legal thriller was actually quite good. Helmed by Bourne collaborator Tony Gilroy, this film goes perhaps a little too far at times, but is otherwise a keenly constructed thriller. At times, it doesn’t seem like there’s really that much going on in the film, but Gilroy somehow manages to keep the pace high (a neat trick, that) and I did genuinely find myself surprised by the ending.

    More Info: [IMDB] [Amazon]

  • There Will Be Blood: Amazing character study from director Paul Thomas Anderson. The first 20 minutes of the film are an outstanding exercise in breaking from tradition (there’s almost no dialogue, but it’s also compelling material and necessary for the story). The over-the-top ending is a little strange and leaves you wondering “Why?” but it’s also oddly appropriate. It’s one of those movies that has grown on me the more I think about it. Daniel Day-Lewis gives an amazing performance (yeah, I’ll even give it to him considering the last 20 minutes of the movie) and Anderson is at the top of his game. Oh, and I DRINK YOUR MILKSHAKE!!!! I DRINK IT UP!!!!!!

    More Info: [IMDB]

  • Eastern Promises: Well, the premise of this film isn’t all that exciting, but I found Viggo Mortensen’s performance riveting and his character provided most of the film’s interesting twists and turns. It’s worth watching because of him and his character, but it’s also a flawed film (especially in comparison to the other recent Cronenberg/Mortensen collaboration, A History of Violence).

    More Info: [IMDB] [Amazon]

  • Hot Fuzz: Among the better comedies this year, Hot Fuzz is an effective action movie parody. While much of that is overt, there are some great subtle touches as well (particularly with respect to Simon Pegg’s performance, as he evokes shades of Schwarzenegger in Predator or the T-1000 in T2). Ultimately, the story devolves into something rather stupid, which puts this a peg below Shaun of the Dead (which was made by the same filmmaking team), but it’s still quite entertaining.

    More Info: [IMDB] [Amazon]

  • Black Book: Despite the involvement of Paul Verhoeven (whom I generally dislike, with rare exceptions), this turns out to be one of the more involving historical thrillers that I’ve seen in recent years. It’s not a profound journey, but it’s got some wonderful pot-boileresque elements and it managed to pull me into the story, which was complex and well done.

    More Info: [IMDB] [Amazon]

Should have seen:

Well, there you have it. A little late, but I made it. That just about wraps up the Kaedrin movie awards; I hope you enjoyed them. I don’t know if I’ll do another Top 10 Box Office Performance analysis, but if I do, it probably won’t be for a little while (that might actually make it a little more accurate too).

The Paradise of Choice?

A while ago, I wrote a post about the Paradox of Choice based on a talk by Barry Schwartz, the author of a book by the same name. The basic argument Schwartz makes is that choice is a double-edged sword. Choice is a good thing, but too much choice can have negative consequences, usually in the form of some kind of paralysis (where there are so many choices that you simply avoid the decision) and consumer remorse (elevated expectations, anticipated regret, etc…). The observations made by Schwartz struck me as being quite astute, and I’ve been keenly aware of situations where I find myself confronted with a paradox of choice ever since. Indeed, just knowing and recognizing these situations seems to help deal with the negative aspects of having too many choices available.

This past summer, I read Chris Anderson’s book, The Long Tail, and I was pleasantly surprised to see a chapter in his book titled “The Paradise of Choice.” In that chapter, Anderson explicitly addresses Schwartz’s book. However, while I liked Anderson’s book and generally agreed with his basic points, I think his dismissal of the Paradox of Choice is off target. Part of the problem, I think, is that Anderson is much more concerned with the choices themselves rather than the consequences of those choices (which is what Schwartz focuses on). It’s a little difficult to tell, though, as Anderson only dedicates 7 pages or so to the topic. As such, his arguments don’t really eviscerate Schwartz’s work. There are some good points though, so let’s take a closer look.

Anderson starts with a summary of Schwartz’s main concepts, and points to some of Schwartz’s conclusions (from page 171 in my edition):

As the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.

Now, the way Anderson presents this is a bit out of context, but we’ll get to that in a moment. Anderson continues and then responds to some of these points (again, page 171):

As an antidote to this poison of our modern age, Schwartz recommends that consumers “satisfice,” in the jargon of social science, not “maximize”. In other words, they’d be happier if they just settled for what was in front of them rather than obsessing over whether something else might be even better. …

I’m skeptical. The alternative to letting people choose is choosing for them. The lessons of a century of retail science (along with the history of Soviet department stores) are that this is not what most consumers want.

Anderson has completely missed the point here. Later in the chapter, he spends a lot of time establishing that people do, in fact, like choice. And he’s right. My problem is twofold: First, Schwartz never denies that choice is a good thing, and second, he never advocates removing choice in the first place. Yes, people love choice, the more the better. However, Schwartz found that even though people preferred more options, they weren’t necessarily happier because of it. That’s why it’s called the paradox of choice – people obviously prefer something that ends up having negative consequences. Schwartz’s book isn’t some sort of crusade against choice. Indeed, it’s more of a guide for how to cope with being given too many choices. Take “satisficing.” As Tom Slee notes in a critique of this chapter, Anderson misstates Schwartz’s definition of the term. He makes it seem like satisficing is settling for something you might not want, but Schwartz’s definition is much different:

To satisfice is to settle for something that is good enough and not worry about the possibility that there might be something better. A satisficer has criteria and standards. She searches until she finds an item that meets those standards, and at that point, she stops.

Settling for something that is good enough to meet your needs is quite different from just settling for what’s in front of you. Again, I’m not sure Anderson is really arguing against Schwartz. Indeed, Anderson even acknowledges part of the problem, though he again misstates Schwartz’s arguments:

Vast choice is not always an unalloyed good, of course. It too often forces us to ask, “Well, what do I want?” and introspection doesn’t come naturally to all. But the solution is not to limit choice, but to order it so it isn’t oppressive.

Personally, I don’t think the problem is that introspection doesn’t come naturally to some people (though that could be part of it); it’s more that some people just don’t give a crap about certain things and don’t want to spend time figuring them out. In Schwartz’s talk, he gave an example about going to the Gap to buy a pair of jeans. Of course, the Gap offers a wide variety of jeans (as of right now: Standard Fit, Loose Fit, Boot Fit, Easy Fit, Morrison Slim Fit, Low Rise Fit, Toland Fit, Hayes Fit, Relaxed Fit, Baggy Fit, Carpenter Fit). The clerk asked him what he wanted, and he said “I just want a pair of jeans!”
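
In programming terms, Schwartz’s satisficer is running a first-fit search: set your standards up front, take the first option that meets them, and stop looking. The maximizer, by contrast, insists on examining everything. The little sketch below is just my own analogy for the distinction – it’s not from Schwartz or Anderson, and the jeans data is made up – but it captures why the satisficer gets to leave the store sooner.

    def satisfice(options, meets_standards):
        """Return the first option that is good enough, then stop looking."""
        for option in options:
            if meets_standards(option):
                return option
        return None  # nothing met the standard

    def maximize(options, score):
        """Examine every option and insist on the single best one."""
        return max(options, key=score)

    # Hypothetical jeans inventory, invented for illustration.
    jeans = [
        {"fit": "Boot Fit", "price": 70},
        {"fit": "Easy Fit", "price": 45},
        {"fit": "Standard Fit", "price": 40},
        {"fit": "Relaxed Fit", "price": 55},
    ]

    # "I just want a pair of jeans" – anything under $60 will do.
    print(satisfice(jeans, lambda j: j["price"] < 60))   # Easy Fit, found on the second try
    print(maximize(jeans, lambda j: -j["price"]))        # Standard Fit, after checking all four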

The second part of Anderson’s statement is more interesting, though. Aside from again misstating Schwartz’s argument (he does not advocate limiting choice!), it makes a worthwhile observation: the way a choice is presented matters. Yes, the Gap has a wide variety of jean styles, but look at their website again. At the top of the page is a little guide to what each of the styles means. For the most part, it’s helpful, and I think that’s what Anderson is getting at. Too much choice can be oppressive, but if you have the right guide, you can get the best of both worlds. The only problem is that finding the right guide is not as easy as it sounds. The jean style guide at Gap is neat and helpful, but you do have to click through a bunch of stuff and read it. This is easier than going to a store and trying all the varieties on, but it’s still a pain for someone who just wants a pair of jeans, dammit.

Anderson spends some time fleshing out these guides to making choices, noting the differences between offline and online retailers:

In a bricks-and-mortar store, products sit on the shelf where they have been placed. If a consumer doesn’t know what he or she wants, the only guide is whatever marketing material may be printed on the package, and the rough assumption that the product offered in the greatest volume is probably the most popular.

Online, however, the consumer has a lot more help. There are a nearly infinite number of techniques to tap the latent information in a marketplace and make that selection process easier. You can sort by price, by ratings, by date, and by genre. You can read customer reviews. You can compare prices across products and, if you want, head off to Google to find out as much about the product as you can imagine. Recommendations suggest products that ‘people like you’ have been buying, and surprisingly enough, they’re often on-target. Even if you know nothing about the category, ranking best-sellers will reveal the most popular choice, which both makes selection easier and also tends to minimize post-sale regret. …

… The paradox of choice is simply an artifact of the limitations of the physical world, where the information necessary to make an informed choice is lost.

I think it’s a very good point he’s making, though I think he’s a bit too optimistic about how effective these guides to buying really are. For one thing, there are times when a choice isn’t clear, even if you do have a guide. Also, while I think recommendations based on what other customers purchase are important and helpful, who among us hasn’t seen absurd recommendations? From my personal experience, a lot of people don’t like the connotations of recommendations either (how do they know so much about me? etc…). Personally, I really like recommendations, but I’m a geek and I like to figure out why they’re offering me what they are (Amazon actually tells you why something is recommended, which is really neat). In any case, from my own anecdotal observations, no one puts much faith in probabilistic systems like recommendations or ratings (for a number of reasons, such as cheating or distrust). There’s nothing wrong with that, and that’s part of why such systems are effective. Ironically, acknowledging their imperfections allows users to better utilize the systems. Anderson knows this, but I think he’s still a bit too optimistic about our tools for traversing the long tail. Personally, I think they need a lot of work.
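
To make the “people like you have been buying” idea a little more concrete, here’s a toy sketch (in Python) of the simplest possible recommender: suggest whatever is most often bought alongside the item you’re looking at. To be clear, this is not Amazon’s actual algorithm (or anyone else’s), and the purchase data and product names are invented purely for illustration.

    from collections import defaultdict
    from itertools import combinations

    def build_cooccurrence(orders):
        """Count how often each pair of products appears in the same order."""
        counts = defaultdict(lambda: defaultdict(int))
        for order in orders:
            for a, b in combinations(sorted(set(order)), 2):
                counts[a][b] += 1
                counts[b][a] += 1
        return counts

    def recommend(counts, product, top_n=3):
        """Products most frequently bought alongside `product`, most common first."""
        related = counts[product]
        return sorted(related, key=related.get, reverse=True)[:top_n]

    # Hypothetical purchase history, made up for this example.
    orders = [
        ["jeans", "belt", "socks"],
        ["jeans", "belt"],
        ["jeans", "t-shirt"],
        ["belt", "socks"],
    ]

    counts = build_cooccurrence(orders)
    print(recommend(counts, "jeans"))  # 'belt' first, then 'socks' and 't-shirt'

Real systems layer ratings, normalization, and anti-gaming measures on top of something like this, but the basic blind spot remains: the system only knows what co-occurred, not why, which is exactly where those absurd recommendations come from.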

When I was younger, one of the big problems in computing was storage. Computers are the perfect data gathering tool, but you need somewhere to store all that data. In the 1980s and early 1990s, computers and networks were significantly limited by hardware, particularly storage. By the late 1990s, Moore’s law had eroded this deficiency significantly, and today, the problem of storage is largely solved. You can buy a terabyte of storage for just a couple hundred dollars. However, as I’m fond of saying, we don’t so much solve problems as trade one set of problems for another. Now that we have the ability to store all this information, how do we get at it in a meaningful way? When hardware was limited, analysis was easy enough. Now, though, you have so much data available that the simple analyses of the past don’t cut it anymore. We’re capturing all this new information, but are we really using it to its full potential?

I recently caught up with Malcolm Gladwell’s article on the Enron collapse. The really crazy thing about Enron was that they didn’t really hide what they were doing. They fully acknowledged and disclosed what they were doing… there was just so much complexity to their operations that no one really recognized the issues. They were “caught” because someone had the persistence to dig through all the public documentation that Enron had provided. Gladwell goes into a lot of detail, but here are a few excerpts:

Enron’s downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the nineteen-seventies. To expose the White House coverup, Bob Woodward and Carl Bernstein used a source – Deep Throat – who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flower pot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn’t being followed, and meet his source in an underground parking garage at 2 A.M. …

Did Jonathan Weil have a Deep Throat? Not really. He had a friend in the investment-management business with some suspicions about energy-trading companies like Enron, but the friend wasn’t an insider. Nor did Weil’s source direct him to files detailing the clandestine activities of the company. He just told Weil to read a series of public documents that had been prepared and distributed by Enron itself. Woodward met with his secret source in an underground parking garage in the hours before dawn. Weil called up an accounting expert at Michigan State.

When Weil had finished his reporting, he called Enron for comment. “They had their chief accounting officer and six or seven people fly up to Dallas,” Weil says. They met in a conference room at the Journal’s offices. The Enron officials acknowledged that the money they said they earned was virtually all money that they hoped to earn. Weil and the Enron officials then had a long conversation about how certain Enron was about its estimates of future earnings. …

Of all the moments in the Enron unravelling, this meeting is surely the strangest. The prosecutor in the Enron case told the jury to send Jeffrey Skilling to prison because Enron had hidden the truth: You’re “entitled to be told what the financial condition of the company is,” the prosecutor had said. But what truth was Enron hiding here? Everything Weil learned for his Enron expose came from Enron, and when he wanted to confirm his numbers the company’s executives got on a plane and sat down with him in a conference room in Dallas.

Again, there’s a lot more detail in Gladwell’s article. Just how complicated was the public documentation that Enron had released? Gladwell gives some examples, including this one:

Enron’s S.P.E.s were, by any measure, evidence of extraordinary recklessness and incompetence. But you can’t blame Enron for covering up the existence of its side deals. It didn’t; it disclosed them. The argument against the company, then, is more accurately that it didn’t tell its investors enough about its S.P.E.s. But what is enough? Enron had some three thousand S.P.E.s, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty S.P.E. disclosure statements from various corporations – that is, summaries of the deals put together for interested parties – and found that on average they ran to forty single-spaced pages. So a summary of Enron’s S.P.E.s would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That’s what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That’s what the Powers Committee put together. The committee looked only at the “substance of the most significant transactions,” and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was “with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation.”

Again, Gladwell’s article has a lot of other details and is a fascinating read. What interested me the most, though, was the problem created by so much data. That much information is useless if you can’t sift through it quickly or effectively enough. Bringing this back to the paradise of choice, the current systems we have for making such decisions are better than ever, but still require a lot of improvement. Anderson is mostly talking about simple consumer products, so none are really as complicated as the Enron case, but even then, there are still a lot of problems. If we’re really going to overcome the paradox of choice, we need better information analysis tools to help guide us. That said, Anderson’s general point still holds:

More choice really is better. But now we know that variety alone is not enough; we also need information about that variety and what other consumers before us have done with the same choices. … The paradox of choice turned out to be more about the poverty of help in making that choice than a rejection of plenty. Order it wrong and choice is oppressive; order it right and it’s liberating.

Personally, while the help in making choices has improved, there’s still a long way to go before we can really tackle the paradox of choice (though, again, just knowing about the paradox of choice seems to do wonders in coping with it).

As a side note, I wonder if the video game playing generations are better at dealing with too much choice – video games are all about decisions, so I wonder if folks who grew up working on their decision making apparatus are more comfortable with being deluged by choice.

Manuals, or the lack thereof…

When I first started playing video games and using computer applications, I remember having to read the instruction manuals to figure out what was happening on screen. I don’t know if this was because I was young and couldn’t figure this stuff out, or because some of the controls were obtuse and difficult. It was perhaps a combination of both, but I think the latter was more prevalent, especially as applications and games became more complex and powerful. I remember sitting down at a computer running DOS and loading up WordPerfect. The interface that appeared was rather simplistic; the developers apparently wanted to avoid the “clutter” of on-screen menus, so they used keyboard combinations. According to Wikipedia, WordPerfect used “almost every possible combination of function keys with Ctrl, Alt, and Shift modifiers.” I vaguely remember needing to use those stupid keyboard templates (little pieces of laminated paper that fit snugly around the keyboard keys, helping you remember what key or combo does what).

Video Games used to have great manuals too. I distinctly remember several great manuals from the Atari 2600 era. For example, the manual for Pitfall II was a wonderful document done in the style of Pitfall Harry’s diary. The game itself had little in the way of exposition, so you had to read the manual to figure out that you were trying to rescue your niece Rhonda and her cat, Quickclaw, who became trapped in a catacomb while searching for the fabled Raj diamond. Another example for the Commodore 64 was Temple of Apshai. The game had awful graphics, but each room you entered had a number, and you had to consult your manual to get a description of the room.

By the time of the NES, the importance of manuals had waned from Apshai levels, but they were still somewhat necessary at times, and gaming companies still went to a lot of trouble to produce helpful documents. The one that stands out in my mind is the manual for Dragon Warrior III, which was huge (at least 50 pages) and also contained a nice fold-out chart of most of the monsters and weapons in the game (with really great artwork). PC games were also getting more complex, and as Roy noted recently, companies like Sierra put together really nice instruction manuals for complex games like the King’s Quest series.

In the early 1990s, my family got its first Windows PC, and several things changed. With the Word for Windows software, you didn’t need any of those silly keyboard templates. Everything you needed to do was in a menu somewhere, and you could just point and click instead of having to memorize strange keyboard combos. Naturally, computer purists love the keyboard, and with good reason. If you really want to be efficient, the keyboard is the way to go, which is why Linux users are so fond of the command line and simple looking but powerful applications like Emacs. But for your average user, the GUI was very important, and made things a lot easier to figure out. Word had a user manual, and it was several hundred pages long, but I don’t think I ever cracked it open, except maybe out of curiosity (not because I needed to).

The trends of improving interfaces and less useful manuals continued throughout the next decade, and today, well, I can’t think of the last time I had to consult a physical manual for anything. Steven Den Beste has been playing around with flash for a while, but he says he never looks at the manual. “Manuals are for wimps.” In his post, Roy wonders where all the manuals have gone. He speculates that manufacturing costs are a primary culprit, and I have no doubt that they are, but there are probably a couple of other reasons as well. For one, interfaces have become much more intuitive and easy to use. This is in part due to familiarity with computers and the emergence of consistent standards for things like dialog boxes (of course, when you eschew those standards, you get what Jakob Nielsen describes as a catastrophic failure). If you can easily figure it out through the interface, what use are the manuals? With respect to gaming, in-game tutorials have largely taken the place of instruction manuals. Another thing that has perhaps affected official instruction manuals is the unofficial walkthroughs and game guides. Visit a local bookstore and you’ll find entire bookcases devoted to video game guides and walkthroughs. As nice as the manual for Pitfall II was, you really didn’t need much more than 10 pages to explain how to play that game, but several hundred pages barely do justice to some of the more complex video games in today’s market. Perhaps the reason gaming companies don’t give you instruction manuals with the game is not just that printing the manual is costly, but that they can sell you a more detailed and useful one.

Steven Johnson’s book Everything Bad is Good for You has a chapter on Video Games that is very illuminating (in fact, the whole book is highly recommended – even if you don’t totally agree with his premise, he still makes a compelling argument). He talks about the official guides and why they’re so popular:

The dirty little secret of gaming is how much time you spend not having fun. You may be frustrated; you may be confused or disoriented; you may be stuck. When you put the game down and move back into the real world, you may find yourself mentally working through the problem you’ve been wrestling with, as though you were worrying a loose tooth. If this is mindless escapism, it’s a strangely masochistic version.

He gives an example of a man who spends six months working as a smith (mindless work) in Ultima online so that he can attain a certain ability, and he also talks about how people spend tons of money on guides for getting past various roadblocks. Why would someone do this? Johnson spends a fair amount of time going into the neurological underpinnings of this, most notably what he calls the “reward circuitry of the brain.” In games, rewards are everywhere. More life, more magic spells, new equipment, etc… And how do we get these rewards? Johnson thinks there are two main modes of intellectual labor that go into video gaming, and he calls them probing and telescoping.

Probing is essentially exploration of the game and its possibilities. Much of this is simply the unconscious exploration of the controls and the interface, figuring out how the game works and how you’re supposed to interact with it. However, probing also takes the more conscious form of figuring out the limitations of the game. For instance, in a racing game, it’s usually interesting to see if you can turn your car around backwards, pick up a lot of speed, then crash head-on into a car going the “correct” way. Or, in Rollercoaster Tycoon, you can creatively place balloon stands next to a roller coaster to see what happens (the result is hilarious). Probing the limits of game physics and finding ways to exploit them are half the fun (or challenge) of video games these days, which is perhaps another reason why manuals are becoming less frequent.

Telescoping has more to do with the game’s objectives. Once you’ve figured out how to play the game through probing, you seek to exploit your knowledge to achieve the game’s objectives, which are often nested in a hierarchical fashion. For instance, to save the princess, you must first enter the castle, but you need a key to get into the castle, and the key is guarded by a dragon, etc… Indeed, the structure is sometimes even more complicated, and you essentially build this hierarchy of goals in your head as the game progresses; that nested structure is what Johnson calls telescoping.
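
Johnson’s example is easy to picture as a nested structure. Here’s a quick sketch of the princess scenario above, in Python purely for illustration (this is my own rendering of the idea, not anything from the book):

    # The telescoped goal hierarchy: the outermost goal at the top,
    # the thing you actually have to do next nested deepest.
    goals = {
        "save the princess": {
            "enter the castle": {
                "get the castle key": {
                    "defeat the dragon guarding the key": {},
                },
            },
        },
    }

    def next_objective(tree):
        """Walk down to the deepest unmet goal – the immediate task."""
        name, subgoals = next(iter(tree.items()))
        return next_objective(subgoals) if subgoals else name

    print(next_objective(goals))  # "defeat the dragon guarding the key"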

So why is this important? Johnson has the answer (page 41 in my edition):

… far more than books or movies or music, games force you to make decisions. Novels may activate our imagination, and music may conjure up powerful emotions, but games force you to decide, to choose, to prioritize. All the intellectual benefits of gaming derive from this fundamental virtue, because learning how to think is ultimately about learning to make the right decisions: weighing evidence, analyzing situations, consulting your long term goals, and then deciding. No other pop culture form directly engages the brain’s decision-making apparatus in the same way. From the outside, the primary activity of a gamer looks like a fury of clicking and shooting, which is why much of the conventional wisdom about games focuses on hand-eye coordination. But if you peer inside the gamer’s mind, the primary activity turns out to be another creature altogether: making decisions, some of them snap judgements, some long-term strategies.

Probing and telescoping are essential to learning in any sense, and the way Johnson describes them in the book reminds me of a number of critical thinking methods. Probing, developing a hypothesis, reprobing, and then rethinking the hypothesis is essentially the same thing as the scientific method or the hermeneutic circle. As such, it should be interesting to see if video games ever really catch on as learning tools. There have been a lot of attempts at this sort of thing, but they’re often stifled by the reputation of video games as a “colossal waste of time” (in recent years, the benefits of gaming are being acknowledged more and more, though not usually as dramatically as Johnson does in his book).

Another interesting use for video games might be evaluation. A while ago, Bill Simmons made an offhand reference to EA Sports’ Madden games in the context of hiring football coaches (this shows up at #29 on his list):

The Maurice Carthon fiasco raises the annual question, “When teams are hiring offensive and defensive coordinators, why wouldn’t they have them call plays in video games to get a feel for their play calling?” Seriously, what would be more valuable, hearing them B.S. about the philosophies for an hour, or seeing them call plays in a simulated game at the all-Madden level? Same goes for head coaches: How could you get a feel for a coach until you’ve played poker and blackjack with him?

When I think about how such a thing would actually go down, I’m not so sure, because the football world created by Madden, as complex and comprehensive as it is, still isn’t exactly the same as the real football world. However, I think the concept is still sound. Theoretically, you could see how a prospective coach would actually react to a new, and yet similar, football paradigm and how they’d find weaknesses and exploit them. The actual plays they call aren’t that important; what you’d be trying to figure out is whether or not the coach was making intelligent decisions.

So where are manuals headed? I suspect that they’ll become less and less prevalent as time goes on and interfaces become more and more intuitive (though there is still a long way to go before I’d say that computer interfaces are truly intuitive, I think they’re much more intuitive now than they were ten years ago). We’ll see more interactive demos and in-game tutorials, and perhaps even games used as teaching tools. I could probably write a whole separate post about how this applies to Linux, which actually does require you to look at manuals sometimes (though at least they have a relatively consistent way of treating manuals; even when the documentation is bad, you can usually find it). Manuals and passive teaching devices will become less important. And to be honest, I don’t think we’ll miss them. They’re annoying.

Referential

A few weeks ago, I wrote about how context matters when consuming art. As sometimes happens when writing an entry, that one got away from me and I never got around to the point I originally started with (that entry was originally entitled “Referential” but I changed it when I realized that I wasn’t going to write anything about references), which was how much of our entertainment these days references its predecessors. This takes many forms, some overt (homages, parody), some a little more subtle.

I originally started thinking about this while watching an episode of Family Guy. The show is infamous for its random cutaway gags – little vignettes that have no connection to the story, but which often make some obscure reference to pop culture. For some reason, I started thinking about what it would be like to watch an episode of Family Guy with someone from, let’s say, the 17th century. Let’s further speculate that this person isn’t a blithering idiot, but perhaps a member of the Royal Society or something (i.e. a bright fellow).

This would naturally be something of a challenge. There are some technical explanations that would be necessary. For example, we’d have to explain electricity, cable networks, signal processing and how the television works (which at least involves discussions on light and color). The concept of an animated show, at least, would probably be easy to explain (but it would involve a discussion of how the human eye works, to a degree).

There’s more to it, of course, but moving past all that, once we start watching the show, we’re going to have to explain why we’re laughing at pretty much all of the jokes. Again, most of the jokes are simply references and parodies of other pieces of pop culture. Watching an episode of Family Guy with Isaac Newton (to pick a prominent Royal Society member) would necessitate a pause just about every minute to explain what each reference was from and why Family Guy’s take on it made me laugh. Then there’s the fact that Family Guy rarely has any sort of redeemable lesson and often deliberately skews towards actively encouraging evil (something along the lines of “I think the important thing to remember is that it’s ok to lie, so long as you don’t get caught.” I don’t think that exact line is in an episode, but it could be.) This works fine for us, as we’re so steeped in popular culture that we get the fact that Family Guy is just lampooning the notion that we could learn important life lessons via a half-hour sitcom. But I’m sure Isaac Newton would be appalled.

For some reason, I find this fascinating, and try to imagine how I would explain various jokes. For instance, the episode I was watching featured a joke concerning “cool side of the pillow.” They cut to a scene in bed where Peter flips over the pillow and sees Billy Dee Williams’ face, which proceeds to give a speech about how cool this side of the pillow is, ending with “Works every time.” This joke alone would require a whole digression into Star Wars and how most of the stars of that series struggled to overcome their typecasting and couldn’t find a lot of good work, so people like Billy Dee Williams ended up doing commercials for a malt liquor named Colt 45, which had these really cheesy commercials where Billy Dee talked like that. And so on. It could probably take an hour before my guest would even come close to understanding the context of the joke (I’m not even touching the tip of the iceberg with this post).

And the irony of this whole thing is that jokes that are explained simply aren’t funny. To be honest, I’m not even sure why I find these simple gags funny (that, of course, is the joy of humor – you don’t usually have to understand it or think about it, you just laugh). Seriously, why is it funny when Family Guy blatantly references some classic movie or show? Again, I’m not sure, but that sort of humor has been steadily growing over the past 30 years or so.

Not all comedies are that blatant about their referential humor though (indeed, Family Guy itself doesn’t solely rely upon such references). A recent example of a good referential film is Shaun of the Dead, which somehow manages to be both a parody and an example of a good zombie movie. It pays homage to all the classic zombie films and it also makes fun of other genres (notably the romantic comedy), but in doing so, the filmmakers have also made a good zombie movie in itself. The filmmakers have recently released a new film called Hot Fuzz, which attempts the same trick for action movies and buddy comedies. It is, perhaps, not as successful as Shaun, but the sheer number of references in the film is astounding. There are the obvious and explicit ones like Point Break and Bad Boys II, but there are also tons of subtle homages that I’d wager most people wouldn’t get. For instance, when Simon Pegg yells in the movie, he’s doing a pitch perfect impersonation of Arnold Schwarzenegger in Predator. And when he chases after a criminal, he imitates the way Robert Patrick’s T-1000 runs from Terminator 2.

References don’t need to be part of a comedy either (though comedies seem to make the easiest examples). Hop on IMDB and go to just about any recent movie, and click on the “Movie Connections” link in the left navigation. For instance, did you know that the aforementioned T2 references The Wizard of Oz and The Killing, amongst dozens of others? Most of the time, these references are really difficult to pick out, especially when you’re viewing a foreign film or show that’s pulling from a different cultural background. References don’t have to be story or character based – they can be the way a scene is composed or the way the lighting is set (e.g. the Venetian blinds in noir films).

Now, this doesn’t just apply to art either. A lot of common knowledge in today’s world is referential. Most formal writing includes references and bibliographies, for instance, and a non-fiction book will often assume basic familiarity with a subject. When I was in school, I was always annoyed at the amount of rote memorization they made us do. Why memorize it if I could just look it up? Shouldn’t you be focusing on my critical thinking skills instead of making me memorize arbitrary lists of facts? Sometimes this complaining was probably warranted, but most of it wasn’t. So much of what we do in today’s world requires a well-rounded familiarity with a large number of subjects (including history, science, culture, amongst many other things). There simply isn’t any substitute for actual knowledge. Though it was a pain at the time, I’m glad emphasis was put on memorization during my education. A while back, David Foster noted that schools are actually moving away from this, and makes several important distinctions. He takes an example of a song:

Jakob Dylan has a song that includes the following lines:

Cupid, don’t draw back your bow

Sam Cooke didn’t know what I know

Think of how much you need to know in order to understand these two simple lines:

1) You need to know that, in mythology, Cupid symbolizes love

2) And that Cupid’s chosen instrument is the bow and arrow

3) Also that there was a singer/songwriter named Sam Cooke

4) And that he had a song called “Cupid,” which included the lines “Cupid, draw back your bow.”

… “Progressive” educators, loudly and in large numbers, insist that students should be taught “thinking skills” as opposed to memorization. But consider: If it’s not possible to understand a couple of lines from a popular song without knowing by heart the references to which it alludes–without memorizing them–what chance is there for understanding medieval history, or modern physics, without having a ready grasp of the topics which these disciplines reference?

And also consider: in the Dylan case, it’s not just what you need to know to appreciate the song. It’s what Dylan needed to know to create it in the first place. Had he not already had the reference points–Cupid, the bow and arrow, the Sam Cooke song–in his head, there’s no way he would have been able to create his own lines. The idea that he could have just “looked them up,” which educators often suggest is the way to deal with factual knowledge, would be ludicrous in this context. And it would also be ludicrous in the context of creating new ideas about history or physics.

As Foster notes, this doesn’t mean that “thinking skills” are unimportant, just that knowledge is important too. You need to have a quality data set in order to use those “thinking skills” effectively.

Human beings tend to leverage knowledge to create new knowledge. This has a lot of implications, one of which concerns intellectual property law. Giving limited copyright to intellectual property is important, because the data in that property eventually becomes available for all to build upon. It’s ironic that educators are considering less of a focus on memorization, as this requirement of referential knowledge has been increasing for some time. Students need a base of knowledge to both understand and compose new works. References help you avoid reinventing the wheel every time you need to create something, which leads to my next point.

I think part of the reason references are becoming more and more common these days is that they make entertainment a little less passive. Watching TV or a movie is, of course, a passive activity, but if you make lots of references and homages, the viewer is required to think through those references. If the viewer has the appropriate knowledge, such a TV show or movie becomes a little more cognitively engaging. It makes you think, it calls to mind previous work, and it forces you to contextualize what you’re watching based on what you know about other works. References are part of the complexity of modern television and film, and Steven Johnson spends a significant amount of time talking about this subject in his book Everything Bad is Good for You (from page 85 of my edition):

Nearly every extended sequence in Seinfeld or The Simpsons, however, will contain a joke that makes sense only if the viewer fills in the proper supplementary information — information that is deliberately withheld from the viewer. If you haven’t seen the “Mulva” episode, or if the name “Art Vandelay” means nothing to you, then the subsequent references — many of them arriving years after their original appearance — will pass on by unappreciated.

At first glance, this looks like the soap opera tradition of plotlines extending past the frame of individual episodes, but in practice the device has a different effect. Knowing that George uses the alias Art Vandelay in awkward social situations doesn’t help you understand the plot of the current episode; you don’t draw on past narratives to understand the events in the present one. In the 180 Seinfeld episodes that aired, seven contain references to Art Vandelay: in George’s actually referring to himself with that alias or invoking the name as part of some elaborate lie. He tells a potential employer at a publishing house that he likes to read the fiction of Art Vandelay, author of Venetian Blinds; in another, he tells an unemployment insurance caseworker that he’s applied for a latex salesman job at Vandelay Industries. For storytelling purposes, the only thing that you need to know here is that George is lying in a formal interview; any fictitious author or latex manufacturer would suffice. But the joke arrives through the echo of all those earlier Vandelay references; it’s funny because it’s making a subtle nod to past events held offscreen. It’s what we’d call in a real-world context an “in-joke” — a joke that’s funny only to people who get the reference.

I know some people who hate Family Guy and Seinfeld, but I realized a while ago that they don’t hate those shows because of the contents of the shows or because they were offended (though some people certainly are), but rather because they simply don’t get the references. They didn’t grow up watching TV in the 80s and 90s, so many of the references are simply lost on them. Family Guy would be particularly vexing if you didn’t have the pop culture knowledge of the writers of that show. These reference-heavy shows are also a lot easier to watch and rewatch, over and over again. Why? Because each episode is not self-contained, you often find yourself noticing something new every time you watch. This also sometimes works in reverse. I remember the first time I saw Bill Shatner’s campy rendition of Rocket Man, I suddenly understood a Family Guy bit that I had assumed was just random (but was really a reference).

Again, I seem to be focusing on comedy, but it’s not necessarily limited to that genre. Eric S. Raymond has written a lot about how science fiction jargon has evolved into a sophisticated code that implicitly references various ideas, conventions and tropes of the genre:

In looking at an SF-jargon term like, say, “groundcar”, or “warp drive” there is a spectrum of increasingly sophisticated possible decodings. The most naive is to see a meaningless, uninterpretable wordlike noise and stop there.

The next level up is to recognize that uttering the word “groundcar” or “warp drive” actually signifies something that’s important for the story, but to lack the experience to know what that is. The motivated beginning reader of SF is in this position; he must, accordingly, consciously puzzle out the meaning of the term from the context provided by the individual work in which it appears.

The third level is to recognize that “ground car” and “warp drive” are signifiers shared, with a consistent and known meaning, by many works of SF — but to treat them as isolated stereotypical signs, devoid of meaning save inasmuch as they permit the writer to ratchet forward the plot without requiring imaginative effort from the reader.

Viewed this way, these signs emphasize those respects in which the work in which they appear is merely derivative from previous works in the genre. Many critics (whether through laziness or malice) stop here. As a result they write off all SF, for all its pretensions to imaginative vigor, as a tired jumble of shopworn cliches.

The fourth level, typical of a moderately experienced SF reader, is to recognize that these signifiers function by permitting the writer to quickly establish shared imaginative territory with the reader, so that both parties can concentrate on what is unique about their communication without having to generate or process huge expository lumps. Thus these “stereotypes” actually operate in an anti-stereotypical way — they permit both writer and reader to focus on novelty.

At this level the reader begins to develop quite analytical habits of reading; to become accustomed to searching the writer’s terminology for what is implied (by reference to previous works using the same signifiers) and what kinds of exceptions and novelties convey information about the world and the likely plot twists.

It is at this level, for example, that the reader learns to rely on “groundcar” as a tip-off that the normal transport mode in the writer’s world is by personal flyer. At this level, also, the reader begins to analytically compare the author’s description of his world with other SFnal worlds featuring personal flyers, and to recognize that different kinds of flyers have very different implications for the rest of the world.

For example, the moderately experienced reader will know that worlds in which the personal fliers use wings or helicopter-like rotors are probably slightly less advanced in other technological ways than worlds in which they use ducted fans — and way behind any world in which the flyers use antigravity! Once he sees “groundcar” he will be watching for these clues.

The very experienced SF reader, at the fifth level, can see entire worlds in a grain of jargon. When he sees “groundcar” he associates to not only technical questions about flyer propulsion but socio-symbolic ones about why the culture still uses groundcars at all (and he has a repertoire of possible answers ready to check against the author’s reporting). He is automatically aware of a huge range of consequences in areas as apparently far afield as (to name two at random) the architectural style of private buildings, and the ecological consequences of accelerated exploitation of wilderness areas not readily accessible by ground transport.

While comedy makes for convenient examples, I think this better illustrates the cognitive demands of referential art. References require you to be grounded in various subjects, and they’ll often require you to think through the implications of those subjects in a new context. References allow writers to pack incredible amounts of information into even the smallest space. This, of course, requires the consumer to decode that information (using available knowledge and critical thinking skills), making the experience less passive and more engaging. The use of references will continue to flourish and accelerate in both art and scholarship, and new forms will emerge. One could even argue that aggregation in various weblogs is simply an exercise in referential work. Just look at this post, in which I reference several books and movies, in many cases assuming familiarity. Indeed, the whole structure of the internet is based on the concept of links — essentially a way to reference other documents. Perhaps this is part of the cause of the rising complexity and information density of modern entertainment. We can cope with it now, because we have such systems to help us out.

World Domination Via Dice

One of my favorite board games is Risk. I have lots of fond memories of getting annihilated by my family members (I don’t think I’ve ever played the game without being the youngest person at the table) and have long since mastered the fundamentals. I also hold it responsible for my early knowledge of world geography and geopolitics (and thus my early thoughts were warped, but at least I knew where the Middle East was, even if the map is a little broad).


The key to Risk is Australia. The Greeks knew it; the Carthaginians knew it; now you know it. Australia only has four territories to conquer and more importantly, it only has one entrance point, and thus only one territory to defend. Conquering Australia early in the game guarantees an extra two armies a turn, which is huge at that point in the game. Later in the game, that advantage lessens, but after securing Australia, you should be off to a very good start. If you’re not in a position to take over Australia, South America will do. It also only has four territories, but it has two entrances and thus two territories to defend. On the bright side, it’s also adjacent to Africa and North America, which are good continents to expand to (though they’re both considerably more difficult to hold than Australia). This being the internet, there are, of course, some people who have thought about the subject a lot more than I have and developed many detailed strategies.

Like many of the classic games, the original has become dwarfed by variants – games set in another universe (LotR Risk) or in a futuristic setting (Risk: 2042) – but I’ve never played those. However, I recently ran across a little internet game called Dice Wars. It’s got the general Risk-like gameplay and concept of world domination via dice, but there are many key differences:

  • The Map and Extra Armies: A different map is generated for each game. One of the other differences is that the number of extra armies (or Dice, in this game) you get per turn is based solely on the number of territories you control (and there’s no equivalent to turning in Risk cards for more armies). This nullifies the Australia strategy of conquering an easily-defensible continent, but the general strategy remains: you need to maneuver your forces so as to minimize the number of exposed territories, slowly and carefully expanding your empire.
  • Army Placement and Size: Unlike Risk, you can’t choose where to place your armies (nor can you do “free moves” at the end of your turn, which are normally used to consolidate defenses or prepare a forward thrust). If you mount a successful attack, you must move all of your armies except one that you leave behind. This makes extended thrusts difficult, as you’ll leave a trail of easily conquered territories behind you. This is one of the more annoying differences. Another difference is that any one territory can only have a certain number of armies (i.e. there is a maximum). This changes the dynamic, adding another element of entropy. Again, it’s somewhat annoying, but it’s easy enough to work around.
  • Attacking and Defending: In Risk, the attacker has a maximum of 3 dice, while the defender has a maximum of 2 dice. Ties go to the defender, but a 3-die attacker still holds the statistical advantage over a 2-die defender, so in an extended battle between evenly matched territories, the odds tilt toward the attacker. In Dice Wars, the number of dice used is equal to the number of armies, and instead of matching up individual dice against each other, the totals are compared. If the attacker’s total is greater than the defender’s, the attacker wins; again, ties go to the defender. So in this case, if two territories have the same number of armies, the statistical advantage goes to the defender. Of course, you generally try to avoid such a situation in both games, but again, the dynamic is quite different here (a rough simulation of both attack rules appears just after this list).
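
The shift in statistical advantage is easy to check with a quick simulation. The sketch below (in Python) encodes the two attack rules as I’ve described them above – Risk’s highest-against-highest pairing for a 3-versus-2 roll, and Dice Wars’ straight totals with ties going to the defender – and is only a rough illustration, not anything taken from either game’s actual code.

    import random

    def risk_exchange(trials=100_000):
        """One Risk exchange: attacker rolls 3 dice, defender rolls 2.
        Dice are paired highest against highest; the attacker must roll
        strictly higher (ties go to the defender). Returns the average
        number of defending armies destroyed per exchange (out of 2)."""
        destroyed = 0
        for _ in range(trials):
            atk = sorted((random.randint(1, 6) for _ in range(3)), reverse=True)
            dfn = sorted((random.randint(1, 6) for _ in range(2)), reverse=True)
            destroyed += sum(1 for a, d in zip(atk, dfn) if a > d)
        return destroyed / trials

    def dicewars_attack(n, trials=100_000):
        """One Dice Wars attack with n dice on each side: the totals are
        compared, and the attacker needs a strictly higher sum."""
        wins = sum(
            sum(random.randint(1, 6) for _ in range(n)) >
            sum(random.randint(1, 6) for _ in range(n))
            for _ in range(trials)
        )
        return wins / trials

    if __name__ == "__main__":
        print(f"Risk 3-vs-2: ~{risk_exchange():.2f} of 2 defenders fall per exchange")
        for n in (2, 4, 8):
            print(f"Dice Wars {n}-vs-{n}: attacker wins ~{dicewars_attack(n):.1%} of attacks")

Run it and the contrast shows up immediately: the Risk attacker comes out ahead on each 3-versus-2 exchange (a bit more than one of the two defenders falls, on average), while the equal-dice Dice Wars attack succeeds somewhat under half the time, which is why you want a clear numbers advantage before attacking in that game.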

The game’s familiar mechanics make it easy to pick up, but the differences above make it a little more difficult to master. Here’s an example game:

[screenshot: a Dice Wars game in progress]

Of course, I’d already played a bit to get to this point, and you can probably spot my strategy here. I started with a concentration of territories towards the middle of the map, and thus focused on consolidating my forces in that area. By the time I got to the screenshot above, I’d narrowed down my exposure to four territories. I began expanding to the right, and eventually conquered all of the green territories, thus limiting my exposure to only two territories. From there it was just a matter of slowly expanding that wall of two (at one point I needed to expand back to an exposure of three) until I won. Another nice feature of this game is the “History” button that appears at the end. Click it, and you watch the game progress really quickly through every battle, showing you the entire war in a matter of seconds. Neat. It’s a fun game, but in the end, I think I still prefer Risk. [hat tip to Hypercubed for the game]

Intellectual Property, Copyright and DRM

Roy over at 79Soul has started a series of posts dealing with Intellectual Property. His first post sets the stage with an overview of the situation, and he begins to explore some of the issues, starting with the definition of theft. I’m going to cover some of the same ground in this post, and then some other things which I assume Roy will cover in his later posts.

I think most people have an intuitive understanding of what intellectual property is, but it might be useful to start with a brief definition. Perhaps a good place to start would be Article 1, Section 8 of the U.S. Constitution:

To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;

I started with this for a number of reasons. First, because I live in the U.S. and most of what follows deals with U.S. IP law. Second, because it’s actually a somewhat controversial stance. The fact that IP is only secured for “limited times” is the key. In England, for example, an author does not merely hold a copyright on their work; they have a Moral Right.

The moral right of the author is considered to be — according to the Berne convention — an inalienable human right. This is the same serious meaning of “inalienable” the Declaration of Independence uses: not only can’t these rights be forcibly stripped from you, you can’t even give them away. You can’t sell yourself into slavery; and neither can you (in Britain) give the right to be called the author of your writings to someone else.

The U.S. is different. It doesn’t grant an inalienable moral right of ownership; instead, it allows copyright. In other words, in the U.S., such works are considered property (i.e. they can be sold, traded, bartered, or given away). This represents a fundamental distinction that needs to be made: some systems emphasize the creator’s inalienable rights, while others, like the U.S. system, grant more limited, transferable protections. When put that way, the U.S. system sounds pretty awful, except that it was designed for something different: our system was built to advance science and the “useful arts.” The U.S. system still rewards creators, but only as a means to an end. Copyright is granted so that there is an incentive to create. However, such protections are only granted for “limited Times.” This is because when a copyright is eternal, the system stagnates as protected parties stifle competition (this need not be malicious). Copyright is thus limited so that when a work is no longer protected, it becomes freely available for everyone to use and to build upon. This is known as the public domain.

The end goal here is the advancement of society, and both protection and expiration are necessary parts of the mix. The balance between the two is important, and as Roy notes, one of the things that appears to have upset the balance is technology. This, of course, extends as far back as the printing press, records, cassettes, VHS, and other similar technologies, but more recently, a convergence between new compression techniques and increasing bandwidth of the internet created an issue. Most new recording technologies were greeted with concern, but physical limitations and costs generally put a cap on the amount of damage that could be done. With computers and large networks like the internet, such limitations became almost negligible. Digital copies of protected works became easy to copy and distribute on a very large scale.

The first major issue came up as a result of Napster, a peer-to-peer music sharing service that essentially promoted widespread copyright infringement. Lawsuits followed, and the original Napster service was shut down, only to be replaced by numerous decentralized peer-to-peer systems and darknets. This meant that no single entity could be sued for the copyright infringement that occurred on the network, but it resulted in a number of (probably ill-advised) lawsuits against regular folks (the anonymity of internet technology and the state of recordkeeping being what they are, this sometimes leads to hilarious cases, like when the RIAA sued a 79-year-old guy who doesn’t even own a computer or know how to operate one).

Roy discusses the various arguments for or against this sort of file sharing, noting that the essential difference of opinion is the definition of the word “theft.” For my part, I think it’s pretty obvious that downloading something for free that you’d normally have to pay for is morally wrong. However, I can see some grey area. A few months ago, I pre-ordered Tool’s most recent album, 10,000 Days, from Amazon. A friend who already had the album sent me a copy over the internet before I had actually received my copy of the CD. Does this count as theft? I would say no.

The concept of borrowing a book, CD, or DVD also seems pretty harmless to me, and I don’t have a moral problem with borrowing an electronic copy, then deleting it afterwards (or purchasing it, if I liked it enough), though I can see how such a practice represents a bit of a slippery slope and wouldn’t hold up in an honest debate (nor should it). It’s too easy to abuse such an argument, or to apply it in retrospect. I suppose there are arguments to be made with respect to making distinctions between benefits and harms, but I generally find those arguments unpersuasive (though perhaps interesting to consider).

There are some other issues that need to be discussed as well. The concept of Fair Use allows limited use of copyrighted material without requiring permission from the rights holders. For example, including a screenshot of a film in a movie review. You’re also allowed to parody copyrighted works, and in some instances make complete copies of a copyrighted work. There are rules pertaining to how much of the copyrighted work can be used and in what circumstances, but this is not the venue for such details. The point is that copyright is not absolute and consumers have rights as well.

Another topic that must be addressed is Digital Rights Management (DRM). This refers to a range of technologies used to combat digital copying of protected material. The goal of DRM is to use technology to automatically limit the abilities of a consumer who has purchased digital media. In some cases, this means that you won’t be able to play an optical disc on a certain device, in others it means you can only use the media a certain number of times (among other restrictions).

To be blunt, DRM sucks. For the most part, it benefits no one. It’s confusing, it basically amounts to treating legitimate customers like criminals while only barely (if that much) slowing down the piracy it purports to be thwarting, and it’s led to numerous disasters and unintended consequences. Essential reading on this subject is this talk given to Microsoft by Cory Doctorow. It’s a long but well written and straightforward read that I can’t summarize briefly (please read the whole thing). Some details of his argument may be debatable, but as a whole, I find it quite compelling. Put simply, DRM doesn’t work and it’s bad for artists, businesses, and society as a whole.

Now, the IP industries that are pushing DRM are not that stupid. They know DRM is a fundamentally absurd proposition: the whole point of selling IP media is so that people can consume it. You can’t make a system that will prevent people from doing so, as the whole point of having the media in the first place is so that people can use it. The only way to perfectly secure a piece of digital media is to make it unusable (i.e. the only perfectly secure system is a perfectly useless one). That’s why DRM systems are broken so quickly. It’s not that the programmers are necessarily bad, it’s that the entire concept is fundamentally flawed. Again, the IP industries know this, which is why they pushed the Digital Millennium Copyright Act (DMCA). As with most laws, the DMCA is a complex beast, but what it boils down to is that no one is allowed to circumvent measures taken to protect copyright. Thus, even though the copy protection on DVDs is obscenely easy to bypass, it is illegal to do so. In theory, this might be fine. In practice, this law has extended far beyond what I’d consider reasonable and has also been heavily abused. For instance, some software companies have attempted to use the DMCA to prevent security researchers from exposing bugs in their software. The law is sometimes used to silence critics by threatening them with a lawsuit, even though no copyright infringement was committed. The Chilling Effects project seems to be a good source for information regarding the DMCA and its various effects.

DRM combined with the DMCA can be stifling. A good example of how awful DRM is, and how the DMCA can affect the situation, is the Sony rootkit debacle. Boing Boing has a ridiculously comprehensive timeline of the entire fiasco. In short, Sony put DRM on certain CDs. The general idea was to prevent people from putting the CDs in their computer and ripping them to MP3s. To accomplish this, Sony surreptitiously installed software on customers’ computers (without their knowledge). A security researcher happened to notice this, and in researching the matter found that the Sony DRM had installed a rootkit that made the computer vulnerable to various attacks. Rootkits are black-hat cracker tools used to disguise the workings of their malicious software. Attempting to remove the rootkit broke the Windows installation. Sony reacted slowly and poorly, releasing a service pack that supposedly removed the rootkit, but which actually opened up new security vulnerabilities. And it didn’t end there. Reading through the timeline is astounding (as a result, I tend to shy away from Sony these days). Though I don’t believe he was called on it, the security researcher who discovered these vulnerabilities was technically breaking the law, because the rootkit was intended to protect copyright.

A few months ago, my Windows computer died and I decided to give Linux a try. I wanted to see if I could get Linux to do everything I needed it to do. As it turns out, I could, but not legally. Watching DVDs on Linux is technically illegal, because doing so circumvents the copy protection on DVDs. Similar issues exist for other media formats. The details are complex, but in the end, it turns out that I’m not legally able to watch my legitimately purchased DVDs on my computer (I have since purchased a new computer that has an approved player installed). Similarly, if I were to purchase a song from the iTunes Music Store, it would come in a DRMed format. If I wanted to use that song on a portable device that doesn’t support Apple’s DRM (my phone, say), I’d have to convert it to a format the device could understand, which would be illegal.

This brings me to my next point: DRM isn’t really about protecting copyright. I’ve already established that it doesn’t really accomplish that goal (and indeed, it even works against many of the reasons copyright was put into place), so why is it still being pushed? One can only speculate, but I’ll bet that part of the issue has to do with IP owners wanting to “undercut fair use and then create new revenue streams where there were previously none.” To continue an earlier example, if I buy a song from the iTunes Music Store and want to put it on my non-Apple phone (not that I don’t want one of those), the music industry would just love it if I were forced to buy the song again, in a format readable by my phone. Of course, that format would be incompatible with other devices, so I’d have to purchase the song yet again if I wanted to listen to it on those devices. When put in those terms, it’s pretty easy to see why IP owners like DRM, and given the average person’s reaction to such a scheme, it’s also easy to see why IP owners are always careful to couch the debate in terms of piracy. This won’t last forever, but it could be a bumpy ride.

Interestingly enough, distributors of digital media like Apple and Yahoo have recently come out against DRM. For the most part, these are just symbolic gestures. Cynics will look at Steve Jobs’ Thoughts on Music and say that he’s just passing the buck: he knows customers don’t like or understand DRM, so he’s making a calculated PR move by blaming it on the music industry. Personally, I can see that, but I also think it’s a very good thing. I find it encouraging that other distributors are following suit, and I also hope and believe this will lead to better things. Apple has proven that there is a large market for legally purchased music files on the internet, and other companies have even shown that selling DRM-free files yields higher sales. Indeed, the eMusic service sells high-quality, variable-bit-rate MP3 files without DRM, and that has established eMusic as the #2 retailer of downloadable music behind the iTunes Music Store. Incidentally, this was not done for purely ideological reasons – it just made business sense. For now these pronouncements are only symbolic, but now that online media distributors have established themselves as legitimate businesses, they have ammunition with which to challenge the IP holders. This won’t happen overnight, but I think the process has begun.

Last year, I purchased a computer game called Galactic Civilizations II (and posted about it several times). This game was notable to me (in addition to the fact that it’s a great game) in that it was the only game I’d purchased in years that featured no CD copy protection (i.e. DRM). As a result, when I bought a new computer, I experienced none of the usual fumbling for 16 digit CD Keys that I normally experience when trying to reinstall a game. Brad Wardell, the owner of the company that made the game, explained his thoughts on copy protection on his blog a while back:

I don’t want to make it out that I’m some sort of kumbaya guy. Piracy is a problem and it does cost sales. I just don’t think it’s as big of a problem as the game industry thinks it is. I also don’t think inconveniencing customers is the solution.

For him, it’s not that piracy isn’t an issue, it’s that it’s not worth imposing draconian copy protection measures that infuriate customers. The game sold much better than expected. I doubt this was because it lacked DRM, but I can guarantee one thing: people don’t buy games because they want DRM. In any case, it shows that you don’t need DRM to make a successful game.

The future isn’t all bright, though. Peter Gutmann’s excellent Cost Analysis of Windows Vista Content Protection provides a good example of how things could get considerably worse:

Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called “premium content”, typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it’s not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server).

This is infuriating. In case you can’t tell, I’ve never liked DRM, but at least it could be avoided. I generally take articles like the one I’m referencing with a grain of salt, but if true, it means that the DRM in Vista is so oppressive that it will raise the price of hardware… And since Microsoft commands such a huge share of the market, hardware manufacturers have to comply, even though some people (Linux users, Mac users) don’t need the draconian hardware requirements. This is absurd. Microsoft should have enough clout to stand up to the media giants; there’s no reason the DRM in Vista has to be so invasive (or even exist at all). As Gutmann speculates in his cost analysis, some of the potential effects are particularly egregious, to the point where I can’t see consumers standing for it.

My previous post dealt with Web 2.0, and I posted a YouTube video that summarized how changing technology is going to force us to rethink a few things: copyright, authorship, identity, ethics, aesthetics, rhetorics, governance, privacy, commerce, love, family, ourselves. All of that rings true. Earlier, I wrote that the purpose of copyright was to benefit society, and that protection and expiration were both essential. The balance between protection and expiration has been upset by technology, and we need to rethink that balance. Indeed, many people smarter than I already have. The internet is replete with examples of people who have profited by giving things away for free. Creative Commons allows you to share your work so that others can reuse and remix it, but I don’t think it has been adopted to the extent that it should be.

To some people, reusing or remixing music, for example, is not a good thing. This is certainly worthy of a debate, and it is a discussion that needs to happen. Personally, I don’t mind it. For an example of why, watch this video detailing the history of the Amen Break. Amazing things can happen as a result of sharing, reusing and remixing, and that’s only a single example. The current copyright environment seems to stifle such creativity, not least because copyright lasts so long (currently the life of the author plus 70 years). In a world where technology has enabled an entire generation to accelerate the creation and consumption of media, it seems foolish to lock up so much material for what could easily be over a century. Despite all that I’ve written, I have to admit that I don’t have a definitive answer. I’m sure I could come up with something that would work for me, but this is larger than me. We all need to rethink this, and many other things. Maybe that Web 2.0 thing can help.

Update: This post has mutated into a monster. Not only is it extremely long, but I reference several other long, detailed documents and even somewhere around 20-25 minutes of video. It’s a large subject, and I’m certainly no expert. Also, I generally like to take a little more time when posting something this large, but I figured getting a draft out there would be better than nothing. Updates may be made…

Update 2.15.07: Made some minor copy edits, and added a link to an Ars Technica article that I forgot to add yesterday.

Top 10 Box Office Performance

So after looking at a bunch of top 10 films of 2006 lists, and compiling my own, I began to wonder just how popular these movies really were. Film critics are notorious for picking films that the average viewer thinks are boring or pretentious. Indeed, my list features a few such picks, and when I think about it, there are very few movies on the list to which I’d give an unqualified recommendation. For instance, some of the movies on my list are very violent or otherwise graphic, and some people just don’t like that sort of thing (understandably, of course). United 93 is a superb film, but not everyone wants to relive 9/11. And so on. As I mentioned before, top 10 lists are extremely personal and usually end up saying more about the person compiling the list than anything else, but I thought it would be interesting to see just how mainstream these lists really are. After all, there is a wealth of box office information available for every movie, and if you want to know how popular something is, economic data seems quite useful (though, as we’ll see, perhaps not useful enough).

So I took nine top 10 lists (including my own) and compiled box office data from Box Office Mojo (since they don’t always have budget information, I sometimes referenced IMDB or Wikipedia) and did some crunching (not much, I’m no statistician). I chose the lists of some of my favorite critics (like the Filmspotting guys and the local guy), and then threw in a few others for good measure (I wanted a New York critic, for instance).

The data collected includes domestic gross, budget and the number of theaters (widest release). From that data, I calculated the net gross (domestic gross minus budget) and dollars per theater (DPT, domestic gross divided by the number of theaters). You’d think this would be pretty conclusive data, but the more I thought about it, the more I realized just how incomplete a picture it paints. Remember, we’re using this data to evaluate various top 10 lists, so when I chose domestic gross, I inadvertently skewed the evaluation against lists that featured foreign films (however, I am trying to figure out whose list works best in the U.S., so I think it’s a fair metric). So the gross only gives us part of the picture. The budget is an interesting metric, as it indicates how much money a film’s backers thought it would make and provides a handy benchmark for evaluation (unfortunately, I was not able to find budget figures for a number of the smaller films, further skewing the totals you’ll see). Net Gross is a great metric because it incorporates a couple of different things: it’s not just a measure of how popular a movie is, it’s a measure of how popular a movie is versus how much it cost to make (i.e. how much a film’s producers believed in the film). In the context of a top 10 list, it’s almost like pretending that the list creator was the head of a studio who chose which films to greenlight. It’s not a perfect metric, but it’s pretty good. The number of theaters the film showed in is interesting because it shows how much faith theater chains had in the movie (and in looking at the numbers, it seems that the highest grossing films also had the most theaters). However, this could again be misleading because it’s only the widest release; I doubt there are many films where the number of theaters doesn’t drop considerably after opening weekend. Dollars per theater is perhaps the least interesting metric, but I thought it worth including.
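
For what it’s worth, here’s a minimal sketch (in Python, with made-up figures rather than any actual film from the spreadsheet) of the arithmetic behind the two derived metrics. The real spreadsheet also has to cope with missing budgets and the like, so its totals won’t match a naive calculation exactly.

# Derived metrics as described above; the numbers below are hypothetical.
def net_gross(domestic_gross, budget):
    # How much the film earned domestically beyond what it cost to make.
    return domestic_gross - budget

def dollars_per_theater(domestic_gross, theaters):
    # Domestic gross spread over the widest release.
    return domestic_gross / theaters

gross, budget, theaters = 45000000, 20000000, 2500
print(net_gross(gross, budget))              # 25000000
print(dollars_per_theater(gross, theaters))  # 18000.0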

One other thing to note is that I gathered all of this data earlier this week (Sunday and Monday), and some of the films have only recently hit wide distribution (notably Pan’s Labyrinth and Children of Men, neither of which has recouped its costs yet) and will make more money. Some films will also be re-released around Oscar season, as the studios seek to cash in on their award-winning films.

I’ve posted all of my data on a public Google Spreadsheet (each list is on a separate tab), and I’ve linked each list below to their respective tab with all the data broken out. This table features the totals for the metrics I went over above: Domestic Gross, Budget, Net Gross, Theaters, and Dollars Per Theater (DPT).

List (Critic)                        | Gross        | Budget       | Net Gross    | Theaters | DPT
Kaedrin (Mark Ciocco)                | $484,154,522 | $319,850,000 | $164,092,855 | 16,675   | $29,034.75
Reelviews (James Berardinelli)       | $586,767,062 | $607,000,000 | -$20,674,428 | 16,217   | $36,182.22
Filmspotting (Adam Kempenaar)        | $210,592,457 | $234,850,000 | -$27,159,180 | 8,589    | $24,518.86
Filmspotting (Sam Van Hallgren)      | $79,756,419  | $152,204,055 | -$73,445,839 | 4,467    | $17,854.58
Philadelphia Inquirer (Steven Rea)   | $236,690,299 | $239,000,000 | -$40,474,006 | 10,239   | $23,116.54
The New York Times (A.O. Scott)      | $104,484,584 | $92,358,000  | $11,238,032  | 3,641    | $28,696.67
Rolling Stone (Peter Travers)        | $419,088,036 | $264,400,000 | $119,130,515 | 14,784   | $28,347.41
Washington Post (Stephen Hunter)     | $540,183,488 | $362,900,000 | $169,683,807 | 15,394   | $35,090.52
The Onion AV Club (Scott Tobias)     | $195,779,774 | $191,580,000 | $1,308,777   | 6,844    | $28,606.05
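
Incidentally, the tallying itself would be easy to automate (the data gathering is the tedious part, as I note at the end of this post). Here’s a rough Python sketch that sums hand-collected per-film figures into per-list totals like the ones above; the critic and film entries are placeholders rather than actual rows from my spreadsheet, and the per-list DPT here is simply total gross divided by total theaters, which may differ slightly from the spreadsheet’s rounding.

# Sum per-film rows (list, film, gross, budget, theaters) into per-list totals.
films = [
    ("Example Critic", "Film A", 120000000, 60000000, 3000),  # hypothetical
    ("Example Critic", "Film B", 15000000, 25000000, 800),    # hypothetical
]

totals = {}
for critic, title, gross, budget, theaters in films:
    t = totals.setdefault(critic, {"gross": 0, "budget": 0, "theaters": 0})
    t["gross"] += gross
    t["budget"] += budget
    t["theaters"] += theaters

for critic, t in totals.items():
    net = t["gross"] - t["budget"]
    dpt = t["gross"] / t["theaters"]
    print(critic, t["gross"], t["budget"], net, t["theaters"], round(dpt, 2))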

This was quite an interesting exercise, and it would appear from the numbers that perhaps not all film critics are as out of touch as I originally thought. Or are they? Let’s take a closer look.

  • Kaedrin (Mark Ciocco): The most surprising thing about my list is that every single film in my top 10 made a profit. In addition, my high net gross figure (around $164 million, which ended up being second out of the nine lists) isn’t overly dependent on any single film (the biggest profit vehicle on my list was Inside Man, with about $43 million, or about a quarter of my net gross). The only real wild card here is Lady Vengeance, which only made about $212 thousand; its budget figure wasn’t available, and it was a foreign film that was only released in 15 theaters (I saw it on DVD). Given this data, I think my list is the most well-rounded of all the surveyed lists. Not to pat myself on the back here, but my list is among the top 3 lists for all of the metrics (and #1 in theaters). Plus, as you’ll read below, the lists that appear ahead of me have certain outliers that skew the data a bit. However, even with all of that, I might not have the most mainstream list.
  • Reelviews (James Berardinelli): James is probably the world’s greatest amateur critic, and his list is quite good (it shares 4 films with my own list). Indeed, his list leads the Domestic Gross and Budget Categories, as well as Dollars Per Theater. But look at that Net Gross metric! Almost -$21 million dollars. Ouch. What happened? Superman Returns happened. It made a little more than $200 million dollars at the box office, but it cost $270 million to make it. This skews James’ numbers considerably, and he would have been around $50 million in the green if it weren’t for Superman. He also has two films that were released in less than 25 theaters, which skews the numbers a bit as well.
  • Filmspotting (Adam Kempenaar): Of the two critics on the Filmspotting podcast, Adam is by far the one I agree with more often, but his list is among the more unprofitable ones. This is due in great part to his inclusion of Children of Men, which has only recently come out in wide release and still has to make almost $50 million before it recoups its cost (I think it will make more money, but not enough to break even). To a lesser extent, his inclusion of two foreign films (Pan’s Labyrinth and Volver) has also skewed the results a bit (both films did well at the foreign box office). Given those disclaimers, Adam’s list isn’t as bad as it seems, but it’s still not too hot. It is, however, better than his co-host’s:
  • Filmspotting (Sam Van Hallgren): I think it’s safe to say that Sam takes the award for least mainstream critic. He’s got the worst Domestic Gross and Net Gross of the group, by a significant margin. Like his co-host Adam, this can partly be explained by his inclusion of Children of Men and other small, independent, or foreign films. But it’s a pretty toxic list. Only two films on his list turned a profit, which is a pretty miserable showing. Interestingly enough, I still think Sam is a pretty good critic. You don’t have to agree with a critic to get something useful out of them, and I know what I’m getting with Sam. Plus, it helps that he’s got a good foil in his co-host Adam.
  • Philadelphia Inquirer (Steven Rea): I kinda like my local critic’s list, and it’s definitely worth noting that his pick of the Chinese martial arts epic Curse of the Golden Flower has impacted his list considerably (as a high-budget foreign film that did well internationally, but which understandably didn’t do that well domestically). That choice alone (-$40 million) put him in the red. He’s also got Pan’s Labyrinth on his list, which will go on to make more money. Plus, he suffers from a data problem in that I couldn’t find budget figures for The Queen, which has made around $35 million and almost certainly turned a profit. Even with those caveats, he’s still only treading water.
  • The New York Times (A.O. Scott): I wanted to choose a critic from both New York and LA (because most LA critics seemed to have a lot of ties, I decided not to include their lists), and A.O. Scott’s list provides a decent example of why. Three of his picks were shown in 6 theaters or fewer. This is more or less what you’d expect from a New York critic: New York is one of the two cities that get these small movies, so you’d expect its critics to show their superiority by including these films in their lists (I’m sure they’re good films too, but I think this is an interesting dynamic). In any case, it’s worth noting that Mr Scott (heh) actually turned a profit. How could this be? Well, he included Little Miss Sunshine on his list. That movie has a net gross of around $50 million, which gave Mr Scott significant breathing room for his other picks.
  • Rolling Stone (Peter Travers): I’ve always thought of this guy as your typical critic who doesn’t like anything popular, but his list is pretty decent, and he turns out to be among the tops in terms of net gross, with $119 million. One caveat here is that he does feature a tie in his list (so he has 11 films), but the tie consists of the two Clint Eastwood war flicks, both of which have lost considerable amounts of money (in other words, this list is actually a little undervalued by my metrics). So how did his list get so high? He also had Little Miss Sunshine on his list, which, as already mentioned, was quite the moneymaker. But even bigger than that, he included Borat. Borat is a low-budget movie that made huge amounts of cash, and its net gross comes in at almost $110 million! Those two films account for the grand majority of his net gross. However, of all the lists, I think his is probably the most mainstream (while still retaining a critic’s edge), and it gives my list a run for its money.
  • Washington Post (Stephen Hunter): I wanted to choose a critic from WaPo because it’s one of the other “papers of record,” and much to my amazement, his list turns out to have the highest net gross! He features some of the most obscure picks, with 4 films that I couldn’t even find budget data for (but which seem pretty small anyway). He’s got both Little Miss Sunshine and Borat, which prove to be quite a profitable duo, and he’s also got big moneymakers like The Departed and Casino Royale. It’s an interesting list.
  • The Onion AV Club (Scott Tobias): He scrapes by with around $1 million net gross, though it should be noted that his list features Children of Men (a big loss film) and a couple of movies that I couldn’t find budgets for. It’s an interesting list, but it comes in somewhere around the upper middle of the pack.

Whew! That took longer than I thought. Which critic is the most mainstream? I think a case could be made for my list, Peter Travers’ list, or Stephen Hunter’s list. I think I’d give it to Peter Travers, with myself in a close second place and Stephen Hunter nipping at our heels.

Statistically, the biggest positive outliers appeared to be Little Miss Sunshine and Borat, and the biggest negative outliers appeared to be Flags of Our Fathers and Children of Men (both of which will make more money, as they are currently in theaters).

Obviously, this list is not authoritative, and I’ve already spent too much time harping on the qualitative issues with my metrics, but I found it to be an interesting exercise (if I ever do something similar again, I’m going to need to find a way to automate some of the data gathering, though). Well, this pretty much shuts the door on the 2006 Kaedrin Awards season. I hope you enjoyed it.