Best Entries

The Streaming Narrative

The NYT laments the sorry state of royalties paid out by music streaming services like Spotify.

A decade after Apple revolutionized the music world with its iTunes store, the music industry is undergoing another, even more radical, digital transformation as listeners begin to move from CDs and downloads to streaming services like Spotify, Pandora and YouTube.

As purveyors of legally licensed music, they have been largely welcomed by an industry still buffeted by piracy. But as the companies behind these digital services swell into multibillion-dollar enterprises, the relative trickle of money that has made its way to artists is causing anxiety at every level of the business.

So I really don’t know enough to comment on whether the whole royalty situation for streaming will pan out (or not!) the way some think it will, but the interesting thing here is the narrative.

The NYT credits iTunes with revolutionizing the music world, and in some ways it did, but only by making the revolution legal. The real shift began with file sharing services like Napster. One of the old narratives that the music industry endorsed was that if you liked a song and wanted to own it, you had to also purchase the 10 or so other songs that surrounded it on an album. Napster was free, and while its ability to enable widespread music theft was probably the cause of its popularity, it also changed that whole album purchasing paradigm. You like “For Whom the Bell Tolls”? Fine, download it and stick it to that annoying Lars guy. No need to go buy the whole album. Apple, to their credit, realized that the narrative had shifted, and when they implemented iTunes, they allowed customers to purchase only the songs they wanted.

Like I said, the free downloads were probably the main cause of Napster’s popularity, but the success of iTunes shows that the whole a la carte idea was also a key component. A decade later, and the narrative is changing again.

The thing that struck me reading the article is that free music streaming services like Pandora and Spotify, while providing truly minimal royalties, also shine a light on another narrative about listening frequency. Namely, once you bought a record, the music industry couldn’t care less how often you listened to it. But streaming services aren’t based on sales, they’re based on “listens” – the number of times you streamed a specific song.
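
Just to put some toy numbers on that shift (every figure below is a made-up assumption for illustration – actual per-stream rates vary wildly by service and by deal):

    # Toy comparison of one-time purchase revenue vs. per-stream royalties.
    # Both numbers are assumptions for illustration, not real industry rates.
    PURCHASE_PRICE = 0.99    # hypothetical download price, in dollars
    PER_STREAM_RATE = 0.005  # hypothetical royalty per listen, in dollars

    def streams_to_match_purchase(price=PURCHASE_PRICE, rate=PER_STREAM_RATE):
        """How many listens it takes for streaming to pay out one sale's worth."""
        return price / rate

    print(streams_to_match_purchase())  # 198 listens under these assumptions

In other words, under that model a song has to stay in rotation for a long time before it earns what a single download used to.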

I’m probably the last person in the world who should be commenting on listening habits, as I suck at music. I love it, I’m just bad at keeping up with this stuff and constantly go back to the same well (What? I’ve got movies to watch, books to read, and beer to drink over here, leave me alone!) All of which is to say that I have to wonder how the metric of “listens” will impact the industry. I tend to listen to the same thing over and over again, and when I do that, I’ll probably earn someone a few cents of royalties. But I have a strong suspicion that a lot of people will give most music a single listen (especially given the low barrier to entry on streaming), maybe revisiting once or twice if they’re really psyched about it.

Music is certainly relistened to more than movies are rewatched, and being more of a movie guy, that might throw off my calibration on this issue, but I really have to wonder about the relationship between sales and listens. Yeah, such and such album or song may have sold a million copies last week… but how long will that song be in heavy rotation in streaming? And when you literally have millions of songs at your fingertips, are you likely to cast your net far and wide, or return to the same music over and over again? Will this notion drive what kinds of music become available? More pop music with clear hooks, less experimental stuff? Will those experimental folks be able to survive on the long tail?

I don’t have any answers here and I don’t really know enough about the music industry to say how this will play out, but I’m thinking we’ll see some interesting developments in the next few years. Incidentally, the movie industry doesn’t seem to have caved to streaming in the same way. Studios don’t charge streaming services like Netflix per watch, but rather for the general ability to stream a certain catalog. I’ll be curious to see if we ever reach a Spotify-level streaming service for movies. As I’ve mentioned before, I don’t think that’s going to happen anytime soon… but again, the next few years will be interesting.

On Nitpicking

I watch a lot of movies and thus it follows that I also consume a fair amount of film criticism, mostly through the internets (reviews, forums, podcasts, etc…) One thing I’ve noticed recently in a few high-profile movies is that many reviews resort to long lists of nitpicking. I’m certainly not immune to this tendency – I tried to minimize my nitpicks in my Prometheus review, but if I were so inclined, I could probably generate a few thousand words picking the nits out of that movie. I really disliked that movie, but were the nitpicks the cause? Another movie I could probably nitpick to death is The Dark Knight Rises… and yet, I really enjoyed that movie. We could quibble about the quantity and magnitude of the nitpicks in both films, but a recent discussion with a friend on both movies made me start wondering about nitpicks again. It’s something I’ve seen before, though I don’t think I’ve ever really written about it in detail.

The origin of the term comes from the process of removing the eggs of lice (aka nits) from the host’s hair. Because the nits attach themselves to individual strands of hair, the process of removing them is tedious and slow. You could shave all the hair off, and eventually chemical methods of treating lice infestations became available. But the term nitpicking has lived on as a way of describing the practice of meticulously examining a subject in search of subtle errors in detail. In the context of this post, we’re talking about movies, but this gets applied to lots of other things.

When it comes to movies and TV series, nitpicks can go either way. Some will claim that the existence of nitpicks is evidence that the show or movie is sloppy and poorly made. Others will claim that the nitpickers are missing the forest for the trees. Nitpickers just don’t “get it” and are taking the fun out of everything. In fairness, there’s probably an element of truth to both sides of that argument, but I think they’re both missing the point of nitpicks, which is this: Nitpicks are almost always emblematic of a deeper problem with the story or characters. Oh sure, there are some people who can’t turn their brains off and nitpick because they’re just analytical by nature (one definition of engineer’s disease), but even in those cases, I think there’s something to be said for a deeper dislike than the nitpicks would seem to indicate.

Nitpicks are the symptoms, not the disease. I didn’t dislike Prometheus because, for example, their spaceship was in a constant state of thrust at the beginning of the movie or because there was no explanation for how the ship maintained gravity in space. But both of those things were immediately obvious to me, which tells me that I wasn’t really immersed in the story being told. As the movie unfolded, a number of breathtakingly stupid plot developments kept taking me out of the story. Perhaps if the movie weren’t so stupid, I might have overlooked those initial observations, but as the nitpicks mounted, it became harder and harder to do so. I don’t go into a movie hoping that it will suck. A movie has to wear away a certain amount of goodwill before immersion is ruined, and for whatever reason, the quantity and magnitude of nitpicks in Prometheus wore out that goodwill pretty quickly. The Dark Knight Rises, on the other hand, didn’t bother me nearly as much. In fact, as I mentioned in my review, most of the nitpicks I have with that movie came to light after the fact. It’s what Hitchcock called a “refrigerator” movie: something that makes sense while you’re watching it, but falls apart under critical examination (while standing in front of the refrigerator later in the night). That being said, for lots of people, that wasn’t enough. And that’s perfectly understandable.

In general, it seems that people are perhaps less objective than they’d like to think. One of the great things about art is that the pieces that move us usually aren’t doing so solely on an intellectual level… and when it comes to emotion, words sometimes fail us. Take, for example, a comedy. The great thing about laughter is that you don’t have to think about it, it just happens. Different people have different tastes, of course, and that’s where subjectivity comes in. But for whatever reason, we don’t like to admit that, so we try to rationalize our feelings about a given movie. And if we don’t like that movie, such rationalizations may manifest in the form of nitpicks. None of this is absolute, of course. Most art works on both intellectual and emotional levels, and as you gain experience with a given medium or genre (or whatever), you will start to pick out patterns and tropes. One of the interesting things about this is that what gets labeled a “nitpick” can vary widely in scope. Nitpicks can range from trivial mistakes to serious continuity errors, but they all get lumped under the same category. As such, I think it can be difficult to discern what’s a nitpick and what’s the root cause of said nitpick.

A few years ago, I was discussing John Scalzi’s book Old Man’s War in an online forum. I (and a number of other forum members) enjoyed the book greatly, but one person didn’t. When asked why, she responded that it was disappointing that, during one scene earlier in the book, a doctor spent time explaining to his patient how some machines worked. This is a nitpick if I’ve ever seen one. What she said was true – it was somewhat unrealistic that these two characters would stop what they’re doing to have a discussion about how certain technologies operated. But I was wrapped up in the story by that point, so I barely even noticed it. Even after it was pointed out, it didn’t ruin the book for me. She was not invested in the story, though, so that scene was jarring to her. After further discussion, it turns out that this was a specific manifestation of a larger issue she had with the book, which was that it lazily introduced concepts through awkward exposition or dialogue, and never followed through on any of it. I don’t particularly agree with her on that, but I can see where she’s coming from.

I think the lesson here is that when people are nitpicking a movie to death, it’s not necessarily the specific nitpicks that are so bothersome. Perhaps, in some cases, it’s the combined weight of all the nitpicks that causes an issue, but I suspect that even in those cases, the nitpicks are merely the most obvious examples of a deeper problem. I think both critics and defenders would do well to recognize this sort of thing. It’s fun to list out nitpicks or examples of something you don’t like about a work of art, but that’s not really what criticism is about. I don’t mean to say that you can’t or shouldn’t do this sort of thing, just that it would be useful at some point to look back at that list and wonder what it was about the book or movie or whatever that inspired you to meticulously chronicle minor errors. This is probably easier said than done. I can’t say that I succeed at this all the time, but then, I’m just some dude wanking on the internets. Ultimately, all of this is somewhat superfluous, but it’s something worth considering the next time you find yourself cataloging trivial errors in detail.

Peak Performance

A few years ago, Malcolm Gladwell wrote an article called How David Beats Goliath, and the internets rose up in nerdy fury. Like a lot of Gladwell’s work, the article is filled with anecdotes (whatever you may think of Gladwell, he’s a master of anecdotes), most of which surround the notion of a full-court press in basketball. I should note at this point that I absolutely loathe the sport of basketball, so I don’t really know enough about the mechanics of the game to comment on the merits of this strategy. That being said, the general complaint about the article is that Gladwell chose two examples that aren’t really representative of the full-court press. The primary example seems to be a 12-year-old girls’ basketball team, coached by an immigrant unfamiliar with the game:

Ranadive was puzzled by the way Americans played basketball. He is from Mumbai. He grew up with cricket and soccer. He would never forget the first time he saw a basketball game. He thought it was mindless. Team A would score and then immediately retreat to its own end of the court. Team B would inbound the ball and dribble it into Team A’s end, where Team A was patiently waiting. Then the process would reverse itself. A basketball court was ninety-four feet long. But most of the time a team defended only about twenty-four feet of that, conceding the other seventy feet. Occasionally, teams would play a full-court press—that is, they would contest their opponent’s attempt to advance the ball up the court. But they would do it for only a few minutes at a time. It was as if there were a kind of conspiracy in the basketball world about the way the game ought to be played, and Ranadive thought that that conspiracy had the effect of widening the gap between good teams and weak teams. Good teams, after all, had players who were tall and could dribble and shoot well; they could crisply execute their carefully prepared plays in their opponent’s end. Why, then, did weak teams play in a way that made it easy for good teams to do the very things that made them so good?

The strategy apparently worked well, to the point where they made it to the national championship tournament:

The opposing coaches began to get angry. There was a sense that Redwood City wasn’t playing fair – that it wasn’t right to use the full-court press against twelve-year-old girls, who were just beginning to grasp the rudiments of the game. The point of basketball, the dissenting chorus said, was to learn basketball skills. Of course, you could as easily argue that in playing the press a twelve-year-old girl learned something much more valuable – that effort can trump ability and that conventions are made to be challenged.

Most of the criticism of this missed the forest for the trees. A lot of people nitpicked some specifics, or argued as if Gladwell were advocating that all teams play the press (when he was really just illustrating a broader point that underdogs don’t always need to play by the stronger teams’ conventions). One of the most common complaints was that “the press isn’t always an advantage”, which I’m sure is true, but again, it kinda misses the point that Gladwell was trying to make. Tellingly, most folks didn’t argue about Gladwell’s wargame anecdote, though you could probably make similar nitpicky arguments.

Anyway, the reason I’m bringing this up three years after the fact is not to completely validate Gladwell’s article or hate on his critics. As I’ve already mentioned, I don’t care a whit about basketball, but I do think Gladwell has a more general point that’s worth exploring. Oddly enough, after recently finishing the novel Redshirts, I got an itch to revisit some Star Trek: The Next Generation episodes and rediscovered one of my favorite episodes. Oh sure, it’s not one of the celebrated episodes that make top 10 lists or anything, but I like it nonetheless. It’s called Peak Performance, and it’s got quite a few parallels to Gladwell’s article.

The main plot of the episode has to do with a war simulation exercise in which the Enterprise engages in a mock battle with an inferior ship (with a skeleton crew led by Commander Riker). There’s an obvious parallel here between the episode and Gladwell’s article (when asked how a hopelessly undermatched ship can compete with the Enterprise, Worf responds “Guile.”), but it’s the B plot of the episode that is even more relevant (the main plot goes in a bit of a different direction due to some meddling Ferengi).

The B plot concerns a military strategist named Kolrami. He’s acting as an observer of the exercise and he’s arrogant, smarmy, and condescending. He’s also a master at Strategema, one of Star Trek’s many fictional (and nonsensical) games. Riker challenges this guy to a match because he’s a glutton for punishment (this really is totally consistent with his character) – he just wants to say that he played the master, even if he lost… which, of course, he does. Later, Dr. Pulaski volunteers Data to play a game, with the thought being that the android would easily dispatch Kolrami, thus knocking him down a peg. But even Data loses.

Data is shaken by the loss. He even removes himself from duty. He expected to do better. According to the rules, he “made no mistakes”, and yet he still lost. After analyzing his failure and discussing the matter with the captain (who basically tells Data to shut up and get back to work), Data resumes his duty, eventually even challenging Kolrami to a rematch. But this time, Data alters his premise for playing the game. “Working under the assumption that Kolrami was attempting to win, it is reasonable to assume that he expected me to play for the same goal.” But Data wasn’t playing to win. He was playing for a stalemate. Whenever opportunities for advancement appeared, Data held back, attempting to maintain a balance. He estimated that he should be able to keep the game going indefinitely. Frustrated by Data’s stalling, Kolrami forfeits in a huff.

There’s an interesting parallel here. Many people took Gladwell’s article to mean that he thought the press was a strategy that should be employed by all teams, but that’s not really the point. The examples he gave were situations in which the press made sense. Similarly, Data’s strategy of playing for stalemate was uniquely suited to him. The reason he managed to win was that he is an android without any feelings. He doesn’t get frustrated or bored, and his patience is infinite. So while Kolrami may have technically been a better player, he was no match for Data once Data played to his own strengths.

Obviously, quoting fiction does nothing to bolster Gladwell’s argument, but I was struck by the parallels. One of the complaints about Gladwell’s article that rang at least a little true was that the article’s overarching point was “so broad and obvious as to be not worth writing about at all.” I don’t know that I fully buy that, as a lot of great writing can ultimately be boiled down to something “broad and obvious”, but it’s a fair point. On the other hand, even if you think that, I do find that there’s value in highlighting examples of how it’s done, whether it’s a 12-year-old girls’ basketball team, or a fictional android playing a nonsensical (but metaphorically apt) game on a TV show. It seems that human beings sometimes need to be reminded that thinking outside the box is an option.

Reamde

Neal Stephenson wasn’t particularly successful early in his career. I imagine having trouble for a few years is rather common amongst successful authors, and obviously Stephenson has gone on to establish himself as a big name, especially in the nerdy science fiction community. But, as he snarkily wrote in his author bio on my copy of Snow Crash:

His first novel, The Big U, was published in 1984 and vanished without a trace. His second novel, Zodiac: the Eco-thriller, came out in 1988 and quickly developed a cult following among water-pollution-control engineers. It was also enjoyed, though rarely bought, by many radical environmentalists.

While writing Snow Crash, Stephenson started looking into other options. Because who would want to read a book where a hacker/pizza delivery boy/cyber-ninja researches Sumerian mythology and linguistics theory? In an old interview, he comments on his career thusly:

I was writing Snow Crash about the same time my uncle, George Jewsbury, and I started talking about doing collaborations. The rationale behind that was, clearly, I may be able to limp along indefinitely, writing these little books that get bought by 5,000 people, but really it would be smart to try to get some kind of serious career going. We had heard somewhere that Tom Clancy had made like $17 million in a year. So we thought, ‘Let’s give this a try.’ The whole idea was that ‘Stephen Bury’ would be a successful thriller writer and subsidize my pathetic career under the name Neal Stephenson. It ended up going the other way. I would guess most of the people who have bought the Stephen Bury books have done so because they know I’ve written them. It just goes to show there’s no point in trying to plan your career.

Indeed! I actually rather enjoyed the Stephen Bury books, and they presage Reamde in their thriller genre roots. But Stephenson has gone on to write impenetrable books that have become quite popular amongst a certain type of geek (i.e. me). Unfortunately, this presents something of a problem. Long time readers of this blog know that I’m a huge fan of Stephenson, but in reality, I’ve never actually met a person that really loves his books (the online world is another story). This makes it quite difficult to recommend my favorite novels to other people, because I generally know they’re not going to like them (I generally settle on Snow Crash as a recommendation, but there are things about that book that often don’t go over well with normal folks). In particular, Cryptonomicon (which is my favorite novel) seems to polarize readers. Shamus describes the phenomenon best:

In fact, I have yet to introduce anyone to the book and have them like it. I’m slowly coming to the realization that Cryptonomicon is not a book for normal people. Flaws aside, there are wonderful parts to this book. The problem is, you have to really love math, history, and programming to derive enjoyment from them. You have to be odd in just the right way to love the book. Otherwise the thing is a bunch of wanking.

Similarly, The Baroque Cycle (basically a 2700 page prequel to Cryptonomicon) is not a series for normal people. The subjects are similar, but weighted differently. Much less programming, and much more history and monetary theory. Anathem probably appeals to folks who love Philosophy and/or Quantum Physics, with some linguistics thrown in for fun. The common factor with all of this is that Stephenson’s books aren’t particularly accessible to mainstream audiences. Thus it’s hard to find a way to introduce people to his work.

Enter Reamde, Stephenson’s latest and most accessible novel. Well, accessible for folks who don’t mind reading 1000+ page novels. Ironically, this accessibility seems to have garnered the only real complaints about the book. Which isn’t to say that people don’t like the book. Reviews seem to be overwhelmingly positive, but the one thing that comes up again and again is that it’s “just a thriller.” It is not a novel that plumbs the depths of technology or philosophy, nor does it wrestle with big questions the way a lot of Stephenson’s other works do. For my part, I finished it a few weeks ago and find myself thinking about it often. This isn’t to say that I think there’s something profound going on beneath the surface, but who knows? Maybe a second reading will unearth something more. But then, I don’t really need it to be a profound life-changing book. It’s a page-turning thriller written with wit and humor, and I enjoyed the hell out of it.

Stephenson’s fans will certainly not be bored. Despite the fact that many seem to enjoy the inaccessibility of his earlier novels, I do think there are plenty of Stephensonian digressions that will keep fans interested. Take, for instance, “The Apostropocalypse”, wherein one of our main characters explains how two writers he hired to provide background material for his video game argued over the semiotics of fantasy naming conventions. The video game itself is rather cleverly designed, and Stephenson spends a lot of time describing its mechanics, allowing him to delve into geography, monetary theory and the practice of gold farming in MMORPGs. Stephenson even addresses how this game came to compete with World of Warcraft by catering to the Chinese market. Later in the novel, there’s an interesting digression into how great circle routes work. These are things that Stephenson excels at, and there’s certainly a lot to chew on here. He’s taken standard genre tropes and overlaid his own style, ultimately elevating this book from much of its competition.
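
As an aside, the great circle digression mentioned above is easy to play with yourself. Here’s a quick sketch of the standard haversine formula – this is just the textbook calculation, not anything lifted from the novel, and the Seattle/Xiamen coordinates are my own rough approximations of two of the book’s locales:

    import math

    def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
        """Great circle distance between two points, via the haversine formula."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
        return 2 * radius_km * math.asin(math.sqrt(a))

    # Rough coordinates for Seattle and Xiamen:
    print(round(great_circle_km(47.6, -122.3, 24.5, 118.1)))  # about 10,000 km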

The basics of the plot itself are rather straightforward. Richard Forthrast is one of our primary characters. He was a draft-dodger who figured out a way to cross the Canadian border undetected, parlayed that knowledge into marijuana smuggling, then turned legit serial entrepreneur. His latest venture is a fantasy MMORPG video game called T’Rain, and it’s become quite successful. He’s hired his niece, Zula Forthrast, to work for his company. As circumstances would have it, Zula ends up getting kidnapped by Russian mobsters who are afflicted with a virus from the game (this virus has locked up the mobsters’ monetary livelihood). Pissed off to no end, these Russian mobsters want Zula to help find the virus writers (no doubt Chinese kids) so that revenge can be exacted. Along the way, we run into a lively cast of characters, including a group of Jihadis (who eventually become the main villains of the novel), a Hungarian hacker, a Chinese mountain-girl, the Chinese kid who wrote the virus, an MI6 agent, and, of course, a badass Russian security consultant. The terrorists want to kill lots of people, and most of the other folks want to stop them. Typical thriller stuff, I guess, but done with more nuance than you’d normally expect.

As characters go, the Forthrast clan, Iowa natives, will strike most Stephenson fans as being familiar. Not quite Waterhouses (from Cryptonomicon/Baroque Cycle), but Richard certainly leans in that direction. The Forthrasts also bear a resemblance to the family clan in The Cobweb. Sokolov, the Russian security consultant, is more of a Shaftoe kinda guy. This isn’t to say that the novel is completely derivative of Stephenson’s earlier novels – there are plenty of wholly new characters, and I generally enjoyed most of them. Depth seems to be reserved more for the Forthrasts, Richard and Zula, while the others are more surface-level affairs, but they’re generally a likable bunch. And they’re all pretty damn competent too. Indeed, most of the time, they’re downright Sherlockian. Take this quick sequence, in which Sokolov deduces what’s happening from very little information:

Sokolov retrieved his spare clip and other goods from the wreckage now strewn around the conference table, but paused on his way out of the suite to shine his flashlight over the dead man’s face. He was ethnic Chinese.

Why had they taken his clothes?

Because something about them made them useful.

A uniform. The guy was a cop, or a security guard.

Thought processes like these are peppered throughout the book, and our intrepid heroes and nefarious villains are all pretty damn good at this form of deduction.

The book does start off a bit on the slower side, and you’re not really sure where it’s going until about 50 pages in, when things kick into high gear and don’t really let up for about 600 or so pages, and even then, there is only a brief respite as various characters are maneuvered to the ultimate showdown. And there are a lot of concurrent storylines being maintained here, much more so than in Stephenson’s recent work. He may not have been shooting for profundity when writing this novel, but he sure amped up the complexity, to the point where calling it “just a thriller” doesn’t do it much justice. I’m not a particularly accomplished thriller reader, but from what I have read, this is far more complex and adroit than I would have expected. And it’s funny too.

She picked up her phone, navigated to the “Recent Calls” list, and punched in Richard Forthrast’s number.

It rang a few times. But then finally his voice came on the line. “British spy chick,” he said.

“Is that how you think of me?”

“Can you think of a better description?”

“You didn’t like my fake name?”

“Already forgot it. You’re in my phone directory as British Spy Chick.”

And then there’s this bit, from perhaps the funniest chapter in the book:

How could your cover be blown in Canada? Why even bother going dark there? How could you tell?

After which we get to witness a hysterical chain of emails with two spies basically berating one another while getting actual espionage work done. Great stuff.

There were perhaps a couple of times where the MMORPG side of the story seemed a bit incongruous, like maybe Stephenson was writing about it for its own sake rather than advancing the story, but he manages to tie it all together by the end. Stephenson sometimes gets dinged by folks for his digressions and his endings, but this is a tight novel, and the ending is an epic gunfight ranging over a hundred pages (or maybe even more). There’s even a chapter of wrapping things up. Another minor complaint is that Stephenson seemed to go to extreme lengths to get his characters romantically paired up. Actually, I didn’t really mind it, but at the same time, I did find it a bit odd in at least a couple of cases (Alex mentioned that it may be a preemptive strike against fan fiction authors who would pair the characters up, but if that’s the case, then I actually kinda hate it. I think it’s really just that Stephenson likes his characters and wants to see them together…)

Ultimately, it’s a fantastic novel and I loved it. This should not surprise you, as I tend to love all of his novels, but as a longtime fan of Stephenson, it is really nice to be able to point to a book that anyone could read and enjoy without being scared away by weird SF tropes, mathematics, obscure history, detailed monetary theory, existential philosophy, the creation of a new vocabulary that is similar, but not quite the same as ours, etc… There are enough Stephensonian digressions into obscure topics to give a new reader a nice introduction to Stephenson without drowning them, and I appreciate that because while I love Snow Crash (the book I used to recommend as a place to start with Stephenson), it’s got a few things that seem to turn off “normal” people. As for the accessibility issue, I don’t really get that as a complaint. No, the book hasn’t changed my life, but few do, and I don’t think all art needs to be like that. Indeed, artists often overreach when they try to shoehorn “profound” into a story that doesn’t need it. And this story doesn’t. What it needs is action and thrills and laughs, which are present in abundance. It’s an excellent book, and a good introduction to Stephenson. For those who aren’t scared of long books, that is…

Update: Otakun comments with some interesting MMORPG perspectives.

Flow and Games

When I read a book, especially a non-fiction book, I usually find myself dog-earing pages with passages I find particularly interesting or illuminating. To some book lovers, I’m sure this practice seems barbaric and disrespectful, but it’s never really bothered me. Indeed, the best books are the ones with the most dog-ears. Sometimes there are so many dog-ears that the width of the book is distorted, so that the top of the book (which is where the majority of my dog-ears go) is thicker than the bottom. The book Flow, by Mihaly Csikszentmihalyi1, is one such book.

I’ve touched on this concept before, in posts about Interrupts and Context Switching and Communication. This post isn’t a direct continuation of that series, but it is related. My conception of flow in those posts is technically accurate, but also imprecise. My concern was mostly focused around how fragile the state of flow can be – something that Csikszentmihalyi doesn’t spend much time on in the book. My description basically amounted to a state of intense concentration. Again, while technically accurate, there’s more to it than that, and Csikszentmihalyi equates the state with happiness and enjoyment (from page 2 of my edition):

… happiness is not something that happens. It is not the result of good fortune or random chance. It is not something that money can buy or power command. It does not depend on outside events, but, rather, on how we interpret them. Happiness, in fact, is a condition that must be prepared for, cultivated, and defended privately by each person. People who learn to control inner experience will be able to determine the quality of their lives, which is as close as any of us can come to being happy.

Yet we cannot reach happiness by consciously searching for it. “Ask yourself whether you are happy,” said J.S. Mill, “and you cease to be so.” It is by being fully involved with every detail of our lives, whether good or bad, that we find happiness, not by trying to look for it directly.

In essence, the world is a chaotic place, but there are times when we actually feel like we have achieved some modicum of control. When we become masters of our own fate. It’s an exhilarating feeling that Csikszentmihalyi calls “optimal experience”. It can happen at any time, whether external forces are favorable or not. It’s an internal condition of the mind. One of the most interesting things about this condition is that it doesn’t feel like happiness when it’s happening (page 3):

Contrary to what we usually believe, moments like these, the best moments of our lives, are not the passive, receptive, relaxing times – although such experiences can also be enjoyable, if we have worked hard to attain them. The best moments usually occur when a person’s body or mind is stretched to its limits in a voluntary effort to accomplish something difficult and worthwhile. Optimal experience is thus something that we make happen. For a child, it could be placing with trembling fingers the last block on a tower she has built, higher than any she has built so far; for a swimmer, it could be trying to beat his own record; for a violinist, mastering an intricate musical passage. For each person there are thousands of opportunities, challenges to expand ourselves.

Such experiences are not necessarily pleasant at the time they occur. The swimmer’s muscles might have ached during his most memorable race, his lungs might have felt like exploding, and he might have been dizzy with fatigue – yet these could have been the best moments of his life. Getting control of life is never easy, and sometimes it can be definitely painful. But in the long run optimal experiences add up to a sense of mastery – or perhaps better, a sense of participation in determining the content of life – that comes as close to what is usually meant by happiness as anything else we can conceivably imagine.

This is an interesting observation. The best times of our lives are often hectic, busy, and frustrating while they’re happening, and yet the feeling of satisfaction we get after-the-fact seems worth the effort. Interestingly, since flow is a state of mind, experiences that are normally passive can become flow activities if you take a more active role. Csikszentmihalyi makes an interesting distinction between “pleasure” and “enjoyment” (page 46):

Experiences that give pleasure can also give enjoyment, but the two sensations are quite different. For instance, everyone takes pleasure in eating. To enjoy food, however, is more difficult. A gourmet enjoys eating, as does anyone who pays enough attention to a meal so as to discriminate the various sensations provided by it. As this example suggests, we can experience pleasure without any investment of psychic energy, whereas enjoyment happens only as a result of unusual investments of attention. A person can feel pleasure without any effort, if the appropriate centers in his brain are electrically stimulated, or as a result of the chemical stimulation of drugs. But it is impossible to enjoy a tennis game, a book, or a conversation unless attention is fully concentrated on the activity.

As someone who watches a lot of movies and reads a lot of books, I can definitely see what Csikszentmihalyi is saying here. Reading a good book is not a passive activity, but a dialogue2. Rarely do I accept what someone has written unconditionally or without reserve. For instance, in the passage above, I remember thinking about how arbitrary Csikszentmihalyi’s choice of terms was – would the above passage be any different if we switched “pleasure” and “enjoyment”? Ultimately, that doesn’t really matter. Csikszentmihalyi’s point is that there’s a distinction between hedonistic, passive experiences and complex, active experiences.

There is, of course, a limit to what we can experience. In a passage that is much more concise than my post on Interrupts and Context Switching, Csikszentmihalyi expands on this concept:

Unfortunately, the nervous system has definite limits on how much information it can process at any given time. There are just so many “events” that can appear in consciousness and be recognized and handled appropriately before they begin to crowd each other out. Walking across a room while chewing bubble gum at the same time is not too difficult, even though some statesmen have been alleged to be unable to do it; but, in fact, there is not that much more that can be done concurrently. Thoughts have to follow each other, or they get jumbled. While we are thinking about a problem we cannot truly experience either happiness or sadness. We cannot run, sing, and balance the checkbook simultaneously, because each one of those activities exhausts most of our capacity for attention.

In other words, human beings are kinda like computers in that we execute instructions in a serial fashion, and things like context switches are quite disruptive to the concept of optimal experience3.
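
To see how badly this serial limitation bites, consider a toy model of a day of deep work (the 15-minute refocus penalty is a rule-of-thumb figure you see thrown around, not a measurement of mine):

    # Toy model: interruptions add a fixed re-immersion penalty to deep work.
    # The refocus cost is an assumed rule-of-thumb number, not measured data.

    def total_time(work_minutes, interruptions, refocus_minutes=15):
        """Actual time spent, counting the refocus penalty after each interruption."""
        return work_minutes + interruptions * refocus_minutes

    print(total_time(240, 0))  # 240 minutes: four quiet hours of work
    print(total_time(240, 8))  # 360 minutes: the same work, interrupted 8 times

Eight quick questions turn four hours of work into six.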

Given all of the above, it’s easy to see why there’s no simple answer about how to cultivate flow. Csikszentmihalyi is a psychologist and is thus quite careful about how he phrases these things. His research is extensive, but necessarily imprecise. Nevertheless, he has identified eight overlapping “elements of enjoyment” that are usually present during flow. Through his extensive interviews, he has noticed at least a few of these major components come up whenever someone discusses a flow activity. A quick summary of the components (pages 48-67):

  • A Challenging Activity that Requires Skills – This is pretty self-explanatory, but it should also be noted that “challenging” does not mean “impossible”. We need to confront tasks which push our boundaries, but which we also actually have a chance of completing.
  • The Merging of Action and Awareness – When all of our energy is concentrated on the relevant stimuli. This is related to some of the below components.
  • Clear Goals and Feedback – These are actually two separate components, but they are interrelated and on a personal level, I feel like these are the most important of the components… or at least, one of the most difficult. In particular, accurate feedback and measurement are much more difficult than they sound. Sure, for some activities, they’re simple and easy, but for a lot of more complex ones, the metrics either don’t exist or are too obtuse. This is something I struggle with in my job. There are certain metrics that are absolute and pretty easy to track, but there are others that are more subjective and exceedingly difficult to quantify.
  • Concentration on the Task at Hand – Very much related to the second point above, this particular component is all about how that sort of intense concentration removes from awareness all the worries and frustrations of everyday life. You are so focused on your task that there is no room in your mind for irrelevant information.
  • The Paradox of Control – Enjoyable experiences allow people to exercise a sense of control over their actions. To look at this another way, you could see it as a lack of worry about losing control. The paradox comes into play because this feeling is somewhat illusory. What’s important is the “possibility, rather than the actuality, of control.”
  • The Loss of Self-Consciousness – Again related to a couple of the above, this one is about how when you’re involved in flow, concern about the self disappears. Being so engrossed in a project or a novel or whatever that you forget to eat lunch, and things along those lines. Interestingly, this sort of thing eventually does lead to a sense of self that emerges stronger after the activity has ended.
  • The Transformation of Time – The sense of duration of time is altered. Hours pass by in minutes, or conversely, minutes pass by in what seem like hours. As Einstein once said: “Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. THAT’S relativity.”

So what are the implications of all this? There were a few things that kept coming to mind while reading this book.

First, to a large extent, I think this helps explain why video games are so popular. Indeed, many of the flow activities in the book are games or sports. Chess, swimming, dancing, etc… He doesn’t mention video games specifically, but they seem to fit the mold. Skills are certainly involved in video games. They require concentration and thus often lead to a loss of self-consciousness and lack of awareness of the outside world. They cause you to lose track of time. They permit a palpable sense of control over a digital environment (indeed, a limited paradigm of reality is essential to video games, and it lends the impression of control and agency to the player). And perhaps most importantly, the goals are usually very clear and the feedback is nearly instantaneous. It’s not uncommon for people to refer to video games in terms of addiction, which brings up an interesting point about flow (page 70):

The flow experience, like everything else, is not “good” in an absolute sense. It is good only in that it has the potential to make life more rich, intense, and meaningful; it is good because it increases the strength and complexity of the self. But whether the consequences of any particular instance of flow is good in a larger sense needs to be discussed and evaluated in terms of more inclusive social criteria. The same is true, however, of all human activities, whether science, religion, or politics.

Flow is value neutral. In the infamous words of Buckethead, “Like the atom, the flyswatter can be a force for great good or great evil.” So while video games could certainly be a flow activity, are they a good activity? That is usually where the controversy stems from. I believe the flow achieved during video game playing to be valuable, but I can also see why some wouldn’t feel that way. Since flow is an internal state of the mind, it’s difficult to observe just how that condition is impacting a given person.

Another implication that kept occurring to me throughout the book is what’s being called “The gamification of everything”. The idea is to use the techniques of game design to get people interested in what are normally non-game activities. This concept is gaining traction all over the place, but especially in business. For example, Target encouraged their cashiers to speed up checkout of customers by instituting a system of scoring and leaderboards to give cashiers instant feedback. In the book, Csikszentmihalyi recounts several examples of employees in seemingly boring jobs, such as assembly lines, who have turned their job from a tedious bore to a flow activity thanks to measurement and feedback. There are a lot of internet startups that use techniques from gaming to enhance their services. Many use an awards system with points and leaderboards. Take Foursquare, with its badges and “Mayorships”, which turns “going out” (to restaurants, bars, and other commercial establishments) into a game. Daily Burn uses game mechanics to help people lose weight. Mint.com is a service that basically turns personal finance into a game. The potential examples are almost infinite4.
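
The machinery involved here is almost embarrassingly simple, which is probably part of why it’s spreading so fast. Here’s a minimal sketch of the points-and-leaderboard mechanic (the names and scores are invented, obviously – this is an illustration, not anyone’s actual system):

    from collections import defaultdict

    class Leaderboard:
        """Bare-bones points-and-ranking mechanic behind many gamified systems."""
        def __init__(self):
            self.scores = defaultdict(int)

        def award(self, player, points):
            self.scores[player] += points  # the instant-feedback part

        def top(self, n=3):
            return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

    board = Leaderboard()
    board.award("cashier_a", 95)  # e.g., a speedy checkout earns points
    board.award("cashier_b", 88)
    board.award("cashier_a", 91)
    print(board.top())  # who's "winning" right now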

Again, none of this is necessarily a “good” thing. If Target employees are gamed into checking out faster, are they sacrificing accuracy in the name of speed? What is actually gained by being the “mayor” of a bar in Foursquare? Indeed, many marketing schemes that revolve around the gamification of everything are essentially ways to “trick” customers or “exploit” psychology for profit. I don’t really have a problem with this, but I do think it’s an interesting trend, and its basis is the flow created by playing games.

On a more personal note, one thing I can’t help but notice is that my latest hobby of homebrewing beer seems, at first glance, to be a poor flow activity. Or, at least, the feedback part of the process is not very good. When you brew a beer, you have to wait a few weeks after brew day to bottle or keg your beer, then you have to wait some time after that (less if you keg) before you can actually taste the beer to see how it came out (sure, you can drink the unfermented wort or the uncarbonated/unconditioned beer after primary fermentation, but that’s not an exact measurement, and even then, you have to wait long periods of time). On the other hand, flow is an internal state of mind. The process of brewing the beer in the first place has many places for concentration and smaller bits of feedback. When I thought about it more, I realized that those three hours of brew day are, in themselves, something of a flow activity. The fact that I get to try the beer a few weeks/months later to see how it turned out is just an added bonus. Incidentally, the saison I brewed a few weeks ago? It seems to have turned out well – I think it’s my best batch yet.

In case you can’t tell, I really enjoyed this book, and as longwinded as this post turned out, there’s a ton of great material in the book that I’m only touching on. I’ll leave you with a quote that seems to sum things up pretty well (page 213): “Being in control of the mind means that literally anything that happens can be a source of joy.”

1 – I guess it’s a good thing that I’m writing this as opposed to speaking about it, as I have no idea how to pronounce any part of Mihaly Csikszentmihalyi’s name.

2 – Which is not to take away the power of books or movies where you sit down, turn your brain off, and veg out for a while. Hey, I think True Blood is coming on soon…

3 – This is, of course, a massive simplification of a subject that we don’t even really understand that well. My post on Interrupts and Context Switching goes into more detail, but even that is lacking in a truly detailed understanding of the conscious mind.

4 – I have to wonder how familiar casinos are with these concepts. I’m not talking about the games of chance themselves, though those are also a good example of a flow activity (and you can see why gambling addiction could be a problem as a result). Take, for example, blackjack. The faster the dealer gets through a hand of blackjack, the higher the throughput of the table, and thus the more money a casino would make. Casinos are all about probability, and the higher the throughput, the bigger their take. I seriously wonder if blackjack dealers are measured in some way (in terms of timing, not money).
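
For what it’s worth, the throughput arithmetic behind that hunch is trivial (every number below is my own guess for illustration, not casino data):

    # Toy estimate of a blackjack table's hourly take; every figure is a guess.
    HANDS_PER_HOUR = 70   # a faster dealer pushes this number up
    AVG_BET = 25.0        # dollars wagered per hand
    HOUSE_EDGE = 0.01     # roughly 1% against a competent player

    hourly_take = HANDS_PER_HOUR * AVG_BET * HOUSE_EDGE
    print(f"${hourly_take:.2f} per player, per hour")  # $17.50 under these guesses

Speed the dealer up by ten hands an hour and the take scales right along with it, which is exactly the kind of metric you’d expect someone to be watching.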

Two (Bad) Movie Ideas

At lunch with some coworkers today, the inevitable topic of Palau came up. You see, we all work for a retail website and most of us live in Pennsylvania. Anyone in PA who has attempted to order online will no doubt recognize the pet peeve when filling out the Shipping Address: You enter your info, tab to the State field and press “p”, expecting to see Pennsylvania come up… but instead, you get Palau.

This brought to mind a video I recently saw on the interwebs. It’s from Jellyfish Lake in Palau. It’s a surreal video, and quite dissonant if you’re used to typical jellyfish, but these have apparently evolved differently: “Twelve thousand years ago these jellyfish became trapped in a natural basin on the island when the ocean receded. With no predators amongst them for thousands of years, they evolved into a new species that lost most of their stinging ability as they no longer had to protect themselves.”

So my first movie idea was a killer jellyfish movie, filmed at Jellyfish Lake in Palau. And why not? They’ve done it for every other type of creature, even seemingly harmless ones. The video linked above is almost scary all by itself. You just want to scream, Look out, Jellyfish! Oh God, they’ve surrounded you! Run! Go! Get to the choppah! All we’d really need is a decent physical actor/actress, a good makeup guy (for the gore), and a camera that can operate underwater. Just imagine all the cool shots that could be in this movie. Indeed, the typically boring horror movie POV shot could be quite effective here – jellyfish have an interesting, irregular pattern of movement, which could make for a really good stalking sequence. The great thing about this is that it would not involve any CGI – all practical effects, and in the case of the jellyfish swarm, I apparently wouldn’t even need to do anything special. This could be a great (bad) movie.

Of course, the topic then shifted into Sci-Fi (sorry, SyFy) original movies like Mega Shark vs Giant Octopus and Mega Python vs. Gatoroid. In speculating on the origins of Gatoroid, I stumbled upon my second movie idea. You see, I figure that our story starts with an alligator that has taken up residence in the sewer system beneath a popular gym. Like all gyms, there are lots of steroid abusing muscle-men in residence. But! One day, the police make a drug raid, and in order to avoid getting arrested, our juicing heroes flush all their illegal drugs down the drain… right to our hapless alligator, who unwittingly ingests said drug/sewage cocktail, thus ceasing to be an alligator and turning into Gatoroid!

Now, assuming that’s not how it actually happens in Mega Python vs. Gatoroid, I think we’re on to something here, but to avoid copyright woes, we may have to switch our monster from an Alligator to a Crocodile, thus making him Crocoroid.

Now all I need is a few million dollars.

Update: A coworker comments: “Why not make Crocoroid’s achilles’ heel be jellyfish? Then you only have to make one movie.” I’ve made him an executive producer.

Communication

About two years ago (has it really been that long!?), I wrote a post about Interrupts and Context Switching. As long and ponderous as that post was, it was actually meant to be part of a larger series of posts. This post is meant to be the continuation of that original post and hopefully, I’ll be able to get through the rest of the series in relatively short order (instead of dithering for another couple years). While I’m busy providing context, I should also note that this series was also planned for my internal work blog, but in the spirit of arranging my interests in parallel (and because I don’t have that much time at work dedicated to blogging on our intranet), I’ve decided to publish what I can here. Obviously, some of the specifics of my workplace have been removed from what follows, but it should still contain enough general value to be worthwhile.

In the previous post, I wrote about how computers and humans process information and in particular, how they handle switching between multiple different tasks. It turns out that computers are much better at switching tasks than humans are (for reasons belabored in that post). When humans want to do something that requires a lot of concentration and attention, such as computer programming or complex writing, they tend to work best when they have large amounts of uninterrupted time and can work in an environment that is quiet and free of distractions. Unfortunately, such environments can be difficult to find. As such, I thought it might be worth examining the source of most interruptions and distractions: communication.

Of course, this is a massive subject that can’t even be summarized in something as trivial as a blog post (even one as long and bloviated as this one is turning out to be). That being said, it’s worth examining in more detail because most interruptions we face are either directly or indirectly attributable to communication. In short, communication forces us to do context switching, which, as we’ve already established, is bad for getting things done.

Let’s say that you’re working on something large and complex. You’ve managed to get started and have reached a mental state that psychologists refer to as flow (also colloquially known as being “in the zone”). Flow is basically a condition of deep concentration and immersion. When you’re in this state, you feel energized and often don’t even recognize the passage of time. Seemingly difficult tasks no longer feel like they require much effort and the work just kinda… flows. Then someone stops by your desk to ask you an unrelated question. As a nice person and an accommodating coworker, you stop what you’re doing, listen to the question and hopefully provide a helpful answer. This isn’t necessarily a bad thing (we all enjoy helping other people out from time to time) but it also represents a series of context switches that would most likely break you out of your flow.

Not all work requires you to reach a state of flow in order to be productive, but for anyone involved in complex tasks like engineering, computer programming, design, or in-depth writing, flow is a necessity. Unfortunately, flow is somewhat fragile. It doesn’t happen instantaneously; it requires a transition period where you refamiliarize yourself with the task at hand and the myriad issues and variables you need to consider. When your colleague departs and you can turn your attention back to the task at hand, you’ll need to spend some time getting your brain back up to speed.

In isolation, the kind of interruption described above might still be alright every now and again, but imagine if the above scenario happened a couple dozen times in a day. If you’re supposed to be working on something complicated, such a series of distractions would be disastrous. Unfortunately, I work for a 24/7 retail company, and the nature of our business sometimes requires frequent interruptions, so there are times when I am in a near constant state of context switching. None of this is to say I’m not part of the problem. I am certainly guilty of interrupting others, sometimes frequently, when I need some urgent information. This makes working on particularly complicated problems extremely difficult.

In the above example, there are only two people involved: you and the person asking you a question. However, in most workplace environments, that situation indirectly impacts the people around you as well. If they’re immersed in their work, an unrelated conversation two cubes down may still break them out of their flow and slow their progress. This isn’t nearly as bad as some workplaces that have a public address system – basically a way to interrupt hundreds or even thousands of people in order to reach one person – but it does still represent a challenge.

Now, the really insidious part about all this is that communication is really a good thing, a necessary thing. In a large scale organization, no one person can know everything, so communication is unavoidable. Meetings and phone calls can be indispensable sources of information and enablers of collaboration. The trick is to do this sort of thing in a way that interrupts as few people as possible. In some cases, this will be impossible. For example, urgency often forces disruptive communication (because you cannot afford to wait for an answer, you will need to be more intrusive). In other cases, there are ways to minimize the impact of frequent communication.

One way to minimize communication is to have frequently requested information documented in a common repository, so that if someone has a question, they can find it there instead of interrupting you (and potentially those around you). Naturally, this isn’t quite as effective as we’d like, mostly because documenting information is a difficult and time consuming task in itself and one that often gets left out due to busy schedules and tight timelines. It turns out that documentation is hard! A while ago, Shamus wrote a terrific rant about technical documentation:

The stereotype is that technical people are bad at writing documentation. Technical people are supposedly inept at organizing information, bad at translating technical concepts into plain English, and useless at intuiting what the audience needs to know. There is a reason for this stereotype. It’s completely true.

I don’t think it’s quite as bad as Shamus makes it out to be, mostly because I think that most people suffer from the same issues as technical people. Technology tends to be complex and difficult to explain in the first place, so it’s just more obvious there. Technology is also incredibly useful because it abstracts many difficult tasks, often through the use of metaphors. But when a user experiences the inevitable metaphor shear, they have to confront how the system really works, not the easy abstraction they’ve been using. This descent into technical details will almost always be a painful one, no matter how well documented something is, which is part of why documentation gets short shrift. I think the fact that there actually is documentation is usually a rather good sign. Then again, lots of things aren’t documented at all.

There are numerous challenges for a documentation system. It takes resources, time, and motivation to write. It can become stale and inaccurate (sometimes this can happen very quickly) and thus it requires a good amount of maintenance (this can involve numerous other topics, such as version histories, automated alert systems, etc…). It has to be stored somewhere, and thus people have to know where and how to find it. And finally, the system for building, storing, maintaining, and using documentation has to be easy to learn and easy to use. This sounds all well and good, but in practice, it’s a nonesuch beast. I don’t want to get too carried away talking about documentation, so I’ll leave it at that (if you’re still interested, that nonesuch beast article is quite good). Ultimately, documentation is a good thing, but it’s obviously not the only way to minimize communication strain.

I’ve previously mentioned that computer programming is one of those tasks that require a lot of concentration. As such, most programmers abhor interruptions. Interestingly, communication technology has become more and more reliant on software, so it should be no surprise that a lot of new tools for communication are asynchronous, meaning that the exchange of information happens at each participant’s own convenience. Email, for example, is asynchronous. You send an email to me. I choose when I want to review my messages and I also choose when I want to respond. Theoretically, email does not interrupt me (unless I use automated alerts for new email, such as the default Outlook behavior) and thus I can continue to work, uninterrupted.

The aforementioned documentation system is also a form of asynchronous communication and indeed, most of the internet itself could be considered a form of documentation. Even the communication tools used on the web are mostly asynchronous. Twitter, Facebook, YouTube, Flickr, blogs, message boards/forums, RSS and aggregators are all reliant on asynchronous communication. Mobile phones are obviously very popular, but I bet that SMS texting (which is asynchronous) is used just as much as voice, if not more so (at least among younger people). The only major communication tools invented in the past few decades that aren’t asynchronous are instant messaging and chat clients, and even those systems are often used in a more asynchronous way than traditional speech or conversation. (I suppose web conferencing is a relatively new communication tool, though it’s really just an extension of conference calls.)

The benefit of asynchronous communication is, of course, that it doesn’t (or at least it shouldn’t) represent an interruption. If you’re immersed in a particular task, you don’t have to stop what you’re doing to respond to an incoming communication request. You can deal with it at your own convenience. Furthermore, such correspondence (even in a supposedly short-lived medium like email) is usually stored for later reference. Such records are certainly valuable resources. Unfortunately, asynchronous communication has its own set of difficulties as well.

Miscommunication is certainly a danger in any case, but it seems more prominent in the world of asynchronous communication. Since there is no easy back-and-forth in such a medium, there is little room for clarification, and you are often left with only your own interpretation. Miscommunication is doubly challenging because it creates an ongoing problem. What could have been a single conversation has now ballooned into several asynchronous touch-points and even the potential for wasted work.

One of my favorite quotations is from Anne Morrow Lindbergh:

To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!

It’s difficult to beat the endless nuance of face-to-face communication, and for some discussions, nothing else will do. But as Lindbergh notes, communication is, in itself, a difficult proposition. Difficult, but necessary. About the best we can do is to attempt to minimize the misunderstanding.

I suppose one way to mitigate the possibility of miscommunication is to formalize the language in which the discussion is happening. This is easier said than done, as our friends in the legal department would no doubt say. Take a close look at a formal legal contract and you can clearly see the flaws in formal language. It is ostensibly written in English, but it requires a lot of effort to compose or to read. Even then, opportunities for miscommunication or loopholes exist. Such a process makes sense when dealing with two separate organizations that each have their own agenda. But for internal collaboration purposes, such a formalization of communication would be disastrous.

You could consider computer languages a form of formal communication, but for most practical purposes, this would also fall short of a meaningful method of communication. At least, with other humans. The point of a computer language is to convert human thought into computational instructions that can be carried out in an almost mechanical fashion. While such a language is indeed very formal, it is also tedious, unintuitive, and difficult to compose and read. Our brains just don’t work like that. Not to mention the fact that most of the communication efforts I’m talking about are the precursors to the writing of a computer program!

Despite all of this, a light formalization can be helpful, and the fact that teams must produce important documentation practically forces a compromise between informal and formal methods of communication. In requirements specifications, for instance, I have found it quite beneficial to formally define the various systems, acronyms, and other jargon that are referenced later in the document. This allows for a certain consistency within the document itself, and it also helps establish guidelines for meaningful dialogue outside of the document. Of course, it wouldn’t quite be up to legal standards and it would certainly lack the rigid syntax of computer languages, but it can still be helpful.

I am not an expert in linguistics, but it seems to me that spoken language is much richer and more complex than written language. Spoken language features numerous intricacies and tonal subtleties such as inflections and pauses. Indeed, spoken language often contains its own set of grammatical patterns, which can be different from those of written language. Furthermore, face-to-face communication also consists of body language and other signs that can influence the meaning of what is said depending on the context in which it is spoken. This sort of nuance just isn’t possible in written form.

This actually illustrates a wider problem. Again, I’m no linguist and haven’t spent a ton of time examining the origins of language, but it seems to me that language emerged as a more immediate form of communication than what we use it for today. In other words, language was meant to be ephemeral, but with the advent of written language and improved technological means for recording communication (which are, historically, relatively recent developments), we’re treating it differently. What was meant to be short-lived and transitory is now enduring and long-lived. As a result, we get things like the ever-changing concept of political correctness. Or, more relevant to this discussion, we get the aforementioned compromise between formal and informal language.

Another drawback to asynchronous communication is the propensity for over-communication. The CC field in an email can be a dangerous thing. It’s very easy to broadcast your work out to many people, but the more this happens, the more difficult it becomes to keep track of all the incoming stimuli. Also, the language used in such a communication may be optimized for one type of reader, while the audience may be more general. This applies to other asynchronous methods as well. Documentation in a wiki is infamously difficult to categorize and find later. When you have an army of volunteers (as Wikipedia does), it’s not as large a problem. But most organizations don’t have such luxuries. Indeed, we’re usually lucky if something is documented at all, let alone well organized and optimized.

The obvious question, which I’ve skipped over for most of this post (and, for that matter, the previous post), is: why communicate in the first place? If there are so many difficulties that arise out of communication, why not minimize such frivolities so that we can get something done?

Indeed, many of the greatest works in history were created by one mind. Sometimes, two. If I were to ask you to name the greatest inventor of all time, what would you say? Leonardo da Vinci, or perhaps Thomas Edison. Both had workshops consisting of many helping hands, but their greatest ideas and conceptual integrity came from one man. Great works of literature? Shakespeare is the clear choice. Music? Bach, Mozart, Beethoven. Painting? da Vinci (again!), Rembrandt, Michelangelo. All individuals! There are collaborations as well, but usually only between two people. The Wright brothers, Gilbert and Sullivan, and so on.

So why have design and invention gone from solo efforts to group efforts? Why do we know the names of most of the inventors behind 19th and early 20th century innovations, but not those behind later achievements? For instance, who designed the Saturn V rocket? No one knows, because it was designed by a large team of people (and it was the culmination of numerous predecessors built by other teams of people). Why is that?

The biggest and most obvious answer is the increasing technological sophistication in nearly every area of engineering. The infamous Lazarus Long adage that “Specialization is for insects” notwithstanding, the amount of effort and specialization in various fields is astounding. Take a relatively obscure and narrow branch of mechanical engineering like fluid dynamics, and you’ll find people devoting most of their lives to the study of that field. Furthermore, the applications of that field go far beyond what we’d assume. Someone tinkering in their garage couldn’t make the Saturn V alone. They’d require too much expertise in a wide and disparate array of fields.

This isn’t to say that someone tinkering in their garage can’t create something wonderful. Indeed, that’s where the first personal computer came from! And we certainly know the names of many innovators today. Mark Zuckerberg and Larry Page/Sergey Brin immediately come to mind… but even their inventions spawned large companies with massive teams driving future innovation and optimization. It turns out that scaling a product up often takes more effort and more people than expected. (More information about the pros and cons of moving to a collaborative structure will have to wait for a separate post.)

And with more people comes more communication. It’s a necessity. You cannot collaborate without large amounts of communication. In their book Peopleware, Tom DeMarco and Timothy Lister call this the High-Tech Illusion:

…the widely held conviction among people who deal with any aspect of new technology (as who of us does not?) that they are in an intrinsically high-tech business. … The researchers who made fundamental breakthroughs in those areas are in a high-tech business. The rest of us are appliers of their work. We use computers and other new technology components to develop our products or to organize our affairs. Because we go about this work in teams and projects and other tightly knit working groups, we are mostly in the human communication business. Our successes stem from good human interactions by all participants in the effort, and our failures stem from poor human interactions.

(Emphasis mine.) That insight is part of what initially inspired this series of posts. It’s very astute, and most organizations work along those lines, and thus need to figure out ways to account for the additional costs of communication (this is particularly daunting, as such things are notoriously difficult to measure, but I’m getting ahead of myself). I suppose you could argue that both of these posts are somewhat inconclusive. Some of that is because they are part of a larger series, but also, as I’ve been known to say, human beings don’t so much solve problems as trade one set of problems for another (in the hopes that the new problems are preferable to the old). Recognizing and acknowledging the problems introduced by collaboration and communication is vital to working on any large project. As I mentioned towards the beginning of this post, this only really scratches the surface of the subject of communication, but for the purposes of this series, I think I’ve blathered on long enough. My next topic in this series will probably cover the various difficulties of providing estimates. I’m hoping the groundwork laid in these first two posts will mean that the next post won’t be quite so long, but you never know!

6WH: Slasher Statistics

There are certain RULES that one must abide by in order to successfully survive a horror movie. For instance, number one: you can never have sex. BIG NO NO! BIG NO NO! Sex equals death, okay? Number two: you can never drink or do drugs. The sin factor! It’s a sin. It’s an extension of number one. And number three: never, ever, ever under any circumstances say, “I’ll be right back.” Because you won’t be back. — Randy (Scream, 1996)

The slasher film is an unusual beast. It’s often criticized for its lack of originality, simplistic premises, repetitive nature, and strict adherence to formula. Of course, it’s often praised for such qualities as well. For fans of the slasher, watching a new film that follows the formula is like eating comfort food.

Ahhh, horror comfort food. Watching an ’80s bodycount film, I find, is relaxing. You kinda know what’s going to happen and all of the characters act in predictable ways, but that’s why it’s like putting on a sweater on a chilly day.

The funny thing about this is that the so-called formula isn’t exactly precise. I’ve written about genres in general before:

A genre is typically defined as a category of artistic expression marked by a distinctive style, form, or content. However, anyone who is familiar with genre film or literature knows that there are plenty of movies or books that are difficult to categorize. As such, specific genres such as horror, sci-fi, or comedy are actually quite inclusive. Some genres, Drama in particular, are incredibly broad and are often accompanied by the conventions of other genres (we call such pieces “cross-genre,” though I think you could argue that almost everything incorporates “Drama”). The point here is that there is often a blurry line between what distinguishes one genre from another.

As such, it’s usually easy to spot a Slasher flick, even if there are lots of traits that are uncommon or unique. That being said, there are a number of characteristics common to a lot of slasher films:

  • A Killer: Usually a lone, male killer, but not always.
  • Victims: Usually more than two victims, introduced at the beginning and slowly killed off as the film progresses (in the manner of Ten Little Indians).
  • A Survivor: Usually a female, and usually the main protagonist, who defeats the killer in the end.
  • Gratuitous Violence: Usually a variety of weaponry is used to dispatch the victims in a relatively gruesome manner. Rarely are impersonal weapons (such as guns) used, except in certain exotic cases (such as the speargun, common to the Friday the 13th series). More personal weapons, like knives and other bladed weapons, are usually the norm, and the result is generally depicted in gory detail.
  • Sex: Nudity and sex are usually involved, and are generally indicators that those participating will die. Sometimes this is a deliberate commentary on sexuality, sometimes it’s just a more specific example of punishing those who are distracted.
  • History: There is usually some tragedy in the past that is being revisited upon the present in some way. This is less common than the above tropes, but still frequent enough to be mentioned.

There are tons of other tropes that I could go into, but that covers a good portion of the conventions used in the slasher film. Another interesting thing about the slasher film is that while there are a number of Ur Examples (i.e. primitive slashers) and Trope Makers/Codifiers, there are some pretty distinct time periods that are important. Again, there are lots of pre-slashers, notably movies like Psycho and Black Christmas1, but for all intents and purposes, the slasher film started in 1978 with Halloween and went into overdrive with the release of Friday the 13th in 1980. The period between 1980 and 1983 saw the release of countless imitators and sequels, and by 1986, the sub-genre had slowed considerably2. There were still some series limping by (Friday the 13th, Halloween, Nightmare on Elm Street, etc…), but by the mid-90s, the sub-genre was all but dead. Wes Craven then revived things with the ultra-self-aware, mega-referential Scream, but by that point, the tropes of the sub-genre were so well established that subverting them became the order of the day. Post-Scream slashers don’t quite resemble the early 80s slashers and perhaps deserve their own sub-genre definition (neo-slashers?).3

So to me, the “true” slasher film was made between the years of 1978 and 1996, with the primary concentration being in the early 80s. Sure, there were a ton of influential films made before 1978 that featured or established important tropes, but none of those films even approached the success of Halloween and its imitators. Similarly, films made after Scream were forced to acknowledge the tropes and conventions of the sub-genre, and thus they shouldn’t really count.

In 1992, Carol Clover coined the term Final Girl to describe the lone surviving character at the end of slasher films, and a new controversy was born. Because of its seemingly rigid conventions, the slasher film is ripe for post-modern interpretations and deconstructions, and it’s easy to get carried away with such things. Clover started a more academic discussion of the sub-genre, and it’s continued for the past 18 years. The discussion has mostly revolved around the role of women in these films, with the general contention being that more women are killed than men, and in a more graphic way. There have been papers arguing one way or the other, and as you might expect, none are particularly definitive.

Which brings me to a relatively recent scholarly article, Sex and Violence in the Slasher Horror Film: A Content Analysis of Gender Differences in the Depiction of Violence (.pdf). Published in 2009, the article summarizes the existing arguments and, more notably, attempts to do a pretty thorough quantitative analysis of 50 slasher films.

The article is detailed and thorough enough that it would be of interest to any fans of the genre, even if it’s possible to nitpick a number of details in their methodology. Given what I wrote about above, I think you can see where my nitpicking was focused. In particular, I was baffled by the film sample list (see page 11).

Earlier in the article, the authors discuss previous efforts, and dismiss them for various reasons. One of the previous articles is criticized for a small sample size – which is a pretty legitimate criticism. Another is criticized because it selected films by commercial success:

The sample size in the Molitor and Sapolsky (1993) study is adequate; however the decision to sample the most commercially successful films may raise problems with sample bias and interpretation of the findings (Molitor & Sapolsky, 1993; Sapolsky et al., 2003). Films featuring frequent presentations of extremely graphic violence may appeal to a smaller audience, generating lower box office revenues. Thus, the findings in the existing research may not reflect the true nature of violent presentations characteristic of the slasher subgenre.

This I find less valid, especially given the authors’ concerns surrounding the impact of slasher films on society. If a film is not commercially successful, it is less influential, almost by definition.

All that being said, the authors came up with a new methodology which involved using IMDB’s power search capabilities. To my mind, their new methodology is probably just as problematic as previous studies. Their definition of the slasher sub-genre seems a bit broad, and as such, some of the films chosen as part of their study are questionable at best. For one thing, they include several pre-Halloween films and several post-Scream films, which dilutes the sample. Indeed, some of the films are arguably not even slashers. For instance, the inclusion of two Saw films seems like a bit of a stretch. It is true that Saw leverages some similar tropes, but it’s also one of the defining films in a different sub-genre – the “Torture Porn” film. Perhaps I’m splitting hairs, but I can’t imagine anyone jumping to Saw when asked to think of a slasher film.

The lack of any sort of measurement of influence is another issue. This is a more general problem, but it impacts this study in particular due to the random nature of the sample collection. For instance, there is no way that a movie like Cherry Falls should be used as a representative member of the slasher sub-genre. A study that focuses on commercial success of a film (i.e. box office and home video sales) would never have included that film.

Ultimately, these complaints amount to nitpicks. Even with these flaws, some of the study’s conclusions are still interesting:

Contrary to the findings reported in previous research, the current analysis suggests that there are several differences in the nature of violent presentations involving male and female characters. Male characters in slasher horror films are more likely to experience relatively quick, graphic, and serious acts of violence. Comparatively, female characters are more likely to be victims of less serious and less graphic forms of violence, such as stalking or confinement, with increased cinematic focus on depicting close-up states of prolonged terror. Women in slasher films are also more likely to be featured in scenes involving sexual content. Specifically, female characters are far more likely to be featured as partially or fully naked and, when sexual and violent images are concomitantly present, the film’s antagonist is significantly more likely to attack a woman.

This is ultimately not all that surprising, though I do wonder about a few things. For instance, since the Final Girl is a common convention, and since the final battle with the killer is likely to last a lot longer than earlier murders, it would make sense that the violence against women characters is less serious, but prolonged. I suppose one could also argue about the inclusion of non-physical violence as violence, which could get a bit hairy. The stats surrounding nudity and sex are also interesting, though I wonder how they would compare against other film genres (action films, for instance). The study presents the slasher as some sort of outlier, but I don’t know if that’s the case (not that it would excuse anything). I don’t know that any of these correlations can be tied to a causation, but it’s interesting nonetheless.

It’s an interesting article, and well worth a read for anyone interested in the sub-genre. Thanks to And Now the Screaming Starts for the pointer and stay tuned for the next installment of the Six Weeks of Halloween movie marathon. That’s all for now, but don’t worry, I’ll be right back!

1 I’m particularly fascinated by pre-slasher films, of which there are many. Psycho, Peeping Tom, Blood and Black Lace (and other Giallos), Twitch of the Death Nerve (aka Bay of Blood), The Texas Chain Saw Massacre, Black Christmas, Silent Night, Bloody Night, Alice Sweet Alice, The Hills Have Eyes, and so on. Even some older films not normally associated with slashers presage the idea, like Thirteen Women or And Then There Were None.

2 In particular, April Fool’s Day and Jason Lives: Friday the 13th Part VI, both released in 1986, began to recognize the conventions of the genre and started the self-awareness trend that would culminate in Craven’s Scream. There are probably lots of other good slashers made during this 1986-1996 corridor, but the slasher film was seriously on the decline at that point.

3 It might be a bit insulting to Film Noir, but there are some parallels here. Critics basically defined the film noir after the fact and once that definition became popular, all new films that featured noir-like characteristics became known as neo-noir. Of course, this is not a perfect parallel, but there is a similarity here. Once people self-consciously started making noir films, they lost a certain quality, and the same is probably true for the slasher, and in particular, films like Scream and those that followed.

A/B Testing Spaghetti Sauce

Earlier this week I was perusing some TED Talks and ran across this old (and apparently popular) presentation by Malcolm Gladwell. It struck me as particularly relevant to several topics I’ve explored on this blog, including Sunday’s post on the merits of A/B testing. In the video, Gladwell explains why there are a billion different varieties of spaghetti sauce at most supermarkets:

Again, this video touches on several topics explored on this blog in the past. For instance, it describes the origins of what’s become known as the Paradox of Choice (or, as some would have you believe, the Paradise of Choice) – indeed, there’s another TED talk linked right off the Gladwell video that covers that topic in detail.

The key insight Gladwell discusses in his video is basically the destruction of the Platonic Ideal (I’ll summarize in this paragraph in case you didn’t watch the video, which covers the topic in much more depth). He talks about Howard Moskowitz, a market research consultant for various food industry companies that were attempting to optimize their products. After conducting lots of market research and puzzling over the results, Moskowitz eventually came to a startling conclusion: there is no perfect product, only perfect products. Moskowitz made his name working with spaghetti sauce. Prego had hired him to find the perfect spaghetti sauce (so that they could compete with their rival, Ragu). Moskowitz developed dozens of prototype sauces and went on the road, testing each variety with all sorts of people. What he found was that there was no single perfect spaghetti sauce, but there were basically three types of sauce that people responded to in roughly equal proportion: standard, spicy, and chunky. At the time, there were no chunky spaghetti sauces on the market, so when Prego released their chunky spaghetti sauce, their sales skyrocketed. A full third of the market was underserved, and Prego filled that need.
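To illustrate that insight, here’s a toy sketch (in Python, with invented preference data; this is not Moskowitz’s actual method) of why chasing a single “ideal” fails when preferences cluster: the overall average lands where almost nobody actually is, while a simple one-dimensional k-means finds the real groups.

```python
import random

random.seed(0)
# Hypothetical spiciness preferences on a 0-10 scale: one mild-loving
# group and one spicy-loving group, with nobody in the middle.
prefs = [random.gauss(2.0, 0.3) for _ in range(50)] + \
        [random.gauss(8.0, 0.5) for _ in range(50)]

# The "platonic ideal": a single average sauce that pleases no one.
overall_mean = sum(prefs) / len(prefs)  # lands around 5.0

def kmeans_1d(data, k=2, iters=20):
    """Minimal 1-D k-means: find k preference clusters in the data."""
    centers = random.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            # assign each point to its nearest center
            clusters[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

print(overall_mean)      # the misleading single "ideal"
print(kmeans_1d(prefs))  # roughly [2.0, 8.0]: the real product targets
```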

Decades later, this is hardly news to us and the trend has spread from the supermarket into all sorts of other arenas. In entertainment, for example, we’re seeing a move towards niches. The era of huge blockbuster bands like The Beatles is coming to an end. Of course, there will always be blockbusters, but the really interesting stuff is happening in the niches. This is, in part, due to technology. Once you can fit 30,000 songs onto an iPod and you can download “free” music all over the internet, it becomes much easier to find music that fits your tastes better. Indeed, this becomes a part of people’s identity. Instead of listening to the mass produced stuff, they listen to something a little odd and it becomes an expression of their personality. You can see evidence of this everywhere, and the internet is a huge enabler in this respect. The internet is the land of niches. Click around for a few minutes and you can easily find absurdly specific, single-topic niche websites like this one where every post features animals wielding lightsabers or this other one that’s all about Flaming Garbage Cans In Hip Hop Videos (there are thousands, if not millions of these types of sites). The internet is the ultimate paradox of choice, and you’re free to explore almost anything you desire, no matter how odd or obscure it may be (see also, Rule 34).

In relation to Sunday’s post on A/B testing, the lesson here is that A/B testing is an optimization tool that allows you to see how various segments respond to different versions of something. In that post, I used an example where an internet retailer was attempting to find the ideal imagery to sell a diamond ring. A common debate in the retail world is whether that image should just show a closeup of the product, or if it should show a model wearing the product. One way to solve that problem is to A/B test it – create both versions of the image, segment visitors to your site, and track the results.

As discussed Sunday, there are a number of challenges with this approach, but one thing I didn’t mention is the unspoken assumption that there actually is an ideal image. In reality, there are probably some people that prefer the closeup and some people who prefer the model shot. An A/B test will tell you what the majority of people like, but wouldn’t it be even better if you could personalize the imagery used on the site depending on what customers like? Show the type of image people prefer, and instead of catering to the most popular segment of customer, you cater to all customers (the simple diamond ring example begins to break down at this point, but more complex or subtle tests could still show significant results when personalized). Of course, this is easier said than done – just ask Amazon, who does CRM and personalization as well as any retailer on the web, and yet manages to alienate a large portion of their customers every day! Interestingly, this really just shifts the purpose of A/B testing from one of finding the platonic ideal to finding a set of ideals that can be applied to various customer segments. Once again we run up against the need for more and better data aggregation and analysis techniques. Progress is being made, but I’m not sure what the endgame looks like here. I suppose time will tell. For now, I’m just happy that Amazon’s recommendations aren’t completely absurd for me at this point (which I find rather amazing, considering where they were a few years ago).
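To make that shift concrete, here’s a minimal sketch (in Python; the variants, the customer_type attribute, and the handful of observations are all invented for illustration) of moving from “which image wins overall?” to “which image wins for each kind of customer?”:

```python
from collections import defaultdict

# Hypothetical A/B observations, broken out by a customer attribute.
# A real system would have thousands of rows from tracking data.
observations = [
    # (customer_type, variant_shown, purchased)
    ("gift_buyer", "closeup", False), ("gift_buyer", "model", True),
    ("gift_buyer", "model", True),    ("gift_buyer", "closeup", False),
    ("self_buyer", "closeup", True),  ("self_buyer", "model", False),
    ("self_buyer", "closeup", True),  ("self_buyer", "model", False),
]

# Tally views and purchases per (segment, variant) pair.
tallies = defaultdict(lambda: {"views": 0, "purchases": 0})
for customer_type, variant, purchased in observations:
    t = tallies[(customer_type, variant)]
    t["views"] += 1
    t["purchases"] += int(purchased)

def winner_for(customer_type: str) -> str:
    """Return the variant with the higher conversion rate for this segment."""
    def rate(variant):
        t = tallies[(customer_type, variant)]
        return t["purchases"] / t["views"] if t["views"] else 0.0
    return max(("closeup", "model"), key=rate)

# Instead of one global winner, each segment gets its own ideal:
for segment in ("gift_buyer", "self_buyer"):
    print(segment, "->", winner_for(segment))
```

In this toy data, the model shot wins for one group and the closeup wins for the other, which is exactly the situation a single global A/B winner would paper over.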

Groundhog Day and A/B Testing

Jeff Atwood recently made a fascinating observation about the similarities between the classic film Groundhog Day and A/B Testing.

In case you’ve only recently emerged from a hermit-like existence, Groundhog Day is a film about Phil (played by Bill Murray). It seems that Phil has been doomed (or is it blessed?) to live the same day over and over again. It doesn’t seem to matter what he does during this day; he always wakes up at 6 am on Groundhog Day. In the film, we see the same day repeated over and over again, but only in bits and pieces (usually skipping repetitive parts). The director of the film, Harold Ramis, believes that by the end of the film, Phil has spent the equivalent of about 30 or 40 years reliving that same day.

Towards the beginning of the film, Phil does a lot of experimentation, and Atwood’s observation is that this often takes the form of an A/B test. This is a concept that is perhaps a little more esoteric, but the principles are easy. Let’s take a simple example from the world of retail. You want to sell a new ring on a website. What should the main image look like? For simplification purposes, let’s say you narrow it down to two different concepts: one, a closeup of the ring all by itself, and the other a shot of a model wearing the ring. Which image do you use? We could speculate on the subject for hours and even rationalize some pretty convincing arguments one way or the other, but it’s ultimately not up to us – in retail, it’s all about the customer. You could “test” the concept in a serial fashion, but ultimately the two sets of results would not be comparable. The ring is new, so whichever image is used first would get an unfair advantage, and so on. The solution is to show both images during the same timeframe. You do this by splitting your visitors into two segments (A and B), showing each segment a different version of the image, and then tracking the results. If the two images do, in fact, cause different outcomes, and if you get enough people to look at the images, it should come out in the data.
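Here’s a minimal sketch of that segmenting logic (in Python; the function names and the idea of hashing a visitor ID are my own illustration, not any particular testing tool’s API):

```python
import hashlib

def assign_segment(visitor_id: str) -> str:
    """Deterministically split visitors so the same person always sees the same image."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Track how many visitors saw each image and how many bought the ring.
results = {"A": {"views": 0, "purchases": 0},
           "B": {"views": 0, "purchases": 0}}

def record_view(visitor_id: str) -> str:
    segment = assign_segment(visitor_id)
    results[segment]["views"] += 1
    # The page would render the closeup for segment A, the model shot for B.
    return segment

def record_purchase(visitor_id: str) -> None:
    results[assign_segment(visitor_id)]["purchases"] += 1

def conversion_rate(segment: str) -> float:
    r = results[segment]
    return r["purchases"] / r["views"] if r["views"] else 0.0
```

If the two images really do cause different outcomes, and enough visitors flow through, the two conversion rates should drift apart; if they stay close, the images probably don’t matter much.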

This is what Phil does in Groundhog Day. For instance, Phil falls in love with Rita (played by Andie MacDowell) and spends what seems like months compiling lists of what she likes and doesn’t like, so that he can construct the perfect relationship with her.

Phil doesn’t just go on one date with Rita, he goes on thousands of dates. During each date, he makes note of what she likes and responds to, and drops everything she doesn’t. At the end he arrives at — quite literally — the perfect date. Everything that happens is the most ideal, most desirable version of all possible outcomes on that date on that particular day. Such are the luxuries afforded to a man repeating the same day forever.

This is the purest form of A/B testing imaginable. Given two choices, pick the one that “wins”, and keep repeating this ad infinitum until you arrive at the ultimate, most scientifically desirable choice.

As Atwood notes, the interesting thing about this process is that even once Phil has constructed that perfect date, Rita still rejects Phil. From this example and presumably from experience with A/B testing, Atwood concludes that A/B testing is empty and that subjects can often sense a lack of sincerity behind the A/B test.

It’s an interesting point, but I’m not sure it’s entirely applicable in all situations. Of course, Atwood admits that A/B testing is good at smoothing out details, but there’s something more at work in Groundhog Day that Atwood is not mentioning. Namely, that Phil is using A/B testing to misrepresent himself as the ideal mate for Rita. Yes, he’s done the experimentation to figure out what “works” and what doesn’t, but his initial testing was ultimately shallow. Rita didn’t reject him because he had all the right answers; she rejected him because he was attempting to deceive her. He was misrepresenting himself, and that certainly can lead to a feeling of emptiness.

If you look back at my example above about the ring being sold on a retail website, you’ll note that there’s no deception going on there. Somehow I doubt either image would result in a hollow feeling for the customer. Why is this different from Groundhog Day? Because neither image misrepresents the product, and one would assume that the website is pretty clear about the fact that you can buy things there. Of course, there are a million different variables you could test (especially once you get into text and marketing hooks, etc…) and some of those could be more deceptive than others, but most of the time, deception is not the goal. There is a simple choice to be made: instead of constantly wondering about your product image and second-guessing yourself, why not A/B test it and see what customers like better?

There are tons of limitations to this approach, but I don’t think it’s as inherently flawed as Atwood seems to believe. Still, the data you get out of an A/B test isn’t always conclusive and even if it is, whatever learnings you get out of it aren’t necessarily applicable in all situations. For instance, what works for our new ring can’t necessarily be applied to all new rings (this is a problem for me, as my employer has a high turnover rate for products – as such, the simple example of the ring as described above would not be a good test for my company unless the ring were going to be available for a very long time). Furthermore, while you can sometimes pick a winner, it’s not always clear why it’s a winner. This is especially the case when the differences between A and B are significant (for instance, testing an entirely redesigned page might yield results, but you will not know which of the changes to the page actually caused said results – on the other hand, A/B testing is really the only way to accurately calculate ROI on significant changes like that.)

Obviously these limitations should be taken into account when conducting an A/B test, and I think what Phil runs into in Groundhog Day is a lack of conclusive data. One of the problems with interpreting inconclusive data is that it can be very tempting to rationalize the data. Phil’s initial attempts to craft the perfect date for Rita fail because he’s really only scraping the surface of her needs and desires. In other words, he’s testing the wrong thing, misunderstanding the data, and thus getting inconclusive results.
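As an aside, there are standard ways to ask whether an A/B result is conclusive at all. Here’s a minimal sketch of one of them, a two-proportion z-test (the counts below are invented, and a real test would also fix its sample size in advance):

```python
import math

def z_score(purchases_a: int, views_a: int,
            purchases_b: int, views_b: int) -> float:
    """Two-proportion z-test: how many standard errors apart are the conversion rates?"""
    p_a = purchases_a / views_a
    p_b = purchases_b / views_b
    pooled = (purchases_a + purchases_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Hypothetical results: 120 purchases out of 5,000 views vs. 95 out of 5,000.
z = z_score(120, 5000, 95, 5000)
print(round(z, 2))  # ~1.72

# |z| > 1.96 corresponds to roughly 95% confidence that the difference is
# real; this example falls short of that, i.e. the data is inconclusive,
# which is exactly the trap described above: it's tempting to declare a
# winner anyway and rationalize the noise.
```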

The interesting thing about the Groundhog Day example is that, in the end, the movie is not a condemnation of A/B testing at all. Phil ultimately does manage to win the affections of Rita. Of course it took him decades to do so, and that’s worth taking into account. Perhaps what the film is really saying is that A/B testing is often more complicated than it seems and that the results you get depend on what you put into it. A/B testing is not the easy answer it’s often portrayed as and it should not be the only tool in your toolbox (i.e. forcing employees to prove that using 3, 4 or 5 pixels for a border is ideal is probably going a bit too far), but neither is it as empty as Atwood seems to be indicating. (And we didn’t even talk about multivariate tests! Let’s get Christopher Nolan on that. He’d be great at that sort of movie, wouldn’t he?)