Culture

A/B Testing Spaghetti Sauce

Earlier this week I was perusing some TED Talks and ran across this old (and apparently popular) presentation by Malcolm Gladwell. It struck me as particularly relevant to several topics I’ve explored on this blog, including Sunday’s post on the merits of A/B testing. In the video, Gladwell explains why there are a billion different varieties of spaghetti sauce at most supermarkets:

Again, this video touches on several topics explored on this blog in the past. For instance, it describes the origins of what’s become known as the Paradox of Choice (or, as some would have you believe, the Paradise of Choice) – indeed, there’s another TED talk linked right off the Gladwell video that covers that topic in detail.

The key insight Gladwell discusses in his video is basically the destruction of the Platonic Ideal (I’ll summarize in this paragraph in case you didn’t watch the video, which covers the topic in much more depth). He talks about Howard Moskowitz, a market research consultant who worked with various food industry companies that were attempting to optimize their products. After conducting lots of market research and puzzling over the results, Moskowitz eventually came to a startling conclusion: there is no perfect product, only perfect products. Moskowitz made his name working with spaghetti sauce. Prego had hired him to find the perfect spaghetti sauce (so that they could compete with rival company Ragu). Moskowitz developed dozens of prototype sauces and went on the road, testing each variety with all sorts of people. What he found was that there was no single perfect spaghetti sauce, but there were basically three types of sauce that people responded to in roughly equal proportion: standard, spicy, and chunky. At the time, there were no chunky spaghetti sauces on the market, so when Prego released their chunky spaghetti sauce, their sales skyrocketed. A full third of the market was underserved, and Prego filled that need.

Decades later, this is hardly news to us, and the trend has spread from the supermarket into all sorts of other arenas. In entertainment, for example, we’re seeing a move towards niches. The era of huge blockbuster bands like The Beatles is coming to an end. Of course, there will always be blockbusters, but the really interesting stuff is happening in the niches. This is, in part, due to technology. Once you can fit 30,000 songs onto an iPod and you can download “free” music all over the internet, it becomes much easier to find music that fits your tastes better. Indeed, this becomes a part of people’s identity. Instead of listening to the mass-produced stuff, they listen to something a little odd, and it becomes an expression of their personality. You can see evidence of this everywhere, and the internet is a huge enabler in this respect. The internet is the land of niches. Click around for a few minutes and you can easily find absurdly specific, single-topic niche websites like this one where every post features animals wielding lightsabers, or this other one that’s all about Flaming Garbage Cans In Hip Hop Videos (there are thousands, if not millions, of these types of sites). The internet is the ultimate paradox of choice, and you’re free to explore almost anything you desire, no matter how odd or obscure it may be (see also, Rule 34).

In relation to Sunday’s post on A/B testing, the lesson here is that A/B testing is an optimization tool that allows you to see how various segments respond to different versions of something. In that post, I used an example where an internet retailer was attempting to find the ideal imagery to sell a diamond ring. A common debate in the retail world is whether that image should just show a closeup of the product, or if it should show a model wearing the product. One way to solve that problem is to A/B test it – create both versions of the image, segment visitors to your site, and track the results.

As discussed Sunday, there are a number of challenges with this approach, but one thing I didn’t mention is the unspoken assumption that there actually is an ideal image. In reality, there are probably some people that prefer the closeup and some people who prefer the model shot. An A/B test will tell you what the majority of people like, but wouldn’t it be even better if you could personalize the imagery used on the site depending on what customers like? Show the type of image people prefer, and instead of catering to the most popular segment of customer, you cater to all customers (the simple diamond ring example begins to break down at this point, but more complex or subtle tests could still show significant results when personalized). Of course, this is easier said than done – just ask Amazon, who does CRM and personalization as well as any retailer on the web, and yet manages to alienate a large portion of their customers every day! Interestingly, this really just shifts the purpose of A/B testing from one of finding the platonic ideal to finding a set of ideals that can be applied to various customer segments. Once again we run up against the need for more and better data aggregation and analysis techniques. Progress is being made, but I’m not sure what the endgame looks like here. I suppose time will tell. For now, I’m just happy that Amazon’s recommendations aren’t completely absurd for me at this point (which I find rather amazing, considering where they were a few years ago).
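The shift described above – from one winner overall to one winner per segment – is easy to picture in code. This is a toy sketch with entirely made-up segment names, variants, and data; real personalization (Amazon-style) is vastly more involved.

```python
from collections import defaultdict

# Hypothetical observations: (customer_segment, image_variant, converted)
observations = [
    ("bargain_hunters", "closeup", True), ("bargain_hunters", "model", False),
    ("bargain_hunters", "closeup", True), ("bargain_hunters", "model", False),
    ("gift_buyers", "closeup", False), ("gift_buyers", "model", True),
    ("gift_buyers", "model", True), ("gift_buyers", "closeup", False),
]

def winners_by_segment(observations):
    """Pick the best-converting variant separately for each segment."""
    # segment -> variant -> [conversions, views]
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for segment, variant, converted in observations:
        stats[segment][variant][1] += 1
        if converted:
            stats[segment][variant][0] += 1
    return {
        segment: max(variants, key=lambda v: variants[v][0] / variants[v][1])
        for segment, variants in stats.items()
    }
```

With the toy data above, `winners_by_segment(observations)` returns `{"bargain_hunters": "closeup", "gift_buyers": "model"}` – not one platonic ideal, but a set of ideals, one per segment.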

Groundhog Day and A/B Testing

Jeff Atwood recently made a fascinating observation about the similarities between the classic film Groundhog Day and A/B Testing.

In case you’ve only recently emerged from a hermit-like existence, Groundhog Day is a film about Phil (played by Bill Murray). It seems that Phil has been doomed (or is it blessed) to live the same day over and over again. It doesn’t seem to matter what he does during this day, he always wakes up at 6 am on Groundhog Day. In the film, we see the same day repeated over and over again, but only in bits and pieces (usually skipping repetitive parts). The director of the film, Harold Ramis, believes that by the end of the film, Phil has spent the equivalent of about 30 or 40 years reliving that same day.

Towards the beginning of the film, Phil does a lot of experimentation, and Atwood’s observation is that this often takes the form of an A/B test. This is a concept that is perhaps a little more esoteric, but the principles are easy. Let’s take a simple example from the world of retail. You want to sell a new ring on a website. What should the main image look like? For simplification purposes, let’s say you narrow it down to two different concepts: one, a closeup of the ring all by itself, and the other a shot of a model wearing the ring. Which image do you use? We could speculate on the subject for hours and even rationalize some pretty convincing arguments one way or the other, but it’s ultimately not up to us – in retail, it’s all about the customer. You could “test” the concept in a serial fashion, but ultimately the two sets of results would not be comparable. The ring is new, so whichever image is used first would get an unfair advantage, and so on. The solution is to show both images during the same timeframe. You do this by splitting your visitors into two segments (A and B), showing each segment a different version of the image, and then tracking the results. If the two images do, in fact, cause different outcomes, and if you get enough people to look at the images, it should come out in the data.
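The bucketing-and-tracking mechanics described above are simple enough to sketch in a few lines. This is a hypothetical illustration, not anything from the post or from any real testing tool: the visitor IDs are made up, and the significance check is a standard two-proportion z-test.

```python
import hashlib
from math import sqrt, erf

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into segment A or B.

    Hashing the ID (instead of flipping a coin) means the same visitor
    always sees the same image on every page load."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def conversion_lift(conv_a, total_a, conv_b, total_b):
    """Two-proportion z-test: is B's conversion rate really different from A's?"""
    p_a, p_b = conv_a / total_a, conv_b / total_b
    pooled = (conv_a + conv_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_b - p_a, p_value
```

If segment A converts 100 times out of 1,000 views and segment B converts 150 times out of 1,000, `conversion_lift(100, 1000, 150, 1000)` reports a 5-point lift with a p-value well under 0.01 – enough people have looked at the images for the difference to come out in the data.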

This is what Phil does in Groundhog Day. For instance, Phil falls in love with Rita (played by Andie MacDowell) and spends what seems like months compiling lists of what she likes and doesn’t like, so that he can construct the perfect relationship with her.

Phil doesn’t just go on one date with Rita, he goes on thousands of dates. During each date, he makes note of what she likes and responds to, and drops everything she doesn’t. At the end he arrives at — quite literally — the perfect date. Everything that happens is the most ideal, most desirable version of all possible outcomes on that date on that particular day. Such are the luxuries afforded to a man repeating the same day forever.

This is the purest form of A/B testing imaginable. Given two choices, pick the one that “wins”, and keep repeating this ad infinitum until you arrive at the ultimate, most scientifically desirable choice.
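Taken literally, Phil’s pick-the-winner-and-repeat loop looks something like the toy sketch below. The `prefer` callback, standing in for Rita’s reaction (or a customer’s click), is entirely hypothetical:

```python
def iterated_ab(variants, prefer, trials=100):
    """Phil's brute-force method in miniature: keep a reigning champion,
    pit it against each challenger many times, and promote whichever
    option wins the majority of trials."""
    champion = variants[0]
    for challenger in variants[1:]:
        # prefer(a, b) returns 1 if a beats b on this trial, else 0
        champion_wins = sum(prefer(champion, challenger) for _ in range(trials))
        if champion_wins < trials / 2:
            champion = challenger
    return champion
```

With a deterministic preference like `lambda a, b: 1 if a > b else 0`, the loop simply finds the maximum; with a noisy, human preference, the repetition is what separates signal from luck – which is exactly why Phil needs thousands of dates to get there.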

As Atwood notes, the interesting thing about this process is that even once Phil has constructed that perfect date, Rita still rejects Phil. From this example and presumably from experience with A/B testing, Atwood concludes that A/B testing is empty and that subjects can often sense a lack of sincerity behind the A/B test.

It’s an interesting point, though I’m not sure it’s entirely applicable in all situations. Atwood admits that A/B testing is good at smoothing out details, but there’s something more at work in Groundhog Day that Atwood doesn’t mention: namely, that Phil is using A/B testing to misrepresent himself as the ideal mate for Rita. Yes, he’s done the experimentation to figure out what “works” and what doesn’t, but his initial testing was ultimately shallow. Rita didn’t reject him because he had all the right answers; she rejected him because he was attempting to deceive her. He was misrepresenting himself, and that certainly can lead to a feeling of emptiness.

If you look back at my example above about the ring being sold on a retail website, you’ll note that there’s no deception going on there. Somehow I doubt either image would leave the customer feeling hollow. Why is this different from Groundhog Day? Because neither image misrepresents the product, and one would assume that the website is pretty clear about the fact that you can buy things there. Of course, there are a million different variables you could test (especially once you get into text and marketing hooks, etc…), and some of those could be more deceptive than others, but most of the time, deception is not the goal. There is a simple choice to be made: instead of constantly wondering about your product image and second-guessing yourself, why not A/B test it and see what customers like better?

There are tons of limitations to this approach, but I don’t think it’s as inherently flawed as Atwood seems to believe. Still, the data you get out of an A/B test isn’t always conclusive and even if it is, whatever learnings you get out of it aren’t necessarily applicable in all situations. For instance, what works for our new ring can’t necessarily be applied to all new rings (this is a problem for me, as my employer has a high turnover rate for products – as such, the simple example of the ring as described above would not be a good test for my company unless the ring would be available for a very long time). Furthermore, while you can sometimes pick a winner, it’s not always clear why it’s a winner. This is especially the case when the differences between A and B are significant (for instance, testing an entirely redesigned page might yield results, but you will not know which of the changes to the page actually caused said results – on the other hand, A/B testing is really the only way to accurately calculate ROI on significant changes like that.)

Obviously these limitations should be taken into account when conducting an A/B test, and I think what Phil runs into in Groundhog Day is a lack of conclusive data. One of the problems with interpreting inconclusive data is that it can be very tempting to rationalize it. Phil’s initial attempts to craft the perfect date for Rita fail because he’s really only scraping the surface of her needs and desires. In other words, he’s testing the wrong thing, misunderstanding the data, and thus getting inconclusive results.

The interesting thing about the Groundhog Day example is that, in the end, the movie is not a condemnation of A/B testing at all. Phil ultimately does manage to win the affections of Rita. Of course, it took him decades to do so, and that’s worth taking into account. Perhaps what the film is really saying is that A/B testing is often more complicated than it seems and that the results you get depend on what you put into it. A/B testing is not the easy answer it’s often portrayed as, and it should not be the only tool in your toolbox (i.e. forcing employees to prove that using 3, 4 or 5 pixels for a border is ideal is probably going a bit too far), but neither is it as empty as Atwood seems to be indicating. (And we didn’t even talk about multivariate tests! Let’s get Christopher Nolan on that. He’d be great at that sort of movie, wouldn’t he?)

Tasting Notes…

So Nick from CHUD recently revived the idea of a “Tasting Notes…” post that features a bunch of disconnected, scattershot notes on a variety of topics that don’t really warrant a full post. It sounds like fun, so here are a few tasting notes…

Television

  • The latest season of True Blood seems to be collapsing under the weight of all the new characters and plotlines. It’s still good, but the biggest issue with the series is that nothing seems to happen from week to week. That’s the problem when you have a series with 15 different subplots, I guess. The motif for this season seems to be to end each episode with Vampire Bill doing something absurdly crazy. I still have hope for the series, but it was much better when I was watching it on DVD/On Demand, where all the episodes are available and you don’t have to wait a week between each one.
  • Netflix Watch Instantly Pick of the Week: The Dresden Files. An underappreciated Sci-Fi (er, SyFy) original series based on a series of novels by Jim Butcher, this focuses on that other magician named Harry. This one takes the form of a creature-of-the-week series mixed with a bit of a police procedural, and it’s actually pretty good. We’re not talking groundbreaking or anything, but it’s great disposable entertainment and well worth a watch if you like magic and/or police procedurals. Unfortunately, it only lasted about 12 episodes, so there’s still some loose threads and whatnot, but it’s still a fun series.

Video Games

  • A little late to the party (but not as late as some others), I’ve started playing Grand Theft Auto IV recently. It’s a fine game, I guess, but I’ve had this problem with the GTA series ever since I played GTA III: There doesn’t seem to be anything new or interesting in the game. GTA III was a fantastic game, and it seems like all of the myriad sequels since then have added approximately nothing to its legacy. Vice City and San Andreas added some minor improvements to various gameplay mechanics and whatnot, but they were ultimately the same game with some minor improvements. GTA IV seems basically like the same game, but with HD graphics. Also, is it me, or is it harder to drive around town without constantly spinning out? Maybe Burnout Paradise ruined me on GTA driving, which I used to think of as a lot of fun.
  • I have to admit that this year’s E3 seems like a bit of a bust for me. Microsoft had Kinect, which looks like it will be a silly failure (not that it really matters for me, as I have a PS3). Sony has finally caught up to where the Wii was a few years ago with Move, and I don’t particularly care, as motion control games have consistently disappointed me. Sony also seems to have bet the farm on 3D gaming, but that would require me to purchase a new $5,000 TV and $100 glasses for anyone who wants to watch. Also, there’s the fact that I couldn’t care less about 3D. Speaking of which, Nintendo announced the 3DS, which is a portable gaming system with 3D that doesn’t require glasses. This is neat, I guess, but I couldn’t care less about portable systems. There are a couple of interesting games for the Wii, namely the new Goldeneye and the new Zelda, but in both cases, I’m a little wary. My big problem with Nintendo this generation has been that they didn’t do anything new or interesting after Wii Sports (and possibly Wii Fit). Everything else has been retreads of old games. There is a certain nostalgia value there, and I can enjoy some of those retreads (Mario Kart Wii was fun, but it’s not really that different from a game that came out about 20 years ago, ditto for New Super Mario Brothers Wii and about 10 other games), but at the same time, I’m getting sick of all that.
  • One game that was announced at E3 that I am looking forward to is called Journey. It’s made by the same team as Flower and will hopefully be just as good.
  • Otherwise, I’ll probably play a little more of GTA IV, just so I can get far enough to really cause some mayhem in Liberty City (this is another problem with a lot of sequels – you often start the sequel powered-down and have to build up various abilities that you’re used to having) and pick up some games from last year, like Uncharted 2 and Batman: Arkham Asylum.

Movies

  • I saw Predators last weekend, and despite being a member of this year’s illustrious Top 5 Movies I Want To See Even Though I Know They’ll Suck list, I actually enjoyed it. Don’t get me wrong, it’s not fine cinema by any stretch of the imagination, but it knows where its bread is buttered and it hits all the appropriate beats. As MovieBob notes, this movie fills in the expected sequel trajectory of the Alien series. It’s Aliens to Predator’s Alien, if that makes any sense. In other words, it’s Predator but with multiple predators and higher stakes. It’s ultimately derivative in the extreme, but I really enjoyed the first movie, so that’s not so bad. I mean, you’ve got the guy with the gatling gun, the tough ethnic girl who recognizes the predators, the tough ethnic guy who pulls off his shirt and faces the predator with a sword in hand-to-hand combat, and so on. Again, it’s a fun movie, and probably the best since the original (although that’s not really saying much). Just don’t hope for much in the way of anything new or exciting.
  • Netflix Watch Instantly Pick of the Week: The Girl with the Dragon Tattoo, for reasons expounded upon in Sunday’s post.
  • Looking forward to Inception this weekend. Early reviews are positive, but I’m not really hoping for that much. Still in a light year for movies, this looks decent.

The Finer Things

  • A couple weekends ago, I went out on my deck on a gorgeous night and drank a beer whilst smoking a cigar. I’m pretty good with beer, so I feel confident in telling you that if you get the chance, Affligem Dubbel is a great beer. It has a dark amber color and a great, full-bodied taste. It’s as smooth as can be, but carbonated enough that it doesn’t taste flat. All in all, one of my favorite recent discoveries. I know absolutely nothing about cigars, but I had an Avo Uvezian Notturno XO (it came in an orange tube). It’s a bit smaller than most other cigars I’ve had, but I actually enjoyed it quite a bit. Again, a cigar connoisseur I am not, so take this with a grain of salt.
  • I just got back from my monthly beer club meeting. A decent selection tonight, with the standout and surprise winner being The Woodwork Series – Acacia Barreled. It’s a tasty dubbel-style beer (perhaps not as good as the aforementioned Affligem, but still quite good) and well worth a try (I’m now interested in trying the other styles, which all seem to be based around the type of barrel the beer is stored in). Other standouts included a homebrewed tripel (nice work, Dana!) and, of course, someone brought Ommegang Abbey Ale (another dubbel!), which is a longtime favorite of mine. The beer I brought was a Guldenberg (Belgian tripel), but it must not have liked the car ride, as it pretty much exploded when we opened it. I think it tasted a bit flat after that, but it had a great flavor, and I think I will certainly have to try this again (preferably without shaking it around so much before I open it).

And I think that just about wraps up this edition of Tasting Notes, which I rather enjoyed writing and will probably try again at some point.

Incompetence

Noted documentary filmmaker Errol Morris has been writing a series of posts about incompetence for the NY Times. The most interesting parts feature an interview with David Dunning, a psychologist whose experiments have discovered what’s called the Dunning-Kruger Effect: our incompetence masks our ability to recognize our incompetence.

DAVID DUNNING: There have been many psychological studies that tell us what we see and what we hear is shaped by our preferences, our wishes, our fears, our desires and so forth. We literally see the world the way we want to see it. But the Dunning-Kruger effect suggests that there is a problem beyond that. Even if you are just the most honest, impartial person that you could be, you would still have a problem — namely, when your knowledge or expertise is imperfect, you really don’t know it. Left to your own devices, you just don’t know it. We’re not very good at knowing what we don’t know.

I found this interesting in light of my recent posting about universally self-affirming outlooks (i.e. seeing the world the way we want to see it). In any case, the interview continues:

ERROL MORRIS: Knowing what you don’t know? Is this supposedly the hallmark of an intelligent person?

DAVID DUNNING: That’s absolutely right. It’s knowing that there are things you don’t know that you don’t know. [4] Donald Rumsfeld gave this speech about “unknown unknowns.” It goes something like this: “There are things we know we know about terrorism. There are things we know we don’t know. And there are things that are unknown unknowns. We don’t know that we don’t know.” He got a lot of grief for that. And I thought, “That’s the smartest and most modest thing I’ve heard in a year.”

It may be smart and modest, but that sort of thing usually gets politicians in trouble. But most people aren’t politicians, and so it’s worth looking into this concept a little further. An interesting result of this effect is that a lot of the smartest, most intelligent people also tend to be somewhat modest (this isn’t to say that they don’t have an ego or that they can’t act in arrogant ways, just that they tend to have a better idea about how much they don’t know). Steve Schwartz has an essay called No One Knows What the F*** They’re Doing (or “The 3 Types of Knowledge”) that explores these ideas in some detail:

To really understand how it is that no one knows what they’re doing, we need to understand the three fundamental categories of information.

There’s the shit you know, the shit you know you don’t know, and the shit you don’t know you don’t know.

Schwartz has a series of very helpful charts that illustrate this, but most people drastically overestimate the amount of knowledge in the “shit you know” category. In fact, that’s the smallest category, and it is dwarfed by the “shit you know you don’t know” category, which is, in itself, dwarfed by the “shit you don’t know you don’t know” category. The result is that most people who receive a lot of praise or recognition are surprised and feel a bit like a fraud.

This is hardly a new concept, but it’s always worth keeping in mind. When we learn something new, we’ve gained some knowledge. We’ve put some information into the “shit we know” category. But more importantly, we’ve probably also taken something out of the “shit we don’t know that we don’t know” category and put it into the “shit we know that we don’t know” category. This is important because that unknown-unknowns category is the most dangerous of the three, not least because our ignorance prevents us from really exploring it. As mentioned at the beginning of this post, our incompetence masks our ability to recognize our incompetence. In the interview, Morris references a short film he did once:

ERROL MORRIS: And I have an interview with the president of the Alcor Life Extension Foundation, a cryonics organization, on the 6 o’clock news in Riverside, California. One of the executives of the company had frozen his mother’s head for future resuscitation. (It’s called a “neuro,” as opposed to a “full-body” freezing.) The prosecutor claimed that they may not have waited for her to die. In answer to a reporter’s question, the president of the Alcor Life Extension Foundation said, “You know, we’re not stupid . . . ” And then corrected himself almost immediately, “We’re not that stupid that we would do something like that.”

DAVID DUNNING: That’s pretty good.

ERROL MORRIS: “Yes. We’re stupid, but we’re not that stupid.”

DAVID DUNNING: And in some sense we apply that to the human race. There’s some comfort in that. We may be stupid, but we’re not that stupid.

One might be tempted to call this a cynical outlook, but what it basically amounts to is that there’s always something new to learn. Indeed, the more we learn, the more there is to learn. Now, if only we could invent the technology like what’s presented in Diaspora (from my previous post), so we can live long enough to really learn a lot about the universe around us…

Internalizing the Ancient

Otaku Kun points to a wonderful entry in the Astronomy Picture of the Day series:

APOD: Milky Way Over Ancient Ghost Panel

The photo features two main elements: a nice view of the stars in the sky and a series of paintings on a canyon wall in Utah (it’s the angle of the photograph and the clarity of the sky that makes it seem unreal to me, but looking at the larger version makes things a bit more clear). As OK points out, there are two corresponding kinds of antiquity here: “one cosmic, the other human”. He speculates:

I think it’s impossible to really relate to things beyond human timescales. The idea of something being “ancient” has no meaning if it predates our human comprehension. The Neanderthals disappeared 30,000 years ago, which is probably really the farthest back we can reflect on. When we start talking about human forebears of 100,000 years ago and more, it becomes more abstract – that’s why it’s no coincidence that the Battlestar Galactica series finale set the events 150,000 years ago, well beyond even the reach of mythological narrative.

I’m reminded of an essay by C. Northcote Parkinson, called High Finance or The Point of Vanishing Interest (the essay appears in Parkinson’s Law, a collection of essays). Parkinson writes about how finance committees work:

People who understand high finance are of two kinds: those who have vast fortunes of their own and those who have nothing at all. To the actual millionaire a million dollars is something real and comprehensible. To the applied mathematician and the lecturer in economics (assuming both to be practically starving) a million dollars is at least as real as a thousand, they having never possessed either sum. But the world is full of people who fall between these two categories, knowing nothing of millions but well accustomed to think in thousands, and it is of these that finance committees are mostly composed.

He then goes on to explore what he calls the “Law of Triviality”. Briefly stated, it means that the time spent on any item of the agenda will be in inverse proportion to the sum involved. Thus he concludes, after a number of humorous but fitting examples, that there is a point of vanishing interest where the committee can no longer comment with authority. Astonishingly, the amount of time that is spent on $10 million and on $10 may well be the same. There is clearly a space of time which suffices equally for the largest and smallest sums.

In short, it’s difficult to internalize numbers that high, whether we’re talking about large sums of money or cosmic timescales. Indeed, I’d even say that Parkinson was being a bit optimistic. Millionaires and mathematicians may have a better grasp on the situation than most, but even they are probably at a loss when we start talking about cosmic timeframes. OK also mentions Battlestar Galactica, which did end on an interesting note (even if that finale was quite disappointing as a whole) and which brings me to one of the reasons I really enjoy science fiction: the contemplation of concepts and ideas that are beyond comprehension. I can’t really internalize the cosmic information encoded in the universe around me in such a way to do anything useful with it, but I can contemplate it and struggle to understand it, which is interesting and valuable in its own right. Perhaps someday, we will be able to devise ways to internalize and process information on a cosmic scale (this sort of optimistic statement perhaps represents another reason I enjoy SF).

Predictions

Someone sent me a note about a post I wrote on the 4th Kingdom boards in 2005 (August 3, 2005, to be more precise). It was in response to a thread about technology and consumer electronics trends, and the original poster gave two examples that were exploding at the time: “camera phones and iPods.” This is what I wrote in response:

Heh, I think the next big thing will be the iPod camera phone. Or, on a more general level, mp3 player phones. There are already some nifty looking mp3 phones, most notably the Sony/Ericsson “Walkman” branded phones (most of which are not available here just yet). Current models are all based on flash memory, but it can’t be long before someone releases something with a small hard drive (a la the iPod). I suspect that, in about a year, I’ll be able to hit 3 birds with one stone and buy a new cell phone with an mp3 player and digital camera.

As for other trends, as you mention, I think we’re going to see a lot of hoopla about the next gen gaming consoles. The new Xbox comes out in time for Xmas this year and the new Playstation 3 hits early next year. The new playstation will probably have blue-ray DVD capability, which brings up another coming tech trend: the high capacity DVD war! It seems that Sony may actually be able to pull this one out (unlike Betamax), but I guess we’ll have to wait and see…

For an off-the-cuff informal response, I think I did pretty well. Of course, I still got a lot of the specifics wrong. For instance, I’m pretty clearly talking about the iPhone, though that would have to wait about 2 years before it became a reality. I also didn’t anticipate the expansion of flash memory to more usable sizes and prices. Though I was clearly talking about a convergence device, I didn’t really say anything about what we now call “apps”.

In terms of game consoles, I didn’t really say much. My first thought upon reading this post was that I had completely missed the boat on the Wii, however, it appears that the Wii’s new controller scheme wasn’t shown until September 2005 (about a month after my post). I did manage to predict a winner in the HD video war though, even if I framed the prediction as a “high capacity DVD war” and spelled blu-ray wrong.

I’m not generally good at making predictions about this sort of thing, but it’s nice to see when I do get things right. Of course, you could make the argument that I was just stating the obvious (which is basically what I did with my 2008 predictions). Then again, everything seems obvious in hindsight, so perhaps it is still a worthwhile exercise for me. If nothing else, it gets me to think in ways I’m not really used to… so here are a few predictions for the rest of this year:

  • Microsoft will release Natal this year, and it will be a massive failure. There will be a lot of neat talk about it and speculation about the future, but the fact is that gesture-based interfaces and voice controls aren’t especially great. I’ll bet everyone says they’d like to use the Minority Report interface… but once they get to use it, I doubt people would actually find it more useful than current input methods. If it does attain success, though, it will be because of the novelty of that sort of interaction. As a gaming platform, I think it will be a near total bust. The only way Microsoft will get Natal into homes is to bundle it with the Xbox 360 (without raising the price).
  • Speaking of which, I think Sony’s Playstation Move platform will be mildly more successful than Natal, which is to say that it will also be a failure. I don’t see anything in their initial slate of games that makes me even want to try it out. All that being said, the PS3 will continue to gain ground against the Xbox 360, though not so much that it will overtake the other console.
  • While I’m at it, I might as well go out on a limb and say that the Wii will continue to clobber both the PS3 and the Xbox 360 in sales. That said, Nintendo’s year in games seems relatively tame, so I don’t see the Wii producing favorable year-over-year numbers (especially since I don’t think they’ll be able to replicate the success of New Super Mario Bros. Wii, which is selling obscenely well, even to this day). The one wildcard on the Wii right now is the Vitality Sensor. If Nintendo is able to put out the right software for that and if they’re able to market it well, it could be a massive, audience-shifting blue ocean win for them. Coming up with a good “relaxation” game and marketing it to the proper audience is one hell of a challenge though. On the other hand, if anyone can pull that off, it’s Nintendo.
  • Sony will also release some sort of 3D gaming and movie functionality for the home. It will also be a failure. In general, I think attitudes towards 3D are declining. I think it will take a high profile failure to really temper Hollywood’s enthusiasm (and even then, the “3D bump” of sales seems to outweigh the risk in most cases). Nevertheless, I don’t think 3D is here to stay. The next major 3D revolution will be when it becomes possible to do it without glasses (which, at that point, might be a completely different technology like holograms or something).
  • At first, I was going to predict that Hollywood would see a dip in ticket sales, until I realized that Avatar was mostly a 2010 phenomenon, and that Alice in Wonderland has made about $1 billion worldwide already. Furthermore, this summer sees the release of The Twilight Saga: Eclipse, which could reach similar heights (for reference, New Moon did $700 million worldwide), and the next Harry Potter is coming in November (for reference, the last Potter film did around $930 million). Altogether, the film world seems to be doing well… in terms of sales. I have to say that from my perspective, things are not looking especially good when it comes to quality. I’m not even all that interested in seeing a lot of the movies released so far this year (an informal look at my past few years indicates that I’ve normally seen about twice as many movies as I have this year – though part of that is due to the move of the Philly film fest to October).
  • I suppose I should also make some Apple predictions. The iPhone will continue to grow at a fast rate, though its growth will be tempered by Android phones. Right now, both of them are eviscerating the rest of the phone market. Once that is complete, we’ll be left with a few relatively equal players, and I think that will lead to good options for us consumers. The iPhone has been taken to task more and more for Apple’s control-freakism, but it’s interesting that Android’s open features are going to present more and more of a challenge to that as time goes on. Most recently, Google announced that the latest version of Android would feature the ability for your 3G/4G phone to act as a WiFi hotspot, which will most likely force Apple to do the same (apparently if you want to do this today, you have to jailbreak your iPhone). I don’t think this spells the end of the iPhone anytime soon, but it does mean that they have some legitimate competition (and that competition is already challenging Apple with its feature-set, which is promising).
  • The iPad will continue to have modest success. Apple may be able to convert that to a huge success if they are able to bring down the price and iron out some of the software kinks (like multi-tasking, etc… something we already know is coming). The iPad has the potential to destroy the netbook market. Again, the biggest obstacle at this point is the price.
  • The Republicans will win more seats in the 2010 elections than the Democrats. I haven’t looked closely enough at the numbers to say whether or not they could take back either (or both) house of Congress, but they will gain ground. This is not a statement of political preference either way for me, and my reasons for making this prediction are less about ideology than simple voter disillusionment. People aren’t happy with the government, and that will manifest as votes against the incumbents. It’s too far away from the 2012 elections to be sure, but I suspect Obama will hang on, if for no other reason than that he seems to be charismatic enough that people give him a pass on various mistakes or other bad news.

And I think that’s good enough for now. In other news, I have started a couple of posts that are significantly more substantial than what I’ve been posting lately. Unfortunately, they’re taking a while to produce, but at least there’s some interesting stuff in the works.

Blast from the Past

A coworker recently unearthed a stash of a publication called The Net, a magazine published circa 1997. It’s been an interesting trip down memory lane. In no particular order, here are some thoughts about this now defunct magazine.

  • Website: There was a website, using the oh-so-memorable URL of www.thenet-usa.com (I suppose they were trying to distinguish themselves from all the other countries with thenet websites). Naturally, the website is no longer available, but archive.org has a nice selection of valid content from the 96-97 era. It certainly wasn’t the worst website in the world, but it’s not exactly great either. Just to give you a taste: for a while, it used frames. Judging by archive.org, the site carried on until at least February of 2000, but the domain apparently lapsed sometime around May of that year. Random clicking around the dates after 2000 yielded some interesting results. For a while, someone named Phil Viger used it as his personal webpage, complete with MIDI files (judging from his footer, he was someone who bought up a lot of URLs and put his simple page on there as a placeholder). By 2006, the site lapsed again, and it has remained vacant since then.
  • Imagez: One other fun thing about the website is that their image directory was called “imagez” (i.e. http://web.archive.org/web/19970701135348/www.thenet-usa.com/imagez/menubar/menu.gif). They thought they were so hip in the 90s. Of course, 10 years from now, some doofus will be writing a post very much like this and wondering why there’s an “r” at the end of flickr.
  • Headlines: Some headlines from the magazine:
    • Top Secrets of the Webmaster Elite (And as if that weren’t enough, we get the subhead: Warning: This information could create dangerously powerful Web Sites)
    • Are the Browser Wars Over? – Interestingly, the issue I’m looking at was from February 1997, meaning that IE and NN were still on their 3.x iterations. More on this story below
    • Unlock the Secrets of the Search Engines – Particularly notable in that this magazine was published before Google. Remember Excite? (Apparently they’re still around – who knew?)

    I could go on and on. Just pick up a magazine, open to a random page, and you can observe something very dated or featuring a horrible pun (like Global Warning… get it? Instead of Global Warming, he’s saying Global Warning! He’s so clever!)

  • Browser Wars: With the impending release of IE4 and the Netscape Communicator suite, everyone thought that web browsers were going to go away, or be consumed by the OS. One of the regular features of the magazine was to ask a panel of experts a simple question, such as “Are Web Browsers an endangered species?” Some of the answers are ridiculous, like this one:

    The Web browser (content) and the desktop itself (functions) will all be integrated into our e-mail packages (communications).

    There is, perhaps, a nugget of truth there, but it certainly didn’t happen that way. Still, the line between browser, desktop, and email client is shifting; this guy just picked the wrong central application. Speaking of which, this is another interesting answer:

    The desktop will give way to the webtop. You will hardly notice where the Web begins and your documents end.

    Is it me, or is this guy describing Chrome OS? This guy’s answer and a lot of the others are obviously written with 90s terminology, but describing things that are happening today. For instance, the notion of desktop widgets (or gadgets or screenlets or whatever you call them) is mentioned multiple times, but not with our terminology.

  • Holy shit, remember VRML?
  • Pre-Google Silliness: “A search engine for searching search engines? Sure why not?” Later in the same issue, I saw an ad for a program that would automatically search multiple search engines and provide you with a consolidated list of results… for only $70!
  • Standards: This one’s right on the money: “HTML will still be the standard everyone loves to hate.” Of course, the author goes on to speculate that Java applets will rule the day, so it’s not exactly prescient.
  • The Psychic: In one of my favorite discoveries, the magazine pitted The Suit Versus the Psychic. Of course, the suit gives relatively boring answers to the questions, but the Psychic, he’s awesome. Regarding NN vs IE, he says “I foresee Netscape over Microsoft’s IE for 1997. Netscape is cleaner on an energy level. It appears to me to be more flexible and intuitive. IE has lower energy. I see encumbrances all around it.” Nice! Regarding IPOs, our clairvoyant friend had this to say “I predict IPOs continuing to struggle throughout 1997. I don’t know anything about them on this level, but that just came to me.” Hey, at least he’s honest. Right?

Honestly, I’m not sure I’m even doing this justice. I need to read through more of these magazines. Perhaps another post is forthcoming…

More on Visual Literacy

In response to my post on Visual Literacy and Rembrandt’s J’accuse, long-time Kaedrin friend Roy made some interesting comments about director Peter Greenaway’s insistence that our ability to analyze visual art forms like paintings is ill-informed and impoverished.

It depends on what you mean by visually illiterate, I guess. Because I think that the majority of people are as visually literate as they are textually literate. What you seem to be comparing is the ability to read into a painting with the ability to read words, but that’s not just reading, you’re talking about analyzing and deconstructing at that point. I mean, most people can watch a movie or look at a picture and do some basic contextualizing. … It’s not for lack of literacy, it’s for lack of training. You know how it is… there’s reading, and then there’s Reading. Most people in the United States know how to read, but that doesn’t mean that they know how to Read. Likewise with visual materials–most people know how to view a painting, they just don’t know how to View a Painting. I don’t think we’re visually illiterate morons, I just think we’re only superficially trained.

I mostly agree with Roy, and I spent most of my post critiquing Greenaway’s film for similar reasons. However, I find the subject of visual literacy interesting. First, as Roy mentions, it depends on how you define the phrase. When we hear the term literacy, we usually mean the ability to read and write, but there’s also a more general definition of being educated or having knowledge within a particular subject or field (i.e. computer literacy or in our case, visual literacy). Greenaway is clearly emphasizing the more general definition. It’s not that he thinks we can’t see a painting, it’s that we don’t know enough about the context of the paintings we are viewing.

Roy is correct to point out that most people actually do have relatively sophisticated visual skills:

Even when people don’t have the vocabulary or training, they still pick up on things, because I think we use symbols and visual language all the time. We read expressions and body language really well, for example. Almost all of our driving rules are encoded first and foremost as symbols, not words–red=stop, green=go, yellow=caution. You don’t need “Stop” or “Yield” on the sign to know which it is–the shape of the sign tells you.

Those are great examples of visual encoding and conventions, but do they represent literacy? Why does a stop sign represent what it does? There are three main components to the stop sign:

Stop

  1. Text – It literally says “Stop” on the sign. However, this is not universal. In Israel, for instance, there is no text. In its place is an image of a hand in a “stop” gesture.
  2. Shape – The octagonal shape of the sign is unique, and so the sign is identifiable even if obscured. The shape also allows drivers facing the back of the sign to identify that oncoming drivers have a stop sign…
  3. Color – The sign is red, a “hot” color that stands out more than most colors. Blood and fire are red, and red is associated with sin, guilt, passion, and anger, among many other things. As such, red is often used to represent warnings, hence its frequent use in traffic signals such as the stop sign.

Interestingly, these different components are overlapping and reinforcing. If one fails (for someone who is color-blind or someone who can’t read, for example), another can still communicate the meaning of the sign. There’s something similar going on with traffic lights, as the position of the light is just as important (if not more important) than the color of the light.

However, it’s worth noting that the clear meaning of a stop sign is also due to the fact that it’s a near universal convention used throughout the entire world. Not all traffic signals are as well defined. Case in point, what does a blinking green traffic light represent? Blinking red means to “stop, then proceed with caution” (kinda like a stop sign). Blinking yellow means to “slow down and proceed with caution.” So what does a blinking green mean? James Grimmelmann tried to figure it out:

It turns out (courtesy of the ODP and rec.travel), perhaps unsurprisingly, that there is no uniform agreement on the meaning of a blinking green light. In a bunch of Canadian provinces, it has the same general meaning that a regular green light does, with the added modifier that you are the undisputed master of all you survey. All other traffic entering the intersection has a stop sign or a red light, and must bow down before your awesome cosmic powers. On the other hand, if you’re in Massachusetts or British Columbia and you try a no-look Ontario-style left turn on a blinking green, you’re liable to get into a smackup, since the blinking green means only that cross traffic is seeing red, with no guarantees about oncoming traffic.

Now, maybe it’s just because we’re starting to get obscure and complicated here, but the reason traffic signals work is because we’ve established a set of conventions that are similar almost everywhere. When we mess around with those conventions or get too complicated, it can be a problem. Luckily, we don’t do that sort of thing very often (even the blinking green example is probably vanishingly obscure – I’d never seen or even heard of it until reading James’ post). These conventions are learned, usually through simple observation, though we also regulate who can drive and require people to study the rules of driving (including signs and lights) before granting a license.

Another example, perhaps surprising because it is something primarily thought of as a textual medium, is newspapers. Take a look at this front page of a newspaper1:

The Onion Newspaper

Newspapers use numerous techniques (such as prominence, grouping, and nesting) to establish a visual hierarchy, allowing readers to scan the page to find what stories they want to read. In the image above, the size of the headline (Victory!) as well as its placement on the page makes it clear at a glance that this is the most important story. The headline “Miami Police Department Unveils New Pastel Pink and Aqua Uniforms” spans three columns of text, making it obvious that they’re all part of the same story. Furthermore, we know the picture of Crockett and Tubbs goes with the same story because both the picture and the text are spanned by the same headline. And so on.

Now I know what my younger readers2 are thinking: What the fuck is this “newspaper” thing you’re babbling about? Well, it turns out that a lot of the same conventions apply to the web. There are, of course, new conventions on the web (for instance, links are usually represented by different colored text that is also underlined), but many of the same techniques are used to establish a visual hierarchy on the web.

What’s more interesting about newspapers and the web is that we aren’t really trained how to read them, but we figure it out anyway. In his excellent book on usability, Don’t Make Me Think, Steve Krug writes:

At some point in our youth, without ever being taught, we all learned to read a newspaper. Not the words, but the conventions.

We learned, for instance, that a phrase in very large type is usually a headline that summarizes the story underneath it, and that the text underneath a picture is either a caption that tells me what it’s a picture of, or – if it’s in very small type – a photo credit that tells me who took the picture.

We learned that knowing the various conventions of page layout and formatting made it easier and faster to scan a newspaper and find the stories we were interested in. And when we started traveling to other cities, we learned that all newspapers used the same conventions (with slight variations), so knowing the conventions made it easy to read any newspaper.

The tricky part about this is that the learning seems to happen subconsciously. Large type is pretty obvious, but column spanning? Captions? Nesting? Some of this stuff gets pretty subtle, and for the most part, people don’t care. They just scan the page, find what they want, and read the story. It’s just intuitive.

But designing a layout is not quite as intuitive. Many of the lessons we have internalized in reading a newspaper (or a website) aren’t really available to us in a situation where we’re asked to design a layout. If you want a good example of this, look at web pages designed in the mid-90s. By now, we’ve got blogs and mini-CMS style systems that automate layouts and take design out of most people’s hands.

So, does Greenaway have a valid point? Or is Roy right? Obviously, we all process visual information, and visual symbolism is frequently used to encode large amounts of information into a relatively small space. Does that make us visually literate? I guess it all comes down to your definition of literate. Roy seems to take the more specific definition of “able to read or write” while Greenaway seems to be more concerned with “education or knowledge in a specified field.” The question then becomes, are we more textually literate than we are visually literate? Greenaway certainly seems to think so. Roy seems to think we’re just about equal on both fronts. I think both positions are defensible, especially when you consider that Greenaway is talking specifically about art. Furthermore, his movie is about a classical painting that was created several centuries ago. For most young people today, art is more diffuse. When you think about it, almost anything can be art. I suspect Greenaway would be disgusted by that sort of attitude, which is perhaps another way to view his thoughts on visual literacy.

1 – Yeah, it’s the Onion and not a real newspaper per se, but it’s fun and it’s representative of common newspaper conventions.

2 – Hahaha, as if I have more than 5 readers, let alone any young readers.

12DC – Day 12: Merry Christmas

In 1897, Virginia O’Hanlon sent a letter to the New York Sun asking a simple question: is there a Santa Claus? You see, she had a bad father. He didn’t want to answer the question, so he transferred his fatherly responsibilities to the newspaper, claiming that “If you see it in The Sun, it’s so.” An editor at the Sun, Francis Pharcellus Church, took the opportunity to answer Virginia’s question and also addressed the deeper philosophical quandary. His now famous response can be summed up as “Yes, Virginia, there is a Santa Claus.”

Is there a Santa Claus?

Merry Christmas, all!