Computers & Internet

Unnecessary Gadgets

So the NY Times has an article debating the necessity of the various gadgets. The argument here is that we’re seeing a lot of convergence in tech devices, and that many technologies that once warranted a dedicated device are now covered by something else. Let’s take a look at their devices, what they said, and what I think:

  • Desktop Computer – NYT says to chuck it in favor of laptops. I’m a little more skeptical. Laptops are certainly better now than they’ve ever been, but I’ve been hearing about desktop-killers for decades now and I’m not even that old (ditto for thin clients, though the newest hype around “cloud” computing is slightly more appealing – but even that won’t supplant desktops entirely). I think desktops are here to stay. I’ve got a fair amount of experience with both personal and work laptops, and I have to say that both are inferior to desktops. The tradeoff is fine when I actually need the portability, but that’s not often enough to justify some of the pain of using laptops. For instance, I’m not sure what kinda graphics capabilities my work laptop has, but it really can’t handle my dual-monitor setup, and even on one monitor, the display is definitely crappier than my old desktop’s (and that thing was ancient). I do think we’re going to see some fundamental changes in the desktop/laptop/smartphone realm. The three form factors are all fundamentally useful in their own way, but I’d still expect some sort of convergence in the next decade or so. I’m expecting that smartphones will become ubiquitous, and perhaps evolve into some sort of portable profile that you could use across your various devices. That’s a more long-term thing though.
  • High Speed Internet at Home – NYT says to keep it, and I agree. Until we can get a real 4G network (i.e. not the slightly enhanced 3G stuff the current telecom companies are peddling), there’s no real question here.
  • Cable TV – NYT plays the “maybe” card on this one, but I think I can go along with that. It all depends on whether you watch TV or not (and/or if you enjoy live TV, like sporting events). I’m on the fence with this one myself. I have cable, and a DVR does make dealing with broadcast television much easier, and I like the opportunities afforded by OnDemand, etc… But it is quite expensive. If I ever get into a situation where I need to start pinching pennies, cable is going to be among the first things to go.
  • Point and Shoot Camera – NYT says to lose it in favor of the smartphone, and I probably agree. Obviously there’s still a market for dedicated high-end cameras, but the small point-and-shoot ones are quickly being outclassed by their fledgling smartphone siblings. My current iPhone camera is kinda crappy (2 MP, no flash), but even that works ok for my purposes. There are definitely times when I wish I had a flash or better quality, but they’re relatively rare and I’ve had this phone for like 3 years now (probably upgrading this summer). My next phone’s camera will most likely meet all my photography needs.
  • Camcorder – NYT says to lose it, and that makes a sort of sense. As they say, camcorders are getting squeezed from both ends of the spectrum, with smartphones and cheap flip cameras on one end, and high end cameras on the other. I don’t really know much about this though. I’m betting that camcorders will still be around, just not quite as popular as before.
  • USB Thumb Drive – NYT says lose it, and I think I agree, though not necessarily for the same reasons. They think that the internet means you don’t need to use physical media to transfer data anymore. I suppose there’s something to that, but my guess is that smartphones could easily pick up the slack and allow for portable data without a dedicated device. That being said, I’ve used a thumb drive, like, 3 times in my life.
  • Digital Music Player – NYT says ditch it in favor of smartphones, with the added caveat that people who exercise a lot might like a smaller, dedicated device. I can see that, but on a personal level, I have both and don’t mind it at all. I don’t like using up my phone battery playing music, and I honestly don’t really like the iPhone music player interface, so I actually have a regular old iPod nano for music and podcasts (also, I like to have manual control over what music/podcasts get on my device, and that’s weird on the iPhone – at least, it used to be). My setup works fine for me most times, and in an emergency, I do have music (and a couple movies) on my iPhone, so I could make do.
  • Alarm Clock – NYT says keep it, though I’m not entirely convinced. Then again, I have an alarm clock, so I can’t mount much of an offense against it. I’ve realized, though, that the grand majority of clocks that I use in my house are automatically updated (cable box, computers, phone) and synced with some external source (no worrying about DST, etc…). My alarm clock isn’t, though. I still use my phone as a failsafe for when I know I need to get up early, but that’s more based on the possibility of snoozing myself into oblivion (I can easily snooze for well over an hour). I think I may actually end up replacing my clock, but I can see some young whipper-snappers relying on some other device for their wakeup calls…
  • GPS Unit – NYT says lose it, and I agree. With the number of smartphone apps (excluding the ones that come with your phone, which are usually functional but still kinda clunky as a full GPS system) that are good at this sort of thing (and a lot cheaper), I can’t see how anyone could really justify a dedicated device for this. On a recent trip, a friend used Navigon’s Mobile Navigator ($30, and usable on any of his portable devices) and it worked like a charm. Just as good as any GPS I’ve ever used. The only problem, again, is that it will drain the phone battery (unless you plug it in, which we did).
  • Books – NYT says to keep them, and I mostly agree. The only time I can see really wanting to use a dedicated eReader is when travelling, and even then, I’d want it to be a broad device, not dedicated to books. I have considered the Kindle (as it comes down in price), but for now, I’m holding out for a tablet device that will actually have a good enough screen for this sort of thing. Which, I understand, isn’t too far off on the horizon. There are a couple of other nice things about digital books though, namely, the ability to easily mark favorite passages, or to do a search (two things that would probably save me a lot of time). I can’t see books ever going away, but I can see digital readers being a part of my life too.

A lot of these made me think of Neal Stephenson’s The System of the World. In that book, one of the characters ponders how new systems supplant older systems:

“It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently … have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. … And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher’s Stone.” (page 639)

That sort of “surround and encapsulate” concept seems broadly applicable to a lot of technology, actually.

Artificial Memory

Nicholas Carr cracks me up. He’s a skeptic of technology, and in particular, the internet. He’s the guy who wrote the wonderfully divisive article, Is Google Making Us Stupid? The funny thing about all this is that he seems to have gained the most traction on the very platform he criticizes so much. Ultimately, though, I think he does have valuable insights and, if nothing else, he does raise very interesting questions about the impacts of technology on our lives. He makes an interesting counterweight to the techno-geeks who are busy preaching about transhumanism and the singularity. Of course, in a very real sense, his opposition dooms him to suffer from the same problems as those he criticizes. Google and the internet may not be a direct line to godhood, but they don’t represent a descent into hell either. Still, reading some Carr is probably a good way to put techno-evangelism into perspective and perhaps reach some sort of Hegelian synthesis of what’s really going on.

Otakun recently pointed to an excerpt from Carr’s latest book. The general point of the article is to examine how human memory is being conflated with computer memory, and whether or not that makes sense:

…by the middle of the twentieth century memorization itself had begun to fall from favor. Progressive educators banished the practice from classrooms, dismissing it as a vestige of a less enlightened time. What had long been viewed as a stimulus for personal insight and creativity came to be seen as a barrier to imagination and then simply as a waste of mental energy. The introduction of new storage and recording media throughout the last century—audiotapes, videotapes, microfilm and microfiche, photocopiers, calculators, computer drives—greatly expanded the scope and availability of “artificial memory.” Committing information to one’s own mind seemed ever less essential. The arrival of the limitless and easily searchable data banks of the Internet brought a further shift, not just in the way we view memorization but in the way we view memory itself. The Net quickly came to be seen as a replacement for, rather than just a supplement to, personal memory. Today, people routinely talk about artificial memory as though it’s indistinguishable from biological memory.

While Carr is perhaps more blunt than I would be, I have to admit that I agree with a lot of what he’s saying here. We often hear about how modern education is improved by focusing on things like “thinking skills” and “problem solving”, but the big problem with emphasizing that sort of work ahead of memorization is that the analysis needed for such processes requires a base level of knowledge in order to be effective. This is something I’ve expounded on at length in a previous post, so I won’t rehash it here.

The interesting thing about the internet is that it enables you to get to a certain base level of knowledge and competence very quickly. This doesn’t come without its own set of challenges, and I’m sure Carr would be quick to point out that such a crash course would instill a false sense of security in us hapless internet users. After all, how do we know when we’ve reached that base level of competence? Our incompetence could very well be masking our ability to recognize our incompetence. However, I don’t think that’s an insurmountable problem. Most of us that use the internet a lot view it as something of a low-trust environment, which can, ironically, lead to a better result. On a personal level, I find that what the internet really helps with is determining just how much I don’t know about a subject. That might seem like a silly thing to say, but even recognizing that your unknown unknowns are large can be helpful.

Some other assorted thoughts about Carr’s excerpt:

  • I love the concept of a “commonplace book” and immediately started thinking of how I could implement one… which is when I realized that I’ve actually been keeping one, more or less, for the past 10 or so years on this blog. That being said, it’s something I wouldn’t mind becoming more organized about, and I’ve got some interesting ideas about what my personal take on a commonplace would look like.
  • Carr insists that the metaphor that portrays the brain as a computer is wrong. It’s a metaphor I’ve certainly used in the past, though I think what I find most interesting about it is how different computers and brains really are. The problem with the metaphor is that our brains work nothing even remotely like the way our current computers actually work. However, many of the concepts of computer science and engineering can be useful in helping to model how the brain works. I’m certainly not an expert on the subject, but for example: you could model the brain as a binary computer because a neuron either fires or it doesn’t. However, neurons don’t just turn on or off, they pulse, and things like frequency and duration can yield dramatically different results (I’ve put a toy sketch of that contrast right after this list). Not to mention the fact that the brain seems to be a massively parallel computing device, as opposed to the mostly serial electronic tools we use. That is, of course, a drastic simplification, but you get the point. The metaphor is flawed, as all metaphors are, but it can also be useful.
  • One thing that Carr doesn’t really get into (though he may cover this in a later chapter) is how notoriously unreliable human memory actually is. Numerous psychological studies show just how impressionable and faulty our memory of an event can be. This doesn’t mean we should abandon our biological memory, just that having an external, artificial memory of an event (i.e. some sort of recording) can be useful in helping to identify and shape our perceptions.
  • Of course, even recordings can yield a false sense of truth, so things like Visual Literacy are still quite important. And again, we cannot analyze said recordings accurately without a certain base set of knowledge about what we’re looking at – this is another concept that has been showing up on this blog for a while now as well: Exformation.
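As promised, here’s that toy sketch of the binary vs. pulsing neuron contrast, in Python. To be clear, this is my own drastic simplification, not real neuroscience – the 200 Hz ceiling and all the numbers are invented purely for illustration:

    # A "binary" neuron only reports on/off, while a rate-coded neuron
    # carries information in how often it pulses over a time window.
    def binary_neuron(stimulus: float, threshold: float = 0.5) -> int:
        return 1 if stimulus > threshold else 0

    def rate_coded_neuron(stimulus: float, window_ms: int = 100) -> int:
        """Spike count over a window: frequency, not just on/off."""
        max_rate_hz = 200  # invented ceiling, purely illustrative
        rate = max(0.0, min(1.0, stimulus)) * max_rate_hz
        return round(rate * window_ms / 1000)  # spikes in the window

    for s in (0.2, 0.6, 0.9):
        print(s, binary_neuron(s), rate_coded_neuron(s))
    # 0.2 -> 0 and 4 spikes; 0.6 -> 1 and 12; 0.9 -> 1 and 18. The
    # binary view collapses distinctions that the pulse rate preserves.

And that’s before you even consider parallelism, which this little serial script obviously doesn’t capture.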

And that’s probably enough babbling about Carr’s essay. I generally disagree with the guy, but on this particular subject, I think we’re more in agreement.

Opera 11 Beta

I’m one of the few people that actually uses Opera to do the grand majority of my web browsing. In recent years, I’ve been using Firefox more, especially for web development purposes (it’s hard to beat the Firebug/Web Dev Toolbar combo – Opera has a tool called Dragonfly that’s decent, but not quite as good). A few years ago, I wrote a comparison of Firefox and Opera across 8 categories, and it came out a tie. The biggest advantage that Opera had was its usability and ease of use. On the other hand, Firefox’s strength was its extensibility, something that Opera never fully embraced. Until now!

Opera recently released a beta of their next version, and I’ve been using it this week. It’s looking like an excellent browser, with some big improvements over previous versions:

  • Extensions – Opera has finally taken the plunge. Extensions have only been available for a few days, so there isn’t quite the extensive library that Firefox has, and given the smaller user base and Firefox’s head start, I’m not sure Opera will be able to catch up anytime soon. That being said, it’s a welcome addition, and when combined with Opera’s superior native features, perhaps this will even the score a bit. Extensions also represent an interesting dilemma for Opera – will they turn the most popular extensions into native features? One issue with extensions is that they can be somewhat unreliable and yield poor performance (for instance, the various mouse gesture extensions for Firefox can’t hold a candle to Opera’s native functionality). That was always Opera’s worry about extensions, so I’m betting we will see popular extensions rolled into the native app in future versions.
  • Performance and Speed – Opera 11 is noticeably faster than its predecessors (no small feat, as Opera has always been good in this respect) and probably its competition too. Of course, I’m going on a purely subjective observation here and I’m obviously biased, but it seems faster than Firefox as well. It’s probably on par with Chrome; in any case, Opera has certainly closed the gap (especially on javascript-heavy pages, which is what Chrome excels at). Once this browser is out of beta, I’d be really interested in seeing how it stacks up. Somewhat related is improved support for various standards, notably HTML 5, so there’s that too.
  • Tab stacking – Opera was the first browser with tabs, and they’re still making small, incremental improvements. In this case, it’s the ability to group a bunch of tabs together and expand or collapse the group. I haven’t actually used this feature much, but I can imagine scenarios where I’d have dozens of tabs open and grouping them might be helpful. This also makes their tab preview on mouseover functionality more meaningful: mousing over a collapsed group shows you a preview of all the tabs in it, which works well here, even though it was only marginally useful (if not a complete waste) on regular tabs. On the other hand, I’m not sure the trouble of grouping and maintaining the tab stacks would ultimately save time (but perhaps future iterations will come up with smarter methods of automatically grouping tabs – an approach that could be problematic, but which could also be beneficial if implemented well).
  • Search predictions from Google – This is minor, but just another “We’re catching up to Firefox functionality” addition, and a welcome one.

There are some other things, but the above are the best additions. Some of the other stuff is a bit extraneous (in particular, the visual mouse gestures are unnecessary, though they don’t seem to hurt anything either), and some of it won’t matter to most folks (the email client). I’ve run into some buggy behavior, but nothing unusual, and it actually seems pretty stable for a beta. So I’m looking forward to the final release of this browser.

Link Dump

A few interesting links from the depths of teh interwebs:

  • Singel-Minded: How Facebook Could Beat Google to Win the Net – Wired’s Ryan Singel makes an interesting case for Facebook to challenge Google in the realm of advertising. Right now, Facebook only advertises on their own site (in a small, relatively tasteful fashion), but it’s only really a matter of time until they make the same move Google did with AdSense. And their advantage there is that Facebook has much more usable data about people than Google. The operative word is “usable”, as Google certainly has lots of data about its users, but it seems Google’s mantra of “Don’t be evil” will come back to bite them in the ass. Google has promised not to use search history, private emails, etc… to help target ads. Facebook has no such restrictions, and the ads on their site seem to be more targeted (they’ve recently been trying to get me to buy Neal Stephenson audiobooks, which would be a pretty good bet for them… if I hadn’t already read everything that guy’s written). This got me wondering: is targeted advertising the future, and will people be ok with that? Everyone hates commercials, but would they hate ads for things they actually wanted? Obviously privacy is a concern… or is it? It’s not like Facebook has been immaculate in the area of privacy, and yet it’s as popular as it ever was. I don’t necessarily see it as a good thing, but it will probably happen, and somehow I doubt Google will take it for long without figuring out a way to leverage all that data they’ve been collecting…
  • If We Don’t, Remember Me: Animated gifs have long been a staple of the web and while they’re not normally a bastion of subtlety, this site is. They all seem to be from good movies, and I think this one is my favorite. (via kottke)
  • The Tall Man Reunites With Don Coscarelli for John Dies at the End: I posted about this movie back in 2008, then promptly forgot about it. I just assumed that it was one of those projects that would never really get off the ground (folks in Hollywood often purchase the rights to something, even when they don’t necessarily have any plans to make it) or that Coscarelli was focusing on one of his other projects (i.e. the long-rumored sequel to Bubba Ho-Tep, titled Bubba Nosferatu: Curse of the She-Vampires). But it appears that things are actually moving on JDatE, and some casting was recently announced, including long-time Coscarelli collaborator Angus Scrimm (who played the infamous Tall Man in the Phantasm films), Paul Giamatti and Clancy Brown. This is all well and good, but at the same time, I have no idea what roles any of these folks will play. None seem like the two leads (David and the titular John). Nevertheless, here’s hoping we see some new Coscarelli soon. I think his sensibility would match rather well with David Wong (nee Jason Pargin). (Update: Quint over at AiCN has more on the casting and who’s playing what)
  • Curtis Got Slapped by a White Teacher!: Words cannot describe this 40-page document (which is, itself, composed mostly of words, but whatever). It’s… breathtaking.

That’s all for now.

A/B Testing Spaghetti Sauce

Earlier this week I was perusing some TED Talks and ran across this old (and apparently popular) presentation by Malcolm Gladwell. It struck me as particularly relevant to several topics I’ve explored on this blog, including Sunday’s post on the merits of A/B testing. In the video, Gladwell explains why there are a billion different varieties of spaghetti sauce at most supermarkets:

Again, this video touches on several topics explored on this blog in the past. For instance, it describes the origins of what’s become known as the Paradox of Choice (or, as some would have you believe, the Paradise of Choice) – indeed, there’s another TED talk linked right off the Gladwell video that covers that topic in detail.

The key insight Gladwell discusses in his video is basically the destruction of the Platonic Ideal (I’ll summarize in this paragraph in case you didn’t watch the video, which covers the topic in much more depth). He talks about Howard Moskowitz, a market research consultant who worked with various food industry companies that were attempting to optimize their products. After conducting lots of market research and puzzling over the results, Moskowitz eventually came to a startling conclusion: there is no perfect product, only perfect products. Moskowitz made his name working with spaghetti sauce. Prego had hired him to find the perfect spaghetti sauce (so that they could compete with rival company Ragu). Moskowitz developed dozens of prototype sauces and went on the road, testing each variety with all sorts of people. What he found was that there was no single perfect spaghetti sauce, but there were basically three types of sauce that people responded to in roughly equal proportion: standard, spicy, and chunky. At the time, there were no chunky spaghetti sauces on the market, so when Prego released their chunky spaghetti sauce, their sales skyrocketed. A full third of the market was underserved, and Prego filled that need.
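As an aside, you can get a feel for Moskowitz’s insight with a little code. This is not his actual methodology (which I don’t know the details of), just a toy illustration with invented data: average everyone’s taste ratings together and you get a sauce nobody actually wants, but cluster the same ratings and the segments pop right out.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented 0-10 ratings of preferred "spiciness" and "chunkiness"
    # from three hidden taste segments of 100 people each.
    plain  = rng.normal([3, 3], 1.0, size=(100, 2))
    spicy  = rng.normal([8, 3], 1.0, size=(100, 2))
    chunky = rng.normal([3, 8], 1.0, size=(100, 2))
    ratings = np.vstack([plain, spicy, chunky])

    # The single "perfect" sauce is the overall average: a sauce that
    # describes nobody in particular.
    print("grand mean:", ratings.mean(axis=0))  # roughly [4.7, 4.7]

    # A few rounds of bare-bones k-means recover the three segment
    # ideals instead (lazy init: one point from each hidden segment,
    # which is cheating, but this is a toy).
    centers = ratings[[0, 100, 200]]
    for _ in range(10):
        dists = ((ratings[:, None] - centers) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centers = np.array([ratings[labels == k].mean(axis=0)
                            for k in range(3)])
    print("segment ideals:\n", centers)  # ~[3, 3], ~[8, 3], ~[3, 8]

There is no perfect spaghetti sauce, only perfect spaghetti sauces.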

Decades later, this is hardly news to us, and the trend has spread from the supermarket into all sorts of other arenas. In entertainment, for example, we’re seeing a move towards niches. The era of huge blockbuster bands like The Beatles is coming to an end. Of course, there will always be blockbusters, but the really interesting stuff is happening in the niches. This is, in part, due to technology. Once you can fit 30,000 songs onto an iPod and you can download “free” music all over the internet, it becomes much easier to find music that fits your tastes better. Indeed, this becomes a part of people’s identity. Instead of listening to the mass produced stuff, they listen to something a little odd and it becomes an expression of their personality. You can see evidence of this everywhere, and the internet is a huge enabler in this respect. The internet is the land of niches. Click around for a few minutes and you can easily find absurdly specific, single topic, niche websites like this one where every post features animals wielding lightsabers or this other one that’s all about Flaming Garbage Cans In Hip Hop Videos (there are thousands, if not millions of these types of sites). The internet is the ultimate paradox of choice, and you’re free to explore almost anything you desire, no matter how odd or obscure it may be (see also, Rule 34).

In relation to Sunday’s post on A/B testing, the lesson here is that A/B testing is an optimization tool that allows you to see how various segments respond to different versions of something. In that post, I used an example where an internet retailer was attempting to find the ideal imagery to sell a diamond ring. A common debate in the retail world is whether that image should just show a closeup of the product, or if it should show a model wearing the product. One way to solve that problem is to A/B test it – create both versions of the image, segment visitors to your site, and track the results.

As discussed Sunday, there are a number of challenges with this approach, but one thing I didn’t mention is the unspoken assumption that there actually is an ideal image. In reality, there are probably some people who prefer the closeup and some people who prefer the model shot. An A/B test will tell you what the majority of people like, but wouldn’t it be even better if you could personalize the imagery used on the site depending on what customers like? Show the type of image people prefer, and instead of catering to the most popular segment of customers, you cater to all customers (the simple diamond ring example begins to break down at this point, but more complex or subtle tests could still show significant results when personalized). Of course, this is easier said than done – just ask Amazon, who does CRM and personalization as well as any retailer on the web, and yet manages to alienate a large portion of their customers every day! Interestingly, this really just shifts the purpose of A/B testing from finding the Platonic ideal to finding a set of ideals that can be applied to various customer segments. Once again we run up against the need for more and better data aggregation and analysis techniques. Progress is being made, but I’m not sure what the endgame looks like here. I suppose time will tell. For now, I’m just happy that Amazon’s recommendations aren’t completely absurd for me at this point (which I find rather amazing, considering where they were a few years ago).
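To make that shift concrete, here’s a minimal sketch of what “a set of ideals” looks like in practice. The segment names and numbers are entirely invented; the point is just that the same test, sliced by segment, can crown a different winner for each audience:

    # variant -> (conversions, visitors), tallied separately per segment
    results = {
        "returning":  {"closeup": (90, 2000), "model_shot": (60, 2000)},
        "first_time": {"closeup": (40, 2000), "model_shot": (85, 2000)},
    }

    def winner_per_segment(results):
        rate = lambda stats: stats[0] / stats[1]
        return {seg: max(variants, key=lambda v: rate(variants[v]))
                for seg, variants in results.items()}

    print(winner_per_segment(results))
    # {'returning': 'closeup', 'first_time': 'model_shot'}

A single global tally of these same numbers would declare the model shot the winner and quietly underserve the returning customers; per-segment tallies surface both ideals. Of course, real customer segments aren’t handed to you so neatly – finding them is exactly the data aggregation and analysis problem I just mentioned.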

Groundhog Day and A/B Testing

Jeff Atwood recently made a fascinating observation about the similarities between the classic film Groundhog Day and A/B Testing.

In case you’ve only recently emerged from a hermit-like existence, Groundhog Day is a film about Phil (played by Bill Murray). It seems that Phil has been doomed (or is it blessed?) to live the same day over and over again. It doesn’t seem to matter what he does during this day; he always wakes up at 6 am on Groundhog Day. In the film, we see the same day repeated over and over again, but only in bits and pieces (usually skipping repetitive parts). The director of the film, Harold Ramis, believes that by the end of the film, Phil has spent the equivalent of about 30 or 40 years reliving that same day.

Towards the beginning of the film, Phil does a lot of experimentation, and Atwood’s observation is that this often takes the form of an A/B test. A/B testing is perhaps a more esoteric concept, but the principles are easy. Let’s take a simple example from the world of retail. You want to sell a new ring on a website. What should the main image look like? For simplification purposes, let’s say you narrow it down to two different concepts: one, a closeup of the ring all by itself, and the other, a shot of a model wearing the ring. Which image do you use? We could speculate on the subject for hours and even rationalize some pretty convincing arguments one way or the other, but it’s ultimately not up to us – in retail, it’s all about the customer. You could “test” the concept in a serial fashion, but ultimately the two sets of results would not be comparable. The ring is new, so whichever image is used first would get an unfair advantage, and so on. The solution is to show both images during the same timeframe. You do this by splitting your visitors into two segments (A and B), showing each segment a different version of the image, and then tracking the results. If the two images do, in fact, cause different outcomes, and if you get enough people to look at the images, it should come out in the data.
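The mechanics really are simple enough to sketch in a few lines of Python (all names and numbers here are mine, invented for illustration): hash each visitor into a stable bucket so they always see the same image, count conversions per bucket, and use a two-proportion z-test to decide whether “enough people” have weighed in to call the difference real.

    import hashlib
    from math import sqrt

    def assign_variant(visitor_id: str) -> str:
        """Stable 50/50 split: the same visitor always sees the same image."""
        h = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16)
        return "A_closeup" if h % 2 == 0 else "B_model_shot"

    def z_score(conv_a, n_a, conv_b, n_b):
        """Two-proportion z-test; |z| > 1.96 is significant at the 95% level."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
        se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
        return (p_a - p_b) / se

    print(assign_variant("visitor-42"))   # route a visitor to a bucket
    # Invented tallies: 120/5000 conversions for the closeup,
    # 150/5000 for the model shot.
    print(z_score(120, 5000, 150, 5000))  # about -1.85: suggestive,
                                          # but not conclusive at 95%

That last bit matters: a test doesn’t just tell you which image won, it tells you whether the difference is large enough, given your traffic, to be anything more than noise.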

This is what Phil does in Groundhog Day. For instance, Phil falls in love with Rita (played by Andie MacDowell) and spends what seems like months compiling lists of what she likes and doesn’t like, so that he can construct the perfect relationship with her.

Phil doesn’t just go on one date with Rita, he goes on thousands of dates. During each date, he makes note of what she likes and responds to, and drops everything she doesn’t. At the end he arrives at — quite literally — the perfect date. Everything that happens is the most ideal, most desirable version of all possible outcomes on that date on that particular day. Such are the luxuries afforded to a man repeating the same day forever.

This is the purest form of A/B testing imaginable. Given two choices, pick the one that “wins”, and keep repeating this ad infinitum until you arrive at the ultimate, most scientifically desirable choice.

As Atwood notes, the interesting thing about this process is that even once Phil has constructed that perfect date, Rita still rejects Phil. From this example and presumably from experience with A/B testing, Atwood concludes that A/B testing is empty and that subjects can often sense a lack of sincerity behind the A/B test.

It’s an interesting point, but I’m not sure it’s entirely applicable in all situations. Of course, Atwood admits that A/B testing is good at smoothing out details, but there’s something more at work in Groundhog Day that Atwood is not mentioning. Namely, that Phil is using A/B testing to misrepresent himself as the ideal mate for Rita. Yes, he’s done the experimentation to figure out what “works” and what doesn’t, but his initial testing was ultimately shallow. Rita didn’t reject him because he had all the right answers; she rejected him because he was attempting to deceive her. He was misrepresenting himself, and that certainly can lead to a feeling of emptiness.

If you look back at my example above about the ring being sold on a retail website, you’ll note that there’s no deception going on there. Somehow I doubt either image would leave the customer feeling hollow. Why is this different from Groundhog Day? Because neither image misrepresents the product, and one would assume that the website is pretty clear about the fact that you can buy things there. Of course, there are a million different variables you could test (especially once you get into text and marketing hooks, etc…) and some of those could be more deceptive than others, but most of the time, deception is not the goal. There is a simple choice to be made: instead of constantly wondering about your product image and second-guessing yourself, why not A/B test it and see what customers like better?

There are tons of limitations to this approach, but I don’t think it’s as inherently flawed as Atwood seems to believe. Still, the data you get out of an A/B test isn’t always conclusive and even if it is, whatever learnings you get out of it aren’t necessarily applicable in all situations. For instance, what works for our new ring can’t necessarily be applied to all new rings (this is a problem for me, as my employer has a high turnover rate for products – as such, the simple example of the ring as described above would not be a good test for my company unless the ring would be available for a very long time). Furthermore, while you can sometimes pick a winner, it’s not always clear why it’s a winner. This is especially the case when the differences between A and B are significant (for instance, testing an entirely redesigned page might yield results, but you will not know which of the changes to the page actually caused said results – on the other hand, A/B testing is really the only way to accurately calculate ROI on significant changes like that.)

Obviously these limitations should be taken into account when conducting an A/B test, and I think what Phil runs into in Groundhog Day is a lack of conclusive data. One of the problems with interpreting inconclusive data is that it can be very tempting to rationalize it. Phil’s initial attempts to craft the perfect date for Rita fail because he’s really only scraping the surface of her needs and desires. In other words, he’s testing the wrong thing, misunderstanding the data, and thus getting inconclusive results.

The interesting thing about the Groundhog Day example is that, in the end, the movie is not a condemnation of A/B testing at all. Phil ultimately does manage to win the affections of Rita. Of course, it took him decades to do so, and that’s worth taking into account. Perhaps what the film is really saying is that A/B testing is often more complicated than it seems and that the results you get depend on what you put into it. A/B testing is not the easy answer it’s often portrayed as, and it should not be the only tool in your toolbox (i.e. forcing employees to prove that using 3, 4 or 5 pixels for a border is ideal is probably going a bit too far), but neither is it as empty as Atwood seems to be indicating. (And we didn’t even talk about multivariate tests! Let’s get Christopher Nolan on that. He’d be great at that sort of movie, wouldn’t he?)

Predictions

Someone sent me a note about a post I wrote on the 4th Kingdom boards in 2005 (August 3, 2005, to be more precise). It was in response to a thread about technology and consumer electronics trends, and the original poster gave two examples that were exploding at the time: “camera phones and iPods.” This is what I wrote in response:

Heh, I think the next big thing will be the iPod camera phone. Or, on a more general level, mp3 player phones. There are already some nifty looking mp3 phones, most notably the Sony/Ericsson “Walkman” branded phones (most of which are not available here just yet). Current models are all based on flash memory, but it can’t be long before someone releases something with a small hard drive (a la the iPod). I suspect that, in about a year, I’ll be able to hit 3 birds with one stone and buy a new cell phone with an mp3 player and digital camera.

As for other trends, as you mention, I think we’re goint to see a lot of hoopla about the next gen gaming consoles. The new Xbox comes out in time for Xmas this year and the new Playstation 3 hits early next year. The new playstation will probably have blue-ray DVD capability, which brings up another coming tech trend: the high capacity DVD war! It seems that Sony may actually be able to pull this one out (unlike Betamax), but I guess we’ll have to wait and see…

For an off-the-cuff informal response, I think I did pretty well. Of course, I still got a lot of the specifics wrong. For instance, I’m pretty clearly talking about the iPhone, though that would have to wait about 2 years before it became a reality. I also didn’t anticipate the expansion of flash memory to more usable sizes and prices. Though I was clearly talking about a convergence device, I didn’t really say anything about what we now call “apps”.

In terms of game consoles, I didn’t really say much. My first thought upon reading this post was that I had completely missed the boat on the Wii, however, it appears that the Wii’s new controller scheme wasn’t shown until September 2005 (about a month after my post). I did manage to predict a winner in the HD video war though, even if I framed the prediction as a “high capacity DVD war” and spelled blu-ray wrong.

I’m not generally good at making predictions about this sort of thing, but it’s nice to see when I do get things right. Of course, you could make the argument that I was just stating the obvious (which is basically what I did with my 2008 predictions). Then again, everything seems obvious in hindsight, so perhaps it is still a worthwhile exercise for me. If nothing else, it gets me to think in ways I’m not really used to… so here are a few predictions for the rest of this year:

  • Microsoft will release Natal this year, and it will be a massive failure. There will be a lot of neat talk about it and speculation about the future, but the fact is that gesture-based interfaces and voice controls aren’t especially great. I’ll bet everyone says they’d like to use the Minority Report interface… but once they get to use it, I doubt people would actually find it more useful than current input methods. If it does attain success though, it will be because of the novelty of that sort of interaction. As a gaming platform, I think it will be a near total bust. The only way Microsoft will get Natal into homes is to bundle it with the Xbox 360 (without raising the price).
  • Speaking of which, I think Sony’s Playstation Move platform will be mildly more successful than Natal, which is to say that it will also be a failure. I don’t see anything in their initial slate of games that makes me even want to try it out. All that being said, the PS3 will continue to gain ground against the Xbox 360, though not so much that it will overtake the other console.
  • While I’m at it, I might as well go out on a limb and say that the Wii will clobber both the PS3 and the Xbox 360. As of right now, their year in games seems relatively tame, so I don’t see the Wii producing favorable year over year numbers (especially since I don’t think they’ll be able to replicate the success of New Super Mario Brothers Wii, which is selling obscenely well, even to this day). The one wildcard on the Wii right now is the Vitality Sensor. If Nintendo is able to put out the right software for that and if they’re able to market it well, it could be a massive, audience-shifting blue ocean win for them. Coming up with a good “relaxation” game and marketing it to the proper audience is one hell of a challenge though. On the other hand, if anyone can pull that off, it’s Nintendo.
  • Sony will also release some sort of 3D gaming and movie functionality for the home. It will also be a failure. In general, I think attitudes towards 3D are declining. I think it will take a high profile failure to really temper Hollywood’s enthusiasm (and even then, the “3D bump” of sales seems to outweigh the risk in most cases). Nevertheless, I don’t think 3D is here to stay. The next major 3D revolution will be when it becomes possible to do it without glasses (which, at that point, might be a completely different technology like holograms or something).
  • At first, I was going to predict that Hollywood would see a dip in ticket sales, until I realized that Avatar was mostly a 2010 phenomenon, and that Alice in Wonderland has made about $1 billion worldwide already. Furthermore, this summer sees the release of The Twilight Saga: Eclipse, which could reach similar heights (for reference, New Moon did $700 million worldwide), and the next Harry Potter is coming in November (for reference, the last Potter film did around $930 million). Altogether, the film world seems to be doing well… in terms of sales. I have to say that from my perspective, things are not looking especially good when it comes to quality. I’m not even as interested in seeing a lot of the movies released so far this year (an informal look at my past few years indicates that I’ve normally seen about twice as many movies as I have this year – though part of that is due to the move of the Philly film fest to October).
  • I suppose I should also make some Apple predictions. The iPhone will continue to grow at a fast rate, though its growth will be tempered by Android phones. Right now, both of them are eviscerating the rest of the phone market. Once that is complete, we’ll be left with a few relatively equal players, and I think that will lead to good options for us consumers. The iPhone has been taken to task more and more for Apple’s control-freakism, but Android’s openness is going to present a bigger and bigger challenge to that as time goes on. Most recently, Google announced that the latest version of Android would feature the ability for your 3G/4G phone to act as a WiFi hotspot, which will most likely force Apple to do the same (apparently if you want to do this today, you have to jailbreak your iPhone). I don’t think this spells the end of the iPhone anytime soon, but it does mean that they have some legitimate competition (and that competition is already challenging Apple with its feature set, which is promising).
  • The iPad will continue to have modest success. Apple may be able to convert that to a huge success if they are able to bring down the price and iron out some of the software kinks (like multi-tasking, etc… something we already know is coming). The iPad has the potential to destroy the netbook market. Again, the biggest obstacle at this point is the price.
  • The Republicans will win more seats in the 2010 elections than the Democrats. I haven’t looked closely enough at the numbers to say whether or not they could take back either house of Congress (or both), but they will gain ground. This is not a statement of political preference either way for me, and my reasons for making this prediction are less about ideology than simple voter dissatisfaction. People aren’t happy with the government and that will manifest as votes against the incumbents. It’s too far away from the 2012 elections to be sure, but I suspect Obama will hang on, if for no other reason than that he seems to be charismatic enough that people give him a pass on various mistakes or other bad news.

And I think that’s good enough for now. In other news, I have started a couple of posts that are significantly more substantial than what I’ve been posting lately. Unfortunately, they’re taking a while to produce, but at least there’s some interesting stuff in the works.

Remix Culture and Soviet Montage Theory

A video mashup of The Beastie Boys’ popular and amusing Sabotage video with scenes from Battlestar Galactica has been making the rounds recently. It’s well done, but a little on the disposable side of remix culture. The video led Sonny Bunch to question “remix culture”:

It’s quite good. But, ultimately, what’s the point?

Leaving aside the questions of copyright and the rest: Seriously…what’s the point? Does this add anything to the culture? I won’t dispute that there’s some technical prowess in creating this mashup. But so what? What does it add to our understanding of the world, or our grasp of the problems that surround us? Anything? Nothing? Is it just “there” for us to have a chuckle with and move on? Is this the future of our entertainment?

These are good questions, and I’m not surprised that the BSG Sabotage video prompted them. The implication of Sonny’s post is that he thinks it is an unoriginal waste of talent (he may be playing a bit of devil’s advocate here, but I’m willing to play along because these are interesting questions and because it will give me a chance to pedantically lecture about film history later in this post!). In the comments, Julian Sanchez makes a good point (based on a video he produced earlier, which someone else referenced in the comment thread), one that I’ll expand on later in this post:

First, the argument I’m making in that video is precisely that exclusive focus on the originality of the contribution misses the value in the activity itself. The vast majority of individual and collective cultural creation practiced by ordinary people is minimally “original” and unlikely to yield any final product of wide appeal or enduring value. I’m thinking of, e.g., people singing karaoke, playing in a garage band, drawing, building models, making silly YouTube videos, improvising freestyle poetry, whatever. What I’m positing is that there’s an intrinsic value to having a culture where people don’t simply get together to consume professionally produced songs and movies, but also routinely participate in cultural creation. And the value of that kind of cultural practice doesn’t depend on the stuff they create being particularly awe-inspiring.

To which Sonny responds:

I’m actually entirely with you on the skill that it takes to produce a video like the Brooklyn hipsters did — I have no talent for lighting, camera movements, etc. I know how hard it is to edit together something like that, let alone shoot it in an aesthetically pleasing manner. That’s one of the reasons I find the final product so depressing, however: An impressive amount of skill and talent has gone into creating something that is not just unoriginal but, in a way, anti-original. These are people who are so devoid of originality that they define themselves not only by copying a video that they’ve seen before but by copying the very personalities of characters that they’ve seen before.

Another good point, but I think Sonny is missing something here. The talents of the BSG Sabotage editor or the Brooklyn hipsters are certainly admirable, but while we can speculate, we don’t necessarily know their motivations. About 10 years ago, a friend and amateur filmmaker showed me a video one of his friends had produced. It took scenes from Star Wars and Star Trek II: The Wrath of Khan and recut them so it looked like the Millennium Falcon was fighting the Enterprise. It would show Han Solo shooting, then cut to the Enterprise being hit. Shatner would exclaim “Fire!” and then it would cut to a blast hitting the Millennium Falcon. And so on. Another video from the same guy took the musical number George Lucas had added to Return of the Jedi in the Special Edition, laid Wu-Tang Clan in as the soundtrack, then re-edited the video elements so everything matched up.

These videos sound fun, but not particularly original or even special in this day and age. However, these videos were made ten to fifteen years ago. I was watching them on VHS(!), and the person making the edits was using analog techniques and equipment. It turns out that these videos were how he honed his craft before he officially got a job as an editor in Hollywood. I’m sure there were tons of other videos, probably much less impressive, that he had created before the ones I’m referencing. Now, I’m not saying that the BSG Sabotage editor or the Brooklyn hipsters are angling for professional filmmaking jobs, but it’s quite possible that they are at least exploring their own possibilities. I would also bet that these people have been making videos like this (though probably much less sophisticated) since they were kids. The only big difference now is that technology has enabled them to make a slicker experience and, more importantly, to distribute it widely.

It’s also worth noting that this sort of thing is not without historical precedent. Indeed, the history of editing and montage is filled with this sort of thing. In the 1910s and 1920s, Russian filmmaker Lev Kuleshov conducted a series of famous experiments that helped establish the role of editing in film. In these experiments, he would show a man with an expressionless face, then cut to various other shots. In one example, he showed the expressionless face, then cut to a bowl of soup. When prompted, audiences would claim that the man was hungry. Kuleshov then took the exact same footage of the expressionless face and cut to a pretty girl. This time, audiences reported that the man was in love. Another experiment alternated between the expressionless face and a coffin, a juxtaposition that led audiences to believe that the man was stricken with grief. This phenomenon has become known as the Kuleshov Effect.

For the current discussion, one notable aspect of these experiments is that Kuleshov was working entirely from pre-existing material. And this sort of thing was not uncommon, either. At the time, there was a shortage of raw film stock in Russia. Filmmakers had to make do with what they had, and often spent their time re-cutting existing material, which led to what’s now called Soviet Montage Theory. When D.W. Griffith’s Intolerance, which used advanced editing techniques (it featured a series of cross-cut narratives which eventually converged in the last reel), opened in Russia in 1919, it quickly became very popular. The Russian film community saw this as a validation and popularization of their theories and also as an opportunity. Russian critics and filmmakers were impressed by the film’s technical qualities, but dismissed the story as “bourgeois”, claiming that it failed to resolve issues of class conflict, and so on. So, not having much raw film stock of their own, they took to playing with Griffith’s film, re-editing certain sections of the film to make it more “agitational” and revolutionary.

The extent to which this happened is a bit unclear, and certainly public exhibitions were not as dramatically altered as I’m making it out to be. However, there are Soviet versions of the movie that contained small edits and a newly filmed prologue. This was done to “sharpen the class conflict” and “anti-exploitation” aspects of the film, while still attempting to respect the author’s original intentions. This was part of a larger trend of adding Soviet propaganda to pre-existing works of art, and given the ideals of socialism, it makes sense. (The preceding is a simplification of history, of course… see this chapter from Inside the Film Factory for a more detailed discussion of Intolerance and its impact on Russian cinema.) In the Russian film world, things really began to take off with Sergei Eisenstein and films like Battleship Potemkin. Watch that film today, and you’ll be struck by how modern the editing feels, especially during the infamous Odessa Steps sequence (which you’ll also recognize if you’ve ever seen Brian De Palma’s “homage” in The Untouchables).

Now, I’m not really suggesting that the woman who produced BSG Sabotage is going to be the next Eisenstein, merely that the act of cutting together pre-existing footage is not necessarily a sad waste of talent. I’ve drastically simplified the history of Soviet Montage Theory above, but there are parallels between Soviet filmmakers then and YouTube videomakers today. Due to limited resources and knowledge, they began experimenting with pre-existing footage. They learned from the experience and went on to grander modifications of larger works of art (Griffith’s Intolerance). This eventually culminated in original works of art, like those produced by Eisenstein.

Now, YouTube videomakers haven’t quite made that expressive leap yet, but it’s only been a few years. It’s going to take time, and obviously editing and montage are already well established features of film, so innovation won’t necessarily come from that direction. But that doesn’t mean that nothing of value can emerge from this sort of thing, nor does messing around with videos on YouTube limit these young artists to film. While Roger Ebert’s criticisms of video games as art are valid, more and more, I’m seeing interactivity as the unexplored territory of art. Video games like Heavy Rain are an interesting experience and hint at something along these lines, but they are still severely limited in many ways (in other words, Ebert is probably right when it comes to that game). It will take a lot of experimentation to get to a point where maybe Ebert would be wrong (if it’s even possible at all). Learning about the visual medium of film by editing together videos of pre-existing material would be an essential step in the process. Improving the technology with which to do so is also an important step. And so on.

To return to the BSG Sabotage video for a moment, I think it’s worth noting the origins of that video. The video is clearly having fun by juxtaposing different genres and mediums (it is by no means the best or even a great example of this sort of thing, but the fun is still there; for a better example of something built entirely from pre-existing works, see Shining). Battlestar Galactica was a popular science fiction series, beloved by many, and this video comments on the series slightly by setting the whole thing to an unconventional music choice (though given the recent Star Trek reboot’s use of the same song, I have to wonder what the deal is with SF and Sabotage). Ironically, even the “original” Beastie Boys video was nothing more than a pastiche of 70s cop television shows. While I’m no expert, the music on Ill Communication, in general, has a very 70s feel to it. I suppose you could say that association only exists because of the Sabotage video itself, but even other songs on that album have that feel – for one example, take Sabrosa. Indeed, the Beastie Boys are themselves known for this sort of appropriation of pre-existing work. Their album Paul’s Boutique infamously contains literally hundreds of samples and remixes of popular music. I’m not sure how they got away with some of that stuff, but I suppose it happened before getting sued for sampling was common. Nowadays, in order to get away with something like Paul’s Boutique, you’d need to have deep pockets, which sorta defeats the purpose of using a sample in the first place. After all, samples are often used in the absence of resources, not just because of a lack of originality (though I guess that’s part of it). In 2004, Nate Harrison put together this exceptional video explaining how a six-second drum beat (known as the Amen Break) exploded into its own sub-culture:

There is certainly some repetition here, and maybe some lack of originality, but I don’t find this sort of thing “sad”. To be honest, I’ve never been a big fan of hip hop music, but I can’t deny the impact it’s had on our culture and all of our music. As I write this post, I’m listening to Danger Mouse’s The Grey Album:

It uses an a cappella version of rapper Jay-Z’s The Black Album and couples it with instrumentals created from a multitude of unauthorized samples from The Beatles’ LP The Beatles (more commonly known as The White Album). The Grey Album gained notoriety due to the response by EMI in attempting to halt its distribution.

I’m not familiar with Jay-Z’s album and I’m probably less familiar with The White Album than I should be, but I have to admit that this combination, and the artistry with which the two seemingly incompatible works are combined into one cohesive whole, is impressive. Despite the lack of an official release (one that would have made Danger Mouse money), The Grey Album made many best-of-the-year (and best-of-the-decade) lists. I see some parallels between the 1980s and 1990s use of samples, remixes, and mashups, and what was happening in Russian film in the 1910s and 1920s. There is a pattern worth noticing here: new technology enables artists to play with existing art, then apply what they’ve learned to something more original later. Again, I don’t think that the BSG Sabotage video is particularly groundbreaking, but that doesn’t mean that the entire remix culture is worthless. I’m willing to bet that remix culture will eventually contribute towards something much more original than BSG Sabotage.

Incidentally, the director of the original Beastie Boys Sabotage video? Spike Jonze, who would go on to direct movies like Being John Malkovich, Adaptation., and Where the Wild Things Are. I think we’ll see some parallels between the oft-maligned music video directors, who started to emerge in the film world in the 1990s, and YouTube videomakers. At some point in the near future, we’re going to see film directors coming from the world of short-form internet videos. Will this be a good thing? I’m sure there are lots of people who hate the music video aesthetic in film, but it’s hard to really be that upset that people like David Fincher and Spike Jonze are making movies these days. I doubt YouTubers will have a more popular style, and I don’t think they’ll be dominant or anything, but I think they will arrive. Or maybe YouTube videomakers will branch out into some other medium or create something entirely new (as I mentioned earlier, there’s a lot of room for innovation in the interactive realm). In all honesty, I don’t really know where remix culture is going, but maybe that’s why I like it. I’m looking forward to seeing where it leads.

Blast from the Past

A coworker recently unearthed a stash of a publication called The Net, a magazine published circa 1997. It’s been an interesting trip down memory lane. In no particular order, here are some thoughts about this now defunct magazine.

  • Website: There was a website, using the oh-so-memorable URL of www.thenet-usa.com (I suppose they were trying to distinguish themselves from all the other countries with thenet websites). Naturally, the website is no longer available, but archive.org has a nice selection of valid content from the '96–'97 era. It certainly wasn't the worst website in the world, but it wasn't exactly great either. Just to give you a taste – for a while, it apparently used frames. Judging by archive.org, the site stayed up until at least February of 2000, but the domain lapsed sometime around May of that year. Random clicking around the dates after 2000 yielded some interesting results. Apparently someone named Phil Viger used it as his personal webpage for a while, complete with MIDI files (judging from his footer, he was someone who bought up a lot of URLs and put his simple page on them as a placeholder). By 2006, the domain lapsed again, and it has remained vacant since then.
  • Imagez: One other fun thing about the website is that their image directory was called "imagez" (e.g. http://web.archive.org/web/19970701135348/www.thenet-usa.com/imagez/menubar/menu.gif). They thought they were so hip in the 90s. Of course, 10 years from now, some doofus will be writing a post very much like this one and wondering why there's no "e" at the end of flickr.
  • Headlines: Some headlines from the magazine:
    • Top Secrets of the Webmaster Elite (And as if that weren’t enough, we get the subhead: Warning: This information could create dangerously powerful Web Sites)
    • Are the Browser Wars Over? – Interestingly, the issue I'm looking at is from February 1997, meaning that IE and Netscape Navigator (NN) were still on their 3.x iterations. More on this story below.
    • Unlock the Secrets of the Search Engines – Particularly notable in that this magazine was published before Google existed. Remember Excite? (Apparently, they're still around – who knew?)

    I could go on and on. Just pick up an issue, open to a random page, and you'll find something hopelessly dated or a horrible pun (like Global Warning… get it? Instead of Global Warming, he's saying Global Warning! He's so clever!)

  • Browser Wars: With the impending release of IE4 and the Netscape Communicator suite, everyone thought that web browsers were going to go away, or be consumed by the OS. One of the regular features of the magazine was to ask a panel of experts a simple question, such as "Are Web browsers an endangered species?" Some of the answers are ridiculous, like this one:

    The Web browser (content) and the desktop itself (functions) will all be integrated into our e-mail packages (communications).

    There is, perhaps, a nugget of truth there, but it certainly didn't happen that way. Still, the lines between browser, desktop, and email client are shifting; this guy just picked the wrong central application. Speaking of which, here's another interesting answer:

    The desktop will give way to the webtop. You will hardly notice where the Web begins and your documents end.

    Is it just me, or is this guy describing Chrome OS? His answer, like a lot of the others, is obviously written in 90s terminology, but it describes things that are happening today. For instance, the notion of desktop widgets (or gadgets or screenlets or whatever you call them) comes up multiple times, just not under our terminology.

  • Holy shit, remember VRML?
  • Pre-Google Silliness: "A search engine for searching search engines? Sure, why not?" Later in the same issue, I saw an ad for a program that would automatically search multiple search engines and provide you with a consolidated list of results… for only $70! (The funny part is how little is going on under the hood – see the sketch after this list.)
  • Standards: This one's right on the money: "HTML will still be the standard everyone loves to hate." Of course, the author goes on to speculate that Java applets will rule the day, so it's not exactly prescient.
  • The Psychic: In one of my favorite discoveries, the magazine pitted The Suit Versus the Psychic. Of course, the suit gives relatively boring answers to the questions, but the Psychic, he’s awesome. Regarding NN vs IE, he says “I foresee Netscape over Microsoft’s IE for 1997. Netscape is cleaner on an energy level. It appears to me to be more flexible and intuitive. IE has lower energy. I see encumbrances all around it.” Nice! Regarding IPOs, our clairvoyant friend had this to say “I predict IPOs continuing to struggle throughout 1997. I don’t know anything about them on this level, but that just came to me.” Hey, at least he’s honest. Right?
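
That $70 "consolidated list" program, by the way, was doing something almost embarrassingly simple by today's standards. Here's a minimal Python sketch of the general idea – query several engines, then merge and de-duplicate their ranked results. The fetcher functions are hypothetical stand-ins, not any real engine's API; an actual 1997 version would have scraped each engine's results page over HTTP and parsed it by hand:

```python
def metasearch(query, fetchers):
    """Query several engines and merge their ranked result lists."""
    result_lists = [fetch(query) for fetch in fetchers]
    seen = set()
    merged = []
    # Interleave round-robin so no single engine dominates the top of the
    # merged list, skipping any URL we've already seen.
    for rank in range(max((len(r) for r in result_lists), default=0)):
        for results in result_lists:
            if rank < len(results) and results[rank] not in seen:
                seen.add(results[rank])
                merged.append(results[rank])
    return merged

# Toy usage, with canned results standing in for real engines:
def excite(query):
    return ["http://example.com/a", "http://example.com/b"]

def altavista(query):
    return ["http://example.com/b", "http://example.com/c"]

print(metasearch("vrml", [excite, altavista]))
# -> ['http://example.com/a', 'http://example.com/b', 'http://example.com/c']
```

Fancier metasearch tools of the era presumably re-ranked results by how many engines agreed on them, but the round-robin merge captures the basic trick.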

Honestly, I’m not sure I’m even doing this justice. I need to read through more of these magazines. Perhaps another post is forthcoming…

Computer Desks

I have recently come into possession of a second LCD monitor, and hooked it up to do some dual monitor awesomeness (amazingly enough, I didn’t even need to upgrade my graphics card to do so). The problem is that my current desk is one of those crappy turn-of-the-century numbers that assumes you only have one monitor and thus doesn’t have space for the second. I managed to work around this… by ripping off the hutch portion of the desk, but I could still use a new desk, as this one really has seen better days.

So I started thinking about what I need my desk to do, and have quickly descended into Paradox of Choice hell. At a minimum, a new desk would need to be able to handle:

  • Two Monitors
  • Keyboard and Mouse (Preferably in a pullout thingy)
  • Cable Modem and Router
  • Tower Computer (needs good ventilation, especially considering that there are a couple fans mounted on the side of my computer)
  • Two speakers
  • External Hard Drive
  • Associated Cables/Wires

It’s also worth noting that I often have my TV on in the background. It’s currently positioned to my left, so I can just glance over and see what’s going on. My current desk has a couple of drawers and before I got rid of the hutch, it had other storage space. This allowed me to keep some books, CDs/DVDs, etc… in a handy position. However, it’d probably be just as easy to find some other piece of furniture to handle those (but it would be nice to have a small filing cabinet thing as part of the desk).

In terms of taste, I tend to be a minimalist. I don’t need lots of flying doodads or space-age design. Just something simple that covers the above. In looking around, this seems to be a rarity. As per usual when it comes to this sort of thing, Jeff Atwood has already posted about this, and the comment thread there is quite interesting (and still being updated, years later).

The best desk I’ve found so far seems to be the D2 Pocket Desk. Of course the big problem with that one is that it’s obscenely expensive (even on sale, it’s wayyyy to expensive). But it’s perfect for me. It’s notable almost as much for what you don’t see as what you do see – apparently there’s a big compartment in the back that’s big enough to stuff all the cables, wires, routers, etc… that I need (and you can see the two little holes meant to corral the wires into that area). It being as expensive as it is, it’s not something I’m seriously considering, but I’m trying to find a cheaper, but similarly designed option (perhaps something that doesn’t use cherry wood, which is apparently quite expensive). I’m kinda surprised at how few computer desks even attempt to account for cable management. Anyway, here’s a quick picture:

D2 Pocket Desk Picture

The other notable option I found at Jeff's site was from a company called Anthro. Not the model he mentions, which is a monstrosity, but Anthro features lots of models and everything is customizable in the extreme. While they seem like good quality desks, they're also much more reasonably priced. Unfortunately, their configuration tool does little to help you visualize what you'll end up with. Still, the 48″ AnthroCart seems like it would fit my needs, and given the modular nature of the desk, I can always add on to it later. If you look at the 3rd picture on that page, it's kinda what I'm looking for (but without the bottom shelf, and maybe with a filing cabinet attachment added).

The big questions I have about the AnthroCart are how well their keyboard/mouse solutions work (all of the varieties seem to be quite small – and my current tray is actually kinda large, which I really like for some reason…). There's also the question of how well those extra shelves on the top and bottom work. And color. Yeah, so this one is definitely in Paradox of Choice territory. However, they're apparently pretty agreeable and will help guide you in choosing the various accessories, etc… So maybe I'll start up a chat with a rep when I get a chance…

Some other stuff I’ve been looking at:

  • Liso Computer Desk with Keyboard (from Target)
  • Onyx Matrix Computer Desk (from Office Depot)
  • Drake Desk (from Crate & Barrel – would be good if it weren’t for the glass top)
  • Ikea has some interesting stuff, but most of it is too small. On the other hand, for my bedroom, I did buy one of those generic Ikea tables and made it work as a desk. But it’s also kinda tucked into the corner of my room – the new desk needs to be in the middle of my living room, so it needs to look somewhat more presentable…

Any other ideas? As of right now, I'm thinking a simple AnthroCart setup would be best, but I'm still trying to find an imitation D2 Pocket Desk, which I think would be ideal…

Update: Desk 51 from BlueDot (via) is pretty interesting. I’m wondering how sturdy it is.

Again Update: This Landon Desk from Crate and Barrel has grown on me a bit, especially after seeing a similar desk on Flickr. The good thing about C&B is that there is a store near me, so I can at least check it out in person…

Another Update: Well, that's an idea… which I suppose also brings up the "build your own" option – that could be a rewarding experience.

Yet Another Update: For reference, here’s a pic of my desk as currently configured, and here’s the surprisingly sturdy keyboard tray.