Computers & Internet

Another Store You Made

I’m totally stealing an idea from Jason Kottke here (let’s call it a meme!), but it’s kinda neat:

Whenever I link to something at Amazon on kottke.org, there’s an affiliate code associated with the link. When I log into my account, I can access a listing of what people bought. The interesting bit is that everything someone buys after clicking through to Amazon counts and is listed, even items I didn’t link to directly. These purchased-but-unlinked-to items form a sort of store created by kottke.org readers of their own accord.

I have about 1/1000000th the readership of Kottke, but I do have an Amazon affiliate account (it doesn’t even come close to helping pay for the site, but it does feed my book/movie/music/video game addictions). Of course, I don’t sell nearly as much stuff either, but here are a few things sold that haven’t been directly linked:

And that about covers the unexpected stuff. I do get lots of Asimov orders as well as Christmas movie orders, but those are popular sections of the site…

Interrupts and Context Switching

To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc… When you get into the guts of a computer and start looking at how it works, it’s amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform millions of these operations per second, so it still feels fast. The processor performs these operations in a serial fashion – basically a single-file line of operations.
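
To make that concrete, here’s a tiny Python sketch (my own illustration, not something from real hardware) of how addition can be built from nothing but bit-level operations – the same flavor of work that logic gates do:

    def add(a, b):
        # Works for non-negative integers. XOR sums each pair of bits
        # while ignoring carries; AND finds the positions that generate
        # a carry, which gets shifted left and folded in on the next pass.
        while b:
            carry = (a & b) << 1
            a = a ^ b
            b = carry
        return a

    print(add(5, 7))  # 12 - several passes for one "simple" addition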

This single-file line can be quite inefficient, and there are times when you want a computer to be processing many different things at once rather than one thing at a time. Most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself. When a program needs some data, for example, it may have to read that data from the hard drive first. That may only take a few milliseconds, but the CPU would sit idle during that time – quite inefficient. To improve efficiency, computers use multitasking. A CPU can still only run one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called Context Switching. Ironically, context switching adds a fair amount of overhead of its own. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load that state from memory. Fortunately, this overhead is usually more than offset by the efficiency gained from not letting the CPU sit idle.
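
As a rough sketch of the idea (mine, not how any real operating system is written), Python generators can stand in for tasks: a generator’s suspended state plays the role of the saved CPU state, and a scheduler decides which task runs next:

    def task(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # a point where the scheduler may switch away

    # Toy round-robin scheduler: next() "restores" a task's saved state
    # and runs it to the next yield; putting it back in the queue "saves"
    # it again. A real context switch also saves registers, cache state,
    # etc., which is where the overhead comes from.
    ready = [task("disk-read", 3), task("compute", 3)]
    while ready:
        current = ready.pop(0)
        try:
            next(current)
            ready.append(current)
        except StopIteration:
            pass  # task finished; nothing left to restore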

If you can do context switches frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Signaling the CPU to do a context switch is often accomplished with a mechanism called an Interrupt. For the most part, the computers we’re all using are interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.
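
On Unix systems you can get a loose feel for this in Python with signals (an analogy only, and Unix-only since it relies on signal.alarm): the operating system interrupts the busy loop below, runs the handler, and then control returns to wherever the loop left off:

    import signal

    def handler(signum, frame):
        print("interrupt! higher-priority work runs here")

    signal.signal(signal.SIGALRM, handler)
    signal.alarm(1)  # ask the OS to interrupt this process in one second

    total = 0
    for i in range(200_000_000):  # long-running "foreground" work
        total += i
    print("foreground work finished:", total)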

This might sound tedious to us, but computers are excellent at this sort of processing. They perform millions of operations per second and generally have no problem switching from one program to another and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing – we can’t change the speed of light or the size of atoms, among other physical constraints – so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be figuring out how to use parallel computing as well as we now use serial computing. Hence all the talk about multi-core processing (most commonly with 2 or 4 cores).

Parallel computing can do many things that are far beyond our current technological capabilities. For a perfect example, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things that are far beyond the abilities of the biggest and most complex computers in existence. The reason is that there are truly massive numbers of neurons in our brain, and they’re all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it’s not so much the number of neurons we have as how they’re organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn’t mean they’re proportionally more intelligent. An elephant’s brain is much larger than a human’s, but elephants are obviously much less intelligent than humans.

Of course, we know very little about the details of how our brains work (and I’m not an expert), but it seems clear that brain size and neuron count are not as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are “digital” in that if you were to take a snapshot of the brain at a given instant, each neuron would be either “on” or “off” (i.e. a 1 or a 0). However, neurons don’t work like digital electronics. When a neuron fires, it doesn’t just turn on, it pulses. What’s more, each neuron accepts input from and provides output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
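
Artificial neural networks caricature this idea with a weighted sum and a threshold. A minimal sketch (all numbers hypothetical, and nothing like the pulsing behavior of real neurons):

    # Firing states of a few upstream "neurons" and the weight of each
    # connection (a negative weight inhibits rather than excites).
    inputs = [1, 0, 1, 1]
    weights = [0.9, 0.4, -0.3, 0.2]
    threshold = 0.7

    activation = sum(i * w for i, w in zip(inputs, weights))
    # 0.9 + 0.0 - 0.3 + 0.2 = 0.8, which clears the threshold
    print("fires" if activation >= threshold else "stays quiet")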

This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable – things humans cherish and that computers can’t really do on their own.

However, this all comes with its own set of tradeoffs. The most relevant one for this post is that humans aren’t particularly good at context switches. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious – heart pumping, breathing, processing sensory input, etc… Those are also things we never really cease doing (while we’re alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).

In a computer, everything happens serially, so it’s easy to predict how various inputs will affect the system. What’s more, when a computer stores its CPU’s current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, this sort of context switching is much more difficult for us. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don’t necessarily have a more effective memory system, they’re just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so a context switch introduces a lot of thrash in whatever you were originally doing: you have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time with). When you’re working on something specific, you’re dedicating a significant portion of your conscious brainpower to that task. In other words, you’re probably engaging millions if not billions of neurons in the task. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. A computer only needs to ensure that the current state of a single CPU is saved. Your brain has a much tougher job, and its memory isn’t nearly as reliable as a computer’s. I like to refer to this as mental inertia. This sort of issue manifests itself in many different ways.

One thing I’ve found is that it can be very difficult to get started on a project, but once I get going, it becomes much easier to remain focused and get a lot accomplished. Getting started can be a problem for me, though, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky – it’s called Fire and Motion. A quick excerpt:

Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I’ve got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don’t realize that it’s already 7:30 pm.

Somewhere between step 8 and step 9 there seems to be a bug, because I can’t always make it across that chasm. For me, just getting started is the only hard thing. An object at rest tends to remain at rest. There’s something incredible heavy in my brain that is extremely hard to get up to speed, but once it’s rolling at full speed, it takes no effort to keep it going.

I’ve found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as “flow” or being “in the zone.” This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.

From my own personal experience a couple of years ago during a particularly demanding project, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two-hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hope of getting more done while no one else is around (and complain when people do show up that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.

A key component of flow is finding a large, uninterrupted chunk of time in which to work. It’s also something that can be difficult to achieve at a lot of workplaces. Mine is a 24/7 company, and the nature of our business requires frequent interruptions, so many of us are in a near constant state of context switching. Between phone calls, emails, and instant messaging, we’re sure to be interrupted many times an hour if we’re constantly keeping up with them. What’s more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have calendars full of meetings, which only makes it more difficult to concentrate on something important.

Tell me if this sounds familiar: You wake up early and during your morning routine, you plan out what you need to get done at work today. Let’s say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails and by the end of the day, you’ve accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.

Another example: if it’s 2:40 pm and I know I have a meeting at 3 pm, should I start working on a task I know will take me 3 solid hours to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I’ll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are that when I get back to my desk, I’ll have to refamiliarize myself with the project and what I had already done before proceeding.

Of course, none of what I’m saying here is especially new, but in today’s world it can be useful to remind ourselves that we don’t need to always be connected or constantly monitoring emails, RSS, facebook, twitter, etc… Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It’s funny: when you look at attempts to increase productivity, most efforts focus on managing time. While that’s important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).

(Note: As long and ponderous as this post is, it’s actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored to the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don’t have that much time at work dedicated to blogging on our intranet), I’ve decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that could be repurposed in my professional life (and vice versa).)

Screenshots

When I write about movies or anime, I like to include screenshots. Heck, half the fun of the Friday the 13th marathon has been the screenshots. However, I’ve been doing this manually and it’s become somewhat time-intensive… So I’ve been looking for ways to make the process of creating the screenshots easier. I was going to write a post about a zombie movie tonight, and I had about 15 screenshots I wanted to use…

I take screenshots using PowerDVD, which produces .bmp files. To create a screenshot for a post, I typically crop out any unsightly black borders (they’re ugly and often asymmetrical), convert to .jpg, and rename the file. Then I create a smaller version (typically 320 pixels wide, maintaining the aspect ratio), using a variant of the original .jpg’s filename. The smaller version is what you see in my post, while the larger one is what you see when you click on the image.
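
In principle, that whole routine is scriptable. Here’s a rough sketch using the Python Imaging Library (a sketch of my own; the folder name and crop box are made up, and real screenshots would need per-movie crop values):

    import glob, os
    from PIL import Image

    CROP_BOX = (8, 60, 712, 420)  # left, top, right, bottom (hypothetical)
    THUMB_WIDTH = 320

    for path in glob.glob("screenshots/*.bmp"):
        base = os.path.splitext(path)[0]

        # Crop the black borders and save a full-size .jpg.
        img = Image.open(path).crop(CROP_BOX)
        img.save(base + ".jpg", quality=90)

        # Smaller copy for the post body, preserving the aspect ratio.
        height = int(round(img.size[1] * THUMB_WIDTH / float(img.size[0])))
        small = img.resize((THUMB_WIDTH, height))
        small.save(base + "-small.jpg", quality=90)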

I’ve always used GIMP to accomplish this, but it’s a pretty manual process, so I started looking around for batch image processing programs. There are tons of them out there, and I found several promising ones. Batch Image Resizer was pretty awesome and did exactly what I wanted, but the free trial version inserted a huge unwanted watermark that essentially rendered the output useless. I looked at a few other free apps, but they didn’t meet some of my needs.

Eventually, I came across the open source Phatch, which looked like it would provide everything I needed. The only issue was the installation process. It turns out that Phatch is written in Python, so in addition to Phatch, you also need to download and install Python, wxPython, the Python Imaging Library, and the Python Win32 Extensions. What’s more, the Phatch documentation doesn’t account for the fact that newer versions of all of those are available and not all of them are compatible with each other. After a false start, I managed to download and install all the necessary pieces. Then, to run the application, I have to use the goddamned command line. Yeah, I know Windows users don’t get much support from the Linux community, but this is kinda ridiculous.

But I got it all working and now I was on my way. As I’ve come to expect from open source apps, Phatch has a different way of setting up your image processing than most of the other apps I’d seen… but I was able to figure it out relatively quickly. According to the Phatch documentation, the Crop action looked pretty easy to use… the only problem was that when I ran Phatch, Crop did not appear to be on the list of actions. Confused, I looked around the documentation some more and it appeared that there were several other actions that could be used to crop images. For example, if I used the Canvas action, I could technically crop the image by specifying measurements smaller than the image itself – this is how I eventually accomplished the feat of converting several screenshots from their raw form to their edited versions. Here’s an example of the zombietastic results (for reference, a .jpg of the original):

Zombietastic

Bonus points to anyone who can name the movie!

The process has been frustrating and it took me a while to get all of this done. At this point, I have to wonder if I’d have been better off just purchasing that first app I found… and then I would have been done with it (and probably wouldn’t be posting this at all). I’m hardly an expert on the subject of batch image manipulation and maybe I’m missing something fairly obvious, but I have to wonder why Phatch is so difficult to download, install, and use. I like open source applications and use several of them regularly, but sometimes they make things a lot harder than they need to be.

Update: I just found David’s Batch Processor (a plugin for GIMP), but its renaming functionality is horrible (you can’t actually rename the images – you can only add a prefix or suffix to the original filename). Otherwise, it’s decent.

And I also found FastStone Photo Resizer, which does everything I need it to do, and I don’t need to run it from the command line either. This is what I’ll probably be using in the future…

Update II: I got an email from Stani, who works on Phatch and was none too pleased about the post. It seems he had trouble posting a comment here (d’oh – that’s the second person this week who mentioned that, which is strange, as commenting seems to have been working fine for the past few months and I haven’t changed anything…). Anyway, here are his responses to the above:

As your comment system doesn’t work, I post it through email. Considering the rant of your blog post, I would appreciate if you publish it as a comment for:

https://kaedrin.com/weblog/archive/001652.html

> Eventually, I came accross the open source Phatch, which looked like it would provide everything I needed.

Thanks for taking the effort to try out Phatch.

> What’s more is that the Phatch documentation has not taken into account that new versions of all of those are available and not all of them are compatible with each other.

The Phatch documentation is a wiki. The installation process for Windows would be much less a pain if Windows users would help improving the wiki and keeping the wiki up to date.

Unfortunately I’ve run into this behavior:

http://photobatch.wikidot.com/forum/t-145786/windows-installation-question

Luckily Linux users update the wiki themselves or send me the instructions, but don’t run away. (Hint, hint)

I know several people have installed Phatch on Windows, but none of them documented it for their fellow Windows users. I only update the instructions with every major release.

> Then, to run the application, I have to use the goddamned command line.

If you installed Python right, you could just double click on phatch.py to start it or make a shortcut for it on your desktop.

> Yeah, I know windows users don’t get much support from the linux community, but this is kinda ridiculous.

I hope to see your contribution on the wiki. Until then the situation is indeed ridiculous.

> the Crop action looked pretty easy to use…

You’re right, but the crop action is part of the next release, Phatch 0.2 which is packed with many new features. If you want to be a beta tester, please let me know.

> maybe I’m missing something fairly obvious, but I have to wonder why Phatch is so difficult to download, install, and use. I like open source applications and use several of them regularly, but sometimes they make things a lot harder than they need to be.

I hope I explained it to you. I only use Windows to test my open source software. Maybe you would want me to make a one click installer. You probably understand that such negative ranting is not really stimulating.

And my response:

Apologies if my ranting wasn’t stimulating enough, but considering that it took a couple of hours to get everything working and that I value my time, I wasn’t exactly enthused with the application or the documentation. Believe it or not, I did click on the “edit” link on the wiki with the intention of adding some notes about the updated version numbers, but it said I had to be registered, and I was already pretty fed up and not in the mood to sign up for anything. I admit that I neglected to do my part, but I got into this to save time and it ended up being an enormous time-sink. If I get a chance, I’ll take another look.

It looks like I can just double-click on the .py file, but the documentation says to run it from the command line (another thing for me to fix, perhaps?)

As for a simple installer, I would love to… if I had the time, motivation, or, uh, talent to create one. In the meantime, I’ll see what I can do about the documentation, but honestly, I doubt that will help much until someone does create a Windows installer.

Sorry about the comment functionality on my blog. I’ve been having issues with spammers and the plugin I’m using to block spammers seems to block legitimate comments sometimes as well (Question: did you use the “preview” function?). Yet another thing I’ll have to look into…

Update III: Ben over at Midnight Tease has been having fun with Open Source as well…

Nerdy

I’ve always considered myself something of a nerd, even back when being nerdy wasn’t cool. Nowadays, everyone thinks they’re a nerd. MGK recently noticed this:

Recently, I was surfing the net looking for lols, and came across a personal ad on Craigslist. The ad was not in and of itself hilarious, but one thing struck me. The writer described herself as “nerdy,” and as an example of her nerdiness, explained that she loved to watch Desperate Housewives.

My god, people, have we allowed “nerdy” to be defined down so greatly that watching Desperate Housewives – a top 20 Nielsen primetime soap opera with no actual nerd content per se – qualifies as “nerdy” now? That is just wrong. The nerdular act cannot be allowed to be so mainstream.

To address this situation, he has devised “a handy guide for people to define their own nerdiness, based on a number of nerdistic passions.” I’m a little surprised at how poorly I did in some of these categories.

  • Batman – Not Nerdy. When I think about it, it’s not that surprising. After all, I have never read any of the comic books, not even Year One or The Dark Knight Returns, which MGK specifically calls out later in his criteria as not being particularly nerdy. That said, I wonder how watching The Dark Knight 5 times (three times in the theater) in less than a year qualifies.
  • Star Wars – Slightly Nerdy. Now this one is surprising. Sure, according to this guide, I’m nerdier about Star Wars than I am about Batman, but only a little. I suppose if he had loosened the criteria or chosen a different random fact for the “nerdy” level, I could easily have reached that level, for I have had some experience with the “expanded universe” Star Wars novels. One other gripe is that no self-respecting nerd would defend the idea of Jar Jar Binks!
  • Harry Potter – Somewhere between Not Nerdy and Slightly Nerdy. I didn’t particularly love Harry Potter and the Order of the Phoenix, and my dislike may disqualify me from the Slightly Nerdy level. On the other hand, I didn’t particularly hate the novel either, and I had no problem blowing through it rather quickly.
  • Magic: The Gathering – Slightly Nerdy. I have to say that I didn’t play this game that much, but I really did enjoy it when I did. But it got way too complicated later on, and some people took it wayyy too seriously.
  • H.P. Lovecraft – Dangerously Nerdy. Finally! Though I have to admit that I don’t qualify for three of the lesser levels… However, I have read several of his stories, which is apparently dangerously nerdy.
  • Nerd Television – Dangerously Nerdy. Totally. The two shows I haven’t watched much of are the lowest ranked ones. I’ve seen a significant portion of the others, including The Adventures of Brisco County Jr. (at this point, even recognizing what Brisco County Jr. is, is probably nerdworthy).
  • Star Trek – I think I might be Fairly Nerdy here; otherwise I’m Not Nerdy. It’s just that I don’t actually remember which one Picard rode the dune buggy in. That probably disqualifies me. I do love TNG, though. I could never get into any of the other spinoffs.
  • Computer Use – Nerdy. Potentially Really Nerdy, but there are definitely a couple of coding jokes in XKCD that I haven’t gotten (though I get a pretty good portion of them).

Again, I am a bit surprised at how non-nerdy I am. I mean, aside from a couple of dangerously nerdy subjects, I’m not very nerdy at all. How did you do?

Firefox versus Opera

I use Opera to do most of my web browsing and have done so for quite a while. Is it time to switch to another browser? Or does Opera still meet my needs? After some consideration, the only realistic challenger is Firefox. What follows is not meant to be an objective comparison, though I will try to maintain impartiality and some of the criteria will be more fact based than others. Still, I’m not claiming this to be a definitive guide or anything. There are many features of both browsers that appeal to me, and many that I find irrelevant. Your experience will probably be different. Anyway, to start things off, a little history:

I first became aware of Opera in the late 1990s and I tried out version 3.5 and 4, but neither really made much of an impression. Plus, at the time, Opera was trialware… there was a free trial, but after that ended you needed to purchase the software if you wanted to keep using it. Starting with version 5, Opera became free, but it was ad-supported, and there was this big, honking banner ad built into the browser. On the other hand, Opera 5 was also the first browser to implement mouse gestures, the most addicting browser feature I’ve encountered (more on this later). As time went on and other browsers emerged, Opera finally relented and released a completely free browser in 2005. I’ve used Opera as much as possible since then, though I’ve occasionally used other browsers for various reasons. The biggest complaint I’ve had about Opera is that some websites don’t render or operate correctly in Opera, thus forcing me to fire up IE or FF. This complaint has lessened with each successive release though, and Opera 9.x seems to be compatible with most websites. The only time I find myself opening another browser is to watch Netflix online movies, which only work in IE (more on this later). Opera is certainly not a perfect browser, but each release seems to contain new and innovative features, and it has always served me well.

The only browser that has really compared with Opera is Firefox. It’s based on the open source Mozilla project, which began in 1998 as a replacement for the Netscape 4.x browser (which was badly in need of an overhaul). Unfortunately, development of the open source browser was slow going, allowing Microsoft to completely dominate the market. However, version 1.0 of the Mozilla Application Suite (which included more than just a browser) launched in 2002. It was bloated and slow, but the underlying code (particularly the rendering engine, named Gecko) was used as the base for several new projects, including Firefox. Firefox 1.0 was released in late 2004 and has been picking up steam ever since. It’s the first browser to challenge IE’s dominance of the market, and it’s also far superior to IE. The current version of Firefox is mature and stable, and a new version (3.0) is on its way that will supposedly address many of the current complaints about FF.

Of course, these are not the only two browsers out there. Internet Explorer is notable for its widespread adoption (during Q2 of 2004, IE had an astounding 95% share of the market). IE isn’t very good compared to the competition, but its one virtue is that most websites will load and render properly in it (and some websites will only work in IE). As a web developer, I have an intense dislike for IE, as it has poor standards support and is generally a pain to work with (especially IE6). IE7, while an improvement in many ways, also features some bizarre interface changes that make the browser less usable.

Also of note is Safari, Apple’s default browser in OS X. Based on the open source KHTML engine (which powers KDE’s Konqueror, the primary open source competitor to Mozilla/Firefox), it implements many of the same features as Opera and FF, but in a simple, lightweight way. I’ve never been much of a fan of Safari, though it should be noted as a valid competitor. It’s a solid browser, fast and clean, but ultimately nothing really special (perhaps with more use, I would be won over). Finally, there are a number of smaller scale or specialized browsers like Flock (which has many features tailored around integrating with social networking sites), but nothing there really fits me.

So the most realistic options for me are Opera and Firefox. Both have new browsers in Beta (or higher), but I’ll be primarily using the current releases (Opera 9.27 and Firefox 2.0.0.14). I’ve played around with Opera 9.5 and Firefox 3 RC1 and will keep them in mind. For reference, I’m running a PC with Intel Core 2 Duo (2.4 GHz), 2 GB RAM, and Windows XP SP2.

  • Default/Native Features: These first two criteria are tricky because they reflect the underlying philosophies of the two companies. Opera clearly has the better feature set out-of-the-box. Firefox is no slouch, of course, but it can’t compete with the quality and quantity of Opera’s default feature set. Both browsers have strong standards support, tabbed browsing, popup blocking, integrated web search, and other standard browser features. Now here’s the tricky part. Opera has several features that FF doesn’t. However, FF has one big feature that Opera doesn’t, and that’s its Extensions and Add-Ons (more on that in a moment). Opera does have a few major pieces of native functionality, like Mouse Gestures and Speed Dial, as well as other, smaller touches, like paste-and-go and the Trash Can. Including all these features by default has its disadvantages as well, especially when you consider the features that aren’t very useful. Opera includes an email client (which is decent, except that I don’t use it anymore), integrated BitTorrent support (which is awful and should be disabled), and the particularly weird Widgets (which are near useless, more on this below). This leads to the frequent claim by Firefox supporters that Opera is “bloated” with extra features. I suppose that’s technically true, but then, Opera is also a smaller download (Opera 9.5b2 is 5,117 KB versus Firefox 3 RC1’s 7,317 KB), takes up less space on the HD (Opera at 6.02 MB versus Firefox at 22.6 MB, though FF also has Add-Ons), and has a lower memory footprint. Call it bloated if you like, but that doesn’t mean that FF isn’t bloated too (honestly though, this is a quibble – both are way, way better than IE).

    Winner: Opera

  • Add-Ons/Extensions/Plugins: While Firefox does not have many features installed by default, it does have support for Add-Ons, and there is a huge community of developers and a large number of useful Add-Ons available for download. Many of the things Opera does natively can be replicated using a FF Add-On (in my experience, the Add-On is not as good as the native support, but passable). In effect, Firefox actually has more features available than Opera because of these Add-Ons. Now, this philosophy also has its drawbacks. First, you have to seek out and install each Add-On, and second, some Add-Ons are poorly written and cause performance problems within FF. In the end, though, the usefulness of the Add-Ons outweighs the negatives. Opera remains stalwart in its refusal to implement any sort of plugin system (beyond the rudimentary, circa-1995 Netscape-like system it has now), though it did launch something called Widgets, which are pretty much worthless. Opera’s reasoning for not supporting extensions is sound, but also limiting:

    Opera does not support third party extensions. Opera has rather incorporated the most useful and popular features in its browser and holds itself accountable for the functionality of these features. With integrated features rather than extensions, users are not subjected to the vulnerabilities of extensions created by third parties, which may or may not go through a verification or testing process. With the largest Web browser development lab in the world, Opera ensures that all of its features are smoothly integrated, tested and ready for the user.

    This is certainly one way to approach the situation, and it’s also probably the reason why Opera’s native functionality works better than Firefox’s Add-Ons, but again, it’s quite limiting. More than anything else, Extensions are what would make me switch from Opera to FF. Opera is very innovative and they were the first to implement many features into their browser (for instance, tabbed browsing, mouse gestures, and more recently, speed dial), but even when Opera does manage to implement a brand new feature not in FF, it doesn’t take long for someone to put together an Add-On to duplicate the functionality. I’ll talk a little more about my favorite extensions as we go. Again, the positives of having an open system for third-party extensions far outweigh the negatives.

    Winner: Firefox

  • Other Customization: Both browsers are highly customizable and powerful. The interface customization abilities are more extensive in Opera, and its Theme manager is easier to use, but Firefox can generally follow along, though it sometimes needs to rely on an Extension to allow the customization. I don’t do a whole lot of advanced configuration in either browser, but both have ways to configure various preferences beyond the basic options in the menus.

    Winner: Tie

  • General Web Browsing: There are a lot of elements to this that will be separated out (i.e., Mouse Gestures, speed, performance, etc…), so what this amounts to is how well each browser loads pages. Since Opera has never commanded more than a few percent of the browser share, most web development doesn’t take Opera into account. In the past, this meant that many pages did not look right or operate correctly in Opera. As time has gone on and web standards have become more prevalent, Opera has improved considerably in this respect (well, technically, Opera has always been relatively standards compliant; it’s just that the standards are being used more these days), to the point where I now very rarely need to open a different browser. However, there are still pages that render poorly in Opera, and one page I use frequently – Netflix’s streaming video player – won’t work in it at all. Of course, it won’t work in Firefox either, but Firefox has one of those crazy Add-Ons called IE Tab, which loads an instance of IE inside Firefox’s tabs (meaning that you don’t have to exit Firefox or fire up a separate IE window). Firefox has captured around 15% of the market and is a favorite of the web development community (see the next bullet for more), so it has much better support amongst websites. Opera still lags behind because of its small market share, to the point where even Internet software giants like Google don’t launch applications with strong Opera support (for instance, every time Google Reader upgrades its interface, it stops working in Opera for a few days while the Google developers scramble to issue a fix).

    Winner: Firefox

  • Mouse Gestures: This is probably the most important piece of functionality a browser must have for me. Browsing the internet is a mouse intensive activity, and Opera realized early on that providing this functionality would drastically improve the browsing experience. Opera has native Mouse Gestures support, while Firefox has an Add-On (actually, it has several, but only one of them is worth its salt) that provides similar functionality. However, Opera’s functionality has always felt smoother and easier to use. The FF Add-On is a little buggy, the browsing experience is a little rougher, and it seems to be easier to screw up a gesture. I think part of this is that Opera has more caching enabled by default than Firefox, which leads to a more seamless experience when browsing. I’m sure there are ways to make FF more responsive, but I haven’t played around with it (and Opera is fantastic by default). I might not be representative of the general internet population, but I think this is one of the most useful and important features a browser can have, and Opera’s implementation is just plain better.

    Winner: Opera

  • Web Development Tools: Part of my job requires frontend web development, and Firefox unquestionably has the better web development tools. The Web Developer Toolbar and Firebug tandem is difficult to beat. Opera’s latest revision of its developer tools, called Dragonfly, is an impressive leap forward and deserves more inspection, but my initial impression is that it still has a ways to go before it catches up with Firefox’s Add-Ons.

    Winner: Firefox

  • Speed: Opera is often the winner in various benchmark tests, including this relatively old but thorough comparison of browser speeds (it’s been updated a few times and has Opera 9 and FF 2, but is now retired and does not contain stats for the latest releases). Similarly, spot-checking various other benchmarks seems to further indicate Opera’s speed. Then again, some initial reports of FF3 seem to indicate an improvement. As always, you have to take these sorts of benchmarks and reports with a grain of salt. My subjective perception of speed is that Opera is faster, but I haven’t used FF3 very much, and I’m also not sure how much of that speed is due to caching settings.

    Winner: Opera

  • Performance: This one is trickier. In my admittedly arbitrary and unscientific test, I opened 10 tabs of commonly visited websites in both browsers. Opera was using ~99 MB, while FF was using ~150 MB. (Sites used include Kaedrin Weblog, CBS Sportsline’s Fantasy Baseball LiveScoring page, GMail, Google Reader, Wikipedia, IMDB, and a few others.) It’s worth noting that Firefox has long drawn complaints about memory usage, especially with a lot of tabs open. In some cases, memory issues were traced to malfunctioning Add-Ons or plugins. I’ve seen other benchmark tests with closer results, and apparently FF 3 has made massive improvements in this area. In my own subjective experience, FF tends to bog down, especially when I have many tabs open, so I’m going to give this to Opera, but if FF 3 works out the way everyone thinks, this may be up for grabs. I’d like to do some more detailed and formal tests on this one though (perhaps later this week).

    Winner: Opera

  • Intangibles: As I’ve already mentioned, I primarily use Opera to browse, so I am obviously biased towards Opera. I suppose there’s also something to be said for rooting for the underdog, though when it comes to usability and performance, that shouldn’t matter (and really, it doesn’t – Opera is a genuinely great browser). And finally, Opera is more innovative than any other browser. They had tabbed browsing years and years before anyone else, their implementation of Mouse Gestures was revolutionary (for me, at least), and more recently, Speed Dial has become a favorite of mine. Their advances on small interface issues (like the Trash Can or Paste-and-Go) are rarely noted, but are very useful (enough so that FF Add-Ons have been created to replicate them). The fact that Firefox can do all of these things doesn’t mean its developers would have come up with them first, and I suppose that’s worth mentioning. On the other hand, Firefox is an open source project (there is some controversy about that, but it’s still better than Opera in this respect), and its philosophy of Add-Ons allows for a much broader range of browser capabilities and customization. In general, I prefer openness to closed systems, so there’s another point for Firefox. It’s also worth noting that Firefox’s market share has been steadily increasing while Opera’s has been decreasing (and when your high point is around 2.5%, that’s not saying much). Opera has made a name for itself in the embedded market (i.e. it’s on lots of cell phones and other hardware, like the Wii), so it won’t be going away anytime soon, but it seems like Firefox is moving faster now. This is a really close one, but I’ll lean towards Firefox because it seems to have a brighter future.

    Winner: Firefox

Well look at that, we’ve got a tie. Opera and Firefox have each won four of the eight decided categories above (with one tie), which means I’ll have to come up with some sort of tie-breaker or weighting. I think I’m going to end up staying with Opera for now, with the caveat that Firefox seems to have a brighter future. Opera does the right things really well, while Firefox is more flexible and open. I also tend to use Firefox as my primary browser for web development efforts (though that’s a strange case, as I use all browsers in web development, even if the FF Web Developer Toolbar and Firebug are really indispensable). For day-to-day activity, though, Opera is still good for me.

So what does the future hold? If Opera continues to lose market share and doesn’t find a way to answer Firefox’s extensions, it’s going to be in real trouble (Opera seems to think the Widgets system will do this, but it really won’t). Honestly, if FF 3 really does solve the memory problems, I might even be switching over soon.

Netflix Activity

The recent bout with my TV-on-DVD addiction necessitated an increase in Netflix usage, which made me curious. How well have I really taken advantage of the Netflix service, and is it worth the monthly expense?

If I were to rent a movie at a local video store like Blockbuster, each rental would cost somewhere around $4 (an extremely charitable estimate, as I’m sure it’s probably closer to $5 at this point), plus the expense in time and effort (I mean, come on, I’d have to drive about a mile out of my way to go to one of these places!). Netflix costs me $15.99 a month for the 3-disc-at-a-time plan (the plan was $17.99 when I signed up, but the price has dropped twice during my roughly two years of membership), so it takes about 4-5 Netflix rentals a month to recoup my costs and bring the price of an average rental down below $4. I’ve been a member for one year and ten months… how did I do (click for a larger version)?
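
Before looking at the chart, the break-even arithmetic is easy to sanity-check in a few lines of Python (a quick sketch of the math above, using the current $15.99 price):

    monthly_fee = 15.99   # current 3-disc-at-a-time plan
    store_rental = 4.00   # charitable estimate for a video store rental

    # Rentals per month needed before Netflix beats the video store:
    print("break-even: %.1f rentals/month" % (monthly_fee / store_rental))

    # Per-rental cost at a few monthly volumes:
    for n in (4, 6, 9, 13):
        print("%2d rentals/month -> $%.2f each" % (n, monthly_fee / n))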

My Netflix Activity Chart

A few notes on the data:

  • The chart shows both DVD rentals and movies or shows watched online through Netflix’s “Watch Instantly” service. A couple of distinctions should be made here: DVD rentals are counted by the date the DVD was returned, while Watch Instantly rentals are counted when you watch them. Also, when watching a TV series on Watch Instantly, each episode counts as a separate rental (a DVD usually holds 3-4 episodes, but each episode streamed counts on its own).
  • As you can see, my initial usage was a little erratic, though I apparently tend to fall into a 4-5 month pattern (and you can see two nearly identical curves in 2007) where DVD rentals range from 6-13 per month. 13 appears to be my ceiling for a month, though I’ve hit that several times.
  • I’ve only fallen below the 4-rentals-per-month rate needed to bring the average rental below $4 once (twice if you count July 2006, but that was my first month of service and does not constitute a full month’s worth of data). To be honest, I don’t remember why I only returned 2 movies in January 2007, but that was the first and only time I fell below the necessary 4 rentals.
  • My Watch Instantly service usage started off with a bang in July 2007 but quickly trailed off until 2008, when usage skyrocketed. This is when I discovered the TV show Dexter and quickly worked my way through all of the first season episodes (13 in all). Following Dexter, I started in on Ghost in the Shell: Stand Alone Complex and I just finished that today (expect a review later this week), so that means I watched 26 episodes online. Expect this to drop sharply next month (though I still plan on using it significantly, as I’ll be following along with Filmspotting’s 70’s SF marathon, which features several movies in the Watch Instantly catalog). All in all, it’s a reasonable service, though I have to admit that watching it on my computer just isn’t the same – I bought that 50″ widescreen HDTV for a reason, you know…
  • You’ll also notice that both March and April of 2008 have me hitting the ceiling of 13 movies per month. This is the first time I’ve done that in consecutive months and is largely due to watching BSG season 3 and my discovery and addiction to The Wire.
  • As of April 2008, I’m averaging 9 movies a month (I’ve rented 198 DVDs). Even if I were to use my original price of $17.99 a month, that works out to around $2 a DVD rental. When you factor in the price drops and the Watch Instantly viewing (I’ve watched 51 things, though again, in some cases what I’m watching is a single episode of a TV show), I’m betting it would come out around $1.50-$1.75.

So it seems that the service is definitely worth the money and is indeed saving me a lot. Plus, Netflix has a far greater selection than any local video store (with the potential exception of TLA Video, but they’re too far from my home to count), thus allowing me to indulge in various genres that you don’t see much of in a typical video store. The only potential downside to Netflix is that you can’t really rent something on impulse (unless it’s on the Watch Instantly service). There are also times when new or popular movies take a while to become available to you, but you have to contend with that at video rental stores as well. Indeed, I can only think of 3-4 times I’ve had to wait for a movie (mostly because I tend to rent more obscure fare that people aren’t exactly lining up to see…). For the most part, Netflix has been reliable as well, almost always turning around my returns in short order (I mail a disc one day and get the next films two days later). There have been a few mixups, and I do remember one movie that wasn’t available on the east coast and had to be shipped from California, so it came after a wait of 3-4 days, but for the most part, I’m very happy with the service.

This has been an interesting exercise, because I feel like I’m a little more consistent than the data actually shows. I’m really surprised that there are several months where my rentals went down to 6… I could have sworn I watched at least 2-3 discs a week, with the occasional exception. Still, an average of 9 movies a month is nothing to sneeze at, I guess. I’ve heard horror stories where Netflix will start throttling you and take longer to deliver discs if you go above a certain number of rentals per month (at a certain point, the cost of processing your rentals becomes more than you’re paying, which I guess is what prompts the throttling), but I haven’t had a problem yet. If I keep up my recent viewing habits, though, this could change…

Requiem for a Meme

In July of this year, I attempted to start a Movie Screenshot Meme. The idea was simple and (I thought) neat. I would post a screenshot, and visitors would guess what movie it was from. The person who guessed correctly would continue the game by either posting the next round on their blog, or if they didn’t have a blog, they could send me a screenshot or just ask me to post another round. Things went reasonably well at first, and the game experienced some modest success. However, the game eventually morphed into the Mark, Alex, and Roy show, as the rounds kept cycling through each of our blogs. The last round was posted in September and despite a winning entry, the game has not continued.

The challenge of starting this meme was apparent from the start, but there were some other things that hindered the game a bit. Here are some assorted thoughts about the game, what held it back, and what could be done to improve the chances of adoption.

  • Low Traffic: The most obvious reason the game tapered off was that my blog doesn’t get a ton of traffic. I have a small dedicated core of visitors though, and I think that’s why the game lasted as long as it did. Still, the three blogs that comprised the bulk of rounds in the game weren’t very high traffic blogs. As such, the pool of potential participants was relatively small, which is the sort of thing that would make it difficult for a meme to expand.
  • Barriers to Entry: The concept of allowing the winner to continue the game on their blog turned out to be a bit prohibitive, as most visitors don’t have a blog. Also, a couple of winners expressed confusion as to how to get screenshots, and some didn’t respond at all after winning. Of course, it is easy to start a new blog, and my friend Dave even did so specifically to post his round of the game, but none of these things helped get more eyes looking at the game.
  • Difficulty: I intentionally made my initial entries easy (at one point, I even considered making it obscenely easy, but decided to just use that screenshot as a joke) in an attempt to ensnare casual movie viewers, but as the game progressed, screenshots became more and more difficult, and were coming from obscure movies. Actually, if you look at most of the screenshots outside of my blog, there aren’t many mainstream movies. Here are some of the lesser-known movies featured in the game: Hedwig and the Angry Inch (this one stumped the interwebs), The Big Tease, Rosencrantz & Guildenstern Are Dead, Children of Men (mainstream, I guess, though I’m pretty sure it wasn’t even out on DVD yet), Cry-Baby, Brotherhood of the Wolf, The City of Lost Children, Everything Is Illuminated, Wings of Desire, Who Framed Roger Rabbit (mainstream), Run, Lola, Run, Masters of the Universe (!), I Heart Huckabees, and Runaway. Now, of the ones I’ve seen, none of these are terrible films (er, well, He-Man was pretty bad, as was Runaway, but they’re 80s movies, so slack is to be cut, right?), but they’re also pretty difficult for a casual movie watcher to guess. I mean, most are independent, several are foreign, and it doesn’t help when the screenshot is difficult to place (even some of the mainstream ones, like Who Framed Roger Rabbit, were a little difficult). Heck, by the end, even I was posting difficult stuff (the 5-screenshot extravaganza featured a couple of really difficult ones). Again, there’s nothing inherently wrong with these movie selections, but they’re film-geek selections that pretty much exclude mainstream viewers. If the game had become more widespread, this wouldn’t have been as big of a deal, as I’d imagine that more movie geeks would have been attracted to it. This is an interesting issue though, as several people thought their screenshots were easy, even though their visitors thought they were hard. Movies are subjective, so I guess it can be hard to judge the difficulty of a given screenshot. A screenshot that is blatantly obvious to me might be oppressively difficult for someone else.
  • Again Traffic: Speaking of which, once the game had made its way around most of my friends’ blogs, things began to slow down a bit because we were all hoping that someone new would win a round. Several non-bloggers posted comments to the effect of: I know the answer, but I don’t have a blog and I want this game to spread, so I’ll hold off for now. I know I held back on several rounds because of this, though as the person who started this whole thing, I think that’s understandable. In some ways, it was nice to see other people enjoying the game enough to care about its success, but it also didn’t help the game move along.
  • Detectives: At least a couple of people were able to find answers by researching rather than recognizing the movie. I know I was guilty of this. I’d recognize an actor, then look them up on IMDB and see what they’ve done, which helps narrow down the field considerably. I don’t know that this is actually a bad thing, but I did find it interesting.
  • Memerific: The point of a meme is that it’s supposed to be self-sustaining and self-propagating. While this game did achieve a modest success at the beginning, it never really became self-sustaining. At least a couple of times, I prodded the game to move it forward, and Roy and Alex did the same. I guess the memetic inertia was constantly being worn down by the factors discussed in this post.
  • Help: Given the above, there were several things that could have helped. I could have done a better job promoting the game, for instance. I could have made it easier for other bloggers to post a round. One of the things I wanted to do was create little javascript snippets that people could use to very quickly display the unwieldy rules (perhaps using nifty display techniques that hide most of the text initially until you click to learn more) and another little javascript that would display the current round (in a nice little graphical button or something). Unfortunately, this game pretty much coincided with the busiest time of my professional career, and I didn’t have a lot of time to do anything (just keeping up with the latest round was a bit of a challenge for me).
  • Variants: One thing that may have helped would be to spread the game out further by allowing winners to “tag” other bloggers they wanted to see post screenshots, rather than just letting the winner post their own. I actually considered this when designing the game, but after some thought, I decided against it. Many people hate memes and don’t like being “tagged” to participate. Knowing this, a lot of people who do participate in memes are hesitant to “tag” other people. I didn’t want to annoy people with the blogging equivalent of chain letters, so I decided against it. However, it might have helped this meme spread much further, much faster, since it wouldn’t depend on casual movie fans winning and continuing the game themselves. If I said the winner should tag 5 other bloggers to participate, the meme could spread exponentially. This would be much more difficult to track, but on the other hand, it might actually catch on. This might be the biggest way to improve the meme’s chances at survival.
  • Alternatives: This strikes me as something that would work really well on a message board type system, especially one that allowed users to upload their own images. Heck, I wouldn’t be surprised to see something like this out there. It also might have been a good idea to create a way to invite others to play the game via email (which probably would only work on a message board or dedicated website, where there’s one central place that screenshots are posted). However, one of the things that’s neat about blog memes is that they tend to get your blog exposed to people who wouldn’t otherwise visit.

It was certainly an interesting and fun experience, and I’m glad I did it. Just for kicks, I’ll post another screenshot. Feel free to post your answer in the comments, but I’m not especially expecting this to progress much further than it did before (though anything’s possible):

Screenshot Game, round 24

(click image for a larger version) I’d say this is difficult except that it’s blatantly obvious who that is in the screenshot. It shouldn’t be that hard to pick out the movie even if you haven’t seen it. What the heck, the winner of this round can post a screenshot on their own blog if they desire, and can also pick 5 blogs they’d like to see post one. As I mentioned above, I’m hesitant to annoy people with this sort of thing, but hey, why not? Let’s give this meme some legs.

The Paradise of Choice?

A while ago, I wrote a post about the Paradox of Choice based on a talk by Barry Schwartz, the author of a book by the same name. The basic argument Schwartz makes is that choice is a double-edged sword. Choice is a good thing, but too much choice can have negative consequences, usually in the form of some kind of paralysis (where there are so many choices that you simply avoid the decision) and consumer remorse (elevated expectations, anticipated regret, etc…). The observations made by Schwartz struck me as being quite astute, and I’ve been keenly aware of situations where I find myself confronted with a paradox of choice ever since. Indeed, just knowing and recognizing these situations seems to help deal with the negative aspects of having too many choices available.

This past summer, I read Chris Anderson’s book, The Long Tail, and I was pleasantly surprised to see a chapter in it titled “The Paradise of Choice.” In that chapter, Anderson explicitly addresses Schwartz’s book. However, while I liked Anderson’s book and generally agreed with his basic points, I think his dismissal of the Paradox of Choice is off target. Part of the problem, I think, is that Anderson is much more concerned with the choices themselves than with the consequences of those choices (which is what Schwartz focuses on). It’s a little difficult to tell, though, as Anderson only dedicates 7 pages or so to the topic. As such, his arguments don’t really eviscerate Schwartz’s work. There are some good points, though, so let’s take a closer look.

Anderson starts with a summary of Schwartz’s main concepts, and points to some of Schwartz’s conclusions (from page 171 in my edition):

As the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.

Now, the way Anderson presents this is a bit out of context, but we’ll get to that in a moment. Anderson continues and then responds to some of these points (again, page 171):

As an antidote to this poison of our modern age, Schwartz recommends that consumers “satisfice,” in the jargon of social science, not “maximize”. In other words, they’d be happier if they just settled for what was in front of them rather than obsessing over whether something else might be even better. …

I’m skeptical. The alternative to letting people choose is choosing for them. The lessons of a century of retail science (along with the history of Soviet department stores) are that this is not what most consumers want.

Anderson has completely missed the point here. Later in the chapter, he spends a lot of time establishing that people do, in fact, like choice. And he’s right. My problem is twofold: First, Schwartz never denies that choice is a good thing, and second, he never advocates removing choice in the first place. Yes, people love choice, the more the better. However, Schwartz found that even though people preferred more options, they weren’t necessarily happier because of it. That’s why it’s called the paradox of choice – people obviously prefer something that ends up having negative consequences. Schwartz’s book isn’t some sort of crusade against choice. Indeed, it’s more of a guide for how to cope with being given too many choices. Take “satisficing.” As Tom Slee notes in a critique of this chapter, Anderson misstates Schwartz’s definition of the term. He makes it seem like satisficing is settling for something you might not want, but Schwartz’s definition is much different:

To satisfice is to settle for something that is good enough and not worry about the possibility that there might be something better. A satisficer has criteria and standards. She searches until she finds an item that meets those standards, and at that point, she stops.

Settling for something that is good enough to meet your needs is quite different from just settling for what’s in front of you. Again, I’m not sure Anderson is really arguing against Schwartz. Indeed, Anderson even acknowledges part of the problem, though he again misstates Schwartz’s arguments:

Vast choice is not always an unalloyed good, of course. It too often forces us to ask, “Well, what do I want?” and introspection doesn’t come naturally to all. But the solution is not to limit choice, but to order it so it isn’t oppressive.

Personally, I don’t think the problem is that introspection doesn’t come naturally to some people (though that could be part of it); it’s more that some people just don’t give a crap about certain things and don’t want to spend time figuring them out. In Schwartz’s talk, he gave an example about going to the Gap to buy a pair of jeans. Of course, the Gap offers a wide variety of jeans (as of right now: Standard Fit, Loose Fit, Boot Fit, Easy Fit, Morrison Slim Fit, Low Rise Fit, Toland Fit, Hayes Fit, Relaxed Fit, Baggy Fit, Carpenter Fit). The clerk asked him what he wanted, and he said “I just want a pair of jeans!”

The second part of Anderson’s statement is interesting, though. Aside from again misstating Schwartz’s argument (he does not advocate limiting choice!), the observation that the way a choice is presented matters is an interesting one. Yes, the Gap has a wide variety of jean styles, but look at their website again. At the top of the page is a little guide to what each of the styles means. For the most part, it’s helpful, and I think that’s what Anderson is getting at. Too much choice can be oppressive, but if you have the right guide, you can get the best of both worlds. The only problem is that finding the right guide is not as easy as it sounds. The jean style guide at Gap is neat and helpful, but you do have to click through a bunch of stuff and read it. This is easier than going to a store and trying all the varieties on, but it’s still a pain for someone who just wants a pair of jeans, dammit.

Anderson spends some time fleshing out these guides to making choices, noting the differences between offline and online retailers:

In a bricks-and-mortar store, products sit on the shelf where they have been placed. If a consumer doesn’t know what he or she wants, the only guide is whatever marketing material may be printed on the package, and the rough assumption that the product offered in the greatest volume is probably the most popular.

Online, however, the consumer has a lot more help. There are a nearly infinite number of techniques to tap the latent information in a marketplace and make that selection process easier. You can sort by price, by ratings, by date, and by genre. You can read customer reviews. You can compare prices across products and, if you want, head off to Google to find out as much about the product as you can imagine. Recommendations suggest products that ‘people like you’ have been buying, and surprisingly enough, they’re often on-target. Even if you know nothing about the category, ranking best-sellers will reveal the most popular choice, which both makes selection easier and also tends to minimize post-sale regret. …

… The paradox of choice is simply an artifact of the limitations of the physical world, where the information necessary to make an informed choice is lost.

I think it’s a very good point he’s making, though I think he’s a bit too optimistic about how effective these guides to buying really are. For one thing, there are times when a choice isn’t clear, even if you do have a guide. Also, while I think recommendations based on what other customers have purchased are important and helpful, who among us hasn’t seen absurd recommendations? From my personal experience, a lot of people don’t like the connotations of recommendations either (how do they know so much about me? etc…). Personally, I really like recommendations, but I’m a geek and I like to figure out why they’re offering me what they are (Amazon actually tells you why something is recommended, which is really neat). In any case, from my own anecdotal observations, no one puts much faith in probabilistic systems like recommendations or ratings (for a number of reasons, such as cheating or distrust). There’s nothing wrong with that, and it’s part of why such systems are effective: ironically, acknowledging their imperfections allows users to make better use of them. Anderson knows this, but I think he’s still a bit too optimistic about our tools for traversing the long tail. Personally, I think they need a lot of work.
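To make the recommendation point concrete, here’s a toy sketch of the classic “customers who bought this also bought” idea, done with nothing but co-occurrence counting. To be clear, this is my own minimal illustration with made-up order data, not how Amazon or anyone else actually does it; real systems are far more sophisticated.

```javascript
// A toy "people who bought X also bought Y" recommender based on
// simple co-occurrence counting. The order data below is made up.
var orders = [
  ['foundation', 'dune'],
  ['foundation', 'dune', 'hyperion'],
  ['dune', 'hyperion'],
  ['foundation', 'i-robot']
];

function recommend(item) {
  var counts = {};
  // Count how often other items appear in orders containing `item`
  for (var i = 0; i < orders.length; i++) {
    var order = orders[i];
    if (order.indexOf(item) === -1) continue;
    for (var j = 0; j < order.length; j++) {
      if (order[j] !== item) {
        counts[order[j]] = (counts[order[j]] || 0) + 1;
      }
    }
  }
  // Collect the co-purchased items and sort them, most frequent first
  var items = [];
  for (var key in counts) items.push(key);
  items.sort(function (a, b) { return counts[b] - counts[a]; });
  return items;
}

recommend('foundation'); // => ['dune', 'hyperion', 'i-robot']
```

Notice that the sketch recommends whatever happens to show up in the same orders, with no idea why, which is exactly how you end up with both eerily on-target suggestions and absurd ones.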

When I was younger, one of the big problems in computing was storage. Computers are the perfect data gathering tool, but you need somewhere to store all that data. In the 1980s and early 1990s, computers and networks were significantly limited by hardware, particularly storage. By the late 1990s, Moore’s law had eroded this deficiency significantly, and today, the problem of storage is largely solved. You can buy a terabyte of storage for just a couple hundred dollars. However, as I’m fond of saying, we don’t so much solve problems as trade one set of problems for another. Now that we have the ability to store all this information, how do we get at it in a meaningful way? When hardware was limited, analysis was easy enough. Now, though, you have so much data available that the simple analyses of the past don’t cut it anymore. We’re capturing all this new information, but are we really using it to its full potential?

I recently caught up with Malcolm Gladwell’s article on the Enron collapse. The really crazy thing about Enron was that they didn’t really hide what they were doing. They fully acknowledged and disclosed what they were doing… there was just so much complexity to their operations that no one really recognized the issues. They were “caught” because someone had the persistence to dig through all the public documentation that Enron had provided. Gladwell goes into a lot of detail, but here are a few excerpts:

Enron’s downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the nineteen-seventies. To expose the White House coverup, Bob Woodward and Carl Bernstein used a source, Deep Throat, who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flower pot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn’t being followed, and meet his source in an underground parking garage at 2 A.M. …

Did Jonathan Weil have a Deep Throat? Not really. He had a friend in the investment-management business with some suspicions about energy-trading companies like Enron, but the friend wasn’t an insider. Nor did Weil’s source direct him to files detailing the clandestine activities of the company. He just told Weil to read a series of public documents that had been prepared and distributed by Enron itself. Woodward met with his secret source in an underground parking garage in the hours before dawn. Weil called up an accounting expert at Michigan State.

When Weil had finished his reporting, he called Enron for comment. “They had their chief accounting officer and six or seven people fly up to Dallas,” Weil says. They met in a conference room at the Journal’s offices. The Enron officials acknowledged that the money they said they earned was virtually all money that they hoped to earn. Weil and the Enron officials then had a long conversation about how certain Enron was about its estimates of future earnings. …

Of all the moments in the Enron unravelling, this meeting is surely the strangest. The prosecutor in the Enron case told the jury to send Jeffrey Skilling to prison because Enron had hidden the truth: You’re “entitled to be told what the financial condition of the company is,” the prosecutor had said. But what truth was Enron hiding here? Everything Weil learned for his Enron exposé came from Enron, and when he wanted to confirm his numbers the company’s executives got on a plane and sat down with him in a conference room in Dallas.

Again, there’s a lot more detail in Gladwell’s article. Just how complicated was the public documentation that Enron had released? Gladwell gives some examples, including this one:

Enron’s S.P.E.s were, by any measure, evidence of extraordinary recklessness and incompetence. But you can’t blame Enron for covering up the existence of its side deals. It didn’t; it disclosed them. The argument against the company, then, is more accurately that it didn’t tell its investors enough about its S.P.E.s. But what is enough? Enron had some three thousand S.P.E.s, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty S.P.E. disclosure statements from various corporations (that is, summaries of the deals put together for interested parties) and found that on average they ran to forty single-spaced pages. So a summary of Enron’s S.P.E.s would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That’s what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That’s what the Powers Committee put together. The committee looked only at the “substance of the most significant transactions,” and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was “with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation.”

Again, Gladwell’s article has a lot of other details and is a fascinating read. What interested me the most, though, was the problem created by so much data. That much information is useless if you can’t sift through it quickly or effectively enough. Bringing this back to the paradise of choice: the current systems we have for making such decisions are better than ever, but they still require a lot of improvement. Anderson is mostly talking about simple consumer products, none of which are anywhere near as complicated as the Enron case, but even there, problems abound. If we’re really going to overcome the paradox of choice, we need better information analysis tools to help guide us. That said, Anderson’s general point still holds:

More choice really is better. But now we know that variety alone is not enough; we also need information about that variety and what other consumers before us have done with the same choices. … The paradox of choice turned out to be more about the poverty of help in making that choice than a rejection of plenty. Order it wrong and choice is oppressive; order it right and it’s liberating.

Personally, while the help in making choices has improved, there’s still a long way to go before we can really tackle the paradox of choice (though, again, just knowing about the paradox of choice seems to do wonders in coping with it).

As a side note, I wonder if the video game playing generations are better at dealing with too much choice – video games are all about decisions, so perhaps folks who grew up exercising their decision-making apparatus are more comfortable with being deluged by choice.

The Spinning Silhouette

This Spinning Silhouette optical illusion is making the rounds on the internet this week, and it’s being touted as a “right brain vs left brain test.” The theory goes that if you see the silhouette spinning clockwise, you’re right brained, and you’re left brained if you see it spinning counterclockwise.

Every time I looked at the damn thing, it was spinning a different direction. I closed my eyes and opened them again, and it spun a different direction. Every now and again, it would stay the same direction twice in a row, but if I looked away and looked back, it changed direction. Now, if I focus my eyes on a point below the illusion, it doesn’t seem to rotate all the way around at all; instead it seems like she’s moving from one side to the other, then back (i.e. changing directions every time the one leg reaches the side of the screen – and the leg always seems to be in front of the silhouette).

Of course, this is the essence of the illusion. The silhouette isn’t actually spinning at all, because it’s two dimensional. However, since my brain is used to living in a three dimensional world (and thus parsing three dimensional images), it assumes that the image is also three dimensional. We’re actually making lots of assumptions about the image, and that’s why we can see it going one way or the other.

Eventually, after looking at the image for a while and pondering the issues, I got curious. I downloaded the animated gif and opened it up in the GIMP to see how the frames are built. I could be wrong, but I’m pretty sure this thing is either broken or it’s cheating. Well, I shouldn’t say that. I noticed something off on one of the frames, and I’d be real curious to know how that affects people’s perception of the illusion (to me, it means the image is definitely moving counterclockwise). I’m almost positive that it’s too subtle to really affect anything, but I did find it interesting. More on this, including images and commentary, below the fold.

Manuals, or the lack thereof…

When I first started playing video games and using computer applications, I remember having to read the instruction manuals to figure out what was happening on screen. I don’t know if this was because I was young and couldn’t figure this stuff out, or because some of the controls were obtuse and difficult. It was perhaps a combination of both, but I think the latter was more prevalent, especially as applications and games became more complex and powerful. I remember sitting down at a computer running DOS and loading up WordPerfect. The interface that appeared was rather simplistic, and the developers apparently wanted to avoid the “clutter” of on-screen menus, so they used keyboard combinations. According to Wikipedia, WordPerfect used “almost every possible combination of function keys with Ctrl, Alt, and Shift modifiers.” I vaguely remember needing to use those stupid keyboard templates (little pieces of laminated paper that fit snugly around the keyboard keys, helping you remember what key or combo does what).

Video games used to have great manuals too. I distinctly remember several great manuals from the Atari 2600 era. For example, the manual for Pitfall II was a wonderful document done in the style of Pitfall Harry’s diary. The game itself had little in the way of exposition, so you had to read the manual to figure out that you were trying to rescue your niece Rhonda and her cat, Quickclaw, who became trapped in a catacomb while searching for the fabled Raj diamond. Another example, for the Commodore 64, was Temple of Apshai. The game had awful graphics, but each room you entered had a number, and you had to consult your manual to get a description of the room.

By the time of the NES, the importance of manuals had waned from Apshai levels, but they were still somewhat necessary at times, and gaming companies still went to a lot of trouble to produce helpful documents. The one that stands out in my mind is the manual for Dragon Warrior III, which was huge (at least 50 pages) and also contained a nice fold-out chart of most of the monsters and weapons in the game (with really great artwork). PC games were also getting more complex, and as Roy noted recently, companies like Sierra put together really nice instruction manuals for complex games like the King’s Quest series.

In the early 1990s, my family got its first Windows PC, and several things changed. With the Word for Windows software, you didn’t need any of those silly keyboard templates. Everything you needed to do was in a menu somewhere, and you could just point and click instead of having to memorize strange keyboard combos. Naturally, computer purists love the keyboard, and with good reason. If you really want to be efficient, the keyboard is the way to go, which is why Linux users are so fond of the command line and of simple-looking but powerful applications like Emacs. But for your average user, the GUI was very important, and it made things a lot easier to figure out. Word had a user manual, and it was several hundred pages long, but I don’t think I ever cracked it open, except maybe out of curiosity (not because I needed to).

The trends of improving interfaces and less useful manuals continued throughout the next decade, and today, well, I can’t think of the last time I had to consult a physical manual for anything. Steven Den Beste has been playing around with flash for a while, but he says he never looks at the manual. “Manuals are for wimps.” In his post, Roy wonders where all the manuals have gone. He speculates that manufacturing costs are a primary culprit, and I have no doubt that they are, but there are probably a couple of other reasons as well. For one, interfaces have become much more intuitive and easy to use. This is in part due to familiarity with computers and the emergence of consistent standards for things like dialog boxes (of course, when you eschew those standards, you get what Jakob Nielsen describes as a catastrophic failure). If you can easily figure it out through the interface, what use are the manuals? With respect to gaming, in-game tutorials have largely taken the place of instruction manuals. Another thing that has perhaps affected official instruction manuals is the rise of unofficial walkthroughs and game guides. Visit a local bookstore and you’ll find entire bookcases devoted to video game guides and walkthroughs. As nice as the manual for Pitfall II was, you really didn’t need much more than 10 pages to explain how to play that game, but several hundred pages barely do justice to some of the more complex video games in today’s market. Perhaps the reason gaming companies don’t give you instruction manuals with the game is not just that printing the manual is costly, but that they can sell you a more detailed and useful one instead.

Steven Johnson’s book Everything Bad is Good for You has a chapter on Video Games that is very illuminating (in fact, the whole book is highly recommended – even if you don’t totally agree with his premise, he still makes a compelling argument). He talks about the official guides and why they’re so popular:

The dirty little secret of gaming is how much time you spend not having fun. You may be frustrated; you may be confused or disoriented; you may be stuck. When you put the game down and move back into the real world, you may find yourself mentally working through the problem you’ve been wrestling with, as though you were worrying a loose tooth. If this is mindless escapism, it’s a strangely masochistic version.

He gives an example of a man who spends six months working as a smith (mindless work) in Ultima Online so that he can attain a certain ability, and he also talks about how people spend tons of money on guides for getting past various roadblocks. Why would someone do this? Johnson spends a fair amount of time going into the neurological underpinnings of this, most notably what he calls the “reward circuitry of the brain.” In games, rewards are everywhere. More life, more magic spells, new equipment, etc… And how do we get these rewards? Johnson thinks there are two main modes of intellectual labor that go into video gaming, and he calls them probing and telescoping.

Probing is essentially exploration of the game and its possibilities. Much of this is simply the unconscious exploration of the controls and the interface, figuring out how the game works and how you’re supposed to interact with it. However, probing also takes the more conscious form of figuring out the limitations of the game. For instance, in a racing game, it’s usually interesting to see if you can turn your car around backwards, pick up a lot of speed, then crash head-on into a car going the “correct” way. Or, in Rollercoaster Tycoon, you can creatively place balloon stands next to a roller coaster to see what happens (the result is hilarious). Probing the limits of game physics and finding ways to exploit them are half the fun (or challenge) of video games these days, which is perhaps another reason why manuals are becoming less frequent.

Telescoping has more to do with the game’s objectives. Once you’ve figured out how to play the game through probing, you seek to exploit your knowledge to achieve the game’s objectives, which are often nested in a hierarchical fashion. For instance, to save the princess, you must first enter the castle, but you need a key to get into the castle, and the key is guarded by a dragon, etc… Indeed, the structure is sometimes even more complicated than that, and you essentially build this hierarchy of goals in your head as the game progresses.
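Johnson describes telescoping in prose, but the nested goal structure is easy to picture as a little tree. Here’s a sketch using his save-the-princess example; the data structure and names are my own illustration, not anything from the book.

```javascript
// The save-the-princess goal hierarchy as a simple tree. Each goal
// lists the subgoals that must be completed first.
var saveThePrincess = {
  goal: 'Save the princess',
  requires: [{
    goal: 'Enter the castle',
    requires: [{
      goal: 'Get the key',
      requires: [{ goal: 'Defeat the dragon', requires: [] }]
    }]
  }]
};

// Telescoping in action: the thing to do right now is the deepest
// unmet subgoal, found by walking down the tree.
function nextGoal(node) {
  if (node.requires.length > 0) {
    return nextGoal(node.requires[0]);
  }
  return node.goal;
}

nextGoal(saveThePrincess); // => 'Defeat the dragon'
```

The player builds and updates something like this tree in his head as the game goes on, which is why finishing a deeply nested subgoal feels like progress even when the princess is nowhere in sight.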

So why is this important? Johnson has the answer (page 41 in my edition):

… far more than books or movies or music, games force you to make decisions. Novels may activate our imagination, and music may conjure up powerful emotions, but games force you to decide, to choose, to prioritize. All the intellectual benefits of gaming derive from this fundamental virtue, because learning how to think is ultimately about learning to make the right decisions: weighing evidence, analyzing situations, consulting your long term goals, and then deciding. No other pop culture form directly engages the brain’s decision-making apparatus in the same way. From the outside, the primary activity of a gamer looks like a fury of clicking and shooting, which is why much of the conventional wisdom about games focuses on hand-eye coordination. But if you peer inside the gamer’s mind, the primary activity turns out to be another creature altogether: making decisions, some of them snap judgements, some long-term strategies.

Probing and telescoping are essential to learning in any sense, and the way Johnson describes them in the book reminds me of a number of critical thinking methods. Probing, developing a hypothesis, reprobing, and then rethinking the hypothesis is essentially the same process as the scientific method or the hermeneutic circle. As such, it should be interesting to see if video games ever really catch on as learning tools. There have been a lot of attempts at this sort of thing, but they’re often stifled by the reputation of video games as a “colossal waste of time” (in recent years, the benefits of gaming are being acknowledged more and more, though not usually as dramatically as Johnson does in his book).

Another interesting use for video games might be evaluation. A while ago, Bill Simmons made an offhand reference to EA Sports’ Madden games in the context of hiring football coaches (this shows up at #29 on his list):

The Maurice Carthon fiasco raises the annual question, “When teams are hiring offensive and defensive coordinators, why wouldn’t they have them call plays in video games to get a feel for their play calling?” Seriously, what would be more valuable, hearing them B.S. about the philosophies for an hour, or seeing them call plays in a simulated game at the all-Madden level? Same goes for head coaches: How could you get a feel for a coach until you’ve played poker and blackjack with him?

When I think about how such a thing would actually go down, I’m not so sure, because the football world created by Madden, as complex and comprehensive as it is, still isn’t exactly the same as the real football world. However, I think the concept is sound. Theoretically, you could see how a prospective coach would react to a new, and yet similar, football paradigm and how they’d find weaknesses and exploit them. The actual plays they call aren’t that important; what you’d be trying to figure out is whether or not the coach was making intelligent decisions.

So where are manuals headed? I suspect that they’ll become less and less prevalent as time goes on and interfaces become more and more intuitive (though there is still a long way to go before I’d say that computer interfaces are truly intuitive, I think they’re much more intuitive now than they were ten years ago). We’ll see more interactive demos and in-game tutorials, and perhaps even games used as teaching tools. I could probably write a whole separate post about how this applies to Linux, which actually does require you to look at manuals sometimes (though at least they have a relatively consistent way of treating manuals; even when the documentation is bad, you can usually find it). Manuals and passive teaching devices will become less important. And to be honest, I don’t think we’ll miss them. They’re annoying.