Science & Technology

The Victorian Internet and Centralized Solutions

A few weeks ago, I wrote a post about how the internet affects our ability to think, drawing on Nicholas Carr’s post on the internet and mindlessness. I disagreed with Carr’s skepticism, and in the comments, Samael noted that Carr was actually using a pretty common form of argument:

If the advent of a technology tends to create a certain problem, what is to blame for that problem?

It’s not much different from the “guns/video games/music/movies are to blame for violence” or “video games/television are to blame for short attention spans” or “junk food is responsible for obesity” arguments.

Carr’s argument is in the same form – the sea of information made possible by the internet is to blame for a deterioration in our ability to think. I rejected that because of choice – technology does not force us to think poorly; we choose how we interact with technology (especially on-demand technology like the internet). It’s possible to go overboard, but nothing forces that to happen. It’s our choice. In any case, this isn’t the first time a technology that led to a massive increase in communication caused these problems. In his book The Victorian Internet, Tom Standage explores the parallels between the telegraph networks of the nineteenth century and the internet of today. Jon Udell summarizes the similarities:

A 19th-century citizen transported to today would be amazed by air travel, Standage suggests, but not by the Internet. Been there, done that.

Multiprotocol routers? Check. Back then, they translated between Morse code and scraps of paper in canisters shot through pneumatic tubes.

Fraud? Check. Stock market feeds were being spoofed in the 1830s, back when the telegraph network ran on visual semaphores rather than electrical pulses.

Romance? Check. The first online marriage was really a telegraph marriage, performed not long after the dawn of electric telegraphy.

Continuous partial attention? Check. In 1848 the New York businessman W.E. Dodge was already feeling the effects of always-on connectivity: “The merchant goes home after a day of hard work and excitement to a late dinner, trying amid the family circle to forget business, when he is interrupted by a telegram from London.”

All too often, when I listen to someone describe a problem, I feel a sensationalistic vibe. It’s usually not that I totally disagree that something is a problem, but the more I read of history and the more I analyze certain issues, the more I find that much of what people are complaining about today isn’t all that new. Yes, the internet has given rise to certain problems, but they’re not really new problems. They’re the same problems ported to a new medium. As shown in the quote above, many of the internet’s problems also affected telegraphy over a century and a half ago (I’d wager that the advance of the printing press led to similar issues in its time as well). That doesn’t make them less of a problem (indeed, it actually means that the problem is not easily solved!), but it does mean we should step back and turn down the rhetoric a bit. These are extremely large problems, and they’re not easily solved.

It almost feels like we expect there to be a simple solution for everything. I’ve observed before that there is a lot of talk about incredibly complex problems as if they really aren’t that complex. Everyone is trying to “solve” these problems, but as I’ve noted many times, we don’t so much solve problems as we trade one set of problems for another (with the hope that the new set of problems is more favorable than the old). What’s more, we expect these “solutions” to come at a high level. In politics, this translates into a preference for Federal solutions over state and local ones. A Federal law has the conceit of being universal and fair, but I don’t think that’s really true. When it comes to large problems, perhaps the answer isn’t large solutions, but small ones. Indeed, that’s one of the great things about the structure of our government – we have state and local governments which (in theory) are more responsive and flexible than the Federal government. I think what you find with a centralized solution is something that attempts to be everything to everyone, and as a result, it doesn’t help anyone.

For example, Bruce Schneier recently wrote about identity theft laws.

California was the first state to pass a law requiring companies that keep personal data to disclose when that data is lost or stolen. Since then, many states have followed suit. Now Congress is debating federal legislation that would do the same thing nationwide.

Except that it won’t do the same thing: The federal bill has become so watered down that it won’t be very effective. I would still be in favor of it — a poor federal law is better than none — if it didn’t also pre-empt more-effective state laws, which makes it a net loss.

It’s a net loss because the state laws are stricter. This also brings up another point about centralized systems – they’re much more vulnerable to attack than decentralized or distributed systems. It’s much easier to lobby against (or water down) a single Federal law than it is to do the same thing to 50 state laws. State and local governments aren’t perfect either, but their very structure makes them a little more resilient. Unfortunately, we seem to keep focusing on big problems and proposing big centralized solutions, bypassing rather than taking advantage of the system our founding fathers wisely put into place.

Am I doing what I decry here? Am I being alarmist? Probably. The trend toward increasing federalization is certainly not new. However, in an increasingly globalized world, I think resilience will come not from large centralized systems but from the grassroots level. During the recent French riots, John Robb observed:

Resilience isn’t limited to security. It is also tied to economic prosperity. There aren’t any answers to this on the national level. The answer is at the grassroots level. It is only at that level that you get the flexibility, innovation, and responsiveness to compete effectively. The first western country that creates a platform for economic interop and at the same time decentralizes power over everything else is going to be a big winner.

None of this is to say that grassroots efforts are perfect. There is a different set of issues there. But as I’ve observed many times in the past, the fact that there are issues shouldn’t stop us. There are problems with everything. What’s important is that the new issues we face be more favorable than the old…

Technology Link Dump

My last post on technological change seems to have struck a nerve, and I’ve been running across a lot of things along similar lines this week… Here are a few links on the subject:

  • Charlie Stross is writing his next novel on his cell phone:

    Being inclined towards crazy stunt performances, I’m planning on writing “Halting State” on my mobile phone. This is technologically feasible because the phone in question has more memory and online storage than every mainframe in North America in 1972 (and about the same amount of raw processing power as a 1977-vintage Cray-1 supercomputer). It’s a zeitgeist thing: I need to get into the right frame of mind, and I need to use a mobile phone for the same reason Neal Stephenson used a fountain pen when he wrote the Baroque cycle. After all, I want to stick my head ten years into the future. Personal computers are already passé; sales are declining, performance is stagnating, the real action is all in the interstitial networked devices that keep washing up on the beaches of our bandwidth ocean, crazy-weird things like 3G phones and battery-powered network attached storage boxes and bluetooth-controlled vibrators. (It’s getting weird out there in embedded intelligence land; the net is alive to the sound of pinging toasters, RFID chips are the latest virus target, and people are making business deals inside computer games.)

    I have yet to read one of Stross’s novels, but he’s in the queue…

  • Speaking of speculative fiction, Steven Den Beste has a post on Chizumatic (no permalinks, so you’ll have to go a-scrollin’) about the difficulties of creating a plausible science fiction story set in the future:

    1. Science and engineering now are expanding on an exponential curve.

    2. But not equally in all areas. In some areas they have run up against intransigent problems.

    3. Advances in one area can have incalculably large effects on other areas which at first seem completely unrelated.

    4. Much of this is driven by economic forces in ways which are difficult to predict or even understand after the fact.

    For instance, there was a period in which the main driver of technical advances in desktop computing was business use. But starting about 1994 that changed, and for a period of about ten years the bleeding edge was computer gamers. …

    You look at the history of technological development and it becomes clear that it isn’t possible for any person to predict it. I can tell you for sure that when we were working on the Arpanet at BBN in the 1980’s, we didn’t have the slightest clue as to what the Internet would eventually become, or all the ways in which it would be used. The idea of 8 megabit pipes into the home was preposterous in the extreme — but that’s what I’ve got. This is James Burke’s “Connections” idea: it all relates, and serendipitous discoveries in one area can revolutionize other areas which are not apparently related in any way. How much have advances in powered machinery changed the lives and careers of farmers, for instance?

    With acceleration in development of new technologies, just what kind of advances could we really expect 200 years from now? The only thing that’s certain is that it’s impossible for us to guess. But if you posit interstellar travel by then, then there should be a lot of advances in other areas, and those advances may be used in “unimportant” ways to make life easier for people, and not just in big-ticket “obvious” ways.

    It’s an excellent post and it ends on an… interesting note.

  • Shamus found an old 2001 article in PC Gamer speculating about what computers would be like in 2006. It turns out that in some areas (like CPU speed), they were wildly overoptimistic; in other areas (broadband and portable devices), not so much.
  • Your Upgrade Is Ready: This Popular Mechanics article summarizes some advances on the biological engineering and nanotechnology fronts.

    Weddell seals can stay underwater comfortably for more than an hour. As concrete-shoe wearers have discovered, humans can’t make it past a few minutes. Why not? The seals don’t have enormous lungs in comparison to humans–but they do have extraordinary blood, capable of storing great quantities of oxygen. Robert Freitas, a research fellow at the Institute of Molecular Manufacturing, has published a detailed blueprint for an artificial red blood cell, which he calls a respirocyte. Injected into the bloodstream, these superefficient oxygen-grabbers could put the scuba industry out of business.

    As Freitas envisions it, each respirocyte–a ball measuring a thousandth of a millimeter across–is a tiny pressurized gas tank. Inject the balls and they course through the blood vessels, releasing oxygen and absorbing carbon dioxide in the body’s periphery and recharging themselves with oxygen in the lungs. Freitas says respirocytes would transport oxygen 236 times more efficiently than red blood cells–and a syringeful could carry as much oxygen as your entire bloodstream.

    I tend to take stuff like this with a grain of salt, as such overviews usually end up being a little more sensational than reality, but it’s still interesting reading. [via Event Horizon]

That’s all for now…

Is Technology Advancing or Declining?

In Isaac Asimov’s novel Prelude to Foundation, an unknown mathematician named Hari Seldon travels from his podunk home planet to the Galactic Empire’s capital world to give a presentation on a theoretical curiosity he dubs psychohistory (which is essentially a way to broadly predict the future). Naturally, the potential for this theory attracts the powerful, and Seldon goes on the run with the help of a journalist friend named Chetter Hummin. Hummin contends that “The Galactic Empire is Dying.” Seldon is frankly surprised by this thesis and eventually asks for an explanation:

… “all over the Galaxy trade is stagnating. People think that because there are no rebellions at the moment and because things are quiet that all is well and that the difficulties of the past few centuries are over. However, political infighting, rebellions, and unrest are all signs of a certain vitality too. But now there’s a general weariness. It’s quiet, not because people are satisfied and prosperous, but because they’re tired and have given up.”

“Oh, I don’t know,” Seldon said dubiously.

“I do. And the Antigrav phenomenon we’ve talked about is another case in point. We have a few gravitic lifts in operation, but new ones aren’t being constructed. It’s an unprofitable venture and there seems no interest in trying to make it profitable. The rate of technological advance has been slowing for centuries and is down to a crawl now. In some cases, it has stopped altogether. Isn’t this something you’ve noticed? After all, you’re a mathematician.”

“I can’t say I’ve given the matter any thought.”

Hummin acknowledges that he could be wrong (partly out of a desire to manipulate Seldon into developing psychohistory, so as to confirm whether or not the Empire really is dying), but those who’ve read the Foundation novels know he’s right.

The reasons for this digression into decaying Galactic Empires include my affinity for quoting fiction to make a point and a post by Ken at ChicagoBoyz regarding technological stagnation (which immediately made me think of Asimov’s declining Empire). Are we in a period of relative technological stagnation? I’m hardly an expert, but I have a few thoughts.

First, what constitutes advance or stagnation? Ken points to a post that argues that the century of maximum change was actually the period 1825-1925. It’s an interesting post, but it only pays lip service to the changes happening now:

From time to time I stumble across articles by technology-oriented writers claiming that we’re living in an era of profound, unprecedented technological change. And their claim usually hinges on the emergence of the computer.

Gimme a break.

I’ll concede that in certain areas such as biology and medicine, changes over the past few decades have been more profound than at any time in history. And true, computers have made important changes in details of our daily lives.

But in those daily life terms, the greatest changes happened quite a while ago.

The post seems to focus on disruptive changes, but if something is not disruptive, does that really mean that technology is not advancing? And why are changes in transportation capabilities (for instance) more important than changes in communication, biology, or medicine? Also, when we’re talking about measuring technological change over a long period of time, it’s worth noting that advances or declines would probably not move in a straight line. There would be peaks where it seems like everything is changing at once, and lulls when it seems like nothing is changing (even though all the pieces may be falling into place for a huge change).

Most new technological advances are really abstracted efficiencies – it’s the great unglamorous march of technology. They’re small and they’re obfuscated by abstraction, thus many of the advances are barely noticed. Computers and networks represent a massive improvement in information processing and communication capabilities. I’d wager that even if we are in a period of relative technological stagnation (which I don’t think we are), we’re going to pull out of it in relatively short order because the advent of computers and networks means that information can spread much faster than it could in the past. A while ago, Steven Den Beste argued that the four most important inventions in history are: “spoken language, writing, movable type printing and digital electronic information processing (computers and networks).”

When knowledge could only spread by speech, it might take a thousand years for a good idea to cross the planet and begin to make a difference. With writing it could take a couple of centuries. With printing it could happen in fifty years. With computer networks, it can happen in a week if not less. … That’s a radical change in capability; a sufficient difference in degree to represent a difference in kind. It means that people all over the world can participate in debate about critical subjects with each other in real time.

We’re already seeing some of the political, technological and cultural effects of the Internet, and this is just a start. What this means is that drastic cultural shakeout cannot be avoided. The next fifty years are going to be a very interesting time as the Internet truly creates the Global Village.

Indeed, part of the reason technologists are so optimistic about the rate of technological change is that we see it all the time on the internet. We see some guy halfway across the world make an observation or write a script, and suddenly it shows up everywhere, spawning all sorts of variants and improvements. When someone invents something these days, it only takes a few days for it to spread throughout the world and be improved upon.

Of course, there are many people who would disagree with Ken’s assertion that we’re in a period of technological stagnation. People like Ray Kurzweil or Vernor Vinge would argue that we’re on the edge of a technological singularity – that technology is advancing so quickly that we can’t quantify it, and that we’re going to eventually use technology to create an entity with greater than human intelligence.

I definitely think there is a problem with determining the actual rate of change. As I mentioned before, what qualifies as a noteworthy change? It’s also worth noting that long-term technological effects are sometimes difficult to forecast. Most people picture the internet as a centrally planned network, but it isn’t. Structurally, the internet is more like an evolving ecosystem than anything that was centrally designed. Those who worked on the internet in the 1960s and 1970s probably had no idea what it would eventually become or how it would affect our lives today. And honestly, I’m not sure we know today what it will be like in another 30 years…

One of the reasons I quoted Asimov’s novel at the beginning of this post is that I think he captured what a technologically declining civilization would be like: the general weariness, the apathy, the lack of desire to even question why. Frankly, I find it hard to believe that things are slowing down these days. Perhaps we’re in a lull (it sure doesn’t seem like it, though), but I can see that edge, and I don’t see weariness in those who will take us there…

Unintended Customers

The Art of Rainmaking by Guy Kawasaki: An interesting article about salesmanship and what is referred to as “rainmaking.” Kawasaki lists several ways to practice the art of rainmaking, but the first one caught my eye because it immediately reminded me of Neal Stephenson’s Cryptonomicon, and regular readers (all 5 of you) know I can’t resist a Stephenson reference.

“Let a hundred flowers blossom.” I stole this from Chairman Mao although I’m not sure how he implemented it. In the context of capitalism (Chairman Mao must be turning over in his grave), the dictum means that you sow seeds in many markets, see what takes root, and harvest what blooms. Many companies freak out when unintended customers buy their product. Many companies also freak out when intended customers buy their product but use it in unintended ways. Don’t be proud. Take the money.

This immediately reminded me of the data haven (a secure computer system that is protected by its lack of governmental oversight as well as technical means like encryption) in the “modern-day” segments of Cryptonomicon. Randy Waterhouse works for the company that’s attempting to set up a data haven, and he finds that most of his customers want to use the data haven to store money. Pretty straightforward, right? Well, most of the people who want to store their money there are criminals of the worst sort. I guess in that particular case, there is reason to freak out at these unexpected customers, but I thought the reference was interesting because while there may be lots of legitimate uses for a data haven, the criminal element would almost certainly be attracted to a way to store their drug money (or whatever) with impunity (that, and probably spam, pornography, and gambling). Like all advances in technology, the data haven could be used for good or for ill…

A Spectrum of Articles

When you browse the web often, especially when you’re looking mostly at weblogs, you start to see some patterns emerging. A new site is discovered, then propagates throughout the blogosphere in fairly short order. I’m certainly no expert at spotting such discoveries, but one thing I’ve noticed being repeatedly referenced this past week is the IEEE Spectrum (a magazine devoted to electrical engineering). I’ve seen multiple blogs referencing multiple articles from this magazine, though I can’t think of a single reference in the past. Here are a few articles that seem interesting:

  • Re-engineering Iraq (February 2006): A close look at rebuilding Iraq’s electrical system. Alas, no mention of anything resembling Operation Solar Eagle… (I don’t remember who posted about this one, but I did see it in a couple of places.)
  • How Europe Missed The Transistor (November 2005): One of the most important inventions of the 20th century (which is no slouch when it comes to important inventions) was the transistor. This article delves into the early history of the transistor and similar technologies developed in Europe and the U.S., as well as how these devices became commercially successful. David Foster has an excellent post about the “importance of decentralization and individual entrepreneurship” in facilitating the commercialization of new technologies.
  • Patents 2.0 (February 2006): Slashdot recently posted about this interesting proposal: “a new type of patent that wouldn’t require formal examination, would cost significantly less than traditional patents, would last only 4 years from date of first commercial product, and which wouldn’t carry a presumption of validity.” Interesting stuff. It does seem that the high rate of technological advance calls for something like this, for both patents and copyright law.

I haven’t read all of this yet, but there’s definitely good stuff there. Perhaps more comments later this week (time is still short, but my schedule will hopefully be opening up a bit in the next few weeks).

Analysis and Ignorance

A common theme on this blog is the need for better information analysis capabilities. There’s nothing groundbreaking about the observation, which is probably why I keep running into stories that seemingly confirm the challenge we’re facing. A little while ago, Boing Boing pointed to a study on “visual working memory” in which the people who did well weren’t better at remembering things than other people – they were better at ignoring unimportant things.

“Until now, it’s been assumed that people with high capacity visual working memory had greater storage but actually, it’s about the bouncer – a neural mechanism that controls what information gets into awareness,” Vogel said.

The findings turn upside down the popular concept that a person’s memory capacity, which is strongly related to intelligence, is solely dependent upon the amount of information you can cram into your head at one time. These results have broad implications and may lead to developing more effective ways to optimize memory as well as improved diagnosis and treatment of cognitive deficits associated with attention deficit disorder and schizophrenia.
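
It’s easy to picture that “bouncer” in code. Here’s a trivial sketch of the basic idea – don’t store more, admit less (the items, scores, and threshold are all made up, of course; real attention is not a one-line filter):

```python
# A toy model of the "bouncer": instead of cramming everything into a
# limited store, filter out low-relevance items before they ever get in.
# (All items, relevance scores, and the threshold here are hypothetical.)

observations = [
    ("meeting moved to 3pm", 0.9),
    ("coworker's new screensaver", 0.1),
    ("server password changed", 0.8),
    ("color of the hallway carpet", 0.05),
]

RELEVANCE_THRESHOLD = 0.5  # the bouncer at the door of working memory

# Only items that get past the bouncer take up precious storage.
remembered = [item for item, relevance in observations
              if relevance >= RELEVANCE_THRESHOLD]

print(remembered)  # ['meeting moved to 3pm', 'server password changed']
```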

In Feedback and Analysis, I examined an aspect of how the human eye works:

So the brain gets some input from the eye, but it’s sending significantly more information towards the eye than it’s receiving. This implies that the brain is doing a lot of processing and extrapolation based on the information it’s been given. It seems that the information gathering part of the process, while important, is nowhere near as important as the analysis of that data. Sound familiar?

Back in high school (and to a lesser extent, college), there were always people who worked extremely hard, but still couldn’t manage to get good grades. You know, the people who would spend 10 hours studying for a test and still bomb it. I used to infuriate these people. I spent comparatively little time studying, and I did better than them. Now, there were a lot of reasons for this, and most of them don’t have anything to do with me being smarter than anyone else. One thing I found was that if I paid attention in class, took good notes, and spent an honest amount of effort on homework, I didn’t need to spend that much time cramming before a test (shocking revelation, I know). Another thing was that I knew what to study. I didn’t waste time memorizing things that weren’t necessary. In other words, I was good at figuring out what to ignore.

Analysis of the data is extremely important, but you need to have the appropriate data to start with. When you think about it, much of analysis is really just figuring out what is unimportant. Once you remove the noise, you’re left with the signal, and you just need to figure out what that signal is telling you. The problem right now is that we keep seeing new and exciting ways to collect more and more information without a corresponding increase in analysis capabilities. This is an important technical challenge that we’ll have to overcome, and I think we’re starting to see the beginnings of a genuine solution. At this point, another common theme on this blog will rear its ugly head. Like any other technological advance, systems that help us better analyze information will involve tradeoffs. More on this subject later this week…

More Trilemmas

Looking into the trilemma subject from last week’s entry, I stumbled across Jason Kottke’s post about what he calls a “Pick Two” system, using the “good, fast, or cheap, pick two” example to start, but then listing out a whole bunch more:

  • Elegant, documented, on time.
  • Privacy, accuracy, security.
  • Have fun, do good, stay out of trouble.
  • Study, socialize, sleep.
  • Diverse, free, equal.
  • Fast, efficient, useful.
  • Cheap, healthy, tasty.
  • Secure, usable, affordable.
  • Short, memorable, unique.
  • Cheap, light, strong.

I don’t know if I agree with all of those, but regardless of their validity, Kottke is right to question why the “Pick Two” logic appears to be so attractive. Indeed, I even devised my own a while back when I was looking at my writing habits.

Why is “pick two out of three” the rule? Why not “one out of two” or “four out of six”? Or is “pick two out of three” just a cultural assumption?

He also wonders if there is some sort of underlying scientific or economic relationship at work, but was unable to find anything that fit really well. Personally, I found the triangle to be closest to what he was looking for. In a triangle, the sum of the interior angles is always 180 degrees. If you “pick two” of the angles, you know what the third will be. Since time and money are both discrete, quantifiable values, you should theoretically be able to control the quality of your project by playing with those variables.
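
To put that in concrete terms, here’s a trivial sketch of the triangle analogy (mapping time, money, and quality onto angles is purely illustrative on my part, not an actual project-management formula):

```python
# The triangle analogy: the interior angles of a triangle always sum to
# 180 degrees, so choosing any two angles determines the third.
# (Mapping time, money, and quality onto the angles is purely illustrative.)

def third_angle(a: float, b: float) -> float:
    """Given any two interior angles of a triangle, the third is fixed."""
    return 180.0 - a - b

# Pick two, and the third picks itself.
print(third_angle(90, 45))  # -> 45.0
print(third_angle(60, 60))  # -> 60.0
```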

In a more general sense, I tend to think of a system with three main components as being inherently stable. I think this is because such a system is simple, yet complex enough to allow for a lot of dynamism. As one of the commenters on Kottke’s post noted:

Seems like two out of three is the smallest tradeoff that’s interesting. One out of two is boring. One out of three doesn’t satisfy. Two out of three allows the chooser to feel like s/he is getting something out of the tradeoff (not just 50/50).

And once you start getting larger than three, the system begins to get too complex. Tweaking one part of the system has progressively less and less predictable results the bigger the system gets. The good thing about a system with three major components is that if one piece starts acting up, the other two can adjust to overcome the deficiency. In a larger system, the potential for deadlock and unintended consequences begins to increase.

I’ve written about this stability of three before. The stereotypical example of a triangular system is the U.S. Federal government:

One of the primary goals of the American Constitutional Convention was to devise a system that would be resistant to tyranny. The founders were clearly aware of the damage that an unrestrained government could do, so they tried to design the new system in such a way that it wouldn’t become tyrannical. Democratic institutions like mandatory periodic voting and direct accountability to the people played a large part in this, but the founders also did some interesting structural work as well.

Taking their cue from the English Parliament’s relationship with the King of England, the founders decided to create a legislative branch separate from the executive. This, in turn, placed the two governing bodies in competition. However, this isn’t a very robust system. If one of the governing bodies becomes more powerful than the other, it can leverage its advantage to accrue more power, thus increasing the imbalance.

A two-way balance of power is unstable, but a three-way balance turns out to be very stable. If any one body becomes more powerful than the other two, the two usually can and will temporarily unite, and their combined power will still exceed the third. So the founders added a third governing body, an independent judiciary.

The result was a bizarre sort of stable oscillation of power between the three major branches of the federal government. Major shifts in power (such as wars) disturbed the system, but it always fell back to a preferred state of flux. This stable oscillation turns out to be one of the key elements of Chaos theory, and is referred to as a strange attractor. These “triangular systems” are particularly good at this, and there are many other examples…

Another great example of how well a three-part system works is a classic trilemma: “Rock, Paper, Scissors.”
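
The reason that game balances is easy to see when you write it down. A minimal sketch: each throw beats exactly one other and loses to exactly one other, so no single option can dominate:

```python
# Rock-paper-scissors as a cyclic relation: each option beats exactly one
# other and loses to exactly one other, so no single choice dominates.
# That cycle is what keeps the three-way system in balance.

BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def winner(a, b):
    """Return the winning throw, or None on a tie."""
    if a == b:
        return None
    return a if BEATS[a] == b else b

# Overuse any one throw, and the throw that beats it punishes you.
print(winner("rock", "scissors"))  # rock
print(winner("rock", "paper"))     # paper
```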

The Design Trilemma

I’ve been writing about design and usability recently, including a good example with the iPod and a case where a new elevator system could use some work. Naturally, there are many poorly designed systems out there, and they’re easy to spot, but even in the case of the iPod, which I think is well designed and elegant, I was able to find some things that could use improvement. Furthermore, I’m not sure there’s much that can really be done to improve the iPod’s design without introducing something that detracts more from the experience than it adds. As I mentioned in that post, a common theme on this blog has always been the trade-offs inherent in technological advance: we don’t so much solve problems as trade one set of disadvantages for another, in the hope that the new set is more favorable than the old.

When confronted with an obviously flawed system, most people’s first thought is probably something along the lines of: What the hell were they thinking when they designed this thing? It’s certainly an understandable lamentation, but after the initial shock of the poor experience, I often find myself wondering what held the designers back. I’ve been involved in the design of many web applications, and I sometimes find the end result is different from what I originally envisioned. Why? It’s usually not that hard to design a workable system, but it can become problematic when you consider how the new system impacts existing systems (or, perhaps more importantly, how existing systems impact new ones). Of course, there are considerations completely outside the technical realm as well.

There’s an old engineering aphorism that says Pick two: Fast, Cheap, Good. The idea is that when you’re tackling a project, you can complete it quickly, you can do it cheaply, and you can create a good product, but you can’t have all three. If you want to make a quality product in a short period of time, it’s going to cost you. Similarly, if you need to do it on the cheap and also in a short period of time, you’re not going to end up with a quality product. This is what’s called a Trilemma, and it has applications ranging from international economics to theology (I even applied it to writing a while back).

Dealing with trilemmas like this can be frustrating when you’re involved in designing a system. For example, a new feature that would produce a tangible but relatively minor enhancement to the customer experience might also require a disproportionate amount of effort to implement. I’ve run into this often enough to empathize with those who design systems that turn out horribly. Not that this excuses design failures, or that this is the only cause of problems, but it is worth noting that the designers aren’t always devising crazy schemes to make your life harder…

iPod Usability

After several weeks of using my new iPod (yes, I’m going to continue rubbing it in for those who don’t have one), I’ve come to realize that there are a few things that are *gasp* not perfect about the iPod. A common theme on this blog has always been the tradeoffs inherent in technological advance: we don’t so much solve problems as we trade one set of disadvantages for another, in the hopes that the new set is more favorable than the old.

Don’t get me wrong, I love the iPod. It represents a gigantic step forward in my portable media capability, but it’s not perfect. It seems that some of the iPod’s greatest strengths are also its greatest weaknesses. Let’s look at some considerations:

  • The Click Wheel – Simultaneously the best and worst feature of the iPod. How can this be? Well the good things the click wheel brings to the picture far outweigh the bad. What’s so bad about the click wheel? I think the worst thing about it is the lack of precision. The click wheel is very sensitive and it is thus very easy to overshoot your desired selection. If you’re sitting still at a desk, this isn’t that much of a problem, but if you’re exercising or driving, it can be a bit touchy. It’s especially tricky with volume, as I sometimes want to increase the volume just a tick, but often overshoot and need to readjust. However, Apple does attempt to mitigate some of that with the “clicks,” the little sounds generated as you scroll through your menu options. As I say, the good things about the click wheel far outweigh this issue. More on the good things in a bit.
  • The “clean” design – As Gerry Gaffney observed in a recent article for The Age:

    When products are not differentiated primarily by features and prices are already competitive, factors such as ease-of-use and emotional response can provide a real edge.

    The Apple iPod is often cited as an example; a little gadget that combines relative ease of use with a strong emotional response. This helps separate the iPod from the swathe of other portable players that are comparable in terms of features and price.

    There are two main pieces to the design of the iPod in my mind: one is the seamless construction and the other is the simplicity of the design. The seamlessness of the device and its simple white or black monochrome appearance definitely provide the sort of emotional response that Gaffney cites. But it might be even more than that – some people believe that the design is so universally accepted as “clean” because the materials it uses evoke a subconscious feeling of cleanliness:

    Of course, we were aware of the obvious cues such as minimalist design; the simple, intuitive interface; the neutral white color. But these attributes alone inadequately explain this seemingly universal perception. It had to be referencing a deeper convention in the social consciousness… so, if a designer claimed that he had the answer—we were all ears.

    “So… as I was sitting on the toilet this morning” (this is of course where most good ideas come from), “I noticed the shiny white porcelain of the bathtub and the reflective chrome of the faucet on the wash basin… and then it hit me! Everybody perceives the iPod as ‘clean’ because it references bathroom materials!”

    The author also noticed that seamless design and a lack of moving parts is often used in science fiction to indicate advanced technology (think “Monolith” from 2001). Obviously, a “clean” design doesn’t necessarily make a product better or more usable, but good design often bundles clean with easy-to-use, and in the iPod, the two are inseparable. The click wheel’s lack of precision notwithstanding, it’s actually quite easy to use for the most common tasks. It’s also ambidextrous – easy to use whether you are left- or right-handed. Some devices have lots of buttons and controls, which can be useful at times, but the iPod covers the absolutely necessary features extremely well with a minimum of actual physical controls. What’s more, this economy of physical buttons does not detract from the usability; it actually increases it, because the controls are so simple and intuitive. In the end, it looks great and is easy to use. What more can you ask for?

  • One thing I enjoy about the iPod is using its shuffle songs feature. Now that I’ve got most of my library in one device, I enjoy hearing random songs, one after the other. Sometimes it makes for great listening, sometimes appalling, but always interesting. However, there is one feature I’d like to see: if I’m listening to one song and I want to “break out” of the shuffle (and listen to the next song on that particular album), there’s no way to do so short of navigating to that album and playing the next song manually (at least, I don’t know of a way to do so – perhaps there is a not-so-intuitive way to do it, which wouldn’t be surprising, as I imagine this is a somewhat obscure request; see the sketch just after this list for how such a feature might work). Perhaps it’s just that I like to listen to albums that have tracks that seamlessly run into one another, the prototypical example being Pink Floyd’s Dark Side of the Moon – the last 4 songs have a seamless quality that I really like to listen to as a whole, but which can be jarring if I only hear one of them.
  • This usability critique of the iPod makes mention of several of the above points, as well as some other good and bad features of the iPod:

    In Rob Walker’s New York Times Magazine article, “The Guts of a New Machine”, Steve Jobs stated: “Most people make the mistake of thinking design is what it looks like… That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.”

    He mentions the same lack of precision issue I mentioned above, and also something about the backlight being difficult to turn on or off (which is something that I imagine is only a problem for the non-color screens).
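
For what it’s worth, the “break out of shuffle” feature I wished for above seems simple enough to model. Here’s a toy sketch (a hypothetical player with a made-up library – certainly not how the iPod’s firmware actually works):

```python
# A toy model of "break out of shuffle" (purely hypothetical): while
# shuffling, one action jumps to the next track on the current album.

# (album, track_number, title) -- a tiny made-up library
LIBRARY = [
    ("Dark Side of the Moon", 8, "Any Colour You Like"),
    ("Dark Side of the Moon", 9, "Brain Damage"),
    ("Dark Side of the Moon", 10, "Eclipse"),
    ("The Wall", 1, "In the Flesh?"),
]

def break_out(current):
    """Leave shuffle: return the next track on the current song's album."""
    album, track, _ = current
    next_tracks = [s for s in LIBRARY if s[0] == album and s[1] == track + 1]
    return next_tracks[0] if next_tracks else None  # None: album finished

now_playing = ("Dark Side of the Moon", 8, "Any Colour You Like")
print(break_out(now_playing))  # ('Dark Side of the Moon', 9, 'Brain Damage')
```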

In many ways, the iPod is very similar to its competing players. It has comparable features and price, and I’m quite sure that, even though the iPod’s usability is excellent, its competitors probably aren’t that far off. But there is definitely something more to the iPod and its design, and it’s difficult to place. There seems to be a large contingent of people who are extremely hostile towards the iPod (probably for this very reason), insisting that people who like the iPod are some sort of brainwashed morons who are paying extra only for the privilege of having a player with a piece of fruit engraved on it. Perhaps, but even with the issues I cited above, the iPod has exceeded my expectations rather well.

Elevators & Usability

David Foster recently wrote a post about a new elevator system:

One might assume that elevator technology is fairly static, but then one would be wrong. The New York Times (11/2) has an article about significant improvements in elevator control systems. The idea is that you select your floor before you get on the elevator, rather than after, thereby allowing the system to dispatch elevators more intelligently–a 30% reduction in average trip time is claimed. … All good stuff; shorter waiting times and presumably lower energy consumption as well.

(NYT article is here) Foster has some interesting comments on the management types who want to use this system to avoid being in an elevator with the normal folks, but the story caught my attention from a different angle.
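
The core idea behind these systems is easy to sketch in code. Here’s a toy version (the assignment policy is my own guess; real dispatch algorithms are surely far more sophisticated): riders key in a destination before boarding, and the system groups riders bound for the same floor into the same lettered cab:

```python
# A toy sketch of destination dispatch (the greedy policy here is my own
# guess; real systems are far more sophisticated): riders enter their floor
# at a kiosk *before* boarding, and the system groups riders bound for the
# same floor into one cab, cutting down on total stops per trip.

from collections import defaultdict

CABS = ["A", "B", "C", "D", "E"]

def assign_cab(floor: int, pending: dict) -> str:
    """Reuse a cab already stopping at this floor; else take the idlest cab."""
    for cab in CABS:
        if floor in pending[cab]:
            return cab  # share the ride with others going to the same floor
    cab = min(CABS, key=lambda c: len(pending[c]))
    pending[cab].append(floor)
    return cab

pending_stops = defaultdict(list)
for floor in [12, 7, 12, 30, 7]:
    print(f"floor {floor} -> cab {assign_cab(floor, pending_stops)}")
# floor 12 -> cab A, floor 7 -> cab B, floor 12 -> cab A (shared), ...
```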

I recently attended the World Usability Day event in Philadelphia, and the keynote speaker (Tom Tullis, of Fidelity Investments) started his presentation with a long anecdote concerning this new elevator technology. It seems that while the technology may have good intentions, its execution could use a little work.

Perhaps it was just the particular implementation at the building he went to, but the system installed there was extremely difficult to use for a first-time user. First, the new system wasn’t called out very well, so Tullis had actually gotten into one of the elevators and was flummoxed by the lack of buttons inside. Eventually, after riding the elevator up and then back down to the lobby, he noticed a keypad next to the elevator he had gotten into. So he understandably assumed that he should simply enter the desired floor there, figuring that the elevator would then open and take him to that floor. He typed in his destination floor and was greeted with a screen displaying a large “E” (Tullis’s presentation has images of this, along with more information on the evolution of the elevator). Obviously an error, right? Well, no. Tullis eventually found a little sign in the lobby with a 6-page (!) manual explaining how the elevators work. It turns out that each elevator cab has a letter assigned to it, and when you enter your floor, the system assigns you to one of the cabs. So “E” was referring to the “E” cab, not an error. Now armed with the knowledge of how the system works, Tullis was able to make it to his meeting (10 minutes late).

Naturally, I think this is a bit of an extreme case (though there were a few other bad things about his experience that I didn’t even mention). The system was brand new and the building hadn’t yet converted all of their elevators to the new system, so it seems obvious that the system usability would improve over time. There are several things that could make that experience easier:

  • The keypad itself gives no directions whatsoever. It’s especially bad because the placement of the keypad implies that it only applies to the elevator it’s next to.
  • Depending on the layout of the elevator area, I think the best way to do this would be to have a choke point with a little podium that has the keypad and a concise list of instructions. This would force the user to see the system before they actually get to the elevators.
  • Once you’ve used the system and figured out how it works, it’s probably much better, especially if all of the claimed efficiencies work out the way they sound.
  • As the NYT article notes, there are some other issues that need to be dealt with. For instance, most groups would naturally like to ride in the same elevator, but this presents a problem for this system, especially when only one person in the group actually uses the keypad. There’s also some frustration with not being able to get on the first available elevator, though that may be mitigated by an elevator ride with fewer stops. You also can’t change your mind once you get in the elevator…
  • It seems to me that this sort of system would be ideally suited to an extremely large skyscraper with a high volume of traffic (like a hotel). Most elevators probably wouldn’t need to be converted, which means that most people won’t be exposed to this sort of thing until they make it to one of the larger buildings (which also means that usability for first-time users will remain quite important, even though the system gets easier to use after your first time).