Science & Technology

SOPA Blues

I was going to write the annual arbitrary movie awards tonight, but since the web has apparently gone on strike, I figured I’d spend a little time talking about that instead. Many sites, including the likes of Wikipedia and Reddit, have instituted a complete blackout as part of a protest against two ill-conceived pieces of censorship legislation currently being considered by the U.S. Congress (these laws are called the Stop Online Piracy Act and Protect Intellectual Property Act, henceforth to be referred to as SOPA and PIPA). I can’t even begin to pretend that blacking out my humble little site would accomplish anything, but since a lot of my personal and professional livelihood depends on the internet, I suppose I can’t ignore this either.

For the uninitiated, if the bills known as SOPA and PIPA become law, many websites could be taken offline involuntarily, without warning, and without due process of law, based on little more than an alleged copyright owner’s unproven and uncontested allegations of infringement1. The reason Wikipedia is blacked out today is that they depend solely on user-contributed content, which means they would be a ripe target for overzealous copyright holders. Sites like Google haven’t blacked themselves out, but have staged a bit of a protest as well, because under the provisions of the bill, even just linking to a site that infringes upon copyright is grounds for action (and thus search engines have a vested interest in defeating these bills). You could argue that these bills are well intentioned, and from what I can tell, their original purpose seemed to be more about foreign websites and DNS, but the road to hell is paved with good intentions and as written, these bills are completely absurd.

Lots of other sites have been registering their feelings on the matter. ArsTechnica has been posting up a storm. Shamus has a good post on the subject which is followed by a lively comment thread. But I think Aziz hits the nail on the head:

Looks like the DNS provisions in SOPA are getting pulled, and the House is delaying action on the bill until February, so it’s gratifying to see that the activism had an effect. However, that activism would have been put to better use to educate people about why DRM is harmful, why piracy should be fought not with law but with smarter pro-consumer marketing by content owners (lowered prices, more options for digital distribution, removal of DRM, fair use, and ubiquitous time-shifting). Look at the ridiculous limitations on Hulu Plus – even if you’re a paid subscriber, some shows won’t air episodes until the week after, old episodes are not always available, some episodes can only be watched on the computer and are restricted from mobile devices. These are utterly arbitrary limitations on watching content that just drive people into the pirates’ arms.

I may disagree with some of the other things in Aziz’s post, but the above paragraph is important, and for some reason, people aren’t talking about this aspect of the story. Sure, some folks are disputing the numbers, but few are pointing out the things that IP owners could be doing instead of legislation. For my money, the most important thing that IP owners have forgotten is convenience. Aziz points out Hulu, which is one of the worst services I’ve ever seen in terms of being convenient or even just intuitive to customers. I understand that piracy is frustrating for content owners and artists, but this is not the way to fight piracy. It might be disheartening to acknowledge that piracy will always exist, but it probably will, so we’re going to have to figure out a way to deal with it. The one thing we’ve seen work is convenience. Despite the fact that iTunes had DRM, it was loose enough and convenient enough that it became a massive success (it now doesn’t have DRM, which is even better). People want to spend money on this stuff, but more often than not, content owners are making it harder on the paying customer than on the pirate. SOPA/PIPA is just the latest example of this sort of thing.

I’ve already written about my thoughts on Intellectual Property, Copyright and DRM, so I encourage you to check that out. And if you’re so inclined, you can find out what senators and representatives are supporting these bills, and throw them out in November (or in a few years, if need be). I also try to support companies or individuals that put out DRM-free content (for example, Louis CK’s latest concert video has been made available, DRM free, and has apparently been a success).

Intellectual Property and Copyright is a big subject, and I have to be honest in that I don’t have all the answers. But the way it works right now just doesn’t seem right. A copyrighted work released just before I was born (i.e. Star Wars) probably won’t enter the public domain until after I’m dead (I’m generally an optimistic guy, so I won’t complain if I do make it to 2072, but still). Both protection and expiration are important parts of the way copyright works in the U.S. It’s a balancing act, to be sure, but I think the pendulum has swung too far in one direction. Maybe it’s time we swing it back. Now if you’ll excuse me, I’m going to participate in a different kind of blackout to protest SOPA.

1 – Thanks to James for the concise description. There are lots of much longer and better-sourced descriptions of the shortcomings of this bill and the issues surrounding it, so I won’t belabor the point here.

Communication

About two years ago (has it really been that long!?), I wrote a post about Interrupts and Context Switching. As long and ponderous as that post was, it was actually meant to be part of a larger series of posts. This post is meant to be the continuation of that original post and hopefully, I’ll be able to get through the rest of the series in relatively short order (instead of dithering for another couple years). While I’m busy providing context, I should also note that this series was also planned for my internal work blog, but in the spirit of arranging my interests in parallel (and because I don’t have that much time at work dedicated to blogging on our intranet), I’ve decided to publish what I can here. Obviously, some of the specifics of my workplace have been removed from what follows, but it should still contain enough general value to be worthwhile.

In the previous post, I wrote about how computers and humans process information and in particular, how they handle switching between multiple different tasks. It turns out that computers are much better at switching tasks than humans are (for reasons belabored in that post). When humans want to do something that requires a lot of concentration and attention, such as computer programming or complex writing, they tend to work best when they have large amounts of uninterrupted time and can work in an environment that is quiet and free of distractions. Unfortunately, such environments can be difficult to find. As such, I thought it might be worth examining the source of most interruptions and distractions: communication.

Of course, this is a massive subject that can’t even be summarized in something as trivial as a blog post (even one as long and bloviated as this one is turning out to be). That being said, it’s worth examining in more detail because most interruptions we face are either directly or indirectly attributable to communication. In short, communication forces us to do context switching, which, as we’ve already established, is bad for getting things done.

Let’s say that you’re working on something large and complex. You’ve managed to get started and have reached a mental state that psychologists refer to as flow (also colloquially known as being “in the zone”). Flow is basically a condition of deep concentration and immersion. When you’re in this state, you feel energized and often don’t even recognize the passage of time. Seemingly difficult tasks no longer feel like they require much effort and the work just kinda… flows. Then someone stops by your desk to ask you an unrelated question. As a nice person and an accommodating coworker, you stop what you’re doing, listen to the question and hopefully provide a helpful answer. This isn’t necessarily a bad thing (we all enjoy helping other people out from time to time) but it also represents a series of context switches that would most likely break you out of your flow.

Not all work requires you to reach a state of flow in order to be productive, but for anyone involved in complex tasks like engineering, computer programming, design, or in-depth writing, flow is a necessity. Unfortunately, flow is somewhat fragile. It doesn’t happen instantaneously; it requires a transition period where you refamiliarize yourself with the task at hand and the myriad issues and variables you need to consider. When your colleague departs and you can turn your attention back to the task at hand, you’ll need to spend some time getting your brain back up to speed.

In isolation, the kind of interruption described above might still be alright every now and again, but imagine if the above scenario happened a couple dozen times in a day. If you’re supposed to be working on something complicated, such a series of distractions would be disastrous. Unfortunately, I work for a 24/7 retail company, and the nature of our business sometimes requires frequent interruptions, so there are times when I am in a near constant state of context switching. None of this is to say I’m not part of the problem. I am certainly guilty of interrupting others, sometimes frequently, when I need some urgent information. This makes working on particularly complicated problems extremely difficult.

In the above example, there are only two people involved: you and the person asking you a question. However, in most workplace environments, that situation indirectly impacts the people around you as well. If they’re immersed in their work, an unrelated conversation two cubes down may still break them out of their flow and slow their progress. This isn’t nearly as bad as some workplaces that have a public address system – basically a way to interrupt hundreds or even thousands of people in order to reach one person – but it does still represent a challenge.

Now, the really insidious part about all this is that communication is really a good thing, a necessary thing. In a large-scale organization, no one person can know everything, so communication is unavoidable. Meetings and phone calls can be indispensable sources of information and enablers of collaboration. The trick is to do this sort of thing in a way that interrupts as few people as possible. In some cases, this will be impossible. For example, urgency often forces disruptive communication (because you cannot afford to wait for an answer, you will need to be more intrusive). In other cases, there are ways to minimize the impact of frequent communication.

One way to minimize communication is to have frequently requested information documented in a common repository, so that if someone has a question, they can find it there instead of interrupting you (and potentially those around you). Naturally, this isn’t quite as effective as we’d like, mostly because documenting information is a difficult and time consuming task in itself and one that often gets left out due to busy schedules and tight timelines. It turns out that documentation is hard! A while ago, Shamus wrote a terrific rant about technical documentation:

The stereotype is that technical people are bad at writing documentation. Technical people are supposedly inept at organizing information, bad at translating technical concepts into plain English, and useless at intuiting what the audience needs to know. There is a reason for this stereotype. It’s completely true.

I don’t think it’s quite as bad as Shamus points out, mostly because I think that most people suffer from the same issues as technical people. Technology tends to be complex and difficult to explain in the first place, so it’s just more obvious there. Technology is also incredibly useful because it abstracts many difficult tasks, often through the use of metaphors. But when a user experiences the inevitable metaphor shear, they have to confront how the system really works, not the easy abstraction they’ve been using. This descent into technical details will almost always be a painful one, no matter how well documented something is, which is part of why documentation gets short shrift. I think the fact that there actually is documentation is usually a rather good sign. Then again, lots of things aren’t documented at all.

There are numerous challenges for a documentation system. It takes resources, time, and motivation to write. It can become stale and inaccurate (sometimes this can happen very quickly) and thus it requires a good amount of maintenance (this can involve numerous other topics, such as version histories, automated alert systems, etc…). It has to be stored somewhere, and thus people have to know where and how to find it. And finally, the system for building, storing, maintaining, and using documentation has to be easy to learn and easy to use. This sounds all well and good, but in practice, it’s a nonesuch beast. I don’t want to get too carried away talking about documentation, so I’ll leave it at that (if you’re still interested, that nonesuch beast article is quite good). Ultimately, documentation is a good thing, but it’s obviously not the only way to minimize communication strain.

I’ve previously mentioned that computer programming is one of those tasks that require a lot of concentration. As such, most programmers abhor interruptions. Interestingly, communication technology has been becoming more and more reliant on software. As such, it should be no surprise that a lot of new tools for communication are asynchronous, meaning that the exchange of information happens at each participant’s own convenience. Email, for example, is asynchronous. You send an email to me. I choose when I want to review my messages and I also choose when I want to respond. Theoretically, email does not interrupt me (unless I use automated alerts for new email, such as the default Outlook behavior) and thus I can continue to work, uninterrupted.

The aforementioned documentation system is also a form of asynchronous communication and indeed, most of the internet itself could be considered a form of documentation. Even the communication tools used on the web are mostly asynchronous. Twitter, Facebook, YouTube, Flickr, blogs, message boards/forums, RSS and aggregators are all reliant on asynchronous communication. Mobile phones are obviously very popular, but I bet that SMS texting (which is asynchronous) is used just as much as voice, if not more so (at least, for younger people). The only major communication tools invented in the past few decades that aren’t asynchronous are instant messaging and chat clients. And even those systems are often used in a more asynchronous way than traditional speech or conversation. (I suppose web conferencing is a relatively new communication tool, though it’s really just an extension of conference calls.)
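Since most of the tools above will be familiar to programmers anyway, a toy sketch might make the distinction concrete. The snippet below (Python, with purely hypothetical “sender” and “receiver” roles) models asynchronous communication as a shared inbox: the sender drops messages off without blocking and gets back to its own work, while the receiver only drains the inbox when it decides to come up for air. A synchronous interruption, by contrast, is the equivalent of the sender blocking until the receiver answers.

```python
import queue
import threading
import time

# A shared "inbox": the asynchronous channel between sender and receiver.
inbox = queue.Queue()

def sender():
    for i in range(3):
        inbox.put(f"question {i}")  # non-blocking: nobody gets interrupted
        time.sleep(0.1)             # the sender carries on with its own work

def receiver():
    time.sleep(0.5)                 # deep in "flow", ignoring the inbox for a while
    while not inbox.empty():        # then handles everything in one batch
        print("answering:", inbox.get())

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
```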

The benefit of asynchronous communication is, of course, that it doesn’t (or at least it shouldn’t) represent an interruption. If you’re immersed in a particular task, you don’t have to stop what you’re doing to respond to an incoming communication request. You can deal with it at your own convenience. Furthermore, such correspondence (even in a supposedly short-lived medium like email) is usually stored for later reference. Such records are certainly valuable resources. Unfortunately, asynchronous communication has its own set of difficulties as well.

Miscommunication is certainly a danger in any medium, but it seems more prominent in the world of asynchronous communication. Since there is no easy back-and-forth, there is little room for immediate clarification, and the reader is often left with only their own interpretation. Miscommunication is doubly challenging because it creates an ongoing problem. What could have been a single conversation has now ballooned into several asynchronous touch-points and even the potential for wasted work.

One of my favorite quotations is from Anne Morrow Lindbergh:

To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!

It’s difficult to beat the endless nuance of face-to-face communication, and for some discussions, nothing else will do. But as Lindbergh notes, communication is, in itself, a difficult proposition. Difficult, but necessary. About the best we can do is to attempt to minimize the misunderstanding.

I suppose one way to mitigate the possibility of miscommunication is to formalize the language in which the discussion is happening. This is easier said than done, as our friends in the legal department would no doubt say. Take a close look at a formal legal contract and you can clearly see the flaws in formal language. They are ostensibly written in English, but they require a lot of effort to compose or to read. Even then, opportunities for miscommunication or loopholes exist. Such a process makes sense when dealing with two separate organizations that each have their own agenda. But for internal collaboration purposes, such a formalization of communication would be disastrous.

You could consider computer languages a form of formal communication, but for most practical purposes, this would also fall short of a meaningful method of communication. At least, with other humans. The point of a computer language is to convert human thought into computational instructions that can be carried out in an almost mechanical fashion. While such a language is indeed very formal, it is also tedious, unintuitive, and difficult to compose and read. Our brains just don’t work like that. Not to mention the fact that most of the communication efforts I’m talking about are the precursors to the writing of a computer program!

Despite all of this, a light formalization can be helpful, and the fact that teams have to produce important documentation practically forces a compromise between informal and formal methods of communication. In requirements specifications, for instance, I have found it quite beneficial to formally define the various systems, acronyms, and other jargon that are referenced later in the document. This allows for a certain consistency within the document itself, and it also helps establish guidelines for meaningful dialogue outside of the document. Of course, it wouldn’t quite be up to legal standards and it would certainly lack the rigid syntax of computer languages, but it can still be helpful.

I am not an expert in linguistics, but it seems to me that spoken language is much richer and more complex than written language. Spoken language features numerous intricacies and tonal subtleties such as inflections and pauses. Indeed, spoken language often contains its own set of grammatical patterns which can be different than written language. Furthermore, face-to-face communication also consists of body language and other signs that can influence the meaning of what is said depending on the context in which it is spoken. This sort of nuance just isn’t possible in written form.

This actually illustrates a wider problem. Again, I’m no linguist and haven’t spent a ton of time examining the origins of language, but it seems to me that language emerged as a more immediate form of communication than what we use it for today. In other words, language was meant to be ephemeral, but with the advent of written language and improved technological means for recording communication (which are, historically, relatively recent developments), we’re treating it differently. What was meant to be short-lived and transitory is now enduring and long-lived. As a result, we get things like the ever changing concept of political-correctness. Or, more relevant to this discussion, we get the aforementioned compromise between formal and informal language.

Another drawback to asynchronous communication is the propensity for over-communication. The CC field in an email can be a dangerous thing. It’s very easy to broadcast your work out to many people, but the more this happens, the more difficult it becomes to keep track of all the incoming stimuli. Also, the language used in such a communication may be optimized for one type of reader, while the audience may be more general. This applies to other asynchronous methods as well. Documentation in a wiki is infamously difficult to categorize and find later. When you have an army of volunteers (as Wikipedia does), it’s not as large a problem. But most organizations don’t have such luxuries. Indeed, we’re usually lucky if something is documented at all, let alone well organized and optimized.

The obvious question, which I’ve skipped over for most of this post (and, for that matter, the previous post), is: why communicate in the first place? If there are so many difficulties that arise out of communication, why not minimize such frivolities so that we can get something done?

Indeed, many of the greatest works in history were created by one mind. Sometimes, two. If I were to ask you to name the greatest inventor of all time, what would you say? Leonardo da Vinci or perhaps Thomas Edison. Both had workshops consisting of many helping hands, but their greatest ideas and conceptual integrity came from one man. Great works of literature? Shakespeare is the clear choice. Music? Bach, Mozart, Beethoven. Painting? da Vinci (again!), Rembrandt, Michelangelo. All individuals! There are collaborations as well, but usually only between two people. The Wright brothers, Gilbert and Sullivan, and so on.

So why have design and invention gone from solo efforts to group efforts? Why do we know the names of most of the inventors of 19th and early 20th century innovations, but not the names behind later achievements? For instance, who designed the Saturn V rocket? No one knows that, because it was a large team of people (and it was the culmination of numerous predecessors made by other teams of people). Why is that?

The biggest and most obvious answer is the increasing technological sophistication in nearly every area of engineering. The infamous Lazarus Long adage that “specialization is for insects” notwithstanding, the amount of effort and specialization in various fields is astounding. Take a relatively obscure and narrow branch of mechanical engineering like fluid dynamics, and you’ll find people devoting most of their lives to the study of that field. Furthermore, the applications of that field go far beyond what we’d assume. Someone tinkering in their garage couldn’t make the Saturn V alone. They’d require too much expertise in a wide and disparate array of fields.

This isn’t to say that someone tinkering in their garage can’t create something wonderful. Indeed, that’s where the first personal computer came from! And we certainly know the names of many innovators today. Mark Zuckerberg and Larry Page/Sergey Brin immediately come to mind… but even their inventions spawned large companies with massive teams driving future innovation and optimization. It turns out that scaling a product up often takes more effort and more people than expected. (More information about the pros and cons of moving to a collaborative structure will have to wait for a separate post.)

And with more people comes more communication. It’s a necessity. You cannot collaborate without large amounts of communication. In Tom DeMarco and Timothy Lister’s book Peopleware, they call this the High-Tech Illusion:

…the widely held conviction among people who deal with any aspect of new technology (as who of us does not?) that they are in an intrinsically high-tech business. … The researchers who made fundamental breakthroughs in those areas are in a high-tech business. The rest of us are appliers of their work. We use computers and other new technology components to develop our products or to organize our affairs. Because we go about this work in teams and projects and other tightly knit working groups, we are mostly in the human communication business. Our successes stem from good human interactions by all participants in the effort, and our failures stem from poor human interactions.

(Emphasis mine.) That insight is part of what initially inspired this series of posts. It’s very astute, and most organizations work along those lines, and thus need to figure out ways to account for the additional costs of communication (this is particularly daunting, as such things are notoriously difficult to measure, but I’m getting ahead of myself). I suppose you could argue that both of these posts are somewhat inconclusive. Some of that is because they are part of a larger series, but also, as I’ve been known to say, human beings don’t so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Recognizing and acknowledging the problems introduced by collaboration and communication is vital to working on any large project. As I mentioned towards the beginning of this post, this only really scratches the surface of the subject of communication, but for the purposes of this series, I think I’ve blathered on long enough. My next topic in this series will probably cover the various difficulties of providing estimates. I’m hoping the groundwork laid in these first two posts will mean that the next post won’t be quite so long, but you never know!

Unnecessary Gadgets

So the NY Times has an article debating the necessity of the various gadgets. The argument here is that we’re seeing a lot of convergence in tech devices, and that many technologies that once warranted a dedicated device are now covered by something else. Let’s take a look at their devices, what they said, and what I think:

  • Desktop Computer – NYT says to chuck it in favor of laptops. I’m a little more skeptical. Laptops are certainly better now than they’ve ever been, but I’ve been hearing about desktop-killers for decades now and I’m not even that old (ditto for thin clients, though the newest hype around the “cloud” computing thing is slightly more appealing – but even that won’t supplant desktops entirely). I think desktops will be here to stay. I’ve got a fair amount of experience with both personal and work laptops, and I have to say that they’re both inferior to desktops. This is fine when I need to use the portability, but that’s not often enough to justify some of the pain of using laptops. For instance, I’m not sure what kinda graphics capabilities my work laptop has, but it really can’t handle my dual-monitor setup, and even on one monitor, the display is definitely crappier than my old desktop (and that thing was ancient). I do think we’re going to see some fundamental changes in the desktop/laptop/smartphone realm. The three form factors are all fundamentally useful in their own way, but I’d still expect some sort of convergence in the next decade or so. I’m expecting that smartphones will become ubiquitous, and perhaps become some sort of portable profile that you could use across your various devices. That’s a more long term thing though.
  • High Speed Internet at Home – NYT says to keep it, and I agree. Until we can get a real 4G network (i.e. not the slightly enhanced 3G stuff the current telecom companies are peddling), there’s no real question here.
  • Cable TV – NYT plays the “maybe” card on this one, but I think I can go along with that. It all depends on whether you watch TV or not (and/or if you enjoy live TV, like sporting events). I’m on the fence with this one myself. I have cable, and a DVR does make dealing with broadcast television much easier, and I like the opportunities afforded by OnDemand, etc… But it is quite expensive. If I ever get into a situation where I need to start pinching pennies, Cable is going to be among the first things to go.
  • Point and Shoot Camera – NYT says to lose it in favor of the smartphone, and I probably agree. Obviously there’s still a market for dedicated high-end cameras, but the small point-and-shoot ones are quickly being outclassed by their fledgling smartphone siblings. My current iPhone camera is kinda crappy (2 MP, no flash), but even that works ok for my purposes. There are definitely times when I wish I had a flash or better quality, but they’re relatively rare and I’ve had this phone for like 3 years now (probably upgrading this summer). My next camera will most likely meet all my photography needs.
  • Camcorder – NYT says to lose it, and that makes a sort of sense. As they say, camcorders are getting squeezed from both ends of the spectrum, with smartphones and cheap flip cameras on one end, and high end cameras on the other. I don’t really know much about this though. I’m betting that camcorders will still be around, just not quite as popular as before.
  • USB Thumb Drive – NYT says lose it, and I think I agree, though not necessarily for the same reasons. They think that the internet means you don’t need to use physical media to transfer data anymore. I suppose there’s something to that, but my guess is that Smartphones could easily pick up the slack and allow for portable data without a dedicated device. That being said, I’ve used a thumb drive, like, 3 times in my life.
  • Digital Music Player – NYT says ditch it in favor of smartphones, with the added caveat that people who exercise a lot might like a smaller, dedicated device. I can see that, but on a personal level, I have both and don’t mind it at all. I don’t like using up my phone battery playing music, and I honestly don’t really like the iPhone music player interface, so I actually have a regular old iPod nano for music and podcasts (also, I like to have manual control over what music/podcasts get on my device, and that’s weird on the iPhone – at least, it used to be). My setup works fine for me most times, and in an emergency, I do have music (and a couple movies) on my iPhone, so I could make do.
  • Alarm Clock – NYT says keep it, though I’m not entirely convinced. Then again, I have an alarm clock, so I can’t mount much of an offense against it. I’ve realized, though, that the vast majority of clocks that I use in my house are automatically updated (Cable box, computers, phone) and synced with some external source (no worrying about DST, etc…) My alarm clock isn’t, though. I still use my phone as a failsafe for when I know I need to get up early, but that’s more based on the possibility of snoozing myself into oblivion (I can easily snooze for well over an hour). I think I may actually end up replacing my clock, but I can see some young whipper-snappers relying on some other device for their wakeup calls…
  • GPS Unit – NYT says lose it, and I agree. With the number of smartphone apps (excluding the ones that come with your phone, which are usually functional but still kinda clunky as a full GPS system) that are good at this sort of thing (and a lot cheaper), I can’t see how anyone could really justify a dedicated device for this. On a recent trip, a friend used Navigon’s Mobile Navigator ($30, and usable on any of his portable devices) and it worked like a charm. Just as good as any GPS I’ve ever used. The only problem, again, is that it will drain the phone battery (unless you plug it in, which we did).
  • Books – NYT says to keep them, and I mostly agree. The only time I can see really wanting to use a dedicated eReader is when travelling, and even then, I’d want it to be a broad device, not dedicated to books. I have considered the Kindle (as it comes down in price), but for now, I’m holding out for a tablet device that will actually have a good enough screen for this sort of thing. Which, I understand, isn’t too far off on the horizon. There are a couple of other nice things about digital books though, namely, the ability to easily mark favorite passages, or to do a search (two things that would probably save me a lot of time). I can’t see books ever going away, but I can see digital readers being a part of my life too.

A lot of these made me think of Neal Stephenson’s System of the World. In that book, one of the characters ponders how new systems supplant older systems:

“It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently … have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. … And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher’s Stone.” (page 639)

That sort of “surround and encapsulate” concept seems broadly applicable to a lot of technology, actually.

Artificial Memory

Nicholas Carr cracks me up. He’s a skeptic of technology, and in particular, the internet. He’s the guy who wrote the wonderfully divisive article, Is Google Making Us Stupid? The funny thing about all this is that he seems to have gained the most traction on the very platform he criticizes so much. Ultimately, though, I think he does have valuable insights and, if nothing else, he does raise very interesting questions about the impacts of technology on our lives. He makes an interesting counterweight to the techno-geeks who are busy preaching about transhumanism and the singularity. Of course, in a very real sense, his opposition dooms him to suffer from the same problems as those he criticizes. Google and the internet may not be a direct line to godhood, but they don’t represent a descent into hell either. Still, reading some Carr is probably a good way to put techno-evangelism into perspective and perhaps reach some sort of Hegelian synthesis of what’s really going on.

Otakun recently pointed to an excerpt from Carr’s latest book. The general point of the article is to examine how human memory is being conflated with computer memory, and whether or not that makes sense:

…by the middle of the twentieth century memorization itself had begun to fall from favor. Progressive educators banished the practice from classrooms, dismissing it as a vestige of a less enlightened time. What had long been viewed as a stimulus for personal insight and creativity came to be seen as a barrier to imagination and then simply as a waste of mental energy. The introduction of new storage and recording media throughout the last century—audiotapes, videotapes, microfilm and microfiche, photocopiers, calculators, computer drives—greatly expanded the scope and availability of “artificial memory.” Committing information to one’s own mind seemed ever less essential. The arrival of the limitless and easily searchable data banks of the Internet brought a further shift, not just in the way we view memorization but in the way we view memory itself. The Net quickly came to be seen as a replacement for, rather than just a supplement to, personal memory. Today, people routinely talk about artificial memory as though it’s indistinguishable from biological memory.

While Carr is perhaps more blunt than I would be, I have to admit that I agree with a lot of what he’s saying here. We often hear about how modern education is improved by focusing on things like “thinking skills” and “problem solving”, but the big problem with emphasizing that sort of work ahead of memorization is that the analysis needed for such processes requires a base level of knowledge in order to be effective. This is something I’ve expounded on at length in a previous post, so I won’t rehash that here.

The interesting thing about the internet is that it enables you to get to a certain base level of knowledge and competence very quickly. This doesn’t come without its own set of challenges, and I’m sure Carr would be quick to point out that such a crash course would yield a false sense of security in us hapless internet users. After all, how do we know when we’ve reached that base level of competence? Our incompetence could very well be masking our ability to recognize our incompetence. However, I don’t think that’s an insurmountable problem. Most of us that use the internet a lot view it as something of a low-trust environment, which can, ironically, lead to a better result. On a personal level, I find that what the internet really helps with is to determine just how much I don’t know about a subject. That might seem like a silly thing to say, but even recognizing that your unknown unknowns are large can be helpful.

Some other assorted thoughts about Carr’s excerpt:

  • I love the concept of a “commonplace book” and immediately started thinking of how I could implement one… which is when I realized that I’ve actually been keeping one, more or less, for the past 10 or so years on this blog. That being said, it’s something I wouldn’t mind becoming more organized about, and I’ve got some interesting ideas about what my personal take on a commonplace would look like.
  • Carr insists that the metaphor that portrays the brain as a computer is wrong. It’s a metaphor I’ve certainly used in the past, though I think what I find most interesting about that metaphor is how different computers and brains really are. The problem with the metaphor is that our brains work nothing even remotely like the way our current computers actually work. However, many of the concepts of computer science and engineering can be useful in helping to model how the brain works. I’m certainly not an expert on the subject, but for example: you could model the brain as a binary computer because our neurons are technically binary. However, our neurons don’t just turn on or off; they pulse, and things like frequency and duration can yield dramatically different results (see the sketch after this list). Not to mention the fact that the brain seems to be a massively parallel computing device, as opposed to the mostly serial electronic tools we use. That is, of course, a drastic simplification, but you get the point. The metaphor is flawed, as all metaphors are, but it can also be useful.
  • One thing that Carr doesn’t really get into (though he may cover this in a later chapter) is how notoriously unreliable human memory actually is. Numerous psychological studies show just how impressionable and faulty our memory of an event can be. This doesn’t mean we should abandon our biological memory, just that having an external, artificial memory of an event (i.e. some sort of recording) can be useful in helping to identify and shape our perceptions.
  • Of course, even recordings can yield a false sense of truth, so things like Visual Literacy are still quite important. And again, one cannot analyze said recordings accurately without a certain base set of knowledge about what we’re looking at – this is another concept that has been showing up on this blog for a while now as well: Exformation.
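As noted in the binary-computer bullet above, here is a toy sketch of why “neurons are binary” doesn’t get you very far. It’s plain Python, a leaky integrate-and-fire caricature rather than anything resembling real neuroscience: each spike is all-or-nothing (the “binary” part), but the useful information is carried in the firing rate, which is one reason the brain-as-computer metaphor only goes so far.

```python
def spike_train(input_current, threshold=1.0, leak=0.9, steps=1000):
    """Toy leaky integrate-and-fire neuron: the membrane potential accumulates
    input and leaks over time; when it crosses the threshold, the neuron emits
    an all-or-nothing spike and resets."""
    potential, spikes = 0.0, 0
    for _ in range(steps):
        potential = potential * leak + input_current
        if potential >= threshold:
            spikes += 1
            potential = 0.0  # reset after firing
    return spikes

# Weak vs. strong inputs produce identical binary spikes, but at very
# different frequencies (and a weak enough input never fires at all).
for current in (0.05, 0.2, 0.5):
    print(f"input {current:.2f} -> {spike_train(current)} spikes")
```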

And that’s probably enough babbling about Carr’s essay. I generally disagree with the guy, but on this particular subject, I think we’re more in agreement.

A/B Testing Spaghetti Sauce

Earlier this week I was perusing some TED Talks and ran across this old (and apparently popular) presentation by Malcolm Gladwell. It struck me as particularly relevant to several topics I’ve explored on this blog, including Sunday’s post on the merits of A/B testing. In the video, Gladwell explains why there are a billion different varieties of spaghetti sauce at most supermarkets:

Again, this video touches on several topics explored on this blog in the past. For instance, it describes the origins of what’s become known as the Paradox of Choice (or, as some would have you believe, the Paradise of Choice) – indeed, there’s another TED talk linked right off the Gladwell video that covers that topic in detail.

The key insight Gladwell discusses in his video is basically the destruction of the Platonic Ideal (I’ll summarize in this paragraph in case you didn’t watch the video, which covers the topic in much more depth). He talks about Howard Moskowitz, who was a market research consultant with various food industry companies that were attempting to optimize their products. After conducting lots of market research and puzzling over the results, Moskowitz eventually came to a startling conclusion: there is no perfect product, only perfect products. Moskowitz made his name working with spaghetti sauce. Prego had hired him in order to find the perfect spaghetti sauce (so that they could compete with rival company, Ragu). Moskowitz developed dozens of prototype sauces and went on the road, testing each variety with all sorts of people. What he found was that there was no single perfect spaghetti sauce, but there were basically three types of sauce that people responded to in roughly equal proportion: standard, spicy, and chunky. At the time, there were no chunky spaghetti sauces on the market, so when Prego released their chunky spaghetti sauce, their sales skyrocketed. A full third of the market was underserved, and Prego filled that need.
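A crude way to picture what Moskowitz was doing is to treat each taster’s ratings as a point in “preference space” and look for clusters instead of a single average. Here’s a minimal sketch of that idea in Python, using entirely invented data and made-up preference dimensions (this is not his actual method, just the general shape of it): the single averaged “ideal” sauce would land in the sparse middle of the space, while the cluster centers correspond to the plain, spicy, and chunky camps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical taster data: each row is one person's ratings along three
# made-up dimensions (sweetness, spiciness, chunkiness). The population is
# simulated as three latent "camps", roughly what the road testing uncovered.
camps = [
    rng.normal(loc=[0.7, 0.2, 0.2], scale=0.1, size=(40, 3)),  # plain/standard
    rng.normal(loc=[0.3, 0.8, 0.2], scale=0.1, size=(40, 3)),  # spicy
    rng.normal(loc=[0.3, 0.2, 0.8], scale=0.1, size=(40, 3)),  # extra-chunky
]
ratings = np.vstack(camps)

def kmeans(data, init, iters=50):
    """Bare-bones k-means: assign each taster to the nearest centroid, then
    move each centroid to the mean of its assigned tasters, and repeat."""
    centroids = data[init].copy()
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([data[labels == j].mean(axis=0)
                              for j in range(len(init))])
    return centroids, labels

# Seed one centroid in each simulated camp to keep the sketch short;
# a real implementation would use something like k-means++ initialization.
centroids, labels = kmeans(ratings, init=[0, 40, 80])
print("cluster centers (sweet, spicy, chunky):")
print(np.round(centroids, 2))
print("cluster sizes:", np.bincount(labels))
```

For what it’s worth, averaging all the ratings together (ratings.mean(axis=0)) lands on a middling sauce that matches none of the camps, which is exactly the Platonic-ideal trap the clustering avoids.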

Decades later, this is hardly news to us and the trend has spread from the supermarket into all sorts of other arenas. In entertainment, for example, we’re seeing a move towards niches. The era of huge blockbuster bands like The Beatles is coming to an end. Of course, there will always be blockbusters, but the really interesting stuff is happening in the niches. This is, in part, due to technology. Once you can fit 30,000 songs onto an iPod and you can download “free” music all over the internet, it becomes much easier to find music that fits your tastes better. Indeed, this becomes a part of peoples’ identity. Instead of listening to the mass produced stuff, they listen to something a little odd and it becomes an expression of their personality. You can see evidence of this everywhere, and the internet is a huge enabler in this respect. The internet is the land of niches. Click around for a few minutes and you can easily find absurdly specific, single topic, niche websites like this one where every post features animals wielding lightsabers or this other one that’s all about Flaming Garbage Cans In Hip Hop Videos (there are thousands, if not millions of these types of sites). The internet is the ultimate paradox of choice, and you’re free to explore almost anything you desire, no matter how odd or obscure it may be (see also, Rule 34).

In relation to Sunday’s post on A/B testing, the lesson here is that A/B testing is an optimization tool that allows you to see how various segments respond to different versions of something. In that post, I used an example where an internet retailer was attempting to find the ideal imagery to sell a diamond ring. A common debate in the retail world is whether that image should just show a closeup of the product, or if it should show a model wearing the product. One way to solve that problem is to A/B test it – create both versions of the image, segment visitors to your site, and track the results.

As discussed Sunday, there are a number of challenges with this approach, but one thing I didn’t mention is the unspoken assumption that there actually is an ideal image. In reality, there are probably some people that prefer the closeup and some people who prefer the model shot. An A/B test will tell you what the majority of people like, but wouldn’t it be even better if you could personalize the imagery used on the site depending on what customers like? Show the type of image people prefer, and instead of catering to the most popular segment of customers, you cater to all customers (the simple diamond ring example begins to break down at this point, but more complex or subtle tests could still show significant results when personalized). Of course, this is easier said than done – just ask Amazon, who does CRM and personalization as well as any retailer on the web, and yet manages to alienate a large portion of their customers every day! Interestingly, this really just shifts the purpose of A/B testing from one of finding the platonic ideal to finding a set of ideals that can be applied to various customer segments. Once again we run up against the need for more and better data aggregation and analysis techniques. Progress is being made, but I’m not sure what the endgame looks like here. I suppose time will tell. For now, I’m just happy that Amazon’s recommendations aren’t completely absurd for me at this point (which I find rather amazing, considering where they were a few years ago).
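To make that shift concrete, here’s a small simulation (Python again, with invented segment names and conversion rates that have nothing to do with any real retailer’s data): the plain A/B readout crowns one image overall, but breaking the very same results out by segment reveals a group of customers who clearly convert better on the other one, which is exactly the argument for a set of ideals rather than a single winner.

```python
import random
from collections import defaultdict

random.seed(42)

# Invented "true" conversion rates: detail-oriented shoppers respond to the
# closeup, aspirational shoppers respond to the model shot.
TRUE_RATES = {
    ("detail", "closeup"): 0.06, ("detail", "model"): 0.03,
    ("aspirational", "closeup"): 0.02, ("aspirational", "model"): 0.04,
}

shown = defaultdict(int)      # (segment, variant) -> visitors
converted = defaultdict(int)  # (segment, variant) -> purchases

for _ in range(20000):
    segment = random.choice(["detail", "aspirational"])
    variant = random.choice(["closeup", "model"])  # the 50/50 A/B split
    shown[(segment, variant)] += 1
    if random.random() < TRUE_RATES[(segment, variant)]:
        converted[(segment, variant)] += 1

def rate(keys):
    visitors = sum(shown[k] for k in keys)
    buyers = sum(converted[k] for k in keys)
    return buyers / visitors if visitors else 0.0

for variant in ("closeup", "model"):
    overall = rate([(seg, variant) for seg in ("detail", "aspirational")])
    print(f"{variant:8s} overall: {overall:.3f}  "
          f"detail: {rate([('detail', variant)]):.3f}  "
          f"aspirational: {rate([('aspirational', variant)]):.3f}")
```

With rates like these, the closeup “wins” the overall test, yet every aspirational shopper who sees it is being served the weaker image; personalization is just the decision to act on the per-segment numbers instead of the aggregate.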

Incompetence

Noted documentary filmmaker Errol Morris has been writing a series of posts about incompetence for the NY Times. The most interesting parts feature an interview with David Dunning, a psychologist whose experiments have discovered what’s called the Dunning-Kruger Effect: our incompetence masks our ability to recognize our incompetence.

DAVID DUNNING: There have been many psychological studies that tell us what we see and what we hear is shaped by our preferences, our wishes, our fears, our desires and so forth. We literally see the world the way we want to see it. But the Dunning-Kruger effect suggests that there is a problem beyond that. Even if you are just the most honest, impartial person that you could be, you would still have a problem — namely, when your knowledge or expertise is imperfect, you really don’t know it. Left to your own devices, you just don’t know it. We’re not very good at knowing what we don’t know.

I found this interesting in light of my recent posting about universally self-affirming outlooks (i.e. seeing the world the way we want to see it). In any case, the interview continues:

ERROL MORRIS: Knowing what you don’t know? Is this supposedly the hallmark of an intelligent person?

DAVID DUNNING: That’s absolutely right. It’s knowing that there are things you don’t know that you don’t know. [4] Donald Rumsfeld gave this speech about “unknown unknowns.” It goes something like this: “There are things we know we know about terrorism. There are things we know we don’t know. And there are things that are unknown unknowns. We don’t know that we don’t know.” He got a lot of grief for that. And I thought, “That’s the smartest and most modest thing I’ve heard in a year.”

It may be smart and modest, but that sort of thing usually gets politicians in trouble. But most people aren’t politicians, and so it’s worth looking into this concept a little further. An interesting result of this effect is that a lot of the smartest, most intelligent people also tend to be somewhat modest (this isn’t to say that they don’t have an ego or that they can’t act in arrogant ways, just that they tend to have a better idea about how much they don’t know). Steve Schwartz has an essay called No One Knows What the F*** They’re Doing (or “The 3 Types of Knowledge”) that explores these ideas in some detail:

To really understand how it is that no one knows what they’re doing, we need to understand the three fundamental categories of information.

There’s the shit you know, the shit you know you don’t know, and the shit you don’t know you don’t know.

Schwartz has a series of very helpful charts that illustrate this, but most people drastically overestimate the amount of knowledge in the “shit you know” category. In fact, that’s the smallest category and it is dwarfed by the shit you know you don’t know category, which is, in itself, dwarfed by the shit you don’t know you don’t know. The result is that most people who receive a lot of praise or recognition are surprised and feel a bit like a fraud.

This is hardly a new concept, but it’s always worth keeping in mind. When we learn something new, we’ve gained some knowledge. We’ve put some information into the “shit we know” category. But more importantly, we’ve probably also taken something out of the “shit we don’t know that we don’t know” category and put it into the “shit we know that we don’t know” category. This is important because that unknown unknowns category is the most dangerous of the three, not least because our ignorance prevents us from really exploring it. As mentioned at the beginning of this post, our incompetence masks our ability to recognize our incompetence. In the interview, Morris references a short film he did once:

ERROL MORRIS: And I have an interview with the president of the Alcor Life Extension Foundation, a cryonics organization, on the 6 o’clock news in Riverside, California. One of the executives of the company had frozen his mother’s head for future resuscitation. (It’s called a “neuro,” as opposed to a “full-body” freezing.) The prosecutor claimed that they may not have waited for her to die. In answer to a reporter’s question, the president of the Alcor Life Extension Foundation said, “You know, we’re not stupid . . . ” And then corrected himself almost immediately, “We’re not that stupid that we would do something like that.”

DAVID DUNNING: That’s pretty good.

ERROL MORRIS: “Yes. We’re stupid, but we’re not that stupid.”

DAVID DUNNING: And in some sense we apply that to the human race. There’s some comfort in that. We may be stupid, but we’re not that stupid.

One might be tempted to call this a cynical outlook, but what it basically amounts to is that there’s always something new to learn. Indeed, the more we learn, the more there is to learn. Now, if only we could invent technology like what’s presented in Diaspora (from my previous post), so we can live long enough to really learn a lot about the universe around us…

Internalizing the Ancient

Otaku Kun points to a wonderful entry in the Astronomy Picture of the Day series:

APOD: Milky Way Over Ancient Ghost Panel

The photo features two main elements: a nice view of the stars in the sky and a series of paintings on a canyon wall in Utah (it’s the angle of the photograph and the clarity of the sky that makes it seem unreal to me, but looking at the larger version makes things a bit more clear). As OK points out, there are two corresponding kinds of antiquity here: “one cosmic, the other human”. He speculates:

I think it’s impossible to really relate to things beyond human timescales. The idea of something being “ancient” has no meaning if it predates our human comprehension. The Neanderthals disappeared 30,000 years ago, which is probably really the farthest back we can reflect on. When we start talking about human forebears of 100,000 years ago and more, it becomes more abstract – that’s why it’s no coincidence that the Battlestar Galactica series finale set the events 150,000 years ago, well beyond even the reach of mythological narrative.

I’m reminded of an essay by C. Northcote Parkinson, called High Finance or The Point of Vanishing Interest (the essay appears in Parkinson’s Law, a collection of essays). Parkinson writes about how finance committees work:

People who understand high finance are of two kinds: those who have vast fortunes of their own and those who have nothing at all. To the actual millionaire a million dollars is something real and comprehensible. To the applied mathematician and the lecturer in economics (assuming both to be practically starving) a million dollars is at least as real as a thousand, they having never possessed either sum. But the world is full of people who fall between these two categories, knowing nothing of millions but well accustomed to think in thousands, and it is of these that finance committees are mostly comprised.

He then goes on to explore what he calls the “Law of Triviality”. Briefly stated, it means that the time spent on any item of the agenda will be in inverse proportion to the sum involved. Thus he concludes, after a number of humorous but fitting examples, that there is a point of vanishing interest where the committee can no longer comment with authority. Astonishingly, the amount of time that is spent on $10 million and on $10 may well be the same. There is clearly a space of time which suffices equally for the largest and smallest sums.

In short, it’s difficult to internalize numbers that high, whether we’re talking about large sums of money or cosmic timescales. Indeed, I’d even say that Parkinson was being a bit optimistic. Millionaires and mathematicians may have a better grasp on the situation than most, but even they are probably at a loss when we start talking about cosmic timeframes. OK also mentions Battlestar Galactica, which did end on an interesting note (even if that finale was quite disappointing as a whole) and which brings me to one of the reasons I really enjoy science fiction: the contemplation of concepts and ideas that are beyond comprehension. I can’t really internalize the cosmic information encoded in the universe around me in such a way to do anything useful with it, but I can contemplate it and struggle to understand it, which is interesting and valuable in its own right. Perhaps someday, we will be able to devise ways to internalize and process information on a cosmic scale (this sort of optimistic statement perhaps represents another reason I enjoy SF).

Predictions

Someone sent me a note about a post I wrote on the 4th Kingdom boards in 2005 (August 3, 2005, to be more precise). It was in response to a thread about technology and consumer electronics trends, and the original poster gave two examples that were exploding at the time: “camera phones and iPods.” This is what I wrote in response:

Heh, I think the next big thing will be the iPod camera phone. Or, on a more general level, mp3 player phones. There are already some nifty looking mp3 phones, most notably the Sony/Ericsson “Walkman” branded phones (most of which are not available here just yet). Current models are all based on flash memory, but it can’t be long before someone releases something with a small hard drive (a la the iPod). I suspect that, in about a year, I’ll be able to hit 3 birds with one stone and buy a new cell phone with an mp3 player and digital camera.

As for other trends, as you mention, I think we’re goint to see a lot of hoopla about the next gen gaming consoles. The new Xbox comes out in time for Xmas this year and the new Playstation 3 hits early next year. The new playstation will probably have blue-ray DVD capability, which brings up another coming tech trend: the high capacity DVD war! It seems that Sony may actually be able to pull this one out (unlike Betamax), but I guess we’ll have to wait and see…

For an off-the-cuff informal response, I think I did pretty well. Of course, I still got a lot of the specifics wrong. For instance, I’m pretty clearly talking about the iPhone, though that would have to wait about 2 years before it became a reality. I also didn’t anticipate the expansion of flash memory to more usable sizes and prices. Though I was clearly talking about a convergence device, I didn’t really say anything about what we now call “apps”.

In terms of game consoles, I didn’t really say much. My first thought upon reading this post was that I had completely missed the boat on the Wii; however, it appears that the Wii’s new controller scheme wasn’t shown until September 2005 (about a month after my post). I did manage to predict a winner in the HD video war though, even if I framed the prediction as a “high capacity DVD war” and spelled blu-ray wrong.

I’m not generally good at making predictions about this sort of thing, but it’s nice to see when I do get things right. Of course, you could make the argument that I was just stating the obvious (which is basically what I did with my 2008 predictions). Then again, everything seems obvious in hindsight, so perhaps it is still a worthwhile exercise for me. If nothing else, it gets me to think in ways I’m not really used to… so here are a few predictions for the rest of this year:

  • Microsoft will release Natal this year, and it will be a massive failure. There will be a lot of neat talk about it and speculation about the future, but the fact is that gesture based interfaces and voice controls aren’t especially great. I’ll bet everyone says they’d like to use the Minority Report interface… but once they get to use it, I doubt people would actually find it more useful than current input methods. If it does attain success though, it will be because of the novelty of that sort of interaction. As a gaming platform, I think it will be a near total bust. The only way Microsoft would get Natal into homes is to bundle it with the Xbox 360 (without raising the price).
  • Speaking of which, I think Sony’s Playstation Move platform will be mildly more successful than Natal, which is to say that it will also be a failure. I don’t see anything in their initial slate of games that makes me even want to try it out. All that being said, the PS3 will continue to gain ground against the Xbox 360, though not so much that it will overtake the other console.
  • While I’m at it, I might as well go out on a limb and say that the Wii will clobber both the PS3 and the Xbox 360. As of right now, their year in games seems relatively tame, so I don’t see the Wii producing favorable year over year numbers (especially since I don’t think they’ll be able to replicate the success of New Super Mario Brothers Wii, which is selling obscenely well, even to this day). The one wildcard on the Wii right now is the Vitality Sensor. If Nintendo is able to put out the right software for that and if they’re able to market it well, it could be a massive, audience-shifting blue ocean win for them. Coming up with a good “relaxation” game and marketing it to the proper audience is one hell of a challenge though. On the other hand, if anyone can pull that off, it’s Nintendo.
  • Sony will also release some sort of 3D gaming and movie functionality for the home. It will also be a failure. In general, I think attitudes towards 3D are declining. I think it will take a high profile failure to really temper Hollywood’s enthusiasm (and even then, the “3D bump” of sales seems to outweigh the risk in most cases). Nevertheless, I don’t think 3D is here to stay. The next major 3D revolution will be when it becomes possible to do it without glasses (which, at that point, might be a completely different technology like holograms or something).
  • At first, I was going to predict that Hollywood would see a dip in ticket sales, until I realized that Avatar was mostly a 2010 phenomenon, and that Alice in Wonderland has already made about $1 billion worldwide. Furthermore, this summer sees the release of The Twilight Saga: Eclipse, which could reach similar heights (for reference, New Moon did $700 million worldwide), and the next Harry Potter is coming in November (for reference, the last Potter film did around $930 million). Altogether, the film world seems to be doing well… in terms of sales. I have to say that from my perspective, things are not looking especially good when it comes to quality. I’m not even that interested in seeing a lot of the movies released so far this year (an informal look at my past few years indicates that I’ve normally seen about twice as many movies as I have this year – though part of that is due to the move of the Philly film fest to October).
  • I suppose I should also make some Apple predictions. The iPhone will continue to grow at a fast rate, though its growth will be tempered by Android phones. Right now, both of them are eviscerating the rest of the phone market. Once that is complete, we’ll be left with a few relatively equal players, and I think that will lead to good options for us consumers. The iPhone has been taken to task more and more for Apple’s control-freakism, but it’s interesting that Android’s open features are going to present more and more of a challenge to that as time goes on. Most recently, Google announced that the latest version of Android would feature the ability for your 3G/4G phone to act as a WiFi hotspot, which will most likely force Apple to do the same (apparently if you want to do this today, you have to jailbreak your iPhone). I don’t think this spells the end of the iPhone anytime soon, but it does mean that they have some legitimate competition (and that competition is already challenging Apple with its feature-set, which is promising).
  • The iPad will continue to have modest success. Apple may be able to convert that to a huge success if they are able to bring down the price and iron out some of the software kinks (like multi-tasking, etc… something we already know is coming). The iPad has the potential to destroy the netbook market. Again, the biggest obstacle at this point is the price.
  • The Republicans will win more seats in the 2010 elections than the Democrats. I haven’t looked closely enough at the numbers to say whether or not they could take back either house of Congress (or both), but they will gain ground. This is not a statement of political preference either way for me, and my reasons for making this prediction are less about ideology than simple voter disenchantment. People aren’t happy with the government and that will manifest as votes against the incumbents. It’s too far away from the 2012 elections to be sure, but I suspect Obama will hang on, if for no other reason than that he seems to be charismatic enough that people give him a pass on various mistakes or other bad news.

And I think that’s good enough for now. In other news, I have started a couple of posts that are significantly more substantial than what I’ve been posting lately. Unfortunately, they’re taking a while to produce, but at least there’s some interesting stuff in the works.

Remix Culture and Soviet Montage Theory

A video mashup of The Beastie Boys’ popular and amusing Sabotage video with scenes from Battlestar Galactica has been making the rounds recently. It’s well done, but a little on the disposable side of remix culture. The video led Sonny Bunch to question “remix culture”:

It’s quite good. But, ultimately, what’s the point?

Leaving aside the questions of copyright and the rest: Seriously…what’s the point? Does this add anything to the culture? I won’t dispute that there’s some technical prowess in creating this mashup. But so what? What does it add to our understanding of the world, or our grasp of the problems that surround us? Anything? Nothing? Is it just “there” for us to have a chuckle with and move on? Is this the future of our entertainment?

These are good questions, and I’m not surprised that the BSG Sabotage video prompted them. The implication of Sonny’s post is that he thinks the video is an unoriginal waste of talent (he may be playing a bit of devil’s advocate here, but I’m willing to play along because these are interesting questions and because it gives me a chance to pedantically lecture about film history later in this post!). In the comments, Julian Sanchez makes a good point (based on a video he produced earlier that was referenced by someone else in the comment thread), which I’ll expand on later in this post:

First, the argument I’m making in that video is precisely that exclusive focus on the originality of the contribution misses the value in the activity itself. The vast majority of individual and collective cultural creation practiced by ordinary people is minimally “original” and unlikely to yield any final product of wide appeal or enduring value. I’m thinking of, e.g., people singing karaoke, playing in a garage band, drawing, building models, making silly YouTube videos, improvising freestyle poetry, whatever. What I’m positing is that there’s an intrinsic value to having a culture where people don’t simply get together to consume professionally produced songs and movies, but also routinely participate in cultural creation. And the value of that kind of cultural practice doesn’t depend on the stuff they create being particularly awe-inspiring.

To which Sonny responds:

I’m actually entirely with you on the skill that it takes to produce a video like the Brooklyn hipsters did — I have no talent for lighting, camera movements, etc. I know how hard it is to edit together something like that, let alone shoot it in an aesthetically pleasing manner. That’s one of the reasons I find the final product so depressing, however: An impressive amount of skill and talent has gone into creating something that is not just unoriginal but, in a way, anti-original. These are people who are so devoid of originality that they define themselves not only by copying a video that they’ve seen before but by copying the very personalities of characters that they’ve seen before.

Another good point, but I think Sonny is missing something here. The talents of the BSG Sabotage editor or the Brooklyn hipsters are certainly admirable, but while we can speculate, we don’t necessarily know their motivations. About 10 years ago, a friend and amateur filmmaker showed me a video one of his friends had produced. It took scenes from Star Wars and Star Trek II: The Wrath of Khan and recut them so it looked like the Millennium Falcon was fighting the Enterprise. It would show Han Solo shooting, then cut to the Enterprise being hit. Shatner would exclaim “Fire!” and then it would cut to a blast hitting the Millennium Falcon. And so on. Another video from the same guy took the musical number George Lucas had added to Return of the Jedi in the Special Edition, laid Wu-Tang Clan in as the soundtrack, then re-edited the video elements so everything matched up.

These videos sound fun, but not particularly original or even special in this day and age. However, these videos were made ten to fifteen years ago. I was watching them on a VHS(!) and the person making the edits was using analog techniques and equipment. It turns out that these videos were how he honed his craft before he officially got a job as an editor in Hollywood. I’m sure there were tons of other videos, probably much less impressive, that he had created before the ones I’m referencing. Now, I’m not saying that the BSG Sabotage editor or the Brooklyn Hipsters are angling for professional filmmaking jobs, but it’s quite possible that they are at least exploring their own possibilities. I would also bet that these people have been making videos like this (though probably much less sophisticated) since they were kids. The only big difference now is that technology has enabled them to make a slicker experience and, more importantly, to distribute it widely.

It’s also worth noting that this sort of thing is not without historical precedent. Indeed, the history of editing and montage is filled with it. In the 1910s and 1920s, Russian filmmaker Lev Kuleshov conducted a series of famous experiments that helped demonstrate the role of editing in film. In these experiments, he would show a man with an expressionless face, then cut to various other shots. In one example, he showed the expressionless face, then cut to a bowl of soup. When prompted, audiences reported that the man was hungry. Kuleshov then took the exact same footage of the expressionless face and cut to a pretty girl. This time, audiences reported that the man was in love. Another experiment alternated between the expressionless face and a coffin, a juxtaposition that led audiences to believe that the man was stricken with grief. This phenomenon has become known as the Kuleshov Effect.

For the current discussion, one notable aspect of these experiments is that Kuleshov was working entirely from pre-existing material. And this sort of thing was not uncommon, either. At the time, there was a shortage of raw film stock in Russia. Filmmakers had to make do with what they had, and often spent their time re-cutting existing material, which led to what’s now called Soviet Montage Theory. When D.W. Griffith’s Intolerance, which used advanced editing techniques (it featured a series of cross-cut narratives that eventually converged in the last reel), opened in Russia in 1919, it quickly became very popular. The Russian film community saw this as a validation and popularization of their theories and also as an opportunity. Russian critics and filmmakers were impressed by the film’s technical qualities, but dismissed the story as “bourgeois”, claiming that it failed to resolve issues of class conflict, and so on. So, not having much raw film stock of their own, they took to playing with Griffith’s film, re-editing certain sections to make it more “agitational” and revolutionary.

The extent to which this happened is a bit unclear, and certainly public exhibitions were not as dramatically altered as I’m making it out to be. However, there are Soviet versions of the movie that contained small edits and a newly filmed prologue. This was done to “sharpen the class conflict” and “anti-exploitation” aspects of the film, while still attempting to respect the author’s original intentions. This was part of a larger trend of adding Soviet propaganda to pre-existing works of art, and given the ideals of socialism, it makes sense. (The preceding is a simplification of history, of course… see this chapter from Inside the Film Factory for a more detailed discussion of Intolerance and its impact on Russian cinema.) In the Russian film world, things really began to take off with Sergei Eisenstein and films like Battleship Potemkin. Watch that film today, and you’ll be struck by how modern the editing feels, especially during the famous Odessa Steps sequence (which you’ll also recognize if you’ve ever seen Brian De Palma’s “homage” in The Untouchables).

Now, I’m not really suggesting that the woman who produced BSG Sabotage is going to be the next Eisenstein, merely that the act of cutting together pre-existing footage is not necessarily a sad waste of talent. I’ve drastically simplified the history of Soviet Montage Theory above, but there are parallels between Soviet filmmakers then and YouTube videomakers today. Due to limited resources and knowledge, they began experimenting with pre-existing footage. They learned from the experience and went on to grander modifications of larger works of art (Griffith’s Intolerance). This eventually culminated in original works of art, like those produced by Eisenstein.

Now, YouTube videomakers haven’t quite made that expressive leap yet, but it’s only been a few years. It’s going to take time, and obviously editing and montage are already well established features of film, so innovation won’t necessarily come from that direction. But that doesn’t mean that nothing of value can emerge from this sort of thing, nor does messing around with videos on YouTube limit these young artists to film. While Roger Ebert’s criticisms are valid, more and more I’m seeing interactivity as the unexplored territory of art. Video games like Heavy Rain are an interesting experience and hint at something along these lines, but they are still severely limited in many ways (in other words, Ebert is probably right when it comes to that game). It will take a lot of experimentation to get to a point where maybe Ebert would be wrong (if it’s even possible at all). Learning about the visual medium of film by editing together videos of pre-existing material would be an essential step in that process. Improving the technology with which to do so is also an important step. And so on.

To return back to the BSG Sabotage video for a moment, I think that it’s worth noting the origins of that video. The video is clearly having fun by juxtaposing different genres and mediums (it is by no means the best or even a great example of this sort of thing, but it’s still there. For a better example of something built entirely from pre-existing works, see Shining.). Battlestar Galactica was a popular science fiction series, beloved by many, and this video comments on the series slightly by setting the whole thing to an unconventional music choice (though given the recent Star Trek reboot’s use of the same song, I have to wonder what the deal is with SF and Sabotage). Ironically, even the “original” Beastie Boys video was nothing more than a pastiche of 70s cop television shows. While I’m no expert, the music on Ill Communication, in general, has a very 70s feel to it. I suppose you could say that association only exists because of the Sabotage video itself, but even other songs on that album have that feel – for one example, take Sabrosa. Indeed, the Beastie Boys are themselves known for this sort of appropriation of pre-existing work. Their album Paul’s Boutique infamously contains literally hundreds of samples and remixes of popular music. I’m not sure how they got away with some of that stuff, but I suppose this happened before getting sued for sampling was common. Nowadays, in order to get away with something like Paul’s Boutique, you’ll need to have deep pockets, which sorta defeats the purpose of using a sample in the first place. After all, samples are used in the absence of resources, not just because of a lack of originality (though I guess that’s part of it). In 2004 Nate Harrison put together this exceptional video explaining how a 6 second drum beat (known as the Amen Break) exploded into its own sub-culture:

There is certainly some repetition here, and maybe some lack of originality, but I don’t find this sort of thing “sad”. To be honest, I’ve never been a big fan of hip hop music, but I can’t deny the impact it’s had on our culture and all of our music. As I write this post, I’m listening to Danger Mouse’s The Grey Album:

It uses an a cappella version of rapper Jay-Z’s The Black Album and couples it with instrumentals created from a multitude of unauthorized samples from The Beatles’ LP The Beatles (more commonly known as The White Album). The Grey Album gained notoriety due to the response by EMI in attempting to halt its distribution.

I’m not familiar with Jay-Z’s album and I’m probably less familiar with The White Album than I should be, but I have to admit that the artistry with which the two seemingly incompatible works are combined into one cohesive whole is impressive. Despite the lack of an official release (one that would have made Danger Mouse money), The Grey Album made many best of the year (and best of the decade) lists. I see some parallels between the 1980s and 1990s use of samples, remixes, and mashups, and what was happening in Russian film in the 1910s and 1920s. There is a pattern worth noticing here: new technology enables artists to play with existing art, and they then apply what they’ve learned to something more original later. Again, I don’t think that the BSG Sabotage video is particularly groundbreaking, but that doesn’t mean that the entire remix culture is worthless. I’m willing to bet that remix culture will eventually contribute towards something much more original than BSG Sabotage.

Incidentally, the director of the original Beastie Boys Sabotage video? Spike Jonze, who would go on to direct movies like Being John Malkovich, Adaptation., and Where the Wild Things Are. I think we’ll see some parallels between the oft-maligned music video directors, who started to emerge in the film world in the 1990s, and YouTube videomakers. At some point in the near future, we’re going to see film directors coming from the world of short-form internet videos. Will this be a good thing? I’m sure there are lots of people who hate the music video aesthetic in film, but it’s hard to really be that upset that people like David Fincher and Spike Jonze are making movies these days. I doubt YouTubers will have a more popular style, and I don’t think they’ll be dominant or anything, but I think they will arrive. Or maybe YouTube videomakers will branch out into some other medium or create something entirely new (as I mentioned earlier, there’s a lot of room for innovation in the interactive realm). In all honesty, I don’t really know where remix culture is going, but maybe that’s why I like it. I’m looking forward to seeing where it leads.

Interrupts and Context Switching

To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (say, 5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc… When you get into the guts of a computer and start looking at how they work, it seems amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform millions of these operations per second, so it still feels fast. The processor is performing these operations in a serial fashion – basically a single-file line of operations.
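
To make that a little more concrete, here’s a minimal Python sketch (my own illustration, not how any particular processor is actually wired) of addition built out of nothing but bit-level operations – ANDs, XORs, and shifts – which is roughly the kind of shuffling a chain of adder circuits does:

    # Addition from bit operations alone (non-negative integers).
    def add(a, b):
        while b != 0:
            carry = a & b        # bits that will carry into the next column
            a = a ^ b            # add each column, ignoring carries
            b = carry << 1       # shift the carries one column to the left
        return a

    print(add(19, 23))  # 42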

This single-file line can be quite inefficient, and there are times when you want a computer to be processing many different things at once rather than one thing at a time. For example, most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself. When a program needs some data, it may have to read that data from the hard drive first. This may only take a few milliseconds, but the CPU would be idle during that time – quite inefficient. To improve efficiency, computers use multitasking. A CPU can still only be running one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called Context Switching. Ironically, the act of context switching adds a fair amount of overhead to the computing process. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load the state of the CPU from memory. Fortunately, this overhead is usually outweighed by the efficiency gained from keeping the CPU busy instead of letting it sit idle.
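
As a rough illustration of the idea (and it is only an analogy – a real operating system saves registers, the program counter, and so on), here’s a toy round-robin scheduler in Python. Each generator stands in for a process; every yield is a context-switch point where the task’s state gets saved so another task can run. The task names are made up:

    from collections import deque

    def task(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                      # give up the CPU; local state is preserved

    def scheduler(tasks):
        queue = deque(tasks)
        while queue:
            current = queue.popleft()
            try:
                next(current)          # "restore" the task and run one slice
                queue.append(current)  # back of the line until its next turn
            except StopIteration:
                pass                   # task finished; drop it from the queue

    scheduler([task("A", 3), task("B", 2)])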

If you can do context switches frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Signaling the CPU to do a context switch is often accomplished with a signal called an Interrupt. For the most part, the computers we’re all using are interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.
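
Here’s a loose analogy for interrupt-driven switching, again just a sketch: a timer asks the operating system to interrupt a long-running loop after one second, and a handler runs right in the middle of the “process” (this uses SIGALRM, so it only works on Unix-like systems):

    import signal

    def on_interrupt(signum, frame):
        # A real OS would use this moment to context-switch to other work.
        print("interrupt received mid-loop")

    signal.signal(signal.SIGALRM, on_interrupt)
    signal.alarm(1)                  # request an interrupt in one second

    total = 0
    for i in range(50_000_000):      # the "currently running process"
        total += i
    print("work finished, total =", total)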

This might sound tedious to us, but computers are excellent at this sort of processing. They will do millions of operations per second, and generally have no problem switching from one program to the other and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing – we can’t change the speed of light or the size of atoms, or get around a number of other physical constraints, so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be to figure out how to use parallel computing as well as we now use serial computing. Hence all the talk about multi-core processing (most commonly 2 or 4 cores).
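
For a small taste of what using multiple cores looks like in practice, here’s a Python sketch that spreads some work across a pool of worker processes. The prime-counting task and the pool size of 4 are arbitrary choices for illustration:

    from multiprocessing import Pool

    def count_primes(limit):
        # Naive trial-division count of primes below `limit`.
        return sum(1 for n in range(2, limit)
                   if all(n % d for d in range(2, int(n ** 0.5) + 1)))

    if __name__ == "__main__":
        limits = [10_000, 20_000, 30_000, 40_000]
        with Pool(processes=4) as pool:            # one worker per chunk of work
            counts = pool.map(count_primes, limits)
        print(dict(zip(limits, counts)))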

Parallel computing can, in principle, do many things that are far beyond our current technological capabilities. For a perfect example of this, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things which are far beyond the abilities of the biggest and most complex computers in existence. The reason for that is that there are truly massive numbers of neurons in our brain, and they’re all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it’s not so much the number of neurons we have as how they’re organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn’t mean they’re proportionally more intelligent. An elephant’s brain is much larger than a human’s brain, but elephants are obviously much less intelligent than humans.

Of course, we know very little about the details of how our brains work (and I’m not an expert), but it seems clear that brain size or neuron count are not as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are “digital” in that if you were to take a snapshot of the brain at a given instant, each neuron would be either “on” or “off” (i.e. a 1 or a 0). However, neurons don’t work like digital electronics. When a neuron fires, it doesn’t just turn on, it pulses. What’s more, each neuron is accepting input from and providing output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
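
A crude way to picture that (a deliberately simplified model, not a claim about how real neurons behave): inputs arrive over many weighted connections, some excitatory and some inhibitory, and the neuron fires only if the weighted sum crosses a threshold. The numbers below are invented for illustration:

    def neuron(inputs, weights, threshold=1.0):
        # Weighted sum of incoming signals; fire (1) if it crosses the threshold.
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    inputs  = [1, 0, 1, 1]             # which upstream neurons are firing
    weights = [0.6, 0.9, -0.3, 0.8]    # some connections excite, some inhibit
    print(neuron(inputs, weights))     # fires: 0.6 - 0.3 + 0.8 = 1.1 >= 1.0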

This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable – things humans cherish and that computers can’t really do on their own.

However, this all comes with its own set of tradeoffs; the most relevant for this post is that humans aren’t particularly good at context switching. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious – heart pumping, breathing, processing sensory input, etc… Those are also things that we never really stop doing (while we’re alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).

In a computer, everything is happening in serial and thus it is easy to predict how various inputs will impact the system. What’s more, when a computer stores its CPU’s current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, doing this sort of context switching is much more difficult. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don’t necessarily have a more effective memory system; they’re just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so doing a context switch introduces a lot of thrash in what you were originally doing, because you will have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time doing). When you’re working on something specific, you’re dedicating a significant portion of your conscious brainpower towards that task. In other words, you’re probably engaging millions if not billions of neurons in the task. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. In a computer, you only need to ensure that the current state of a single CPU is saved. Your brain, on the other hand, has a much tougher job, and its memory isn’t quite as reliable as a computer’s memory. I like to refer to this as mental inertia. This sort of issue manifests itself in many different ways.

One thing I’ve found is that while it can be very difficult to get started on a project, once I get going, it becomes much easier to remain focused and get a lot accomplished. Getting started is the hard part, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky – it’s called Fire and Motion. A quick excerpt:

Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I’ve got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don’t realize that it’s already 7:30 pm.

Somewhere between step 8 and step 9 there seems to be a bug, because I can’t always make it across that chasm. For me, just getting started is the only hard thing. An object at rest tends to remain at rest. There’s something incredible heavy in my brain that is extremely hard to get up to speed, but once it’s rolling at full speed, it takes no effort to keep it going.

I’ve found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as “flow” or being “in the zone.” This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.

From my own personal experience a couple of years ago during a particularly demanding project, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hopes that they will be able to get more done because no one else is around (and they complain when other people do show up that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.

A key component of flow is finding a large, uninterrupted chunk of time in which to work. That’s also something that can be difficult to do at a lot of workplaces, including mine. It’s a 24/7 company, and the nature of our business requires frequent interruptions, so many of us are in a near constant state of context switching. Between phone calls, emails, and instant messaging, we’re sure to be interrupted many times an hour if we’re constantly keeping up with them. What’s more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have a large number of meetings on our calendars, which only makes it more difficult to concentrate on something important.

Tell me if this sounds familiar: You wake up early and during your morning routine, you plan out what you need to get done at work today. Let’s say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails and by the end of the day, you’ve accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.

Another example: if it’s 2:40 pm and I know I have a meeting at 3 pm, should I start working on a task I know will take me 3 solid hours to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I’ll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are that when I get back to my desk to get started again, I’m going to have to refamiliarize myself with the project and what I had already done before proceeding.

Of course, none of what I’m saying here is especially new, but in today’s world it can be useful to remind ourselves that we don’t need to always be connected or constantly monitoring emails, RSS, facebook, twitter, etc… Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It’s funny, when you look at a lot of attempts to increase productivity, efforts tend to focus on managing time. While important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).

(Note: As long and ponderous as this post is, it’s actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored towards the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don’t have that much time at work dedicated to blogging on our intranet), I’ve decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that could be repurposed in my professional life, and vice versa.)