
Computers & Internet
Sunday, September 15, 2013

The Myth of Digital Distribution
The movie lover's dream is a subscription service that gives us a comprehensive selection of movies to stream. The idea is easy to conceive, and it's so alluring that it makes people want to eschew tried-and-true distribution methods like DVDs and Blu-Ray. We've all heard the arguments before: physical media is dead, streaming is the future. When I made the move to Blu-Ray about 6 years ago, I estimated that it would take at least 10 years for a comprehensive streaming service to become feasible. The more I see, the more I think I drastically underestimated that timeline... and I'm beginning to feel like it might never happen at all.

MGK illustrates the problem well with this example:
this is the point where someone says "but we're all going digital instead" and I get irritated by this because digital is hardly an answer. First off, renting films - and when you "buy" digital movies, that's what you're doing almost every single time - is not the same as buying them. Second, digital delivery is getting more and more sporadic as rights get more and more expensive for distributors to purchase.

As an example, take Wimbledon, a charming little 2004 sports film/romcom starring Paul Bettany and Kirsten Dunst. I am not saying Wimbledon is an unsung treasure or anything; it's a lesser offering from the Working Title factory that cranks out chipper British romcoms, a solid B-grade movie: well-written with a few flashes of inspiration, good performances all around (including a younger Nikolaj Coster-Waldau before he became the Kingslayer) and mostly funny, although Jon Favreau's character is just annoying. But it's fun, and it's less than a decade old. It should be relatively easy to catch digitally, right? But no. It's not anywhere. And there are tons of Wimbledons out there.
Situations like this are an all too common occurrence, and not just with movies. It turns out that content owners can't be bothered with a title unless it's either new or in the public domain. This graph from a Rebecca Rosen article nicely illustrates the black hole that our extended copyright regime creates:
Books available by decade
Rosen explains:
[The graph] reveals, shockingly, that there are substantially more new editions available of books from the 1910s than from the 2000s. Editions of books that fall under copyright are available in about the same quantities as those from the first half of the 19th century. Publishers are simply not publishing copyrighted titles unless they are very recent.

The books that are the worst affected by this are those from pretty recent decades, such as the 80s and 90s, for which there is presumably the largest gap between what would satisfy some abstract notion of people's interest and what is actually available.
More interpretation:
This is not a gently sloping downward curve! Publishers seem unwilling to sell their books on Amazon for more than a few years after their initial publication. The data suggest that publishing business models make books disappear fairly shortly after their publication and long before they are scheduled to fall into the public domain. Copyright law then deters their reappearance as long as they are owned. On the left side of the graph before 1920, the decline presents a more gentle time-sensitive downward sloping curve.
This is absolutely absurd, though it's worth noting that the graph doesn't control for used books (which are generally pretty easy to find on Amazon), and that while content owners aren't rushing to digitize their back catalogs, future generations theoretically won't experience the gap we're seeing with the 80s and 90s. Actually, I suspect they will still have trouble with 80s and 90s content, but anything published today gets put on digital/streaming services, so stuff from 2010 should theoretically be available on an indefinite basis.

Of course, intellectual property law being what it is, I'm sure that new proprietary formats and readers will render old digital copies obsolete, and once again, consumers will be hard-pressed to see that 15-year-old movie or book ported to the latest-and-greatest channel. It's a weird and ironic state of affairs when the content owners are so greedy in hoarding and protecting their works, yet so unwilling to actually, you know, profit from them.

I don't know what the solution is here. There have been some interesting ideas about having copyright expire for books that have been out of print for a certain period of time (say, 5-10 years), but that would only work now - again, future generations will theoretically have those digital versions available. They may be in a near-obsolete format, but they're available! It doesn't seem likely that sensible copyright reform could be passed, and while it would be nice to take a page from the open source playbook, I seriously doubt that content owners would ever be that forward thinking.

As MGK noted, DVD ushered in an era of amazing availability, but much of that stuff has gone out of print, and we somehow appear to be regressing from that.
Posted by Mark on September 15, 2013 at 06:03 PM .: Comments (3) | link :.


End of This Day's Posts

Wednesday, July 31, 2013

Serendipity (Again)
Every so often, someone posts an article like Connor Simpson's The Lost Art of the Random Find and everyone loses their shit, bemoaning the decline of big-box video, book and music stores (of course, it wasn't that long ago when similar folks were bemoaning the rise of big-box video, book and music stores for largely the same reasons, but I digress) and what that means for serendipity. This mostly leads to whining about the internet, like so:
...going to a real store and buying something because it caught your eye, not because some algorithm told you you'd like it — is slowly disappearing because of the Internet...

...there is nothing left to "discover," because the Internet already knows all. If you "find" a new band, it's likely on a blog that millions of other people read daily. If you "find" a new movie, like the somehow-growing-in-popularity Sharknado, it's because you read one of the millions of blogs that paid far too much attention to a movie that, in the old days, would have gone straight into a straight-to-DVD bargain bin.
I've got news for you: you weren't "discovering" anything back in the day either. It probably felt like you were, but you weren't. The internet is just allowing you to easily find and connect with all your fellow travelers. Occasionally something goes viral, but so what? Yeah, sometimes it sucks when a funny joke gets overtold, but hey, that's life and it happens all the time. Simpson mentions Sharknado as if it came out of nowhere. The truth of the matter is that Sharknado is the culmination of decades of crappy cult SciFi (now SyFy) movies. Don't believe me? This was written in 2006:
Nothing makes me happier when I'm flipping through the channels on a rainy Saturday afternoon than stumbling upon whatever god-awful original home-grown suckfest-and-craptasm movie is playing on the Sci-Fi Channel. Nowhere else can you find such a clusterfuck of horrible plot contrivances and ill-conceived premises careening face-first into a brick wall of one-dimensional cardboard characters and banal, inane, poorly-delivered dialogue. While most television stations and movie production houses out there are attempting to retain some shred of dignity or at least a modicum of credibility, it's nice to know that the Sci-Fi Channel has no qualms whatsoever about brazenly showing twenty-minute-long fight scenes involving computer-generated dinosaurs, dragons, insects, aliens, sea monsters and Gary Busey all shooting laser beams at each other and battling for control of a planet-destroying starship as the self-destruct mechanism slowly ticks down and the fate of a thousand parallel universes hangs in the balance. You really have to give the execs at Sci-Fi credit for basically just throwing their hands up in the air and saying, "well let's just take all this crazy shit and mash it together into one giant ridiculous mess". Nothing is off-limits for those folks; if you want to see American troops in Iraq battle a giant man-eating Chimaera, you've got it. A genetically-altered Orca Whale that eats seamen and icebergs? Check. A plane full of mutated pissed-off killer bees carrying the Hanta Virus? Check. They pull out all the stops to cater to their target audience, who are pretty much so desensitized to bad science-fiction that no plot could be too over-the-top to satiate their need for giant monsters that eat people and faster-than-light spaceships shaped like the Sphinx.
And as a longtime viewer of the SciFi/SyFy network since near its inception, I can tell you that this sort of love/hate has been going on for decades. That the normals finally saw the light/darkness with Sharknado was inevitable. But it will be short-lived. At least, until SyFy picks up my script for Crocoroid Versus Jellyfish.

It's always difficult for me to take arguments like this seriously. Look, analog serendipity (browsing the stacks, digging through crates, blind buying records at a store, etc...) obviously has value and yes, opportunities to do so have lessened somewhat in recent years. And yeah, it sucks. I get it. But while finding stuff serendipitously on the internet is a different experience, it's certainly possible. Do these people even use the internet? Haven't they ever been on TV Tropes?

It turns out that I've written about this before, during another serendipity flareup back in 2006. In that post, I reference Steven Johnson's response, which is right on:
I find these arguments completely infuriating. Do these people actually use the web? I find vastly more weird, unplanned stuff online than I ever did browsing the stacks as a grad student. Browsing the stacks is one of the most overrated and abused examples in the canon of things-we-used-to-do-that-were-so-much-better. (I love the whole idea of pulling down a book because you like the "binding.") Thanks to the connective nature of hypertext, and the blogosphere's exploratory hunger for finding new stuff, the web is the greatest serendipity engine in the history of culture. It is far, far easier to sit down in front of your browser and stumble across something completely brilliant but surprising than it is walking through a library looking at the spines of books.
This whole thing basically amounts to a signal versus noise problem. Serendipity is finding signal by accident, and it happens all the damn time on the internet. Simpson comments:
...the fall of brick-and-mortar and big-box video, book and music stores has pushed most of our consumption habits to iTunes, Amazon and Netflix. Sure, that's convenient. But it also limits our curiosity.
If the internet limits your curiosity, you're doing it wrong. Though if your conception of the internet is limited to iTunes, Amazon, and Netflix, I guess I can see why you'd be a little disillusioned. Believe it or not, there is more internet out there.

As I was writing this post, I listened to a few songs on Digital Mumbles (hiatus over!) as well as Dynamite Hemorrhage. Right now, I'm listening to a song Mumbles describes as "something to fly a mech to." Do I love it? Not really! But it's a damn sight better than, oh, just about every time I blind bought a CD in my life (which, granted, wasn't that often, but still). I will tell you this: nothing I've listened to tonight would have been something I picked up in a record store, or on iTunes for that matter. Of course, I suck at music, so take this all with a grain of salt, but still.

In the end, I get the anxiety around the decline of analog serendipity. Really, I do. I've had plenty of those pleasant experiences myself, and there is something sad about how virtual the world is becoming. Indeed, one of the things I really love about obsessing over beer is aimlessly wandering the aisles and picking up beers based on superficial things like labels or fancy packaging (or playing Belgian Beer Roulette). Beer has the advantage of being purely physical, so it will always involve a meatspace transaction. Books, movies, and music are less fortunate, I suppose. But none of this means that the internet is ruining everything. It's just different. I suppose those differences will turn some people off, but stores are still around, and I doubt they'll completely disappear anytime soon.

In Neal Stephenson's The System of the World, the character Daniel Waterhouse ponders how new systems supplant older systems:
"It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently ... have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. ... And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher's Stone." (page 639)
In this Slashdot interview, Stephenson applies the same "surround and encapsulate" concept to the literary world. And so perhaps the internet will surround and encapsulate, but never destroy, serendipitous analog discovery. (hat tip to the Hedonist Jive twitter feed)
Posted by Mark on July 31, 2013 at 10:43 PM .: Comments (2) | link :.


End of This Day's Posts

Wednesday, May 29, 2013

The Irony of Copyright Protection
In Copyright Protection That Serves to Destroy, Terry Teachout lays out some of the fundamental issues surrounding the preservation of art, in particular focusing on recorded sound:
Nowadays most people understand the historical significance of recorded sound, and libraries around the world are preserving as much of it as possible. But recording technology has evolved much faster than did printing technology—so fast, in fact, that librarians can't keep up with it. It's hard enough to preserve a wax cylinder originally cut in 1900, but how do you preserve an MP3 file? Might it fade over time? And will anybody still know how to play it a quarter-century from now? If you're old enough to remember floppy disks, you'll get the point at once: A record, unlike a book, is only as durable as our ability to play it back.
Digital preservation is already a big problem for current librarians, and not just because of the mammoth amounts of digital data being produced. Just from a simple technological perspective, there are many non-trivial challenges. Even if storage media and reading mechanisms remain compatible over the next century, ensuring that the devices themselves remain usable that far into the future is no small feat. Take hard drives. A lot of film and audio (and, I suppose, books these days too) are being archived on hard drives. But you can't just take a hard drive, stick it on a shelf somewhere, and fire it up in 30 years. Nor should you keep it spinning for 30 years. It requires use, but not constant use. And even then you'll need to ensure redundancy, because hard drives fail.

Just in writing that, you can see the problem. Hard drives clearly aren't the solution. Too many modes of failure there. We need something more permanent. Which means something completely new... and thus something that will make hard drives (and our ability to read them) obsolete.

And those are just the technological hurdles. They're nontrivial, but I'm confident that technology will rise to the challenge. However, once you start getting into the absolutely bonkers realm of intellectual property law, things get stupid really fast. Where technology rises to meet the challenge, IP owners and lawmakers seem to be engaged in an ever-escalating race to the bottom of the barrel:
In Europe, sound recordings enter the public domain 50 years after their initial release. Once that happens, anyone can reissue them, which makes it easy for Europeans to purchase classic records of the past. In America, by contrast, sound recordings are "protected" by a prohibitive snarl of federal and state legislation whose effect was summed up in a report issued in 2010 by the National Recording Preservation Board of the Library of Congress: "The effective term of copyright protection for even the oldest U.S. recordings, dating from the late 19th century, will not end until the year 2067 at the earliest.… Thus, a published U.S. sound recording created in 1890 will not enter the public domain until 177 years after its creation, constituting a term of rights protection 82 years longer than that of all other forms of audio visual works made for hire."

Among countless other undesirable things, this means that American record companies that aren't interested in reissuing old records can stop anyone else from doing so, and can also stop libraries from making those same records readily accessible to scholars who want to use them for noncommercial purposes. Even worse, it means that American libraries cannot legally copy records made before 1972 to digital formats for the purpose of preservation...
Sheer insanity. The Library of Congress appears to be on the right side of the issue, suggesting common-sense recommendations for copyright reform... that lawmakers will almost certainly never enact. Still, their "National Recording Preservation Plan" seems like a pretty good idea. It's a pity that almost none of its recommendations will be adopted; the need for copyright reform is blindingly obvious to anyone with a brain, but I don't see it happening anytime soon. It's a sad state of affairs when the only victories we can celebrate in this realm are grassroots opposition to absurd laws like SOPA/PIPA/ACTA.

I don't know the way forward. When you look at the economics of the movie industry, as recently laid out by Steven Soderbergh in a speech that's been making the rounds of late (definitely worth a watch, if you've got a half hour), you start to see why media companies are so protective of their IP. As currently set up, your movie needs to make 120 million dollars, minimum, before you start to actually turn a profit (and that's just the marketing costs - you'd have to add on the budget to get a better idea). That, too, is absurd. I don't envy the position of media companies, but on the other hand, their response to such problems isn't to fix the problem but to stomp their feet petulantly, hold on to copyrighted works for far too long, and antagonize their best customers.

That's the irony of protecting copyright. If you protect it too much, no one actually benefits from it, not even the copyright holders...
Posted by Mark on May 29, 2013 at 10:46 PM .: Comments (0) | link :.


End of This Day's Posts

Wednesday, May 08, 2013

Kindle Updates
I have, for the most part, been very pleased with using my Kindle Touch to read over the past couple years. However, while it got the job done, I felt like there were a lot of missed opportunities, especially when it came to metadata and personal metrics. Well, Amazon just released a new update to their Kindle software, and mixed in with the usual (i.e. boring) updates to features I don't use (like "Whispersync" or Parental Controls), there was this little gem:
The Time To Read feature uses your reading speed to let you know how much time is left before you finish your chapter or before you finish your book. Your specific reading speed is stored only on your Kindle Touch; it is not stored on Amazon servers.
Hot damn, that's exactly what I was asking for! Of course, it's all locked down and you can't really see what your reading speed is (or plot it over time, or by book, etc...), but this is the single most useful update to a device like this that I think I've ever encountered. Indeed, the fact that it tells you how much time until you finish both your chapter and the entire book is extremely useful, and it addresses my initial curmudgeonly complaints about the Kindle's hatred of page numbers and love of percentage.
Time to Read in Action
Will finish this book in about 4 hours!
Measuring book length by time mitigates the issues with page counts by giving you a personalized measurement that is relevant and intuitive. No more futzing with the wild variability of page numbers or Amazon's bizarre Location system; you can just peek at the remaining time, and it's all good.

And I love that they give a time to read for both the current chapter and the entire book. One of the frustrating things about reading an ebook is that you never really know how long it will take to read a chapter. With a physical book, you can easily flip ahead and see where the chapter ends. Now, ebooks have that personalized time, which is perfect.

I haven't spent a lot of time with this new feature, but so far, I love it. I haven't done any formal tracking, but it seems accurate too (if anything, I seem to read a bit faster than it says, but it's close). It even seems to recognize when you've taken a break (though I'm not exactly sure of that). Of course, I would love it if Amazon would allow us access to the actual reading speed data in some way. I mean, I can appreciate their commitment to privacy, and I don't think that needs to change either; I'd just like to be able to see some reports on my actual reading speed. Plot it over time, see how different books impact speed, and so on. Maybe I'm just a data visualization nerd, but think of the graphs! I love this update, but they're still only scratching the surface here. There's a lot more there for the taking. Let's hope we're on our way...
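Amazon keeps the underlying numbers locked away, but the arithmetic behind a feature like this is presumably simple. Here's a minimal sketch of the idea in Python; the smoothing factor and all the names are my own guesses, not anything Amazon has documented:

```python
from datetime import timedelta

def update_speed(old_wpm: float, words_read: int, minutes: float,
                 smoothing: float = 0.3) -> float:
    """Fold the latest reading session into a running words-per-minute
    estimate. The exponential smoothing here is a guess at how a device
    might adapt to your pace (and recover from breaks)."""
    session_wpm = words_read / minutes
    return smoothing * session_wpm + (1 - smoothing) * old_wpm

def time_to_read(words_remaining: int, wpm: float) -> timedelta:
    """Estimate time left in the chapter or book at the current pace."""
    return timedelta(minutes=words_remaining / wpm)

# e.g. 60,000 words left at 250 wpm -> 4:00:00, i.e. "about 4 hours"
print(time_to_read(60_000, 250))
```

If Amazon ever exposed the stored words-per-minute number, plotting it over time or per book would be a one-liner on top of something like this.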
Posted by Mark on May 08, 2013 at 08:42 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, March 17, 2013

Requiem for Google Reader
This past week, Google dropped a bombshell on a certain segment of internet nerdery: they announced they were going to discontinue Google Reader. For the uninitiated, Reader was an RSS agregator - it allowed you to subscribe to the internet, and collected all that content in one place. It was awesome, I use it every day, and Google is going to turn it off on July 1. It shouldn't have been so shocking, but it was. It shouldn't have been so disappointing, but it was. And a big part of this is on me. This post might seem whiny, and I suppose it is, but I am finding this experience interesting (in the Chinese curse sense, but still).
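To make "subscribe to the internet" concrete: the core of an aggregator really is that simple. A minimal sketch using the third-party feedparser library (the feed URLs are just examples, not a real subscription list):

```python
import time
import feedparser  # third-party: pip install feedparser

# Your subscriptions (example URLs)
feeds = [
    "https://kaedrin.com/weblog/index.rdf",
    "https://example.com/feed.xml",
]

# Fetch every feed and merge all entries into one reading list
items = []
for url in feeds:
    for entry in feedparser.parse(url).entries:
        published = entry.get("published_parsed")
        stamp = time.mktime(published) if published else 0
        items.append((stamp, entry.get("title", ""), entry.get("link", "")))

# Newest first: the whole internet in one inbox
for stamp, title, link in sorted(items, reverse=True)[:20]:
    print(title, "-", link)
```

Everything Reader added on top of that - sync, history, sharing, search - is what made it hard to replace.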

It's hard to talk about this without seeming hysterical. This isn't the end of the world, and it's most certainly not the end of Google. All the petitions and talk of tough lessons and quickie websites (though, for serious, I love that gif) and videos... they're really just wishful thinking. It's nice to think that our Google overlords are surprised by the immediate and intense response to what probably seemed like a straightforward business decision, but I don't think they are. Outrage on the internet happens at the speed of twitter and fades even quicker. We'll find alternatives (more about this in a moment), we'll move on, and Google will too. But my view of Google has changed pretty quickly.

Of course, I'm not so naive as to think that Google really gives a crap what I think, but I used to stick up for Google. Their "Don't be Evil" motto was surprisingly effective, and it looked like they walked the walk, too. That's a rare thing, to be sure, but it also molded the perception of Google to be something idealistic, something with an optimistic vision. We're drowning in information, and Google was going to help us deal with that. Their applications felt like public services. The shuttering of Reader, while ultimately not that big of a deal in isolation, rips all that artifice away from Google's image. We caught them being a business, and that just feels like a betrayal. It's completely unfair and naive, but that doesn't make it any less real. It's also selfish, but why should I care?

For the first time in years, I'm looking into alternatives. Google is forcing me to find an alternative to Reader, but if they're going to turn off something that so many people rely on so heavily, shouldn't I look for replacements for all of Google's other services? I'm surprised by how much I use Google services, and while I can't see myself replacing Gmail anytime soon, some of this other stuff might not be so necessary.

Speaking of alternatives, I've played around with a few, and the one I like the most is Feedly. It's not perfect, but then, neither was Reader. The transition was easy and seamless - I logged into Google and provided access to Feedly and boom: my entire set of feeds (and it looks like usage history too) was ported over to the new app. Once Google sunsets Reader, Feedly will transition to their backend, built specifically for this purpose. The interface may take some getting used to, but hey, keyboard shortcuts still work and it's got a much better suite of social sharing and tagging options. I'm a little annoyed by the notion that you need to install some sort of extension to your browser to get it to work, but it still seems like the best option available at the moment. Of course, nothing stops Feedly from acting like douchebags further down the road, but they're not the only alternative either. There are lots of others. Hell, even Digg (yeah, remember them?) is trying to capitalize on this whole thing.

I still don't really understand why Reader was such an anathema to Google. A lot of people have mentioned that they could see this coming for a while, and yeah, I think any user of Reader could tell that it wasn't among Google's favorite applications. It never got as many updates as, say, Maps or Gmail, and while it had some fantastic and innovative community features like sharing and commenting (stuff that you never saw much of when it came to RSS readers), Google completely neutered all that stuff in the name of pointless integration with Google+. Google did a redesign a little while back and, while I certainly can see why they did it and I value consistency, they made Reader harder to use. I mean, the point of this application is to allow you to read stuff - why are you slathering everything in grey and dedicating so much of the screen to unnecessary global navigation? Now, I wasn't a big user of their community features and while I wasn't a fan of the redesign, it was still the best option out there.

Google's stated reason for getting rid of Reader is that usage was down and they feel like they've spread themselves too thin with the number of services they support. I can sympathize with that second part, but the first part is ridiculous. The above-mentioned neutering of community features and the redesign seemed designed to reduce usage of the application - Google wanted their community on G+, which is fair enough, I guess, but then it seems disingenuous to turn around and close the app because usage is down. Or rather, it's not really an explanation at all. It feels like something else is going on here, and it's hard to put my finger on it...

People have speculated that the reason for the shutdown is because they couldn't find a way to monetize it, but that doesn't seem right. At the very least, there were no ads on it, and while people don't particularly enjoy ads, they'd probably like them better than not having Reader at all. I've always considered Google's strategy to be something along the lines of: Increased internet usage in general means that we can serve more ads to more people. Reader certainly accomplished that goal, and it did so for a lot of people. Usage may have been down, but it was still large and drove massive amounts of traffic. Just look at the graph on this Buzzfeed article. It's not at all comprehensive and there are probably a lot of caveats, but I would bet the general thrust is correct - far more people discover content through Reader than they do on G+...

In a more general sense, this development is reopening the debate about RSS and the relevancy of things like Blogs, here in the age of Facebook and Twitter. There are valid concerns about this stuff, especially when it comes to average users of the internet. And I don't mean that as a slight on average users. I know the ins and outs of RSS because I'm a nerd and my profession requires that sort of knowledge. But who wants to sit down and figure this stuff out if you don't have to? People are busy, they have jobs, they have kids, they don't have time to futz with markup languages, and that's not a bad thing at all. Google Reader was a step in the right direction, but Google never really developed that aspect of it (which seems to have faded away) and I get the impression that they have lost faith in RSS as a way to help us all make sense of the morass of information on the internet.

This is a generous interpretation of Google's actions, but I like it better than the cynical explanations about difficulty monetizing Reader or Google's official line about usage. On the other hand, what is Google doing to help us sift through the detritus of the internets? I don't think Google+ is the solution, and Search has its own issues. That's why people like me, looking for efficient ways to aggregate and analyze information, were big users of Reader in the first place. It's why we're so hurt by the decision to shut it down. It would be one thing if usage of Reader was declining because there was a better way to consume content (which, I'm sure, is debatable to some Social evangelists, but that's a topic for another post). Closing Reader now seems premature and baffling.

So Google cut me, they cut me deep. It's partly my own fault; I let my guard down. I'm confident that this malaise will pass and that I'll stop trying to find ways to spite them, but I won't see Google the same way I did before. I'm curious to see how Google moves forward. This isn't the first time they've shuttered an application, but it might be the most widely-used and beloved service they've given the axe... On its face, this move seems as stupid as Netflix's Qwikster debacle. Netflix's solution was easy: they saw the error in their ways and reversed course. The recovery wasn't immediate, but Netflix is doing much better now. Google has a more difficult road ahead. Of course, this decision isn't as breathtakingly stupid as Qwikster, and like I said above, everyone will probably move on in pretty short order. But Google may face an image problem. I don't think just turning Reader back on would do the trick; the damage is already done, and it was never about that one action alone. The damage here is more than the sum of its parts. Can Google repair that? I'm open to the possibility, but it might be a while...
Posted by Mark on March 17, 2013 at 02:04 PM .: Comments (2) | link :.


End of This Day's Posts

Wednesday, February 27, 2013

Recent and Future Podcastery
I have a regular stable of podcasts that generally keep me happy on a weekly basis, but as much as I love all of them, I will sometimes greedily consume them all too quickly, leaving me with nothing. Plus, it's always good to look out for new and interesting stuff. Quite frankly, I've not done a particularly good job keeping up with the general podcasting scene, so here are a few things I caught up with recently (or am planning to listen to in the near future):
  • Idle Thumbs - This is primarily a video game podcast, though there are some interesting satellite projects too. I have to admit that my video game playing time has shrunk considerably in the past year or so, but I still sometimes enjoy listening to this sort of thing. Plus, the Idle Book Club is, well, exactly what it sounds like - a book club podcast, with a book a month. I've not actually listened to much of any of this stuff, but it seems like fertile ground.
  • Firewall & Iceberg Podcast - The podcast from famed television critics Alan Sepinwall and Dan Fienberg. It focuses, not surprisingly, on television shows, which is something that I've been watching more of lately (due to the ability to mainline series on Netflix, etc...) Again, I haven't heard much, but they seem pretty knowledgeable and affable. I suspect this will be one of those shows that I download after I watch a series to see what they have to say about it.
  • Film Pigs Podcast - A movie podcast that's right in my wheelhouse, and a pretty fun one, though I'm not entirely sure how bright its future really is at this point, given that they seem to be permanently missing one member of their normal crew and publish on a bi-monthly schedule. Still, there's some fun stuff here, and I'll probably listen to more of their back catalog when I run out of my regulars...
Speaking of that regular stable, this is what it's currently looking like:
[embedded list of regular podcasts]
There are a few others that I hit up on an inconsistent basis too, but those are the old standbys...
Posted by Mark on February 27, 2013 at 09:43 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, February 10, 2013

Netflix's House of Cards
Last weekend, Netflix debuted their highly anticipated original series House of Cards. Based on an old BBC series, starring Kevin Spacey and directed by David Fincher, the show certainly has an impressive pedigree and has been garnering mostly positive reviews. From what I've watched so far, it doesn't quite reach the heights of my favorite television shows, but it's on the same playing field, which is pretty impressive for original content from an internet-based company that was predicated solely on repackaging and reselling existing content from other sources. It's a good show, but the most interesting things about the series are the meta-discussions surrounding the way it was produced and released.

Much like free music streaming services changed the narrative of that industry, something similar is happening with Netflix... and like the music industry, I don't really know where this will end up. Netflix certainly fell on hard times a couple years ago; after a perfectly understandable price hike and the inexplicable Qwikster debacle, their stock price plummeted from 300+ to around 60. Since then, it's been more or less ping-ponging up and down in the 60-140 range, depending on various business events (earnings reports, etc...) and newly licensed content.

Recently, the stock has been rising rapidly, thanks to new content deals with the likes of Disney and Warner Bros., and now because of House of Cards. Perhaps fed up with wrangling the cost of licensed streaming content (which is rising at a spectacular pace and cutting into Netflix's meager profit margins), Netflix has started to make their own content. Early last year, Netflix launched Lilyhammer to middling reviews and not a lot of fanfare... I have not watched the series (and quite frankly, the previews look like a parody or SNL sketch or something), but it perhaps represented Netflix's dry run for this recent bid for original content. A lot of the interesting things about House of Cards' release were presaged by that previous series.

For instance, there's the decision to release the entire 13-episode first season all at once on day one. Netflix has done a lot of research on their customers' viewing habits, observing that people will often mainline old series (or previous seasons of current series like Mad Men or Breaking Bad), watching entire seasons or even several over the course of a few days or weeks. I've wondered about this sort of thing in the past, because this is the way I prefer to consume content. I can never really get into the rhythm of "destination" television, except in very limited scenarios (the only show I watch on a weekly basis at the time it airs is Game of Thrones, because I like the show and the timeslot fits into my schedule). There are some shows that I look forward to every week, but even those usually get stored away on the DVR until I can watch several at once. So what I'm saying here is that this release of all episodes at once is right up my alley, and I'm apparently not alone.

With no physical shelf space or broadcast schedule to worry about, I suspect this model would also lead to shows actually getting to finish their seasons instead of being canceled after two episodes, which could be an interesting development. On the other hand, what kinds of shows will this produce? Netflix greenlit this series based on a mountain of customer data, not just about how viewers consumed TV series, but also on their response to Kevin Spacey and David Fincher, and probably a hundred other data-points.

And the series does kinda feel like it's built in a lab. Everything about the show is top notch: great actors, high production value, solid writing, all optimized for that binge-watching experience. Is that a good thing? In this case, it seems to be working well enough. But can that sort of data-driven model hold up over time? Of course, that's nothing new in the entertainment industry. Look no further than the whole vampire/zombie resurgence of the past decade or so. But I wonder if Netflix will ever do something that sets the trends, rather than chasing the data.

What does this all mean for the world of streaming? Netflix appears to have stemmed the tide of defecting subscribers, but will they gain new subscribers simply because of their original content? Will this be successful enough for other streaming players to take the same gamble? Will we have Hulu and Amazon series? Will we have to subscribe to 8 different services to keep up with this? Or will Netflix actually license out their original content to the likes of Cable or Network television? Ok, that's probably unlikely, but on the other hand, it could be a big source of revenue and a way to expand their audience.

Will Netflix be able to keep growing thanks to these original content efforts? House of Cards is just the first of several original series being released this year. Will the revived Arrested Development (season 4, coming in May) draw in new subscribers? Or the new Ricky Gervais show? Will any of this allow Netflix to expand their streaming content beyond the laughable movie selection they currently command (seriously, they have a good TV selection, but their movie selection is horrible)? Will we ever get that dream service, a single subscription that will give you access to everything you could ever want to watch? Technologically, this is all possible, but technology won't drive that, and I'm curious if such a thing will ever come to fruition (Netflix or not!) In the meantime, I'm most likely going to finish off House of Cards, which is probably a good thing for Netflix.
Posted by Mark on February 10, 2013 at 02:01 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, January 06, 2013

What's in a Book Length?
I mentioned recently that book length is something that's been bugging me. It seems that we have a somewhat elastic relationship with length when it comes to books. The traditional indicator of book length is, of course, page number... but due to variability in font size, type, spacing, format, media, and margins, the hallowed page number may not be as concrete as we'd like. Ebooks theoretically provide an easier way to maintain a consistent measurement across different books, but it doesn't look like anyone's delivered on that promise. So how are we to know the lengths of our books? Fair warning, this post is about to get pretty darn nerdy, so read on at your own peril.

In terms of page numbers, books can vary wildly. Two books with the same number of pages might be very different in terms of actual length. Let's take two examples: Gravity's Rainbow (784 pages) and Harry Potter and the Goblet of Fire (752 pages). Looking at page number alone, you'd say that Gravity's Rainbow is only slightly longer than Goblet of Fire. With the help of the magical internets, let's take a closer look at the print inside the books (click image for a bigger version):
Pages from Gravitys Rainbow and Harry Potter and the Goblet of Fire
As you can see, there is much more text on the page in Gravity's Rainbow. Harry Potter has a smaller canvas to start with (at least, in terms of height), but larger margins, more line spacing, and I think even a slightly larger font. I don't believe it would be an exaggeration to say that when you take all this into account, the Harry Potter book is probably less than half the length of Gravity's Rainbow. I'd estimate it somewhere on the order of 300-350 pages. And that's even before we get into things like vocabulary and paragraph breaks (which I assume would also serve to inflate Harry Potter's length.) Now, this is an extreme example, but it illustrates the variability of page numbers.
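To put rough numbers on that, here's a quick back-of-the-envelope calculation. The per-page figures are eyeballed and purely illustrative, not measurements of the actual editions:

```python
# Eyeballed, illustrative numbers -- not measurements of either edition
books = {
    "Gravity's Rainbow": {"pages": 784, "lines_per_page": 43, "words_per_line": 12},
    "Goblet of Fire":    {"pages": 752, "lines_per_page": 32, "words_per_line": 9},
}

for title, b in books.items():
    words = b["pages"] * b["lines_per_page"] * b["words_per_line"]
    print(f"{title}: ~{words:,} words")

# ~405,000 vs ~217,000 words with these guesses: nearly identical
# page counts, but only about half the book.
```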

Ebooks present a potential solution. Because Ebooks have different sized screens and even allow the reader to choose font sizes and other display options, page numbers start to seem irrelevant. So Ebook makers devised what's called reflowable documents, which adapt their presentation to the output device. For example, Amazon's Kindle uses an Ebook format that is reflowable. It does not (usually) feature page numbers, instead relying on a percentage indicator and the mysterious "Location" number.

The Location number is meant to be consistent, no matter what formatting options you're using on your ereader of choice. Sounds great, right? Well, the problem is that the Location number is pretty much just as arbitrary as page numbers. It is, of course, more granular than a page number, so you can easily skip to the exact location on multiple devices, but as for what actually constitutes a single "Location Number", that is a little more tricky.

In looking around the internets, it seems there is distressingly little information about what constitutes an actual Location. According to this thread on Amazon, someone claims: "Each location is 128 bytes of data, including formatting and metadata." This rings true to me, but unfortunately, it also means that the Location number is pretty much meaningless.
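If that 128-byte figure is right, you could at least back into a ballpark location count from the file size Amazon lists. A quick sanity check, treating the forum claim strictly as an assumption:

```python
BYTES_PER_LOCATION = 128  # per the forum claim; not an official spec

def estimated_locations(file_size_kb: float) -> int:
    """Estimate total Kindle Locations from ebook file size. Formatting,
    metadata, and embedded images all inflate the file, so this is crude."""
    return round(file_size_kb * 1024 / BYTES_PER_LOCATION)

# A hypothetical 550 KB ebook comes out to roughly 4,400 locations
print(estimated_locations(550))
```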

The elastic relationship we have with book length is something I've always found interesting, but what made me want to write this post was when I wanted to pick a short book to read in early December. I was trying to make my 50 book reading goal, so I wanted something short. In looking through my book queue, I saw Alfred Bester's classic SF novel The Stars My Destination. It's one of those books I consistently see at the top of best SF lists, so it's always been on my radar, and looking at Amazon, I saw that it was only 236 pages long. Score! So I bought the ebook version and fired up my Kindle only to find that in terms of locations, it's the longest book I have on my Kindle (as of right now, I have 48 books on there). This is when I started looking around at Locations and trying to figure out what they meant. As it turns out, while the Location numbers provide a consistent reference within the book, they're not at all consistent across books.

I did a quick spot check of 6 books on my Kindle, looking at total Location numbers, total page numbers (resorting to print version when not estimated by Amazon), and file size of the ebook (in KB). I also added a column for Locations per page number and Locations per KB. This is an admittedly small sample, but what I found is that there is little consistency among any of the numbers. The notion of each Location being 128 bytes of data seems useful at first, especially when you consider that the KB information is readily available, but because that includes formatting and metadata, it's essentially meaningless. And the KB number also includes any media embedded in the book (i.e. illustrations crank up the KB, which distorts any calculations you might want to do with that data).

It turns out that The Stars My Destination will probably end up being relatively short, as the page numbers would imply. There's a fair amount of formatting within the book (which, by the way, doesn't look so hot on the Kindle), and doing spot checks of how many Locations I pass when cycling to the next screen, it appears that this particular ebook is going at a rate of about 12 Locations per cycle, while my previous book was going at a rate of around 5 or 6 per cycle. In other words, while the total Locations for The Stars My Destination were nearly twice what they were for my previously read book, I'm also cycling through Locations at double the rate. Meaning that, basically, this is the same length as my previous book.

Various attempts have been made to convert Location numbers to page numbers, with limited success. This is due to the generally elastic nature of a page, combined with the inconsistent size of Locations. For most books, it seems like dividing the Location numbers by anywhere from 12-16 (the linked post posits dividing by 16.69, but the books I checked mostly ranged from 12-16) will get you a somewhat accurate page number count that is marginally consistent with print editions. Of course, for The Stars My Destination, that won't work at all. For that book, I have to divide by 40.86 to get close to the page number.
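Expressed as a rough heuristic (the divisor is per-book guesswork, per the ranges above, and the sample location count is hypothetical):

```python
def pages_from_locations(total_locations: int, locs_per_page: float = 14.0) -> int:
    """Rough page-count estimate. 12-16 locations per page covered most
    books I spot-checked; The Stars My Destination needed ~40.86."""
    return round(total_locations / locs_per_page)

# A hypothetical 9,000-location book under different divisors:
print(pages_from_locations(9_000, 16.69))  # ~539 pages (linked post's divisor)
print(pages_from_locations(9_000))         # ~643 pages (midpoint of 12-16)
```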

Why is this important at all? Well, there's clearly an issue with ebooks in academia, because citations are so important for that sort of work. Citing a location won't get readers of a paper anywhere close to a page number in a print edition (whereas, even using differing editions, you can usually track down the quote relatively easily if a page number is referenced). On a personal level, I enjoy reading ebooks, but one of the things I miss is the easy and instinctual notion of figuring out how long a book will take to read just by looking at it. Last year, I was shooting for reading quantity, so I wanted to tackle shorter books (this year, I'm trying not to pay attention to length as much and will be tackling a bunch of large, forbidding tomes, but that's a topic for another post)... but there really wasn't an easily accessible way to gauge the length. As we've discovered, both page numbers and Location numbers are inconsistent. In general, the larger the number, the longer the book, but as we've seen, that can be misleading in certain edge cases.

So what is the solution here? Well, we've managed to work with variable page numbers for centuries, so maybe no solution is really needed. A lot of newer ebooks even contain page numbers (despite the variation in display), so if we can find a way to make that more consistent, that might help make things a little better. But the ultimate solution would be to use something like Word Count. That's a number that might not be useful in the midst of reading a book, but if you're really looking to determine the actual length of the book, Word Count appears to be the best available measurement. It would also be quite easily calculated for ebooks. Is it perfect? Probably not, but it's better than page numbers or location numbers.
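Word count really would be trivial to compute, which makes its absence from store listings all the more puzzling. A minimal sketch for a plain-text copy of a book (the file name and the 250-words-per-page rule of thumb are illustrative assumptions, not anything Amazon provides):

```python
def word_count(path: str) -> int:
    """Count whitespace-separated words in a plain-text book file."""
    with open(path, encoding="utf-8") as f:
        return sum(len(line.split()) for line in f)

# Hypothetical file; ~250 words per print page is a common rule of thumb
words = word_count("some_novel.txt")
print(f"{words:,} words, ~{round(words / 250)} page-equivalents")
```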

In the end, I enjoy using my Kindle to read books, but I wish they'd get on the ball with this sort of stuff. If you're still reading this (Kudos to you) and want to read some more babbling about ebooks and where I think they should be going, check out my initial thoughts and my ideas for additional metadata and the gamification of reading. The notion of ereaders really does open up a whole new world of possibilities... it's a shame that Amazon and other ereader companies keep their platforms so locked down and uninteresting. Of course, reading is its own reward, but I really feel like there's a lot more we can be doing with our ereader software and hardware.
Posted by Mark on January 06, 2013 at 08:02 PM .: Comments (4) | link :.


End of This Day's Posts

Sunday, December 02, 2012

Companies Don't Force You Into Piracy
But let's be honest with ourselves: that doesn't mean that all those same media companies don't suck. Let me back up a minute, as this is an old argument. Most recently, this article from The Guardian bemoans the release window system:
A couple of months ago, I purchased the first season of the TV series Homeland from the iTunes Store. I paid $32 for 12 episodes that all landed seamlessly in my iPad. I gulped them in a few days and was left in a state of withdrawal. Then, on 30 September, when season 2 started over, I would have had no alternative but to download free but illegal torrent files. Hundreds of thousands of people anxious to find out the whereabouts of the Marine turncoat pursued by the bi-polar CIA operative were in the same quandary
This is, of course, stupid. This guy does have a pretty simple alternative: wait a few months to watch the show. It's a shitty alternative, to be sure, but that doesn't excuse piracy. As Sonny Bunch notes:
Of course you have an alternative you ninny! It's not bread for your starving family. You're not going to die if you have to wait six months to watch a TV show. You're not morally justified in your thievery.
Others have also responded as such:
This argument is both ludicrous, and wrong. Ludicrous, because if piracy is actually wrong, it doesn't get less wrong simply because you can't have the product exactly when and where you want it at a price you wish to pay. You are not entitled to shoplift Birkin bags on the grounds that they are ludicrously overpriced, and you cannot say you had no alternative but to break into the local ice cream parlor at 2 am because you are really craving some Rocky Road and the insensitive bastards refused to stay open 24/7 so that you could have your favorite sweet treat whenever you want. You are not forced into piracy because you can't get a television show at the exact moment when you want to see it; you are choosing piracy.
This is all well and good, and the original Guardian article has a poor premise... but that doesn't mean that the release window system isn't antiquated and doesn't suck. The original oped could easily have been tweaked to omit the quasi-justification for piracy; instead, the piracy is included and the article overreaches. On the flip side, the responses also tend to overstate their case, usually including something like this: "you can't have the product exactly when and where you want it at a price you wish to pay." This is true, of course, but that doesn't make it any less frustrating for consumers. And with respect to streaming, the media companies' stance is just as ludicrous as the piracy apologists'.

Here's a few examples I've run into:
  • HBOGO - This is a streaming service that HBO makes available to its cable subscribers. It's got a deep back catalog of their original content, as well as much of their current movie lineup. Sounds great, right? What's my problem? I can't actually watch HBOGO on my TV. For some unfathomable reason, Comcast blocks HBOGO from working on most streaming devices. It works on my computer, and it was recently launched on XBOX 360 (but I have a PS3 and I'm not shelling out another couple hundred bucks just so I can gain this single ability), but is otherwise not available. I'd like to watch the (ten-year-old) second season of Deadwood, but I can't do so unless I sit at my desk to watch it. Now, yes, I'm whining here about the fact that I can't watch this content how and where I choose, but is it really so unreasonable to want to watch a television show... on my television? Is this entitlement, or just common sense? How many dedicated streaming devices do I have to own before I can claim exhaustion? 4? 6? 15? Of course, I've got other options. I could purchase or rent the DVDs... but why do that when I'm paying for this other service?
  • Books and Ebooks - So I'd like to read a book called Permutation City, by Greg Egan. It was originally published in 1994, frequents Best SF Novel lists, and has long since fallen out of print. This is actually understandable, as Egan is an author with a small, niche audience and limited mainstream appeal. None of his novels get big print runs to start with, and despite all the acclaim, I doubt even this book would sell a lot of copies here in 2012. Heck, I'd even understand it if the publisher claimed that this was low on their ebook conversion priority list. But it's not. The ebook is available in the UK, but I guess the publisher has not secured rights in the US? I get that these sorts of rights situations are complicated, but patronizing a library or purchasing a used copy isn't going to make the rights holders any money on this stuff.
  • DVD on Linux - I've got multiple computers and one runs linux (at various other times, I've only had linux PCs). One of the things I like to do for this blog is take a screenshot of a movie I'm writing about. However, it is illegal for me to even play my DVDs on my linux box. These are purchased DVDs, not pirated anything. To be sure, I'm capable of playing DVDs on my linux PCs, but I'm technically breaking the law when doing so. There are various complications in all sorts of digital formats that make this a touchy topic. Even something as simple as MP3 playback trips up various linux distros, never mind stuff like iTunes or DRMed formats.
  • Blu-Ray - A few months ago, I wrote about a movie called Detention. I loved it and wrote a glowing review. I wanted to include a few screenshots to really sell the movie to my (admittedly few) readers, but when I plopped the BD into my shiny new BD drive on my computer, the BD player (Cyberlink PowerDVD) informed me that I wasn't able to play the disc. I was admittedly lazy at the time and didn't try too hard to circumvent this restriction (something about reinstalling the software (which I'm not even sure I have access to) and downloading patches and purchasing some key or something?) and to this day, I don't even know if it was just an issue with that one disc, or if it's all BDs. But still, who wins here? I get that the IP owners don't want to encourage piracy... but I don't see how frustrating me (a paying customer) serves them in the end. It's not like this "protection" stops or even slows down pirates. All it does is frustrate paying customers.
  • iTunes - I don't even really know the answer to this, but if I don't have an AppleTV, is there a way to view iTunes stuff on my television? I don't have an iPad, but if I bought one, would I be able to plug the iPad into the TV and stream video that way? I think there is software I can buy on PC that will stream iTunes... but should I have to purchase extra software or hardware (above and beyond the 5-10 devices I have right now) just to make iTunes work? And the last time I toyed with this type of software (I believe it was called PlayOn), it didn't work very well. Constant interruptions and low quality video. The fact that there are even questions surrounding this at all is a failure. For the most part, I can avoid this because Amazon and Netflix have good selections and actually work on all of my devices (i.e. they actually care to have me as a customer, which is nice).
Now, this doesn't mean I'm going to go out and pirate season 2 of Deadwood or any of the other things I mentioned above. Frustration does not excuse piracy. No, I'm just going to play a game or read a different book or go out to a bar or something. I have no shortage of things to do, so while I do want to watch any number of HBO shows on HBOGO, I can just as easily occupy my time with other activities (though, as above, I've certainly run into issues with other stuff). Pretty soon, I may realize that I don't actually need cable, at which point I'll cancel that service and... no one wins. I don't get to watch the show I want, and HBO and Comcast are out a customer. Why? I really don't know. If someone can explain why Comcast won't let me stream HBOGO, I'm all ears. They don't have the content available on demand, and they're not losing me as a customer by allowing me to watch the shows (again, you have to be an HBO subscriber to get HBOGO).

I get that these are all businesses and need to make money, but I don't understand the insistence on alienating their own customers, frequently and thoroughly. I'm not turning to piracy, I'm just a frustrated customer. I've already bought a bunch of devices and services so that I can watch this stuff, and yet I'm still not able to watch even a small fraction of what I want. Frustration doesn't excuse piracy, but I don't see why I should be excusing these companies for being so annoying about when and where and how I can consume their content. It's especially frustrating because so much of this is done in the name of piracy. I suppose this post is coming off petulant and whiny on my part, but if you think I'm bad, just try listening to the MPAA or similar institution talk about piracy and the things they do to their customers to combat it. In essence, these companies hurt their best customers to spite non-customers. So I don't pirate shows or movies or books, but then, I often don't get to watch or read the ones I want to either. In a world where media companies are constantly whining about declining sales, it's a wonder that they don't actually, you know, try to sell me stuff I can watch/read. I guess they find it easier to assume I'm a thief and treat me as such.
Posted by Mark on December 02, 2012 at 08:19 PM .: Comments (0) | link :.


End of This Day's Posts

Wednesday, August 22, 2012

Tweets of Glory
There's some great stuff on Twitter, but the tweets just keep coming, so there's a fair chance you've missed some funny stuff, even from the people you follow. Anywho, time is short tonight, so here's another installment of Tweets of Glory:
[embedded tweet]
I have to admit, hatewatching The Newsroom has actually been pretty entertaining, but I'd much rather watch this proposed feline-themed show.
[embedded tweet]
Yeah, so that one's a little out of date, but for the uninitiated, Duncan Jones is David Bowie's son.
[embedded tweet]
(I love the internet)
[embedded tweet]
Well, that happened. Stay tuned for some (hopefully) more fulfilling content on Sunday...
Posted by Mark on August 22, 2012 at 09:54 PM .: Comments (0) | link :.


End of This Day's Posts

Wednesday, August 08, 2012

Web browsers I have known, 1996-2012
Jason Kottke recently recapped all of the browsers he used as his default for the past 18 years. It sounded like fun, so I'm going to shamelessly steal the idea and list out my default browsers for the past 16 years (prior to 1996, I was stuck in the dark ages of dialup AOL - but once I went away to college and discovered the joys of T1/T3 connections, my browsing career started in earnest, so that's when I'm starting this list).
  • 1996 - Netscape Navigator 3 - This was pretty much the uncontested king of browsers at the time, but its reign would be short. I had a copy of IE3 (I think?) on my computer too, but I almost never used it...
  • 1997-1998 - Netscape Communicator 4 - Basically Netscape Navigator 4, but the Communicator was a whole suite of applications which appealed to me at the time. I used it for email and even to start playing with some HTML editing (though I would eventually abandon everything but the browser from this suite). IE4 did come out sometime in this timeframe and I used it occasionally, but I think I stuck with NN4 way longer than I probably should have.
  • 1999-2000 - Internet Explorer 5 - With the release of IE5 and the increasing issues surrounding NN4, I finally jumped ship to Microsoft. I was never particularly comfortable with IE though, and so I was constantly looking for alternatives and trying new things. I believe early builds of Mozilla were available, and I kept downloading the updates in the hopes that it would allow me to dispense with IE, but it was still early in the process for Mozilla. This was also my first exposure to Opera, which at the time wasn't that remarkable (we're talking version 3.5 - 4 here) except that, as usual, they were ahead of the curve on tabbed browsing (a mixed blessing, as monitor resolutions at the time weren't great). Opera was also something you had to pay for at the time, and a lot of sites didn't work in Opera. This would all change at the end of 2000, though, with the release of Opera 5.
  • 2001 - Opera 5 - This browser changed everything for me. It was the first "free" Opera browser available, although the free version was ad-supported (quite annoying, but it was easy enough to get rid of the ads). The thing that was revolutionary about this browser, though, was mouse gestures. It was such a useful feature, and Opera's implementation was (and quite frankly, still is) the best and smoothest I've seen. At this point, I was working at a website, so for work, I was still using IE5 and IE6 as my primary browsers (because at the time, they represented something like 85-90% of the traffic to our site). I was also still experimenting with the various Mozilla-based browsers, but Opera was my default for personal browsing. Of course, no one codes for Opera, so there were plenty of sites I'd have to fire up IE for (this has always been an issue with Opera).
  • 2002-2006 - Opera 6/7/8/9 - I pretty much kept rolling with Opera during this timeframe. Again, for my professional use, IE6/IE7 was still a must, but in 2004, Firefox 1.0 launched, so that added another variable to the mix. I wasn't completely won over by the initial Firefox offerings, but it was the first new browser in a long time that I thought had a bright future. It also provided a credible alternative for when Opera crapped out on a weirdly coded page. However, as web standards started to actually be implemented, Opera's issues became fewer as time went on...
  • 2007 - Firefox 2/Opera 9 - It was around this time that Firefox started to really assert itself in my personal and professional usage. I still used Opera a lot for personal use, but for professional purposes, Firefox was a simple must. At the time, I was embroiled in a year-long site redesign project for my company, and I was doing a ton of HTML/CSS/JavaScript development... Firefox was an indispensable tool at the time, mostly due to extensions like Firebug and the Web Developer Toolbar. I suppose I should note that Safari first came to my attention at this point, mostly for troubleshooting purposes. I freakin' hate that browser.
  • 2008-2011 - Firefox/Opera - After 2007, there was a slow, inexorable drive towards Firefox. Opera kept things interesting with a feature they call Speed Dial (and quite frankly, I like that feature much better than what Chrome and recent versions of Firefox have implemented), but the robust and mature catalog of extensions for Firefox was really difficult to compete with, especially when I was trying to get stuff done. Chrome also started to gain popularity in this timeframe, but while I loved how well it handled Ajax and other JavaScript-heavy features, I could never really get comfortable with the interface. Firefox still afforded more control, and Opera's experience was generally better.
  • 2012/Present - Firefox - Well, I think it's pretty telling that I'm composing this post on Firefox. That being said, I still use Opera for simple browsing purposes semi-frequently. Indeed, I usually have both browsers open at all times on my personal computer. At work, I'm primarily using Firefox, but I'm still forced to use IE8, as our customers tend to still prefer IE (though the percentage is much less these days). I still avoid Safari like the plague (though I do sometimes need to troubleshoot and I suppose I do use Mobile Safari on my phone). I think I do need to give Chrome a closer look, as it's definitely more attractive these days...
Well, there you have it. I do wonder if I'll ever get over my stubborn love for Opera, a browser that almost no one but me uses. They really do manage to keep up with the times, and they've even somewhat recently added support for Firefox- and Chrome-style extensions, though I think it's a little too late for them. FF and Chrome just have a more robust community surrounding their development than Opera does. I feel like it's a browser fated to die at some point, but I'll probably continue to use it until it does... So what browser do you use?
Posted by Mark on August 08, 2012 at 09:23 PM .: Comments (5) | link :.


End of This Day's Posts

Wednesday, May 02, 2012

Tweets of Glory
One of the frustrating things about Twitter is that it's nearly impossible to find a tweet once it's gone past a few days. I've gotten into the habit of favoriting the ones I find particularly funny or that I need to come back to, which is nice, as it allows me to publish a cheap Wednesday blog entry (incidentally, sorry for the cheapness of this entry) that will hopefully still be fun for folks to read. So here are some tweets of glory:
[embedded tweet]
Note: This was Stephenson's first tweet in a year and a half.

This one is obviously a variation on a million similar tweets (and, admit it, it's a thought we've all had), but it's the first one I saw (or at least, the first I favorited - I'm sure it's far from the first time someone made that observation).
[embedded tweet]
Well, that happened. Stay tuned for some (hopefully) more fulfilling content on Sunday...
Posted by Mark on May 02, 2012 at 08:36 PM .: link :.


End of This Day's Posts

Sunday, April 15, 2012

Kickstarted
When the whole Kickstarter thing started, I went through a number of phases. First, I thought it was a neat idea, one that leverages some of the stuff that makes the internet great. Second, as my systems analyst brain started chewing on it, I developed some reservations... but those were short-lived because, third, some really interesting stuff started getting funded. Here are some of the ones I'm looking forward to:
  • Singularity & Co. - Save the SciFi! - Yeah, so you'll be seeing a lot of my nerdy pursuits represented here, and this one is particularly interesting. This is a project dedicated to saving SF books that are out of print, out of circulation, and, ironically, unavailable in any sort of digital format. The Kickstarter is funding the technical solution for scanning the books as well as tracking down and securing copyright. Judging from the response (over $50,000), this is a venture that has found a huge base of support, and I'm really looking forward to discovering some of these books (some of which are from well known authors, like Arthur C. Clarke).
  • A Show With Ze Frank - One of the craziest things I've seen on the internet is Ze Frank's The Show. Not just the content, which is indeed crazy, but the sheer magnitude of what he did - a video produced every weekday for an entire year. Ze Frank grew quite a following at the time, and in fact, half the fun was his interactions with the fans. Here's to hoping that Sniff, hook, rub, power makes another appearance. And at $146 thousand, I have no idea what we're in for. I always wondered how he kept himself going during the original show, but now at least he'll be funded.
  • Oast House Hop Farm - And now we come to my newest obsession: beer. This is a New Jersey farm that's seeking to convert a (very) small portion of their land into a Hop Farm. Hops in the US generally come from the west coast (Washington's Yakima valley, in particular). In the past, that wasn't the case, but some bad luck (blights and infestations) brought east coast hops down, then Prohibition put a nail in the coffin. The farm hopes to supply NJ brewers as well as homebrewers, so mayhaps I'll be using some of their stuff in the future! So far, they've planted Cascade and Nugget hops, with Centennial and Newport coming next. I'm really curious to see how this turns out. My understanding is that it takes a few years for a hop farm to mature, and that each crop varies. I wonder how the East Coast environs will impact the hops...
  • American Beer Blogger - Despite the apparent failure of Discovery's Brewmasters, there's got to be room for some sort of beer television show, and famous beer blogger and author Lew Bryson wants to give it a shot. The Kickstarter is just for the pilot episode, but assuming things go well, there may be follow-up efforts. I can only hope it turns out well. I enjoyed Brewmasters for what it was, but being centered on Dogfish Head limited it severely. Sam Calagione is a great, charismatic guy, but the show never really captured the amazing stuff going on in American beer right now - a scene so broad, so local, and so varied that Brewmasters couldn't really highlight it given its structure.
Well, there you have it. I... probably should have been linking to these before they were funded, but whatever, I'm really happy to see that all of these things will be coming. I'm still curious to see if this whole Kickstarter thing will remain sustainable, but I guess time will tell, and for now, I'm pretty happy with the stuff being funded. There are definitely a ton of other campaigns that I think are interesting, especially surrounding beer and video games, but I'm a little tight on time here, so I'll leave it at that...
Posted by Mark on April 15, 2012 at 08:28 PM .: link :.


End of This Day's Posts

Wednesday, April 11, 2012

More Disgruntled, Freakish Reflections on ebooks and Readers
While I have some pet peeves with the Kindle, I've mostly found it to be a good experience. That being said, there are some things I'd love to see in the future. These aren't really complaints, as some of this stuff isn't yet available, but there are a few opportunities afforded by the electronic nature of eBooks that would make the whole process better.
  • The Display - The electronic ink display that the basic Kindles use is fantastic... for reading text. Once you get beyond simple text, things are a little less fantastic. Things like diagrams, artwork, and photography aren't well represented in e-ink, and even in color readers (like the iPad or Kindle Fire), there are issues with resolution and formatting that often show up in eBooks. Much of this comes down to technology and cost, both of which are improving quickly. Once stuff like IMOD displays start to deliver on their promise (low power consumption, full color, readable in sunlight, easy on the eyes, capable of supporting video, etc...), we should see a new breed of reader.

    I'm not entirely sure how well this type of display will work, at least initially. For instance, how will it compare to the iPad 3's display? What's the resolution like? How much will it cost? And so on. Current implementations aren't full color, and I suspect that future iterations will go through a phase where the tech isn't quite there yet... but I think it will be good enough to move forward. I think Amazon will most certainly jump on this technology when it becomes feasible (both from a technical and cost perspective). I'm not sure if Apple would switch though. I feel like they'd want a much more robust and established display before they committed.
  • General Metrics and Metadata - While everyone would appreciate improvements in device displays, I'm not sure how important this next one would be to most people. Maybe it's just me, but I'd love to see a lot more in the way of metadata and flexibility, both about the book and about device usage. With respect to the book itself, this gets to the whole page number issue I was whinging about in my previous post, but it's more than that. I'd love to see a statistical analysis of what I'm reading, on both individual and collective levels.

    I'm not entirely sure what this looks like, but it doesn't need to be rocket science. Simple Flesch-Kincaid grades seem like an easy enough place to start, and they'd be pretty simple to implement (a rough sketch follows this list). Calculating such things for my entire library (or a subset of my library), or ranking my library by grade (or similar sorting methods), would be interesting. I don't know that this would provide a huge amount of value, but I would personally find it illuminating and fun to play around with. Individual works wouldn't even require any processing power on the reader; the numbers could be computed as part of the download. Doing calculations across your collective library might be a little more complicated, but even that could probably be done in the cloud.

    Other metadata would also be interesting to view. For example, Goodreads will graph your recently read books by year of publication - a lot of analysis along those lines could be done on your collection (or any sub-grouping of it). Groupings by decade or genre or reading level would all be very interesting to see.
  • Personal Metrics and Metadata - Basically, I'd like to have a way to track my reading speed. For whatever reason, this is something I'm always trying to figure out for myself. I've never gone through the process of actually recording my reading habits and speeds because it would be tedious and manual and maybe not even all that accurate. But now that I'm reading books in an electronic format, there's no reason why the reader couldn't keep track of what I'm reading, when I'm reading, and how fast I'm reading. My anecdotal experience suggests that I read anywhere from 20-50 pages an hour, depending mostly on the book. As mentioned in the previous post, a lot of this has to do with the arbitrary nature of page numbers, so perhaps standardizing to a better metric (words per minute or something like that) would normalize my reading speed.

    Knowing my reading speed and graphing changes over time could be illuminating. I've played around a bit with speed reading software, and the results are interesting, but not drastic. In any case, one thing that would be really interesting to know while reading a book is how much time you have left before you finish. Instead of 200 pages to go, maybe you have 8 hours of reading time left (the sketch after this list shows roughly how that could be computed).

    Combining my personal data with the general data could also yield some interesting results. Maybe I read trashy SF written before 1970 much faster than more contemporary literary fiction. Maybe I read long books faster than short books. There are a lot of possibilities here.

    There are a few catches to this whole personal metrics thing though. You'd need a way to account for breaks and interruptions. I might spend three hours reading tonight, but I'm sure I'll take a break to get a glass of water or answer a phone call, etc... There's not really an easy way around this, though there could be mitigating factors like when the reader goes to sleep mode or something like that. Another problem is that one device can be used by multiple people, which would require some sort of profile system. That might be fine, but it also adds a layer of complexity to the interface that I'm sure most companies would like to avoid. The biggest and most concerning potential issue is that of privacy. I'd love to see this information about myself, but would I want Amazon to have access to it? On the other hand, being able to aggregate data from all Kindles might prove interesting in its own right. Things like average reading speed, number of books read in a year, and so on. All interesting and useful info.

    This would require an openness and flexibility that Amazon has not yet demonstrated. It's encouraging that the Kindle Fire runs a flavor of Android (an open source OS), but on the other hand, it's a forked version that I'm sure isn't as free (as in speech) as I'd like (and from what I know, the Fire is partially limited by its hardware). Expecting comprehensive privacy controls from Amazon seems naive.

    I'd like to think that these metrics would be desirable to a large audience of readers, but I really have no idea what the mass-market appeal would be. It's something I'd actually like to see in a lot of other places too. Video games, for instance, provide a lot of opportunity for statistics, and some games provide a huge amount of data on your gaming habits (be it online or in a single-player mode). Heck, half the fun of sports games (or sports in general) is tracking the progress of your players (particularly prospects). Other games are baffling in their lack of statistical depth. People should be playing meta-games like Fantasy Baseball, but with MLB The Show providing the data instead of real life.
  • The Gamification of Reading - Much of the above wanking about metrics could probably be summarized as a way to make reading a game. The metrics mentioned above readily lend themselves to point scores, social-app-like badges, and leaderboards. I don't know that this would necessarily be a good thing, but it could make for an intriguing system. There's an interesting psychology at work in systems like this, and I'd be curious to see if someone like Amazon could make reading more addictive. Assuming most people don't try to abuse the system (though there will always be a cohort that will attempt to exploit stuff like this), it could ultimately lead to beneficial effects for individuals who "play" the game competitively with their friends. Again, this isn't necessarily a good thing. Perhaps the gamification of reading would lead to a sacrifice of comprehension in the name of speed, or other unintended effects. Still, it would be nice to see the "gamification of everything" used for something other than a way for companies to trick customers into buying their products.
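To make those metric ideas a little more concrete, here's a minimal sketch of the two easiest pieces - a Flesch-Kincaid grade calculation and a "time left in this book" estimate. To be clear, this is purely my own back-of-the-envelope illustration: the crude syllable counter, the five-minute idle cutoff, and all the function names are my assumptions, not anything Amazon actually does.

    import re

    VOWEL_GROUPS = re.compile(r"[aeiouy]+")

    def count_syllables(word):
        # Crude heuristic: count runs of consecutive vowels; floor at one.
        return max(1, len(VOWEL_GROUPS.findall(word.lower())))

    def flesch_kincaid_grade(text):
        # Standard formula:
        # 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

    def hours_left(words_remaining, samples, idle_cutoff=300):
        # samples: (timestamp_in_seconds, words_read_since_last_sample)
        # tuples logged by the reader. Gaps longer than idle_cutoff seconds
        # are treated as breaks (phone calls, glasses of water) and
        # excluded from the reading rate.
        active_seconds, words_read = 0, 0
        for (t0, _), (t1, words) in zip(samples, samples[1:]):
            gap = t1 - t0
            if gap <= idle_cutoff:
                active_seconds += gap
                words_read += words
        if not active_seconds or not words_read:
            return None  # not enough data yet
        words_per_hour = words_read * 3600 / active_seconds
        return words_remaining / words_per_hour

Nothing fancy, and the syllable heuristic is genuinely crude, but that's kind of the point: the raw data is already sitting there on the device; someone just has to expose it.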
As previously mentioned, the need for improved displays is a given (and not just for ereaders). But assuming these nutty metrics (and the gamification of reading) are an appealing concept, I'd like to think that it would provide an opening for someone to challenge Amazon in the market. An open, flexible device using a non-DRMed format and tied to a common store would be very nice. Throw in some game elements, add a great display, and you've got something close to my ideal reader. Unfortunately, it doesn't seem like we're all that close just yet. Maybe in 5-10 years? Seems possible, but it's probably more likely that Amazon will continue its dominance.
Posted by Mark on April 11, 2012 at 09:22 PM .: link :.


End of This Day's Posts

Wednesday, February 15, 2012

Zemanta
Last week, I looked at commonplace books and various implementation solutions. Ideally, I wanted something open and flexible that would also provide some degree of analysis in addition to the simple data aggregation most tools provide. I wanted something that would take into account a wide variety of sources in addition to my own writing (on this blog, for instance). Most tools provide a search capability of some kind, but I was hoping for something more advanced. Something that would make connections between data, or find similarities with something I'm currently writing.

At first glance, Zemanta seemed like a promising candidate. It's a "content suggestion engine" specifically built for blogging and it comes pre-installed on a lot of blogging software (including Movable Type). I just had to activate it, which was pretty simple. Theoretically, it continually scans a post in progress (like this one) and provides content recommendations, ranging from simple text links defining key concepts (i.e. links to Wikipedia, IMDB, Amazon, etc...), to imagery (much of which seems to be integrated with Flickr and Wikipedia), to recommended blog posts from other folks' blogs. One of the things I thought was really neat was that I could input my own blogs, which would then give me more personalized recommendations.

Unfortunately, results so far have been mixed. There are some things I really like about Zemanta, but it's pretty clearly not the solution I'm looking for. Some assorted thoughts:

  • Zemanta will only work when using the WYSIWYG Rich Text editor, which turns out to be a huge pain in the arse. Lots of people are probably fine with that, but I've been editing my blog posts in straight HTML for far too long. I suppose this is more of a hangup on my end than a problem with Zemanta, but it's definitely something I find annoying. When I write a post in WYSIWYG format, I invariably switch it back to no formatting and then jump through a bunch of hoops getting the post to look the way I want.
  • The recommended posts haven't been very useful so far. Some of the external choices are interesting, but so far, nothing has really helped me in writing my posts. I was really hoping that loading my blog into Zemanta would add a lot of value, but it turns out that Zemanta only scanned my recent posts, and it sorta recommended most of them, which doesn't help me much. I know what I've written recently; what I was hoping for was that Zemanta would point out some post I wrote in 2005 along similar lines. In my previous post on Taxonomy Platforms, I specifically referenced the titles of some of my old blog posts, but since they were old, Zemanta didn't find or recommend them. Even more annoying, when writing this post, the Taxonomy Platforms post wasn't one of the recommended articles despite my specifically mentioning it (Update: It has it now, but it didn't seem to appear until after I'd already gone through the trouble of linking it...). It appears that Zemanta is basing all of this on my RSS feed, which makes sense, but I wish there were a way to upload my full archives, as that might make this tool a little more powerful...
  • The recommendations seem to be based on a relatively simplistic algorithm. A good search engine will index data and learn associations between individual words by tracking their frequency and how close they appear to other words (the sketch after this list illustrates the sort of proximity analysis I mean). Zemanta doesn't seem to do that. In my previous post, I referenced famous beer author Michael Jackson. What did Zemanta recommend? Lots of pictures and articles about the musician, nothing about the beer journalist. I don't know if I'm expecting too much of the system, but it would be nice if the software picked up on the fact that this guy's name was showing up near lots of beer talk, with nary a reference to music. It's probably too much to hope that my specifically calling out that I was talking about "the beer critic, not the pop star" would influence the system (and indeed, my reference to "pop star" may have influenced the recommendations, despite the fact that I was trying to negate it).
  • The "In-Text Links", on the other hand, seem to come in quite handy. I actually leveraged several of them in my past few posts, and they were very easy to use. Indeed, I particularly appreciated their integration with Amazon, where I could enter my associates ID, and the links that were inserted were automatically generated with my ID. This is normally a pretty intensive process involving multiple steps that has been simplified down to the press of a button.  Very well done, and most of the suggestions there were very relevant.

I will probably continue to play with Zemanta, but I suspect it won't last much longer. It provides some value, but it's ultimately not as convenient as I'd like, and its analysis and recommendation functions aren't as useful as I'd hoped.

I've also been playing around with Evernote more and more, and I feel like that could be a useful tool, despite the fact that it doesn't really offer any sort of analysis (though it does have a simple search function). There's at least one third party, though, that seems to be positioning itself as an analysis tool that will integrate with Evernote. That tool is called Topicmarks. Unfortunately, I seem to be having some issues integrating my Evernote data with that service. At this rate, I don't know that I'll find a great tool for what I want, but it's an interesting subject, and I'm guessing it will become more and more important as time goes on. We're living in the Information Age; it seems only fair that our aggregation and analysis tools get more sophisticated.

Posted by Mark on February 15, 2012 at 06:08 PM .: link :.


End of This Day's Posts

Wednesday, February 08, 2012

Commonplacing
During the Enlightenment, most intellectuals kept what's called a Commonplace Book. Basically, folks like John Locke or Mark Twain would curate transcriptions of interesting quotes from their readings. It was a personalized record of interesting ideas that the author encountered. When I first heard about the concept, I immediately started thinking of how I could implement one... which is when I realized that I've actually been keeping one, more or less, for the past decade or so on this blog. It's not very organized, though, and the idea of a more deliberate commonplace book has been banging around in my head for the better part of the last year.

Locke was a big fan of Commonplace Books, and he spent years developing an intricate system for indexing his books' content. It was, of course, a ridiculous and painstaking process, but it worked. Fortunately for us, this is exactly the sort of thing that computer systems excel at, right? The reason I'm writing this post is a small confluence of events that has lead me to consider creating a more formal Commonplace Book. Despite my earlier musing on the subject, this blog doesn't really count. It's not really organized correctly, and I don't publish all the interesting quotes that I find. Even if I did, it's not really in a format that would do me much good. So I'd need to devise another plan.

Why do I need a plan at all? What's the benefit of a commonplace book? Well, I've been reading Steven Johnson's book Where Good Ideas Come From: The Natural History of Innovation and he mentions how he uses a computerized version of the commonplace book:
For more than a decade now, I have been curating a private digital archive of quotes that I've found intriguing, my twenty-first century version of the commonplace book. ... I keep all these quotes in a database using a program called DEVONthink, where I also store my own writing: chapters, essays, blog posts, notes. By combining my own words with passages from other sources, the collection becomes something more than just a file storage system. It becomes a digital extension of my imperfect memory, an archive of all my old ideas, and the ideas that have influenced me.
This DEVONthink software certainly sounds useful. It's apparently got this fancy AI that will generate semantic connections between quotes and what you're writing. It's advanced enough that many of those connections seem to be subtle and "lyrical", finding connections you didn't know you were looking for. It sounds perfect except for the fact that it only runs on Mac OSX. Drats. It's worth keeping in mind in case I ever do make the transition from PC to Mac, but it seems like lunacy to do so just to use this application (which, for all I know, will be useless to me).

By sheer happenstance, I've also been playing around with Pinterest lately, and it occurs to me that it's a sort of commonplace book, albeit one with a narrower focus on images and video (and recipes?) than on quotes. There are actually quite a few sites like that. I've been curating a large selection of links on Delicious for years now (1600+ links on my account). Steven Johnson himself has recently contributed to a new web startup called Findings, which is primarily concerned with book quotes. All of this seems rather limiting, and quite frankly, I don't want to be using 7 completely different tools to do the same thing for different types of media.

I also took a look at Tumblr again, this time evaluating it from a commonplacing perspective. There are some really nice things about the interface and the ease with which you can curate your collection of media. The problem, though, is that their archiving system is even more useless than that of most blog software. It's not quite the hell that is Twitter archives, but that's a pretty low bar. Also, as near as I can tell, the data is locked up on their server, which means that even if I could find some sort of indexing and analysis tool to run through my data, I wouldn't really be able to do so (Update: apparently Tumblr does have a backup tool, but only for use with OSX. Again!? What is it with you people? This is the internet, right? How hard is it to make this stuff open?).

Evernote shows a lot of promise and probably warrants further examination. It seems to be the go-to alternative for lots of researchers and writers. It's got a nice cloud implementation with a robust desktop client and the ability to export data as I see fit. I'm not sure if its search will be as sophisticated as what I ultimately want, but it could be an interesting tool.

Ultimately, I'm not sure the tool I'm looking for exists. DEVONthink sounds pretty close, but it's hard to tell how it will work without actually using the damn thing. The ideal would be a system where you can easily maintain a whole slew of data and metadata, to the point where I could be writing something (say a blog post or a requirements document for my job) and the tool would suggest relevant quotes/posts based on what I'm writing. This would probably be difficult to accomplish in real-time, but a "Find related content" feature would still be pretty awesome (a rough sketch of how such a feature could start follows below). Anyone know of any alternatives?
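For the record, I have no idea how DEVONthink's AI actually works, but the classic starting point for this kind of "find related content" feature is bag-of-words similarity: weight each term by how distinctive it is across the archive, then rank archived entries by how much they overlap with whatever you're currently writing. A minimal sketch, with every name and structural choice being an assumption on my part:

    import math
    from collections import Counter

    def tfidf(docs):
        # docs: {doc_id: list_of_tokens}. Returns {doc_id: {term: weight}},
        # where rare terms get higher weights than ubiquitous ones.
        df = Counter()
        for tokens in docs.values():
            df.update(set(tokens))
        n = len(docs)
        return {
            doc_id: {term: (count / len(tokens)) * math.log(n / df[term])
                     for term, count in Counter(tokens).items()}
            for doc_id, tokens in docs.items()
        }

    def cosine(a, b):
        # Cosine similarity between two sparse term-weight vectors.
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(w * w for w in a.values()))
        nb = math.sqrt(sum(w * w for w in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def find_related(draft_tokens, archive):
        # Rank every entry in the commonplace book by similarity to the
        # draft in progress ("_draft" is assumed not to be an archive key).
        vectors = tfidf({**archive, "_draft": draft_tokens})
        draft = vectors.pop("_draft")
        return sorted(archive, key=lambda d: cosine(draft, vectors[d]),
                      reverse=True)

Real systems layer a lot more on top of this (stemming, semantic analysis, recency), but even this much would beat paging through a decade of archives by hand.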

Update: Zemanta! I completely forgot about this. It comes installed by default with my blogging software, but I had turned it off a while ago because it took forever to load and was never really that useful. It's basically a content recommendation engine, pulling content from lots of internet sources (notably Wikipedia, Amazon, Flickr and IMDB). It's also grown considerably in the time since I'd last used it, and it now features a truckload of customization options, including the ability to separate general content recommendations from your own, personally curated sources. So far, I've only connected my two blogs to the software, but it would be interesting if I could integrate Zemanta with Evernote, Delicious, etc... I have no idea how great the recommendations will be (or how far back it will look on my blogs), but this could be exactly what I was looking for. Even if integration with other services isn't working, I could probably create myself another blog just for quotes, and then use that blog with Zemanta. I'll have to play around with this some more, but I'm intrigued by the possibilities.
Posted by Mark on February 08, 2012 at 05:31 PM .: link :.


End of This Day's Posts

Wednesday, January 18, 2012

SOPA Blues
I was going to write the annual arbitrary movie awards tonight, but since the web has apparently gone on strike, I figured I'd spend a little time talking about that instead. Many sites, including the likes of Wikipedia and Reddit, have instituted a complete blackout as part of a protest against two ill-conceived pieces of censorship legislation currently being considered by the U.S. Congress (these laws are called the Stop Online Piracy Act and Protect Intellectual Property Act, henceforth to be referred to as SOPA and PIPA). I can't even begin to pretend that blacking out my humble little site would accomplish anything, but since a lot of my personal and professional livelihood depends on the internet, I suppose I can't ignore this either.

For the uninitiated: if the bills known as SOPA and PIPA become law, many websites could be taken offline involuntarily, without warning, and without due process of law, based on little more than a copyright owner's unproven and uncontested allegations of infringement [1]. The reason Wikipedia is blacked out today is that they depend solely on user-contributed content, which means they would be a ripe target for overzealous copyright holders. Sites like Google haven't blacked themselves out, but have staged a bit of a protest as well, because under the provisions of the bills, even just linking to a site that infringes upon copyright is grounds for action (and thus search engines have a vested interest in defeating these bills). You could argue that these bills are well intentioned, and from what I can tell, their original purpose seemed to be more about foreign websites and DNS, but the road to hell is paved with good intentions, and as written, these bills are completely absurd.

Lots of other sites have been registering their feelings on the matter. ArsTechnica has been posting up a storm. Shamus has a good post on the subject which is followed by a lively comment thread. But I think Aziz hits the nail on the head:
Looks like the DNS provisions in SOPA are getting pulled, and the House is delaying action on the bill until February, so it’s gratifying to see that the activism had an effect. However, that activism would have been put to better use to educate people about why DRM is harmful, why piracy should be fought not with law but with smarter pro-consumer marketing by content owners (lowered prices, more options for digital distribution, removal of DRM, fair use, and ubiquitous time-shifting). Look at the ridiculous limitations on Hulu Plus - even if you’re a paid subscriber, some shows won’t air episodes until the week after, old episodes are not always available, some episodes can only be watched on the computer and are restricted from mobile devices. These are utterly arbitrary limitations on watching content that just drive people into the pirates’ arms.
I may disagree with some of the other things in Aziz's post, but the above paragraph is important, and for some reason, people aren't talking about this aspect of the story. Sure, some folks are disputing the numbers, but few are pointing out the things that IP owners could be doing instead of legislation. For my money, the most important thing that IP owners have forgotten is convenience. Aziz points out Hulu, which is one of the worst services I've ever seen in terms of being convenient or even just intuitive to customers. I understand that piracy is frustrating for content owners and artists, but this is not the way to fight piracy. It might be disheartening to acknowledge that piracy will always exist, but it probably will, so we're going to have to figure out a way to deal with it. The one thing we've seen work is convenience. Despite the fact that iTunes had DRM, it was loose enough and convenient enough that it became a massive success (it now doesn't have DRM, which is even better). People want to spend money on this stuff, but more often than not, content owners are making it harder on the paying customer than on the pirate. SOPA/PIPA is just the latest example of this sort of thing.

I've already written about my thoughts on Intellectual Property, Copyright and DRM, so I encourage you to check that out. And if you're so inclined, you can find out what senators and representatives are supporting these bills, and throw them out in November (or in a few years, if need be). I also try to support companies or individuals that put out DRM-free content (for example, Louis CK's latest concert video has been made available, DRM free, and has apparently been a success).

Intellectual Property and Copyright is a big subject, and I have to be honest in that I don't have all the answers. But the way it works right now just doesn't seem right. A copyrighted work released just before I was born (i.e. Star Wars) probably won't enter the public domain until after I'm dead (I'm generally an optimistic guy, so I won't complain if I do make it to 2072, but still). Both protection and expiration are important parts of the way copyright works in the U.S. It's a balancing act, to be sure, but I think the pendulum has swung too far in one direction. Maybe it's time we swing it back. Now if you'll excuse me, I'm going to participate in a different kind of blackout to protest SOPA.

1 - Thanks to James for the concise description. There are lots of much longer and better-sourced descriptions of the shortcomings of this bill and the issues surrounding it, so I won't belabor the point here.
Posted by Mark on January 18, 2012 at 06:20 PM .: link :.


End of This Day's Posts

Sunday, July 24, 2011

Streaming and Netflix's Woes
A few years ago, when I was still contemplating the purchase of a Blu-Ray player (which ended up being the PS3), there was a lot of huffing-and-puffing about how Blu-Ray would never catch on, physical media was dead, and that streaming was the future. My thoughts on that at the time were that streaming is indeed the future, but that it would take at least 10 years before it actually happened in an ideal form. The more I see, the more I'm convinced that I actually underestimated the time it would take to get a genuinely great streaming service running.

One of the leading examples of a streaming service is Netflix's Watch Instantly service. As a long time Netflix member, I can say that it is indeed awesome, especially now that I can easily stream it to my television. However, there is one major flaw to their streaming service: the selection. Now, they have somewhere on the order of 20,000-30,000 titles available, which is certainly a huge selection... but it's about 1/5th of what they have available on physical media. For some folks, I'm sure that's enough, but for movie nerds like myself, I'm going to want to keep the physical option on my plan...

The reason Netflix's selection is limited is the same reason I don't think we'll see an ideal streaming service anytime soon. The problems are not technological. It all comes down to intellectual property. Studios and distributors own the rights, and they often don't want to allow streaming, especially for new releases. Indeed, several studios won't even allow Netflix to rent physical media for the first month of release. In order for a streaming service to actually supplant physical media, it will have to feature a comprehensive selection. Netflix does have a vested interest in making that happen (the infrastructure needed for physical media rentals via mail is massive and costly, while streaming is, at least, more streamlined from a logistical point of view), but I don't see this happening anytime soon.

Netflix has recently encountered some issues along these lines, and as a result, they've changed their pricing structure. It used to be that you could buy a plan that would allow you to rent 1, 2, 3, or 4 DVDs or BDs at a time. If you belonged to one of those plans, you also got free, unlimited streaming. Within the past year or so, they added another option for folks who only wanted streaming. And just a few weeks ago, they made streaming an altogether separate service. Instead of buying the physical media plan of your choice and getting streaming "for free", you now also need to pay for streaming. I believe their most popular plan used to be 1 disc with unlimited streaming, which was $9.99. This plan is now $16.98.

As you might expect, this has resulted in a massive online shitstorm of infantile rage and fury. Their blog post announcing the change currently has 12,000+ comments from indignant users. There are even more comments on their Facebook page (somewhere on the order of 80,000 comments there), and of course, other social media sites like Twitter were filled with indignant posts on the subject.

So why did Netflix risk the ire of their customers? They've even acknowledged that they were expecting some outrage at the change. My guess is that the bill's about to come due, and Netflix didn't really have a choice in the matter.

Indeed, a few weeks ago, Netflix had to temporarily stop streaming all of its Sony movies (which are distributed through Starz). It turns out that there's a contractual limit on the number of subscribers that Sony will allow, so now Netflix needs to renegotiate with Sony/Starz. The current cost to license Sony/Starz content for streaming is around $30 million annually. Details aren't really public (and it's probably not finalized yet), but it's estimated that the new contract will cost Netflix somewhere on the order of $200-$350 million a year. And that's just Sony/Starz. I imagine other studios will now be chomping at the bit. And of course, all these studios will continually up their rates as Netflix tries to expand their streaming selection.

So I think that all of the invective being thrown Netflix's way is mostly unwarranted (or, rather, misplaced). All that rage should really be directed at the studios, who are trying to squeeze every penny out of their IP. At least Netflix seems to be doing business in an honest and open way here, and yet everyone's bitching about it. Other companies would do something sneaky. For instance, movie theaters (which also get a raw deal from studios) seem to raise ticket prices by a quarter every few months. Any given increase is met with a bit of a meh, but added up over the past few years, those increases have raised ticket prices considerably.

Ultimately, it's quite possible that Netflix will take a big hit on this in the next few years. Internet nerd-rage notwithstanding, I doubt their customer base will drop, but if their cost of doing business goes up the way it seems it will, I can see their profits dropping considerably. But if that happens, it won't be Netflix we should blame, it will be the studios... I don't want to completely demonize the studios here - they do create and own the content, and they're entitled to be compensated for it. However, I don't think anyone believes they're being fair about this. They've been trying to slow Netflix down for years, after all. Quite frankly, Netflix has been much more customer friendly than the studios.
Posted by Mark on July 24, 2011 at 06:33 PM .: link :.


End of This Day's Posts

Sunday, May 22, 2011

Communication
About two years ago (has it really been that long!?), I wrote a post about Interrupts and Context Switching. As long and ponderous as that post was, it was actually meant to be part of a larger series of posts. This post is meant to be the continuation of that original post and hopefully, I'll be able to get through the rest of the series in relatively short order (instead of dithering for another couple years). While I'm busy providing context, I should also note that this series was also planned for my internal work blog, but in the spirit of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Obviously, some of the specifics of my workplace have been removed from what follows, but it should still contain enough general value to be worthwhile.

In the previous post, I wrote about how computers and humans process information and in particular, how they handle switching between multiple different tasks. It turns out that computers are much better at switching tasks than humans are (for reasons belabored in that post). When humans want to do something that requires a lot of concentration and attention, such as computer programming or complex writing, they tend to work best when they have large amounts of uninterrupted time and can work in an environment that is quiet and free of distractions. Unfortunately, such environments can be difficult to find. As such, I thought it might be worth examining the source of most interruptions and distractions: communication.

Of course, this is a massive subject that can't even be summarized in something as trivial as a blog post (even one as long and bloviated as this one is turning out to be). That being said, it's worth examining in more detail because most interruptions we face are either directly or indirectly attributable to communication. In short, communication forces us to do context switching, which, as we've already established, is bad for getting things done.

Let's say that you're working on something large and complex. You've managed to get started and have reached a mental state that psychologists refer to as flow (also colloquially known as being "in the zone"). Flow is basically a condition of deep concentration and immersion. When you're in this state, you feel energized and often don't even recognize the passage of time. Seemingly difficult tasks no longer feel like they require much effort and the work just kinda... flows. Then someone stops by your desk to ask you an unrelated question. As a nice person and an accommodating coworker, you stop what you're doing, listen to the question and hopefully provide a helpful answer. This isn't necessarily a bad thing (we all enjoy helping other people out from time to time) but it also represents a series of context switches that would most likely break you out of your flow.

Not all work requires you to reach a state of flow in order to be productive, but for anyone involved in complex tasks like engineering, computer programming, design, or in-depth writing, flow is a necessity. Unfortunately, flow is somewhat fragile. It doesn't happen instantaneously; it requires a transition period where you refamiliarize yourself with the task at hand and the myriad issues and variables you need to consider. When your colleague departs and you can turn your attention back to the task at hand, you'll need to spend some time getting your brain back up to speed.

In isolation, the kind of interruption described above might still be alright every now and again, but imagine if the above scenario happened a couple dozen times in a day. If you're supposed to be working on something complicated, such a series of distractions would be disastrous. Unfortunately, I work for a 24/7 retail company, and the nature of our business sometimes requires frequent interruptions, so there are times when I am in a near constant state of context switching. None of this is to say I'm not part of the problem. I am certainly guilty of interrupting others, sometimes frequently, when I need some urgent information. This makes working on particularly complicated problems extremely difficult.

In the above example, there are only two people involved: you and the person asking you a question. However, in most workplace environments, that situation indirectly impacts the people around you as well. If they're immersed in their work, an unrelated conversation two cubes down may still break them out of their flow and slow their progress. This isn't nearly as bad as some workplaces that have a public address system - basically a way to interrupt hundreds or even thousands of people in order to reach one person - but it does still represent a challenge.

Now, the really insidious part about all this is that communication is really a good thing, a necessary thing. In a large scale organization, no one person can know everything, so communication is unavoidable. Meetings and phone calls can be indispensable sources of information and enablers of collaboration. The trick is to do this sort of thing in a way that interrupts as few people as possible. In some cases, this will be impossible. For example, urgency often forces disruptive communication (because you cannot afford to wait for an answer, you will need to be more intrusive). In other cases, there are ways to minimize the impact of frequent communication.

One way to minimize communication is to have frequently requested information documented in a common repository, so that if someone has a question, they can find it there instead of interrupting you (and potentially those around you). Naturally, this isn't quite as effective as we'd like, mostly because documenting information is a difficult and time consuming task in itself and one that often gets left out due to busy schedules and tight timelines. It turns out that documentation is hard! A while ago, Shamus wrote a terrific rant about technical documentation:
The stereotype is that technical people are bad at writing documentation. Technical people are supposedly inept at organizing information, bad at translating technical concepts into plain English, and useless at intuiting what the audience needs to know. There is a reason for this stereotype. It’s completely true.
I don't think it's quite as bad as Shamus points out, mostly because I think that most people suffer from the same issues as technical people. Technology tends to be complex and difficult to explain in the first place, so it's just more obvious there. Technology is also incredibly useful because it abstracts many difficult tasks, often through the use of metaphors. But when a user experiences the inevitable metaphor shear, they have to confront how the system really works, not the easy abstraction they've been using. This descent into technical details will almost always be a painful one, no matter how well documented something is, which is part of why documentation gets short shrift. I think the fact that there actually is documentation is usually a rather good sign. Then again, lots of things aren't documented at all.

There are numerous challenges for a documentation system. It takes resources, time, and motivation to write. It can become stale and inaccurate (sometimes this can happen very quickly) and thus it requires a good amount of maintenance (this can involve numerous other topics, such as version histories, automated alert systems, etc...). It has to be stored somewhere, and thus people have to know where and how to find it. And finally, the system for building, storing, maintaining, and using documentation has to be easy to learn and easy to use. This sounds all well and good, but in practice, it's a nonesuch beast. I don't want to get too carried away talking about documentation, so I'll leave it at that (if you're still interested, that nonesuch beast article is quite good). Ultimately, documentation is a good thing, but it's obviously not the only way to minimize communication strain.

I've previously mentioned that computer programming is one of those tasks that require a lot of concentration. As such, most programmers abhor interruptions. Interestingly, communication technology has been becoming more and more reliant on software. As such, it should be no surprise that a lot of new tools for communication are asynchronous, meaning that the exchange of information happens at each participant's own convenience. Email, for example, is asynchronous. You send an email to me. I choose when I want to review my messages and I also choose when I want to respond. Theoretically, email does not interrupt me (unless I use automated alerts for new email, such as the default Outlook behavior) and thus I can continue to work, uninterrupted.

The aforementioned documentation system is also a form of asynchronous communication, and indeed, most of the internet itself could be considered a form of documentation. Even the communication tools used on the web are mostly asynchronous. Twitter, Facebook, YouTube, Flickr, blogs, message boards/forums, RSS and aggregators are all reliant on asynchronous communication. Mobile phones are obviously very popular, but I bet that SMS texting (which is asynchronous) is used just as much as voice, if not more so (at least among younger people). The only major communication tools invented in the past few decades that aren't asynchronous are instant messaging and chat clients. And even those systems are often used in a more asynchronous way than traditional speech or conversation. (I suppose web conferencing is a relatively new communication tool, though it's really just an extension of conference calls.)

The benefit of asynchronous communication is, of course, that it doesn't (or at least it shouldn't) represent an interruption. If you're immersed in a particular task, you don't have to stop what you're doing to respond to an incoming communication request. You can deal with it at your own convenience. Furthermore, such correspondence (even in a supposedly short-lived medium like email) is usually stored for later reference. Such records are certainly valuable resources. Unfortunately, asynchronous communication has its own set of difficulties as well.

Miscommunication is certainly a danger in any case, but it seems more prominent in the world of asynchronous communication. Since there is no easy back-and-forth in such a method, there is no room for clarification and one is often left only with their own interpretation. Miscommunication is doubly challenging because it creates an ongoing problem. What could have been a single conversation has now ballooned into several asynchronous touch-points and even the potential for wasted work.

One of my favorite quotations is from Anne Morrow Lindbergh:
To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!
It's difficult to beat the endless nuance of face-to-face communication, and for some discussions, nothing else will do. But as Lindbergh notes, communication is, in itself, a difficult proposition. Difficult, but necessary. About the best we can do is to attempt to minimize the misunderstanding.

I suppose one way to mitigate the possibility of miscommunication is to formalize the language in which the discussion is happening. This is easier said than done, as our friends in the legal department would no doubt say. Take a close look at a formal legal contract and you can clearly see the flaws in formal language. They are ostensibly written in English, but they require a lot of effort to compose or to read. Even then, opportunities for miscommunication or loopholes exist. Such a process makes sense when dealing with two separate organizations that each have their own agenda. But for internal collaboration purposes, such a formalization of communication would be disastrous.

You could consider computer languages a form of formal communication, but for most practical purposes, this would also fall short of a meaningful method of communication. At least, with other humans. The point of a computer language is to convert human thought into computational instructions that can be carried out in an almost mechanical fashion. While such a language is indeed very formal, it is also tedious, unintuitive, and difficult to compose and read. Our brains just don't work like that. Not to mention the fact that most of the communication efforts I'm talking about are the precursors to the writing of a computer program!

Despite all of this, a light formalization can be helpful, and the fact that teams must produce important documentation practically demands a compromise between informal and formal methods of communication. In requirements specifications, for instance, I have found it quite beneficial to formally define the various systems, acronyms, and other jargon referenced later in the document. This allows for a certain consistency within the document itself, and it also helps establish guidelines for meaningful dialogue outside of the document. Of course, it wouldn't quite be up to legal standards and it would certainly lack the rigid syntax of computer languages, but it can still be helpful.

I am not an expert in linguistics, but it seems to me that spoken language is much richer and more complex than written language. Spoken language features numerous intricacies and tonal subtleties, such as inflections and pauses. Indeed, spoken language often follows its own set of grammatical patterns, which can differ from those of written language. Furthermore, face-to-face communication also consists of body language and other signals that can influence the meaning of what is said, depending on the context in which it is spoken. This sort of nuance just isn't possible in written form.

This actually illustrates a wider problem. Again, I'm no linguist and haven't spent a ton of time examining the origins of language, but it seems to me that language emerged as a more immediate form of communication than what we use it for today. In other words, language was meant to be ephemeral, but with the advent of written language and improved technological means for recording communication (which are, historically, relatively recent developments), we're treating it differently. What was meant to be short-lived and transitory is now enduring and long-lived. As a result, we get things like the ever-changing concept of political correctness. Or, more relevant to this discussion, we get the aforementioned compromise between formal and informal language.

Another drawback to asynchronous communication is the propensity for over-communication. The CC field in an email can be a dangerous thing. It's very easy to broadcast your work out to many people, but the more this happens, the more difficult it becomes to keep track of all the incoming stimuli. Also, the language used in such a communication may be optimized for one type of reader, while the audience may be more general. This applies to other asynchronous methods as well. Documentation in a wiki is infamously difficult to categorize and find later. When you have an army of volunteers (as Wikipedia does), it's not as large a problem. But most organizations don't have such luxuries. Indeed, we're usually lucky if something is documented at all, let alone well organized and optimized.

The obvious question, which I've skipped over for most of this post (and, for that matter, the previous post), is: why communicate in the first place? If there are so many difficulties that arise out of communication, why not minimize such frivolities so that we can get something done?

Indeed, many of the greatest works in history were created by one mind. Sometimes two. If I were to ask you to name the greatest inventor of all time, what would you say? Leonardo da Vinci, or perhaps Thomas Edison. Both had workshops consisting of many helping hands, but their greatest ideas and conceptual integrity came from one man. Great works of literature? Shakespeare is the clear choice. Music? Bach, Mozart, Beethoven. Painting? da Vinci (again!), Rembrandt, Michelangelo. All individuals! There are collaborations as well, but usually only between two people: the Wright brothers, Gilbert and Sullivan, and so on.

So why have design and invention gone from solo efforts to group efforts? Why do we know the names of most of the inventors of 19th and early 20th century innovations, but not of later achievements? For instance, who designed the Saturn V rocket? No one person did; it was a large team of people (and the rocket was the culmination of numerous predecessors built by other teams). Why is that?

The biggest and most obvious answer is the increasing technological sophistication in nearly every area of engineering. The infamous Lazarus Long adage that "specialization is for insects" notwithstanding, the amount of effort and specialization in various fields is astounding. Take a relatively narrow branch of mechanical engineering like fluid dynamics, and you'll find people devoting most of their lives to the study of that field. Furthermore, the applications of that field go far beyond what we'd assume. Someone tinkering in their garage couldn't make the Saturn V alone. They'd require too much expertise in a wide and disparate array of fields.

This isn't to say that someone tinkering in their garage can't create something wonderful. Indeed, that's where the first personal computer came from! And we certainly know the names of many innovators today. Mark Zuckerberg and Larry Page/Sergey Brin immediately come to mind... but even their inventions spawned large companies with massive teams driving future innovation and optimization. It turns out that scaling a product up often takes more effort and more people than expected. (More information about the pros and cons of moving to a collaborative structure will have to wait for a separate post.)

And with more people comes more communication. It's a necessity. You cannot collaborate without large amounts of communication. Tom DeMarco and Timothy Lister call this the High-Tech Illusion in their book Peopleware:
...the widely held conviction among people who deal with any aspect of new technology (as who of us does not?) that they are in an intrinsically high-tech business. ... The researchers who made fundamental breakthroughs in those areas are in a high-tech business. The rest of us are appliers of their work. We use computers and other new technology components to develop our products or to organize our affairs. Because we go about this work in teams and projects and other tightly knit working groups, we are mostly in the human communication business. Our successes stem from good human interactions by all participants in the effort, and our failures stem from poor human interactions.
(Emphasis mine.) That insight is part of what initially inspired this series of posts. It's very astute: most organizations work along those lines, and thus need to figure out ways to account for the additional costs of communication (this is particularly daunting, as such things are notoriously difficult to measure, but I'm getting ahead of myself). I suppose you could argue that both of these posts are somewhat inconclusive. Some of that is because they are part of a larger series, but also, as I've been known to say, human beings don't so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Recognizing and acknowledging the problems introduced by collaboration and communication is vital to working on any large project. As I mentioned towards the beginning of this post, this only really scratches the surface of the subject of communication, but for the purposes of this series, I think I've blathered on long enough. My next topic in this series will probably cover the various difficulties of providing estimates. I'm hoping the groundwork laid in these first two posts will mean that the next post won't be quite so long, but you never know!
Posted by Mark on May 22, 2011 at 07:51 PM .: link :.


End of This Day's Posts

Sunday, April 03, 2011

Unnecessary Gadgets
So the NY Times has an article debating the necessity of various gadgets. The argument is that we're seeing a lot of convergence in tech devices, and that many technologies that once warranted a dedicated device are now covered by something else. Let's take a look at their devices, what they said, and what I think:
  • Desktop Computer - NYT says to chuck it in favor of laptops. I'm a little more skeptical. Laptops are certainly better now than they've ever been, but I've been hearing about desktop-killers for decades now and I'm not even that old (ditto for thin clients, though the newest hype around the "cloud" computing thing is slightly more appealing - but even that won't supplant desktops entirely). I think desktops will be here to stay. I've got a fair amount of experience with both personal and work laptops, and I have to say that they're both inferior to desktops. This is fine when I need to use the portability, but that's not often enough to justify some of the pain of using laptops. For instance, I'm not sure what kinda graphics capabilities my work laptop has, but it really can't handle my dual-monitor setup, and even on one monitor, the display is definitely crappier than my old desktop (and that thing was ancient). I do think we're going to see some fundamental changes in the desktop/laptop/smartphone realm. The three form factors are all fundamentally useful in their own way, but I'd still expect some sort of convergence in the next decade or so. I'm expecting that smartphones will become ubiquitous, and perhaps become some sort of portable profile that you could use across your various devices. That's a more long term thing though.
  • High Speed Internet at Home - NYT says to keep it, and I agree. Until we can get a real 4G network (i.e. not the slightly enhanced 3G stuff the current telecom companies are peddling), there's no real question here.
  • Cable TV - NYT plays the "maybe" card on this one, but I think I can go along with that. It all depends on whether you watch TV or not (and/or whether you enjoy live TV, like sporting events). I'm on the fence with this one myself. I have cable, and a DVR does make dealing with broadcast television much easier, and I like the opportunities afforded by OnDemand, etc... But it is quite expensive. If I ever get into a situation where I need to start pinching pennies, cable is going to be among the first things to go.
  • Point and Shoot Camera - NYT says to lose it in favor of the smartphone, and I probably agree. Obviously there's still a market for dedicated high-end cameras, but the small point-and-shoot ones are quickly being outclassed by their fledgling smartphone siblings. My current iPhone camera is kinda crappy (2 MP, no flash), but even that works OK for my purposes. There are definitely times when I wish I had a flash or better quality, but they're relatively rare, and I've had this phone for like 3 years now (probably upgrading this summer). My next phone's camera will most likely meet all my photography needs.
  • Camcorder - NYT says to lose it, and that makes a sort of sense. As they say, camcorders are getting squeezed from both ends of the spectrum, with smartphones and cheap flip cameras on one end, and high end cameras on the other. I don't really know much about this though. I'm betting that camcorders will still be around, just not quite as popular as before.
  • USB Thumb Drive - NYT says lose it, and I think I agree, though not necessarily for the same reasons. They think that the internet means you don't need to use physical media to transfer data anymore. I suppose there's something to that, but my guess is that Smartphones could easily pick up the slack and allow for portable data without a dedicated device. That being said, I've used a thumb drive, like, 3 times in my life.
  • Digital Music Player - NYT says ditch it in favor of smartphones, with the added caveat that people who exercise a lot might like a smaller, dedicated device. I can see that, but on a personal level, I have both and don't mind it at all. I don't like using up my phone battery playing music, and I honestly don't really like the iPhone music player interface, so I actually have a regular old iPod nano for music and podcasts (also, I like to have manual control over what music/podcasts get on my device, and that's weird on the iPhone - at least, it used to be). My setup works fine for me most times, and in an emergency, I do have music (and a couple movies) on my iPhone, so I could make do.
  • Alarm Clock - NYT says keep it, though I'm not entirely convinced. Then again, I have an alarm clock, so I can't mount much of an offense against it. I've realized, though, that the grand majority of clocks I use in my house (cable box, computers, phone) are automatically updated and synced with some external source (no worrying about DST, etc...). My alarm clock isn't, though. I still use my phone as a failsafe for when I know I need to get up early, but that's more based on the possibility of snoozing myself into oblivion (I can easily snooze for well over an hour). I think I may actually end up replacing my clock, but I can see some young whippersnappers relying on some other device for their wakeup calls...
  • GPS Unit - NYT says lose it, and I agree. With the number of smartphone apps (excluding the ones that come with your phone, which are usually functional but still kinda clunky as a full GPS system) that are good at this sort of thing (and a lot cheaper), I can't see how anyone could really justify a dedicated device for this. On a recent trip, a friend used Navigon's Mobile Navigator ($30, and usable on any of his portable devices) and it worked like a charm. Just as good as any GPS I've ever used. The only problem, again, is that it will drain the phone battery (unless you plug it in, which we did).
  • Books - NYT says to keep them, and I mostly agree. The only time I can see really wanting to use a dedicated eReader is when travelling, and even then, I'd want it to be a broad device, not dedicated to books. I have considered the Kindle (as it comes down in price), but for now, I'm holding out for a tablet device that will actually have a good enough screen for this sort of thing. Which, I understand, isn't too far off on the horizon. There are a couple of other nice things about digital books though, namely, the ability to easily mark favorite passages, or to do a search (two things that would probably save me a lot of time). I can't see books ever going away, but I can see digital readers being a part of my life too.
A lot of these made me think of Neal Stephenson's System of the World. In that book, one of the characters ponders how new systems supplant older systems:
"It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently ... have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. ... And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher's Stone." (page 639)
That sort of "surround and encapsulate" concept seems broadly applicable to a lot of technology, actually.
Posted by Mark on April 03, 2011 at 07:42 PM .: link :.


End of This Day's Posts

Wednesday, March 30, 2011

Artificial Memory
Nicholas Carr cracks me up. He's a skeptic of technology, and in particular, the internet. He's the guy who wrote the wonderfully divisive article, Is Google Making Us Stupid? The funny thing about all this is that he seems to have gained the most traction on the very platform he criticizes so much. Ultimately, though, I think he does have valuable insights and, if nothing else, he raises very interesting questions about the impacts of technology on our lives. He makes an interesting counterweight to the techno-geeks who are busy preaching about transhumanism and the singularity. Of course, in a very real sense, his opposition dooms him to suffer from the same problems as those he criticizes. Google and the internet may not be a direct line to godhood, but they don't represent a descent into hell either. Still, reading some Carr is probably a good way to put techno-evangelism into perspective and perhaps reach some sort of Hegelian synthesis of what's really going on.

Otakun recently pointed to an excerpt from Carr's latest book. The general point of the article is to examine how human memory is being conflated with computer memory, and whether or not that makes sense:
...by the middle of the twentieth century memorization itself had begun to fall from favor. Progressive educators banished the practice from classrooms, dismissing it as a vestige of a less enlightened time. What had long been viewed as a stimulus for personal insight and creativity came to be seen as a barrier to imagination and then simply as a waste of mental energy. The introduction of new storage and recording media throughout the last century—audiotapes, videotapes, microfilm and microfiche, photocopiers, calculators, computer drives—greatly expanded the scope and availability of “artificial memory.” Committing information to one’s own mind seemed ever less essential. The arrival of the limitless and easily searchable data banks of the Internet brought a further shift, not just in the way we view memorization but in the way we view memory itself. The Net quickly came to be seen as a replacement for, rather than just a supplement to, personal memory. Today, people routinely talk about artificial memory as though it’s indistinguishable from biological memory.
While Carr is perhaps more blunt than I would be, I have to admit that I agree with a lot of what he's saying here. We often hear about how modern education is improved by focusing on things like "thinking skills" and "problem solving", but the big problem with emphasizing that sort of work ahead of memorization is that the analysis needed for such processes requires a base level of knowledge in order to be effective. This is something I've expounded on at length in a previous post, so I won't rehash it here.

The interesting thing about the internet is that it enables you to get to a certain base level of knowledge and competence very quickly. This doesn't come without its own set of challenges, and I'm sure Carr would be quick to point out that such a crash course can yield a false sense of security in us hapless internet users. After all, how do we know when we've reached that base level of competence? Our incompetence could very well mask our ability to recognize that incompetence. However, I don't think that's an insurmountable problem. Most of us who use the internet a lot view it as something of a low-trust environment, which can, ironically, lead to a better result. On a personal level, I find that what the internet really helps with is determining just how much I don't know about a subject. That might seem like a silly thing to say, but even recognizing that your unknown unknowns are large can be helpful.

Some other assorted thoughts about Carr's excerpt:
  • I love the concept of a "commonplace book" and immediately started thinking of how I could implement one... which is when I realized that I've actually been keeping one, more or less, for the past 10 or so years on this blog. That being said, it's something I wouldn't mind becoming more organized about, and I've got some interesting ideas about what my personal take on a commonplace would look like.
  • Carr insists that the metaphor that portrays the brain as a computer is wrong. It's a metaphor I've certainly used in the past, though I think what I find most interesting about it is how different computers and brains really are. The problem with the metaphor is that our brains work nothing even remotely like the way our current computers actually work. However, many of the concepts of computer science and engineering can be useful in helping to model how the brain works. I'm certainly not an expert on the subject, but for example: you could model the brain as a binary computer because our neurons are technically binary. However, our neurons don't just turn on or off; they pulse, and things like frequency and duration can yield dramatically different results (see the toy sketch after this list). Not to mention the fact that the brain seems to be a massively parallel computing device, as opposed to the mostly serial electronic tools we use. That is, of course, a drastic simplification, but you get the point. The metaphor is flawed, as all metaphors are, but it can also be useful.
  • One thing that Carr doesn't really get into (though he may cover this in a later chapter) is how notoriously unreliable human memory actually is. Numerous psychological studies show just how impressionable and faulty our memory of an event can be. This doesn't mean we should abandon our biological memory, just that having an external, artificial memory of an event (i.e. some sort of recording) can be useful in helping to identify and shape our perceptions.
  • Of course, even recordings can yield a false sense of truth, so things like Visual Literacy are still quite important. And again, we cannot analyze said recordings accurately without a certain base set of knowledge about what we're looking at - this is another concept that has been showing up on this blog for a while now as well: Exformation.
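To make the pulsing point from the list above a little more concrete, here's a toy Python sketch (an illustration, not real neuroscience): the output alphabet is strictly binary, yet the message depends entirely on firing frequency.

    def spike_train(rate_hz, duration_s, dt=0.001):
        # Emit a 1 at evenly spaced intervals set by the firing rate, 0
        # otherwise. The output is "binary", but the information lives in
        # the frequency of the pulses, not in any single on/off value.
        steps = int(round(duration_s / dt))
        period = int(round(1.0 / (rate_hz * dt)))  # timesteps between spikes
        return [1 if t % period == 0 else 0 for t in range(steps)]

    lazy = spike_train(rate_hz=5, duration_s=1.0)
    eager = spike_train(rate_hz=50, duration_s=1.0)
    print(sum(lazy), sum(eager))  # same binary alphabet, different messages: 5 vs 50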
And that's probably enough babbling about Carr's essay. I generally disagree with the guy, but on this particular subject, I think we're more in agreement.
Posted by Mark on March 30, 2011 at 06:06 PM .: link :.


End of This Day's Posts

Wednesday, December 01, 2010

Opera 11 Beta
I'm one of the few people that actually uses Opera to do the grand majority of my web browsing. In recent years, I've been using Firefox more, especially for web development purposes (it's hard to beat the Firebug/Web Dev Toolbar combo - Opera has a tool called Dragonfly that's decent, but not quite as good). A few years ago, I wrote a comparison of Firefox and Opera across 8 categories, and it came out a tie. The biggest advantage that Opera had was its usability and ease of use. On the other hand, Firefox's strength was its extensibility, something that Opera never fully embraced. Until now!

Opera recently released a beta of their next version, and I've been using it this week. It's looking like an excellent browser, with some big improvements over previous versions:
  • Extensions - Opera has finally taken the plunge. Extensions have only been available for a few days, so there isn't quite the extensive library that Firefox has, and given the smaller user base and Firefox's head start, I'm not sure Opera will be able to catch up anytime soon. That being said, it's a welcome addition, and when combined with Opera's superior native features, perhaps this will even the score a bit. Extensions also represent an interesting dilemma for Opera - will they turn the most popular extensions into native features? One issue with extensions is that they can be somewhat unreliable and yield poor performance (for instance, the various mouse gesture extensions for Firefox can't hold a candle to Opera's native functionality). That was always Opera's worry about extensions, so I'm betting we will see extensions rolled into the native app in future versions.
  • Performance and Speed - Opera 11 is noticeably faster than its predecessors (no small feat, as Opera has always been good in this respect) and probably its competition too. Of course, I'm going on a purely subjective observation here and I'm obviously biased, but it seems faster than Firefox as well. It's probably on par with Chrome, but Opera has certainly closed the gap (especially on javascript-heavy pages, which is what Chrome excels at). Once this browser is out of beta, I'd be really interested in seeing how it stacks up. Somewhat related is improved support for various standards, notably HTML5, so there's that too.
  • Tab stacking - Opera was the first browser with tabs, and now they're making small, incremental improvements. In this case, it's the ability to group a bunch of tabs together and expand or collapse the group. I haven't actually used this feature much, but I can imagine scenarios where I'd have dozens of tabs open and grouping them might be helpful. This also makes their mouseover tab preview more meaningful: mousing over a collapsed group shows a preview of all the tabs in it, a feature that was only marginally useful (if not a complete waste) on regular tabs, but which works well here. On the other hand, I'm not sure the trouble of grouping and maintaining the tab stacks would ultimately save time (though perhaps future iterations will come up with smarter methods of automatically grouping tabs - an approach that could be problematic, but which could also be beneficial if implemented well).
  • Search predictions from Google - This is minor, but just another "We're catching up to Firefox functionality" addition, and a welcome one.
There are some other things, but the above are the best additions. Some of the other stuff is a bit extraneous (in particular, the visual mouse gestures are unnecessary, though they don't seem to hurt anything either), and some of it won't matter to most folks (the email client). I've run into some buggy behavior, but nothing unusual, and it actually seems pretty stable for a beta. So I'm looking forward to the final release of this browser.
Posted by Mark on December 01, 2010 at 08:30 PM .: link :.


End of This Day's Posts

Wednesday, November 17, 2010

Link Dump
A few interesting links from the depths of teh interwebs:
  • Singel-Minded: How Facebook Could Beat Google to Win the Net - Wired's Ryan Singel makes an interesting case for Facebook to challenge Google in the realm of advertising. Right now, Facebook only advertises on its own site (in a small, relatively tasteful fashion), but it's only a matter of time until it makes the same move Google did with AdSense. And its advantage there is that Facebook has much more usable data about people than Google. The operative word is "usable": Google certainly has lots of data about its users, but it seems Google's mantra of "Don't be evil" will come back to bite them in the ass. Google has promised not to use search history, private emails, etc... to help target ads. Facebook has no such restrictions, and the ads on its site seem to be more targeted (they've recently been trying to get me to buy Neal Stephenson audiobooks, which would be a pretty good bet for them... if I hadn't already read everything that guy's written). This got me wondering: is targeted advertising the future, and will people be OK with that? Everyone hates commercials, but would they hate them if the ads were for things you wanted? Obviously privacy is a concern... or is it? It's not like Facebook has been immaculate in the area of privacy, and yet it's as popular as it ever was. I don't necessarily see it as a good thing, but it will probably happen, and somehow I doubt Google will take it for long without figuring out a way to leverage all that data they've been collecting...
  • If We Don't, Remember Me: Animated gifs have long been a staple of the web and while they're not normally a bastion of subtlety, this site is. They all seem to be from good movies, and I think this one is my favorite. (via kottke)
  • The Tall Man Reunites With Don Coscarelli for John Dies at the End: I posted about this movie back in 2008, then promptly forgot about it. I just assumed that it was one of those projects that would never really get off the ground (folks in Hollywood often purchase the rights for something, even when they don't necessarily have any plans to make it) or that Coscarelli was focusing on one of his other projects (i.e. the long-rumored sequel to Bubba Ho-Tep, titled Bubba Nosferatu: Curse of the She-Vampires). But it appears that things are actually moving on JDatE, and some casting was recently announced, including long-time Coscarelli collaborator Angus Scrimm (who played the infamous Tall Man in the Phantasm films), Paul Giamatti and Clancy Brown. This is all well and good, but at the same time - I have no idea what roles any of these folks will play. None seem like the two leads (David and the titular John). Nevertheless, here's to hoping we see some new Coscarelli soon. I think his sensibility would match rather well with David Wong (né Jason Pargin). (Update: Quint over at AiCN has more on the casting and who's playing what)
  • Curtis Got Slapped by a White Teacher!: Words cannot describe this 40-page document (which is, itself, composed mostly of words, but whatever). It's... breathtaking.
That's all for now.
Posted by Mark on November 17, 2010 at 09:16 PM .: link :.


End of This Day's Posts

Wednesday, August 04, 2010

A/B Testing Spaghetti Sauce
Earlier this week I was perusing some TED Talks and ran across this old (and apparently popular) presentation by Malcolm Gladwell. It struck me as particularly relevant to several topics I've explored on this blog, including Sunday's post on the merits of A/B testing. In the video, Gladwell explains why there are a billion different varieties of spaghetti sauce at most supermarkets.
Again, this video touches on several topics explored on this blog in the past. For instance, it describes the origins of what's become known as the Paradox of Choice (or, as some would have you believe, the Paradise of Choice) - indeed, there's another TED talk linked right off the Gladwell video that covers that topic in detail.

The key insight Gladwell discusses in his video is basically the destruction of the Platonic Ideal (I'll summarize in this paragraph in case you didn't watch the video, which covers the topic in much more depth). He talks about Howard Moskowitz, who was a market research consultant with various food industry companies that were attempting to optimize their products. After conducting lots of market research and puzzling over the results, Moskowitz eventually came to a startling conclusion: there is no perfect product, only perfect products. Moskowitz made his name working with spaghetti sauce. Prego had hired him in order to find the perfect spaghetti sauce (so that they could compete with rival company, Ragu). Moskowitz developed dozens of prototype sauces and went on the road, testing each variety with all sorts of people. What he found was that there was no single perfect spaghetti sauce, but there were basically three types of sauce that people responded to in roughly equal proportion: standard, spicy, and chunky. At the time, there were no chunky spaghetti sauces on the market, so when Prego released their chunky spaghetti sauce, their sales skyrocketed. A full third of the market was underserved, and Prego filled that need.
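As an aside, Moskowitz's insight maps neatly onto what we'd now call clustering: instead of averaging everyone's ratings into one mediocre "ideal" sauce, you look for distinct camps in the data. A toy Python sketch (this assumes scikit-learn is available, and the ratings are invented for illustration):

    from sklearn.cluster import KMeans

    # Each row is one taster's ratings (sweetness, spiciness, chunkiness).
    ratings = [
        [7, 2, 1], [8, 1, 2], [7, 3, 1],  # plain-sauce fans
        [2, 8, 1], [3, 9, 2], [2, 7, 1],  # spicy fans
        [3, 2, 9], [2, 1, 8], [4, 2, 9],  # chunky fans
    ]

    # Averaging all rows yields one sauce nobody loves; clustering recovers
    # the three camps hiding in the same data - three products, not one.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ratings)
    print(labels)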

Decades later, this is hardly news to us, and the trend has spread from the supermarket into all sorts of other arenas. In entertainment, for example, we're seeing a move towards niches. The era of huge blockbuster bands like The Beatles is coming to an end. Of course, there will always be blockbusters, but the really interesting stuff is happening in the niches. This is, in part, due to technology. Once you can fit 30,000 songs onto an iPod and you can download "free" music all over the internet, it becomes much easier to find music that fits your tastes. Indeed, this becomes a part of peoples' identity. Instead of listening to the mass-produced stuff, they listen to something a little odd, and it becomes an expression of their personality. You can see evidence of this everywhere, and the internet is a huge enabler in this respect. The internet is the land of niches. Click around for a few minutes and you can easily find absurdly specific, single-topic niche websites like this one, where every post features animals wielding lightsabers, or this other one that's all about Flaming Garbage Cans In Hip Hop Videos (there are thousands, if not millions, of these types of sites). The internet is the ultimate paradox of choice, and you're free to explore almost anything you desire, no matter how odd or obscure it may be (see also, Rule 34).

In relation to Sunday's post on A/B testing, the lesson here is that A/B testing is an optimization tool that allows you to see how various segments respond to different versions of something. In that post, I used an example where an internet retailer was attempting to find the ideal imagery to sell a diamond ring. A common debate in the retail world is whether that image should just show a closeup of the product, or whether it should show a model wearing it. One way to solve that problem is to A/B test it: create both versions of the image, segment visitors to your site, and track the results.

As discussed Sunday, there are a number of challenges with this approach, but one thing I didn't mention is the unspoken assumption that there actually is an ideal image. In reality, there are probably some people that prefer the closeup and some people who prefer the model shot. An A/B test will tell you what the majority of people like, but wouldn't it be even better if you could personalize the imagery used on the site depending on what customers like? Show the type of image people prefer, and instead of catering to the most popular segment of customer, you cater to all customers (the simple diamond ring example begins to break down at this point, but more complex or subtle tests could still show significant results when personalized). Of course, this is easier said than done - just ask Amazon, who does CRM and personalization as well as any retailer on the web, and yet manages to alienate a large portion of their customers every day! Interestingly, this really just shifts the purpose of A/B testing from one of finding the platonic ideal to finding a set of ideals that can be applied to various customer segments. Once again we run up against the need for more and better data aggregation and analysis techniques. Progress is being made, but I'm not sure what the endgame looks like here. I suppose time will tell. For now, I'm just happy that Amazon's recommendations aren't completely absurd for me at this point (which I find rather amazing, considering where they were a few years ago).
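To make that "set of ideals" idea concrete: once you break test results out by customer segment, personalization just means picking a winner per segment instead of a single global winner. A quick sketch with made-up numbers:

    # Hypothetical conversion rates from the ring test, split by segment.
    by_segment = {
        "returning": {"closeup": 0.031, "model_shot": 0.024},
        "new": {"closeup": 0.019, "model_shot": 0.027},
    }

    # A single global winner caters to the majority segment; a per-segment
    # winner caters to everybody. Same A/B data, different question.
    personalized = {segment: max(rates, key=rates.get)
                    for segment, rates in by_segment.items()}
    print(personalized)  # {'returning': 'closeup', 'new': 'model_shot'}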
Posted by Mark on August 04, 2010 at 07:54 PM .: link :.


End of This Day's Posts

Sunday, August 01, 2010

Groundhog Day and A/B Testing
Jeff Atwood recently made a fascinating observation about the similarities between the classic film Groundhog Day and A/B Testing.

In case you've only recently emerged from a hermit-like existence, Groundhog Day is a film about Phil (played by Bill Murray). It seems that Phil has been doomed (or is it blessed?) to live the same day over and over again. It doesn't seem to matter what he does during the day; he always wakes up at 6 am on Groundhog Day. In the film, we see the same day repeated over and over again, but only in bits and pieces (usually skipping repetitive parts). The director of the film, Harold Ramis, believes that by the end of the film, Phil has spent the equivalent of about 30 or 40 years reliving that same day.

Towards the beginning of the film, Phil does a lot of experimentation, and Atwood's observation is that this often takes the form of an A/B test. This is a concept that is perhaps a little more esoteric, but the principles are easy. Let's take a simple example from the world of retail. You want to sell a new ring on a website. What should the main image look like? For simplification purposes, let's say you narrow it down to two different concepts: one, a closeup of the ring all by itself, and the other a shot of a model wearing the ring. Which image do you use? We could speculate on the subject for hours and even rationalize some pretty convincing arguments one way or the other, but it's ultimately not up to us - in retail, it's all about the customer. You could "test" the concept in a serial fashion, but ultimately the two sets of results would not be comparable. The ring is new, so whichever image is used first would get an unfair advantage, and so on. The solution is to show both images during the same timeframe. You do this by splitting your visitors into two segments (A and B), showing each segment a different version of the image, and then tracking the results. If the two images do, in fact, cause different outcomes, and if you get enough people to look at the images, it should come out in the data.
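Since we're talking mechanics: the entire procedure above fits in a few lines of Python. This is just a minimal sketch (the variant names are the hypothetical ring images, and a real system would need persistence, logging, etc...):

    import hashlib

    VARIANTS = ["closeup", "model_shot"]
    results = {v: {"views": 0, "buys": 0} for v in VARIANTS}

    def assign_variant(visitor_id):
        # Hash the visitor ID so each person lands in a random but stable
        # bucket - they see the same image on every visit.
        digest = hashlib.md5(visitor_id.encode()).hexdigest()
        return VARIANTS[int(digest, 16) % len(VARIANTS)]

    def record_view(visitor_id):
        results[assign_variant(visitor_id)]["views"] += 1

    def record_purchase(visitor_id):
        results[assign_variant(visitor_id)]["buys"] += 1

    # With enough traffic, a real difference between the two images shows
    # up as a difference in buys/views between the two buckets.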

This is what Phil does in Groundhog Day. For instance, Phil falls in love with Rita (played by Andie MacDowell) and spends what seems like months compiling lists of what she likes and doesn't like, so that he can construct the perfect relationship with her.
Phil doesn't just go on one date with Rita, he goes on thousands of dates. During each date, he makes note of what she likes and responds to, and drops everything she doesn't. At the end he arrives at -- quite literally -- the perfect date. Everything that happens is the most ideal, most desirable version of all possible outcomes on that date on that particular day. Such are the luxuries afforded to a man repeating the same day forever.

This is the purest form of A/B testing imaginable. Given two choices, pick the one that "wins", and keep repeating this ad infinitum until you arrive at the ultimate, most scientifically desirable choice.
As Atwood notes, the interesting thing about this process is that even once Phil has constructed that perfect date, Rita still rejects Phil. From this example and presumably from experience with A/B testing, Atwood concludes that A/B testing is empty and that subjects can often sense a lack of sincerity behind the A/B test.

It's an interesting point, but I'm not sure it's entirely applicable in all situations. Of course, Atwood admits that A/B testing is good at smoothing out details, but there's something more at work in Groundhog Day that Atwood doesn't mention: namely, that Phil is using A/B testing to misrepresent himself as the ideal mate for Rita. Yes, he's done the experimentation to figure out what "works" and what doesn't, but his initial testing was ultimately shallow. Rita didn't reject him because he had all the right answers; she rejected him because he was attempting to deceive her. He was misrepresenting himself, and that certainly can lead to a feeling of emptiness.

If you look back at my example above about the ring being sold on a retail website, you'll note that there's no deception going on there. Somehow I doubt either image would leave the customer feeling hollow. Why is this different from Groundhog Day? Because neither image misrepresents the product, and one would assume that the website is pretty clear about the fact that you can buy things there. Of course, there are a million different variables you could test (especially once you get into text and marketing hooks, etc...), and some of those could be more deceptive than others, but most of the time, deception is not the goal. There is a simple choice to be made: instead of constantly wondering about your product image and second-guessing yourself, why not A/B test it and see what customers like better?

There are tons of limitations to this approach, but I don't think it's as inherently flawed as Atwood seems to believe. Still, the data you get out of an A/B test isn't always conclusive, and even when it is, whatever learnings you get out of it aren't necessarily applicable in all situations. For instance, what works for our new ring can't necessarily be applied to all new rings (this is a problem for me, as my employer has a high turnover rate for products - as such, the simple example of the ring as described above would not be a good test for my company unless the ring were available for a very long time). Furthermore, while you can sometimes pick a winner, it's not always clear why it's a winner. This is especially the case when the differences between A and B are significant (for instance, testing an entirely redesigned page might yield results, but you will not know which of the changes to the page actually caused said results - on the other hand, A/B testing is really the only way to accurately calculate ROI on significant changes like that).
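As an aside on the "not always conclusive" point: the standard way to quantify conclusiveness is a significance test. Here's a rough sketch of a two-proportion z-test in Python, using made-up counts:

    import math

    def two_proportion_z(buys_a, views_a, buys_b, views_b):
        # How surprising is the observed gap if both variants actually
        # convert at the same underlying rate?
        p_a, p_b = buys_a / views_a, buys_b / views_b
        pooled = (buys_a + buys_b) / (views_a + views_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
        return (p_a - p_b) / se

    z = two_proportion_z(buys_a=120, views_a=5000, buys_b=150, views_b=5000)
    print(round(z, 2))  # about -1.85, shy of the usual 1.96 cutoff: inconclusive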

Obviously these limitations should be taken into account when conducting an A/B test, and I think what Phil runs into in Groundhog Day is a lack of conclusive data. One of the problems with interpreting inconclusive data is that it can be very tempting to rationalize it. Phil's initial attempts to craft the perfect date for Rita fail because he's really only scraping the surface of her needs and desires. In other words, he's testing the wrong thing, misunderstanding the data, and thus getting inconclusive results.

The interesting thing about the Groundhog Day example is that, in the end, the movie is not a condemnation of A/B testing at all. Phil ultimately does manage to win the affections of Rita. Of course, it took him decades to do so, and that's worth taking into account. Perhaps what the film is really saying is that A/B testing is often more complicated than it seems, and that the results you get depend on what you put into it. A/B testing is not the easy answer it's often portrayed as, and it should not be the only tool in your toolbox (i.e. forcing employees to prove that using 3, 4 or 5 pixels for a border is ideal is probably going a bit too far), but neither is it as empty as Atwood seems to be indicating. (And we didn't even talk about multivariate tests! Let's get Christopher Nolan on that. He'd be great at that sort of movie, wouldn't he?)
Posted by Mark on August 01, 2010 at 09:57 PM .: link :.


End of This Day's Posts

Sunday, May 30, 2010

Predictions
Someone sent me a note about a post I wrote on the 4th Kingdom boards in 2005 (August 3, 2005, to be more precise). It was in response to a thread about technology and consumer electronics trends, and the original poster gave two examples that were exploding at the time: "camera phones and iPods." This is what I wrote in response:
Heh, I think the next big thing will be the iPod camera phone. Or, on a more general level, mp3 player phones. There are already some nifty looking mp3 phones, most notably the Sony/Ericsson "Walkman" branded phones (most of which are not available here just yet). Current models are all based on flash memory, but it can't be long before someone releases something with a small hard drive (a la the iPod). I suspect that, in about a year, I'll be able to hit 3 birds with one stone and buy a new cell phone with an mp3 player and digital camera.

As for other trends, as you mention, I think we're goint to see a lot of hoopla about the next gen gaming consoles. The new Xbox comes out in time for Xmas this year and the new Playstation 3 hits early next year. The new playstation will probably have blue-ray DVD capability, which brings up another coming tech trend: the high capacity DVD war! It seems that Sony may actually be able to pull this one out (unlike Betamax), but I guess we'll have to wait and see...
For an off-the-cuff informal response, I think I did pretty well. Of course, I still got a lot of the specifics wrong. For instance, I'm pretty clearly talking about the iPhone, though that would have to wait about 2 years before it became a reality. I also didn't anticipate the expansion of flash memory to more usable sizes and prices. Though I was clearly talking about a convergence device, I didn't really say anything about what we now call "apps".

In terms of game consoles, I didn't really say much. My first thought upon reading this post was that I had completely missed the boat on the Wii; however, it appears that the Wii's new controller scheme wasn't shown until September 2005 (about a month after my post). I did manage to predict a winner in the HD video war, though, even if I framed the prediction as a "high capacity DVD war" and spelled blu-ray wrong.

I'm not generally good at making predictions about this sort of thing, but it's nice to see when I do get things right. Of course, you could make the argument that I was just stating the obvious (which is basically what I did with my 2008 predictions). Then again, everything seems obvious in hindsight, so perhaps it is still a worthwhile exercise for me. If nothing else, it gets me to think in ways I'm not really used to... so here are a few predictions for the rest of this year:
  • Microsoft will release Natal this year, and it will be a massive failure. There will be a lot of neat talk about it and speculation about the future, but the fact is that gesture-based interfaces and voice controls aren't especially great. I'll bet everyone says they'd like to use the Minority Report interface... but once they get to use it, I doubt people would actually find it more useful than current input methods. If it does attain success, though, it will be because of the novelty of that sort of interaction. As a gaming platform, I think it will be a near total bust. The only way Microsoft will get Natal into homes is to bundle it with the Xbox 360 (without raising the price).
  • Speaking of which, I think Sony's Playstation Move platform will be mildly more successful than Natal, which is to say that it will also be a failure. I don't see anything in their initial slate of games that makes me even want to try it out. All that being said, the PS3 will continue to gain ground against the Xbox 360, though not so much that it will overtake the other console.
  • While I'm at it, I might as well go out on a limb and say that the Wii will clobber both the PS3 and the Xbox 360. As of right now, their year in games seems relatively tame, so I don't see the Wii producing favorable year over year numbers (especially since I don't think they'll be able to replicate the success of New Super Mario Brothers Wii, which is selling obscenely well, even to this day). The one wildcard on the Wii right now is the Vitality Sensor. If Nintendo is able to put out the right software for that and if they're able to market it well, it could be a massive, audience-shifting blue ocean win for them. Coming up with a good "relaxation" game and marketing it to the proper audience is one hell of a challenge though. On the other hand, if anyone can pull that off, it's Nintendo.
  • Sony will also release some sort of 3D gaming and movie functionality for the home. It will also be a failure. In general, I think attitudes towards 3D are declining. I think it will take a high profile failure to really temper Hollywood's enthusiasm (and even then, the "3D bump" of sales seems to outweigh the risk in most cases). Nevertheless, I don't think 3D is here to stay. The next major 3D revolution will be when it becomes possible to do it without glasses (which, at that point, might be a completely different technology like holograms or something).
  • At first, I was going to predict that Hollywood would see a dip in ticket sales, until I realized that Avatar was mostly a 2010 phenomenon, and that Alice in Wonderland has already made about $1 billion worldwide. Furthermore, this summer sees the release of The Twilight Saga: Eclipse, which could reach similar heights (for reference, New Moon did $700 million worldwide), and the next Harry Potter is coming in November (for reference, the last Potter film did around $930 million). Altogether, the film world seems to be doing well... in terms of sales. I have to say that from my perspective, things are not looking especially good when it comes to quality. I'm not as interested in seeing a lot of the movies released so far this year (an informal look at my past few years indicates that I've normally seen about twice as many movies as I have this year - though part of that is due to the move of the Philly film fest to October).
  • I suppose I should also make some Apple predictions. The iPhone will continue to grow at a fast rate, though its growth will be tempered by Android phones. Right now, both of them are eviscerating the rest of the phone market. Once that is complete, we'll be left with a few relatively equal players, and I think that will lead to good options for us consumers. The iPhone has been taken to task more and more for Apple's control-freakism, but it's interesting that Android's open features are going to present more and more of a challenge to that as time goes on. Most recently, Google announced that the latest version of Android would feature the ability for your 3G/4G phone to act as a WiFi hotspot, which will most likely force Apple to do the same (apparently if you want to do this today, you have to jailbreak your iPhone). I don't think this spells the end of the iPhone anytime soon, but it does mean that they have some legitimate competition (and that competition is already challenging Apple with its feature-set, which is promising).
  • The iPad will continue to have modest success. Apple may be able to convert that to a huge success if they are able to bring down the price and iron out some of the software kinks (like multi-tasking, etc... something we already know is coming). The iPad has the potential to destroy the netbook market. Again, the biggest obstacle at this point is the price.
  • The Republicans will win more seats in the 2010 elections than the Democrats. I haven't looked closely enough at the numbers to say whether or not they could take back either house of Congress (or both), but they will gain ground. This is not a statement of political preference either way for me, and my reasons for making this prediction are less about ideology than simple voter dissatisfaction. People aren't happy with the government, and that will manifest as votes against the incumbents. It's too far away from the 2012 elections to be sure, but I suspect Obama will hang on, if for no other reason than that he seems to be charismatic enough that people give him a pass on various mistakes or other bad news.
And I think that's good enough for now. In other news, I have started a couple of posts that are significantly more substantial than what I've been posting lately. Unfortunately, they're taking a while to produce, but at least there's some interesting stuff in the works.
Posted by Mark on May 30, 2010 at 09:00 PM .: link :.


End of This Day's Posts

Sunday, March 14, 2010

Remix Culture and Soviet Montage Theory
A video mashup of The Beastie Boys' popular and amusing Sabotage video with scenes from Battlestar Galactica has been making the rounds recently. It's well done, but a little on the disposable side of remix culture. The video led Sonny Bunch to question "remix culture":
It’s quite good. But, ultimately, what’s the point?

Leaving aside the questions of copyright and the rest: Seriously…what’s the point? Does this add anything to the culture? I won’t dispute that there’s some technical prowess in creating this mashup. But so what? What does it add to our understanding of the world, or our grasp of the problems that surround us? Anything? Nothing? Is it just “there” for us to have a chuckle with and move on? Is this the future of our entertainment?
These are good questions, and I'm not surprised that the BSG Sabotage video prompted them. The implication of Sonny's post is that he thinks the video is an unoriginal waste of talent (he may be playing a bit of devil's advocate here, but I'm willing to play along, because these are interesting questions and because it will give me a chance to pedantically lecture about film history later in this post!) In the comments, Julian Sanchez makes a good point (based on a video he produced earlier that was referenced by someone else in the comment thread), one I'll expand on later in this post:
First, the argument I’m making in that video is precisely that exclusive focus on the originality of the contribution misses the value in the activity itself. The vast majority of individual and collective cultural creation practiced by ordinary people is minimally “original” and unlikely to yield any final product of wide appeal or enduring value. I’m thinking of, e.g., people singing karaoke, playing in a garage band, drawing, building models, making silly YouTube videos, improvising freestyle poetry, whatever. What I’m positing is that there’s an intrinsic value to having a culture where people don’t simply get together to consume professionally produced songs and movies, but also routinely participate in cultural creation. And the value of that kind of cultural practice doesn’t depend on the stuff they create being particularly awe-inspiring.
To which Sonny responds:
I’m actually entirely with you on the skill that it takes to produce a video like the Brooklyn hipsters did — I have no talent for lighting, camera movements, etc. I know how hard it is to edit together something like that, let alone shoot it in an aesthetically pleasing manner. That’s one of the reasons I find the final product so depressing, however: An impressive amount of skill and talent has gone into creating something that is not just unoriginal but, in a way, anti-original. These are people who are so devoid of originality that they define themselves not only by copying a video that they’ve seen before but by copying the very personalities of characters that they’ve seen before.
Another good point, but I think Sonny is missing something here. The talents of the BSG Sabotage editor or the Brooklyn hipsters are certainly admirable, but while we can speculate, we don't necessarily know their motivations. About 10 years ago, a friend and amateur filmmaker showed me a video one of his friends had produced. It took scenes from Star Wars and Star Trek II: The Wrath of Khan and recut them so it looked like the Millennium Falcon was fighting the Enterprise. It would show Han Solo shooting, then cut to the Enterprise being hit. Shatner would exclaim "Fire!" and then it would cut to a blast hitting the Millennium Falcon. And so on. Another video from the same guy took the musical number George Lucas had added to Return of the Jedi in the Special Edition, laid Wu-Tang Clan in as the soundtrack, then re-edited the video elements so everything matched up.

These videos sound fun, but not particularly original or even special in this day and age. However, these videos were made ten to fifteen years ago. I was watching them on a VHS(!) and the person making the edits was using analog techniques and equipment. It turns out that these videos were how he honed his craft before he officially got a job as an editor in Hollywood. I'm sure there were tons of other videos, probably much less impressive, that he had created before the ones I'm referencing. Now, I'm not saying that the BSG Sabotage editor or the Brooklyn Hipsters are angling for professional filmmaking jobs, but it's quite possible that they are at least exploring their own possibilities. I would also bet that these people have been making videos like this (though probably much less sophisticated) since they were kids. The only big difference now is that technology has enabled them to make a slicker experience and, more importantly, to distribute it widely.

It's also worth noting that this sort of thing is not without historical precedent. Indeed, the history of editing and montage is filled with this sort of thing. In the 1910s and 1920s, Russian filmmaker Lev Kuleshov conducted a series of famous experiments that helped establish the role of editing in films. In these experiments, he would show a man with an expressionless face, then cut to various other shots. In one example, he showed the expressionless face, then cut to a bowl of soup. When prompted, audiences would claim that they found that the man was hungry. Kuleshov then took the exact same footage of the expressionless face and cut to a pretty girl. This time, audiences reported that the man was in love. Another experiment alternated between the expressionless face and a coffin, a juxtaposition that led audiences to believe that the man was stricken with grief. This phenomenon has become known as the Kuleshov Effect.

For the current discussion, one notable aspect of these experiments is that Kuleshov was working entirely from pre-existing material. And this sort of thing was not uncommon, either. At the time, there was a shortage of raw film stock in Russia. Filmmakers had to make do with what they had, and often spent their time re-cutting existing material, which led to what's now called Soviet Montage Theory. When D.W. Griffith's Intolerance, which used advanced editing techniques (it featured a series of cross-cut narratives which eventually converged in the last reel), opened in Russia in 1919, it quickly became very popular. The Russian film community saw this as a validation and popularization of their theories and also as an opportunity. Russian critics and filmmakers were impressed by the film's technical qualities, but dismissed the story as "bourgeois", claiming that it failed to resolve issues of class conflict, and so on. So, not having much raw film stock of their own, they took to playing with Griffith's film, re-editing certain sections of the film to make it more "agitational" and revolutionary.

The extent to which this happened is a bit unclear, and certainly public exhibitions were not as dramatically altered as I'm making it out to be. However, there are Soviet versions of the movie that contained small edits and a newly filmed prologue. This was done to "sharpen the class conflict" and "anti-exploitation" aspects of the film, while still attempting to respect the author's original intentions. This was part of a larger trend of adding Soviet propaganda to pre-existing works of art, and given the ideals of socialism, it makes sense. (The preceding is a simplification of history, of course... see this chapter from Inside the Film Factory for a more detailed discussion of Intolerance and its impact on Russian cinema.) In the Russian film world, things really began to take off with Sergei Eisenstein and films like Battleship Potemkin. Watch that film today, and you'll be struck by how modern the editing feels, especially during the famous Odessa Steps sequence (which you'll also recognize if you've ever seen Brian De Palma's "homage" in The Untouchables).

Now, I'm not really suggesting that the woman who produced BSG Sabotage is going to be the next Eisenstein, merely that the act of cutting together pre-existing footage is not necessarily a sad waste of talent. I've drastically simplified the history of Soviet Montage Theory above, but there are parallels between Soviet filmmakers then and YouTube videomakers today. Due to limited resources and knowledge, they began experimenting with pre-existing footage. They learned from the experience and went on to grander modifications of larger works of art (Griffith's Intolerance). This eventually culminated in original works of art, like those produced by Eisenstein.

Now, YouTube videomakers haven't quite made that expressive leap yet, but it's only been a few years. It's going to take time, and obviously editing and montage are already well-established features of film, so innovation won't necessarily come from that direction. But that doesn't mean that nothing of value can emerge from this sort of thing, nor does messing around with videos on YouTube limit these young artists to film. While Roger Ebert's criticisms are valid, more and more, I'm seeing interactivity as the unexplored territory of art. A video game like Heavy Rain is an interesting experience and hints at something along these lines, but it is still severely limited in many ways (in other words, Ebert is probably right when it comes to that game). It will take a lot of experimentation to get to a point where maybe Ebert would be wrong (if that's even possible at all). Learning about the visual medium of film by editing together videos of pre-existing material would be an essential step in the process. Improving the technology with which to do so is also an important step. And so on.

To return to the BSG Sabotage video for a moment, I think it's worth noting the origins of that video. The video is clearly having fun by juxtaposing different genres and mediums (it is by no means the best or even a great example of this sort of thing, but the impulse is there. For a better example of something built entirely from pre-existing works, see Shining.). Battlestar Galactica was a popular science fiction series, beloved by many, and this video comments on the series slightly by setting the whole thing to an unconventional music choice (though given the recent Star Trek reboot's use of the same song, I have to wonder what the deal is with SF and Sabotage). Ironically, even the "original" Beastie Boys video was nothing more than a pastiche of 70s cop television shows. While I'm no expert, the music on Ill Communication, in general, has a very 70s feel to it. I suppose you could say that association only exists because of the Sabotage video itself, but even other songs on that album have that feel - for one example, take Sabrosa. Indeed, the Beastie Boys are themselves known for this sort of appropriation of pre-existing work. Their album Paul's Boutique infamously contains literally hundreds of samples and remixes of popular music. I'm not sure how they got away with some of that stuff, but I suppose it happened before getting sued for sampling was common. Nowadays, in order to get away with something like Paul's Boutique, you'd need to have deep pockets, which sorta defeats the purpose of using a sample in the first place. After all, samples are often used in the absence of resources, not just because of a lack of originality (though I guess that's part of it). In 2004, Nate Harrison put together this exceptional video explaining how a six-second drum beat (known as the Amen Break) exploded into its own subculture:


There is certainly some repetition here, and maybe some lack of originality, but I don't find this sort of thing "sad". To be honest, I've never been a big fan of hip hop music, but I can't deny the impact it's had on our culture and all of our music. As I write this post, I'm listening to Danger Mouse's The Grey Album:
It uses an a cappella version of rapper Jay-Z's The Black Album and couples it with instrumentals created from a multitude of unauthorized samples from The Beatles' LP The Beatles (more commonly known as The White Album). The Grey Album gained notoriety due to the response by EMI in attempting to halt its distribution.
I'm not familiar with Jay-Z's album and I'm probably less familiar with The White Album than I should be, but I have to admit that this combination, and the artistry with which the two seemingly incompatible works are combined into one cohesive whole, is impressive. Despite the lack of an official release (which would have made Danger Mouse money), The Grey Album made many best-of-the-year (and best-of-the-decade) lists. I see some parallels between the 1980s and 1990s use of samples, remixes, and mashups and what was happening in Russian film in the 1910s and 1920s. There is a pattern worth noticing here: new technology enables artists to play with existing art, and they then apply what they've learned to something more original later. Again, I don't think that the BSG Sabotage video is particularly groundbreaking, but that doesn't mean that the entire remix culture is worthless. I'm willing to bet that remix culture will eventually contribute towards something much more original than BSG Sabotage...

Incidentally, the director of the original Beastie Boys Sabotage video? Spike Jonze, who would go on to direct movies like Being John Malkovich, Adaptation., and Where the Wild Things Are. I think we'll see some parallels between the oft-maligned music video directors, who started to emerge in the film world in the 1990s, and YouTube videomakers. At some point in the near future, we're going to see film directors coming from the world of short-form internet videos. Will this be a good thing? I'm sure there are lots of people who hate the music video aesthetic in film, but it's hard to be that upset that people like David Fincher and Spike Jonze are making movies these days. I doubt the YouTube aesthetic will prove more popular, and I don't think these directors will be dominant or anything, but I think they will arrive. Or maybe YouTube videomakers will branch out into some other medium or create something entirely new (as I mentioned earlier, there's a lot of room for innovation in the interactive realm). In all honesty, I don't really know where remix culture is going, but maybe that's why I like it. I'm looking forward to seeing where it leads.
Posted by Mark on March 14, 2010 at 02:18 PM .: link :.


End of This Day's Posts

Wednesday, March 10, 2010

Blast from the Past
A coworker recently unearthed a stash of a publication called The Net, a magazine published circa 1997. It's been an interesting trip down memory lane. In no particular order, here are some thoughts about this now defunct magazine.
  • Website: There was a website, using the oh-so-memorable URL of www.thenet-usa.com (I suppose they were trying to distinguish themselves from all the other countries with thenet websites). Naturally, the website is no longer available, but archive.org has a nice selection of valid content from the 96-97 era. It certainly wasn't the worst website in the world, but it's not exactly great either. Just to give you a taste - for a while, it apparently used frames. Judging by archive.org, the site went on until at least February of 2000, but the domain lapsed sometime around May of that year. Random clicking around the dates after 2000 yielded some interesting results. Apparently someone named Phil Viger used it as his personal webpage for a while, complete with MIDI files (judging from his footer, he was someone who bought up a lot of URLs and put his simple page on there as a placeholder). By 2006, the site lapsed again, and it has remained vacant since then.
  • Imagez: One other fun thing about the website is that their image directory was called "imagez" (i.e. http://web.archive.org/web/19970701135348/www.thenet-usa.com/imagez/menubar/menu.gif). They thought they were so hip in the 90s. Of course, 10 years from now, some dufus will be writing a post very much like this and wondering why there's an "r" at the end of flickr.
  • Headlines: Some headlines from the magazine:
    • Top Secrets of the Webmaster Elite (And as if that weren't enough, we get the subhead: Warning: This information could create dangerously powerful Web Sites)
    • Are the Browser Wars Over? - Interestingly, the issue I'm looking at was from February 1997, meaning that IE and NN were still on their 3.x iterations. More on this story below
    • Unlock the Secrets of the Search Engines - Particularly notable in that this magazine was published before Google. Remember Excite? (Apparently, they're still around - who knew?)
    I could go on and on. Just pick up a magazine, open to a random page, and you'll find something hopelessly dated or a horrible pun (like Global Warning... get it? Instead of Global Warming, he's saying Global Warning! He's so clever!)
  • Browser Wars: With the impending release of IE4 and the Netscape Communicator Suite, everyone thought that web browsers were going to go away, or be consumed by the OS. One of the regular features of the magazine was to ask a panel of experts a simple question, such as "Are Web Browsers an endangered species?" Some of the answers are ridiculous, like this one:
    The Web browser (content) and the desktop itself (functions) will all be integrated into our e-mail packages (communications).
    There is, perhaps, a nugget of truth there, but it certainly didn't happen that way. Still, the line between browser, desktop, and email client is shifting; this guy just picked the wrong central application. Speaking of which, this is another interesting answer:
    The desktop will give way to the webtop. You will hardly notice where the Web begins and your documents end.
    Is it me, or is this guy describing Chrome OS? This answer and a lot of the others are obviously written with 90s terminology, but they describe things that are happening today. For instance, the notion of desktop widgets (or gadgets or screenlets or whatever you call them) is mentioned multiple times, but not with our terminology.
  • Holy shit, remember VRML?
  • Pre-Google Silliness: "A search engine for searching search engines? Sure why not?" Later in the same issue, I saw an ad for a program that would automatically search multiple search engines and provide you with a consolidated list of results... for only $70!
  • Standards: This one's right on the money: "HTML will still be the standard everyone loves to hate." Of course, the author goes on to speculate that Java applets will rule the day, so it's not exactly prescient.
  • The Psychic: In one of my favorite discoveries, the magazine pitted The Suit Versus the Psychic. Of course, the suit gives relatively boring answers to the questions, but the Psychic, he's awesome. Regarding NN vs IE, he says "I foresee Netscape over Microsoft's IE for 1997. Netscape is cleaner on an energy level. It appears to me to be more flexible and intuitive. IE has lower energy. I see encumbrances all around it." Nice! Regarding IPOs, our clairvoyant friend had this to say "I predict IPOs continuing to struggle throughout 1997. I don't know anything about them on this level, but that just came to me." Hey, at least he's honest. Right?
Honestly, I'm not sure I'm even doing this justice. I need to read through more of these magazines. Perhaps another post is forthcoming...
Posted by Mark on March 10, 2010 at 07:19 PM .: link :.


End of This Day's Posts

Sunday, January 10, 2010

Computer Desks
I have recently come into possession of a second LCD monitor, and hooked it up to do some dual-monitor awesomeness (amazingly enough, I didn't even need to upgrade my graphics card to do so). The problem is that my current desk is one of those crappy turn-of-the-century numbers that assume you only have one monitor and thus don't have space for the second. I managed to work around this... by ripping off the hutch portion of the desk, but I could still use a new desk, as this one really has seen better days.

So I started thinking about what I need my desk to do, and have quickly descended into Paradox of Choice hell. At a minimum, a new desk would need to be able to handle:
  • Two Monitors
  • Keyboard and Mouse (Preferably in a pullout thingy)
  • Cable Modem and Router
  • Tower Computer (needs good ventilation, especially considering that there are a couple fans mounted on the side of my computer)
  • Two speakers
  • External Hard Drive
  • Associated Cables/Wires
It's also worth noting that I often have my TV on in the background. It's currently positioned to my left, so I can just glance over and see what's going on. My current desk has a couple of drawers and before I got rid of the hutch, it had other storage space. This allowed me to keep some books, CDs/DVDs, etc... in a handy position. However, it'd probably be just as easy to find some other piece of furniture to handle those (but it would be nice to have a small filing cabinet thing as part of the desk).

In terms of taste, I tend to be a minimalist. I don't need lots of flying doodads or space-age design. Just something simple that covers the above. In looking around, this seems to be a rarity. As per usual when it comes to this sort of thing, Jeff Atwood has already posted about this, and the comment thread there is quite interesting (and still being updated, years later).

The best desk I've found so far seems to be the D2 Pocket Desk. Of course, the big problem with that one is that it's obscenely expensive (even on sale, it's wayyyy too expensive). But it's perfect for me. It's notable almost as much for what you don't see as what you do see - apparently there's a compartment in the back that's big enough to stuff all the cables, wires, routers, etc... that I need (and you can see the two little holes meant to corral the wires into that area). It being as expensive as it is, it's not something I'm seriously considering, but I'm trying to find a cheaper, similarly designed option (perhaps something that doesn't use cherry wood, which is apparently quite expensive). I'm kinda surprised at how few computer desks even attempt to account for cable management. Anyway, here's a quick picture:

D2 Pocket Desk Picture

The other notable option I found at Jeff's site was from a company called Anthro. Not the model he mentions, which is a monstrosity. However, Anthro features lots of models, and everything is customizable in the extreme. While they seem like good quality desks, they're also much more reasonably priced. Unfortunately, their configuration tool does little to help you visualize what you'll end up with. Still, the 48" AnthroCart seems like it would fit my needs, and given the modular nature of the desk, I can always add on to it later. If you look at the 3rd picture on that page, it's kinda what I'm looking for (but without the bottom shelf, and maybe with a filing cabinet attachment).

The big questions I have about the AnthroCart are how well their keyboard/mouse solutions work (all of the varieties seem to be quite small - and my current one is actually kinda large, which I really like for some reason...). There's also the question of how well those extra shelves on the top and bottom work. And color. Yeah, so this one is definitely in Paradox of Choice territory. However, they're apparently pretty agreeable and will help guide you in choosing the various accessories, etc... So maybe I'll start up a chat with a rep when I get a chance...

Some other stuff I've been looking at:
  • Liso Computer Desk with Keyboard (from Target)
  • Onyx Matrix Computer Desk (from Office Depot)
  • Drake Desk (from Crate & Barrel - would be good if it weren't for the glass top)
  • Ikea has some interesting stuff, but most of it is too small. On the other hand, for my bedroom, I did buy one of those generic Ikea tables and made it work as a desk. But it's also kinda tucked into the corner of my room - the new desk needs to be in the middle of my living room, so it needs to look somewhat more presentable...
Any other ideas? As of right now, I'm thinking a simple AnthroCart setup would be best, but I'm still trying to find an imitation D2 Pocket Desk, which I still think would be ideal...

Update: Desk 51 from BlueDot (via) is pretty interesting. I'm wondering how sturdy it is.

Again Update: This Landon Desk from Crate and Barrel has grown on me a bit, especially after seeing a similar desk on Flickr. The good thing about C&B is that there is a store near me, so I can at least check it out in person...

Another Update: Well, that's an idea... which I suppose also brings up the "Build your own" option, which could be a rewarding experience.

Yet Another Update: For reference, here's a pic of my desk as currently configured, and here's the surprisingly sturdy keyboard tray.
Posted by Mark on January 10, 2010 at 07:00 PM .: link :.


End of This Day's Posts

Wednesday, November 18, 2009

Another Store You Made
I'm totally stealing an idea from Jason Kottke here (let's call it a meme!), but it's kinda neat:
Whenever I link to something at Amazon on kottke.org, there's an affiliate code associated with the link. When I log into my account, I can access a listing of what people bought. The interesting bit is that everything someone buys after clicking through to Amazon counts and is listed, even items I didn't link to directly. These purchased-but-unlinked-to items form a sort of store created by kottke.org readers of their own accord.
I have about 1/1000000th the readership of Kottke, but I do have an Amazon affiliate account (it doesn't even come close to helping pay for the site, but it does feed my book/movie/music/video game addictions). Of course, I don't sell nearly as much stuff either, but here are a few things sold that haven't been directly linked: And that about covers the unexpected stuff. I do get lots of Asimov orders as well as Christmas movie orders, but those are popular sections of the site...
Posted by Mark on November 18, 2009 at 07:23 PM .: link :.


End of This Day's Posts

Sunday, June 28, 2009

Interrupts and Context Switching
To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (classically, 5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc... When you get into the guts of a computer and start looking at how they work, it seems amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform billions of these operations per second, so it still feels fast. The processor is performing these operations in a serial fashion - basically a single-file line of operations.
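
To make that "many operations" claim concrete, here's a toy sketch in Python (my own illustration, not from any source in this post) of addition built out of nothing but bit shuffling - a software version of the ripple-carry idea that hardware implements with transistors. It assumes non-negative integers:

    def add(a, b):
        # Repeat until there is no carry left to propagate.
        while b:
            carry = (a & b) << 1  # bits that spill over into the next column
            a = a ^ b             # add each column, ignoring the carries
            b = carry
        return a

    print(add(5, 7))  # prints 12, one carry-propagation pass at a time

Even for tiny numbers, that loop makes several passes; the hardware does the equivalent gate-level work for every single addition.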

This single-file line can be quite inefficient, and there are times when you want a computer to be processing many different things at once, rather than one thing at a time. For example, most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself. For instance, when a program needs some data, it may have to read that data from the hard drive first. This may only take a few milliseconds, but the CPU would be idle during that time - quite inefficient. To improve efficiency, computers use multitasking. A CPU can still only be running one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called Context Switching. Ironically, the act of context switching adds a fair amount of overhead to the computing process. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load the state of the CPU from memory. Fortunately, this overhead is often offset by the efficiency gained from not leaving the CPU idle.
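
To see the save/restore dance in miniature, here's a toy round-robin scheduler in Python (a sketch of my own, nowhere near a real OS scheduler): each generator plays the role of a task, and every yield is the point where the task's state is saved so that another task can be loaded:

    from collections import deque

    def task(name, steps):
        for i in range(steps):
            print(name, "step", i)
            yield  # suspend here; the task's state is saved automatically

    def run(tasks):
        queue = deque(tasks)
        while queue:
            current = queue.popleft()  # context switch: load the next task
            try:
                next(current)          # run it until its next yield
                queue.append(current)  # save it at the back of the line
            except StopIteration:
                pass                   # task finished; drop it

    run([task("A", 3), task("B", 2)])

The output interleaves A's and B's steps, which is the whole trick: neither task runs to completion in one go, but both make steady progress.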

If you can do context switches frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Signaling the CPU to do a context switch is often accomplished with a signal called an Interrupt. For the most part, the computers we're all using are Interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.
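
You can get a feel for interrupt-driven execution from user space with Unix signals. In this Python sketch (Unix-only, and merely a software analogy for what happens in hardware at a much lower level), the OS interrupts a busy loop after one second, runs a handler, and then lets the loop resume:

    import signal
    import time

    def handler(signum, frame):
        print("interrupted! higher-priority work runs here")

    signal.signal(signal.SIGALRM, handler)  # register the "interrupt" handler
    signal.alarm(1)                         # ask the OS to interrupt us in 1 second

    start = time.time()
    while time.time() - start < 2:
        pass  # the busy "main task"; the handler fires mid-loop
    print("main task resumes and finishes")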

This might sound tedious to us, but computers are excellent at this sort of processing. They will do millions of operations per second, and generally have no problem switching from one program to the other and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing - we can't change the speed of light or the size of atoms, among a number of other physical constraints, and so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be to figure out how to use parallel computing as well as we now use serial computing. Hence all the talk about multi-core processing (most commonly with 2 or 4 cores).
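
At the program level, "using parallel computing" can be as simple as handing independent chunks of work to separate processes. Here's a minimal sketch with Python's multiprocessing module (the worker count and workload are made-up values for illustration):

    from multiprocessing import Pool

    def crunch(n):
        return sum(i * i for i in range(n))  # some CPU-bound busywork

    if __name__ == "__main__":
        with Pool(processes=4) as pool:  # assumes a machine with 4 cores
            results = pool.map(crunch, [10**6] * 4)  # four tasks at once
        print(results)

On a multi-core machine, the four tasks really do run simultaneously, which is exactly what the serial, interrupt-driven model described above can only fake.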

Parallel computing can do many things which are far beyond our current technological capabilities. For a perfect example of this, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things which are far beyond the abilities of the biggest and most complex computers in existence. The reason for that is that there are truly massive numbers of neurons in our brain, and they're all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it's not so much the number of neurons we have as how they're organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn't mean they're proportionally more intelligent. An elephant's brain is much larger than a human's brain, but elephants are obviously much less intelligent than humans.

Of course, we know very little about the details of how our brains work (and I'm not an expert), but it seems clear that brain size and neuron count are not as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are "digital" in that if you were to take a snapshot of the brain at a given instant, each neuron would be either "on" or "off" (i.e. a 1 or a 0). However, neurons don't work like digital electronics. When a neuron fires, it doesn't just turn on, it pulses. What's more, each neuron is accepting input from and providing output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
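
Here's the cartoon version of that weighting idea in Python (deliberately crude - real neurons pulse and rewire, as noted above): each connection gets a weight representing its influence, and the neuron only "fires" if the weighted sum of its inputs crosses a threshold:

    def neuron(inputs, weights, threshold=1.0):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0  # the "on"/"off" snapshot

    # Three upstream neurons; the middle connection is the most influential:
    print(neuron([1, 1, 0], [0.2, 0.9, 0.5]))  # fires: 0.2 + 0.9 >= 1.0
    print(neuron([1, 0, 1], [0.2, 0.9, 0.5]))  # doesn't: 0.2 + 0.5 < 1.0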

This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable - things humans cherish and that computers can't really do on their own.

However, this all comes with its own set of tradeoffs, the most relevant of which (for this post, anyway) is that humans aren't particularly good at doing context switches. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious - heart pumping, breathing, processing sensory input, etc... Those are also things that we never really cease doing (while we're alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).

In a computer, everything is happening in serial, and thus it is easy to predict how various inputs will impact the system. What's more, when a computer stores its CPU's current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, doing this sort of context switching is much more difficult. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don't necessarily have a more effective memory system, they're just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so a context switch introduces a lot of thrash in what you were originally doing, because you will have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time doing). When you're working on something specific, you're dedicating a significant portion of your conscious brainpower towards that task. In other words, you're probably engaging millions if not billions of neurons in the task. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. In a computer, you only need to ensure that the current state of a single CPU is saved. Your brain, on the other hand, has a much tougher job, and its memory isn't quite as reliable as a computer's. I like to refer to this as mental inertia. This sort of issue manifests itself in many different ways.

One thing I've found is that it can be very difficult to get started on a project, but once I get going, it becomes much easier to remain focused and get a lot accomplished. But getting started can be a problem for me, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky - it's called Fire and Motion. A quick excerpt:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.

Somewhere between step 8 and step 9 there seems to be a bug, because I can't always make it across that chasm. For me, just getting started is the only hard thing. An object at rest tends to remain at rest. There's something incredible heavy in my brain that is extremely hard to get up to speed, but once it's rolling at full speed, it takes no effort to keep it going.
I've found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as "flow" or being "in the zone." This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.

From my own personal experience a couple of years ago during a particularly demanding project, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two-hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hopes that they will be able to get more done because no one else is around (and complain when people do show up that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.

A key component of flow is finding a large, uninterrupted chunk of time in which to work. It's also something that can be difficult to do at a lot of workplaces. Mine is a 24/7 company, and the nature of our business requires frequent interruptions, so many of us are in a near-constant state of context switching. Between phone calls, emails, and instant messaging, we're sure to be interrupted many times an hour if we're constantly keeping up with them. What's more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have a lot of meetings on our calendars, which only makes it more difficult to concentrate on something important.

Tell me if this sounds familiar: You wake up early and, during your morning routine, you plan out what you need to get done at work today. Let's say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails, and by the end of the day, you've accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.

Another example: if it's 2:40 pm and I know I have a meeting at 3 pm - should I start working on a task I know will take me 3 solid hours or so to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I'll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are that when I get back to my desk to get started again, I'm going to have to refamiliarize myself with the project and what I had already done before proceeding.

Of course, none of what I'm saying here is especially new, but in today's world it can be useful to remind ourselves that we don't need to always be connected or constantly monitoring emails, RSS, Facebook, Twitter, etc... Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It's funny: when you look at a lot of attempts to increase productivity, efforts tend to focus on managing time. While important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).

(Note: As long and ponderous as this post is, it's actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored towards the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that can be repurposed in my professional life, and vice versa.)
Posted by Mark on June 28, 2009 at 03:44 PM .: link :.


End of This Day's Posts

Wednesday, June 10, 2009

Screenshots
When I write about movies or anime, I like to include screenshots. Heck, half the fun of the Friday the 13th marathon has been the screenshots. However, I've been doing this manually and it's become somewhat time intensive... So I've been looking for ways to make the process of creating the screenshots easier. I was going to write a post about a zombie movie tonight and I had about 15 screenshots I wanted to use...

I take screenshots using PowerDVD, which produces .bmp files. To create a screenshot for a post, I will typically crop out any unsightly black borders (they're ugly and often asymmetrical), convert to .jpg and rename the file. Then I will create a smaller version (typically 320 pixels, while maintaining the aspect ratio), using a variant of the original .jpg's filename. This smaller version is what you see in my post, while the larger one is what you see when you click on the image in my post.
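
For the record, the whole workflow above comes out to only a few lines of scripting with the Python Imaging Library (the same library, as it happens, that Phatch depends on). Here's a rough sketch - the folder name, crop box, and output filenames are hypothetical placeholders, and every movie would really need its own crop settings:

    import os
    from PIL import Image

    def process(bmp_path, crop_box, out_name):
        img = Image.open(bmp_path).crop(crop_box)  # lose the black borders
        img = img.convert("RGB")                   # required before saving as .jpg
        img.save(out_name + ".jpg", quality=90)    # the full-size version
        small = img.copy()
        small.thumbnail((320, 10000))              # 320px wide, aspect ratio kept
        small.save(out_name + "-small.jpg", quality=90)

    # Process every .bmp that PowerDVD dropped into a (hypothetical) folder:
    for i, name in enumerate(sorted(os.listdir("screenshots"))):
        if name.endswith(".bmp"):
            process(os.path.join("screenshots", name), (0, 60, 720, 420), "shot%02d" % i)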

I've always used GIMP to accomplish this, but it's a pretty manual process, so I started looking around for some batch image processing programs. There are tons of the things out there. I found several promising programs. Batch Image Resizer was pretty awesome and did exactly what I wanted, but the free trial version inserted a huge unwanted watermark that essentially rendered the output useless. I looked at a few other free apps, but they didn't meet some of my needs.

Eventually, I came across the open source Phatch, which looked like it would provide everything I needed. The only issue was the installation process. It turns out that Phatch was written in Python, so in addition to Phatch, you also need to download and install Python, wxPython, the Python Imaging Library, and the Python Win32 Extensions. What's more, the Phatch documentation doesn't take into account that new versions of all of those are available, and not all of them are compatible with each other. After a false start, I managed to download and install all the necessary stuff. Then, to run the application, I have to use the goddamned command line. Yeah, I know windows users don't get much support from the linux community, but this is kinda ridiculous.

But I got it all working and now I was on my way. As I've come to expect from open source apps, Phatch has a different way of setting up your image processing than most of the other apps I'd seen... but I was able to figure it out relatively quickly. According to the Phatch documentation, the Crop action looked pretty easy to use... the only problem was that when I ran Phatch, Crop did not appear to be on the list of actions. Confused, I looked around the documentation some more and it appeared that there were several other actions that could be used to crop images. For example, if I used the Canvas action, I could technically crop the image by specifying measurements smaller than the image itself - this is how I eventually accomplished the feat of converting several screenshots from their raw form to their edited versions. Here's an example of the zombietastic results (for reference, a .jpg of the original):

Zombietastic

Bonus points to anyone who can name the movie!

The process has been frustrating and it took me a while to get all of this done. At this point, I have to wonder if I'd have been better off just purchasing that first app I found... and then I would have been done with it (and probably wouldn't be posting this at all). I'm hardly an expert on the subject of batch image manipulation and maybe I'm missing something fairly obvious, but I have to wonder why Phatch is so difficult to download, install, and use. I like open source applications and use several of them regularly, but sometimes they make things a lot harder than they need to be.

Update: I just found David's Batch Processor (a plugin for GIMP), but its renaming functionality is horrible (you can't actually rename the images - but you can add a prefix or suffix to the original filename.) Otherwise, it's decent.

And I also found FastStone Photo Resizer, which does everything I need it to do, and I don't need to run it from the command line either. This is what I'll probably be using in the future...

Update II: I got an email from Stani, who works on Phatch and was none too pleased about the post. It seems he had trouble posting a comment here (d'oh - that's the second person this week who mentioned that, which is strange, as commenting seems to have been working fine for the past few months and I haven't changed anything...). Anyway, here are his responses to the above:
As your comment system doesn't work, I post it through email. Considering the rant of your blog post, I would appreciate if you publish it as a comment for: http://kaedrin.com/weblog/archive/001652.html

> Eventually, I came accross the open source Phatch, which looked like it would provide everything I needed.

Thanks for taking the effort to try out Phatch.

> What's more is that the Phatch documentation has not taken into account that new versions of all of those are available and not all of them are compatible with each other.

The Phatch documentation is a wiki. The installation process for Windows would be much less a pain if Windows users would help improving the wiki and keeping the wiki up to date.

Unfortunately I've run into this behavior:
http://photobatch.wikidot.com/forum/t-145786/windows-installation-question
Luckily Linux users update the wiki themselves or send me the instructions, but don't run away. (Hint, hint)

I know several people have installed Phatch on Windows, but none of them documented for their fellow Window users. I only update the instructions with every major release.

> Then, to run the application, I have to use the goddamned command line.

If you installed Python right, you could just double click on phatch.py to start it or make a shortcut for it on your desktop.

> Yeah, I know windows users don't get much support from the linux community, but this is kinda ridiculous.

I hope to see your contribution on the wiki. Until then the situation is indeed ridiculous.

> the Crop action looked pretty easy to use...

You're right, but the crop action is part of the next release, Phatch 0.2 which is packed with many new features. If you want to be a beta tester, please let me know.

> maybe I'm missing something fairly obvious, but I have to wonder why Phatch is so difficult to download, install, and use. I like open source applications and use several of them regularly, but sometimes they make things a lot harder than they need to be.

I hope I explained it to you. I only use Windows to test my open source software. Maybe you would want me to make a one click installer. You probably understand that such negative ranting is not really stimulating.
And my response:
Apologies if my ranting wasn't stimulating enough, but considering that it took a couple of hours to get everything working and that I value my time, I wasn't exactly enthused with the application or the documentation. Believe it or not, I did click on the "edit" link on the wiki with the intention of adding some notes about the updated version numbers, but it said I had to be registered, and I was already pretty fed up and not in the mood to sign up for anything. I admit that I neglected to do my part, but I got into this to save time and it ended up being an enormous time-sink. If I get a chance, I'll take another look.

It looks like I can just double-click on the .py file, but the documentation says to run it from the command line (another thing for me to fix, perhaps?)

As for a simple installer, I would love to make one... if I had the time, motivation, or, uh, talent to create one. In the meantime, I'll see what I can do about the documentation, but honestly, I doubt that will help much until someone does create a windows installer.

Sorry about the comment functionality on my blog. I've been having issues with spammers and the plugin I'm using to block spammers seems to block legitimate comments sometimes as well (Question: did you use the "preview" function?). Yet another thing I'll have to look into...
Update III: Ben over at Midnight Tease has been having fun with Open Source as well...
Posted by Mark on June 10, 2009 at 09:54 PM .: link :.


End of This Day's Posts

Wednesday, February 04, 2009

Nerdy
I've always considered myself something of a nerd, even back when being nerdy wasn't cool. Nowadays, everyone thinks they're a nerd. MGK recently noticed this:
Recently, I was surfing the net looking for lols, and came across a personal ad on Craigslist. The ad was not in and of itself hilarious, but one thing struck me. The writer described herself as “nerdy,” and as an example of her nerdiness, explained that she loved to watch Desperate Housewives.

My god, people, have we allowed “nerdy” to be defined down so greatly that watching Desperate Housewives - a top 20 Neilsen primetime soap opera with no actual nerd content per se - qualifies as “nerdy” now? That is just wrong. The nerdular act cannot be allowed to be so mainstream.
To address this situation, he has devised "a handy guide for people to define their own nerdiness, based on a number of nerdistic passions." I'm a little surprised at how poorly I did in some of these categories.
  • Batman - Not Nerdy. When I think about it, it's not that surprising. After all, I have never read any of the comic books, not even Year One or The Dark Knight Returns, which MGK specifically calls out later in his criteria as not being particularly nerdy. That said, I wonder how watching The Dark Knight five times (three of them in the theater) in less than a year qualifies.
  • Star Wars - Slightly Nerdy. Now this one is surprising. Sure, according to this guide, I'm nerdier about Star Wars than I am about Batman, but only a little. I suppose if he had loosened the criteria or chosen a different random fact for the "nerdy" level, I could easily have reached that level, for I have had some experience with the "expanded universe" Star Wars novels. One other gripe is that no self-respecting nerd would defend the idea of Jar Jar Binks!
  • Harry Potter - Somewhere between Not Nerdy and Slightly Nerdy. I didn't particularly love Harry Potter and the Order of the Phoenix, and my dislike may disqualify me from the Slightly Nerdy level. On the other hand, I didn't particularly hate the novel either, and I had no problem blowing through it rather quickly.
  • Magic: The Gathering - Slightly Nerdy. I have to say that I didn't play this game that much, but I really did enjoy it when I did. But it got way too complicated later on, and some people took it wayyy too seriously.
  • H.P. Lovecraft - Dangerously Nerdy. Finally! Though I have to admit that I don't qualify for three of the lesser levels... However, I have read several of his stories, which is apparently dangerously nerdy.
  • Nerd Television - Dangerously Nerdy. Totally. The two shows I haven't watched much of are the lowest ranked ones. I've seen a significant portion of the other ones, including The Adventures of Brisco County Jr. (at this point, even recognizing what Brisco County Jr. is, is probably nerdworthy).
  • Star Trek - I think I might be Fairly Nerdy here, otherwise I'm Not Nerdy. It's just that I don't actually remember which one Picard rode the dune buggy in. That probably disqualifies me. I do love TNG though. Could never get into any of the other spinoffs.
  • Computer Use - Nerdy. Potentially Really Nerdy, but there are definitely a couple of coding jokes in XKCD that I haven't gotten (but I get a pretty good portion of them).
Again, I am a bit surprised at how non-nerdy I am. I mean, aside from a couple of dangerously nerdy subjects, I'm not very nerdy at all. How did you do?
Posted by Mark on February 04, 2009 at 10:45 PM .: link :.


End of This Day's Posts

Sunday, May 18, 2008

Firefox versus Opera
I use Opera to do most of my web browsing and have done so for quite a while. Is it time to switch to another browser? Or does Opera still meet my needs? After some consideration, the only realistic challenger is Firefox. What follows is not meant to be an objective comparison, though I will try to maintain impartiality, and some of the criteria will be more fact-based than others. Still, I'm not claiming this to be a definitive guide or anything. There are many features of both browsers that appeal to me, and many that I find irrelevant. Your experience will probably be different. Anyway, to start things off, a little history:

I first became aware of Opera in the late 1990s, when I tried out versions 3.5 and 4, but neither really made much of an impression. Plus, at the time, Opera was trialware... there was a free trial, but after that ended you needed to purchase the software if you wanted to keep using it. Starting with version 5, Opera became free, but it was ad-supported, and there was this big, honking banner ad built into the browser. On the other hand, Opera 5 was also the first browser to implement mouse gestures, the most addicting browser feature I've encountered (more on this later). As time went on and other browsers emerged, Opera finally relented and released a completely free browser in 2005. I've used Opera as much as possible since then, though I've occasionally used other browsers for various reasons. The biggest complaint I've had about Opera is that some websites don't render or operate correctly in it, forcing me to fire up IE or FF. This complaint has lessened with each successive release, though, and Opera 9.x seems to be compatible with most websites. The only time I find myself opening another browser is to watch Netflix online movies, which only work in IE (more on this later). Opera is certainly not a perfect browser, but each release seems to contain new and innovative features, and it has always served me well.

The only browser that has really compared with Opera is Firefox. It's based on the open source Mozilla project, which began in 1998 as a replacement for the Netscape 4.x browser (which was badly in need of an overhaul). Unfortunately, development of the open source browser was slow going, allowing Microsoft to completely dominate the market. However, version 1.0 of the Mozilla Application Suite (which included more than just a browser) was launched in 2002. It was bloated and slow, but the underlying code (particularly the rendering engine, named Gecko) was used as the base for several new projects, including Firefox. Firefox 1.0 was released in late 2004, and it has been picking up steam ever since. It's the first browser to challenge IE's dominance of the market, and it's also far superior to IE. The current version of Firefox is mature and stable, and a new version (3.0) is on its way that will supposedly address many of the current complaints about FF.

Of course, these are not the only two browsers out there. Internet Explorer is notable for its widespread adoption (during Q2 of 2004, IE had an astounding 95% share of the market). IE isn't very good compared to the competition, but its one virtue is that most websites will load and render properly in IE (and some websites will only work in IE). As a web developer, I have an intense dislike for IE, as it has poor standards support and is generally a pain to work with (especially IE6). IE7, while an improvement in many ways, also features some bizarre interface changes that make the browser less usable.

Also of note is Safari, Apple's default browser in OS X. It's based on WebKit, Apple's fork of the open source KHTML engine (which powers KDE's Konqueror, the primary open source competitor to Mozilla/Firefox), and it implements many of the same features as Opera and FF, but in a simple, lightweight way. I've never been much of a fan of Safari, though it should be noted as a valid competitor. It's a solid browser, fast and clean, but ultimately nothing really special (perhaps with more use, I would be won over). Finally, there are a number of other smaller-scale or specialized browsers like Flock (which has many features tailored around integrating with social networking sites), but nothing there really fits me.

So the most realistic options for me are Opera and Firefox. Both have new browsers in Beta (or higher), but I'll be primarily using the current releases (Opera 9.27 and Firefox 2.0.0.14). I've played around with Opera 9.5 and Firefox 3 RC1 and will keep them in mind. For reference, I'm running a PC with Intel Core 2 Duo (2.4 GHz), 2 GB RAM, and Windows XP SP2.
  • Default/Native Features: These first two criteria are tricky because they reflect the underlying philosophy of the two companies. Opera clearly has the better feature set out-of-the-box. Firefox is no slouch, of course, but it can't compete with the quality and quantity of Opera's default feature set. Both browsers have strong standards support, tabbed browsing, popup blocking, integrated web search, and other standard browser features. Now here's the tricky part. Opera has several features that FF doesn't. However, FF has one big feature that Opera doesn't, and that's their Extensions and Add-Ons (more on that in a moment). Opera does have a few major pieces of native functionality, like Mouse Gestures and Speed Dial, as well as other, smaller touches, like paste-and-go and the Trash Can. Now, the inclusion of all these features by default has its disadvantages as well, especially when you consider all the features that aren't very useful. Opera includes an email client (which is decent, except that I don't use it anymore), integrated BitTorrent support (which is awful and should be disabled), and the particularly weird Widgets (which are near useless, more on this below). This leads to the frequent claim by Firefox supporters that Opera is "bloated" with extra features. I suppose that's technically true, but then, Opera is also a smaller download (Opera 9.5b2 is 5,117 KB versus Firefox 3 RC1's 7,317 KB), takes up less space on the HD (Opera at 6.02 MB versus Firefox at 22.6 MB, though FF also has Add-Ons), and has a lower memory footprint. Call it bloated if you like, but that doesn't mean that FF isn't bloated too (honestly though, this is a quibble - both are way, way better than IE).

    Winner: Opera

  • Add-Ons/Extensions/Plugins: While Firefox does not have many features installed by default, it does have support for Add-Ons, and there is a huge community of developers and a large number of useful Add-Ons available for download. Many of the things Opera does natively can be replicated using a FF Add-On (in my experience, the Add-On is not as good as the native support, but passable). In effect, Firefox actually has more features available than Opera because of these Add-Ons. Now, this philosophy also has its drawbacks. First, you have to seek out and install each Add-On, and second, some Add-Ons are poorly written and cause performance problems within FF. In the end, though, the usefulness of the Add-Ons outweigh the negatives. Opera remains stalwart in its refusal to implement any sort of plugin system (beyond the rudimentary, circa 1995 Netscape-like system they have now), though they did launch something called Widgets, which are pretty much worthless. Opera's reasoning for not supporting extensions is sound, but also limiting:
    Opera does not support third party extensions. Opera has rather incorporated the most useful and popular features in its browser and holds itself accountable for the functionality of these features. With integrated features rather than extensions, users are not subjected to the vulnerabilities of extensions created by third parties, which may or may not go through a verification or testing process. With the largest Web browser development lab in the world, Opera ensures that all of its features are smoothly integrated, tested and ready for the user.
    This is certainly one way to approach the situation, and it's also probably the reason why Opera's native functionality works better than Firefox's Add-Ons, but again, it's quite limiting. More than anything else, Extensions are what would make me switch from Opera to FF. Opera is very innovative and they were the first to implement many features into their browser (for instance, tabbed browsing, mouse gestures, and more recently, speed dial), but even when Opera does manage to implement a brand new feature not in FF, it doesn't take long for someone to put together an Add-On to duplicate the functionality. I'll talk a little more about my favorite extensions as we go. Again, the positives of having an open system for third-party extensions far outweigh the negatives.

    Winner: Firefox

  • Other Customization: Both browsers are highly customizable and powerful. The interface customization abilities are more extensive in Opera, and their Theme manager is easier to use, but Firefox can generally follow along, though sometimes it needs to rely on an Extension to allow customization. I don't do a whole lot of advanced configuration in either browser, but both browsers have a way to configure various preferences (beyond the basic options in the menu), etc...

    Winner: Tie

  • General Web Browsing: There are a lot of elements to this that will be separated out (e.g., Mouse Gestures, speed, performance, etc...), so what this amounts to is how well each browser loads pages. Since Opera has never commanded more than a few percent of the browser share, most web development doesn't take Opera into account. In the past, this meant that many pages did not look right or operate correctly in Opera. As time has gone on and web standards have become more prevalent, Opera has improved considerably in this respect (well, technically, Opera has always been relatively standards-compliant; it's just that the standards are being used more these days), to the point now where I very rarely need to open a different browser. However, there are still pages that render poorly and would look better in other browsers, and a page I use frequently, the Netflix streaming video functionality, won't work in Opera. Of course, it won't work in Firefox either, but Firefox has one of those crazy Add-Ons called IE Tab, which loads an instance of IE inside Firefox's tabs (meaning that you don't have to exit out of Firefox or fire up a separate IE window). Firefox has captured around 15% of the market and is a favorite of the web development community (see the next bullet for more), so it has much better support amongst websites. Opera still lags behind because of its small market share, to the point where even Internet software giants like Google don't launch applications with strong Opera support (for instance, every time Google Reader upgrades their interface, it stops working in Opera for a few days while the Google developers scramble to issue a fix).

    Winner: Firefox

  • Mouse Gestures: This is probably the most important piece of functionality a browser must have for me. Browsing the internet is a mouse intensive activity, and Opera realized early on that providing this functionality would drastically improve the browsing experience. Opera has native Mouse Gestures support, while Firefox has an Add-On (actually, it has several, but only one of them is worth its salt) that provides similar functionality. However, Opera's functionality has always felt smoother and easier to use. The FF Add-On is a little buggy, the browsing experience is a little rougher, and it seems to be easier to screw up a gesture. I think part of this is that Opera has more caching enabled by default than Firefox, which leads to a more seamless experience when browsing. I'm sure there are ways to make FF more responsive, but I haven't played around with it (and Opera is fantastic by default). I might not be representative of the general internet population, but I think this is one of the most useful and important features a browser can have, and Opera's implementation is just plain better.

    Winner: Opera

  • Web Development Tools: Part of my job requires frontend web development, and Firefox unquestionably has the better web development tools. The Web Developer Toolbar and Firebug tandem is difficult to beat. Opera's latest revision of their developer tools, called Dragonfly, is an impressive leap forward and merits closer inspection, but my initial impression is that they still have a ways to go before they catch up with Firefox's Add-Ons.

    Winner: Firefox

  • Speed: Opera is often the winner in various benchmark tests, including this relatively old but thorough comparison of browser speeds (it's been updated a few times and has Opera 9 and FF 2, but is now retired and does not contain stats for the latest releases). Similarly, spot-checking various other benchmarks seems to further indicate Opera's speed. Then again, some initial reports of FF3 seem to indicate an improvement. As always, you have to take these sorts of benchmarks and reports with a grain of salt. My subjective perception of speed is that Opera is faster, but I haven't used FF3 very much, and I'm also not sure how much of that speed is due to caching settings.

    Winner: Opera

  • Performance: This one is trickier. In my admittedly arbitrary and unscientific test, I opened 10 tabs of commonly visited websites in both browsers (sites used include Kaedrin Weblog, CBS Sportsline's Fantasy Baseball LiveScoring page, GMail, Google Reader, Wikipedia, IMDB, and a few others). Opera was using ~99 MB, while FF was using ~150 MB. It's worth noting that Firefox has always drawn complaints about memory usage, especially when you have a lot of tabs open. In some cases, memory issues were traced to malfunctioning Add-Ons or plugins. I've seen other benchmark tests that have closer results, and apparently FF 3 has made massive improvements in this area. In my own subjective experience, FF tends to bog down, especially when I have many tabs open, so I'm going to give this to Opera, but if FF 3 works out the way everyone thinks, this may be up for grabs. I'd like to do some more detailed and formal tests on this one though (perhaps later this week; see the sketch after this list for how I'd automate the measurements).

    Winner: Opera

  • Intangibles: As I've already mentioned, I primarily use Opera to browse, so I am obviously biased towards Opera. I suppose there's also something to be said for rooting for the underdog, though when it comes to usability and performance, that shouldn't matter (and really, it doesn't - Opera is a genuinely great browser). And finally, Opera is more innovative than any other browser. They had tabbed browsing years and years before anyone else, their implementation of Mouse Gestures was revolutionary (for me, at least), and more recently, Speed Dial has become a favorite of mine. Their advances on small interface issues (like the Trash Can or Paste-and-Go) are rarely noted, but are very useful (enough so that FF has had Add-Ons created to replicate the interfaces). The fact that Firefox can do all of these things doesn't mean its developers would have come up with them first, and I suppose that's worth mentioning. On the other hand, Firefox is an open source project (there is some controversy about that, but it's still more open than Opera) and their philosophy of Add-Ons allows for a much broader range of browser capabilities and customization. In general, I prefer openness to closed systems, so there's another point for Firefox. It's also worth noting that Firefox's market share has been steadily increasing while Opera's has been decreasing (and when your high point is around 2.5%, that's not saying much). Opera has made a name for themselves in the embedded market (i.e. it's on lots of cell phones and other hardware, like the Wii), so they won't be going away anytime soon, but it seems like Firefox is moving faster now. This is a really close one, but I'll lean towards Firefox because it seems to have a brighter future.

    Winner: Firefox
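
As promised in the Performance bullet, here's roughly how I'd automate that memory check rather than eyeballing the Task Manager. This is a minimal sketch in Python using the third-party psutil library (an assumption on my part - you'd need to install it, and the process names vary by platform and version):

    import psutil

    # Process names to match; these are assumptions and differ by platform
    # and version (e.g. "firefox.exe" on Windows).
    BROWSERS = ("opera", "firefox")

    def browser_memory_mb():
        """Sum resident memory (RSS) across each browser's processes."""
        totals = {name: 0 for name in BROWSERS}
        for proc in psutil.process_iter(["name", "memory_info"]):
            pname = (proc.info["name"] or "").lower()
            for name in BROWSERS:
                if name in pname:
                    totals[name] += proc.info["memory_info"].rss
        return {name: total / (1024 * 1024) for name, total in totals.items()}

    if __name__ == "__main__":
        for name, mb in browser_memory_mb().items():
            print(f"{name}: {mb:.0f} MB")

Open the same 10 tabs in each browser, run the script, and you get comparable numbers in one shot.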
Well look at that, we've got a tie. Both Opera and Firefox have won 4 of the 8 above categories, which means I'll have to come up with some sort of tie-breaker or weighting. I think I'm going to end up staying with Opera for now, with the caveat that Firefox seems to have a brighter future. Opera does the right things really well, while Firefox is more flexible and open. I also tend to use Firefox as my primary browser for web development efforts (but that's a strange one, as I use all browsers in web development, though the FF web developer's toolbar and Firebug are really indispensable). However, for day to day activity, Opera is still good for me.

So what does the future hold? If Opera continues to lose market share and doesn't find an answer to Firefox's Extensions, it's going to be in real trouble (they seem to think their Widgets system will do this, but it really won't). Honestly, if FF 3 really does solve the memory problems, I might even be switching over as soon as that.
Posted by Mark on May 18, 2008 at 08:22 PM .: link :.


End of This Day's Posts

Sunday, April 27, 2008

Netflix Activity
The recent bout with my TV on DVD addiction necessitated an increase in Netflix usage, which made me curious. How well have I really taken advantage of the Netflix service, and is it worth the monthly expense?

If I were to rent a movie at a local video store like Blockbuster, each rental would cost somewhere around $4 (this is an extremely charitable estimate, as I'm sure it's probably closer to $5 at this point), plus the expense in time and effort (I mean, come on, I'd have to drive about a mile out of my way to go to one of these places!) Netflix costs me $15.99 a month for the 3-disc-at-a-time plan (this plan was $17.99 when I signed up, but dropped in price twice during my roughly two years of membership), so it takes about 4-5 Netflix rentals a month to recoup my costs and bring the price of an average rental down below $4. I've been a member for one year and ten months... how did I do (click for a larger version)?

My Netflix Activity Chart

A few notes on the data:
  • The chart shows both DVD rentals and movies or shows watched online through Netflix's "Watch Instantly" service. There are certain distinctions that should be made here, namely that DVD rentals are measured by the date the DVD was returned, while Watch Instantly rentals are measured when you watch them. Also, when watching a TV series on Watch Instantly, each episode counts as a separate rental (on DVD, a single disc usually holds 3-4 episodes and counts as just one rental).
  • As you can see, my initial usage was a little erratic, though I apparently tend to fall into a 4-5 month pattern (and you can see two nearly identical curves in 2007) where DVD rentals range from 6-13 per month. 13 appears to be my ceiling for a month, though I've hit that several times.
  • I've only fallen below the 4 disc per month ratio needed to bring the average rental down below $4 once (twice if you count July 2006, but that was my first month of service and does not constitute a full month's worth of data). To be honest, I don't remember why I only returned 2 movies in January 2007, but that was the first and only time I fell below the necessary 4 rentals.
  • My Watch Instantly service usage started off with a bang in July 2007 but quickly trailed off until 2008, when usage skyrocketed. This is when I discovered the TV show Dexter and quickly worked my way through all of the first season episodes (13 in all). Following Dexter, I started in on Ghost in the Shell: Stand Alone Complex and I just finished that today (expect a review later this week), so that means I watched 26 episodes online. Expect this to drop sharply next month (though I still plan on using it significantly, as I'll be following along with Filmspotting's 70's SF marathon, which features several movies in the Watch Instantly catalog). All in all, it's a reasonable service, though I have to admit that watching it on my computer just isn't the same - I bought that 50" widescreen HDTV for a reason, you know...
  • You'll also notice that both March and April of 2008 have me hitting the ceiling of 13 movies per month. This is the first time I've done that in consecutive months and is largely due to watching BSG season 3 and my discovery and addiction to The Wire.
  • As of April 2008, I'm averaging 9 movies a month (I've rented 198 DVDs). Even if I were to use my original price of $17.99 a month, that works out to around $2 a DVD rental. When you factor in the price drops and the Watch Instantly viewing (I've watched 51 things, though again, in some cases what I'm watching is a single episode of a TV show), I'm betting it would come out around $1.50-$1.75.
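To make the back-of-the-envelope math above concrete, here it is as a few lines of Python (a quick sketch; the dollar figures are the ones quoted in this post):

    # Figures quoted above: current fee, original fee, and the charitable
    # estimate for a local video store rental.
    monthly_fee = 15.99
    original_fee = 17.99
    store_rental = 4.00

    # Rentals per month needed to beat the video store:
    print(monthly_fee / store_rental)       # ~4.0 discs/month

    # Effective cost per disc at my 9-disc-a-month average:
    print(original_fee / 9)                 # ~$2.00, even at the old price

    # Folding in the 51 Watch Instantly viewings over 22 months:
    rentals_per_month = 9 + 51 / 22
    print(monthly_fee / rentals_per_month)  # ~$1.41 at the current price

Blending in the months that were billed at $17.99 nudges that last number up toward the $1.50-$1.75 range estimated above.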
So it seems that the service is definitely worth the money and is indeed saving me a fair amount. Plus, Netflix has a far greater selection than any local video store (with the potential exception of TLA Video, but they're too far from my home to count), thus allowing me to indulge in various genres that you don't see much of in a typical video store. The only potential downside to Netflix is that you can't really rent something on impulse (unless it's on the Watch Instantly service). There are also times when new or popular movies take some time before they're actually available to you, but you have to contend with that at video rental stores as well. Indeed, I can only think of 3-4 times I've had to wait for a movie (this is mostly because I tend to rent more obscure fare that people aren't exactly lining up to see...) For the most part, Netflix has been reliable as well, almost always turning around my returns in short order (I mail a disc one day, and get the next films two days later). There have been a few mixups and I do remember one movie that wasn't available on the east coast and had to be shipped from California, so it came after a wait of 3-4 days, but for the most part, I'm very happy with the service.

This has been an interesting exercise, because I feel like I'm a little more consistent than the data actually shows. I'm really surprised that there are several months where my rentals went down to 6... I could have sworn I watched at least 2-3 discs a week, with the occasional exception. Still, an average of 9 movies a month is nothing to sneeze at, I guess. I've heard horror stories where Netflix will start throttling you and take longer to deliver discs if you go above a certain number of rentals per month (at a certain point, the cost of processing your rentals becomes more than you're paying, which I guess is what prompts Netflix to start throttling), but I haven't had a problem yet. If I keep up my recent viewing habits though, this could change...
Posted by Mark on April 27, 2008 at 11:09 PM .: link :.


End of This Day's Posts

Sunday, November 25, 2007

Requiem for a Meme
In July of this year, I attempted to start a Movie Screenshot Meme. The idea was simple and (I thought) neat. I would post a screenshot, and visitors would guess what movie it was from. The person who guessed correctly would continue the game by either posting the next round on their blog, or if they didn't have a blog, they could send me a screenshot or just ask me to post another round. Things went reasonably well at first, and the game experienced some modest success. However, the game eventually morphed into the Mark, Alex, and Roy show, as the rounds kept cycling through each of our blogs. The last round was posted in September and despite a winning entry, the game has not continued.

The challenge of starting this meme was apparent from the start, but there were some other things that hindered the game a bit. Here are some assorted thoughts about the game, what held it back, and what could be done to improve the chances of adoption.
  • Low Traffic: The most obvious reason the game tapered off was that my blog doesn't get a ton of traffic. I have a small dedicated core of visitors though, and I think that's why the game lasted as long as it did. Still, the three blogs that comprised the bulk of rounds in the game weren't very high traffic blogs. As such, the pool of potential participants was relatively small, which is the sort of thing that would make it difficult for a meme to expand.
  • Barriers to Entry: The concept of allowing the winner to continue the game on their blog turned out to be a bit prohibitive, as most visitors don't have a blog. Also, a couple of winners expressed confusion as to how to get screenshots, and some didn't respond at all after winning. Of course, it is easy to start a new blog, and my friend Dave even did so specifically to post his round of the game, but none of these things helped get more eyes looking at the game.
  • Difficulty: I intentionally made my initial entries easy (at one point, I even considered making it obscenely easy, but decided to just use that screenshot as a joke), in an attempt to ensnare casual movie viewers, but as the game progressed, screenshots became more and more difficult, and were coming from obscure movies. Actually, if you look at most of the screenshots outside of my blog, there aren't many mainstream movies. Here are some of the lesser-known movies featured in the game: Hedwig and the Angry Inch (this one stumped the interwebs), The Big Tease, Rosencrantz & Guildenstern Are Dead, Children of Men (mainstream, I guess, though I'm pretty sure it wasn't even out on DVD yet), Cry-Baby, Brotherhood of the Wolf, The City of Lost Children, Everything Is Illuminated, Wings of Desire, Who Framed Roger Rabbit (mainstream), Run, Lola, Run, Masters of the Universe (!), I Heart Huckabees, and Runaway. Now, of the ones I've seen, none of these are terrible films (er, well, He-Man was pretty bad, as was Runaway, but they're 80s movies, so slack is to be cut, right?), but they're also pretty difficult to guess for a casual movie watcher. I mean, most are independent, several are foreign, and it doesn't help when the screenshot is difficult to place (even some of the mainstream ones, like Who Framed Roger Rabbit, were a little difficult). Heck, by the end, even I was posting difficult stuff (the 5 screenshot extravaganza featured a couple of really difficult ones). Again, there's nothing inherently wrong with these movie selections, but they're film-geek selections that pretty much exclude mainstream viewers. If the game had become more widespread, this wouldn't have been as big of a deal, as I'd imagine that more movie geeks would be attracted to it. This is an interesting issue though, as several people thought their screenshots were easy, even though their visitors thought they were hard. Movies are subjective, so I guess it can be hard to judge the difficulty of a given screenshot. A screenshot that is blatantly obvious to me might be oppressively difficult to someone else.
  • Again, Traffic: Speaking of which, once the game had made its way around most of my friends' blogs, things began to slow down a bit because we were all hoping that someone new would win a round. Several non-bloggers posted comments to the effect of: I know the answer, but I don't have a blog and I want this game to spread so I'll hold off for now. I know I held back on several rounds because of this, and as the person who started this whole thing, I find that understandable. In some ways, it was nice to see other people enjoying the game enough to care about its success, but that also didn't help a whole lot.
  • Detectives: At least a couple of people were able to find answers by researching rather than recognizing the movie. I know I was guilty of this. I'd recognize an actor, then look them up on IMDB and see what they've done, which helps narrow down the field considerably. I don't know that this is actually a bad thing, but I did find it interesting.
  • Memerific: The point of a meme is that it's supposed to be self-sustaining and self-propagating. While this game did achieve a modest success at the beginning, it never really became self-sustaining. At least a couple of times, I prodded the game to move it forward, and Roy and Alex did the same. I guess the memetic momentum was constantly being worn down by the factors discussed in this post.
  • Help: Given the above, there were several things that could have helped. I could have done a better job promoting the game, for instance. I could have made it easier for other bloggers to post a round. One of the things I wanted to do was create little javascript snippets that people could use to very quickly display the unwieldy rules (perhaps using nifty display techniques that hide most of the text initially until you click to learn more) and another little javascript that would display the current round (in a nice little graphical button or something). Unfortunately, this game pretty much coincided with the busiest time of my professional career, and I didn't have a lot of time to do anything (just keeping up with the latest round was a bit of a challenge for me).
  • Variants: One thing that may have helped would be to spread the game further out by allowing winners to "tag" other bloggers they wanted to see post screenshots, rather than just letting the winner post their own. I actually considered this when designing the game, but after some thought, I decided against it. Many people hate memes and don't like being "tagged" to participate. Knowing this, a lot of people who do participate in memes are hesitant to "tag" other people. I didn't want to annoy people with the blogging equivalent of chain letters, so I decided against it. However, tagging might have helped: it doesn't depend on casual movie fans hosting their own rounds, and it would allow the meme to spread much further, much faster. If I said the winner should tag 5 other bloggers to participate, the meme could spread exponentially. This would be much more difficult to track, but on the other hand, it might actually catch on. This might be the biggest way to improve the meme's chances at survival.
  • Alternatives: This strikes me as something that would work really well on a message board type system, especially one that allowed users to upload their own images. Heck, I wouldn't be surprised to see something like this out there. It also might have been a good idea to create a way to invite others to play the game via email (which probably would only work on a message board or dedicated website, where there's one central place that screenshots are posted). However, one of the things that's neat about blog memes is that they tend to get your blog exposed to people who wouldn't otherwise visit.
It was certainly an interesting and fun experience, and I'm glad I did it. Just for kicks, I'll post another screenshot. Feel free to post your answer in the comments, but I'm not especially expecting this to progress much further than it did before (though anything's possible):

Screenshot Game, round 24

(click image for a larger version) I'd say this is difficult except that it's blatantly obvious who that is in the screenshot. It shouldn't be that hard to pick out the movie even if you haven't seen it. What the heck: the winner of this round can post a screenshot on their own blog if they desire, and pick 5 blogs they'd like to see post one as well. As I mentioned above, I'm hesitant to annoy people with this sort of thing, but hey, why not? Let's give this meme some legs.
Posted by Mark on November 25, 2007 at 03:04 PM .: link :.


End of This Day's Posts

Sunday, November 18, 2007

The Paradise of Choice?
A while ago, I wrote a post about the Paradox of Choice based on a talk by Barry Schwartz, the author of a book by the same name. The basic argument Schwartz makes is that choice is a double-edged sword. Choice is a good thing, but too much choice can have negative consequences, usually in the form of some kind of paralysis (where there are so many choices that you simply avoid the decision) and consumer remorse (elevated expectations, anticipated regret, etc...). The observations made by Schwartz struck me as being quite astute, and I've been keenly aware of situations where I find myself confronted with a paradox of choice ever since. Indeed, just knowing and recognizing these situations seems to help deal with the negative aspects of having too many choices available.

This past summer, I read Chris Anderson's book, The Long Tail, and I was pleasantly surprised to see a chapter in his book titled "The Paradise of Choice." In that chapter, Anderson explicitly addresses Schwartz's book. However, while I liked Anderson's book and generally agreed with his basic points, I think his dismissal of the Paradox of Choice is off target. Part of the problem, I think, is that Anderson is much more concerned with the choices themselves rather than the consequences of those choices (which is what Schwartz focuses on). It's a little difficult to tell though, as Anderson only dedicates 7 pages or so to the topic. As such, his arguments don't really eviscerate Schwartz's work. There are some good points though, so let's take a closer look.

Anderson starts with a summary of Schwartz's main concepts, and points to some of Schwartz's conclusions (from page 171 in my edition):
As the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.
Now, the way Anderson presents this is a bit out of context, but we'll get to that in a moment. Anderson continues and then responds to some of these points (again, page 171):
As an antidote to this poison of our modern age, Schwartz recommends that consumers "satisfice," in the jargon of social science, not "maximize". In other words, they'd be happier if they just settled for what was in front of them rather than obsessing over whether something else might be even better. ...

I'm skeptical. The alternative to letting people choose is choosing for them. The lessons of a century of retail science (along with the history of Soviet department stores) are that this is not what most consumers want.
Anderson has completely missed the point here. Later in the chapter, he spends a lot of time establishing that people do, in fact, like choice. And he's right. My problem is twofold: First, Schwartz never denies that choice is a good thing, and second, he never advocates removing choice in the first place. Yes, people love choice, the more the better. However, Schwartz found that even though people preferred more options, they weren't necessarily happier because of it. That's why it's called the paradox of choice - people obviously prefer something that ends up having negative consequences. Schwartz's book isn't some sort of crusade against choice. Indeed, it's more of a guide for how to cope with being given too many choices. Take "satisficing." As Tom Slee notes in a critique of this chapter, Anderson misstates Schwartz's definition of the term. He makes it seem like satisficing is settling for something you might not want, but Schwartz's definition is much different:
To satisfice is to settle for something that is good enough and not worry about the possibility that there might be something better. A satisficer has criteria and standards. She searches until she finds an item that meets those standards, and at that point, she stops.
Settling for something that is good enough to meet your needs is quite different than just settling for what's in front of you. Again, I'm not sure Anderson is really arguing against Schwartz. Indeed, Anderson even acknowledges part of the problem, though he again misstates Schwartz's arguments:
Vast choice is not always an unalloyed good, of course. It too often forces us to ask, "Well, what do I want?" and introspection doesn't come naturally to all. But the solution is not to limit choice, but to order it so it isn't oppressive.
Personally, I don't think the problem is that introspection doesn't come naturally to some people (though that could be part of it), it's more that some people just don't give a crap about certain things and don't want to spend time figuring it out. In Schwartz's talk, he gave an example about going to the Gap to buy a pair of jeans. Of course, the Gap offers a wide variety of jeans (as of right now: Standard Fit, Loose Fit, Boot Fit, Easy Fit, Morrison Slim Fit, Low Rise Fit, Toland Fit, Hayes Fit, Relaxed Fit, Baggy Fit, Carpenter Fit). The clerk asked him what he wanted, and he said "I just want a pair of jeans!"
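
The distinction is easy to make concrete. Here's a minimal sketch in Python (the jean styles are real, but the fit scores and threshold are invented for illustration): the satisficer stops at the first option that clears her bar, while the maximizer has to evaluate every option before choosing.

    # Hypothetical fit scores for some of the Gap's jean styles
    # (the numbers are made up for illustration).
    jeans = {"Standard Fit": 6, "Loose Fit": 5, "Boot Fit": 8,
             "Easy Fit": 7, "Low Rise Fit": 4, "Relaxed Fit": 9}

    def satisfice(options, threshold):
        """Return the first option that is good enough, then stop looking."""
        for name, score in options.items():
            if score >= threshold:
                return name
        return None

    def maximize(options):
        """Evaluate every option in search of the single best one."""
        return max(options, key=options.get)

    print(satisfice(jeans, threshold=7))  # "Boot Fit" - third item checked
    print(maximize(jeans))                # "Relaxed Fit" - all six checked

The satisficer looks at three styles and goes home with jeans; the maximizer looks at all six and, per Schwartz, is probably still second-guessing the choice.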

The second part of Anderson's statement is interesting though. Aside from again misstating Schwartz's argument (he does not advocate limiting choice!), the observation that the way a choice is presented matters is an interesting one. Yes, the Gap has a wide variety of jean styles, but look at their website again. At the top of the page is a little guide to what each of the styles means. For the most part, it's helpful, and I think that's what Anderson is getting at. Too much choice can be oppressive, but if you have the right guide, you can get the best of both worlds. The only problem is that finding the right guide is not as easy as it sounds. The jean style guide at Gap is neat and helpful, but you do have to click through a bunch of stuff and read it. This is easier than going to a store and trying all the varieties on, but it's still a pain for someone who just wants a pair of jeans, dammit.

Anderson spends some time fleshing out these guides to making choices, noting the differences between offline and online retailers:
In a bricks-and-mortar store, products sit on the shelf where they have been placed. If a consumer doesn't know what he or she wants, the only guide is whatever marketing material may be printed on the package, and the rough assumption that the product offered in the greatest volume is probably the most popular.

Online, however, the consumer has a lot more help. There are a nearly infinite number of techniques to tap the latent information in a marketplace and make that selection process easier. You can sort by price, by ratings, by date, and by genre. You can read customer reviews. You can compare prices across products and, if you want, head off to Google to find out as much about the product as you can imagine. Recommendations suggest products that 'people like you' have been buying, and surprisingly enough, they're often on-target. Even if you know nothing about the category, ranking best-sellers will reveal the most popular choice, which both makes selection easier and also tends to minimize post-sale regret. ...

... The paradox of choice is simply an artifact of the limitations of the physical world, where the information necessary to make an informed choice is lost.
I think it's a very good point he's making, though I think he's a bit too optimistic about how effective these guides to buying really are. For one thing, there are times when a choice isn't clear, even if you do have a guide. Also, while I think recommendations based on what other customers purchase are important and helpful, who among us hasn't seen absurd recommendations? From my personal experience, a lot of people don't like the connotations of recommendations either (how do they know so much about me? etc...). Personally, I really like recommendations, but I'm a geek and I like to figure out why they're offering me what they are (Amazon actually tells you why something is recommended, which is really neat). In any case, from my own personal anecdotal observations, no one puts much faith in probabilistic systems like recommendations or ratings (for a number of reasons, such as cheating or distrust). There's nothing wrong with that, and that's part of why such systems are effective. Ironically, acknowledging their imperfections allows users to better utilize the systems. Anderson knows this, but I think he's still a bit too optimistic about our tools for traversing the long tail. Personally, I think they need a lot of work.
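
For the curious, the core of those "people like you" recommendations can be surprisingly simple. Here's a toy sketch in Python of one common approach - scoring items by how much their buyers' histories overlap with yours. The shoppers and purchases are invented, and real systems are far more elaborate:

    from collections import Counter

    # Invented purchase histories: customer -> set of items bought.
    purchases = {
        "ann": {"dvd player", "hdmi cable", "blank discs"},
        "bob": {"dvd player", "hdmi cable", "surge protector"},
        "cam": {"dvd player", "blank discs"},
    }

    def recommend(customer, purchases):
        """Suggest items that co-occur with this customer's purchases."""
        mine = purchases[customer]
        scores = Counter()
        for other, theirs in purchases.items():
            if other == customer:
                continue
            overlap = len(mine & theirs)   # how similar are our histories?
            for item in theirs - mine:     # things they bought that I haven't
                scores[item] += overlap
        return [item for item, _ in scores.most_common()]

    print(recommend("cam", purchases))
    # ['hdmi cable', 'surge protector'] - weighted by shared history

The weighting is also why the results are "often on-target" and occasionally absurd: the system only knows what co-occurs, not why.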

When I was younger, one of the big problems in computing was storage. Computers are the perfect data gathering tool, but you need somewhere to store all that data. In the 1980s and early 1990s, computers and networks were significantly limited by hardware, particularly storage. By the late 1990s, Moore's law had eroded this deficiency significantly, and today, the problem of storage is largely solved. You can buy a terabyte of storage for just a couple hundred dollars. However, as I'm fond of saying, we don't so much solve problems as trade one set of problems for another. Now that we have the ability to store all this information, how do we get at it in a meaningful way? When hardware was limited, analysis was easy enough. Now, though, you have so much data available that the simple analyses of the past don't cut it anymore. We're capturing all this new information, but are we really using it to its full potential?

I recently caught up with Malcolm Gladwell's article on the Enron collapse. The really crazy thing about Enron was that they didn't really hide what they were doing. They fully acknowledged and disclosed what they were doing... there was just so much complexity to their operations that no one really recognized the issues. They were "caught" because someone had the persistence to dig through all the public documentation that Enron had provided. Gladwell goes into a lot of detail, but here are a few excerpts:
Enron's downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the nineteen-seventies. To expose the White House coverup, Bob Woodward and Carl Bernstein used a source - Deep Throat - who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flower pot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn't being followed, and meet his source in an underground parking garage at 2 A.M. ...

Did Jonathan Weil have a Deep Throat? Not really. He had a friend in the investment-management business with some suspicions about energy-trading companies like Enron, but the friend wasn't an insider. Nor did Weil's source direct him to files detailing the clandestine activities of the company. He just told Weil to read a series of public documents that had been prepared and distributed by Enron itself. Woodward met with his secret source in an underground parking garage in the hours before dawn. Weil called up an accounting expert at Michigan State.

When Weil had finished his reporting, he called Enron for comment. "They had their chief accounting officer and six or seven people fly up to Dallas," Weil says. They met in a conference room at the Journal's offices. The Enron officials acknowledged that the money they said they earned was virtually all money that they hoped to earn. Weil and the Enron officials then had a long conversation about how certain Enron was about its estimates of future earnings. ...

Of all the moments in the Enron unravelling, this meeting is surely the strangest. The prosecutor in the Enron case told the jury to send Jeffrey Skilling to prison because Enron had hidden the truth: You're "entitled to be told what the financial condition of the company is," the prosecutor had said. But what truth was Enron hiding here? Everything Weil learned for his Enron expose came from Enron, and when he wanted to confirm his numbers the company's executives got on a plane and sat down with him in a conference room in Dallas.
Again, there's a lot more detail in Gladwell's article. Just how complicated was the public documentation that Enron had released? Gladwell gives some examples, including this one:
Enron's S.P.E.s were, by any measure, evidence of extraordinary recklessness and incompetence. But you can't blame Enron for covering up the existence of its side deals. It didn't; it disclosed them. The argument against the company, then, is more accurately that it didn't tell its investors enough about its S.P.E.s. But what is enough? Enron had some three thousand S.P.E.s, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty S.P.E. disclosure statements from various corporations - that is, summaries of the deals put together for interested parties - and found that on average they ran to forty single-spaced pages. So a summary of Enron's S.P.E.s would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That's what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That's what the Powers Committee put together. The committee looked only at the "substance of the most significant transactions," and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was "with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation."
Again, Gladwell's article has a lot of other details and is a fascinating read. What interested me the most, though, was the problem created by so much data. That much information is useless if you can't sift through it quickly or effectively enough. Bringing this back to the paradise of choice, the current systems we have for making such decisions are better than ever, but still require a lot of improvement. Anderson is mostly talking about simple consumer products, so none are really as complicated as the Enron case, but even then, there are still a lot of problems. If we're really going to overcome the paradox of choice, we need better information analysis tools to help guide us. That said, Anderson's general point still holds:
More choice really is better. But now we know that variety alone is not enough; we also need information about that variety and what other consumers before us have done with the same choices. ... The paradox of choice turned out to be more about the poverty of help in making that choice than a rejection of plenty. Order it wrong and choice is oppressive; order it right and it's liberating.
Personally, while the help in making choices has improved, there's still a long way to go before we can really tackle the paradox of choice (though, again, just knowing about the paradox of choice seems to do wonders in coping with it).

As a side note, I wonder if the video game playing generations are better at dealing with too much choice - video games are all about decisions, so I wonder if folks who grew up working on their decision making apparatus are more comfortable with being deluged by choice.
Posted by Mark on November 18, 2007 at 09:47 PM .: link :.


End of This Day's Posts

Wednesday, October 17, 2007

The Spinning Silhouette
This Spinning Silhouette optical illusion is making the rounds on the internet this week, and it's being touted as a "right brain vs left brain test." The theory goes that if you see the silhouette spinning clockwise, you're right brained, and you're left brained if you see it spinning counterclockwise.

Every time I looked at the damn thing, it was spinning a different direction. I closed my eyes and opened them again, and it spun a different direction. Every now and again it would stay the same direction twice in a row, but if I looked away and looked back, it changed direction. Now, if I focus my eyes on a point below the illusion, it doesn't seem to rotate all the way around at all; instead it seems like she's moving from one side to the other, then back (i.e. changing directions every time the one leg reaches the side of the screen - and the leg always seems to be in front of the silhouette).

Of course, this is the essence of the illusion. The silhouette isn't actually spinning at all, because it's two-dimensional. However, since my brain is used to living in a three-dimensional world (and thus parsing three-dimensional images), it's assuming that the image is also three-dimensional. We're actually making lots of assumptions about the image, and that's why we can see it going one way or the other.

Eventually, after looking at the image for a while and pondering the issues, I got curious. I downloaded the animated gif and opened it up in the GIMP to see how the frames are built. I could be wrong, but I'm pretty sure this thing is either broken or it's cheating. Well, I shouldn't say that. I noticed something off on one of the frames, and I'd be real curious to know how that affects people's perception of the illusion (to me, it means the image is definitely moving counterclockwise). I'm almost positive that it's too subtle to really affect anything, but I did find it interesting. More on this, including images and commentary, below the fold. First things first, here's the actual spinning silhouette.

The Spinning Silhouette

Again, some of you will see it spinning in one direction, some in the other direction. Everyone seems to have a different trick for getting it to switch direction. Some say to focus on the shadow, some say to look at the ankles. Closing my eyes and reopening seems to do the trick for me. Now let's take a closer look at one of the frames. Here's frame 12:

In frame 12, the illusion is still intact

Looking at this frame, you should be able to switch back and forth, seeing the leg behind the person or in front of the person. Again, because it's a silhouette and a two dimensional image, our brain usually makes an assumption of depth, putting the leg in front or behind the body. Switching back and forth on this static image was actually a lot easier for me. Now the tricky part comes in the next frame, number 13 (obviously, the arrow was added by me):

In frame 13, there is a little gash in the leg

Now, if you look closely at the leg, you'll see a little imperfection in the silhouette. Maybe I'm wrong, but that little gash in the leg seems to imply that the leg is behind the body. If you try, you can still get yourself to see the image as having the leg in front, but then you've got this gash in the leg that just seems very out of place.

So what to make of this? First, the imperfection is subtle enough (it's on 1 frame out of 34) that everyone still seems to be able to see it rotate in both directions. Second, maybe I'm crazy, and the little gash doesn't imply what I think. Anyone have alternative explanations? Third, is that imperfection intentional? If so, why? It does not seem necessary, so I'd be curious to know if the creators knew about it, and what their intention was regarding it.
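
Incidentally, if you want to inspect the frames yourself but don't have the GIMP handy, a few lines of Python will dump them all as stills. A minimal sketch using the Pillow imaging library (the filename is a placeholder for wherever you saved the animation):

    from PIL import Image, ImageSequence

    # "silhouette.gif" is a placeholder path for the downloaded animation.
    gif = Image.open("silhouette.gif")

    count = 0
    for i, frame in enumerate(ImageSequence.Iterator(gif), start=1):
        # Convert each palette-based GIF frame to RGB before saving.
        frame.convert("RGB").save(f"frame_{i:02d}.png")
        count = i

    print(f"dumped {count} frames")  # this animation runs 34 frames

Frames 12 and 13 are the ones discussed above; flip between those two stills and the gash jumps right out.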

Finally, as far as the left brain versus right brain portion, I find that I don't really care, but I am interested in how the imperfection would affect this "test." This neuroscientist seems to be pretty adamant about the whole left/right thing being hogwash though:
...the notion that someone is "left-brained" or "right-brained" is absolute nonsense. All complex behaviours and cognitive functions require the integrated actions of multiple brain regions in both hemispheres of the brain. All types of information are probably processed in both the left and right hemispheres (perhaps in different ways, so that the processing carried out on one side of the brain complements, rather than substitutes, that being carried out on the other).
At the very least, the traditional left/right brain theory is a wildly oversimplified version of what's really happening. The post also goes into the way the brain "fills in the gaps" for confusing visual information, thus allowing the illusion.

Update: Strange - the image appears to be rotating MUCH faster in Firefox than in Opera or IE. I wonder how that affects perception.
Posted by Mark on October 17, 2007 at 10:42 PM .: link :.


End of This Day's Posts

Sunday, August 05, 2007

Manuals, or the lack thereof...
When I first started playing video games and using computer applications, I remember having to read the instruction manuals to figure out what was happening on screen. I don't know if this was because I was young and couldn't figure this stuff out, or because some of the controls were obtuse and difficult. It was perhaps a combination of both, but I think the latter was more prevalent, especially when applications and games became more complex and powerful. I remember sitting down at a computer running DOS and loading up WordPerfect. The interface that appears is rather simplistic, and the developers apparently wanted to avoid the "clutter" of on-screen menus, so they used keyboard combinations. According to Wikipedia, WordPerfect used "almost every possible combination of function keys with Ctrl, Alt, and Shift modifiers." I vaguely remember needing to use those stupid keyboard templates (little pieces of laminated paper that fit snugly around the keyboard keys, helping you remember what key or combo does what).

Video Games used to have great manuals too. I distinctly remember several great manuals from the Atari 2600 era. For example, the manual for Pitfall II was a wonderful document done in the style of Pitfall Harry's diary. The game itself had little in the way of exposition, so you had to read the manual to figure out that you were trying to rescue your niece Rhonda and her cat, Quickclaw, who became trapped in a catacomb while searching for the fabled Raj diamond. Another example for the Commodore 64 was Temple of Apshai. The game had awful graphics, but each room you entered had a number, and you had to consult your manual to get a description of the room.

By the time of the NES, the importance of manuals had waned from Apshai levels, but they were still somewhat necessary at times, and gaming companies still went to a lot of trouble to produce helpful documents. The one that stands out in my mind was the manual for Dragon Warrior III, which was huge (at least 50 pages) and also contained a nice fold-out chart of most of the monsters and weapons in the game (with really great artwork). PC games were also getting more complex, and as Roy noted recently, companies like Sierra put together really nice instruction manuals for complex games like the King's Quest series.

In the early 1990s, my family got its first Windows PC, and several things changed. With the Word for Windows software, you didn't need any of those silly keyboard templates. Everything you needed to do was in a menu somewhere, and you could just point and click instead of having to memorize strange keyboard combos. Naturally, computer purists love the keyboard, and with good reason. If you really want to be efficient, the keyboard is the way to go, which is why Linux users are so fond of the command line and simple looking but powerful applications like Emacs. But for your average user, the GUI was very important, and made things a lot easier to figure out. Word had a user manual, and it was several hundred pages long, but I don't think I ever cracked it open, except maybe in curiosity (not because I needed to).

The trends of improving interfaces and less useful manuals proceeded throughout the next decade, and today, well, I can't think of the last time I had to consult a physical manual for anything. Steven Den Beste has been playing around with flash for a while, but he says he never looks at the manual. "Manuals are for wimps." In his post, Roy wonders where all the manuals have gone. He speculates that manufacturing costs are a primary culprit, and I have no doubt that they are, but there are probably a couple of other reasons as well. For one, interfaces have become much more intuitive and easy to use. This is in part due to familiarity with computers and the emergence of consistent standards for things like dialog boxes (of course, when you eschew those standards, you get what Jakob Nielsen describes as a catastrophic failure). If you can easily figure it out through the interface, what use are the manuals? With respect to gaming, the in-game tutorials have largely taken the place of instruction manuals. Another thing that has perhaps affected official instruction manuals are the unofficial walkthroughs and game guides. Visit a local bookstore and you'll find entire bookcases devoted to video game guides and walkthroughs. As nice as the manual for Pitfall II was, you really didn't need much more than 10 pages to explain how to play that game, but several hundred pages barely do justice to some of the more complex video games in today's market. Perhaps the reason gaming companies don't give you instruction manuals with the game is not just that printing the manual is costly, but that they can sell you a more detailed and useful one.

Steven Johnson's book Everything Bad is Good for You has a chapter on Video Games that is very illuminating (in fact, the whole book is highly recommended - even if you don't totally agree with his premise, he still makes a compelling argument). He talks about the official guides and why they're so popular:
The dirty little secret of gaming is how much time you spend not having fun. You may be frustrated; you may be confused or disoriented; you may be stuck. When you put the game down and move back into the real world, you may find yourself mentally working through the problem you've been wrestling with, as though you were worrying a loose tooth. If this is mindless escapism, it's a strangely masochistic version.
He gives an example of a man who spends six months working as a smith (mindless work) in Ultima online so that he can attain a certain ability, and he also talks about how people spend tons of money on guides for getting past various roadblocks. Why would someone do this? Johnson spends a fair amount of time going into the neurological underpinnings of this, most notably what he calls the "reward circuitry of the brain." In games, rewards are everywhere. More life, more magic spells, new equipment, etc... And how do we get these rewards? Johnson thinks there are two main modes of intellectual labor that go into video gaming, and he calls them probing and telescoping.

Probing is essentially exploration of the game and its possibilities. Much of this is simply the unconscious exploration of the controls and the interface, figuring out how the game works and how you're supposed to interact with it. However, probing also takes the more conscious form of figuring out the limitations of the game. For instance, in a racing game, it's usually interesting to see if you can turn your car around backwards, pick up a lot of speed, then crash head-on into a car going the "correct" way. Or, in Rollercoaster Tycoon, you can creatively place balloon stands next to a roller coaster to see what happens (the result is hilarious). Probing the limits of game physics and finding ways to exploit them are half the fun (or challenge) of video games these days, which is perhaps another reason why manuals are becoming less frequent.

Telescoping has more to do with the game's objectives. Once you've figured out how to play the game through probing, you seek to exploit your knowledge to achieve the game's objectives, which are often nested in a hierarchical fashion. For instance, to save the princess, you must first enter the castle, but you need a key to get into the castle and the key is guarded by a dragon, etc... Indeed, the structure is sometimes even more complicated, and you essentially build this hierarchy of goals in your head as the game progresses. This is called telescoping.
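
That nested structure maps naturally onto a tree of goals, which is essentially what the player builds mentally. A quick sketch in Python of the princess example from the paragraph above (the goal names are just the ones used there):

    # The nested objectives from the example above; child goals are
    # listed under the goal they unblock.
    goals = {
        "save the princess": ["enter the castle"],
        "enter the castle": ["get the key"],
        "get the key": ["defeat the dragon"],
        "defeat the dragon": [],
    }

    def plan(goal, depth=0):
        """Walk the hierarchy the way a player telescopes it in their head."""
        print("  " * depth + goal)
        for subgoal in goals[goal]:
            plan(subgoal, depth + 1)

    plan("save the princess")
    # save the princess
    #   enter the castle
    #     get the key
    #       defeat the dragon

The player maintains and re-prioritizes this structure on the fly, which is exactly the decision-making Johnson describes below.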

So why is this important? Johnson has the answer (page 41 in my edition):
... far more than books or movies or music, games force you to make decisions. Novels may activate our imagination, and music may conjure up powerful emotions, but games force you to decide, to choose, to prioritize. All the intellectual benefits of gaming derive from this fundamental virtue, because learning how to think is ultimately about learning to make the right decisions: weighing evidence, analyzing situations, consulting your long term goals, and then deciding. No other pop culture form directly engages the brain's decision-making apparatus in the same way. From the outside, the primary activity of a gamer looks like a fury of clicking and shooting, which is why much of the conventional wisdom about games focuses on hand-eye coordination. But if you peer inside the gamer's mind, the primary activity turns out to be another creature altogether: making decisions, some of them snap judgements, some long-term strategies.
Probing and telescoping are essential to learning in any sense, and the way Johnson describes them in the book reminds me of a number of critical thinking methods. Probing, developing a hypothesis, reprobing, and then rethinking the hypothesis is essentially the same thing as the scientific method or the hermeneutic circle. As such, it should be interesting to see if video games ever really catch on as learning tools. There have been a lot of attempts at this sort of thing, but they're often stifled by the reputation of video games being a "colossal waste of time" (in recent years, the benefits of gaming are being acknowledged more and more, though not usually as dramatically as Johnson does in his book).

Another interesting use for video games might be evaluation. A while ago, Bill Simmons made an offhand reference to EA Sports' Madden games in the context of hiring football coaches (this shows up at #29 on his list):
The Maurice Carthon fiasco raises the annual question, "When teams are hiring offensive and defensive coordinators, why wouldn't they have them call plays in video games to get a feel for their play calling?" Seriously, what would be more valuable, hearing them B.S. about the philosophies for an hour, or seeing them call plays in a simulated game at the all-Madden level? Same goes for head coaches: How could you get a feel for a coach until you've played poker and blackjack with him?
When I think about how such a thing would actually go down, I'm not so sure, because the football world created by Madden, as complex and comprehensive as it is, still isn't exactly the same as the real football world. However, I think the concept is still sound. Theoretically, you could see how a prospective coach would actually react to a new, and yet similar, football paradigm and how they'd find weaknesses and exploit them. The actual plays they call aren't that important; what you'd be trying to figure out is whether or not the coach was making intelligent decisions or not.

So where are manuals headed? I suspect that they'll become less and less prevalent as time goes on and interfaces become more and more intuitive (though there is still a long ways to go before I'd say that computer interfaces are truly intuitive, I think they're much more intuitive now than they were ten years ago). We'll see more interactive demos and in-game tutorials, and perhaps even games used as teaching tools. I could probably write a whole separate post about how this applies to Linux, which actually does require you to look at manuals sometimes (though at least they have a relatively consistent way of treating manuals; even when the documentation is bad, you can usually find it). Manuals and passive teaching devices will become less important. And to be honest, I don't think we'll miss them. They're annoying.
Posted by Mark on August 05, 2007 at 10:58 AM .: link :.


End of This Day's Posts

Wednesday, June 27, 2007

The Dramatic Prairie Dog
I recently came across this silly video, and have since become interested in its evolution. It's strange how these memes progress. Is this really a worthwhile enterprise? It's amusing and fun, but also ephemeral. My initial thought is that stuff like this, while not necessarily brilliant in itself, is a natural byproduct of a system that will produce good content. In other words, if you want to create something great, you'll probably have to endure creating a lot of crap before you cross over into greatness. Same thing with blogs, I think. Everyone tries different things and experiments, but only a few blogs become really good.
Posted by Mark on June 27, 2007 at 07:50 PM .: link :.


End of This Day's Posts

Wednesday, April 11, 2007

Twitter
So this Twitter thing seems to be all the rage these days. I signed up a few days ago, just to see what all the fuss was about. It turns out to be a little nebulous and I'm not sure it's something I'd use all that much. Everyone seems to have a different definition of what Twitter is, and they all seem to work. Mine is that it's a sort of mix between a public IM system and a stripped-down blogging system. It's got some similarities with certain aspects of MySpace and Facebook, but it's much simpler. Here's my twitter:


There's "Friends" and "Followers" and you can update your Twitter via a number of interfaces, including IM Clients, SMS messaging, and the web interface (amongst other similar connections). You can also get updates on such devices. I don't use any of these methods with regularity, though the concept of being able to update Twitter while waiting in line or something seems like a vaguely interesting use of normally wasted time.

I guess the idea is that if you and all your friends are on Twitter, you can keep up with what everyone's doing in one quick and easy place (the default way to read Twitter is with your posts and your friends' posts mixed together on one page). My problem: I don't think any of my friends would be into this. I suppose I could mess around on Twitter and find a bunch of folks that I'd want to keep up with for some reason, but that seems... strange. Why would I want to keep tabs on some stranger?

Jason Kottke claims that this is a huge time-saver and perfect for people who are really busy:
For people with little time, Twitter functions like an extremely stripped-down version of MySpace. Instead of customized pages, animated badges, custom music, top 8 friends, and all that crap, Twitter is just-the-facts-ma'am: where are my friends and what are they up to? ... Twitter seems to work equally well for busy people and not-busy people. It allows folks with little time to keep up with what their friends are up to without having to email and IM with them all day.
I suppose this would be true, though I've been busy lately and have only managed to update Twitter once or twice a day. Naturally, there are some interesting side-projects like Twittervision, which shows updates happening in real time on a map, or Twitterverse, which shows common words and users.

It's an interesting and simple concept, and it could be useful, but I'm not sure how much I'll get into it... It seems like more of a novelty at this point. Anyone else use it?

Update: Some people are using Twitter for unintended uses, and there are some great fictitious Twitterers like Darth Vader. It's interesting how quickly people start pushing the boundaries of new stuff like this and using it for things that were never intended.

Update 4.12.07: Aziz comments. He's using it to power a section of his sidebar, dedicated to songs... a pretty good idea, and using Twitter ("a device-agnostic messaging system," as he calls it) to power it is a good fit.
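
For what it's worth, powering a sidebar like that takes very little code. A hypothetical sketch in Python - it assumes the simple unauthenticated REST interface Twitter exposes for public timelines, and the user_timeline URL below is from memory, so treat it as an assumption and check the API docs:

    import json
    import urllib.request

    # The URL pattern is from memory and may change; "username" is a
    # placeholder for the account whose public timeline you want.
    URL = "http://twitter.com/statuses/user_timeline/username.json?count=5"

    with urllib.request.urlopen(URL) as response:
        tweets = json.load(response)

    # Emit a minimal HTML fragment suitable for a blog sidebar.
    for tweet in tweets:
        print("<li>%s</li>" % tweet["text"])

Run that on a schedule, write the fragment to a file your templates include, and you've got a self-updating sidebar without any Flash caching headaches.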

Oh, and it appears that my little flash badge doesn't really update (it does, but most browsers cache the Flash file, so you won't see new posts unless you clear your cache manually).
Posted by Mark on April 11, 2007 at 09:37 PM .: Comments (0) | link :.


End of This Day's Posts

Wednesday, February 21, 2007

Link Dump
Various links for your enjoyment:
  • The Order of the Science Scouts of Exemplary Repute and Above Average Physique: Like the Boy Scouts, but for Scientists. Aside from the goofy name, they've got an ingenious and hilarious list of badges, including: The "my degree inadvertantly makes me competent in fixing household appliances" badge, The "I've touched human internal organs with my own hands" badge, The "has frozen stuff just to see what happens" badge (oh come on, who hasn't done that?), The "I bet I know more computer languages than you, and I'm not afraid to talk about it" badge (well, I used to know a bunch), and of course, The "dodger of monkey shit" badge ("One of our self explanatory badges."). Sadly, I qualify for fewer of these than I'd like. Of course, I'm not a scientist, but still. I'm borderline on many though (for instance, the "I blog about science" badge requires that I maintain a blog where at least a quarter of the material is about science - I certainly blog about technology a lot, but explicitly science? Debatable, I guess.)
  • Dr. Ashen and Gizmodo Reviews The Gamespower 50 (YouTube): It's a funny review of a crappy portable video game device; just watch it. The games on this thing are so bad (there's actually one called "Grass Cutter," which is exactly what you think it is - a game where you mow the lawn).
  • Count Chocula Vandalism on Wikipedia: Some guy came up with an absurdly comprehensive history for Count Chocula:
    Ernst Choukula was born the third child to Estonian landowers in the late autumn of 1873. His parents, Ivan and Brushken Choukula, were well-established traders of Baltic grain who-- by the early twentieth century--had established a monopolistic hold on the export markets of Lithuania, Latvia and southern Finland. A clever child, Ernst advanced quickly through secondary schooling and, at the age of nineteen, was managing one of six Talinn-area farms, along with his father, and older brother, Grinsh. By twenty-four, he appeared in his first "barrelled cereal" endorsement, as the Choukula family debuted "Ernst Choukula's Golden Wheat Muesli", a packaged mix that was intended for horses, mules, and the hospital ridden. Belarussian immigrant silo-tenders started cutting the product with vodka, creating a crude mush-paste they called "gruhll" or "gruell," and would eat the concoction each morning before work.
    It goes on like that for a while. That particular edit has been removed from the real article, but there appears to actually be quite a debate on the Talk page as to whether or not to mention it in the official article.
  • The Psychology of Security by Bruce Schneier: A long draft of an article that delves into psychological reasons we make the security tradeoffs that we do. Interesting stuff.
  • The Sagan Diary by John Scalzi (Audio Book): I've become a great fan of Scalzi's fiction, and his latest work is available here as audio (a book is available too, but it appears to be a limited run). Since the book is essentially the diary of a woman, he got various female authors and friends to each read a chapter. This actually makes for somewhat uneven listening, as some readers are great and others less so. Now that I think about it, this book probably won't make sense if you haven't read Old Man's War and/or The Ghost Brigades. However, they're both wonderful books of the military scifi school (maybe I'll write a blog post or two about them in the near future).
Posted by Mark on February 21, 2007 at 08:16 PM .: link :.


End of This Day's Posts

Wednesday, February 14, 2007

Intellectual Property, Copyright and DRM
Roy over at 79Soul has started a series of posts dealing with Intellectual Property. His first post sets the stage with an overview of the situation, and he begins to explore some of the issues, starting with the definition of theft. I'm going to cover some of the same ground in this post, and then some other things which I assume Roy will cover in his later posts.

I think most people have an intuitive understanding of what intellectual property is, but it might be useful to start with a brief definition. Perhaps a good place to start would be Article 1, Section 8 of the U.S. Constitution:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;
I started with this for a number of reasons. First, because I live in the U.S. and most of what follows deals with U.S. IP law. Second, because it's actually a somewhat controversial stance. The fact that IP is only secured for "limited Times" is the key. In England, for example, an author does not merely hold a copyright on their work; they have a Moral Right.
The moral right of the author is considered to be -- according to the Berne convention -- an inalienable human right. This is the same serious meaning of "inalienable" the Declaration of Independence uses: not only can't these rights be forcibly stripped from you, you can't even give them away. You can't sell yourself into slavery; and neither can you (in Britain) give the right to be called the author of your writings to someone else.
The U.S. is different. It doesn't grant an inalienable moral right of ownership; instead, it allows copyright. In other words, in the U.S., such works are considered property (i.e. they can be sold, traded, bartered, or given away). This represents a fundamental distinction that needs to be made: systems like Britain's emphasize the author's inherent and permanent rights, while the U.S. system treats the work as transferable property whose protections are deliberately limited. When put that way, the U.S. system sounds pretty awful, except that it was designed for something different: our system was built to advance science and the "useful arts." The U.S. system still rewards creators, but only as a means to an end. Copyright is granted so that there is an incentive to create. However, such protections are only granted for "limited Times." This is because when a copyright is eternal, the system stagnates as protected parties stifle competition (this need not be malicious). Copyright is thus limited so that when a work is no longer protected, it becomes freely available for everyone to use and to build upon. This is known as the public domain.

The end goal here is the advancement of society, and both protection and expiration are necessary parts of the mix. The balance between the two is important, and as Roy notes, one of the things that appears to have upset the balance is technology. This, of course, extends as far back as the printing press, records, cassettes, VHS, and other similar technologies, but more recently, a convergence between new compression techniques and the increasing bandwidth of the internet created an issue. Most new recording technologies were greeted with concern, but physical limitations and costs generally put a cap on the amount of damage that could be done. With computers and large networks like the internet, such limitations became almost negligible. Digital copies of protected works became easy to make and distribute on a very large scale.

The first major issue came up as a result of Napster, a peer-to-peer music sharing service that essentially promoted widespread copyright infringement. Lawsuits followed, and the original Napster service was shut down, only to be replaced by numerous decentralized peer-to-peer systems and darknets. This meant that no single entity could be sued for the copyright infringement that occurred on the network, but it resulted in a number of (probably ill-advised) lawsuits against regular folks (the anonymity of internet technology and the state of recordkeeping being what they are, this sometimes leads to hilarious cases, like when the RIAA sued a 79-year-old guy who doesn't even own a computer or know how to operate one).

Roy discusses the various arguments for or against this sort of file sharing, noting that the essential difference of opinion is the definition of the word "theft." For my part, I think it's pretty obvious that downloading something for free that you'd normally have to pay for is morally wrong. However, I can see some grey area. A few months ago, I pre-ordered Tool's most recent album, 10,000 Days, from Amazon. A friend who already had the album sent me a copy over the internet before I had actually received my copy of the CD. Does this count as theft? I would say no.

The concept of borrowing a book, CD or DVD also seems pretty harmless to me, and I don't have a moral problem with borrowing an electronic copy, then deleting it afterwards (or purchasing it, if I liked it enough), though I can see how such a practice represents a bit of a slippery slope and wouldn't hold up in an honest debate (nor should it). It's too easy to abuse such an argument, or to apply it in retrospect. I suppose there are arguments to be made with respect to making distinctions between benefits and harms, but I generally find those arguments unpersuasive (though perhaps interesting to consider).

There are some other issues that need to be discussed as well. The concept of Fair Use allows limited use of copyrighted material without requiring permission from the rights holders. For example, including a screenshot of a film in a movie review. You're also allowed to parody copyrighted works, and in some instances make complete copies of a copyrighted work. There are rules pertaining to how much of the copyrighted work can be used and in what circumstances, but this is not the venue for such details. The point is that copyright is not absolute and consumers have rights as well.

Another topic that must be addressed is Digital Rights Management (DRM). This refers to a range of technologies used to combat digital copying of protected material. The goal of DRM is to use technology to automatically limit the abilities of a consumer who has purchased digital media. In some cases, this means that you won't be able to play an optical disc on a certain device; in others, it means you can only use the media a certain number of times (among other restrictions).

To be blunt, DRM sucks. For the most part, it benefits no one. It's confusing, it basically amounts to treating legitimate customers like criminals while only barely (if that much) slowing down the piracy it purports to be thwarting, and it's led to numerous disasters and unintended consequences. Essential reading on this subject is this talk given to Microsoft by Cory Doctorow. It's a long but well written and straightforward read that I can't summarize briefly (please read the whole thing). Some details of his argument may be debatable, but as a whole, I find it quite compelling. Put simply, DRM doesn't work and it's bad for artists, businesses, and society as a whole.

Now, the IP industries that are pushing DRM are not that stupid. They know DRM is a fundamentally absurd proposition: the whole point of selling IP media is so that people can consume it. You can't make a system that will prevent people from doing so, as the whole point of having the media in the first place is so that people can use it. The only way to perfectly secure a piece of digital media is to make it unusable (i.e. the only perfectly secure system is a perfectly useless one). That's why DRM systems are broken so quickly. It's not that the programmers are necessarily bad, it's that the entire concept is fundamentally flawed. Again, the IP industries know this, which is why they pushed the Digital Millennium Copyright Act (DMCA). As with most laws, the DMCA is a complex beast, but what it boils down to is that no one is allowed to circumvent measures taken to protect copyright. Thus, even though the copy protection on DVDs is obscenely easy to bypass, it is illegal to do so. In theory, this might be fine. In practice, this law has extended far beyond what I'd consider reasonable and has also been heavily abused. For instance, some software companies have attempted to use the DMCA to prevent security researchers from exposing bugs in their software. The law is sometimes used to silence critics by threatening them with a lawsuit, even though no copyright infringement was committed. The Chilling Effects project seems to be a good source for information regarding the DMCA and its various effects.

DRM combined with the DMCA can be stifling. A good example of how awful DRM is, and how the DMCA can affect the situation, is the Sony Rootkit Debacle. Boing Boing has a ridiculously comprehensive timeline of the entire fiasco. In short, Sony put DRM on certain CDs. The general idea was to prevent people from putting the CDs in their computer and ripping them to MP3s. To accomplish this, Sony surreptitiously installed software on customers' computers (without their knowledge). A security researcher happened to notice this, and in researching the matter found that the Sony DRM had installed a rootkit that made the computer vulnerable to various attacks. Rootkits are black-hat cracker tools used to disguise the workings of malicious software. Attempting to remove the rootkit broke the Windows installation. Sony reacted slowly and poorly, releasing a service pack that supposedly removed the rootkit, but which actually opened up new security vulnerabilities. And it didn't end there. Reading through the timeline is astounding (as a result, I tend to shy away from Sony these days). Though I don't believe he was called on it, the security researcher who discovered these vulnerabilities was technically breaking the law, because the rootkit was intended to protect copyright.

A few months ago, my Windows computer died and I decided to give linux a try. I wanted to see if I could get linux to do everything I needed it to do. As it turns out, I could, but not legally. Watching DVDs on linux is technically illegal, because doing so requires circumventing the copy protection on DVDs. Similar issues exist for other media formats. The details are complex, but in the end, it turns out that I'm not legally able to watch my legitimately purchased DVDs on my computer (I have since purchased a new computer that has an approved player installed). Similarly, if I were to purchase a song from the iTunes Music Store, it comes in a DRMed format. If I want to use that song on a portable device that doesn't support Apple's DRM format (let's say my phone), I'd have to convert it to a format that my portable device could understand, which would be illegal.

Which brings me to my next point: DRM isn't really about protecting copyright. I've already established that it doesn't really accomplish that goal (and indeed, it even works against many of the reasons copyright was put into place), so why is it still being pushed? One can only speculate, but I'll bet that part of the issue has to do with IP owners wanting to "undercut fair use and then create new revenue streams where there were previously none." To continue an earlier example, if I buy a song from the iTunes Music Store and I want to put it on my non-Apple phone (not that I don't want one of those), the music industry would just love it if I were forced to buy the song again, in a format that is readable by my phone. Of course, that format would be incompatible with other devices, so I'd have to purchase the song yet again if I wanted to listen to it on those devices. When put in those terms, it's pretty easy to see why IP owners like DRM, and given the average person's reaction to such a scheme, it's also easy to see why IP owners are always careful to couch the debate in terms of piracy. This won't last forever, but it could be a bumpy ride.

Interestingly enough, distributors of digital media like Apple and Yahoo have recently come out against DRM. For the most part, these are just symbolic gestures. Cynics will look at Steve Jobs' Thoughts on Music and say that he's just passing the buck. He knows customers don't like or understand DRM, so he's just making a calculated PR move by blaming it on the music industry. Personally, I can see that, but I also think it's a very good thing. I find it encouraging that other distributors are following suit, and I also hope and believe this will lead to better things. Apple has proven that there is a large market for legally purchased music files on the internet, and other companies have even shown that selling DRM-free files yields higher sales. Indeed, the eMusic service sells high quality, variable bit rate MP3 files without DRM, and that has established eMusic as the #2 retailer of downloadable music behind the iTunes Music Store. Incidentally, this was not done for pure ideological reasons - it just made business sense. As yet, these pronouncements are only symbolic, but now that online media distributors have established themselves as legitimate businesses, they have ammunition with which to challenge the IP holders. This won't happen overnight, but I think the process has begun.

Last year, I purchased a computer game called Galactic Civilizations II (and posted about it several times). This game was notable to me (in addition to the fact that it's a great game) in that it was the only game I'd purchased in years that featured no CD copy protection (i.e. DRM). As a result, when I bought a new computer, I experienced none of the usual fumbling for 16-digit CD keys that I normally experience when trying to reinstall a game. Brad Wardell, the owner of the company that made the game, explained his thoughts on copy protection on his blog a while back:
I don't want to make it out that I'm some sort of kumbaya guy. Piracy is a problem and it does cost sales. I just don't think it's as big of a problem as the game industry thinks it is. I also don't think inconveniencing customers is the solution.
For him, it's not that piracy isn't an issue, it's that it's not worth imposing draconian copy protection measures that infuriate customers. The game sold much better than expected. I doubt this was because they didn't use DRM, but I can guarantee one thing: People don't buy games because they want DRM. However, this shows that you don't need DRM to make a successful game.

The future isn't all bright, though. Peter Gutmann's excellent Cost Analysis of Windows Vista Content Protection provides a good example of how things could get considerably worse:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called "premium content", typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it's not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server).
This is infuriating. In case you can't tell, I've never liked DRM, but at least it could be avoided. I generally take articles like the one I'm referencing with a grain of salt, but if true, it means that the DRM in Vista is so oppressive that it will raise the price of hardware... And since Microsoft commands such a huge share of the market, hardware manufacturers have to comply, even though some people (linux users, Mac users) don't need the draconian hardware requirements. This is absurd. Microsoft should have enough clout to stand up to the media giants; there's no reason the DRM in Vista has to be so invasive (or even exist at all). As Gutmann speculates in his cost analysis, some of the potential effects of this are particularly egregious, to the point where I can't see consumers standing for it.

My previous post dealt with Web 2.0, and I posted a YouTube video that summarized how changing technology is going to force us to rethink a few things: copyright, authorship, identity, ethics, aesthetics, rhetorics, governance, privacy, commerce, love, family, ourselves. All of these really will need rethinking. Earlier, I wrote that the purpose of copyright was to benefit society, and that protection and expiration were both essential. The balance between protection and expiration has been upset by technology. We need to rethink that balance. Indeed, many people smarter than I already have. The internet is replete with examples of people who have profited by giving things away for free. Creative Commons allows you to share your content so that others can reuse and remix it, but I don't think it has been adopted to the extent that it should be.

To some people, reusing or remixing music, for example, is not a good thing. This is certainly worthy of a debate, and it is a discussion that needs to happen. Personally, I don't mind it. For an example of why, watch this video detailing the history of the Amen Break. There are amazing things that can happen as a result of sharing, reusing and remixing, and that's only a single example. The current copyright environment seems to stifle such creativity, not least because copyright lasts so long (currently the life of the author plus 70 years). In a world where technology has enabled an entire generation to accelerate the creation and consumption of media, it seems foolish to lock up so much material for what could easily be over a century. Despite all that I've written, I have to admit that I don't have a definitive answer. I'm sure I can come up with something that would work for me, but this is larger than me. We all need to rethink this, and many other things. Maybe that Web 2.0 thing can help.

Update: This post has mutated into a monster. Not only is it extremely long, but I reference several other long, detailed documents and even somewhere around 20-25 minutes of video. It's a large subject, and I'm certainly no expert. Also, I generally like to take a little more time when posting something this large, but I figured getting a draft out there would be better than nothing. Updates may be made...

Update 2.15.07: Made some minor copy edits, and added a link to an Ars Technica article that I forgot to add yesterday.
Posted by Mark on February 14, 2007 at 11:44 PM .: link :.


End of This Day's Posts

Sunday, February 11, 2007

Web 2.0 ... The Machine is Us/ing Us
Via The Rodent's Burrow, I come across this YouTube video on Web 2.0:

It's an interesting video, but I have to admit that the term Web 2.0 has always bothered me. This is odd, because obsessing over terminology is also annoying. As you can see, I'm in a bit of a bind here. Web 2.0 has become a shorthand for the current renaissance in web development, one focused on new web services and applications that emphasize social collaboration and openness. That, of course, is a lame definition. Most definitions of Web 2.0 are. However, I think Paul Graham hits the nail on the head in his essay on the subject:
Web 2.0 means using the web the way it's meant to be used. The "trends" we're seeing now are simply the inherent nature of the web emerging from under the broken models that got imposed on it during the Bubble.
Right on. Key to understanding "Web 2.0" is the concept of the internet itself. I should also note that the web and the internet are not the same thing. The internet is a collection of interconnected computer networks (i.e. the physical hardware); the web is a collection of interconnected documents and data that lives on the internet. If you don't understand the historical forces that led to the topology of the internet, "Web 2.0" won't make much sense. The internet is made by human beings, and its history extends back to the 1950s (well, the branch of mathematics that represents our thinking about networks is called graph theory, which finds its roots in the eighteenth century, but the physical internet has its roots in ARPANET, the governmental precursor to the internet - ARPA itself dates to 1958, though the network didn't go live until 1969), but it was not a centrally designed system.

Structurally, the internet is like an ecosystem. It's essentially a self-organizing system, and the gigantic information resource we call the web is the emergent result of billions of interactions. Note that while this information resource was the goal, the system's designers did not go about planning what that information would look like. Their primary strategy was to build an efficient system of collaboration. Sound familiar? "Web 2.0" isn't really new. It's the whole point of the internet. Sure, there are specific technological advances and tools that have accellerated the process (i.e. thanks to AJAX, javascript actually kinda became a legitimate web-based scripting language), but the technology of the internet and the web are just the natural extensions of the grand experiment of life, driven by evolution and selection.

The web isn't all that different, but we are, and we're taking advantage of it.

Update 2.14.07: It seems that this post has kicked off a little discussion of Intellectual Property, starting over at 79Soul with a response by me here.
Posted by Mark on February 11, 2007 at 08:07 PM .: Comments (4) | link :.


End of This Day's Posts

Wednesday, January 10, 2007

iPhone
A couple of years ago, I was in the market for a new phone. After looking around at all the options and features, I ended up settling on a relatively "low-end" phone that was good for calls and SMS and that's about it. It was small, simple, and to the point, and while it has served me well, I have kinda regretted not getting a camera in the phone (this is the paradox of choice in action). I considered camera phones, as well as phones that played music (three birds with one stone!), but it struck me that feature-packed devices like that simply weren't ready yet. They were expensive, clunky, and the interface looked awful.

Enter Apple's new iPhone. Put simply, they've done a phenomenal job with this phone. I'm impressed. Watch the keynote presentation here. Some highlights that I found interesting:
  • Just to mention some of the typical stuff: it's got all the features of a video iPod, it's got a phone, it's got a camera, and it's got the internet. It has an iPod connector, so you can hook it up to your computer and sync all the appropriate info (music, contacts, video, etc...) through iTunes (i.e. an application that everyone is already familiar with because they use it with their iPod.) It runs Mac OSX (presumably a streamlined version) and has a browser, email app, and widgets. Battery life seems very reasonable.
  • Ok enough of the functionality. The functionality is mostly, well, normal. There are smart phones that do all of the above. Indeed, one of the things that worries me about this phone is that by cramming so much functionality into this new phone, Apple will also be muddying the interface... but the interface is what's innovative about this phone. This is what the other smart phones don't do. In short, the interface is a touch screen (no physical keyboard, and no stylus; it takes up the majority of the surface area of a side of the phone and you use your fingers to do stuff. Yes, I said fingers, as in multiple. More later.) This allows them to tailor the interface to the application currently in use. Current smart phones all have physical controls that must stay fixed (i.e. a mini qwerty keyboard, and a set of directional buttons, etc...) and which are there whether you need them for what you're doing or not. By using a touch screen, Apple has solved that problem rather neatly (Those of you familiar with this blog know what's coming, but it'll be a moment).
  • Scrolling looks fun. Go and watch the demo. It looks neat and, more importantly, it appears to be consistent between all the applications (i.e. scrolling your music library, scrolling through your contacts, scrolling down a web page, etc...). Other "multi-touch" operations also look neat, such as the ability to zoom into a web page by squeezing your fingers on the desired area (iPhone loads the actual page, not the WAP version, and allows you to zoom in to read what you want - another smart phone problem solved (yes, yes, it's coming, don't worry)). The important thing about the touch interface is that it is extremely intuitive: you don't need to learn much in order to use this phone.
  • The phone does a few interesting new things. It has a feature they're calling "visual voicemail" which lets you see all of your voicemail, then select which one you want to listen to first (a great feature). It makes conference calls a snap, too. This is honestly something I can't see using that much, but the interface to do it is better than any other conference call interface I've seen, and it's contextual in that you don't have to deal with it until you've got two people on the phone.
  • It's gyroscopic, dude. It has motion sensors that detect the phone's orientation. If you're looking at a picture, and you turn the phone, the picture will turn with you (and if it's a landscape picture, it'll fill more of the screen too). It senses the lighting and adjusts the screen's display to compensate for the environment (saves power, provides better display). When you put the phone by your ear to take a call, it senses that, and deactivates the touchscreen, saving power and avoiding unwanted "touches" on the screen (you don't want your ear to hang up, after all). Another problem solved (wait for it). Unfortunately, the iPhone does not also feature Wiimote functionality (wiiPhone anyone?)
  • Upgradeable Interface: One of the most important things that having a touch screen interface allows Apple to do is provide updates to installed software and even new applications (given that it's running a version of OS X, this is probably a given). Let's say that the interface for browsing contacts is a little off, or the keyboard is spaced wrong. With a physical keyboard on a smart phone, you can't fix that problem without redesigning the whole thing and making the customer purchase a new piece of hardware. The iPhone can just roll out an update.
  • Apple could put Blackberry out of business with this thing, provided that the functionality is there (it appears that it is for Yahoo mail, but will it work with my company? I can't tell just yet.). Blackberries always seemed like a fully featured kludge to me. The iPhone is incredibly elegant in comparison (not that it isn't elegant all by itself). This would also mitigate the whole high price issue: companies might pay for this thing if it works as well as it seems, and people are always more willing to spend their company's money than their own.
Ok, you know what's coming. Human beings don't solve problems. They trade one set of problems for another, in the hopes that the new are better than the old. Despite the fact that I haven't actually used the iPhone, what are some potential issues?
  • The touchscreen: Like the iPod's clickwheel, the iPhone's greatest strength could prove to be its greatest weakness. Touch screens have been in use for years and have become pretty well understood and revised... but they can also be imprecise and, well, touchy. When watching the demo, Steve didn't seem to be having any problem executing various options, but I'm not sure how well the device will be able to distinguish between "I want to scroll" and "I want to select" (unless selecting was a double-tap, but I don't think it was). Designing a new touch screen input interface is a tricky human factors problem, and I'm willing to bet it will take a little while to be perfected. Like the scrollwheel, I can see it being easy to overshoot or select the wrong item. I could certainly be wrong, and I look forward to fiddling with it at the local Mac store to see just how responsive it really is (it's hard to comment on something you've never used). However, I'm betting that (again like the scrollwheel) the touchscreen will be a net positive experience.
  • Durability: Steven Den Beste hits (scroll down) on what I think may be the biggest problem with the touch screen:
    I have some serious concerns about long term reliability of the touch panel. When it's riding inside a woman's purse, for instance, how long before the touch panel gets wrecked? Perhaps there's a soft carrying case for it -- but a lot of people will toss that, and carry the phone bare. Nothing protects that panel, and it covers one of the two largest faces on the unit. There are a thousand environmental hazards which could wreck it: things dropped onto it, or it being dropped onto other things. And if the touch panel goes bad, the rest of the unit is unusable.
    Indeed. iPods are notorious for getting scratched up, especially the screens. How will that impact the display? How will it impact the touch screen?
  • Two hands? It looks like you need to use two hands to do a lot of these touch screen operations (one to hold, the other to gesture). Also, when writing an email, a little qwerty keyboard appears on the touch screen... which is nice, but which also might be difficult to use with one hand or without looking (physical keyboards allow you to figure out what key you're on by touch, and also have little nubs - home keys - which don't translate to the touch screen). I don't know how much of an issue this will be, but it will affect some people (I know someone who will type emails on their Blackberry with one hand, while driving. This is an extreme case, to be sure, but it doesn't seem possible with the touch screen).
  • Zooming: The zooming feature in web browsing is neat, but the page they used in the demo (the NY Times homepage) has 5 columns, which seems ideal for zooming. How will other pages render? Will zooming be as useful? The glimpses at this functionality aren't enough to tell how well it will handle the web... (Google Maps looked great though)
  • Does it do too much? This phone looks amazing, but its price tag is prohibitive for me, especially since I probably won't use a significant portion of the functionality. I love that it does video, and while the 3.5" screen is bigger than my iPod's screen, I have to admit that I've never used the iPod video to watch something (maybe if I travelled more...) Brian Tiemann notes:
    If it weren't for the phone, I would buy this in a heartbeat. As it is, I wish (as does Damien Del Russo) that there were a way to buy it without the Cingular plan, so you could just use it as an iPod with wireless web browsing and e-mail and the like.
    Again, there is a worry that a device that tries to do everything for everyone will end up being mediocre at everything. However, I think Apple has made a very admirable attempt, and the touch screen concept really does cut down on this by allowing applications their own UIs and also allowing updates to those UIs if it becomes necessary. They've done as good a job as I think is possible at this time.
  • Battery Life: This goes along with the "does it do too much" point. I mentioned above that the battery life seems decent, and it does. However, with a device that does this much, I have a feeling that the 5 hours of use they claim will still feel a little short, especially when you're using all that stuff. This is one of the reasons I never seriously considered getting a music/camera/phone a while back: I don't want to run out my batteries playing music, then not be able to make an important call. This is a problem for mobile devices in general, and battery technology doesn't seem to be advancing as rapidly as everything else.
  • Monopoly: This phone will only further cement iTunes' dominant position in the marketplace. Is this a good thing or a bad thing? I go back and forth. Sometimes Apple seems every bit as evil as Microsoft, but then, they also seem a lot more competent too. The Zune looks decent, but it's completely overshadowed by this. We could have a worse monopoly, I guess, but I don't like stuff like DRM (which is reasonable, yes, but still not desirable except insofar as it calms down content owners) and proprietary formats that Apple won't license. Will third parties be able to develop apps for the iPhone? It could certainly be worse, but I'm a little wary.
All in all, it's quite impressive. Most of the potential issues don't seem insurmountable, and I think Apple has a hit on their hands. It should also be interesting to see if other cell phone makers respond in any way. The cell phone market is gigantic (apparently nearly a billion cell phones were sold last year), and it seems like a lot of the best phones are only available overseas. Will we start to see better phones at a cheaper price? Unfortunately, I don't think I'll be getting an iPhone anytime soon, though I will keep a close eye on it. Once they work out the bugs and the price comes down, I'll definitely be tempted.

Updates: Brian Tiemann has further thoughts. Kevin Murphy has some thoughts as well. Ars Technica also notes some issues with the iPhone, and has some other good commentary (actually, just read their Infinite Loop journal). I think the biggest issue I forgot to mention is that the iPhone is exclusive to Cingular (and you have to get a 2 year plan at that).
Posted by Mark on January 10, 2007 at 12:08 AM .: Comments (4) | link :.


End of This Day's Posts

Wednesday, December 27, 2006

Again New Computer
A few weeks ago, I wrote about what I was looking for in a new computer, and various buying options. I had it narrowed down to a few options, but being cognizant of the paradox of choice, I decided on ordering a Prelude system from Maingear, a small custom computer shop that actually had reasonable prices (I got the system I was looking for: Intel Core 2 Duo E6600, 2 GB RAM, 320 GB Hard Drive, etc...). I probably paid a little more than I would have if I just bought all the components and then put it together myself, but I was willing to pay for the convenience of a pre-configured system. Also, unlike other cheap custom PC shops like CyberPowerPC, Maingear has a fantastic reputation for building quality systems and providing excellent support. I'm pleased to report that Maingear lives up to its reputation. Shortly after ordering my PC, they contacted me to confirm a few things and ask if I had any questions or special requests (I understand they'll preinstall various games for you if you want, provided you have the CD Key. Alas, I have no such games, so I didn't get to request this, but that's a neat service.)

They also informed me that they (like every other retailer) were quite busy at this time of the year, but that they would try to get me the PC before Christmas. And it arrived just in the nick of time, on Saturday, December 23 (another Festivus miracle!). It was well packaged, and appeared to be in working order (as compared to a friend's experience with CyberPowerPC where his DVD drive was mounted incorrectly amongst a bunch of other strange problems). The case looks great (I don't know why, but most custom PC cases are very crappy looking or obscenely gaudy):

PC Case

The insides are arranged about as neatly as could be expected, with all the various wires and connectors hidden or tied tightly together. This is nothing short of amazing when compared to my previous computer.

PC Case

And it came with a nice personalized binder that had all of the installation CDs, backup CDs, and documentation for the computer.

PC Case

When I fired up the computer, I was pleased to find that no Windows configuration was really necessary. The desktop was relatively clean (no annoying special offers from AOL, etc...), all the latest patches and updated drivers had been installed, and everything was ready for me to install my favorite apps. As far as performance goes, it appears to be a champ (according to a screenshot they included, it scores a 5453 in 3DMark06 - but I have no frame of reference for telling just how good that is). They also included a copy of Hitman: Blood Money (an unexpected and pleasant bonus), which I've been working my way through (it's one of those annoying DIAS types of games, but hey, I'm not complaining).

All in all, I couldn't be happier with my new computer. For something I use as often as I use my computer, I think it was worth every penny.
Posted by Mark on December 27, 2006 at 06:52 PM .: Comments (2) | link :.


End of This Day's Posts

Monday, December 04, 2006

New Computer
As I've recently mentioned, my old computer isn't doing so well. Built with turn-of-the-century hardware, she's lasted a long time, more than I could really expect. So it's time to get a new computer. As I've also mentioned recently, the number of options for building a new computer is staggering (and the amount of choices can lead to problems). However, with the help of the newly released Ars Technica System Guides (specifically the Hot Rod) and some general research, I should be able to slap something together in relatively short order. After some initial poking around, here's what I'm looking for: I'm leaning pretty close to the recommendations from Ars Technica, with only a few minor tweaks. They claim their Hot Rod rig can be had for around $1622.71, but when you add in shipping, an OS, and my tweaks, I'm betting that's more like $1800. Of course, I'll have to order all this stuff, assemble it, and install the OS, which will probably take a few hours, so let's make a conservative estimate of around $2000 (I'm valuing my time at around $50 an hour here). Not too shabby, and it's a pretty impressive PC. So is it worth putting it together myself, or can I order a comparable system from somewhere else that is cheaper and/or easier? Let's take a look at my options:
  • Dell: A comparably configured XPS 410 system comes in around $2200. The only major addition here is the 2 year warranty and support.
  • HP: Well, the HP Pavilion d4650y series computer I configured came in at a pretty cheap $1600. However, I wasn't able to get the GeForce 7950GT 512 MB and had to settle for a 256 MB card (I'm sure there are other computer models that I could configure, but this seemed reasonable enough).
  • CyberPowerPC.com: A comparably configured Intel Core 2 Duo Custom Build machine runs about $2017. They also have a 3 year limited warranty and support. However, I should note that a friend recently purchased a PC through CyberPower and was thoroughly dissatisfied: several incorrectly installed pieces of hardware as well as an OS that had to be reinstalled. From online reviews, their support seems notoriously bad. However, it's difficult to tell with online reviews sometimes. The good reviews outnumber the bad. I'm still considering these guys because they can save me some time and energy without having to really pay too much. However, I'm guessing that I'll have to do some mucking around with the hardware and software, which would put the price up a bit when you consider time and effort.
  • ABS.com: ABS is the parent company of newegg and has a mildly better reputation than CyberPowerPC. However, the price here comes to around $2200, and it wasn't exactly what I wanted.
  • Maingear: Most high end brands or boutiques like Alienware, Voodoo, or Hypersonic can get pretty expensive (easily $3000+), but Maingear was surprisingly reasonable. I was able to configure their Prelude system to what I wanted for around $2050. With some fiddling, and perhaps purchasing some components separately, I think I could drive that down a bit. Also, unlike CyberPowerPC (or Dell for that matter), these guys seem to have a stellar reputation (there are only 10 ratings on ResellerRatings, but they're all great reviews and they also seem to be consistent with professional reviews). Their service and support appear to be good as well. I've got a good feeling about these guys, and I'm glad I'm writing this entry because I probably wouldn't have found them otherwise.
So I'm looking closer at Maingear and if that doesn't work out, it looks like I'm putting it together myself, unless anyone else has a better idea (if you do, leave a comment below). I'm going to hold off a few days before actually placing any orders, but I think I'll be happy with what I'm getting.

Update: After some fiddling, I got the Maingear PC down to around $1800 without a monitor. I'm also getting a lightscribe DVD burner, which is a totally frivolous expense (extra $70), but pretty neat too.
Posted by Mark on December 04, 2006 at 09:16 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, November 19, 2006

Link Dump
Time is short this week, so a few quick links:
  • The 1,000 Greatest Films: Aggregated from 1,193 individual critics' and filmmakers' top-ten lists. They've got all sorts of different ways to look at the numbers, including a way to keep track of which ones you have seen. As you might expect, the list is diverse and somewhat contentious, with lots of foreign films and some very questionable choices. There are tons of films I've never even heard of. The list is somewhat skewed towards older films, as they use some older lists (some of the lists used are as old as 1952), but then, that's still to be expected. Older films tend to get credit for their importance, and not as much because of their entertainment value today (I'm horribly understating this issue, which could probably use a blog entry of its own). As an aside, the list sometimes reads like the Criterion Collection catalog, which is pretty funny. I used the listkeeper site (which is pretty neat and might help make these types of memes a little easier to deal with), and I've apparently seen somewhere around 16% of the list. Given the breadth of the films covered in the list, I think that's pretty impressive (though I'll probably never get past 30%).
  • Shuttle Launch Seen From ISS: Photos of a Space Shuttle launch as seen from the International Space Station. Neato.
  • A Million Random Digits with 100,000 Normal Deviates: Ok, so this is a book comprised solely of a bunch of random numbers, and that's it. Nothing funny or entertaining there, except the Amazon reviewers are having a field day with it. My favorite review:
    The book is a promising reference concept, but the execution is somewhat sloppy. Whatever algorithm they used was not fully tested. The bulk of each page seems random enough. However at the lower left and lower right of alternate pages, the number is found to increment directly.
    Ahhh, geek humor. [via Schneier]
  • BuzzFeed: A new aggregator that features "movies, music, fashion, ideas, technology, and culture" that are generating buzz (in the form of news stories and blog posts, etc...). It's an interesting idea as it's not really a breaking news site, but it seems to have its finger on the pulse of what folks are talking about (on the homepage now are sections on the Wii, PS3, Borat, and (of course) Snoop Dogg's new line of pet clothing). It's not like Digg or Reddit, and thus it doesn't suffer from a lot of their issues (unless they branch out into politics and religion). I'm sure some people will try to game the system, but it seems inherently more secure against such abuse.
That's all for now.

Update: This Lists of Bests website is neat. It remembers what movies you've seen, and applies them to other lists. For example, without even going through the AFI top 100, I know that I've seen at least 41% of the list (because of all the stuff I noted when going through the top 1000). You can also compare yourself with other people on the site, and invite others to do so as well. Cool stuff.
Posted by Mark on November 19, 2006 at 10:59 PM .: Comments (2) | link :.


End of This Day's Posts

Friday, November 17, 2006

Bag O' Crap: Close, but no cigar
The term "woot" (or more accurately, "w00t") is slang for expressing excitement, usually on the internet (especially popular in chat and video games). The etymology is a little unclear (many speculated origins), but the word itself just sounds celebratory. In any case, there is an online store that has appropriated the term and "focuses on selling cool stuff cheap." They basically sell one item a day, and that's it. Talk about your simple concepts. I should also mention that their product descriptions are awesome - they have a lot of fun with it, so that even though I don't think I've ever bought a Woot, I still stop by frequently. For instance, a while ago, their description for a JVC Camcorder was written as a letter from Osama Bin Laden to his subordinates:
To: Media Relations Division
From: OBL

Well guys, we're starting to see the infidel press reviews of our latest audio release, and they're not good. First of all, the heathens had to subject the thing to two days' worth of analysis just to be sure it was my voice! Then CNN said "Poor quality." CBS called it "Insignificant." And the most devastating criticism of all came from Pitchfork Media: "badly-recorded, smug pontificating for those who find the spoken-word releases of Jello Biafra too funny and incisive." They gave it a 2.4! No distributor will touch it now!
Heh. Anyway, when that item sells out, the site starts selling alternate items in what is called a "Woot-Off." These alternate items are typically in shorter stock than the original Woot, so they don't usually last long, and you see a lot of items during the rest of the day (as each Woot-Off item sells out, it is replaced by the next item, and so on).

Now, the holy grail of Woot is this thing called the Bag O' Crap. Basically, instead of selling an item, they offer a grab bag that is typically filled with dollar store junk, but which sometimes contains things of significant value (I heard of someone getting a decent quality graphics card in a BOC). Naturally, this is a popular item, and it usually sells out within minutes. I have never even seen one, though I always know when I've missed it. Quite frustrating, but today was different. I go to Woot this afternoon, and I get a "Server Too Busy" error message. This essentially means that they're selling a BOC, and everyone is going to the site in a furious attempt to purchase one (well, typically you purchase 3 at a time), clogging up their servers. A few reloads later, and I see it (click for larger image):

Woot: Bag O Crap (click for larger image)


Overjoyed, I attempted to get one. After several minutes of tense refreshing to get past server errors, I finally get to the page where you confirm your order, I click, and I get the message:
Sorry, we're now sold out of this item or we don't have enough left to complete your order.
Khaaan! You win this round, Woot. But I'll be back. I'll get that Bag O' Crap someday.
Posted by Mark on November 17, 2006 at 07:14 PM .: Comments (2) | link :.


End of This Day's Posts

Sunday, November 12, 2006

Stupid T-Shirt
How awesome is the internet? A little while ago, I was watching David Fincher's far-fetched but entertaining thriller, The Game. If you haven't seen the film, there are spoilers ahead.

At the end of the movie, some pretty unlikely things happen, but it's a lot of fun, and I think most audiences let it slide. One of the funny moments at the end is when a character gives Michael Douglas' character a t-shirt which describes his experiences. After watching the movie, I thought it would make a pretty funny t-shirt... but I couldn't remember exactly what the shirt said. Naturally, I turned to the internet. Not only was I able to figure out what it said (from multiple sites), I also found a site that actually sells the shirt.

The Game t-shirt: I was drugged and left for dead in Mexico - And all I got was this stupid T-shirt.

They've even got a screenshot from the movie. Alas, it's a bit pricey for such a simple shirt. Still, the idea that such a shirt would be anything more than some custom thing a film nerd whipped up is pretty funny. I mean, how many people would even get the reference?
Posted by Mark on November 12, 2006 at 09:45 PM .: link :.


End of This Day's Posts

Sunday, November 05, 2006

Choice, Productivity and Feature Bloat
Jakob Nielsen's recent column on productivity and screen size referenced an interesting study comparing a feature-rich application with a simpler one:
The distinction between operations and tasks is important in application design because the goal is to optimize the user interface for task performance, rather than sub-optimize it for individual operations. For example, Judy Olson and Erik Nilsen wrote a classic paper comparing two user interfaces for large data tables. One interface offered many more features for table manipulation and each feature decreased task-performance time in specific circumstances. The other design lacked these optimized features and was thus slower to operate under the specific conditions addressed by the first design's special features.

So, which of these two designs was faster to use? The one with the fewest features. For each operation, the planning time was 2.9 seconds in the stripped-down design and 4.6 seconds in the feature-rich design. With more choices, it takes more time to make a decision on which one to use. The extra 1.7 seconds required to consider the richer feature set consumed more time than users saved by executing faster operations.
In this case, more choices mean less productivity. So why aren't all of our applications much smaller and less feature-intensive? Well, as I went over a few weeks ago, people tend to overvalue measurable things like features and undervalue less tangible aspects like usability and productivity. Here's another reason we endure feature bloat:
A lot of software developers are seduced by the old "80/20" rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.

Unfortunately, it's never the same 20%. Everybody uses a different set of features. In the last 10 years I have probably heard of dozens of companies who, determined not to learn from each other, tried to release "lite" word processors that only implement 20% of the features. This story is as old as the PC.
That quote is from a relatively old article, and when I first read it, I still didn't get why you couldn't create a "lite" word processor that would be significantly smaller than Word, but still get the job done. Then I started using several of the more obscure features of Word, notably the "Track Changes" feature (which was a life saver at the time), which never would have made it into a "lite" version (yes, there are other options for collaborative editing these days, but you gotta use what you have at hand at the time). Add in the ever increasing computer power and ever decreasing cost of memory and storage, and feature bloat looks like less of a problem. However, as this post started out by noting, productivity often suffers as a result (and as Nielsen's article shows, productivity is more difficult to measure than counting a list of features).
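To make the study's arithmetic concrete, here's a toy model of the tradeoff. The planning times (2.9 and 4.6 seconds) come from the figures quoted above; the execution times and operation count are numbers I made up purely for illustration:

    // Toy model: total task time = operations * (planning + execution).
    // Planning times are the Olson/Nilsen figures quoted above;
    // execution times and the operation count are assumed.
    function totalTime(planningSec, executionSec, operations) {
        return operations * (planningSec + executionSec);
    }
    var simple = totalTime(2.9, 2.0, 100); // 490 seconds
    var rich   = totalTime(4.6, 1.0, 100); // 560 seconds
    // Even if each feature-rich operation executes a full second
    // faster, the extra 1.7 seconds of decision time per operation
    // swamps the savings.

The numbers are invented, but the shape of the result is the study's point: decision overhead is paid on every single operation, so it doesn't take much of it to erase per-operation gains.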

The one approach for dealing with "featuritis" that seems to be catching on these days is starting with your "lite" version, then allowing people to install plugins to fill in the missing functionality (a toy sketch of the idea follows below). This is one of the things that makes Firefox so popular, as it not only allows plugins, it actually encourages users to create their own. Alas, this has led to choice problems of its own. One of my required features for any browser that I would consider for personal use is mouse gestures. Firefox has at least 4 extensions available that implement mouse gestures in one way or another (though it's not immediately obvious what the differences are, and there appear to be other extensions which utilize mouse gestures for other functions). By contrast, my other favorite browser, Opera, natively supports mouse gestures.
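To illustrate what "lite core plus plugins" amounts to structurally, here's a toy sketch in javascript. Every name in it is made up; it's just the shape of the design, not anyone's actual extension API:

    // Toy "lite core + plugins" sketch: the core does almost nothing,
    // and features are bolted on by registering plugins.
    var editor = {
        plugins: [],
        use: function (plugin) { this.plugins.push(plugin); },
        open: function (text) {
            this.text = text;
            for (var i = 0; i < this.plugins.length; i++) {
                this.plugins[i](this); // let each plugin extend the editor
            }
        }
    };
    // A "word count" feature lives in a plugin, not in the core:
    editor.use(function (ed) {
        ed.wordCount = function () { return ed.text.split(/\s+/).length; };
    });
    editor.open("the quick brown fox");
    // editor.wordCount() -> 4

The appeal is that users who never need word counting never pay for it, in interface clutter or decision time; the cost, as noted above, is that the choosing gets pushed onto the user at plugin-installation time instead.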

Of course, this is not a new approach to the feature bloat problem. Indeed, as far as I can see, this is one of the primary driving forces behind *nix-based applications. Their text editors don't have a word count feature because there is already a utility for doing so (command line: wc [filename]). And so on. It's part of *nix's modular design, and it's one of the things that makes it great, but it also presents problems of its own (which I belabored at length last week).

In the end, it comes down to tradeoffs. Humans don't solve problems, they exchange problems, and so on. Right now, the plugin strategy seems to make a reasonable tradeoff, but it certainly isn't perfect.
Posted by Mark on November 05, 2006 at 11:50 PM .: link :.


End of This Day's Posts

Sunday, October 29, 2006

Adventures in Linux, Paradox of Choice Edition
Last week, I wrote about the paradox of choice: having too many options often leads to something akin to buyer's remorse (paralysis, regret, dissatisfaction, etc...), even if the choice ultimately made was a good one. I had attended a talk given by Barry Schwartz on the subject (which he's written a book about) and I found his focus on the psychological impact of making decisions fascinating. In the course of my ramblings, I made an offhand comment about computers and software:
... the amount of choices in assembling your own computer can be stifling. This is why computer and software companies like Microsoft, Dell, and Apple (yes, even Apple) insist on mediating the user's experience with their hardware & software by limiting access (i.e. by limiting choice). This turns out to be not so bad, because the number of things to consider really is staggering.
The foolproofing that these companies do can sometimes be frustrating, but for the most part, it works out well. Linux, on the other hand, is the poster child for freedom and choice, and that's part of why it can be a little frustrating to use, even if it is technically a better, more stable operating system (I'm sure some OSX folks will get a bit riled with me here, but bear with me). You see this all the time with open source software, especially when switching from regular commercial software to open source.

One of the admirable things about Linux is that it is very well thought out and every design decision is usually done for a specific reason. The problem, of course, is that those reasons tend to have something to do with making programmers' lives easier... and most regular users aren't programmers. I dabble a bit here and there, but not enough to really benefit from these efficiencies. I learned most of what I know working with Windows and Mac OS, so when some enterprising open source developer decides that he doesn't like the way a certain Windows application works, you end up seeing some radical new design or paradigm which needs to be learned in order to use it. In recent years a lot of work has gone into making Linux friendlier for the regular user, and usability (especially during the installation process) has certainly improved. Still, a lot of room for improvement remains, and I think part of that has to do with the number of choices people have to make.

Let's start at the beginning and take an old Dell computer that we want to install Linux on (this is basically the computer I'm running right now). First question: which distribution of Linux do we want to use? Well, to be sure, we could start from scratch and just install the Linux kernel and build upwards from there (which would make the process I'm about to describe even more difficult). However, even Linux has its limits, so there are lots of distributions of Linux which package the OS, desktop environments, and a whole bunch of software together. This makes things a whole lot easier, but at the same time, there are a ton of distributions to choose from. The distributions differ in a lot of ways for various reasons, including technical (issues like hardware support), philosophical (some distros pooh-pooh commercial involvement) and organizational (things like support and updates). These are all good reasons, but when it's time to make a decision, what distro do you go with? Fedora? Suse? Mandriva? Debian? Gentoo? Ubuntu? A quick look at Wikipedia reveals a comparison of Linux distros, but there are a whopping 67 distros listed and compared in several different categories. Part of the reason there are so many distros is that there are a lot of specialized distros built off of a base distro. For example, Ubuntu has several distributions, including Kubuntu (which defaults to the KDE desktop environment), Edubuntu (for use in schools), Xubuntu (which uses yet another desktop environment called Xfce), and, of course, Ubuntu: Christian Edition (Linux for Christians!).

So here's our first choice. I'm going to pick Ubuntu, primarily because their tagline is "Linux for Human Beings" and hey, I'm human, so I figure this might work for me. Ok, and it has a pretty good reputation for being an easy to use distro focused more on users than things like "enterprises."

Alright, the next step is to choose a desktop environment. Lucky for us, this choice is a little easier, but only because Ubuntu splits desktop environments into different distributions (unlike many others which give you the choice during installation). For those who don't know what I'm talking about here, I should point out that a desktop environment is basically an operating system's GUI - it uses the desktop metaphor and includes things like windows, icons, folders, and abilities like drag-and-drop. Microsoft Windows and Mac OSX are desktop environments, but they're relatively locked down (to ensure consistency and ease of use (in theory, at least)). For complicated reasons I won't go into, Linux has a modular system that allows for several different desktop environments. As with linux distributions, there are many desktop environments. However, there are really only two major players: KDE and Gnome. Which is better appears to be a perennial debate amongst linux geeks, but they're both pretty capable (there are a couple of other semi-popular ones like Xfce and Enlightenment, and then there's the old standby, twm (Tom's Window Manager)). We'll just go with the default Gnome installation.

Note that we haven't even started the installation process and if we're a regular user, we've already made two major choices, each of which will make you wonder things like: Would I have this problem if I installed Suse instead of Ubuntu? Is KDE better than Gnome?

But now we're ready for installation. This, at least, isn't all that bad, depending on the computer you're starting with. Since we're using an older Dell model, I'm assuming that the hardware is fairly standard stuff and that it will all be supported by my distro (if I were using a more bleeding-edge type box, I'd probably want to check out some compatibility charts before installing). As it turns out, Ubuntu and its focus on creating a distribution that human beings can understand has a pretty painless installation. It was actually a little easier than Windows, and when I was finished, I didn't have to remove the mess of icons and trial software offers (purchasing a Windows PC through someone like HP is apparently even worse). When you're finished installing Ubuntu, you're greeted with a desktop that looks like this (click the pic for a larger version):

Default Ubuntu Desktop (click for larger)

No desktop clutter, no icons, no crappy trial software. It's beautiful! It's a little different from what we're used to, but not horribly so. Windows users will note that there are two bars, one on the top and one on the bottom, but everything is pretty self explanatory and this desktop actually improves on several things that are really strange about Windows (i.e. to turn off your computer, first click on "Start!"). Personally, I think having two toolbars is a bit much so I get rid of one of them, and customize the other so that it has everything I need (I also put it at the bottom of the screen for several reasons I won't go into here as this entry is long enough as it is).

Alright, we're almost home free, and the installation was a breeze. Plus, lots of free software has been installed, including Firefox, Open Office, and a bunch of other good stuff. We're feeling pretty good here. I've got most of my needs covered by the default software, but let's just say we want to install Amarok, so that we can update our iPod. Now we're faced with another decision: How do we install this application? Since Ubuntu has so thoughtfully optimized their desktop for human use, one of the things we immediately notice in the "Applications" menu is an option which says "Add/Remove..." and when you click on it, a list of software comes up and it appears that all you need to do is select what you want and it will install it for you. Sweet! However, the list of software there doesn't include every program, so sometimes you need to use the Synaptic package manager, which is also a GUI application installation program (though it appears to break each piece of software into smaller bits). Also, in looking around the web, you see that someone has explained that you should download and install software by typing this in the command line: apt-get install amarok. But wait! We really should be using the aptitude command instead of apt-get to install applications.
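
For reference, the command-line routes amount to something like this (amarok being the package from our example; you'd run the update first so your package lists are current):

sudo apt-get update            # refresh the package lists from your repositories
sudo apt-get install amarok    # the classic route
sudo aptitude install amarok   # the recommended route, same end result

Both commands resolve dependencies and fetch everything from whatever repositories you have configured; aptitude is generally said to keep slightly better track of what it installed so it can clean up after itself later.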

If you're keeping track, that's four different ways to install a program, and I haven't even gotten into repositories (main, restricted, universe, multiverse, oh my!), downloadable package files (these operate more or less the way a Windows user would download a .exe installation file, though not exactly), let alone downloading the source code and compiling (sounds fun, doesn't it?). To be sure, they all work, and they're all pretty easy to figure out, but there's little consistency, especially when it comes to support (most of the time, you'll get a command line in response to a question, which is completely at odds with the expectations of someone switching from Windows). Also, in the case of Amarok, I didn't fare so well (for reasons belabored in that post).

Once installed, most software works pretty much the way you'd expect. As previously mentioned, open source developers sometimes get carried away with their efficiencies, which can sometimes be confusing to a newbie, but for the most part, it works just fine. There are some exceptions, like the absurd Blender, but then Blender isn't exactly a hugely popular application that everyone needs.

Believe it or not, I'm simplifying here. There are that many choices in Linux. Ubuntu tries its best to make things as simple as possible (with considerable success), but when using Linux, it's inevitable that you'll run into something that requires you to break down the metaphorical walls of the GUI and muck around in the complicated swarm of text files and command lines. Again, it's not that difficult to figure this stuff out, but all these choices contribute to the same decision fatigue I discussed in my last post: anticipated regret (there are so many distros - I know I'm going to choose the wrong one), actual regret (should I have installed Suse?), dissatisfaction, escalation of expectations (I've spent so much time figuring out what distro to use that it's going to perfectly suit my every need!), and leakage (i.e. a bad installation process will affect what you think of a program, even after installing it - your feelings before installing leak into the usage of the application).

None of this is to say that Linux is bad. It is free, in every sense of the word, and I believe that's a good thing. But if they ever want to create a desktop that will rival Windows or OSX, someone needs to create a distro that clamps down on some of these choices. Or maybe not. It's hard to advocate something like this when you're talking about software that is so deeply predicated on openness and freedom. However, as I concluded in my last post:
Without choices, life is miserable. When options are added, welfare is increased. Choice is a good thing. But too much choice causes the curve to level out and eventually start moving in the other direction. It becomes a matter of tradeoffs. Regular readers of this blog know what's coming: We don't so much solve problems as we trade one set of problems for another, in the hopes that the new set of problems is more favorable than the old.
Choice is a double-edged sword, and by embracing that freedom, Linux has to deal with the bad as well as the good (just as Microsoft and Apple have to deal with the bad aspects of suppressing freedom and choice). Is it possible to create a Linux distro that is as easy to use as Windows or OSX while retaining the openness and freedom that makes it so wonderful? I don't know, but it would certainly be interesting.
Posted by Mark on October 29, 2006 at 07:18 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, October 22, 2006

The Paradox of Choice
At the UI11 Conference I attended last week, one of the keynote presentations was made by Barry Schwartz, author of The Paradox of Choice: Why More Is Less. Though he believes choice to be a good thing, his presentation focused more on the negative aspects of offering too many choices. He walks through a number of examples that illustrate the problems with our "official syllogism" which is:
  • More freedom means more welfare
  • More choice means more freedom
  • Therefore, more choice means more welfare
In the United States, we have operated as if this syllogism is unambiguously true, and as a result, we're deluged with choices. Just take a look at a relatively small supermarket: there are 285 cookies, 75 iced teas, 275 cereals, 40 toothpastes, 230 soups, and 175 salad dressings (not including 12 extra virgin olive oils and 18 vinegars which could be combined to make hundreds of vinaigrettes) to choose from. At your typical Circuit City, the sheer breadth of stereo components allows you to create any one of 6.5 million possible stereo systems. And this applies all throughout our lives, extending even to working, marriage, and whether or not to have children. In the past, these things weren't much of a question. Today, everything is a choice. [thanks to Jesper Rønn-Jensen for his notes on Schwartz's talk - it's even got pictures!]

So how do we react to all these choices? Luke Wroblewski provides an excellent summary, which I will partly steal (because, hey, he's stealing from Schwartz after all):
  • Paralysis: When faced with so many choices, people are often overwhelmed and put off the decision. I often find myself in such a situation: Oh, I don't have time to evaluate all of these options, I'll just do it tomorrow. But, of course, tomorrow is usually not so different than today, so you see a lot of procrastination.
  • Decision Quality: Of course, you can't procrastinate forever, so when forced to make a decision, people will often use simple heuristics to evaluate the field of options. In retail, this often boils down to evaluation based mostly on Brand and Price. I also read a recent paper on feature fatigue (full article not available, but the abstract is there) that fits nicely here.

    In fields where there are many competing products, you see a lot of feature bloat. Loading a product with all sorts of bells and whistles will differentiate that product and often increase initial sales. However, all of these additional capabilities come at the expense of usability. What's more, even when people know this, they still choose high-feature models. The only thing that really helps is when someone actually uses a product for a certain amount of time, at which point they realize that they either don't use the extra features or that the tradeoffs in terms of usability make the additional capabilities considerably less attractive. Part of the problem is perhaps that usability is an intangible and somewhat subjective attribute of a product. Intellectually, everyone knows that it is important, but when it comes down to decision-time, most people base their decisions on something that is more easily measured, like number of features, brand, or price. This is also part of why focus groups are so bad at measuring usability. I've been to a number of focus groups that start with a series of exercises in front of a computer, then end with a roundtable discussion about their experiences. Usually, the discussion was completely at odds with what the people actually did when in front of the computer. Watch what they do, not what they say...
  • Decision Satisfaction: When presented with a lot of choices, people may actually do better for themselves, yet they often feel worse due to regret or anticipated regret. Because people resort to simplifying their decision making process, and because they know they're simplifying, they might also wonder if one or more of the options they cut was actually better than what they chose. A little while ago, I bought a new cell phone. I actually did a fair amount of work evaluating the options, and I ended up going with a low-end no-frills phone... and instantly regretted it. Of course, the phone itself wasn't that bad (and for all I know, it was better than the other phones I passed over), but I regret dismissing some of the other options, such as the camera (how many times over the past two years have I wanted to take a picture and thought Hey, if I had a camera on my phone I could have taken that picture!).
  • Escalation of expectations: When we have so many choices and we do so much work evaluating all the options, we begin to expect more. When things were worse (i.e. when there were fewer choices), it was much easier to exceed expectations. In the cell phone example above, part of the regret was no doubt fueled by the fact that I spent a lot of time figuring out which phone to get.
  • Maximizer Impact: There are some people who always want to have the best, and the problems inherent in too many choices hit these people the hardest.
  • Leakage: The conditions present when you're making a decision exert influence long after the decision has actually been made, contributing to the dissatisfaction (i.e. regret, anticipated regret) and escalation of expectations outlined above.
As I was watching this presentation, I couldn't help but think of various examples in my own life that illustrated some of the issues. There was the cell phone choice which turned out badly, but I also thought about things I had chosen that had come out well. For example, about a year ago, I bought an iPod, and I've been extremely happy with it (even though it's not perfect), despite the fact that there were many options which I considered. Why didn't the process of evaluating all the options evoke a feeling of regret? Because my initial impulse was to purchase the iPod, and I looked at the other options simply out of curiosity. I also had the opportunity to try out some of the players, and that experience helped enormously. And finally, the one feature that had given me pause was video, which wasn't available on the iPod when I started looking around but was one of the selling points of the Cowon iAudio X5. As it turned out, about a week later the Video iPod was released and made my decision very easy. I got that and haven't looked back since. The funny thing is that since I've gotten that iPod, I haven't used the video feature for anything useful. Not even once.

Another example is my old PC which has recently kicked the bucket. I actually assembled that PC from a bunch of parts, rather than going through a mainstream company like Dell, and the number of components available would probably make the Circuit City stereo example I gave earlier look tiny by comparison. Interestingly, this diversity of choices for PCs is often credited as part of the reason PCs overtook Macs:
Back in the early days of Macintoshes, Apple engineers would reportedly get into arguments with Steve Jobs about creating ports to allow people to add RAM to their Macs. The engineers thought it would be a good idea; Jobs said no, because he didn't want anyone opening up a Mac. He'd rather they just throw out their Mac when they needed new RAM, and buy a new one.

Of course, we know who won this battle. The "Wintel" PC won: The computer that let anyone throw in a new component, new RAM, or a new peripheral when they wanted their computer to do something new. Okay, Mac fans, I know, I know: PCs also "won" unfairly because Bill Gates abused his monopoly with Windows. Fair enough.

But the fact is, as Hill notes, PCs never aimed at being perfect, pristine boxes like Macintoshes. They settled for being "good enough" -- under the assumption that it was up to the users to tweak or adjust the PC if they needed it to do something else.
But as Schwartz would note, the amount of choices in assembling your own computer can be stifling. This is why computer and software companies like Microsoft, Dell, and Apple (yes, even Apple) insist on mediating the user's experience with their hardware by limiting access (i.e. by limiting choice). This turns out to be not so bad, because the number of things to consider really is staggering. So why was I so happy with my computer? Because I really didn't make many of the decisions - I simply went over to Ars Technica's System Guide and used their recommendations. When it comes time to build my next computer, what do you think I'm going to do? Indeed, Ars is currently compiling recommendations for their October system guide, due out sometime this week. My new computer will most likely be based off of their "Hot Rod" box. (Linux presents some interesting issues in this context as well, though I think I'll save that for another post.)

So what are the lessons here? One of the big ones is to separate the analysis from the choice by getting recommendations from someone else (see the Ars Technica example above). In the market for a digital camera? Call a friend (preferably one who is into photography) and ask them what to get. Another thing that strikes me is that just knowing about this can help you overcome it to a degree. Try to keep your expectations in check, and you might open up some room for pleasant surprises (doing this is surprisingly effective with movies). If possible, try using the product first (borrow a friend's, use a rental, etc...). Don't try to maximize the results so much; settle for things that are good enough (this is what Schwartz calls satisficing).

Without choices, life is miserable. When options are added, welfare is increased. Choice is a good thing. But too much choice causes the curve to level out and eventually start moving in the other direction. It becomes a matter of tradeoffs. Regular readers of this blog know what's coming: We don't so much solve problems as we trade one set of problems for another, in the hopes that the new set of problems is more favorable than the old. So where is the sweet spot? That's probably a topic for another post, but my initial thoughts are that it would depend heavily on what you're doing and the context in which you're doing it. Also, if you were to take a wider view of things, there's something to be said for maximizing options and then narrowing the field (a la the free market). Still, the concept of choice as a double-edged sword should not be all that surprising... after all, freedom isn't easy. Just ask Spider-Man.
Posted by Mark on October 22, 2006 at 10:56 AM .: Comments (2) | link :.


End of This Day's Posts

Sunday, October 15, 2006

Link Dump
I've been quite busy lately so once again it's time to unleash the chain-smoking monkey research squad and share the results:
  • The Truth About Overselling!: Ever wonder how web hosting companies can offer obscene amounts of storage and bandwidth these days? It turns out that these web hosting companies are offering more than they actually have. Josh Jones of Dreamhost explains why this practice is popular and how they can get away with it (short answer - most people emphatically don't use or need that much bandwidth).
  • Utterly fascinating pseudo-mystery on Metafilter. Someone got curious about a strange flash advertisement, and a whole slew of people started investigating, analyzing the flash file, plotting stuff on a map, etc... Reminded me a little of that whole Publius Enigma thing [via Chizumatic].
  • Weak security in our daily lives: "Right now, I am going to give you a sequence of minimal length that, when you enter it into a car's numeric keypad, is guaranteed to unlock the doors of said car. It is exactly 3129 keypresses long, which should take you around 20 minutes to go through." [via Schneier]
  • America's Most Fonted: The 7 Worst Fonts: Fonts aren't usually a topic of discussion here, but I thought it was funny that the Kaedrin logo (see upper left hand side of this page) uses the #7 worst font. But it's only the logo and that's ok... right? RIGHT?
  • Architecture is another topic rarely discussed here, but I thought that the new trend of secret rooms was interesting. [via Kottke]
That's all for now. Things appear to be slowing down, so that will hopefully mean more time for blogging (i.e. fewer link-dumpy type posts).
Posted by Mark on October 15, 2006 at 11:09 PM .: link :.


End of This Day's Posts

Sunday, October 08, 2006

Linux Humor & Blog Notes
I'll be attending the User Interface 11 conference this week, and as such, won't have much time to check in. Try not to wreck the place while I'm gone. Since I'm off to the airport in fairly short order (why did I schedule a flight to conflict with the Eagles/Cowboys matchup? Dammit!) here's a quick comic with some linux humor:

sudo make me a sandwich


The author, Randall Munroe, is a NASA scientist who has a keen sense of humor (and is apparently deathly afraid of raptors) and publishes a new comic a few times a week. The comic above is one of his most popular, and even graces one of his T-shirts (I also like the "Science. It works, bitches." shirt).

I'm sure I'll be able to wrangle some internet access during the week, but chances are that it will be limited (I need to get me a laptop at some point). I'll be back late Thursday night, so posting will probably resume next Sunday.
Posted by Mark on October 08, 2006 at 03:17 PM .: Comments (0) | link :.


End of This Day's Posts

Tuesday, October 03, 2006

Adventures in Linux, iPod edition
Last weekend, my Windows machine died and I decided to give linux a shot. My basic thought was that if I could get a linux box to do everything I need, why bother getting another copy of windows? So I cast about looking for applications to fulfill my needs, and thus found myself on Mark Pilgrim's recently updated list of linux Essentials (Pilgrim has recently experienced a bit of net notoriety due to his decision to abandon Apple for Ubuntu).

So I need something to replace iTunes (which I use to play music and update my iPod). No problem:
amaroK. It’s just like iTunes except it automatically fetches lyrics from Argentina, automatically looks up bands on Wikipedia, automatically identifies songs with MusicBrainz, and its developers are actively working on features that don’t involve pushing DRM-infected crap down my throat. Add the amarok repository to get the latest version. apt-get install amarok
After taking that advice and installing Amarok, I think that paragraph would be better written as:
amaroK. It’s just like iTunes except it automatically orphans most of your library so that you can't see or play most of your music on your iPod, it doesn't handle video, it can't write to the iPod's podcast directory, and (my personal favorite) if you plug your Amarokized iPod into a windows machine, it crashes iTunes. Add the amarok repository to get the latest version, as the latest version doesn't seem to have those problems.
Yes, that's right, I plugged in my iPod and Amarok corrupted the iTunes database. I could still use my iPod, but I could only see 256 songs (out of around 1000). It didn't delete the files - all 1000 songs were still on the iPod - it just screwed up the database that controls the iPod. The issue turns out to be that I installed an older version of Amarok, and since Mark recommended getting the latest version, I really can't fault him for this debacle. You see, Ubuntu comes with a few user-friendly ways of installing programs. These are based on what are called "repositories," which are basically databases full of programs that you can browse. So I fired up one of these installation programs, found Amarok, and installed it... not realizing that the default Ubuntu repository had an older version of the program.
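
For what it's worth, a repository is ultimately just a line in a text file, so adding one is less mysterious than it sounds. A rough sketch (the deb line below is illustrative - the Amarok site lists the real one):

sudo nano /etc/apt/sources.list    # repositories live here, one "deb" line each, e.g.:
                                   #   deb http://archive.ubuntu.com/ubuntu dapper main universe
sudo apt-get update                # re-read the package lists so the new repository is seen
apt-cache policy amarok            # show every available version and which repository offers it

If I had known to run that last command, I'd have noticed that the default repository was only offering the older, iPod-eating version of Amarok.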

Some thoughts:
  • Linux is dangerous (it's the hole hawg of operating systems)! Sometimes doing simple things can have catastrophic results.
  • When someone says get the latest version, get the latest version.
  • I learned what repositories are and how to add one to my system.
  • When asking for help, you'll probably get an answer quickly, but it will usually consist of several command lines which you probably won't understand. This is particularly nerve-wracking when combined with the first bullet point. I get a little anxious whenever someone tells me to type one of these things in the command line, because I don't know what's going on and I don't want my system to explode. Linux is dangerous. Sometimes doing simple things can have catastrophic results. When I said I don't want my system to explode, I meant that metaphorically, but I'm positive that if I set my mind to it, I could make my computer literally explode by altering a simple text file somewhere. This reliance on the command line is also one of the reasons it's hard to learn Linux - the commands usually work, but you don't understand why unless you look them up (and even then, it can be a little difficult to understand. Documentation isn't one of open source's strong points). Plus, whenever I am forced to use these command lines, I'm usually very task oriented. I don't have time to research the intricacies of every command line utility, I just want to complete my task.
To their credit, I posted my problem to the Amarok forum (at 3 in the morning) and received several helpful responses by the time I woke up in the morning, just a few hours later. I was able to install the latest version of Amarok, though that didn't really help me repair my iPod (there was a feature which would do this in theory, but when I tried it, the application just started eating up lots of memory until it hit the system limit, and then it just shut down). I had to use a different utility, called gtkpod, to scan through my iPod and rescue all of the orphaned files (and it took a few hours to do so). For some reason, a lot of my music is being recognized as podcasts in my iPod, but otherwise the iPod is in much better shape. I can see all my music now, and plugging it into a windows computer doesn't crash iTunes anymore.

Obviously, I had a bad experience here, but I'm still a little confused as to how Amarok is a valid iTunes replacement. Even with the latest version, it still has no support for videos (and the developers don't plan to add it, their excuse being that Amarok is just a music player) and its podcast support isn't ideal (I can upload them to my iPod, but they get put in the general library, not the special podcast library. Strike that: it turns out that when the iPod isn't corrupted, the podcasts work as they should, though I'm still not sure it's the ideal interface). The interface for viewing and maintaining the iPod is a little sparse and lacks some of the maintenance features of iTunes. As far as I can tell, Amarok is a fine music player and probably rivals or surpasses iTunes in that respect (I assume this is why people seem to love it so much). But in terms of maintaining an iPod, it sucks (at least, so far - I'm willing to bet there's lots of functionality I'm missing). Support for iPods in general seems to be a bit lacking in Linux, though there are some things you can do in Linux that you can't do in Windows. It's also something that could probably improve in time, but it's definitely not there yet.

Despite the problems, I find myself strangely bemused at the experience. It was exactly what I feared, but in the end, I'm not that upset about it. There's a part of me that likes digging into the details of a problem and troubleshooting like this... but then, there's also a part of me that knows spending 5 hours trying to install something I could install in about 10 minutes on a Windows box is ludicrous. All's well that ends well, I guess, but consider me unimpressed. It's not enough for me to forsake linux, but it's enough to make me want to create a dual boot machine rather than a pure linux box.

Update: In using Amarok a little more, I see that it supports podcasts better than I originally thought.
Posted by Mark on October 03, 2006 at 08:16 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, October 01, 2006

The Death of Sulaco
I have two computers running here at Kaedrin headquarters. My primary computer is a Windows box called Sulaco. My secondary computer is running Ubuntu Linux and is called Nostromo. Yesterday, Sulaco nearly died. I'll spare you the details (which are covered in the forum), but it started with some display trouble. It could have been the drivers for my video card, or it could have been that the video card itself was malfunctioning. In any case, by this morning, Sulaco's Windows registry was thoroughly corrupted. All attempts to salvage the installation failed. For some reason, my Windows XP CD failed to boot, and my trusty Win 98 floppy boot disk wouldn't let me run the setup from the XP CD (nor could I even see my hard drive, which had some files on it I wanted to retrieve).

To further complicate matters, the CD burner on my linux box has always been flaky, so I couldn't use that to create a new boot disk. However, I did remember that my Ubuntu installation disk could run as a Live CD. A few minutes of google searching yielded step-by-step instructions for booting a Windows box with an Ubuntu Live CD, mounting the Windows drive and sharing it via Windows File Sharing (i.e. Samba). A few minutes later and I was copying all appropriate data from Sulaco to Nostromo.
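
For the curious, the heart of that procedure boils down to just a few commands (I'm writing /dev/hda1 for the Windows partition, but that's an assumption - the device name varies by machine, which is exactly why I needed those step-by-step instructions):

sudo mkdir /mnt/windows
sudo mount -t ntfs -o ro /dev/hda1 /mnt/windows   # mount the Windows drive read-only, to be safe
ls /mnt/windows                                   # the files should now be visible

After that, it's a matter of adding a share for /mnt/windows to /etc/samba/smb.conf and restarting Samba, at which point the other machine can browse right to it.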

For all intents and purposes, Sulaco is dead. She has served me well, and it should be noted that she was constructed nearly 6 years ago with turn-of-the-century hardware. I'm actually amazed that she held up so well for so long, but her age was showing. Upgrades would have been necessary even without the display/registry problems. The question now is how to proceed.

I've been fiddling with Linux for, oh, 8 years or so. Until recently, I've never found it particularly useful. Even now, I'm wary of it. However, the ease with which I was able to install Ubuntu and get it up on my wireless network (this task had given me so much trouble in the past that I was overjoyed when I managed to get it working) made me reconsider a bit. Indeed, the fact that the way I recovered from a Windows crash was to use linux is also heartening. On the other hand, I also have to consider the fact that if someone hadn't written detailed instructions for the exact task I was attempting, I probably never would have figured it out in a reasonable timeframe. This is the problem with linux. It's hard to learn.

Yes, I know, it's a great operating system. I've fiddled with it enough to realize that some of the things that might seem maddeningly and deliberately obscure are actually done for the best of reasons in a quite logical manner (unless, of course, you're talking about the documentation, which is usually infuriating). I'm not so much worried that I can't figure it out, it's that I don't really have the time to work through its idiosyncrasies. As I've said, recent experiences have been heartening, but I'm still wary. Open source software is a wonderful thing in theory, but I'd say that my experience with such applications has been mixed at best. For an example of what I'm worried about, see Shamus' attempts to use Blender, an open source 3d modeling program.

My next step will be to build a new box in Sulaco's place. As of right now, I'm leaning towards installing Ubuntu on that and using a compatibility layer like WINE (which, despite often being lumped in with Windows emulators, is technically not one) to run the Windows proprietary software I need (which probably isn't much at this point). So right now, Nostromo is my guinea pig. If I can get this machine to do everything I need it to do in the next few days, I'll be a little less wary. If I can't, I'll find another Windows CD and install that. To be perfectly honest, Windows has served me well. Until yesterday, I've never had a problem with my installation of XP, which was stable and responsive for several years (conventional wisdom seems to dictate that running XP requires a complete reinstallation every few months - I've never had that problem). That said, I don't particularly feel like purchasing a new copy, especially when Vista is right around the corner...
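
(A footnote on the WINE option: if it works as advertised, running a Windows program is supposed to be as simple as pointing wine at the executable - assuming, and this is the big assumption, that the application in question cooperates:

wine setup.exe    # run a Windows installer from the current directory
wine notepad      # WINE even ships with its own notepad clone

We'll see how that theory survives contact with reality.)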
Posted by Mark on October 01, 2006 at 11:13 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, September 17, 2006

Magic Design
A few weeks ago, I wrote about magic and how subconscious problem solving can sometimes seem magical:
When confronted with a particularly daunting problem, I'll work on it very intensely for a while. However, I find that it's best to stop after a bit and let the problem percolate in the back of my mind while I do completely unrelated things. Sometimes, the answer will just come to me, often at the strangest times. Occasionally, this entire process will happen without my intending it, but sometimes I'm deliberately trying to harness this subconscious problem solving ability. And I don't think I'm doing anything special here; I think everyone has these sort of Eureka! moments from time to time. ...

Once I noticed this, I began seeing similar patterns throughout my life and even history.
And indeed, Jason Kottke recently posted about how design works, referencing a couple of other designers, including Michael Bierut of Design Observer, who describes his process like this:
When I do a design project, I begin by listening carefully to you as you talk about your problem and read whatever background material I can find that relates to the issues you face. If you’re lucky, I have also accidentally acquired some firsthand experience with your situation. Somewhere along the way an idea for the design pops into my head from out of the blue. I can’t really explain that part; it’s like magic. Sometimes it even happens before you have a chance to tell me that much about your problem!
[emphasis mine] It is like magic, but as Bierut notes, this sort of thing is becoming more important as we move from an industrial economy to an information economy. He references a book about managing artists:
At the outset, the writers acknowledge that the nature of work is changing in the 21st century, characterizing it as "a shift from an industrial economy to an information economy, from physical work to knowledge work." In trying to understand how this new kind of work can be managed, they propose a model based not on industrial production, but on the collaborative arts, specifically theater.

... They are careful to identify the defining characteristics of this kind of work: allowing solutions to emerge in a process of iteration, rather than trying to get everything right the first time; accepting the lack of control in the process, and letting the improvisation engendered by uncertainty help drive the process; and creating a work environment that sets clear enough limits that people can play securely within them.
This is very interesting and dovetails nicely with several topics covered on this blog. Harnessing self-organizing forces to produce emergent results seems to be rising in importance significantly as we proceed towards an information based economy. As noted, collaboration is key. Older business models seem to focus on a more brute force way of solving problems, but as we proceed we need to find better and faster ways to collaborate. The internet, with its hyperlinked structure and massive data stores, has been struggling with a data analysis problem since its inception. Only recently have we really begun to figure out ways to harness the collective intelligence of the internet and its users, but even now, we're only scratching the surface. Collaborative projects like Wikipedia or wisdom-of-crowds aggregators like Digg or Reddit represent an interesting step in the right direction. The challenge here is that we're not facing the problems directly anymore. If you want to create a comprehensive encyclopedia, you can hire a bunch of people to research, write, and edit entries. Wikipedia tried something different. They didn't explicitly create an encyclopedia, they created (or, at least, they deployed) a system that made it easy for a large number of people to collaborate on a large number of topics. The encyclopedia is an emergent result of that collaboration. They sidestepped the problem, and as a result, they have a much larger and more dynamic information resource.

None of those examples are perfect, of course, but the more I think about it, the more I think that their imperfection is what makes them work. As noted above, you're probably much better off releasing a site that is imperfect and iterating, making changes and learning from your mistakes as you go. When dealing with these complex problems, you're not going to design the perfect system all at once. I realize that I keep saying we need better information aggregation and analysis tools, and that we have these tools, but they leave something to be desired. The point of these systems, though, is that they get better with time. Many older information analysis systems break when you increase the workload quickly. They don't scale well. These newer systems only really work well once they have high participation rates and large amounts of data.

It remains to be seen whether or not these systems can actually handle that much data (and participation), but like I said, they're a good start and they're getting better with time.
Posted by Mark on September 17, 2006 at 08:01 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, September 10, 2006

YALD
Time is short this week, so it's time for Yet Another Link Dump (YALD!):
  • Who Writes Wikipedia? An interesting investigation of one of the controversial aspects of Wikipedia. Some contend that the authors are a small but dedicated bunch, others claim that authorship is large and diverse (meaning that the resulting encyclopedia is self-organizing and emergent). Aaron Swartz decided to look into it:
    When you put it all together, the story becomes clear: an outsider makes one edit to add a chunk of information, then insiders make several edits tweaking and reformatting it. In addition, insiders rack up thousands of edits doing things like changing the name of a category across the entire site -- the kind of thing only insiders deeply care about. As a result, insiders account for the vast majority of the edits. But it's the outsiders who provide nearly all of the content.

    And when you think about it, this makes perfect sense. Writing an encyclopedia is hard. To do anywhere near a decent job, you have to know a great deal of information about an incredibly wide variety of subjects. Writing so much text is difficult, but doing all the background research seems impossible.

    On the other hand, everyone has a bunch of obscure things that, for one reason or another, they've come to know well. So they share them, clicking the edit link and adding a paragraph or two to Wikipedia. At the same time, a small number of people have become particularly involved in Wikipedia itself, learning its policies and special syntax, and spending their time tweaking the contributions of everybody else.
    Depending on how you measure it, many perspectives are correct, but the important thing here is that both types of people (outsiders and insiders) are necessary to make the system work. Via James Grimmelman, who has also written an interesting post on Wikipedia Fallacies that's worth reading.
  • Cyber Cinema, 1981-2001: An absurdly comprehensive series of articles chronicling cyberpunk cinema. This guy appears to know his stuff, and chooses both obvious and not-so-obvious films to review. For example, he refers to Batman as "a fine example of distilled Cyberpunk." I probably wouldn't have pegged Batman as cyberpunk, but he makes a pretty good case for it... Anyway, I haven't read all of his choices (20 movies, 1 for each year), but it's pretty interesting stuff. [via Metaphlog]
  • The 3-Day Novel Contest: Well, it's too late to partake now, but this is an interesting contest where entrants all submit a novel written in 3 days. The contest is usually held over labor day weekend (allowing everyone to make the most of their long holiday weekend). The Survival Guide is worth reading even if you don't intend on taking part. Some excerpts: On the attitude required for such an endeavor:
    Perhaps the most important part of attitude when approaching a 3-Day Novel Contest is that of humility. It is not, as one might understandably and mistakenly expect, aggression or verve or toughness or (as it has been known) a sheer murderous intent to complete a 3-Day Novel (of this latter approach it is almost always the entrant who dies and not the contest). Let’s face it, what you are about to do, really, defies reality for most people. As when in foreign lands, a slightly submissive, respectful attitude generally fares better for the traveller than a self-defeating mode of overbearance. As one rather pompous contestant confessed after completing the contest: “I’ve been to Hell, and ended up writing about it.”
    On outlines and spontaneity:
    Those without a plan, more often than not, find themselves floundering upon the turbulent, unforgiving seas of forced spontaneous creativity. An outline can be quite detailed and, as veterans of the contest will also tell you, the chances of sticking to the outline once things get rolling are about 1,000 to 1. But getting started is often a major hurdle and an outline can be invaluable as an initiator.
    Two things that interest me about this: plans that fall apart, but must be made anyway (which I have written about before) and the idea that just getting started is important (which is something I'll probably write about sometime, assuming I haven't already done so and forgot).

    On eating:
    Keep it simple, and fast. Wieners (straight from the package—protein taken care of). Bananas and other fruit (vitamin C, potassium, etc.). Keep cooking to a minimum. Pizzas, Chinese—food to go. Forget balance, this is not a “spa”, there are no “healing days”. This is a competition; a crucible; a hill of sand. Climb! Climb!
    Lots of other fun stuff there. Also, who says you need to do it on Labor Day weekend? Why not take a day off and try it out? [via Web Petals, who has some other interesting quotes from the contest]
That's all for now. Sorry for just throwing links at you all the time, but I've entered what's known as Wedding Season. Several weddings over the next few weekends, only one of which is in this area. This week's was in Rhode Island, so I had a wonderful 12-13 hours of driving to contend with (not to mention R.I.'s wonderful road system - apparently they don't think signs are needed). Thank goodness for podcasts - specifically Filmspotting, Mastercritic, and the Preston and Steve Show (who are professional broadcasters, but put their entire show (2+ hours) up, commercial free, every day).

Shockingly, it seems that I only needed to use two channels on my Monster FM Transmitter, and both of those channels are the ones I use around Philly. Despite this, I've not been too happy with my FM transmitter thingy. It gets the job done, I guess, but I find myself consistently annoyed at its performance (this trip being an exception). It seems that these things are very idiosyncratic and unpredictable, working in some cars better than others (thus some people swear by one brand, while others will badmouth that same brand). In large cities like New York and Philadelphia, the FM dial gets crowded and thus it's difficult to find a suitable station, further complicating matters. I think my living in a major city area combined with an awkward placement of the cigarette lighter in my car (which I assume is a factor) makes it somewhat difficult to find a good station. What would be really useful would be a list of available stations and an attempt to figure out ways to troubleshoot your car's idiosyncrasies. Perhaps a wiki would work best for this, though I doubt I'll be motivated enough to spend the time installing a wiki system here for this purpose (does a similar site already exist? I did a quick search but came up empty-handed). (There are kits that allow you to tap into your car stereo, but they're costly and I don't feel like paying more for that than I did for the player... )
Posted by Mark on September 10, 2006 at 09:15 PM .: link :.


End of This Day's Posts

Wednesday, August 16, 2006

GPL & Asimov's First Law
Ars Technica reports on an open source project called GPU. The purpose of this project is to provide an infrastructure for distributed computing (i.e. sharing CPU cycles). The developers of this project are apparently pacifists, and they've modified the GPL (the GNU General Public License, which is the primary license for open source software) to make that clear. One of the developers explains it thusly: "The fact is that open source is used by the military industry. Open source operating systems can steer warplanes and rockets. [This] patch should make clear to users of the software that this is definitely not allowed by the licenser."

Regardless of what you might think about the developers' intentions, the thing I find strangest about this is the way they've chosen to communicate their desires. They've modified the standard GPL to include a "patch" which is supposedly for no military use (full text here). Here is what this addition says [emphasis mine]:
PATCH FOR NO MILITARY USE

This patch restricts the field of endeavour of the Program in such a way that this license collides with paragraph 6 of the Open Source Definition. Therefore, this modified version of the GPL is no more OSI compliant.

The Program and its derivative work will neither be modified or executed to harm any human being nor through inaction permit any human being to be harmed. This is Asimov's first law of Robotics.
This is astoundingly silly, for several reasons. First, as many open source devotees have pointed out (and as the developers themselves even note in the above text), you're not allowed to modify the GPL. As Ars Technica notes:
Only sentences after their patch comes the phrase, "Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed." This is part of the GPL, and by modifying the license, the developers seem to run afoul of it. The Free Software Foundation has already contacted them about the matter.
Next, Asimov's laws of robotics were written for autonomous beings called robots. This might seem obvious to some, but apparently not to the developers, who have applied it to software. As Ars notes: "Code is not an autonomous agent that can go around bombing people or hauling them from burning buildings." Also, Asimov always alluded to the fact that the plain English definitions (which is what the developers used in their "patch") just gave you the basic idea of what the law did - the code that implemented this functionality in his robots was much more complex.

Third, we have a military for a reason, and their purpose extends far beyond bombing the crap out of people. For example, many major disasters are met with international aid delivered and administered by... military transports and personnel (there are many other examples, but this is a common one that illustrates the point well). Since this software is not allowed, through inaction, to permit any human being to be harmed, wouldn't the military be justified (if not actually required) to use it? Indeed, this "inaction" clause seems like it could cause lots of unintended consequences.

Finally, Asimov created the laws of robotics in a work of fiction as a literary device that allowed him to have fun with his stories. Anyone who has actually read the robot novels knows that they're basically just an extended exercise in subverting the three laws (eventually even superseding them with a "zeroth" law). He set himself some reasonable sounding laws, then went to town finding ways to get around them. For crying out loud, he had robots attempting murder on humans all throughout the series. The laws were created precisely to demonstrate how foolish it was to have such laws. Granted, many fictional stories with robots have featured Asimov's laws (or some variation), but that's more of an artistic homage (or parody, in a lot of cases). It's not something you put into a legal document.

Ars notes that not all the developers agree on the "patch," which is good, I guess. If I were more cynical, I'd say this was just a ploy to get more attention for their project, but I doubt that was the intention. If they were really serious about this, they'd probably have been a little more thorough with their legalese. Maybe in the next revision they'll actually mention that the military isn't allowed to use the software.

Update: It seems that someone on Slashdot has similar thoughts:
Have any of them actually read I, Robot? I swear to god, am I in some tiny minority who doesn't believe that this book was all about promulgating the infallible virtue of these three laws, but was instead a series of parables about the failings that result from codifying morality into inflexible dogma?
And another commenter does too:
From a plain English reading of the text "the program and its derivative work will neither be modified or executed to harm any human being nor through inaction permit any human being to be harmed", I am forced to conclude that the program will not through inaction allow any human being to be harmed. This isn't just silly; it's nonsensical. The Kwik-E-Mart's being robbed, and the program, through inaction (since it's running on a computer in another state, and has nothing to do with a convenience store), fails to save Apu from being shot in the leg. Has it violated the terms of it's own license? What does this clause even mean?
Heh.
Posted by Mark on August 16, 2006 at 09:01 PM .: Comments (3) | link :.


End of This Day's Posts

Sunday, August 06, 2006

IMDB Bookmarklet
In last week's post, I ended up linking to a whole bunch of movies on the IMDB. The process was somewhat tedious, and I lamented the lack of Movable Type plugins that would help. There are a few plugins that could potentially do the job, but not in the exact context I'm looking for (MT-Textile does have some IMDB shortcuts, but they're for IMDB searches).

So after looking around, I decided that the best way to go would be to write a bookmarklet that would generate the code to insert a link to IMDB. I'm no expert on this stuff and I'm sure there's something wrong with the below code, but it appears to work passably well (maybe I should just call it IMDB Bookmarklet - Beta). Basically, all you need to do is go to the movie you want to link to on IMDB, click the bookmarklet in your browser, then copy and paste the text into your post (IE actually has a function that will copy a string directly to your clipboard, but no other browser will do so for obvious security reasons. Therefore, I simply used a prompt() function to display the generated text, which you then have to copy manually.)

This turned out to be something of a pain, mainly because I primarily use the Opera web browser, which is apparently more strict about javascript than any other browser. My first attempt at the bookmarklet appeared to work fine when I just pasted it into the location bar, but when I actually set up the bookmark, it choked. This apparently had something to do with single and double quotes (I thought you were supposed to be able to use both in javascript, but for whatever reason, Opera kept throwing syntax errors.)

Anyway, here's the code:
javascript:mname=document.title;murl=document.location;mdatepos=mname.lastIndexOf(' (');if(mdatepos!=-1){mname2=mname.slice(0,mdatepos);}else{mname2=mname;} temp=prompt('Copy text for a link to IMDB movie:','<a href=\''+murl+'\' title=\'IMDB: '+ mname2 +'\'>'+mname2+'</a>');focus();
Or just use this link: Generate IMDB Link

Again, all you need to do is go to the movie you want to link to on IMDB, click the bookmarklet in your browser, then copy and paste the text into your post. This is the output of the bookmarklet when you use it on IMDB's Miami Vice page:
<a href='http://imdb.com/title/tt0430357/' title='IMDB: Miami Vice'>Miami Vice</a>
A few nerdy coding things to note here:
  • The link that is generated uses single quotes (') instead of the usual double quotes ("). Both work in HTML, but I usually use double quotes and would prefer consistency. However, as previously mentioned, using double quotes does not appear to work in Opera (even when escaped with \"). If you use firefox and want to get double quotes in the generated link, try this:
    javascript:mname=document.title;murl=document.location;mdatepos=mname.lastIndexOf(' (');if(mdatepos!=-1){mname2=mname.slice(0,mdatepos);}else{mname2=mname;} temp=prompt('text','<a href=\"'+murl+'\" title=\"IMDB: '+ mname2 +'\">'+mname2+'</a>');focus();
  • The code is generated by reading in the page's URL and title tag. As such, I had to do some manipulation to remove the year from the page's title (otherwise the link would show up saying Miami Vice (2006)). The way I did this may cause problems if a title has an open parenthesis, but I tried to account for it. I might change it so that the year shows up in the title attribute of the link, but I don't think it's that big of a deal.
  • Foreign movies will still show up with the foreign title. So Sympathy for Mr. Vengeance will show up as Boksuneun naui geot. Personally, I think this still helps, but I don't see an easy way of generating the link with the English title (and sometimes it's nice to use the foreign title).
  • Now that I think about it, this would be helpful for linking to Amazon too. It seems like they make it more difficult to link using your Associates ID these days, so an automated way to do so will probably be helpful.
And that's it. If you're a javascript or bookmarklet expert and see something wrong with the above, please do let me know.
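For anyone who wants to tinker with it, here's the same logic unrolled into ordinary, commented javascript. This is just a readable sketch of what the one-liner does - to actually use it, you'd have to collapse it back into a single javascript: URL:

  // Grab the title and URL of the current IMDB page, e.g. "Miami Vice (2006)"
  var mname = document.title;
  var murl = document.location;
  // Strip the trailing " (year)" from the title, if present
  var mdatepos = mname.lastIndexOf(' (');
  var mname2 = (mdatepos != -1) ? mname.slice(0, mdatepos) : mname;
  // Display the generated markup in a prompt so it can be copied manually
  prompt('Copy text for a link to IMDB movie:',
    '<a href=\'' + murl + '\' title=\'IMDB: ' + mname2 + '\'>' + mname2 + '</a>');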

I realize this post has next to no appeal to the grand majority of my readers, but I ended up spending more time on this than I wanted. I'll see if I can make another post later this week...
Posted by Mark on August 06, 2006 at 07:10 PM .: Comments (5) | link :.


End of This Day's Posts

Sunday, June 25, 2006

Art for the computer age...
I was originally planning on doing a movie review while our gentle web-master is away, but a topic has come up too many times in the past few weeks for me not to write about it. First it came up in the tag map of Kaedrin, when I noticed that some people were writing pages just to create appealing tag-maps. Then it came up in Illinois and Louisiana. They've passed laws regulating the sale and distribution of "violent games" to minors. This, of course, has led to lawsuits and claims that the laws violate free speech. After that, it was the guys at Penny Arcade. They posted links to We Feel Fine and Listening Post. Those projects search the internet for blogs (maybe this one?) and pull text from them about feelings, and present those feelings to an audience in different ways. Very interesting. Finally, it came up when I opened up the July issue of Game Informer, and read Hideo Kojima's quote:
I believe that games are not art, and will never be art. Let me explain - games will only match their era, meaning what the people of that age want reflects the outcome of the game at that time. So, if you bring a game from 20 years ago out today, no one will say "wow." There will be some essence where it's fun, but there won't be any wows or touching moments. Like a car, for example. If you bring a car from 20 years ago to the modern day, it will be appealing in a classic sense, but how much gasoline it uses, or the lack of air conditioning will simply not be appreciated in that era. So games will always be a kind of mass entertainment form rather than art. Of course, there will be artistic ways of representing games in that era, but it will still be entertainment. However, I believe that games can be a culture that represent their time. If it's a light era, or a dark era, I always try to implement that era in my works. In the end, when we look back on the projects, we can say "Oh, it was that era." So overall, when you look back, it becomes a culture.
Every time I reread that quote, I cringe. Here's a man who is one of the most significant forces in video games today, the creator of Metal Gear, and he's saying "No, they're not art, and never will be." I find his distinction between mass entertainment and art troubling, and his comparison to a car flawed.

It's true that games will always be a reflection of their times - just like anything else is. The limitations of the time and the attitudes of the culture at the time are going to have an effect on everything coming out of that time. A car made in the 60s is going to show the style of the 60s, and is going to have the tech of the 60s. That makes sense. Of course, a painting made in the 1700s is going to show the limits and is going to reflect the feelings of that time, too. The paints, brushes, and canvas used then aren't necessarily going to be the same as the ones used now, especially with the popular use of computers in painting. The fact that something is a reflection of the times isn't going to stop people from appreciating the artistic worth of that thing. The fact that the Egyptians hadn't mastered perspective doesn't stop anyone from wanting to see their statues.

What does that really tell us, though? Nothing. A car from the 80s may not be appreciated as much as a new model car as a means of transport, but Kojima seems to be completely forgetting that there are many cars that are appreciated as special. Nobody buys a 60s era muscle car because they think it's a good car for driving around in - they buy it because they think it's special, because some people view older cars as collectable. Some people do see them as more than a mere means of transportation. People are very much "wowed" by old cars. Is there any reason why this can't be true of games?

I am 8 Bit seems to suggest that there are people who are still wowed by those games. Kojima may be partially correct, though. Maybe most of those early games won't hold up in the long run. That shouldn't be a surprise. They're the first generation of games. The 8-Bit era was the beginning of the new wave of games, though. For the first time, creators could start to tell real stories, beyond simple high-score pursuit. Game makers were just getting their wings, and starting to see what games were really capable of. Maybe early games aren't art. Does that mean that games aren't art?

The problem mostly seems to be that we're asking the wrong questions. We shouldn't be asking "are video games art" any more than we'd ask "are movies art." It's a loaded question and you'll never come to any real answer, because the answer is going to depend completely on what movie you're looking at, and who you're asking. The same holds true with games. The question shouldn't be whether all games are art, but whether a particular game has some artistic merit. How we decide what counts as art is constantly up for debate, but there are games that raise such significant moral or philosophical questions, or have such an amazing sense of style, or tell such an amazing story, that it seems hard to argue that they have no artistic merit.

All of this really is leading somewhere. Computers have changed everything. I know that seems obvious, but I think it's taking some people - people like Kojima - a little longer to realize it. Computers have opened up a level of interactivity and access to information that we've never really had before. I can update Kaedrin from Michigan, and can send a message to a friend in Germany, all while buying videos from Japan and playing chess with a man in Alaska (not that I'm actually doing those things... but I could). These changes are going to be reflected in the art our culture produces. There's going to be backlash and criticism, and we're going to find that some people just don't "get it" or don't want to. We've gone through the same thing countless times before. Nobody thought movies would be seen as art when they came on the scene, and they were sure that the talkies wouldn't. When Andy Warhol came along, there were plenty of nay-sayers. Soup cans? As art? Computers have generally been accepted as a tool for making art, but I think we're still seeing the limits pushed. We've barely scratched the surface. The interaction between art, artist, and viewer is blurring, and I, for one, can't wait to see what happens.
Posted by Samael on June 25, 2006 at 01:42 PM .: Comments (4) | link :.


End of This Day's Posts

Sunday, April 30, 2006

The Mindless Internet and Choice
Nicholas Carr has observed a few things about the internet and its effect on the way we think:
You can't have too much information. Or can you? Writing in the Guardian, Andrew Orlowski examines the "glut of hazy information, the consequences of which we have barely begun to explore, that the internet has made endlessly available." He wonders whether the "aggregation of [online] information," which some see as "synonymous with wisdom," isn't actually eroding our ability to think critically ... Like me, you've probably sensed the same thing, in yourself and in others - the way the constant collection of information becomes an easy substitute for trying to achieve any kind of true understanding.
Internet as "infocrack," as it were. In a follow up entry, Carr further comments:
The more we suck in information from the blogosphere or the web in general, the more we tune our minds to brief bursts of input. It becomes harder to muster the concentration required to read books or lengthy articles - or to follow the flow of dense or complex arguments in general. Haven't you, dear blog reader, noticed that, too?
As a matter of fact, I have. A few years ago, I blogged about Information Overload:
Some time ago, I used to blog a lot more often than I do now. And more than that, I used to read a great deal of blogs, especially new blogs (or at least blogs that were new to me). Eventually this had the effect of inducing a sort of ADD in me. I consumed way too many things way too quickly and I became very judgemental and dismissive. There were so many blogs that I scanned (I couldn't actually read them, that would take too long for marginal gain) that this ADD began to spread across my life. I could no longer sit down and just read a book, even a novel.

Eventually, I recognized this, took a bit of a break from blogging, and attempted to correct the problem, with some success.
Carr seems to place the blame firmly on the internet (and technology in general). I don't agree, and you can see why in the above paragraph - as soon as I realized what happened, I took steps to mitigate and reverse the effect. It's a matter of choice, as Loryn at growstate writes:
Technology may change our intellectual environment, but doesn’t govern our behavior. We choose how we adapt. We choose our objectives and data sources and whether we challenge our assumptions. We choose on what to focus. We can choose.
Indeed. She does an impressive job demolishing Carr's argument as well... And yes, I'm aware that this post is made up almost entirely of pull-quotes, seemingly confirming Carr's argument. However, is there anything wrong with that?
Posted by Mark on April 30, 2006 at 09:52 PM .: Comments (2) | link :.


End of This Day's Posts

Sunday, January 29, 2006

Insert clever title for what is essentially a post full of links.
Again short on time, so just a few links turned up by the chain-smoking monkey research staff who actually run the blog:
  • The Beauty of Simplicity: An article that examines one of the more difficult challenges of new technology: usability. In last week's post, I mentioned the concept of the Nonesuch Beast, applications which are perfect solutions to certain complex problems. Unfortunately, these perfect solutions don't exist, and one of the biggest reasons they don't is that one requirement for complex problems is a simple, easy-to-use solution. It's that "easy-to-use" part that gets difficult.
  • Pandora: An interesting little web application that recommends music for you. All you've got to do is give it a band or song and it starts playing recommendations for you (it's like your own radio station). You can tell it that you like or dislike songs, and it learns from your input. I'm not sure how much of what is being recommended is "learned" by the system (or how extensive their music library is), but as Buckethead notes, its recommendations are based on more than just genre. So far, it hasn't turned up much in the way of great recommendations for me, but still, it's interesting and I'm willing to play around with it on the assumption that it will get better.
  • Robodump 1.0: "I also decided to dress it in businessware to make coworkers less likely to try to talk to it... if it looks like a customer or visiting bigwig, they'll be less likely to offer help or ask for a courtesy flush." To understand this, you really just need to go there and look at the pictures.
  • Wikipedia's next five years: Jon Udell speculates as to upcoming enhancements to Wikipedia. I think the most interesting of these is the thought of having "stable" versions of articles:
    Stable versions. Although Wikipedia's change history does differentiate between minor and major edits, there's nothing corresponding to stable versions in open source software projects. In the early life of most articles that would be overkill. But for more mature articles, and especially active ones, version landmarks might be a useful organizational tool. Of course it's an open question as to how exactly a version could be declared stable.
    Having stable versions might go a long way towards indicating how trustworthy an individual article is (which is currently something of a challenge).
  • The Edge Annual Question - 2006: Every year, Edge asks a question to several notable thinkers and scientists and posts their answers. The answers are usually quite interesting, but I think this year's question: "What's your dangerous idea?" wasn't quite as good as the past few years' questions. Still, there's a lot of interesting stuff in there.
That's all for now. Again, I've been exceptionally busy lately and will probably continue to be so for at least another week or so...
Posted by Mark on January 29, 2006 at 08:10 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, January 22, 2006

Good Enough
Time is short this week, so just a quick pointer towards an old Collision Detection post in which Clive Thompson talks about iPods and briefly digresses into some differences between Apple and Microsoft computers:
Back in the early days of Macintoshes, Apple engineers would reportedly get into arguments with Steve Jobs about creating ports to allow people to add RAM to their Macs. The engineers thought it would be a good idea; Jobs said no, because he didn't want anyone opening up a Mac. He'd rather they just throw out their Mac when they needed new RAM, and buy a new one.

Of course, we know who won this battle. The "Wintel" PC won: The computer that let anyone throw in a new component, new RAM, or a new peripheral when they wanted their computer to do something new. Okay, Mac fans, I know, I know: PCs also "won" unfairly because Bill Gates abused his monopoly with Windows. Fair enough.

But the fact is, as Hill notes, PCs never aimed at being perfect, pristine boxes like Macintoshes. They settled for being "good enough" -- under the assumption that it was up to the users to tweak or adjust the PC if they needed it to do something else.
The concept of being "good enough" presents a few interesting dynamics that I've been considering a lot lately. One problem is, of course, how do you know what's "good enough" and what's just a piece of crap? Another interesting thing about the above anecdote is that "good enough" boils down to something that's customizable.

One thing I've been thinking about a lot lately is that some problems aren't meant to have perfect solutions. I see a lot of talk about problems that are incredibly complex as if they really aren't that complex. Everyone is trying to "solve" these problems, but as I've noted many times, we don't so much solve problems as we trade one set of problems for another (with the hope that the new set of problems is more favorable than the old). As Michael Crichton noted in a recent speech on Complexity:
...one important assumption most people make is the assumption of linearity, in a world that is largely non-linear. ... Our human predisposition is to treat all systems as linear when they are not. A linear system is a rocket flying to Mars. Or a cannonball fired from a cannon. Its behavior is quite easily described mathematically. A complex system is water gurgling over rocks, or air flowing over a bird’s wing. Here the mathematics are complicated, and in fact no understanding of these systems was possible until the widespread availability of computers.
Everyone seems to expect a simple, linear solution to many of the complex problems we face, but I'm not sure such a thing is really possible. I think perhaps what we're looking for is a Nonesuch Beast; it doesn't exist. What are these problems? I think one such problem is the environment, as mentioned in Crichton's speech, but there are really tons of other problems. The Nonesuch Beast article above mentions a few scenarios, all of which I'm familiar with because of my job: Documentation and Metrics. One problem I often talk about on this blog is the need for better information analysis, and if all my longwinded talk on the subject hasn't convinced you yet: I don't think there's any simple solution to the problem.
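To make the linear/non-linear distinction concrete, here's a toy example of my own (not Crichton's): the logistic map, about the simplest non-linear system there is. Two starting points that differ by one part in a million end up in completely different places:

  // The logistic map: x' = r * x * (1 - x). A simple rule with chaotic behavior.
  var r = 3.9;
  var a = 0.400000;
  var b = 0.400001; // differs from a by one millionth
  for (var i = 0; i < 40; i++) {
    a = r * a * (1 - a);
    b = r * b * (1 - b);
  }
  // After 40 iterations, a and b bear no resemblance to each other,
  // and no simple formula will tell you where either one lands.

A cannonball you can describe with one equation; this thing you basically have to simulate. Most of the problems worth arguing about look more like the latter.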

As such, we have to settle for systems that are "good enough" like Wikipedia and Google. As Shamus Young notes in response to my posts last week, "deciding what is 'good enough' is a bit abstract: It depends on what you want to do with the emergent data, and what your standards are for usefulness." Indeed, and it really depends on the individual using the system. Wikipedia, though, is really just a specific example of the "good enough" wiki system, which can be used for any number of applications. As I mentioned last week, Wikipedia has run into some issues because people expect an encyclopedia to be accurate, but other wiki systems don't necessarily suffer from the same issues.

I think Wiki systems belong to a certain class of applications that are so generic, simple, and easy to use that people want to use them for all sorts of specialized purposes. Another application that fits this mold is Excel. Excel is an incredibly powerful application, but it's generic and simple enough that people use it to create all sorts of ad hoc applications that take advantage of some of the latent power in Excel. I look around my office, and I see people using Excel in many varied ways, some of which are not obvious uses of a spreadsheet program. I think we're going to see something similar with Wikis in the future (though Wikis may be used for different problems like documentation and collaboration). All this despite wikis' obvious and substantial drawbacks. Wikis aren't "the solution" but they might be "good enough" for now.

Well, that turned out to be longer than I thought. There's a lot more to discuss here, but it will have to wait... another busy week approaches.
Posted by Mark on January 22, 2006 at 10:23 PM .: Comments (2) | link :.


End of This Day's Posts

Sunday, January 15, 2006

Cheating Probabilistic Systems
Shamus Young makes some interesting comments regarding last week's post on probabilistic systems. He makes an important distinction between weblogs, which have no central point of control ("The weblog system is spontaneous and naturally occurring."), and the other systems I mentioned, which do. Systems like the ones used by Google or Amazon are centrally controlled and usually reside on a particular set of servers. Shamus then makes the observation that such centralization lends itself to "cheating." He uses Amazon as an example:
You’re a company like Amazon.com. You buy a million red widgets and a million blue widgets. You make a better margin on the blue ones, but it turns out that the red widgets are just a little better in quality. So the feedback for red is a little better. Which leads to red being recommended more often than blue, which leads to better sales, more feedback, and even more recommendations. Now you’re down to your last 100,000 red but you still have 500,000 blue.

Now comes the moment of truth: Do you cheat? You’d rather sell blue. You see that you could “nudge” the numbers in the feedback system. You own the software, pay the programmers who maintain it, and control the servers on which the system is run. You could easily adjust things so that blue recommendations appear more often, even though they are less popular. When Amazon comes up with “You might also enjoy… A blue widget” a customer has no idea of the numbers behind it. You could have the system try to even things out between the more popular red and the more profitable blue.
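The mechanics of such a nudge could be trivially simple. As a purely hypothetical illustration (mine, not anything Shamus's post or Amazon's actual system describes), imagine a recommendation score that quietly blends in the seller's margin:

  // Hypothetical: a recommendation score with the owner's thumb on the scale.
  // popularity comes from customer feedback; margin is the seller's profit.
  function score(item, bias) {
    var popularity = item.positiveFeedback / item.totalFeedback;
    return (1 - bias) * popularity + bias * item.margin;
  }
  // With bias = 0, recommendations track customer feedback alone. Raise it
  // a little, and the profitable blue widgets float to the top - and nobody
  // looking at "You might also enjoy..." could ever tell the difference.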
His post focuses mostly on malicious uses of the system by its owners. This is certainly a worry, but one thing I think I need to note is that no one really thinks that these systems should be all that trustworthy. The reason the system works is that we all hold a certain degree of skepticism about it. Wikipedia, for instance, works best when you use it as a starting point. If you use it as the final authority, you're going to get burned at some point. The whole point of a probabilistic system is that the results are less consistent than traditional systems, and so people trust them less. The reason people still use such systems is that they can scale to handle the massive amounts of information being thrown at them (which is where traditional systems begin to break down).
Today Wikipedia offers 860,000 articles in English - compared with Britannica's 80,000 and Encarta's 4,500. Tomorrow the gap will be far larger.
You're much more likely to find what you're looking for at Wikipedia, even though the quality of any individual entry at Wikipedia ranges from poor and inaccurate to excellent and helpful. As I mentioned in my post, this lack of trustworthiness isn't necessarily bad, so long as it's disclosed up front. For instance, the problems that Wikipedia is facing are related to the fact that some people consider everything they read there to be very trustworthy. Wikipedia's policy of writing entries from a neutral point of view tends to exacerbate this (which is why the policy is a controversial one). Weblogs do not suffer from this problem because they are written in overtly subjective terms, and thus it is blatantly obvious that you're getting a biased view that should be taken with a grain of salt. Of course, that also makes it more difficult to glean useful information from weblogs, which is why Wikipedia's policy of writing entries from a neutral point of view isn't necessarily wrong (once again, it's all about tradeoffs).

Personally, Amazon's recommendations rarely convince me to buy something. Generally, I make the decision independently. For instance, in my last post I mentioned that Amazon recommended the DVD set of the Firefly TV series based on my previous purchases. At that point, I'd already determined that I wanted to buy that set, and thus Amazon's recommendation wasn't so much convincing as it was convenient. Which is the point. By tailoring their featured offerings towards a customer's preferences, Amazon stands to make more sales. They use the term "recommendations," but that's probably a bit of a misnomer. Chances are, they're things we already know about and want to buy, hence it makes more sense to promote those items... When I look at my recommendations page, many items are things I already know I want to watch or read (and sometimes even buy).

So is Amazon cheating with its recommendations? I don't know, but it doesn't really matter that much because I don't use their recommendations as an absolute guide. Also, if Amazon is cheating, all that really means is that Amazon is leaving room for a competitor to step up and provide better recommendations (and from my personal experience working on such a site, retail websites are definitely moving towards personalized product offerings).

One other thing to consider, though, is that it isn't just Amazon or Google that could be cheating. Gaming Google's search algorithms has actually become a bit of an industry. Wikipedia is under a constant assault of spammers who abuse the openness of the system for their own gain. Amazon may have set their system up to favor items that give them a higher margin (as Shamus notes), but it's also quite possible that companies go on Amazon and write glowing reviews for their own products, etc... in an effort to get their products recommended.

The whole point is that these systems aren't trustworthy. That doesn't mean they're not useful, it just means that we shouldn't totally trust them. You aren't supposed to trust them. Ironically, acknowledging that fact makes them more useful.

In response to Chris Anderson's The Probabilistic Age post, Nicholas Carr takes a skeptical view of these systems and wonders what the broader implications are:
By providing a free, easily and universally accessible information source at an average quality level of 5, will Wikipedia slowly erode the economic incentives to produce an alternative source with a quality level of 9 or 8 or 7? Will blogging do the same for the dissemination of news? Does Google-surfing, in the end, make us smarter or dumber, broader or narrower? Can we really put our trust in an alien logic's ability to create a world to our liking? Do we want to be optimized?
These are great questions, but I think it's worth noting that these new systems aren't really meant to replace the old ones. In Neal Stephenson's The System of the World, the character Daniel Waterhouse ponders how new systems supplant older systems:
"It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently ... have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. ... And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher's Stone." (page 639)
And so these new probabilistic systems will never replace the old ones, but only surround and encapsulate them...
Posted by Mark on January 15, 2006 at 11:57 AM .: Comments (2) | link :.


End of This Day's Posts

Sunday, January 08, 2006

Amazon's Recommendations are Probabilistic
Amazon.com is a fascinating website. It's one of the first eCommerce websites, but it started with an unusual strategy. The initial launch of the site included such a comprehensive implementation of functionality that there are sites today that are still struggling to catch up. Why? Because much of the functionality that Amazon implemented early and continued to improve didn't directly attempt to solve the problems most retailers face: What products do I offer? How often do we change our offerings? And so on. Instead, Amazon attempted to set up a self-organizing system based on past usage and user preferences.

For the first several years of Amazon's existence, they operated at a net loss due to high initial setup costs. Competitors who didn't have such expenses seemed to be doing better. Indeed, Amazon's infamous recommendations were often criticized, and anyone who has used Amazon regularly has certainly had the experience of wondering how in the world they managed to recommend something so horrible. But over time, Amazon's recommendations engine has gained steam and produced better and better recommendations. This is due, in part, to improvements in the system (in terms of the information collected, the analysis of that information, and the technology used to do both of those things). Other factors include the growth of both Amazon's customer base and their product offerings, both of which feed the recommendation engine more data to work with.

As I've written about before, the important thing about Amazon's system is that it doesn't directly solve retailing problems; it sets up a system that allows for efficient collaboration. By studying purchase habits, product ratings, common wishlist items, etc... Amazon is essentially allowing its customers to pick recommendations for one another. As their customer base and product offerings grow, so does the quality of their recommendations. It's a self-organizing system, and recommendations are the emergent result. Many times, Amazon makes connections that I would have never made. For instance, a recent recommendation for me was the DVD set of the Firefly TV series. When I checked to see why (this openness is an excellent feature), it told me that it was recommended because I had also purchased Neal Stephenson's Baroque Cycle books. This is a connection I probably never would have made on my own, but once I saw it, it made sense.
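The simplest version of this kind of collaborative filtering is just co-occurrence counting: look at everyone who bought a given item, and tally what else they bought. This sketch is a gross simplification of whatever Amazon actually runs, but it gives the flavor:

  // Toy "customers who bought X also bought Y" - not Amazon's real algorithm.
  // orders is an array of arrays, each listing one customer's purchases.
  function alsoBought(orders, item) {
    var counts = {};
    for (var i = 0; i < orders.length; i++) {
      if (orders[i].indexOf(item) == -1) continue; // skip orders without the item
      for (var j = 0; j < orders[i].length; j++) {
        var other = orders[i][j];
        if (other != item) counts[other] = (counts[other] || 0) + 1;
      }
    }
    return counts; // sort by count and the top entries are the recommendations
  }

Feed it enough orders that happen to contain both the Baroque Cycle and the Firefly DVDs, and the connection emerges on its own - nobody had to program it in.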

Of course, the system isn't perfect. Truth be told, it probably won't ever be perfect, but overall, I'd bet that it's still better than any manual process.

Chris Anderson (a writer for Wired who has been exploring the Long Tail concept) has an excellent post on his blog concerning these systems, which he refers to as "probabilistic systems:"
When professionals--editors, academics, journalists--are running the show, we at least know that it's someone's job to look out for such things as accuracy. But now we're depending more and more on systems where nobody's in charge; the intelligence is simply emergent. These probabilistic systems aren't perfect, but they are statistically optimized to excel over time and large numbers. They're designed to scale, and to improve with size. And a little slop at the microscale is the price of such efficiency at the macroscale.
Anderson's post is essentially a response to critics of probabilistic systems like Wikipedia, Google, and blogs, all of which have come under fire because of their less-than-perfect emergent results. He does an excellent job summarizing the advantages and disadvantages of these systems and it is highly recommended reading. I reference it for several reasons. It seems that Amazon's website qualifies as a probabilistic system, and so the same advantages and disadvantages Anderson observes apply. It also seems that Anderson's post touches on a few themes that often appear on this blog.

First is that human beings rarely solve problems outright. Instead, we typically seek to exchange one set of disadvantages for another in the hopes that the new set is more desirable than the old. Solving problems is all about tradeoffs. As Anderson mentions, a probabilistic system "sacrifices perfection at the microscale for optimization at the macroscale." Is this tradeoff worth it?

Another common theme on this blog is the need for better information analysis capabilities. Last week, I examined a study on "visual working memory," and it became apparent that one thing that is extremely important when facing a large amount of information is the ability to figure out what to ignore. In information theory, this is referred to as the signal-to-noise ratio (technically, this is a more informal usage of the terms). One of the biggest challenges facing us is an increase in the quantity of information we are presented with. In the modern world, we're literally saturated in information, so the ability to separate useful information from false or irrelevant information has become much more important.

Naturally, these two themes interact. As I concluded in last week's post: "Like any other technological advance, systems that help us better analyze information will involve tradeoffs." While Amazon, Wikipedia, Google or blogs may not be perfect, they do provide a much deeper look into a wider variety of subjects than their predecessors.
Is Wikipedia "authoritative"? Well, no. But what really is? Britannica is reviewed by a smaller group of reviewers with higher academic degrees on average. There are, to be sure, fewer (if any) total clunkers or fabrications than in Wikipedia. But it's not infallible either; indeed, it's a lot more flawed than we usually give it credit for.

Britannica's biggest errors are of omission, not commission. It's shallow in some categories and out of date in many others. And then there are the millions of entries that it simply doesn't--and can't, given its editorial process--have. But Wikipedia can scale to include those and many more. Today Wikipedia offers 860,000 articles in English - compared with Britannica's 80,000 and Encarta's 4,500. Tomorrow the gap will be far larger.

The good thing about probabilistic systems is that they benefit from the wisdom of the crowd and as a result can scale nicely both in breadth and depth.
[Emphasis Mine] The bad thing about probabilistic systems is that they sacrifice perfection on the microscale. Any individual entry at Wikipedia may be less reliable than its Britannica counterpart (though not necessarily), and so we need to take any single entry with a grain of salt.
The same is true for blogs, no single one of which is authoritative. As I put it in this post, "blogs are a Long Tail, and it is always a mistake to generalize about the quality or nature of content in the Long Tail--it is, by definition, variable and diverse." But collectively they are proving more than an equal to mainstream media. You just need to read more than one of them before making up your own mind.
I once wrote a series of posts concerning this subject, starting with how the insights of reflexive documentary filmmaking are being used on blogs. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. Probabilistic systems would also benefit from such acknowledgements. Blogs seem to excel at this, though much of the problem facing Wikipedia and other such systems is that people aren't aware of their subjective nature and thus assume a greater degree of objectivity than is really warranted.

It's obvious that probabilistic systems are not perfect, but that is precisely why they work. Is it worth the tradeoffs? Personally, I think they are, provided that such systems properly disclose their limitations. I also think it's worth noting that such systems will not fully replace non-probabilistic systems. One commonly referenced observation about Wikipedia, for instance, is that it "should be the first source of information, not the last. It should be a site for information exploration, not the definitive source of facts."
Posted by Mark on January 08, 2006 at 11:24 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, November 20, 2005

Podcast Reviews
As I've hinted at in recent entries, I've been delving a bit into podcasts. For the uninitiated, a "podcast" is just a fancy word for pre-recorded radio shows that you can subscribe to on the internet (people often download podcasts to listen to on their iPod, hence the name - though the term really is a misnomer, as you don't need an iPod to listen to a podcast, and it's not broadcast either).

In any case, my short commute actually doesn't lend itself to listening, so I haven't listened to that many podcasts and all of the ones I've listened to are at least tangentially movie-related. So here are a few short reviews of podcasts that I've listened to (again, mostly movie related):
  • The CHUD Show: A few months ago a friend of mine recommended CHUD's podcast to me. I've always been a fan of the site (which features lots of movie news, etc...), so it was the first podcast I checked out, and I was quite happy with it, though I have to admit, it's got limited appeal. Once you realize that the name of their site (Cinematic Happenings Under Development - CHUD) is partly an homage to a cheesy 80s horror flick (in which CHUD stands for "Cannibalistic Humanoid Underground Dwellers"), you get the idea. I'm a strange guy, so it doesn't bother me much, but the CHUD folks seem to have an affinity for really bad jokes and obscure movies (which most would also consider bad, but people like myself don't mind much). It's not the highest quality audio, and they appear to be released only sporadically (there's only 5 podcasts in 3-4 months), but they are extremely long (1 hour+) and for fans of cheesy horror and obscure actors, it's a real treat. If you hear the plot for the movie Castle Freak (a topic of discussion in one of their shows) and think it sounds like your type of movie, you'll probably love CHUD. I like it, but it's not for everybody...
  • Cinecast: A much more polished and slick podcast, Cinecast is also great and it has a broader appeal as well. This podcast is almost the polar opposite of CHUD. It's orderly, regularly published, and it usually features more mainstream fare. They release two 40 minute podcasts a week, and in each episode, they start with a movie review (each week they review a current release and an older film which is usually part of some genre that they're studying - they're currently watching horror films, much to my pleasure), they talk about comments they've received about previous podcasts, and they give a top 5 list (i.e. top 5 war movies, top 5 actors, etc...). It's quite entertaining, and the high frequency of new episodes helps greatly (much like a high frequency of blogging helps in that realm). Naturally, whether you'll like it or not greatly depends on if you've seen the movies they're talking about, but as podcasts go, this is probably the most professional I've heard yet.
  • Cinemaslave: I really wanted to like this one, but I just can't get into it. Reading through the topics on each podcast got me really excited to listen, but it ended up being quite disappointing. I think the biggest problem here is that it's just one guy talking the entire time (CHUD and Cinecast have at least 2 commentators) and the lack of interplay really takes its toll.
  • Bleatcast: I already wrote about this one, but it's worth mentioning again because Lileks is a fascinating fellow. If you enjoy the Bleat, chances are that you'll also enjoy the bleatcast.
So that's it for now. Do you have any podcasts that you enjoy (or that you think I'd enjoy)? Drop a comment below...
Posted by Mark on November 20, 2005 at 07:01 PM .: Comments (5) | link :.


End of This Day's Posts

Sunday, June 26, 2005

Bookmark Aggregation
This is hardly new, but since I've often observed the need for better information aggregation tools I figured I'd give del.icio.us a plug. del.icio.us is essentially an online bookmark (or favorites, in IE-speak) repository. It allows you to post sites to your own personal collection of links. This is great for those who frequently access the internet from multiple locations and different browsers (i.e. from work and home) as it is always accessible on the web. But the really powerful thing about del.icio.us is that everyone's bookmarks are public and easily viewable, and there are all sorts of ways to aggregate and correlate bookmarks. They like to call the system a social bookmarks manager.

The system uses a tagging scheme (or flat hierarchy, if you prefer) to organize links. In the context of a system like del.icio.us, tagging essentially means that for each bookmark you add, you choose a number of labels or categories (tags) which are used to organize your bookmarks so you can find them later. Again, since del.icio.us is a public system, you can see what other people are posting to the same tags. This becomes a good way to keep up on a particular topic (for example, CSS, the economy, movies, tacos or cheese). Jon Udell speculates that posted links would follow a power law distribution, where a few individuals really stand out as the most reliable contributors of valuable links for a given topic. Unfortunately, del.icio.us isn't particularly great at sorting that out yet (though you may be able to notice such patterns emerging if you really keep up on a topic and who is posting what, which can be somewhat daunting for popular tags like CSS, but perhaps not so for something more obscure like unicode). Udell also notes how useful tagging is when trying to organize something that you think will be useful in the future.
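Mechanically, there isn't much to a tagging system - at its core, it's just a many-to-many mapping between bookmarks and labels. Here's a bare-bones sketch (my toy model, obviously not del.icio.us's actual implementation):

  // Minimal tag-style bookmarking: each bookmark carries a list of tags.
  var bookmarks = [];
  function post(url, tags) {
    bookmarks.push({ url: url, tags: tags });
  }
  function byTag(tag) {
    return bookmarks.filter(function (b) {
      return b.tags.indexOf(tag) != -1;
    });
  }
  post('http://www.w3.org/Style/CSS/', ['css', 'webdesign']);
  post('http://kaedrin.com/weblog/', ['blogs', 'movies']);
  byTag('css'); // returns just the first bookmark

Everything interesting about del.icio.us - the public sharing, the aggregation, the recommended tags - is layered on top of something about this simple.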

Tagging is a concept whose time has come, and despite its drawbacks, I have a feeling that 10 years from now, we're all going to look back and wonder how the heck we accomplished anything before something like tagging rolled around. del.icio.us certainly isn't the only site using tagging (Flickr has tagged photos, Technorati uses tags for blog posts, and there are several other sites). Of course, the concept does have its problems; namely, how do you know which tags to use? For instance, one of the more popular general subjects on del.icio.us is blogs and blogging, but what tags should be used? Blog, Blogging, Blogs, Weblog, Weblogs, blogosphere and so on... Luckily del.icio.us is getting better and better at this - their "experimental post" works wonders because it is actually able to recommend tags you should use based on what tags other people have used.

The system is actually quite simple and easy to use, but there's not much in the way of documentation. Check out this blog post or Jon Udell's screencast for some quick tutorials on how to get started. I've been playing around with it more and more, and it's proving very useful on multiple levels (organizing links I come across as well as finding new links in the first place!). If you're interested, you can check out my bookmarks. Some other interesting functionality:
  • Every page you view on del.icio.us has an RSS feed, so you can subscribe to feeds you like and read them along with your favorite news sites, blogs, &c.
  • One interesting thing you can do with tags is to create a continually updated set of links directed at one specific person. For instance, let's say I'm always finding links that I think my brother would enjoy. I can bookmark them with the tag "attn: goober" and send him the link, which will always be updated with the latest links I've sent him (and he could subscribe to the RSS for that page too).
  • del.icio.us/popular/ shows the pages that are being bookmarked most frequently - a good way to keep up with the leading edge. You can also add a tag to see only popular items for that tag. For example, to keep up with the most popular links about blogs, you could try del.icio.us/popular/blogs.
  • There's a lot of integration with Mozilla/Firefox, which is one reason for the service's popularity.
  • There also appears to be a lot of development that leverages del.icio.us data for other uses or in other applications.
  • del.icio.us picks your nose for you! Ok, er, it doesn't actually do that (and um, even if it did, would anyone use that feature?), but it does lots of other things too. Go sign up and check it out.
Again, it's a very useful site once you figure out what you're doing, and I have a few ideas that might show up on the blog (eventually). It should be particularly useful when I attempt to do something like this or this again. The system is far from perfect, and it's difficult to tell where some of the driving concepts are really going, but it certainly seems like there's something interesting and very useful going on here.

The important thing about del.icio.us is not that it was designed to create the perfect information resource, but rather an efficient system of collaboration. It's a systemic improvement; as such, the improvement in information output is an emergent property of internet use. Syndication, aggregation, and filtering on the internet still need to improve considerably, but this seems like a step in the right direction.
Posted by Mark on June 26, 2005 at 08:30 PM .: link :.


End of This Day's Posts

Sunday, May 22, 2005

Voters and Lurkers
Debating online, whether it be through message boards or blogs or any other method, can be rewarding, but it can also be quite frustrating. When most people think of a debate, they think of two opposing factions arguing, with one of the two "winning" the argument. It's a process of expression in which different people with different points of view express their opinions and are criticised by one another.

I've often found that specific threads tend to boil down to a point where the argument is going back and forth between two sole debaters (with very few interruptions from others). Inevitably, the debate gets to the point where both sides' assumptions (or axioms) have been exposed, and neither side is willing to agree with the other. To the debaters, this can be intensely frustrating. As such, anyone who has spent a significant amount of time debating others online can usually see that they're probably never going to convince their opponents. So who wins the argument?

The debaters can't decide who wins - they obviously think their argument is better than their opponents' (or, at the very least, are unwilling to admit otherwise) and so everyone thinks that they "won." But the debaters themselves don't "win" an argument, it's the people witnessing the debate that are the real winners. They decide which arguments are persuasive and which are not.

This is what the First Amendment of the US Constitution is based on, and it is a fundamental part of our democracy. In a vigorous marketplace of ideas, the majority of voters will discern the truth and vote accordingly.

Unfortunately, there never seems to be any sort of closure when debating online, because the audience is primarily composed of lurkers, most of whom don't say anything (plus, there are no votes), and so it seems like nothing is accomplished. However, I assure you that is not the case. Perhaps not all lurkers, but a lot of them are reading the posts with a critical eye and coming out of the debate convinced one way or the other. They are the "voters" in an online debate. They are the ones who determine who won the debate. In a scenario where only 10-15 people are reading a given thread, this might not seem like much (and it's not), but if enough of these threads occur, then you really can see results...

I'm reminded of Benjamin Franklin's essay "An apology for printers," in which Franklin defended those who printed allegedly offensive opinion pieces. His thought was that very little would be printed if publishers only produced things that were not offensive to anybody.
Printers are educated in the Belief, that when Men differ in Opinion, both sides ought equally to have the Advantage of being heard by the Public; and that when Truth and Error have fair Play, the former is always an overmatch for the latter.
Posted by Mark on May 22, 2005 at 06:58 PM .: link :.


End of This Day's Posts

Friday, April 22, 2005

What is a Weblog, Part II
What is a weblog? My original thoughts leaned towards thinking of blogs as a genre within the internet. Like all genres, there is a common set of conventions that define the blogging genre, but the boundaries are soft and some sites are able to blur the lines quite thoroughly. Furthermore, each individual probably has their own definition as to what constitutes a blog (again similar to genres). The very elusiveness of a definition for blog indicates that perception becomes an important part of determining whether or not something is a blog. It has become clear that there is no one answer, but if we spread the decision out to a broad number of people, each with their own independent definition of blog, we should be able to come to the conclusion that a borderline site like Slashdot is a blog because most people call it a blog.

So now that we have a (non)definition for what a blog is, just how important are blogs? Caesar at ArsTechnica writes that according to a new poll, Americans are somewhat ambivalent about blogs. In particular, they don't trust blogs.

I don't particularly mind this, however. For the most part, blogs don't make much of an effort to be impartial, and as I've written before, it is the blogger's willingness to embrace their subjectivity that is their primary strength. Making mistakes on a blog is acceptable, so long as you learn from your mistakes. Since blogs are typically more informal, it's easier for bloggers to acknowledge their mistakes.

Lexington Green from ChicagoBoyz recently wrote about blogging to a writer friend of his:
To paraphrase Truman Capote's famous jibe against Jack Kerouac, blogging is not writing, it is typing. A writer who is blogging is not writing, he is blogging. A concert pianist who is sitting down at the concert grand piano in Carnegie Hall in front of a packed house is the equivalent to an author publishing a finished book. The same person sitting down at the piano in his neighborhood bar on a Saturday night and knocking out a few old standards, doing a little improvisation, and even doing some singing -- that is blogging. Same instrument -- words, piano -- different medium. We forgive the mistakes and wrong-guesses because we value the immediacy and spontaneity. Plus, publish a book, it is fixed in stone. Write a blog post you later decide is completely wrong, it is actually good, since it gives you a good hook for a later post explaining your thoughts that led to the changed conclusion. The essence of a blog is to air things informally, to throw things out, to say "this interests me because ..." From time to time a more considered and article-like post is good. But most people read blogs by skimming. If a post is too long, in my observation, it does not get much response and may not be read at all.
Of course, his definition of what a blog is could be argued (as there are some popular and thoughtful bloggers who routinely write longer, more formal essays), but it actually struck me as being an excellent general description of blogging. Note his favorable attitude towards mistakes ("it gives you a good hook for a later post" is an excellent quote, though I think you might have to be a blogger to fully understand it). In the blogosphere, it's ok to be wrong:
Everyone makes mistakes. It's a fact of life. It isn't a cause for shame, it's just reality. Just as engineers are in the business of producing successful designs which can be fabricated out of less-than-ideal components, the engineering process is designed to produce successful designs out of a team made up of engineers every one of which screws up routinely. The point of the process is not to prevent errors (because that's impossible) but rather to try to detect them and correct them as early as possible.

There's nothing wrong with making a mistake. It's not that you want to be sloppy; everyone should try to do a good job, but we don't flog people for making mistakes.
The problem with the mainstream media is that they purport to be objective, as if they're just reporting the facts. Striving for objectivity can be a very good thing, but total objectivity is impossible, and if you deny the inherent subjectivity in journalism, then something is lost.

One thing Caesar mentions is that "the sensationalism surrounding blogs has got to go. Blogs don't solve world hunger, cure disease, save damsels in distress, or any of the other heroic things attributed to them." I agree with this too, though I do think there is something sensational about blogs, or more generally, the internet.

Steven Den Beste once wrote about what he thought were the four most important inventions of all time:
In my opinion, the four most important inventions in human history are spoken language, writing, movable type printing and digital electronic information processing (computers and networks). Each represented a massive improvement in our ability to distribute information and to preserve it for later use, and this is the foundation of all other human knowledge activities. There are many other inventions which can be cited as being important (agriculture, boats, metal, money, ceramic pottery, postmodernist literary theory) but those have less pervasive overall effects.
Regardless of whether or not you agree with the notion that these are the most important inventions, it is undeniable that the internet provides a stairstep in communication capability, which, in turn, significantly improves the process of large-scale collaboration that is so important to human existence.
When knowledge could only spread by speech, it might take a thousand years for a good idea to cross the planet and begin to make a difference. With writing it could take a couple of centuries. With printing it could happen in fifty years.

With computer networks, it can happen in a week if not less. After I've posted this article to a server in San Diego, it will be read by someone on the far side of a major ocean within minutes. That's a radical change in capability; a sufficient difference in degree to represent a difference in kind. It means that people all over the world can participate in debate about critical subjects with each other in real time.
And it appears that blogs, with their low barrier to entry and automated software processes, will play a large part in the worldwide debate. There is, of course, a ton of room for improvement, but things are progressing rapidly now and perhaps even accelerating. It is true that some blogging proponents are preaching triumphalism, but that's part of the charm. They're allowed to be wrong and if you look closely at what happens when someone makes such a comment, you see that for every exaggerated claim, there are 10 counters in other blogs that call bullshit. Those blogs might be on the long tail and probably won't garner as much attention, but that's part of the point. Blogs aren't trustworthy, which is precisely why they're so important.

Update 4.24.05: I forgot to link the four most important inventions article (and I changed some minor wording: I had originally referred to the four "greatest" inventions, which was not the wording Den Beste had used).
Posted by Mark on April 22, 2005 at 06:49 PM .: link :.


End of This Day's Posts

Sunday, April 17, 2005

What is a Weblog?
Caesar at ArsTechnica has written a few entries recently concerning blogs which interested me. The first simply asks: What, exactly, is a blog? Once you get past the overly-general definitions ("a blog is a frequently updated webpage"), it becomes a surprisingly difficult question.

Caesar quotes Wikipedia:
A weblog, web log or simply a blog, is a web application which contains periodic time-stamped posts on a common webpage. These posts are often but not necessarily in reverse chronological order. Such a website would typically be accessible to any Internet user. "Weblog" is a portmanteau of "web" and "log". The term "blog" came into common use as a way of avoiding confusion with the term server log.
Of course, as Caesar notes, the majority of internet sites could probably be described in such a way. What differentiates blogs from discussion boards, news organizations, and the like?

Reading through the resulting discussion provides some insight, but practically every definition is either too general or too specific.

Many people like to refer to Weblogs as a medium in itself. I can see the point, but I think it's more general than that. The internet is the medium, whereas a weblog is basically a set of commonly used conventions used to communicate through that medium. Among the conventions are things like a main page with chronological posts, permalinks, archives, comments, calendars, syndication (RSS), blogging software (CMS), trackbacks, &c. One problem is that no single convention is, in itself, definitive of a weblog. It is possible to publish a weblog without syndication, comments, or a calendar. Depending on the conventions being eschewed, such blogs may be unusual, but may still be just as much a blog as any other site.

For lack of a better term, I tend to think of weblogs as a genre. This is, of course, not totally appropriate but I think it does communicate what I'm getting at. A genre is typically defined as a category of artistic expression marked by a distinctive style, form, or content. However, anyone who is familiar with genre film or literature knows that there are plenty of movies or books that are difficult to categorize. As such, specific genres such as horror, sci-fi, or comedy are actually quite inclusive. Some genres, Drama in particular, are incredibly broad and are often accompanied by the conventions of other genres (we call such pieces "cross-genre," though I think you could argue that almost everything incorporates "Drama"). The point here is that there is often a blurry line between what constitutes one genre from another.

On the medium of the internet, there are many genres, one of which is a weblog. Other genres include commercial sites (i.e. sites that try to sell you things, Amazon.com, Ebay, &c.), reference sites (i.e. dictionaries & encyclopedias), Bulletin Board Systems and Forums, news sites, personal sites, weblogs, wikis, and probably many, many others.

Any given site is probably made up of a combination of genres and it is often difficult to pinpoint any one genre as being representative. Take, for example, Kaedrin.com. It is a personal site with some random features, a bunch of book & movie reviews, a forum, and, of course, a weblog (which is what you're reading now). Everything is clearly delineated here at Kaedrin, but other sites blur the lines between genres on every page. Take ArsTechnica itself: Is it a news site or a blog or something else entirely? I would say that the front page is really a combination of many different things, one of which is a blog. It's a "cross-genre" webpage, but that doesn't necessarily make it any less effective (though there is something to be said for simplicity and it is quite possible to load a page up with too much stuff, just as it's possible for a book or movie to be too ambitious and take on too much at once) just as Alien isn't necessarily a less effective Science Fiction film because it incorporates elements of Horror and Drama (or vice-versa).

Interestingly, much of what a weblog is can be defined as an already existing literary genre: the journal. People have kept journals and diaries all throughout history. The major difference between a weblog and a journal is that a weblog is published for all to see on the public internet (and also that weblogs can be linked together through the use of the hyperlink and the infrastructure of the internet). Historically, diaries were usually private, but there are notable exceptions which have been published in book form. Theoretically, one could take such diaries and publish them online - would they be blogs? Take, for instance, The Diary of Samuel Pepys which is currently being published daily as if it's a weblog circa 1662 (i.e. Today's entry is dated "Thursday 17 April 1662"). The only difference is that the author of that diary is dead and thus doesn't interact or respond to the rest of the weblog community (though there is still interaction allowed in the form of annotations).

A few other random observations about blogs:
  • Software: Many people brought up the fact that most blogs are produced with the assistance of weblogging software, such as Blogger or Movable Type. From my perspective, such tools are necessary for the spread of weblogs, but shouldn't be a part of the definition. They assist in the spread of weblogs because they automate the overly-technical details of publishing a website and make it easy for normal folks to participate. They're also useful for automatically propagating weblog conventions like permalinks, comments, trackbacks, and archives. However, it's possible to do all of this without the use of blogging-specific software, and it's also possible to use blogging software for other purposes (for instance, Kaedrin's very own Tandem Stories are powered by Movable Type). It's interesting that other genres have their own software as well, particularly bulletin boards and forums. Ironically, one could use such BBS software to publish a blog (or power tandem stories), if one were so inclined. The Pepys blog mentioned above actually makes use of wiki software (though that software powers the entries, it's mostly used to allow annotations). To me, content management systems are important, but they don't define the genre so much as propagate it.
  • Personality: One fairly common theme in definitions is that weblogs are personal - they're maintained by a person (or small group of people), not an official organization. A personality gets through. There is also the perception that a blog is less filtered than official communications. Part of the charm of weblogs is that you can be wrong (more on this later, possibly in another post). I'm actually not sure how important this is to the definition of a blog. Someone who posts nothing but links doesn't display much of a personality, except through more subtle means (the choice of links can tell you a lot about an individual, albeit in an indirect way that could lead to much confusion).
  • Communities: Any given public weblog is part of a community, whether it wants to be or not. The boundaries of any specific weblog are usually well delineated, but since weblogs are part of the internet, which is an on-demand medium (as opposed to television or radio, which are broadcast), blogs are often seen as relative to one another. Entries and links from different blogs are aggregated, compared, correlated and published in other weblogs. Any blog which builds enough of a readership provides a way to connect people who share various interests through the infrastructure of the internet.
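Syndication is what makes that sort of aggregation mechanical rather than manual. As a small illustration, pulling the latest entries from a handful of blogs is only a few lines of Python (this uses the third-party feedparser library, and the feed URLs are purely placeholders of mine):

    import feedparser  # third-party library: pip install feedparser

    # Placeholder feed URLs; any RSS or Atom feeds would do.
    feeds = [
        "http://example.com/blog-a/index.rdf",
        "http://example.com/blog-b/atom.xml",
    ]

    entries = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            entries.append((entry.get("title", ""), entry.get("link", "")))

    # A crude aggregate: every post title and permalink in one list.
    for title, link in entries:
        print(title, "-", link)

That's the whole trick: once entries are machine-readable, comparing and correlating them across blogs becomes a data problem rather than a browsing problem.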
Some time ago, Derek Powazek asked "What the hell is a weblog? You tell me." and published all the answers. It turns out that I answered this myself (last one on that page) many years ago:
I don't care what the hell a weblog is. It is what I say it is. Its something I update whenever I find an interesting tidbit on the web. And its fun. So there.
Heh. Interesting to note that my secondary definition there ("something I update whenever I find an interesting tidbit on the web") has changed significantly since I contributed it. This is why, I suppose, I had originally supplied the primary definition ("I don't care what the hell a weblog is. It is what I say it is.") and, to be honest, I don't think that's changed (though I guess you could call that definition "too general"). Blogging is whatever I want it to be. Of course, I could up and call anything a blog, but I suppose it is also required that others perceive your blog as a blog. That way, the genre retains some shape, but remains permeable enough to allow some flexibility.

I had originally intended to make several other points in this post, but since it has grown to a rather large size, I'll save them for other posts. Hopefully, I'll gather the motivation to do so before next week's scheduled entry, but there's no guarantee...
Posted by Mark on April 17, 2005 at 08:27 PM .: link :.


End of This Day's Posts

Sunday, March 27, 2005

Accelerating Change
Slashdot links to a fascinating and thought-provoking one-hour (!) audio stream of a speech "by futurist and developmental systems theorist, John Smart." The talk is essentially about the future of technology, more specifically information and communication technology. Obviously, there is a lot of speculation here, but it is interesting so long as you keep it in the "speculation" realm. Much of this post is simply a high-level summary of the talk with a little commentary sprinkled in.

He starts by laying out some key motivations and guidelines for thinking about this sort of thing, paraphrasing David Brin (and this, in turn, is my paraphrase of Smart):
We need a pragmatic optimism, a can-do attitude, a balance between innovation and preservation, honest dialogue on persistent problems, ... tolerance of the imperfect solutions we have today, and the ability to avoid both doomsaying and a paralyzing adherence to the status quo. ... Great input leads to great output.
So how do new systems supplant the old? They do useful things with less matter, less energy, and less space. They do this until they reach some sort of limit along those axes (a limitation of matter, energy, or space). It turns out that evolutionary processes are great at this sort of thing.

Smart goes on to list three laws of information and communication technology:
  1. Technology learns faster than you do (on the order of 10 million times faster). At some point, Smart speculates that there will be some sort of persistent Avatar (neural-net prosthesis) that will essentially mimic and predict your actions, and that the "thinking" it will do (pattern recognition, etc...) will be millions of times faster than what our brains do. He goes on to wonder what we will look like to such an Avatar, and speculates that we'll be sort of like pets, or better yet, plants. We're rooted in matter, energy, and space/time and are limited by those axes, but our Avatars will have a large advantage, just as we have a large advantage over plants in that respect. But we're built on top of plants, just as our Avatars will be built on top of us. This opens up a whole new can of worms regarding exactly what these Avatars are, what is actually possible, and how they will be perceived. Is it possible for the next step in evolution to occur in man-made (or machine-made) objects? (This section is around 16:30 in the audio)
  2. Human beings are catalysts rather than controllers. We decide which things to accelerate and which to slow down, and this is tremendously important. There are certain changes that are evolutionarily inevitable, but the path we take to reach those ends is not set and can be manipulated. (This section is around 17:50 in the audio)
  3. Interface is extremely important and the goal should be a natural high-level interface. His example is calculators. First generation calculators simply automate human processes and take away your math skills. Second generation calculators like Mathematica allow you to get a much better look at the way math works, but the interface "sucks." Third generation calculators will have a sort of "deep, fluid, natural interface" that allows a kid to have the understanding of a grad student today. (This section is around 20:00 in the audio)
Interesting stuff. His view is that most social and technological advances of the last 75 years or so are more accelerating refinements (changes in the microcosm) than disruptive changes (changes in the macrocosm). Most new technological advances are really abstracted efficiencies - it's the great unglamorous march of technology. They're small and they're obfuscated by abstraction, so many of the advances go barely noticed.

That's about halfway through the speech; he goes on to list many examples and explore some more interesting concepts. Here are some bits I found interesting.
  • He talks about transportation and energy, and he argues that even though, on a high level, we haven't advanced much (still using oil, natural gas - fossil fuels), there has actually been a massive amount of change, but that the change is mostly hidden in abstracted accelerating efficiencies. He mentions that we will probably have zero-emission fossil fuel vehicles 30-40 years from now (which I find hard to believe) and that rather than focusing on hydrogen or solar, we should be trying to squeeze more and more efficiency out of existing systems (i.e. abstracted efficiencies). He also mentions population growth as a variable in the energy debate, something that is rarely done, but if he is correct that population will peak around 2050 (and that population density is increasing in cities), then that changes all projections about energy usage as well. (This section is around 31:50-35 in the audio.) He talks about hybrid technologies and also autonomous highways as being integral in accelerating efficiencies of energy use. (This section is around 37-38 in the audio.) I found this part of the talk fascinating because energy debates are often very myopic and don't consider things outside the box like population growth and density, autonomous solutions, phase shifts of the problem, &c. I'm reminded of this Michael Crichton speech where he says:
    Let's think back to people in 1900 in, say, New York. If they worried about people in 2000, what would they worry about? Probably: Where would people get enough horses? And what would they do about all the horseshit? Horse pollution was bad in 1900, think how much worse it would be a century later, with so many more people riding horses?
    None of which is to say that we shouldn't be pursuing alternative energy technology or that it can't supplant fossil fuels, just that things seem to be trending towards making fossil fuels more efficient. I see hybrid technology becoming the major enabler in this arena, possibly followed by the autonomous highway (that controls cars and can perhaps give an extra electric boost via magnetism). All of which is to say that the future is a strange thing, and these systems are enormously complex and are sometimes driven by seemingly unrelated events.
  • He mentions an experiment in genetic algorithms used for process automation. Such evolutionary algorithms are often used in circuit design and routing processes to find the most efficient configuration. He mentions one case where someone made a mistake at the quantum level of a system, and when they used the genetic algorithm to design the circuit, they found that the imperfection was actually exploited to create a better circuit. These sorts of evolutionary systems are robust because failure actually drives the system. It's amazing. (This section is around 47-48 in the audio; a toy sketch of the basic evolutionary loop follows this list.)
  • He then goes on to speculate as to what new technologies he thinks will represent disruptive change. The first major advance he mentions is the development of a workable LUI - a language-based user interface that utilizes a natural language that is easily understandable by both the average user and the computer (i.e. a language that doesn't require years of study to figure out, a la current programming languages). He thinks this will grow out of current search technologies (perhaps in a scenario similar to EPIC). One thing he mentions is that the internet right now doesn't give an accurate representation of the wide range of interests and knowledge that people have, but that this is steadily getting better over time. As more and more individuals, with more and more knowledge, begin interacting on the internet, they begin to become a sort of universal information resource. (This section is around 50-53 in the audio)
  • The other major thing he speculates about is the development of personality capture and parallel computing, which sort of integrates with the LUI. This is essentially the Avatar I mentioned earlier which mimics and predicts your actions.
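As promised in the circuit-design bullet above, here is a toy Python sketch of the basic mutate-select-reproduce loop behind a genetic algorithm. The bit-counting fitness function is purely my own stand-in for illustration; a real circuit-design system would score each candidate in simulation:

    import random

    # Toy fitness: just count the 1-bits in the genome. A real system
    # would simulate the candidate circuit and score its performance.
    def fitness(genome):
        return sum(genome)

    def mutate(genome, rate=0.01):
        # Random "imperfections": flip each bit with a small probability.
        return [bit ^ 1 if random.random() < rate else bit for bit in genome]

    def crossover(a, b):
        # Reproduction: splice two parent genomes at a random point.
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def evolve(pop_size=100, genome_len=64, generations=200):
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: the fitter half survives to become parents.
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]
            # Offspring are mutated crossovers of randomly chosen parents.
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    print(fitness(evolve()))  # approaches the maximum (64) over generations

Note how mutation, the very thing that introduces mistakes, is also what feeds selection; that's the property the circuit anecdote illustrates.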
As always, we need to keep our feet on the ground here. Futurists are fun to listen to, but it's easy to get carried away. The development of a LUI and a personality capture system would be an enormous help, but we still need good information aggregation and correlation systems if we're really going to progress. Right now the problem is finding the information we need, and analyzing the information. A LUI and personality capture system will help with the finding of information, but not so much with the analysis (the separating of the signal from the noise). As I mentioned before, the speech is long (one hour), but it's worth a listen if you have the time...
Posted by Mark on March 27, 2005 at 08:40 PM .: link :.


End of This Day's Posts

Sunday, March 13, 2005

A tale of two software projects
A few weeks ago, David Foster wrote an excellent post about two software projects. One was a failure, and one was a success.

The first project was the FBI's new Virtual Case File system; a tool that would allow agents to better organize, analyze and communicate data on criminal and terrorism cases. After 3 years and over 100 million dollars, it was announced that the system may be totally unusable. How could this happen?
When it became clear that the project was in trouble, Aerospace Corporation was contracted to perform an independent evaluation. It recommended that the software be abandoned, saying that "lack of effective engineering discipline has led to inadequate specification, design and development of VCF." SAIC has said it believes the problem was caused largely by the FBI: specifically, too many specification changes during the development process...an SAIC executive asserted that there were an average of 1.3 changes per day during the development. SAIC also believes that the current system is useable and can serve as a base for future development.
I'd be interested to see what the actual distribution of changes was (as opposed to the "average changes per day", which seems awfully vague and somewhat obtuse to me), but I don't find it that hard to believe that this sort of thing happened (especially because the software development firm was a separate entity). I've had some experience with gathering requirements, and it certainly can be a challenge, especially when you don't know the processes currently in place. This does not excuse anything, however, and the question remains: how could this happen?

The second project, the success, may be able to shed some light on that. DARPA was tapped by the US Army to help protect troops from enemy snipers. The requested application would spot incoming bullets and identify their point of origin, and it would have to be easy to use, mobile, and durable.
The system would identify bullets from their sound...the shock wave created as they travelled through the air. By using multiple microphones and precisely timing the arrival of the "crack" of the bullet, its position could, in theory, be calculated. In practice, though, there were many problems, particularly the high levels of background noise--other weapons, tank engines, people shouting. All these had to be filtered out. By Thanksgiving weekend, the BBN team was at Quantico Marine Base, collecting data from actual firing...in terrible weather, "snowy, freezing, and rainy" recalls DARPA Program Manager Karen Wood. Steve Milligan, BBN's Chief Technologist, came up with the solution to the filtering problem: use genetic algorithms. These are a kind of "simulated evolution" in which equations can mutate, be tested for effectiveness, and sometimes even "mate," over thousands of simulated generations (more on genetic algorithms here.)

By early March, 2004, the system was operational and had a name--"Boomerang." 40 of them were installed on vehicles in Iraq. Based on feedback from the troops, improvements were requested. The system has now been reduced in size, shielded from radio interference, and had its display improved. It now tells soldiers the direction, range, and elevation of a sniper.
Now what was the biggest difference between the remarkable success of the Boomerang system and the spectacular failure of the Virtual Case File system? Obviously, the two projects present very different challenges, so a direct comparison doesn't necessarily tell the whole story. However, it seems to me that discipline (in the case of the Army) or the lack of discipline (in the case of the FBI) might have been a major contributor to the outcomes of these two projects.

It's obviously no secret that discipline plays a major role in the Army, but there is more to it than just that. Independence and initiative also play an important role in a military culture. In Neal Stephenson's Cryptonomicon, the way the character Bobby Shaftoe (a Marine Raider, which is "...like a Marine, only more so.") interacts with his superiors provides some insight (page 113 in my version):
Having now experienced all the phases of military existence except for the terminal ones (violent death, court-martial, retirement), he has come to understand the culture for what it is: a system of etiquette within which it becomes possible for groups of men to live together for years, travel to the ends of the earth, and do all kinds of incredibly weird shit without killing each other or completely losing their minds in the process. The extreme formality with which he addresses these officers carries an important subtext: your problem, sir, is deciding what you want me to do, and my problem, sir, is doing it. My gung-ho posture says that once you give the order I'm not going to bother you with any of the details - and your half of the bargain is you had better stay on your side of the line, sir, and not bother me with any of the chickenshit politics that you have to deal with for a living.
Good military officers are used to giving an order, then staying out of their subordinates' way as they carry out that order. I didn't see any explicit measurement, but I would assume that there weren't too many specification changes during the development of the Boomerang system. Of course, the developers themselves made all sorts of changes to specifics, and they also incorporated feedback from the Army in the field in their development process, but that is standard stuff.

I suspect that the FBI is not completely to blame, but as the report says, there was a "lack of effective engineering discipline." The FBI and SAIC share that failure. It seems likely, from the number of changes requested by the FBI and the number of government managers involved, that micromanagement played a significant role. As Foster notes, we should be leveraging our technological abilities in the war on terror, and he suggests a loosely organized oversight committee (headed by "a Director of Industrial Mobilization") to make sure things like this don't happen very often. Sounds like a reasonable idea to me...
Posted by Mark on March 13, 2005 at 08:47 PM .: link :.


End of This Day's Posts

Sunday, February 13, 2005

An Exercise in Aggregation
A few weeks ago I collected a ton of posts regarding the Iraqi elections. I did this for a few reasons. The elections were important and I wanted to know how they were going, but I could have just read up on them if that was the only reason. The real reason I made that post was to participate in and observe information aggregation and correlation in real time.

It was an interesting experience, and I learned a few things which should help in future exercises. Some of these are in my control to fix, some will depend on the further advance of technology.
  • Format - It seems to me that simply posting a buttload of links in a long list is not the best way to aggregate and correlate data. It does provide a useful service as a central place with links to diverse articles, but it would be much better if the posts were separated into smaller groups. This would better facilitate scanning and would allow readers to focus on the topics that interest them. It would also be helpful to indicate threads of debate between different bloggers. For example, it seems that a ton of people responded to Juan Cole's comments, though I only listed one or two (and I did so in a way that wasn't exactly efficient).
  • Categorization - One thing that is frustrating about such an exercise is that many blogs are posting up a storm on the subject throughout the day, which means that someone like myself who is attempting to aggregate posts would have to continually check back as well. Indeed, simply collecting all the links and posting them can be a challenge. What I ended up doing was linking to a few specific posts and then just including a general link to the blog with the instruction to "Keep scrolling." Dean Esmay demonstrated how bloggers can help aggregation by providing a category page where all of his Iraqi election posts were collected (and each individual post had an index of posts as well). This made things a lot easier for me, as I didn't have to collect a large number of links. All I had to do was post one link. Unfortunately this is somewhat rare, and given the tools we have to use, it is also understandable. Most people are concerned with getting their voice out there, and don't want to spend the time devising a categorization scheme. Movable Type 3.x has subcategories, which could help with this, but it takes time to figure this stuff out. Hopefully this is something that will improve in time as more enhancements are made to blogging software.
  • Trackbacks - Put simply, they suck for an exercise like this. For those who don't know, trackbacks are a way of notifying other websites that you're linking to them (and a way of indicating that other websites have linked to you). Movable Type has a nifty feature that will automatically detect a trackback-enabled blog when you link to it, and set the site to be pinged. This is awesome when you're linking to a single post or even a handful of posts. However, when I was compiling the links for my Iraqi election post, I naturally had tons of trackbacks to send. I started getting trackback failures that weren't really failures. And because I was continually updating that post with new data, I ended up sending duplicate pings to the same few blogs (some got as many as five or six extraneous pings). I suppose I could have turned off the auto-detection feature and manually pinged the sites I wanted for that post, but that is hardly convenient. (A sketch of what a ping actually involves, and the deduplication I could have used, follows this list.)
  • Other notes - There has to be a better way to collect permalinks and generate a list than simply copying and pasting. I'm sure there are some bookmarklets or browser features that could prove helpful, though this would require a little research and a little tweaking to be useful.
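For the curious, the trackback ping itself is quite simple: per the TrackBack specification, it's just an HTTP POST of a few form-encoded fields (url, title, excerpt, blog_name) to the target post's trackback URL, which answers with a small XML document containing an error code. Here's a rough Python sketch of a ping, plus the duplicate-suppression mentioned in the list above (the helper names are mine, and this obviously isn't Movable Type's actual code):

    from urllib.parse import urlencode
    from urllib.request import urlopen

    def send_trackback(trackback_url, post_url, title, excerpt, blog_name):
        # The TrackBack spec calls for a form-encoded POST...
        data = urlencode({
            "url": post_url,       # permalink of the post doing the linking
            "title": title,
            "excerpt": excerpt,
            "blog_name": blog_name,
        }).encode("utf-8")
        # ...and the server replies with XML containing <error>0</error>
        # on success.
        with urlopen(trackback_url, data) as response:
            return response.read().decode("utf-8")

    # Remember what we've already pinged so that repeatedly re-saving a
    # post doesn't spam the same blogs with duplicate pings.
    pinged = set()

    def ping_once(trackback_url, *args):
        if trackback_url in pinged:
            return None
        pinged.add(trackback_url)
        return send_trackback(trackback_url, *args)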
Writing that post proved to be a most interesting exercise in aggregation, and I look forward to incorporating some of the lessons learned above in future posts...
Posted by Mark on February 13, 2005 at 10:39 AM .: link :.


End of This Day's Posts

Thursday, January 27, 2005

Evölutiön
In a stroke of oddly compelling genius (or possibly madness), Jon Udell has put together a remarkable flash screencast (note: there is sound and it looks best in full screen mode) detailing the evolution of the Heavy metal umlaut page on Wikipedia.
It's a wonderfully silly topic, but my point is somewhat serious too. The 8.5-minute screencast turns the change history of this Wiki page into a movie, scrolls forward and backward along the timeline of the document, and follows the development of several motifs. Creating this animated narration of a document's evolution was technically challenging, but I think it suggests interesting possibilities.
Wikis are one of those things that just don't sound right when you hear about what they are and how they work. It's one thing to institute a collaborative encyclopedia, but Wikis embrace a philosophy of openness that seems entirely too permissive. Wikis are open to the general public and allow anyone to modify their contents without any sort of prior review. What's to stop a troll from vandalizing a page? Nothing, except that someone will come along and correct it shortly thereafter (Udell covers an episode of vandalism in the screencast). It's a textbook self-organizing system (note that wikis focus not on the content, but rather on establishing an efficient mechanism for collaboration; the content is an emergent property of the system). It should be interesting to see how it progresses... [via Jonathon Delacour, who also has an interesting discussion about umlauts and diaereses and another older post about wikis]
Posted by Mark on January 27, 2005 at 08:02 PM .: link :.


End of This Day's Posts

Sunday, January 23, 2005

Long Tails, TV, and DVR
Apparently Chris Anderson (author of the Wired article I posted last week) has a blog in which he comments regularly on the long tail concept.

In one post, he speculates how the long tail relates to television programs, DVRs and the internet. In short, he proposes a browser plugin that you could use when you see a reference to a TV show that you are interested in and want to record. You would simply need to highlight the show title and right-click, where a new option would be available called "Record to DVR," at which point you could go about setting up your DVR to record the show.

I don't have a DVR, so perhaps I'm not the best person to comment, but it strikes me that if you're reading a recommendation for a show, you might want to go back and watch all the previous episodes as well. For instance, a lot of people have been recommending Lost to me recently. If I had a DVR, I might set it to record the show, but I'd have missed a significant portion of it already (I don't know how important that would be). What I'd really love is to go back and watch the series from the beginning.

Comcast has a feature called "On Demand" which would be perfect for this, but they don't seem to have much in the way of capacity (though if you have HBO, I understand they sometimes make whole seasons of various popular shows available) and they don't have Lost. Evan Kirchoff recently posted something that put an interesting twist on this subject: other people are his PVR. When he finds a show he wants to watch, he simply downloads it via torrents:
What I really wanted all this time, it turns out, is just the assurance that somebody out there in the luminiferous aether is faithfully recording every show, in case I later decide that I want it. Setting a VCR in advance is way too much work, but having to download a 350-megabyte file is an action that's just affirmative enough to distill one's preferences.
It's certainly an interesting perspective - a typical emergent property of the self-organizing internet (along with all the warts that entails) - and it's a hell of a lot better than waiting for reruns. I don't have the 400 gigs of hard drive space on my system that Evan does, but I might check out an episode or two. Of course, there's something to be said about the quality of the watching-tv-on-a-computer experience and, as Evan mentions, I'm not quite sure about the legality of such a practice (his reasoning seems logical, but that doesn't necessarily mean anything). Perhaps a micropayment solution (e.g. download an episode for a dollar, or a season for $10) would work. Of course, this would destroy the DVD market (which I imagine some people would be none too happy about), but it would also lengthen the tail, as quality niche shows (i.e. the long tail) might be able to carve out a profitable piece of the pie.

The best solution would, of course, combine all the various features above into one application/experience, but I'm not holding my breath just yet.
Posted by Mark on January 23, 2005 at 11:55 AM .: link :.


End of This Day's Posts

Sunday, January 16, 2005

Chasing the Tail
The Long Tail by Chris Anderson : An excellent article from Wired that demonstrates a few of the concepts and ideas I've been writing about recently. One such concept is well described by Clay Shirky's excellent article Power Laws, Weblogs, and Inequality. A system governed by a power law distribution is essentially one where the power (whether it be measured in wealth, links, etc) is concentrated in a small population (when graphed, the rest of the population's power values resemble a long tail). This concentration occurs spontaneously, and it is often strengthened because members of the system have an incentive to leverage their power to accrue more power.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.
As such, this distribution manifests in all sorts of human endeavors, including economics (for the accumulation of wealth), language (for word frequency), weblogs (for traffic or number of inbound links), genetics (for gene expression), and, as discussed in the Wired article, entertainment media sales. Typically, the sales of music, movies, and books follow a power law distribution, with a small number of hit artists who garner the vast majority of the sales. The usual rule of thumb is that 20% of available artists get 80% of the sales.
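You can get a feel for that rule of thumb in a few lines of Python: give the kth most popular artist sales proportional to 1/k (a Zipf-style power law; the exponent and catalog size here are arbitrary choices of mine) and measure the top 20%'s share of the total:

    # Hypothetical catalog: artist k sells in proportion to 1/k.
    N = 10_000
    sales = [1 / k for k in range(1, N + 1)]

    top_20_share = sum(sales[: N // 5]) / sum(sales)
    print(f"Top 20% of artists capture {top_20_share:.0%} of sales")

For these particular numbers, the top 20% end up with a bit over 80% of the sales; steeper exponents concentrate things even further.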

Because of the expense of producing the physical product, and giving it a physical point of sale (shelf-space, movie theaters, etc...), this is bad news for the 80% of artists who get 20% of the sales. Their books, movies, and music eventually go out of print and are generally forgotten, while the successful artists' works are continually reprinted and sold, building on their own success.

However, with the advent of the internet, this is beginning to change. Sales are still governed by the power law distribution, but the internet is removing the physical limitations of entertainment media.
An average movie theater will not show a film unless it can attract at least 1,500 people over a two-week run; that's essentially the rent for a screen. An average record store needs to sell at least two copies of a CD per year to make it worth carrying; that's the rent for a half inch of shelf space. And so on for DVD rental shops, videogame stores, booksellers, and newsstands.

In each case, retailers will carry only content that can generate sufficient demand to earn its keep. But each can pull only from a limited local population - perhaps a 10-mile radius for a typical movie theater, less than that for music and bookstores, and even less (just a mile or two) for video rental shops. It's not enough for a great documentary to have a potential national audience of half a million; what matters is how many it has in the northern part of Rockville, Maryland, and among the mall shoppers of Walnut Creek, California.
The decentralized nature of the internet makes it a much better way to distribute entertainment media, as that documentary that has a potential national (heck, worldwide) audience of half a million people could likely succeed if distributed online. The infrastructure for films isn't there yet, but it has been happening more in the digital music world, and even in a hybrid space like Amazon.com, which sells physical products, but in a non-local manner. With digital media, the cost of producing and distributing entertainment media goes way down, and thus even average artists can be considered successful, even if their sales don't approach that of the biggest sellers.

The internet isn't a broadcast medium; it is on-demand, driven by each individual's personal needs. Diversity is the key, and as Shirky's article says: "Diversity plus freedom of choice creates inequality, and the greater the diversity, the more extreme the inequality." With respect to weblogs (or more generally, websites), big sites are, well, bigger, but links and traffic aren't the only metrics for success. Smaller websites are smaller in those terms, but are often more specialized, and thus they do better both in terms of connecting with their visitors (or customers) and in providing a more compelling value to their visitors. Larger sites, by virtue of their popularity, simply aren't able to interact with visitors as effectively. This is assuming, of course, that the smaller sites do a good job. My site is very small (in terms of traffic and links), but not very specialized, so it has somewhat limited appeal. However, the parts of my site that get the most traffic are the ones that are specialized (such as the Christmas Movies page, or the Asimov Guide). I think part of the reason the blog has never really caught on is that I cover a very wide range of topics, thus diluting the potential specialized value of any single topic.

The same can be said for online music sales. They still conform to a power law distribution, but what we're going to see is increasing sales of more diverse genres and bands. We're in the process of switching from a system in which only the top 20% are considered profitable, to one where 99% are valuable. This seems somewhat counterintuitive for a few reasons:
The first is we forget that the 20 percent rule in the entertainment industry is about hits, not sales of any sort. We're stuck in a hit-driven mindset - we think that if something isn't a hit, it won't make money and so won't return the cost of its production. We assume, in other words, that only hits deserve to exist. But Vann-Adibé, like executives at iTunes, Amazon, and Netflix, has discovered that the "misses" usually make money, too. And because there are so many more of them, that money can add up quickly to a huge new market.

With no shelf space to pay for and, in the case of purely digital services like iTunes, no manufacturing costs and hardly any distribution fees, a miss sold is just another sale, with the same margins as a hit. A hit and a miss are on equal economic footing, both just entries in a database called up on demand, both equally worthy of being carried. Suddenly, popularity no longer has a monopoly on profitability.

The second reason for the wrong answer is that the industry has a poor sense of what people want. Indeed, we have a poor sense of what we want.
The need to figure out what people want out of a diverse pool of options is where self-organizing systems come into the picture. A good example is Amazon's recommendations engine, and their ability to aggregate various customer inputs into useful correlations. Their "customers who bought this item also bought" lists (and the litany of variations on that theme), more often than not, provide a way to traverse the long tail. They encourage customer participation, allowing customers to write reviews, select lists, and so on, providing feedback loops that improve the quality of recommendations. Note that none of these features was designed to directly sell more items. The focus was on allowing an efficient system of collaborative feedback. Good recommendations are an emergent result of that system. Similar features are available in the online music services, and the Wired article notes:
For instance, the front screen of Rhapsody features Britney Spears, unsurprisingly. Next to the listings of her work is a box of "similar artists." Among them is Pink. If you click on that and are pleased with what you hear, you may do the same for Pink's similar artists, which include No Doubt. And on No Doubt's page, the list includes a few "followers" and "influencers," the last of which includes the Selecter, a 1980s ska band from Coventry, England. In three clicks, Rhapsody may have enticed a Britney Spears fan to try an album that can hardly be found in a record store.
Obviously, these systems aren't perfect. As I've mentioned before, a considerable amount of work needs to be done with respect to the aggregation and correlation aspects of these systems. Amazon and the online music services have a good start, and weblogs are trailing along behind them a bit, but the nature of self-organizing systems dictates that you don't get a perfect solution to start, but rather a steadily improving system. What's becoming clear, though, is that the little guys are (collectively speaking) just as important as the juggernauts, and that's why I'm not particularly upset that my blog won't be wildly popular anytime soon.
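As a footnote to the recommendations discussion above: at its heart, a "customers who bought this item also bought" list is just co-occurrence counting over purchase baskets. A minimal Python sketch with made-up data (Amazon's and Rhapsody's actual systems are, of course, far more sophisticated):

    from collections import Counter
    from itertools import combinations

    # Hypothetical purchase histories; each set is one customer's basket.
    baskets = [
        {"britney spears", "pink"},
        {"pink", "no doubt"},
        {"pink", "no doubt", "the selecter"},
        {"britney spears", "pink", "no doubt"},
    ]

    # Count how often each pair of artists shares a basket.
    pair_counts = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(basket), 2):
            pair_counts[(a, b)] += 1

    def also_bought(item, top=3):
        # Artists most often bought alongside `item`.
        scores = Counter()
        for (a, b), n in pair_counts.items():
            if a == item:
                scores[b] += n
            elif b == item:
                scores[a] += n
        return [name for name, _ in scores.most_common(top)]

    print(also_bought("pink"))  # ['no doubt', 'britney spears', 'the selecter']

Feedback loops like reviews and lists refine the weights, but the traversal of the long tail (Britney to Pink to No Doubt to the Selecter) falls out of correlations this simple.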
Posted by Mark on January 16, 2005 at 08:07 PM .: link :.


End of This Day's Posts

Sunday, January 02, 2005

Everyone Contributes in Some Way
Epic : A fascinating and possibly prophetic flash film of things to come in terms of information aggregation, recommendations, and filtering. It focuses on Google and Microsoft's (along with a host of others, including Blogger, Amazon, and Friendster) competing contributions to the field. It's eight minutes long, and well worth the watch. It touches on many of the concepts I've been writing about here, including self-organization and stigmergy, but in my opinion it stops just short of where such a system would go.

It's certainly interesting, but I don't think it gets it quite right (Googlezon?). Or perhaps it does, but the pessimistic ending doesn't feel right to me. Towards the end, it claims that a comprehensive social dossier would be compiled by Googlezon (note the name on the ID - Winston Smith) and that everyone would receive customized newscasts which are completely automated. Unfortunately, they foresee the majority of these customized newscasts as being rather substandard - filled with inaccuracies, narrow, shallow and sensational. To me, this sounds an awful lot like what we have now, but on a larger (and less manageable) scale. Talented editors, who can navigate, filter, and correlate Googlezon's contents, are able to produce something astounding, but the problem (as envisioned by this movie) is that far too few people have access to these editors.

But I think that misses the point. Individual editors would produce interesting results, but if the system were designed correctly, in a way that allowed everyone to be editors and a way to implement feedback loops (i.e. selection mechanisms), there's no reason a meta-editor couldn't produce something spectacular. Of course, there would need to be a period of adjustment, where the system gets lots of things wrong, but that's how selection works. In self-organizing systems, failure is important, and it ironically ensures progress. If too many people are getting bad information in 2014 (when the movie is set), all that means is that the selection process hasn't matured quite yet. I would say that things would improve considerably by 2020.

The film is quite worth a watch. I doubt this specific scenario will play out, but it's likely that something along these lines will occur. [Via the Commissar]
Posted by Mark on January 02, 2005 at 05:34 PM .: link :.


End of This Day's Posts

Sunday, December 12, 2004

Stigmergic Notes
I've been doing a lot of reading and thinking about the concepts discussed in my last post. It's a fascinating, if a little bewildering, topic. I'm not sure I have a great handle on it, but I figured I'd share a few thoughts.

There are many systems that are incredibly flexible, yet they came into existence, grew, and self-organized without any actual planning. Such systems are often referred to as Stigmergic Systems. To a certain extent, free markets have self-organized, guided by such emergent effects as Adam Smith's "invisible hand". Many organisms are able to quickly adapt to changing conditions using a technique of continuous reproduction and selection. To an extent, there are forces on the internet that are beginning to self-organize and produce useful emergent properties, blogs among them.

Such systems are difficult to observe, and it's hard to really get a grasp on what a given system is actually indicating (or what properties are emerging). This is, in part, the way such systems are supposed to work. When many people talk about blogs, they find it hard to believe that a system composed mostly of small, irregularly updated, and downright mediocre (if not worse) blogs can have truly impressive emergent properties (I tend to model the ideal output of the blogosphere as an information resource). Believe it or not, blogging wouldn't work without all the crap. There are a few reasons for this:

The System Design: The idea isn't to design a perfect system. The point is that these systems aren't planned; they're self-organizing. What we design are systems which allow this self-organization to occur. In nature, this is accomplished through constant reproduction and selection (for example, some biological systems can be represented as a function of genes. There are tens of thousands of genes, with a huge and diverse number of possible combinations. Each combination can be judged based on some criteria, such as survival and reproduction. Nature introduces random mutations so that gene combinations vary. Efficient combinations are "selected" and passed on to the next generation through reproduction, and so on).

The important thing with respect to blogs is the tools we use. To a large extent, blogging is simply an extension of many mechanisms already available on the internet, most especially the link. Other weblog-specific mechanisms like blogrolls, permalinks, comments (with links, of course), and trackbacks have added functionality to the link and made it more powerful. For a number of reasons, weblogs tend to be affected by a power-law distribution, which spontaneously produces a sort of hierarchical organization. Many believe that such a distribution is inherently unfair, as many excellent blogs don't get the attention they deserve, but while many of the larger bloggers seek to promote smaller blogs (some even providing mechanisms for promotion), I'm not sure there is any reliable way to systemically "fix" the problem without harming the system's self-organizational abilities.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.
This self-organization is one of the most important things about weblogs; any attempt to get around it will end up harming you in the long run, as the goal is to find a state in which weblogs work most efficiently. How can the weblog community be arranged to self-organize and find its best configuration? That is the real question, and answering it is what we should be trying to accomplish (emphasis mine):
...although the purpose of this example is to build an information resource, the main strategy is concerned with creating an efficient system of collaboration. The information resource emerges as an outcome if this is successful.
Failure is Important: Self-organizing systems tend to have attractors (a preferred state of the system), such that these systems will always gravitate towards certain positions (or series of positions), no matter where they start. Surprising as it may seem, self-organization only really happens when you expose a system in a steady state to an environment that can destabilize it. By disturbing a steady state, you might cause the system to take up a more efficient position.

It's tempting to dismiss weblogs as a fad because so many of them are crap. But that crap is actually necessary because it destabilizes the system. Bloggers often add their perspective to the weblog community in the hopes that this new information will change the way others think (i.e. they are hoping to induce change - this is roughly referred to as Stigmergy). That new information will often prompt other individuals to respond in one way or another (even if not directly responding). Essentially, change is introduced in the system and this can cause unpredictable and destabilizing effects. Sometimes this destabilization actually helps the system, sometimes (and probably more often than not) it doesn't. Regardless of its direct effects, the process is essential because it is helping the system become increasingly comprehensive. I touched on this in my last post (among several others), in which I claimed that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. An individual blog may fail to solve a problem, but that failure is important too when you look at the systemic level. Of course, all of this is also muddying the waters and causing the system to deteriorate to a state where it is less efficient to use. For every success story like Rathergate, there are probably 10 bizarre and absurd conspiracy theories to contend with.
This is the dilemma faced by all biological systems. The effects that cause them to become less efficient are also the effects that enable them to evolve into more efficient forms. Nature solves this problem with its evolutionary strategy of selecting for the fittest. This strategy makes sure that progress is always in a positive direction only.
So what weblogs need is a selection process that separates the good blogs from the bad. This ties in with the aforementioned power-law distribution of weblogs. Links, be they blogroll links or links to an individual post, essentially represent a sort of currency of the blogosphere and provide an essential internal feedback loop. There is a rudimentary form of this sort of thing going on, and it has proven to be very successful (as Jeremy Bowers notes, it certainly seems to do so much better than the media, whose selection process appears to be simple heuristics). However, the weblog system is still young and I think there is considerable room for improvement in its selection processes. We've only seen the tip of the iceberg here. Syndication, aggregation, and filtering need to improve considerably. Note that all of those things are systemic improvements. None of them directly act upon the weblog community or the desired informational output of the community. They are improvements to the strategy of creating an efficient system of collaboration. A better informational output emerges as an outcome if the systemic improvements are successful.
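That power-law hierarchy, incidentally, is easy to watch emerge. If each new link picks its target with probability proportional to the links the target already has (the "rich get richer" dynamic known as preferential attachment), inequality appears on its own, with no one working towards it. A toy Python sketch, not a model of any real blogroll data:

    import random

    # links[i] is blog i's link count; `pool` holds one entry per link,
    # so a uniform draw from it is a draw proportional to link counts.
    links = [1, 1]
    pool = [0, 1]

    for _ in range(10_000):
        links.append(1)                  # a new blog arrives...
        target = random.choice(pool)     # ...and links to an existing one
        links[target] += 1
        pool.extend([target, len(links) - 1])

    links.sort(reverse=True)
    top_share = sum(links[: len(links) // 10]) / sum(links)
    print(f"Top 10% of blogs hold {top_share:.0%} of all links")

The selection process doesn't need a central planner; the act of linking is the feedback loop.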

This is truly a massive subject, and I'm only beginning to understand some of the deeper concepts, so I might end up repeating myself a bit in future posts on this subject, as I delve deeper into the underlying concepts and gain a better understanding. The funny thing is that it doesn't seem like the subject itself is very well defined, so I'm sure lots will be changing in the future. Below are a few links to information that I found helpful in writing this post.
Posted by Mark on December 12, 2004 at 11:15 PM .: link :.


End of This Day's Posts

Sunday, December 05, 2004

An Epic in Parallel Form
Tyler Cowen has an interesting post on the scholarly content of blogging in which he speculates as to how blogging and academic scholarship fit together. In so doing he makes some general observations about blogging:
Blogging is a fundamentally new medium, akin to an epic in serial form, but combining the functions of editor and author. Who doesn't dream of writing an epic?

Don't focus on the single post. Rather a good blog provides you a whole vision of what a field is about, what the interesting questions are, and how you might answer them. It is also a new window onto a mind. And by packaging intellectual content with some personality, bloggers appeal to the biological instincts of blog readers. Be as intellectual as you want, you still are programmed to find people more memorable than ideas.
It's an interesting perspective. Many blogs are general in subject, but some of the ones that really stand out have some sort of narrative (for lack of a better term) that you can follow from post to post. As Cowen puts it, an "epic in serial form." The suggestion that reading a single blog many times is more rewarding than reading the best posts from many different blogs is interesting. But while a single blog may give you a broad view of what a field is about, it can also be rewarding to aggregate the specific views of a wide variety of individuals, even biased and partisan individuals. As Cowen mentions, the blogosphere as a whole is the relevant unit of analysis. Even if each individual view is unimpressive on its own, that may not be the case when taken collectively. In a sense, while each individual is writing a flawed epic in serial form, they are all contributing to an epic in parallel form.

Which brings up another interesting aspect of blogs. When the blogosphere tackles a subject, it produces a diverse set of opinions and perspectives, all published independently by a network of analysts who are all doing work in parallel. The problem here is that the decentralized nature of the blogosphere makes aggregation difficult. Determining the "answer" of a group as large and diverse as the blogosphere, based on all of the disparate information its members have produced, is incredibly difficult, especially when the majority of the data represents the opinions of various analysts. A deficiency in aggregation is part of where groupthink comes from, but some groups are able to harness their disparity into something productive. The many are smarter than the few, but only if the many are able to aggregate their data properly.

In theory, blogs represent a self-organizing system that has the potential to evolve and display emergent properties (a sort of human hive mind). In practice, it's a little more difficult to say. I think it's clear that the spontaneous appearance of collective thought, as implemented through blogs or other communication systems, is happening frequently on the internet. However, each occurrence is isolated and only represents an incremental gain in productivity. In other words, a system will sometimes self-organize in order to analyze a problem and produce an enormous amount of data which is then aggregated into a shared vision (a vision which is much more sophisticated than anything that one individual could come up with), but the structure that appears in that case will disappear as the issue dies down. The incredible increase in analytic power is not a permanent stair step, nor is it ubiquitous. Indeed, it can also be hard to recognize the signal in a great sea of noise.

Of course, such systems are constantly and spontaneously self-organizing; themselves tackling problems in parallel. Some systems will compete with others, some systems will organize around trivial issues, some systems won't be nearly as effective as others. Because of this, it might be that we don't even recognize when a system really transcends its perceived limitations. Of course, such systems are not limited to blogs. In fact they are quite common, and they appear in lots of different types of systems. Business markets are, in part, self-organizing, with emergent properties like Adam Smith's "invisible hand". Open Source software is another example of a self-organizing system.

Interestingly enough, this subject ties in nicely with a series of posts I've been working on regarding the properties of Reflexive documentaries, polarized debates, computer security, and national security. One of the general ideas discussed in those posts is that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. Ironically, in acknowledging one's own subjectivity, one becomes more objective and reliable. This applies on an individual basis, but becomes much more powerful when it is part of an emergent system of analysis as discussed above. Blogs are excellent at this sort of thing precisely because they are made up of independent parts that make no pretense at objectivity. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. The news media represents a competing system (the journalist being the media's equivalent of the blogger), one that is much more rigid and unyielding. The interplay between blogs and the media is fascinating, and you can see each medium evolving in response to the other (the degree to which this is occurring is naturally up for debate). You might even be able to make the argument that blogs are, themselves, emergent properties of the mainstream media.

Personally, I don't think I have that exact sort of narrative going here, though I do believe I've developed certain thematic consistencies in terms of the subjects I cover here. I'm certainly no expert and I don't post nearly often enough to establish the sort of narrative that Cowen is talking about, but I do think a reader would benefit from reading multiple posts. I try to make up for my low posting frequency by writing longer, more detailed posts, often referencing older posts on similar subjects. However, I get the feeling that if I were to break up my posts into smaller, more digestible pieces, the overall time it would take to read and produce the same material would be significantly longer. Of course, my content is rarely scholarly in nature, and my subject matter varies from week to week as well, but I found this interesting to think about nonetheless.

I think I tend to be more of an aggregator than anything else, which is interesting because I've never thought about what I do in those terms. It's also somewhat challenging, as one of my weaknesses is being timely with information. Plus aggregation appears to be one of the more tricky aspects of a system such as the ones discussed above, and with respect to blogs, it is something which definitely needs some work...

Update 12.13.04: I wrote some more on the subject. I also made a minor edit to this entry, moving one paragraph lower down. No content has actually changed, but the new order flows better.
Posted by Mark on December 05, 2004 at 09:23 PM .: link :.


End of This Day's Posts

Sunday, November 14, 2004

Hockey Video Games
With the NHL lockout upon us, I have been looking for some way to make up for this lack of hockey viewing. I've always been a big fan of hockey video games, so I figured that might do the trick. Over the past year, I've bought two hockey games: EA Sports NHL 2004 and ESPN NHL 2K5. I was very happy with EA's 2004 effort, but there were some annoyances and I appear to have misplaced it during the move, so I figured I'd get a 2005 game.

EA Sports is pretty much dominant when it comes to just about any sports game out there, and hockey is no exception. Ever since the halcyon days of NHL 1994 for the Genesis, EA has dominated the hockey space. So last year, in an effort to compete with EA, Sega announced that its own hockey title was going to be branded with ESPN. Not only that, but they dropped their prices to around $20 (as compared to the standard $50 that EA charges) in the hope that the low price would lure gamers away from EA. So in looking at the reviews for EA's and ESPN's 2005 efforts, it appeared that ESPN had picked up significant ground on EA. With those reviews and that price, I figured I might as well check it out, so I took a chance and went with ESPN. To be honest, I'm not impressed. Below is a comparison between ESPN's 2005 effort and EA's 2004 game.

To give you an idea where I'm coming from, my favorite mode is franchise, so a lot of my observations will be coming from that perspective. Some things that annoy me might not annoy the casual gamer who just wants to play a game with their buddies every now and again. I'm playing on a Playstation 2, and I'm a usability nerd, so stuff that wouldn't bother other people might bother me. I'd also like to mention that I am far from a hardcore gamer, so my perceptions might differ from those of others.
  • Gameplay: Playing a hockey game is fun in both games, but ESPN is the king here. EA's gameplay was one of my minor annoyances. The controls were jerky and awkward, the speed of gameplay was too slow by default (but could be sped up), and the player behavior could be extremely frustrating (especially with Off Sides turned on). ESPN, by contrast, has smooth controls and movements, a good default gameplay speed, and much better player behavior and computer AI. EA's gameplay was rife with two-line passes and offside calls, which makes for frustrating play. Another advantage for ESPN is that it offers more and better gaming modes, including a franchise mode which is deeper than its EA counterpart (more on that later) and a skills competition (which EA doesn't have). Advantage: ESPN
  • Sound: EA wins this one, hands down. Both games have decent sounds during an actual game, but where EA excels is in the maintenance screens. In all EA games, not just hockey, they have assembled a trendy group of songs from real mainstream bands, most of which seem appropriate as a soundtrack to a sports game. I don't know if EA has launched any bands into stardom, but they seem to have a knack for finding good music. ESPN totally falls flat in this respect. The only music they have that is even remotely compelling is the ESPN theme song, which is good but short; by the 10th repetition, it grates. Their other music is lame, generic instrumental rock. Normally this wouldn't be that bad, but it just pales in comparison to EA's stylish lineup. This becomes especially important in dynasty or franchise modes, as you spend a significant amount of time tweaking team settings, doing offseason stuff, etc... Both games have play-by-play announcers that get annoying after a while, but EA's is slightly better in that their comments are usually relevant to what is happening. ESPN commentators will inexplicably throw out some odd comments from time to time. Advantage: EA
  • Graphics: Both games have decent graphics engines, but I think EA has a better overall look and feel. This goes both for the menu design and the gameplay design. The menus are neat and orderly, they look great, and are easy to use (this will be covered in more detail in the usability section). ESPN's menus are all right, but nothing special. In terms of gameplay, while ESPN has a better experience, EA just looks better. Their player animations are great, and their graphics engine is simply superior. ESPN has some nice touches (it sometimes feels like you're literally watching ESPN, as all of the screen elements have the same look and feel as ESPN TV) but it doesn't quite reach EA's heights. Advantage: EA
  • Usability: This isn't something that is usually covered in video game reviews, but this is an area I think is important. Again, this is something that becomes more relevant when you get into dynasty or franchise modes, where a lot of fiddling with team settings and player manipulations is required. You need to be able to navigate through a number of menus and screens to accomplish various tasks. I think EA has the edge here. Their menus and screens look great and are easy to use. More importantly, the controls are somewhat intuitive, and there are usually enough hints at the bottom of the screen to let you know what button to press. ESPN, on the other hand, is awful at this. Sometimes their screens are poorly laid out to start with, but when you add to that the clumsy controls, it just makes things that much worse. Take, for instance, the Edit Lines screens, typically consisting of one or more lines, along with a list of players you can substitute. Neither interface is perfect, but ESPN's list of substitutions is tiny and requires a lot of scrolling just to see your options. Another good example is sorting. EA's sort is generally accomplished with the O button, while ESPN makes you use one of the least-used buttons on the PS2, the L3 button (and I needed to use ESPN's help to figure that out). ESPN is just too awkward when it comes to this sort of thing. Gameplay controls are fine for both games, but EA is much better when it comes to the maintenance menus and screens. Advantage: EA
  • Depth of Features: As already mentioned, ESPN has more and better gaming modes than EA, and even within the modes, they have a much deeper feature set. This is most notable in their franchise mode, where your control of the coaching staff, contracts (which are themselves much more detailed than their EA counterparts), young players, scouting, and drafting is very detailed, to the point of even setting up travel itineraries for your scout and exerting a large amount of control over your minor league team. Even when it comes to unlockables, ESPN has the edge. On the other hand, EA covers most of the same ground, but in a much less detailed fashion. Their simplistic approach will probably appeal to some people more than others. I have not played enough of ESPN's game to really get a feel for how this plays out over time, as one of the most enjoyable things about a franchise or dynasty mode is to watch your young players progress. EA's simplicity could make for a better overall experience, despite the lack of detail. Sometimes, less is more. One other thing to keep in mind is that ESPN's depth is partly nullified by their usability problems, sometimes making their more detailed features more confusing than anything else. If, that is, you can even find them. There are some features, such as the ability to specify line matchups for a game, which must be found by accident (as there is no way to even know such features exist, let alone how to use them). Advantage: ESPN, but it depends on what you're looking for. More depth doesn't necessarily mean more fun. EA's simplicity might be a better overall experience.
  • Injuries: One thing that really annoyed me with EA's 2004 game was the lack of information about injuries, especially when simming significant parts of the season. You'd sim 10 games, find out one of your star players was injured, but there was nowhere to look to find out how long that player would be out (if you were lucky enough to have your injury occur recently, you might find out through the news ticker at the bottom of the screen, but that goes away when you move a few games ahead). ESPN is better in that there is an actual injuries screen you can check. Unfortunately, that's where ESPN's advantages end - their auto-substitution code sucks, and it sometimes doesn't work at all. Indeed, injuries in general seem to really screw the game up. This is one of my major problems with the game. The game actually locks up for unknown reasons, and I literally cannot continue my franchise mode because one of my players got injured. I'm serious, I've tried it five or six times in the last hour, and nothing works. This is inexcusable, especially for a PS2 game (where there are no possible patches), and is reason enough to avoid ESPN's title altogether. Advantage: ESPN (technically, if it worked, ESPN would be better - the bug is more of a symptom of a larger problem that will play into the next section)
  • Franchise vs. Dynasty Modes: ESPN offers Franchise mode, while EA offers Dynasty mode. These are basically the same thing: you take the role of general manager and control a team through many years, as opposed to just one season. It allows you to build your team up with young talent and watch them grow into superstars, etc... Since I've been playing hockey video games for many years, and since these are among the first hockey games to have this mode, it is the most attractive part of both games (from my perspective, at least). I've already gone over some of the differences, most notably the difference in depth of features. EA is more simplistic and ESPN is more detailed. Unfortunately, since ESPN also has poor usability, the additional detail doesn't do it much good. Add to that the inexcusable crashing issues (ESPN seems to have a lot of problems handling its rosters, which leads to the game locking up all the time) and I think that EA wins this category. Honestly, it's difficult to tell, because I literally cannot continue playing the ESPN franchise. It freezes every time I try, no matter what settings I use. I honestly don't know how they could release this game with such a glaring bug. Advantage: EA
  • Customizability: ESPN has far more configurations than EA, and their defaults are near perfect. Even better, you aren't forced to wade through these configurations when you start a season, but you do have the ability to change them if you want. Basically, ESPN has a lot of power under the hood, but you aren't confronted with it unless you really want to look. This is one area in which ESPN really excels. Unfortunately, it's not as important as some of the other areas, and this is also sometimes hampered by poor usability. EA has some configuration too, and for the most part it's fine. Again, simplicity has its virtues, but their options are considerably fewer than ESPN's. Advantage: ESPN
  • Auto Line Changes: One thing that annoys me in both games is the auto line changes feature. It always feels like one of my lines gets the shaft. In EA, it's often the second line, which only gets around 5-10 minutes of ice time, while lines one and three get the lion's share. Line four usually gets screwed as well, but you kind of expect that. This is really baffling to me, as the second line contains, well, your second-best players. They should be out there almost as often as the first line (one would think they'd get the second most ice time). ESPN is slightly better in this regard, but the third line gets next to nothing and in some games, the fourth line doesn't play at all. I'm not sure why that is, but both games could use some work when it comes to that. Advantage: ESPN
I could probably add a lot more to this, but in general, I think EA's game is better right now (at least NHL 2004 is, I can't speak for 2005, which some believe is a step back). If ESPN can work through some of their rough spots, they could really give EA a run for their money in the future. As it stands now, ESPN's game is probably better if all you're looking for is a straight hockey game, but if you want to get into seasons or franchise modes, EA is far superior. EA doesn't have the depth, but their interface is excellent. ESPN has lots of neat features not available in EA, but their value is largely nullified by a lack of usability, not to mention the inexcusable crashes. Again, it's astounding that such bugs made it through, and I just can't get past that. If they can fix these bugs for next year, they'll be in good shape. Of course, there might not be a next year for hockey, so that might be a problem.

Before I finish, I just want to stress that I'm talking about EA NHL 2004, not 2005. I've heard that the newer edition has generated a lot of complaints, but I have not played it so I can't say. Again, I'm no expert, but I'm not very impressed with ESPN's entry into the hockey gaming space. Perhaps in a year or two, with improvements to the UI and bug fixes, that will change.
Posted by Mark on November 14, 2004 at 08:01 PM .: link :.


End of This Day's Posts

Sunday, August 15, 2004

Convenience and Piracy
There is no silver bullet that will stop media piracy, whether it be movies, music, or video games. That doesn't stop media providers from trying, though. Of course, that is reasonable and expected, as piracy can pose a significant financial threat to their business. Unfortunately, the draconian security mechanisms they employ aren't very effective, and end up alienating honest customers. I touched on this subject here a while back.

One of the first things you need to do when designing a security system is identify the attackers. Only then can you design an efficient countermeasure. So who are the pirates? Brad Wardell speculates that there are two basic groups of pirates:
Group A: The kiddies who warez everything. CD Copy protection means nothing to them. They have the game before it even hits the stores.

Group B: Potential buyers who are really more interested in convenience. The price of the game isn't as big a deal to them as convenience.
You'll never get rid of Group A, no matter what security measures you implement, but there is no reason you shouldn't be able to cut down on Group B. Unfortunately, most security systems that are implemented end up exacerbating the situation, frustrating customers and creating Group B pirates. One thing I've noticed about myself recently is that convenience is suddenly much more important to me. Spare time is at a premium for me, and thus I don't have the time or motivation to be a Group A pirate (not that I've ever been much of a pirate).

Not too long ago, I upgraded my system to Windows XP. After some time, I wanted to play some game that I had bought years ago. Naturally, all I have is the CD - not the key or the original box or anything. What to do? Suddenly, piracy becomes an option. And the next time I want to buy a game, I might think twice about going out to a store and paying top dollar to be inconvenienced by obtrusive copy-protection.

Wardell is the owner of Stardock, a company which is particularly good at not alienating customers. I have a subscription to TotalGaming.net, and am very pleased with the experience they provide. Wardell describes his philosophy for combating piracy:
That's why I think CD based copy protections are a bad idea. I think they create pirates and aren't terribly effective anyway. They're supposed to keep the honest "honest" but I propose a better way.

NOT Internet activation. Instead, game developers adopt a policy that has been very successful in the non-game software market -- after release updates.

PC games often come out buggy, get one patch, and then are largely abandoned. It's really hard to feel sympathy for game developers who treat their customers that way. Instead of doing that, release frequent updates to the game for users. For free. Have them go through a secure network so that only registered purchasing users can get the update but make it as convenient as you can.

By doing this, you create a bigger incentive to be a customer than to be a pirate. It becomes increasingly inconvenient to have the latest/greatest version of the game via the warez route than the legitimate route.
This is an interesting and apparently effective strategy (as Stardock seems to be doing well). Stardock has structured its business model so that they survive even in the face of piracy, yet don't have to resort to absurd and obtrusive security measures to combat piracy. It's a matter of policy for them, and their policy makes it more convenient to be a customer than a pirate. Of course, such a solution only really works for video games, but it is worth noting nonetheless.
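
To make the idea concrete, here's a minimal sketch of what a registered-users-only update check might look like. To be clear, this is my own toy illustration in Python, not Stardock's actual system - the endpoint, the license-key header, and the checksum step are all assumptions:

import hashlib
import urllib.request

UPDATE_URL = "https://updates.example.com/mygame/latest"  # hypothetical endpoint

def fetch_update(license_key, expected_sha256):
    # The server is assumed to validate the license key before serving
    # the patch, so only registered customers get the convenient path.
    request = urllib.request.Request(UPDATE_URL, headers={"X-License-Key": license_key})
    with urllib.request.urlopen(request) as response:
        patch = response.read()
    # Reject a corrupted or tampered download.
    if hashlib.sha256(patch).hexdigest() != expected_sha256:
        raise ValueError("checksum mismatch; refusing to install patch")
    return patch

The security here is almost beside the point; what matters is that the legitimate path is one function call, while the warez route means hunting down a fresh crack after every patch.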
Posted by Mark on August 15, 2004 at 07:54 PM .: link :.


End of This Day's Posts

Saturday, January 24, 2004

Upgradation
I will be doing some work on my beloved computer tonight and tomorrow. Mostly an OS upgrade, or several, depending on what I like. I am amazingly still running Windows 98. It has treated me well, but has become somewhat unstable over the past year, so I figured it's time to switch. I'll be starting with Windows XP, but I have a copy of Windows 2000 to fall back on if I hate XP (judging from some horror stories, that might be the case). I'll probably also take this opportunity to play around with Linux. Again. In the near future, I'll probably be getting a new hard drive and a DVD burner.

All of which is to say that if things do not go well tomorrow, I might not be able to write my regular Sunday post. Wish me luck.

Update 1.25.04: Things went well. Repartitioned the drive and started formatting it, went to the movies to kill time, and when I came back installation was waiting for me. 20 minutes later, I was good to go. Spent some time downloading and installing programs this morning, but I still got a bunch of stuff to do. So far, I like it. The many "helpful" features of XP don't seem to be bothering me much, so it looks like I might be sticking with it. Then again, little minor things can build up over time, so I guess I'll just have to wait and see.
Posted by Mark on January 24, 2004 at 08:16 PM .: link :.


End of This Day's Posts

Sunday, October 26, 2003

Pirates
All of the bickering over media piracy can be intensely frustrating because many of the issues have clear and somewhat obvious truths that are simply being ignored. For instance, it should be obvious by now that it is impossible for any media provider to completely prevent piracy of their product, especially digital piracy (a perfectly secure system is also a perfectly useless system). It should also be obvious that instituting increasingly draconian security measures only serves to exacerbate these problems, as one of the main driving forces behind file sharing is ease of use and convenience.

The music industry, led by iTunes and EMusic (certainly not perfect, but it's a start), is finally coming to recognize some of the potential inherent in digital media. Rather than fight against the flow of technology, they're beginning to embrace it, and as they further commit themselves to this path, they will begin to see success. There is, after all, a lot to like about digital distribution of content, and if a reasonable price structure is set up, you could even make it more convenient to download from an approved source than from a file-sharing service like Kazaa. Of course, the music industry still has a lot of work to do if they truly want to establish a profitable digital content business model (they need to stop prosecuting file-sharers, for example), but they're at least taking steps in the right direction.

The movie industry, on the other hand, seems content to repeat the mistakes of the music industry. With the introduction of low-cost/high-bandwidth internet connections and peer-to-peer file sharing networks, the movie industry is becoming increasingly concerned with digital piracy, which is understandable, and has responded by making (or, at least, trying to make) DVDs and other media more difficult to copy. Again, this solution does little to slow the tide of piracy, and in extreme cases it makes the experience of purchasing and using the media cumbersome and frustrating. Naturally, some degree of protection is needed, and none of the really invasive solutions have caught on (for obvious reasons), but the movie industry appears to have the same moronic policy of blaming the average consumer for piracy.

Recent research out of AT&T Labs appears to show that the movie industry should reexamine who the culprit really is.
We developed a data set of 312 popular movies and located one or more samples of 183 of these movies on file sharing networks, for a total of 285 movie samples. 77% of these samples appear to have been leaked by industry insiders. Most of our samples appeared on file sharing networks prior to their official consumer DVD release date. Indeed, of the movies that had been released on DVD as of the time of our study, only 5% first appeared after their DVD release date... [emphasis mine]
As Bruce Schneier notes:
One of the first rules of security is that you need to know who your attacker is before you consider countermeasures. In this case, the movie industry has the threat wrong. The attackers aren't DVD owners making illegal copies and putting them on file sharing networks. The attackers are industry insiders making illegal copies long before the DVD is ever on the market.
Obviously, piracy is a problem which can pose a significant financial threat to the movie industry, but it has become clear that piracy is here to stay, and that the best course of action for media industries is to restructure their business model to survive even in the face of piracy, rather than go to absurd and obtrusive lengths to prevent it. As it stands now, their close-minded policies are only exacerbating the situation, frustrating customers (and potential customers) without even adequately addressing the problem... [Thanks to ChicagoBoyz for the pointer to Bruce Schneier's excellent newsletters]
Posted by Mark on October 26, 2003 at 07:59 PM .: link :.


End of This Day's Posts

Sunday, October 19, 2003

Punk Kids Play Pong
Video games have come a long way since Pong, but Electronic Gaming Monthly wanted to see what today's kids think about classic video games. The results are uniformly funny:
Niko: Hey... Pong. My parents played this game.

Brian: It takes this whole console just to do Pong?

Kirk: What is this? [Picks up and twists the paddle controller] Am I controlling the volume?

John: I'm just going to do this [twists the paddle controller as rapidly as possible].

Tim: John, don't do that. You'll die.

Andrew: This is a lot like that game. Um, whatchamacallit... air hockey.

Sheldon: Except worse.

Andrew: Blip. Blip. Blip. Blip.

Becky: I don't even see the point of having sound on this.

Andrew: Wow. The score is tied. It's so exhilarating.

Brian: I saw a documentary on this. The game was so popular in arcades that it got jammed up with quarters.

John: In this thing? [Points to the Pong game console]

Tim: I would never pay to play something like this.

John: I'd sooner jump up and down on one foot. By the way, is this supposed to be tennis or Ping-Pong?

Becky: Ping-Pong.

Gordon: It doesn't even go over the net. It goes through it. I don't even think that thing in the middle is a net.

Tim: My line is so beating the heck out of your stupid line. Fear my pink line. You have no chance. I am the undisputed lord of virtual tennis. [Misses ball] Whoops.
Brilliant. They were a little short on Atari games though. I would've loved to have seen what they said about Pitfall or Chopper Command. And this needs to be applied to all sorts of media, not just video games. We need to strap these kids in for a viewing of Knight Rider or Airwolf and see what they think. [via arstechnica]
Posted by Mark on October 19, 2003 at 11:51 PM .: link :.


End of This Day's Posts

Sunday, June 01, 2003

Amazon's Meta-Reviews
Amazon.com and the New Democracy of Opinion by Erik Ketzan : In this article, Ketzan contends that Amazon.com book reviews "are invaluable documents in understanding what book reviews in periodicals could never show us: who is reading a book, why are they reading it, and how are they reading it."
The present study seeks to analyze the way these reader reviews function: what are their goals, who is their audience, and how do they differ from traditional book reviews?
Since a comprehensive study of all reviews available on Amazon.com would be absurd, he chooses to examine the 133 reviews available for Thomas Pynchon's novel, Gravity's Rainbow. The novel was chosen for the extremes of opinion which dominate people's reactions to it, and it thus provides a good, if somewhat unusual, subject for an analysis of the Amazon system.

Indeed, the reviews for Gravity's Rainbow are uncommonly descriptive and helpful, allowing insight into the type of person who enjoys (and doesn't enjoy) this sort of novel. Many even give advice on how the novel should be read, and what to expect. The lack of an editor allows the tone of the reviews to be somewhat informal, and thus you find it easier to relate to them than to a stuffy book reviewer for the New York Times Book Review...

Obviously, many (maybe even most) reviews at Amazon don't quite live up to the standard that Gravity's Rainbow sets. It's an extraordinary novel, and thus the resulting reviews are ripe for analysis, providing much information about the nature of the novel. One of the challenges of the novel, and a theme that runs throughout many reviews (professional and Amazon), is that it is essentially futile to review it in any conventional manner. Because of this, much of the commentary about it has to do with the peripheral experiences; people explain how they read it, how long it took them to do so, what effects it had on their lives, and what type of people will get it or not get it - none of which actually has much to do with the book itself. We are able to get an uncanny picture of who is reading Gravity's Rainbow, why they are reading it, and how they are reading it, but the book itself remains a mystery (which, basically, it is, even to someone who has read it). Other novels don't lend themselves so readily to this sort of meta-review, and thus Amazon's pages aren't quite so useful for the majority of books listed there. One has to wonder if Gravity's Rainbow actually was the best choice for this case study - sure, it provides a unique example of what Amazon reviews are capable of, but that doesn't necessarily apply to the rest of the catalog... then again, the informal tone, the passion and conviction of those who love the novel, the advice on how to read and what else to read - these are things that are generally absent from professional book reviews, so perhaps Ketzan is on to something here...
Posted by Mark on June 01, 2003 at 02:16 PM .: link :.


End of This Day's Posts

Tuesday, October 08, 2002

gods amongst mortals
Information gods is a series of articles written by Brad Wardell about those who know how to find and digest information quickly and effectively with the tools on the internet. They are "information gods", and they are much more productive than the majority of people, who are still figuring out how to open attachments on an email (if they are on the net at all). The main thrust of the articles is that "the gap between information gods and information mortals grows wider every day. The tools for gathering information gets better. The amount of data available grows. And the experience they have in finding it and using it increases." It's an interesting series, and it's funny when you see info gods clash with info mortals in a debate. Guess who generally does better?
Posted by Mark on October 08, 2002 at 08:00 PM .: link :.


End of This Day's Posts

Tuesday, October 01, 2002

#!usr/bin/legal
Law School in a Nutshell, Part 1 by James Grimmelmann : Lawyers spend years learning to read and write legalese, and James draws a striking analogy between legal writing and a programming language.
To understand why legalese is so incomprehensible, think about it as the programming language Legal. It may have been clean and simple once, but that was before it suffered from a thousand years of feature creep and cut-and-paste coding. Sure, Legal is filled with bizarre keywords, strange syntax, and hideous redundancy, but what large piece of software isn't? Underneath the layers of cruft, serious work is taking place.
For the rest of the article, James goes page by page and takes you through the intricacies and minutiae of a legal brief (for Eldred v. Ashcroft). It's only the first part, but it's informative and well written. Another interesting note, as commented at the bottom of the page:
If "$plain_text = $file_key ^ $xor_block" seems unapproachable, consider what those not trained in the language of legal citation would make of "111 F.Supp.2d 294, 326 (S.D.N.Y. 2000)." Each is meaningless to those unfamiliar with the language; but each is more precise and compact for those who do understand than would be an English narrative equivalent. -- James S. Tyre, Programmers' & Academics' Amici Brief in "MPAA v. 2600" Case
Updates: Part II and Part III
Posted by Mark on October 01, 2002 at 07:49 PM .: link :.


End of This Day's Posts

Thursday, May 09, 2002

Email Warfare
The art of office e-mail war by David Miller : Ah, the joys of corporate email politics. Email is quick, easy, and it offers the sender nearly immediate access to anyone on a corporate network. Miller goes through a variety of different strategies for manipulating e-mail, some of which are quite amusing. Personally, I haven't really been a part of the more nefarious strategies, though I often use email's obvious strategic value. We don't have BCC where I work, so that leaves out some of your average backstabbing stories. One thing I've found useful, though, is that CCing my bosses while requesting something from someone else will almost always yield faster results than if I didn't CC them. When people see the boss's name attached, they know they better get things done quickly and efficiently. This, of course, leads to my boss getting upwards of 500 emails a day, so I try to use this only when I need it... [Thankee James]
Posted by Mark on May 09, 2002 at 01:09 PM .: link :.


End of This Day's Posts

Friday, January 11, 2002

In the beginning...
In the Beginning was the Command Line by Neal Stephenson: An intelligent essay dealing with the trials and tribulations of computer Operating Systems. Of course, one of the big problems he discusses is Metaphor Shear (which is basically the point at which a metaphor fails), which is ironic because he uses quite a few metaphors himself in the essay. One of the best is when he compares the Hole Hawg (an incredibly powerful drill that will drill through just about anything, but also incredibly dangerous because it has no limitations or cheap safeguards to protect the user from themselves) to the Linux operating system. The essay is a great read, and goes into much more than just Operating Systems. Highly recommended.

If you like Stephenson's fiction, you might also want to check out The Great Simolean Caper, an interesting story set in the not too distant future. It shares some common ground with Stephenson's other work (namely, Snow Crash) and is quite an enjoyable read. It's also a bit scary, because it brings up quite a few security and privacy concerns. With the advent of digital cable and set-top boxes, companies are starting to track what you are watching on television, whether you like it or not. I've seen the data myself, and I think the advertising industry is going to go wild when these numbers start piling up (the data I saw showed enormous spikes and troughs roughly coinciding with commercials). The sneaky set-top boxes in Stephenson's Caper might seem unlikely, but we're really not too far away from that right now...
Posted by Mark on January 11, 2002 at 03:27 PM .: link :.


End of This Day's Posts

Thursday, November 15, 2001

Web advertising that doesn't suck?
pyRads is a service for purchasing, managing, and serving micro advertising on web sites. Micro advertising is different than most banners and other forms of advertising you see on the web in that: 1) It's low-cost, easy, and often highly effective for advertisers. 2) It's unobtrusive, interesting, and even useful for the audience. This is an interesting little project from Pyra (makers of Blogger) and I can see it being very, very popular. Right now, the only advertising space you can buy is on Blogger, but that is a really attractive place to advertise - plus, I'm sure ev is hard at work getting other websites in the loop... It should be interesting to see how this turns out, as this form of advertising is eminently more effective and less obtrusive than all the others. Hell, at $10.00 a pop, I'm tempted to run a "Rad," just to see how well this really works.

In other blogging news (well I guess this is kind of old, but still noteworthy), Dack is back, featuring links on "The Dumb War". I don't really like this very much, though; I still miss the old Dack.com.

"It just keeps looping, Adrian! You call this music?!" - This is the funniest thing I've read in a while. Thanks DyRE!
Posted by Mark on November 15, 2001 at 10:46 AM .: link :.


End of This Day's Posts

Wednesday, November 14, 2001

Opera 6.0 beta
Opera 6.0 for Windows Beta 1 was released yesterday. I fell in love with Opera 5.x; it became my favourite browser for a number of reasons. With Opera 6.0, I was looking forward to a host of new and exciting features. To be perfectly honest, I don't see much to get excited about. The most noticeable feature is the ability for users to choose between single or multiple document interface (SDI/MDI); this is pretty much irrelevant to existing Opera users like myself, but I suppose it could be an important step in converting users accustomed to competing browsers. The other "big" change is the completely new default user interface, which I despise (fortunately, Opera has the ability to customize the interface:) There are a bunch of other nifty enhancements (and bug fixes), but nothing approaches the big innovative leaps that Opera 5.x made. There are also a few rendering bugs that I suppose will be worked out before the official release. Still, I highly recommend you take the Opera plunge if you haven't already; download the whopping 3.2 MB installation file here.
Posted by Mark on November 14, 2001 at 11:03 AM .: link :.


End of This Day's Posts

Tuesday, June 12, 2001

More than Pong
This History of Video Games is fairly comprehensive, thoughtful and exceedingly interesting, even if you don't care too much for video games. The history even goes as far back as the late 19th century, when Nintendo started as a playing card company; then it details the evolution of several companies leading up to the current-day wars between Sega, Sony, and the upcoming Microsoft Xbox. It's funny to note the parallels with the internet's collapse (and, hopefully, rebirth). After a short period of growing pains where several video game companies crashed, the industry rebounded with fewer but healthier players (Sega, Nintendo, and later, Sony). I still miss the glory days of the Commodore 64 though; I spent countless hours playing games like Test Drive and Airborne Ranger (one of my all time favourites). [via alt text]
Posted by Mark on June 12, 2001 at 01:52 PM .: link :.


End of This Day's Posts

Friday, June 08, 2001

Disjointed, Freakish Reflections™ on Web Browsers
Mozilla 0.9.1 was released today, to much fanfare. Even the Slashdotters are praising the latest release, which marks a monumental leap forward over Mozilla 0.9. After downloading it myself and playing with it, I've been very pleased, though I still have a few small gripes (right clicking on the menus should work damnit!). Otherwise it seems like a much leaner, cleaner, faster and more stable build. Great work, Mozilla developers; I'm looking forward to a 1.0 release soon. However, with the news that Netscape is going away, I don't know if any browser will be able to put a dent in Microsoft's stranglehold, which is a shame, because Mozilla is a really great browser. Right now, I'm going to continue using Opera 5.11, because that is the best browser I've ever used - its only downside is that I can't really use it to post on Blogger or 4degreez.

Some of my previous thoughts on Browsers: Also worth noting are this article and this article by Joel Spolsky illustrating what Netscape did wrong with version 6. Mozilla has come a long way though, and I think by the time 1.0 comes out, there will be little to complain about.

Update: 4:45 p.m. ET
After using Mozilla 0.9.1 all day, I can say that while it has improved greatly over previous versions, it still has a ways to go before it can really compete with IE. I ran into a few bugs and it crashed a couple of times, so it's not quite the rock-solid browser I was looking for. It doesn't even come close to Opera, which is still my browser of choice. But then, 0.9.1 isn't a finished product, so I still think it's coming along well and that the finished product could be worth it.
Posted by Mark on June 08, 2001 at 09:27 AM .: link :.


End of This Day's Posts

Thursday, May 31, 2001

The Weakest Links
No. I would never, ever do such a thing. Trust in me, loyal patrons (all 3 of you). Rest assured, this post has nothing to do with the annoying gameshow of the same title. It has to do with links and usability. Apparently, someone thought up 23 ways to weaken Web site links, from the obvious (broken, wrong) to the subtle (miscolored, unexpected) to the unfairly accused (embedded, wrapped). It's an interesting read, though it's funny to note that weblogging, by its very nature, seems to break some of these rules. Especially those pesky memepoolers! [via webmutant]
Posted by Mark on May 31, 2001 at 12:03 AM .: link :.


End of This Day's Posts

Friday, May 04, 2001

The sky is falling
It's been falling for quite some time now, and some think it won't stop until the internet is dead. Why did it fall, and why does it continue to fall? Could it be the numerous business perversions of the English language? Perhaps dot-com communism is to blame. It's more likely, though, that this industry fallout is indicative of simple growing pains:
"What is happening now happens with every new explosion of technology. When the sky has finished falling, it will leave behind an industry with far fewer, but much healthier players. And then things will get better than they ever were."
Automobiles, television, and video games all underwent similar pains in their infancy, then grew beyond control. Soon enough, we will find that the internet is growing vigorously, even if we have to pay for some things we used to get for free... [via evhead, arts & letters]
Posted by Mark on May 04, 2001 at 02:40 PM .: link :.


End of This Day's Posts

Monday, April 30, 2001

Imitation Meme
Heromachine is another nice little avatar maker (remember that whole storTrooper craze a while back?) that is themed more towards fantasy and superheroes. Once again, it's a lot of fun and I made myself a rather bland one, but it'd be pretty easy to make a really weird one. [Thanks Drifter, via the 4degreez boards.]
Posted by Mark on April 30, 2001 at 01:43 PM .: link :.


End of This Day's Posts

Thursday, April 19, 2001

Opera 5.11
What a wonderful browser Opera 5.11 is. The mouse navigation by gesture recognition, though hardly a new thing, is well implemented and clever. There are lots of other nifty features (session storing, skins, command line switches), my personal favourite being the new web spider. Simply press Ctrl+J and you'll get a list of all the links on a given page (which can be exported to HTML). Another great feature is the much improved download manager, which allows you to resume downloads. I've always liked Opera, but I've never used it consistently... until now. For all you fellow Opera users, here's a page by one of the Opera developers that has skins, customisations and user style sheets (among other things). Thanks to grenville for posting the info on the DyREnet Message Board!
Posted by Mark on April 19, 2001 at 10:52 PM .: link :.


End of This Day's Posts

Wednesday, April 11, 2001

Why high speed access was invented
by DyRE:
It wasn't directly to give people a faster Internet connection but I think it was created because of some geek's sister. See, this sister, she had a very active social life. Whenever she was home, she got phone calls out the wazoo. She wasn't home much though, because her callers usually invited her somewhere. She was popular.


Then her geeky little brother, who was petrified of social physical interaction, started going online via a dial-up connection... all the time. Soon this girl never got any calls because the line was always busy. Her parents didn't want to pay for another phone line and she couldn't afford one herself. Luckily, her father was some sort of technician at [insert phone or cable company name here]. One day, he was brooding over his recent troubles concerning his daughter's attempt to dismantle the computer to find which part was the modem and beat his son with it. As he contemplated this situation, he inadvertently began staring at the phone line. Or the cable TV line. Whichever came first (DSL or cable modems). Suddenly, the idea hit him and he rushed off to the company offices to present this new high speed idea to his superiors. All of them having one popular child and one geeky child themselves thought it was wonderful. Thus, the phone line was free (until the girl began getting calls again) and the bandwidth was used like nobody's business... and all were happy.

The end.
I honestly wouldn't be surprised if that's how it actually happened. [originally posted at 4degreez.com]
Posted by Mark on April 11, 2001 at 12:45 PM .: link :.


End of This Day's Posts

Thursday, March 15, 2001

The Dream Machine
I recently purchased a veritable plethora of computer hardware in an attempt to build my dream machine. Ars Technica was an invaluable resource for my efforts, especially their system recommendations and how-to guides. Not to mention their weblog, which is a great source for current tech news and information. Tom's Hardware Guide also provided some in-depth wisdom and reviews. For price comparisons, I used pricewatch.com, streetprices.com, and pricecombat.com. Another good find was jcshopper, a decent store with very good prices ($57 PC133 256MB SDRAM!). Thanks also to grenville, Four Degreez, and DyRE for all their help! Soon I'll be able to break the chains of my 200MHz oppression! For those who are interested, I posted my purchases on the infamous Kaedrin Forum.
Posted by Mark on March 15, 2001 at 09:34 AM .: link :.


The Honor System Takes Hold
Amazon.com's Honor System, a way for Web sites to receive payments from readers, is slowly taking hold. In all honesty, while I see the motivation for having such a thing and am enthusiastic about using it, I don't see how that sort of system could really support a website. First, when given the choice, most people won't pay. Second, even when people do pay, they aren't likely to keep paying. That's why you see Metafilter making $600 in a day, then practically nothing for the next month. If you wish to prove me wrong, feel free to donate to the Kaedrin Honor System Page (or go here to find other options for supporting Kaedrin:)! It will be much appreciated!

5:30 PM: More thoughts - It would be great if Amazon was able to incorporate some of its other functionality into the Honor System. For instance, allow visitors to review the website, or the ability to create lists of themed websites. Amazon could potentially parlay the Honor System into becoming a major portal site (even recommending sites for you based on what sites you've rated and visited), and given Amazon's ridiculous commission system, it's in their best interest to have people donating as much money as possible! Granted, the system could be abused, but I think Amazon has a lot to gain from integrating the Honor System with reviews and recommendations. Just my 2 cents.
Posted by Mark on March 15, 2001 at 09:08 AM .: link :.


End of This Day's Posts

Tuesday, March 06, 2001

What Lies Beneath Piles of Files
Filepile.org is the latest creation of Andre; quite a good idea from a man who seems to have a lot of them... Does anyone remember the old filepile? It was a Blogger-like content management system that you could use to organize files alphabetically. It showed potential, but I don't think anyone used it for anything exciting (including myself; I believe I considered using it for the imaginary archive)

Another nifty creation I recently encountered is this. Type in a domain and you get all the <!-- comments --> present on the page. Fascinating, indeed. (try megnut; it seems she has something to say after all)
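
If that tool ever vanishes, rolling your own is only a few lines. A quick Python sketch of my own (no error handling, and a regex is a blunt instrument for HTML, but it gets the idea across):

import re
import urllib.request

def page_comments(url):
    # Fetch the raw HTML and pull out everything between <!-- and -->.
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    return re.findall(r"<!--(.*?)-->", html, re.DOTALL)

for comment in page_comments("http://www.megnut.com/"):
    print(comment.strip())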
Posted by Mark on March 06, 2001 at 01:00 PM .: link :.


End of This Day's Posts

Monday, January 15, 2001

Netscape Crashes
The Day The Browser Died, a tragic shortcoming of Netscape 4.x. CSS is a wonderful technology, in part because it fails gracefully (at least, it's supposed to) in browsers that don't support it. Except Netscape. Netscape tends to crash when you use CSS. I recently encountered this problem with these very pages. I seem to have fixed the problem (it had to do with the padding property being applied to a table cell), but that's no excuse for Netscape's failure.

I like Netscape. Really, I do. And you know what, as you can see in the follow up article at A List Apart, Netscape has been really cooperative with this bug. Netscape has been a consistent innovative force on the internet. However, their 4.x browser has become an embarrassment, and 6.0, though standards compliant and faster, isn't what it could have been (I look forward to future releases).

I apologize to anyone who still can't view this site in Netscape, and I beg of you to consider switching over to IE (or better yet Opera). That is, if you can even get to this page to read it.
Posted by Mark on January 15, 2001 at 12:56 PM .: link :.


End of This Day's Posts

Sunday, January 14, 2001

memespreading
I've been trying to take a more novel approach recently, but I find the urge to spread some quickly growing memes is overcoming my good senses. I apologize in advance if this is the millionth time you've seen these links:)

First comes a cool Avatar maker called storTrooper. It's a nifty little Java applet that lets you choose a body and clothes for a virtual representation of yourself (an avatar, if you will). I made a rather bland one (on your right), but you can make an outrageous one fairly easily. If you buy it you get lots of other clothes and styles to choose from (including the goth collection), and it would make a great supplement to a virtual community site like 4degreez, letting users goof around with their appearances...

Second is IT. What is IT? It's IT. Actually, no one knows what IT is, but IT will change the world. Some good coverage and commentary on IT can be found at Boing Boing. IT is the invention of 49-year-old scientist Dean Kamen, and IT is also code named Ginger. Naturally, everyone's intrigued, including metafilter and slashdot visitors (of course). Some think it is a revolutionary form of transportation, or perhaps an infinite energy source. Steve Jobs thinks cities will be built around IT. Can IT stay a secret for long? I don't think so. We'll know what it is soon enough; no one can keep something that is supposedly this big a secret. Until then, IT is an intriguing mystery...

I now return you to your regularly scheduled programming...
Posted by Mark on January 14, 2001 at 10:43 PM .: link :.


End of This Day's Posts

Thursday, December 14, 2000

Why Browsers haven't Standardized
Why do browser companies continue to forge blindly ahead with more and more new features when they haven't even implemented existing standards correctly? Why can't they follow the standards process? Good questions. The answer is that browsers do, in fact, follow the standards process! The problem is that browsers are encouraged to innovate, to make up new (proprietary) features and technologies. They then act as a test market for the W3C, who evaluate the new features and observe how they work in the "real" world. They then make recommendations based on their findings. But when they change their specifications, the browsers are left in a lose-lose situation. This article will give you the rest of the low down in an objective manner. It's a frustrating situation, from every angle, and this sort of complex problem has no easy answer. I hope, for everyone's sake, that the process is tightened a bit so that emerging technologies can flourish. On a side note, I wonder how much an open source browser like mozilla could contribute to the standards process without having to officially release a non-standards compliant browser...
Posted by Mark on December 14, 2000 at 04:46 PM .: link :.


End of This Day's Posts

Wednesday, December 13, 2000

Mindless Entertainment
The computer versus television: I don't watch TV anymore. The hours wasted in front of the TV screen are now wasted in front of the computer monitor. Sure, I'll throw the TV on for episodes of the Simpsons or the occasional X-Files (or possibly a Flyers game), but I'm usually doing something on the computer as well. TV just isn't a priority anymore, and I've noticed similar trends with those around me. Why is that? I think it's because of the control you have over the web (or your computer in general). You can look up whatever you want, whenever you want, and even display it how you want. TV rigidly forces you to adhere to its schedule, while the internet gives you the power. The internet also provides a creative outlet and interactivity, things TV lacks. The internet is a much more social activity than watching the tube, and the television industry needs to refocus its efforts if it's going to regain its once lofty status...
Posted by Mark on December 13, 2000 at 01:26 PM .: link :.


End of This Day's Posts

Tuesday, December 05, 2000

Doomed Processes
Someone has figured out how to use the 3d shooter Doom as a tool for system administration. Doom creates a new metaphor for process management: Each process can be a monster, and the machines can be represented by a series of rooms. Killing a process corresponds to killing a monster. How very clever. [via usr/bin/girl]
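
The mapping itself is trivial to sketch, even without the Doom engine doing the rendering. Something like this (a Unix-only toy of my own in Python; the real project actually spawns the monsters in-game):

import os
import signal
import subprocess

def list_monsters():
    # Every running process is a "monster": its PID and command name.
    ps = subprocess.run(["ps", "-eo", "pid,comm"],
                        capture_output=True, text=True, check=True)
    for line in ps.stdout.splitlines()[1:]:
        pid, name = line.split(None, 1)
        print(f"monster #{pid}: {name}")

def kill_monster(pid):
    # "Shooting" a monster is just sending its process a termination signal.
    os.kill(pid, signal.SIGTERM)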
Posted by Mark on December 05, 2000 at 12:55 PM .: link :.


End of This Day's Posts

Monday, December 04, 2000

Search Engines
This is an interesting tool that you can use to help you find keywords for your site. Type in a keyword and you can find related searches that include your term, as well as how many times that term was searched on last month. Very useful.
Posted by Mark on December 04, 2000 at 12:07 PM .: link :.


End of This Day's Posts

Tuesday, November 28, 2000

Finally
When I first found out that Napster was being sued by the 5 largest record labels, I was appalled. Not so much at their protecting their rights and sales (though that is debatable), but that they were passing up a huge business deal. Think about it: 40 million people are using a specific piece of software to trade music. Wouldn't it make more sense to charge for the right to use that software (as opposed to shutting it down)? Instead of embracing technology, the record industry was foolishly trying to put a stop to Napster. Then all the file sharing clones and alternatives showed up. Remember, Napster is only a company that wants to make money but couldn't (because of the copyright issue). Finally, someone has realized the potential. German media giant Bertelsmann (one of the aforementioned 5 largest record labels) recently announced that they would be forming a business alliance with Napster, possibly charging a monthly fee of up to $15.00. Though this probably won't stop file sharing, it will probably be very lucrative for the parties involved...
Posted by Mark on November 28, 2000 at 11:54 AM .: link :.


End of This Day's Posts

Wednesday, November 22, 2000

Job Security
Want to know how to make yourself an irreplaceable programmer? Go here and find out how to make your code unmaintainable by anyone but yourself. No wonder most software sucks.
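
In that spirit, a tiny specimen of my own devising - a hypothetical Python function using three classic tricks from the genre: a meaningless name, a magic number, and a comment that lies:

def process(d):
    # increment the counter  <-- it does no such thing
    x = d * 86400  # 86400 is seconds per day, but good luck guessing that
    l = x          # 'l' reads as '1' in many fonts; that's the point
    return l

Multiply that by a few hundred thousand lines and your job is safe forever.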
Posted by Mark on November 22, 2000 at 10:50 AM .: link :.


End of This Day's Posts

Tuesday, November 21, 2000

Famous Fonts
This site has some awesome fonts from Movies, Music, Television, etc... Oh, I'm gonna have fun with this... [from grenville via Kaedrin Forum]
Posted by Mark on November 21, 2000 at 12:55 PM .: link :.


End of This Day's Posts

Monday, November 20, 2000

Netscape Six
I don't know exactly when, but Netscape has recently released the much anticipated Netscape 6.0. I went to Netscape Download, and it said I was using IE 5.0 and that I could "Upgrade to Netscape 6" (or Netscape 4.whatever). IMHO, releasing it was a big mistake because there are a ton of bugs and usability issues. I downloaded it this morning, played with it for 10 minutes and found the following problems:
  • The complete download was approx. 24.9 MB. That is huge!
  • Right clicking in many important places does not do anything.
  • I had a ton of trouble trying to set up a proxy server (in all fairness, it was a Microsoft server and I can't get it to work on older versions of Netscape either.)
  • In fiddling with the Proxy settings, I was manually entering sites to bypass the proxy server, and every time I pressed the right arrow key to move the cursor, the radio buttons also switched around. That was very disconcerting, but you'd probably have to see it in action to see what I'm talking about...
  • I didn't seem to get any errors if I typed in an incorrect URL. It simply stayed on the same screen. That's even more annoying than the generic "404 File not found" message...
Since I couldn't get the proxy working, I couldn't really test all the new features, many of which seem really cool. I'm particularly interested in seeing how well Netscape's mail client handles AOL email addresses and IMs, but then, my 10 minute trial on the browser doesn't give me much hope. Now don't get me wrong, I was looking forward to this release, but I think they rushed to put out an incomplete product, and it is a little frustrating. There are a lot of things that are really great about Netscape 6, but I think I'm gonna wait until they stabilize it a little more and work out the bugs before I really start using it...
Posted by Mark on November 20, 2000 at 10:13 PM .: link :.


End of This Day's Posts

Wednesday, November 01, 2000

Backgrounds
Go check out some super spiffy wallpaper backgrounds at EndEffect. Link via the also spiffy memepool, my current favourite site. The Giger pics on Kaedrin's Image page also make cool backgrounds...
Posted by Mark on November 01, 2000 at 12:28 PM .: link :.


End of This Day's Posts

Monday, October 30, 2000

The Unspeakable Horrors of Flash
Usability "expert" Jacob Nielsen recently published Flash: 99% Bad, an arcticle that reminds me of Dack's Flash Is Evil article published over a year ago. Dack has also done an informal Usability Test pitting HTML vs Flash. Go and read about the unspeakable horrors of Flash. Then read Kottke's response to the Flash Usability Challenge in which he makes several good points about Flash and its good uses.

In my opinion, there are two types of sites that can work with Flash:
Personal sites - Visitors to a personal site are not as goal oriented as they normally would be (at, say, an e-tailer for example). Flash won't necessarily make a personal site better; I just think it's more acceptable on a personal page where I'm not looking to perform any specific tasks. Flash software isn't very cheap either, making it less viable for a personal site developer.
Graphic Design sites - Graphic Designers all but need Flash so that they can show... well, their designs. Flash offers good compression for the kind of graphics and animation that a Graphic Design site would entail. Again, Flash makes their site less usable, but it is acceptable since it is showcasing what they are selling (graphic design).
Posted by Mark on October 30, 2000 at 01:20 PM .: link :.


End of This Day's Posts

Wednesday, October 11, 2000

Amazon
What happened at amazon.com? It seems that they are attempting to rid themselves of excess images on their "welcome" page (and they reduced the number of nested tables as well). The page is now down to 63,972 bytes total; that's down from 97,779 bytes at mid-summer, a reduction of roughly 35%. The page is still bloated and it needs some more work, but it's a step in the right direction. I'm not sure exactly when the change actually happened.
Posted by Mark on October 11, 2000 at 04:50 PM .: link :.


End of This Day's Posts

Thursday, October 05, 2000

E-Quilled!
Tallmania hath been e-quilled by Kaedrin regular and court advisor, grenville! Go check out the E-Quill Web Toolbar (IE 5+ for PC only) and comment the hell out of any website. It's a tremendously useful tool for constructive criticism or commentary, and I'd welcome any comments on Kaedrin (or whatever you want!) I found out about E-Quill from Kottke.org and he's recently posted a bunch of his visitor's comments.
Posted by Mark on October 05, 2000 at 09:22 AM .: link :.


End of This Day's Posts

Monday, October 02, 2000

I have eaten this brain, and I want to chat about it.
This is an interesting parody of Amazon.com aimed towards Zombies who would like to choose from a wide array of brains to eat "because some brains are just naturally better, juicier, and formerly smarter than others." Some people have too much time on their hands. Now, if you'll excuse me, Oprah's brain just arrived in the mail. Mmmm, celebrity brain... ahhhgglaaaahhhggg...
Posted by Mark on October 02, 2000 at 01:37 PM .: link :.


End of This Day's Posts

Thursday, July 20, 2000

Javascript Sucks
Hmm, AltaVista seems to have taken a page out of Google's book and created Raging Search with a nice clean interface.

Check out The Web Color Visualizer; it rocks. Very useful tool, there...
Posted by Mark on July 20, 2000 at 07:22 PM .: link :.


End of This Day's Posts

Sunday, July 16, 2000

Napster Smapster!
Who's scared of losing Napster when you can use gnutella to download any file you could ever want, including mp3s, mpegs, avis, movs, and wavs?
Posted by Mark on July 16, 2000 at 02:51 PM .: link :.


End of This Day's Posts

Copyright © 1999 - 2012 by Mark Ciocco.