Sunday, September 15, 2013
The Myth of Digital Distribution
The movie lover's dream service would be one we could subscribe to for a comprehensive selection of movies to stream. The service is easy to conceive, and it's such an alluring idea that it makes people want to eschew tried-and-true distribution methods like DVD and Blu-ray. We've all heard the arguments before: physical media is dead, streaming is the future. When I made the move to Blu-ray about 6 years ago, I estimated that it would take at least 10 years for a comprehensive streaming service to become feasible. The more I see, the more I think I drastically underestimated that timeline... and I'm beginning to feel like it might never happen at all.
MGK illustrates the problem well with this example:
this is the point where someone says "but we're all going digital instead" and I get irritated by this because digital is hardly an answer. First off, renting films - and when you "buy" digital movies, that's what you're doing almost every single time - is not the same as buying them. Second, digital delivery is getting more and more sporadic as rights get more and more expensive for distributors to purchase.
Situations like this are an all too common occurrence, and not just with movies. It turns out that content owners can't be bothered with a title unless it's either new or in the public domain. This graph from a Rebecca Rosen article nicely illustrates the black hole that our extended copyright regime creates:
[The graph] reveals, shockingly, that there are substantially more new editions available of books from the 1910s than from the 2000s. Editions of books that fall under copyright are available in about the same quantities as those from the first half of the 19th century. Publishers are simply not publishing copyrighted titles unless they are very recent.
More interpretation:
This is not a gently sloping downward curve! Publishers seem unwilling to sell their books on Amazon for more than a few years after their initial publication. The data suggest that publishing business models make books disappear fairly shortly after their publication and long before they are scheduled to fall into the public domain. Copyright law then deters their reappearance as long as they are owned. On the left side of the graph before 1920, the decline presents a more gentle time-sensitive downward sloping curve.
This is absolutely absurd, though two caveats are worth noting: the data doesn't control for used books (which are generally pretty easy to find on Amazon), and since anything published today gets put on digital/streaming services, future generations theoretically won't experience the same gap we're seeing. Actually, I suspect they will still have trouble with 80s and 90s content, but stuff from 2010 on should be available on an indefinite basis.
Of course, intellectual property law being what it is, I'm sure that new proprietary formats and readers will render old digital copies obsolete, and once again, consumers will be hard pressed to see that 15 year old movie or book ported to the latest-and-greatest channel. It's a weird and ironic state of affairs when the content owners are so greedy in hoarding and protecting their works, yet so unwilling to actually, you know, profit from them.
I don't know what the solution is here. There have been some interesting ideas about having copyright expire for books that have been out of print for a certain period of time (say, 5-10 years), but that would only work now - again, future generations will theoretically have those digital versions available. They may be in a near-obsolete format, but they're available! Sensible copyright reform doesn't seem likely to pass, and while it would be nice to take a page from the open source playbook, I seriously doubt that content owners would ever be that forward-thinking.
As MGK noted, DVD ushered in an era of amazing availability, but much of that stuff has gone out of print, and we somehow appear to be regressing from that.
Wednesday, July 31, 2013
Every so often, someone posts an article like Connor Simpson's The Lost Art of the Random Find and everyone loses their shit, bemoaning the decline of big-box video, book and music stores (of course, it wasn't that long ago when similar folks were bemoaning the rise of big-box video, book and music stores for largely the same reasons, but I digress) and what that means for serendipity. This mostly leads to whining about the internet, like so:
...going to a real store and buying something because it caught your eye, not because some algorithm told you you'd like it — is slowly disappearing because of the Internet...
I've got news for you: you weren't "discovering" anything back in the day either. It probably felt like you were, but you weren't. The internet is just allowing you to easily find and connect with all your fellow travelers. Occasionally something goes viral, but so what? Yeah, sometimes it sucks when a funny joke gets overtold, but hey, that's life and it happens all the time. Simpson mentions Sharknado as if it came out of nowhere. The truth of the matter is that Sharknado is the culmination of decades of crappy cult SciFi (now SyFy) movies. Don't believe me? This was written in 2006:
Nothing makes me happier when I'm flipping through the channels on a rainy Saturday afternoon than stumbling upon whatever god-awful original home-grown suckfest-and-craptasm movie is playing on the Sci-Fi Channel. Nowhere else can you find such a clusterfuck of horrible plot contrivances and ill-conceived premises careening face-first into a brick wall of one-dimensional cardboard characters and banal, inane, poorly-delivered dialogue. While most television stations and movie production houses out there are attempting to retain some shred of dignity or at least a modicum of credibility, it's nice to know that the Sci-Fi Channel has no qualms whatsoever about brazenly showing twenty minute-long fight scenes involving computer-generated dinosaurs, dragons, insects, aliens, sea monsters and Gary Busey all shooting laser beams at each other and battling for control of a planet-destroying starship as the self-destruct mechanism slowly ticks down and the fate of a thousand parallel universes hangs in the balance. You really have to give the execs at Sci-Fi credit for basically just throwing their hands up in the air and saying, "well let's just take all this crazy shit and mash it together into one giant ridiculous mess". Nothing is off-limits for those folks; if you want to see American troops in Iraq battle a giant man-eating Chimaera, you've got it. A genetically-altered Orca Whale that eats seamen and icebergs? Check. A plane full of mutated pissed-off killer bees carrying the Hanta Virus? Check. They pull out all the stops to cater to their target audience, who are pretty much so desensitized to bad science-fiction that no plot could be too over-the-top to satiate their need for giant monsters that eat people and faster-than-light spaceships shaped like the Sphinx.
And as a long-time viewer of the SciFi/SyFy network since near its inception, I can tell you that this sort of love/hate has been going on for decades.
That the normals finally saw the light/darkness with Sharknado was inevitable. But it will be short-lived. At least, until SyFy picks up my script for Crocoroid Versus Jellyfish.
It's always difficult for me to take arguments like this seriously. Look, analog serendipity (browsing the stacks, digging through crates, blind buying records at a store, etc...) obviously has value and yes, opportunities to do so have lessened somewhat in recent years. And yeah, it sucks. I get it. But while finding stuff serendipitously on the internet is a different experience, it's certainly possible. Do these people even use the internet? Haven't they ever been on TV Tropes?
It turns out that I've written about this before, during another serendipity flareup back in 2006. In that post, I reference Steven Johnson's response, which is right on:
I find these arguments completely infuriating. Do these people actually use the web? I find vastly more weird, unplanned stuff online than I ever did browsing the stacks as a grad student. Browsing the stacks is one of the most overrated and abused examples in the canon of things-we-used-to-do-that-were-so-much-better. (I love the whole idea of pulling down a book because you like the "binding.") Thanks to the connective nature of hypertext, and the blogosphere's exploratory hunger for finding new stuff, the web is the greatest serendipity engine in the history of culture. It is far, far easier to sit down in front of your browser and stumble across something completely brilliant but surprising than it is walking through a library looking at the spines of books.
This whole thing basically amounts to a signal versus noise problem. Serendipity is basically finding signal by accident, and it happens all the damn time on the internet. Simpson comments:
...the fall of brick-and-mortar and big-box video, book and music stores has pushed most of our consumption habits to iTunes, Amazon and Netflix. Sure, that's convenient. But it also limits our curiosity.
If the internet limits your curiosity, you're doing it wrong. Though if your conception of the internet is limited to iTunes, Amazon, and Netflix, I guess I can see why you'd be a little disillusioned. Believe it or not, there is more internet out there.
As I was writing this post, I listened to a few songs on Digital Mumbles (hiatus over!) as well as Dynamite Hemorrhage. Right now, I'm listening to a song Mumbles describes as "something to fly a mech to." Do I love it? Not really! But it's a damn sight better than, oh, just about every time I blind bought a CD in my life (which, granted, wasn't that often, but still). I will tell you this: nothing I've listened to tonight would have been something I picked up in a record store, or on iTunes for that matter. Of course, I suck at music, so take this all with a grain of salt, but still.
In the end, I get the anxiety around the decline of analog serendipity. Really, I do. I've had plenty of pleasant experiences doing so, and there is something sad about how virtual the world is becoming. Indeed, one of the things I really love about obsessing over beer is aimlessly wandering the aisles and picking up beers based on superficial things like labels or fancy packaging (or playing Belgian Beer Roulette). Beer has the advantage of being purely physical, so it will always involve a meatspace transaction. Books, movies, and music are less fortunate, I suppose. But none of this means that the internet is ruining everything. It's just different. I suppose those differences will turn some people off, but stores are still around, and I doubt they'll completely disappear anytime soon.
In Neal Stephenson's The System of the World, the character Daniel Waterhouse ponders how new systems supplant older systems:
"It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently ... have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. ... And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher's Stone." (page 639)In this Slashdot interview, Stephenson applies the same "surround and encapsulate" concept to the literary world. And so perhaps the internet will surround and encapsulate, but never destroy, serendipitous analog discovery. (hat tip to the Hedonist Jive twitter feed)
Wednesday, May 29, 2013
The Irony of Copyright Protection
In Copyright Protection That Serves to Destroy, Terry Teachout lays out some of the fundamental issues surrounding the preservation of art, in particular focusing on recorded sound:
Nowadays most people understand the historical significance of recorded sound, and libraries around the world are preserving as much of it as possible. But recording technology has evolved much faster than did printing technology—so fast, in fact, that librarians can't keep up with it. It's hard enough to preserve a wax cylinder originally cut in 1900, but how do you preserve an MP3 file? Might it fade over time? And will anybody still know how to play it a quarter-century from now? If you're old enough to remember floppy disks, you'll get the point at once: A record, unlike a book, is only as durable as our ability to play it back.
Digital preservation is already a big problem for librarians, and not just because of the mammoth amounts of digital data being produced. Even from a simple technological perspective, there are many non-trivial challenges. Even if the storage media and reading mechanisms remain compatible over the next century, keeping the devices themselves usable that far into the future is no small feat. Take hard drives. A lot of film and audio (and, I suppose, books these days too) is being archived on hard drives. But you can't just stick a hard drive on a shelf somewhere and fire it up in 30 years. Nor should you keep it spinning for 30 years. It requires use, but not constant use. And even then you'll need redundancy, because hard drives fail.
Just in writing that, you can see the problem. Hard drives clearly aren't the solution. Too many modes of failure there. We need something more permanent. Which means something completely new... and thus something that will make hard drives (and our ability to read them) obsolete.
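For what it's worth, the standard defense against silent corruption is conceptually simple: store checksums alongside the archive and re-verify them periodically, restoring anything damaged from a redundant copy. A minimal sketch in Python (the manifest format and file layout here are invented for illustration):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks, so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_archive(manifest: dict[str, str], root: Path) -> list[str]:
    """Return the files whose current hash no longer matches the manifest.

    `manifest` maps a relative filename to the hash recorded when the file
    was first archived; anything missing or altered is reported as damaged.
    """
    damaged = []
    for name, expected in manifest.items():
        path = root / name
        if not path.exists() or sha256_of(path) != expected:
            damaged.append(name)
    return damaged
```

Real archival systems (and filesystems like ZFS) do this at far greater scale, but the principle is the same: hash on write, re-hash on a schedule, and repair from a second copy before both copies rot.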
And that's from a purely technological perspective. Those challenges are nontrivial, but I'm confident that technology will rise to them. However, once you start getting into the absolutely bonkers realm of intellectual property law, things get stupid really fast. While technology rises to the challenge, IP owners and lawmakers seem to be engaged in an ever-escalating race to the bottom:
In Europe, sound recordings enter the public domain 50 years after their initial release. Once that happens, anyone can reissue them, which makes it easy for Europeans to purchase classic records of the past. In America, by contrast, sound recordings are "protected" by a prohibitive snarl of federal and state legislation whose effect was summed up in a report issued in 2010 by the National Recording Preservation Board of the Library of Congress: "The effective term of copyright protection for even the oldest U.S. recordings, dating from the late 19th century, will not end until the year 2067 at the earliest.… Thus, a published U.S. sound recording created in 1890 will not enter the public domain until 177 years after its creation, constituting a term of rights protection 82 years longer than that of all other forms of audio visual works made for hire."
Sheer insanity. The Library of Congress appears to be on the right side of the issue, suggesting common-sense recommendations for copyright reform... that will almost certainly never be enacted by IP owners or lawmakers. Still, their "National Recording Preservation Plan" seems like a pretty good idea. It's a pity that almost none of its recommendations will be enacted, and while the need for copyright reform is blindingly obvious to anyone with a brain, I don't see it happening anytime soon. It's a sad state of affairs when the only victories we can celebrate in this realm are grassroots opposition to absurd laws like SOPA/PIPA/ACTA.
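The report's numbers are easy to sanity-check. Taking the 2067 floor it cites for US recordings, and the 50-year European term the article mentions, the arithmetic works out like this:

```python
def us_recording_term(year_created: int, expiry_year: int = 2067) -> int:
    """Years of protection for a pre-1972 US recording, per the 2010
    Library of Congress report: nothing enters the public domain before
    2067, regardless of how old the recording is."""
    return expiry_year - year_created

def eu_public_domain_year(year_released: int) -> int:
    """Year a recording enters the public domain under the 50-year term
    the article cites. (The EU later extended this to 70 years, which
    only reinforces the article's point.)"""
    return year_released + 50

# The report's example: an 1890 recording gets a 177-year term.
assert us_recording_term(1890) == 177
# The same recording would have been public domain in Europe in 1940.
assert eu_public_domain_year(1890) == 1940
```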
I don't know the way forward. When you look at the economics of the movie industry, as recently laid out by Steven Soderbergh in a speech that's been making the rounds of late (definitely worth a watch, if you've got a half hour), you start to see why media companies are so protective of their IP. As currently set up, your movie needs to make 120 million dollars, minimum, before you start to actually turn a profit (and that only covers the marketing costs - add in the production budget to get a better idea). That, too, is absurd. I don't envy the position of media companies, but on the other hand, their response to such problems isn't to fix them, but to stomp their feet petulantly, hold on to copyrighted works for far too long, and antagonize their best customers.
That's the irony of protecting copyright. If you protect it too much, no one actually benefits from it, not even the copyright holders...
Wednesday, May 08, 2013
I have, for the most part, been very pleased with using my Kindle Touch to read over the past couple of years. However, while it got the job done, I felt like there were a lot of missed opportunities, especially when it came to metadata and personal metrics. Well, Amazon just released a new update to their Kindle software, and mixed in with the usual (i.e. boring) updates to features I don't use (like Whispersync or Parental Controls), there was this little gem:
The Time To Read feature uses your reading speed to let you know how much time is left before you finish your chapter or before you finish your book. Your specific reading speed is stored only on your Kindle Touch; it is not stored on Amazon servers.
Hot damn, that's exactly what I was asking for! Of course, it's all locked down and you can't really see what your reading speed is (or plot it over time, or by book, etc...), but this is the single most useful update to a device like this that I think I've ever encountered. Indeed, the fact that it tells you how much time until you finish both your chapter and the entire book is extremely useful, and it addresses my initial curmudgeonly complaints about the Kindle's hatred of page numbers and love of percentage.
Will finish this book in about 4 hours!
And I love that they give a time to read for both the current chapter and the entire book. One of the frustrating things about reading an ebook is that you never really know how long it will take to finish a chapter. With a physical book, you can easily flip ahead and see where the chapter ends. Now, ebooks have that personalized time, which is perfect.
I haven't spent a lot of time with this new feature, but so far, I love it. I haven't done any formal tracking, but it seems accurate, too (if anything, I'm reading slightly faster than it says, but it's close). It even seems to recognize when you've taken a break (though I'm not exactly sure of that). Of course, I would love it if Amazon would allow us access to the actual reading speed data in some way. I can appreciate their commitment to privacy, and I don't think that needs to change either; I'd just like to be able to see some reports on my actual reading speed. Plot it over time, see how different books impact speed, and so on. Maybe I'm just a data visualization nerd, but think of the graphs! I love this update, but they're still only scratching the surface here. There's a lot more there for the taking. Let's hope we're on our way...
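Amazon hasn't published how Time to Read actually works, but the core arithmetic is presumably just your measured pace divided into whatever text remains. A rough sketch of my guess, with entirely hypothetical numbers:

```python
def reading_speed_wpm(words_read: int, minutes_elapsed: float) -> float:
    """Estimate words per minute from an observed reading session."""
    return words_read / minutes_elapsed

def minutes_to_finish(words_left: int, wpm: float) -> float:
    """Minutes remaining at the current pace."""
    return words_left / wpm

# Hypothetical session: 2,400 words read in 10 minutes -> 240 wpm.
wpm = reading_speed_wpm(2400, 10)
chapter_est = minutes_to_finish(1200, wpm)   # 5.0 minutes left in the chapter
book_est = minutes_to_finish(57600, wpm)     # 240.0 minutes (4 hours) left in the book
```

The real feature presumably smooths the estimate over many sessions and discards idle gaps, which would explain why it seems to notice when you take a break.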
Sunday, March 17, 2013
Requiem for Google Reader
This past week, Google dropped a bombshell on a certain segment of internet nerdery: they announced they were going to discontinue Google Reader. For the uninitiated, Reader was an RSS aggregator - it allowed you to subscribe to the internet, collecting all that content in one place. It was awesome, I use it every day, and Google is going to turn it off on July 1. It shouldn't have been so shocking, but it was. It shouldn't have been so disappointing, but it was. And a big part of this is on me. This post might seem whiny, and I suppose it is, but I am finding this experience interesting (in the Chinese curse sense, but still).
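For anyone who never touched RSS: a feed is just an XML file listing a site's recent posts, and an aggregator like Reader polls a bunch of them and merges the results into one reading list. Parsing one takes only a few lines of Python's standard library (the feed below is made up):

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 feed, like the one this blog publishes.
SAMPLE_FEED = """\
<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def items(feed_xml: str) -> list[tuple[str, str]]:
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]
```

An aggregator is essentially this plus a fetch loop, deduplication, and read/unread state - which is exactly the bookkeeping nobody wants to do by hand, and why losing Reader stings.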
It's hard to talk about this without seeming hysterical. This isn't the end of the world, and it's most certainly not the end of Google. All the petitions and talk of tough lessons and quickie websites (though, for serious, I love that gif) and videos... they're really just wishful thinking. It's nice to think that our Google overlords are surprised by the immediate and intense response to what probably seemed like a straightforward business decision, but I don't think they are. Outrage on the internet happens at the speed of twitter and fades even quicker. We'll find alternatives (more about this in a moment), we'll move on, and Google will too. But my view of Google has changed pretty quickly.
Of course, I'm not so naive as to think that Google really gives a crap what I think, but I used to stick up for Google. Their "Don't be Evil" motto was surprisingly effective, and it looked like they walked the walk, too. That's a rare thing, to be sure, but it also molded the perception of Google into something idealistic, something with an optimistic vision. We're drowning in information, and Google was going to help us deal with that. Their applications felt like public services. The shuttering of Reader, while ultimately not that big of a deal in isolation, rips all that artifice away from Google's image. We caught them being a business, and that just feels like a betrayal. It's completely unfair and naive, but that doesn't make it any less real. It's also selfish, but why should I care?
For the first time in years, I'm looking into alternatives. Google is forcing me to find an alternative to Reader, but if they're going to turn off something that so many people rely on so heavily, shouldn't I look for replacements to all of Google's other services? I'm surprised by how much I use Google services, and while I can't see myself replacing Gmail anytime soon, some of this other stuff might not be so necessary.
Speaking of alternatives, I've played around with a few, and the one I like the most is Feedly. It's not perfect, but then, neither was Reader. The transition was easy and seamless - I logged into Google and provided access to Feedly and boom: my entire set of feeds (and it looks like usage history too) was ported over to the new app. Once Google sunsets Reader, Feedly will transition to their backend, built specifically for this purpose. The interface may take some getting used to, but hey, keyboard shortcuts still work and it's got a much better suite of social sharing and tagging options. I'm a little annoyed by the notion that you need to install some sort of extension to your browser to get it to work, but it still seems like the best option available at the moment. Of course, nothing stops Feedly from acting like douchebags further down the road, but they're not the only alternative either. There are lots of others. Hell, even Digg (yeah, remember them?) is trying to capitalize on this whole thing.
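Part of why switching is relatively painless: Google Takeout lets you export your Reader subscriptions as an OPML file, a simple XML format that pretty much every aggregator (Feedly included) can import. Pulling the feed URLs out of one is trivial (the sample file here is invented):

```python
import xml.etree.ElementTree as ET

# An invented OPML subscription export: folders are outlines without an
# xmlUrl attribute; actual feed subscriptions carry one.
SAMPLE_OPML = """\
<?xml version="1.0"?>
<opml version="1.0">
  <body>
    <outline text="Tech">
      <outline text="Example Feed" type="rss"
               xmlUrl="http://example.com/rss" htmlUrl="http://example.com"/>
    </outline>
  </body>
</opml>"""

def feed_urls(opml_xml: str) -> list[str]:
    """Collect every subscribed feed URL, skipping folder outlines."""
    root = ET.fromstring(opml_xml)
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
```

This portability is the quiet upside of the whole mess: your subscription list was never really locked inside Reader, so no alternative can hold it hostage either.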
I still don't really understand why Reader was such anathema to Google. A lot of people have mentioned that they could see this coming for a while, and yeah, I think any user of Reader could tell that it wasn't among Google's favorite applications. It never got as many updates as, say, Maps or Gmail, and while it had some fantastic and innovative community features like sharing and commenting (stuff that you never saw much of when it came to RSS readers), Google completely neutered all that stuff in the name of pointless integration with Google+. Google did a redesign a little while back and, while I can certainly see why they did it and I value consistency, they made Reader harder to use. I mean, the point of this application is to allow you to read stuff - why are you slathering everything in grey and dedicating so much of the screen to unnecessary global navigation? Now, I wasn't a big user of the community features, and even though I wasn't a fan of the redesign, Reader was still the best option out there.
Google's stated reason for getting rid of Reader is that usage was down and they feel like they've spread themselves too thin with the number of services they support. I can sympathize with that second part, but the first part is ridiculous. The above-mentioned changes to community features and the redesign were practically tailored toward reducing usage of the application. That was their whole purpose - Google wanted their community on G+, which is fair enough, I guess, but then it seems disingenuous to turn around and close the app because usage is down. That's not really an explanation. It feels like something else is going on here and it's hard to put my finger on it...
People have speculated that the reason for the shutdown is that they couldn't find a way to monetize it, but that doesn't seem right. At the very least, there were no ads on it, and while people don't particularly enjoy ads, they'd probably like them better than not having Reader at all. I've always considered Google's strategy to be something along the lines of: increased internet usage in general means that we can serve more ads to more people. Reader certainly accomplished that goal, and it did so for a lot of people. Usage may have been down, but it was still large and drove massive amounts of traffic. Just look at the graph on this Buzzfeed article. It's not at all comprehensive and there are probably a lot of caveats, but I would bet the general thrust is correct - far more people discover content through Reader than they do on G+...
In a more general sense, this development is reopening the debate about RSS and the relevancy of things like Blogs, here in the age of Facebook and Twitter. There are valid concerns about this stuff, especially when it comes to average users of the internet. And I don't mean that as a slight on average users. I know the ins and outs of RSS because I'm a nerd and my profession requires that sort of knowledge. But who wants to sit down and figure this stuff out if you don't have to? People are busy, they have jobs, they have kids, they don't have time to futz with markup languages, and that's not a bad thing at all. Google Reader was a step in the right direction, but Google never really developed that aspect of it (which seems to have faded away) and I get the impression that they have lost faith in RSS as a way to help us all make sense of the morass of information on the internet.
This is a generous interpretation of Google's actions, but I like it better than the cynical explanations about difficulty monetizing Reader or Google's official line about usage. On the other hand, what is Google doing to help us sift through the detritus of the internets? I don't think Google+ is the solution, and Search has its own issues. That's why people like me, looking for ways to aggregate and analyze data in efficient ways, were big users of Reader in the first place. It's why we're so hurt by the decision to shut it down. It would be one thing if usage of Reader was declining because there was a better way to consume content (which, I'm sure, is debatable to some Social evangelists, but that's a topic for another post). Closing Reader now seems premature and baffling.
So Google cut me, they cut me deep. It's partly my own fault; I let my guard down. I'm confident that this malaise will pass and that I'll stop trying to find ways to spite them, but I won't see Google the same way I did before. I'm curious to see how Google moves forward. This isn't the first time they shuttered an application, but it might be the most widely-used and beloved service they've given the axe... On its face, this move seems as stupid as Netflix's Qwikster debacle. Netflix's solution was easy, they saw the error in their ways and reversed course. The response to that wasn't immediate, but Netflix is doing much better now. Google has a more difficult road ahead. Of course, this decision isn't as breathtakingly stupid as Qwikster and like I said above, everyone will probably move on in pretty short order. But Google may face an image problem. I don't think just turning Reader back on would do the trick, as the damage is already done and it wasn't really a direct consequence of the action. The damage here is more than the sum of its parts. Can Google repair that? I'm open to the possibility, but it might be a while...
Wednesday, February 27, 2013
Recent and Future Podcastery
I have a regular stable of podcasts that generally keep me happy on a weekly basis, but as much as I love all of them, I will sometimes greedily consume them all too quickly, leaving me with nothing. Plus, it's always good to look out for new and interesting stuff. Quite frankly, I've not done a particularly good job keeping up with the general podcasting scene, so here's a few things I caught up with recently (or am planning to listen to in the near future):
Sunday, February 10, 2013
Netflix's House of Cards
Last weekend, Netflix debuted their highly anticipated original series House of Cards. Based on an old BBC series, starring Kevin Spacey and directed by David Fincher, the show certainly has an impressive pedigree and has been garnering mostly positive reviews. From what I've watched so far, it doesn't quite reach the heights of my favorite television shows, but it's on the same playing field, which is pretty impressive for original content from an internet-based company that was predicated solely on repackaging and reselling existing content from other sources. It's a good show, but the most interesting things about the series are the meta-discussions surrounding the way it was produced and released.
Like the way free music streaming services are changing the narrative of that industry, I'm seeing something similar happening with Netflix... and like the music industry, I don't really know where this will end up. Netflix certainly fell on hard times a couple years ago; after a perfectly understandable price hike and the inexplicable Qwikster debacle, their stock price plummeted from 300+ to around 60. Since then, it's been more or less ping-ponging up and down in the 60-140 range, depending on various business events (earnings reports, etc...) and newly licensed content.
Recently, the stock has been rising rapidly, thanks to new content deals with the likes of Disney and Warner Bros., and now because of House of Cards. Perhaps fed up with wrangling the costs of streaming content (which keep rising at a spectacular pace and cutting into Netflix's meager profit margins), Netflix has started to make its own content. Early last year, Netflix launched Lilyhammer to middling reviews and not a lot of fanfare... I have not watched the series (and quite frankly, the previews make it look like a parody or an SNL sketch), but it perhaps represented Netflix's dry run for this recent bid for original content. A lot of the interesting things about House of Cards' release were presaged by that previous series.
Take, for instance, the decision to release the entire 13-episode first season on day one. Netflix has done a lot of research on their customers' viewing habits, observing that people will often mainline old series (or previous seasons of current series like Mad Men or Breaking Bad), watching entire seasons or even several over the course of a few days or weeks. I've wondered about this sort of thing in the past, because this is the way I prefer to consume content. I can never really get into the rhythm of "destination" television, except in very limited scenarios (the only show I watch on a weekly basis at the time it airs is Game of Thrones, because I like the show and the timeslot fits into my schedule). There are some shows that I look forward to every week, but even those usually get stored away on the DVR until I can watch several at once. So what I'm saying here is that this release of all episodes at once is right up my alley, and I'm apparently not alone.
With no physical shelf space or broadcast schedule to worry about, I suspect this would also lead to shows actually getting to finish their seasons instead of being canceled after two episodes, which could be an interesting development. On the other hand, what kinds of shows will this produce? Netflix greenlit this series based on a mountain of customer data, not just about how viewers consumed TV series, but also on their response to Kevin Spacey and David Fincher, and probably a hundred other data points.
And the series does kinda feel like it's built in a lab. Everything about the show is top notch: great actors, high production value, solid writing, all optimized for that binge-watching experience. Is that a good thing? In this case, it seems to be working well enough. But can that sort of data-driven model hold up over time? Of course, that's nothing new in the entertainment industry. Look no further than the whole vampire/zombie resurgence of the past decade or so. But I wonder if Netflix will ever do something that sets the trends, rather than chasing the data.
What does this all mean for the world of streaming? Netflix appears to have stemmed the tide of defecting subscribers, but will they gain new subscribers simply because of their original content? Will this be successful enough for other streaming players to take the same gamble? Will we have Hulu and Amazon series? Will we have to subscribe to 8 different services to keep up with this? Or will Netflix actually license out their original content to the likes of Cable or Network television? Ok, that's probably unlikely, but on the other hand, it could be a big source of revenue and a way to expand their audience.
Will Netflix be able to keep growing thanks to these original content efforts? House of Cards is just the first of several original series being released this year. Will the revived Arrested Development (season 4, coming in May) draw in new subscribers? Or the new Ricky Gervais show? Will any of this allow Netflix to expand their streaming content beyond the laughable movie selection they currently command (seriously, they have a good TV selection, but their movie selection is horrible)? Will we ever get that dream service, a single subscription that will give you access to everything you could ever want to watch? Technologically, this is all possible, but technology won't drive that, and I'm curious if such a thing will ever come to fruition (Netflix or not!). In the meantime, I'm most likely going to finish off House of Cards, which is probably a good thing for Netflix.
Sunday, January 06, 2013
What's in a Book Length?
I mentioned recently that book length is something that's been bugging me. It seems that we have a somewhat elastic relationship with length when it comes to books. The traditional indicator of book length is, of course, page number... but due to variability in font size, type, spacing, format, media, and margins, the hallowed page number may not be as concrete as we'd like. Ebooks theoretically provide an easier way to maintain a consistent measurement across different books, but it doesn't look like anyone's delivered on that promise. So how are we to know the lengths of our books? Fair warning, this post is about to get pretty darn nerdy, so read on at your own peril.
In terms of page numbers, books can vary wildly. Two books with the same number of pages might be very different in terms of actual length. Let's take two examples: Gravity's Rainbow (784 pages) and Harry Potter and the Goblet of Fire (752 pages). Looking at page number alone, you'd say that Gravity's Rainbow is only slightly longer than Goblet of Fire. With the help of the magical internets, let's take a closer look at the print inside the books (click image for a bigger version):
Ebooks present a potential solution. Because ereaders have different-sized screens and even allow the reader to choose font sizes and other display options, page numbers start to seem irrelevant. So ebook makers devised what are called reflowable documents, which adapt their presentation to the output device. For example, Amazon's Kindle uses an ebook format that is reflowable. It does not (usually) feature page numbers, instead relying on a percentage indicator and the mysterious "Location" number.
The Location number is meant to be consistent, no matter what formatting options you're using on your ereader of choice. Sounds great, right? Well, the problem is that the Location number is pretty much just as arbitrary as page numbers. It is, of course, more granular than a page number, so you can easily skip to the exact location on multiple devices, but as for what actually constitutes a single "Location Number", that is a little more tricky.
In looking around the internets, it seems there is distressingly little information about what constitutes an actual Location. According to this thread on Amazon, someone claims that: "Each location is 128 bytes of data, including formatting and metadata." This rings true to me, but unfortunately, it also means that the Location number is pretty much meaningless.
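If that forum claim were accurate, the arithmetic would be trivial. Here's a minimal Python sketch, assuming the unverified 128-bytes-per-Location figure, showing why a file's size and its Location count would track each other directly (which is exactly why formatting and metadata bloat make the number meaningless as a measure of text length):

```python
# Sketch of the claimed Kindle Location arithmetic. The 128-byte figure
# is an unverified forum claim, not anything documented by Amazon.

def estimated_locations(file_size_kb: float) -> int:
    """Estimate total Kindle Locations from the ebook's file size in KB."""
    BYTES_PER_LOCATION = 128  # the forum claim: one Location per 128 bytes
    return round(file_size_kb * 1024 / BYTES_PER_LOCATION)

# A 500 KB ebook would work out to about 4000 Locations -- whether those
# bytes are prose, formatting markup, or cover-image metadata.
print(estimated_locations(500))  # → 4000
```

Since illustrations and markup count toward those bytes just as much as prose does, two books of identical text length can land on wildly different Location totals.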
The elastic relationship we have with book length is something I've always found interesting, but what made me want to write this post was when I wanted to pick a short book to read in early December. I was trying to make my 50 book reading goal, so I wanted something short. In looking through my book queue, I saw Alfred Bester's classic SF novel The Stars My Destination. It's one of those books I consistently see at the top of best SF lists, so it's always been on my radar, and looking at Amazon, I saw that it was only 236 pages long. Score! So I bought the ebook version and fired up my Kindle only to find that in terms of locations, it's the longest book I have on my Kindle (as of right now, I have 48 books on there). This is when I started looking around at Locations and trying to figure out what they meant. As it turns out, while the Location numbers provide a consistent reference within the book, they're not at all consistent across books.
I did a quick spot check of 6 books on my Kindle, looking at total Location numbers, total page numbers (resorting to print version when not estimated by Amazon), and file size of the ebook (in KB). I also added a column for Locations per page number and Locations per KB. This is an admittedly small sample, but what I found is that there is little consistency among any of the numbers. The notion of each Location being 128 bytes of data seems useful at first, especially when you consider that the KB information is readily available, but because that includes formatting and metadata, it's essentially meaningless. And the KB number also includes any media embedded in the book (i.e. illustrations crank up the KB, which distorts any calculations you might want to do with that data).
It turns out that The Stars My Destination will probably end up being relatively short, as the page numbers would imply. There's a fair amount of formatting within the book (which, by the way, doesn't look so hot on the Kindle), and doing spot checks of how many Locations I pass when cycling to the next screen, it appears that this particular ebook is going at a rate of about 12 Locations per cycle, while my previous book was going at a rate of around 5 or 6 per cycle. In other words, while the total Locations for The Stars My Destination were nearly twice what they were for my previously read book, I'm also cycling through Locations at double the rate. Meaning that, basically, this is the same length as my previous book.
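The comparison above, in arithmetic: effective length is total Locations divided by Locations consumed per screen. A quick sanity check in Python, using hypothetical round numbers in the spirit of the figures above (the totals and per-screen rates here are mine, not exact measurements):

```python
# Effective length = total Locations / Locations per screen-cycle.
# Hypothetical figures: one book with twice the Locations of another,
# read at twice the per-screen rate -- i.e., the same effective length.

def screens_to_read(total_locations: float, per_screen: float) -> float:
    """How many screen-fuls a book takes at a given Location rate."""
    return total_locations / per_screen

book_a = screens_to_read(7000, 12)  # heavy formatting: 12 Locations/screen
book_b = screens_to_read(3500, 6)   # half the Locations, half the rate
print(round(book_a), round(book_b))  # → 583 583
```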
Various attempts have been made to convert Location numbers to page numbers, with limited success. This is due to the generally elastic nature of a page, combined with the inconsistent size of Locations. For most books, dividing the Location number by anywhere from 12 to 16 (the linked post posits dividing by 16.69, but the books I checked mostly ranged from 12-16) will get you a somewhat accurate page count that is marginally consistent with print editions. Of course, for The Stars My Destination, that won't work at all. For that book, I have to divide by 40.86 to get close to the page number.
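The conversion itself is one division; the catch is that the divisor has to be derived per book from a known print edition, which defeats the purpose. A Python sketch (the Location and page totals below are hypothetical examples, not figures from my Kindle):

```python
# Location-to-page conversion is a per-book fudge factor: usually 12-16,
# but 40.86 for The Stars My Destination. Deriving the divisor requires
# already knowing the print page count -- which is the whole problem.

def locations_to_pages(total_locations: int, divisor: float = 14.0) -> int:
    """Estimate print pages from a Kindle Location count."""
    return round(total_locations / divisor)

def divisor_for(total_locations: int, known_pages: int) -> float:
    """Work backwards: what divisor does a known print edition imply?"""
    return total_locations / known_pages

# Hypothetical: a 3,360-Location ebook whose print edition runs 240 pages
print(divisor_for(3360, 240))    # → 14.0
print(locations_to_pages(3360))  # → 240
```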
Why is this important at all? Well, there's clearly an issue with ebooks in academia, because citations are so important for that sort of work. Citing a location won't get readers of a paper anywhere close to a page number in a print edition (whereas, even using differing editions, you can usually track down the quote relatively easily if a page number is referenced). On a personal level, I enjoy reading ebooks, but one of the things I miss is the easy and instinctual notion of figuring out how long a book will take to read just by looking at it. Last year, I was shooting for reading quantity, so I wanted to tackle shorter books (this year, I'm trying not to pay attention to length as much and will be tackling a bunch of large, forbidding tomes, but that's a topic for another post)... but there really wasn't an easily accessible way to gauge the length. As we've discovered, both page numbers and Location numbers are inconsistent. In general, the larger the number, the longer the book, but as we've seen, that can be misleading in certain edge cases.
So what is the solution here? Well, we've managed to work with variable page numbers for centuries, so maybe no solution is really needed. A lot of newer ebooks even contain page numbers (despite the variation in display), so if we can find a way to make that more consistent, that might help make things a little better. But the ultimate solution would be to use something like Word Count. That's a number that might not be useful in the midst of reading a book, but if you're really looking to determine the actual length of a book, Word Count appears to be the best available measurement. It would also be easy to calculate for ebooks. Is it perfect? Probably not, but it's better than page numbers or Location numbers.
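Part of word count's appeal is how trivial it is to compute once you have the book's text. A minimal sketch in Python (a real ebook would first need its HTML/markup stripped, which this deliberately ignores):

```python
import re

def word_count(text: str) -> int:
    """Count whitespace-separated word tokens in plain text."""
    return len(re.findall(r"\S+", text))

print(word_count("Call me Ishmael. Some years ago..."))  # → 6
```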
In the end, I enjoy using my Kindle to read books, but I wish they'd get on the ball with this sort of stuff. If you're still reading this (Kudos to you) and want to read some more babbling about ebooks and where I think they should be going, check out my initial thoughts and my ideas for additional metadata and the gamification of reading. The notion of ereaders really does open up a whole new world of possibilities... it's a shame that Amazon and other ereader companies keep their platforms so locked down and uninteresting. Of course, reading is its own reward, but I really feel like there's a lot more we can be doing with our ereader software and hardware.
Sunday, December 02, 2012
Companies Don't Force You Into Piracy
But let's be honest with ourselves, that doesn't mean that all those same media companies don't suck. Let me back up a minute, as this is an old argument. Most recently, this article from The Guardian bemoans the release window system:
A couple of months ago, I purchased the first season of the TV series Homeland from the iTunes Store. I paid $32 for 12 episodes that all landed seamlessly in my iPad. I gulped them in a few days and was left in a state of withdrawal. Then, on 30 September, when season 2 started over, I would have had no alternative but to download free but illegal torrent files. Hundreds of thousands of people anxious to find out the whereabouts of the Marine turncoat pursued by the bi-polar CIA operative were in the same quandaryThis is, of course, stupid. This guy does have a pretty simple alternative: wait a few months to watch the show. It's a shitty alternative, to be sure, but that doesn't excuse piracy. As Sonny Bunch notes:
Of course you have an alternative you ninny! It's not bread for your starving family. You're not going to die if you have to wait six months to watch a TV show. You're not morally justified in your thievery.Others have also responded as such:
This argument is both ludicrous, and wrong. Ludicrous, because if piracy is actually wrong, it doesn't get less wrong simply because you can't have the product exactly when and where you want it at a price you wish to pay. You are not entitled to shoplift Birkin bags on the grounds that they are ludicrously overpriced, and you cannot say you had no alternative but to break into the local ice cream parlor at 2 am because you are really craving some Rocky Road and the insensitive bastards refused to stay open 24/7 so that you could have your favorite sweet treat whenever you want. You are not forced into piracy because you can't get a television show at the exact moment when you want to see it; you are choosing piracy.This is all well and good, and the original Guardian article has a poor premise... but that doesn't mean that the release window system isn't antiquated and doesn't suck. The original oped could easily be tweaked to omit the quasi-justification for piracy. Instead, the piracy is included and thus the article overreaches. On the flip side, the responses also tend to overstate their case, usually including something like this: "you can't have the product exactly when and where you want it at a price you wish to pay." This is true, of course, but that doesn't make it any less frustrating for consumers. And with respect to streaming, the media company stance is just as ludicrous as those defending piracy.
Here's a few examples I've run into:
I get that these are all businesses and need to make money, but I don't understand the insistence on alienating their own customers, frequently and thoroughly. I'm not turning to piracy, I'm just a frustrated customer. I've already bought a bunch of devices and services so that I can watch this stuff, and yet I'm still not able to watch even a small fraction of what I want. Frustration doesn't excuse piracy, but I don't see why I should be excusing these companies for being so annoying about when and where and how I can consume their content. It's especially frustrating because so much of this is done in the name of piracy. I suppose this post is coming off petulant and whiny on my part, but if you think I'm bad, just try listening to the MPAA or similar institution talk about piracy and the things they do to their customers to combat it. In essence, these companies hurt their best customers to spite non-customers. So I don't pirate shows or movies or books, but then, I often don't get to watch or read the ones I want to either. In a world where media companies are constantly whining about declining sales, it's a wonder that they don't actually, you know, try to sell me stuff I can watch/read. I guess they find it easier to assume I'm a thief and treat me as such.
Wednesday, August 22, 2012
Tweets of Glory
There's some great stuff on Twitter, but the tweets just keep coming, so there's a fair chance you've missed some funny stuff, even from the people you follow. Anywho, time is short tonight, so it's time for another installment of Tweets of Glory:
I have to admit, hatewatching The Newsroom has actually been pretty entertaining, but I'd much rather watch this proposed feline-themed show.
Yeah, so that one's a little out of date, but for the uninitiated, Duncan Jones is David Bowie's son.
(I love the internet)
Well, that happened. Stay tuned for some (hopefully) more fulfilling content on Sunday...
Wednesday, August 08, 2012
Web browsers I have known, 1996-2012
Jason Kottke recently recapped all of the browsers he used as his default for the past 18 years. It sounded like fun, so I'm going to shamelessly steal the idea and list out my default browsers for the past 16 years (prior to 1996, I was stuck in the dark ages of dialup AOL - but once I went away to college and discovered the joys of T1/T3 connections, my browsing career started in earnest, so that's when I'm starting this list).
Wednesday, May 02, 2012
Tweets of Glory
One of the frustrating things about Twitter is that it's impossible to find something once it's gone past a few days. I've gotten into the habit of favoriting ones I find particularly funny or that I need to come back to, which is nice, as it allows me to publish a cheap Wednesday blog entry (incidentally, sorry for the cheapness of this entry) that will hopefully still be fun for folks to read. So here are some tweets of glory:
Note: This was Stephenson's first tweet in a year and a half.
This one is obviously a variation on a million similar tweets (and, admit it, it's a thought we've all had), but it's the first one I saw (or at least favorited - I'm sure it's far from the first time someone made that observation).
Well, that happened. Stay tuned for some (hopefully) more fulfilling content on Sunday...
Posted by Mark on May 02, 2012 at 08:36 PM .: link :.
Sunday, April 15, 2012
When the whole Kickstarter thing started, I went through a number of phases. First, it's a neat idea and it leverages some of the stuff that makes the internet great. Second, as my systems analyst brain started chewing on it, I had some reservations... but that was short-lived as, third, some really interesting stuff started getting funded. Here are some of the ones I'm looking forward to:
Posted by Mark on April 15, 2012 at 08:28 PM .: link :.
Wednesday, April 11, 2012
More Disgruntled, Freakish Reflections on ebooks and Readers
While I have some pet peeves with the Kindle, I've mostly found it to be a good experience. That being said, there are some things I'd love to see in the future. These aren't really complaints, as some of this stuff isn't yet available, but there are a few opportunities afforded by the electronic nature of eBooks that would make the whole process better.
Posted by Mark on April 11, 2012 at 09:22 PM .: link :.
Wednesday, February 15, 2012
Last week, I looked at commonplace books and various implementation solutions. Ideally, I wanted something open and flexible that would also provide some degree of analysis in addition to the simple data aggregation most tools provide. I wanted something that would take into account a wide variety of sources in addition to my own writing (on this blog, for instance). Most tools provide a search capability of some kind, but I was hoping for something more advanced. Something that would make connections between data, or find similarities with something I'm currently writing.
At first glance, Zemanta seemed like a promising candidate. It's a "content suggestion engine" specifically built for blogging and it comes pre-installed on a lot of blogging software (including Movable Type). I just had to activate it, which was pretty simple. Theoretically, it continually scans a post in progress (like this one) and provides content recommendations, ranging from simple text links defining key concepts (i.e. links to Wikipedia, IMDB, Amazon, etc...), to imagery (much of which seems to be integrated with Flickr and Wikipedia), to recommended blog posts from other folks' blogs. One of the things I thought was really neat was that I could input my own blogs, which would then give me more personalized recommendations.
Unfortunately, results so far have been mixed. There are some things I really like about Zemanta, but it's pretty clearly not the solution I'm looking for. Some assorted thoughts:
I will probably continue to play with Zemanta, but I suspect it won't last much longer. It provides some value, but it's ultimately not as convenient as I'd like, and its analysis and recommendation functions don't seem especially useful.
I've also been playing around with Evernote more and more, and I feel like that could be a useful tool, despite the fact that it doesn't really offer any sort of analysis (though it does have a simple search function). There's at least one third party, though, that seems to be positioning itself as an analysis tool that will integrate with Evernote. That tool is called Topicmarks. Unfortunately, I seem to be having some issues integrating my Evernote data with that service. At this rate, I don't know that I'll find a great tool for what I want, but it's an interesting subject, and I'm guessing it will be something that will become more and more important as time goes on. We're living in the Information Age, it seems only fair that our aggregation and analysis tools get more sophisticated.
Posted by Mark on February 15, 2012 at 06:08 PM .: link :.
Wednesday, February 08, 2012
During the Enlightenment, most intellectuals kept what's called a Commonplace Book. Basically, folks like John Locke or Mark Twain would curate transcriptions of interesting quotes from their readings. It was a personalized record of interesting ideas that the author encountered. When I first heard about the concept, I immediately started thinking of how I could implement one... which is when I realized that I've actually been keeping one, more or less, for the past decade or so on this blog. It's not very organized, though, and it's something that's been banging around in my head for the better part of the last year or so.
Locke was a big fan of Commonplace Books, and he spent years developing an intricate system for indexing his books' content. It was, of course, a ridiculous and painstaking process, but it worked. Fortunately for us, this is exactly the sort of thing that computer systems excel at, right? The reason I'm writing this post is a small confluence of events that has led me to consider creating a more formal Commonplace Book. Despite my earlier musing on the subject, this blog doesn't really count. It's not really organized correctly, and I don't publish all the interesting quotes that I find. Even if I did, it's not really in a format that would do me much good. So I'd need to devise another plan.
Why do I need a plan at all? What's the benefit of a commonplace book? Well, I've been reading Steven Johnson's book Where Good Ideas Come From: The Natural History of Innovation and he mentions how he uses a computerized version of the commonplace book:
For more than a decade now, I have been curating a private digital archive of quotes that I've found intriguing, my twenty-first century version of the commonplace book. ... I keep all these quotes in a database using a program called DEVONthink, where I also store my own writing: chapters, essays, blog posts, notes. By combining my own words with passages from other sources, the collection becomes something more than just a file storage system. It becomes a digital extension of my imperfect memory, an archive of all my old ideas, and the ideas that have influenced me.This DEVONthink software certainly sounds useful. It's apparently got this fancy AI that will generate semantic connections between quotes and what you're writing. It's advanced enough that many of those connections seem to be subtle and "lyrical", finding connections you didn't know you were looking for. It sounds perfect except for the fact that it only runs on Mac OSX. Drats. It's worth keeping in mind in case I ever do make the transition from PC to Mac, but it seems like lunacy to do so just to use this application (which, for all I know, will be useless to me).
By sheer happenstance, I've also been playing around with Pinterest lately, and it occurs to me that it's a sort of commonplace book, albeit one with more of a narrow focus on images and video (and recipes?) than quotes. There are actually quite a few sites like that. I've been curating a large selection of links on Delicious for years now (1600+ links on my account). Steven Johnson himself has recently contributed to a new web startup called Findings, which is primarily concerned with book quotes. All of this seems rather limiting, and quite frankly, I don't want to be using 7 completely different tools to do the same thing, but for different types of media.
I also took a look at Tumblr again, this time evaluating it from a commonplacing perspective. There are some really nice things about the interface and the ease with which you can curate your collection of media. The problem, though, is that their archiving system is even more useless than most blog software. It's not quite the hell that is Twitter archives, but that's a pretty low bar. Also, as near as I can tell, the data is locked up on their server, which means that even if I could find some sort of indexing and analysis tool to run through my data, I won't really be able to do so (Update: apparently Tumblr does have a backup tool, but only for use with OSX. Again!? What is it with you people? This is the internet, right? How hard is it to make this stuff open?)
Evernote shows a lot of promise and probably warrants further examination. It seems to be the go-to alternative for lots of researchers and writers. It's got a nice cloud implementation with a robust desktop client and the ability to export data as I see fit. I'm not sure if its search will be as sophisticated as what I ultimately want, but it could be an interesting tool.
Ultimately, I'm not sure the tool I'm looking for exists. DEVONthink sounds pretty close, but it's hard to tell how it will work without actually using the damn thing. The ideal would be a system where you can easily maintain a whole slew of data and metadata, to the point where I could be writing something (say a blog post or a requirements document for my job) and the tool would suggest relevant quotes/posts based on what I'm writing. This would probably be difficult to accomplish in real-time, but a "Find related content" feature would still be pretty awesome. Anyone know of any alternatives?
Update: Zemanta! I completely forgot about this. It comes installed by default with my blogging software, but I had turned it off a while ago because it took forever to load and was never really that useful. It's basically a content recommendation engine, pulling content from lots of internet sources (notably Wikipedia, Amazon, Flickr and IMDB). It's also grown considerably in the time since I'd last used it, and it now features a truckload of customization options, including the ability to separate general content recommendations from your own, personally curated sources. So far, I've only connected my two blogs to the software, but it would be interesting if I could integrate Zemanta with Evernote, Delicious, etc... I have no idea how great the recommendations will be (or how far back it will look on my blogs), but this could be exactly what I was looking for. Even if integration with other services isn't working, I could probably create myself another blog just for quotes, and then use that blog with Zemanta. I'll have to play around with this some more, but I'm intrigued by the possibilities.
Posted by Mark on February 08, 2012 at 05:31 PM .: link :.
Wednesday, January 18, 2012
I was going to write the annual arbitrary movie awards tonight, but since the web has apparently gone on strike, I figured I'd spend a little time talking about that instead. Many sites, including the likes of Wikipedia and Reddit, have instituted a complete blackout as part of a protest against two ill-conceived pieces of censorship legislation currently being considered by the U.S. Congress (these laws are called the Stop Online Piracy Act and Protect Intellectual Property Act, henceforth to be referred to as SOPA and PIPA). I can't even begin to pretend that blacking out my humble little site would accomplish anything, but since a lot of my personal and professional livelihood depends on the internet, I suppose I can't ignore this either.
For the uninitiated, if the bills known as SOPA and PIPA become law, many websites could be taken offline involuntarily, without warning, and without due process of law, based on little more than an alleged copyright owner's unproven and uncontested allegations of infringement1. The reason Wikipedia is blacked out today is that they depend solely on user-contributed content, which means they would be a ripe target for overzealous copyright holders. Sites like Google haven't blacked themselves out, but have staged a bit of a protest as well, because under the provisions of the bill, even just linking to a site that infringes upon copyright is grounds for action (and thus search engines have a vested interest in defeating these bills). You could argue that these bills are well intentioned, and from what I can tell, their original purpose seemed to be more about foreign websites and DNS, but the road to hell is paved with good intentions and as written, these bills are completely absurd.
Lots of other sites have been registering their feelings on the matter. ArsTechnica has been posting up a storm. Shamus has a good post on the subject which is followed by a lively comment thread. But I think Aziz hits the nail on the head:
Looks like the DNS provisions in SOPA are getting pulled, and the House is delaying action on the bill until February, so it’s gratifying to see that the activism had an effect. However, that activism would have been put to better use to educate people about why DRM is harmful, why piracy should be fought not with law but with smarter pro-consumer marketing by content owners (lowered prices, more options for digital distribution, removal of DRM, fair use, and ubiquitous time-shifting). Look at the ridiculous limitations on Hulu Plus - even if you’re a paid subscriber, some shows won’t air episodes until the week after, old episodes are not always available, some episodes can only be watched on the computer and are restricted from mobile devices. These are utterly arbitrary limitations on watching content that just drive people into the pirates’ arms.I may disagree with some of the other things in Aziz's post, but the above paragraph is important, and for some reason, people aren't talking about this aspect of the story. Sure, some folks are disputing the numbers, but few are pointing out the things that IP owners could be doing instead of legislation. For my money, the most important thing that IP owners have forgotten is convenience. Aziz points out Hulu, which is one of the worst services I've ever seen in terms of being convenient or even just intuitive to customers. I understand that piracy is frustrating for content owners and artists, but this is not the way to fight piracy. It might be disheartening to acknowledge that piracy will always exist, but it probably will, so we're going to have to figure out a way to deal with it. The one thing we've seen work is convenience. Despite the fact that iTunes had DRM, it was loose enough and convenient enough that it became a massive success (it now doesn't have DRM, which is even better). 
People want to spend money on this stuff, but more often than not, content owners are making it harder on the paying customer than on the pirate. SOPA/PIPA is just the latest example of this sort of thing.
I've already written about my thoughts on Intellectual Property, Copyright and DRM, so I encourage you to check that out. And if you're so inclined, you can find out what senators and representatives are supporting these bills, and throw them out in November (or in a few years, if need be). I also try to support companies or individuals that put out DRM-free content (for example, Louis CK's latest concert video has been made available, DRM free, and has apparently been a success).
Intellectual Property and Copyright is a big subject, and I have to be honest in that I don't have all the answers. But the way it works right now just doesn't seem right. A copyrighted work released just before I was born (i.e. Star Wars) probably won't enter the public domain until after I'm dead (I'm generally an optimistic guy, so I won't complain if I do make it to 2072, but still). Both protection and expiration are important parts of the way copyright works in the U.S. It's a balancing act, to be sure, but I think the pendulum has swung too far in one direction. Maybe it's time we swing it back. Now if you'll excuse me, I'm going to participate in a different kind of blackout to protest SOPA.
1 - Thanks to James for the concise description. There are lots of much longer and better-sourced descriptions of the shortcomings of this bill and the issues surrounding it, so I won't belabor the point here.
Posted by Mark on January 18, 2012 at 06:20 PM .: link :.
Sunday, July 24, 2011
Streaming and Netflix's Woes
A few years ago, when I was still contemplating the purchase of a Blu-Ray player (which ended up being the PS3), there was a lot of huffing-and-puffing about how Blu-Ray would never catch on, physical media was dead, and that streaming was the future. My thoughts on that at the time were that streaming is indeed the future, but that it would take at least 10 years before it actually happened in an ideal form. The more I see, the more I'm convinced that I actually underestimated the time it would take to get a genuinely great streaming service running.
One of the leading examples of a streaming service is Netflix's Watch Instantly service. As a long-time Netflix member, I can say that it is indeed awesome, especially now that I can easily stream it to my television. However, there is one major flaw to their streaming service: the selection. Now, they have somewhere on the order of 20,000-30,000 titles available, which is certainly a huge selection... but it's about 1/5th of what they have available on physical media. For some folks, I'm sure that's enough, but for movie nerds like myself, I'm going to want to keep the physical option on my plan...
The reason Netflix's selection is limited is the same reason I don't think we'll see an ideal streaming service anytime soon. The problems are not technological. It all comes down to intellectual property. Studios and distributors own the rights, and they often don't want to allow streaming, especially for new releases. Indeed, several studios won't even allow Netflix to rent physical media for the first month of release. In order for a streaming service to actually supplant physical media, it will have to feature a comprehensive selection. Netflix does have a vested interest in making that happen (the infrastructure needed for physical media rentals via mail is massive and costly, while streaming is, at least, more streamlined from a logistical point of view), but I don't see this happening anytime soon.
Netflix has recently encountered some issues along these lines, and as a result, they've changed their pricing structure. It used to be that you could buy a plan that would allow you to rent 1, 2, 3, or 4 DVDs or BDs at a time. If you belonged to one of those plans, you also got free, unlimited streaming. Within the past year or so, they added another option for folks who only wanted streaming. And just a few weeks ago, they made streaming an altogether separate service. Instead of buying the physical media plan of your choice and getting streaming "for free", you now also need to pay for streaming. I believe their most popular plan used to be 1 disc with unlimited streaming, which was $9.99. This plan is now $16.98.
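Using the plan prices cited above, the size of the hike is easy to quantify. A quick sketch, using only the figures from this post:

```python
# Figures from the post: the old 1-disc-plus-streaming plan vs. the new
# combined cost once streaming is billed as a separate service.
old_plan = 9.99    # 1 disc at a time, unlimited streaming included
new_total = 16.98  # 1-disc plan plus streaming, billed separately

increase = new_total - old_plan
pct = increase / old_plan * 100
print(f"${increase:.2f} more per month, a {pct:.0f}% increase")
```

That's roughly a 70% price increase for the same service, which goes a long way toward explaining the reaction described below.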
As you might expect, this has resulted in a massive online shitstorm of infantile rage and fury. Their blog post announcing the change currently has 12,000+ comments from indignant users. There are even more comments on their Facebook page (somewhere on the order of 80,000 comments there), and of course, other social media sites like Twitter were filled with indignant posts on the subject.
So why did Netflix risk the ire of their customers? They've even acknowledged that they were expecting some outrage at the change. My guess is that the bill's about to come due, and Netflix didn't really have a choice in the matter.
Indeed, a few weeks ago, Netflix had to temporarily stop streaming all of its Sony movies (which are distributed through Starz). It turns out that there's a contractual limit on the number of subscribers that Sony will allow, so now Netflix needs to renegotiate with Sony/Starz. The current cost to license Sony/Starz content for streaming is around $30 million annually. Details aren't really public (and it's probably not finalized yet), but it's estimated that the new contract will cost Netflix somewhere on the order of $200-$350 million a year. And that's just Sony/Starz. I imagine other studios will now be chomping at the bit. And of course, all these studios will continually up their rates as Netflix tries to expand their streaming selection.
So I think that all of the invective being thrown Netflix's way is mostly unwarranted (or, rather, misplaced). All that rage should really be directed at the studios who are trying to squeeze every penny out of their IP. At least Netflix seems to be doing business in an honest and open way here, and yet everyone's bitching about it. Other companies would do something sneaky. For instance, movie theaters (which also get a raw deal from studios) seem to be raising ticket prices by a quarter every few months. Any given increase is met with a bit of a meh, but consolidated over the past few years, ticket prices have risen considerably.
Ultimately, it's quite possible that Netflix will take a big hit on this in the next few years. Internet nerd-rage notwithstanding, I'm doubting that their customer base will drop, but if their cost of doing business goes up the way it seems, I can see their profits dropping considerably. But if that happens, it won't be Netflix that we should blame, it will be the studios... I don't want to completely demonize the studios here - they do create and own the content, and are entitled to be compensated for that. However, I don't think anyone believes they're being fair about this. They've been trying to slow Netflix down for years, after all. Quite frankly, Netflix has been much more customer friendly than the studios.
Posted by Mark on July 24, 2011 at 06:33 PM .: link :.
Sunday, May 22, 2011
About two years ago (has it really been that long!?), I wrote a post about Interrupts and Context Switching. As long and ponderous as that post was, it was actually meant to be part of a larger series of posts. This post is meant to be the continuation of that original post and hopefully, I'll be able to get through the rest of the series in relatively short order (instead of dithering for another couple years). While I'm busy providing context, I should also note that this series was also planned for my internal work blog, but in the spirit of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Obviously, some of the specifics of my workplace have been removed from what follows, but it should still contain enough general value to be worthwhile.
In the previous post, I wrote about how computers and humans process information and in particular, how they handle switching between multiple different tasks. It turns out that computers are much better at switching tasks than humans are (for reasons belabored in that post). When humans want to do something that requires a lot of concentration and attention, such as computer programming or complex writing, they tend to work best when they have large amounts of uninterrupted time and can work in an environment that is quiet and free of distractions. Unfortunately, such environments can be difficult to find. As such, I thought it might be worth examining the source of most interruptions and distractions: communication.
Of course, this is a massive subject that can't even be summarized in something as trivial as a blog post (even one as long and bloviated as this one is turning out to be). That being said, it's worth examining in more detail because most interruptions we face are either directly or indirectly attributable to communication. In short, communication forces us to do context switching, which, as we've already established, is bad for getting things done.
Let's say that you're working on something large and complex. You've managed to get started and have reached a mental state that psychologists refer to as flow (also colloquially known as being "in the zone"). Flow is basically a condition of deep concentration and immersion. When you're in this state, you feel energized and often don't even recognize the passage of time. Seemingly difficult tasks no longer feel like they require much effort and the work just kinda... flows. Then someone stops by your desk to ask you an unrelated question. As a nice person and an accommodating coworker, you stop what you're doing, listen to the question and hopefully provide a helpful answer. This isn't necessarily a bad thing (we all enjoy helping other people out from time to time) but it also represents a series of context switches that would most likely break you out of your flow.
Not all work requires you to reach a state of flow in order to be productive, but for anyone involved in complex tasks like engineering, computer programming, design, or in-depth writing, flow is a necessity. Unfortunately, flow is somewhat fragile. It doesn't happen instantaneously; it requires a transition period where you refamiliarize yourself with the task at hand and the myriad issues and variables you need to consider. When your colleague departs and you can turn your attention back to your work, you'll need to spend some time getting your brain back up to speed.
In isolation, the kind of interruption described above might still be alright every now and again, but imagine if the above scenario happened a couple dozen times in a day. If you're supposed to be working on something complicated, such a series of distractions would be disastrous. Unfortunately, I work for a 24/7 retail company and the nature of our business sometimes requires frequent interruptions, and thus there are times when I am in a near constant state of context switching. None of this is to say I'm not part of the problem. I am certainly guilty of interrupting others, sometimes frequently, when I need some urgent information. This makes working on particularly complicated problems extremely difficult.
In the above example, there are only two people involved: you and the person asking you a question. However, in most workplace environments, that situation indirectly impacts the people around you as well. If they're immersed in their work, an unrelated conversation two cubes down may still break them out of their flow and slow their progress. This isn't nearly as bad as some workplaces that have a public address system - basically a way to interrupt hundreds or even thousands of people in order to reach one person - but it does still represent a challenge.
Now, the really insidious part about all this is that communication is really a good thing, a necessary thing. In a large scale organization, no one person can know everything, so communication is unavoidable. Meetings and phone calls can be indispensable sources of information and enablers of collaboration. The trick is to do this sort of thing in a way that interrupts as few people as possible. In some cases, this will be impossible. For example, urgency often forces disruptive communication (because you cannot afford to wait for an answer, you will need to be more intrusive). In other cases, there are ways to minimize the impact of frequent communication.
One way to minimize communication is to have frequently requested information documented in a common repository, so that if someone has a question, they can find it there instead of interrupting you (and potentially those around you). Naturally, this isn't quite as effective as we'd like, mostly because documenting information is a difficult and time consuming task in itself and one that often gets left out due to busy schedules and tight timelines. It turns out that documentation is hard! A while ago, Shamus wrote a terrific rant about technical documentation:
The stereotype is that technical people are bad at writing documentation. Technical people are supposedly inept at organizing information, bad at translating technical concepts into plain English, and useless at intuiting what the audience needs to know. There is a reason for this stereotype. It’s completely true.I don't think it's quite as bad as Shamus points out, mostly because I think that most people suffer from the same issues as technical people. Technology tends to be complex and difficult to explain in the first place, so it's just more obvious there. Technology is also incredibly useful because it abstracts many difficult tasks, often through the use of metaphors. But when a user experiences the inevitable metaphor shear, they have to confront how the system really works, not the easy abstraction they've been using. This descent into technical details will almost always be a painful one, no matter how well documented something is, which is part of why documentation gets short shrift. I think the fact that there actually is documentation is usually a rather good sign. Then again, lots of things aren't documented at all.
There are numerous challenges for a documentation system. It takes resources, time, and motivation to write. It can become stale and inaccurate (sometimes this can happen very quickly) and thus it requires a good amount of maintenance (this can involve numerous other topics, such as version histories, automated alert systems, etc...). It has to be stored somewhere, and thus people have to know where and how to find it. And finally, the system for building, storing, maintaining, and using documentation has to be easy to learn and easy to use. This sounds all well and good, but in practice, it's a nonesuch beast. I don't want to get too carried away talking about documentation, so I'll leave it at that (if you're still interested, that nonesuch beast article is quite good). Ultimately, documentation is a good thing, but it's obviously not the only way to minimize communication strain.
I've previously mentioned that computer programming is one of those tasks that require a lot of concentration. As such, most programmers abhor interruptions. Interestingly, communication technology has been becoming more and more reliant on software. As such, it should be no surprise that a lot of new tools for communication are asynchronous, meaning that the exchange of information happens at each participant's own convenience. Email, for example, is asynchronous. You send an email to me. I choose when I want to review my messages and I also choose when I want to respond. Theoretically, email does not interrupt me (unless I use automated alerts for new email, such as the default Outlook behavior) and thus I can continue to work, uninterrupted.
The aforementioned documentation system is also a form of asynchronous communication and indeed, most of the internet itself could be considered a form of documentation. Even the communication tools used on the web are mostly asynchronous. Twitter, Facebook, YouTube, Flickr, blogs, message boards/forums, RSS and aggregators are all reliant on asynchronous communication. Mobile phones are obviously very popular, but I bet that SMS texting (which is asynchronous) is used just as much as voice, if not more so (at least, for younger people). The only major communication tools invented in the past few decades that aren't asynchronous are instant messaging and chat clients. And even those systems are often used in a more asynchronous way than traditional speech or conversation. (I suppose web conferencing is a relatively new communication tool, though it's really just an extension of conference calls.)
The benefit of asynchronous communication is, of course, that it doesn't (or at least it shouldn't) represent an interruption. If you're immersed in a particular task, you don't have to stop what you're doing to respond to an incoming communication request. You can deal with it at your own convenience. Furthermore, such correspondence (even in a supposedly short-lived medium like email) is usually stored for later reference. Such records are certainly valuable resources. Unfortunately, asynchronous communication has its own set of difficulties as well.
Miscommunication is certainly a danger in any case, but it seems more prominent in the world of asynchronous communication. Since there is no easy back-and-forth in such a method, there is no room for clarification and one is often left only with their own interpretation. Miscommunication is doubly challenging because it creates an ongoing problem. What could have been a single conversation has now ballooned into several asynchronous touch-points and even the potential for wasted work.
One of my favorite quotations is from Anne Morrow Lindbergh:
To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!It's difficult to beat the endless nuance of face-to-face communication, and for some discussions, nothing else will do. But as Lindbergh notes, communication is, in itself, a difficult proposition. Difficult, but necessary. About the best we can do is to attempt to minimize the misunderstanding.
I suppose one way to mitigate the possibility of miscommunication is to formalize the language in which the discussion is happening. This is easier said than done, as our friends in the legal department would no doubt say. Take a close look at a formal legal contract and you can clearly see the flaws in formal language. They are ostensibly written in English, but they require a lot of effort to compose or to read. Even then, opportunities for miscommunication or loopholes exist. Such a process makes sense when dealing with two separate organizations that each have their own agenda. But for internal collaboration purposes, such a formalization of communication would be disastrous.
You could consider computer languages a form of formal communication, but for most practical purposes, this would also fall short of a meaningful method of communication. At least, with other humans. The point of a computer language is to convert human thought into computational instructions that can be carried out in an almost mechanical fashion. While such a language is indeed very formal, it is also tedious, unintuitive, and difficult to compose and read. Our brains just don't work like that. Not to mention the fact that most of the communication efforts I'm talking about are the precursors to the writing of a computer program!
Despite all of this, a light formalization can be helpful and the fact that teams are required to produce important documentation practically requires a compromise between informal and formal methods of communication. In requirements specifications, for instance, I have found it quite beneficial to formally define various systems, acronyms, and other jargon that is referenced later in the document. This allows for a certain consistency within the document itself, and it also helps establish guidelines surrounding meaningful dialogue outside of the document. Of course, it wouldn't quite be up to legal standards and it would certainly lack the rigid syntax of computer languages, but it can still be helpful.
I am not an expert in linguistics, but it seems to me that spoken language is much richer and more complex than written language. Spoken language features numerous intricacies and tonal subtleties such as inflections and pauses. Indeed, spoken language often contains its own set of grammatical patterns which can be different than written language. Furthermore, face-to-face communication also consists of body language and other signs that can influence the meaning of what is said depending on the context in which it is spoken. This sort of nuance just isn't possible in written form.
This actually illustrates a wider problem. Again, I'm no linguist and haven't spent a ton of time examining the origins of language, but it seems to me that language emerged as a more immediate form of communication than what we use it for today. In other words, language was meant to be ephemeral, but with the advent of written language and improved technological means for recording communication (which are, historically, relatively recent developments), we're treating it differently. What was meant to be short-lived and transitory is now enduring and long-lived. As a result, we get things like the ever changing concept of political-correctness. Or, more relevant to this discussion, we get the aforementioned compromise between formal and informal language.
Another drawback to asynchronous communication is the propensity for over-communication. The CC field in an email can be a dangerous thing. It's very easy to broadcast your work out to many people, but the more this happens, the more difficult it becomes to keep track of all the incoming stimuli. Also, the language used in such a communication may be optimized for one type of reader, while the audience may be more general. This applies to other asynchronous methods as well. Documentation in a wiki is infamously difficult to categorize and find later. When you have an army of volunteers (as Wikipedia does), it's not as large a problem. But most organizations don't have such luxuries. Indeed, we're usually lucky if something is documented at all, let alone well organized and optimized.
The obvious question, which I've skipped over for most of this post (and, for that matter, the previous post), is: why communicate in the first place? If there are so many difficulties that arise out of communication, why not minimize such frivolities so that we can get something done?
Indeed, many of the greatest works in history were created by one mind. Sometimes, two. If I were to ask you to name the greatest inventor of all time, what would you say? Leonardo da Vinci or perhaps Thomas Edison. Both had workshops consisting of many helping hands, but their greatest ideas and conceptual integrity came from one man. Great works of literature? Shakespeare is the clear choice. Music? Bach, Mozart, Beethoven. Painting? da Vinci (again!), Rembrandt, Michelangelo. All individuals! There are collaborations as well, but usually only among two people. The Wright brothers, Gilbert and Sullivan, and so on.
So why has design and invention gone from solo efforts to group efforts? Why do we know the names of most of the inventors of 19th and early 20th century innovations, but not later achievements? For instance, who designed the Saturn V rocket? No one knows that, because it was a large team of people (and it was the culmination of numerous predecessors made by other teams of people). Why is that?
The biggest and most obvious answer is the increasing technological sophistication in nearly every area of engineering. The infamous Lazarus Long adage that "Specialization is for insects" notwithstanding, the amount of effort and specialization in various fields is astounding. Take a relatively obscure and narrow branch of mechanical engineering like fluid dynamics, and you'll find people devoting most of their lives to the study of that field. Furthermore, the applications of that field go far beyond what we'd assume. Someone tinkering in their garage couldn't make the Saturn V alone. They'd require too much expertise in a wide and disparate array of fields.
This isn't to say that someone tinkering in their garage can't create something wonderful. Indeed, that's where the first personal computer came from! And we certainly know the names of many innovators today. Mark Zuckerberg and Larry Page/Sergey Brin immediately come to mind... but even their inventions spawned large companies with massive teams driving future innovation and optimization. It turns out that scaling a product up often takes more effort and more people than expected. (More information about the pros and cons of moving to a collaborative structure will have to wait for a separate post.)
And with more people comes more communication. It's a necessity. You cannot collaborate without large amounts of communication. In Tom DeMarco and Timothy Lister's book Peopleware, they call this the High-Tech Illusion:
...the widely held conviction among people who deal with any aspect of new technology (as who of us does not?) that they are in an intrinsically high-tech business. ... The researchers who made fundamental breakthroughs in those areas are in a high-tech business. The rest of us are appliers of their work. We use computers and other new technology components to develop our products or to organize our affairs. Because we go about this work in teams and projects and other tightly knit working groups, we are mostly in the human communication business. Our successes stem from good human interactions by all participants in the effort, and our failures stem from poor human interactions.(Emphasis mine.) That insight is part of what initially inspired this series of posts. It's very astute, and most organizations work along those lines, and thus need to figure out ways to account for the additional costs of communication (this is particularly daunting, as such things are notoriously difficult to measure, but I'm getting ahead of myself). I suppose you could argue that both of these posts are somewhat inconclusive. Some of that is because they are part of a larger series, but also, as I've been known to say, human beings don't so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Recognizing and acknowledging the problems introduced by collaboration and communication is vital to working on any large project. As I mentioned towards the beginning of this post, this only really scratches the surface of the subject of communication, but for the purposes of this series, I think I've blathered on long enough. My next topic in this series will probably cover the various difficulties of providing estimates. I'm hoping the groundwork laid in these first two posts will mean that the next post won't be quite so long, but you never know!
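One simple way to see why more people necessarily means more communication: the number of pairwise communication channels grows quadratically with team size, a rule of thumb usually credited to Fred Brooks rather than to DeMarco and Lister:

```python
# Pairwise communication channels in a team of n people: n * (n - 1) / 2.
# The quadratic growth is why adding people adds communication overhead
# much faster than it adds hands.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(f"{n:>3} people -> {channels(n):>5} channels")
# 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225
```

Going from a Wright-brothers-sized team of two to a Saturn-V-sized team of fifty doesn't multiply the coordination problem by 25; it multiplies it by over a thousand.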
Posted by Mark on May 22, 2011 at 07:51 PM .: link :.
Sunday, April 03, 2011
So the NY Times has an article debating the necessity of the various gadgets. The argument here is that we're seeing a lot of convergence in tech devices, and that many technologies that once warranted a dedicated device are now covered by something else. Let's take a look at their devices, what they said, and what I think:
"It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently ... have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. ... And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher's Stone." (page 639)That sort of "surround and encapsulate" concept seems broadly applicable to a lot of technology, actually.
Posted by Mark on April 03, 2011 at 07:42 PM .: link :.
Wednesday, March 30, 2011
Nicholas Carr cracks me up. He's a skeptic of technology, and in particular, the internet. He's the guy who wrote the wonderfully divisive article, Is Google Making Us Stupid? The funny thing about all this is that he seems to have gained the most traction on the very platform he criticizes so much. Ultimately, though, I think he does have valuable insights and, if nothing else, he does raise very interesting questions about the impacts of technology on our lives. He makes an interesting counterweight to the techno-geeks who are busy preaching about transhumanism and the singularity. Of course, in a very real sense, his opposition dooms him to suffer from the same problems as those he criticizes. Google and the internet may not be a direct line to godhood, but they don't represent a descent into hell either. Still, reading some Carr is probably a good way to put techno-evangelism into perspective and perhaps reach some sort of Hegelian synthesis of what's really going on.
Otakun recently pointed to an excerpt from Carr's latest book. The general point of the article is to examine how human memory is being conflated with computer memory, and whether or not that makes sense:
...by the middle of the twentieth century memorization itself had begun to fall from favor. Progressive educators banished the practice from classrooms, dismissing it as a vestige of a less enlightened time. What had long been viewed as a stimulus for personal insight and creativity came to be seen as a barrier to imagination and then simply as a waste of mental energy. The introduction of new storage and recording media throughout the last century—audiotapes, videotapes, microfilm and microfiche, photocopiers, calculators, computer drives—greatly expanded the scope and availability of “artificial memory.” Committing information to one’s own mind seemed ever less essential. The arrival of the limitless and easily searchable data banks of the Internet brought a further shift, not just in the way we view memorization but in the way we view memory itself. The Net quickly came to be seen as a replacement for, rather than just a supplement to, personal memory. Today, people routinely talk about artificial memory as though it’s indistinguishable from biological memory.While Carr is perhaps more blunt than I would be, I have to admit that I agree with a lot of what he's saying here. We often hear about how modern education is improved by focusing on things like "thinking skills" and "problem solving", but the big problem with emphasizing that sort of work ahead of memorization is that the analysis needed for such processes require a base level of knowledge in order to be effective. This is something I've expounded on at length in a previous post, so I won't rehash that here.
The interesting thing about the internet is that it enables you to get to a certain base level of knowledge and competence very quickly. This doesn't come without its own set of challenges, and I'm sure Carr would be quick to point out that such a crash course can instill a false sense of security in us hapless internet users. After all, how do we know when we've reached that base level of competence? Our incompetence could very well mask our ability to recognize our own incompetence. However, I don't think that's an insurmountable problem. Most of us who use the internet a lot view it as something of a low-trust environment, which can, ironically, lead to a better result. On a personal level, I find that what the internet really helps with is determining just how much I don't know about a subject. That might seem like a silly thing to say, but even recognizing that your unknown unknowns are large can be helpful.
Some other assorted thoughts about Carr's excerpt:
Posted by Mark on March 30, 2011 at 06:06 PM .: link :.
Wednesday, December 01, 2010
Opera 11 Beta
I'm one of the few people that actually uses Opera to do the grand majority of my web browsing. In recent years, I've been using Firefox more, especially for web development purposes (it's hard to beat the Firebug/Web Dev Toolbar combo - Opera has a tool called Dragonfly that's decent, but not quite as good). A few years ago, I wrote a comparison of Firefox and Opera across 8 categories, and it came out a tie. The biggest advantage that Opera had was its usability and ease of use. On the other hand, Firefox's strength was its extensibility, something that Opera never fully embraced. Until now!
Opera recently released a beta of their next version, and I've been using it this week. It's looking like an excellent browser, with some big improvements over previous versions:
Posted by Mark on December 01, 2010 at 08:30 PM .: link :.
Wednesday, November 17, 2010
A few interesting links from the depths of teh interwebs:
Posted by Mark on November 17, 2010 at 09:16 PM .: link :.
Wednesday, August 04, 2010
A/B Testing Spaghetti Sauce
Earlier this week I was perusing some TED Talks and ran across this old (and apparently popular) presentation by Malcolm Gladwell. It struck me as particularly relevant to several topics I've explored on this blog, including Sunday's post on the merits of A/B testing. In the video, Gladwell explains why there are a billion different varieties of spaghetti sauce at most supermarkets:
The key insight Gladwell discusses in his video is basically the destruction of the Platonic Ideal (I'll summarize in this paragraph in case you didn't watch the video, which covers the topic in much more depth). He talks about Howard Moskowitz, who was a market research consultant with various food industry companies that were attempting to optimize their products. After conducting lots of market research and puzzling over the results, Moskowitz eventually came to a startling conclusion: there is no perfect product, only perfect products. Moskowitz made his name working with spaghetti sauce. Prego had hired him in order to find the perfect spaghetti sauce (so that they could compete with rival company, Ragu). Moskowitz developed dozens of prototype sauces and went on the road, testing each variety with all sorts of people. What he found was that there was no single perfect spaghetti sauce, but there were basically three types of sauce that people responded to in roughly equal proportion: standard, spicy, and chunky. At the time, there were no chunky spaghetti sauces on the market, so when Prego released their chunky spaghetti sauce, their sales skyrocketed. A full third of the market was underserved, and Prego filled that need.
Decades later, this is hardly news to us and the trend has spread from the supermarket into all sorts of other arenas. In entertainment, for example, we're seeing a move towards niches. The era of huge blockbuster bands like The Beatles is coming to an end. Of course, there will always be blockbusters, but the really interesting stuff is happening in the niches. This is, in part, due to technology. Once you can fit 30,000 songs onto an iPod and you can download "free" music all over the internet, it becomes much easier to find music that fits your tastes better. Indeed, this becomes a part of people's identity. Instead of listening to the mass-produced stuff, they listen to something a little odd and it becomes an expression of their personality. You can see evidence of this everywhere, and the internet is a huge enabler in this respect. The internet is the land of niches. Click around for a few minutes and you can easily find absurdly specific, single topic, niche websites like this one where every post features animals wielding lightsabers or this other one that's all about Flaming Garbage Cans In Hip Hop Videos (there are thousands, if not millions of these types of sites). The internet is the ultimate paradox of choice, and you're free to explore almost anything you desire, no matter how odd or obscure it may be (see also, Rule 34).
In relation to Sunday's post on A/B testing, the lesson here is that A/B testing is an optimization tool that allows you to see how various segments respond to different versions of something. In that post, I used an example where an internet retailer was attempting to find the ideal imagery to sell a diamond ring. A common debate in the retail world is whether that image should just show a closeup of the product, or if it should show a model wearing the product. One way to solve that problem is to A/B test it - create both versions of the image, segment visitors to your site, and track the results.
As discussed Sunday, there are a number of challenges with this approach, but one thing I didn't mention is the unspoken assumption that there actually is an ideal image. In reality, there are probably some people that prefer the closeup and some people who prefer the model shot. An A/B test will tell you what the majority of people like, but wouldn't it be even better if you could personalize the imagery used on the site depending on what customers like? Show the type of image people prefer, and instead of catering to the most popular segment of customer, you cater to all customers (the simple diamond ring example begins to break down at this point, but more complex or subtle tests could still show significant results when personalized). Of course, this is easier said than done - just ask Amazon, who does CRM and personalization as well as any retailer on the web, and yet manages to alienate a large portion of their customers every day! Interestingly, this really just shifts the purpose of A/B testing from one of finding the platonic ideal to finding a set of ideals that can be applied to various customer segments. Once again we run up against the need for more and better data aggregation and analysis techniques. Progress is being made, but I'm not sure what the endgame looks like here. I suppose time will tell. For now, I'm just happy that Amazon's recommendations aren't completely absurd for me at this point (which I find rather amazing, considering where they were a few years ago).
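To make that shift concrete, here's a minimal sketch of what "a set of ideals applied to various customer segments" might look like in Python. The segment names and conversion rates are entirely hypothetical, just to illustrate the idea: instead of serving everyone the single overall winner, each segment gets the variant that won for that segment.

```python
# Hypothetical per-segment results from the same image test.
# Values are conversion rates observed for each variant.
segment_rates = {
    "bargain_hunters": {"closeup": 0.031, "model_shot": 0.018},
    "gift_buyers":     {"closeup": 0.012, "model_shot": 0.027},
}

def best_variant(segment: str) -> str:
    """Serve each segment the variant it converted best on,
    rather than the single overall winner."""
    rates = segment_rates[segment]
    return max(rates, key=rates.get)
```

In this made-up data, bargain hunters would see the closeup while gift buyers would see the model shot; the hard part in practice is the data collection and segmentation, not the lookup.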
Posted by Mark on August 04, 2010 at 07:54 PM .: link :.
Sunday, August 01, 2010
Groundhog Day and A/B Testing
Jeff Atwood recently made a fascinating observation about the similarities between the classic film Groundhog Day and A/B Testing.
In case you've only recently emerged from a hermit-like existence, Groundhog Day is a film about Phil (played by Bill Murray). It seems that Phil has been doomed (or is it blessed) to live the same day over and over again. It doesn't seem to matter what he does during this day, he always wakes up at 6 am on Groundhog Day. In the film, we see the same day repeated over and over again, but only in bits and pieces (usually skipping repetitive parts). The director of the film, Harold Ramis, believes that by the end of the film, Phil has spent the equivalent of about 30 or 40 years reliving that same day.
Towards the beginning of the film, Phil does a lot of experimentation, and Atwood's observation is that this often takes the form of an A/B test. This is a concept that is perhaps a little more esoteric, but the principles are easy. Let's take a simple example from the world of retail. You want to sell a new ring on a website. What should the main image look like? For simplification purposes, let's say you narrow it down to two different concepts: one, a closeup of the ring all by itself, and the other a shot of a model wearing the ring. Which image do you use? We could speculate on the subject for hours and even rationalize some pretty convincing arguments one way or the other, but it's ultimately not up to us - in retail, it's all about the customer. You could "test" the concept in a serial fashion, but ultimately the two sets of results would not be comparable. The ring is new, so whichever image is used first would get an unfair advantage, and so on. The solution is to show both images during the same timeframe. You do this by splitting your visitors into two segments (A and B), showing each segment a different version of the image, and then tracking the results. If the two images do, in fact, cause different outcomes, and if you get enough people to look at the images, it should come out in the data.
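As a rough illustration of the mechanics described above (a hypothetical sketch, not anything from Atwood's post or a real retail system), the segmenting and tracking might look something like this in Python. Hashing the visitor ID keeps the assignment deterministic, so a returning visitor always sees the same image:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into segment A or B.

    Hashing the visitor id (rather than choosing randomly on each
    request) ensures a returning visitor always sees the same image.
    """
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Track impressions and purchases per variant as visitors arrive.
results = {"A": {"views": 0, "buys": 0}, "B": {"views": 0, "buys": 0}}

def record_view(visitor_id: str) -> str:
    variant = assign_variant(visitor_id)
    results[variant]["views"] += 1
    return variant  # "A" -> closeup image, "B" -> model shot

def record_purchase(visitor_id: str) -> None:
    results[assign_variant(visitor_id)]["buys"] += 1
```

With enough traffic, comparing the buy/view ratios of the two buckets tells you whether the images actually cause different outcomes.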
This is what Phil does in Groundhog Day. For instance, Phil falls in love with Rita (played by Andie MacDowell) and spends what seems like months compiling lists of what she likes and doesn't like, so that he can construct the perfect relationship with her.
Phil doesn't just go on one date with Rita, he goes on thousands of dates. During each date, he makes note of what she likes and responds to, and drops everything she doesn't. At the end he arrives at -- quite literally -- the perfect date. Everything that happens is the most ideal, most desirable version of all possible outcomes on that date on that particular day. Such are the luxuries afforded to a man repeating the same day forever.

As Atwood notes, the interesting thing about this process is that even once Phil has constructed that perfect date, Rita still rejects him. From this example, and presumably from experience with A/B testing, Atwood concludes that A/B testing is empty and that subjects can often sense a lack of sincerity behind the A/B test.
It's an interesting point, but I'm not sure it's applicable in all situations. Atwood admits that A/B testing is good at smoothing out details, but there's something more at work in Groundhog Day that Atwood doesn't mention: Phil is using A/B testing to misrepresent himself as the ideal mate for Rita. Yes, he's done the experimentation to figure out what "works" and what doesn't, but his initial testing was ultimately shallow. Rita didn't reject him because he had all the right answers, she rejected him because he was attempting to deceive her. He was misrepresenting himself, and that certainly can lead to a feeling of emptiness.
If you look back at my example above about the ring being sold on a retail website, you'll note that there's no deception going on there. Somehow I doubt either image would result in a hollow feeling by the customer. Why is this different than Groundhog Day? Because neither image misrepresents the product, and one would assume that the website is pretty clear about the fact that you can buy things there. Of course, there are a million different variables you could test (especially once you get into text and marketing hooks, etc...) and some of those could be more deceptive than others, but most of the time, deception is not the goal. There is a simple choice to be made: instead of constantly wondering about your product image and second-guessing yourself, why not A/B test it and see what customers like better?
There are tons of limitations to this approach, but I don't think it's as inherently flawed as Atwood seems to believe. Still, the data you get out of an A/B test isn't always conclusive and even if it is, whatever learnings you get out of it aren't necessarily applicable in all situations. For instance, what works for our new ring can't necessarily be applied to all new rings (this is a problem for me, as my employer has a high turnover rate for products - as such, the simple example of the ring as described above would not be a good test for my company unless the ring would be available for a very long time). Furthermore, while you can sometimes pick a winner, it's not always clear why it's a winner. This is especially the case when the differences between A and B are significant (for instance, testing an entirely redesigned page might yield results, but you will not know which of the changes to the page actually caused said results - on the other hand, A/B testing is really the only way to accurately calculate ROI on significant changes like that.)
Obviously these limitations should be taken into account when conducting an A/B test, and I think what Phil runs into in Groundhog Day is a lack of conclusive data. One of the problems with interpreting inconclusive data is that it can be very tempting to rationalize it. Phil's initial attempts to craft the perfect date for Rita fail because he's really only scraping the surface of her needs and desires. In other words, he's testing the wrong thing, misunderstanding the data, and thus getting inconclusive results.
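For what it's worth, "conclusive" has a standard statistical meaning here. One common (though hardly the only) way to check whether an observed difference between A and B is real or just noise is a two-proportion z-test; the sketch below is the textbook version, not anything specific to the tools or examples in this post:

```python
import math

def conversion_z_test(buys_a, views_a, buys_b, views_b):
    """Two-proportion z-test: is the difference in conversion rates
    between variants A and B likely real, or just noise?

    Returns (z, p) where p is the two-sided p-value; a p above ~0.05
    usually means the test is inconclusive and needs more traffic.
    """
    p_a, p_b = buys_a / views_a, buys_b / views_b
    pooled = (buys_a + buys_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

A 20% vs. 10% conversion rate over a thousand visitors each is decisive; a 10.5% vs. 10% difference over the same traffic is exactly the kind of inconclusive result that tempts you to rationalize.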
The interesting thing about the Groundhog Day example is that, in the end, the movie is not a condemnation of A/B testing at all. Phil ultimately does manage to win the affections of Rita. Of course, it took him decades to do so, and that's worth taking into account. Perhaps what the film is really saying is that A/B testing is often more complicated than it seems and that the results you get depend on what you put into it. A/B testing is not the easy answer it's often portrayed as and it should not be the only tool in your toolbox (i.e. forcing employees to prove that using 3, 4 or 5 pixels for a border is ideal is probably going a bit too far), but neither is it as empty as Atwood seems to be indicating. (And we didn't even talk about multivariate tests! Let's get Christopher Nolan on that. He'd be great at that sort of movie, wouldn't he?)
Posted by Mark on August 01, 2010 at 09:57 PM .: link :.
Sunday, May 30, 2010
Someone sent me a note about a post I wrote on the 4th Kingdom boards in 2005 (August 3, 2005, to be more precise). It was in response to a thread about technology and consumer electronics trends, and the original poster gave two examples that were exploding at the time: "camera phones and iPods." This is what I wrote in response:
Heh, I think the next big thing will be the iPod camera phone. Or, on a more general level, mp3 player phones. There are already some nifty looking mp3 phones, most notably the Sony/Ericsson "Walkman" branded phones (most of which are not available here just yet). Current models are all based on flash memory, but it can't be long before someone releases something with a small hard drive (a la the iPod). I suspect that, in about a year, I'll be able to hit 3 birds with one stone and buy a new cell phone with an mp3 player and digital camera.

For an off-the-cuff informal response, I think I did pretty well. Of course, I still got a lot of the specifics wrong. For instance, I'm pretty clearly talking about the iPhone, though that would have to wait about 2 years before it became a reality. I also didn't anticipate the expansion of flash memory to more usable sizes and prices. Though I was clearly talking about a convergence device, I didn't really say anything about what we now call "apps".
In terms of game consoles, I didn't really say much. My first thought upon reading this post was that I had completely missed the boat on the Wii; however, it appears that the Wii's new controller scheme wasn't shown until September 2005 (about a month after my post). I did manage to predict a winner in the HD video war though, even if I framed the prediction as a "high capacity DVD war" and spelled blu-ray wrong.
I'm not generally good at making predictions about this sort of thing, but it's nice to see when I do get things right. Of course, you could make the argument that I was just stating the obvious (which is basically what I did with my 2008 predictions). Then again, everything seems obvious in hindsight, so perhaps it is still a worthwhile exercise for me. If nothing else, it gets me to think in ways I'm not really used to... so here are a few predictions for the rest of this year:
Posted by Mark on May 30, 2010 at 09:00 PM .: link :.
Sunday, March 14, 2010
Remix Culture and Soviet Montage Theory
A video mashup of The Beastie Boys' popular and amusing Sabotage video with scenes from Battlestar Galactica has been making the rounds recently. It's well done, but a little on the disposable side of remix culture. The video led Sonny Bunch to question "remix culture":
It’s quite good. But, ultimately, what’s the point?

These are good questions, and I'm not surprised that the BSG Sabotage video prompted them. The implication of Sonny's post is that he thinks it is an unoriginal waste of talent (he may be playing a bit of devil's advocate here, but I'm willing to play along because these are interesting questions and because it will give me a chance to pedantically lecture about film history later in this post!) In the comments, Julian Sanchez makes a good point (based on a video he produced earlier that was referenced by someone else in the comment thread), which I'll expand on later in this post:
First, the argument I’m making in that video is precisely that exclusive focus on the originality of the contribution misses the value in the activity itself. The vast majority of individual and collective cultural creation practiced by ordinary people is minimally “original” and unlikely to yield any final product of wide appeal or enduring value. I’m thinking of, e.g., people singing karaoke, playing in a garage band, drawing, building models, making silly YouTube videos, improvising freestyle poetry, whatever. What I’m positing is that there’s an intrinsic value to having a culture where people don’t simply get together to consume professionally produced songs and movies, but also routinely participate in cultural creation. And the value of that kind of cultural practice doesn’t depend on the stuff they create being particularly awe-inspiring.

To which Sonny responds:
I’m actually entirely with you on the skill that it takes to produce a video like the Brooklyn hipsters did — I have no talent for lighting, camera movements, etc. I know how hard it is to edit together something like that, let alone shoot it in an aesthetically pleasing manner. That’s one of the reasons I find the final product so depressing, however: An impressive amount of skill and talent has gone into creating something that is not just unoriginal but, in a way, anti-original. These are people who are so devoid of originality that they define themselves not only by copying a video that they’ve seen before but by copying the very personalities of characters that they’ve seen before.

Another good point, but I think Sonny is missing something here. The talents of the BSG Sabotage editor or the Brooklyn hipsters are certainly admirable, but while we can speculate, we don't necessarily know their motivations. About 10 years ago, a friend and amateur filmmaker showed me a video one of his friends had produced. It took scenes from Star Wars and Star Trek II: The Wrath of Khan and recut them so it looked like the Millennium Falcon was fighting the Enterprise. It would show Han Solo shooting, then cut to the Enterprise being hit. Shatner would exclaim "Fire!" and then it would cut to a blast hitting the Millennium Falcon. And so on. Another video from the same guy took the musical number George Lucas had added to Return of the Jedi in the Special Edition, laid Wu-Tang Clan in as the soundtrack, then re-edited the video elements so everything matched up.
These videos sound fun, but not particularly original or even special in this day and age. However, these videos were made ten to fifteen years ago. I was watching them on a VHS(!) and the person making the edits was using analog techniques and equipment. It turns out that these videos were how he honed his craft before he officially got a job as an editor in Hollywood. I'm sure there were tons of other videos, probably much less impressive, that he had created before the ones I'm referencing. Now, I'm not saying that the BSG Sabotage editor or the Brooklyn Hipsters are angling for professional filmmaking jobs, but it's quite possible that they are at least exploring their own possibilities. I would also bet that these people have been making videos like this (though probably much less sophisticated) since they were kids. The only big difference now is that technology has enabled them to make a slicker experience and, more importantly, to distribute it widely.
It's also worth noting that this sort of thing is not without historical precedent. Indeed, the history of editing and montage is filled with this sort of thing. In the 1910s and 1920s, Russian filmmaker Lev Kuleshov conducted a series of famous experiments that helped demonstrate the role of editing in film. In these experiments, he would show a man with an expressionless face, then cut to various other shots. In one example, he showed the expressionless face, then cut to a bowl of soup. When prompted, audiences would report that the man was hungry. Kuleshov then took the exact same footage of the expressionless face and cut to a pretty girl. This time, audiences reported that the man was in love. Another experiment alternated between the expressionless face and a coffin, a juxtaposition that led audiences to believe that the man was stricken with grief. This phenomenon has become known as the Kuleshov Effect.
For the current discussion, one notable aspect of these experiments is that Kuleshov was working entirely from pre-existing material. And this sort of thing was not uncommon, either. At the time, there was a shortage of raw film stock in Russia. Filmmakers had to make do with what they had, and often spent their time re-cutting existing material, which led to what's now called Soviet Montage Theory. When D.W. Griffith's Intolerance, which used advanced editing techniques (it featured a series of cross-cut narratives which eventually converged in the last reel), opened in Russia in 1919, it quickly became very popular. The Russian film community saw this as a validation and popularization of their theories and also as an opportunity. Russian critics and filmmakers were impressed by the film's technical qualities, but dismissed the story as "bourgeois", claiming that it failed to resolve issues of class conflict, and so on. So, not having much raw film stock of their own, they took to playing with Griffith's film, re-editing certain sections of it to make it more "agitational" and revolutionary.
The extent to which this happened is a bit unclear, and certainly public exhibitions were not as dramatically altered as I'm making it out to be. However, there are Soviet versions of the movie that contained small edits and a newly filmed prologue. This was done to "sharpen the class conflict" and "anti-exploitation" aspects of the film, while still attempting to respect the author's original intentions. This was part of a larger trend of adding Soviet propaganda to pre-existing works of art, and given the ideals of socialism, it makes sense. (The preceding is a simplification of history, of course... see this chapter from Inside the Film Factory for a more detailed discussion of Intolerance and its impact on Russian cinema.) In the Russian film world, things really began to take off with Sergei Eisenstein and films like Battleship Potemkin. Watch that film today, and you'll be struck by how modern the editing feels, especially during the famous Odessa Steps sequence (which you'll also recognize if you've ever seen Brian De Palma's "homage" in The Untouchables).
Now, I'm not really suggesting that the woman who produced BSG Sabotage is going to be the next Eisenstein, merely that the act of cutting together pre-existing footage is not necessarily a sad waste of talent. I've drastically simplified the history of Soviet Montage Theory above, but there are parallels between Soviet filmmakers then and YouTube videomakers today. Due to limited resources and knowledge, they began experimenting with pre-existing footage. They learned from the experience and went on to grander modifications of larger works of art (Griffith's Intolerance). This eventually culminated in original works of art, like those produced by Eisenstein.
Now, YouTube videomakers haven't quite made that expressive leap yet, but it's only been a few years. It's going to take time, and obviously editing and montage are already well established features of film, so innovation won't necessarily come from that direction. But that doesn't mean that nothing of value can emerge from this sort of thing, nor does messing around with videos on YouTube limit these young artists to film. While Roger Ebert's criticisms are valid, more and more, I'm seeing interactivity as the unexplored territory of art. Video games like Heavy Rain are an interesting experience and hint at something along these lines, but they are still severely limited in many ways (in other words, Ebert is probably right when it comes to that game). It will take a lot of experimentation to get to a point where maybe Ebert would be wrong (if it's even possible at all). Learning about the visual medium of film by editing together videos of pre-existing material would be an essential step in the process. Improving the technology with which to do so is also an important step. And so on.
To return back to the BSG Sabotage video for a moment, I think that it's worth noting the origins of that video. The video is clearly having fun by juxtaposing different genres and mediums (it is by no means the best or even a great example of this sort of thing, but it's still there. For a better example of something built entirely from pre-existing works, see Shining.). Battlestar Galactica was a popular science fiction series, beloved by many, and this video comments on the series slightly by setting the whole thing to an unconventional music choice (though given the recent Star Trek reboot's use of the same song, I have to wonder what the deal is with SF and Sabotage). Ironically, even the "original" Beastie Boys video was nothing more than a pastiche of 70s cop television shows. While I'm no expert, the music on Ill Communication, in general, has a very 70s feel to it. I suppose you could say that association only exists because of the Sabotage video itself, but even other songs on that album have that feel - for one example, take Sabrosa. Indeed, the Beastie Boys are themselves known for this sort of appropriation of pre-existing work. Their album Paul's Boutique infamously contains literally hundreds of samples and remixes of popular music. I'm not sure how they got away with some of that stuff, but I suppose this happened before getting sued for sampling was common. Nowadays, in order to get away with something like Paul's Boutique, you'll need to have deep pockets, which sorta defeats the purpose of using a sample in the first place. After all, samples are used in the absence of resources, not just because of a lack of originality (though I guess that's part of it). In 2004 Nate Harrison put together this exceptional video explaining how a 6 second drum beat (known as the Amen Break) exploded into its own sub-culture:
There is certainly some repetition here, and maybe some lack of originality, but I don't find this sort of thing "sad". To be honest, I've never been a big fan of hip hop music, but I can't deny the impact it's had on our culture and all of our music. As I write this post, I'm listening to Danger Mouse's The Grey Album:
It uses an a cappella version of rapper Jay-Z's The Black Album and couples it with instrumentals created from a multitude of unauthorized samples from The Beatles' LP The Beatles (more commonly known as The White Album). The Grey Album gained notoriety due to the response by EMI in attempting to halt its distribution.

I'm not familiar with Jay-Z's album and I'm probably less familiar with The White Album than I should be, but I have to admit that this combination and the artistry with which the two seemingly incompatible works are combined into one cohesive whole is impressive. Despite the lack of an official release (that would have made Danger Mouse money), The Grey Album made many best of the year (and best of the decade) lists. I see some parallels between the 1980s and 1990s use of samples, remixes, and mashups, and what was happening in Russian film in the 1910s and 1920s. There is a pattern worth noticing here: new technology enables artists to play with existing art, then apply what they learn to something more original later. Again, I don't think that the BSG Sabotage video is particularly groundbreaking, but that doesn't mean that the entire remix culture is worthless. I'm willing to bet that remix culture will eventually contribute towards something much more original than BSG Sabotage...
Incidentally, the director of the original Beastie Boys Sabotage video? Spike Jonze, who would go on to direct movies like Being John Malkovich, Adaptation., and Where the Wild Things Are. I think we'll see some parallels between the oft-maligned music video directors, who started to emerge in the film world in the 1990s, and YouTube videomakers. At some point in the near future, we're going to see film directors coming from the world of short-form internet videos. Will this be a good thing? I'm sure there are lots of people who hate the music video aesthetic in film, but it's hard to really be that upset that people like David Fincher and Spike Jonze are making movies these days. I doubt YouTubers will have a more popular style, and I don't think they'll be dominant or anything, but I think they will arrive. Or maybe YouTube videomakers will branch out into some other medium or create something entirely new (as I mentioned earlier, there's a lot of room for innovation in the interactive realm). In all honesty, I don't really know where remix culture is going, but maybe that's why I like it. I'm looking forward to seeing where it leads.
Posted by Mark on March 14, 2010 at 02:18 PM .: link :.
Wednesday, March 10, 2010
Blast from the Past
A coworker recently unearthed a stash of a publication called The Net, a magazine published circa 1997. It's been an interesting trip down memory lane. In no particular order, here are some thoughts about this now defunct magazine.
Posted by Mark on March 10, 2010 at 07:19 PM .: link :.
Sunday, January 10, 2010
I have recently come into possession of a second LCD monitor, and hooked it up to do some dual monitor awesomeness (amazingly enough, I didn't even need to upgrade my graphics card to do so). The problem is that my current desk is one of those crappy turn-of-the-century numbers that assume you only have one monitor and thus don't have space for a second. I managed to work around this... by ripping off the hutch portion of the desk, but I could still use a new desk, as this one really has seen better days.
So I started thinking about what I need my desk to do, and have quickly descended into Paradox of Choice hell. At a minimum, a new desk would need to be able to handle:
In terms of taste, I tend to be a minimalist. I don't need lots of flying doodads or space-age design. Just something simple that covers the above. In looking around, this seems to be a rarity. As per usual when it comes to this sort of thing, Jeff Atwood has already posted about this, and the comment thread there is quite interesting (and still being updated, years later).
The best desk I've found so far seems to be the D2 Pocket Desk. Of course the big problem with that one is that it's obscenely expensive (even on sale, it's wayyyy too expensive). But it's perfect for me. It's notable almost as much for what you don't see as what you do see - apparently there's a big compartment in the back that's big enough to stuff all the cables, wires, routers, etc... that I need (and you can see the two little holes meant to corral the wires into that area). It being as expensive as it is, it's not something I'm seriously considering, but I'm trying to find a cheaper, similarly designed option (perhaps something that doesn't use cherry wood, which is apparently quite expensive). I'm kinda surprised at how few computer desks even attempt to account for cable management. Anyway, here's a quick picture:
The other notable option I found at Jeff's site was from a company called Anthro. Not the model he mentions, which is a monstrosity. However, Anthro features lots of models and everything is customizable in the extreme. While they seem like good quality desks, they're also much more reasonably priced. Unfortunately, their configuration tool does little to help you visualize what you'll end up with. Still, the 48" AnthroCart seems like it would fit my needs and given the modular nature of the desk, I can always add on to it later. If you look at the 3rd picture on that page, it's kinda what I'm looking for (but without the bottom shelf and maybe with a filing cabinet attachment).
The big questions I have about the AnthroCart are how well their keyboard/mouse solutions work (all of the varieties seem to be quite small - and my current one is actually kinda large, which I really like for some reason...). There's also the question of how well those extra shelves on the top and bottom work. And color. Yeah, so this one is definitely in Paradox of Choice territory. However, they're apparently pretty agreeable and will help guide you in choosing the various accessories, etc... So maybe I'll start up a chat with a rep when I get a chance...
Some other stuff I've been looking at:
Update: Desk 51 from BlueDot (via) is pretty interesting. I'm wondering how sturdy it is.
Again Update: This Landon Desk from Crate and Barrel has grown on me a bit, especially after seeing a similar desk on Flickr. The good thing about C&B is that there is a store near me, so I can at least check it out in person...
Another Update: Well, that's an idea... which I suppose also brings up the "Build your own" option, which could be a rewarding experience.
Yet Another Update: For reference, here's a pic of my desk as currently configured, and here's the surprisingly sturdy keyboard tray.
Posted by Mark on January 10, 2010 at 07:00 PM .: link :.
Wednesday, November 18, 2009
Another Store You Made
I'm totally stealing an idea from Jason Kottke here (let's call it a meme!), but it's kinda neat:
Whenever I link to something at Amazon on kottke.org, there's an affiliate code associated with the link. When I log into my account, I can access a listing of what people bought. The interesting bit is that everything someone buys after clicking through to Amazon counts and is listed, even items I didn't link to directly. These purchased-but-unlinked-to items form a sort of store created by kottke.org readers of their own accord.
I have about 1/1000000th the readership of Kottke, but I do have an Amazon affiliate account (it doesn't even come close to helping pay for the site, but it does feed my book/movie/music/video game addictions). Of course, I don't sell nearly as much stuff either, but here are a few things sold that haven't been directly linked:
Posted by Mark on November 18, 2009 at 07:23 PM .: link :.
Sunday, June 28, 2009
Interrupts and Context Switching
To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc... When you get into the guts of a computer and start looking at how they work, it seems amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform millions of these operations per second, so it still feels fast. The processor performs these operations in a serial fashion - basically a single-file line of operations.
This single-file approach can be quite inefficient, and there are times when you want a computer to be processing many different things at once, rather than one thing at a time. For example, most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself: when a program needs some data, it may have to read that data from the hard drive first. This may only take a few milliseconds, but the CPU would be idle during that time - quite inefficient. To improve efficiency, computers use multitasking. A CPU can still only be running one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called Context Switching. Ironically, the act of context switching adds a fair amount of overhead to the computing process. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load the state of the CPU from memory. Fortunately, this overhead is often offset by the efficiency gained with frequent context switches.
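The save-and-restore dance described above can be sketched in a few lines of Python. This is only a toy illustration, not how an OS scheduler actually works: generators make the "save the current state" step implicit, since each yield suspends a task while preserving its local variables, much as an OS saves CPU registers before switching away.

```python
# A toy round-robin scheduler: interleave several tasks, one time
# slice at a time. Each yield "saves" the task's state; pulling the
# task back off the queue "restores" it.

from collections import deque

def count_task(name, n):
    """A task that counts to n, yielding control after each step."""
    for i in range(1, n + 1):
        yield f"{name}: step {i}"

def round_robin(tasks):
    """Run tasks one time slice at a time until all are finished."""
    queue = deque(tasks)
    log = []
    while queue:
        task = queue.popleft()        # "load" the next task's state
        try:
            log.append(next(task))    # run one time slice
            queue.append(task)        # "save" state; requeue at the back
        except StopIteration:
            pass                      # task finished; drop it
    return log

log = round_robin([count_task("A", 3), count_task("B", 2)])
print(log)
# The log interleaves A and B even though only one runs at any instant.
```

Note that the scheduler itself does real work on every switch (queue bookkeeping, exception handling), which is the overhead the paragraph above mentions.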
If you can do context switches frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Signaling the CPU to do a context switch is often accomplished with a mechanism called an Interrupt. For the most part, the computers we're all using are Interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.
This might sound tedious to us, but computers are excellent at this sort of processing. They will do millions of operations per second, and generally have no problem switching from one program to the other and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing - we can't change the speed of light or the size of atoms, among other physical constraints - and so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be to figure out how to use parallel computing as well as we now use serial computing. Hence all the talk about multi-core processing (most commonly with 2 or 4 cores).
Parallel computing can accomplish things far beyond our current technological capabilities. For a perfect example of this, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things which are far beyond the abilities of the biggest and most complex computers in existence. The reason is that there are truly massive numbers of neurons in our brain, and they're all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it's not so much the number of neurons we have as how they're organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn't mean they're proportionally more intelligent. An elephant's brain is much larger than a human's brain, but elephants are obviously much less intelligent than humans.
Of course, we know very little about the details of how our brains work (and I'm not an expert), but it seems clear that brain size or neuron count are not as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are "digital" in that if you were to take a snapshot of the brain at a given instant, each neuron would be either "on" or "off" (i.e. a 1 or a 0). However, neurons don't work like digital electronics. When a neuron fires, it doesn't just turn on, it pulses. What's more, each neuron is accepting input from and providing output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
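The idea that connection weight, not just connection count, determines behavior can be sketched with the simplest artificial-neuron model. This is a deliberately crude weighted-sum-and-threshold model (real neurons pulse and adapt, as noted above); the numbers are made up for illustration.

```python
# A minimal artificial neuron: each input connection has a weight (its
# "influence"), and the neuron fires only if the weighted sum of its
# inputs crosses a threshold.

def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Same three inputs, different weightings: the strongly weighted
# connection dominates the outcome, even though the inputs are identical.
inputs = [1, 1, 0]
print(neuron_fires(inputs, [0.9, 0.1, 0.5], threshold=0.8))  # fires
print(neuron_fires(inputs, [0.2, 0.1, 0.9], threshold=0.8))  # doesn't
```

Rewiring the brain, in this toy picture, amounts to changing the weights - which is exactly the "constantly in flux" part.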
This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable - things humans cherish and that computers can't really do on their own.
However, this all comes with its own set of tradeoffs. With respect to this post, the most relevant tradeoff is that humans aren't particularly good at doing context switches. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious - heart pumping, breathing, processing sensory input, etc... Those are also things that we never really cease doing (while we're alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).
In a computer, everything is happening in serial and thus it is easy to predict how various inputs will impact the system. What's more, when a computer stores its CPU's current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, doing this sort of context switching is much more difficult. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don't necessarily have a more effective memory system, they're just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so doing a context switch introduces a lot of thrash in what you were originally doing because you will have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time doing). When you're working on something specific, you're dedicating a significant portion of your conscious brainpower towards that task. In other words, you're probably engaging millions if not billions of neurons in the task. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. In a computer, you need only ensure the current state of a single CPU is saved. Your brain, on the other hand, has a much tougher job, and its memory isn't quite as reliable as a computer's memory. I like to refer to this as mental inertia. This sort of issue manifests itself in many different ways.
One thing I've found is that it can be very difficult to get started on a project, but once I get going, it becomes much easier to remain focused and get a lot accomplished. But getting started can be a problem for me, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky - its called Fire and Motion. A quick excerpt:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.
I've found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as "flow" or being "in the zone." This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.
From my own personal experience a couple of years ago during a particularly demanding project, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hopes that they will be able to get more done because no one else is there (and complain when people arrive that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.
A key component of flow is finding a large, uninterrupted chunk of time in which to work. It's also something that can be difficult to do at a lot of workplaces. Mine is a 24/7 company, and the nature of our business requires frequent interruptions, so many of us are in a near constant state of context switching. Between phone calls, emails, and instant messaging, we're sure to be interrupted many times an hour if we're constantly keeping up with them. What's more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have lots of meetings on our calendars, which only makes it more difficult to concentrate on something important.
Tell me if this sounds familiar: You wake up early and during your morning routine, you plan out what you need to get done at work today. Let's say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails and by the end of the day, you've accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.
Another example: if it's 2:40 pm and I know I have a meeting at 3 pm - should I start working on a task I know will take me 3 solid hours or so to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I'll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are when I get back to my desk to get started again, I'm going to have to refamiliarize myself with the project and what I had already done before proceeding.
Of course, none of what I'm saying here is especially new, but in today's world it can be useful to remind ourselves that we don't need to always be connected or constantly monitoring emails, RSS, facebook, twitter, etc... Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It's funny, when you look at a lot of attempts to increase productivity, efforts tend to focus on managing time. While important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).
(Note: As long and ponderous as this post is, it's actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored towards the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that could be repurposed in my professional life (and vice/versa).)
Posted by Mark on June 28, 2009 at 03:44 PM .: link :.
Wednesday, June 10, 2009
When I write about movies or anime, I like to include screenshots. Heck, half the fun of the Friday the 13th marathon has been the screenshots. However, I've been doing this manually and it's become somewhat time intensive... So I've been looking for ways to make the process of creating the screenshots easier. I was going to write a post about a zombie movie tonight and I had about 15 screenshots I wanted to use...
I take screenshots using PowerDVD, which produces .bmp files. To create a screenshot for a post, I will typically crop out any unsightly black borders (they're ugly and often asymmetrical), convert to .jpg and rename the file. Then I will create a smaller version (typically 320 pixels, while maintaining the aspect ratio), using a variant of the original .jpg's filename. This smaller version is what you see in my post, while the larger one is what you see when you click on the image in my post.
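The sizing and renaming arithmetic in that workflow is simple enough to sketch. The snippet below uses only the standard library (an imaging library like Pillow would do the actual pixel work of cropping and converting); the 320-pixel target width is from the post, but the "-small" filename suffix is just an assumption for illustration.

```python
# Sketch of the thumbnail math and filename scheme for the screenshot
# workflow: raw .bmp capture -> full-size .jpg + small .jpg.

import os

def thumbnail_size(width, height, target_width=320):
    """Scale down to target_width while preserving the aspect ratio."""
    scale = target_width / width
    return target_width, round(height * scale)

def derive_names(bmp_path):
    """Map a raw .bmp capture to its full-size and small .jpg names."""
    base, _ = os.path.splitext(bmp_path)
    return base + ".jpg", base + "-small.jpg"

print(thumbnail_size(1024, 576))        # (320, 180) for a 16:9 capture
print(derive_names("zombie-shot1.bmp")) # hypothetical filename
```

Doing this once is trivial; the pain described below comes from repeating it fifteen times per post by hand, which is exactly what batch tools are for.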
I've always used GIMP to accomplish this, but it's a pretty manual process, so I started looking around for some batch image processing programs. There are tons of the things out there. I found several promising programs. Batch Image Resizer was pretty awesome and did exactly what I wanted, but the free trial version inserted a huge unwanted watermark that essentially rendered the output useless. I looked at a few other free apps, but they didn't meet some of my needs.
Eventually, I came across the open source Phatch, which looked like it would provide everything I needed. The only issue was the installation process. It turns out that Phatch was written in Python, so in addition to Phatch, you also need to download and install Python, wxPython, Python Imaging Library and the Python Win32 Extensions. What's more, the Phatch documentation doesn't account for the fact that new versions of all of those are available and not all of them are compatible with each other. After a false start, I managed to download and install all the necessary stuff. Then, to run the application, I have to use the goddamned command line. Yeah, I know windows users don't get much support from the linux community, but this is kinda ridiculous.
But I got it all working and now I was on my way. As I've come to expect from open source apps, Phatch has a different way of setting up your image processing than most of the other apps I'd seen... but I was able to figure it out relatively quickly. According to the Phatch documentation, the Crop action looked pretty easy to use... the only problem was that when I ran Phatch, Crop did not appear to be on the list of actions. Confused, I looked around the documentation some more and it appeared that there were several other actions that could be used to crop images. For example, if I used the Canvas action, I could technically crop the image by specifying measurements smaller than the image itself - this is how I eventually accomplished the feat of converting several screenshots from their raw form to their edited versions. Here's an example of the zombietastic results (for reference, a .jpg of the original):
Bonus points to anyone who can name the movie!
The process has been frustrating and it took me a while to get all of this done. At this point, I have to wonder if I'd have been better off just purchasing that first app I found... and then I would have been done with it (and probably wouldn't be posting this at all). I'm hardly an expert on the subject of batch image manipulation and maybe I'm missing something fairly obvious, but I have to wonder why Phatch is so difficult to download, install, and use. I like open source applications and use several of them regularly, but sometimes they make things a lot harder than they need to be.
Update: I just found David's Batch Processor (a plugin for GIMP), but its renaming functionality is horrible (you can't actually rename the images - but you can add a prefix or suffix to the original filename.) Otherwise, it's decent.
And I also found FastStone Photo Resizer, which does everything I need it to do, and I don't need to run it from the command line either. This is what I'll probably be using in the future...
Update II: I got an email from Stani, who works on Phatch and was none too pleased about the post. It seems he had trouble posting a comment here (d'oh - second person this week who mentioned that, which is strange, as it seems to have been working fine for the past few months and I haven't changed anything...). Anyway, here are his responses to the above:
As your comment system doesn't work, I post it through email. Considering the rant of your blog post, I would appreciate if you publish it as a comment for: http://kaedrin.com/weblog/archive/001652.html
And my response:
Apologies if my ranting wasn't stimulating enough, but considering that it took a couple of hours to get everything working and that I value my time, I wasn't exactly enthused with the application or the documentation. Believe it or not, I did click on the "edit" link on the wiki with the intention of adding some notes about the updated version numbers, but it said I had to be registered and I was already pretty fed up and not in the mood to sign up for anything. I admit that I neglected to do my part, but I got into this to save time and it ended up being an enormous time-sink. If I get a chance, I'll take another look.
Update III: Ben over at Midnight Tease has been having fun with Open Source as well...
Posted by Mark on June 10, 2009 at 09:54 PM .: link :.
Wednesday, February 04, 2009
I've always considered myself something of a nerd, even back when being nerdy wasn't cool. Nowadays, everyone thinks they're a nerd. MGK recently noticed this:
Recently, I was surfing the net looking for lols, and came across a personal ad on Craigslist. The ad was not in and of itself hilarious, but one thing struck me. The writer described herself as “nerdy,” and as an example of her nerdiness, explained that she loved to watch Desperate Housewives.
To address this situation, he has devised "a handy guide for people to define their own nerdiness, based on a number of nerdistic passions." I'm a little surprised at how poorly I did in some of these categories.
Posted by Mark on February 04, 2009 at 10:45 PM .: link :.
Sunday, May 18, 2008
Firefox versus Opera
I use Opera to do most of my web browsing and have done so for quite a while. Is it time to switch to another browser? Or does Opera still meet my needs? After some consideration, the only realistic challenger is Firefox. What follows is not meant to be an objective comparison, though I will try to maintain impartiality and some of the criteria will be more fact based than others. Still, I'm not claiming this to be a definitive guide or anything. There are many features of both browsers that appeal to me, and many that I find irrelevant. Your experience will probably be different. Anyway, to start things off, a little history:
I first became aware of Opera in the late 1990s and I tried out version 3.5 and 4, but neither really made much of an impression. Plus, at the time, Opera was trialware... there was a free trial, but after that ended you needed to purchase the software if you wanted to keep using it. Starting with version 5, Opera became free, but it was ad-supported, and there was this big, honking banner ad built into the browser. On the other hand, Opera 5 was also the first browser to implement mouse gestures, the most addicting browser feature I've encountered (more on this later). As time went on and other browsers emerged, Opera finally relented and released a completely free browser in 2005. I've used Opera as much as possible since then, though I've occasionally used other browsers for various reasons. The biggest complaint I've had about Opera is that some websites don't render or operate correctly in Opera, thus forcing me to fire up IE or FF. This complaint has lessened with each successive release though, and Opera 9.x seems to be compatible with most websites. The only time I find myself opening another browser is to watch Netflix online movies, which only work in IE (more on this later). Opera is certainly not a perfect browser, but each release seems to contain new and innovative features, and it has always served me well.
The only browser that has really compared with Opera is Firefox. It's based on the open source Mozilla project, which began in 1998 as a replacement for the Netscape 4.x browser (which was badly in need of an overhaul). Unfortunately, development of the open source browser was slow going, allowing Microsoft to completely dominate the market. However, version 1.0 of the Mozilla Application Suite (which included more than just a browser) was launched in 2002. It was bloated and slow, but the underlying code (particularly the rendering engine, named Gecko) was used as the base for several new projects, including Firefox. Firefox 1.0 was released in late 2004, and has been picking up steam ever since. It's the first browser to challenge IE's dominance of the market, and it's also far superior to IE. The current version of Firefox is mature and stable, and a new version (3.0) is on its way that will supposedly address many of the current complaints about FF.
Of course, these are not the only two browsers out there. Internet Explorer is notable for its widespread adoption (during Q2 of 2004, IE had an astounding 95% share of the market). IE isn't very good compared to the competition, but its one virtue is that most websites will load and render properly in IE (and some websites will only work in IE). As a web developer, I have an intense dislike for IE, as it has poor standards support and is generally a pain to work with (especially IE6). IE7, while an improvement in many ways, also features some bizarre interface changes that make the browser less usable.
Also of note is Safari, Apple's default browser in OS X. Based on the open source KHTML engine (which runs KDE's Konqueror, the primary open source competitor to Mozilla/Firefox), it implements many of the same features as Opera and FF, but in a simple, lightweight way. I've never been much of a fan of Safari, though it should be noted as a valid competitor. It's a solid browser, fast and clean, but ultimately nothing really special (perhaps with more use, I would be won over). Finally, there are a number of other smaller scale or specialized browsers like Flock (which has many features tailored around integrating with social networking sites), but nothing there really fits me.
So the most realistic options for me are Opera and Firefox. Both have new browsers in Beta (or higher), but I'll be primarily using the current releases (Opera 9.27 and Firefox 2.0.0.x). I've played around with Opera 9.5 and Firefox 3 RC1 and will keep them in mind. For reference, I'm running a PC with Intel Core 2 Duo (2.4 GHz), 2 GB RAM, and Windows XP SP2.
So what does the future hold? If Opera continues to lose market share and doesn't find a way to account for the extensions of Firefox, it's going to be in real trouble (they seem to think their Widgets system will do this, but it really won't). Honestly, if FF 3 really does solve their memory problems, I might even be switching over that soon.
Posted by Mark on May 18, 2008 at 08:22 PM .: link :.
Sunday, April 27, 2008
The recent bout with my TV on DVD addiction necessitated an increase in Netflix usage, which made me curious. How well have I really taken advantage of the Netflix service, and is it worth the monthly expense?
If I were to rent a movie at a local video store like Blockbuster, each rental would cost somewhere around $4 (this is an extremely charitable estimate, as I'm sure it's probably closer to $5 at this point), plus the expense in time and effort (I mean, come on, I'd have to drive about a mile out of my way to go to one of these places!) Netflix costs me $15.99 a month for the 3-disc-at-a-time plan (this plan was $17.99 when I signed up, but the price decreased twice during my roughly two years of membership), so it takes about 4-5 Netflix rentals a month to recoup my costs and bring the price of an average rental down below $4. I've been a member for one year and ten months... how did I do (click for a larger version)?
A few notes on the data:
This has been an interesting exercise, because I feel like I'm a little more consistent than the data actually shows. I'm really surprised that there are several months where my rentals went down to 6... I could have sworn I watched at least 2-3 discs a week, with the occasional exception. Still, an average of 9 movies a month is nothing to sneeze at, I guess. I've heard horror stories of how Netflix will start throttling you and take longer to deliver discs if you go above a certain number of rentals per month (at a certain point, the cost of processing your rentals becomes more than you're paying, which I guess is what prompts Netflix to start throttling you), but I haven't had a problem yet. If I keep up my recent viewing habits though, this could change...
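The break-even arithmetic above is easy to check. The fee and the $4 store-rental estimate are from the post; the per-rental figure at my actual 9-per-month average follows directly.

```python
# Netflix break-even math: at $15.99/month against a ~$4 store rental,
# how many rentals per month make the per-disc cost competitive?

import math

MONTHLY_FEE = 15.99
STORE_RENTAL = 4.00

def per_rental_cost(rentals_per_month):
    """Effective cost per disc at a given monthly rental rate."""
    return MONTHLY_FEE / rentals_per_month

# Smallest whole number of rentals that brings the per-disc cost
# down to the store price or below.
break_even = math.ceil(MONTHLY_FEE / STORE_RENTAL)

print(break_even)                      # 4 rentals dips just under $4 each
print(round(per_rental_cost(9), 2))    # ~$1.78 at 9 rentals/month
```

So at the 9-per-month average, each disc costs well under half of a store rental, before even counting the gas and hassle.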
Posted by Mark on April 27, 2008 at 11:09 PM .: link :.
Sunday, November 25, 2007
Requiem for a Meme
In July of this year, I attempted to start a Movie Screenshot Meme. The idea was simple and (I thought) neat. I would post a screenshot, and visitors would guess what movie it was from. The person who guessed correctly would continue the game by either posting the next round on their blog, or if they didn't have a blog, they could send me a screenshot or just ask me to post another round. Things went reasonably well at first, and the game experienced some modest success. However, the game eventually morphed into the Mark, Alex, and Roy show, as the rounds kept cycling through each of our blogs. The last round was posted in September and despite a winning entry, the game has not continued.
The challenge of starting this meme was apparent from the start, but there were some other things that hindered the game a bit. Here are some assorted thoughts about the game, what held it back, and what could be done to improve the chances of adoption.
(click image for a larger version) I'd say this is difficult except that it's blatantly obvious who that is in the screenshot. It shouldn't be that hard to pick out the movie even if you haven't seen it. What the heck, the winner of this round can pick 5 blogs they'd like to see post a screenshot and post a screenshot on their blog if they desire. As I mentioned above, I'm hesitant to annoy people with this sort of thing, but hey, why not? Let's give this meme some legs.
Posted by Mark on November 25, 2007 at 03:04 PM .: link :.
Sunday, November 18, 2007
The Paradise of Choice?
A while ago, I wrote a post about the Paradox of Choice based on a talk by Barry Schwartz, the author of a book by the same name. The basic argument Schwartz makes is that choice is a double-edged sword. Choice is a good thing, but too much choice can have negative consequences, usually in the form of some kind of paralysis (where there are so many choices that you simply avoid the decision) and consumer remorse (elevated expectations, anticipated regret, etc...). The observations made by Schwartz struck me as being quite astute, and I've been keenly aware of situations where I find myself confronted with a paradox of choice ever since. Indeed, just knowing and recognizing these situations seems to help deal with the negative aspects of having too many choices available.
This past summer, I read Chris Anderson's book, The Long Tail, and I was pleasantly surprised to see a chapter in his book titled "The Paradise of Choice." In that chapter, Anderson explicitly addresses Schwartz's book. However, while I liked Anderson's book and generally agreed with his basic points, I think his dismissal of the Paradox of Choice is off target. Part of the problem, I think, is that Anderson is much more concerned with the choices themselves rather than the consequences of those choices (which is what Schwartz focuses on). It's a little difficult to tell though, as Anderson only dedicates 7 pages or so to the topic. As such, his arguments don't really eviscerate Schwartz's work. There are some good points though, so let's take a closer look.
Anderson starts with a summary of Schwartz's main concepts, and points to some of Schwartz's conclusions (from page 171 in my edition):
As the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.
Now, the way Anderson presents this is a bit out of context, but we'll get to that in a moment. Anderson continues and then responds to some of these points (again, page 171):
As an antidote to this poison of our modern age, Schwartz recommends that consumers "satisfice," in the jargon of social science, not "maximize". In other words, they'd be happier if they just settled for what was in front of them rather than obsessing over whether something else might be even better. ...
Anderson has completely missed the point here. Later in the chapter, he spends a lot of time establishing that people do, in fact, like choice. And he's right. My problem is twofold: First, Schwartz never denies that choice is a good thing, and second, he never advocates removing choice in the first place. Yes, people love choice, the more the better. However, Schwartz found that even though people preferred more options, they weren't necessarily happier because of it. That's why it's called the paradox of choice - people obviously prefer something that ends up having negative consequences. Schwartz's book isn't some sort of crusade against choice. Indeed, it's more of a guide for how to cope with being given too many choices. Take "satisficing." As Tom Slee notes in a critique of this chapter, Anderson misstates Schwartz's definition of the term. He makes it seem like satisficing is settling for something you might not want, but Schwartz's definition is much different:
To satisfice is to settle for something that is good enough and not worry about the possibility that there might be something better. A satisficer has criteria and standards. She searches until she finds an item that meets those standards, and at that point, she stops.Settling for something that is good enough to meet your needs is quite different from just settling for what's in front of you. Again, I'm not sure Anderson is really arguing against Schwartz. Indeed, Anderson even acknowledges part of the problem, though he again misstates Schwartz's arguments:
Vast choice is not always an unalloyed good, of course. It too often forces us to ask, "Well, what do I want?" and introspection doesn't come naturally to all. But the solution is not to limit choice, but to order it so it isn't oppressive.Personally, I don't think the problem is that introspection doesn't come naturally to some people (though that could be part of it), it's more that some people just don't give a crap about certain things and don't want to spend time figuring it out. In Schwartz's talk, he gave an example about going to the Gap to buy a pair of jeans. Of course, the Gap offers a wide variety of jeans (as of right now: Standard Fit, Loose Fit, Boot Fit, Easy Fit, Morrison Slim Fit, Low Rise Fit, Toland Fit, Hayes Fit, Relaxed Fit, Baggy Fit, Carpenter Fit). The clerk asked him what he wanted, and he said "I just want a pair of jeans!"
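Schwartz's two strategies map neatly onto two kinds of search: satisficing is an early-exit search that stops at the first acceptable option, while maximizing exhaustively examines everything. A minimal sketch (the jean styles come from the Gap example above, but the scores and threshold are invented purely for illustration):

```python
# Satisficing vs. maximizing as search strategies over the same options.
# The fit scores and threshold are hypothetical, just to illustrate the idea.

def satisfice(options, score, threshold):
    """Take the first option that meets the standard, then stop looking."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing met the standard

def maximize(options, score):
    """Examine every option and insist on the single best one."""
    return max(options, key=score)

jeans = ["Standard Fit", "Loose Fit", "Boot Fit", "Easy Fit", "Relaxed Fit"]
fit_score = {"Standard Fit": 7, "Loose Fit": 5, "Boot Fit": 6,
             "Easy Fit": 8, "Relaxed Fit": 9}.get

print(satisfice(jeans, fit_score, threshold=7))  # Standard Fit -- good enough, done
print(maximize(jeans, fit_score))                # Relaxed Fit -- but only after scanning all 5
```

The satisficer walks out of the store after the first acceptable pair; the maximizer's extra scanning (and the nagging doubt that comes with it) is exactly the cost Schwartz is describing.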
The second part of Anderson's statement is interesting though. Aside from again misstating Schwartz's argument (he does not advocate limiting choice!), the observation that the way a choice is presented is important is interesting. Yes, the Gap has a wide variety of jean styles, but look at their website again. At the top of the page is a little guide to what each of the styles means. For the most part, it's helpful, and I think that's what Anderson is getting at. Too much choice can be oppressive, but if you have the right guide, you can get the best of both worlds. The only problem is that finding the right guide is not as easy as it sounds. The jean style guide at Gap is neat and helpful, but you do have to click through a bunch of stuff and read it. This is easier than going to a store and trying all the varieties on, but it's still a pain for someone who just wants a pair of jeans dammit.
Anderson spends some time fleshing out these guides to making choices, noting the differences between offline and online retailers:
In a bricks-and-mortar store, products sit on the shelf where they have been placed. If a consumer doesn't know what he or she wants, the only guide is whatever marketing material may be printed on the package, and the rough assumption that the product offered in the greatest volume is probably the most popular.I think it's a very good point he's making, though I think he's a bit too optimistic about how effective these guides to buying really are. For one thing, there are times when a choice isn't clear, even if you do have a guide. Also, while I think recommendations based on other customers' purchases are important and helpful, who among us hasn't seen absurd recommendations? From my personal experience, a lot of people don't like the connotations of recommendations either (how do they know so much about me? etc...). Personally, I really like recommendations, but I'm a geek and I like to figure out why they're offering me what they are (Amazon actually tells you why something is recommended, which is really neat). In any case, from my own personal anecdotal observations, no one puts much faith in probabilistic systems like recommendations or ratings (for a number of reasons, such as cheating or distrust). There's nothing wrong with that, and that's part of why such systems are effective. Ironically, acknowledging their imperfections allows users to better utilize the systems. Anderson knows this, but I think he's still a bit too optimistic about our tools for traversing the long tail. Personally, I think they need a lot of work.
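The "customers who bought X also bought Y" style of recommendation is, at its simplest, just co-purchase counting. A toy sketch of the idea (the purchase histories are made up, and real systems, Amazon's included, are far more elaborate), which also shows why such a system can explain its own suggestions:

```python
# A toy co-purchase recommender. The baskets are invented for illustration;
# this is the general technique, not any particular retailer's algorithm.
from collections import Counter
from itertools import combinations

baskets = [
    {"The Long Tail", "Paradox of Choice", "Blink"},
    {"The Long Tail", "Paradox of Choice"},
    {"The Long Tail", "Freakonomics"},
    {"Paradox of Choice", "Blink"},
]

# Count how often each pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def recommend(item, n=2):
    """Items most often co-purchased with `item` -- and the counts say why."""
    scores = Counter()
    for (a, b), count in co_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [title for title, _ in scores.most_common(n)]

print(recommend("The Long Tail"))  # 'Paradox of Choice' tops the list (2 co-purchases)
```

Because the score is just a count of shared baskets, the system can always answer "how do they know so much about me?": it recommended Paradox of Choice because two other buyers of The Long Tail also bought it. The absurd recommendations come from the same place: sparse or coincidental co-purchases.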
When I was younger, one of the big problems in computing was storage. Computers are the perfect data gathering tool, but you need somewhere to store all that data. In the 1980s and early 1990s, computers and networks were significantly limited by hardware, particularly storage. By the late 1990s, Moore's law had eroded this deficiency significantly, and today, the problem of storage is largely solved. You can buy a terabyte of storage for just a couple hundred dollars. However, as I'm fond of saying, we don't so much solve problems as trade one set of problems for another. Now that we have the ability to store all this information, how do we get at it in a meaningful way? When hardware was limited, analysis was easy enough. Now, though, you have so much data available that the simple analyses of the past don't cut it anymore. We're capturing all this new information, but are we really using it to its full potential?
I recently caught up with Malcolm Gladwell's article on the Enron collapse. The really crazy thing about Enron was that they didn't really hide what they were doing. They fully acknowledged and disclosed what they were doing... there was just so much complexity to their operations that no one really recognized the issues. They were "caught" because someone had the persistence to dig through all the public documentation that Enron had provided. Gladwell goes into a lot of detail, but here are a few excerpts:
Enron's downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the nineteen-seventies. To expose the White House coverup, Bob Woodward and Carl Bernstein used a source-Deep Throat-who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flower pot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn't being followed, and meet his source in an underground parking garage at 2 A.M. ...Again, there's a lot more detail in Gladwell's article. Just how complicated was the public documentation that Enron had released? Gladwell gives some examples, including this one:
Enron's S.P.E.s were, by any measure, evidence of extraordinary recklessness and incompetence. But you can't blame Enron for covering up the existence of its side deals. It didn't; it disclosed them. The argument against the company, then, is more accurately that it didn't tell its investors enough about its S.P.E.s. But what is enough? Enron had some three thousand S.P.E.s, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty S.P.E. disclosure statements from various corporations-that is, summaries of the deals put together for interested parties-and found that on average they ran to forty single-spaced pages. So a summary of Enron's S.P.E.s would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That's what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That's what the Powers Committee put together. The committee looked only at the "substance of the most significant transactions," and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was "with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation."Again, Gladwell's article has a lot of other details and is a fascinating read. What interested me the most, though, was the problem created by so much data. That much information is useless if you can't sift through it quickly or effectively enough. Bringing this back to the paradise of choice, the current systems we have for making such decisions are better than ever, but still require a lot of improvement. 
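The scale of the problem Gladwell describes is easy to verify with a little arithmetic, using the figures from the excerpt above:

```python
# Back-of-the-envelope check of the disclosure volumes Gladwell cites.
spes = 3_000              # Enron's special-purpose entities
pages_per_spe = 1_000     # approximate paperwork per deal
pages_per_summary = 40    # Schwarcz's average disclosure-summary length

full_disclosure = spes * pages_per_spe      # all the raw paperwork
all_summaries = spes * pages_per_summary    # even just the summaries

print(f"{full_disclosure:,} pages of raw paperwork")   # 3,000,000
print(f"{all_summaries:,} pages in summary form")      # 120,000
```

Each successive level of summarization shrinks the pile by an order of magnitude or more, and yet the summary-of-summaries still ran to a thousand pages, and the summary of that to two hundred. No investor could realistically sift any layer of it.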
Anderson is mostly talking about simple consumer products, so none are really as complicated as the Enron case, but even then, there are still a lot of problems. If we're really going to overcome the paradox of choice, we need better information analysis tools to help guide us. That said, Anderson's general point still holds:
More choice really is better. But now we know that variety alone is not enough; we also need information about that variety and what other consumers before us have done with the same choices. ... The paradox of choice turned out to be more about the poverty of help in making that choice than a rejection of plenty. Order it wrong and choice is oppressive; order it right and it's liberating.Personally, while the help in making choices has improved, there's still a long way to go before we can really tackle the paradox of choice (though, again, just knowing about the paradox of choice seems to do wonders in coping with it).
As a side note, I wonder if the video game playing generations are better at dealing with too much choice - video games are all about decisions, so I wonder if folks who grew up working on their decision making apparatus are more comfortable with being deluged by choice.
Posted by Mark on November 18, 2007 at 09:47 PM .: link :.
Wednesday, October 17, 2007
The Spinning Silhouette
This Spinning Silhouette optical illusion is making the rounds on the internet this week, and it's being touted as a "right brain vs left brain test." The theory goes that if you see the silhouette spinning clockwise, you're right brained, and you're left brained if you see it spinning counterclockwise.
Every time I looked at the damn thing, it was spinning in a different direction. I closed my eyes and opened them again, and it spun a different direction. Every now and again, it would stay the same direction twice in a row, but if I looked away and looked back, it changed direction. Now, if I focus my eyes on a point below the illusion, it doesn't seem to rotate all the way around at all; instead it seems like she's moving from one side to the other, then back (i.e. changing directions every time the one leg reaches the side of the screen - and the leg always seems to be in front of the silhouette).
Of course, this is the essence of the illusion. The silhouette isn't actually spinning at all, because it's two dimensional. However, since my brain is used to living in a three dimensional world (and thus parsing three dimensional images), it's assuming that the image is also three dimensional. We're actually making lots of assumptions about the image, and that's why we can see it going one way or the other.
Eventually, after looking at the image for a while and pondering the issues, I got curious. I downloaded the animated gif and opened it up in the GIMP to see how the frames are built. I could be wrong, but I'm pretty sure this thing is either broken or it's cheating. Well, I shouldn't say that. I noticed something off on one of the frames, and I'd be real curious to know how that affects people's perception of the illusion (to me, it means the image is definitely moving counterclockwise). I'm almost positive that it's too subtle to really affect anything, but I did find it interesting. More on this, including images and commentary, below the fold. First things first, here's the actual spinning silhouette.
Again, some of you will see it spinning in one direction, some in the other direction. Everyone seems to have a different trick for getting it to switch direction. Some say to focus on the shadow, some say to look at the ankles. Closing my eyes and reopening seems to do the trick for me. Now let's take a closer look at one of the frames. Here's frame 12:
Looking at this frame, you should be able to switch back and forth, seeing the leg behind the person or in front of the person. Again, because it's a silhouette and a two dimensional image, our brain usually makes an assumption of depth, putting the leg in front or behind the body. Switching back and forth on this static image was actually a lot easier for me. Now the tricky part comes in the next frame, number 13 (obviously, the arrow was added by me):
Now, if you look closely at the leg, you'll see a little imperfection in the silhouette. Maybe I'm wrong, but that little gash in the leg seems to imply that the leg is behind the body. If you try, you can still get yourself to see the image as having the leg in front, but then you've got this gash in the leg that just seems very out of place.
So what to make of this? First, the imperfection is subtle enough (it's on 1 frame out of 34) that everyone still seems to be able to see it rotate in both directions. Second, maybe I'm crazy, and the little gash doesn't imply what I think. Anyone have alternative explanations? Third, is that imperfection intentional? If so, why? It does not seem necessary, so I'd be curious to know if the creators knew about it, and what their intention was regarding it.
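If you want to examine the frames yourself, you don't even need the GIMP. A quick sketch using the Pillow imaging library will split any animated GIF into individual images (the filename and helper here are hypothetical; use whatever you saved the illusion as):

```python
# Split an animated GIF into individual PNG frames for inspection.
# The helper and filename are illustrative, not from any particular tool.
from PIL import Image, ImageSequence

def extract_frames(path, prefix="frame"):
    """Save each frame of an animated GIF as a separate PNG; return the names."""
    names = []
    with Image.open(path) as gif:
        for i, frame in enumerate(ImageSequence.Iterator(gif)):
            name = f"{prefix}_{i:02d}.png"
            frame.convert("RGB").save(name)
            names.append(name)
    return names

# Usage, assuming you saved the illusion as silhouette.gif:
# extract_frames("silhouette.gif")  # then examine frame_12.png, frame_13.png, etc.
```

Stepping through the extracted frames side by side makes imperfections like the gash on frame 13 much easier to spot than watching the animation loop.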
Finally, as far as the left brain versus right brain portion, I find that I don't really care, but I am interested in how the imperfection would affect this "test." This neuroscientist seems to be pretty adamant about the whole left/right thing being hogwash though:
...the notion that someone is "left-brained" or "right-brained" is absolute nonsense. All complex behaviours and cognitive functions require the integrated actions of multiple brain regions in both hemispheres of the brain. All types of information are probably processed in both the left and right hemispheres (perhaps in different ways, so that the processing carried out on one side of the brain complements, rather than substitutes, that being carried out on the other).At the very least, the traditional left/right brain theory is a wildly oversimplified version of what's really happening. The post also goes into the way the brain "fills in the gaps" for confusing visual information, thus allowing the illusion.
Update: Strange - the image appears to be rotating MUCH faster in Firefox than in Opera or IE. I wonder how that affects perception.
Posted by Mark on October 17, 2007 at 10:42 PM .: link :.
Sunday, August 05, 2007
Manuals, or the lack thereof...
When I first started playing video games and using computer applications, I remember having to read the instruction manuals to figure out what was happening on screen. I don't know if this was because I was young and couldn't figure this stuff out, or because some of the controls were obtuse and difficult. It was perhaps a combination of both, but I think the latter was more prevalent, especially when applications and games became more complex and powerful. I remember sitting down at a computer running DOS and loading up WordPerfect. The interface that appeared was rather minimal; the developers apparently wanted to avoid the "clutter" of on-screen menus, so they used keyboard combinations. According to Wikipedia, WordPerfect used "almost every possible combination of function keys with Ctrl, Alt, and Shift modifiers." I vaguely remember needing to use those stupid keyboard templates (little pieces of laminated paper that fit snugly around the keyboard keys, helping you remember what key or combo does what).
Video Games used to have great manuals too. I distinctly remember several great manuals from the Atari 2600 era. For example, the manual for Pitfall II was a wonderful document done in the style of Pitfall Harry's diary. The game itself had little in the way of exposition, so you had to read the manual to figure out that you were trying to rescue your niece Rhonda and her cat, Quickclaw, who became trapped in a catacomb while searching for the fabled Raj diamond. Another example for the Commodore 64 was Temple of Apshai. The game had awful graphics, but each room you entered had a number, and you had to consult your manual to get a description of the room.
By the time of the NES, the importance of manuals had waned from Apshai levels, but they were still somewhat necessary at times, and gaming companies still went to a lot of trouble to produce helpful documents. The one that stands out in my mind was the manual for Dragon Warrior III, which was huge (at least 50 pages) and also contained a nice fold-out chart of most of the monsters and weapons in the game (with really great artwork). PC games were also getting more complex, and as Roy noted recently, companies like Sierra put together really nice instruction manuals for complex games like the King's Quest series.
In the early 1990s, my family got its first Windows PC, and several things changed. With the Word for Windows software, you didn't need any of those silly keyboard templates. Everything you needed to do was in a menu somewhere, and you could just point and click instead of having to memorize strange keyboard combos. Naturally, computer purists love the keyboard, and with good reason. If you really want to be efficient, the keyboard is the way to go, which is why Linux users are so fond of the command line and simple looking but powerful applications like Emacs. But for your average user, the GUI was very important, and made things a lot easier to figure out. Word had a user manual, and it was several hundred pages long, but I don't think I ever cracked it open, except maybe in curiosity (not because I needed to).
The trends of improving interfaces and less useful manuals proceeded throughout the next decade, and today, well, I can't think of the last time I had to consult a physical manual for anything. Steven Den Beste has been playing around with Flash for a while, but he says he never looks at the manual. "Manuals are for wimps." In his post, Roy wonders where all the manuals have gone. He speculates that manufacturing costs are a primary culprit, and I have no doubt that they are, but there are probably a couple of other reasons as well. For one, interfaces have become much more intuitive and easy to use. This is in part due to familiarity with computers and the emergence of consistent standards for things like dialog boxes (of course, when you eschew those standards, you get what Jakob Nielsen describes as a catastrophic failure). If you can easily figure it out through the interface, what use are the manuals? With respect to gaming, in-game tutorials have largely taken the place of instruction manuals. Another thing that has perhaps affected official instruction manuals is the rise of unofficial walkthroughs and game guides. Visit a local bookstore and you'll find entire bookcases devoted to video game guides and walkthroughs. As nice as the manual for Pitfall II was, you really didn't need much more than 10 pages to explain how to play that game, but several hundred pages barely does justice to some of the more complex video games in today's market. Perhaps the reason gaming companies don't give you instruction manuals with the game is not just that printing the manual is costly, but that they can sell you a more detailed and useful one.
Steven Johnson's book Everything Bad is Good for You has a chapter on Video Games that is very illuminating (in fact, the whole book is highly recommended - even if you don't totally agree with his premise, he still makes a compelling argument). He talks about the official guides and why they're so popular:
The dirty little secret of gaming is how much time you spend not having fun. You may be frustrated; you may be confused or disoriented; you may be stuck. When you put the game down and move back into the real world, you may find yourself mentally working through the problem you've been wrestling with, as though you were worrying a loose tooth. If this is mindless escapism, it's a strangely masochistic version.He gives an example of a man who spends six months working as a smith (mindless work) in Ultima Online so that he can attain a certain ability, and he also talks about how people spend tons of money on guides for getting past various roadblocks. Why would someone do this? Johnson spends a fair amount of time going into the neurological underpinnings of this, most notably what he calls the "reward circuitry of the brain." In games, rewards are everywhere. More life, more magic spells, new equipment, etc... And how do we get these rewards? Johnson thinks there are two main modes of intellectual labor that go into video gaming, and he calls them probing and telescoping.
Probing is essentially exploration of the game and its possibilities. Much of this is simply the unconscious exploration of the controls and the interface, figuring out how the game works and how you're supposed to interact with it. However, probing also takes the more conscious form of figuring out the limitations of the game. For instance, in a racing game, it's usually interesting to see if you can turn your car around backwards, pick up a lot of speed, then crash head-on into a car going the "correct" way. Or, in Rollercoaster Tycoon, you can creatively place balloon stands next to a roller coaster to see what happens (the result is hilarious). Probing the limits of game physics and finding ways to exploit them are half the fun (or challenge) of video games these days, which is perhaps another reason why manuals are becoming less frequent.
Telescoping has more to do with the game's objectives. Once you've figured out how to play the game through probing, you seek to exploit your knowledge to achieve the game's objectives, which are often nested in a hierarchical fashion. For instance, to save the princess, you must first enter the castle, but you need a key to get into the castle and the key is guarded by a dragon, etc... Indeed, the structure is sometimes even more complicated, and you essentially build this hierarchy of goals in your head as the game progresses. This is called telescoping.
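The nested structure Johnson describes is essentially a dependency tree that the player resolves from the innermost prerequisite outward. A toy sketch, using the save-the-princess example from above (the goal names and structure are just that example, not anything from Johnson's book):

```python
# A toy model of "telescoping": each goal depends on sub-goals, and the
# player must work from the deepest prerequisite back out to the top.
goals = {
    "save the princess": ["enter the castle"],
    "enter the castle":  ["get the key"],
    "get the key":       ["defeat the dragon"],
    "defeat the dragon": [],
}

def plan(goal, goals):
    """Return the order in which nested sub-goals must actually be completed."""
    steps = []
    for prerequisite in goals[goal]:
        steps.extend(plan(prerequisite, goals))
    steps.append(goal)
    return steps

print(plan("save the princess", goals))
# ['defeat the dragon', 'get the key', 'enter the castle', 'save the princess']
```

The interesting part is that the player builds and traverses this tree mentally, usually without ever writing it down; the final objective sits at the top, but play proceeds from the leaves.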
So why is this important? Johnson has the answer (page 41 in my edition):
... far more than books or movies or music, games force you to make decisions. Novels may activate our imagination, and music may conjure up powerful emotions, but games force you to decide, to choose, to prioritize. All the intellectual benefits of gaming derive from this fundamental virtue, because learning how to think is ultimately about learning to make the right decisions: weighing evidence, analyzing situations, consulting your long term goals, and then deciding. No other pop culture form directly engages the brain's decision-making apparatus in the same way. From the outside, the primary activity of a gamer looks like a fury of clicking and shooting, which is why much of the conventional wisdom about games focuses on hand-eye coordination. But if you peer inside the gamer's mind, the primary activity turns out to be another creature altogether: making decisions, some of them snap judgements, some long-term strategies.Probing and telescoping are essential to learning in any sense, and the way Johnson describes them in the book reminds me of a number of critical thinking methods. Probing, developing a hypothesis, reprobing, and then rethinking the hypothesis is essentially the same thing as the scientific method or the hermeneutic circle. As such, it should be interesting to see if video games ever really catch on as learning tools. There have been a lot of attempts at this sort of thing, but they're often stifled by the reputation of video games being a "colossal waste of time" (in recent years, the benefits of gaming are being acknowledged more and more, though not usually as dramatically as Johnson does in his book).
Another interesting use for video games might be evaluation. A while ago, Bill Simmons made an offhand reference to EA Sports' Madden games in the context of hiring football coaches (this shows up at #29 on his list):
The Maurice Carthon fiasco raises the annual question, "When teams are hiring offensive and defensive coordinators, why wouldn't they have them call plays in video games to get a feel for their play calling?" Seriously, what would be more valuable, hearing them B.S. about the philosophies for an hour, or seeing them call plays in a simulated game at the all-Madden level? Same goes for head coaches: How could you get a feel for a coach until you've played poker and blackjack with him?When I think about how such a thing would actually go down, I'm not so sure, because the football world created by Madden, as complex and comprehensive as it is, still isn't exactly the same as the real football world. However, I think the concept is still sound. Theoretically, you could see how a prospective coach would actually react to a new, and yet similar, football paradigm and how they'd find weaknesses and exploit them. The actual plays they call aren't that important; what you'd be trying to figure out is whether or not the coach was making intelligent decisions.
So where are manuals headed? I suspect that they'll become less and less prevalent as time goes on and interfaces become more and more intuitive (though there is still a long way to go before I'd say that computer interfaces are truly intuitive, I think they're much more intuitive now than they were ten years ago). We'll see more interactive demos and in-game tutorials, and perhaps even games used as teaching tools. I could probably write a whole separate post about how this applies to Linux, which actually does require you to look at manuals sometimes (though at least they have a relatively consistent way of treating manuals; even when the documentation is bad, you can usually find it). Manuals and passive teaching devices will become less important. And to be honest, I don't think we'll miss them. They're annoying.
Posted by Mark on August 05, 2007 at 10:58 AM .: link :.
Wednesday, June 27, 2007
The Dramatic Prairie Dog
I recently came across this silly video, and have since become interested in its evolution.
Posted by Mark on June 27, 2007 at 07:50 PM .: link :.
Wednesday, April 11, 2007
So this Twitter thing seems to be all the rage these days. I signed up a few days ago, just to see what all the fuss was about. It turns out to be a little nebulous and I'm not sure it's something I'd use all that much. Everyone seems to have a different definition of what Twitter is, and they all seem to work. Mine is that it's a sorta mix between a public IM system and a stripped-down blogging system. It's got some similarities with certain aspects of MySpace and Facebook, but it's much simpler and stripped-down. Here's my twitter:
There's "Friends" and "Followers" and you can update your Twitter via a number of interfaces, including IM Clients, SMS messaging, and the web interface (amongst other similar connections). You can also get updates on such devices. I don't use any of these methods with regularity, though the concept of being able to update Twitter while waiting in line or something seems like a vaguely interesting use of normally wasted time.
I guess the idea is that if you and all your friends are on Twitter, you can keep up with what everyone's doing in one quick and easy place (the default way to read Twitter is with your posts and your friends' posts mixed together on one page). My problem: I don't think any of my friends would be into this. I suppose I could mess around on Twitter and find a bunch of folks that I'd want to keep up with for some reason, but that seems... strange. Why would I want to keep tabs on some stranger?
Jason Kottke claims that this is a huge time-saver and perfect for people who are really busy:
For people with little time, Twitter functions like an extremely stripped-down version of MySpace. Instead of customized pages, animated badges, custom music, top 8 friends, and all that crap, Twitter is just-the-facts-ma'am: where are my friends and what are they up to? ... Twitter seems to work equally well for busy people and not-busy people. It allows folks with little time to keep up with what their friends are up to without having to email and IM with them all day.I suppose this would be true, though I've been busy lately and have only managed to update Twitter once or twice a day. Naturally, there are some interesting side-projects like Twittervision, which shows updates happening in real time on a map, or Twitterverse, which shows common words and users.
It's an interesting and simple concept, and it could be useful, but I'm not sure how much I'll get into it... It seems like more of a novelty at this point. Anyone else use it?
Update: Some people are using Twitter for unintended uses, and there are some great fictitious Twitterers like Darth Vader. It's interesting how quickly people start pushing the boundaries of new stuff like this and using it for things that were never intended.
Update 4.12.07: Aziz comments. He's using it to power a section of his sidebar, dedicated to songs... a pretty good idea, and using Twitter ("a device-agnostic messaging system," as he calls it) to power it is a good fit.
Oh, and it appears that my little flash badge doesn't really update (it does, but most browsers cache it and Flash won't update unless you clear your cache manually).
Wednesday, February 21, 2007
Various links for your enjoyment:
Posted by Mark on February 21, 2007 at 08:16 PM .: link :.
Wednesday, February 14, 2007
Intellectual Property, Copyright and DRM
Roy over at 79Soul has started a series of posts dealing with Intellectual Property. His first post sets the stage with an overview of the situation, and he begins to explore some of the issues, starting with the definition of theft. I'm going to cover some of the same ground in this post, and then some other things which I assume Roy will cover in his later posts.
I think most people have an intuitive understanding of what intellectual property is, but it might be useful to start with a brief definition. Perhaps a good place to start would be Article 1, Section 8 of the U.S. Constitution:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;I started with this for a number of reasons. First, because I live in the U.S. and most of what follows deals with U.S. IP law. Second, because it's actually a somewhat controversial stance. The fact that IP is only secured for "limited times" is the key. In England, for example, an author does not merely hold a copyright on their work, they have a Moral Right.
The moral right of the author is considered to be -- according to the Berne convention -- an inalienable human right. This is the same serious meaning of "inalienable" the Declaration of Independence uses: not only can't these rights be forcibly stripped from you, you can't even give them away. You can't sell yourself into slavery; and neither can you (in Britain) give the right to be called the author of your writings to someone else.The U.S. is different. It doesn't grant an inalienable moral right of ownership; instead, it allows copyright. In other words, in the U.S., such works are considered property (i.e. they can be sold, traded, bartered, or given away). This represents a fundamental distinction that needs to be made: some systems, like England's, treat authorship as an inalienable personal right, while others, like ours, grant only limited, transferable protections. When put that way, the U.S. system sounds pretty awful, except that it was designed for something different: our system was built to advance science and the "useful arts." The U.S. system still rewards creators, but only as a means to an end. Copyright is granted so that there is an incentive to create. However, such protections are only granted for "limited Times." This is because when a copyright is eternal, the system stagnates as protected parties stifle competition (this need not be malicious). Copyright is thus limited so that when a work is no longer protected, it becomes freely available for everyone to use and to build upon. This is known as the public domain.
The end goal here is the advancement of society, and both protection and expiration are necessary parts of the mix. The balance between the two is important, and as Roy notes, one of the things that appears to have upset the balance is technology. This, of course, extends as far back as the printing press, records, cassettes, VHS, and other similar technologies, but more recently, a convergence between new compression techniques and increasing bandwidth of the internet created an issue. Most new recording technologies were greeted with concern, but physical limitations and costs generally put a cap on the amount of damage that could be done. With computers and large networks like the internet, such limitations became almost negligible. Digital copies of protected works became easy to copy and distribute on a very large scale.
The first major issue came up as a result of Napster, a peer-to-peer music sharing service that essentially promoted widespread copyright infringement. Lawsuits followed, and the original Napster service was shut down, only to be replaced by numerous decentralized peer-to-peer systems and darknets. This meant that no single entity could be sued for the copyright infringement that occurred on the network, but it resulted in a number of (probably ill-advised) lawsuits against regular folks (the anonymity of internet technology and the state of recordkeeping being what they are, this sometimes leads to hilarious cases, like when the RIAA sued a 79-year-old man who doesn't even own a computer or know how to operate one).
Roy discusses the various arguments for or against this sort of file sharing, noting that the essential difference of opinion is the definition of the word "theft." For my part, I think it's pretty obvious that downloading something for free that you'd normally have to pay for is morally wrong. However, I can see some grey area. A few months ago, I pre-ordered Tool's most recent album, 10,000 Days, from Amazon. A friend who already had the album sent me a copy over the internet before I had actually received my copy of the CD. Does this count as theft? I would say no.
The concept of borrowing a book, CD, or DVD also seems pretty harmless to me, and I don't have a moral problem with borrowing an electronic copy, then deleting it afterwards (or purchasing it, if I liked it enough), though I can see how such a practice represents a bit of a slippery slope and wouldn't hold up in an honest debate (nor should it). It's too easy to abuse such an argument, or to apply it in retrospect. I suppose there are arguments to be made with respect to making distinctions between benefits and harms, but I generally find those arguments unpersuasive (though perhaps interesting to consider).
There are some other issues that need to be discussed as well. The concept of Fair Use allows limited use of copyrighted material without requiring permission from the rights holders. For example, including a screenshot of a film in a movie review. You're also allowed to parody copyrighted works, and in some instances make complete copies of a copyrighted work. There are rules pertaining to how much of the copyrighted work can be used and in what circumstances, but this is not the venue for such details. The point is that copyright is not absolute and consumers have rights as well.
Another topic that must be addressed is Digital Rights Management (DRM). This refers to a range of technologies used to combat digital copying of protected material. The goal of DRM is to use technology to automatically limit the abilities of a consumer who has purchased digital media. In some cases, this means that you won't be able to play an optical disc on a certain device, in others it means you can only use the media a certain number of times (among other restrictions).
To be blunt, DRM sucks. For the most part, it benefits no one. It's confusing, it basically amounts to treating legitimate customers like criminals while only barely (if that much) slowing down the piracy it purports to be thwarting, and it's led to numerous disasters and unintended consequences. Essential reading on this subject is this talk given to Microsoft by Cory Doctorow. It's a long but well written and straightforward read that I can't summarize briefly (please read the whole thing). Some details of his argument may be debatable, but as a whole, I find it quite compelling. Put simply, DRM doesn't work and it's bad for artists, businesses, and society as a whole.
Now, the IP industries that are pushing DRM are not that stupid. They know DRM is a fundamentally absurd proposition: the whole point of selling IP media is so that people can consume it. You can't make a system that prevents people from doing so, because the media is only worth anything if people can use it. The only way to perfectly secure a piece of digital media is to make it unusable (i.e. the only perfectly secure system is a perfectly useless one). That's why DRM systems are broken so quickly. It's not that the programmers are necessarily bad, it's that the entire concept is fundamentally flawed. Again, the IP industries know this, which is why they pushed the Digital Millennium Copyright Act (DMCA). As with most laws, the DMCA is a complex beast, but what it boils down to is that no one is allowed to circumvent measures taken to protect copyright. Thus, even though the copy protection on DVDs is obscenely easy to bypass, it is illegal to do so. In theory, this might be fine. In practice, this law has extended far beyond what I'd consider reasonable and has also been heavily abused. For instance, some software companies have attempted to use the DMCA to prevent security researchers from exposing bugs in their software. The law is sometimes used to silence critics by threatening them with a lawsuit, even though no copyright infringement was committed. The Chilling Effects project seems to be a good source for information regarding the DMCA and its various effects.
DRM combined with the DMCA can be stifling. A good example of how awful DRM is, and how the DMCA can affect the situation, is the Sony Rootkit Debacle. Boing Boing has a ridiculously comprehensive timeline of the entire fiasco. In short, Sony put DRM on certain CDs. The general idea was to prevent people from putting the CDs in their computer and ripping them to MP3s. To accomplish this, Sony surreptitiously installed software on customers' computers (without their knowledge). A security researcher happened to notice this, and in researching the matter found that the Sony DRM had installed a rootkit that made the computer vulnerable to various attacks. Rootkits are black-hat cracker tools used to disguise the workings of malicious software. Attempting to remove the rootkit broke the Windows installation. Sony reacted slowly and poorly, releasing a service pack that supposedly removed the rootkit, but which actually opened up new security vulnerabilities. And it didn't end there. Reading through the timeline is astounding (as a result, I tend to shy away from Sony these days). Though I don't believe he was called on it, the security researcher who discovered these vulnerabilities was technically breaking the law, because the rootkit was intended to protect copyright.
A few months ago, my Windows computer died and I decided to give linux a try. I wanted to see if I could get linux to do everything I needed it to do. As it turns out, I could, but not legally. Watching DVDs on linux is technically illegal, because I'm circumventing the copy protection on DVDs. Similar issues exist for other media formats. The details are complex, but in the end, it turns out that I'm not legally able to watch my legitimately purchased DVDs on my computer (I have since purchased a new computer that has an approved player installed). Similarly, if I were to purchase a song from the iTunes Music Store, it comes in a DRMed format. If I want to use that format on a portable device (let's say my phone, which doesn't support Apple's DRM format), I'd have to convert it to a format that my portable device could understand, which would be illegal.
Which brings me to my next point, which is that DRM isn't really about protecting copyright. I've already established that it doesn't really accomplish that goal (and indeed, even works against many of the reasons copyright was put into place), so why is it still being pushed? One can only really speculate, but I'll bet that part of the issue has to do with IP owners wanting to "undercut fair use and then create new revenue streams where there were previously none." To continue an earlier example, if I buy a song from the iTunes Music Store and I want to put it on my non-Apple phone (not that I don't want one of those), the music industry would just love it if I were forced to buy the song again, in a format that is readable by my phone. Of course, that format would be incompatible with other devices, so I'd have to purchase the song again if I wanted to listen to it on those devices. When put in those terms, it's pretty easy to see why IP owners like DRM, and given the average person's reaction to such a scheme, it's also easy to see why IP owners are always careful to couch the debate in terms of piracy. This won't last forever, but it could be a bumpy ride.
Interestingly enough, distributors of digital media like Apple and Yahoo have recently come out against DRM. For the most part, these are just symbolic gestures. Cynics will look at Steve Jobs' Thoughts on Music and say that he's just passing the buck. He knows customers don't like or understand DRM, so he's just making a calculated PR move by blaming it on the music industry. Personally, I can see that, but I also think it's a very good thing. I find it encouraging that other distributors are following suit, and I also hope and believe this will lead to better things. Apple has proven that there is a large market for legally purchased music files on the internet, and other companies have even shown that selling DRM-free files yields higher sales. Indeed, the eMusic service sells high quality, variable bit rate MP3 files without DRM, and it has established eMusic as the #2 retailer of downloadable music behind the iTunes Music Store. Incidentally, this was not done for pure ideological reasons - it just made business sense. As yet, these pronouncements are only symbolic, but now that online media distributors have established themselves as legitimate businesses, they have ammunition with which to challenge the IP holders. This won't happen overnight, but I think the process has begun.
Last year, I purchased a computer game called Galactic Civilizations II (and posted about it several times). This game was notable to me (in addition to the fact that it's a great game) in that it was the only game I'd purchased in years that featured no CD copy protection (i.e. DRM). As a result, when I bought a new computer, I experienced none of the usual fumbling for 16 digit CD Keys that I normally experience when trying to reinstall a game. Brad Wardell, the owner of the company that made the game, explained his thoughts on copy protection on his blog a while back:
I don't want to make it out that I'm some sort of kumbaya guy. Piracy is a problem and it does cost sales. I just don't think it's as big of a problem as the game industry thinks it is. I also don't think inconveniencing customers is the solution.

For him, it's not that piracy isn't an issue, it's that it's not worth imposing draconian copy protection measures that infuriate customers. The game sold much better than expected. I doubt this was because they didn't use DRM, but I can guarantee one thing: People don't buy games because they want DRM. However, this shows that you don't need DRM to make a successful game.
The future isn't all bright, though. Peter Gutmann's excellent Cost Analysis of Windows Vista Content Protection provides a good example of how things could get considerably worse:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called "premium content", typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it's not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server).

This is infuriating. In case you can't tell, I've never liked DRM, but at least it could be avoided. I generally take articles like the one I'm referencing with a grain of salt, but if true, it means that the DRM in Vista is so oppressive that it will raise the price of hardware. And since Microsoft commands such a huge share of the market, hardware manufacturers have to comply, even though some people (linux users, Mac users) don't need the draconian hardware requirements. This is absurd. Microsoft should have enough clout to stand up to the media giants; there's no reason the DRM in Vista has to be so invasive (or even exist at all). As Gutmann speculates in his cost analysis, some of the potential effects of this are particularly egregious, to the point where I can't see consumers standing for it.
My previous post dealt with Web 2.0, and I posted a YouTube video that summarized how changing technology is going to force us to rethink a few things: copyright, authorship, identity, ethics, aesthetics, rhetorics, governance, privacy, commerce, love, family, ourselves. All of these are true. Earlier, I wrote that the purpose of copyright was to benefit society, and that protection and expiration were both essential. The balance between protection and expiration has been upset by technology. We need to rethink that balance. Indeed, many people smarter than I already have. The internet is replete with examples of people who have profited off of giving things away for free. Creative Commons allows you to share your content so that others can reuse and remix your content, but I don't think it has been adopted to the extent that it should be.
To some people, reusing or remixing music, for example, is not a good thing. This is certainly worthy of a debate, and it is a discussion that needs to happen. Personally, I don't mind it. For an example of why, watch this video detailing the history of the Amen Break. There are amazing things that can happen as a result of sharing, reusing and remixing, and that's only a single example. The current copyright environment seems to stifle such creativity, not least because copyright lasts so long (currently the life of the author plus 70 years). In a world where technology has enabled an entire generation to accelerate the creation and consumption of media, it seems foolish to lock up so much material for what could easily be over a century. Despite all that I've written, I have to admit that I don't have a definitive answer. I'm sure I can come up with something that would work for me, but this is larger than me. We all need to rethink this, and many other things. Maybe that Web 2.0 thing can help.
Update: This post has mutated into a monster. Not only is it extremely long, but I reference several other long, detailed documents and even somewhere around 20-25 minutes of video. It's a large subject, and I'm certainly no expert. Also, I generally like to take a little more time when posting something this large, but I figured getting a draft out there would be better than nothing. Updates may be made...
Update 2.15.07: Made some minor copy edits, and added a link to an Ars Technica article that I forgot to add yesterday.
Posted by Mark on February 14, 2007 at 11:44 PM .: link :.
Sunday, February 11, 2007
Web 2.0 ... The Machine is Us/ing Us
Via The Rodent's Burrow, I come across this YouTube video on Web 2.0:
It's an interesting video, but I have to admit that the term Web 2.0 always bothered me. This is odd, because obsessing over terminology is also annoying. As you can see, I'm in a bit of a bind here. Web 2.0 has become a shorthand for the current renaissance in web development, which is focused on new web services and applications that emphasize social collaboration and openness. That, of course, is a lame definition. Most definitions of Web 2.0 are. However, I think Paul Graham hits the nail on the head in his essay on the subject:
Web 2.0 means using the web the way it's meant to be used. The "trends" we're seeing now are simply the inherent nature of the web emerging from under the broken models that got imposed on it during the Bubble.

Right on. Key to understanding "Web 2.0" is the concept of the internet itself. I should also note that the web and the internet are not the same thing. The internet is a collection of interconnected computer networks (i.e. the physical hardware); the web is a collection of interconnected documents and data that lives on the internet. If you don't understand the historical forces that led to the topology of the internet, "Web 2.0" won't make much sense. The internet is made by human beings, and its history extends back decades (well, the branch of mathematics that represents our thinking about networks is called graph theory, which finds its roots in the eighteenth century, but the physical internet has its roots in ARPANET, the government-funded precursor to the internet that went online in 1969), but it was not a centrally designed system.
The web isn't all that different, but we are, and we're taking advantage of it.
Update 2.14.07: It seems that this post has kicked off a little discussion of Intellectual Property, starting over at 79Soul with a response by me here.
Wednesday, January 10, 2007
A couple of years ago, I was in the market for a new phone. After looking around at all the options and features, I ended up settling on a relatively "low-end" phone that was good for calls and SMS and that's about it. It was small, simple, and to the point, and while it has served me well, I have kinda regretted not getting a camera in the phone (this is the paradox of choice in action). I considered the camera phone, as well as phones that played music (three birds with one stone!), but it struck me that feature packed devices like that simply weren't ready yet. They were expensive, clunky, and the interface looked awful.
Enter Apple's new iPhone. Put simply, they've done a phenomenal job with this phone. I'm impressed. Watch the keynote presentation here. Some highlights that I found interesting:
Updates: Brian Tiemann has further thoughts. Kevin Murphy has some thoughts as well. Ars Technica also notes some issues with the iPhone, and has some other good commentary (actually, just read their Infinite Loop journal). I think the biggest issue I forgot to mention is that the iPhone is exclusive to Cingular (and you have to get a 2 year plan at that).
Wednesday, December 27, 2006
Again New Computer
A few weeks ago, I wrote about what I was looking for in a new computer, and various buying options. I had it narrowed down to a few options, but being cognizant of the paradox of choice, I decided on ordering a Prelude system from Maingear, a small custom computer shop that actually had reasonable prices (I got the system I was looking for: Intel Core 2 Duo E6600, 2 GB RAM, 320 GB Hard Drive, etc...). I probably paid a little more than I would have if I just bought all the components and then put it together myself, but I was willing to pay for the convenience of a pre-configured system. Also, unlike other cheap custom PC shops like CyberPowerPC, Maingear has a fantastic reputation for building quality systems and providing excellent support. I'm pleased to report that Maingear lives up to its reputation. Shortly after ordering my PC, they contacted me to confirm a few things and ask if I had any questions or special requests (I understand they'll preinstall various games for you if you want, provided you have the CD Key. Alas, I have no such games, so I didn't get to request this, but that's a neat service.)
They also informed me that they (like every other retailer) were quite busy at this time of the year, but that they would try to get me the PC before Christmas. And it arrived just in the nick of time, on Saturday, December 23 (another Festivus miracle!). It was well packaged, and appeared to be in working order (as compared to a friend's experience with CyberPowerPC where his DVD drive was mounted incorrectly amongst a bunch of other strange problems). The case looks great (I don't know why, but most custom PC cases are very crappy looking or obscenely gaudy):
The insides are arranged about as neat as could be expected, with all the various wires and connectors hidden or tied tightly together. This is nothing short of amazing when compared to my previous computer.
And it came with a nice personalized binder that had all of the installation CDs, backup CDs, and documentation for the computer.
When I fired up the computer, I was pleased to find that no Windows configuration was really necessary. The desktop was relatively clean (no annoying special offers from AOL, etc...), all the latest patches and updated drivers had been installed, and everything was ready for me to install my favorite apps. As far as performance goes, it appears to be a champ (according to a screenshot they included, it scores a 5453 in 3DMark06 - but I have no frame of reference for telling just how good that is). They also included a copy of Hitman: Blood Money (an unexpected and pleasant bonus), which I've been working my way through (it's one of those annoying DIAS type of games, but hey, I'm not complaining).
All in all, I couldn't be happier with my new computer. For something I use as often as I use my computer, I think it was worth every penny.
Monday, December 04, 2006
As I've recently mentioned, my old computer isn't doing so well. Built with turn-of-the-century hardware, she's lasted a long time, more than I could really expect. So it's time to get a new computer. As I've also mentioned recently, the number of options for building a new computer is staggering (and the number of choices can lead to problems). However, with the help of the newly released Ars Technica System Guides (specifically the Hot Rod) and some general research, I should be able to slap something together in relatively short order. After some initial poking around, here's what I'm looking for:
Update: After some fiddling, I got the Maingear PC down to around $1800 without a monitor. I'm also getting a lightscribe DVD burner, which is a totally frivolous expense (extra $70), but pretty neat too.
Sunday, November 19, 2006
Time is short this week, so a few quick links:
Update: This Lists of Bests website is neat. It remembers what movies you've seen, and applies them to other lists. For example, without even going through the AFI top 100, I know that I've seen at least 41% of the list (because of all the stuff I noted when going through the top 1000). You can also compare yourself with other people on the site, and invite others to do so as well. Cool stuff.
Friday, November 17, 2006
Bag O' Crap: Close, but no cigar
The term "woot" (or more accurately, "w00t") is slang for expressing excitement, usually on the internet (especially popular in chat and video games). The etymology is a little unclear (many speculated origins), but the word itself just sounds celebratory. In any case, there is an online store that has appropriated the term and "focuses on selling cool stuff cheap." They basically sell one item a day, and that's it. Talk about your simple concepts. I should also mention that their product descriptions are awesome - they have a lot of fun with it, so that even though I don't think I've ever bought a Woot, I still stop by frequently. For instance, a while ago, their description for a JVC Camcorder was written as a letter from Osama Bin Laden to his subordinates:
To: Media Relations Division

Heh. Anyway, when that item sells out, the site starts selling alternate items in what is called a "Woot-Off." These alternate items are typically in shorter stock than the original Woot, so they don't usually last long, and you see a lot of items during the rest of the day (as each Woot-Off item sells out, it is replaced by the next item, and so on).
Now, the holy grail of Woot is this thing called the Bag O' Crap. Basically, instead of selling an item, they offer a grab bag that is typically filled with dollar store junk, but which sometimes contains things of significant value (I heard of someone getting a decent quality graphics card in a BOC). Naturally, this is a popular item, and it usually sells out within minutes. I have never even seen one, though I always know when I've missed it. Quite frustrating, but today was different. I go to Woot this afternoon, and I get a "Server Too Busy" error message. This essentially means that they're selling a BOC, and everyone is going to the site in a furious attempt to purchase one (well, typically you purchase 3 at a time), clogging up their servers. A few reloads later, and I see it (click for larger image):
Overjoyed, I attempted to get one. After several minutes of tense refreshing to get past server errors, I finally get to the page where you confirm your order, I click, and I get the message:
Sorry, we're now sold out of this item or we don't have enough left to complete your order.

Khaaan! You win this round, Woot. But I'll be back. I'll get that Bag O' Crap someday.
Sunday, November 12, 2006
How awesome is the internet? A little while ago, I was watching David Fincher's far-fetched but entertaining thriller, The Game. If you haven't seen the film, there are spoilers ahead.
At the end of the movie, some pretty unlikely things happen, but it's a lot of fun, and I think most audiences let it slide. One of the funny moments at the end is when a character gives Michael Douglas' character a t-shirt which describes his experiences. After watching the movie, I thought it would make a pretty funny t-shirt... but I couldn't remember exactly what the shirt said. Naturally, I turned to the internet. Not only was I able to figure out what it said (from multiple sites), I also found a site that actually sells the shirt.
They've even got a screenshot from the movie. Alas, it's a bit pricey for such a simplistic shirt. Still, the idea that such a shirt would be anything more than some custom thing a film nerd whipped up is pretty funny. I mean, how many people would even get the reference?
Posted by Mark on November 12, 2006 at 09:45 PM .: link :.
Sunday, November 05, 2006
Choice, Productivity and Feature Bloat
Jakob Nielsen's recent column on productivity and screen size referenced an interesting study comparing a feature-rich application with a simpler one:
The distinction between operations and tasks is important in application design because the goal is to optimize the user interface for task performance, rather than sub-optimize it for individual operations. For example, Judy Olson and Erik Nilsen wrote a classic paper comparing two user interfaces for large data tables. One interface offered many more features for table manipulation and each feature decreased task-performance time in specific circumstances. The other design lacked these optimized features and was thus slower to operate under the specific conditions addressed by the first design's special features.

In this case, more choices mean less productivity. So why aren't all of our applications much smaller and less feature-intensive? Well, as I went over a few weeks ago, people tend to overvalue measurable things like features and undervalue less tangible aspects like usability and productivity. Here's another reason we endure feature bloat:
A lot of software developers are seduced by the old "80/20" rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.

That quote is from a relatively old article, and when I first read it, I still didn't get why you couldn't create a "lite" word processor that would be significantly smaller than Word, but still get the job done. Then I started using several of the more obscure features of Word, notably the "Track Changes" feature (which was a life saver at the time), which never would have made it into a "lite" version (yes, there are other options for collaborative editing these days, but you gotta use what you have at hand at the time). Add in the ever increasing computer power and ever decreasing cost of memory and storage, and feature bloat looks like less of a problem. However, as this post started out by noting, productivity often suffers as a result (and as Nielsen's article shows, productivity is more difficult to measure than counting a list of features).
The one approach for dealing with "featuritis" that seems to be catching on these days is starting with your "lite" version, then allowing people to install plugins to fill in the missing functionality. This is one of the things that makes Firefox so popular, as it not only allows plugins, it actually encourages users to create their own. Alas, this has led to choice problems of its own. One of my required features for any browser that I would consider for personal use is mouse gestures. Firefox has at least 4 extensions available that implement mouse gestures in one way or another (though it's not immediately obvious what the differences are, and there appear to be other extensions which utilize mouse gestures for other functions). By contrast, my other favorite browser, Opera, natively supports mouse gestures.
Of course, this is not a new approach to the feature bloat problem. Indeed, as far as I can see, this is one of the primary driving forces behind *nix-based applications. Their text editors don't have a word count feature because there is already a utility for doing so (command line: wc [filename]). And so on. It's part of *nix's modular design, and it's one of the things that makes it great, but it also presents problems of its own (which I belabored at length last week).
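To make that modular philosophy concrete, here's a minimal shell sketch (the file name and its contents are just illustrative) showing how a word count, and even variations a monolithic editor would need a whole new feature for, fall out of composing small utilities with pipes:

```shell
# Create a small sample file (sample.txt is just an example name)
printf 'the quick brown fox\njumps over the lazy dog\n' > sample.txt

# A plain word count, straight from the standalone wc utility
wc -w < sample.txt

# Compose tools with pipes to count only *unique* words --
# split into one word per line, de-duplicate, then count lines
tr ' ' '\n' < sample.txt | sort -u | wc -l
```

Each program does one small job (split, sort, count) and the pipe does the integration, which is exactly why a *nix text editor can get away with omitting a word count feature of its own.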
In the end, it comes down to tradeoffs. Humans don't solve problems, they exchange problems, and so on. Right now, the plugin strategy seems to make a reasonable tradeoff, but it certainly isn't perfect.
Posted by Mark on November 05, 2006 at 11:50 PM .: link :.
Sunday, October 29, 2006
Adventures in Linux, Paradox of Choice Edition
Last week, I wrote about the paradox of choice: having too many options often leads to something akin to buyer's remorse (paralysis, regret, dissatisfaction, etc...), even if their choice was ultimately a good one. I had attended a talk given by Barry Schwartz on the subject (which he's written a book about) and I found his focus on the psychological impact of making decisions fascinating. In the course of my ramblings, I made an offhand comment about computers and software:
... the amount of choices in assembling your own computer can be stifling. This is why computer and software companies like Microsoft, Dell, and Apple (yes, even Apple) insist on mediating the user's experience with their hardware & software by limiting access (i.e. by limiting choice). This turns out to be not so bad, because the number of things to consider really is staggering.

The foolproofing that these companies do can sometimes be frustrating, but for the most part, it works out well. Linux, on the other hand, is the poster child for freedom and choice, and that's part of why it can be a little frustrating to use, even if it is technically a better, more stable operating system (I'm sure some OSX folks will get a bit riled with me here, but bear with me). You see this all the time with open source software, especially when switching from regular commercial software to open source.
One of the admirable things about Linux is that it is very well thought out, and nearly every design decision is made for a specific reason. The problem, of course, is that those reasons tend to have something to do with making programmers' lives easier... and most regular users aren't programmers. I dabble a bit here and there, but not enough to really benefit from these efficiencies. I learned most of what I know working with Windows and Mac OS, so when some enterprising open source developer decides that he doesn't like the way a certain Windows application works, you end up seeing some radical new design or paradigm which needs to be learned in order to use it. In recent years a lot of work has gone into making Linux friendlier for the regular user, and usability (especially during the installation process) has certainly improved. Still, a lot of room for improvement remains, and I think part of that has to do with the number of choices people have to make.
Let's start at the beginning and take an old Dell computer that we want to install Linux on (this is basically the computer I'm running right now). First question: which distribution of Linux do we want to use? Well, to be sure, we could start from scratch and just install the Linux kernel and build upwards from there (which would make the process I'm about to describe even more difficult). However, even Linux has its limits, so there are lots of distributions of Linux which package the OS, desktop environments, and a whole bunch of software together. This makes things a whole lot easier, but at the same time, there are a ton of distributions to choose from. The distributions differ in a lot of ways for various reasons, including technical (issues like hardware support), philosophical (some distros pooh-pooh commercial involvement) and organizational (things like support and updates). These are all good reasons, but when it's time to make a decision, what distro do you go with? Fedora? Suse? Mandriva? Debian? Gentoo? Ubuntu? A quick look at Wikipedia reveals a comparison of Linux distros, but there are a whopping 67 distros listed and compared in several different categories. Part of the reason there are so many distros is that there are a lot of specialized distros built off of a base distro. For example, Ubuntu has several distributions, including Kubuntu (which defaults to the KDE desktop environment), Edubuntu (for use in schools), Xubuntu (which uses yet another desktop environment called Xfce), and, of course, Ubuntu: Christian Edition (Linux for Christians!).
So here's our first choice. I'm going to pick Ubuntu, primarily because their tagline is "Linux for Human Beings" and hey, I'm human, so I figure this might work for me. Ok, and it has a pretty good reputation for being an easy to use distro focused more on users than things like "enterprises."
Alright, the next step is to choose a desktop environment. Lucky for us, this choice is a little easier, but only because Ubuntu splits desktop environments into different distributions (unlike many others which give you the choice during installation). For those who don't know what I'm talking about here, I should point out that a desktop environment is basically an operating system's GUI - it uses the desktop metaphor and includes things like windows, icons, folders, and abilities like drag-and-drop. Microsoft Windows and Mac OSX each ship with their own desktop environment, but they're relatively locked down (to ensure consistency and ease of use (in theory, at least)). For complicated reasons I won't go into, Linux has a modular system that allows for several different desktop environments. As with Linux distributions, there are many desktop environments. However, there are really only two major players: KDE and Gnome. Which is better appears to be a perennial debate amongst Linux geeks, but they're both pretty capable (there are a couple of other semi-popular ones like Xfce and Enlightenment, and then there's the old standby, twm (Tom's Window Manager)). We'll just go with the default Gnome installation.
Note that we haven't even started the installation process and if we're a regular user, we've already made two major choices, each of which will make you wonder things like: Would I have this problem if I installed Suse instead of Ubuntu? Is KDE better than Gnome?
But now we're ready for installation. This, at least, isn't all that bad, depending on the computer you're starting with. Since we're using an older Dell model, I'm assuming that the hardware is fairly standard stuff and that it will all be supported by my distro (if I were using a more bleeding edge type box, I'd probably want to check out some compatibility charts before installing). As it turns out, Ubuntu and its focus on creating a distribution that human beings can understand has a pretty painless installation. It was actually a little easier than Windows, and when I was finished, I didn't have to remove the mess of icons and trial software offers (purchasing a Windows PC through someone like HP is apparently even worse). When you're finished installing Ubuntu, you're greeted with a desktop that looks like this (click the pic for a larger version):
No desktop clutter, no icons, no crappy trial software. It's beautiful! It's a little different from what we're used to, but not horribly so. Windows users will note that there are two bars, one on the top and one on the bottom, but everything is pretty self-explanatory, and this desktop actually improves on several things that are really strange about Windows (e.g. to turn off your computer, you first click on "Start!"). Personally, I think having two toolbars is a bit much, so I get rid of one of them and customize the other so that it has everything I need (I also put it at the bottom of the screen for several reasons I won't go into here, as this entry is long enough as it is).
Alright, we're almost home free, and the installation was a breeze. Plus, lots of free software has been installed, including Firefox, Open Office, and a bunch of other good stuff. We're feeling pretty good here. I've got most of my needs covered by the default software, but let's just say we want to install Amarok, so that we can update our iPod. Now we're faced with another decision: How do we install this application? Since Ubuntu has so thoughtfully optimized their desktop for human use, one of the things we immediately notice in the "Applications" menu is an option which says "Add/Remove..." and when you click on it, a list of software comes up and it appears that all you need to do is select what you want and it will install it for you. Sweet! However, the list of software there doesn't include every program, so sometimes you need to use the Synaptic package manager, which is also a GUI application installation program (though it appears to break each piece of software into smaller bits). Also, in looking around the web, you see that someone has explained that you should download and install software by typing this in the command line: apt-get install amarok. But wait! We really should be using the aptitude command instead of apt-get to install applications.
If you're keeping track, that's four different ways to install a program, and I haven't even gotten into repositories (main, restricted, universe, multiverse, oh my!), downloadable package files (these operate more or less the way a Windows user would download a .exe installation file, though not exactly), let alone downloading the source code and compiling (sounds fun, doesn't it?). To be sure, they all work, and they're all pretty easy to figure out, but there's little consistency, especially when it comes to support (most of the time, you'll get a command line in response to a question, which is completely at odds with the expectations of someone switching from Windows). Also, in the case of Amarok, I didn't fare so well (for reasons belabored in that post).
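To make that inconsistency concrete, here's a hedged sketch. The pick_installer function is a hypothetical helper of mine, but the tools it probes for (aptitude, apt-get, dpkg) are the real Debian/Ubuntu ones, and the GUI options (Add/Remove, Synaptic) are front ends sitting on top of the same apt machinery.

```shell
# Hypothetical helper: probe for the command-line install tools discussed
# above, in rough order of preference on a 2006-era Ubuntu box. The GUI
# tools (Add/Remove..., Synaptic) wrap this same apt machinery.
pick_installer() {
    for tool in aptitude apt-get dpkg; do
        if command -v "$tool" > /dev/null 2>&1; then
            echo "$tool"
            return 0
        fi
    done
    echo "none"
}

# On a stock Ubuntu install this would print "aptitude", after which the
# actual install step is something like:  sudo aptitude install amarok
pick_installer
```

The fact that a helper like this is even conceivable is the point: four-plus interchangeable front ends, all correct, none obviously canonical to a newcomer.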
Once installed, most software works pretty much the way you'd expect. As previously mentioned, open source developers sometimes get carried away with their efficiencies, which can sometimes be confusing to a newbie, but for the most part, it works just fine. There are some exceptions, like the absurd Blender, but then, that's not exactly a hugely popular application that everyone needs.
Believe it or not, I'm simplifying here. There are that many choices in Linux. Ubuntu tries its best to make things as simple as possible (with considerable success), but when using Linux, it's inevitable that you'll run into something that requires you to break down the metaphorical walls of the GUI and muck around in the complicated swarm of text files and command lines. Again, it's not that difficult to figure this stuff out, but all these choices contribute to the same decision fatigue I discussed in my last post: anticipated regret (there are so many distros - I know I'm going to choose the wrong one), actual regret (should I have installed Suse?), dissatisfaction, escalation of expectations (I've spent so much time figuring out what distro to use that it's going to perfectly suit my every need!), and leakage (i.e. a bad installation process will affect what you think of a program, even after installing it - your feelings before installing leak into the usage of the application).
None of this is to say that Linux is bad. It is free, in every sense of the word, and I believe that's a good thing. But if they ever want to create a desktop that will rival Windows or OSX, someone needs to create a distro that clamps down on some of these choices. Or maybe not. It's hard to advocate something like this when you're talking about software that is so deeply predicated on openness and freedom. However, as I concluded in my last post:
Without choices, life is miserable. When options are added, welfare is increased. Choice is a good thing. But too much choice causes the curve to level out and eventually start moving in the other direction. It becomes a matter of tradeoffs. Regular readers of this blog know what's coming: We don't so much solve problems as we trade one set of problems for another, in the hopes that the new set of problems is more favorable than the old.
Choice is a double-edged sword, and by embracing that freedom, Linux has to deal with the bad as well as the good (just as Microsoft and Apple have to deal with the bad aspects of suppressing freedom and choice). Is it possible to create a Linux distro that is as easy to use as Windows or OSX while retaining the openness and freedom that makes it so wonderful? I don't know, but it would certainly be interesting.
Sunday, October 22, 2006
The Paradox of Choice
At the UI11 Conference I attended last week, one of the keynote presentations was made by Barry Schwartz, author of The Paradox of Choice: Why More Is Less. Though he believes choice to be a good thing, his presentation focused more on the negative aspects of offering too many choices. He walked through a number of examples that illustrate the problems with our "official syllogism": more choice means more freedom, and more freedom means more welfare.
So how do we react to all these choices? Luke Wroblewski provides an excellent summary, which I will partly steal (because, hey, he's stealing from Schwartz after all):
Another example is my old PC which has recently kicked the bucket. I actually assembled that PC from a bunch of parts, rather than going through a mainstream company like Dell, and the number of components available would probably make the Circuit City stereo example I gave earlier look tiny by comparison. Interestingly, this diversity of choices for PCs is often credited as part of the reason PCs overtook Macs:
Back in the early days of Macintoshes, Apple engineers would reportedly get into arguments with Steve Jobs about creating ports to allow people to add RAM to their Macs. The engineers thought it would be a good idea; Jobs said no, because he didn't want anyone opening up a Mac. He'd rather they just throw out their Mac when they needed new RAM, and buy a new one.
But as Schwartz would note, the amount of choices in assembling your own computer can be stifling. This is why computer and software companies like Microsoft, Dell, and Apple (yes, even Apple) insist on mediating the user's experience with their hardware by limiting access (i.e. by limiting choice). This turns out to be not so bad, because the number of things to consider really is staggering. So why was I so happy with my computer? Because I really didn't make many of the decisions - I simply went over to Ars Technica's System Guide and used their recommendations. When it comes time to build my next computer, what do you think I'm going to do? Indeed, Ars is currently compiling recommendations for their October system guide, due out sometime this week. My new computer will most likely be based off of their "Hot Rod" box. (Linux presents some interesting issues in this context as well, though I think I'll save that for another post.)
So what are the lessons here? One of the big ones is to separate the analysis from the choice by getting recommendations from someone else (see the Ars Technica example above). In the market for a digital camera? Call a friend (preferably one who is into photography) and ask them what to get. Another thing that strikes me is that just knowing about this can help you overcome it to a degree. Try to keep your expectations in check, and you might open up some room for pleasant surprises (doing this is surprisingly effective with movies). If possible, try using the product first (borrow a friend's, use a rental, etc...). Don't try to maximize the results so much; settle for things that are good enough (this is what Schwartz calls satisficing).
Without choices, life is miserable. When options are added, welfare is increased. Choice is a good thing. But too much choice causes the curve to level out and eventually start moving in the other direction. It becomes a matter of tradeoffs. Regular readers of this blog know what's coming: We don't so much solve problems as we trade one set of problems for another, in the hopes that the new set of problems is more favorable than the old. So where is the sweet spot? That's probably a topic for another post, but my initial thoughts are that it would depend heavily on what you're doing and the context in which you're doing it. Also, if you were to take a wider view of things, there's something to be said for maximizing options and then narrowing the field (a la the free market). Still, the concept of choice as a double-edged sword should not be all that surprising... after all, freedom isn't easy. Just ask Spider Man.
Sunday, October 15, 2006
I've been quite busy lately so once again it's time to unleash the chain-smoking monkey research squad and share the results:
Posted by Mark on October 15, 2006 at 11:09 PM .: link :.
Sunday, October 08, 2006
Linux Humor & Blog Notes
I'll be attending the User Interface 11 conference this week, and as such, won't have much time to check in. Try not to wreck the place while I'm gone. Since I'm off to the airport in fairly short order (why did I schedule a flight to conflict with the Eagles/Cowboys matchup? Dammit!) here's a quick comic with some linux humor:
The author, Randall Munroe, is a NASA scientist who has a keen sense of humor (and is apparently deathly afraid of raptors) and publishes a new comic a few times a week. The comic above is one of his most popular, and even graces one of his T-shirts (I also like the "Science. It works, bitches." shirt).
I'm sure I'll be able to wrangle some internet access during the week, but chances are that it will be limited (I need to get me a laptop at some point). I'll be back late Thursday night, so posting will probably resume next Sunday.
Tuesday, October 03, 2006
Adventures in Linux, iPod edition
Last weekend, my Windows machine died and I decided to give linux a shot. My basic thought was that if I could get a linux box to do everything I need, why bother getting another copy of windows? So I cast about looking for applications to fulfill my needs, and thus found myself on Mark Pilgrim's recently updated list of linux Essentials (Pilgrim has recently experienced a bit of net notoriety due to his decision to abandon Apple for Ubuntu).
So I need something to replace iTunes (which I use to play music and update my iPod). No problem:
amaroK. It’s just like iTunes except it automatically fetches lyrics from Argentina, automatically looks up bands on Wikipedia, automatically identifies songs with MusicBrainz, and its developers are actively working on features that don’t involve pushing DRM-infected crap down my throat. Add the amarok repository to get the latest version. apt-get install amarok
After taking that advice and installing Amarok, I think that paragraph would be better written as:
amaroK. It’s just like iTunes except it automatically orphans most of your library so that you can't see or play most of your music on your iPod, it doesn't handle video, it can't write to the iPod's podcast directory, and (my personal favorite) if you plug your Amarokized iPod into a windows machine, it crashes iTunes. Add the amarok repository to get the latest version, as the latest version doesn't seem to have those problems.
Yes, that's right, I plugged in my iPod and Amarok corrupted the iTunes database. I could still use my iPod, but I could only see 256 songs (out of around 1000). It didn't delete the files - all 1000 songs were still on the iPod - it just screwed up the database that controls the iPod. The issue turns out to be that I installed an older version of Amarok, and since Mark recommended getting the latest version, I really can't fault him for this debacle. You see, Ubuntu comes with a few user-friendly ways of installing programs. These are based on what's called "repositories," which are basically databases full of programs that you can browse. So I fired up one of these installation programs, found Amarok, and installed it... not realizing that the default Ubuntu repository had an older version of the program.
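For reference, "adding the amarok repository" boils down to something like the sketch below: append a deb line to apt's sources list, refresh, and install. The repository URL here is a placeholder of mine, not the real one; the actual line comes from whatever the Amarok project publishes.

```shell
# Hedged sketch of "add the amarok repository, get the latest version."
# The URL is a placeholder -- substitute the deb line the project publishes.
# Writing to /tmp here only to illustrate; the real target is
# /etc/apt/sources.list (which needs root).
sources=/tmp/amarok-sources.list
echo 'deb http://repository.example.org/ubuntu dapper main' >> "$sources"

# Then, as root, the real steps would be roughly:
#   sudo sh -c 'cat /tmp/amarok-sources.list >> /etc/apt/sources.list'
#   sudo apt-get update
#   sudo apt-get install amarok
cat "$sources"
```

Skip the first step, and apt happily installs whatever older version the default repositories carry, which is exactly the trap I fell into.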
Obviously, I had a bad experience here, but I'm still a little confused as to how Amarok is a valid iTunes replacement. Even with the latest version, it still has no support for videos (and the developers don't plan to add any, their excuse being that Amarok is just a music player), and its podcast support isn't ideal.
Despite the problems, I find myself strangely bemused at the experience. It was exactly what I feared, but in the end, I'm not that upset about it. There's a part of me that likes digging into the details of a problem and troubleshooting like this... but then, there's also a part of me that knows spending 5 hours trying to install something I could install in about 10 minutes on a Windows box is ludicrous. All's well that ends well, I guess, but consider me unimpressed. It's not enough for me to forsake linux, but it's enough to make me want to create a dual boot machine rather than a pure linux box.
Update: In using Amarok a little more, I see that it supports podcasts better than I originally thought.
Sunday, October 01, 2006
The Death of Sulaco
I have two computers running here at Kaedrin headquarters. My primary computer is a Windows box called Sulaco. My secondary computer is running Ubuntu Linux and is called Nostromo. Yesterday, Sulaco nearly died. I'll spare you the details (which are covered in the forum), but it started with some display trouble. It could have been the drivers for my video card, or it could have been that the video card itself was malfunctioning. In any case, by this morning, Sulaco's Windows registry was thoroughly corrupted. All attempts to salvage the installation failed. For some reason, my Windows XP CD failed to boot, and my trusty Win 98 floppy boot disk wouldn't let me run the setup from the XP CD (nor could I even see my hard drive, which had some files on it I wanted to retrieve).
To further complicate matters, the CD burner on my linux box has always been flaky, so I couldn't use that to create a new boot disk. However, I did remember that my Ubuntu installation disk could run as a Live CD. A few minutes of google searching yielded step-by-step instructions for booting a Windows box with an Ubuntu Live CD, mounting the Windows drive and sharing it via Windows File Sharing (i.e. Samba). A few minutes later and I was copying all appropriate data from Sulaco to Nostromo.
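For the curious, that recipe boils down to something like the sketch below. The device name and share settings are assumptions (check sudo fdisk -l for your actual Windows partition); the root-only mount and Samba steps are shown as comments since they need real hardware, while the share definition itself is just a config fragment.

```shell
# Rescue sketch: from the Ubuntu Live CD session, mount the Windows drive
# read-only, then export it over Samba so another machine can copy files off.
# /dev/hda1 is an assumption -- verify with `sudo fdisk -l` first.
#
#   sudo mkdir -p /mnt/windows
#   sudo mount -t ntfs -o ro /dev/hda1 /mnt/windows
#
# Share definition to append to /etc/samba/smb.conf before restarting Samba:
cat > /tmp/rescue-share.conf <<'EOF'
[rescue]
   path = /mnt/windows
   read only = yes
   guest ok = yes
EOF
#   sudo sh -c 'cat /tmp/rescue-share.conf >> /etc/samba/smb.conf'
#   sudo /etc/init.d/samba restart
cat /tmp/rescue-share.conf
```

Mounting read-only is the important design choice here: when you're salvaging a corrupted installation, the last thing you want is anything writing to that drive.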
For all intents and purposes, Sulaco is dead. She has served me well, and it should be noted that she was constructed nearly 6 years ago with turn-of-the-century hardware. I'm actually amazed that she held up so well for so long, but her age was showing. Upgrades would have been necessary even without the display/registry problems. The question now is how to proceed.
I've been fiddling with Linux for, oh, 8 years or so. Until recently, I've never found it particularly useful. Even now, I'm wary of it. However, the ease with which I was able to install Ubuntu and get it up on my wireless network (this task had given me so much trouble in the past that I was overjoyed when I managed to get it working) made me reconsider a bit. Indeed, the fact that the way I recovered from a Windows crash was to use linux is also heartening. On the other hand, I also have to consider the fact that if someone hadn't written detailed instructions for the exact task I was attempting, I probably never would have figured it out in a reasonable timeframe. This is the problem with linux. It's hard to learn.
Yes, I know, it's a great operating system. I've fiddled with it enough to realize that some of the things that might seem maddeningly and deliberately obscure are actually done for the best of reasons in a quite logical manner (unless, of course, you're talking about the documentation, which is usually infuriating). I'm not so much worried that I can't figure it out; it's that I don't really have the time to work through its idiosyncrasies. As I've said, recent experiences have been heartening, but I'm still wary. Open source software is a wonderful thing in theory, but I'd say that my experience with such applications has been mixed at best. For an example of what I'm worried about, see Shamus' attempts to use Blender, an open source 3d modeling program.
My next step will be to build a new box in Sulaco's place. As of right now, I'm leaning towards installing Ubuntu on that and using something like WINE (technically a compatibility layer rather than an emulator) to run the Windows proprietary software I need (which probably isn't much at this point). So right now, Nostromo is my guinea pig. If I can get this machine to do everything I need it to do in the next few days, I'll be a little less wary. If I can't, I'll find another Windows CD and install that. To be perfectly honest, Windows has served me well. Until yesterday, I've never had a problem with my installation of XP, which was stable and responsive for several years (conventional wisdom seems to dictate that running XP requires a complete reinstallation every few months - I've never had that problem). That said, I don't particularly feel like purchasing a new copy, especially when Vista is right around the corner...
Sunday, September 17, 2006
A few weeks ago, I wrote about magic and how subconscious problem solving can sometimes seem magical:
When confronted with a particularly daunting problem, I'll work on it very intensely for a while. However, I find that it's best to stop after a bit and let the problem percolate in the back of my mind while I do completely unrelated things. Sometimes, the answer will just come to me, often at the strangest times. Occasionally, this entire process will happen without my intending it, but sometimes I'm deliberately trying to harness this subconscious problem solving ability. And I don't think I'm doing anything special here; I think everyone has these sort of Eureka! moments from time to time. ...And indeed, Jason Kottke recently posted about how design works, referencing a couple of other designers, including Michael Bierut of Design Observer, who describes his process like this:
When I do a design project, I begin by listening carefully to you as you talk about your problem and read whatever background material I can find that relates to the issues you face. If you’re lucky, I have also accidentally acquired some firsthand experience with your situation. Somewhere along the way an idea for the design pops into my head from out of the blue. I can’t really explain that part; it’s like magic. Sometimes it even happens before you have a chance to tell me that much about your problem! [emphasis mine]
It is like magic, but as Bierut notes, this sort of thing is becoming more important as we move from an industrial economy to an information economy. He references a book about managing artists:
At the outset, the writers acknowledge that the nature of work is changing in the 21st century, characterizing it as "a shift from an industrial economy to an information economy, from physical work to knowledge work." In trying to understand how this new kind of work can be managed, they propose a model based not on industrial production, but on the collaborative arts, specifically theater.
This is very interesting and dovetails nicely with several topics covered on this blog. Harnessing self-organizing forces to produce emergent results seems to be rising in importance significantly as we proceed towards an information based economy. As noted, collaboration is key. Older business models seem to focus on a more brute force way of solving problems, but as we proceed we need to find better and faster ways to collaborate. The internet, with its hyperlinked structure and massive data stores, has been struggling with a data analysis problem since its inception. Only recently have we really begun to figure out ways to harness the collective intelligence of the internet and its users, but even now, we're only scratching the surface. Collaborative projects like Wikipedia or wisdom-of-crowds aggregators like Digg or Reddit represent an interesting step in the right direction. The challenge here is that we're not facing the problems directly anymore. If you want to create a comprehensive encyclopedia, you can hire a bunch of people to research, write, and edit entries. Wikipedia tried something different. They didn't explicitly create an encyclopedia, they created (or, at least, they deployed) a system that made it easy for a large number of people to collaborate on a large number of topics. The encyclopedia is an emergent result of that collaboration. They sidestepped the problem, and as a result, they have a much larger and more dynamic information resource.
None of those examples are perfect, of course, but the more I think about it, the more I think that their imperfection is what makes them work. As noted above, you're probably much better off releasing a site that is imperfect and iterating, making changes and learning from your mistakes as you go. When dealing with these complex problems, you're not going to design the perfect system all at once. I realize that I keep saying we need better information aggregation and analysis tools, and that we have these tools, but they leave something to be desired. The point of these systems, though, is that they get better with time. Many older information analysis systems break when you increase the workload quickly. They don't scale well. These newer systems only really work well once they have high participation rates and large amounts of data.
It remains to be seen whether or not these systems can actually handle that much data (and participation), but like I said, they're a good start and they're getting better with time.
Sunday, September 10, 2006
Time is short this week, so it's time for Yet Another Link Dump (YALD!):
Shockingly, it seems that I only needed to use two channels on my Monster FM Transmitter, and both of those channels are the ones I use around Philly. Despite this, I've not been too happy with my FM transmitter thingy. It gets the job done, I guess, but I find myself consistently annoyed at its performance (this trip being an exception). It seems that these things are very idiosyncratic and unpredictable, working better in some cars than others (thus some people swear by one brand, while others will badmouth that same brand). In large cities like New York and Philadelphia, the FM dial gets crowded and thus it's difficult to find a suitable station, further complicating matters. I think my living in a major city area combined with an awkward placement of the cigarette lighter in my car (which I assume is a factor) makes it somewhat difficult to find a good station. What would be really useful would be a list of available stations and an attempt to figure out ways to troubleshoot your car's idiosyncrasies. Perhaps a wiki would work best for this, though I doubt I'll be motivated enough to spend the time installing a wiki system here for this purpose (does a similar site already exist? I did a quick search but came up empty-handed). (There are kits that allow you to tap into your car stereo, but they're costly and I don't feel like paying more for that than I did for the player... )
Posted by Mark on September 10, 2006 at 09:15 PM .: link :.
Wednesday, August 16, 2006
GPL & Asimov's First Law
Ars Technica reports on an open source project called GPU. The purpose of this project is to provide an infrastructure for distributed computing (i.e. sharing CPU cycles). The developers of this project are apparently pacifists, and they've modified the GPL (the GNU General Public License, which is the primary license for open source software) to make that clear. One of the developers explains it thusly: "The fact is that open source is used by the military industry. Open source operating systems can steer warplanes and rockets. [This] patch should make clear to users of the software that this is definitely not allowed by the licenser."
Regardless of what you might think about the developers' intentions, the thing I find strangest about this is the way they've chosen to communicate their desires. They've modified the standard GPL to include a "patch" which is supposedly for no military use (full text here). Here is what this addition says [emphasis mine]:
PATCH FOR NO MILITARY USE
This is astoundingly silly, for several reasons. First, as many open source devotees have pointed out (and as the developers themselves even note in the above text), you're not allowed to modify the GPL. As Ars Technica notes:
Only sentences after their patch comes the phrase, "Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed." This is part of the GPL, and by modifying the license, the developers seem to run afoul of it. The Free Software Foundation has already contacted them about the matter.
Next, Asimov's laws of robotics were written for autonomous beings called robots. This might seem obvious to some, but apparently not to the developers, who have applied it to software. As Ars notes: "Code is not an autonomous agent that can go around bombing people or hauling them from burning buildings." Also, Asimov always alluded to the fact that the plain English definitions (which is what the developers used in their "patch") just gave you the basic idea of what the law did - the code that implemented this functionality in his robots was much more complex.
Third, we have a military for a reason, and their purpose extends far beyond bombing the crap out of people. For example, many major disasters are met with international aid delivered and administered by... military transports and personnel (there are many other examples, but this is a common one that illustrates the point well). Since this software is not allowed, through inaction, to permit any human being to be harmed, wouldn't the military be justified (if not actually required) to use it? Indeed, this "inaction" clause seems like it could cause lots of unintended consequences.
Finally, Asimov created the laws of robotics in a work of fiction as a literary device that allowed him to have fun with his stories. Anyone who has actually read the robot novels knows that they're basically just an extended exercise in subverting the three laws (eventually even superseding them with a "zeroth" law). He set himself some reasonable sounding laws, then went to town finding ways to get around them. For crying out loud, he had robots attempting murder on humans all throughout the series. The laws were created precisely to demonstrate how foolish it was to have such laws. Granted, many fictional stories with robots have featured Asimov's laws (or some variation), but that's more of an artistic homage (or parody, in a lot of cases). It's not something you put into a legal document.
Ars notes that not all the developers agree on the "patch," which is good, I guess. If I were more cynical, I'd say this was just a ploy to get more attention for their project, but I doubt that was the intention. If they were really serious about this, they'd probably have been a little more thorough with their legalese. Maybe in the next revision they'll actually mention that the military isn't allowed to use the software.
Update: It seems that someone on Slashdot has similar thoughts:
Have any of them actually read I, Robot? I swear to god, am I in some tiny minority who doesn't believe that this book was all about promulgating the infallible virtue of these three laws, but was instead a series of parables about the failings that result from codifying morality into inflexible dogma?

And another commenter does too:
From a plain English reading of the text "the program and its derivative work will neither be modified or executed to harm any human being nor through inaction permit any human being to be harmed", I am forced to conclude that the program will not through inaction allow any human being to be harmed. This isn't just silly; it's nonsensical. The Kwik-E-Mart's being robbed, and the program, through inaction (since it's running on a computer in another state, and has nothing to do with a convenience store), fails to save Apu from being shot in the leg. Has it violated the terms of it's own license? What does this clause even mean?

Heh.
Sunday, August 06, 2006
In last week's post, I ended up linking to a whole bunch of movies on the IMDB. The process was somewhat tedious, and I lamented the lack of Movable Type plugins that would help. There are a few plugins that could potentially help, but not in the exact context I'm looking for (MT-Textile does have some IMDB shortcuts, but they're for IMDB searches).
So after looking around, I decided that the best way to go would be to write a bookmarklet that would generate the code to insert a link to IMDB. I'm no expert on this stuff and I'm sure there's something wrong with the below code, but it appears to work passably well (maybe I should just call it IMDB Bookmarklet - Beta). Basically, all you need to do is go to the movie you want to link to on IMDB, click the bookmarklet in your browser, then copy and paste the text into your post. (IE actually has a function that will copy a string directly to your clipboard, but no other browser will do so, for obvious security reasons. Therefore, I simply used a prompt() function to display the generated text, which you then have to copy manually.)
Anyway, here's the code:
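The original snippet didn't survive in this copy of the post, so here is a sketch reconstructed from the description above. The function name and the exact URL/title parsing rules are my assumptions, not the original code:

```javascript
// Hypothetical reconstruction of the IMDB-link bookmarklet described above
// (the original code was not preserved in this copy of the post).
function makeImdbLink(pageUrl, pageTitle) {
  // Reduce any IMDB title-page URL to its canonical /title/ttNNNNNNN/ form.
  var match = pageUrl.match(/(tt\d+)/);
  var url = match ? "http://imdb.com/title/" + match[1] + "/" : pageUrl;
  // IMDB page titles look like "Miami Vice (2006)"; drop the year suffix.
  var name = pageTitle.replace(/\s*\(\d{4}[^)]*\)\s*$/, "");
  return "<a href='" + url + "' title='IMDB: " + name + "'>" + name + "</a>";
}

// As a bookmarklet, the same logic would be inlined and wrapped in prompt()
// so the generated markup can be copied manually (cross-browser, since
// direct clipboard access is IE-only), roughly:
// javascript:prompt('Copy this:', /* inlined makeImdbLink logic */ '...');
```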
Again, all you need to do is go to the movie you want to link to on IMDB, click the bookmarklet in your browser, then copy and paste the text into your post. This is the output of the bookmarklet when you use it on IMDB's Miami Vice page:
<a href='http://imdb.com/title/tt0430357/' title='IMDB: Miami Vice'>Miami Vice</a>

A few nerdy coding things to note here:
I realize this post has next to no appeal to the grand majority of my readers, but I ended up spending more time on this than I wanted. I'll see if I can make another post during the week this week...
Sunday, June 25, 2006
Art for the computer age...
I was originally planning on doing a movie review while our gentle web-master is away, but a topic has come up too many times in the past few weeks for me not to write about it. First it came up in the tag map of Kaedrin, when I noticed that some people were writing pages just to create appealing tag-maps. Then it came up in Illinois and Louisiana, which have passed laws regulating the sale and distribution of "violent games" to minors. This, of course, has led to lawsuits and claims that the laws violate free speech. After that, it was the guys at Penny Arcade, who posted links to We Feel Fine and Listening Post. Those projects search the internet for blogs (maybe this one?) and pull text from them about feelings, and present those feelings to an audience in different ways. Very interesting. Finally, it came up when I opened up the July issue of Game Informer and read Hideo Kojima's quote:
I believe that games are not art, and will never be art. Let me explain - games will only match their era, meaning what the people of that age want reflects the outcome of the game at that time. So, if you bring a game from 20 years ago out today, no one will say "wow." There will be some essence where it's fun, but there won't be any wows or touching moments. Like a car, for example. If you bring a car from 20 years ago to the modern day, it will be appealing in a classic sense, but how much gasoline it uses, or the lack of air conditioning, will simply not be appreciated in that era. So games will always be a kind of mass entertainment form rather than art. Of course, there will be artistic ways of representing games in that era, but it will still be entertainment. However, I believe that games can be a culture that represents their time. If it's a light era, or a dark era, I always try to implement that era in my works. In the end, when we look back on the projects, we can say "Oh, it was that era." So overall, when you look back, it becomes a culture.

Every time I reread that quote, I cringe. Here's a man who is one of the most significant forces in video games today, the creator of Metal Gear, and he's saying "No, they're not art, and never will be." I find his distinction between mass entertainment and art troubling, and his comparison to a car flawed.
It's true that games will always be a reflection of their times- just like anything else is. The limitations of the time and the attitudes of the culture at the time are going to have an effect on everything coming out of that time. A car made in the 60s is going to show the style of the 60s, and is going to have the tech of the 60s. That makes sense. Of course, a painting made in the 1700s is going to show the limits and is going to reflect the feelings of that time, too. The paints, brushes, and canvas used then aren't necessarily going to be the same as the ones used now, especially with the popular use of computers in painting. The fact that something is a reflection of the times isn't going to stop people from appreciating the artistic worth of that thing. The fact that the Egyptians hadn't mastered perspective doesn't stop anyone from wanting to see their statues.
What does that really tell us, though? Nothing. A car from the 80s may not be appreciated as a means of transport as much as a new model, but Kojima seems to be completely forgetting that there are many cars that are appreciated as something special. Nobody buys a 60s era muscle car because they think it's a good car for driving around in - they buy it because they think it's special, because some people view older cars as collectable. Some people do see them as more than a mere means of transportation. People are very much "wowed" by old cars. Is there any reason why this can't be true of games?
I am 8 Bit seems to suggest that there are people who are still wowed by those games. Kojima may be partially correct, though. Maybe most of those early games won't hold up in the long run. That shouldn't be a surprise. They're the first generation of games. The 8-Bit era was the beginning of the new wave of games, though. For the first time, creators could start to tell real stories, beyond simple high-score pursuit. Game makers were just getting their wings, and starting to see what games were really capable of. Maybe early games aren't art. Does that mean that games aren't art?
The problem mostly seems to be that we're asking the wrong questions. We shouldn't be asking "are video games art" any more than we'd ask "are movies art." It's a loaded question and you'll never come to any real answer, because the answer is going to depend completely on what movie you're looking at, and who you're asking. The same holds true with games. The question shouldn't be whether all games are art, but whether a particular game has some artistic merit. How we decide what counts as art is constantly up for debate, but there are games that raise such significant moral or philosophical questions, or have such an amazing sense of style, or tell such an amazing story, that it seems hard to argue that they have no artistic merit.
All of this really is leading somewhere. Computers have changed everything. I know that seems obvious, but I think it's taking some people - people like Kojima - a little longer to realize it. Computers have opened up a level of interactivity and access to information that we've never really had before. I can update Kaedrin from Michigan, and can send a message to a friend in Germany, all while buying videos from Japan and playing chess with a man in Alaska (not that I'm actually doing those things... but I could). These changes are going to be reflected in the art our culture produces. There's going to be backlash and criticism, and we're going to find that some people just don't "get it" or don't want to. We've gone through the same thing countless times before. Nobody thought movies would be seen as art when they came on the scene, and they were sure that the talkies wouldn't. When Andy Warhol arrived, there were plenty of naysayers. Soup cans? As art? Computers have generally been accepted as a tool for making art, but I think we're still seeing the limits pushed. We've barely scratched the surface. The interaction between art, artist, and viewer is blurring, and I, for one, can't wait to see what happens.
Sunday, April 30, 2006
The Mindless Internet and Choice
Nicholas Carr has observed a few things about the internet and its effect on the way we think:
You can't have too much information. Or can you? Writing in the Guardian, Andrew Orlowski examines the "glut of hazy information, the consequences of which we have barely begun to explore, that the internet has made endlessly available." He wonders whether the "aggregation of [online] information," which some see as "synonymous with wisdom," isn't actually eroding our ability to think critically ... Like me, you've probably sensed the same thing, in yourself and in others - the way the constant collection of information becomes an easy substitute for trying to achieve any kind of true understanding.

Internet as "infocrack," as it were. In a follow up entry, Carr further comments:
The more we suck in information from the blogosphere or the web in general, the more we tune our minds to brief bursts of input. It becomes harder to muster the concentration required to read books or lengthy articles - or to follow the flow of dense or complex arguments in general. Haven't you, dear blog reader, noticed that, too?

As a matter of fact, I have. A few years ago, I blogged about Information Overload:
Some time ago, I used to blog a lot more often than I do now. And more than that, I used to read a great deal of blogs, especially new blogs (or at least blogs that were new to me). Eventually this had the effect of inducing a sort of ADD in me. I consumed way too many things way too quickly and I became very judgemental and dismissive. There were so many blogs that I scanned (I couldn't actually read them, that would take too long for marginal gain) that this ADD began to spread across my life. I could no longer sit down and just read a book, even a novel.

Carr seems to place the blame firmly on the internet (and technology in general). I don't agree, and you can see why in the above paragraph - as soon as I realized what happened, I took steps to mitigate and reverse the effect. It's a matter of choice, as Loryn at growstate writes:
Technology may change our intellectual environment, but doesn't govern our behavior. We choose how we adapt. We choose our objectives and data sources and whether we challenge our assumptions. We choose on what to focus. We can choose.

Indeed. She does an impressive job demolishing Carr's argument as well... And yes, I'm aware that this post is made up almost entirely of pull-quotes, seemingly confirming Carr's argument. However, is there anything wrong with that?
Sunday, January 29, 2006
Insert clever title for what is essentially a post full of links.
Again short on time, so just a few links turned up by the chain-smoking monkey research staff who actually run the blog:
Sunday, January 22, 2006
Time is short this week, so just a quick pointer towards an old Collision Detection post in which Clive Thompson talks about iPods and briefly digresses into some differences between Apple and Microsoft computers:
Back in the early days of Macintoshes, Apple engineers would reportedly get into arguments with Steve Jobs about creating ports to allow people to add RAM to their Macs. The engineers thought it would be a good idea; Jobs said no, because he didn't want anyone opening up a Mac. He'd rather they just throw out their Mac when they needed new RAM, and buy a new one.

The concept of being "good enough" presents a few interesting dynamics that I've been considering a lot lately. One problem is, of course, how do you know what's "good enough" and what's just a piece of crap? Another interesting thing about the above anecdote is that "good enough" boils down to something that's customizable.
One thing I've been thinking about a lot lately is that some problems aren't meant to have perfect solutions. I see a lot of talk about problems that are incredibly complex as if they really aren't that complex. Everyone is trying to "solve" these problems, but as I've noted many times, we don't so much solve problems as we trade one set of problems for another (with the hope that the new set of problems is more favorable than the old). As Michael Crichton noted in a recent speech on Complexity:
...one important assumption most people make is the assumption of linearity, in a world that is largely non-linear. ... Our human predisposition is to treat all systems as linear when they are not. A linear system is a rocket flying to Mars. Or a cannonball fired from a cannon. Its behavior is quite easily described mathematically. A complex system is water gurgling over rocks, or air flowing over a bird's wing. Here the mathematics are complicated, and in fact no understanding of these systems was possible until the widespread availability of computers.

Everyone seems to expect a simple, linear solution to many of the complex problems we face, but I'm not sure such a thing is really possible. I think perhaps what we're looking for is a Nonesuch Beast; it doesn't exist. What are these problems? I think one such problem is the environment, as mentioned in Crichton's speech, but there are really tons of other problems. The Nonesuch Beast article above mentions a few scenarios, all of which I'm familiar with because of my job: Documentation and Metrics. One problem I often talk about on this blog is the need for better information analysis, and if all my longwinded talk on the subject hasn't convinced you yet, I don't think there's any simple solution to the problem.
As such, we have to settle for systems that are "good enough" like Wikipedia and Google. As Shamus Young notes in response to my posts last week, "deciding what is 'good enough' is a bit abstract: It depends on what you want to do with the emergent data, and what your standards are for usefulness." Indeed, and it really depends on the individual using the system. Wikipedia, though, is really just a specific example of the "good enough" wiki system, which can be used for any number of applications. As I mentioned last week, Wikipedia has run into some issues because people expect an encyclopedia to be accurate, but other wiki systems don't necessarily suffer from the same issues.
I think Wiki systems belong to a certain class of applications that are so generic, simple, and easy to use that people want to use them for all sorts of specialized purposes. Another application that fits this mold is Excel. Excel is an incredibly powerful application, but it's generic and simple enough that people use it to create all sorts of ad hoc applications that take advantage of some of the latent power in Excel. I look around my office, and I see people using Excel in many varied ways, some of which are not obvious uses of a spreadsheet program. I think we're going to see something similar with Wikis in the future (though Wikis may be used for different problems like documentation and collaboration). All this despite Wikis' obvious and substantial drawbacks. Wikis aren't "the solution" but they might be "good enough" for now.
Well, that turned out to be longer than I thought. There's a lot more to discuss here, but it will have to wait... another busy week approaches.
Sunday, January 15, 2006
Cheating Probabilistic Systems
Shamus Young makes some interesting comments regarding last week's post on probabilistic systems. He makes an important distinction between weblogs, which have no central point of control ("The weblog system is spontaneous and naturally occurring."), and the other systems I mentioned, which do. Systems like the ones used by Google or Amazon are centrally controlled and usually reside on a particular set of servers. Shamus then makes the observation that such centralization lends itself to "cheating." He uses Amazon as an example:
You’re a company like Amazon.com. You buy a million red widgets and a million blue widgets. You make a better margin on the blue ones, but it turns out that the red widgets are just a little better in quality. So the feedback for red is a little better. Which leads to red being recommended more often than blue, which leads to better sales, more feedback, and even more recommendations. Now you’re down to your last 100,000 red but you still have 500,000 blue.

His post focuses mostly on malicious uses of the system by its owners. This is certainly a worry, but one thing I think I need to note is that no one really thinks that these systems should be all that trustworthy. The reason the system works is that we all hold a certain degree of skepticism about it. Wikipedia, for instance, works best when you use it as a starting point. If you use it as the final authority, you're going to get burned at some point. The whole point of a probabilistic system is that the results are less consistent than traditional systems, and so people trust them less. The reason people still use such systems is that they can scale to handle the massive amounts of information being thrown at them (which is where traditional systems begin to break down).
Today Wikipedia offers 860,000 articles in English - compared with Britannica's 80,000 and Encarta's 4,500. Tomorrow the gap will be far larger.

You're much more likely to find what you're looking for at Wikipedia, even though the quality of any individual entry at Wikipedia ranges from poor and inaccurate to excellent and helpful. As I mentioned in my post, this lack of trustworthiness isn't necessarily bad, so long as it's disclosed up front. For instance, the problems that Wikipedia is facing are related to the fact that some people consider everything they read there to be very trustworthy. Wikipedia's policy of writing entries from a neutral point of view tends to exacerbate this (which is why the policy is a controversial one). Weblogs do not suffer from this problem because they are written in overtly subjective terms, and thus it is blatantly obvious that you're getting a biased view that should be taken with a grain of salt. Of course, that also makes it more difficult to glean useful information from weblogs, which is why Wikipedia's policy of writing entries from a neutral point of view isn't necessarily wrong (once again, it's all about tradeoffs).
Personally, Amazon's recommendations rarely convince me to buy something; generally, I make the decision independently. For instance, in my last post I mentioned that Amazon recommended the DVD set of the Firefly TV series based on my previous purchases. At that point, I'd already determined that I wanted to buy that set, and thus Amazon's recommendation wasn't so much convincing as it was convenient. Which is the point. By tailoring their featured offerings towards a customer's preferences, Amazon stands to make more sales. They use the term "recommendations," but that's probably a bit of a misnomer. Chances are, they're things we already know about and want to buy, so it makes more sense to promote those items. When I look at my recommendations page, many items are things I already know I want to watch or read (and sometimes even buy).
So is Amazon cheating with its recommendations? I don't know, but it doesn't really matter that much because I don't use their recommendations as an absolute guide. Also, if Amazon is cheating, all that really means is that Amazon is leaving room for a competitor to step up and provide better recommendations (and from my personal experience working on such a site, retail websites are definitely moving towards personalized product offerings).
One other thing to consider, though, is that it isn't just Amazon or Google that could be cheating. Gaming Google's search algorithms has actually become a bit of an industry. Wikipedia is under a constant assault of spammers who abuse the openness of the system for their own gain. Amazon may have set their system up to favor items that give them a higher margin (as Shamus notes), but it's also quite possible that companies go on Amazon and write glowing reviews for their own products, etc... in an effort to get their products recommended.
The whole point is that these systems aren't trustworthy. That doesn't mean they're not useful, it just means that we shouldn't totally trust them. You aren't supposed to trust them. Ironically, acknowledging that fact makes them more useful.
In response to Chris Anderson's The Probabilistic Age post, Nicholas Carr takes a skeptical view of these systems and wonders what the broader implications are:
By providing a free, easily and universally accessible information source at an average quality level of 5, will Wikipedia slowly erode the economic incentives to produce an alternative source with a quality level of 9 or 8 or 7? Will blogging do the same for the dissemination of news? Does Google-surfing, in the end, make us smarter or dumber, broader or narrower? Can we really put our trust in an alien logic's ability to create a world to our liking? Do we want to be optimized?

These are great questions, but I think it's worth noting that these new systems aren't really meant to replace the old ones. In Neal Stephenson's The System of the World, the character Daniel Waterhouse ponders how new systems supplant older systems:
"It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently ... have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. ... And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher's Stone." (page 639)

And so these new probabilistic systems will never replace the old ones, but only surround and encapsulate them...
Sunday, January 08, 2006
Amazon's Recommendations are Probabilistic
Amazon.com is a fascinating website. It's one of the first eCommerce websites, but it started with a somewhat unique strategy. The initial launch of the site included such a comprehensive implementation of functionality that there are sites today that are still struggling to catch up. Why? Because much of the functionality that Amazon implemented early and continued to improve didn't directly attempt to solve the problems most retailers face: What products do I offer? How often do we change our offerings? And so on. Instead, Amazon attempted to set up a self-organizing system based on past usage and user preferences.
For the first several years of Amazon's existence, they operated at a net loss due to high initial setup costs. Competitors who didn't have such expenses seemed to be doing better. Indeed, Amazon's infamous recommendations were often criticized, and anyone who has used Amazon regularly has certainly had the experience of wondering how in the world they managed to recommend something so horrible. But over time, Amazon's recommendation engine has gained steam and produced better and better recommendations. This is due, in part, to improvements in the system (in terms of the information collected, the analysis of that information, and the technology used to do both of those things). Other factors include the growth of both Amazon's customer base and their product offerings, both of which give the recommendation engine more data to work with.
As I've written about before, the important thing about Amazon's system is that it doesn't directly solve retailing problems; it sets up a system that allows for efficient collaboration. By studying purchase habits, product ratings, common wishlist items, etc... Amazon is essentially allowing its customers to pick recommendations for one another. As their customer base and product offerings grow, so does the quality of their recommendations. It's a self-organizing system, and recommendations are the emergent result. Many times, Amazon makes connections that I would have never made. For instance, a recent recommendation for me was the DVD set of the Firefly TV series. When I checked to see why (this openness is an excellent feature), it told me that it was recommended because I had also purchased Neal Stephenson's Baroque Cycle books. This is a connection I probably never would have made on my own, but once I saw it, it made sense.
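The collaborative mechanism described above can be sketched in a few lines. This is a toy illustration of item-to-item co-occurrence, not Amazon's actual algorithm; the function name and sample data are made up:

```javascript
// Toy item-to-item recommender: items that appear together in customers'
// purchase histories get recommended together. Illustrative only.
function recommend(purchases, ownedItem) {
  var counts = {};
  purchases.forEach(function (basket) {
    if (basket.indexOf(ownedItem) === -1) return;
    basket.forEach(function (item) {
      if (item === ownedItem) return;
      counts[item] = (counts[item] || 0) + 1;
    });
  });
  // Rank co-purchased items by how often they appear alongside ownedItem.
  return Object.keys(counts).sort(function (a, b) {
    return counts[b] - counts[a];
  });
}

var purchases = [
  ["Baroque Cycle", "Firefly DVD"],
  ["Baroque Cycle", "Firefly DVD", "Cryptonomicon"],
  ["Snow Crash", "Cryptonomicon"]
];
// Customers who bought the Baroque Cycle most often also bought Firefly:
recommend(purchases, "Baroque Cycle"); // → ["Firefly DVD", "Cryptonomicon"]
```

The emergent quality is visible even in this toy: nobody told the system that a Stephenson reader might like Firefly; the connection falls out of the purchase data, and it gets sharper as the data grows.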
Of course, the system isn't perfect. Truth be told, it probably never will be, but overall, I'd bet that it's still better than any manual process.
Chris Anderson (a writer for Wired who has been exploring the Long Tail concept) has an excellent post on his blog concerning these systems, which he refers to as "probabilistic systems:"
When professionals--editors, academics, journalists--are running the show, we at least know that it's someone's job to look out for such things as accuracy. But now we're depending more and more on systems where nobody's in charge; the intelligence is simply emergent. These probabilistic systems aren't perfect, but they are statistically optimized to excel over time and large numbers. They're designed to scale, and to improve with size. And a little slop at the microscale is the price of such efficiency at the macroscale.

Anderson's post is essentially a response to critics of probabilistic systems like Wikipedia, Google, and blogs, all of which have come under fire because of their less-than-perfect emergent results. He does an excellent job summarizing the advantages and disadvantages of these systems and it is highly recommended reading. I reference it for several reasons. It seems that Amazon's website qualifies as a probabilistic system, and so the same advantages and disadvantages Anderson observes apply. It also seems that Anderson's post touches on a few themes that often appear on this blog.
First is that human beings rarely solve problems outright. Instead, we typically seek to exchange one set of disadvantages for another in the hopes that the new set is more desirable than the old. Solving problems is all about tradeoffs. As Anderson mentions, a probabilistic system "sacrifices perfection at the microscale for optimization at the macroscale." Is this tradeoff worth it?
Another common theme on this blog is the need for better information analysis capabilities. Last week, I examined a study on "visual working memory," and it became apparent that one thing that is extremely important when facing a large amount of information is the ability to figure out what to ignore. In information theory, this is referred to as the signal-to-noise ratio (technically, this is a more informal usage of the terms). One of the biggest challenges facing us is an increase in the quantity of information we are presented with. In the modern world, we're saturated in information, so the ability to separate useful information from false or irrelevant information has become much more important.
Naturally, these two themes interact. As I concluded in last week's post: "Like any other technological advance, systems that help us better analyze information will involve tradeoffs." While Amazon, Wikipedia, Google or blogs may not be perfect, they do provide a much deeper look into a wider variety of subjects than their predecessors.
Is Wikipedia "authoritative"? Well, no. But what really is? Britannica is reviewed by a smaller group of reviewers with higher academic degrees on average. There are, to be sure, fewer (if any) total clunkers or fabrications than in Wikipedia. But it's not infallible either; indeed, it's a lot more flawed than we usually give it credit for. [Emphasis mine]

The bad thing about probabilistic systems is that they sacrifice perfection on the microscale. Any individual entry at Wikipedia may be less reliable than its Britannica counterpart (though not necessarily), and so we need to take any single entry with a grain of salt.
The same is true for blogs, no single one of which is authoritative. As I put it in this post, "blogs are a Long Tail, and it is always a mistake to generalize about the quality or nature of content in the Long Tail--it is, by definition, variable and diverse." But collectively they are proving more than an equal to mainstream media. You just need to read more than one of them before making up your own mind.

I once wrote a series of posts concerning this subject, starting with how the insights of reflexive documentary filmmaking are being used on blogs. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. Probabilistic systems would also benefit from such acknowledgements. Blogs seem to excel at this, though many of the problems facing Wikipedia and other such systems stem from the fact that people aren't aware of their subjective nature and thus assume a greater degree of objectivity than is really warranted.
It's obvious that probabilistic systems are not perfect, but that is precisely why they work. Are the tradeoffs worth it? Personally, I think they are, provided that such systems properly disclose their limitations. I also think it's worth noting that such systems will not fully replace non-probabilistic systems. One commonly referenced observation about Wikipedia, for instance, is that it "should be the first source of information, not the last. It should be a site for information exploration, not the definitive source of facts."
Sunday, November 20, 2005
As I've hinted at in recent entries, I've been delving a bit into podcasts. For the uninitiated, a "podcast" is just a fancy word for pre-recorded radio shows that you can subscribe to on the internet (people often download podcasts to listen to on their iPod, hence the name - though the term really is a misnomer, as you don't need an iPod to listen to a podcast, and it's not broadcast either).
In any case, my short commute actually doesn't lend itself to listening, so I haven't listened to that many podcasts and all of the ones I've listened to are at least tangentially movie-related. So here are a few short reviews of podcasts that I've listened to (again, mostly movie related):
Sunday, June 26, 2005
This is hardly new, but since I've often observed the need for better information aggregation tools I figured I'd give del.icio.us a plug. del.icio.us is essentially an online bookmark (or favorites, in IE-speak) repository. It allows you to post sites to your own personal collection of links. This is great for those who frequently access the internet from multiple locations and different browsers (i.e. from work and home) as it is always accessible on the web. But the really powerful thing about del.icio.us is that everyone's bookmarks are public and easily viewable, and there are all sorts of ways to aggregate and correlate bookmarks. They like to call the system a social bookmarks manager.
The system uses a tagging scheme (or flat hierarchy, if you prefer) to organize links. In the context of a system like del.icio.us, tagging essentially means that for each bookmark you add, you choose a number of labels or categories (tags) which are used to organize your bookmarks so you can find them later. Again, since del.icio.us is a public system, you can see what other people are posting to the same tags. This becomes a good way to keep up on a particular topic (for example, CSS, the economy, movies, tacos or cheese). Jon Udell speculates that posted links would follow a power law distribution, where a few individuals really stand out as the most reliable contributors of valuable links for a given topic. Unfortunately, del.icio.us isn't particularly great at sorting that out yet (though you may be able to notice such patterns emerging if you really keep up on a topic and who is posting what, which can be somewhat daunting for popular tags like CSS, but perhaps not so for something more obscure like unicode). Udell also notes how useful tagging is when trying to organize something that you think will be useful in the future.
Tagging is a concept whose time has come, and despite its drawbacks, I have a feeling that 10 years from now, we're all going to look back and wonder how the heck we accomplished anything before something like tagging rolled around. del.icio.us certainly isn't the only site using tagging (Flickr has tagged photos, Technorati uses tags for blog posts, and there are several other sites). Of course, the concept does have its problems; namely, how do you know which tags to use? For instance, one of the more popular general subjects on del.icio.us is blogs and blogging, but what tags should be used? Blog, Blogging, Blogs, Weblog, Weblogs, blogosphere and so on... Luckily del.icio.us is getting better and better at this - their "experimental post" works wonders because it is actually able to recommend tags you should use based on what tags other people have used.
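del.icio.us never published how its tag recommender works, but the general idea (suggest tags that frequently co-occur with the ones you've already typed) can be sketched in a few lines of Python. All of the data and names below are invented for illustration:

```python
from collections import Counter

# Hypothetical postings: each set is one user's tags for the same URL.
postings = [
    {"blog", "weblog", "writing"},
    {"blog", "blogging"},
    {"blog", "weblog"},
    {"blogging", "writing"},
]

def suggest_tags(partial, postings, n=3):
    """Suggest tags that co-occur with the ones already entered."""
    scores = Counter()
    for tags in postings:
        if partial & tags:            # this posting shares a tag with ours
            for t in tags - partial:  # count the tags we don't have yet
                scores[t] += 1
    return [t for t, _ in scores.most_common(n)]

# 'weblog' ranks first (it co-occurs with 'blog' twice);
# ties after that come back in arbitrary order.
print(suggest_tags({"blog"}, postings))
```

The scoring here is deliberately naive; a real system would also weight by how trusted or prolific each contributor is, which is where Udell's power-law observation comes back in.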
The system is actually quite simple and easy to use, but there's not much in the way of documentation. Check out this blog post or Jon Udell's screencast for some quick tutorials on how to get started. I've been playing around with it more and more, and it's proving very useful on multiple levels (organizing links I come across as well as finding new links in the first place!). If you're interested, you can check out my bookmarks. Some other interesting functionality:
The important thing about del.icio.us is not that it was designed to create the perfect information resource, but rather an efficient system of collaboration. It's a systemic improvement; as such, the improvement in information output is an emergent property of internet use. Syndication, aggregation, and filtering on the internet still need to improve considerably, but this seems like a step in the right direction.
Posted by Mark on June 26, 2005 at 08:30 PM .: link :.
Sunday, May 22, 2005
Voters and Lurkers
Debating online, whether it be through message boards or blogs or any other method, can be rewarding, but it can also be quite frustrating. When most people think of a debate, they think of two factions arguing, with one of them eventually "winning" the argument. It's a process of expression in which people with different points of view state their opinions and are criticized by one another.
I've often found that specific threads tend to boil down to a point where the argument is going back and forth between two sole debaters (with very few interruptions from others). Inevitably, the debate gets to the point where both sides' assumptions (or axioms) have been exposed, and neither side is willing to agree with the other. To the debaters, this can be intensely frustrating. As such, anyone who has spent a significant amount of time debating others online can usually see that they're probably never going to convince their opponents. So who wins the argument?
The debaters can't decide who wins - they obviously think their argument is better than their opponents' (or, at the very least, are unwilling to admit otherwise) and so everyone thinks that they "won." But the debaters themselves don't "win" an argument; it's the people witnessing the debate that are the real winners. They decide which arguments are persuasive and which are not.
This is what the First Amendment of the US Constitution is based on, and it is a fundamental part of our democracy. In a vigorous marketplace of ideas, the majority of voters will discern the truth and vote accordingly.
Unfortunately, there never seems to be any sort of closure when debating online, because the audience is primarily composed of lurkers, most of whom don't say anything (plus, there are no votes), and so it seems like nothing is accomplished. However, I assure you that is not the case. Perhaps not all lurkers, but a lot of them are reading the posts with a critical eye and coming out of the debate convinced one way or the other. They are the "voters" in an online debate. They are the ones who determine who won the debate. In a scenario where only 10-15 people are reading a given thread, this might not seem like much (and it's not), but if enough of these threads occur, then you really can see results...
I'm reminded of Benjamin Franklin's essay "An apology for printers," in which Franklin defended those who printed allegedly offensive opinion pieces. His thought was that very little would be printed if publishers only produced things that were not offensive to anybody.
Printers are educated in the Belief, that when Men differ in Opinion, both sides ought equally to have the Advantage of being heard by the Public; and that when Truth and Error have fair Play, the former is always an overmatch for the latter.
Posted by Mark on May 22, 2005 at 06:58 PM .: link :.
Friday, April 22, 2005
What is a Weblog, Part II
What is a weblog? My original thoughts leaned towards thinking of blogs as a genre within the internet. Like all genres, there is a common set of conventions that define the blogging genre, but the boundaries are soft and some sites are able to blur the lines quite thoroughly. Furthermore, each individual probably has their own definition as to what constitutes a blog (again similar to genres). The very elusiveness of a definition for blog indicates that perception becomes an important part of determining whether or not something is a blog. It has become clear that there is no one answer, but if we spread the decision out to a broad number of people, each with their own independent definition of blog, we should be able to come to the conclusion that a borderline site like Slashdot is a blog because most people call it a blog.
So now that we have a (non)definition for what a blog is, just how important are blogs? Caesar at Arstechnica writes that according to a new poll, Americans are somewhat ambivalent on blogs. In particular, they don't trust blogs.
I don't particularly mind this, however. For the most part, blogs don't make much of an effort to be impartial, and as I've written before, it is the blogger's willingness to embrace their subjectivity that is their primary strength. Making mistakes on a blog is acceptable, so long as you learn from your mistakes. Since blogs are typically more informal, it's easier for bloggers to acknowledge their mistakes.
Lexington Green from ChicagoBoyz recently wrote about blogging to a writer friend of his:
To paraphrase Truman Capote's famous jibe against Jack Kerouac, blogging is not writing, it is typing. A writer who is blogging is not writing, he is blogging. A concert pianist who is sitting down at the concert grand piano in Carnegie Hall in front of a packed house is the equivalent to an author publishing a finished book. The same person sitting down at the piano in his neighborhood bar on a Saturday night and knocking out a few old standards, doing a little improvisation, and even doing some singing -- that is blogging. Same instrument -- words, piano -- different medium. We forgive the mistakes and wrong-guesses because we value the immediacy and spontaneity. Plus, publish a book, it is fixed in stone. Write a blog post you later decide is completely wrong, it is actually good, since it gives you a good hook for a later post explaining your thoughts that led to the changed conclusion. The essence of a blog is to air things informally, to throw things out, to say "this interests me because ..." From time to time a more considered and article-like post is good. But most people read blogs by skimming. If a post is too long, in my observation, it does not get much response and may not be read at all.

Of course, his definition of what a blog is could be argued (as there are some popular and thoughtful bloggers who routinely write longer, more formal essays), but it actually struck me as being an excellent general description of blogging. Note his favorable attitude towards mistakes ("it gives you a good hook for a later post" is an excellent quote, though I think you might have to be a blogger to fully understand it). In the blogosphere, it's ok to be wrong:
Everyone makes mistakes. It's a fact of life. It isn't a cause for shame, it's just reality. Just as engineers are in the business of producing successful designs which can be fabricated out of less-than-ideal components, the engineering process is designed to produce successful designs out of a team made up of engineers every one of which screws up routinely. The point of the process is not to prevent errors (because that's impossible) but rather to try to detect them and correct them as early as possible.

The problem with the mainstream media is that they purport to be objective, as if they're just reporting the facts. Striving for objectivity can be a very good thing, but total objectivity is impossible, and if you deny the inherent subjectivity in journalism, then something is lost.
One thing Caesar mentions is that "the sensationalism surrounding blogs has got to go. Blogs don't solve world hunger, cure disease, save damsels in distress, or any of the other heroic things attributed to them." I agree with this too, though I do think there is something sensational about blogs, or more generally, the internet.
Steven Den Beste once wrote about what he thought were the four most important inventions of all time:
In my opinion, the four most important inventions in human history are spoken language, writing, movable type printing and digital electronic information processing (computers and networks). Each represented a massive improvement in our ability to distribute information and to preserve it for later use, and this is the foundation of all other human knowledge activities. There are many other inventions which can be cited as being important (agriculture, boats, metal, money, ceramic pottery, postmodernist literary theory) but those have less pervasive overall affects.

Regardless of whether or not you agree with the notion that these are the most important inventions, it is undeniable that the internet provides a stairstep in communication capability, which, in turn, significantly improves the process of large-scale collaboration that is so important to human existence.
When knowledge could only spread by speech, it might take a thousand years for a good idea to cross the planet and begin to make a difference. With writing it could take a couple of centuries. With printing it could happen in fifty years.

And it appears that blogs, with their low barrier to entry and automated software processes, will play a large part in the worldwide debate. There is, of course, a ton of room for improvement, but things are progressing rapidly now and perhaps even accelerating. It is true that some blogging proponents are preaching triumphalism, but that's part of the charm. They're allowed to be wrong and if you look closely at what happens when someone makes such a comment, you see that for every exaggerated claim, there are 10 counters in other blogs that call bullshit. Those blogs might be on the long tail and probably won't garner as much attention, but that's part of the point. Blogs aren't trustworthy, which is precisely why they're so important.
Update 4.24.05: I forgot to link the four most important inventions article (and I changed some minor wording: I had originally referred to the four "greatest" inventions, which was not the wording Den Beste had used).
Posted by Mark on April 22, 2005 at 06:49 PM .: link :.
Sunday, April 17, 2005
What is a Weblog?
Caesar at ArsTechnica has written a few entries recently concerning blogs which interested me. The first simply asks: What, exactly, is a blog? Once you get past the overly-general definitions ("a blog is a frequently updated webpage"), it becomes a surprisingly difficult question.
Caesar quotes Wikipedia:
A weblog, web log or simply a blog, is a web application which contains periodic time-stamped posts on a common webpage. These posts are often but not necessarily in reverse chronological order. Such a website would typically be accessible to any Internet user. "Weblog" is a portmanteau of "web" and "log". The term "blog" came into common use as a way of avoiding confusion with the term server log.

Of course, as Caesar notes, the majority of internet sites could probably be described in such a way. What differentiates blogs from discussion boards, news organizations, and the like?
Reading through the resulting discussion provides some insight, but practically every definition is either too general or too specific.
Many people like to refer to Weblogs as a medium in itself. I can see the point, but I think it's more general than that. The internet is the medium, whereas a weblog is basically a set of commonly used conventions used to communicate through that medium. Among the conventions are things like a main page with chronological posts, permalinks, archives, comments, calendars, syndication (RSS), blogging software (CMS), trackbacks, &c. One problem is that no single convention is, in itself, definitive of a weblog. It is possible to publish a weblog without syndication, comments, or a calendar. Depending on the conventions being eschewed, such blogs may be unusual, but may still be just as much a blog as any other site.
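Syndication, for one, is less exotic than the jargon suggests: an RSS feed is just an XML file that aggregators poll. A minimal hand-rolled sketch in Python (the blog titles, URLs, and dates below are all made up) might look like:

```python
from datetime import datetime, timezone
from email.utils import format_datetime  # RFC 2822 dates, as RSS expects

# Hypothetical posts for a hypothetical blog.
posts = [
    {"title": "What is a Weblog?",
     "link": "https://example.com/2005/04/what-is-a-weblog",
     "date": datetime(2005, 4, 17, 20, 27, tzinfo=timezone.utc)},
]

items = "\n".join(
    f"""  <item>
    <title>{p['title']}</title>
    <link>{p['link']}</link>
    <pubDate>{format_datetime(p['date'])}</pubDate>
  </item>""" for p in posts)

feed = f"""<?xml version="1.0"?>
<rss version="2.0">
<channel>
  <title>Example Blog</title>
  <link>https://example.com/</link>
  <description>Periodic time-stamped posts</description>
{items}
</channel>
</rss>"""

print(feed)
```

A real feed would add GUIDs, descriptions, and proper XML escaping, but the point stands: the convention is a thin layer of structure, not a technology in itself.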
For lack of a better term, I tend to think of weblogs as a genre. This is, of course, not totally appropriate but I think it does communicate what I'm getting at. A genre is typically defined as a category of artistic expression marked by a distinctive style, form, or content. However, anyone who is familiar with genre film or literature knows that there are plenty of movies or books that are difficult to categorize. As such, specific genres such as horror, sci-fi, or comedy are actually quite inclusive. Some genres, Drama in particular, are incredibly broad and are often accompanied by the conventions of other genres (we call such pieces "cross-genre," though I think you could argue that almost everything incorporates "Drama"). The point here is that there is often a blurry line between one genre and the next.
On the medium of the internet, there are many genres, one of which is a weblog. Other genres include commercial sites (i.e. sites that try to sell you things, Amazon.com, Ebay, &c.), reference sites (i.e. dictionaries & encyclopedias), Bulletin Board Systems and Forums, news sites, personal sites, weblogs, wikis, and probably many, many others.
Any given site is probably made up of a combination of genres and it is often difficult to pinpoint any one genre as being representative. Take, for example, Kaedrin.com. It is a personal site with some random features, a bunch of book & movie reviews, a forum, and, of course, a weblog (which is what you're reading now). Everything is clearly delineated here at Kaedrin, but other sites blur the lines between genres on every page. Take ArsTechnica itself: Is it a news site or a blog or something else entirely? I would say that the front page is really a combination of many different things, one of which is a blog. It's a "cross-genre" webpage, but that doesn't necessarily make it any less effective (though there is something to be said for simplicity and it is quite possible to load a page up with too much stuff, just as it's possible for a book or movie to be too ambitious and take on too much at once) just as Alien isn't necessarily a less effective Science Fiction film because it incorporates elements of Horror and Drama (or vice-versa).
Interestingly, much of what a weblog is can be defined as an already existing literary genre: the journal. People have kept journals and diaries all throughout history. The major difference between a weblog and a journal is that a weblog is published for all to see on the public internet (and also that weblogs can be linked together through the use of the hyperlink and the infrastructure of the internet). Historically, diaries were usually private, but there are notable exceptions which have been published in book form. Theoretically, one could take such diaries and publish them online - would they be blogs? Take, for instance, The Diary of Samuel Pepys which is currently being published daily as if it's a weblog circa 1662 (i.e. Today's entry is dated "Thursday 17 April 1662"). The only difference is that the author of that diary is dead and thus doesn't interact or respond to the rest of the weblog community (though there is still interaction allowed in the form of annotations).
A few other random observations about blogs:
I don't care what the hell a weblog is. It is what I say it is. Its something I update whenever I find an interesting tidbit on the web. And its fun. So there.

Heh. Interesting to note that my secondary definition there ("something I update whenever I find an interesting tidbit on the web") has changed significantly since I contributed that definition. This is why, I suppose, I had originally supplied the primary definition ("I don't care what the hell a weblog is. It is what I say it is.") and to be honest, I don't think that's changed (though I guess you could call that definition "too general"). Blogging is whatever I want it to be. Of course, I could up and call anything a blog, but I suppose it is also required that others perceive your blog as a blog. That way, the genre still retains some shape, but is still permeable enough to allow some flexibility.
I had originally intended to make several other points in this post, but since it has grown to a rather large size, I'll save them for other posts. Hopefully, I'll gather the motivation to do so before next week's scheduled entry, but there's no guarantee...
Posted by Mark on April 17, 2005 at 08:27 PM .: link :.
Sunday, March 27, 2005
Slashdot links to a fascinating and thought provoking one hour (!) audio stream of a speech "by futurist and developmental systems theorist, John Smart." The talk is essentially about the future of technology, more specifically information and communication technology. Obviously, there is a lot of speculation here, but it is interesting so long as you keep it in the "speculation" realm. Much of this is simply a high-level summary of the talk with a little commentary sprinkled in.
He starts by laying out some key motivations or guidelines for thinking about this sort of thing, paraphrasing David Brin (and what follows is actually my paraphrase of Smart):
We need a pragmatic optimism, a can-do attitude, a balance between innovation and preservation, honest dialogue on persistent problems, ... tolerance of the imperfect solutions we have today, and the ability to avoid both doomsaying and a paralyzing adherence to the status quo. ... Great input leads to great output.

So how do new systems supplant the old? They do useful things with less matter, less energy, and less space. They do this until they reach some sort of limit along those axes (a limitation of matter, energy, or space). It turns out that evolutionary processes are great at this sort of thing.
Smart goes on to list three laws of information and communication technology:
This is about halfway through the speech; he goes on to list many examples and explore some related concepts. Here are some bits I found interesting.
Posted by Mark on March 27, 2005 at 08:40 PM .: link :.
Sunday, March 13, 2005
A tale of two software projects
A few weeks ago, David Foster wrote an excellent post about two software projects. One was a failure, and one was a success.
The first project was the FBI's new Virtual Case File system; a tool that would allow agents to better organize, analyze and communicate data on criminal and terrorism cases. After 3 years and over 100 million dollars, it was announced that the system may be totally unusable. How could this happen?
When it became clear that the project was in trouble, Aerospace Corporation was contracted to perform an independent evaluation. It recommended that the software be abandoned, saying that "lack of effective engineering discipline has led to inadequate specification, design and development of VCF." SAIC has said it believes the problem was caused largely by the FBI: specifically, too many specification changes during the development process...an SAIC executive asserted that there were an average of 1.3 changes per day during the development. SAIC also believes that the current system is useable and can serve as a base for future development.

I'd be interested to see what the actual distribution of changes was (as opposed to the "average changes per day", which seems awfully vague and somewhat obtuse to me), but I don't find it that hard to believe that this sort of thing happened (especially because the software development firm was a separate entity). I've had some experience with gathering requirements, and it certainly can be a challenge, especially when you don't know the processes currently in place. This does not excuse anything, however, and the question remains: how could this happen?
The second project, the success, may be able to shed some light on that. DARPA was tapped by the US Army to help protect troops from enemy snipers. The requested application would spot incoming bullets and identify their point of origin, and it would have to be easy to use, mobile, and durable.
The system would identify bullets from their sound..the shock wave created as they travelled through the air. By using multiple microphones and precisely timing the arrival of the "crack" of the bullet, its position could, in theory, be calculated. In practice, though, there were many problems, particularly the high levels of background noise--other weapons, tank engines, people shouting. All these had to be filtered out. By Thanksgiving weekend, the BBN team was at Quantico Marine Base, collecting data from actual firing...in terrible weather, "snowy, freezing, and rainy" recalls DARPA Program Manager Karen Wood. Steve Milligan, BBN's Chief Technologist, came up with the solution to the filtering problem: use genetic algorithms. These are a kind of "simulated evolution" in which equations can mutate, be tested for effectivess, and sometimes even "mate," over thousands of simulated generations (more on genetic algorithms here.)

Now what was the biggest difference between the remarkable success of the Boomerang system and the spectacular failure of the Virtual Case File system? Obviously, the two projects present very different challenges, so a direct comparison doesn't necessarily tell the whole story. However, it seems to me that discipline (in the case of the Army) or the lack of discipline (in the case of the FBI) might have been a major contributor to the outcomes of these two projects.
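BBN's actual algorithms obviously aren't public, but the basic loop the quote describes (mutate, mate, test for effectiveness over many generations) is easy to sketch on a toy problem. Everything below, including the stand-in fitness function, is invented for illustration:

```python
import random

random.seed(42)

# Toy stand-in for the filtering problem: evolve a vector of filter
# weights toward a target response. Boomerang's real fitness function
# was of course vastly more involved.
TARGET = [0.2, -0.5, 0.9, 0.1]

def fitness(genome):
    # Higher is better: negative squared error against the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.3, scale=0.1):
    # Each gene has a chance of being nudged by Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def mate(a, b):
    # Uniform crossover: each gene comes from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # selection: keep the fittest third
    population = parents + [mutate(mate(random.choice(parents),
                                        random.choice(parents)))
                            for _ in range(20)]

best = max(population, key=fitness)
print(fitness(best))  # approaches 0 as the population converges
```

The appeal for a problem like sniper detection is that you don't need to know the right filter in advance; you only need a way to score candidates against recorded data, and the population does the rest.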
It's obviously no secret that discipline plays a major role in the Army, but there is more to it than just that. Independence and initiative also play an important role in a military culture. In Neal Stephenson's Cryptonomicon, the way the character Bobby Shaftoe (a Marine Raider, which is "...like a Marine, only more so.") interacts with his superiors provides some insight (page 113 in my version):
Having now experienced all the phases of military existence except for the terminal ones (violent death, court-martial, retirement), he has come to understand the culture for what it is: a system of etiquette within which it becomes possible for groups of men to live together for years, travel to the ends of the earth, and do all kinds of incredibly weird shit without killing each other or completely losing their minds in the process. The extreme formality with which he addresses these officers carries an important subtext: your problem, sir, is doing it. My gung-ho posture says that once you give the order I'm not going to bother you with any of the details - and your half of the bargain is you had better stay on your side of the line, sir, and not bother me with any of the chickenshit politics that you have to deal with for a living.

Good military officers are used to giving an order, then staying out of their subordinates' way as they carry out that order. I didn't see any explicit measurement, but I would assume that there weren't too many specification changes during the development of the Boomerang system. Of course, the developers themselves made all sorts of changes to specifics and they also incorporated feedback from the Army in the field in their development process, but that is standard stuff.
I suspect that the FBI is not completely to blame, but as the report says, there was a "lack of effective engineering discipline." The FBI and SAIC share that failure. I suspect, from the number of changes requested by the FBI and the number of government managers involved, that micromanagement played a significant role. As Foster notes, we should be leveraging our technological abilities in the war on terror, and he suggests a loosely organized oversight committee (headed by "a Director of Industrial Mobilization") to make sure things like this don't happen very often. Sounds like a reasonable idea to me...
Posted by Mark on March 13, 2005 at 08:47 PM .: link :.
Sunday, February 13, 2005
An Exercise in Aggregation
A few weeks ago I collected a ton of posts regarding the Iraqi elections. I did this for a few reasons. The elections were important and I wanted to know how they were going, but I could have just read up on them if that was the only reason. The real reason I made that post was to participate in and observe information aggregation and correlation in real time.
It was an interesting experience, and I learned a few things which should help in future exercises. Some of these are in my control to fix, some will depend on the further advance of technology.
Posted by Mark on February 13, 2005 at 10:39 AM .: link :.
Thursday, January 27, 2005
In a stroke of oddly compelling genius (or possibly madness), Jon Udell has put together a remarkable flash screencast (note: there is sound and it looks best in full screen mode) detailing the evolution of the Heavy metal umlaut page on Wikipedia.
It's a wonderfully silly topic, but my point is somewhat serious too. The 8.5-minute screencast turns the change history of this Wiki page into a movie, scrolls forward and backward along the timeline of the document, and follows the development of several motifs. Creating this animated narration of a document's evolution was technically challenging, but I think it suggests interesting possibilities.

Wikis are one of those things that just don't sound right when you hear about what they are and how they work. It's one thing to institute a collaborative encyclopedia, but Wikis embrace a philosophy of openness that seems entirely too permissive. Wikis are open to the general public and allow anyone to modify their contents without any sort of prior review. What's to stop a troll from vandalizing a page? Nothing, except that someone will come along and correct it shortly thereafter (Udell covers an episode of vandalism in the screencast). It's a textbook self-organizing system (note that wikis focus not on the content, but rather on establishing an efficient mechanism for collaboration; the content is an emergent property of the system). It should be interesting to see how it progresses... [via Jonathon Delacour, who also has an interesting discussion about umlauts and diaereses and another older post about wikis]
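The raw material for Udell's screencast is just the page's stored revision history, and the primitive his movie is built from (the delta between two revisions) is a few lines with Python's standard difflib. The revision text below is invented, not the actual Wikipedia content:

```python
import difflib

# Two hypothetical stored revisions of a wiki page, one line per list item.
rev1 = ["The heavy metal umlaut is a diacritic used in band names.",
        "Examples include Motorhead and Blue Oyster Cult."]
rev2 = ["The heavy metal umlaut is a diacritic used in band names.",
        "Examples include Motorhead, Motley Crue and Blue Oyster Cult.",
        "It is usually purely decorative."]

# unified_diff yields the familiar -removed / +added line format.
diff = list(difflib.unified_diff(rev1, rev2,
                                 fromfile="rev-2003-01-15",
                                 tofile="rev-2003-02-02",
                                 lineterm=""))
print("\n".join(diff))
```

String the deltas for every consecutive revision pair together and you have, in essence, the timeline Udell animates; a vandalism-and-revert episode shows up as one diff immediately undone by the next.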
Posted by Mark on January 27, 2005 at 08:02 PM .: link :.
Sunday, January 23, 2005
Long Tails, TV, and DVR
Apparently Chris Anderson (author of the Wired article I posted last week) has a blog in which he comments regularly on the long tail concept.
In one post, he speculates how the long tail relates to television programs, DVRs and the internet. In short, he proposes a browser plugin that you could use when you see a reference to a TV show that you are interested in and want to record. You would simply need to highlight the show title and right-click, where a new option would be available called "Record to DVR," at which point you could go about setting up your DVR to record the show.
I don't have a DVR, so perhaps I'm not the best person to comment, but it strikes me that if you're reading a recommendation for a show, you might want to go back and watch all the previous episodes as well. For instance, a lot of people have been recommending Lost to me recently. If I had a DVR, I might set it to record the show, but I'd have missed a significant portion of the series already (I don't know how much that would matter). What I'd really love is to go back and watch the series from the beginning.
Comcast has a feature called "On Demand" which would be perfect for this, but they don't seem to have much in the way of capacity (though if you have HBO, I understand they sometimes make whole seasons of various popular shows available) and they don't have Lost. Evan Kirchoff recently posted something that put an interesting twist on this subject: other people are his PVR. When he finds a show he wants to watch, he simply downloads it via torrents:
What I really wanted all this time, it turns out, is just the assurance that somebody out there in the luminiferous aether is faithfully recording every show, in case I later decide that I want it. Setting a VCR in advance is way too much work, but having to download a 350-megabyte file is an action that's just affirmative enough to distill one's preferences.

It's certainly an interesting perspective - a typical emergent property of the self-organizing internet (along with all the warts that entails) - and it's a hell of a lot better than waiting for reruns. I don't have the 400 gigs of hard drive space on my system that Evan does, but I might check out an episode or two. Of course, there's something to be said about the quality of the watching-tv-on-a-computer experience and, as Evan mentions, I'm not quite sure about the legality of such a practice (his reasoning seems logical, but that doesn't necessarily mean anything). Perhaps a micropayment solution (i.e. download an episode for a dollar, or one season for $10) would work. Of course, this would destroy the DVD market (which I imagine some people would be none too happy about), but it would also lengthen the tail, as quality niche shows (i.e. the long tail) might be able to carve out a profitable piece of the pie.
The best solution would, of course, combine all the various features above into one application/experience, but I'm not holding my breath just yet.
Posted by Mark on January 23, 2005 at 11:55 AM .: link :.
Sunday, January 16, 2005
Chasing the Tail
The Long Tail by Chris Anderson : An excellent article from Wired that demonstrates a few of the concepts and ideas I've been writing about recently. One such concept is well described by Clay Shirky's excellent article Power Laws, Weblogs, and Inequality. A system governed by a power law distribution is essentially one where the power (whether it be measured in wealth, links, etc) is concentrated in a small population (when graphed, the rest of the population's power values resemble a long tail). This concentration occurs spontaneously, and it is often strengthened because members of the system have an incentive to leverage their power to accrue more power.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.

As such, this distribution manifests in all sorts of human endeavors, including economics (for the accumulation of wealth), language (for word frequency), weblogs (for traffic or number of inbound links), genetics (for gene expression), and, as discussed in the Wired article, entertainment media sales. Typically, the sales of music, movies, and books follow a power law distribution, with a small number of hit artists who garner the vast majority of the sales. The typical rule of thumb is that 20% of available artists get 80% of the sales.
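Shirky's point, that the skew emerges from the act of choosing itself, is easy to demonstrate. Here's a toy Python sketch (all numbers invented for illustration) of a "rich get richer" process: each new sale goes to an artist with probability proportional to their existing sales, and a heavily skewed distribution appears even though every artist started on equal footing.

```python
import random

def simulate_sales(num_artists=100, num_sales=10_000, seed=42):
    """Toy preferential-attachment simulation of media sales."""
    random.seed(seed)
    sales = [1] * num_artists  # everyone starts with one sale
    for _ in range(num_sales):
        # each new sale goes to an artist weighted by current sales
        winner = random.choices(range(num_artists), weights=sales)[0]
        sales[winner] += 1
    return sorted(sales, reverse=True)

sales = simulate_sales()
share = sum(sales[:20]) / sum(sales)  # share held by the top 20%
print(f"Top 20% of artists capture {share:.0%} of sales")
```

The exact share varies with the parameters, but the top sellers reliably capture far more than their proportional 20%, with no coordination and no difference in underlying quality.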
Because of the expense of producing the physical product, and giving it a physical point of sale (shelf-space, movie theaters, etc...), this is bad news for the 80% of artists who get 20% of the sales. Their books, movies, and music eventually go out of print and are generally forgotten, while the successful artists' works are continually reprinted and sold, building on their own success.
However, with the advent of the internet, this is beginning to change. Sales are still governed by the power law distribution, but the internet is removing the physical limitations of entertainment media.
An average movie theater will not show a film unless it can attract at least 1,500 people over a two-week run; that's essentially the rent for a screen. An average record store needs to sell at least two copies of a CD per year to make it worth carrying; that's the rent for a half inch of shelf space. And so on for DVD rental shops, videogame stores, booksellers, and newsstands.

The decentralized nature of the internet makes it a much better way to distribute entertainment media: a documentary with a potential national (heck, worldwide) audience of half a million people could likely succeed if distributed online. The infrastructure for films isn't there yet, but it has been happening more in the digital music world, and even in a hybrid space like Amazon.com, which sells physical products, but in a non-local manner. With digital media, the cost of producing and distributing entertainment media goes way down, and thus even average artists can be considered successful, even if their sales don't approach those of the biggest sellers.
The internet isn't a broadcast medium; it is on-demand, driven by each individual's personal needs. Diversity is the key, and as Shirky's article says: "Diversity plus freedom of choice creates inequality, and the greater the diversity, the more extreme the inequality." With respect to weblogs (or more generally, websites), big sites are, well, bigger, but links and traffic aren't the only metrics for success. Smaller websites are smaller in those terms, but are often more specialized, and thus they do better both at connecting with their visitors (or customers) and at providing more compelling value. Larger sites, by virtue of their popularity, simply aren't able to interact with visitors as effectively. This is assuming, of course, that the smaller sites do a good job. My site is very small (in terms of traffic and links), but not very specialized, so it has somewhat limited appeal. However, the parts of my site that get the most traffic are the ones that are specialized (such as the Christmas Movies page, or the Asimov Guide). I think part of the reason the blog has never really caught on is that I cover a very wide range of topics, thus diluting the potential specialized value of any single topic.
The same can be said for online music sales. They still conform to a power law distribution, but what we're going to see is increasing sales of more diverse genres and bands. We're in the process of switching from a system in which only the top 20% are considered profitable, to one where 99% are valuable. This seems somewhat counterintuitive for a few reasons:
The first is we forget that the 20 percent rule in the entertainment industry is about hits, not sales of any sort. We're stuck in a hit-driven mindset - we think that if something isn't a hit, it won't make money and so won't return the cost of its production. We assume, in other words, that only hits deserve to exist. But Vann-Adibé, like executives at iTunes, Amazon, and Netflix, has discovered that the "misses" usually make money, too. And because there are so many more of them, that money can add up quickly to a huge new market.

The need to figure out what people want out of a diverse pool of options is where self-organizing systems come into the picture. A good example is Amazon's recommendations engine, and their ability to aggregate various customer inputs into useful correlations. Their "customers who bought this item also bought" lists (and the litany of variations on that theme), more often than not, provide a way to traverse the long tail. They encourage customer participation, allowing customers to write reviews, select lists, and so on, providing feedback loops that improve the quality of recommendations. Note that none of these features was designed to directly sell more items. The focus was on allowing an efficient system of collaborative feedback. Good recommendations are an emergent result of that system. Similar features are available in the online music services, and the Wired article notes:
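To make the aggregation idea concrete, here's a minimal sketch of how a "customers who bought this item also bought" list could be derived from purchase histories. The baskets and artist names are hypothetical, and real systems use far more sophisticated correlation, but the principle of mining co-occurrence is the same.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories; each inner list is one customer's basket.
baskets = [
    ["britney", "pink"],
    ["britney", "pink", "no_doubt"],
    ["pink", "no_doubt", "the_selecter"],
    ["no_doubt", "the_selecter"],
    ["britney", "no_doubt"],
]

# Count how often each pair of items shows up in the same basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in combinations(set(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_bought(item, top_n=3):
    """'Customers who bought X also bought...' by raw co-occurrence."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: -kv[1])
    return [name for name, _ in ranked[:top_n]]

print(also_bought("pink"))  # the artists most often bought alongside Pink
```

Nobody designed the Britney-to-ska path described below; it falls out of the customers' collective behavior, which is exactly the emergent quality the paragraph above describes.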
For instance, the front screen of Rhapsody features Britney Spears, unsurprisingly. Next to the listings of her work is a box of "similar artists." Among them is Pink. If you click on that and are pleased with what you hear, you may do the same for Pink's similar artists, which include No Doubt. And on No Doubt's page, the list includes a few "followers" and "influencers," the last of which includes the Selecter, a 1980s ska band from Coventry, England. In three clicks, Rhapsody may have enticed a Britney Spears fan to try an album that can hardly be found in a record store.

Obviously, these systems aren't perfect. As I've mentioned before, a considerable amount of work needs to be done with respect to the aggregation and correlation aspects of these systems. Amazon and the online music services have a good start, and weblogs are trailing along behind them a bit, but the nature of self-organizing systems dictates that you don't get a perfect solution to start, but rather a steadily improving system. What's becoming clear, though, is that the little guys are (collectively speaking) just as important as the juggernauts, and that's why I'm not particularly upset that my blog won't be wildly popular anytime soon.
Posted by Mark on January 16, 2005 at 08:07 PM .: link :.
Sunday, January 02, 2005
Everyone Contributes in Some Way
Epic : A fascinating and possibly prophetic flash film of things to come in terms of information aggregation, recommendations, and filtering. It focuses on Google and Microsoft's (along with a host of others, including Blogger, Amazon, and Friendster) competing contributions to the field. It's eight minutes long, and well worth the watch. It touches on many of the concepts I've been writing about here, including self-organization and stigmergy, but in my opinion it stops just short of where such a system would go.
It's certainly interesting, but I don't think it gets it quite right (Googlezon?). Or perhaps it does, but the pessimistic ending doesn't feel right to me. Towards the end, it claims that a comprehensive social dossier would be compiled by Googlezon (note the name on the ID - Winston Smith) and that everyone would receive customized newscasts which are completely automated. Unfortunately, they foresee the majority of these customized newscasts as being rather substandard - filled with inaccuracies, narrow, shallow and sensational. To me, this sounds an awful lot like what we have now, but on a larger (and less manageable) scale. Talented editors, who can navigate, filter, and correlate Googlezon's contents, are able to produce something astounding, but the problem (as envisioned by this movie) is that far too few people have access to these editors.
But I think that misses the point. Individual editors would produce interesting results, but if the system were designed correctly, in a way that allowed everyone to be editors and a way to implement feedback loops (i.e. selection mechanisms), there's no reason a meta-editor couldn't produce something spectacular. Of course, there would need to be a period of adjustment, where the system gets lots of things wrong, but that's how selection works. In self-organizing systems, failure is important, and it ironically ensures progress. If too many people are getting bad information in 2014 (when the movie is set), all that means is that the selection process hasn't matured quite yet. I would say that things would improve considerably by 2020.
The film is quite worth a watch. I doubt this specific scenario will play out, but it's likely that something along these lines will occur. [Via the Commissar]
Posted by Mark on January 02, 2005 at 05:34 PM .: link :.
Sunday, December 12, 2004
I've been doing a lot of reading and thinking about the concepts discussed in my last post. It's a fascinating, if a little bewildering, topic. I'm not sure I have a great handle on it, but I figured I'd share a few thoughts.
There are many systems that are incredibly flexible, yet they came into existence, grew, and self-organized without any actual planning. Such systems are often referred to as Stigmergic Systems. To a certain extent, free markets have self-organized, guided by such emergent effects as Adam Smith's "invisible hand". Many organisms are able to quickly adapt to changing conditions using a technique of continuous reproduction and selection. To an extent, there are forces on the internet that are beginning to self-organize and produce useful emergent properties, blogs among them.
Such systems are difficult to observe, and it's hard to really get a grasp on what a given system is actually indicating (or what properties are emerging). This is, in part, the way such systems are supposed to work. When many people talk about blogs, they find it hard to believe that a system composed mostly of small, irregularly updated, and downright mediocre (if not worse) blogs can have truly impressive emergent properties (I tend to model the ideal output of the blogosphere as an information resource). Believe it or not, blogging wouldn't work without all the crap. There are a few reasons for this:
The System Design: The idea isn't to design a perfect system. The point is that these systems aren't planned, they're self-organizing. What we design are systems which allow this self-organization to occur. In nature, this is accomplished through constant reproduction and selection (for example, some biological systems can be represented as a function of genes. There are hundreds of thousands of genes, with a huge and diverse number of combinations. Each combination can be judged based on some criteria, such as survival and reproduction. Nature introduces random mutations so that gene combinations vary. Efficient combinations are "selected" and passed on to the next generation through reproduction, and so on).
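The reproduction-and-selection loop described above can be sketched as a bare-bones genetic algorithm. In this toy version (all parameters arbitrary), fitness is just the count of 1-bits in a genome, a stand-in for the survival-and-reproduction criteria mentioned above.

```python
import random

random.seed(0)
GENOME_LEN = 20

def fitness(genome):
    # Toy selection criterion: count of 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Random mutation: each bit flips with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=30, generations=100):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives unchanged...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # ...and reproduces with random mutation.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "/", GENOME_LEN)
```

Nobody designs the winning genome; efficient combinations are simply the ones that survive repeated rounds of variation and selection, which is the whole point of the system-design argument above.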
The important thing with respect to blogs are the tools we use. To a large extent, blogging is simply an extension of many mechanisms already available on the internet, most especially the link. Other weblog specific mechanisms like blogrolls, permanent-links, comments (with links of course) and trackbacks have added functionality to the link and made it more powerful. For a number of reasons, weblogs tend to be affected by power-law distribution, which spontaneously produces a sort of hierarchical organization. Many believe that such a distribution is inherently unfair, as many excellent blogs don't get the attention they deserve, but while many of the larger bloggers seek to promote smaller blogs (some even providing mechanisms for promotion), I'm not sure there is any reliable way to systemically "fix" the problem without harming the system's self-organizational abilities.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.

This self-organization is one of the important things about weblogs; any attempt to get around it will end up harming you in the long run. The important thing is to find a state in which weblogs are working most efficiently. How can the weblog community be arranged to self-organize and find its best configuration? That is the real question, and that is what we should be trying to accomplish (emphasis mine):
...although the purpose of this example is to build an information resource, the main strategy is concerned with creating an efficient system of collaboration. The information resource emerges as an outcome if this is successful.

Failure is Important: Self-organizing systems tend to have attractors (a preferred state of the system), such that these systems will always gravitate towards certain positions (or series of positions), no matter where they start. Surprising as it may seem, self-organization only really happens when you expose a system in a steady state to an environment that can destabilize it. By disturbing a steady state, you might cause the system to take up a more efficient position.
It's tempting to dismiss weblogs as a fad because so many of them are crap. But that crap is actually necessary because it destabilizes the system. Bloggers often add their perspective to the weblog community in the hopes that this new information will change the way others think (i.e. they are hoping to induce change - this is roughly referred to as Stigmergy). That new information will often prompt other individuals to respond in some way or another (even if not directly responding). Essentially, change is introduced in the system and this can cause unpredictable and destabilizing effects. Sometimes this destabilization actually helps the system, sometimes (and probably more often than not) it doesn't. Regardless of its direct effects, the process is essential because it is helping the system become increasingly comprehensive. I touched on this in my last post (among several others), in which I claimed that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. An individual blog may fail to solve a problem, but that failure is important too when you look at the systemic level. Of course, all of this is also muddying the waters and causing the system to deteriorate to a state where it is less efficient to use. For every success story like Rathergate, there are probably 10 bizarre and absurd conspiracy theories to contend with.
This is the dilemma faced by all biological systems. The effects that cause them to become less efficient are also the effects that enable them to evolve into more efficient forms. Nature solves this problem with its evolutionary strategy of selecting for the fittest. This strategy makes sure that progress is always in a positive direction only.

So what weblogs need is a selection process that separates the good blogs from the bad. This ties in with the aforementioned power-law distribution of weblogs. Links, be they blogroll links or links to an individual post, essentially represent a sort of currency of the blogosphere and provide an essential internal feedback loop. There is a rudimentary form of this sort of thing going on, and it has proven to be very successful (as Jeremy Bowers notes, it certainly seems to do so much better than the media whose selection process appears to be simple heuristics). However, the weblog system is still young and I think there is considerable room for improvement in its selection processes. We've only hit the tip of the iceberg here. Syndication, aggregation, and filtering need to improve considerably. Note that all of those things are systemic improvements. None of them directly act upon the weblog community or the desired informational output of the community. They are improvements to the strategy of creating an efficient system of collaboration. A better informational output emerges as an outcome if the systemic improvements are successful.
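The idea that links act as a currency and feedback loop is the same insight behind PageRank-style ranking. Here's a minimal sketch (the blogs and links are hypothetical) in which each blog's score flows from the blogs that link to it, so inbound links perform the selection without anyone directly judging quality.

```python
# Hypothetical link graph: each blog maps to the blogs it links out to.
links = {
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": ["alice"],
    "dave": ["carol"],   # dave links out, but nobody links to dave
}

damping = 0.85
scores = {blog: 1.0 / len(links) for blog in links}

for _ in range(50):  # iterate until scores settle
    new_scores = {}
    for blog in links:
        # A blog's score is fed by every blog linking to it,
        # each contributing its own score split among its outlinks.
        inbound = sum(scores[src] / len(outs)
                      for src, outs in links.items() if blog in outs)
        new_scores[blog] = (1 - damping) / len(links) + damping * inbound
    scores = new_scores

for blog, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{blog}: {score:.3f}")
```

Carol, with three inbound links, ends up on top, and link-less Dave ends up on the bottom; the ranking is a systemic improvement in exactly the sense described above, since nothing acts on the blogs themselves.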
This is truly a massive subject, and I'm only beginning to understand some of the deeper concepts, so I might end up repeating myself a bit in future posts on this subject, as I delve deeper into the underlying concepts and gain a better understanding. The funny thing is that it doesn't seem like the subject itself is very well defined, so I'm sure lots will be changing in the future. Below are a few links to information that I found helpful in writing this post.
Posted by Mark on December 12, 2004 at 11:15 PM .: link :.
Sunday, December 05, 2004
An Epic in Parallel Form
Tyler Cowen has an interesting post on the scholarly content of blogging in which he speculates as to how blogging and academic scholarship fit together. In so doing he makes some general observations about blogging:
Blogging is a fundamentally new medium, akin to an epic in serial form, but combining the functions of editor and author. Who doesn't dream of writing an epic?

It's an interesting perspective. Many blogs are general in subject, but some of the ones that really stand out have some sort of narrative (for lack of a better term) that you can follow from post to post. As Cowen puts it, an "epic in serial form." The suggestion that reading a single blog many times is more rewarding than reading the best posts from many different blogs is interesting. But while a single blog may give you a broad view of what a field is about, it can also be rewarding to aggregate the specific views of a wide variety of individuals, even biased and partisan individuals. As Cowen mentions, the blogosphere as a whole is the relevant unit of analysis. Even if each individual view is unimpressive on its own, that may not be the case when taken collectively. In a sense, while each individual is writing a flawed epic in serial form, they are all contributing to an epic in parallel form.
Which brings up another interesting aspect of blogs. When the blogosphere tackles a subject, it produces a diverse set of opinions and perspectives, all published independently by a network of analysts who are all doing work in parallel. The problem here is that the decentralized nature of the blogosphere makes aggregation difficult. Determining a group as large and diverse as the blogosphere's "answer" based on all of the disparate information they have produced is incredibly difficult, especially when the majority of data represents opinions of various analysts. A deficiency in aggregation is part of where groupthink comes from, but some groups are able to harness their disparity into something productive. The many are smarter than the few, but only if the many are able to aggregate their data properly.
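The "many are smarter than the few, if aggregated properly" claim has a classic toy demonstration: average many independent noisy estimates and the aggregate beats almost every individual. Note what this sketch assumes away: the errors here are independent and unbiased, which is precisely what real groups of opinionated analysts don't guarantee, and why the aggregation problem described above is hard.

```python
import random
import statistics

random.seed(7)
TRUE_VALUE = 100.0

# Hypothetical analysts: each produces an independent, noisy estimate.
estimates = [TRUE_VALUE + random.gauss(0, 20) for _ in range(500)]

# Aggregation here is just the mean; real aggregation is much harder.
aggregate = statistics.mean(estimates)
aggregate_error = abs(aggregate - TRUE_VALUE)
individual_errors = [abs(e - TRUE_VALUE) for e in estimates]

beaten = sum(err > aggregate_error for err in individual_errors)
print(f"aggregate error: {aggregate_error:.2f}")
print(f"the mean beats {beaten / len(estimates):.0%} of individual analysts")
```

When the independence assumption breaks down (everyone reading the same sources, or echoing the same pundits) the errors correlate and the aggregate inherits the shared bias, which is one way to describe groupthink.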
In theory, blogs represent a self-organizing system that has the potential to evolve and display emergent properties (a sort of human hive mind). In practice, it's a little more difficult to say. I think it's clear that the spontaneous appearance of collective thought, as implemented through blogs or other communication systems, is happening frequently on the internet. However, each occurrence is isolated and only represents an incremental gain in productivity. In other words, a system will sometimes self-organize in order to analyze a problem and produce an enormous amount of data which is then aggregated into a shared vision (a vision which is much more sophisticated than anything that one individual could come up with), but the structure that appears in that case will disappear as the issue dies down. The incredible increase in analytic power is not a permanent stair step, nor is it ubiquitous. Indeed, it can also be hard to recognize the signal in a great sea of noise.
Of course, such systems are constantly and spontaneously self-organizing; themselves tackling problems in parallel. Some systems will compete with others, some systems will organize around trivial issues, some systems won't be nearly as effective as others. Because of this, it might be that we don't even recognize when a system really transcends its perceived limitations. Of course, such systems are not limited to blogs. In fact they are quite common, and they appear in lots of different types of systems. Business markets are, in part, self-organizing, with emergent properties like Adam Smith's "invisible hand". Open Source software is another example of a self-organizing system.
Interestingly enough, this subject ties in nicely with a series of posts I've been working on regarding the properties of Reflexive documentaries, polarized debates, computer security, and national security. One of the general ideas discussed in those posts is that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. Ironically, in acknowledging one's own subjectivity, one becomes more objective and reliable. This applies on an individual basis, but becomes much more powerful when it is part of an emergent system of analysis as discussed above. Blogs are excellent at this sort of thing precisely because they are made up of independent parts that make no pretense at objectivity. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. The news media represents a competing system (the journalist being the media's equivalent of the blogger), one that is much more rigid and unyielding. The interplay between blogs and the media is fascinating, and you can see each medium evolving in response to the other (the degree to which this is occurring is naturally up for debate). You might even be able to make the argument that blogs are, themselves, emergent properties of the mainstream media.
Personally, I don't think I have that exact sort of narrative going here, though I do believe I've developed certain thematic consistencies in terms of the subjects I cover here. I'm certainly no expert and I don't post nearly often enough to establish the sort of narrative that Cowen is talking about, but I do think a reader would benefit from reading multiple posts. I try to make up for my low posting frequency by writing longer, more detailed posts, often referencing older posts on similar subjects. However, I get the feeling that if I were to break up my posts into smaller, more digestible pieces, the overall time it would take to read and produce the same material would be significantly longer. Of course, my content is rarely scholarly in nature, and my subject matter varies from week to week as well, but I found this interesting to think about nonetheless.
I think I tend to be more of an aggregator than anything else, which is interesting because I've never thought about what I do in those terms. It's also somewhat challenging, as one of my weaknesses is being timely with information. Plus aggregation appears to be one of the more tricky aspects of a system such as the ones discussed above, and with respect to blogs, it is something which definitely needs some work...
Update 12.13.04: I wrote some more on the subject. I also made a minor edit to this entry, moving one paragraph lower down. No content has actually changed, but the new order flows better.
Posted by Mark on December 05, 2004 at 09:23 PM .: link :.
Sunday, November 14, 2004
Hockey Video Games
With the NHL lockout upon us, I have been looking for some way to make up for this lack of hockey viewing. I've always been a big fan of hockey video games, so I figured that might do the trick. Over the past year, I've bought 2 hockey games: EA Sports NHL 2004, and ESPN NHL 2K5. I was very happy with EA's 2004 effort, but there were some annoyances and I appear to have misplaced it during the move, so I figured I'd get a 2005 game.
EA Sports is pretty much dominant when it comes to just about any sports game out there, and hockey is no exception. Ever since the halcyon days of NHL 1994 for the Genesis, EA has dominated the hockey space. So last year, in an effort to compete with EA, Sega announced that its own hockey title was going to be branded with ESPN. Not only that, but they dropped their prices to around $20 (as compared to the standard $50 that EA charges) in the hope that the low price would lure gamers away from EA. So in looking at the reviews for EA's and ESPN's 2005 efforts, it appeared that ESPN had picked up significant ground on EA. With those reviews and that price, I figured I might as well check it out, so I took a chance and went with ESPN. To be honest, I'm not impressed. Below is a comparison between ESPN's 2005 effort and EA's 2004 game.
To give you an idea where I'm coming from, my favorite mode is franchise, so a lot of my observations will be coming from that perspective. Some things that annoy me might not annoy the casual gamer who just wants to play a game with their buddies every now and again. I'm playing on a Playstation 2, and I'm a usability nerd, so stuff that wouldn't bother other people might bother me. I'd also like to mention that I am far from a hardcore gamer, so my perceptions might be different than others.
Before I finish, I just want to stress that I'm talking about EA NHL 2004, not 2005. I've heard that the newer edition has generated a lot of complaints, but I have not played it so I can't say. Again, I'm no expert, but I'm not very impressed with ESPN's entry into the hockey gaming space. Perhaps in a year or two, with improvements to the UI and bug fixes, that will change.
Posted by Mark on November 14, 2004 at 08:01 PM .: link :.
Sunday, August 15, 2004
Convenience and Piracy
There is no silver bullet that will stop media piracy, whether it be movies, music, or video games. That doesn't stop media providers from trying, though. Of course, that is reasonable and expected, as piracy can pose a significant financial threat to their business. Unfortunately the draconian security mechanisms they employ aren't very effective, and end up alienating honest customers. I touched on this subject here a while back.
One of the first things you need to do when designing a security system is identify the attackers. Only then can you design an efficient countermeasure. So who are the pirates? Brad Wardell speculates that there are two basic groups of pirates:
Group A: The kiddies who warez everything. CD Copy protection means nothing to them. They have the game before it even hits the stores.

You'll never get rid of Group A, no matter what security measures you implement, but there is no reason you shouldn't be able to cut down on Group B. Unfortunately, most security systems that are implemented end up exacerbating the situation, frustrating customers and creating Group B pirates. One thing I've noticed about myself recently is that convenience is suddenly much more important to me. Spare time is at a premium for me, and thus I don't have the time or motivation to be a Group A pirate (not that I've ever been much of a pirate).
Not too long ago, I upgraded my system to Windows XP. After some time, I wanted to play some game that I had bought years ago. Naturally, all I have is the CD - not the key or the original box or anything. What to do? Suddenly, piracy becomes an option. And the next time I want to buy a game, I might think twice about going out to a store and paying top dollar to be inconvenienced by obtrusive copy-protection.
Wardell is the owner of Stardock, a company which is particularly good at not alienating customers. I have a subscription to TotalGaming.net, and am very pleased with the experience they provide. Wardell describes his philosophy for combating piracy:
That's why I think CD based copy protections are a bad idea. I think they create pirates and aren't terribly effective anyway. They're supposed to keep the honest "honest" but I propose a better way.

This is an interesting and apparently effective strategy (as Stardock seems to be doing well). Stardock has structured its business model so that they survive even in the face of piracy, yet don't have to resort to absurd and obtrusive security measures to combat piracy. It's a matter of policy for them, and their policy makes it more convenient to be a customer than a pirate. Of course, such a solution only really works for video games, but it is worth noting nonetheless.
Posted by Mark on August 15, 2004 at 07:54 PM .: link :.
Saturday, January 24, 2004
I will be doing some work on my beloved computer tonight and tomorrow. Mostly an OS upgrade, or several, depending on what I like. I am amazingly still running Windows 98. It has treated me well, but has become somewhat unstable over the past year, so I figured it's time to switch. I'll be starting with Windows XP, but I have a copy of Windows 2000 to fall back on if I hate XP (judging from some horror stories, that might be the case). I'll probably also take this opportunity to play around with Linux. Again. In the near future, I'll probably be getting a new hard drive and a DVD burner.
All of which is to say that if things do not go well tomorrow, I might not be able to write my regular Sunday post. Wish me luck.
Update 1.25.04: Things went well. Repartitioned the drive and started formatting it, went to the movies to kill time, and when I came back installation was waiting for me. 20 minutes later, I was good to go. Spent some time downloading and installing programs this morning, but I still got a bunch of stuff to do. So far, I like it. The many "helpful" features of XP don't seem to be bothering me much, so it looks like I might be sticking with it. Then again, little minor things can build up over time, so I guess I'll just have to wait and see.
Posted by Mark on January 24, 2004 at 08:16 PM .: link :.
Sunday, October 26, 2003
All of the bickering over media piracy can be intensely frustrating because many of the issues have clear and somewhat obvious truths that are simply being ignored. For instance, it should be obvious by now that it is impossible for any media provider to completely prevent piracy of their product, especially digital piracy (A perfectly secure system is also a perfectly useless system). It should also be obvious that instituting increasingly draconian security measures only serves to exacerbate these problems, as one of the main driving forces behind file sharing is ease of use and convenience.
The music industry, led by iTunes and EMusic (certainly not perfect, but it's a start), is finally coming to recognize some of the potential inherent in digital media. Rather than fight against the flow of technology, they're beginning to embrace it, and as they further commit themselves to this path, they will begin to see success. There is, after all, a lot to like about digital distribution of content, and if a reasonable price structure is set up, it could even become more convenient to download from an approved source than from a file-sharing service like Kazaa. Of course, the music industry still has a lot of work to do if they truly want to establish a profitable digital content business model (they need to stop prosecuting file-sharers, for example), but they're at least taking steps in the right direction.
The movie industry, on the other hand, seems content to repeat the mistakes of the music industry. With the introduction of low-cost/high-bandwidth internet connections and peer-to-peer file sharing networks, the movie industry is becoming increasingly concerned with digital piracy, which is understandable, and has responded by making (or, at least, trying to make) DVDs and other media more difficult to copy. Again, this solution does little to slow the tide of piracy, and in extreme cases it makes the experience of purchasing and using the media cumbersome and frustrating. Naturally, some degree of protection is needed, and none of the really invasive solutions have caught on (for obvious reasons), but the movie industry appears to have the same moronic policy of blaming the average consumer for piracy.
Recent research out of AT&T Labs appears to show that the movie industry should reexamine who the culprit really is.
We developed a data set of 312 popular movies and located one or more samples of 183 of these movies on file sharing networks, for a total of 285 movie samples. 77% of these samples appear to have been leaked by industry insiders. Most of our samples appeared on file sharing networks prior to their official consumer DVD release date. Indeed, of the movies that had been released on DVD as of the time of our study, only 5% first appeared after their DVD release date... [emphasis mine]
As Bruce Schneier notes:
One of the first rules of security is that you need to know who your attacker is before you consider countermeasures. In this case, the movie industry has the threat wrong. The attackers aren't DVD owners making illegal copies and putting them on file sharing networks. The attackers are industry insiders making illegal copies long before the DVD is ever on the market.
Obviously, piracy is a problem which can pose a significant financial threat to the movie industry, but it has become clear that piracy is here to stay, and that the best course of action for media industries is to restructure their business models to survive even in the face of piracy, rather than go to absurd and obtrusive lengths to prevent it. As it stands now, their closed-minded policies are only exacerbating the situation, frustrating customers (and potential customers) without even adequately addressing the problem... [Thanks to ChicagoBoyz for the pointer to Bruce Schneier's excellent newsletters]
Posted by Mark on October 26, 2003 at 07:59 PM .: link :.
Sunday, October 19, 2003
Punk Kids Play Pong
Video games have come a long way since Pong, but Electronic Gaming Monthly wanted to see what today's kids think about classic video games. The results are uniformly funny:
Niko: Hey? Pong. My parents played this game.
Brilliant. They were a little short on Atari games though. I would've loved to have seen what they said about Pitfall or Chopper Command. And this needs to be applied to all sorts of media, not just video games. We need to strap these kids in for a viewing of Knight Rider or Airwolf and see what they think. [via arstechnica]
Posted by Mark on October 19, 2003 at 11:51 PM .: link :.
Sunday, June 01, 2003
Amazon.com and the New Democracy of Opinion by Erik Ketzan : In this article, Erik Ketzan contends that Amazon.com book reviews "are invaluable documents in understanding what book reviews in periodicals could never show us: who is reading a book, why are they reading it, and how are they reading it."
The present study seeks to analyze the way these reader reviews function: what are their goals, who is their audience, and how do they differ from traditional book reviews?
Since a comprehensive study of all reviews available on Amazon.com would be absurd, he chooses to examine the 133 reviews available for Thomas Pynchon's novel, Gravity's Rainbow. The novel was chosen for the extremes of opinion which dominate people's reactions to the novel, and thus provides us with a good, if somewhat unique, subject for an analysis of the Amazon system.
Indeed, the reviews for Gravity's Rainbow are uncommonly descriptive and helpful, allowing insight into the type of person who enjoys (and doesn't enjoy) this sort of novel. Many even give advice on how the novel should be read, and what to expect. The lack of an editor allows the tone of the reviews to be somewhat informal, making them easier to relate to than a stuffy book reviewer for the New York Times Book Review...
Obviously, many (maybe even most) reviews at Amazon don't quite live up to the standard that Gravity's Rainbow sets. It's an extraordinary novel, and thus the resulting reviews are ripe for analysis, providing much information about the nature of the novel. One of the challenges of the novel, and a theme that runs throughout many reviews (professional and Amazon), is that it is essentially futile to review it in any conventional manner. Because of this, much of the commentary about it has to do with the peripheral experiences; people explain how they read it, how long it took them to do so, what effects it had on their lives, and what type of people will get it or not get it - none of which actually has much to do with the book itself. We are able to get an uncanny picture of who is reading Gravity's Rainbow, why they are reading it, and how they are reading it, but the book itself remains a mystery (which, basically, it is, even to someone who has read it). Other novels don't lend themselves so readily to this sort of meta-review, and thus Amazon's pages aren't quite so useful for the majority of books listed there. One has to wonder if Gravity's Rainbow actually was the best choice for this case study - sure, it provides a unique example of what Amazon reviews are capable of, but that doesn't necessarily apply to the rest of the catalog... then again, the informal tone, the passion and conviction of those who love the novel, the advice on how to read it and what else to read - these are things that are generally absent from professional book reviews, so perhaps Ketzan is on to something here...
Posted by Mark on June 01, 2003 at 02:16 PM .: link :.
Tuesday, October 08, 2002
gods amongst mortals
Information gods is a series of articles written by Brad Wardell about those who know how to find and digest information quickly and effectively with the tools on the internet. They are "information gods", and they are much more productive than the majority of people, who are still figuring out how to open attachments in an email (if they are on the net at all). The main thrust of the articles is that "the gap between information gods and information mortals grows wider every day. The tools for gathering information gets better. The amount of data available grows. And the experience they have in finding it and using it increases." It's an interesting series, and it's funny when you see info gods clash with info mortals in a debate. Guess who generally does better?
Posted by Mark on October 08, 2002 at 08:00 PM .: link :.
Tuesday, October 01, 2002
Law School in a Nutshell, Part 1 by James Grimmelmann : Lawyers spend years learning to read and write legalese, and James draws a striking analogy between legal writing and a programming language.
To understand why legalese is so incomprehensible, think about it as the programming language Legal. It may have been clean and simple once, but that was before it suffered from a thousand years of feature creep and cut-and-paste coding. Sure, Legal is filled with bizarre keywords, strange syntax, and hideous redundancy, but what large piece of software isn't? Underneath the layers of cruft, serious work is taking place.
For the rest of the article, James goes page by page and takes you through the intricacies and minutiae of a legal brief (for Eldred v. Ashcroft). It's only the first part, but it's informative and well written. Another interesting note, as commented at the bottom of the page:
If "$plain_text = $file_key ^ $xor_block" seems unapproachable, consider what those not trained in the language of legal citation would make of "111 F.Supp.2d 294, 326 (S.D.N.Y. 2000)." Each is meaningless to those unfamiliar with the language; but each is more precise and compact for those who do understand than would be an English narrative equivalent. -- James S. Tyre, Programmers' & Academics' Amici Brief in "MPAA v. 2600" Case
Updates: Part II and Part III
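The Perl fragment Tyre quotes is a single XOR operation. As a rough illustration only (not taken from the brief itself; the variable names just echo Tyre's), here is the same idea in Python, showing why XOR is its own inverse:

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data against the key, repeating the key as needed."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

xor_block = xor_bytes(b"attack at dawn", b"secret")   # "encrypt"
plain_text = xor_bytes(xor_block, b"secret")          # XOR again to recover
print(plain_text)  # b'attack at dawn'
```

Just as Tyre says, the line is opaque to outsiders but perfectly precise to anyone who knows the notation.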
Posted by Mark on October 01, 2002 at 07:49 PM .: link :.
Thursday, May 09, 2002
The art of office e-mail war by David Miller : Ah, the joys of corporate email politics. Email is quick, easy, and it offers the sender nearly immediate access to anyone on a corporate network. Miller goes through a variety of different strategies for manipulating e-mail, some of which are quite amusing. Personally, I haven't really been a part of the more nefarious strategies, though I do take advantage of email's obvious strategic value. We don't have BCC where I work, so that leaves out some of your average backstabbing stories. One thing I've found useful, though, is that CCing my bosses while requesting something from someone else will almost always yield faster results than if I didn't CC them. When people see the boss's name attached, they know they'd better get things done quickly and efficiently. This, of course, leads to my boss getting upwards of 500 emails a day, so I try to use this only when I need it... [Thankee James]
Posted by Mark on May 09, 2002 at 01:09 PM .: link :.
Friday, January 11, 2002
In the beginning...
In the Beginning was the Command Line by Neal Stephenson: An intelligent essay dealing with the trials and tribulations of computer Operating Systems. Of course one of the big problems he discusses is Metaphor Shear (which is basically the point at which a metaphor fails), which is ironic because he uses quite a few metaphors himself in the essay. One of the best is when he relates the Hole Hawg (an incredibly powerful drill that will drill through just about anything, but is also incredibly dangerous because it has no limitations or cheap safeguards to protect the user from themselves) to the Linux operating system. The essay is a great read, and goes into much more than just Operating Systems. Highly recommended.
If you like Stephenson's fiction, you might also want to check out The Great Simolean Caper, an interesting story set in the not too distant future. It shares some common ground with Stephenson's other work (namely, Snow Crash) and is quite an enjoyable read. It's also a bit scary, because it brings up quite a few security and privacy concerns. With the advent of digital cable and set-top boxes, companies are starting to track what you are watching on television, whether you like it or not. I've seen the data myself, and I think the advertising industry is going to go wild when these numbers start piling up (the data I saw showed enormous spikes and troughs roughly coinciding with commercials). The sneaky set-top boxes in Stephenson's Caper might seem unlikely, but we're really not too far away from that right now...
Posted by Mark on January 11, 2002 at 03:27 PM .: link :.
Thursday, November 15, 2001
Web advertising that doesn't suck?
pyRads is a service for purchasing, managing, and serving micro advertising on web sites. Micro advertising is different from most banners and other forms of advertising you see on the web in that: 1) It's low-cost, easy, and often highly effective for advertisers. 2) It's unobtrusive, interesting, and even useful for the audience. This is an interesting little project from Pyra (makers of Blogger) and I can see it being very, very popular. Right now, the only advertising space you can buy is on Blogger, but that is a really attractive place to advertise - plus, I'm sure ev is hard at work getting other websites in the loop... It should be interesting to see how this turns out, as this form of advertising is eminently more effective and less obtrusive than all the others. Hell, at $10.00 a pop, I'm tempted to run a "Rad," just to see how well this really works.
In other blogging news (well I guess this is kind of old, but still noteworthy), Dack is back, featuring links on "The Dumb War". I don't really like this very much, though; I still miss the old Dack.com.
"It just keeps looping, Adrian! You call this music?!" - This is the funniest thing I've read in a while. Thanks DyRE!
Posted by Mark on November 15, 2001 at 10:46 AM .: link :.
Wednesday, November 14, 2001
Opera 6.0 beta
Opera 6.0 for Windows Beta 1 was released yesterday. I fell in love with Opera 5.x; it became my favourite browser for a number of reasons. With Opera 6.0, I was looking forward to a host of new and exciting features. To be perfectly honest, I don't see much to get excited about. The most noticeable feature is the ability for users to choose between single or multiple document interface (SDI/MDI); this is pretty much irrelevant to existing Opera users like myself, but I suppose it could be an important step in converting users accustomed to competing browsers. The other "big" change is the completely new default user interface, which I despise (fortunately, Opera has the ability to customize the interface:) There are a bunch of other nifty enhancements (and bug fixes), but nothing approaches the big innovative leaps that Opera 5.x made. There are also a few rendering bugs that I suppose will be worked out before the official release. Still, I highly recommend you take the Opera plunge if you haven't already; download the whopping 3.2 mb installation file here.
Posted by Mark on November 14, 2001 at 11:03 AM .: link :.
Tuesday, June 12, 2001
More than Pong
This History of Video Games is fairly comprehensive, thoughtful and exceedingly interesting, even if you don't care too much for video games. The history even goes as far back as the late 19th century, when Nintendo started as a playing card company; then it details the evolution of several companies leading up to the current-day wars between Sega, Sony, and the upcoming Microsoft Xbox. It's funny to note the parallels with the internet's collapse (and, hopefully, rebirth). After a short period of growing pains where several video game companies crashed, the industry rebounded with fewer but healthier players (Sega, Nintendo, and later, Sony). I still miss the glory days of the Commodore 64 though; I spent countless hours playing games like Test Drive and Airborne Ranger (one of my all time favourites). [via alt text]
Posted by Mark on June 12, 2001 at 01:52 PM .: link :.
Friday, June 08, 2001
Disjointed, Freakish Reflections on Web Browsers
Mozilla 0.9.1 was released today, to much fanfare. Even the Slashdotters are praising the latest release, which marks a monumental leap forward over Mozilla 0.9. After downloading it myself and playing with it, I've been very pleased, though I still have a few small gripes (right clicking on the menus should work, damnit!). Otherwise it seems like a much leaner, cleaner, faster and more stable build. Great work, Mozilla developers; I'm looking forward to a 1.0 release soon. However, with the news that Netscape is going away, I don't know if any browser will be able to put a dent in Microsoft's stranglehold, which is a shame, because Mozilla is a really great browser. Right now, I'm going to continue using Opera 5.11, because that is the best browser I've ever used - its only downside is that I can't really use it to post on Blogger or 4degreez.
Some of my previous thoughts on Browsers:
Update: 4:45 p.m. ET
After using Mozilla 0.9.1 all day, I can say that while it has improved greatly over previous versions, it still has a ways to go before it can really compete with IE. I ran into a few bugs and it crashed a couple of times, so it's not quite the rock-solid browser I was looking for. It doesn't even come close to Opera, which is still my browser of choice. But then, 0.9.1 isn't a finished product, so I still think it's coming along well and that the final release could be worth it.
Posted by Mark on June 08, 2001 at 09:27 AM .: link :.
Thursday, May 31, 2001
The Weakest Links
No. I would never, ever do such a thing. Trust in me, loyal patrons (all 3 of you). Rest assured, this post has nothing to do with the annoying gameshow of the same title. It has to do with links and usability. Apparently, someone thought up 23 ways to weaken Web site links, from the obvious (broken, wrong) to the subtle (miscolored, unexpected) to the unfairly accused (embedded, wrapped). It's an interesting read, though it's funny to note that weblogging, by its very nature, seems to break some of these rules. Especially those pesky memepoolers! [via webmutant]
Posted by Mark on May 31, 2001 at 12:03 AM .: link :.
Friday, May 04, 2001
The sky is falling
It's been falling for quite some time now, and some think it won't stop until the internet is dead. Why did it fall, and why does it continue to fall? Could it be the numerous business perversions of the English language? Perhaps dot-com communism is to blame. It's more likely, though, that this industry fallout is indicative of simple growing pains:
"What is happening now happens with every new explosion of technology. When the sky has finished falling, it will leave behind an industry with far fewer, but much healthier players. And then things will get better than they ever were."
Automobiles, television, and video games all underwent similar pains in their infancy, then grew beyond control. Soon enough, we will find that the internet is growing vigorously, even if we have to pay for some things we used to get for free... [via evhead, arts & letters]
Posted by Mark on May 04, 2001 at 02:40 PM .: link :.
Monday, April 30, 2001
Heromachine is another nice little avatar maker (remember that whole storTrooper craze a while back?) that is themed more towards fantasy and superheroes. Once again, it's a lot of fun and I made myself a rather bland one, but it'd be pretty easy to make a really weird one. [Thanks Drifter, via the 4degreez boards.]
Posted by Mark on April 30, 2001 at 01:43 PM .: link :.
Thursday, April 19, 2001
What a wonderful browser Opera 5.11 is. The mouse navigation by gesture recognition, though hardly a new thing, is well implemented and clever. There are lots of other nifty features (session storing, skins, command line switches), my personal favourite being the new web spider. Simply press Ctrl+J and you'll get a list of all the links on a given page (which can be exported to HTML). Another great feature is the much improved download manager, which allows you to resume downloads. I've always liked Opera, but I've never used it consistently... until now. For all you fellow Opera users, here's a page by one of the Opera developers that has skins, customisations and user style sheets (among other things). Thanks to grenville for posting the info on the DyREnet Message Board!
Posted by Mark on April 19, 2001 at 10:52 PM .: link :.
Wednesday, April 11, 2001
Why high speed access was invented
It wasn't directly to give people a faster Internet connection but I think it was created because of some geek's sister. See, this sister, she had a very active social life. Whenever she was home, she got phone calls out the wazoo. She wasn't home much though, because her callers usually invited her somewhere. She was popular.
I honestly wouldn't be surprised if that's how it actually happened. [originally posted at 4degreez.com]
Posted by Mark on April 11, 2001 at 12:45 PM .: link :.
Thursday, March 15, 2001
The Dream Machine
I recently purchased a veritable plethora of computer hardware in an attempt to build my dream machine. Ars Technica was an invaluable resource for my efforts, especially their system recommendations and how-to guides. Not to mention their weblog, which is a great source for current tech news and information. Tom's Hardware Guide also provided some in-depth wisdom and reviews. For price comparisons, I used pricewatch.com, streetprices.com, and pricecombat.com. Another good find was jcshopper, a decent store with very good prices ($57 PC133 256MB SDRAM!). Thanks also to grenville, Four Degreez, and DyRE for all their help! Soon I'll be able to break the chains of my 200MHz oppression! For those who are interested, I posted my purchases on the infamous Kaedrin Forum.
Posted by Mark on March 15, 2001 at 09:34 AM .: link :.
The Honor System Takes Hold
Amazon.com's Honor System, a way for Web sites to receive payments from readers, is slowly taking hold. In all honesty, while I see the motivation for having such a thing and am enthusiastic about using it, I don't see how that sort of system could really support a website. First, when given the choice, most people won't pay. Second, even when people do pay, they aren't likely to keep paying. That's why you see Metafilter making $600 in a day, then practically nothing for the next month. If you wish to prove me wrong, feel free to donate to the Kaedrin Honor System Page (or go here to find other options for supporting Kaedrin:)! It will be much appreciated!
5:30 PM: More thoughts - It would be great if Amazon were able to incorporate some of its other functionality into the Honor System. For instance, allow visitors to review the website, or the ability to create lists of themed websites. Amazon could potentially parlay the Honor System into becoming a major portal site (even recommending sites for you based on what sites you've rated and visited), and given Amazon's ridiculous commission system, it's in their best interest to have people donating as much money as possible! Granted, the system could be abused, but I think Amazon has a lot to gain from integrating the Honor System with reviews and recommendations. Just my 2 cents.
Posted by Mark on March 15, 2001 at 09:08 AM .: link :.
Tuesday, March 06, 2001
What Lies Beneath Piles of Files
Filepile.org is the latest creation of Andre; quite a good idea from a man who seems to have a lot of them... Does anyone remember the old filepile? It was a Blogger-like content management system that you could use to organize files alphabetically. It showed potential, but I don't think anyone used it for anything exciting (including myself; I believe I considered using it for the imaginary archive)
Another nifty creation I recently encountered is this. Type in a domain and you get all the <!-- comments --> present on the page. Fascinating, indeed. (try megnut; it seems she has something to say after all)
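The comment-viewing tool itself is just a link above, so I can only guess at its internals, but the idea is simple enough to sketch. The code below is my own rough approximation using Python's standard library; the function names (`find_comments`, `extract_comments`) are made up for this example:

```python
import re
import urllib.request

def find_comments(html: str) -> list[str]:
    """Return the text of every <!-- comment --> in an HTML string."""
    # Non-greedy match so adjacent comments aren't merged into one.
    return re.findall(r"<!--(.*?)-->", html, re.DOTALL)

def extract_comments(url: str) -> list[str]:
    """Fetch a page and return its HTML comments."""
    with urllib.request.urlopen(url) as resp:
        return find_comments(resp.read().decode("utf-8", errors="replace"))

print(find_comments("<p>hi</p><!-- secret note --><!-- todo -->"))
# [' secret note ', ' todo ']
```

A real version would want a proper HTML parser (regexes choke on conditional comments and scripts), but for a quick peek at what authors leave behind in their source, this gets the point across.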
Posted by Mark on March 06, 2001 at 01:00 PM .: link :.
Monday, January 15, 2001
The Day The Browser Died, a tragic shortcoming of Netscape 4.x. CSS is a wonderful technology, in part because it fails gracefully (at least, it's supposed to) in browsers that don't support it. Except Netscape. Netscape tends to crash when you use CSS. I recently encountered this problem with these very pages. I seem to have fixed the problem (it had to do with the padding property being applied to a table cell), but that's no excuse for Netscape's failure.
I like Netscape. Really, I do. And you know what, as you can see in the follow up article at A List Apart, Netscape has been really cooperative with this bug. Netscape has been a consistent innovative force on the internet. However, their 4.x browser has become an embarrassment, and 6.0, though standards compliant and faster, isn't what it could have been (I look forward to future releases).
I apologize to anyone who still can't view this site in Netscape, and I beg of you to consider switching over to IE (or better yet Opera). That is, if you can even get to this page to read it.
Posted by Mark on January 15, 2001 at 12:56 PM .: link :.
Sunday, January 14, 2001
I've been trying to take a more novel approach recently, but I find the urge to spread some quickly growing memes is overcoming my good senses. I apologize in advance if this is the millionth time you've seen these links:)
First comes a cool Avatar maker called storTrooper. Its a nifty little java applet that lets you choose a body and clothes for a virtual representation of yourself (an avatar, if you will). I made a rather bland one (on your right), but you can make an outrageous one fairly easily. If you buy it you get lots of other clothes and styles to choose from (including the goth collection), and it would make a great supplement to a virtual community site like 4degreez, letting users goof around with their appearances...
Second is IT. What is IT? It's IT. Actually, no one knows what IT is, but IT will change the world. Some good coverage and commentary on IT can be found at Boing Boing. IT is the invention of 49-year-old scientist Dean Kamen, and IT is also code named Ginger. Of course, everyone's intrigued, including metafilter and slashdot visitors (of course). Some think it is a revolutionary form of transportation, or perhaps an infinite energy source. Steve Jobs thinks cities will be built around IT. Can IT stay a secret for long? I don't think so. We'll know what it is soon enough; no one can keep something that is supposedly this big a secret. Until then, IT is an intriguing mystery...
I now return you to your regularly scheduled programming...
Posted by Mark on January 14, 2001 at 10:43 PM .: link :.
Thursday, December 14, 2000
Why Browsers haven't Standardized
Why do browser companies continue to forge blindly ahead with more and more new features when they haven't even implemented existing standards correctly? Why can't they follow the standards process? Good questions. The answer is that browsers do, in fact, follow the standards process! The problem is that browsers are encouraged to innovate, to make up new (proprietary) features and technologies. They then act as a test market for the W3C, who evaluate the new features and observe how they work in the "real" world. They then make recommendations based on their findings. But when they change their specifications, the browsers are left in a lose-lose situation. This article will give you the rest of the lowdown in an objective manner. It's a frustrating situation, from every angle, and this sort of complex problem has no easy answer. I hope, for everyone's sake, that the process is tightened a bit so that emerging technologies can flourish. On a side note, I wonder how much an open source browser like Mozilla could contribute to the standards process without having to officially release a non-standards-compliant browser...
Posted by Mark on December 14, 2000 at 04:46 PM .: link :.
Wednesday, December 13, 2000
The computer versus television: I don't watch TV anymore. The hours wasted in front of the TV screen are now wasted in front of the computer monitor. Sure, I'll throw the TV on for episodes of the Simpsons or the occasional X-Files (or possibly a Flyers game), but I'm usually doing something on the computer as well. TV just isn't a priority anymore, and I've noticed similar trends with those around me. Why is that? I think it's because of the control you have over the web (or your computer in general). You can look up whatever you want, whenever you want, and even display it how you want. TV rigidly forces you to adhere to its schedule, while the internet gives you the power. The internet also provides a creative outlet and interactivity, things TV lacks. The internet is a much more social activity than watching the tube, and the television industry needs to refocus its efforts if it's going to regain its once lofty status...
Posted by Mark on December 13, 2000 at 01:26 PM .: link :.
Tuesday, December 05, 2000
Someone has figured out how to use the 3d shooter Doom as a tool for system administration. Doom creates a new metaphor for process management: Each process can be a monster, and the machines can be represented by a series of rooms. Killing a process corresponds to killing a monster. How very clever. [via usr/bin/girl]
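The real hack (known as psDoom) patched the game engine itself, so the sketch below is only a toy model of the metaphor, not their implementation: a made-up `Monster` class that pairs a process with a monster and sends the process a signal when the monster dies. The pids and names are hypothetical, and the kill defaults to a dry run so the sketch is safe to execute:

```python
import os
import signal

class Monster:
    """One monster per process; killing the monster signals the process."""
    def __init__(self, pid: int, name: str):
        self.pid, self.name, self.alive = pid, name, True

    def kill(self, dry_run: bool = True):
        """Mark the monster dead; only signal the real process if asked to."""
        self.alive = False
        if not dry_run:
            os.kill(self.pid, signal.SIGTERM)  # the sysadmin "shoots" the process

# Hypothetical room full of monsters (pids/names invented for illustration).
monsters = [Monster(1234, "httpd"), Monster(5678, "cron")]
monsters[0].kill()  # dry run: no signal is actually sent
print([(m.name, m.alive) for m in monsters])
# [('httpd', False), ('cron', True)]
```

The clever part of the original is the reverse mapping too: a monster that fights back is a process hogging the CPU.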
Posted by Mark on December 05, 2000 at 12:55 PM .: link :.
Monday, December 04, 2000
This is an interesting tool that you can use to help you find keywords for your site. Type in a keyword and you can find related searches that include your term, as well as how many times that term was searched on last month. Very useful.
Posted by Mark on December 04, 2000 at 12:07 PM .: link :.
Tuesday, November 28, 2000
When I first found out that Napster was being sued by the 5 largest record labels, I was appalled. Not so much at their protecting their rights and sales (though that is debatable), but that they were passing up a huge business deal. Think about it: 40 million people are using a specific piece of software to trade music. Wouldn't it make more sense to charge for the right to use that software (as opposed to shutting it down)? Instead of embracing technology, the record industry was foolishly trying to put a stop to Napster. Then all the file sharing clones and alternatives showed up. Remember, Napster is only a company that wants to make money but couldn't (because of the copyright issue). Finally, someone has realized the potential. German media giant Bertelsmann (1 of the aforementioned 5 largest record labels) recently announced that they would be forming a business alliance with Napster, possibly charging a monthly fee of up to $15.00. Though this probably won't stop file sharing, it will probably be very lucrative for the parties involved...
Posted by Mark on November 28, 2000 at 11:54 AM .: link :.
Wednesday, November 22, 2000
Want to know how to make yourself an irreplaceable programmer? Go here and find out how to make your code unmaintainable by anyone but yourself. No wonder most software sucks.
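In the spirit of that guide (though this example is mine, not taken from it), here's the same trivial calculation written twice: once maintainably, and once following two classic tips for job security, a misleading name and an unexplained magic number:

```python
# Maintainable version: honest name, named constant.
def days_in_weeks(weeks: int) -> int:
    DAYS_PER_WEEK = 7
    return weeks * DAYS_PER_WEEK

# "Irreplaceable programmer" version: the function is named after
# something it isn't, and the 7 is left unexplained on purpose.
def months(x):
    return x * 7

print(days_in_weeks(2), months(2))  # 14 14 -- same answer, very different futures
```

Both return the same number today; only one of them can be safely modified by the next person, which is of course the whole point.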
Posted by Mark on November 22, 2000 at 10:50 AM .: link :.
Tuesday, November 21, 2000
This site has some awesome fonts from Movies, Music, Television, etc... Oh, I'm gonna have fun with this... [from grenville via Kaedrin Forum]
Posted by Mark on November 21, 2000 at 12:55 PM .: link :.
Monday, November 20, 2000
I don't know exactly when, but Netscape has recently released the much anticipated Netscape 6.0. I went to Netscape Download, and it said I was using IE 5.0 and that I could "Upgrade to Netscape 6" (or Netscape 4.whatever). IMHO, releasing it was a big mistake because there are a ton of bugs and usability issues. I downloaded it this morning, played with it for 10 minutes and found the following problems:
Posted by Mark on November 20, 2000 at 10:13 PM .: link :.
Wednesday, November 01, 2000
Go check out some super spiffy wallpaper backgrounds at EndEffect. Link via the also spiffy memepool, my current favourite site. The Giger pics on Kaedrin's Image page also make cool backgrounds...
Posted by Mark on November 01, 2000 at 12:28 PM .: link :.
Monday, October 30, 2000
The Unspeakable Horrors of Flash
Usability "expert" Jakob Nielsen recently published Flash: 99% Bad, an article that reminds me of Dack's Flash Is Evil article published over a year ago. Dack has also done an informal Usability Test pitting HTML vs Flash. Go and read about the unspeakable horrors of Flash. Then read Kottke's response to the Flash Usability Challenge in which he makes several good points about Flash and its good uses.
In my opinion, there are two types of sites that can work with Flash:
Personal sites - Visitors to a personal site are not as goal oriented as they would be at, say, an e-tailer. Flash won't necessarily make a personal site better; I just think it's more acceptable on a personal page where I'm not looking to perform any specific tasks. Flash software isn't cheap either, making it less viable for a personal site developer.
Graphic Design sites - Graphic designers all but need Flash so that they can show... well, their designs. Flash offers good compression for the kind of graphics and animation that a graphic design site entails. Again, Flash makes the site less usable, but that's acceptable since the site is showcasing what they're selling (graphic design).
Posted by Mark on October 30, 2000 at 01:20 PM .: link :.
Wednesday, October 11, 2000
What happened at amazon.com? It seems that they are attempting to rid their "welcome" page of excess images (and they reduced the number of nested tables as well). The page is now down to 63,972 bytes total; that's down from 97,779 bytes at mid-summer. The page is still bloated and needs some more work, but it's a step in the right direction. I'm not sure when it actually happened.
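For the record, that's a sizable cut. A quick back-of-the-envelope check on the byte counts quoted above:

```python
# Byte counts from the post: Amazon's welcome page at mid-summer vs. now.
before = 97_779
after = 63_972

saved = before - after
percent = saved / before * 100

print(f"{saved} bytes trimmed ({percent:.1f}% reduction)")
# → 33807 bytes trimmed (34.6% reduction)
```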
Posted by Mark on October 11, 2000 at 04:50 PM .: link :.
Thursday, October 05, 2000
Tallmania hath been e-quilled by Kaedrin regular and court advisor, grenville! Go check out the E-Quill Web Toolbar (IE 5+ for PC only) and comment the hell out of any website. It's a tremendously useful tool for constructive criticism or commentary, and I'd welcome any comments on Kaedrin (or whatever you want!). I found out about E-Quill from Kottke.org, where he's recently posted a bunch of his visitors' comments.
Posted by Mark on October 05, 2000 at 09:22 AM .: link :.
Monday, October 02, 2000
I have eaten this brain, and I want to chat about it.
This is an interesting parody of Amazon.com aimed at zombies who would like to choose from a wide array of brains to eat "because some brains are just naturally better, juicier, and formerly smarter than others." Some people have too much time on their hands. Now, if you'll excuse me, Oprah's brain just arrived in the mail. Mmmm, celebrity brain... ahhhgglaaaahhhggg...
Posted by Mark on October 02, 2000 at 01:37 PM .: link :.
Thursday, July 20, 2000
Hmm, AltaVista seems to have taken a page out of Google's book and created Raging Search with a nice clean interface.
Check out The Web Color Visualizer, it rocks. Very useful tool, there...
Posted by Mark on July 20, 2000 at 07:22 PM .: link :.
Sunday, July 16, 2000
Who's scared of losing Napster when you can use gnutella to download mp3s, mpegs, avis, movs, wavs, or any other file you could ever want?
Posted by Mark on July 16, 2000 at 02:51 PM .: link :.
Where am I?
This page contains entries posted to the Kaedrin Weblog in the Computers & Internet Category.
Copyright © 1999 - 2012 by Mark Ciocco.